\section{Excited Quarks}
We consider a model of composite quarks with excited states that have spin 1/2
and weak isospin 1/2. The effective Lagrangian for chromomagnetic transitions
between excited quarks ($q^*$) of mass $M$ and common quarks (q) is
constrained by gauge invariance to be~\cite{ref_qstar}:
\begin{equation}
\label{eq_Lagrangian}
{\cal L} = \frac{g_s f_s}{4M}\bar{q}^*_R \sigma^{\mu\nu} \lambda_a
G_{\mu\nu}^a q_L \ + \ h.c.
\end{equation}
where $G^a$ are the gluon fields, $\lambda_a$ are the SU(3) generators
(Gell-Mann matrices), and $g_s$ is the strong coupling. Here we have chosen the compositeness
scale to be $\Lambda=M$, by writing $M$ in the denominator in
Eq.~\ref{eq_Lagrangian}, because the excited quark mass should be close to the
energy scale of quark compositeness. The constant $f_s$ depends on
the unknown dynamics of the quark constituents, and is generally assumed to
be equal to 1, thereby giving standard model couplings. Excited quarks decay
to common quarks via the emission of a gluon in approximately 83\% of all
decays. Excited quarks can also decay to common quarks by emitting a W, Z, or
photon, through an effective Lagrangian similar to
Eq.~\ref{eq_Lagrangian}.
We consider the process $qg\rightarrow q^* \rightarrow qg$ for
discovering an excited quark at a hadron collider.
The signal is two high energy jets, resulting from hadronization of the final
state quark and gluon, which form a peak in the dijet invariant mass
distribution. The subprocess differential cross section is a Breit-Wigner:
\begin{equation}
\frac{d\hat{\sigma}}{d\hat{t}} = \frac{2\pi\alpha_s^2}{9M^4}
\frac{\hat{s}}{ \left( \hat{s} - M^2\right)^2 + \Gamma^2 M^2}
\label{eq_xsec}
\end{equation}
where $\alpha_s$ is the strong coupling, $\hat{s}$ and $\hat{t}$ are
subprocess Mandelstam variables, and $\Gamma$ is the width of the excited
quark. The sum of the partial widths in the gluon, W, Z, and photon channels
gives a half width at half maximum of $\Gamma/2 \approx 0.02M$.
In Eq.~\ref{eq_xsec} we have already averaged over the angular
distribution in the center of mass frame, $dN/d\cos\theta^* \sim 1 +
\cos\theta^*$, where $\theta^*$ is the angle between the initial state
and final state quark in the subprocess center of mass frame.
In hadron collisions this subprocess angular distribution results in an
isotropic dijet angular
distribution $dN/d\cos\theta^* \sim 1$. This is because for every quark in
hadron 1 that becomes an excited quark and emerges in the final state at a
fixed $\cos\theta^*$, with rate proportional to $1 + \cos\theta^*$, there is
a quark in hadron 2 which is headed in the opposite direction, and emerges at
the same value of $\cos\theta^*$ with rate proportional to $1 - \cos\theta^*$.
The sum of the two angular distributions is isotropic.
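The resonance shape of Eq.~\ref{eq_xsec} can be cross-checked numerically. The sketch below (a minimal illustration, not the full calculation) drops the overall constant $2\pi\alpha_s^2/9M^4$, since only the shape matters, and picks $\Gamma = 0.04M$ so that $\Gamma/2 = 0.02M$ as quoted above; it confirms the half-maximum points sit near $m = M \pm \Gamma/2$.

```python
import math

def bw_shape(m, M, Gamma):
    """Breit-Wigner factor of the subprocess cross section as a function of
    dijet mass m = sqrt(s_hat); the overall constant 2*pi*alpha_s^2/(9 M^4)
    is dropped because only the resonance shape is examined here."""
    s_hat = m * m
    return s_hat / ((s_hat - M * M) ** 2 + Gamma ** 2 * M ** 2)

M = 1.0            # excited quark mass, arbitrary units
Gamma = 0.04 * M   # full width chosen so that Gamma/2 = 0.02 M, as in the text

peak = bw_shape(M, M, Gamma)
lo = bw_shape(M - Gamma / 2, M, Gamma) / peak
hi = bw_shape(M + Gamma / 2, M, Gamma) / peak
print(lo, hi)  # both near 0.5: half maximum is reached at m = M -+ Gamma/2
```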
\section{Background and Cuts}
Normal parton-parton scattering via QCD produces a large background to the
dijet decays of excited quarks.
However, QCD is dominated by t-channel gluon
exchange which gives a dijet angular distribution $dN/d\cos\theta^*\sim
1/(1-\cos\theta^*)^2$, where $\theta^*$ is the angle between the incoming
parton and the jet in subprocess center of mass. In contrast excited quark
production and decay results in an isotropic dijet
angular distribution as discussed above. Therefore to
suppress QCD
backgrounds we require $|\cos\theta^*|<2/3$ and we also require the
pseudorapidity of each jet satisfy $|\eta|<2$. We note that any dijet
analysis will
generally make a $|\cos\theta^*|$ cut to have uniform trigger acceptance as
a function of dijet mass, and an $|\eta|$ cut to stay within a defined
region of the detector.
We include all lowest order QCD subprocesses in our background calculation:
$qq\rightarrow qq$, $q\bar{q} \rightarrow gg$, $qg \rightarrow qg$, $gg
\rightarrow gg$ and $gg \rightarrow q\bar{q}$.
\section{Cross Section}
For both the excited quark signal and the lowest order QCD background, we
convolute the subprocess differential cross section with CTEQ2L parton
distributions~\cite{ref_cteq} of the colliding hadrons, within the above range of
$\cos\theta^*$ and $\eta$.
This gives the differential cross section as a function
of dijet mass, $d\sigma/dm$, for both the excited quark signal and the lowest
order QCD background. For the excited quark signal we consider only the
first generation, $u^*$ and $d^*$, and we assume they are degenerate in mass.
The half width of the excited quark resonance
remains $\Gamma/2 \approx 0.02M$.
This is significantly narrower than the dijet mass
resolution at the Tevatron,
which is roughly Gaussian with RMS
\onecolumn
\begin{figure}[tbh]
\hspace*{-0.25in}
\epsffile{exq_pipe_xsec_fig.eps}
\caption[]{ Lowest order parton level cross sections within a 16\% wide search
window for QCD dijets (dashed curve) and excited quarks decaying to dijets
(solid curve) are shown as a function of excited quark mass at {\bf a)} the
future energy of the Tevatron, {\bf b)} the LHC, {\bf c)} a VLHC with center
of mass energy 50 TeV, and {\bf d)} 200 TeV.
All cross sections are for dijets with $|\eta|<2$, $|\cos\theta^*|<2/3$.}
\label{fig_xsec}
\end{figure}
\begin{figure}[tbh]
\hspace*{-0.25in}
\epsffile{exq_pipe_discovery_fig.eps}
\caption[]{
The predicted cross section for
dijet decays of excited quarks (solid curve) is compared to the 5$\sigma$
discovery reach (dotted curves) at various luminosities for
{\bf a)} the
future energy of the Tevatron, {\bf b)} the LHC, {\bf c)} a VLHC with center
of mass energy 50 TeV, and {\bf d)} 200 TeV. All
cross sections are for dijets with $|\eta|<2$, $|\cos\theta^*|<2/3$, and
invariant mass within 16\% of the excited quark peak assuming a 10\% dijet mass
resolution.}
\label{fig_reach}
\end{figure}
\twocolumn
\noindent
deviation $\sigma \approx 0.1M$. If we assume
a Gaussian dijet resolution of width $\sigma \approx 0.1M$ at all hadron
colliders, then 90\% of the dijet events from an excited quark will be inside a
16\% mass window $0.84M < m < 1.16M$, where $m$ is the dijet invariant mass.
We integrate the differential cross section, $d\sigma/dm$, for
both the excited quark signal and the QCD background within the 16\% mass
window to obtain an estimate of the signal and background cross section for a
search. Figure~\ref{fig_xsec} shows the resulting total signal and background
cross section in the search window at the Tevatron, LHC and VLHC as a function
of excited quark mass.
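The 90\% containment figure quoted above follows from the Gaussian error function: with $\sigma = 0.1M$, the window $0.84M < m < 1.16M$ is a $\pm 1.6\sigma$ window. A one-line check:

```python
import math

def window_fraction(n_sigma):
    """Fraction of a Gaussian contained within +- n_sigma standard deviations."""
    return math.erf(n_sigma / math.sqrt(2.0))

# sigma = 0.1 M and window 0.84 M < m < 1.16 M  =>  +- 1.6 sigma
print(window_fraction(1.6))  # about 0.89, i.e. ~90% of the signal
```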
\section{Discovery Mass Reach}
The QCD background rate
is used to find the 5 $\sigma$ discovery cross section. This
is conservatively defined as the cross section which is
above the background by 5 $\sigma$, where $\sigma$ is the statistical error on
the measured cross section (not the background). For example, if the
background were zero events the $5\sigma$ discovery rate would be 25 events.
In Fig.~\ref{fig_reach} we compare the
excited quark cross section to the 5 $\sigma$ discovery cross section at
various luminosities for the future Tevatron, the LHC, and the VLHC.
The excited quark discovery mass reach, defined as the mass at which an
excited quark would be discovered with a 5$\sigma$ signal, is tabulated as
a function of integrated luminosity for the LHC and VLHC proton-proton
colliders in Table I. We have also performed the calculation for VLHC proton-antiproton
colliders, where the QCD background is slightly higher but the excited quark
signal is exactly the same, which yields a 3\% smaller mass reach.
Because of space limitations, Figs.~\ref{fig_xsec} and \ref{fig_reach} do not
display curves for a 100 TeV VLHC, but the mass reach of a 100 TeV VLHC
tabulated in Table I was determined from curves similar to those in
Fig.~\ref{fig_reach}.
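The discovery criterion above can be made concrete: assuming Poisson statistics, the statistical error on the measured count is $\sqrt{S+B}$, so the threshold signal solves $S = 5\sqrt{S+B}$. A minimal sketch (the function name is ours):

```python
import math

def discovery_signal(B):
    """Smallest signal S satisfying S = 5*sqrt(S + B): the signal is 5 sigma
    above the background, where sigma is the statistical error on the
    *measured* (signal + background) count. Positive root of S^2 = 25 (S + B)."""
    return (25.0 + math.sqrt(625.0 + 100.0 * B)) / 2.0

print(discovery_signal(0))    # 25.0 events for zero background, as in the text
print(discovery_signal(100))  # a larger background requires a larger signal
```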
The mass reach at the future Tevatron is $0.94$ TeV for collider run II
(2 fb$^{-1}$) and $1.1$ TeV for TeV33 (30 fb$^{-1}$). This can be compared
to the published 95\% CL limit of 570 GeV from CDF~\cite{ref_qstar_cdf} and
the preliminary limits of 750 GeV from CDF and 720 GeV from
D0~\cite{ref_qstar_d0}.
The mass reach at the LHC is 6.3 TeV for 100 fb$^{-1}$, which could be
obtained by running for one year ($\sim 10^7$ seconds) at the design
luminosity of $10^{34}$ cm$^{-2}$ s$^{-1}$. Since the design luminosity may not
be quickly achieved, we note that with only 10 fb$^{-1}$ at the beginning of the
LHC the mass reach is still 5.3 TeV. Ultimately, the LHC may be able to
integrate 1000 fb$^{-1}$, which will provide a mass reach of 7.3 TeV.
The mass reach at the VLHC varies widely depending on the
energy of the machine and its luminosity. A 50 TeV machine with only 1
fb$^{-1}$ of integrated luminosity has a mass reach of $10.5$ TeV,
significantly better than LHC with any conceivable luminosity. At the other
extreme, a 200 TeV machine with $10^4$ fb$^{-1}$ would have a mass
reach of 78 TeV.
\begin{table}[tbh]
Table I: The 5$\sigma$ discovery mass reach for excited quarks at a
proton-proton collider
as a function of integrated luminosity is tabulated for the LHC with a
center of mass energy of 14 TeV and the VLHC with a center of mass energy
of 50, 100 and 200 TeV. \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
& \multicolumn{4}{c|}{Excited Quark Mass Reach} \\
Integrated & LHC & VLHC & VLHC & VLHC \\
Luminosity & 14 & 50 & 100 & 200 \\
(fb$^{-1}$) & (TeV) & (TeV) & (TeV) & (TeV) \\ \hline
1 & -- & 10.5 & 17 & 26 \\
10 & 5.3 & 14.0 & 24 & 39 \\
100 & 6.3 & 17.5 & 31 & 52 \\
$10^3$ & 7.3 & 21 & 38 & 65 \\
$10^4$ & -- & 24.5 & 45 & 78 \\ \hline
\end{tabular}
\end{center}
\end{table}
The mass reach in Table I appears to be a smooth function of the proton-proton
center of mass energy, $\sqrt{s}$, and integrated luminosity, $L$. The
following analytic function exactly reproduces the VLHC mass reach in Table I
for the energy range $50<\sqrt{s}<200$ TeV and the luminosity
range $1<L<10^4$ fb$^{-1}$:
\begin{equation}
\label{eq_lum}
M = 7 + 3\log_2\left(\frac{\sqrt{s}}{50}\right) + k(1+\log_{10}L)
\end{equation}
where $k$ depends on the energy of the machine according to
\begin{equation}
\label{eq_lum2}
k = \frac{7}{2} + \frac{11}{3}\left(\frac{\sqrt{s}}{50} -1\right)
-\frac{1}{6}\left(\frac{\sqrt{s}}{50} - 1\right)^2
\end{equation}
Although Eqs.~\ref{eq_lum} and \ref{eq_lum2} reproduce the VLHC mass reach,
at the LHC these equations give a mass reach that is 40\% lower than the
numbers in Table I. We provide Eqs.~\ref{eq_lum} and \ref{eq_lum2}
for interpolation among the VLHC entries in Table I only. We do not
recommend these equations be used for extrapolation outside the energy
range $50<\sqrt{s}<200$ TeV and the luminosity range $1<L<10^4$ fb$^{-1}$.
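For convenience, Eqs.~\ref{eq_lum} and \ref{eq_lum2} are easy to evaluate numerically; the sketch below (function name ours) reproduces the VLHC entries of Table I and should only be trusted inside the stated energy and luminosity ranges.

```python
import math

def mass_reach_tev(sqrt_s_tev, lum_fb):
    """5-sigma excited-quark mass reach (TeV) from the interpolation formula;
    valid only for 50 <= sqrt(s) <= 200 TeV and 1 <= L <= 1e4 fb^-1."""
    x = sqrt_s_tev / 50.0
    k = 7.0 / 2.0 + (11.0 / 3.0) * (x - 1.0) - (1.0 / 6.0) * (x - 1.0) ** 2
    return 7.0 + 3.0 * math.log2(x) + k * (1.0 + math.log10(lum_fb))

# Reproduce a few Table I entries
print(mass_reach_tev(50, 1))      # 10.5
print(mass_reach_tev(100, 100))   # ~31
print(mass_reach_tev(200, 1e4))   # ~78
```

Solving $M(\sqrt{s}, L) = 25$ TeV for $L$ also roughly reproduces the luminosities quoted in the Energy vs. Luminosity discussion below, to within about 10\%.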
\section{Energy vs. Luminosity}
To clarify the superior gains obtained by increasing the
energy of a machine, as opposed to increasing the luminosity, we show in
Fig.~\ref{fig_lum} the mass reach for the VLHC which is also tabulated in
Table I. Note that the mass reach is proportional to the logarithm of the
luminosity, but is almost directly proportional to the energy of the machine.
To clarify the energy vs. luminosity tradeoff consider the following
hypothetical case.
\subsection{Discovery of New Scale at LHC}
Suppose the LHC sees a classic signal of new physics:
an excess of high transverse energy jets which also have an angular
distribution that is significantly more isotropic than
predicted by QCD, an effect that cannot be due to parton distributions
within the proton.
Suppose further that this measurement corresponds to a scale of new
physics $\Lambda \sim 15$ TeV,
which is roughly the largest contact interaction that
the LHC could see in the jet channel. We would have strong
evidence of new physics, and the angular distribution might begin
to separate between compositeness and other sources of new physics.
But, we would probably not know for certain which source of new
physics the scale $\Lambda \approx 15$ TeV corresponded to, and we would
need an independent experimental confirmation that quarks were composite.
If the
source of new physics were quark compositeness, we would expect to see excited
quarks with mass close to the compositeness scale.
To be safe, we suppose the excited quark mass could be as high as 25 TeV, and
we want to decide which machine to build to find the excited quark and
confirm that the new physics is quark compositeness.
\subsection{Discovery of 25 TeV $q^*$ at VLHC}
In Fig.~\ref{fig_lum} the horizontal
dashed line at 25 TeV intersects the VLHC excited quark mass reach at
an integrated luminosity of about $1.3\times10^4$ fb$^{-1}$ for a 50 TeV
machine, 13 fb$^{-1}$ for a 100 TeV machine, and $0.9$ fb$^{-1}$ for a 200 TeV machine.
Clearly, to find a 25 TeV excited quark, one would build either the 100 TeV
or possibly even the 200 TeV machine and quickly accumulate the relatively low
integrated luminosities of 1-10 fb$^{-1}$, rather than build a 50 TeV machine
and have to integrate between 3 and 4 orders of magnitude more luminosity.
Note that the common accelerator wisdom that a factor of 2 in energy is worth
a factor of 10 in luminosity is only roughly right for comparing the 100 TeV
and 200 TeV machines; when comparing the 50 TeV and 100 TeV machines' discovery
potential for a 25 TeV excited quark, a factor of 2 in energy is worth a
factor of $1000$ in luminosity!
\section{Systematics}
In this analysis, we have not included any systematic uncertainties on the
signal, and we have assumed
that the shape and magnitude of the QCD background spectrum is reasonably
approximated by lowest order QCD. We also assumed that the dijet mass
resolution will be
roughly 10\% at all hadron colliders, ignoring a long tail to low mass caused
by radiation.
Adding systematics on the signal and the background will
likely decrease the mass reach of a real search.
To get a rough idea of the effect of systematics, we examine
the TeV2000 report~\cite{ref_tev2000}, which included systematics in the mass
reach for excited quarks.
Our discovery mass reach for the future
Tevatron is about 10\% better than the 95\% CL mass reach quoted in the
TeV2000 report, because ours is for $\sqrt{s}=2.0$ TeV
instead of 1.8 TeV and because ours does not include systematic uncertainties.
If we increase the mass reach in the TeV2000 report by 10\% to account for
the increase in center of mass energy from $1.8$ to $2.0$ TeV, then the two
results are roughly the same. From this we see that including systematic
uncertainties would roughly change our $5\sigma$ result to merely a 95\% CL
result. However, the systematics in the TeV2000 report were likely
overestimates, because they were based on previous dijet searches for excited
quarks~\cite{ref_qstar_cdf} in which there was no signal: if a signal is
present the systematic uncertainties will likely be smaller.
\begin{figure}[tbh]
\hspace*{-0.25in}
\epsfysize=3.8in
\epsffile{exq_pipe_lum_fig.eps}
\caption[]{
The discovery mass reach for dijet decays of excited quarks is shown
as a function of integrated luminosity for a VLHC of energy 50 TeV, 100
TeV and 200 TeV (solid curves). The horizontal dashed line is for a
hypothetical 25 TeV excited quark discussed in the text.}
\label{fig_lum}
\end{figure}
\vspace*{-0.5in}
\section{Summary and Conclusions}
We have estimated the discovery mass reach for excited quarks at future
hadron colliders. The mass reach at the Tevatron is $0.94$ TeV for Run II
(2 fb$^{-1}$) and $1.1$ TeV for TeV33 (30 fb$^{-1}$). The mass reach at the
LHC is $6.3$ TeV for 100 fb$^{-1}$. At a VLHC with a center of mass energy
of 50 TeV (200 TeV) the mass reach is 25 TeV (78 TeV) for an
integrated luminosity of $10^4$ fb$^{-1}$. However, an excited quark with
a mass of 25 TeV would be discovered at a hadron collider with $\sqrt{s}=100$
TeV and an integrated luminosity of only 13 fb$^{-1}$: here a factor of 2
increase in energy from a 50 TeV to a 100 TeV machine is worth a factor of
$1000$ increase in luminosity at a fixed machine energy of 50 TeV. When the
goal is to discover new physics at high energy scales, even a modest increase
in machine energy can be more desirable than a large increase in luminosity.
\section{Introduction}
Pion interferometry has been a part of particle physics
for several decades \cite{Zaj88a}. The main sub-branch of this science,
concerning
itself with Bose-Einstein correlations, endeavours to elicit
information on the size, shape and temporal evolution of
the source emitting pions. This is based on an analogy
between optical intensity interferometry and quantum mechanical
interference between incoherent pion amplitudes.
\hfill\par \vspace{\baselineskip}
Measurements of correlations between identical particles contain,
besides amplitude interference, a plethora of other effects such as
coherence, decaying resonances, variations in impact parameter and
momentum distribution, contamination by kaons and protons,
final-state interactions etc. Understandably, a solid basis for
subtracting all such effects has been singularly hard to create. Given
the number and degree of theoretical and experimental uncertainties
entering even second-order correlations, the corresponding higher-order
measurements have received only scant attention: if it is hard to
extract source parameters in an honest and unambiguous way from
second-order correlation data, it probably becomes even harder for
third order.
\hfill\par \vspace{\baselineskip}
In the present paper, we elect to take a different approach:
We measure higher-order correlations not so much with a view
to extracting source parameters or ``true'' Bose-Einstein
correlations, but in order to perform {\it consistency checks}.
While few theorists have so far worked out the implications
of their respective models for higher orders, this is in
principle possible, and some examples of higher-order
predictions exist \cite{Wei89a,Biy90a,Lyu91a,Plu92a,And93a}.
If a given theory provides formulae
for both second and higher orders, then these should apply to
the corresponding data using the same parameter values
throughout.
\hfill\par \vspace{\baselineskip}
With the aid of rapidly-improving measurement technology, we are
attempting to put such predictions to the test. While the results
reported here are quite preliminary in nature, they hopefully point the
way to more general and sophisticated testing of theories rather than
just measuring their respective parameters. {\it Falsifying\/}
theories is arguably the best (some would say the only) way of making
progress in a confused situation \cite{Pop35a}.
\hfill\par \vspace{\baselineskip}
Our tools for performing these consistency checks are {\it cumulants\/}
and the {\it correlation integral\/} \cite{Lip92a,Egg93d,Egg93a}.
Cumulants, in subtracting out trivial lower-order contributions, have
proven far more sensitive than the corresponding moments; their
implementation in various forms of the correlation integral has, at the
same time, improved statistical accuracy to a degree where such
measurements have become meaningful.
\section{Quantum statistics theory}
The test we shall be reporting here is confined to
one particular variable, the four-momentum difference
$q_{ij} = [(\vec p_i - \vec p_j)^2 - (E_i - E_j)^2]^{1/2}$.
For this variable, the second and third-order
cumulants are \cite{Egg93d,Ber77a}
\begin{eqnarray}
\label{cua}
C_2(q) &=& \rho_2(q) - \rho_1{\otimes}\rho_1(q) \,, \\
C_3
&=& \rho_3 - \sum_{(3)} \rho_2{\otimes}\rho_1
+ 2 \rho_1{\otimes}\rho_1{\otimes}\rho_1 \,,
\end{eqnarray}
where the third order quantities are functions of the three pair
variables $(q_{12},q_{23},q_{31})$. These cumulants, including the
crossed ``${\otimes}$'' quantities and event-mixing normalizations,
can be found from data samples by a precisely prescribed
algorithm \cite{Egg93d}.
\hfill\par \vspace{\baselineskip}
The quantum statistics (QS) theory itself has a long and distinguished
tradition \cite{Fow78a,Gyu79a};
the version we concentrate on is based on analogies
to quantum optics (for details, we refer the reader to
Refs.\ \cite{Biy90a,Plu92a,And93a}).
Briefly, the main features of interest to us are:
\noindent
{\bf a)}
The pion field is split up into a ``coherent'' and a
``chaotic'' part:
\begin{equation}
\label{cub}
\Pi(x) = \Pi_0(x) + \Pi_{ch}(x) \,.
\end{equation}
\noindent
{\bf b)}
The ratio of chaotically created pions to the total
number of pions is embodied in the ``chaoticity parameter'',
\begin{equation}
\label{cuc}
p = \langle n_{ch} \rangle / \langle n_{ch} + n_0 \rangle \,.
\end{equation}
\noindent
{\bf c)}
Much of the dynamics is contained within the
normalized field correlator,
\begin{equation}
\label{cud}
d_{ij} \equiv
{\langle \Pi_{ch}^{\dag} (\vec k_i) \Pi_{ch}(\vec k_j) \rangle
\over
\left[ \,
\langle \Pi_{ch}^{\dag} (\vec k_i) \Pi_{ch}(\vec k_i) \rangle \;
\langle \Pi_{ch}^{\dag} (\vec k_j) \Pi_{ch}(\vec k_j) \rangle \,
\right]^{1/2} } \ \;,
\end{equation}
(where $\langle A \rangle = {\rm Tr}(\rho A)$ is the
ensemble average over states, weighted by the density matrix
$\rho$)
which is closely related to the Fourier transform of the chaotic
field source functions.
\noindent
{\bf d)}
Working out two-point, three-point and higher-order averages,
this theory of quantum statistics predicts unambiguously
the normalized moments and cumulants of all orders. When relative
phases are neglected, the first three ``QS cumulants''
of interest are \cite{And93a}
\begin{eqnarray}
\label{cue}
k_2 \equiv {C_2 \over \rho_1{\otimes}\rho_1}
&=& 2p(1-p)d_{12} + p^2 d_{12}^2 \,, \\
\label{cuf}
k_3 \equiv {C_3 \over \rho_1{\otimes}\rho_1{\otimes}\rho_1}
&=& 2p^2(1-p)[ d_{12}d_{23} + d_{23}d_{31} + d_{31}d_{12} ]
+ 2p^3 d_{12} d_{23} d_{31} \,,\\
\label{cug}
k_4 \equiv {C_4 \over \rho_1{\otimes}\rho_1{\otimes}\rho_1{\otimes}\rho_1}
&=& \sum_{(24)} p^3(1-p) d_{12} d_{23} d_{34}
\nonumber\\
&&\ {+} 2p^4 [d_{12}d_{23}d_{34}d_{41} +
d_{12}d_{24}d_{43}d_{31} +
d_{14}d_{42}d_{23}d_{31} ]\,,
\end{eqnarray}
where the brackets under the sum indicate the number of permutations.
These cumulants are functions of 1, 3 and 6 pair variables
$q_{ij}$ respectively.
Note the combination of ``ring''-- and ``snake''--like structures in
the combinatorics.
\hfill\par \vspace{\baselineskip}
While in principle calculable from a given density matrix, the
correlator is usually parametrized in a plausible and/or convenient
way. Specifically, the parametrizations we shall be testing are, in
terms of the 4-momentum difference correlators $d_{ij} {=} d(q_{ij})$,
\begin{eqnarray}
\label{cuh}
{\rm gaussian{:}\ \ \ \ \ \ }
d_{ij} &=& \exp(-r^2 q_{ij}^2) \,,
\mbox{\ \ \ \ \ \ }
\\
\label{cui}
{\rm exponential{:}\ \ \ \ \ \ }
d_{ij} &=& \exp(-r q_{ij}) \,,
\mbox{\ \ \ \ \ \ }
\\
\label{cuj}
{\rm power\ law{:}\ \ \ \ \ \ }
d_{ij} &=& q_{ij}^{-\alpha} \,.
\mbox{\ \ \ \ \ \ }
\end{eqnarray}
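The QS cumulants (\ref{cue})--(\ref{cuf}) together with the parametrizations (\ref{cuh})--(\ref{cuj}) are straightforward to evaluate. The sketch below uses illustrative parameter values ($p = 0.5$, $r = 1$ GeV$^{-1}$), not fit results:

```python
import math

# Correlator parametrizations d(q): gaussian, exponential, power law
def d_gaussian(q, r):    return math.exp(-r**2 * q**2)
def d_exponential(q, r): return math.exp(-r * q)
def d_power(q, alpha):   return q**(-alpha)

def k2(q, p, d):
    """Second-order QS cumulant: 2 p (1-p) d + p^2 d^2."""
    return 2*p*(1 - p)*d(q) + p**2 * d(q)**2

def k3(q12, q23, q31, p, d):
    """Third-order QS cumulant: chaotic pair terms plus the genuine triplet term."""
    d12, d23, d31 = d(q12), d(q23), d(q31)
    return 2*p**2*(1 - p)*(d12*d23 + d23*d31 + d31*d12) + 2*p**3*d12*d23*d31

# Illustrative values only, NOT fit results: p = 0.5, r = 1.0 GeV^-1
d = lambda q: d_exponential(q, 1.0)
print(k2(0.2, 0.5, d), k3(0.2, 0.2, 0.2, 0.5, d))
```

In the fully chaotic limit $p = 1$ these expressions collapse to $k_2 = d^2$ and $k_3 = 2 d_{12} d_{23} d_{31}$, which is a useful sanity check on any implementation.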
\section{UA1 data}
We have measured second- and third-order normalized cumulants using a
sample of about 160,000 minimum bias events taken with the UA1 detector
for $p\bar p$ collisions at 630 GeV/c. For details of the detector
and other experimental information regarding particle pairs,
the reader is referred to Ref.~\cite{UA1-93a}. The following cuts were
applied to this sample: $-3 \leq \eta \leq 3$, $p_\perp \geq 0.15$
GeV, $45^\circ \leq \phi \leq 135^\circ$ (by means of this ``good
azimuth'' cut, our statistics were reduced considerably but acceptance
corrections due to the ``holes'' in the UA1 detector at small $\phi$
were thereby avoided). Cumulants were calculated for positives and
negatives separately and then averaged to yield ``like-sign'' values.
No Coulomb corrections were applied.
\section{Fits to the second order cumulant}
In Figure~1, we show the second-order like-sign differential
cumulant $\Delta K_2 = (\int \rho_2 / \int \rho_1{\otimes}\rho_1) - 1$,
where numerator and denominator are integrals over bins
spaced logarithmically between $q = 1$ GeV and 20 MeV\footnote{
It should be remarked that previous work has shown the utility
of using logarithmic rather than linear binning: much of what is
interesting in correlations happens at small $q$, and
this region is probed better by using logarithmic bins.}.
Fits to the data were performed using the three parametrizations
(\ref{cuh})--(\ref{cuj}), either in the full QS form (\ref{cue}) or in
a simple form $\Delta K_2 = p d_{12}$. All fits shown
include, besides the free parameters $p$ and $r$ (or $\alpha$), an
additive constant as free parameter. These additive constants,
necessary because UA1 data is non-Poissonian in nature, will be
commented on further below. Best fit parameter values obtained were $p
= 0.66 \pm 0.07$, $r = 1.16 \pm 0.05$ fm for the QS exponential and $p
= 0.05 \pm 0.01$, $\alpha = 0.64 \pm 0.05$ for the QS power law.
Goodness-of-fit values were $\chi^2/{\rm NDF} = 1.3,\ 4.2$ and 11.5 for the QS
power law, exponential and gaussian respectively.
\hfill\par \vspace{\baselineskip}
To check its influence on fit values, the data point at smallest $q$,
being of doubtful quality, was excluded; the resulting fit values do
not differ much from the full fit.
We note that the QS exponential misses the last three points (apart
from the point at $q=20$ MeV) and that the power laws (single or QS)
appear to do the best job. The gaussian fits are too poor to warrant
further attention and will be neglected from here on. Similar
conclusions were reached by UA1 earlier \cite{UA1-93a}.
\begin{figure}
\centerline{
\epsfysize=100mm
\epsfbox{ua1_fig1.eps}
}
\vspace*{-8mm}
\caption{Second order like-sign UA1 cumulant with fits using various
parametrizations for $d$ inside both the quantum statistical (QS)
formula (\ref{cue}) and a simple parametrization $\Delta K_2 = pd$. }
\label{fig1}
\end{figure}
\section{Consistency checks with third order cumulants}
As stressed already, we are interested not so much in obtaining
numerical values for parameters but rather in using these to check the
theoretical formulae (\ref{cue})--(\ref{cuj}) for consistency with the
data. Three separate checks were performed: two based on
approximations, the third involving a novel approach tentatively called
``theory${\otimes}$experiment'' which will be explained in Section 5.2.
\hfill\par \vspace{\baselineskip}
\subsection{Approximate checks}
Third-order correlations and cumulants are
functions of the three pair variables $(q_{12},q_{23},q_{31})$,
so that the question arises how best to view such three-dimensional
correlations. The easiest projection involves setting
the pair variables equal \cite{Biy90a,Plu92a,UA1-92a},
$q_{12}{=}q_{23}{=}q_{31}{\equiv}q$,
so that Eq.\ (\ref{cuf}) reduces to the simple formula
\begin{equation}
\label{cuk}
k_3(q) = 6p^2(1-p)d^2 + 2p^3d^3 \,.
\end{equation}
Experimentally, however, the prescription of {\it three\/} mutually
equal $q$'s is so restrictive as to make measurement impossible. The
usual way out \cite{UA1-92a,NA22-95a} has been to include all triplets whose
{\it mean\/} of the three $q_{ij}$'s is equal to a given $q$ while still
applying Eq.\ (\ref{cuk}) (the effect of this approximation has, to our
knowledge, not been checked).
\hfill\par \vspace{\baselineskip}
The second approximation involves setting $p \equiv 1$ without
restricting the pair variables. Fortuitously, Eqs.~(\ref{cuf}),
(\ref{cui}) then become $k_3 = 2\exp[-r(q_{12}+q_{23}+q_{31})]$, so
that a simple change to the ``GHP~sum'' variable $S =
(q_{12}{+}q_{23}{+}q_{31})$ does the trick.
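Both reductions can be checked numerically from the third-order QS cumulant (\ref{cuf}) with the exponential correlator (\ref{cui}); the parameter values in this sketch are illustrative only.

```python
import math

def k3_exp(q12, q23, q31, p, r):
    """Third-order QS cumulant with exponential correlator d(q) = exp(-r q)."""
    d12, d23, d31 = (math.exp(-r * q) for q in (q12, q23, q31))
    pair_sum = d12 * d23 + d23 * d31 + d31 * d12
    return 2 * p**2 * (1 - p) * pair_sum + 2 * p**3 * d12 * d23 * d31

p, r, q = 0.6, 1.0, 0.3
d = math.exp(-r * q)
# (a) equal pair variables reduce the general formula to 6 p^2 (1-p) d^2 + 2 p^3 d^3
assert abs(k3_exp(q, q, q, p, r)
           - (6 * p**2 * (1 - p) * d**2 + 2 * p**3 * d**3)) < 1e-12

# (b) for p = 1 only the triplet term survives: k3 = 2 exp(-r S), S = q12+q23+q31
S = 0.1 + 0.5 + 0.3
assert abs(k3_exp(0.1, 0.5, 0.3, 1.0, r) - 2 * math.exp(-r * S)) < 1e-12
print("both reductions confirmed")
```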
\hfill\par \vspace{\baselineskip}
In Figure~2, we show the UA1 third-order cumulant $\Delta K_3$
as a function of the GHP~sum variable $S$.
The lower line represents the first approximation,
i.e.\ formula (\ref{cuk}) using the exponential parametrization
(\ref{cui}) and best-fit values from
$\Delta K_2$ plus an arbitrary additive constant.
(Similar approximations using the gaussian form (\ref{cuh})
with the variable\footnote{
Note that the linear sum $S$ is quite distinct from the
Pythagorean sum variable $Q$.}
$Q \equiv \sqrt{q_{12}^2 + q_{23}^2 + q_{31}^2}$
and equal pair $q$'s have been used before \cite{NA22-95a}.)
The upper line, representing the second approximate check, was
calculated by first fitting $\Delta K_2$ with $p{=}1$ and an QS
exponential for $d$ to obtain $r = 0.89 \pm 0.02$ fm (not shown) and
then importing this value into $k_3(p{=}1) = 2\exp({-rS})$.
\hfill\par \vspace{\baselineskip}
We see that, in both cases, the theoretical curves lie
well below the $\Delta K_3$ data. Even an arbitrary shift
by an additive constant does not improve the match because
of the different shape of the curves as compared to the data
points.
\begin{figure}
\centerline{
\epsfysize=90mm
\epsfbox{ua1_fig2.eps}
}
\caption{Approximate test predictions, compared to the third-order UA1
cumulant using GHP sum topology.}
\label{fig2}
\end{figure}
\begin{figure}
\centerline{
\epsfxsize=135mm
\epsfbox{ua1_fig3.eps}
}
\caption{
Third-order GHP max and GHP sum cumulants, together with
theory${\otimes}$ex\-pe\-ri\-ment predictions from QS theory and
parameter values from $\Delta K_2$. Filled circles represent UA1
data, triangles are predictions based on the QS power-law
parametrization; squares are QS exponential predictions.
}
\label{fig3}
\end{figure}
\subsection{The ``theory${\otimes}$experiment'' method}
The approximate consistency checks performed above are
unsatisfactory for two reasons: first, because they rely on simplifications
of the formulae which may be unwarranted, and second, because
they are suitable only for the exponential parametrization
(or, using $Q$, for the gaussian equivalent). As shown above,
however, the data for $\Delta K_2$, while not excluding an
exponential form, would seem to prefer the power law ---
and the power law cannot be handled by these approximations.
A seemingly better methodology emerges, surprisingly, from
some considerations about normalization.
\hfill\par \vspace{\baselineskip}
{\it Theory\/} and theorists usually work with infinitesimally
differential normalized quantities; for example, the second-order
normalized cumulant is often written down as
\begin{equation}
\label{cul}
R(\vec k_1,\vec k_2) =
{ \rho_2(\vec k_1,\vec k_2) \over \rho_1(\vec k_1) \rho_1(\vec k_2) } -1
=
{C_2(\vec k_1,\vec k_2) \over \rho_1(\vec k_1) \rho_1(\vec k_2) }
\end{equation}
which is (implicitly)
fully differential in the momenta $\vec k_1,\vec k_2$.
Similarly, the normalized theoretical cumulants
$k_i = C_i/\rho_1{\otimes}\cdots{\otimes}\rho_1$ used above
assume essentially perfect measurement accuracy and infinite
statistics.
\hfill\par \vspace{\baselineskip}
{\it Experimentally\/}, one can never measure fully
differential quantities; rather, the numerator and denominator
are averaged over some bin of finite size $\Omega$ (however small)
before the ratio is taken; for example
\begin{equation}
\label{cum}
\Delta K_2(\Omega) =
{\int_\Omega C_2(q)\, dq \over
\int_\Omega \rho_1{\otimes}\rho_1(q)\, dq} \,,
\end{equation}
which approaches the theoretical cumulant $k_2(q)$
only in the limit $\Omega\to 0$.
\hfill\par \vspace{\baselineskip}
This observation can be converted into an exact prescription
for folding a given theoretical normalized quantity with
experimentally measured one-particle distributions. For
simplicity, we take second order quantities as an example.
Since trivially $C_2(q) = k_2(q) \, \rho_1{\otimes}\rho_1(q)$,
we can take $k_2$ from theory, $\rho_1{\otimes}\rho_1$ from
experiment and write exactly
\begin{equation}
\label{cun}
\Delta K_2(\Omega) =
{\int_\Omega k_2^{\rm th}(q) \;
\rho_1{\otimes}\rho_1^{\rm expt}(q) \, dq \over
\int_\Omega \rho_1{\otimes}\rho_1^{\rm expt}(q)\, dq}
\equiv
{\int_\Omega C_2^{{\rm th} {\otimes} {\rm expt}}(q) \; dq \over
\int_\Omega \rho_1{\otimes}\rho_1^{\rm expt}(q)\, dq}
\,.
\end{equation}
Correlation integral theory prescribes that \cite{Egg93d,Egg93a}
\begin{equation}
\label{cuo}
\rho_1{\otimes}\rho_1^{\rm expt}(q)
= \left\langle \left\langle \sum_{i,j} \delta[q - Q_{ij}^{ab}]
\right\rangle_{\!\!b} \right\rangle_{\!\!\!a} ,
\end{equation}
where $Q_{ij}^{ab}
= [({\vec {p_i}}^a - {\vec {p_j}}^b)^2 - (E_i^a - E_j^b)^2]^{1/2}$
is the four-momentum difference between two tracks $i$ and
$j$ taken from different events $a$ and $b$.
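As a concrete sketch of this event-mixing variable, the following Python helper (hypothetical names; the charged-pion mass is assumed for all tracks) computes $Q_{ij}^{ab}$ from two three-momenta:

```python
import math

M_PI = 0.13957  # charged-pion mass in GeV (assumed for all tracks)

def energy(p, m=M_PI):
    """Energy of an on-shell track with three-momentum p = (px, py, pz)."""
    return math.sqrt(m * m + sum(c * c for c in p))

def q_inv(p_a, p_b, m=M_PI):
    """Invariant momentum difference between two tracks,
    Q = sqrt(|p_a - p_b|^2 - (E_a - E_b)^2),
    which is real for two equal-mass on-shell particles."""
    dp2 = sum((a - b) ** 2 for a, b in zip(p_a, p_b))
    de = energy(p_a, m) - energy(p_b, m)
    return math.sqrt(dp2 - de * de)
```

In the correlation-integral prescription this function would be evaluated for tracks $i$ and $j$ drawn from different events $a$ and $b$.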
Taking, for example, the QS cumulant (\ref{cue}) and the
exponential parametrization (\ref{cui}), this leads to
\begin{equation}
\label{cup}
C_2^{{\rm th} {\otimes} {\rm expt}} (q)
= \left\langle \left\langle \sum_{i,j} \delta[q - Q_{ij}^{ab}]
[ 2p(1-p) \exp(-rQ_{ij}^{ab}) + p^2 \exp(-2rQ_{ij}^{ab}) ]
\right\rangle_{\!\!b} \right\rangle_{\!\!\!a} ,
\end{equation}
which can be binned in $q$ or otherwise integrated. In passing, we
observe that Eq.~(\ref{cun}) reduces to the theoretical $k_2$ for
infinitesimal $\Omega$ or for constant $\rho_1{\otimes}\rho_1$ as
required.
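This limiting behaviour can be checked numerically. The sketch below (toy background density and assumed values of $p$ and $r$ in the exponential parametrization) evaluates the bin average of Eq.~(\ref{cum}) by simple quadrature and confirms that it approaches $k_2$ for small $\Omega$:

```python
import math

def k2(q, p=0.3, r=4.0):
    """Exponential QS parametrization (assumed parameter values)."""
    return 2 * p * (1 - p) * math.exp(-r * q) + p * p * math.exp(-2 * r * q)

def rho1x1(q):
    """Toy uncorrelated one-particle product density (assumed shape)."""
    return 1.0 + 0.5 * q

def delta_K2(q0, width, n=2000):
    """Bin-averaged cumulant of Eq. (cum): ratio of the bin integrals of
    C2 = k2 * rho1x1 and rho1x1, by midpoint quadrature."""
    num = den = 0.0
    for i in range(n):
        q = q0 + (i + 0.5) * width / n
        num += k2(q) * rho1x1(q)
        den += rho1x1(q)
    return num / den
```

For a narrow bin the ratio reproduces $k_2(q_0)$; for a wide bin it does not, which is the point of the remark above.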
\hfill\par \vspace{\baselineskip}
Clearly, this can be generalized to all possible moments and cumulants,
independently of variable or integration topology. The procedure
exemplified by Eq.~(\ref{cup}) and its generalizations amounts to a
Monte Carlo integration of a {\it theoretical correlation function\/}
sampled according to the {\it experimental uncorrelated one-particle
distribution\/}; for this reason, we like to call it by the diminutive
``Monte Karli'' or ``MK'' for short. MK can, of course, be implemented
only for fixed numerical values of the theo\-retical parameters, in this
case $p$ and $r$. These must be determined either by more naive
fitting methods (and then checked for consistency) or by a very
cumbersome fitting procedure using the full event sample many times
over.
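A minimal toy implementation of the MK prescription of Eq.~(\ref{cup}) might look as follows; the event lists and the pair-variable function are hypothetical, and this is an illustrative sketch rather than the actual analysis code:

```python
import math

def k2_exp(q, p, r):
    """Theoretical normalized cumulant, exponential QS form:
    k2(q) = 2p(1-p) e^{-rq} + p^2 e^{-2rq}."""
    return 2 * p * (1 - p) * math.exp(-r * q) + p * p * math.exp(-2 * r * q)

def mk_delta_k2(events, bins, p, r, q_of_pair):
    """Monte-Karli estimate of Delta K2 per q-bin:
    numerator   = sum over mixed-event pairs of k2_exp(Q_ij^{ab}),
    denominator = number of mixed-event pairs in the bin,
    i.e. the theoretical correlation sampled on the experimental
    uncorrelated one-particle distribution."""
    num = [0.0] * (len(bins) - 1)
    den = [0] * (len(bins) - 1)
    for ia, ev_a in enumerate(events):
        for ev_b in events[ia + 1:]:          # tracks from different events
            for ta in ev_a:
                for tb in ev_b:
                    q = q_of_pair(ta, tb)
                    for k in range(len(bins) - 1):
                        if bins[k] <= q < bins[k + 1]:
                            num[k] += k2_exp(q, p, r)
                            den[k] += 1
                            break
    return [n / d if d else float("nan") for n, d in zip(num, den)]
```

With $r=0$ the weight is constant and every bin returns the same value, the analogue of the constant-$\rho_1{\otimes}\rho_1$ check mentioned above.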
\hfill\par \vspace{\baselineskip}
In Figure 3, the results of implementing the MK prescription
are shown. Besides the GHP sum topology used in (b),
we show in (a) a separate analysis using the ``GHP max''
topology \cite{Egg93a}, which bins triplets according to the largest
of the three variables, $\max(q_{12},q_{23},q_{31})$.
Fit parameter values used for the respective power law
and exponential MK points were taken from the naive
QS fit to $\Delta K_2$ of Figure 1. (The consistency
of this procedure was checked by inserting these
parameter values back into the MK formulae for $\Delta K_2$
and finding agreement between UA1 data and MK predictions.)
Again, all MK points shown are determined only up to an additive
constant, so that the curves may be shifted up and down.
It is again clear, though, that the shape of the measured third-order
cumulant data differs appreciably from that predicted
by the QS formulae with parameter values taken from $\Delta K_2$.
This conclusion holds independently of the topology used
and of the functional form taken for $d$.
\section{Discussion}
Concerning the fits to $\Delta K_2$, we have concluded that the
gaussian parametrization $d_{ij} = \exp(-r^2q_{ij}^2)$ is quite unsuitable,
while the exponential is better but still not good. The best fit was obtained
using either a simple or a QS (double) power law. This confirms earlier
results~\cite{UA1-93a,NA22-93a}.
The fits were reasonably stable even when
excluding the point at smallest $q$, so that the effect is not due to
this last point.
\hfill\par \vspace{\baselineskip}
Parameter values obtained from fits to $\Delta K_2$ were then applied
to third-order cumulant data in three different checks. Both
approximations, as well as the exact theory${\otimes}$experiment
(Monte Karli) prescription, yielded predictions that did not match the data. The
tests performed in this paper, namely checking three specific
parametrizations (gaussian, exponential and power-law) within one
specific variable $q$ for consistency between $\Delta K_2$ and
$\Delta K_3$ appear to indicate that, {\it under these specific
conditions}, the theory is contradicted by the data.
\hfill\par \vspace{\baselineskip}
It should be clear, though, that this conclusion must at this stage
remain preliminary and limited in scope, for the following reasons:
\begin{itemize}
\item
The data shown are preliminary and must await further checks
such as acceptance corrections, full-azimuth studies,
sensitivity to binning, etc.
\item
The most important caveat relates to the structure of the overall
multiplicity distribution. The fact that UA1 data is not poissonian in
nature \cite{UA1-83a} can be seen immediately at large $q$ where
$\Delta K_2$ converges not to zero (as a poissonian cumulant would) but
to $\approx 0.4$. The same holds for $\Delta K_3$. Theories, however,
are almost universally based on an overall poissonian: as can be easily
verified from Eqs.\ (\ref{cue})--(\ref{cug}), all cumulants tend to
zero for large $q$. The policy followed here, namely reconciling
poissonian theory with non-poissonian data by means of an additive
constant in the cumulants, is a sensible but hitherto poorly-understood
first step. The question of handling cumulants more adequately within a
non-poissonian overall multiplicity distribution is presently being
considered \cite{Liptb}. We also hope that our results may goad
theorists into more careful consideration of their work with respect to
the implicit poissonian normalization used in most theories. See also
Ref.~\cite{And94a}.
\item
Closely related to these additive constants is the question of correct
normalization. Traditional lore in second order divides the density
correlation function (moment) $\Delta F_2$ by an additional
normalization factor $f$, taken as the moment at some large value of
$q$. An alternative method creates as many background pairs as
necessary to achieve the limit of unity for $\Delta F_2$. While the
third order moment can similarly be normalized to unity, the
prescription fails for third order cumulants. A brief scan of the
literature on third-order cumulants reveals that no adjustments were
made for possible non-poissonian multiplicity
structure~\cite{Biy90a,Lyu91a,Plu92a,And93a,Egg93d,Egg93a,Ber77a,UA1-92a,
Ken78a,DELPHI-95a,NA22-94a,Jur89a,Liu86a}.
\item
Finally, one may mention possible changes to the present application of
the theory such as inclusion of relative phases in the correlators,
possible non-gaussian source currents, modelling the
momentum dependence \cite{And93a} of $p$, variable
trans\-for\-ma\-tions \cite{And93b} in $d$ and so on.
\end{itemize}
Beyond these caveats, the following points are of relevance to
the interpretation of our results:
\begin{itemize}
\item
No corrections for Coulomb repulsion \cite{Zaj88a}
were included. We could argue that the same Coulomb effects
that might shift $\Delta K_2$ data upward would increase $\Delta K_3$
data even more, since there are three pairs involved rather than one.
Even more convincing is the fact that $\Delta K_3$ data rises
more strongly than theoretical predictions even for large $q$
(several hundred MeV) where Coulomb repulsion is not expected to
be important.
\item
Strictly speaking, the good power-law
fit to $\Delta K_2$ in itself is inconsistent: QS theory
requires \cite{And93a} $\lim_{q\to 0}\; d(q) = 1$, while the power-law
parametrization diverges. Attempts to explain this in terms of variable
transformations \cite{And93b} or source size
distributions \cite{Gyu82a,Bia92a} may therefore provide useful starting
points in explaining the discrepancies in $\Delta K_3$.
\item
Track mismatching can lead to strong correlation effects because the
reconstruction program may split a single track into a closely
correlated pair. A great deal of effort in early experimental
intermittency studies went into creating clean ``split-track-finding''
algorithms \cite{Lipphd} and these are included in our analysis. We have
checked through additional small-$q$ cuts that mismatching does not
appear to explain our strong rises in the cumulants.
\item
The UA1 sample consists of $\sim$ 15\% kaons and protons which cannot
be distinguished from pions. The effect that these would have on
$\Delta K_3$ is unclear.
\item
Resonances are known to increase $\Delta K_2$ at small $q$, the main
effect in {\it second order} deriving from interference between
``direct'' pions and resonance-decay products.\footnote{
We have also looked at like-sign correlations resulting from
resonance decay chains such as
$(\eta^\prime \to \eta\pi^+\pi^-)$; $(\eta \to \pi^+\pi^-\pi^0)$:
using PYTHIA with the Bose-Einstein routines switched off,
we find no significant like-sign correlations in PYTHIA from
resonances alone.}
How and whether resonances would contribute to like-sign cumulants in
{\it third order} (and for values of $q$ of several hundred MeV shown
in $\Delta K_3$) is still quite mysterious.
\item
At this point, one could wonder whether it is wise to even attempt to
eliminate resonances from hadron-hadron collision data: apart from the
theoretical and technical difficulties, what dynamical information does
the typical ``size'' $r < 1$~fm of a hypothetical ``source'' contain
that is more important than a cascade structure containing resonances
whose existence is beyond doubt? If the ``source'' is scarcely larger
than a nucleon, then how can one speak of incoherent or even classical
production of two pions? And if one eliminates long-lived resonances,
then one would presumably still be left with the short-lived ones
rather than the holy grail of an abstract quantum mechanical ``source''.
\end{itemize}
\noindent
The results of the present paper may appear, at first sight, to
contradict the conclusion \cite{Plu92a}, based on an earlier UA1
paper \cite{UA1-92a}, that QS theory was compatible with
higher-order moments. The apparent discrepancy is explained by pointing
out that 1) measurement techniques have improved considerably since
then, 2) these techniques have permitted the present direct
measurements of cumulants, which are considerably more sensitive than
moments, and 3) even for these moment fits \cite{UA1-92a}, the radii
were not quite constant but showed a systematic increase.
\hfill\par \vspace{\baselineskip}
Bose-Einstein correlation measurements with a view to extracting source
parameters are by now well-established in hadronic and heavy ion
phenomenology. Our intention here was to show that consistency checks
between cumulants of different orders might be a second route to
learning something about the system: if by this method a given theory
can be tested already on a qualitative rather than quantitative basis,
then opportunities for feedback and improvement of such theories may
expand.
\bigskip
{\bf Acknowledgements:}
This work was supported in part by the Austrian Fonds zur F\"orderung
der wissenschaftlichen Forschung (FWF) by means of a Lise-Meitner
fellowship and an APART fellowship of the Austrian Academy of
Sciences.
\section{Introduction}
\label{sec:intro}
In this lecture I will review the status of theoretical calculations of
exclusive $B$-decays. It is intended that this talk should complement
those presented at this conference by N.~Uraltsev~\cite{kolya}
(theory of heavy quark physics), A.~Ali~\cite{ahmed} (rare
$B$-decays) and M.~Gronau~\cite{gronau} ($CP$-violation).
The two main topics which will be discussed here are:
\begin{itemize}
\item[i)] {\em Leptonic Decays} in which the $B$-meson decays into
leptons, e.g. $B\to \tau\nu_\tau$. These are the simplest
to consider theoretically (see sec.~\ref{sec:fb}). Their observation
at future $b$-factories would have a significant impact on the
phenomenology of beauty decays.
\item[ii)] {\em Semileptonic Decays} in which the $b$-quark decays
into a lighter quark + leptons. Examples of such decays include
$B\to (D\ {\mathrm or}\ D^*) + l\nu_l$ and $B\to (\pi\ {\mathrm or}\
\rho) + l\nu_l$, which are being used to determine the $V_{cb}$ and
$V_{ub}$ matrix elements of the CKM-matrix (see
sec.~\ref{sec:semilept}). Many of the theoretical issues concerning
these decays are relevant also for rare decays, such as $B\to
K^*\gamma$.
\end{itemize}
{\em Non-Leptonic Decays} in which the $B$-meson decays into two or
more hadrons, such as $\bar B^0\to\pi^-D^+$, are considerably more
complicated to treat theoretically, and with our current level of
understanding require model assumptions. I will not discuss them
further in this talk (see however the talk by Gronau~\cite{gronau}).
In studying the decays of $B$-mesons, we are largely interested in
extracting information about the properties and parameters of the weak
interactions, and in looking for possible signatures of physics beyond
the standard model. The most important theoretical problem in
interpreting the experimental results, is to control the strong
interaction effects which are present in these decays. This is a
non-perturbative (and hence very difficult) problem, and is the main
subject of this talk. The main theoretical tools that are used to
quantify the effects are lattice simulations and QCD sum rules,
combined with the formalism of the heavy quark effective theory (HQET)
where appropriate.
As with any problem in non-perturbative quantum field theory, the
exploitation of all available symmetries is very important. For the
case of heavy quark physics, the use of the spin-flavour symmetries
that are present when the masses of the heavy quarks are $\gg
\Lambda_{QCD}$, leads to considerable simplifications (see
refs.~\cite{kolya} and \cite{mn,ms} for recent reviews and references
to the original literature). In particular, as will be seen in the
following sections, the use of heavy quark symmetries and the
HQET is particularly helpful for $B$-decays.
It is not appropriate in this lecture to present a detailed critical
review of the systematic errors present in lattice simulations (see
ref.~\cite{lattbphys} for a recent review). Since many of the results
below are based on lattice simulations, it is, however, necessary to
mention at least the main source of uncertainty present in the
calculations of quantities in $B$-physics. The number of space time
points on a lattice is limited by the available computing resources.
One therefore has to compromise between two competing requirements:
(i) that the lattice be sufficiently large in physical units to
contain the particle(s) whose properties are being studied, i.e. the
length of the lattice in each direction should be $\gg 1$\,fm, and
(ii) that the spacing between neighbouring lattice points, $a$, be
sufficiently small to avoid errors due to the granularity of the
lattice (called ``lattice artefacts'' or ``discretization errors'' in
the literature), i.e. $a^{-1}\gg \Lambda_{QCD}$. Much effort is
currently being devoted to reducing the discretization effects by
constructing ``improved'' (or even ``perfect''~\cite{perfect}) lattice
actions and operators following the approach of
Symanzik~\cite{improvement}. Typical values of $a^{-1}$ in current
simulations are about 2--3 GeV, i.e. the lattice spacings are larger
than the Compton wavelength of the $b$-quark, and the propagation of a
$b$-quark on such lattices cannot be studied directly. The results
presented below are obtained by extrapolating those computed directly
for lighter quarks (with masses typically around that of the charm
quark). In addition, calculations can be performed in the HQET and
the results obtained in the infinite mass limit can then be used to
guide this extrapolation. I should also add that, except where
explicitly stated to the contrary, the results below have been
obtained in the quenched approximation, in which sea-quark loops are
neglected. This approximation is very gradually being relaxed, as
computing resources and techniques are improved.
The second non-perturbative method which is used extensively to
compute amplitudes for $B$-decays is QCD sum rules~\cite{svz}. In this
approach, correlation functions are calculated at intermediate
distances, keeping a few terms in the Operator Product Expansion
(OPE), and by using dispersion relations are related to spectral
densities. The evaluation of the systematic uncertainties, such as
those due to the truncation of the perturbation series and OPE or to
the specific models that are used for the continuum contribution to
the spectral densities, is a very complicated issue; see
refs.~\cite{mn,ms} and the papers cited below for a discussion of
this important question.
I now review the status of leptonic and semileptonic decays of
$B$-mesons in turn.
\section{Leptonic Decays}
\label{sec:fb}
\begin{figure}[t]
\begin{center}
\begin{picture}(180,40)(0,15)
\SetWidth{2}\ArrowLine(10,41)(50,41)
\SetWidth{0.5}
\Oval(80,41)(15,30)(0)
\ArrowLine(79,56)(81,56)\ArrowLine(81,26)(79,26)
\GCirc(50,41){7}{0.8}
\Photon(110,41)(150,41){3}{7}
\ArrowLine(150,41)(167,51)\ArrowLine(167,31)(150,41)
\Text(20,35)[tl]{${\mathbf B^-}$}
\Text(80,62)[b]{$b$}\Text(80,20)[t]{$\bar u$}
\Text(172,53)[l]{$l^-$}\Text(172,30)[l]{$\bar\nu$}
\Text(132,48)[b]{$W$}
\end{picture}
\caption{Diagram representing the leptonic decay of the $B$-meson.}
\label{fig:fb}
\end{center}
\end{figure}
Leptonic decays of $B$-mesons, see fig.~\ref{fig:fb}, are particularly
simple to treat theoretically~\footnote{For simplicity the
presentation here is for the pseudoscalar $B$-meson. A parallel
discussion holds also for the vector meson $B^*$.}. The strong
interaction effects are contained in a single \underline{unknown} number,
called the decay constant $f_B$. Parity symmetry implies that only the
axial component of the $V$--$A$ weak current contributes to the decay,
and Lorentz invariance that the matrix element of the axial current is
proportional to the momentum of the $B$-meson (with the constant of
proportionality defined to be $f_B$):
\begin{equation}
\langle 0\,|\, A_\mu(0)\, |\, B(p)\rangle = i\, f_B \, p_\mu\ .
\label{eq:fbdef}\end{equation}
Knowledge of $f_B$ would allow us to predict the rates for the
corresponding decays:
\begin{equation}
\Gamma(B\to l\nu_l\, +\, l\nu_l\gamma) =
\frac{G_F^2 |V_{ub}|^2}{8\pi}f_B^2m_l^2m_B
\left(1 - \frac{m_l^2}{m_B^2}\right)^2\,( 1 + O(\alpha))\ ,
\label{eq:rate}\end{equation}
where the $O(\alpha)$ corrections are also known.
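For orientation, a short numerical sketch of Eq.~(\ref{eq:rate}) at tree level; the values of $|V_{ub}|$, $f_B$ and the $B$ lifetime are assumptions inserted only to show the expected order of magnitude:

```python
import math

# Illustrative inputs (assumed values; GeV units unless noted)
G_F   = 1.16637e-5   # Fermi constant, GeV^-2
V_ub  = 3.5e-3       # |V_ub| (assumed)
f_B   = 0.180        # decay constant, central value of eq. (fblatt), GeV
m_B   = 5.279        # B-meson mass
m_tau = 1.777        # tau mass

def gamma_leptonic(m_l, fB=f_B, Vub=V_ub):
    """Tree-level rate of eq. (rate), without the O(alpha) correction."""
    x = (m_l / m_B) ** 2
    return (G_F**2 * Vub**2 / (8 * math.pi)) * fB**2 * m_l**2 * m_B * (1 - x)**2

# The rate scales as m_l^2 (helicity suppression), so B -> tau nu dominates.
tau_B = 1.6e-12      # B lifetime in seconds (assumed)
hbar  = 6.582e-25    # GeV s
br_tau = gamma_leptonic(m_tau) * tau_B / hbar   # branching ratio ~ 1e-4
```

The helicity suppression visible in the $m_l^2$ factor is why only the $\tau$ channel is considered observable at future $b$-factories.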
In addition to leptonic decays, it is expected that the knowledge of
$f_B$ would also be useful for our understanding of other processes in
$B$-physics, particularly for those for which ``factorization'' might
be thought to be a useful approximation. For example, in
$B$--$\overline{B}$ mixing, the strong interaction effects are
contained in the matrix element of the $\Delta B$\,=\,2 operator:
\begin{equation}
M = \langle \overline{B}^0\,|\,\bar b \gamma_\mu(1-\gamma_5)q\
\bar b \gamma_\mu(1-\gamma_5)q\,|\,B^0\rangle\ .
\label{eq:mbbar}\end{equation}
It is conventional to introduce the $B_B$-parameter through the definition
\begin{equation}
M = \frac{8}{3}\, f_B^2 M_B^2 B_B \ .
\label{eq:bbdef}\end{equation}
In the vacuum saturation approximation (whose precision is difficult
to assess a priori) $B_B=1$. It appears that $B_B$ is considerably
easier to evaluate than $f_B$, e.g. recent lattice results (for the
matrix element $M$ of the operator renormalized at the scale $m_B$ in
the $\overline{MS}$ scheme) include $B_B(m_b) = 0.90(5)$ and
$0.84(6)$~\cite{jlqcdbb}, and $0.90(3)$~\cite{bbs}. Thus it is likely that
the uncertainty in the value of the matrix element $M$ in
eq.~(\ref{eq:mbbar}) is dominated by our ignorance of $f_B$.
\paragraph {${\mathbf f_{D_s}}$:}
Since experimental results are beginning to become available for
$f_{D_s}$, I will start with a brief review of the decay constants of
charmed mesons. Many lattice computations of $f_D$ have been performed
during the last ten years, and my summary of the results
is~\cite{marseille}~\footnote{\label{foot:fdfb} The rapporteur at the
1995 Lattice conference summarized the results for the decay
constants as $f_D\simeq f_B\simeq 200\,\mbox{MeV}\pm 20\%
$~\protect\cite{allton}.}
\begin{equation}
f_D = 200 \pm 30\ \mbox{MeV}\ ,
\label{eq:fdlatt}\end{equation}
using a normalization in which
$f_{\pi^+}\simeq 131$~MeV.
The value of the decay constant is found to decrease as the mass of
the light valence quark is decreased (as expected), so that $f_{D_s}$
is 7--15\% larger than $f_D$, $f_{D_s}=220\pm 35$~MeV. As an
example of the many lattice results which have been published for
$f_{D_s}$, I give here the two new ones presented at this year's
international symposium on lattice field theory. The MILC
collaboration found $f_{D_s} = 211 \pm 7 \pm 25\pm 11$ MeV, where the
first error is statistical, the second an estimate of the
systematic uncertainties within the quenched approximation, and the
third an estimate of the quenching errors~\cite{milc}. The JLQCD
collaboration found $f_{D_s} = 216\pm 6\err{22}{15}$~MeV, where the
second error is systematic (within the quenched
approximation)~\cite{jlqcdfds}. These results illustrate the fact that
the errors are dominated by systematic uncertainties, and the main
efforts of the lattice community are being devoted to controlling
these uncertainties.
It is very interesting to compare the lattice \underline{prediction}
of $220\pm 35$~MeV with experimental measurements for $f_{D_s}$. The
1996 Particle Data book~\cite{pdg} quotes the results $f_{D^+}<
310$~MeV and
\begin{eqnarray}
f_{D_s^+} & = & 232 \pm 45 \pm 20 \pm 48 \ \mbox{MeV}\hspace{1in}\mbox{WA75}
\label{eq:fdspdgwa75}\\
f_{D_s^+} & = & 344 \pm 37 \pm 52 \pm 42 \ \mbox{MeV}\hspace{1in}\mbox{CLEO}
\label{eq:fdspdgcleo}\\
f_{D_s^+} & = & 430 \errr{150}{130} \pm 40 \ \mbox{MeV}\hspace{1.5in}
\label{eq:fdspdgbes}\mbox{BES}\ .
\end{eqnarray}
More recently the CLEO result has been updated~\cite{cleoupdate}
($f_{D_s^+}= 284 \pm 30 \pm 30 \pm 16$~MeV) and the E653 collaboration
has found~\cite{e653} $f_{D_s^+}= 194 \pm 35 \pm 20 \pm 14$~MeV.
Combining the four measurements of $f_{D_s}$ from $D_s\to\mu\nu$
decays, the rapporteur at this year's ICHEP conference
found~\cite{richman}
\begin{equation}
f_{D_s} = 241 \pm 21 \pm 30\ \mbox{MeV}\ .
\label{eq:richman}\end{equation}
In spite of the sizeable errors, the agreement with the lattice
prediction is very pleasing and gives us further confidence in the
predictions for $f_B$ and related quantities.
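The combination can be illustrated by a naive inverse-variance average of the four measurements quoted above, with the split errors added in quadrature and the asymmetric BES error symmetrized; this is only a consistency sketch, not the rapporteur's actual procedure:

```python
import math

# (central value, total error) in MeV; errors combined in quadrature
measurements = [
    (232, math.sqrt(45**2 + 20**2 + 48**2)),   # WA75
    (284, math.sqrt(30**2 + 30**2 + 16**2)),   # CLEO (updated)
    (430, math.sqrt(140**2 + 40**2)),          # BES (symmetrized)
    (194, math.sqrt(35**2 + 20**2 + 14**2)),   # E653
]

w = [1.0 / s**2 for _, s in measurements]          # inverse-variance weights
mean = sum(wi * v for wi, (v, _) in zip(w, measurements)) / sum(w)
err = 1.0 / math.sqrt(sum(w))
```

The result lands close to the quoted $241$~MeV, with a comparable combined error.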
\paragraph {${\mathbf f_B}$:}
For sufficiently large masses of the heavy quark, the decay constant
of a heavy--light pseudoscalar meson ($P$) scales with its mass ($M_P$) as
follows:
\begin{equation}
f_P = \frac{A}{\sqrt{M_P}}\left[\alpha_s(M_P)^{-2/\beta_0}\left\{1 +
O(\alpha_s(M_P)\,)\,\right\} + O\left(\frac{1}{M_P}\right)\,\right]\ ,
\label{eq:fpscaling}\end{equation}
where $A$ is independent of $M_P$.
Using the scaling law~(\ref{eq:fpscaling}), a value of about 200~MeV
for $f_D$ would correspond to $f_B\simeq 120$~MeV. Results from
lattice computations, however, indicate that $f_B$ is significantly
larger than this and that the $O(1/M_P)$ corrections on the right-hand
side of eq.~(\ref{eq:fpscaling}) are considerable. My summary of the
lattice results is~\cite{marseille} (see also footnote~\ref{foot:fdfb}):
\begin{equation}
f_B = 180 \pm 40\ \mbox{MeV}\ .
\label{eq:fblatt}\end{equation}
The coefficient of the $O(1/M_P)$ corrections is found to be typically
between 0.5 and 1~GeV.
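A numerical illustration of the size of these corrections; the sign and magnitude of the coefficient $c$ below are assumptions consistent with the 0.5--1~GeV range quoted above, and the logarithmic factor is omitted:

```python
import math

M_D, M_B = 1.869, 5.279   # GeV
f_D = 0.200               # GeV, central value of eq. (fdlatt)

# Naive static scaling, f_P ~ A/sqrt(M_P), ignoring logs and 1/M terms:
f_B_naive = f_D * math.sqrt(M_D / M_B)    # ~ 0.12 GeV, as quoted in the text

def f_P(M, A, c):
    """Scaling law with a single power correction (logs omitted):
    f_P = (A / sqrt(M)) * (1 - c/M), c ~ 0.5--1 GeV (assumed)."""
    return A / math.sqrt(M) * (1 - c / M)

c = 0.9                                    # GeV, illustrative coefficient
A = f_D * math.sqrt(M_D) / (1 - c / M_D)   # fix the constant A from f_D
f_B_est = f_P(M_B, A, c)                   # ~ 0.19 GeV
```

With a correction of this size the extrapolation from $f_D$ lands near the lattice value of eq.~(\ref{eq:fblatt}) rather than at the naive 120~MeV.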
Present lattice studies of heavy--light decay constants are
concentrating on relaxing the quenched approximation, on calculating
the $O(1/M_P)$ corrections in eq.~(\ref{eq:fpscaling}) explicitly, and
on reducing the discretization errors through the use of improved
actions and operators. The results obtained using QCD sum rules are
in very good agreement with those from lattice simulations (see, for
instance, ref.~\cite{mn} and references therein, and
ref.~\cite{narisoncracow}).
\section{Semileptonic Decays}
\label{sec:semilept}
\begin{figure}
\begin{center}
\begin{picture}(180,60)(20,10)
\SetWidth{2}\ArrowLine(10,41)(43,41)
\Text(15,35)[tl]{${\mathbf B}$}
\SetWidth{0.5}
\Oval(100,41)(20,50)(0)
\SetWidth{2}\ArrowLine(157,41)(190,41)
\Text(180,35)[tl]{${\mathbf D,\,D^*,\,\pi,\,\rho}$}
\SetWidth{0.5}
\Vertex(100,61){3}
\GCirc(50,41){7}{0.8}\GCirc(150,41){7}{0.8}
\Text(75,48)[b]{$b$}\Text(117,48)[b]{$c,u$}
\Text(100,16)[t]{$\bar q$}
\Text(100,71)[b]{$V$--$A$}
\ArrowLine(101,21)(99,21)
\ArrowLine(70,57)(72,58)\ArrowLine(128,57.5)(130,56.5)
\end{picture}
\caption{Diagram representing the semileptonic decay of the $B$-meson.
$\bar q$ represents the light valence antiquark, and the black circle
represents the insertion of the $V$--$A$ current with the appropriate
flavour quantum numbers.}
\label{fig:sl}
\end{center}
\end{figure}
For the remainder of this talk I will discuss semileptonic decays of
$B$-mesons, considering in turn the two cases in which the $b$-quark
decays semileptonic\-ally into a $c$-quark or a $u$-quark, see
fig.~\ref{fig:sl}. In both cases it is convenient to use space-time
symmetries to express the matrix elements in terms of invariant
form factors (I use the helicity basis for these as defined below).
When the final state is a pseudoscalar meson $P=D$ or $\pi$, parity
implies that only the vector component of the $V$--$A$ weak current
contributes to the decay, and there are two independent form factors,
$f^+$ and $f^0$, defined by
\begin{eqnarray}
\langle P(p_P)| V^\mu|B(p_B)\rangle & = &
f^+(q^2)\left[\,(p_B+p_P)^\mu -
\frac{M_B^2 - M_P^2}{q^2}\,q^\mu\right] \nonumber\\
& + & \ \ \ f^0(q^2)\,\frac{M_B^2 - M_P^2}{q^2}\,q^\mu\ ,
\label{eq:ffpdef}\end{eqnarray}
where $q$ is the momentum transfer, $q=p_B-p_P$. When the final-state
hadron is a vector meson $V=D^*$ or $\rho$, there are four independent
form factors:
\begin{eqnarray}
\langle V(p_V)| V^\mu|B(p_B)\rangle & = &
\frac{2V(q^2)}{M_B+M_V}\epsilon^{\mu\gamma\delta\beta}
\varepsilon^*_\beta p_{B\,\gamma}p_{V\,\delta}\label{eq:ffvvdef}\\
\langle V(p_V)| A^\mu|B(p_B)\rangle & = &
i (M_B + M_V) A_1(q^2) \varepsilon^{*\,\mu}\, - \nonumber\\
&&\hspace{-1in}
i\frac{A_2(q^2)}{M_B+M_V} \varepsilon^*\hspace{-3pt}\cdot\hspace{-2pt}p_B
(p_B + p_V)^\mu + i \frac{A(q^2)}{q^2} 2 M_V
\varepsilon^*\hspace{-3pt}\cdot\hspace{-2pt}p_B q^\mu\ ,
\label{eq:ffvadef}\end{eqnarray}
where $\varepsilon$ is the polarization vector of the final-state meson,
and $q = p_B-p_V$.
Below we shall also discuss the form factor $A_0$, which is given
in terms of those defined above by $A_0 = A + A_3$, with
\begin{equation}
A_3 = \frac{M_B + M_{D^*}}{2 M_{D^*}}A_1 -
\frac{M_B - M_{D^*}}{2 M_{D^*}}A_2\ .
\label{eq:a3def}\end{equation}
\subsection{Semileptonic ${\mathbf B\to D}$ and ${\mathbf B\to D^*}$
Decays}
\label{subsec:vcb}
$B\to D^*$ and, more recently, $B \to D$ decays are used to determine
the $V_{cb}$ element of the CKM matrix. Theoretically they are
relatively simple to consider, since the heavy quark symmetry
implies that the six form factors are related, and that there is only
one independent form factor $\xi(\omega)$, specifically:
\begin{eqnarray}
f^+(q^2) & = & V(q^2) = A_0(q^2) = A_2(q^2)
\nonumber\\
& = & \left[1 -
\frac{q^2}{(M_B + M_D)^2}\right]^{-1} A_1(q^2) = \frac{M_B+M_D}
{2\sqrt{M_BM_D}}\,\xi(\omega)\ ,
\label{eq:iw}\end{eqnarray}
where $\omega = v_B\cdot v_D$. Here the label $D$ represents the $D$-
or $D^*$-meson as appropriate. In this leading approximation the
pseudoscalar and vector mesons are degenerate. The unique form factor
$\xi(\omega)$, which contains all the non-perturbative QCD effects, is
called the Isgur--Wise (IW) function. Vector current conservation
implies that the IW-function is normalized at zero recoil, i.e. that
$\xi(1) =1$. This property is particularly important in the extraction
of the $V_{cb}$ matrix element.
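The relations of eq.~(\ref{eq:iw}) are easily coded. The sketch below (meson masses and the placeholder IW function are illustrative) returns the form factors appearing in eq.~(\ref{eq:iw}) from a single $\xi(\omega)$:

```python
import math

M_B, M_D = 5.279, 1.869   # GeV (D-meson case; use M_D* for the vector)

def q2_of_omega(omega, M=M_D):
    """Momentum transfer: q^2 = M_B^2 + M^2 - 2 M_B M omega."""
    return M_B**2 + M**2 - 2 * M_B * M * omega

def form_factors_hqs(omega, xi, M=M_D):
    """Heavy-quark-symmetry relations of eq. (iw): the listed form
    factors expressed through one Isgur-Wise function xi(omega)."""
    q2 = q2_of_omega(omega, M)
    common = (M_B + M) / (2 * math.sqrt(M_B * M)) * xi(omega)
    return {
        "f+": common, "V": common, "A0": common, "A2": common,
        "A1": (1 - q2 / (M_B + M)**2) * common,
    }
```

At zero recoil ($\omega=1$, $q^2=(M_B-M)^2$) with $\xi(1)=1$, the relations collapse to simple mass ratios, which is the normalization exploited in extracting $|V_{cb}|$.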
The relations in eq.~(\ref{eq:iw}) are valid up to perturbative and
power corrections. The theoretical difficulty in making predictions
for the form factors lies in calculating these corrections with
sufficient precision.
The decay distribution for $B\to D^*$ decays can be written as:
\begin{eqnarray}
\frac{d\Gamma}{d\omega} & = & \frac{G_F^2}{48\pi^3}
(M_B-M_{D^*})^2 M_{D^*}^3 \sqrt{\omega^2 -1}\,(\omega+1)^2\cdot
\nonumber\\
& &\hspace{-0.35in}\left[ 1 + \frac{4\omega}{\omega + 1}
\frac{M_B^2 - 2\omega M_BM_{D^*}+M_{D^*}^2}{(M_B-M_{D^*})^2}\right]
|V_{cb}|^2\, {\cal F}^2(\omega)\ ,
\label{eq:distr}\end{eqnarray}
where ${\cal F}(\omega)$ is the IW-function combined with perturbative
and power corrections. It is convenient theoretically to
consider this distribution near $\omega = 1$. In this case $\xi(1) =
1$, and there are no $O(1/m_Q)$ corrections (where $Q= b$ or $c$) by
virtue of Luke's theorem~\cite{luke}, so that the expansion of ${\cal
F}(1)$ begins like:
\begin{equation}
{\cal F}(1) = \eta_A\left( 1 + 0\,\frac{\Lambda_{QCD}}{m_Q} +
c_2\frac{\Lambda^2_{QCD}}{m_Q^2} + \cdots\right)\, ,
\label{eq:fexpansion}\end{equation}
where $\eta_A$ represents the perturbative corrections.
The one-loop contribution to $\eta_A$ has been known for some time now,
whereas the two-loop contribution was evaluated this year, with the
result~\cite{czarnecki}:
\begin{equation}
\eta_A = 0.960\pm 0.007\ ,
\end{equation}
where we have taken the value of the two-loop contribution as an
estimate of the error.
The power corrections are much more difficult to estimate reliably.
Neubert has recently combined the results of
refs.~\cite{fn}--\cite{suv} to estimate that the $O(1/m_Q^2)$ terms in
the parentheses in eq.~(\ref{eq:fexpansion}) are about $-0.055\pm
0.025$ and that
\begin{equation}
{\cal F}(1) = 0.91 (3)\ .
\label{eq:f1result}\end{equation}
In considering eq.~(\ref{eq:f1result}), the fundamental question that
has to be asked is whether the power corrections are sufficiently
under control. There are differing, passionately held views on this
subject. The opinion of G.~Martinelli and myself is that the
uncertainty in eq.~(\ref{eq:f1result}) is
underestimated~\cite{ht}. The power corrections are proportional
to matrix elements of higher-dimensional operators. These have either
to be evaluated non-perturbatively or to be determined from some other
physical process. In either case, before the matrix element can be
determined a subtraction of large terms is required (since
higher-dimensional operators in general contribute to non-leading
terms). The ``large'' terms are usually only known in perturbation
theory at tree level, one-loop level or exceptionally at two-loop
level. Therefore the precision of such a subtraction is limited.
Moreover the definition of the higher-dimensional operators, and hence
the value of their matrix elements, depend significantly on the
treatment of the higher-order terms of the perturbation series for the
coefficient function of the leading twist operator (this series not
only diverges, but is not summable by any standard technique). These
arguments are expanded, with simple examples and references to the
original literature, in ref.~\cite{ht}. Considerable effort is being
devoted at present to improving the theoretical control over power
corrections.
Bearing in mind the caveat of the previous paragraph, the procedure
for extracting the $V_{cb}$ matrix element is to extrapolate the
experimental results for $d\Gamma/d\omega$ to $\omega = 1$ and to use
eq.~(\ref{eq:distr}) with the theoretical value of ${\cal F}(1)$. See
for example the results presented by Artuso at this
conference~\cite{artuso}.
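Schematically, the extraction amounts to evaluating eq.~(\ref{eq:distr}) and dividing the extrapolated intercept by the theoretical normalization; the intercept value below is an assumed number for illustration only:

```python
import math

G_F = 1.16637e-5             # GeV^-2
M_B, M_Dst = 5.279, 2.010    # GeV

def dGamma_domega(omega, Vcb, F):
    """Differential rate of eq. (distr), in GeV; vanishes at omega = 1."""
    pref = G_F**2 / (48 * math.pi**3) * (M_B - M_Dst)**2 * M_Dst**3
    kin = math.sqrt(omega**2 - 1) * (omega + 1)**2
    brack = 1 + (4 * omega / (omega + 1)) * (
        M_B**2 - 2 * omega * M_B * M_Dst + M_Dst**2) / (M_B - M_Dst)**2
    return pref * kin * brack * Vcb**2 * F**2

# Extraction: the experiment extrapolates |Vcb|*F(omega) to omega = 1;
# dividing by the theoretical F(1) = 0.91(3) gives |Vcb|.
vcb_F1_expt = 0.035          # assumed extrapolated intercept
F1_th = 0.91
Vcb = vcb_F1_expt / F1_th    # ~ 0.038
```

Since the phase space vanishes at $\omega=1$, the intercept itself is never measured directly, which is why the shape of ${\cal F}(\omega)$ discussed next matters for the extrapolation.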
Having discussed the theoretical status of the normalization ${\cal
F}(1)$, let us now consider the shape of the function ${\cal
F}(\omega)$, near $\omega = 1$. A theoretical understanding of the
shape would be useful to guide the extrapolation of the experimental
data, and also as a test of our understanding of the QCD effects. We
expand ${\cal F}$ as a power series in $\omega -1$:
\begin{equation}
{\cal F}(\omega) = {\cal F}(1)\, \left[1 - \hat\rho^2\,(\omega -1)
+\hat c\, (\omega -1 )^2 + \cdots\right]\ ,
\label{eq:ftaylor}\end{equation}
where~\cite{neubertcernschool}
\begin{equation}
\hat\rho^2 = \rho^2 + (0.16\pm 0.02) + \mbox{power corrections}\ ,
\label{eq:rhohat}\end{equation}
and $\rho^2$ is the slope of the IW-function. What is known theoretically
about the parameters in eqs.~(\ref{eq:ftaylor}) and (\ref{eq:rhohat})?
Bjorken~\cite{bj} and Voloshin~\cite{voloshin} have derived lower and
upper bounds, respectively, for $\rho^2$:
\begin{equation}
\frac{1}{4} < \rho^2 < \frac{1}{4} + \frac{\overline{\Lambda}}{2 E_{min}}
\ ,
\label{eq:rhosqbounds}\end{equation}
where $\overline\Lambda$ is the binding energy of the $b$-quark in the
$B$-meson, and $E_{min}$ is the difference in masses between the ground
state and the first excited state. There are perturbative corrections
to the bounds in eq.~(\ref{eq:rhosqbounds})~\cite{gk}, on the basis of
which Korchemsky and Neubert~\cite{kn} conclude that
\begin{equation}
0.5 < \rho^2 < 0.8\ .
\label{eq:rhosqbounds2}\end{equation}
Values of $\rho^2$ obtained using QCD sum rules and lattice
simulations are presented in table~\ref{tab:rhosq}. The theoretical
results are broadly in agreement with the experimental measurements,
e.g. in fig.~\ref{fig:iw} we show the comparison of the lattice
results from ref.~\cite{ukqcdiw} with the data from the CLEO
collaboration~\cite{cleoii}.
\begin{table}[htb]
\centering
\begin{tabular}{|c|l|}
\hline
$\rho^2$ & \hspace{0.4in}Method\\ \hline
$0.84\pm 0.02$ & QCD sum rules~\protect\cite{bagan}\\
$0.7\pm 0.1$ & QCD sum rules~\protect\cite{neubertrhosq}\\
$0.70 \pm 0.25$ & QCD sum rules~\protect\cite{bs}\\
$1.00 \pm 0.02$ & QCD sum rules~\protect\cite{narison}\\ \hline
$0.9\errr{0.2}{0.3}\errr{0.4}{0.2}$&Lattice QCD~\protect\cite{ukqcdiw}
\\ \hline
\end{tabular}
\caption{Values of the slope of the IW-function of a heavy meson, obtained
using QCD sum rules or lattice QCD.}
\label{tab:rhosq}
\end{table}
\begin{figure}
\begin{picture}(120,200)
\put(50,-20){\ewxy{vcb.ps}{110mm}}
\end{picture}
\caption{Fit of the UKQCD lattice results for
$|V_{cb}|{\cal F}(\omega)$~\protect\cite{ukqcdiw}
to the experimental data from the CLEO collaboration
~\protect\cite{cleoii}.}
\label{fig:iw}\end{figure}
Recently, using analyticity and unitarity properties of the
amplitudes, as well as the heavy quark symmetry, Caprini and Neubert
have derived an intriguing result for the curvature parameter $\hat
c$~\cite{caprini}:
\begin{equation}
\hat c \simeq 0.66\, \hat\rho^2 - 0.11 \ .
\label{eq:chat}\end{equation}
This result implies that one of the two parameters in
(\ref{eq:ftaylor}) can essentially be eliminated, simplifying
considerably the extrapolation to $\omega = 1$. Earlier attempts
to exploit similar methods gave weaker bounds on the parameters.
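As an illustration of how eq.~(\ref{eq:chat}) reduces the fit to a single shape parameter, the truncated expansion can be sketched in a few lines (a sketch only; `F_ratio` is a hypothetical helper, not code from the references):

```python
def F_ratio(omega, rho2hat):
    """Truncated shape F(omega)/F(1) of eq. (ftaylor), with the curvature
    fixed by the Caprini-Neubert relation c_hat = 0.66*rho2hat - 0.11."""
    c_hat = 0.66 * rho2hat - 0.11
    w = omega - 1.0
    return 1.0 - rho2hat * w + c_hat * w**2

# one-parameter shape near zero recoil, e.g. for rho2hat = 0.7:
print(round(F_ratio(1.1, 0.7), 5))  # -> 0.93352
```

With the curvature eliminated, extrapolating data to $\omega=1$ amounts to fitting the single parameter $\hat\rho^2$.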
Finally in this section I consider $B\to D$ semileptonic decays,
which are beginning to be measured experimentally~\cite{artuso}
with good precision. Theoretically the first complication is that
the $1/m_Q$ corrections do not vanish at $\omega = 1$. However, as
pointed out by Shifman and Voloshin~\cite{sv}, these corrections would
vanish in the limit in which the $b$- and $c$-quarks are degenerate.
This leads to a suppression factor
\begin{equation}
S = \left(\,\frac{M_B-M_D}{M_B+M_D}\,\right)^2\simeq 0.23
\label{eq:sv}\end{equation}
in the $1/m_Q$ corrections, which reduces their size considerably.
Ligeti, Nir, and Neubert estimate the $1/m_Q$ corrections to lie
between approximately $-1.5\%$ and $+7.5\%$~\cite{lnn}. The $1/m_Q^2$
corrections for this decay have not yet been studied systematically.
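The numerical value of the suppression factor in eq.~(\ref{eq:sv}) is quickly verified (the meson masses below are approximate inputs, assumed here purely for illustration):

```python
M_B, M_D = 5.279, 1.870   # GeV; approximate B and D meson masses (assumed values)
S = ((M_B - M_D) / (M_B + M_D))**2
print(round(S, 2))        # -> 0.23
```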
\subsection{Semileptonic ${\mathbf B\to \rho}$ and ${\mathbf B\to \pi}$
Decays}
\label{subsec:vub}
In this subsection I consider the semileptonic decays $B\to\rho$ and
$B\to\pi$. These decays are currently being studied experimentally,
with the goal of extracting the $V_{ub}$ matrix element.
Heavy quark symmetry is less predictive for heavy$\to$light decays
than it is for heavy$\to$heavy ones. In particular, as we have seen
in the preceding subsection, the normalization condition $\xi(1)=1$
was particularly useful in the extraction of $V_{cb}$. There is no
corresponding normalization condition for heavy$\to$light decays.
Heavy quark symmetry does, however, give useful scaling laws for the
behaviour of the form factors with the mass of the heavy quark at
fixed $\omega$:
\begin{equation}
V,A_2,A_0,f^+ \sim M^{\frac{1}{2}};\ \ \ \
A_1, f^0 \sim M^{-\frac{1}{2}};\ \ \ \
A_3 \sim M^{\frac{3}{2}}\ .
\label{eq:scaling}\end{equation}
Each of the scaling laws in eq.~(\ref{eq:scaling}) is valid up to
calculable logarithmic corrections.
Several groups have tried to evaluate the form factors using lattice
simulations~\cite{elc}--\cite{ukqcdbtorho} (for a review see
ref.~\cite{jmfstlouis}). The results that I will use for illustration
are taken from the UKQCD collaboration, who have attempted to study
the $q^2$ dependence of the form factors.
From lattice simulations we can only obtain the form factors for part
of the physical phase space. In order to keep the discretization
errors small, we require that the three-momenta of the $B$,
$\pi$ and $\rho$ mesons be small in lattice units. This implies that we can
only determine the form factors at large values of momentum transfer
$q^2 = (p_B-p_{\pi,\rho})^2$. Fortunately, as we will see below, for
$B\to\rho$ decays, this region of momentum space is appropriate for
the extraction of $V_{ub}$.
\begin{figure}
\begin{picture}(120,200)
\put(50,0){\ewxy{A1mbqsq.ps}{110mm}}
\end{picture}
\caption{The form factor $A_1(q^2)$ for the decay
$\bar B^0\to\rho^+l^-\bar\nu_l$. Squares are measured lattice data,
extrapolated to the $B$ scale at fixed $\omega$. The three curves
and points at $q^2 = 0$ have been obtained by fitting the squares
using the three procedures described in the text: constant
(dashed line and octagon), pole (solid line and diamond) and
dipole (dotted line and cross).}
\label{fig:ukqcda1}\end{figure}
As an example, I show in fig.~\ref{fig:ukqcda1} the values of the
$A_1$ form factor from ref.~\cite{ukqcdbtorho}. These authors
evaluate the form factors for four different values of the mass of the
heavy quark (in the region of that of the charm quark), and then
extrapolate them, using the scaling laws in eq.~(\ref{eq:scaling}), to
the $b$-quark. The squares in fig.~\ref{fig:ukqcda1} represent the
extrapolated values, and as expected they are clustered at large
values of $q^2$. In order to estimate them over the full kinematical
range some assumption about the $q^2$ behaviour is required.
Fig.~\ref{fig:ukqcda1} also contains three such extrapolations in
$q^2$, performed assuming that:
\begin{itemize}
\item[i)] $A_1$ is independent of $q^2$ (dashed line). The
extrapolated value of $A_1(0)$ is denoted by an octagon, and the
$\chi^2/$dof is poor for this fit.
\item[ii)] The behaviour of $A_1(q^2)$ satisfies pole dominance, i.e.
that $A_1$ is given by
\begin{equation}
A_1(q^2) = \frac{A_1(0)}{(1 - q^2/M_n^2)^n}\ ,
\label{eq:multipole}\end{equation}
with $n=1$ (solid line). $A_1(0)$ and $M_1$ are parameters of the fit,
but the value of $M_1$ is in the range expected for the $1^+\ b\bar u$
resonance. The extrapolated value of $A_1(0)$ is denoted by the
diamond.
\item[iii)] The behaviour of $A_1(q^2)$ takes the dipole form
(\ref{eq:multipole}) with $n=2$ (dotted line). This is almost
indistinguishable from the pole fit. The extrapolated value of
$A_1(0)$ is now denoted by a cross.
\end{itemize}
The $\chi^2/$dof for the pole and dipole fits are both very good.
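A fit of the kind described in items i)--iii) can be sketched as follows. The data below are synthetic stand-ins for the lattice points (not the UKQCD values), generated from a pole shape with illustrative parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def multipole(q2, A0, M, n):
    # multipole ansatz of eq. (multipole): A1(0) / (1 - q2/M^2)^n
    return A0 / (1.0 - q2 / M**2)**n

q2 = np.linspace(12.0, 19.0, 8)            # large-q2 region, GeV^2
data = multipole(q2, 0.27, 5.7, 1)         # synthetic "lattice" points (pole shape)
data *= 1.0 + 0.01 * np.sin(5.0 * q2)      # small artificial scatter

pole   = lambda q2, A0, M: multipole(q2, A0, M, 1)
dipole = lambda q2, A0, M: multipole(q2, A0, M, 2)
(A0_p, M_p), _ = curve_fit(pole,   q2, data, p0=[0.3, 6.0])
(A0_d, M_d), _ = curve_fit(dipole, q2, data, p0=[0.3, 8.0])
print(A0_p, A0_d)   # the two forms extrapolate to different values of A1(0)
```

Because the data sit at large $q^2$, both forms describe them equally well while giving noticeably different intercepts at $q^2=0$, which is exactly the ambiguity discussed in the text.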
The UKQCD collaboration~\cite{ukqcdbtorho} comment that for $B\to\rho$
decays in particular, the fact that the lattice results are obtained
at large values of $q^2$ is not a serious handicap to the extraction
of the $V_{ub}$ matrix element. Indeed they advocate using the
experimental data at large values of $q^2$ (as this becomes available
during the next few years) to extract $V_{ub}$. To get some idea of the
precision that might be reached they parametrize the distribution by:
\begin{equation}
\frac{d\Gamma(\bar B^0\to\rho^+l^-\bar\nu)}{dq^2}
= 10^{-12}\,\frac{G_F^2|V_{ub}|^2}{192\pi^3M_B^3}\,
q^2 \, \lambda^{\frac{1}{2}}(q^2)
\, a^2\left( 1 + b(q^2 - q^2_{max})\right)\ ,
\label{eq:distr2}\end{equation}
where $a$ and $b$ are parameters to be determined from lattice computations,
and the phase-space factor $\lambda$ is given by $\lambda(q^2)
= (M_B^2+M_\rho^2 - q^2)^2 - 4 M_B^2M_\rho^2$. Already from their current
simulation the UKQCD collaboration are able to obtain $a^2$ with good
precision~\cite{ukqcdbtorho}
\begin{equation}
a^2 = 21\pm 3\ \mathrm{GeV}^2\ .
\label{eq:asq}\end{equation}
Although $b$ is obtained with less precision,
\begin{equation}
b = (-8 \er{4}{6}\,)\times 10^{-2}\ \mathrm{GeV}^{-2}\, ,
\label{eq:bsq}\end{equation}
the fits are less sensitive to this parameter at large $q^2$. The
prediction for the distribution based on these numbers is presented in
fig.~\ref{fig:vub}, and the UKQCD collaboration estimate that they
will be able to determine $V_{ub}$ with a precision of about 10\% or
better.
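Evaluating the parametrization is then straightforward. In the sketch below the meson masses, $G_F$ and $V_{ub}$ are assumed illustrative inputs, while $a^2$ and $b$ are the central values quoted above:

```python
import numpy as np

G_F  = 1.166e-5             # GeV^-2, Fermi constant
M_B, M_rho = 5.279, 0.770   # GeV; approximate meson masses (assumed values)
a2, b = 21.0, -0.08         # central values of the lattice parameters above
Vub  = 3.5e-3               # illustrative value only

def lam(q2):
    # phase-space factor lambda(q2) defined after eq. (distr2)
    return (M_B**2 + M_rho**2 - q2)**2 - 4.0 * M_B**2 * M_rho**2

q2max = (M_B - M_rho)**2    # zero-recoil endpoint, where lam vanishes

def dGamma_dq2(q2):
    # eq. (distr2), including its overall 1e-12 factor as printed
    return (1e-12 * G_F**2 * Vub**2 / (192.0 * np.pi**3 * M_B**3)
            * q2 * np.sqrt(lam(q2)) * a2 * (1.0 + b * (q2 - q2max)))

print(q2max, dGamma_dq2(15.0))   # rate at an intermediate q2 in the lattice region
```

Since $\lambda(q^2_{max})=0$, the distribution vanishes at the zero-recoil endpoint, as required by phase space.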
\begin{figure}
\begin{picture}(120,200)
\put(50,0){\ewxy{vub.ps}{110mm}}
\end{picture}
\caption{Differential decay rate as a function of $q^2$ for the semileptonic
decay $\bar B^0\to\rho^+l^-\bar\nu_l$. Squares are measured lattice
data, solid curve is fit from eq.~(\protect\ref{eq:distr2}) with
parameters given in eqs.~(\protect\ref{eq:asq})
and (\protect\ref{eq:bsq}). The vertical dotted line marks the charm
threshold.}
\label{fig:vub}\end{figure}
Although, in this case, the difficulty of extrapolating lattice
results from large values of $q^2$ to smaller ones may not have
significant implications for extracting physical information, this is
not always the case. Already for $B\to\pi$ decays, using results at
large values of $q^2$ restricts the precision with which $V_{ub}$ can
be extracted. This problem is even more severe for the
penguin-mediated rare decay $B\to K^*\gamma$, where the physical
process occurs at $q^2=0$. Much effort is being devoted to this
extrapolation, trying to include the maximum number of constraints
from heavy quark symmetry (as discussed above) and
elsewhere~\cite{extrapolations}. A simple example of such a constraint
for $B\to\pi$ semileptonic decays is that at $q^2=0$, the two form
factors $f^+$ and $f^0$ must be equal. Similar constraints exist for
other processes.
An interesting approach to the problem of the extrapolation to low
values of $q^2$ has been suggested by Lellouch~\cite{lpl}. By
combining lattice results at large values of $q^2$ with kinematical
constraints and general properties of field theory, such as unitarity,
analyticity and crossing, he is able to tighten the bounds on form
factors at all values of $q^2$. This technique can, in principle, be
used with other approaches, such as sum rules, quark models, or even
in direct comparisons with experimental data, to check for
compatibility with QCD and to extend the range of results.
\begin{figure}
\begin{center}
\begin{picture}(180,100)(20,10)
\SetWidth{2}\ArrowLine(10,41)(43,41)
\Text(15,35)[tl]{${\mathbf B}$}
\SetWidth{0.5}
\Oval(100,41)(20,50)(0)
\SetWidth{2}\ArrowLine(157,41)(190,41)
\Text(180,35)[tl]{${\mathbf \rho}$}
\SetWidth{0.5}
\Photon(100,61)(140,101){3}{5}
\Line(140,101)(150,108)\Line(140,101)(150,94)
\Text(158,101)[l]{leptons}
\Text(113,89)[r]{${\mathbf W}$}
\GCirc(50,41){7}{0.8}\GCirc(150,41){7}{0.8}
\Text(75,48)[b]{$b$}\Text(122,48)[b]{$u$}
\Text(100,16)[t]{$\bar q$}
\ArrowLine(101,21)(99,21)
\ArrowLine(70,57)(72,58)\ArrowLine(128,57.5)(130,56.5)
\ZigZag(92,60.5)(92,22){2}{6}
\Text(98,41)[l]{$g$}
\end{picture}
\caption{Representation of a contribution to the semileptonic
$B\to\rho$ decay.}
\label{fig:bbkinematics}\end{center}
\end{figure}
Ball and Braun have recently re-examined $B\to\rho$ decays using
light-cone sum rules~\cite{patricia}, extending the earlier analysis
of ref.~\cite{abs}. Consider, for example, the graph of
fig.~\ref{fig:bbkinematics}, which represents a contribution to the
decay amplitude. For large heavy-quark masses and small $q^2$ there
are two competing contributions of the same order (e.g.
$O(m_Q^{-3/2})$ for the form factor $A_1$). The first one comes from
the region of phase space in which the momentum of the gluon ($g$) is
of the order of $\sqrt{m_b\Lambda_{QCD}}$, so that this contribution
corresponds to small transverse separations and can be treated in
perturbation theory (the non-perturbative effects are contained in the
wave functions at the origin, i.e. in the decay constants). This is
similar to the treatment of hard exclusive processes, such as the form
factors of the pion and the proton at large momentum transfers.
However, there is a second contribution in which the $\rho$-meson is
produced in a very asymmetric configuration with most of the momentum
carried by one of the quarks. In this case there are no hard
propagators. For most other hard exclusive processes the ``end-point''
contribution is suppressed by a power of the large momentum transfer.
Although, in principle, for $m_Q$ very
large, the end-point is suppressed by Sudakov factors~\cite{asy}, this
suppression is not significant for the $b$-quark. The end-point
contribution has to be included and treated non-perturbatively,
since it comes from the region of large transverse separations.
This is the motivation for introducing light-cone sum
rules~\cite{abs}, based on an expansion of operators of increasing
twist (rather than dimension). The non-perturbative effects are
contained in the light-cone wave function of the $\rho$-meson, and the
leading twist contribution to this wave function was recently
re-examined in ref.~\cite{ballbraun}.
An interesting consequence of the analysis of the previous paragraph
is a set of scaling laws for the behaviour of the form factors with
the mass of the heavy quark at fixed (low) $q^2$, rather than at fixed
$\omega$ as in eq.~(\ref{eq:scaling}). An example of fixed $q^2$
scaling laws is:
\begin{equation}
A_1(0)\,\Theta\,M_P^{3/2} = \mbox{const} (1 + \gamma/M_P + \delta/M_P^2
\ + \cdots)\ ,
\label{eq:scalingm}\end{equation}
where $M_P$ is the mass of the heavy pseudoscalar meson. The factor
$\Theta$ contains the perturbative logarithmic corrections.
\begin{figure}
\begin{picture}(250,250)
\put(-20,20){\ewxy{fig2.eps}{200mm}}
\end{picture}
\caption{Results for the form factors $A_1(q^2)$, $A_2(q^2)$ and
$V(q^2)$ for $B\to\rho$ semileptonic decays as a function of
$t=q^2$~\protect\cite{patricia}. The curves correspond to the
results obtained with light-cone sum rules by Ball and
Braun~\protect\cite{patricia}, and the points to the results from
the UKQCD collaboration~\protect\cite{ukqcdbtorho}.}
\label{fig:pball}\end{figure}
Some of the results of Ball and Braun are presented in
fig.~\ref{fig:pball}, where the form factors $A_1, A_2$ and $V$ are
plotted as functions of $q^2$. The results are in remarkable agreement
with those from the UKQCD collaboration, in the large $q^2$ region
where they can be compared.
\section{Conclusions}
\label{sec:concs}
The principal difficulty in deducing weak interaction properties from
experimental measurements of $B$-decays lies in controlling the strong
interaction effects. These are being studied using non-perturbative
methods such as lattice simulations or QCD sum rules. Considerable
effort and progress is being made in reducing the systematic
uncertainties present in lattice computations.
Although both the theoretical and experimental errors on the value of
$f_{D_s}$ are still sizeable, it is nevertheless very pleasing that
they are in agreement. It is also satisfying that the values of
$V_{cb}$ extracted from exclusive and inclusive measurements are in
good agreement. The theoretical uncertainties for the two processes
are different, and the agreement is evidence that they are not
significantly underestimated.
It has been argued that $B\to\rho$ decays at large $q^2$, where the
evaluation of the relevant form factors using lattice simulations is
reliable, will soon provide a determination of $V_{ub}$ at the 10\%
level or better~\cite{ukqcdbtorho}. It will also be interesting to
observe developments of the light-cone approach to these decays.
Many lattice computations of $f_B$ have been performed using static
heavy quarks ($m_Q=\infty$), and serve as a very valuable check of the
consistency of the extrapolation of the results obtained with finite
heavy-quark masses. Such checks have not been performed yet for many
other quantities in $B$-physics; this is an important omission, which
should be put right.
This talk has been about the decays of $B$-mesons. Detailed
experimental and theoretical studies are also beginning for the
$\Lambda_b$-baryon. For example, the first lattice results for the
Isgur--Wise function of the $\Lambda_b$ have been presented in
ref.~\cite{lambdab}.
\subsection*{Acknowledgements}
It is a pleasure to thank Nando Ferroni and the other organizers of
Beauty '96 for the opportunity to participate in such an interesting
and stimulating meeting. I gratefully acknowledge many helpful and
instructive discussions with Patricia Ball, Volodya Braun, Jonathan
Flynn, Laurent Lellouch, Guido Martinelli, Matthias Neubert, Juan
Nieves, Nicoletta Stella and Kolya Uraltsev. I also acknowledge the
Particle Physics and Astronomy Research Council for their support
through the award of a Senior Fellowship.
\section{Introduction}
An important problem of perturbation theory is the calculation of
physically meaningful numbers from expansions which are
usually divergent asymptotic series with coefficients growing $\propto
k!$ in high orders $k$. For small expansion parameters $g$ a direct
evaluation of the series truncated at a finite order $k \approx 1/g$ can
yield a reasonably good approximation, but for larger couplings such
series become completely useless and require some kind of resummation.
Well-known examples are field theoretical $\epsilon$-expansions for
the computation of critical exponents of phase transitions, but also the
standard Stark and Zeeman effects in atomic physics lead to divergent
perturbation expansions.
The paradigm for studying this problem is the quantum
mechanical anharmonic oscillator with a potential
$
V(x) = \frac{1}{2} \omega^2 x^2 + \frac{1}{4} g x^4 \hspace{0.2cm}
( \omega^2,g>0).
$
The Rayleigh-Schr\"{o}dinger perturbation theory yields for the
ground-state energy a power-series expansion
\begin{equation}
E^{(0)}(g) = \omega \sum_{k=0}^{\infty} E^{(0)}_{k} \left(
\frac{g/4}{\omega^3} \right)^k,
\label{eq:E_exp}
\end{equation}
where the $E^{(0)}_k$ are rational numbers $1/2$, $3/4$,
$-21/8$, $333/16$, $-30885/128$, \dots ~, which can easily be
obtained to very high orders from the recursion relations of
Bender and Wu \cite{bewu}. Their large-order behavior is analytically
known to exhibit the typical factorial growth,
\begin{equation}
E^{(0)}_k = -(1/\pi) (6/\pi)^{1/2} (-3)^k k^{-1/2} k! (1 + {\cal O}(1/k)).
\label{eq:Ek_asy}
\end{equation}
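Both statements are easy to reproduce numerically. The sketch below generates the Rayleigh-Schr\"odinger coefficients by textbook matrix perturbation theory in a truncated oscillator basis (an independent cross-check, not the Bender-Wu recursion itself; basis size and order are illustrative choices):

```python
import math
import numpy as np

N, K = 120, 21                      # truncated basis size and highest order (assumptions)
i = np.arange(N - 1)
X = np.zeros((N, N))
X[i, i + 1] = X[i + 1, i] = np.sqrt((i + 1) / 2.0)   # position operator x, omega = 1
V = np.linalg.matrix_power(X, 4)                     # perturbation x^4 (coupling g/4)

E = [0.5]                           # E^{(0)}_0 = 1/2
psi = [np.eye(N)[0]]                # unperturbed ground state |0>
lev = np.arange(N) + 0.5            # harmonic levels n + 1/2
for k in range(1, K):
    E.append(V[0] @ psi[k - 1])     # E_k = <0| x^4 |psi_{k-1}>
    rhs = -V @ psi[k - 1] + sum(E[j] * psi[k - j] for j in range(1, k + 1))
    c = np.zeros(N)
    c[1:] = rhs[1:] / (lev[1:] - lev[0])
    psi.append(c)

print(E[1:5])   # matches 3/4, -21/8, 333/16, -30885/128

k = 20          # compare E_20 with the asymptotic formula above
asy = -(1/math.pi) * math.sqrt(6/math.pi) * (-3.0)**k / math.sqrt(k) * math.factorial(k)
ratio = E[k] / asy
print(ratio)    # approaches 1 up to O(1/k) corrections
```

The low orders reproduce the rational values quoted above, while the high orders approach the asymptotic factorial growth, with the expected $O(1/k)$ deviation still visible at $k=20$.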
Standard resummation methods are Pad\'e or Borel techniques whose accuracy,
however, decreases rapidly in the strong-coupling limit. In this note
we summarize recent work on a new approach based on variational perturbation
theory \cite{syko,PI}. Our results demonstrate that by this means the
divergent series expansion (\ref{eq:E_exp})
can be converted into a sequence of exponentially fast converging
approximations, uniformly in the coupling strength $g$
\cite{jk1,jk2,jk3,conv}. This allows us
to take all expressions directly to the strong-coupling limit, yielding
a simple scheme for calculating the coefficients $\alpha_i$ of the convergent
strong-coupling series expansion,
$E^{(0)}(g)= (g/4)^{1/3}\left[ \alpha _0
+ \alpha _1 (4 \omega^3/g)^{2/3}
+ \alpha _2 (4 \omega^3/g)^{4/3}
+\dots\right]$.
\section{Variational Perturbation Theory}
The origin of variational perturbation theory can be traced back to
a variational principle for the evaluation of quantum partition
functions in the path-integral formulation \cite{PI,variational}.
While in many applications
the accuracy was found to be excellent over a wide range of temperatures,
slight deviations from exact or simulation results at very low temperatures
motivated a systematic study of higher-order corrections \cite{syko,PI}.
In the zero-temperature limit the calculations simplify and lead to a
resummation scheme for the energy eigenvalues which can be summarized as
follows. First, the harmonic term of the potential is
split into a new harmonic term with a trial frequency $\Omega$ and a remainder,
$\omega^2 x^2 = \Omega^2 x^2 + \left(\omega^2-\Omega^2\right)x^2$, and
the potential is rewritten as
$
V(x) = \frac{1}{2} \Omega^2 x^2 + \frac{1}{4} g (-2 \sigma x^2/ \Omega + x^4),
$
where $\sigma = \Omega ( \Omega^2 - \omega^2)/g$.
One then performs a perturbation expansion in powers of $\hat{g} \equiv
g/\Omega^3$ at a fixed $\sigma$,
\begin{equation}
\hat{E}_{N}^{(0)}(\hat g,\sigma) = \sum_{k=0}^{N} \varepsilon^{(0)}_{k}(\sigma)
\left( \hat g/4 \right)^k,
\label{eq:E_reexp}
\end{equation}
where $\hat E_N^{(0)} \equiv E_N^{(0)}/\Omega$ is the dimensionless reduced
energy. The new expansion coefficients
$\varepsilon^{(0)}_{k}$ are easily found by inserting
$\omega = \sqrt{\Omega^2 -g \sigma/\Omega}
= \Omega \sqrt{1 - {\hat g}\sigma}$
in (\ref{eq:E_exp}) and reexpanding in powers of $\hat g$,
\begin{equation}
\varepsilon^{(0)}_{k}( \sigma ) = \sum_{j=0}^{k} E^{(0)}_{j}
\left( \begin{array}{c} (1 - 3 j)/2 \\ k-j \end{array} \right)
(-4 \sigma )^{k-j}.
\label{eq:eps_k}
\end{equation}
The truncated power series
$W_{N}(g,\Omega) \equiv \Omega \hat{E}^{(0)}_{N} \left(\hat{g},\sigma\right)$
is certainly independent of $\Omega$ in the limit $N \rightarrow \infty$.
At any finite order, however, it {\em does} depend on $\Omega$,
the approximation having its fastest speed of convergence where it depends
least on $\Omega$, i.e., at points where $\partial W_N/\partial \Omega = 0$.
If we denote the order-dependent optimal value of $\Omega$ by $\Omega_{N}$,
the quantity $W_{N}(g,\Omega_{N})$ is the new approximation to $E^{(0)}(g)$.
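The lowest order of this scheme can be sketched in a few lines, using only the five coefficients $E^{(0)}_j$ quoted in the Introduction (the grid search and all numerical choices below are illustrative, not taken from the references):

```python
import numpy as np

E = [0.5, 0.75, -21/8, 333/16, -30885/128]   # E^{(0)}_j from the Introduction

def gbinom(x, m):
    # generalized binomial coefficient x(x-1)...(x-m+1)/m!
    out = 1.0
    for t in range(m):
        out *= (x - t) / (t + 1)
    return out

def eps(k, sigma):
    # reexpansion coefficients, eq. (eps_k)
    return sum(E[j] * gbinom((1 - 3*j) / 2, k - j) * (-4.0*sigma)**(k - j)
               for j in range(k + 1))

def W(order, g, Omega, omega=1.0):
    # truncated variational energy W_N(g, Omega)
    gh = g / Omega**3                            # g-hat
    sigma = Omega * (Omega**2 - omega**2) / g
    return Omega * sum(eps(k, sigma) * (gh/4.0)**k for k in range(order + 1))

g = 1.0
Omegas = np.linspace(0.5, 3.0, 2501)
W1 = np.array([W(1, g, Om) for Om in Omegas])
print(Omegas[W1.argmin()], W1.min())   # optimal Omega_1 and first-order estimate
```

At first order the optimum is a genuine minimum, near $\Omega_1\approx 1.43$ for $g=1$; at higher orders one instead solves the stationarity condition $\partial W_N/\partial\Omega=0$ as described next.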
At first sight the extremization condition $\partial W_N/\partial \Omega = 0$
seems to require the determination of the roots of a polynomial in
$\Omega$ of degree $3N$, separately for each value of $g$. In Ref.~\cite{jk1}
we observed, however, that this task can be greatly simplified. While
$W_N$ does depend on both $g$ and $\Omega$ separately, we could prove
that the derivative can be written as $\partial W_N/\partial \Omega=
(\hat g/4)^N P_N(\sigma)$, where
$P_N(\sigma) = -2 d \varepsilon^{(0)}_{N+1}(\sigma)/d \sigma$ is a
polynomial of degree $N$ in $\sigma$. The optimal values
of $\sigma$ were found to be well fitted by
\begin{equation}
\sigma_N = cN \left( 1 + 6.85/N^{2/3}\right),
\label{eq:sigma_opt}
\end{equation}
with $c=0.186\,047\,272\dots$
determined analytically (cf.\ Sec.~3).
This observation simplifies the calculations
considerably and shows that the optimal solutions $\Omega_N$ depend
only trivially on $g$ through $\sigma_N = \Omega_N(\Omega_N^2 -
\omega^2)/g$. Since the explicit knowledge of $\Omega_N$ is only needed
in the final step when going back from ${\hat E}_N^{(0)}$ to $E_N^{(0)}$,
this suggests that the variational resummation scheme can be
taken directly to the strong-coupling limit.
To this end we introduce the reduced frequency
$\hat{\omega} = \omega/\Omega$, write the
approximation as
$W_N = \left( g/\hat{g} \right)^{1/3} w_N(\hat{g},\hat{\omega}^2)$,
and expand the function $w_N(\hat g,\hat \omega^2)$
in powers of $\hat{\omega}^2 = (\omega^3/g)^{2/3} \hat{g}^{2/3}$.
This gives \cite{jk2}
\begin{equation}
W_N = (g/4)^{1/3} \left[ \alpha_0
+ \alpha_1 \left(4\omega^3/g\right)^{2/3}
+ \alpha_2 \left(4\omega^3/g\right)^{4/3} +\dots \right],
\label{eq:W_N}
\end{equation}
with the coefficients,
\begin{equation}
\alpha_n = (\hat{g}/4)^{(2n-1)/3}
\sum_{k=0}^N (-1)^{k+n} \sum_{j=0}^{k-n} E_{j}^{(0)}
\left( \begin{array}{c} (1 - 3 j)/2 \\ k-j \end{array} \right)
\left( \begin{array}{c} k-j \\ n \end{array} \right)
(-\hat{g}/4)^j.
\label{eq:alpha_n}
\end{equation}
If this is evaluated at ${\hat g} = 1/\sigma_N$ with $\sigma_N$ given in
(\ref{eq:sigma_opt}), we obtain the exponentially fast approach to the
exact limit as shown in Fig.~\ref{fig:contour} for $\alpha_0$. The
exponential falloff is modulated by oscillations. Our result,
$\alpha_0 = 0.667\,986\,259\,155\,777\,108\,270\,96$,
agrees to all 23 digits with the most accurate 62-digit value in the
literature. The computation of the higher-order coefficients
$\alpha_n$ for $n>0$ proceeds similarly and the results up
to $n=22$ are given in
Table~1 of Ref.~\cite{jk2}.
\section{Convergence Behavior}
To explain the convergence behavior \cite{jk3,conv} we recall that
the ground-state energy $E^{(0)}(g)$ satisfies a subtracted dispersion
relation which leads to an integral representation of the original
perturbation coefficients,
\begin{equation}
E^{(0)}_k=\frac{4^k}{2{\pi}i}\int_0^{-\infty}
\frac{d g}{g^{k+1}}
\mbox{disc} E^{(0)}(g),
\label{eq:disc}
\end{equation}
where $ \mbox{disc}\,E^{(0)}(g) = 2i{\rm Im\,}E^{(0)}(g-i \eta )$
denotes the discontinuity across the left-hand cut
in the complex $g$-plane.
For large $k$, only its $g \longrightarrow 0^-$ behavior
is relevant and a semiclassical calculation
yields
$\mbox{disc}\,E^{(0)}(g) \! \approx \! -2i \omega (6/\pi)^{1/2}
(-4\omega^3\!/3g)^{1/2} \exp(4\omega^3\!/3g)$, which in turn implies the
large-order behavior (\ref{eq:Ek_asy}) of $E_k^{(0)}$.
The reexpanded series (\ref{eq:E_reexp}) is obtained
from (\ref{eq:E_exp}) by the replacement of
$\omega \longrightarrow \Omega \sqrt{1- \sigma \hat g}$.
In terms of the coupling constant, the above replacement amounts to
$\bar g \equiv g/ \omega ^3 \longrightarrow \hat g/(1- \sigma \hat
g)^{3/2}$. Using this mapping it is straightforward to show~\cite{jk3}
\begin{figure}[t]
\vskip 6.0 truecm
\special{psfile="slope1.ps"
angle=0 hscale=90 vscale=90 hoffset=-68 voffset=-552}
\special{psfile="contour_new.ps"
angle=0 hscale=90 vscale=90 hoffset=90 voffset=-540}
\caption[a]{
L.h.s.: Exponentially fast convergence of the $N$th approximants for
$\alpha_0$ to the exact value. The dots show $\Delta_N =
|(\alpha_0)_N-\alpha_0|$.
R.h.s.: Cuts in the complex $\hat g$-plane. The cuts inside the shaded circle
happen to be absent due to the convergence of the strong-coupling
expansion for $g > g_{\rm s}$.
}
\label{fig:contour}
\end{figure}
that ${\hat E}^{(0)} \equiv E^{(0)}/\Omega$ satisfies a dispersion
relation in the complex $\hat g$-plane.
If $C$ denotes the cuts in this plane
and $ \mbox{disc}_C {\hat E}^{(0)}(\hat g)$
is the discontinuity across these cuts,
the dispersion integral for the expansion coefficients
$\varepsilon_k^{(0)}$ reads
\begin{equation}
\varepsilon^{(0)} _k=\frac{4^k}{2{\pi}i}\int_C
\frac{d\hat g}{{\hat g^{k+1}}}
\mbox{disc}_C\hat E^{(0)}(\hat g).
\label{eq:eps_disc}
\end{equation}
In the complex $\hat g$-plane, the cuts $C$ run along the contours
$C_1,C_{\bar 1},C_2,C_{\bar 2}$, and $C_3$, as shown on the r.h.s. of
Fig.~\ref{fig:contour}. The first four cuts are the images of the
left-hand cut in the complex $g$-plane, and the curve $C_3$ is due to the
square root of $1- \sigma \hat g$ in the mapping from $\bar g$ to $\hat g$.
Let us now discuss the contributions of the various cuts to the $k$th
term $S_k$. For the cut $C_1$ and the empirically observed
optimal solutions
$\sigma_N = cN(1+b/N^{2/3})$, a saddle-point approximation
shows \cite{jk3}
that this term gives a convergent contribution,
$S_N(C_1)\propto e^{ -[ -b\log(-\gamma)+(cg)^{-2/3}] N^{1/3} }$,
only if one chooses $c=0.186\,047\,272\dots$ and
\mbox{$\gamma = -0.242\,964\,029\dots$}.
Inserting the fitted value of $b=6.85$ this yields an exponent
of
$-b\log(-\gamma) = 9.7$,
in rough agreement with the convergence seen in Fig.~\ref{fig:contour}.
If this were the only contribution, the convergence behavior could be changed
at will by varying the parameter $b$. For $b < 6.85$, a slower convergence
was indeed observed. The convergence cannot be improved, however, by
choosing $b > 6.85$, since the optimal convergence is limited by the
contributions of the other cuts.
The cut $C_{\bar 1}$ is still harmless; it contributes a term
$S_N(C_{\bar 1})$ of the negligible order $e^{-N\log N}$.
The cuts $C_{2,\bar 2,3}$, however, deserve careful consideration.
If they really started at $\hat g=1/ \sigma $, the
leading behavior would be
$\varepsilon _k^{(0)}(C_{2,\bar 2,3}) \propto \sigma ^{k}$, and therefore
$S_N(C_{2,\bar 2,3}) \propto ( \sigma \hat g)^N$,
which
would contradict the empirically observed
convergence in the strong-coupling limit. The important point is
that
the cuts in Fig.~\ref{fig:contour}
do not really reach the point
$\sigma \hat g=1$.
There exists a small circle of radius $ \Delta \hat g>0$
in which $\hat E^{(0)}(\hat g)$ has no singularities at all,
a consequence of the fact that the strong-coupling expansion
(\ref{eq:W_N}) converges for $g>g_{\rm s}$.
The complex conjugate pair of singularities
gives a contribution,
\begin{equation}
S_N(C_{2,\bar 2,3})\approx
e^{-N^{1/3}a\cos \theta}\cos(N^{1/3}a\sin \theta),
\label{eq:S_N_2}
\end{equation}
with $a=1/(|\bar{g}_{\rm s}|c)^{2/3}$. By analyzing the convergence
behavior of the strong-coupling series we find
$|\bar g_{\rm s}| \! \approx \! 0.160$
and $\theta \! \approx \! -0.467$, which implies for the envelope an asymptotic
falloff of $e^{-9.23N^{1/3}}$, and furthermore also explains the
oscillations in the data~\cite{jk3}.
\section{Conclusions}
To summarize, we have shown how variational perturbation theory
can be used to convert the divergent weak-coupling perturbation series
of the anharmonic oscillator into a sequence of converging
approximations for the strong-coupling expansion.
By making use of dispersion relations
and identifying the relevant singularities
we are able to explain
the exponentially fast convergence with superimposed oscillations
in the strong-coupling limit.\\[-0.20cm]
W.J. thanks the Deutsche Forschungsgemeinschaft for a Heisenberg
fellowship.
\chapter{Relativistic Fluid Dynamics}
In this chapter I review the basic theory of fluid kinematics
and dynamics (without dissipation) in relativistic spacetime. The
classic paper in this field is Ellis' 1971 review \cite{e}.
That paper is
at a more advanced level than these lectures.
For a basic
introduction to tensors, relativity and fluids,
see for example \cite{d}.
I use units in which the speed of light in vacuum, Einstein's
gravitational constant and Boltzmann's constant are all one:
$$
c=8\pi G=k=1
$$
I use $A\doteq B$ to denote equality of $A$ and $B$ in an
instantaneous orthonormal frame at a point (defined below).
\section{Brief Review of Relativity}
The observed universe is a 4 dimensional spacetime. Physical laws
should be expressible as equations in spacetime that are independent
of the observer. Together with experimental and observational
evidence, and further principles, this leads to Einstein's
relativity theory: special
relativity in the case where the gravitational field may be neglected,
and general relativity when gravity is incorporated.
Local coordinates, which are typically based on observers,
are usually chosen
so that $x^0$ is a time parameter and $x^i$ are space coordinates.
A change of coordinates (or of observers) is
\begin{equation}
x^\alpha=(x^0,x^i)=(t,\vec{x})\quad\rightarrow\quad
x^{\alpha'}=(x^{0'},x^{i'})=(t',\vec{x}\,')
\label{1}\end{equation}
Physical laws should then be invariant under such transformations.
This means that these laws are expressible in terms of tensor fields
and
tensor--derivatives. Tensors have different types $(r,s)$, but they
all transform linearly under (\ref{1}). The simplest example is a
scalar, which is invariant. Using the chain rule, the tranformation
of the coordinate differentials is seen to be linear:
$$
dx^{\alpha'}=\sum_\alpha{\partial x^{\alpha'}\over\partial x^\alpha}dx^\alpha\equiv {\partial x^{\alpha'}\over
\partial x^\alpha}dx^\alpha
$$
Extending this to partial derivatives of scalars and generalising,
we are led to the transformation properties of tensors in general:
\begin{eqnarray}
(0,0)~\mbox{scalar} & & f~\rightarrow~ f \nonumber\\
(1,0)~\mbox{vector} & & u^\alpha~\rightarrow~u^{\alpha'}={\partial x^{\alpha'}\over
\partial x^\alpha}u^{\alpha}
\nonumber\\
(0,1)~\mbox{covector} & & k_\alpha~\rightarrow~k_{\alpha'}={\partial x^\alpha \over\partial
x^{\alpha'}}k_\alpha \nonumber\\
(1,1)~\mbox{tensor} & & T^\alpha{}_\beta~\rightarrow~T^{\alpha'}{}_{\beta'}
={\partial x^{\alpha'}\over\partial x^\alpha}{\partial x^\beta\over\partial x^{\beta'}}T^\alpha{}_\beta
\nonumber\\
\cdots & & \cdots \nonumber\\
(r,s)~\mbox{tensor} & & J^{\alpha_1\cdots\alpha_r}{}{}{}_{\beta_1
\cdots\beta_s}
~\rightarrow\nonumber\\
{}&& J^{\alpha_1'\cdots\alpha_r'}{}{}{}_{\beta_1'\cdots\beta_s'}=
{\partial x^{\alpha_1'}\over\partial x^{\alpha_1}}\cdots{\partial x^{\beta_s}\over\partial x^{\beta_s'}}
J^{\alpha_1\cdots\alpha_r}{}{}{}_{\beta_1\cdots\beta_s} \label{2}
\end{eqnarray}
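The transformation law (\ref{2}) can be checked mechanically with a computer algebra system. The following sketch (Python with sympy; the two--dimensional Cartesian--to--polar change and the radial field are my illustrative choices, not from the text) builds the Jacobian of the coordinate change and applies it to a vector field:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# coordinate change (x, y) -> (r, theta)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Jacobian dx^{a'}/dx^a -- the linear transformation matrix appearing in (2)
J = sp.Matrix([[sp.diff(r, x), sp.diff(r, y)],
               [sp.diff(theta, x), sp.diff(theta, y)]])

# radial vector field u^a = (x, y); its polar components should be (r, 0)
u = sp.Matrix([x, y])
u_prime = sp.simplify(J * u)   # simplifies to (sqrt(x**2 + y**2), 0): purely radial
```

As expected, the angular component vanishes identically, confirming that the components transform linearly through the Jacobian.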
It follows that if a tensor vanishes in one coordinate frame, it
vanishes in all frames. Consequently, if two tensors are equal in
one frame, they are equal in all frames.
Fields and equations that transform according to (\ref{2}) are
called tensorial or covariant. Restricted covariance arises when
the class of allowable coordinate systems is restricted. In special
relativity (flat spacetime), one can choose orthonormal coordinates
$x^\alpha$ which correspond to inertial observers,
and if $x^{\alpha'}$ is required to be also orthonormal, then
\begin{equation}
{\partial x^{\alpha'}\over\partial x^\alpha}=\Lambda^{\alpha'}{}_\alpha\quad\Leftrightarrow\quad
x^{\alpha'}=\Lambda^{\alpha'}{}_\alpha x^\alpha+C^{\alpha'}
\label{3}\end{equation}
where $\Lambda, C$ are constants and $\Lambda$ is a Lorentz matrix.
In other words, special relativity says that the laws of
physics (leaving aside gravity) are invariant under Lorentz
transformations that connect any inertial
observers in relative motion.
Under this restriction, the partial derivatives of tensors transform
according to (\ref{2}), i.e. they are Lorentz covariant. We use the
notation
\begin{equation}
J^{\alpha\cdots}{}{}_{\cdots\beta,\mu}\equiv \partial_\mu J^{\alpha\cdots}{}{}_
{\cdots\beta}\equiv{\partial\over\partial x^\mu}J^{\alpha\cdots}{}{}_{\cdots\beta}
\label{4}\end{equation}
for partial derivatives. Thus in special relativity, physical laws
are expressed in orthonormal coordinates as PDE's; for example
the Klein--Gordon equation for a massless scalar field is
\begin{equation}
\Box\Psi\equiv \eta^{\alpha\beta}\partial_\alpha\partial_\beta\Psi=0
\label{5}\end{equation}
where
\begin{equation}
\eta_{\alpha\beta}=\mbox{diag}\,(-1,1,1,1)=\eta^{\alpha\beta}
\label{6}\end{equation}
are the orthonormal components of the metric tensor.
The metric $g_{\alpha\beta}$
of any (in general curved) spacetime
determines the spacetime interval between events, the scalar
product of vectors, and the raising and lowering of indices on
general tensors:
\begin{eqnarray}
ds^2 &=& g_{\alpha\beta}dx^\alpha dx^\beta \label{7}\\
u\cdot v &=& g_{\alpha\beta}u^\alpha v^\beta=u^\alpha v_\alpha=u_\alpha v^\alpha \label{8}\\
J^\alpha{}_{\beta\mu} &=& g^{\alpha\nu}g_{\beta\sigma}J_\nu{}^\sigma{}_\mu~,~~
\mbox{etc.} \label{9}
\end{eqnarray}
where the inverse metric is defined by
$g^{\alpha\mu}g_{\mu\beta}=\delta^\alpha{}_\beta$\,.
The metric is a symmetric tensor. For any rank--2 tensor, we can
define its covariant symmetric and skew parts:
\begin{equation}
V_{(\alpha\beta)}={\textstyle{1\over2}}\left(V_{\alpha\beta}+V_{\beta\alpha}\right)\,,\quad
V_{[\alpha\beta]}={\textstyle{1\over2}}\left(V_{\alpha\beta}-V_{\beta\alpha}\right)
\label{9b}\end{equation}
so that $g_{\alpha\beta}=g_{(\alpha\beta)}$.
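The index operations (\ref{7})--(\ref{9b}) are easy to verify numerically in an orthonormal frame, where $g_{\alpha\beta}=\eta_{\alpha\beta}$. A minimal sketch (Python with numpy; the vectors and tensor are arbitrary illustrative choices):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # metric in an orthonormal frame, eq. (6)
eta_inv = np.linalg.inv(eta)             # inverse metric g^{ab}: g^{am} g_{mb} = delta^a_b

u = np.array([2.0, 1.0, 0.0, 0.0])       # arbitrary vector u^a
v = np.array([1.5, 0.5, 0.3, 0.0])

u_low = eta @ u                          # lowering: u_a = g_{ab} u^b
assert np.allclose(eta_inv @ u_low, u)   # raising undoes lowering

# the scalar product (8) in its equivalent forms
assert np.isclose(u @ eta @ v, u_low @ v)

# symmetric/skew split (9b) of an arbitrary rank-2 tensor reassembles it
V = np.arange(16.0).reshape(4, 4)
assert np.allclose(0.5*(V + V.T) + 0.5*(V - V.T), V)
```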
At any point (or event) $P$,
an observer can choose coordinates $x^\alpha$ that bring
$g_{\alpha\beta}(P)$ into orthonormal form. I will call such a
coordinate system an instantaneous orthonormal frame (IOF),
characterised by
\begin{equation}
g_{\alpha\beta}\doteq\eta_{\alpha\beta}\quad\Leftrightarrow\quad
g_{\alpha\beta}(P)\Big|_{\mbox{iof}}=\eta_{\alpha\beta}
\label{9a}\end{equation}
At each event along the
observer's worldline, the IOF is in general different.
In fact an IOF is orthonormal
in a neighbourhood of the original point $P$
if and only if the spacetime is locally flat.
In curved spacetime, the partial derivative (\ref{4}) is not
covariant (except when $J$ is a scalar).
The metric defines a connection that `corrects' for
the variations in the coordinate basis (equivalently, that
provides a rule for parallel transport of vectors):
\begin{equation}
\Gamma^\alpha{}_{\beta\sigma}={\textstyle{1\over2}}g^{\alpha\mu}\left(
g_{\mu\beta,\sigma}+g_{\sigma\mu,\beta}-g_{\beta\sigma,\mu}\right)=
\Gamma^\alpha{}_{(\beta\sigma)}
\label{10}\end{equation}
The connection, which is not a tensor since it corrects for
non--tensorial variations, defines a covariant derivative
\begin{eqnarray}
f_{;\alpha} &=& f_{,\alpha} \nonumber\\
u^\alpha{}_{;\beta} &=& u^\alpha{}_{,\beta}+\Gamma^\alpha{}_{\mu\beta}u^\mu
\nonumber\\
k_{\alpha;\beta} &=& k_{\alpha,\beta}-\Gamma^\mu{}_{\alpha\beta}k_\mu \nonumber\\
\cdots & & \cdots \nonumber\\
J^{\alpha\cdots}{}{}_{\cdots\beta;\sigma} &=& J^{\alpha\cdots}{}{}_{\cdots
\beta,\sigma}+\Gamma^\alpha{}_{\mu\sigma}J^{\mu\cdots}{}{}_{\cdots\beta}
+\cdots -\cdots -\Gamma^\mu{}_{\beta\sigma}J^{\alpha\cdots}{}{}
_{\cdots\mu}
\label{11}\end{eqnarray}
We also write $\nabla_\sigma J^{\alpha\cdots}{}{}_{\cdots\beta}$ for
the covariant derivative. One can always
find an IOF at any event $P$ such that the connection vanishes
at $P$:
\begin{equation}
\Gamma^\alpha{}_{\beta\gamma}\doteq 0\quad\Rightarrow\quad
J^{\alpha\cdots}{}{}_{\cdots\beta;\mu}\doteq J^{\alpha\cdots}{}{}_{\cdots\beta,\mu}
\label{11a}\end{equation}
From now on, any IOF will be assumed to have this property.
The connection also defines a covariant
measure of spacetime curvature -- the Riemann tensor:
\begin{equation}
R^\alpha{}_{\beta\mu\nu}=-\Gamma^\alpha{}_{\beta\mu,\nu}+\Gamma^\alpha{}
_{\beta\nu,\mu}+\Gamma^\alpha{}_{\sigma\mu} \Gamma^\sigma{}_{\beta\nu}
-\Gamma^\alpha{}_{\sigma\nu} \Gamma^\sigma{}_{\beta\mu}
\label{12}\end{equation}
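Definitions (\ref{10}), (\ref{12}) and (\ref{14}) can be checked by direct computation. The sketch below (Python with sympy; the unit 2--sphere is my choice of test metric, not from the text) implements the formulas and recovers the Ricci scalar $R=2$ expected for a unit sphere:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
X = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # unit 2-sphere metric
ginv = g.inv()
dim = 2

# Christoffel symbols, eq. (10)
def Gamma(a, b, s):
    return sum(sp.Rational(1, 2) * ginv[a, m] *
               (sp.diff(g[m, b], X[s]) + sp.diff(g[s, m], X[b]) - sp.diff(g[b, s], X[m]))
               for m in range(dim))

# Riemann tensor, eq. (12)
def Riem(a, b, m, n):
    expr = -sp.diff(Gamma(a, b, m), X[n]) + sp.diff(Gamma(a, b, n), X[m])
    expr += sum(Gamma(a, s, m) * Gamma(s, b, n) - Gamma(a, s, n) * Gamma(s, b, m)
                for s in range(dim))
    return sp.simplify(expr)

# Ricci tensor and Ricci scalar, eq. (14)
Ricci = sp.Matrix(dim, dim, lambda b, n: sum(Riem(m, b, m, n) for m in range(dim)))
R = sp.simplify(sum(ginv[b, n] * Ricci[b, n] for b in range(dim) for n in range(dim)))
print(R)   # -> 2
```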
Curvature is fundamentally reflected in the non--commutation
of covariant derivatives\footnote{except for scalars:
$f_{;[\alpha\beta]}=0$.}, as given by the Ricci identity
\begin{equation}
u_{\alpha;\beta\gamma}-u_{\alpha;\gamma\beta}=R^\mu{}_{\alpha\beta\gamma}u_\mu
\label{13}\end{equation}
and its generalisations for higher rank tensors.
The trace--free part of the Riemann tensor is the Weyl tensor
$C^\alpha{}_{\beta\mu\nu}$, which represents the `free' gravitational
field and describes gravity waves,
while the trace gives the Ricci tensor and Ricci scalar
\begin{equation}
R_{\alpha\beta}=R^\mu{}_{\alpha\mu\beta}=R_{\beta\alpha}\,,\quad R=R^\alpha{}_\alpha
\label{14}\end{equation}
which are determined by the mass--energy--momentum distribution via
Einstein's field equations
\begin{equation}
R_{\alpha\beta}-{\textstyle{1\over2}}Rg_{\alpha\beta}=T_{\alpha\beta}
\label{15}\end{equation}
where $T_{\alpha\beta}$ is the energy--momentum tensor, discussed
below. The Ricci tensor obeys the contracted Bianchi identity
\begin{equation}
\left(R^{\alpha\beta}-{\textstyle{1\over2}}Rg^{\alpha\beta}\right)_{;\beta}=0
\label{15a}\end{equation}
\section{Fluid Kinematics}
Consider the motion of a particle with rest
mass $m$. An observer records
the particle's history
-- its worldline -- as $x^\alpha=(t,x^i(t))$. We need a
covariant (observer--independent) description of the worldline
and velocity of the particle. If $m>0$, then along the
worldline $ds^2<0$ (the particle moves slower than light). If
$\tau$ is the time recorded by a clock comoving with the
particle, the worldline is
given by $x^\alpha=x^\alpha(\tau)$, independently of any
observer. The covariant comoving time is called the proper time.
In an IOF
$ds^2\doteq-d\tau^2$. Since both sides of this equation are tensors
(scalars), the equation holds in any frame, and at all
points along the worldline, i.e. $ds^2=-d\tau^2$.
The kinematics of the particle are covariantly described by
the 4--velocity
\begin{equation}
u^\alpha={dx^\alpha\over d\tau}\quad\Rightarrow\quad u^\alpha u_\alpha =-1
\label{16}\end{equation}
and the 4--acceleration
\begin{equation}
\dot{u}^\alpha=u^\alpha{}_{;\beta}u^\beta
\label{17}\end{equation}
where $\dot{u}^\alpha u_\alpha=0$.
The particle moves in free--fall, subject to no non--gravitational
forces, if and only if $\dot{u}_\alpha=0$, in which case its worldline is
a (timelike) geodesic.
In the observer's IOF
\begin{equation}
u^\alpha\doteq\gamma(v)(1,{d\vec{x}\over dt})=\gamma(1,\vec{v})\,,\quad
\gamma(v)=(1-v^2)^{-1/2}={dt\over d\tau}
\label{18}\end{equation}
where $t$ is the observer's proper time at that point, and
$\vec{v}$ is the measured velocity of the particle.
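In the observer's IOF, (\ref{16}) and (\ref{18}) are straightforward to verify numerically. A minimal sketch (Python with numpy; the speed $v=0.6$ is an arbitrary illustration, in units with $c=1$):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v3 = np.array([0.6, 0.0, 0.0])                # measured 3-velocity (c = 1)
gamma = 1.0 / np.sqrt(1.0 - v3 @ v3)          # Lorentz factor, eq. (18)
u = gamma * np.array([1.0, *v3])              # u^a = gamma (1, v) in the IOF

assert np.isclose(u @ eta @ u, -1.0)          # normalisation u^a u_a = -1, eq. (16)
print(gamma)                                  # approximately 1.25
```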
If $m=0$, the particle (photon or massless neutrino or graviton)
moves at the speed of
light, and along its worldline $ds^2=0$, so that proper time
cannot parametrise the worldline.
In the IOF of an observer $u^\alpha$, the light ray has angular
frequency $\omega$ and wave vector $\vec{k}$ (where $|\vec{k}|\doteq
\omega$), with phase $\phi\doteq\vec{k}\cdot\vec{x}-\omega t$, so
that
$$
\phi_{,\alpha}\doteq (-\omega,\vec{k})\quad\mbox{and}\quad\phi_{,\alpha}
\phi^{,\alpha}\doteq 0
$$
Now the phase is a covariant scalar, and its gradient is
a covariant null vector, which we call the 4--wave vector, and
which is geodesic:
\begin{equation}
k_\alpha=\phi_{,\alpha}\quad\mbox{and}\quad k_\alpha k^\alpha=0\quad\Rightarrow\quad
k^\alpha{}_{;\beta}k^\beta=0
\label{19}\end{equation}
From the above, in the observer's IOF, $\omega\doteq -k_\alpha u^\alpha
=-\dot{\phi}$.
This gives a covariant expression for the redshift between
events $E$ (`emitter') and $R$ (`receiver') along a ray:
\begin{equation}
1+z\equiv{\omega_E\over\omega_R}=
{\left(u_\alpha k^\alpha\right)_E\over\left(u_\alpha k^\alpha\right)_R}
\label{20}\end{equation}
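As a check of (\ref{19}) and (\ref{20}), the following sketch (Python with numpy; an emitter at rest and a receiver receding at $v=0.6$ along the ray are my illustrative choices) recovers the special--relativistic Doppler formula $1+z=\sqrt{(1+v)/(1-v)}$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
omega_E = 1.0
k = omega_E * np.array([1.0, 1.0, 0.0, 0.0])      # null 4-wave vector along x
assert np.isclose(k @ eta @ k, 0.0)               # k_a k^a = 0, eq. (19)

u_E = np.array([1.0, 0.0, 0.0, 0.0])              # emitter at rest
v = 0.6                                            # receiver recedes along x
gamma = 1.0 / np.sqrt(1.0 - v**2)
u_R = gamma * np.array([1.0, v, 0.0, 0.0])

one_plus_z = (u_E @ eta @ k) / (u_R @ eta @ k)    # eq. (20)
print(one_plus_z, np.sqrt((1 + v) / (1 - v)))     # both approximately 2.0
```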
\\
A fluid is modelled as a continuum with a well--defined average
4--velocity field $u^\alpha$, where $u^\alpha u_\alpha=-1$.
This hydrodynamic description requires
that the mean collision time is much less than any macroscopic
characteristic time (such as the expansion time in an expanding
universe); equivalently, the mean free path must be much less than
any macroscopic characteristic length. For a perfect fluid, $u^\alpha$
is uniquely defined\footnote{If the fluid is out of equilibrium as a
result of dissipative effects, then there is no unique average
4--velocity}
as the 4--velocity relative to which there is no
particle current, i.e.
\begin{equation}
n^\alpha=n u^\alpha
\label{21}\end{equation}
where $n$ is the number density.
The field of comoving observers $u^\alpha$ defines a covariant
splitting of spacetime into time $+$ space ($1+3$) via the
projection tensor
\begin{eqnarray}
h_{\alpha\beta}=g_{\alpha\beta}+u_\alpha u_\beta=h_{\beta\alpha}~\Rightarrow &&
h_{\alpha\beta}u^\beta=0\,,~h_\alpha{}^\mu h_{\mu\beta}=h_{\alpha\beta} \nonumber\\
{}&& h^\alpha{}_\alpha=3\,,~
h_{\alpha\beta}q^\beta=q_\alpha~~\mbox{if}~~q_\alpha u^\alpha=0
\label{22}\end{eqnarray}
which projects at each point into the instantaneous rest space of
the fluid/observer, and provides a 3--metric in the rest space.
In the comoving IOF
$$u^\alpha\doteq(1,\vec{0})\,,\quad h_{\alpha\beta}\doteq\mbox{diag}~(0,1,1,1)\,,
\quad h_{\alpha\beta}q^\alpha q^\beta\doteq\vec{q}\cdot\vec{q}$$
where $q_\alpha u^\alpha=0$.
This allows us to compare relativistic fluid kinematics and
dynamics with their Newtonian limits.
The covariant time derivative along $u^\alpha$ is
\begin{equation}
\dot{A}^{\alpha\cdots}{}{}_{\beta\cdots}=A^{\alpha\cdots}{}{}_{\beta\cdots;\mu}u^\mu
\label{23}\end{equation}
and describes the rate--of--change relative to comoving observers. In
the comoving IOF
$$
\dot{A}^{\alpha\cdots}{}{}_{\beta\cdots}\doteq{d\over d\tau}A^{\alpha\cdots}
{}{}_{\beta\cdots}
$$
The covariant spatial derivative is
\begin{eqnarray}
\mbox{D}_\alpha f &=& h_\alpha{}^\beta f_{,\beta} \label{24}\\
\mbox{D}_\alpha q_\beta &=& h_\alpha{}^\mu h_\beta{}^\nu \nabla_\mu q_\nu \label{25}\\
\mbox{D}_\alpha\sigma_{\beta\gamma} &=& h_\alpha{}^\mu h_\beta{}^\nu h_\gamma{}^\kappa
\nabla_\mu\sigma_{\nu\kappa}\,,~~\mbox{etc.}
\label{26}\end{eqnarray}
and describes spatial variations relative to
comoving observers.
In the comoving IOF, with $q^\alpha\doteq(q^0,\vec{q}\,)$
\begin{equation}
\mbox{D}_\alpha f\doteq (0, \vec{\nabla}f)\,,\quad \mbox{D}^\alpha q_\alpha\doteq
\vec{\nabla}\cdot\vec{q}\,,\quad \varepsilon^{ijk}\mbox{D}_j q_k\doteq \left(
\vec{\nabla}\times\vec{q}\right)^i
\label{27}\end{equation}
\\
Any spacetime vector can be covariantly split as
\begin{equation}
V^\alpha=Au^\alpha+B^\alpha\,,\quad\mbox{where}\quad A=-u_\alpha V^\alpha\,,
~B^\alpha=h^\alpha{}_\beta V^\beta~\Leftrightarrow~B^\alpha u_\alpha=0
\label{28}\end{equation}
For a rank--2 tensor:
\begin{equation}
V_{\alpha\beta}=Au_\alpha u_\beta+B_\alpha u_\beta +u_\alpha C_\beta+F_{\alpha\beta}
\label{29}\end{equation}
where $A=V_{\alpha\beta}u^\alpha u^\beta$, $B_\alpha u^\alpha=0= C_\alpha u^\alpha$ and
$$
F_{\alpha\beta}=h_\alpha{}^\mu h_\beta{}^\nu V_{\mu\nu}\quad\Leftrightarrow\quad
F_{\alpha\beta}u^\alpha=0=F_{\alpha\beta}u^\beta
$$
For example, if $V_{\alpha\beta}=W_{\alpha;\beta}$, then $F_{\alpha\beta}=\mbox{D}_\beta W_\alpha$.
Now $F_{\alpha\beta}$ may be further decomposed into symmetric and
skew parts:
$$
F_{\alpha\beta}=F_{(\alpha\beta)}+F_{[\alpha\beta]}
$$
In the comoving IOF, the corresponding decomposition of the
matrix of components $F_{ij}$ is simply
$$
F\doteq(F)+[F]={\textstyle{1\over2}}\left(F+F^T\right)
+{\textstyle{1\over2}}\left(F-F^T\right)
$$
and $(F)$ may be further split into its trace and
trace--free parts:
$$
(F)\doteq\left\{{\textstyle{1\over3}}
\mbox{tr}~F\right\}I+ \langle F\rangle
$$
The covariant expression of this is
$$
F_{(\alpha\beta)}=\left\{{\textstyle{1\over3}}F^\gamma{}_\gamma
\right\}h_{\alpha\beta}+ F_{<\alpha\beta>}
$$
where the symmetric, spatial trace--free part of any tensor
is defined by
\begin{equation}
V_{<\alpha\beta>}=h_\alpha{}^\mu h_\beta{}^\nu\left\{ V_{(\mu\nu)}-{\textstyle{1\over3}}
V_{\sigma\kappa}h^{\sigma\kappa}h_{\mu\nu}\right\}
\label{30}\end{equation}
Thus we can rewrite the decomposition (\ref{29}) in the covariant
irreducible form
\begin{equation}
V_{\alpha\beta}=Au_\alpha u_\beta+B_\alpha u_\beta+
u_\alpha C_\beta+{\textstyle{1\over3}}V_{\mu\nu}h^{\mu\nu}h_{\alpha\beta}
+V_{<\alpha\beta>}+V_{[\mu\nu]}h^\mu{}_\alpha h^\nu{}_\beta
\label{31}\end{equation}
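The decomposition (\ref{29})--(\ref{31}) can be verified numerically in a comoving IOF, where $u^\alpha\doteq(1,\vec 0)$. The sketch below (Python with numpy; $V_{\alpha\beta}$ is a random tensor) extracts each irreducible part and reassembles the original tensor exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
u = np.array([1.0, 0.0, 0.0, 0.0])            # comoving IOF 4-velocity u^a
u_l = eta @ u                                  # u_a
h_l = eta + np.outer(u_l, u_l)                 # h_{ab}, eq. (22)
h_m = np.eye(4) + np.outer(u, u_l)             # mixed h^a_b; diag(0,1,1,1) here
h_u = np.linalg.inv(eta) @ h_l @ np.linalg.inv(eta)   # h^{ab}

V = rng.standard_normal((4, 4))                # arbitrary V_{ab}

A = u @ V @ u                                  # A = V_{ab} u^a u^b
B = -h_m @ (V @ u)                             # B_a = -h_a^m V_{mb} u^b
C = -h_m @ (V.T @ u)                           # C_b = -h_b^n u^a V_{an}
F = h_m @ V @ h_m.T                            # fully projected part F_{ab}

trF = np.sum(h_u * V)                          # V_{mn} h^{mn}
F_sym = 0.5 * (F + F.T)
F_skew = 0.5 * (F - F.T)
V_tf = F_sym - (trF / 3.0) * h_l               # V_{<ab>}, eq. (30)

# irreducible form (31) reassembles V
V_rec = (A * np.outer(u_l, u_l) + np.outer(B, u_l) + np.outer(u_l, C)
         + (trF / 3.0) * h_l + V_tf + F_skew)
assert np.allclose(V_rec, V)
```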
\\
Now we are ready to define the quantities that covariantly describe
the fluid kinematics. These quantities are simply the irreducible
parts of the covariant derivative of the fluid 4--velocity. With
$V_{\alpha\beta}=u_{\alpha;\beta}$, we have $A=0=C_\alpha$ since $u_{\alpha;\beta}u^\alpha=0$,
and then $B_\alpha=-u_{\alpha;\beta}u^\beta=-\dot{u}_\alpha$. Thus (\ref{31}) gives
\begin{eqnarray}
&& u_{\alpha;\beta}=Hh_{\alpha\beta}+\sigma_{\alpha\beta}+\omega_{\alpha\beta}-\dot{u}_\alpha u_\beta
\quad\mbox{where}\quad
3H=u^\alpha{}_{;\alpha}=\mbox{D}^\alpha u_\alpha\,, \nonumber\\
&&\sigma_{\alpha\beta}=
u_{<\alpha;\beta>}=\mbox{D}_{<\beta}u_{\alpha>}\,,~~\omega_{\alpha\beta}=h_\alpha{}^\mu h_\beta{}^\nu
u_{[\mu;\nu]}=\mbox{D}_{[\beta}u_{\alpha]}
\label{32}\end{eqnarray}
In a comoving IOF at a point $P$, $\vec{v}$ is zero at $P$, but its
derivatives are not, and we find using (\ref{27}) that
$$
3H\doteq\vec{\nabla}\cdot\vec{v}\,,~~\varepsilon^{ijk}\omega_{jk}\doteq-\left(
\vec{\nabla}\times\vec{v}\right)^i
$$
so that $H$ generalises the Newtonian expansion rate and
$\omega_{\alpha\beta}$ generalises the Newtonian vorticity.
Similarly, it
can be seen that $\sigma_{\alpha\beta}$ is the relativistic generalisation
of the Newtonian shear. These kinematic quantities therefore have
the same physical interpretation as in Newtonian fluids.
A small sphere
of fluid defined in the IOF of a comoving observer
at $t=0$, and then measured in the observer's IOF a short time
later, undergoes the following changes:
\begin{itemize}
\item
due to $H$, its volume
changes but not its spherical shape;
\item
due to $\sigma_{\alpha\beta}$, its
volume is unchanged but its shape is distorted in a way defined by
the eigenvectors (principal axes) of the shear;
\item
due to $\omega_{\alpha\beta}$, its volume and shape are unchanged, but it is
rotated about the direction $\vec{\nabla}\times\vec{v}$.
\end{itemize}
The expansion rate defines a comoving scale factor $a$ that determines
completely the volume evolution:
\begin{equation}
H={\dot{a}\over a}
\label{33}\end{equation}
\section{Conservation Laws - Perfect Fluids}
Assuming there are no unbalanced creation/annihilation processes,
particle number is conserved in the fluid. In an IOF, this is
expressed via the continuity equation
$$
{\partial n\over \partial t}+\vec{\nabla}\cdot\left(n\vec{v}\right)\doteq 0
$$
By (\ref{21}), the covariant form of particle conservation is
\begin{equation}
n^\alpha{}_{;\alpha}=0\quad\Leftrightarrow\quad \dot{n}+3Hn=0 \quad
\Leftrightarrow\quad n a^3=\mbox{comoving const}
\label{34}\end{equation}
where (\ref{33}) was used to show that the comoving particle number
$N\propto na^3$ is constant.
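The statement $na^3=\,$const in (\ref{34}) can also be checked by direct integration. Below is a sketch (Python with scipy; the scale factor $a\propto t^{2/3}$ is an arbitrary illustrative choice, not from the text) that integrates $\dot n+3Hn=0$ and confirms that $na^3$ stays constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

a = lambda t: t**(2.0/3.0)               # illustrative scale factor
H = lambda t: 2.0 / (3.0 * t)            # H = adot/a, eq. (33)

sol = solve_ivp(lambda t, n: -3.0 * H(t) * n, (1.0, 10.0), [1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(1.0, 10.0, 5)
N = sol.sol(t)[0] * a(t)**3              # comoving particle number N ∝ n a^3
assert np.allclose(N, 1.0)               # constant, eq. (34)
```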
A perfect fluid is described by its 4--velocity $u^\alpha$,
number density $n$, energy
(or mass--energy) density $\rho$, pressure $p$ and specific
entropy $S$. In a comoving IOF, the pressure is isotropic and
given by the Newtonian stress tensor $\tau_{ij}\equiv p\delta_{ij}$.
This can be
covariantly combined with the energy density into the symmetric
energy--momentum tensor\footnote{The form of the energy--momentum
tensor may be justified via relativistic kinetic theory}
\begin{equation}
T_{\alpha\beta}=\rho u_\alpha u_\beta+ph_{\alpha\beta}
\label{35}\end{equation}
so that $T_{00}\doteq\rho=\,$energy density,
$T_{ij}\doteq\tau_{ij}=\,$stress, $T_{0i}\doteq0=\,$momentum density.
Just as the divergence of $n^\alpha$ produces a conservation law (\ref{34}),
so too does the divergence of $T^{\alpha\beta}$:
\begin{eqnarray}
T^{\alpha\beta}{}{}_{;\beta}=0~~\Rightarrow && \dot{\rho}+3H(\rho+p)=0
\label{36}\\
{}&& (\rho+p)\dot{u}_\alpha+\mbox{D}_\alpha p=0
\label{37}\end{eqnarray}
In a comoving IOF these become
$$
{\partial\rho\over\partial t}+(\rho+p)\vec{\nabla}\cdot\vec{v}\doteq 0\,,~~
(\rho+p){\partial\vec{v}\over\partial t}\doteq-\vec{\nabla}p
$$
so that (\ref{36}) is an energy conservation equation, generalising
the mass conservation equation of Newtonian fluid theory, while
(\ref{37}) is a momentum conservation equation, generalising the
Euler equation. (In relativity, the pressure contributes to the
effective energy density.)
The energy--momentum conservation equation
also follows from Einstein's field equations (\ref{15}) and the
contracted Bianchi identity (\ref{15a}). Equivalently, the
conservation equation ensures that the identity holds, i.e. that
this integrability condition of the field equations is satisfied.
Finally, the entropy is also conserved. In a comoving IOF, there is
no entropy flux, and the specific entropy $S$ is constant for each
fluid particle. The covariant expression of this statement
is
\begin{equation}
S^\alpha{}_{;\alpha}=0\quad\mbox{where}\quad S^\alpha=Sn^\alpha\quad\Rightarrow\quad
\dot{S}=0
\label{38}\end{equation}
where (\ref{34}) was used. Note that $S$ is constant along fluid
particle worldlines, and not throughout the fluid in general. If
$S$ is the same constant on each worldline -- i.e. if
$\mbox{D}_\alpha S=0$ as well as $\dot{S}=0$, so that $S_{,\alpha}=0$ --
then the fluid is called isentropic.
\section{Equilibrium Thermodynamics}
A perfect fluid is characterised by $(n^\alpha, S^\alpha, T^{\alpha\beta})$,
or equivalently by $(n,\rho,p,S, u^\alpha)$, subject to the
conservation laws above. What are the further relations amongst the
thermodynamic scalars $n,\rho,p,S$ and $T$, the temperature?
Firstly, the temperature is defined via the Gibbs equation
\begin{equation}
TdS=d\left({\rho\over n}\right)+pd\left({1\over n}\right)
\label{39}\end{equation}
where $df=f_{,\alpha}dx^\alpha$.
Secondly, thermodynamical equations of state are needed in order
to close the system of equations. Equations of state are
dependent on the particular physical properties of the fluid, and
are deduced from microscopic physics (i.e. kinetic
theory and statistical mechanics), or from phenomenological
arguments. In fact, assuming the metric is known (and so leaving
aside Einstein's field equations),
there are 7 equations -- i.e. (\ref{34}),
(\ref{36}), (\ref{37})$_i$, (\ref{38}), (\ref{39}) -- for 8 variables
-- i.e. $n,\rho,p,u_i,S,T$.
Thus a single scalar equation of state will
close the system.
The Gibbs equation shows that in general two of the
thermodynamical scalars are needed as independent variables.
For example, taking $n,\rho$ as independent, the remaining
thermodynamical scalars are $p(n,\rho),S(n,\rho),T(n,\rho)$,
and given any one of these, say $p=p(n,\rho)$,
the others will be determined. Often a barotropic equation of
state for the pressure is assumed, i.e. $p=p(\rho)$. By the
Gibbs equation, this implies $S$ is constant (see
below), i.e. the fluid is isentropic.
The adiabatic
speed of sound $c_s$ in a fluid is given in general by
\begin{equation}
c_s^2=\left({\partial p\over\partial\rho}\right)_S
\label{39b}\end{equation}
For a perfect fluid, this becomes
\begin{equation}
c_s^2={\dot{p}\over\dot{\rho}}
\label{39a}\end{equation}
as can be seen by choosing $\rho, S$ as independent variables,
and using the fact that $\dot{S}=0$:
$$
\dot{p}=\left({\partial p\over\partial\rho}\right)_{\!S}\dot{\rho}+
\left({\partial p\over\partial S}\right)_{\!\rho}\dot{S}
$$
The preceding considerations are phenomenological and mathematical.
If the fluid model is based on microscopic physics, further
conditions are imposed. For example, if the fluid
is a collision--dominated gas in equilibrium, then relativistic
kinetic theory, based essentially on imposing energy--momentum
conservation at a microscopic level, leads to stringent
conditions\footnote{Note that kinetic theory incorporates
assumptions about the interactions of particles, in particular that
the interactions are described by the Boltzmann collision integral.}.
If $m>0$ is the rest mass of the particles and
$$
\beta_\mu={\beta \over m}u_\mu\,,\quad \beta={m\over T}
$$
then the following conditions hold:\footnote{See \cite{i}.
In standard units, $\beta=mc^2/kT$.}
\begin{eqnarray}
&& \beta_{(\mu;\nu)} = 0 \label{40}\\
&& mn=c_0{K_2(\beta)\over\beta}\,,~~~ p = nT \label{41}\\
&& \rho=c_0\left[{K_1(\beta)\over\beta}+3{K_2(\beta)\over\beta^2}
\right] \label{42}
\end{eqnarray}
where $c_0$ is a constant and $K_n$ are modified Bessel functions
of the second kind. Furthermore, (\ref{40}) shows
that $\beta_\mu$ is a Killing vector field, so that the spacetime
is stationary. In particular, (\ref{32}) implies
\begin{equation}
H=0\,,\quad\dot{u}_\alpha=-\mbox{D}_\alpha\ln T\,,\quad\sigma_{\alpha\beta}=0
\label{43}\end{equation}
and then (\ref{34}), (\ref{36}) lead to
\begin{equation}
\dot{n}=\dot{\rho}=\dot{p}=\dot{T}=0
\label{44}\end{equation}
Thus if the perfect fluid is a relativistic Maxwell--Boltzmann gas in
equilibrium, severe restrictions are imposed not only on the
fluid dynamics but also on the spacetime geometry.
In the case of a gas of massless particles in collisional equilibrium,
the conditions are less severe:
\begin{eqnarray}
&& \beta_{(\mu;\nu)}=-{\dot{T}\over T^2}g_{\mu\nu}~~\Rightarrow~~
H=-{\dot{T}\over T}\,,~\sigma_{\alpha\beta}=0 \label{45}\\
&& n=b_0 T^3\,,~~~\rho=3p=3nT \label{46}
\end{eqnarray}
Thus $\beta_\mu$ is a conformal Killing
vector field, so that expansion is possible in equilibrium.
Kinetic theory shows that a purely phenomenological approach to fluid
thermodynamics holds potential problems in the form of hidden or
unknown consistency conditions that may be violated. Any
phenomenological model needs to be applied with caution.
The best motivated barotropic
perfect fluid model is that for incoherent
radiation or massless particles,
for which $p={1\over3}\rho$,
as in (\ref{46}). The energy conservation equation (\ref{36}) integrates,
on using (\ref{33}):
\begin{equation}
\rho=(\mbox{comoving const}) a^{-4}
\label{46a}\end{equation}
Cold, non--relativistic matter is often modelled as pressure--free
`dust', so that
\begin{equation}
p=0\quad\Rightarrow\quad \rho=(\mbox{comoving const})a^{-3}=mn
\label{46b}\end{equation}
A kinetic theory motivation for the dust model arises from (\ref{41}),
(\ref{42}) in the limit $\beta\gg 1$:
\begin{equation}
p=nT\,,~~~\rho\approx mn+{\textstyle{3\over2}}nT\quad\mbox{where}\quad
T\ll m
\label{46c}\end{equation}
The energy density is $\rho\approx n(mc^2+\varepsilon)$, where $mc^2$ is
the rest mass energy per particle, and $\varepsilon={3\over2}kT$ is the
thermal energy per particle.
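The limit (\ref{46c}) of the kinetic--theory relations (\ref{41}), (\ref{42}) can be checked numerically. The sketch below (Python with scipy; $m=c_0=1$ and $\beta=200$ are arbitrary illustrative values) confirms $\rho\approx mn+{3\over2}nT$ for $\beta\gg1$:

```python
import numpy as np
from scipy.special import kv   # modified Bessel functions of the second kind

m, c0 = 1.0, 1.0
beta = 200.0                   # beta = m/T >> 1: non-relativistic regime
T = m / beta

n = (c0 / m) * kv(2, beta) / beta                               # eq. (41)
rho = c0 * (kv(1, beta) / beta + 3.0 * kv(2, beta) / beta**2)   # eq. (42)

# non-relativistic limit, eq. (46c): rho ~ m n + (3/2) n T
assert np.isclose(rho, m * n + 1.5 * n * T, rtol=1e-3)
```

The residual relative error is of order $1/\beta^2$, consistent with the next term in the asymptotic expansion of the Bessel functions.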
While (\ref{46c}) is still reasonable at high temperatures
(e.g. for the electron, $m\approx 6\times 10^9\,$K, and (\ref{46c}) should
be very accurate for $T$ up to about $10^6$K), the
exact limiting dust case is only reasonable at low temperatures,
when random velocities are negligible. Of course the hydrodynamic
description is no longer valid in this limit.
We can find the evolution of the temperature easily in the
case of radiation. Comparing (\ref{46}) and (\ref{46a}), we get
\begin{equation}
\mbox{radiation:}\quad\quad T\propto {1\over a}
\label{46e}\end{equation}
In the general case, the Gibbs equation (\ref{39}) can be written as
$$
dS=-\left({\rho+p\over Tn^2}\right)dn+{1\over Tn}d\rho
$$
and the integrability condition
$$
{\partial^2S\over \partial T\partial n}={\partial^2S\over\partial n\partial T}
$$
becomes
\begin{equation}
n{\partial T\over\partial n}+(\rho+p){\partial T\over\partial\rho}=T{\partial p\over\partial\rho}
\label{46h}\end{equation}
Furthermore,
since $T=T(n,\rho)$, it follows on using number and energy
conservation (\ref{34}) and (\ref{36}) that
$$
\dot{T}=-3H\left[n{\partial T\over\partial n}+(\rho+p){\partial T\over \partial \rho}\right]
$$
and then (\ref{46h}) implies
\begin{equation}
{\dot{T}\over T}=-3H\left({\partial p\over\partial\rho}\right)_{\!n}
\label{46f}\end{equation}
From the derivation of (\ref{46f}), we see that it will hold identically
if the Gibbs integrability condition, number conservation and
energy conservation are satisfied.
This equation holds for any perfect fluid. For
non--relativistic matter (\ref{46c}) gives
$$
p={\textstyle{2\over3}}(\rho-mn)
$$
so that (\ref{46f}) implies:
\begin{equation}
\mbox{non--relativistic matter:}\quad\quad T\propto {1\over a^2}
\label{46g}\end{equation}
This shows that the mean particle speed decays like $a^{-1}$,
since the thermal energy per particle is $\varepsilon\approx {3\over2}
kT\approx{1\over2}m\bar{v}^2$.
Strictly, the limiting case of dust has $T=0$, but if dust is
understood as negligible pressure and temperature rather than
exactly zero pressure, then (\ref{46g}) holds.
Note that the Gibbs integrability condition shows explicitly
that one cannot independently specify equations of state for
the pressure and temperature. This is clearly illustrated in the
barotropic case.
\[ \]
{\bf Barotropic Perfect Fluids}\\
With $\rho, p$ as the independent variables in
the Gibbs equation (\ref{39}) in the general perfect fluid case,
we find:
\begin{eqnarray}
{n^2T\over(\rho+p)}dS &=&-\left[{\partial n\over\partial\rho}d\rho+
{\partial n\over\partial p}dp\right]+{n\over(\rho+p)}d\rho \nonumber\\
{}&=&\left[{n\over\rho+p}-{\partial n\over\partial\rho}\right]d\rho-
{\partial n\over\partial p}dp \nonumber\\
{}&=&\left[{n\over\rho+p}-{\dot{n}\over\dot{\rho}}+{\dot{p}\over
\dot{\rho}}{\partial n\over\partial p}\right]d\rho-{\partial n\over\partial p}dp \nonumber\\
{}&=& {\partial n\over\partial p}\left[{\dot{p}\over\dot{\rho}}\,d\rho-dp\right]
\nonumber
\end{eqnarray}
where we used
the conservation equations (\ref{34}) and (\ref{36}). Thus, for
any perfect fluid
\begin{equation}
n^2TdS=(\rho+p){\partial n\over\partial p}\left[{\dot{p}\over\dot{\rho}}d\rho-dp
\right]
\label{46k}\end{equation}
Suppose now that the pressure is barotropic: $p=p(\rho)$. It follows
immediately from (\ref{46k}) that $dS=0$, i.e. the fluid is isentropic.
The same conclusion follows in the case of barotropic temperature.
Choosing $\rho, T$ as the independent variables, we find
$$
n^2TdS=(\rho+p){\partial n\over\partial T}\left[{\dot{T}\over\dot{\rho}}d\rho-dT
\right]
$$
so that $T=T(\rho)$ implies $dS=0$.
If the pressure and temperature are
barotropic, then the Gibbs integrability
condition (\ref{46h}) strongly restricts the form of $T(\rho)$:
\begin{equation}
p=p(\rho)\quad\mbox{and}\quad T=T(\rho)\quad\Rightarrow\quad
T\propto \exp\int {dp\over\rho(p)+p}
\label{46i}\end{equation}
The radiation and dust models are cases of a linear barotropic
equation of state that is often used for convenience
\begin{equation}
p=(\gamma-1)\rho\quad\Rightarrow\quad \rho=(\mbox{comoving const})
a^{-3\gamma}
\label{46d}\end{equation}
By (\ref{39a}), the speed of sound is $c_s=\sqrt{\gamma-1}$.
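Both claims -- the solution $\rho\propto a^{-3\gamma}$ in (\ref{46d}) and the sound speed $c_s^2=\gamma-1$ from (\ref{39a}) -- can be verified symbolically. A sketch (Python with sympy; the scale factor is left as an arbitrary function):

```python
import sympy as sp

t = sp.symbols('t')
gam, rho0 = sp.symbols('gamma rho_0', positive=True)
a = sp.Function('a', positive=True)(t)

rho = rho0 * a**(-3*gam)                 # claimed solution, eq. (46d)
p = (gam - 1) * rho                      # linear barotropic equation of state
H = sp.diff(a, t) / a                    # eq. (33)

# energy conservation (36): rho_dot + 3H(rho + p) = 0
assert sp.simplify(sp.diff(rho, t) + 3*H*(rho + p)) == 0

# adiabatic sound speed (39a): c_s^2 = p_dot / rho_dot
cs2 = sp.simplify(sp.diff(p, t) / sp.diff(rho, t))
assert cs2 == gam - 1
```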
For fluids which have some basis in kinetic theory,
one can impose the restriction $1\leq\gamma\leq{4\over3}$. In
principle ${4\over3}<\gamma\leq 2$ still leads to an allowable
speed of sound ($\gamma=2$ is known as `stiff matter').
The false
vacuum of inflationary cosmology may be formally described by the
case $\gamma=0$.
If (\ref{46d}) holds then the Gibbs integrability condition (\ref{46h})
becomes
$$
n{\partial T\over\partial n}+\gamma\rho{\partial T\over\partial \rho}=(\gamma-1)T
$$
whose solution by the method of characteristics yields
\begin{equation}
T=\rho^{(\gamma-1)/\gamma}F\left({\rho^{1/\gamma}\over n}\right)
\label{46l}\end{equation}
where $F$ is an arbitrary function. By (\ref{34}) and (\ref{46d}),
$F$ is a comoving constant, i.e. $\dot{F}=0$.
If $T$ is also barotropic, then $F$ is constant and
we have a power--law form with fixed exponent for the temperature:
\begin{equation}
T\propto \rho^{(\gamma-1)/\gamma}
\label{46j}\end{equation}
The same result follows directly from (\ref{46i}).
Note that (\ref{46d}) and (\ref{46j}) are
consistent with the ideal gas law $p=nT$. For
dissipative fluids, this is no longer true.
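One can confirm symbolically that (\ref{46l}) solves the integrability condition. The sketch below (Python with sympy; $F$ is left as an arbitrary function, as in the text) checks $n\,\partial T/\partial n+\gamma\rho\,\partial T/\partial\rho=(\gamma-1)T$:

```python
import sympy as sp

n, rho = sp.symbols('n rho', positive=True)
gam = sp.symbols('gamma', positive=True)
F = sp.Function('F')

T = rho**((gam - 1)/gam) * F(rho**(1/gam) / n)    # eq. (46l)

# Gibbs integrability condition (46h) for p = (gamma - 1) rho
lhs = n*sp.diff(T, n) + gam*rho*sp.diff(T, rho)
assert sp.simplify(lhs - (gam - 1)*T) == 0
```

The two terms involving $F'$ cancel, leaving exactly $(\gamma-1)T$, for any choice of $F$.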
\section{Example: Cosmological Fluids}
The Ricci identity (\ref{13}) for the fluid 4--velocity,
appropriately projected and contracted, together with the
field equations (\ref{15}), leads to an evolution equation for
the expansion rate
\begin{equation}
3\dot{H}+3H^2-\dot{u}^\alpha{}_{;\alpha}+\sigma_{\alpha\beta}\sigma^{\alpha\beta}
-\omega_{\alpha\beta}\omega^{\alpha\beta}=-{\textstyle{1\over2}}(\rho+3p)
\label{47}\end{equation}
known as Raychaudhuri's equation.
In the standard FRW cosmological models, the rest spaces
of comoving observers mesh together to form spacelike 3--surfaces
$\{t=\,\mbox{const}\}$, where $t$ is proper time for comoving observers.
Each comoving observer sees that there are no preferred
spatial directions -- i.e. the cosmic 3--surfaces are spatially
isotropic and homogeneous. Thus for any covariant scalar $f$
and vector $v_\alpha$
$$
\mbox{D}_\alpha f=0~~~\left[\Leftrightarrow f=f(t)\right]\,,\quad h_\alpha{}^\beta
v_\beta=0~~~\left[\Leftrightarrow v_\alpha=V(t)u_\alpha\right]
$$
and
$$
u_{\alpha;\beta}\propto h_{\alpha\beta}\quad\Leftrightarrow\quad
\dot{u}_\alpha=0\,,~~~\sigma_{\alpha\beta}=0=\omega_{\alpha\beta}
$$
Raychaudhuri's equation (\ref{47}) reduces to
\begin{equation}
3\dot{H}+3H^2=-{\textstyle{1\over2}}(\rho+3p)
\label{48}\end{equation}
The momentum conservation equation (\ref{37}) is identically
satisfied. Since $\rho=\rho(t)$, $p=p(t)$, it follows that
$p=p(\rho)$, i.e. one may assume a barotropic equation of
state (for a single--component fluid).
Then (\ref{48}) and the energy conservation equation (\ref{36}) are
coupled equations in the 2 variables $H, \rho$, and can be
solved for a given $p(\rho)$. However it is more convenient to
use the Friedmann equation, the $(0,0)$ field equation, which is
a first integral of the Raychaudhuri equation:
\begin{equation}
H^2={\textstyle{1\over3}}\rho-{k\over a^2}
\label{49}\end{equation}
where $a(t)$ is the scale factor defined by (\ref{33}) and $k=0,\pm 1$
is the curvature index for the cosmic 3--surfaces, which by symmetry
are spaces of constant curvature.
In comoving spherical coordinates,
the FRW metric and 4--velocity are
\begin{equation}
ds^2=-dt^2+a(t)^2\left[{dr^2\over 1-kr^2}+r^2d\Omega^2\right]\,,\quad
u^\alpha=\delta^\alpha{}_0
\label{50}\end{equation}
where $d\Omega^2$ is the metric of the unit sphere.
The expansion of the universe ($H>0$) is confirmed by the
systematic redshift in electromagnetic radiation that reaches us
from distant galaxies. By (\ref{20}) and (\ref{50})
\begin{equation}
1+z={a(t_R)\over a(t_E)}
\label{50a}\end{equation}
showing that $a$ is increasing,
so that by (\ref{46d}) $\rho$ is decreasing. The early universe is
very hot, as confirmed by the after--glow we observe in the form
of the cosmic microwave background radiation. The early universe
is modelled by a radiation fluid (\ref{46a}), while
the late universe is cold and the dust model (\ref{46b}) is
appropriate. The transition from radiation-- to matter--domination
requires a careful analysis, and has to deal with the interaction
between radiation and matter. This covers the recombination era
of the universe, and involves dissipative processes which I will
discuss later.
Leaving aside this transition (which occupies a very
short time in the evolution of the universe), the matter and
radiation are effectively non--interacting. In the super--hot
conditions of the early universe, matter particles are
ultra--relativistic and effectively massless, so that a radiation
fluid in equilibrium is a good approximation. In the late universe,
(\ref{46a}) and (\ref{46b}) show that the energy density of
radiation is negligible compared to that of matter, and the
dust model becomes reasonable. For the flat universe case ($k=0$),
(\ref{46a}), (\ref{46b}) and (\ref{49}) lead to the solutions:
\begin{equation}
\mbox{radiation:}~~~ a\propto t^{1/2}\,,\quad\quad\mbox{matter:}~~~
a\propto t^{2/3}
\label{51}\end{equation}
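As a quick check on (\ref{51}), one can integrate the flat Friedmann equation numerically. The sketch below works in units with ${1\over3}\rho(a{=}1)=1$, so that $\rho\propto a^{-n}$ gives $\dot{a}=a^{1-n/2}$, and estimates the effective power--law index of $a(t)$ at late times ($n=4$ for radiation, $n=3$ for dust):

```python
import math

def integrate_scale_factor(n, t_end, dt=1e-3, a0=1.0):
    """Euler-integrate the flat Friedmann equation H^2 = rho/3 with
    rho = a^{-n} (units where rho(a=1)/3 = 1), i.e. da/dt = a^(1 - n/2)."""
    a = a0
    for _ in range(int(round(t_end / dt))):
        a += dt * a ** (1.0 - n / 2.0)
    return a

def powerlaw_index(n, t=100.0):
    """Estimate p in a ~ t^p from the growth of a between t and 2t."""
    return math.log(integrate_scale_factor(n, 2 * t) /
                    integrate_scale_factor(n, t)) / math.log(2.0)

assert abs(powerlaw_index(4) - 0.5) < 0.01        # radiation: a ~ t^(1/2)
assert abs(powerlaw_index(3) - 2.0 / 3.0) < 0.01  # matter:    a ~ t^(2/3)
```

The late--time indices converge to $1/2$ and $2/3$ regardless of the initial condition $a_0$, since the curvature--free solutions are attractors of the power--law form.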
Einstein's theory predicts that a radiation FRW universe will
begin at $t=0$ with infinite energy density and temperature.
However, for times less than the Planck time $t_P\approx
10^{-43}$ sec, quantum gravity effects are expected to become
dominant, and Einstein's theory will no longer hold. As yet, no
satisfactory quantum gravity theory has been developed, and models
of the very early universe are necessarily speculative.
One fairly successful model, which applies during
the semi--classical
period between the quantum era and the classical Einstein era,
is inflation. Inflationary models aim to answer some of the
problems that arise in the standard classical cosmology
(the `big bang' model).
In these models, the energy density of the universe is
dominated by a scalar field at around $10^{-34}$ ---
$10^{-32}$ sec. The pressure of the scalar field is negative,
which acts like an effective repulsive force,
leading to accelerated expansion, or inflation, during
which the scale factor $a$ increases by around $10^{30}$.
Although the scalar field is not a fluid, it has an energy--momentum
tensor of the perfect fluid form (\ref{35}). The condition for
accelerated expansion is $\ddot{a}>0$, so that, by (\ref{48})
\begin{equation}
\mbox{inflation}\quad\Leftrightarrow\quad \ddot{a}>0\quad
\Leftrightarrow\quad p<-{\textstyle{1\over3}}\rho
\label{52}\end{equation}
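The second equivalence in (\ref{52}) follows directly from (\ref{48}): since $H=\dot{a}/a$ gives $3\dot{H}+3H^2=3\ddot{a}/a$, equation (\ref{48}) may be rewritten as
$$
3\,{\ddot{a}\over a}=-{\textstyle{1\over2}}(\rho+3p)
$$
so that $\ddot{a}>0$ holds exactly when $\rho+3p<0$.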
Particular forms of inflation are exponential inflation in
a flat FRW universe, for which
\begin{equation}
a\propto \exp(H_I t)\quad\mbox{and}\quad p=-\rho
\label{53}\end{equation}
and power--law inflation, for which $a\propto t^N$, $N>1$.
\chapter{Dissipative Relativistic Fluids}
Perfect fluids in equilibrium generate no entropy and no
`frictional' type heating, because their dynamics is
reversible and without dissipation. For many processes in
cosmology and astrophysics, a perfect fluid model is adequate.
However, real fluids behave irreversibly, and some processes in
cosmology and astrophysics cannot be understood except as
dissipative processes, requiring a relativistic theory of
dissipative fluids.
In order to model such processes, we need non--equilibrium
or irreversible thermodynamics. Perhaps the most satisfactory
approach to irreversible thermodynamics is via non--equilibrium
kinetic theory. However, this is very complicated, and I will take
instead a standard phenomenological approach, pointing out how
kinetic theory supports many of the results. A comprehensive, modern
and accessible discussion of irreversible thermodynamics is given in
\cite{jcl}. This text includes relativistic thermodynamics, but most
of the theory and applications are non--relativistic. A
relativistic, but more advanced, treatment may be found in
\cite{i} (see also \cite{is}, \cite{hl}).
Standard, or classical, irreversible thermodynamics was first
extended from Newtonian to relativistic fluids by Eckart in
1940. However, the Eckart theory and a variation of it due
to Landau and Lifshitz in the 1950s share with their Newtonian
counterparts the problem that dissipative perturbations propagate
at infinite speeds. This non--causal feature
is unacceptable in a relativistic theory -- and worse still, the
equilibrium states in the theory are unstable.
The problem is rooted in the way that non--equilibrium states
are described -- i.e. via the local equilibrium variables alone.
Extended irreversible thermodynamics takes its name from the fact
that the set needed to describe
non--equilibrium states is extended to include the dissipative
variables. This feature leads to causal and stable behaviour under
a wide range of conditions.
A non--relativistic
extended theory was developed by M\"{u}ller in the 1960s,
and independently a relativistic version was developed by Israel
and Stewart in the 1970s. The extended theory is also known as
causal thermodynamics, second--order thermodynamics (because the
entropy includes terms of second order in the dissipative
variables), and transient thermodynamics (because the theory
incorporates transient phenomena on the scale of the mean free
path/ time, outside the quasi--stationary regime of the classical
theory).
In this chapter I will give a simple introduction to these
features, leading up to a formulation of relativistic
causal
thermodynamics that can be used for applications in cosmology
and astrophysics.
\section{Basic Features of Irreversible\protect\\ Thermodynamics}
For a dissipative fluid, the particle 4--current will be taken
to be of the same form as (\ref{34}). This corresponds to
choosing an average 4--velocity in which there is no particle
flux -- known as the particle frame.
At any event in spacetime, the thermodynamic state of the fluid is
close to a fictitious equilibrium state at that event\footnote{Note
that the
equilibrium states are different at different events, and therefore
not subject to differential conditions such as (\ref{40}) -- (\ref{46})}.
The local
equilibrium scalars are denoted $\bar{n}, \bar{\rho}, \bar{p},
\bar{S}, \bar{T}$, and the local equilibrium 4--velocity is
$\bar{u}^\mu$. In the particle frame, it is possible to choose
$\bar{u}^\mu$ such that the number and energy densities coincide
with the local equilibrium values, while the pressure in general
deviates from the local equilibrium pressure:
\begin{equation}
n=\bar{n}\,,\quad\rho=\bar{\rho}\,,\quad p=\bar{p}+\Pi
\label{1'}\end{equation}
where $\Pi=p-\bar{p}$ is the bulk viscous pressure. From now on I
will drop the bar on the equilibrium pressure and write $p+\Pi$ for
the effective non--equilibrium pressure:
$$
p_{\mbox{eff}}=p+\Pi\quad\quad(p\rightarrow p_{\mbox{eff}}\,,
\quad\bar{p}\rightarrow p)
$$
The form of the
energy--momentum tensor may be deduced from
the equilibrium form (\ref{35}) and the
general covariant decomposition (\ref{31}), given that
$T_{\alpha\beta}$ is symmetric:
\begin{equation}
T_{\alpha\beta}=\rho u_\alpha u_\beta+(p+\Pi)h_{\alpha\beta}+q_\alpha u_\beta+q_\beta u_\alpha+\pi_{\alpha\beta}
\label{2'}\end{equation}
where
$$
q_\alpha u^\alpha=0\,,\quad\pi_{\alpha\beta}=\pi_{<\alpha\beta>}~\Rightarrow~
\pi_{\alpha\beta}u^\beta=\pi_{[\alpha\beta]}=\pi^\alpha{}_\alpha=0
$$
In a comoving IOF, $q_\alpha\doteq(0,\vec{q})$ and $\pi_{\alpha\beta}\doteq
\pi_{ij}\delta_\alpha{}^i\delta_\beta{}^j$, so
that $\vec{q}$ is the
energy flux relative to the particle frame (which in this frame is
entirely due to heat flow), while $\pi_{ij}$ is the anisotropic
stress.
Both the standard and extended theories
impose conservation of particle number and
energy--momentum:
$$
n^\alpha{}_{;\alpha}=0\,,\quad\quad T^{\alpha\beta}{}{}_{;\beta}=0
$$
Particle number conservation leads to the same equation (\ref{34}) that
holds in the equilibrium case. However the equilibrium energy and
momentum conservation equations (\ref{36}) and (\ref{37}) are changed by
the dissipative terms in (\ref{2'}):
\begin{eqnarray}
\dot{\rho}+3H(\rho+p+\Pi)+\mbox{D}^\alpha q_\alpha+2\dot{u}_\alpha q^\alpha+
\sigma_{\alpha\beta}\pi^{\alpha\beta}&=&0 \label{3'}\\
(\rho+p+\Pi)\dot{u}_\alpha+\mbox{D}_\alpha(p+\Pi)+\mbox{D}^\beta\pi_{\alpha\beta}+
\dot{u}^\beta\pi_{\alpha\beta} &&{}\nonumber\\
{}+h_\alpha{}^\beta\dot{q}_\beta
+\left(4Hh_{\alpha\beta}+\sigma_{\alpha\beta}+\omega_{\alpha\beta}\right)q^\beta&=&0 \label{4'}
\end{eqnarray}
In irreversible thermodynamics, the entropy is no longer
conserved, but grows, according to the second law of thermodynamics.
The rate of entropy production is given by the divergence
of the entropy 4--current, so that
the covariant form of the second law of thermodynamics is
\begin{equation}
S^\alpha{}_{;\alpha}\geq 0
\label{5'}\end{equation}
$S^\alpha$ no longer has the simple form in (\ref{38}), but
has a dissipative term:
\begin{equation}
S^\alpha=Snu^\alpha+{R^\alpha\over T}
\label{6'}\end{equation}
where $S=\bar{S}$ and $T=\bar{T}$ are
still related via the Gibbs equation
(\ref{39}).\footnote{In extended thermodynamics, this is
the Israel--Stewart approach.
An alternative
approach is to extend the Gibbs equation by
including
dissipative terms, and to use a generalised temperature, specific
entropy and pressure. The two approaches agree near
equilibrium \cite{gl}.}
The dissipative part $R^\alpha$ of $S^\alpha$ is assumed to be an algebraic
function (i.e. not containing derivatives) of $n^\alpha$ and $T^{\alpha\beta}$,
that vanishes in equilibrium:
$$
R^\alpha=R^\alpha(n^\beta,T^{\mu\nu})\quad\mbox{and}\quad \bar{R}^\alpha=0
$$
This assumption is part of the hydrodynamical description, in the
sense that non--equilibrium states are assumed to be adequately
specified by the hydrodynamical tensors $n^\alpha, T^{\alpha\beta}$
alone.\footnote{In kinetic theory, this corresponds to truncating
the non--equilibrium distribution function -- via the
Grad 14--moment approximation \cite{is}.}
The standard and extended theories of irreversible thermodynamics
differ in the form of this function.
\section{Standard Irreversible Thermodynamics}
The standard
Eckart theory makes the simplest possible assumption about $R^\alpha$
-- i.e. that it is linear in the dissipative quantities. The only
such vector that can be algebraically constructed from $(\Pi,q_\alpha,
\pi_{\alpha\beta})$ and $u^\alpha$ is
$$
f(\rho,n)\Pi u^\alpha+g(\rho,n)q^\alpha
$$
Now the entropy density $-u_\alpha S^\alpha$ should be a maximum in
equilibrium, i.e.
$$
\left[{\partial\over\partial\Pi}(-u_\alpha S^\alpha)\right]_{\mbox{eqm}}=0
$$
This implies $f=0$.
In a comoving IOF,
$q_\alpha/T\doteq(0,\vec{q}/T)$, which is the entropy flux due to heat
flow. Thus $g=1$ and (\ref{6'}) becomes
\begin{equation}
S^\alpha=Snu^\alpha+{q^\alpha\over T}
\label{7'}\end{equation}
Using the Gibbs equation (\ref{39}) and
the conservation equations (\ref{34}) and (\ref{3'}),
the divergence of (\ref{7'}) becomes
\begin{equation}
TS^\alpha{}_{;\alpha}=-\left[3H\Pi+\left(\mbox{D}_\alpha\ln T+\dot{u}_\alpha\right)q^\alpha
+\sigma_{\alpha\beta}\pi^{\alpha\beta}\right]
\label{8'}\end{equation}
Notice that the equilibrium conditions (\ref{43}) from kinetic theory
lead to the vanishing of each factor multiplying
the dissipative terms on the
right, and therefore to $S^\alpha{}_{;\alpha}=0$.
From (\ref{8'}), we see that the simplest way to satisfy (\ref{5'})
is to impose the following linear relationships between the
thermodynamic `fluxes' $\Pi, q_\alpha, \pi_{\alpha\beta}$ and the
corresponding thermodynamic `forces' $H, \dot{u}_\alpha+\mbox{D}_\alpha\ln T,
\sigma_{\alpha\beta}$:
\begin{eqnarray}
\Pi &=& -3\zeta H \label{9'}\\
q_\alpha &=& -\lambda \left(\mbox{D}_\alpha T+T\dot{u}_\alpha\right) \label{10'}\\
\pi_{\alpha\beta} &=& -2\eta \sigma_{\alpha\beta} \label{11'}
\end{eqnarray}
These are the constitutive equations for dissipative quantities in the
standard Eckart theory of relativistic irreversible thermodynamics.
They are relativistic generalisations of the corresponding Newtonian
laws:
\begin{eqnarray}
\Pi &=& -3\zeta \vec{\nabla}\cdot\vec{v}\quad\quad\mbox{(Stokes)}
\nonumber\\
\vec{q} &=& -\lambda \vec{\nabla} T\quad\quad\mbox{(Fourier)} \nonumber\\
\pi_{ij} &=& -2\eta \sigma_{ij}\quad\quad\mbox{(Newton)}\nonumber
\end{eqnarray}
This is confirmed by using a comoving IOF in (\ref{9'}) -- (\ref{11'}) --
except that in the relativistic case, as discovered by Eckart,
there is an acceleration term in (\ref{10'})
arising from the inertia of heat
energy. Effectively, a heat flux will arise from accelerated matter
even in the absence of a temperature gradient.
The Newtonian laws allow us to identify the thermodynamic
coefficients:
\begin{itemize}
\item
$\zeta(\rho,n)$ is the bulk viscosity
\item
$\lambda(\rho,n)$ is the thermal conductivity
\item
$\eta(\rho,n)$ is the shear viscosity
\end{itemize}
Given the linear constitutive equations (\ref{9'}) -- (\ref{11'}), the
entropy production rate (\ref{8'}) becomes
\begin{equation}
S^\alpha{}_{;\alpha}={\Pi^2\over\zeta T}+{q_\alpha q^\alpha\over\lambda T^2}+
{\pi_{\alpha\beta}\pi^{\alpha\beta}\over2\eta T}
\label{12'}\end{equation}
which is guaranteed to be non--negative provided that
$$
\zeta\geq 0\,,\quad\lambda\geq0\,,\quad\eta\geq0
$$
Note that the Gibbs equation (\ref{39}) together with number and
energy conservation (\ref{34}) and (\ref{3'}), leads to an evolution
equation for the entropy:
\begin{equation}
Tn\dot{S}=-3H\Pi-q^\alpha{}_{;\alpha}-\dot{u}_\alpha q^\alpha-\sigma_{\alpha\beta}\pi^{\alpha\beta}
\label{12a'}\end{equation}
Many, probably most, of the applications of irreversible
thermodynamics in relativity have used this Eckart theory.
However the algebraic nature of the Eckart constitutive equations
leads to severe problems. Qualitatively, it can be seen that if
a thermodynamic force is suddenly switched off, then the
corresponding thermodynamic flux instantaneously vanishes. This
indicates that a signal propagates through the fluid at infinite
speed, violating relativistic causality.\footnote{Even in the
Newtonian case, infinite signal speeds present a problem, since
physically we expect the signal speed to be limited by the
maximum molecular speed.}
\section{Simple Example: Heat Flow}
For a quantitative demonstration, consider the flow of heat
in a non--accelerating, non--expanding and vorticity--free
fluid in flat spacetime, where the comoving
IOF may be chosen as a global orthonormal frame.
In the non--relativistic regime the fluid energy density is
given by (\ref{46c}), and then the energy conservation equation
(\ref{3'}) gives
$$
{\textstyle{3\over2}}n{\partial T\over\partial t}=-\vec{\nabla}\cdot\vec{q}
$$
since $\partial n/\partial t=0$ by (\ref{34}). The Eckart law (\ref{10'}) reduces to
$$
\vec{q}=-\lambda\vec{\nabla}T
$$
Assuming that $\lambda$ is constant, these two equations lead to
\begin{equation}
{\partial T\over\partial t}=\chi\nabla^2 T\quad\mbox{where}\quad\chi={2\lambda
\over 3n}
\label{13'}\end{equation}
which is the heat conduction equation. This equation is parabolic,
corresponding to infinite speed of propagation.
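The infinite propagation speed can be made explicit via the Green's function of (\ref{13'}): a delta--function temperature pulse at $t=0$ spreads into a Gaussian that is non--zero at {\em every} $x$ for any $t>0$, however small. A minimal sketch (with an illustrative value $\chi=1$):

```python
import math

def heat_kernel(x, t, chi):
    """Green's function of dT/dt = chi * d^2T/dx^2: a delta pulse at
    x = 0, t = 0 evolves into a Gaussian of width sqrt(2*chi*t),
    which is nonzero at every x as soon as t > 0."""
    return math.exp(-x * x / (4.0 * chi * t)) / math.sqrt(4.0 * math.pi * chi * t)

# For any t > 0 the pulse is already felt at large distances: here the
# signal has reached x = 3 by t = 0.1, an effective speed of at least 30.
assert heat_kernel(3.0, 0.1, chi=1.0) > 0.0
assert heat_kernel(0.0, 0.1, chi=1.0) > heat_kernel(3.0, 0.1, chi=1.0)
```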
Apart from causality violation, the Eckart theory has
in addition the pathology of unstable equilibrium states. It can
be argued that a dissipative fluid will very rapidly tend
towards a quasi--stationary state that is adequately described by
the Eckart theory. However, there are many processes in which
non--stationary relaxational effects dominate.\footnote{For
examples and further discussion, see \cite{jcl}, \cite{s}.}
Furthermore, even
if the Eckart theory can describe the asymptotic states, it is
clearly unable to deal with the evolution towards these states,
or with the overall dynamics of
the fluid, in a satisfactory way.
Qualitatively, one expects that if a thermodynamic force is
switched off, the corresponding thermodynamic flux should die away
over a finite time period. Referring to the heat flow example above,
if $\vec{\nabla}T$ is set to zero at time $t=0$, then instead
of $\vec{q}(t)=0$ for $t\geq0$,
as predicted by the Eckart law, we expect that
$$
\vec{q}(t)=\vec{q}_0\exp\left(-{t\over\tau}\right)
$$
where $\tau$ is a characteristic relaxational time for transient heat
flow effects. Such a relaxational feature would arise if the
Eckart--Fourier law were modified as
\begin{equation}
\tau\dot{\vec{q}}+\vec{q}=-\lambda\vec{\nabla}T
\label{14'}\end{equation}
This is the Maxwell--Cattaneo modification of the Fourier law,
and it is
in fact qualitatively what arises in the extended theory.
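The relaxational behaviour is easy to verify numerically. The sketch below integrates (\ref{14'}) after $\vec{\nabla}T$ is switched off at $t=0$ (illustrative values $\tau=2$, $q_0=1$), recovering the exponential decay of the flux over the relaxation time:

```python
import math

def relax_heat_flux(q0, tau, dt, steps):
    """Euler-integrate the Maxwell--Cattaneo law tau*dq/dt + q = -lambda*grad(T)
    after the temperature gradient is switched off (grad T = 0 for t >= 0),
    so dq/dt = -q/tau: the flux decays over the relaxation time tau instead
    of vanishing instantaneously as in the Eckart--Fourier law."""
    q = q0
    for _ in range(steps):
        q += dt * (-q / tau)
    return q

tau, q0 = 2.0, 1.0
q_numeric = relax_heat_flux(q0, tau, dt=1e-4, steps=20000)  # integrate to t = tau
assert abs(q_numeric - q0 * math.exp(-1.0)) < 1e-3
```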
With the Maxwell--Cattaneo form (\ref{14'}), the heat conduction
equation (\ref{13'}) is modified as
\begin{equation}
\tau{\partial^2T\over\partial t^2}+{\partial T\over\partial t}-\chi\nabla^2 T=0
\label{13a'}\end{equation}
which is a damped wave equation. A thermal plane--wave solution
$$
T\propto \exp\left[i(\vec{k}\cdot\vec{x}-\omega t)\right]
$$
leads to the dispersion relation
$$
k^2={\tau\omega^2+i\omega\over\chi}
$$
so that the phase velocity is
$$
V={\omega\over\mbox{Re}(k)}=\left[{2\chi\omega\over \tau\omega+
\sqrt{1+\tau^2\omega^2}}\right]^{1/2}
$$
In the high frequency limit, i.e. $\omega\gg \tau^{-1}$, we see that
$$
V\approx \sqrt{{\chi\over\tau}}
$$
The high--frequency limit gives the speed of thermal pulses --
known as second sound -- and
it follows that this speed is finite for $\tau>0$.
Thus the introduction of a
relaxational term removes the problem of infinite propagation
speeds.
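The dispersion analysis can be checked directly. With illustrative values $\chi=2$, $\tau=0.5$, the sketch below computes $V=\omega/\mbox{Re}(k)$ for the damped wave equation (\ref{13a'}) and confirms that $V$ saturates at the second--sound speed $\sqrt{\chi/\tau}$ at high frequency:

```python
import math

def phase_velocity(omega, chi, tau):
    """Phase velocity V = omega / Re(k) for the damped wave equation
    tau*T_tt + T_t - chi*T_xx = 0, whose plane-wave dispersion relation
    is chi*k^2 = tau*omega^2 + i*omega."""
    k = (complex(tau * omega ** 2, omega) / chi) ** 0.5  # principal root
    return omega / k.real

chi, tau = 2.0, 0.5
v_second_sound = math.sqrt(chi / tau)
assert phase_velocity(0.01, chi, tau) < 0.5 * v_second_sound   # slow at low freq
assert abs(phase_velocity(1e6, chi, tau) - v_second_sound) < 1e-6
```

For $\tau\rightarrow0$ the saturation speed $\sqrt{\chi/\tau}$ diverges, recovering the non--causal parabolic limit.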
The intuitive arguments of this section form an introduction to the
development of the extended theory of Israel and Stewart.
\section{Causal Thermodynamics}
Clearly the Eckart postulate (\ref{7'}) for $R^\alpha$ is too simple.
Kinetic theory indicates that in fact $R^\alpha$ is second--order in the
dissipative fluxes. The Eckart assumption, by truncating at first
order, removes the terms that are necessary to provide causality and
stability. The most general algebraic form for $R^\alpha$ that is at
most second--order in the dissipative fluxes is
\begin{eqnarray}
S^\mu &=& Snu^\mu+{q^\mu\over T}-
\left(\beta_0\Pi^2
+\beta_1q_\nu q^\nu+\beta_2\pi_{\nu\kappa}
\pi^{\nu\kappa}\right){u^\mu\over 2T} \nonumber\\
{}&& +{\alpha_0\Pi q^\mu\over T}+{\alpha_1\pi^{\mu\nu}q_\nu\over T}
\label{15'}\end{eqnarray}
where $\beta_A(\rho,n)\geq0$ are thermodynamic
coefficients for scalar, vector and tensor dissipative contributions
to the entropy density, and
$\alpha_A(\rho,n)$ are thermodynamic
viscous/ heat coupling coefficients. It follows from (\ref{15'}) that
the effective entropy density (measured by comoving observers) is
\begin{equation}
-u_\mu S^\mu=Sn-
{1\over2T}\left(\beta_0\Pi^2
+\beta_1q_\mu q^\mu+\beta_2\pi_{\mu\nu}
\pi^{\mu\nu}\right)
\label{16'}\end{equation}
independent of $\alpha_0, \alpha_1$.
(Note that the entropy density is a maximum in equilibrium.)
For simplicity, I will assume
\begin{equation}
\alpha_0=0=\alpha_1\quad\quad\mbox{i.e. no viscous/ heat coupling}
\label{17'}\end{equation}
This assumption is consistent with linearisation in a perturbed
FRW universe, since the coupling terms lead to non--linear
deviations from the FRW background. However, the assumption (\ref{17'})
may not be reasonable for non--uniform stellar models and other
situations where the background solution is inhomogeneous.
The divergence of the extended current (\ref{15'}) --
with (\ref{17'}) -- follows
from the Gibbs equation and
the conservation equations (\ref{34}), (\ref{3'}) and (\ref{4'}):
\begin{eqnarray}
TS^\alpha{}_{;\alpha} &=& -\Pi\left[3H+\beta_0\dot{\Pi}+{\textstyle{1\over2}}
T\left({\beta_0\over T}u^\alpha\right)_{;\alpha}\Pi\right]\nonumber\\
{}&& -q^\alpha\left[\mbox{D}_\alpha\ln T+\dot{u}_\alpha+\beta_1\dot{q}_\alpha+{\textstyle{1\over2}}
T\left({\beta_1\over T}u^\mu\right)_{;\mu}q_\alpha\right]\nonumber\\
{}&&-\pi^{\alpha\mu}\left[\sigma_{\alpha\mu}+\beta_2\dot{\pi}_{\alpha\mu}+
{\textstyle{1\over2}}T\left({\beta_2\over T}u^\nu\right)_{;\nu}
\pi_{\alpha\mu}\right]
\label{18'}\end{eqnarray}
The simplest way to satisfy the second law of thermodynamics (\ref{5'})
is to impose, as in the standard theory, linear relationships
between the (extended) thermodynamic fluxes and forces, leading to
the following constitutive or transport equations\footnote{This linear
assumption is in fact justified by kinetic theory, which leads to the
same form of the transport equations \cite{is}.}:
\begin{eqnarray}
\tau_0\dot{\Pi}+\Pi &=& -3\zeta H-\left[{\textstyle{1\over2}}\zeta T
\left({\tau_0\over\zeta T}u^\alpha\right)_{;\alpha}\Pi\right] \label{19'}\\
\tau_1 h_\alpha{}^\beta\dot{q}_\beta+q_\alpha &=& -\lambda\left(\mbox{D}_\alpha T+T\dot{u}_\alpha
\right)-\left[{\textstyle{1\over2}}\lambda T^2\left({\tau_1\over\lambda T^2}
u^\beta\right)_{;\beta}q_\alpha\right] \label{20'}\\
\tau_2 h_\alpha{}^\mu h_\beta{}^\nu\dot{\pi}_{\mu\nu}+\pi_{\alpha\beta} &=&
-2\eta\sigma_{\alpha\beta}-\left[\eta T\left({\tau_2\over 2\eta T}u^\nu
\right)_{;\nu}\pi_{\alpha\beta}\right] \label{21'}
\end{eqnarray}
where the relaxational times $\tau_A(\rho,n)$ are given by
\begin{equation}
\tau_0=\zeta\beta_0\,,\quad\tau_1=\lambda T\beta_1\,,\quad
\tau_2=2\eta\beta_2
\label{21a'}\end{equation}
With these transport equations, the entropy production rate
has the same non--negative form (\ref{12'}) as in the standard theory.
Because of the simplifying assumption (\ref{17'}), there are no
couplings of scalar/ vector/ tensor dissipative fluxes.
As well as these viscous/ heat couplings, kinetic theory shows
that in general there will also be couplings of heat flux and
anisotropic pressure to the vorticity -- which, unlike the shear,
does not vanish in general
in equilibrium (see (\ref{43})). These couplings
give rise to the following additions to the right hand
sides of (\ref{20'}) and (\ref{21'}) respectively:
$$
+\lambda T\gamma_1 \omega_{\alpha\beta}q^\beta\quad\quad\mbox{and}\quad\quad
+2\eta\gamma_2 \pi^\mu{}_{<\alpha}\omega_{\beta>\mu}
$$
where $\gamma_1(\rho,n),\gamma_2(\rho,n)$ are the thermodynamic
coupling coefficients. In a comoving IOF, (\ref{27}) shows that the
addition to (\ref{20'}) has the form
$$
\lambda T\gamma_1 \vec{\omega}\times\vec{q}\quad\mbox{where}\quad
\vec{\omega}\doteq \vec{\nabla}\times\vec{v}
$$
If the background solution has zero
vorticity, as is the case in a perturbed FRW universe, then these
vorticity coupling terms will vanish in linearised theory.
However, they would be important in rotating stellar models, where
the background equilibrium solution has $\omega_{\alpha\beta}\neq0$.
The terms in square brackets on the right
of equations (\ref{19'}) -- (\ref{21'}) are often omitted. This
amounts to the implicit assumption that these terms are negligible
compared with the other terms in the equations. I will call
the simplified equations the truncated Israel--Stewart equations.
One needs to investigate carefully the conditions under which the
truncated equations are reasonable. This will
be further discussed in the next chapter. The truncated equations,
together with the no--coupling assumption (\ref{17'}), are of
covariant relativistic
Maxwell--Cattaneo form:
\begin{eqnarray}
\tau_0\dot{\Pi}+\Pi &=& -3\zeta H
\label{22'}\\
\tau_1 h_\alpha{}^\beta\dot{q}_\beta+q_\alpha &=& -\lambda\left(\mbox{D}_\alpha T+T\dot{u}_\alpha
\right)
\label{23'}\\
\tau_2 h_\alpha{}^\mu h_\beta{}^\nu\dot{\pi}_{\mu\nu}+\pi_{\alpha\beta} &=&
-2\eta\sigma_{\alpha\beta}
\label{24'}
\end{eqnarray}
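To illustrate the difference between the algebraic Eckart relation (\ref{9'}) and the evolution equation (\ref{22'}), the toy sketch below integrates the truncated bulk viscosity equation for a prescribed radiation--era expansion $H=1/2t$, with illustrative values $\zeta=0.1$, $\tau_0=0.5$ (the back--reaction of $\Pi$ on the expansion is ignored). Starting from equilibrium, the causal flux relaxes towards, and then tracks, the Eckart value:

```python
def truncated_bulk_viscosity(zeta, tau0, H, t0, t1, dt=1e-4):
    """Toy Euler integration of the truncated Israel--Stewart equation
    tau0*dPi/dt + Pi = -3*zeta*H(t) for a prescribed expansion history H(t),
    starting from the equilibrium value Pi(t0) = 0."""
    Pi, t = 0.0, t0
    while t < t1:
        Pi += dt * (-3.0 * zeta * H(t) - Pi) / tau0
        t += dt
    return Pi

H = lambda t: 0.5 / t                       # radiation era: H = 1/(2t)
Pi_causal = truncated_bulk_viscosity(zeta=0.1, tau0=0.5, H=H, t0=1.0, t1=20.0)
Pi_eckart = -3.0 * 0.1 * H(20.0)            # algebraic Eckart value at t = t1
assert Pi_causal < 0.0                      # bulk stress opposes the expansion
# after many relaxation times the causal flux tracks the Eckart value:
assert abs(Pi_causal - Pi_eckart) < 0.2 * abs(Pi_eckart)
```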
The crucial difference between the standard Eckart and the extended
Israel--Stewart transport equations is that the latter are differential
evolution equations, while the former are algebraic relations. As we
saw in the previous section, the evolution terms, with the relaxational
time coefficients $\tau_A$, are needed for causality -- as well as
for modelling high--frequency or transient phenomena,
where `fast' variables and relaxation effects
are important. The price paid for the improvements that the
extended causal thermodynamics brings is
that new thermodynamic coefficients are introduced.
However, as is the case with the coefficients $\zeta,\lambda,\eta$
that occur also in standard theory, these new coefficients may be
evaluated or at least estimated via kinetic theory. The
relaxation times $\tau_A$
involve complicated collision integrals. In fact, they are
usually estimated as mean collision times,
of the form
\begin{equation}
\tau\approx{1\over n\sigma v}
\label{25'}\end{equation}
where $\sigma$ is a collision cross section and $v$ the mean
particle speed.
It is important to remember that the derivation of the causal
transport equations is based on the assumption that the fluid is
close to equilibrium. Thus the dissipative fluxes are small:
\begin{equation}
|\Pi|\ll p\,,\quad \left(\pi_{\alpha\beta}\pi^{\alpha\beta}\right)^{1/2}
\ll p\,,\quad \left(q_\alpha q^\alpha\right)^{1/2}\ll \rho
\label{26'}\end{equation}
Consider the evolution of entropy in the Israel--Stewart theory.
The equation (\ref{12a'}) still holds in the extended case:
\begin{equation}
Tn\dot{S}=-3H\Pi-q^\alpha{}_{;\alpha}-\dot{u}_\alpha q^\alpha-\sigma_{\alpha\beta}\pi^{\alpha\beta}
\label{27'}\end{equation}
Consider a comoving volume of fluid, initially of size
$a_0^3$, where $a$ is the scale factor defined in general by
(\ref{33}). The entropy in this comoving volume is given by
\begin{equation}
\Sigma=a^3nS
\label{28'}\end{equation}
Then, by virtue of number conservation (\ref{34}) and (\ref{27'}),
it follows that
the growth in comoving entropy over a proper time
interval $t_0\rightarrow t$ is
\begin{equation}
\Sigma(t)=\Sigma_0-\int_{t_0}^t{a^3\over T}\left(3H\Pi+
q^\alpha{}_{;\alpha}+\dot{u}_\alpha q^\alpha+\sigma_{\alpha\beta}\pi^{\alpha\beta}\right)dt
\label{29'}\end{equation}
The second law, which is built into the theory, guarantees that
$\Sigma(t)\geq\Sigma_0$. However, it is possible that the local
equilibrium specific entropy $S$ is {\em not} increasing at all
times -- but the effective, non--equilibrium specific entropy
$-u_\alpha S^\alpha/n$ is monotonically increasing \cite{jcl}.
Next we look at the temperature behaviour
in causal thermodynamics. The Gibbs integrability condition (\ref{46h})
still holds:
\begin{equation}
n{\partial T\over\partial n}+(\rho+p){\partial T\over\partial\rho}=T{\partial p\over\partial\rho}
\label{30'}\end{equation}
However, the change in the energy conservation equation (\ref{3'})
leads to a generalisation of the temperature evolution (\ref{46f}):
\begin{eqnarray}
{\dot{T}\over T} &=& -3H\left({\partial p\over\partial\rho}\right)_{\!n}
-{1\over T}
\left({\partial T\over\partial\rho}\right)_{\!n}\left[3H\Pi+q^\alpha{}_{;\alpha}+
\dot{u}_\alpha q^\alpha+\sigma_{\alpha\beta}\pi^{\alpha\beta}\right]
\label{31'}\\
&=&-3H\left({\partial p\over\partial\rho}\right)_{\!n}+n\dot{S}\left({\partial T\over
\partial\rho}\right)_{\!n} \nonumber
\end{eqnarray}
Note that if the Gibbs integrability condition, number conservation
and energy conservation are satisfied, then the evolution
equation (\ref{31'}) will be an identity. This evolution equation
shows quantitatively how the relation of temperature to
expansion is affected by dissipation.
The first term on the right of (\ref{31'}) represents the cooling due
to expansion. In the second, dissipative term, viscosity in general
contributes to heating effects, while the contribution of heat flow
depends on whether heat is being transported into or out of a
comoving volume.
If instead of $(n,\rho)$ we choose $(n,T)$ as independent variables,
then the Gibbs integrability condition (\ref{30'}) becomes
\begin{equation}
T{\partial p\over\partial T}+n{\partial\rho\over\partial n}=\rho+p
\label{32'}\end{equation}
and the temperature evolution equation (\ref{31'}) becomes
\begin{eqnarray}
{\dot{T}\over T} &=& -3H\left({\partial p/\partial T \over\partial\rho/\partial T}
\right)_{\!n}-{1\over T
(\partial \rho/\partial T)_n}\left[3H\Pi+q^\alpha{}_{;\alpha}+\dot{u}_\alpha
q^\alpha+\sigma_{\alpha\beta}\pi^{\alpha\beta}\right]
\label{33'}\\
&=&-3H\left({\partial p/\partial T \over\partial\rho/\partial T}\right)_{\!n} +
n\dot{S}{1\over(\partial\rho/\partial T)_n} \nonumber
\end{eqnarray}
Finally, we consider briefly the issue of equations of state for the
pressure and temperature in dissipative fluids. Using the energy
conservation equation (\ref{3'}),
the Gibbs equation in the form (\ref{46k}) generalises to
$$
n^2TdS = \left[{n{\cal D}
\over 3H(\rho+p)+{\cal D}}\right]
d\rho +(\rho+p){\partial n\over\partial p}\left[{\dot{p}\over\dot{\rho}}d\rho
-dp\right]
$$
where the dissipative term is
\begin{equation}
{\cal D}=3H\Pi+q^\alpha{}_{;\alpha}+\dot{u}_\alpha q^\alpha+
\sigma_{\alpha\beta}\pi^{\alpha\beta}
\label{34'}\end{equation}
It follows that in the presence of dissipation, barotropic
pressure no longer forces $dS$ to vanish:
\begin{equation}
dS ={1\over nT} \left[{{\cal D}\over 3H(\rho+p)+{\cal D}}\right]d\rho
\label{35'}\end{equation}
As in the equilibrium case, it remains true, via the Gibbs
integrability condition, that barotropic $T=T(\rho)$ together with
$p=(\gamma-1)\rho$ leads to the power--law form (\ref{46j}) for
the temperature. However, in the dissipative case, these relations
are not in general compatible with the ideal gas law $p=nT$:
\begin{eqnarray}
p=nT\,,~~p=(\gamma-1)\rho\,,~~T\propto \rho^{(\gamma-1)/\gamma}
&\Rightarrow& n\propto \rho^{1/\gamma}\nonumber\\
{}&\Rightarrow & {\dot{n}\over n}={1\over\gamma}{\dot{\rho}\over\rho}
\nonumber
\end{eqnarray}
Then number and energy conservation imply ${\cal D}=0$.
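Explicitly: number conservation (\ref{34}) gives $\dot{n}/n=-3H$, while energy conservation (\ref{3'}) with $p=(\gamma-1)\rho$ gives $\dot{\rho}=-3H\gamma\rho-{\cal D}$, so that
$$
-3H={\dot{n}\over n}={1\over\gamma}{\dot{\rho}\over\rho}
=-3H-{{\cal D}\over\gamma\rho}\quad\Rightarrow\quad{\cal D}=0
$$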
However, it is possible to impose the $\gamma$--law and the ideal gas
law simultaneously, provided the temperature is not barotropic.
The temperature evolution equation (\ref{33'}) and energy conservation
(\ref{3'}) give
\begin{equation}
{\dot{T}\over T}=\left[\left({\gamma-1\over\gamma}\right)
{\dot{\rho}\over\rho}+{{\cal D}\over\gamma\rho}\right]\left[1+
{{\cal D}\over nT}\right]
\label{36'}\end{equation}
These results have interesting implications for a dissipative fluid
which is close to a
thermalised radiation fluid, i.e. $p={1\over3}\rho$. If we
insist that $p=nT$, then the Stefan--Boltzmann law $\rho\propto T^4$
cannot hold out of equilibrium. Alternatively, if we impose the
Stefan--Boltzmann law, then the ideal gas law cannot hold unless
the fluid returns to equilibrium.
\chapter{Applications to Cosmology and Astrophysics}
The evolution of the
universe contains a sequence of important dissipative
processes, including:
\begin{itemize}
\item GUT (Grand Unified Theory) phase transition ($t\approx
10^{-34}$ sec, $T\approx 10^{27}$~K),
when gauge bosons acquire mass (spontaneous symmetry breaking).
\item
Reheating of the universe at the end of inflation (at about
$10^{-32}$ sec), when the
scalar field decays into particles.
\item
Decoupling of neutrinos from the cosmic plasma
($t\approx 1$ sec, $T\approx 10^{10}$ K), when the temperature
falls below the threshold for interactions that keep the
neutrinos in thermal contact.
The growing neutrino mean free path leads to
heat and momentum transport by neutrinos and thus damping of
perturbations. Shortly after decoupling, electrons and positrons
annihilate, heating up the photons in a non--equilibrium process.
\item
Nucleosynthesis (formation of light nuclei) ($t\approx 100$ sec).
\item
Decoupling of photons from matter during the recombination era
($t\approx 10^{12}$ sec, $T\approx 10^3$ K), when electrons
combine with protons and so no longer scatter the photons.
The growing photon mean free path leads to heat and momentum
transport and thus damping.
\end{itemize}
\noindent Some astrophysical dissipative processes are:
\begin{itemize}
\item
Gravitational collapse of local inhomogeneities to form
galactic structure, when viscosity and heating lead to dissipation.
\item
Collapse of a radiating star to a neutron star or black hole,
when neutrino emission is responsible for dissipative heat
flow and viscosity.
\item
Accretion of matter around a neutron star or black hole.
\end{itemize}
Further discussion of such processes can be found in \cite{kt},
\cite{cl} (but not from a causal thermodynamics standpoint).
The application of causal thermodynamics to cosmology and astrophysics
remains relatively undeveloped -- partly because of the complexity
of the transport equations, partly because all of the important
dissipative processes have been thoroughly analysed using the
standard theory, kinetic theory or numerical methods.
Causal bulk viscosity in cosmology
has been fairly comprehensively investigated -- see \cite{bnk} --
\cite{z2}. Shear viscosity in
anisotropic cosmologies has been considered in \cite{bnk},
\cite{rp}, while
heat flow in inhomogeneous cosmologies has been discussed in
\cite{tp}. Causal dissipation in astrophysics has been
investigated in \cite{s}, \cite{fl} -- \cite{mz}.
In all of these papers,
it is found that causal thermodynamic
effects can have a significant impact and can
lead to predictions very different from those in the standard
Eckart theory.
In this chapter I will briefly
discuss some overall features of causal thermodynamics in
a cosmological/ astrophysical setting, and then conclude with
a more detailed discussion of bulk viscosity in an FRW universe,
which is the most accessible problem.
\section{General Features of Cosmic Dissipation}
The expanding universe defines a natural time--scale -- the
expansion time $H^{-1}=a/\dot{a}$. Any particle species will
remain in thermal equilibrium with the cosmic fluid so long as
the interaction rate is high enough to allow rapid adjustment to
the falling temperature. If the mean interaction time is $t_c$, then
a necessary condition for maintaining thermal equilibrium is
\begin{equation}
t_c < H^{-1}
\label{1.}\end{equation}
Now $t_c$ is determined by
\begin{equation}
t_c={1\over n\sigma v}
\label{2.}\end{equation}
where $\sigma$ is
the interaction cross--section,
$n$ is the number density of the target particles with which
the given species is interacting, and $v$ is the mean relative
speed of interacting particles.
As an example, consider neutrinos in the early universe. At high
enough temperatures, the neutrinos are kept in thermal equilibrium
with photons and electrons via interactions with electrons that
are governed by the weak interaction. The cross--section is
\begin{equation}
\sigma_w=g_0 T^2
\label{3.}\end{equation}
where $g_0$ is a constant. The number density of electrons is
$n\propto T^3$, by (\ref{46}), since the electrons are effectively
massless at these very high temperatures. Since $v=1$, (\ref{2.})
gives $t_c \propto T^{-5}$. By (\ref{46e}), we can see that
$H\propto T^2$. Thus
\begin{equation}
t_cH=\left({T_*\over T}\right)^3
\label{4.}\end{equation}
and using (\ref{1.}) and the numerical values of the various constants,
it follows that the neutrinos will decouple for
$$
T<T_*\approx 10^{10}\,\mbox{K}
$$
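The scaling (\ref{4.}) can be checked with a short numerical sketch
(Python; the value $T_*=10^{10}$~K is taken from the text, the sample
temperatures are illustrative):

```python
# Toy check of the equilibrium condition t_c H < 1, using
# t_c H = (T_* / T)^3 from Eq. (4.), with T_* = 1e10 K.
T_STAR = 1e10  # neutrino decoupling temperature in K

def tc_times_H(T):
    """Interaction time divided by expansion time, t_c * H."""
    return (T_STAR / T) ** 3

for T in (1e12, 1e11, 1e10, 1e9):
    coupled = tc_times_H(T) < 1.0
    print(f"T = {T:.0e} K: t_c*H = {tc_times_H(T):.1e}, equilibrium: {coupled}")
```

For $T\gg T_*$ the product $t_cH$ is tiny and the neutrinos track the
photon temperature; once $T$ falls below $T_*$ the condition (\ref{1.})
fails and they decouple.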
Other
cosmic decoupling processes may be analysed by a similar approach.
The differences arise from the particular forms of
$\sigma(T)$, $n(T)$ and $H(T)$. For example,
in the case of photons interacting with electrons via Thomson
scattering, the Thomson cross--section is constant, while the
number density of free electrons is given by a complicated
equation (the Saha equation),
which takes account of the process of recombination. The
expansion rate $H$ is also fairly complicated, since the universe
is no longer radiation--dominated. One finds that the decoupling
temperature is about $10^3$ K.
In the case of a collapsing star, similar arguments are applied --
except that the characteristic time in this case is determined
by the rate of collapse, which is governed by stellar dynamics.
For example, for neutrinos in the core of a neutron star,
interactions with electrons and nucleons determine an interaction
time that must be compared with the collapse time to estimate the
decoupling conditions for
the neutrinos -- after which they transport heat and
momentum away from the core.
The entropy generated in a dissipative process that begins at
$t_0$ and ends at $t_0+\Delta t$ is given by (\ref{29'}):
\begin{equation}
\Delta\Sigma=-\int_{t_0}^{t_0+\Delta t}{a^3\over T}\left(3H\Pi+
q^\alpha{}_{;\alpha}+\dot{u}_\alpha q^\alpha+\sigma_{\alpha\beta}\pi^{\alpha\beta}\right)dt
\label{5.}\end{equation}
For example, $\Delta t$ could be
the time taken for a decoupling process in the universe or
a star.
The observed universe
has a high entropy, as indicated by the high number of photons per
baryon, about $10^8$. This gives a total entropy in the observable
universe of about $10^{88}$. Inflationary cosmology predicts that
nearly all of this entropy is generated by the reheating process at
the end of inflation -- i.e. that all other dissipative processes
in the evolution of the universe make a negligible contribution to
entropy production by comparison. In this model, the formula
(\ref{5.}) would have to be modified to include
the dissipation not just from the fluid effects
that we have been discussing, but also from particle production.
Particle production, at a rate $\nu$, leads to non--conservation
of particle number, so that (\ref{34}) is replaced by
\begin{equation}
n^\alpha{}_{;\alpha}=\dot{n}+3Hn=\nu n
\label{8.}\end{equation}
Then it is found that $\nu$ contributes to entropy production.
The contribution from particle production may be modelled
as an effective bulk viscosity.
Many dissipative processes are well described
by a radiative fluid -- i.e. a fluid consisting of interacting
massless and massive particles. The radiative fluid is dissipative,
and kinetic theory or fluctuation theory arguments may be used to
derive the dissipative coefficients in terms of the relaxation
times $\tau_A$ (which are usually assumed equal to the appropriate
interaction time $t_c$). The results are collected in the table below.
The table also includes the case of a relativistic Maxwell--Boltzmann
gas -- i.e. a dilute monatomic gas with high collision rate -- in
both the ultra--relativistic and non--relativistic limits. The
local equilibrium energy density and pressure are given by the
equations of state (\ref{41}), (\ref{42}) (but not subject to the
global equilibrium conditions (\ref{43}), (\ref{44})).
\[ \]
\[ \]
$$
\begin{array}{|l|c|c|c|} \hline
{}&{}&{}&{} \\
{}& \zeta & \lambda & \eta \\
{}&{}&{}&{} \\ \hline\hline
{}&{}&{}&{} \\
\mbox{radiative fluid}&{}&{}&{}\\
\mbox{(massless/ massive)}& 4r_0T^4\Gamma^2\tau_0 & {\textstyle{4\over3}}r_0
T^3c^2\tau_1 & {\textstyle{4\over15}}r_0T^4\tau_2 \\
{}&{}&{}&{} \\ \hline
{}&{}&{}&{} \\
\mbox{Maxwell--Boltzmann gas:}&{}&{}&{} \\
\mbox{ultra--relativistic~} (\beta\ll 1) & {\textstyle{1\over216}}\beta^4 p\tau_0
& {\textstyle{4\over5}}T^{-1}p\tau_1 & {\textstyle{2\over3}}p\tau_2 \\
{}&{}&{}&{}
\\ \hline
{}&{}&{}&{} \\
\mbox{Maxwell--Boltzmann gas:}&{}&{}&{} \\
\mbox{non--relativistic~} (\beta\gg 1) & {\textstyle{5\over6}}\beta^{-2} p\tau_0
& {\textstyle{5\over2}}\beta^{-1}T^{-1}p\tau_1 & p\tau_2 \\
{}&{}&{}&{}
\\ \hline
\end{array}
$$
\[ \]
\[ \]
In the table, $\beta$ is given in standard units by
$$
\beta={mc^2\over kT}
$$
where $m$ is the mass of the matter particles (usually electrons);
$r_0$ is the radiation constant for photons, and
${7\over8}$ times the radiation constant for massless
neutrinos; $\Gamma$ is effectively
the deviation of $p/\rho$
from its pure--radiation value:
\begin{equation}
\Gamma={\textstyle{1\over3}}-\left({\partial p\over\partial\rho}\right)_n=
{\textstyle{1\over3}}-{(\partial p/\partial T)_n \over (\partial\rho/\partial T)_n}
\label{6.}\end{equation}
where $p,\rho$ refer to the pressure and energy density of the
radiation/ matter mixture as a whole. For example, when the matter
is non--relativistic, (\ref{46}) and (\ref{46c}) show that in
standard units
\begin{equation}
p\approx nkT+{\textstyle{1\over3}}r_0T^4\,,\quad\quad\rho\approx
mc^2n+{\textstyle{3\over2}}nkT+r_0T^4
\label{7.}\end{equation}
where $n$ is the number density of matter.
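As a quick check of the pure--radiation limit, the deviation (\ref{6.})
can be evaluated from the equations of state (\ref{7.}). The Python
sketch below uses units with $k=c=1$; the values of $n$, $r_0$ and $T$
are arbitrary illustrative choices (the mass $m$ drops out of the
$T$--derivatives):

```python
def Gamma(n, r0, T):
    """Deviation 1/3 - (dp/dT)_n / (drho/dT)_n of Eq. (6.), using the
    non-relativistic mixture p ~ n*T + r0*T**4/3,
    rho ~ m*n + 1.5*n*T + r0*T**4 of Eq. (7.)."""
    dp_dT = n + (4.0 / 3.0) * r0 * T**3
    drho_dT = 1.5 * n + 4.0 * r0 * T**3
    return 1.0 / 3.0 - dp_dT / drho_dT

print(Gamma(0.0, 1.0, 1.0))  # pure radiation (n = 0): exactly zero
print(Gamma(1.0, 1.0, 1.0))  # mixture: nonzero, so zeta ~ Gamma^2 > 0
```

For $n=0$ the ratio of derivatives is exactly ${1\over3}$, so $\Gamma$
and hence the bulk viscosity of the radiative fluid vanish, as stated
below.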
Note that for both the radiative fluid and the
Maxwell--Boltzmann gas, the bulk viscosity
tends to zero in
the ultra--relativistic and non--relativistic
limits. Bulk viscous effects are greatest in the mildly
relativistic intermediate regime, $\beta \approx 1$. This is discussed
further in the next section.
The radiative fluid and Maxwell--Boltzmann gas are perhaps the
best motivated dissipative fluid models. However, their equations
of state and thermodynamic coefficients are very complicated,
and for the purposes of analytical rather than numerical
investigations, simplified equations are often assumed.
These are usually barotropic:
\begin{equation}
p=p(\rho)\,,\quad T=T(\rho)\,,\quad \zeta=\zeta(\rho)\,,\quad
\lambda=\lambda(\rho)\,,\quad\eta=\eta(\rho)\,,\quad
\tau_A=\tau_A(\rho)
\label{9.}\end{equation}
However
these assumptions are subject to consistency
conditions (as shown earlier in the case of $p$ and $T$),
and may correspond to unphysical behaviour. Whenever
such assumptions are made in a model, the consequences should be
carefully checked. An example is given in the next section.
\section{Causal Bulk Viscosity in Cosmology}
I will use the simplest case of scalar dissipation due to bulk
viscosity in order to illustrate some of the issues that arise
in modelling cosmological dissipation via Israel--Stewart theory.
Furthermore, this case covers the standard cosmological models.
If one assumes that the universe is exactly
isotropic and homogeneous -- i.e.
an FRW universe (\ref{50}) -- then the symmetries show that only
scalar dissipation is possible -- i.e. $q_\alpha=0=\pi_{\alpha\beta}$.
In this event, the no--coupling assumption (\ref{17'}) is
automatically fulfilled.
Bulk viscosity arises typically in mixtures -- either of different
species, as in a radiative fluid,
or of the same species but with different energies, as in a
Maxwell--Boltzmann gas. Physically, we can think of bulk viscosity
as the internal `friction' that sets in due to the different cooling
rates in the expanding mixture. The dissipation due to bulk
viscosity converts kinetic energy of the particles into heat,
and thus we expect it to reduce the effective pressure in an
expanding fluid -- i.e. we expect $\Pi\leq 0$ for $H\geq 0$. This
is consistent with $\dot{S}\geq0$ by (\ref{12a'}):
\begin{equation}
Tn\dot{S}=-3H\Pi
\label{18.}\end{equation}
Any dissipation in an exact FRW universe is scalar, and therefore
may be modelled as a bulk viscosity within a thermodynamical
approach. As I have argued in the previous chapter, the
Israel--Stewart thermodynamics is causal and stable under a wide
range of conditions, unlike the standard Eckart theory. Therefore,
in order to obtain the best thermo--hydrodynamic
model with the available physical
theories, one should use the causal Israel--Stewart theory of
bulk viscosity.
Writing out the full Israel--Stewart transport equation
(\ref{19'}) (using $\tau\equiv\tau_0$), we get
\begin{equation}
\tau\dot{\Pi}+\Pi=-3\zeta H-{\textstyle{1\over2}}\tau\Pi\left[3H+
{\dot{\tau}\over\tau}-{\dot{\zeta}\over\zeta}-{\dot{T}\over T}\right]
\label{12.}\end{equation}
A natural question is -- what are the conditions under which
the truncated form (\ref{22'}) is a reasonable approximation
of the full Israel--Stewart
transport equation? It follows from (\ref{12.}) that if
\begin{equation}
{T\over a^3H}\left|\Pi\left({a^3\tau\over\zeta T}\right)^{\displaystyle{\cdot}}\right|
\ll 1
\label{13.}\end{equation}
holds, then the additional terms in (\ref{12.}) are negligible
in comparison with $3\zeta H$. The condition (\ref{13.}) is clearly
very sensitive to the
particular forms of the functions $p(n,\rho)$, $\zeta(n,\rho)$
and $\tau(n,\rho)$. The temperature is determined on the basis
of these particular forms by the Gibbs integrability
condition (\ref{30'}) and the evolution equation (\ref{31'})\footnote{or
equivalently by (\ref{32'}) and (\ref{33'})}:
\begin{equation}
{\dot{T}\over T}=-3H\left[\left({\partial p\over\partial\rho}\right)_n+
{\Pi\over T}\left({\partial T\over\partial\rho}\right)_n\right]
\label{19.}\end{equation}
The second term on the right shows that bulk stress tends to
counteract the cooling due to expansion.
For simplicity, suppose that the pressure and temperature
are barotropic, with $p$ linear:
\begin{equation}
p=(\gamma-1)\rho
\label{16.}\end{equation}
This pressure equation is not unreasonable if the
local equilibrium state is radiation or cold matter.
Since the temperature is also barotropic,
it then
follows from the Gibbs integrability condition (\ref{30'}) that
as in the perfect fluid case, $T$ must have the power--law
form (\ref{46j}):
\begin{equation}
T\propto \rho^{(\gamma-1)/\gamma}
\label{14.}\end{equation}
Thus there is no freedom to choose the form of $T(\rho)$ -- it is
a power--law, with index
fixed by $\gamma$. With these forms of $p(\rho)$ and
$T(\rho)$, we can see that the temperature evolution equation
(\ref{19.}) is identically satisfied by virtue of the
energy conservation equation (\ref{3'}):
\begin{equation}
\dot{\rho}+3H(\rho+p+\Pi)=0
\label{15.}\end{equation}
A simple relation between $\tau$ and $\zeta$ is found
as follows. It is shown in the appendix to this chapter that
\begin{equation}
{\zeta\over(\rho+p)\tau}=c_b^2
\label{11.}\end{equation}
where $c_b$ is the speed of bulk viscous perturbations -- i.e. the
non--adiabatic contribution to the speed of sound $v$ in a dissipative
fluid without heat flux or shear viscosity. The dissipative speed
of sound is given by
\begin{equation}
v^2=c_s^2+c_b^2 \leq1
\label{10.}\end{equation}
where $c_s$ is the adiabatic contribution (\ref{39b}), and the
limit ensures causality. When (\ref{16.}) holds, $c_s^2=\gamma-1$, so
that
$$
c_b^2\leq 2-\gamma
$$
We will assume that $c_b$ is constant, like $c_s$.
Putting together the thermodynamic relationships (\ref{16.}),
(\ref{14.}) and (\ref{11.}), the full transport equation (\ref{12.})
becomes
\begin{equation}
\tau_*\dot{\Pi}+\Pi=-3\zeta_* H\left[1+{1\over\gamma c_b^2}
\left({\Pi\over\rho}\right)^2\right]
\label{17.}\end{equation}
where the effective relaxation time and bulk viscosity are
\begin{equation}
\tau_*={\tau\over 1+3\gamma\tau H}\,,\quad\zeta_*={\zeta\over
1+3\gamma\tau H}=c_b^2\gamma\rho\tau_*
\label{20.}\end{equation}
Now the near--equilibrium condition (\ref{26'}) with (\ref{16.})
implies
$$
|\Pi|\ll\rho
$$
and shows that the second
term in square brackets in (\ref{17.}) is negligible. Thus the full
equation leads to a truncated equation with {\em reduced relaxation
time and reduced bulk viscosity:}
\begin{equation}
\tau_*\dot{\Pi}+\Pi=-3\zeta_* H
\label{23.}\end{equation}
The amount of reduction depends
on the size of $\tau$ relative to $H$. If $\tau$ is of the order
of the mean interaction time, then the hydrodynamical description
requires $\tau H<1$. If $\tau H\ll 1$, then $\tau_*\approx\tau$
and $\zeta_*\approx\zeta$. But if $\tau H$ is close to 1, the
reduction could be significant.
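The size of this reduction, $\tau_*/\tau=\zeta_*/\zeta=
(1+3\gamma\tau H)^{-1}$ by (\ref{20.}), is easy to tabulate (Python
sketch; $\gamma={4\over3}$ is chosen for radiation and the values of
$\tau H$ are illustrative):

```python
def reduction(tauH, gamma=4.0 / 3.0):
    """Reduction factor tau_*/tau = zeta_*/zeta from Eq. (20.)."""
    return 1.0 / (1.0 + 3.0 * gamma * tauH)

for tauH in (0.01, 0.1, 0.5, 0.9):
    print(f"tau*H = {tauH}: reduction factor = {reduction(tauH):.3f}")
```

For $\tau H=0.01$ the reduction is negligible, while for $\tau H$
approaching 1 the effective relaxation time and bulk viscosity are cut
to a fraction of their bare values.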
Although this reduction is based on the simplified thermodynamical
relations assumed above, it indicates that the validity of
the truncated Israel--Stewart equation can impose significant
conditions.
More realistic thermodynamical relations will require numerical
calculations. In the case of
a Maxwell--Boltzmann gas, such calculations show that the
behaviour of the truncated and full theories can be very
different. The conclusion seems to be that the full theory should
be used, unless one is able to derive explicitly -- and satisfy -- the
conditions under which the truncated version is adequate.
Assuming that the FRW universe is flat, the Friedmann equation
(\ref{49}) is
\begin{equation}
\rho=3H^2
\label{21.}\end{equation}
By (\ref{15.}) and (\ref{21.}) we get
\begin{equation}
\Pi=-2\dot{H}-3\gamma H^2
\label{22.}\end{equation}
and together with (\ref{23.}) and (\ref{20.}), this leads to the
evolution equation for $H$:
\begin{equation}
\ddot{H}+(6\gamma+ N)H\dot{H}+{\textstyle{3\over2}}\gamma\left[
3(\gamma-c_b^2)+N\right]H^3=0
\label{24.}\end{equation}
where
\begin{equation}
N=(\tau H)^{-1}
\label{25.}\end{equation}
is of the order of the number of interactions in an expansion time.
Intuitively, when $N\gg 1$, the fluid is almost perfect, while
when $N$ is close to 1, the dissipative effects are significant.
This is confirmed by (\ref{24.}). For $N\gg 1$, the equation reduces
to
$$
\dot{H}+{\textstyle{3\over2}}\gamma H^2\approx 0
$$
with the well--known perfect fluid solution:
$$
H\approx{2\over 3\gamma (t-t_0)}
$$
On the other hand, for $N$ close to 1, the second derivative
in (\ref{24.}) cannot be neglected, and the solutions will show a range
of behaviour very different from the perfect fluid -- and the
standard Eckart -- solutions. (Note that the Eckart limit $\tau
\rightarrow 0$ is $c_b\rightarrow\infty$ by (\ref{11.}); the causality
condition (\ref{10.}) does not hold.)
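The limiting behaviour for $N\gg1$ can be confirmed by direct
integration of (\ref{24.}) with constant $N$. The Python sketch below
uses a hand--rolled RK4 stepper; the values $\gamma={4\over3}$,
$c_b^2={1\over3}$, $N=100$ and the time interval are illustrative
choices, not taken from the text:

```python
GAMMA, CB2, N = 4.0 / 3.0, 1.0 / 3.0, 100.0  # illustrative constants

def rhs(t, y):
    """Eq. (24.) written as a first-order system y = (H, Hdot)."""
    H, Hdot = y
    Hddot = -(6.0 * GAMMA + N) * H * Hdot \
            - 1.5 * GAMMA * (3.0 * (GAMMA - CB2) + N) * H**3
    return (Hdot, Hddot)

def rk4(f, t, y, t_end, steps):
    """Classical fourth-order Runge-Kutta integration."""
    h = (t_end - t) / steps
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, tuple(y[i] + h / 2 * k1[i] for i in range(2)))
        k3 = f(t + h / 2, tuple(y[i] + h / 2 * k2[i] for i in range(2)))
        k4 = f(t + h, tuple(y[i] + h * k3[i] for i in range(2)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(2))
        t += h
    return y

t0, t1 = 1.0, 5.0
H0 = 2.0 / (3.0 * GAMMA * t0)          # start on the perfect-fluid track
H_num, _ = rk4(rhs, t0, (H0, -1.5 * GAMMA * H0**2), t1, 4000)
H_pf = 2.0 / (3.0 * GAMMA * t1)        # perfect-fluid value at t1
print(H_num, H_pf)
```

With $N=100$ the numerical solution stays within about a percent of the
perfect--fluid law $H=2/3\gamma(t-t_0)$, as expected from the dominant
balance of the $N$--terms in (\ref{24.}).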
Of course, a complete model requires the specification of $N$.
Consider the ultra--relativistic fluid of the early universe,
with a particle species whose growing mean free path gives rise to
dissipation, such as the neutrino. Suppose that $\tau\approx t_c$,
where $t_c$ is the mean interaction time, and that the interaction
cross--section is proportional to $T^2$, like the neutrino's.
Then by (\ref{4.}) we get
\begin{equation}
N=\left({T\over T_*}\right)^3=\left({H\over H_*}\right)^{3/2}
\label{26.}\end{equation}
For $T\gg T_*$, we have $N\gg 1$, and dissipation is negligible. But
for $T$ close to $T_*$, dissipation effects become significant. The
evolution equation (\ref{24.}) becomes
\begin{equation}
\ddot{H}+\left[8+ \left({H\over H_*}\right)^{3/2}\right]
H\dot{H}+2\left[
4-3c_b^2+\left({H\over H_*}\right)^{3/2}\right]H^3=0
\label{27.}\end{equation}
One could try to solve this equation perturbatively, by the ansatz
$$
H={1\over 2(t-t_0)}+\varepsilon H_1+O(\varepsilon^2)
$$
\[ \]
I will briefly discuss the question of bulk viscous inflation.
Suppose dissipation in the cosmic fluid produced sufficiently
large bulk viscous stress to drive the effective pressure
negative and thus initiate inflationary expansion.
By (\ref{52}), using the effective pressure, the condition for
inflationary expansion is
\begin{equation}
-\Pi>p+{\textstyle{1\over3}}\rho
\label{28.}\end{equation}
For a fluid, this violates the near--equilibrium
condition
$$
|\Pi|\ll p
$$
Thus viscous fluid inflation, if it were physically possible, would
involve non--linear thermodynamics, far from equilibrium. The
Israel--Stewart theory, as well as other versions of extended
thermodynamics and
also Eckart's standard thermodynamics,
are all based on near--equilibrium
conditions, and cannot be applied to inflationary expansion --
unless one makes the drastic assumption that the linear theory
applies in the strongly non--linear regime.
Furthermore, there are serious physical problems with
hydrodynamic inflation (without particle production\footnote{See
\cite{z}, \cite{gl2} for particle production models.}).
The point is that under conditions of
super--rapid expansion -- i.e. very small expansion
time -- the hydrodynamic regime requires even smaller
interaction time. It is hard to see how the fluid interaction
rate could increase to stay above the expansion rate under
conditions where fluid particles are expanding apart from each
other extremely rapidly.
For a satisfactory model of bulk viscous inflation, one needs:
(a)~a non--linear generalisation of the Israel--Stewart transport
equation (\ref{12.});\footnote{One possible generalisation is
developed in \cite{mm}.} (b)~a consistent model of fluid behaviour
under super--rapid expansion and strongly non--linear conditions.
On the other hand,
the reheating period at the end of inflation
can be modelled by near--equilibrium theory, and
the expansion rate is no longer inflationary. However, a
thermodynamic model needs to incorporate particle
production.\footnote{See \cite{zpmc}.}
\[ \]
Finally, for those who like to analyse and solve differential
equations\footnote{See also \cite{cj2} -- \cite{cjm}.}
more than they like physical analysis, I
will give the evolution equation of $H$ with mathematically
more general (but physically no more satisfactory) thermodynamic
equations of state. Suppose
$p$ and $T$ are given as above, by (\ref{16.}) and (\ref{14.}),
but instead
of the relation (\ref{11.}), with constant $c_b$,
linking $\tau$ and $\zeta$, we assume
the barotropic forms
\begin{equation}
\zeta\propto\rho^r\,,\quad\quad\tau\propto\rho^q
\label{29.}\end{equation}
where $r$ and $q$ are constants.
Then with (\ref{29.}), the (non--truncated)
evolution equation (\ref{12.}) becomes
\begin{eqnarray}
&&\ddot{H}+3\left[1+{\textstyle{1\over2}}(1+q-r)\gamma \right]H\dot{H}
+\alpha_1 H^{-2q}\dot{H}+\left(q-r-1+\gamma^{-1}\right)H^{-1}
\dot{H}^2 \nonumber\\
{}&&+{\textstyle{9\over4}}\gamma H^3+{\textstyle{3\over2}}\gamma\alpha_1
H^{2(1-q)}+{\textstyle{3\over2}}\alpha_2 H^{2r-2q+1} =0
\label{30.}\end{eqnarray}
where $\alpha_1$ and $\alpha_2$ are constants. One can find
special exact solutions, including exponential and power--law
inflation, and perform a qualitative dynamical analysis of (\ref{30.}),
or of similar equations arising from different forms for
the equations of state and thermodynamic coefficients.
\newpage
\section{Appendix: Bulk Viscous Perturbations}
A comprehensive analysis of the causality and stability properties
of the full Israel--Stewart theory has been performed by
Hiscock and Lindblom \cite{hl}. They consider
general perturbations -- i.e. $\Pi, q_\alpha, \pi_{\alpha\beta}$ all nonzero --
about a (global) equilibrium in flat spacetime, but the results
are valid in cosmology for short wavelength perturbations. In this
appendix, I will extract from their complicated
general results the special case
of scalar perturbations (only $\Pi\neq0$), when remarkably simple
expressions can be obtained.
The characteristic velocities for general dissipative perturbations
are given by equations (110) -- (128) in \cite{hl}. The purely
bulk viscous case is
\begin{equation}
\alpha_0=0=\alpha_1\,;\quad {1\over\beta_1}\,,~{1\over\beta_2}
~~\rightarrow~~0\,;\quad\beta_0\equiv {\tau\over\zeta}
\label{a1}\end{equation}
(See (\ref{15'}) and (\ref{19'}) -- (\ref{21a'}).)
Equation (127) of \cite{hl} gives the speed of the
propagating transverse modes:
$$
v_T^2={(\rho+p)\alpha_1^2+2\alpha_1+\beta_1 \over
2\beta_2\left[\beta_1(\rho+p)-1\right]}~~\rightarrow~~0
$$
on using (\ref{a1}). This is as expected for scalar sound--wave
perturbations. Equation (128) governing the speed $v=v_L$ of
propagating longitudinal modes becomes, on dividing by
$\beta_0\beta_2$ and setting $\alpha_0=0=\alpha_1$:
\begin{eqnarray}
\left[\beta_1(\rho+p)-1\right]v^4+\left[{2n\over T}\left({\partial T\over
\partial n}\right)_{\!S}-{(\rho+p)\over nT^2}\left({\partial T\over\partial S}\right)
_{\!n}-\beta_1\left\{(\rho+p)\left({\partial p\over\partial\rho}\right)_{\!S}
+{1\over\beta_0}\right\}\right]v^2 && \nonumber\\
{}+{1\over nT^2}\left({\partial T\over\partial S}\right)_{\!n}\left[(\rho+p)
\left({\partial p\over\partial \rho}\right)_{\!S}+{1\over\beta_0}\right]
-\left[{n\over T}\left({\partial T\over\partial n}\right)_{\!S}\right]^2=0 &&
\nonumber
\end{eqnarray}
Dividing by $\beta_1$ and taking the limit $\beta_1\rightarrow\infty$,
this gives
\begin{equation}
v^2=\left({\partial p\over\partial\rho}\right)_{\!S}+{1\over(\rho+p)\beta_0}
\label{a2}\end{equation}
The first term on the right is the adiabatic contribution $c_s^2$
to $v^2$, and the second term is the dissipative contribution $c_b^2$,
as in (\ref{11.}).
It is also shown in \cite{hl} (pp 478--480)
that causality and stability require
$$
\Omega_3(\lambda)\equiv(\rho+p)\left\{1-\lambda^2\left[\left({\partial p
\over\partial\rho}\right)_{\!S}+{1\over(\rho+p)\beta_0}\right]\right\}\geq0
$$
for all $\lambda$ such that $0\leq\lambda\leq1$. This condition
is shown to hold for all $\lambda$ if it holds for $\lambda=1$,
leading to the requirement
\begin{equation}
c_b^2\equiv{\zeta\over(\rho+p)\tau}\leq1-c_s^2
\label{a3}\end{equation}
i.e. $v^2\leq1$, as expected. This establishes (\ref{10.}).
These results refine and correct
the widely--quoted statement in \cite{bnk}
that $\zeta/\rho\tau=1$ is required by causality.
\vfill
\noindent
{\bf Acknowledgements}
\[ \]
Thanks to Sunil Maharaj for organising the Workshop so well and for
his wonderful hospitality. The participants at the Workshop helped
improve these notes by their questions and comments.
I was supported by a Hanno Rund Research Fellowship.
I have had many useful and inspiring discussions with
Diego Pavon, Winfried Zimdahl, David Jou, Josep Triginer, David
Matravers and others.
\section{Introduction}
In a recent paper \cite{AdAr95}, henceforth called ``I'', we
formally have separated for an arbitrary relativistic transition
operator the center-of-mass (c.m.) motion from the intrinsic one.
This has been achieved by exploiting the general properties of the
Poincar\'{e} generators in conjunction with a $1/m$-expansion.
As a result, the frame dependence of an arbitrary transition
operator has been derived explicitly up to the lowest-order
relativistic contributions without reference to any specific dynamic
model, leaving undetermined only the genuine intrinsic operators.
The frame dependent terms have a clear physical meaning describing
effects of a Lorentz transformation for a scalar, vector and a general
Lorentz tensor of higher rank as well as modifications due to
Lorentz contraction and Wigner rotation.
On the other hand, for the determination of the remaining intrinsic
operators one needs a specific dynamic model.
There exist several techniques for the derivation of the
electromagnetic (e.m.) currents
using a $1/m$-expansion \cite{Fr80}. In the leading relativistic
order, they all lead to unitarily equivalent descriptions, i.e.,
the various Hamiltonians and e.m.\ currents are connected by unitary
transformations. The e.m.\ charge and current densities, which we
will denote below by
$\mbox{$J_0 (\vec{k})$}_{FW}$\ and $\mbox{$\vec{J}(\vec{k})$}_{FW}$\ (from Foldy-Wouthuysen),
are obtained in a general
reference frame in terms of individual nucleon coordinates.
In this paper we will start from the e.m.\ currents obtained in the
framework of the extended S-matrix method \cite{ATA}. They
are listed in Appendix A for completeness. The derivation starts
from the relativistic Feynman amplitudes from which the iterated Born
contribution of the one-nucleon current is subtracted in order
to avoid double-counting.
The $1/m$-expansion is then employed at the last stage
making the technique relatively transparent and easy to use.
In order to write
down the transition amplitudes in terms of matrix elements between
states with separated c.m.\ motion, one has to include the effect of
the Lorentz boost on the rest frame wave functions, which depend on
the individual particle variables \cite{Fr75}. This is done with the help
of a unitary transformation which introduces additional so-called
boost currents to be added to the FW ones.
Then, the c.m.\ motion can be
separated also for the nuclear current operators \cite{AdAr95,FrGP},
and the transition amplitudes are expressed in terms of
matrix elements of intrinsic operators between intrinsic wave
functions and simple kinematical factors expressing the c.m.\
motion effects. The intrinsic currents
have a simpler structure than the FW ones and their explicit construction
for the one-boson-exchange (OBE) model is the main subject of this paper.
In the next section we first collect the necessary general expressions
as obtained in I. In particular,
we give the relations of the intrinsic currents to the FW ones.
The boost contributions are written in a convenient form in
momentum representation. For simplicity, we give all explicit
expressions for the currents of a two-nucleon system, but the extension
to a larger number of nucleons is straightforward. Then, we present
in Sect.\ 3 the intrinsic currents for the one-nucleon and
the interaction-dependent meson exchange two-nucleon currents (MEC),
corresponding to the exchange of scalar, vector and pseudoscalar
mesons, both for isospin 0 and 1. Finally, we summarize our results and
give an outlook
in Sect.\ 4.
\section{General considerations}
We will start from the general expressions for a Lorentz
vector of ``type I'' having the leading order in $1/m$ in the zero
component as derived in I and \cite{FrGP}.
Separating the c.m.\ motion of the initial and final states, we could write
the full operators in terms of pure intrinsic operators where the c.m.\
motion effects are described by a known functional dependence on
$\vc{K} = \vci{P}{f} + \vci{P}{i}$. Here, $\vec P_{i/f}$ denote the total
momentum of the initial and final hadronic system, respectively.
The intrinsic operators, introduced in I, depend only on the momentum
transfer $\vc{k} = \vci{P}{f} - \vci{P}{i}$ and are denoted
by $\rho(\vec{k}\, ) $\ and $\vec{\jmath}\, (\vec{k}\, ) $\ with
their nonrelativistic (\rhp{0}, \jp{1}) and leading
order relativistic parts (\rhp{2}, \jp{3}). Note that the upper index in
brackets refers to the order in $1/m$.
According to Eqs.\ (91)-(92) of I,
the full operators, which have to be evaluated between intrinsic wave
functions only, are given as
\begin{eqnarray}
\Jzkp{0} &=& \rhp{0} \, , \label{frh0I} \\
\Jkp{1} &=& \jp{1} +
\frac{\vc{K}}{2M} \, \rhp{0} \, , \label{fj1I}\\
\Jzkp{2} &=& \rhp{2}
+ \Big( \hat{L} + \hat{W} +
\frac{\vcs{K}}{8M^2} \Big) \rhp{0}
+ \frac{1}{2M} \vc{K} \cdot \jp{1} \, , \label{frh2I} \\
\Jkp{3} & = & \jp{3} + (\hat{L} + \hat{W}) \Jkp{1}
+ \frac{\vc{K}}{8M^2} \vc{K} \cdot \jp{1} \nonumber\\
&&
+ \frac{\vc{K}}{2M} \Big[ \rhp{2} -
\frac{1}{2M} \Big( \epsilon_{f}^{(1)}+\epsilon_{i}^{(1)}
+\frac{\vcs{k}}{4M}
\Big) \rhp{0} \Big] \, ,
\label{fj3I}
\end{eqnarray}
where the operators $\hat{L}$\ and $\hat{W}$\ are defined by
\begin{eqnarray}
\hat{L} f(\vc{k}\, ) &=&
- \frac{2\om{fi}{1}+\om{R}{1}}{4M} \ps{\vc{K}}{\vnab{k}} f(\vc{k}\, )
\, , \label{Lhat} \\
\hat{W} f(\vc{k}\, ) &=& - \frac{i}{8M^2}
\Big\{ \pth{\vc{S}}{\vc{k}}{\vc{K}} \, , f(\vc{k}\, ) \Big\} \, .
\label{What}
\end{eqnarray}
Here $M$ denotes the total mass of the system and
$f$\ stands for $\rho $\ or $\vc{\jmath}$.
The gradient \vnab{k}\ {\em does not} act on the nucleon form
factors \cite{AdAr95,FrGP}. \vc{S}\ is the total spin operator
of the composite system. The nonrelativistic intrinsic excitation
energy is denoted by
\begin{equation}
\om{fi}{1} = \epsilon_{f}^{(1)} - \epsilon_{i}^{(1)} \label{omfi}\,,
\end{equation}
with $\epsilon_{i/f}^{(1)}$ as the nonrelativistic energies for the initial
and final states, respectively.
Finally, \om{R}{1}\ is the nonrelativistic recoil energy
\begin{equation}
\om{R}{1} = \frac{{\vci{P}{f}}^2}{2M} - \frac{{\vci{P}{i}}^2}{2M} =
\frac{\ps{\vc{k}}{\vc{K}}}{2M}
\, . \label{omr}
\end{equation}
The $\hat{L}$-term describes the Lorentz contraction of
the system, while $\hat{W}$ reflects the Wigner rotation of the
total spin associated with the transformation from the Breit
frame ($\vci{P}{f}=-\vci{P}{i}=\vc{k}/2$)
to a general one \cite{FrGP,Fr73}. Note, that
in $k$\/-congruent frames, i.e., those for which $\vci{P}{i}$
and thus $\vc{K}$ are parallel to \vc{k}, the $\hat{W}$-term vanishes.
The $\hat{L}$-term can be absorbed into the nonrelativistic operator
by replacing $\vec k$ by an effective momentum transfer
$\vci{k}{eff}$, which points in the direction of $\vec k$ and satisfies
$\vc{k}^{\, 2}_{eff}=\vec k^{\,2}-(k_0^{(1)})^2+
(\omega_{fi}^{(1)})^2$ \cite{AdAr95}.
The intrinsic operators, introduced in I, are obtained from the
expressions (\ref{frh0I})--(\ref{fj3I}) if taken in the
Breit frame, i.e.,
\begin{equation}
j_{\lambda} (\vc{k}\, ) = J_{\lambda} (\vc{k},\vc{0}) \, .
\end{equation}
Except for the $\hat{L}$ and $\hat{W}$ terms, all other contributions
in (\ref{frh0I})-(\ref{fj3I}) can be obtained by the Lorentz transformation
of the charge and current operators from the Breit
frame to a general one. The parameters \mbox{$\vec{\beta}$}\ and $\gamma $\
of such a transformation are given by
\begin{equation}
\mbox{$\vec{\beta}$} = \frac{\vc{K}}{E_f + E_i} \ \ \mbox{and} \ \
\gamma = \frac{1}{\sqrt{1 - \beta^2}} \, .
\label{beta}
\end{equation}
For an arbitrary $k$\/-congruent frame,
one can generalize the expressions in (\ref{frh0I})-(\ref{fj3I}) to
\begin{eqnarray}
\mbox{$ J_0(\vec{k},\vec{K})$} & = & \gamma \Big(
\rho (\vc{k}_{eff}) + \mbox{$\vec{\beta}$} \cdot \vec{\jmath}\, (\vc{k}_{eff}) \Big) \, ,
\label{lorch}\\
\mbox{$\vec{J}(\vec{k},\vec{K})$} & = & \Big( \vec{\jmath}\, (\vc{k}_{eff}) +
\frac{\gamma-1}{\beta^2} (\mbox{$\vec{\beta}$} \cdot \vec{\jmath}\, (\vc{k}_{eff}) ) \mbox{$\vec{\beta}$}
+ \gamma \mbox{$\vec{\beta}$} \, \rho (\vc{k}_{eff}) \Big)
\, . \label{lorcur}
\end{eqnarray}
Then all kinematical effects related to the Lorentz vector
structure of the current are taken into account exactly.
Approximations remain only in the intrinsic charge and current densities,
arising from the $1/m$-expansion in the derivation
and from the introduction of the effective momentum transfer
$\vci{k}{eff}$.
As described in detail in I, the current operator
$J_{\lambda}(\vc{k},\vc{K})$\ acting
in the space of intrinsic wave functions is defined by
the matrix element of the e.m.\ operators between plane waves
describing the c.m.\ motion of the system
\begin{equation}
\langle \vci{P}{f} | J_{\lambda}(\vc{k}\,) | \vci{P}{i} \rangle =
\Big( \frac{M_f M_i}{E_f E_i} \Big) ^{1/2}
J_{\lambda}(\vc{k},\vc{K}) \, \delta ( \vci{P}{f} - \vci{P}{i} -\vc{k} )
\, , \label{totj}
\end{equation}
with nonrelativistic normalization of the plane waves
\begin{equation}
\langle \vci{P}{f} | \vci{P}{i} \rangle =
\delta ( \vci{P}{f} - \vci{P}{i} ) \, .
\label{cmnorm}
\end{equation}
Therefore, the factor in front of $J_{\lambda}(\vc{k},\vc{K})$\ in
(\ref{totj}) ensures that \mbox{$ J_0(\vec{k},\vec{K})$}\ and \mbox{$\vec{J}(\vec{k},\vec{K})$}\
represent {\em covariantly} normalized operators in the space of intrinsic
wave functions.
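This statement can be made explicit as follows (a short sketch): introducing
covariantly normalized c.m.\ states
$|\vc{P}\,\rangle_c = (E/M)^{1/2}\, |\vc{P}\,\rangle$, for which
\begin{displaymath}
{}_c\langle \, \vcp{P} | \vc{P} \, \rangle_c =
\frac{E}{M} \, \delta ( \vcp{P} - \vc{P} ) \, ,
\end{displaymath}
the square root factor in (\ref{totj}) is absorbed, and one simply has
\begin{displaymath}
{}_c\langle \vci{P}{f} | J_{\lambda}(\vc{k}\,) | \vci{P}{i} \rangle_c =
J_{\lambda}(\vc{k},\vc{K}) \, \delta ( \vci{P}{f} - \vci{P}{i} -\vc{k} ) \, ,
\end{displaymath}
so that $J_{\lambda}(\vc{k},\vc{K})$\ appears directly as the matrix element
between covariantly normalized states.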
We have chosen the normalization convention (\ref{cmnorm})
since it is usually adopted implicitly in the derivations of the FW operators
on which the construction of the intrinsic operators is based. For this
reason we introduce in addition {\em noncovariantly} normalized operators of
the full current by
\begin{equation}
\tilde J_{\lambda}(\vc{k},\vc{K})=
\Big( \frac{M_f M_i}{E_f E_i} \Big) ^{1/2}
J_{\lambda}(\vc{k},\vc{K}) \, .
\label{totjtilde}
\end{equation}
Up to the order considered here,
the full charge and current operators on the l.h.s.\ of (\ref{totj})
are given by
\begin{equation}
J_{\lambda}(\vc{k}\,) = J_{\lambda}(\vc{k}\,)_{FW} +
i \, \Big[ \chi \, , J_{\lambda}(\vc{k}\,)_{FW} \, \Big] \, ,
\label{boostop}
\end{equation}
where $\chi $\ describes the wave function boost \cite{Fr75}.
The commutator term is
usually called the boost contribution to the charge and current.
The derivation of the intrinsic currents as well as their final form simplify
considerably if one splits their relativistic parts in the following way,
employed implicitly by Friar, Gibson and Payne \cite{FrGP},
\begin{eqnarray}
\rhp{2} & = & \rhpF{2} -
\frac{\vcs{k}}{8M^2} \ps{\vc{k}}{\vnab{k}} \rhp{0} +
\frac{\vcs{k}}{8M^2} \rhp{0} \, , \label{FGPch}\\
\jp{3} & = & \jpF{3} -
\frac{\vcs{k}}{8M^2} \ps{\vc{k}}{\vnab{k}} \jp{1} \, .
\label{FGPcr}
\end{eqnarray}
The gradient terms can then be absorbed in the operator $\hat{L}$
leading in turn to a new effective momentum transfer $\vc{k}_{F}$
given by
\begin{equation}
\vc{k}_{F}^{\, 2} = \vc{k}^{\, 2} - (k_0^{(1)})^2 + (\om{fi}{1})^2
- \frac{\vc{k}^{\, 4}}{4M^2} \, ,
\end{equation}
where the direction of $\vc{k}_{F}$\ is again parallel to \vc{k}.
Note that $\vc{k}_{F}^{\, 2}$\ is still effectively a Lorentz scalar,
up to the order considered. The introduction of this new effective
momentum transfer $\vec k_F$ leads to a rearrangement of the intrinsic
operators in the following way
\begin{eqnarray}
\rhp{0} & \rightarrow & \rho^{(0)}(\vec k_F)\,,
\label{ch0eff}\\
\jp{1} & \rightarrow & \vec\jmath^{\,\,(1)}(\vec k_F) \,,
\label{j0eff}\\
\rhp{2} & = & \rhpF{2}
+ \frac{\vcs{k}}{8M^2} \rhp{0} \, , \label{ch2eff}\\
\jp{3} & = & \jpF{3} \,.
\label{j2eff}
\end{eqnarray}
Note that now part of the relativistic effects is contained in
$\rho^{(0)}$ and $\vec \jmath^{\,\,(1)}$.
Therefore, it is sufficient to determine the operators $\rho_F$\ and
$\vc{\jmath}_F$. In terms of noncovariantly normalized operators,
they are in general given by
\begin{eqnarray}
\rhpF{0} & = &
\tilde J_0^{(0)}(\vec k,0)
\, , \label{intrh0}\\
\jpF{1} & = & \vec {\tilde J}^{(1)}(\vec k,0)
\, , \label{intj1}\\
\rhpF{2} & = &
\tilde J_0^{(2)}(\vec k,0)
+ \rho_{sep}^{(2)}(\vc{k}\,)
\, , \label{intrh2}\\
\jpF{3} & = &
\vec {\tilde J}^{(3)}(\vec k,0)
+ \vec{\jmath}_{sep}^{\,\,(3)}(\vc{k}\,)
\, , \label{intj3}
\end{eqnarray}
where we have introduced the separation charge and current operators
\begin{eqnarray}
\rho_{sep}^{(2)}(\vc{k}\,) & = &
\frac{\vcs{k}}{8M^2} \ps{\vc{k}}{\vnab{k}} \, \rhpF{0}
\, , \label{seprh2}\\
\vec{\jmath}_{sep}^{\,\,(3)}(\vc{k}\,) & = &
\frac{\vcs{k}}{8M^2}\Big(1+ \ps{\vc{k}}{\vnab{k}}\Big) \, \jpF{1}
\, , \label{sepj3}
\end{eqnarray}
and we have set $\rhpF{0}=\rhp{0}$\ and
$\jpF{1}=\jp{1}$ in order to unify our notation.
The e.m.\ current should satisfy the continuity equation, which means in
our notation
\begin{equation}
\vc{k} \cdot \vc{J} (\vc{k},\vc{K}) = k_0 J_0 (\vc{k},\vc{K}) \,.
\end{equation}
According to I, this implies the following relations for the intrinsic
operators
\begin{eqnarray}
\vc{k} \cdot \jpF{1} &=& \bigg[h^{(1)},\,\rhpF{0}\bigg]\,, \label{con1}\\
\vc{k} \cdot \jpF{3} &=& \bigg[h^{(1)},\,\rhpF{2}\bigg] +
\bigg[h^{(3)} - \frac{\vcs{k}}{8M^2}h^{(1)},\, \rhpF{0}\bigg] \,,
\label{con3}
\end{eqnarray}
where $h$ denotes the intrinsic Hamiltonian.
It is useful to separate the contributions of the one-body and the meson
exchange currents, denoted by
$a(1;k)$ and $a(2;k)$, respectively. Splitting $h$ into kinetic and
potential energy, $h=t+v$, one would expect the one-body charge and
current operators of the two-nucleon system (with $M=2m$, where $m$ denotes
the nucleon mass) to satisfy the relations
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k})&=&
\bigg[t^{(1)},\,\rho_F^{(0)}(\mbox{1};\vc{k})\bigg]
\,,\label{con11body}\\
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k})&=&
\bigg[t^{(1)},\,\rho_F^{(2)}(\mbox{1};\vc{k})\bigg]+
\bigg[t^{(3)}-\frac{\vcs{k}}{32m^2}t^{(1)},\,
\rho_F^{(0)}(\mbox{1};\vc{k})\bigg]\,.
\label{con31body}
\end{eqnarray}
Consequently, the two-body MEC operators would have to fulfil
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (1)}(\mbox{2};\vc{k})&=&
\bigg[v^{(1)},\,\rho_F^{(0)}(\mbox{1};\vc{k})\bigg]
\,,\label{con12body}\\
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (3)}(\mbox{2};\vc{k})&=&
\bigg[h^{(1)},\,\rho_F^{(2)}(\mbox{2};\vc{k})\bigg]
+\bigg[v^{(1)},\,\rho_F^{(2)}(\mbox{1};\vc{k})\bigg]
+ \bigg[v^{(3)}-\frac{\vcs{k}}{32m^2}v^{(1)}
,\,\rho_F^{(0)}(\mbox{1};\vc{k})\bigg]\,,
\label{con32body}
\end{eqnarray}
where we have already used the fact that $\rho_F^{(0)}(\mbox{2};\vc{k})$
vanishes. However, the relations (\ref{con31body}) and (\ref{con32body})
will be slightly modified in Sect.\ III.A (see (\ref{cont21}) and
(\ref{cont23})): first, we will effectively incorporate a two-body part
into the one-body current $\vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k})$,
and second, the first commutator on the r.h.s.\ of (\ref{con32body}) will
contain only $t^{(1)}$, because the commutator with $v^{(1)}$ is of
higher order in the meson-nucleon coupling constants and is not
considered here.
In order to get the intrinsic charge and current densities from
(\ref{intrh0}) through (\ref{intj3}) one needs the FW and boost
contributions
expressed in the Breit frame. The FW currents are listed in the appendices
and their evaluation in the Breit frame is straightforward.
The wave function boost $\chi$ contains in general
a kinetic $\chi_0$ and an interaction dependent part $\chi_V$.
In the OBE model, the interaction boost is non-zero only for a
pseudoscalar exchange interaction. It is dealt with
explicitly in Appendix C. The leading order kinetic contribution
($ \sim 1/m^2$) reads in terms of c.m.\ and relative variables
\begin{equation}
\chi_0 = - \frac{1}{2} \sum_{a=1}^A
\frac{ \ps{\vci{\rho}{a}}{\vc{P}} \ps{\vci{\pi}{a}}{\vc{P}} + h.c.}{2M^2} -
\frac{1}{2} \sum_{a=1}^A
\frac{\ps{\vci{\rho}{a}}{\vc{P}} (\vci{\pi}{a}^2) + h.c.}{2M m_a}
+ \sum_{a=1}^A
\frac{\vci{s}{a} \times \vci{\pi}{a} \cdot \vc{P}}{2M m_a} \, ,
\label{chi0}
\end{equation}
where $\vci{\rho}{a} = \vci{r}{a} - \vc{R}, \, \vci{\pi}{a} = \vci{p}{a} -
\frac{m_a}{M} \vc{P} $, \vc{P}\ is the total momentum operator
and \vc{R}\ the c.m.\ coordinate given by the usual expression
\begin{equation}
\vc{R} = \sum_{a=1}^A m_a \vci{r}{a}/ M \, .
\label{coor}
\end{equation}
For two particles with equal mass ($m$), the second term
of (\ref{chi0}) vanishes and one gets
\begin{equation}
\chi_0 = - \mpw{16}{2}
\Big( \ps{\vc{r}}{\vc{P}} \ps{\vc{p}}{\vc{P}} +
\ps{\vc{p}}{\vc{P}} \ps{\vc{r}}{\vc{P}} \Big) +
\mpw{8}{2} (\vs{1} - \vs{2}) \times \vc{p} \cdot \vc{P} \, ,
\label{chi0d}
\end{equation}
where $\vc{r} = \vci{r}{1} - \vci{r}{2}$ and $\vc{p} =
\frac{1}{2} (\vci{p}{1} - \vci{p}{2})$.
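That the second term of (\ref{chi0}) drops out is seen at once: for equal
masses one has $\vci{\rho}{1} = -\vci{\rho}{2} = \vc{r}/2$ and
$\vci{\pi}{1} = -\vci{\pi}{2} = \vc{p}$, so that
\begin{displaymath}
\sum_{a=1}^2 \frac{ (\vci{\rho}{a} \cdot \vc{P}\,)\, \vci{\pi}{a}^2 + h.c.}{2M m}
= \frac{ \big( (\vci{\rho}{1}+\vci{\rho}{2}) \cdot \vc{P}\, \big)\,
\vc{p}^{\, 2} + h.c.}{2Mm} = 0 \, ,
\end{displaymath}
since $\vci{\rho}{1}+\vci{\rho}{2} = \vc{0}$.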
Let us now evaluate the boost commutator in the Breit frame
using the momentum representation. To this end, we denote an
intrinsic operator $a(\vc{k})$\ in momentum representation by
\begin{equation}
a(\vc{k},\vc{q},\vc{Q} ) =
\langle \vcp{p} | a(\vc{k}) | \vc{p} \, \rangle \, ,
\label{aq}
\end{equation}
where $\vc{q} = \vcp{p} - \vc{p}$\ and $\vc{Q} = \vcp{p} + \vc{p}$.
Noting that the boost operator $\chi $\ is diagonal with respect to
the c.m.\ plane waves, i.e.,
\begin{equation}
\langle \vcp{P} \, |\, \chi \, | \vc{P} \, \rangle =
\chi (\vc{P},\vc{r},\vc{p}\, )\,
\delta ( \vcp{P} - \vc{P} ) \,,
\end{equation}
one finds that the intrinsic commutator of the kinetic boost with an operator
$a(\vc{k})$\ reads
\begin{equation}
\langle \, \vcp{p}|\, i \Big( \chi_0 (\frac{\vc{k}}{2},\vc{r},\vc{p}\, )\,
a(\vc{k}) \, - a(\vc{k}) \,
\chi_0 (- \frac{\vc{k}}{2},\vc{r},\vc{p}\, ) \, \Big)
| \vc{p} \, \rangle = a_{\chi_r}(\vec k,\vec q, \vec Q) +
a_{\chi_\sigma}(\vec k,\vec q, \vec Q)\,,
\label{boostq}
\end{equation}
where
\begin{eqnarray}
a_{\chi_r}(\vec k,\vec q, \vec Q)& = &
\mpw{32}{2} \Big( \vcs{k} +
\ps{\vc{k}}{\vc{q}\,}\, \ps{\vc{k}}{\vnab{q}} +
\ps{\vc{k}}{\vc{Q}} \,\ps{\vc{k}}{\vnab{Q}}
\Big) \, a(\vc{k},\vc{q},\vc{Q})\,, \label{chir}\\
a_{\chi_\sigma}(\vec k,\vec q, \vec Q)& = &
\mpwf{32}{2}{i} \Big(\Big\{ a(\vc{k},\vc{q},\vc{Q}), \,
\pssm{\vc{Q}}{\vc{k}} \, \Big\}\nonumber\\
&& - \Big[ a(\vc{k},\vc{q},\vc{Q}), \, \pssm{\vc{q}}{\vc{k}} \Big]
\Big) \, .
\label{chis}
\end{eqnarray}
Since the FW currents are usually given in momentum representation,
the boost contributions follow directly from (\ref{boostq}).
The extension to systems with more than two nucleons is straightforward.
The plane wave basis in the intrinsic space can be constructed with
respect to the set of Jacobi coordinates and,
employing their commutator relations, one can derive an analogue of
(\ref{boostq}). In this case, the contribution of the second term of
(\ref{chi0}) has to be included.
\section{Intrinsic currents}
In this section, explicit expressions are given for the intrinsic currents
$\rho_F$\ and $\vec{\jmath}_F$\
of the two-nucleon system as they follow from
(\ref{intrh0})-(\ref{intj3}).
For the two-body currents, we distinguish different terms
$ a_{F}^{(n)}(\mbox{2};\beta;\vc{k},\vc{q},\vc{Q})_B^{iso}$,
labelled by $\beta$ ($= \mbox{pro, mes, and ret}$), according
to their meson propagator structure. Furthermore,
we denote the isospin structure of the MECs by the superscript ``$iso$''
$(=+,\,-)$, and by the subscript ``$B$'', the exchanged meson type.
Each of the currents may be decomposed in general according to the different
contributions arising from the FW-part, the boost and the so-called
separation part. Labelling them by $\alpha$ ($=FW,\,\mbox{$\chi_r$}, \,\mbox{$\chi_{\sigma}$}$, and
$sep$), this reads in momentum space
\begin{eqnarray}
a_F^{(n)}(\mbox{1}; \vc{k},\vc{q},\vc{Q}) &=&
\sum_{\alpha}
a_{\alpha}^{(n)}(\mbox{1}; \vc{k},\vc{q},\vc{Q}) \, , \\
a_F^{(n)}(\mbox{2};\beta; \vc{k},\vc{q},\vc{Q})_B^{iso} &=&
\sum_{\alpha}
a_{\alpha}^{(n)}(\mbox{2};\beta;\vc{k},\vc{q},\vc{Q})_B^{iso} \, .
\label{form2}
\end{eqnarray}
Further details of our notation are explained in Appendix A.
For each case (one-nucleon or MEC operator) we will
begin with the FW terms in the Breit frame,
then consider the boost commutators (\ref{boostq}) and finally the
additional terms from the r.h.s.\ of (\ref{intrh2})-(\ref{intj3}).
The latter ones are referred to as ``separation'' operators
and labelled by ``$sep$'' as already introduced above.
Finally, the total intrinsic currents are listed in a way
which makes clear which part of the intrinsic continuity equations
(\ref{con11body})-(\ref{con32body}) or their modified forms
(\ref{cont11s})-(\ref{cont23}) they saturate.
However, if the corresponding nonrelativistic operator does
not exist, as in the case of the interaction dependent charge
densities or ``$+$'' parts of MECs, the intrinsic operators
$\rho_F$\ and $\vec{\jmath}_F$\ are simply
equal to the FW ones taken in the Breit frame. In such cases,
we do not write them down repeatedly, but list them together
with other intrinsic operators. Also, some nonrelativistic
operators do not depend on the momentum $\vc{Q}$ which further
simplifies our notation.
\subsection{One-nucleon currents}
We will consider explicitly the currents of the nucleon labelled ``1'',
while those of the second one follow by a replacement $(1 \leftrightarrow 2
)$.
This replacement, of course, also affects the relative variables introduced
above, changing the sign of
$ \vc{r}, \vc{p}, \vcp{p}, \vc{q}, \mbox{ and}\ \vc{Q}$.
For the nonrelativistic operators, one finds immediately from (\ref{r10}) and
(\ref{j11}) in the Breit frame
\begin{eqnarray}
\rho_F^{(0)}(\mbox{1};\vc{k},\vc{q}\, )_1
&=& \mbox{$\hat{e}_1$} \, \delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, ,
\label{irh10}\\
\vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q} )_1
&=& \frac{1}{2m} \Big(\mbox{$\hat{e}_1$} \, \vc{Q} +
i (\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$} ) \, \pvso{\vc{k}} \Big)
\, \delta(\frac{\vc{k}}{2} - \vc{q}\, ) \nonumber\\
&=& \vc{\jmath}_c^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1 +
\vc{\jmath}_s^{\, \, (1)}(\mbox{1};\vc{k},\vc{q}\,)_1
\, , \label{ij11}
\end{eqnarray}
where $\vc{\jmath}_{c,s}^{\,\,(1)}$\ stand for the usual nonrelativistic
convection and spin currents.
Now we turn to the relativistic contributions. We first note that,
since the currents of the first nucleon contain
$\delta(\vc{k}/2 - \vc{q}\, )$, the contributions to the boost
commutator (\ref{boostq}) simplify to
\begin{eqnarray}
a_{\chi_r}(1;\vec k,\vec q, \vec Q)_1& = &\mpw{32}{2} \Big(
\frac{\vcs{k}}{2} \ps{\vc{k}}{\vnab{q}}
+ \ps{\vc{k}}{\vc{Q}}\, \ps{\vc{k}}{\vnab{Q}}
\Big) \, a(\rm{1};\vc{k},\vc{q},\vc{Q})_1\,,\\
a_{\chi_\sigma}(1;\vec k,\vec q, \vec Q)_1& = & \frac{i}{32m^2}
\Big\{ a(\rm{1};\vc{k},\vc{q},\vc{Q})_1\, ,
\, \pssm{\vc{Q}}{\vc{k}} \, \Big\}
\, .
\label{boostqia}
\end{eqnarray}
For the FW part of the charge density, one obtains from (\ref{r12})
in the Breit frame
\begin{equation}
\rho_{FW}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
= - \frac{\mbox{$\hat{e}_1$} + 2 \mbox{$\hat{\kappa}_1$}}{8m^2} \,
\Big( \vcs{k} + i \psso{\vc{Q}}{\vc{k}} \Big)
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, . \label{rh12fwb}
\end{equation}
The boost and separation contributions to the charge density are
\begin{eqnarray}
\rho_{\mbox{$\chi_r$}}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& - \frac{\mbox{$\hat{e}_1$} \, \vcs{k}}{32m^2} \,\ps{\vc{k}}{\vnab{k}}
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, , \label{rh1chr}\\
\rho_{\mbox{$\chi_{\sigma}$}}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& i\frac{\mbox{$\hat{e}_1$}}{16m^2} \, \pssm{\vc{Q}}{\vc{k}}
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, , \label{rh1chs}\\
\rho_{sep}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
& = & \frac{\mbox{$\hat{e}_1$} \, \vcs{k}}{32m^2}\, \ps{\vc{k}}{\vnab{k}}
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, . \label{rh1sep}
\end{eqnarray}
The expressions (\ref{rh1chr}) and (\ref{rh1sep}) cancel completely.
This is, of course, the reason why $\rho_F(\vc{k}\, )$\ and
$\vc{\jmath}_F(\vc{k}\, )$\ are introduced in
(\ref{FGPch})-(\ref{FGPcr}). The relativistic part of
the intrinsic one-nucleon charge density is then
\begin{eqnarray}
\rho_F^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=&- \frac{\mbox{$\hat{e}_1$}}{16m^2}\,
\Big( 2 \vcs{k} + i \pssp{\vc{Q}}{\vc{k}} \Big)
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \nonumber\\
&& - \frac{\mbox{$\hat{\kappa}_1$}}{4m^2}\,
\Big( \vcs{k} + i \psso{\vc{Q}}{\vc{k}} \Big)
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \nonumber\\
&=& \rho_{F,e}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1 +
\rho_{F,\kappa}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
\, . \label{irh12}
\end{eqnarray}
In a similar way, one gets for the spatial current
\begin{eqnarray}
\vc{\jmath}_{FW}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& - \mpw{16}{3} \Big[ (\vcs{Q} + \vcs{k})
\Big(\mbox{$\hat{e}_1$} \, \vc{Q} + i(\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$} ) \, \pvso{\vc{k}} \Big) \nonumber\\
&&
+ \Big(\mbox{$\hat{e}_1$} \, \ps{\vc{k}}{\vc{Q}}+ 4\mbox{$\hat{\kappa}_1$} m \om{fi}{1} \Big)\,
(\vc{k} + i\pvso{\vc{Q}} ) \nonumber\\
&&
+ \mbox{$\hat{\kappa}_1$} \, \vc{k} \times [ \vc{Q} \times ( \vc{k} + i\pvso{\vc{Q}})]
\Big] \delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, , \label{j13fwb}\\
\vc{\jmath}_{\mbox{$\chi_r$}}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& - \frac{\vcs{k}}{32m^2} \, \ps{\vc{k}}{\vnab{k}}
\vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q} )_1
\nonumber\\
&& + \mpw{32}{2} \Big( \vc{k}\, ( \vc{k} \cdot
\vc{\jmath}_c^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1) +
\vcs{k} \, \vc{\jmath}_s^{\, \, (1)}(\mbox{1};\vc{k},\vc{q}\, )_1 \Big)
\, , \label{j1chr}\\
\vc{\jmath}_{\mbox{$\chi_{\sigma}$}}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& \mpw{32}{3} \Big[
i\mbox{$\hat{e}_1$} \, \pssm{\vc{Q}}{\vc{k}} \, \vc{Q}
\nonumber\\
&& - (\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$})\, [\vc{k} \times (\vc{k} \times \vc{Q} )
+ \pth{\vs{2}}{\vc{Q}}{\vc{k}}\pv{\vc{k}}{\vs{1}} ]
\Big]
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \, , \label{j1chs}\\
\vc{\jmath}_{sep}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
& = & \frac{\vcs{k}}{32m^2} \Big(1+ \ps{\vc{k}}{\vnab{k}}\Big)
\vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q} )_1
\, . \label{j1sep}
\end{eqnarray}
There is again a cancellation between the separation and boost
terms. The final expression for the relativistic part of the
one-nucleon intrinsic current operator then is
\begin{eqnarray}
\vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& - \mpw{16}{3} \Big[\mbox{$\hat{e}_1$} (\vcs{Q} + \frac{\vcs{k}}{4})
\vc{Q} + i(\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$} )(\vcs{Q} + \frac{\vcs{k}}{2})\,
\pvso{\vc{k}} \nonumber\\
&&
+ (\mbox{$\hat{e}_1$}\, \ps{\vc{k}}{\vc{Q}} + 4\mbox{$\hat{\kappa}_1$} m \om{fi}{1} )
(\vc{k} + i \pvso{\vc{Q}} ) \nonumber\\
&&
- \frac{i}{2} \mbox{$\hat{e}_1$} \, \pssm{\vc{Q}}{\vc{k}}\,\vc{Q} +
\frac{1}{2} (\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$}) \pth{\vs{2}}{\vc{Q}}{\vc{k}} \pv{\vc{k}}{\vs{1}}
\nonumber\\
&&
+\mbox{$\hat{\kappa}_1$} \, \vc{k} \times [ \vc{Q} \times (\frac{1}{2} \vc{k} + i\pvso{\vc{Q}})]
\Big]
\delta(\frac{\vc{k}}{2} - \vc{q}\, ) \nonumber\\
&& - \mpw{32}{2} \vc{k} \, (\vc{k} \cdot
\vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q} )_1 )
\, . \label{ij13}
\end{eqnarray}
Now we will consider the divergence of these currents in view of the
general continuity equations for the intrinsic operators in (\ref{con1}) and
(\ref{con3}). Using $\vc{k}=2\vc{q}$, one finds for the divergence of
the nonrelativistic intrinsic current
\begin{equation}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
= \frac{\ps{\vc{q}}{\vc{Q}}}{m} \,
\rho_F^{(0)}(\mbox{1};\vc{k},\vc{q}\, )_1 \, ,
\label{cont11}
\end{equation}
and for the relativistic one
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& -\frac{1}{16m^3}\,\bigg[2\mbox{$\hat{e}_1$} (\vcs{Q} + \frac{\vcs{k}}{4})
\ps{\vc{q}}{\vc{Q}}+ \mbox{$\hat{e}_1$}( 2\vcs{k} + i\pssp{\vc{Q}}{\vc{k}} )
\ps{\vc{q}}{\vc{Q}} \nonumber\\
&&+ 4\mbox{$\hat{\kappa}_1$} m \om{fi}{1} ( \vcs{k} + i \psso{\vc{Q}}{\vc{k}})\bigg]
\delta(\frac{\vc{k}}{2} - \vc{q}\, )
- \mpwf{32}{2}{\vcs{k}}\, (\vc{k} \cdot
\vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q} )_1 )\nonumber\\
&=& - \mpw{8}{3}(\vcs{Q}+\vcs{q})\ps{\vc{q}}{\vc{Q}}\rho_F^{(0)}(\mbox{1}
;\vc{k},\vc{q}\, )_1 + \frac{1}{m}
\ps{\vc{q}}{\vc{Q}}\rho_{F,e}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
\nonumber\\
&&+\om{fi}{1}\rho_{F,\kappa}^{(2)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
- \mpwf{32}{3}{\vcs{k}}\, \frac{\ps{\vc{q}}{\vc{Q}}}{m} \,
\rho_F^{(0)}(\mbox{1};\vc{k},\vc{q}\, )_1\,,
\label{cont13}
\end{eqnarray}
where $\rho^{(2)}_{F,e} (\rm{1};\vc{k}\, )$ and $\rho^{(2)}_{F,\kappa}
(\rm{1};\vc{k}\, )$ are defined in (\ref{irh12}).
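The relation (\ref{cont11}) is transparent: on the support of
$\delta(\vc{k}/2 - \vc{q}\,)$ one has $\vc{k} = 2\vc{q}$, and the spin
current does not contribute to the divergence, since
$\vc{k} \cdot (\pvso{\vc{k}}) = 0$, so that
\begin{displaymath}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
= \frac{\mbox{$\hat{e}_1$}}{2m} \, \ps{\vc{k}}{\vc{Q}} \,
\delta(\frac{\vc{k}}{2} - \vc{q}\, )
= \frac{\ps{\vc{q}}{\vc{Q}}}{m} \,
\rho_F^{(0)}(\mbox{1};\vc{k},\vc{q}\, )_1 \, .
\end{displaymath}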
Since for an intrinsic operator in momentum representation
$a(\vc{q},\vc{Q})$ the following relations hold
\begin{eqnarray}
\langle \vc{p}^{\,\prime}|\Big[t^{(1)},a\Big]|\vc{p}\,\rangle& = &
\frac{\ps{\vc{q}}{\vc{Q}}}{m}a(\vc{q},\vc{Q})\label{comt1}
\,,\\
\langle \vc{p}^{\,\prime}|\Big[t^{(3)},a\Big]|\vc{p}\,\rangle& = &
-\frac{\ps{\vc{q}}{\vc{Q}}}{8m^3}(\vcs{Q}+\vcs{q})a(\vc{q},\vc{Q})
\,,\label{comt3}
\end{eqnarray}
one finds that the r.h.s.\ of
(\ref{cont11}) is just the commutator of the nonrelativistic intrinsic kinetic
energy $\vc{p}^{\, 2}/m$ with the charge density
\begin{equation}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (1)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
=\langle \vc{p}^{\,\prime}|\Big[t^{(1)},\rho_F^{(0)}(\mbox{1};\vc{k})_1
\Big]|\vc{p}\,\rangle\,,\label{cont11s}
\end{equation}
i.e.\ the relation (\ref{con11body}), and
for the divergence of the relativistic contribution in (\ref{cont13})
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k},\vc{q},\vc{Q})_1
&=& \langle \vc{p}^{\,\prime}|\Big[t^{(1)},
\rho_{F,e}^{(2)}(\mbox{1};\vc{k})_1\Big]|\vc{p}\,\rangle
+\langle \vc{p}^{\,\prime}|\Big[h^{(1)},\rho_{F,\kappa}^{(2)}(\mbox{1};\vc{k})_1
\Big]|\vc{p}\,\rangle
\nonumber\\
&&+
\langle \vc{p}^{\,\prime}|\Big[t^{(3)} - \mpwf{32}{2}{\vcs{k}}\,
t^{(1)},\rho_F^{(0)}(\mbox{1};\vc{k})_1 \Big]|\vc{p}\,\rangle\,,
\label{cont13s}
\end{eqnarray}
which almost equals (\ref{con31body}), with the sole exception that in the
commutator of $\rho_{F,\kappa}^{(2)}(\mbox{1};\vc{k})$ the full nonrelativistic
intrinsic Hamiltonian $h^{(1)}$ appears instead of the kinetic energy
$t^{(1)}$. This modification of (\ref{con31body}), to which we have already
alluded, is a consequence of the fact that we have kept in the $\hat \kappa$-part of
$\vc{\jmath}_{F}^{\, \, (3)}(\mbox{1};\vc{k})$ in (\ref{ij13})
the total intrinsic energy transfer,
thus including implicitly a two-body contribution.
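For orientation, we note that the relations (\ref{comt1}) and (\ref{comt3})
follow in one line from the intrinsic kinetic energies
$t^{(1)} = \vc{p}^{\, 2}/m$ and $t^{(3)} = - \vc{p}^{\, 4}/4m^3$ of the
equal-mass pair:
\begin{eqnarray*}
\langle \vc{p}^{\,\prime}|\Big[t^{(1)},a\Big]|\vc{p}\,\rangle & = &
\frac{\vc{p}^{\,\prime\, 2} - \vc{p}^{\, 2}}{m}\, a(\vc{q},\vc{Q})
= \frac{\ps{\vc{q}}{\vc{Q}}}{m}\, a(\vc{q},\vc{Q}) \, , \\
\langle \vc{p}^{\,\prime}|\Big[t^{(3)},a\Big]|\vc{p}\,\rangle & = &
- \frac{(\vc{p}^{\,\prime\, 2} + \vc{p}^{\, 2})
(\vc{p}^{\,\prime\, 2} - \vc{p}^{\, 2})}{4m^3}\, a(\vc{q},\vc{Q})
= - \frac{\ps{\vc{q}}{\vc{Q}}}{8m^3} (\vcs{Q}+\vcs{q})\, a(\vc{q},\vc{Q}) \, ,
\end{eqnarray*}
using $\vc{p}^{\,\prime\, 2} - \vc{p}^{\, 2} = \ps{\vc{q}}{\vc{Q}}$ and
$\vc{p}^{\,\prime\, 2} + \vc{p}^{\, 2} = (\vcs{Q}+\vcs{q})/2$.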
This means that, in order to satisfy the full
continuity equation for the intrinsic currents, one should
find for the intrinsic
model-dependent interaction currents the following relations involving
commutators with the $NN$ potential $v$
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_F^{\, \, (1)} (\rm{2};\vc{k},\vc{q},\vc{Q})
& = & \langle \vcp{p} \, |
\Big[ v^{(1)}\, , \, \rho_F^{(0)}(\rm{1};\vc{k}\, ) \Big]
| \vc{p} \, \rangle \, ,
\label{cont21}\\
\vc{k} \cdot \vc{\jmath}_F^{\, \, (3)} (\rm{2};\vc{k},\vc{q},\vc{Q})
& = & \langle \vcp{p} \, |\Big[ t^{(1)},\rho_F^{(2)}(2;\vc{k})
\Big] + \Big[ v^{(1)} \, , \, \rho^{(2)}_{F,e} (\rm{1};\vc{k}\, ) \Big]
| \vc{p} \, \rangle\nonumber\\
&& + \langle \vcp{p} \, |
\Big[ v^{(3)} - \frac{\vcs{k}}{32m^2}v^{(1)} , \,
\rho_F^{(0)}(\rm{1};\vc{k}\, ) \Big] | \vc{p} \, \rangle
\, , \label{cont23}
\end{eqnarray}
where the last term is the interaction-dependent
part of the recoil commutator in (\ref{con3}). Note that only $\rho_{F,e}^{(2)
}(\rm{1};\vc{k}\, )$ appears in the commutator with $v^{(1)}$
in (\ref{cont23}) because
$\rho_{F,\kappa}^{(2)}(\rm{1};\vc{k}\, )$ is already contained in
(\ref{cont13s}). It is clear that
(\ref{cont21}) and (\ref{cont23}) should hold also for each meson
contribution separately.
For the explicit evaluation of the commutators of the potential with the
one-body charge density, we first note for a one-body operator
\begin{equation}
a(1;\vec k, \vec q, \vec Q)= a(\vec k, \vec Q)_1\delta(\frac{\vec k}{2}
-\vec q) +\mbox{$(1 \leftrightarrow 2)$}
\end{equation}
the general relation
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v_B \, ,
a(\rm{1};\vc{k} \, ) \Big] | \vc{p} \, \rangle &=&
{v}_B (- \vci{q}{2}, \vc{Q} + \frac{1}{2} \vc{k} \, ) \,
a(\vc{k}, \vc{Q} + \vci{q}{2} \, )_1
- a(\vc{k}, \vc{Q} - \vci{q}{2} \, )_1
{v}_B (- \vci{q}{2}, \vc{Q} - \frac{1}{2} \vc{k} \, )
\nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \,.
\end{eqnarray}
Specializing now to the charge density operator as given in (\ref{irh10})
and (\ref{irh12}) and using the isospin dependence as introduced in
(\ref{fisoe})-(\ref{gisom}), we find the following relations
\begin{eqnarray}
\langle \vcp{p} \, | \left[ v_B^{(1)}\, ,
\rho^{(0)}_F(\rm{1};\vc{k} \, ) \right] | \vc{p} \, \rangle &=&
-\Ff{e,1}{-}\tilde v_B^{(1)}(\vc{q_2}\,) +\mbox{$(1 \leftrightarrow 2)$} \,,
\label{comv1rh0}\\
\langle \vcp{p} \, | \left[ v_B^{(1)}\, ,
\rho^{(2)}_{F,e}(\rm{1};\vc{k} \, ) \right] | \vc{p} \, \rangle &=&
- i\frac{\Ff{e,1}{+}}{32m^2}\Big(\Big[\tilde v_B^{(1)}(\vc{q_2}\,),\,
\pssp{\vc{Q}}{\vc{k}}\Big]
\nonumber\\
&&+\Big\{\tilde v_B^{(1)}(\vc{q_2}\,),\,\pssp{\vc{q}_2}{\vc{k}}\Big\}\Big)
\nonumber\\
&&+ \frac{\Ff{e,1}{-}}{32m^2}
\Big(4\vcs{k}\tilde v_B^{(1)}(\vc{q_2}\,)+i\Big\{\tilde v_B^{(1)}(\vc{q_2}\,),\,
\pssp{\vc{Q}}{\vc{k}}\Big\}
\nonumber\\
&&+i\Big[\tilde v_B^{(1)}(\vc{q_2}\,),\,\pssp{\vc{q}_2}{\vc{k}}\Big]\Big)
+\mbox{$(1 \leftrightarrow 2)$} \,,
\label{comv1rh2}\\
\langle \vcp{p} \, | \left[ v_B^{(3)}\, ,
\rho^{(0)}_F(\rm{1};\vc{k} \, ) \right] | \vc{p} \, \rangle &=&
\frac{\Ff{e,1}{+}}{2}
\Big(\tilde v_B^{(3)}(-\vci{q}{2},\vc{Q} + \frac{1}{2}\vc{k}\,)
-\tilde v_B^{(3)}(-\vci{q}{2},\vc{Q} - \frac{1}{2}\vc{k}\,)\Big)
\nonumber\\
&&
-\frac{\Ff{e,1}{-}}{2}
\Big(\tilde v_B^{(3)}(-\vci{q}{2},\vc{Q} + \frac{1}{2}\vc{k}\,)
+\tilde v_B^{(3)}(-\vci{q}{2},\vc{Q} - \frac{1}{2}\vc{k}\,)\Big)
\nonumber\\
&& +\mbox{$(1 \leftrightarrow 2)$} \,.
\label{comv3rh0}
\end{eqnarray}
These relations will be useful for checking the continuity condition
(\ref{cont23}) for the various meson contributions. Since the interaction
currents and the
commutators in (\ref{cont23}) separate into ``+'' and ``$-$'' parts with
respect to $F^\pm$ and $G^\pm$, we can check the continuity condition
separately for these parts.
There are two interesting
properties of the relativistic part of the continuity equation
that are demonstrated explicitly below for particular meson exchanges.
First, since we use the
representation in which there is no intrinsic retardation potential
($\nu = 1/2$), the retardation part of the current has to satisfy
\begin{eqnarray}
\vc{k} \cdot \vc{\jmath}_F^{\, \, (3)} (\mbox{2;ret};\vc{k},\vc{q},\vc{Q})
&=& \frac{\ps{\vc{q}}{\vc{Q}}}{m}\,
\rho_F^{(2)}(\mbox{2;ret};\vc{k},\vc{q},\vc{Q})
\nonumber\\
&=& \langle \vcp{p} \, | \left[ t^{(1)}\, ,
\rho^{(2)}_F(\mbox{2;ret};\vc{k} \, ) \right] | \vc{p} \, \rangle
\, .
\label{contret}
\end{eqnarray}
This is possible only when the boost contributions from $\chi_r$\
are included. Consequently, if retardation is considered, it makes little
sense to take the corresponding FW operators while at the same time
neglecting the effects of the wave function boost. Another consequence is that in
(\ref{cont23}) we need to consider the ``pro'' and ``mes'' MEC
contributions as defined in Appendix A only.
Second, between the com\-mu\-ta\-tors
$\Big[ v^{(1)}\, , \, \rho^{(2)}_{F,e} (\rm{1};\vc{k}\, ) \Big]$
and $\Big[ v_{LS}^{(3)}\, , \, \rho_F^{(0)} (\rm{1};\vc{k}\, ) \Big]$
there are cancellations, where $v_{LS}^{(3)}$\ is the spin-orbit part
of the potential. Recall that also in the relativistic charge density
$\rho^{(2)}_{F,e}(\rm{1};\vc{k}\, )$\
a can\-cel\-la\-tion occurs between the spin-orbit FW term and
the $\chi_{\sigma}$
term. This suggests that for a proper description of
the relativistic intrinsic MECs the inclusion of the spin
part of the boost is also important.
Now we will derive the detailed expressions for the intrinsic meson
exchange current operators. The FW-operators are listed in Appendix A. They
have to be evaluated in the Breit frame using (\ref{q12b})-(\ref{QBreit}).
In order to keep track of the various pieces contributing to the intrinsic
operators, we give in Tables \ref{tab1} and \ref{tab2} a survey of the
nonvanishing terms with reference to the corresponding equations where the
explicit expressions are given. In the case of nonrelativistic
contributions, the intrinsic operators are given by the FW-ones.
\subsection{Scalar meson exchange}
The intrinsic potential contributions including leading relativistic
order following from the exchange of a single scalar meson are
\begin{eqnarray}
\vto{S} &=& - \con{S}\, \prop{S}{} \, , \label{vs1}\\
\vtt{S} &=& - \mpw{8}{2}\, \vto{S} \Big(
2 \vcs{Q} + i \pssp{\vc{q}}{\vc{Q}} \Big) \nonumber\\
&=& \vtt{S,Q} + \vtt{S,LS} \, . \label{vs3}
\end{eqnarray}
Here, $\prop{S}{}$ denotes the meson propagator including a hadronic form
factor (see Appendix A). The
last term in (\ref{vs3}) is the spin-orbit potential.
In some OBE potentials, the following replacement is made in the first term
of (\ref{vs3})
\begin{equation}
\vcs{Q} = \vcs{Q} + \vcs{q} - \vcs{q} \rightarrow
\vcs{Q} + \vcs{q} + m_S^2 \, ,
\label{mrep}
\end{equation}
since $\vto{S}(\vcs{Q} + \vcs{q})/2$\, corresponds to the intrinsic
anticommutator $\{ \hat{\vc{p}}^{\, \, 2},\,v_S^{(1)} \}$.
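This correspondence is checked in one line: since
$\vc{Q} = \vcp{p} + \vc{p}$\ and $\vc{q} = \vcp{p} - \vc{p}$, one has
\begin{displaymath}
\langle \vcp{p}\, | \, \{ \hat{\vc{p}}^{\, \, 2},\,v_S^{(1)} \} \, | \vc{p}\, \rangle
= ( \vc{p}^{\,\prime\, 2} + \vc{p}^{\, 2} )\, \vto{S}
= \frac{\vcs{Q} + \vcs{q}}{2} \, \vto{S} \, .
\end{displaymath}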
Turning now to the corresponding exchange currents, we will
start from their ``$+$'' part (see Appendix A).
Since in this case there are no nonrelativistic contributions
to either the charge or the current densities, the intrinsic
operators are simply given by the FW ones taken in the Breit frame. In all
following expressions we have kept the notation
$\vec q_2=\frac{1}{2} \vec k -\vec q$. Hence, we get
from (\ref{j2spro+})-(\ref{j2sret+})
\begin{eqnarray}
\jrcq{F}{pro}{S}{+} &=&
- \Ff{e,1}{+} \mpw{4}{2} \vtos{S} ( \vc{Q} + i \pvso{\vc{k}} ) +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{intjspro+}\\
\rhqQ{ret}{S}{+} &=& \Ff{e,1}{+}\, \conm{S}{8}{ } \,
\ps{\vc{k}}{\vci{q}{2}}\, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{intrhsret+}\\
\jrcq{F}{ret}{S}{+} &=& \conm{S}{16}{2} \Big\{
\Ff{e,1}{+}\, \Big[ \ps{\vc{k}}{\vci{q}{2}}\, \vc{Q} -
2 \ps{\vc{Q}}{\vci{q}{2}}\, \vci{q}{2} \Big]
\nonumber\\
&& + i\Gm{+}\, \ps{\vc{k}}{\vci{q}{2}}\, \pvso{\vc{k}}
\Big\} \, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{intjsret+}
\end{eqnarray}
where $\propp{S}{2}$ is defined in (\ref{propp}). Note that for the exchange
\mbox{$(1 \leftrightarrow 2)$}\ one has $\vec q_1=\frac{1}{2} \vec k +\vec q$.
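The Breit-frame kinematics used throughout, $\vec q_{1,2} = \frac{1}{2}\vec k \pm \vec q$, can be spot-checked numerically. The short script below (plain Python, purely illustrative) verifies that $\vec q_1 + \vec q_2 = \vec k$ and that the exchange \mbox{$(1 \leftrightarrow 2)$}\ corresponds to $\vec q \to -\vec q$:

```python
import random

def vec():  # random 3-vector with components in [-1, 1]
    return [random.uniform(-1, 1) for _ in range(3)]

k, q = vec(), vec()
q1 = [0.5 * ki + qi for ki, qi in zip(k, q)]  # q_1 = k/2 + q
q2 = [0.5 * ki - qi for ki, qi in zip(k, q)]  # q_2 = k/2 - q

# q_1 + q_2 = k: the total momentum transferred by the photon
assert all(abs(a + b - c) < 1e-12 for a, b, c in zip(q1, q2, k))

# (1 <-> 2): replacing q by -q swaps q_1 and q_2
q1_swapped = [0.5 * ki - qi for ki, qi in zip(k, q)]
assert all(abs(a - b) < 1e-12 for a, b in zip(q1_swapped, q2))
```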
Next we look at the consequences of the continuity equation.
The retardation charge and current obviously satisfy (\ref{contret}).
There remains the divergence of the ``pro'' current, which is
\begin{equation}
\vc{k} \cdot \jrcq{F}{pro}{S}{+} =
- \frac{\Ff{e,1}{+}}{4m^2} \, \ps{\vc{k}}{\vc{Q}} \, \vtos{S} \, +\mbox{$(1 \leftrightarrow 2)$} \, .
\label{contspro+}
\end{equation}
This indeed is the sum of the commutators on the r.h.s.\ of (\ref{cont23})
because one finds directly from (\ref{comv1rh0}) through (\ref{comv3rh0})
with (\ref{vs1}) and (\ref{vs3})
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v_{S}^{(1)}\, ,
\, \rho^{(0)}_{F} (\rm{1};\vc{k}\, ) \Big]^+
| \vc{p} \, \rangle &=& 0\,,\\
\langle \vcp{p} \, | \Big[ v_{S}^{(1)}\, ,
\, \rho^{(2)}_{F,e} (\rm{1};\vc{k}\, ) \Big]^+
| \vc{p} \, \rangle \, &=&-i \frac{\Ff{e,1}{+}}{16m^2}\,
\vtos{S}\pssp{\vci{q}{2}}{\vc{k}} +\mbox{$(1 \leftrightarrow 2)$}
\,,\\
\langle \vcp{p} \, | \Big[ v_{S}^{(3)}\, ,
\, \rho^{(0)}_{F} (\rm{1};\vc{k}\, ) \Big]^+
| \vc{p} \, \rangle \, &=&- \frac{\Ff{e,1}{+}}{16m^2}\,
\vtos{S}\Big(4\ps{\vc{k}}{\vc{Q}}-i
\pssp{\vci{q}{2}}{\vc{k}}\Big)
\nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$}\,,
\end{eqnarray}
where the superscript ``+'', referring to the isospin dependence,
indicates that only the ``$+$'' part of the commutator is retained.
Clearly, the resulting current and the continuity equation it satisfies
are completely different from those one would obtain, e.g.,
by a minimal replacement in the LS potential neglecting further
relativistic effects.
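Note that in the divergence (\ref{contspro+}) only the $\vec k\cdot\vec Q$ term of the ``pro'' current survives, because $\vec k\cdot(\vec a\times\vec k)=0$ for any $\vec a$. A quick numeric check of this identity (treating the spin operator as an ordinary vector solely for the purpose of the cross product):

```python
import random

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

k = [random.uniform(-1, 1) for _ in range(3)]
a = [random.uniform(-1, 1) for _ in range(3)]  # stands in for sigma_1

# k . (a x k) = 0, so k . (Q + i a x k) reduces to k . Q
assert abs(dot(k, cross(a, k))) < 1e-12
```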
Let us now consider the ``$-$'' part of the operators.
There is one nonrelativistic term in (\ref{j2smes1-}) which we will not
list here again.
The relativistic contributions to the charge and current densities
follow from (\ref{j2spro-})-(\ref{j2sret-}) and the boost and separation
ones are listed in Appendix B.
After summing up these various terms one obtains finally for the
relativistic intrinsic operators
\begin{eqnarray}
\rhqQ{mes}{S}{-} &=&
\Ff{e,1}{-} \conm{S}{2}{} \, \ps{\vc{k}}{\vc{Q}}
\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, , \label{intrhsmes-}\\
\rhqQ{ret}{S}{-} &=&
\Ff{e,1}{-} \conm{S}{4}{ } \, \ps{\vci{q}{2}}{\vc{Q}}
\, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{intrhsret-}\\
\jrcq{F}{pro}{S}{-} &=&
- \Ff{e,1}{-} \conm{S}{4}{2} \,
(\frac{7}{8} \vc{k} + i \pvso{\vc{Q}} ) \, \prop{S}{2}+\mbox{$(1 \leftrightarrow 2)$} \nonumber\\
&=&
- \Ff{e,1}{-} \conm{S}{4}{2} \,
(\frac{3}{4} \vc{k} + i \pvso{\vc{Q}} ) \, \prop{S}{2}+\mbox{$(1 \leftrightarrow 2)$} \nonumber\\
&& - \frac{\vc{k}}{32m^2} \, \vc{k} \cdot \jnrq{mes}{S}
\, , \label{intjspro-}\\
\jrcq{F}{mes}{S}{-} &=&
- \Ff{e,1}{-}\, \conm{S}{4}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, \vc{q}\,
\Big[ 2 \vcs{Q} -
i \pssp{\vc{Q}}{\vc{q}\, } \nonumber\\
&& - i\pssm{\vc{Q}}{\vc{k}}
- 2 \frac{\ps{\vc{k}}{\vc{Q}}}{\ps{\vc{k}}{\vc{q}\, }}\,
\ps{\vc{q}}{\vc{Q}} \, \Big] \, , \label{intjsmes3-}\\
\jrcq{F}{ret}{S}{-} &=& - \conm{S}{8}{2}
\Big[ \frac{\Ff{e,1}{-}}{\ps{\vc{k}}{\vc{q}\, }} \,
\Big( \frac{1}{4} \ps{\vc{k}}{\vci{q}{2}}\,
\vc{k} \times (\vc{k} \times \vc{q}\, ) \nonumber\\
&&
+ \ps{\vc{Q}}{\vci{q}{2}}\, \Big[\vc{k} \times (\vc{q} \times \vc{Q}\, )
- 2 \vc{q}\, \ps{\vc{q}}{\vc{Q}} \, \Big]
\Big) \nonumber\\
&& - i\Gm{-}\, \ps{\vc{Q}}{\vci{q}{2}} \, \pvso{\vc{k}} \,
\Big] \, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{intjsret-}
\end{eqnarray}
The nonrelativistic current satisfies
\begin{equation}
\vc{k} \cdot \jnrq{mes}{S} = - \Ff{e,1}{-}\, \vtos{S} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{contsmes1-}
\end{equation}
which in conjunction with (\ref{comv1rh0}) is in agreement with (\ref{cont21}).
Obviously, the retardation charge and
current densities satisfy (\ref{contret}).
For the remaining divergence of the relativistic ``pro'' and ``mes''
currents, one finds from the above expressions
\begin{eqnarray}
\vc{k} \cdot\Big( \jrcq{F}{pro}{S}{-}+ \jrcq{F}{mes}{S}{-}\Big)&=&
\nonumber\\
&&\hspace*{-6cm}\frac{\Ff{e,1}{-}}{8m^2} \vtos{S}
\Big[ 2 \vcs{Q} + \frac{3}{2} \vcs{k}
+ i \, (\vs{1} + \vs{2}) \times \vc{Q} \cdot (\vc{k} - \vc{q}\, )
\Big]
+ \mbox{$(1 \leftrightarrow 2)$}
\nonumber\\
&&\hspace*{-6cm}
+ \Ff{e,1}{-} \conm{S}{2}{2} \,\ps{\vc{q}}{\vc{Q}} \ps{\vc{k}}{\vc{Q}}
\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}
-\frac{\vcs{k}}{32m^2} \, \vc{k} \cdot \jnrq{mes}{S}
\, .
\label{cons-}
\end{eqnarray}
The first term on the r.h.s.\ is equal to the sum of the following two
commutators
\begin{eqnarray}
\langle \vcp{p} \, |
\Big[ v_{S}^{(1)}\, ,
\, \rho^{(2)}_{F,e} (\rm{1};\vc{k}\, ) \Big]^- | \vc{p} \, \rangle&=&
\frac{\Ff{e,1}{-}}{16m^2} \vtos{S}\,
\Big[ 2\vcs{k} +
i \, (\vs{1} + \vs{2}) \cdot (\vc{Q} \times \vc{k} \, )
\Big] + \mbox{$(1 \leftrightarrow 2)$} \, ,
\\
\langle \vcp{p} \, |\Big[ v_{S}^{(3)}\, , \,
\rho_F^{(0)} (\rm{1};\vc{k}\, ) \Big]^-| \vc{p} \, \rangle &=&
\frac{\Ff{e,1}{-}}{8m^2} \vtos{S}\,
\Big[ 2\vcs{Q} + \frac{1}{2} \vcs{k}
+ i \, (\vs{1} + \vs{2}) \cdot ( \vc{Q} \times \vc{q_2}\, )
\Big]\nonumber\\
&& + \mbox{$(1 \leftrightarrow 2)$} \,,
\end{eqnarray}
while the next term is just the commutator of $t^{(1)}$ with
the mesonic density (\ref{intrhsmes-}) according to (\ref{comt1}).
The last term in (\ref{cons-}) is the recoil current contribution
in (\ref{cont23}).
This completes the verification of the ``$-$'' part of
the continuity equation.
Finally, let us briefly describe how one can obtain the conserved current
if the replacement (\ref{mrep}) is made in the relativistic part of the
NN potential. First of all, it follows from (\ref{comv3rh0})
that the ``$+$'' part of the continuity
equation is not affected. For the ``$-$'' part, an additional term arises
in the commutator of the nonrelativistic charge density with the potential,
namely
\begin{equation}
- \Ff{e,1}{-} \conm{S}{4}{2} (m_S^2 + \vcsi{q}{2}) \prop{S}{2}
+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{mrepcons}
\end{equation}
Notice that, due to the nucleon exchange term, this additional contribution
vanishes in the absence of strong form factors. This suggests the
following prescription for handling the more general case with form factors:
(i) rearrange the different terms of the current without form factors,
i.e., shift some part from the ``mes'' to the ``pro'' component,
so that its divergence explicitly contains the additional piece
(\ref{mrepcons}); (ii) then introduce the
strong form factors into the modified potential and the rearranged current.
Obviously, this procedure is not unique, but we see no more consistent
way of solving this problem.
The resulting modified current is a little more complicated for scalar meson
exchange, but for vector meson exchange it simplifies significantly.
Let us consider the vector
\begin{equation}
- \frac{1}{2} (\vci{q}{1} - \vci{q}{2}) \Big[ \prop{S}{2} + \mbox{$(1 \leftrightarrow 2)$} -
(2 m_S^2 + \vcsi{q}{1} + \vcsi{q}{2} ) \prop{S}{1} \prop{S}{2}
\Big] \, ,
\label{mrepcur}
\end{equation}
which vanishes if the strong form factors are disregarded and whose
divergence is just
\begin{equation}
( m_S^2 + \vcsi{q}{2} ) \prop{S}{2} - \mbox{$(1 \leftrightarrow 2)$} \, .
\end{equation}
Multiplying (\ref{mrepcur}) with $ - \Ff{e,1}{-} \conm{S}{4}{2} $\
and adding it to the intrinsic current
above, one gets a new conserved current for the modified
potential. For the currents of this section
this means the following replacements
in (\ref{intjspro-}) $\frac{3}{4} \vc{k} \rightarrow (\frac{3}{4} \vc{k} -
\vc{q}\,)$ and in (\ref{intjsmes3-}) $\vcs{Q} \rightarrow ( \vcs{Q} + m_S^2 +
\vcs{q} + \frac{1}{4} \vcs{k} )$.
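The statement that (\ref{mrepcur}) vanishes without strong form factors reduces to the algebraic identity $\Delta_1 + \Delta_2 = (2 m_S^2 + \vec q_1^{\;2} + \vec q_2^{\;2})\,\Delta_1\Delta_2$ for the bare propagators $\Delta_i = 1/(\vec q_i^{\;2} + m_S^2)$. A numeric spot check with arbitrary illustrative values:

```python
m2 = 0.3                 # m_S^2 (illustrative)
q1sq, q2sq = 0.7, 1.9    # arbitrary squared momenta

D1 = 1.0 / (q1sq + m2)   # bare propagator Delta(q_1^2)
D2 = 1.0 / (q2sq + m2)   # bare propagator Delta(q_2^2)

# the bracket of (mrepcur) with form factors switched off
bracket = D1 + D2 - (2 * m2 + q1sq + q2sq) * D1 * D2
assert abs(bracket) < 1e-12
```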
\subsection{Vector meson exchange}
The exchange of a vector meson contributes to the potential
by
\begin{eqnarray}
\vto{V} &=& \con{V}\, \prop{V}{} \, , \label{vv1}\\
\vtt{V} &=& - \mpw{4}{2}\, \vto{V} \Big[
(1+ 2 \kappa_V) \vcs{q} + (1+\kappa_V)^2
\pf{\vs{1}}{\vc{q}\, }{\vs{2}}{\vc{q}\, } \nonumber\\
&& - \vcs{Q} - i(\frac{3}{2} + 2 \kappa_V)
\pssp{\vc{q}}{\vc{Q}} \Big] \nonumber\\
&=& \vtt{V,q} + \vtt{V,\sigma q} + \vtt{V,Q} + \vtt{V,LS} \, . \label{vv3}
\end{eqnarray}
The term $\vtt{V,\sigma q}$\ contains a central spin-spin and a tensor
part. Again, \vcs{q}\ is often replaced by $ - m_V^2$\
and then $\vtt{V}$\ may be redefined to be
\begin{eqnarray}
\hat v_V^{(3)}(\vc{q},\vc{Q}) &=& \mpw{4}{2}\, \vto{V} \Big\{
(1+ 2 \kappa_V) m_V^2 + (1+\kappa_V)^2
\Big[ m_V^2 \ps{\vs{1}}{\vs{2}} +
\ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vc{q}\, } \Big]
\nonumber\\
&& + \vcs{Q} + i(\frac{3}{2} + 2 \kappa_V)
\pssp{\vc{q}}{\vc{Q}} \Big\} \, . \label{vv3r}
\end{eqnarray}
As for the nonrelativistic potential \vto{V}, many of the currents are
obtained from those of the scalar exchange given in the previous section by
the replacements $ m_S \rightarrow m_V$ and $g_S^2 \rightarrow - g_V^2$.
For the ``$+$'' part, one gets in this way
the retardation current from (\ref{intrhsret+})
and (\ref{intjsret+}). In addition, there is the ``pro'' part of the current
which follows from (\ref{j2vpro+})
\begin{equation}
\jrcq{F}{pro}{V}{+} =
\mpwf{4}{2}{\Ff{e,1}{+}} \vtos{V}
\Big[ \vc{Q} - i(1+\kappa_V) \pvsp{\vci{q}{2}} \Big] +\mbox{$(1 \leftrightarrow 2)$} \, .
\label{intjvpro+}
\end{equation}
Its divergence should equal the sum of the following commutators
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v_{V}^{(1)}\, ,
\, \rho^{(2)}_{F,e} (\rm{1}) \Big]^+ | \vc{p} \, \rangle \, &=&
- i\frac{\Ff{e,1}{+}}{16m^2} \vtos{V} \pssp{\vci{q}{2}}{\vc{k}}
+ \mbox{$(1 \leftrightarrow 2)$} \, ,\\
\langle \vcp{p} \, | \Big[ v_{V}^{(3)}\, ,
\, \rho_F^{(0)} (\rm{1};\vc{k}\, ) \Big]^+ | \vc{p} \, \rangle &=&
\frac{\Ff{e,1}{+}}{8m^2} \vtos{V}\Big[ 2\ps{\vc{k}}{\vc{Q}}
-i (\frac{3}{2} + 2 \kappa_V) \pssp{\vci{q}{2}}{\vc{k}}\Big]
\nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \, ,
\end{eqnarray}
which is easy to verify.
For the ``$-$'' part the replacements
$ m_S \rightarrow m_V$ and $g_S^2 \rightarrow - g_V^2$\ yield
the nonrelativistic current \jnrq{mes}{V} from (\ref{j2smes1-}),
the retardation charge \rhqQ{ret}{V}{-} and current
\jrcq{F}{ret}{V}{-} from (\ref{intrhsret-}) and (\ref{intjsret-}),
respectively, and the mesonic charge \rhqQ{mes}{V}{-}
from (\ref{intrhsmes-}). In order to get the remaining currents,
one should take (\ref{j2vpro-})-(\ref{j2vmestr-}) and add
the nonretardation parts of the boost and separation
contributions, i.e., (\ref{jschrpro}) and (\ref{jschs})
with the appropriate replacement of the
meson parameters. Keeping the ``mes-tr'' mesonic current separately
(see appendix A), we finally obtain
\begin{eqnarray}
\jrcq{F}{pro}{V}{-} &=&
\Ff{e,1}{-} \conm{V}{4}{2} \,
\Big[ (1+\kappa_V) \vci{q}{2} - \kappa_V \vci{q}{1} -
\frac{1}{4} \vc{k}
- i (1+ 2 \kappa_V) \pvso{\vc{Q}} \nonumber\\
&& - (1+\kappa_V)^2 \vs{1} \times ( \vs{2} \times \vci{q}{2})\Big]
\, \prop{V}{2}+\mbox{$(1 \leftrightarrow 2)$} \nonumber\\
&& - \frac{\vc{k}}{32m^2} \, \vc{k} \cdot \jnrq{mes}{V}
\, , \label{intjvpro-}\\
\jrcq{F}{mes}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{2}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, \vc{q}
\Big[ (1+2 \kappa_V)(\vcs{q} + \frac{\vcs{k}}{4}) - \vcs{Q}
\nonumber\\
&&
- (1+\kappa_V)^2
\pf{\vs{1}}{\vci{q}{1}}{\vs{2}}{\vci{q}{2}}
+ i(\frac{3}{2} + 2 \kappa_V) \pssp{\vc{Q}}{\vc{q}\, }\nonumber\\
&& + i(\frac{1}{2} + \kappa_V) \pssm{\vc{Q}}{\vc{k}}
- \frac{\ps{\vc{k}}{\vc{Q}}}{\ps{\vc{k}}{\vc{q}\, }}\,
\ps{\vc{q}}{\vc{Q}}
\Big] \, , \label{intjvmes-}\\
\rhqQ{mes-tr}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{2}{} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \nonumber\\
&& \Big[ 2 \ps{\vc{k}}{\vc{Q}} +
i (1+\kappa_V) \pssp{\vc{q}}{\vc{k}} \Big] \, ,
\label{intrhvmestr-} \\
\jrcq{F}{mes-tr}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{4}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \nonumber\\
&& \Big\{ 2 \ps{\vc{q}}{\vc{Q}}
\Big[ 2 \vc{Q} +
i(1+\kappa_V) \Big(\pvsp{\vc{q}} + \frac{1}{2} \pvsm{\vc{k}}\Big) \Big]
\nonumber\\
&&
+ \vc{k} \times \Big[ (1+\kappa_V)^2 \, (\vs{1} \times \vci{q}{1})
\times (\vs{2} \times \vci{q}{2}) \nonumber\\
&&
+ i (1+\kappa_V) \Big( \pvsm{\vc{q}} + \frac{1}{2} \pvsp{\vc{k}}
\Big)\times \vec Q \Big] \Big\}
\, .
\label{intjvmestr-}
\end{eqnarray}
Let us now examine the continuity equation for the ``$-$'' part
of the current. For its nonrelativistic and retardation parts it is
satisfied exactly as in the case of scalar meson exchange.
Since also the transverse mesonic current satisfies
\begin{equation}
\vc{k} \cdot \jrcq{F}{mes-tr}{V}{-} =
\frac{\ps{\vc{q}}{\vc{Q}}}{m}\, \rhqQ{mes-tr}{V}{-}\,,
\end{equation}
we are left with the divergence of the relativistic ``pro'' and ``mes''
currents for which one finds
\begin{eqnarray}
\vc{k} \cdot\Big( \jrcq{F}{pro}{V}{-}+ \jrcq{F}{mes}{V}{-}\Big)&=&
\nonumber\\
&&\hspace*{-6.5cm}\frac{\Ff{e,1}{-}}{4m^2} \vtos{V}
\bigg( \frac{1}{4} \vcs{k}-\vcs{Q} +(1 + 2 \kappa_V) \vcsi{q}{2}
+(1 + \kappa_V)^2 \pf{\vs{1}}{\vci{q}{2}}{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&\hspace*{-6.5cm}+i \Big( \frac{3}{2}+
2 \kappa_V \Big) \pssp{\vc{Q}}{\vc{q}\, }
-i \Big( \frac{1}{2}+ \kappa_V \Big) \pssp{\vc{Q}}{\vc{k}}\bigg)
+ \mbox{$(1 \leftrightarrow 2)$}
\nonumber\\
&&\hspace*{-6.5cm}
- \Ff{e,1}{-} \conm{V}{2}{2} \,\ps{\vc{q}}{\vc{Q}} \ps{\vc{k}}{\vc{Q}}
\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}
-\frac{\vcs{k}}{32m^2} \, \vc{k} \cdot \jnrq{mes}{V}
\, .
\label{conv-}
\end{eqnarray}
Again, the first term on the r.h.s.\ is equal to the sum of the following
two commutators
\begin{eqnarray}
\langle \vcp{p} \, |
\Big[ v_{V}^{(1)}\, ,
\, \rho^{(2)}_{F,e} (\rm{1};\vc{k}\, ) \Big]^- | \vc{p} \, \rangle &=&
\frac{\Ff{e,1}{-}}{16m^2} \,\vtos{V}
\Big( 2 \vcs{k} + i \pssp{\vc{Q}}{\vc{k}} \Big) +\mbox{$(1 \leftrightarrow 2)$} \,, \\
\langle \vcp{p} \, |\Big[ v_{V}^{(3)}\, , \,
\rho_F^{(0)} (\rm{1};\vc{k}\, ) \Big]^-| \vc{p} \, \rangle &=&
\frac{\Ff{e,1}{-}}{4m^2} \,\vtos{V}
\Big((1 + 2 \kappa_V)\, \vcsi{q}{2} + (1 + \kappa_V)^2\,
\pf{\vs{1}}{\vci{q}{2}}{\vs{2}}{\vci{q}{2}} \nonumber\\
&& \hspace*{-0.5truecm}
- \vcs{Q} - \frac{1}{4} \vcs{k}
- i ( \frac{3}{2}+ 2 \kappa_V ) \pssp{\vc{Q}}{\vci{q}{2}} \Big)
+\mbox{$(1 \leftrightarrow 2)$} \, ,
\end{eqnarray}
while the next term is just the commutator of $t^{(1)}$ with
the mesonic density and
the last term in (\ref{conv-}) is the recoil current contribution
in (\ref{cont23}). Thus the continuity condition is satisfied also for
vector meson exchange.
Finally, in order to obtain the conserved currents for the
modified potential (\ref{vv3r}), one can use again the same procedure
as in the previous section. Namely, switching off the strong form
factors for a moment and using in the mesonic current the following
identities
\begin{eqnarray}
\ps{\vci{q}{1}}{\vci{q}{2}} & = & \frac{1}{2} \Big[ \vcs{k} -
( \vcsi{q}{1} + \vcsi{q}{2} ) \Big] \, , \\
\vcsi{q}{1} + \vcsi{q}{2} &=&
- 2 m_V^2 + (m_V^2 + \vcsi{q}{1}) + (m_V^2 + \vcsi{q}{2}) \, ,
\end{eqnarray}
one gets the modified intrinsic currents
\begin{eqnarray}
\jrcq{F}{pro}{V}{-} &=&
\Ff{e,1}{-} \conm{V}{4}{2} \,
\Big( \frac{1}{4} \vc{k}
+ (1+\kappa_V)^2 \Big( \frac{\vc{k}}{2} \ps{\vs{1}}{\vs{2}}
- \vs{2} \ps{\vs{1}}{\vci{q}{2}} \Big)
\nonumber\\
&&
- i (1+ 2 \kappa_V) \pvso{\vc{Q}} \Big)
\, \prop{V}{2}+\mbox{$(1 \leftrightarrow 2)$} \nonumber\\
&& - \frac{\vc{k}}{32m^2} \, \vc{k} \cdot \jnrq{mes}{V}
\, , \label{intjvpror-}\\
\jrcq{F}{mes}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{2}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, \vc{q}
\,\Big[ - (1+2 \kappa_V) m_V^2 - \vcs{Q}
\nonumber\\
&&
+ i(\frac{3}{2} + 2 \kappa_V) \pssp{\vc{Q}}{\vc{q}\, }
+ i(\frac{1}{2} + \kappa_V) \pssm{\vc{Q}}{\vc{k}}
\nonumber\\
&&
+ (1+\kappa_V)^2 \Big(
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{1}}
- (m_V^2 + \frac{\vcs{k}}{2}) ( \vs{1} \cdot \vs{2} )
\Big)
\nonumber\\
&& - \frac{\ps{\vc{k}}{\vc{Q}}}{\ps{\vc{k}}{\vc{q}}}\,
\ps{\vc{q}}{\vc{Q}}
\Big] . \label{intjvmes3r-}
\end{eqnarray}
Unlike for the scalar exchange, the modified current is in this case
somewhat simpler than the original one.
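The kinematical identity used above, $\vec q_1\cdot\vec q_2 = \frac{1}{2}\big[\vec k^2 - (\vec q_1^{\;2}+\vec q_2^{\;2})\big]$, follows directly from $\vec q_1 + \vec q_2 = \vec k$ and can be spot-checked numerically (plain Python, arbitrary illustrative momenta):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

k = [0.4, -1.2, 0.9]
q = [0.8, 0.1, -0.5]
q1 = [0.5 * ki + qi for ki, qi in zip(k, q)]  # q_1 = k/2 + q
q2 = [0.5 * ki - qi for ki, qi in zip(k, q)]  # q_2 = k/2 - q

lhs = dot(q1, q2)
rhs = 0.5 * (dot(k, k) - dot(q1, q1) - dot(q2, q2))
assert abs(lhs - rhs) < 1e-12
```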
\subsection{Pseudoscalar meson exchange}
For pseudoscalar exchange one faces the problem of different, unitarily
equivalent representations which can be characterized by a parameter \mbox{$\tilde{\mu}$}\
(see Appendix A).
The simplest form of the pseudoscalar exchange potential
corresponds to the representation with $\mbox{$\tilde{\mu}$} = 0$.
In this case, only one simple relativistic contribution appears besides the
nonrelativistic potential
\begin{eqnarray}
\vto{PS} &=& - \conm{PS}{4}{2}\,
\ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vc{q}\, } \prop{PS}{} \, ,\label{vps1}\\
\vtt{PS} &=& \conm{PS}{16}{4}\, (\vcs{Q} + \vcs{q} )\,
\ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vc{q}\, } \prop{PS}{} \, . \label{vps3}
\end{eqnarray}
Since the operator form of the last term is $ -\{ \vcs{p}, \, v_{PS}^{(1)}\}
$, its inclusion into the Lippmann-Schwinger equation causes numerical
instabilities \cite{GAr}. Therefore, most realistic $NN$ potentials
disregard any relativistic correction to the OPEP.
Let us first consider the ``$+$'' part of the e.m.\ operators. The intrinsic
currents follow from (\ref{rprounit}) and (\ref{jprounit}) in conjunction
with the expressions of (\ref{rh2pspro+})-(\ref{j2psret+}),
(\ref{rh2psvos})-(\ref{j2psvos}) and (\ref{j2pschiv})
\begin{eqnarray}
\rhqQ{pro}{PS}{+} &=& \Ff{e,1}{+}\, \conm{PS}{32}{3}
\Big[ 3 \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}} - \frac{1}{2} \,
\pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}} \, \Big]\!\!\prop{PS}{2} \nonumber\\
&& +\mbox{$(1 \leftrightarrow 2)$} \, , \label{intrhpspro+}\\
\jrcq{F}{pro}{PS}{+} &=& \conm{PS}{16}{4} \Big\{ \Ff{e,1}{+} \Big[
\vc{Q} \Big( \vs{1} \cdot(\vc{k}-\vc{q}\,) \ps{\vs{2}}{\vci{q}{2}}
+\frac{1}{8}\, \Sigma^{(+)}(\vec q_2,\,\vec k)
\Big)
\nonumber\\
&&
-\vs{1} \ps{\vci{q}{2}}{\vc{Q}}\ps{\vs{2}}{\vci{q}{2}}
- i \vc{k} \times \vci{q}{2} \ps{\vs{2}}{\vci{q}{2}}
- \frac{1}{4} \vci{q}{2}\, \Sigma^{(+)}(\vec q_2,\,\vec Q)
\Big]
\nonumber\\
&&
+ \frac{1}{4} \Gm{+} \,
\vc{k} \times \Big[ 3\vs{1}\times\vec Q \,\ps{\vs{2}}{\vci{q}{2}}
-\vs{1}\times \vci{q}{2} \,\ps{\vs{2}}{\vc{Q}}
\nonumber\\
&&
- \frac{i}{2} \vci{q}{2} \ps{\vs{2}}{\vc{k}\, } \Big] \prop{PS}{2}
+\mbox{$(1 \leftrightarrow 2)$} \, , \label{intjpspro+}\\
\rhqQ{ret}{PS}{+} &=& \Ff{e,1}{+}\, \conm{PS}{32}{3} \,
\ps{\vc{k}}{\vci{q}{2}} \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
\, \propp{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{intrhpsret+}\\
\jrcq{F}{ret}{PS}{+} &=& \conm{PS}{64}{4} \Big\{
\Ff{e,1}{+}\, \Big[ 2 \vci{q}{2}\, \ps{\vc{Q}}{\vc{q}\, } + \vc{k} \times
\pv{\vc{Q}}{\vci{q}{2}} \Big] \ps{\vs{1}}{\vci{q}{2}}
\nonumber\\
&&
- \Gm{+}\, \vc{k} \times \Big[
2 \vs{1} \times \vci{q}{2} \ps{\vc{Q}}{\vci{q}{2}}
+i\vci{q}{2} \ps{\vc{k}}{\vci{q}{2}} \Big] \Big\} \ps{\vs{2}}{\vci{q}{2}}\,
\propp{PS}{2} \nonumber\\
&& +\mbox{$(1 \leftrightarrow 2)$} \, , \label{intjpsret+}
\end{eqnarray}
where $\Sigma^{(\pm)}(\vec a, \,\vec b)$ is defined in (\ref{Bab}).
With respect to the continuity condition (\ref{cont23}), one notes that
again the retardation operators satisfy (\ref{contret}),
while for the divergence of the ``pro'' current one gets
\begin{eqnarray}
\vc{k} \cdot \jrcq{F}{pro}{PS}{+} &=&
\Ff{e,1}{+} \conm{PS}{16}{4}\,
\Big\{ \Big[\ps{\vc{k}}{\vc{Q}} \ps{\vs{1}}{\vci{q}{2}}
+ \ps{\vc{q}}{\vc{Q}} \ps{\vs{1}}{\vc{k}} \Big]\ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
- \frac{1}{4} \ps{\vc{k}}{\vci{q}{2}} \, \Sigma^{(+)}(\vec q_2,\,\vec Q)
+ \frac{1}{8} \ps{\vc{k}}{\vc{Q}}\, \Sigma^{(+)}(\vec q_2,\,\vec k)
\Big\}
\prop{PS}{2} \nonumber\\
&& +\mbox{$(1 \leftrightarrow 2)$} \, . \label{conps1}
\end{eqnarray}
Explicit evaluation of the commutators on the r.h.s.\ of (\ref{cont23})
leads to
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v^{(1)}_{PS} \, , \,
\rho^{(0)} (1;\vc{k}\, ) \Big]^+
| \vc{p} \, \rangle &=& 0\,,\\
\langle \vcp{p} \, |
\Big[ t^{(1)}\, ,
\, \rho^{(2)}_{F} (\rm{2};\mbox{pro};\vc{k}\, )^+_{PS} \Big]
| \vc{p} \, \rangle &=&
\Ff{e,1}{+} \conm{PS}{64}{4}\, \ps{\vc{q}}{\vc{Q}}
\Big[ 5 \ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}} \nonumber\\
&& +
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vc{k}} \Big]
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{conps2}\\
\langle \vcp{p} \, | \Big[ v^{(1)}_{PS} \, , \,
\rho^{(2)}_{F,e} (1;\vc{k}\, )\Big]^+
| \vc{p} \, \rangle &=& \Ff{e,1}{+} \conm{PS}{64}{4}
\Big[\ps{\vc{Q}}{\vci{q}{2}}\Sigma^{(+)}(\vec q_2,\,\vec k)
\nonumber\\
&&
- \ps{\vc{k}}{\vci{q}{2}}\Sigma^{(+)}(\vec q_2,\,\vec Q)
\Big]\!
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{conps4}\\
\langle \vcp{p} \, | \Big[ v^{(3)}_{PS}
, \, \rho^{(0)} (1;\vc{k}\, ) \Big]^+ | \vc{p} \, \rangle &=&
\Ff{e,1}{+} \conm{PS}{16}{4}\, \ps{\vc{k}}{\vc{Q}}
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
\prop{PS}{2}
\nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \, , \label{conps3}
\end{eqnarray}
and their appropriate sum equals (\ref{conps1}).
With respect to the ``$-$'' currents, we do not repeat the expressions
for the nonrelativistic currents (\ref{j2pspro1-})-(\ref{j2psmes1-}); unchanged,
they define the corresponding nonrelativistic intrinsic ones.
The FW-currents for $\mbox{$\tilde{\mu}$} = 0$ follow from
(\ref{rprounit})-(\ref{jprounit}). Let us first collect the
charge densities. There are no additional contributions to them
from the \mbox{$\chi_{\sigma}$}\ and \mbox{$\chi_r$}\ boost commutators since there
is no nonrelativistic exchange charge operator, and the ``$-$'' part of the
interaction dependent boost $\chi_V$ of (\ref{rh2pschiv}) disappears in the
Breit frame. Thus one finds
\begin{eqnarray}
\rhqQ{pro}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{32}{3}
\Sigma^{(+)}(\vec q_2, \,\vec Q)
\prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,\label{intrhpspro-}\\
\rhqQ{mes}{PS}{-} &=&
- \Ff{e,1}{-} \conm{PS}{8}{3} \, \ps{\vc{k}}{\vc{Q}} \,
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, ,
\label{intrhpsmes-}\\
\rhqQ{ret}{PS}{-} &=&
\Ff{e,1}{-} \conm{PS}{16}{3}\, \ps{\vc{Q}}{\vci{q}{2}}\,
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
\ \propp{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, .
\label{intrhpsret-}
\end{eqnarray}
The spatial currents are more complicated than those for the scalar and
vector meson exchanges, since first of all the nonrelativistic meson-nucleon
vertex depends on
the nucleon spin and, furthermore, there is a nonrelativistic ``pro'' current
generating additional boost and separation contributions, which are listed in
Appendix B.
The \mbox{$\tilde{\mu}$} -independent FW-current densities are listed in
(\ref{j2pspro3-}), (\ref{j2psmes3-}) and (\ref{j2psret-}).
Combining all terms, we find for the relativistic intrinsic current operators
\begin{eqnarray}
\jrcq{F}{pro}{PS}{-} &=& \conm{PS}{32}{4}\, \Big\{ \Ff{e,1}{-}\,
\Big[ \vs{1} \Big\{ \Big(3 \ps{\vc{k}}{\vc{q}\,}-4\vcs{q} - 2\vcs{Q}\Big)
\ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
+\ps{\vc{q}}{\vci{q}{2}}\ps{\vs{2}}{\vc{q}\,}
-\ps{\vc{Q}}{\vci{q}{2}}\ps{\vs{2}}{\vc{Q}}
+\frac{i}{2}(\vec k \times \vec q\,)\cdot \vec Q \Big\}
\nonumber\\
&&
+ \frac{1}{4}\vc{k}\, \Big\{ \Big(7\ps{\vs{1}}{\vc{q}\,} -
4\ps{\vs{1}}{\vc{k}}\Big) \ps{\vs{2}}{\vci{q}{2}}-\frac{1}{2}
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vc{k}\, }\Big\}
\nonumber\\
&&
-\vc{q}\,\Big\{\vs{1}\cdot (\vc{k} + \vci{q}{1})
\ps{\vs{2}}{\vci{q}{2}}
+\frac{1}{2}\ps{\vs{1}}{\vc{q}\,} \ps{\vs{2}}{\vc{k}\, }\Big\}
\nonumber\\
&&
-i(2\vec k -\vec q\,)\times \vec Q\, \ps{\vs{2}}{\vci{q}{2}}
+\frac{1}{2} \vec Q\,\Sigma^{(-)}(\vec q_2, \,\vec Q)
\Big]
\nonumber\\
&&
+ \frac{1}{4} \Gm{-} \vc{k} \times \Big[
\Big( 6i\, \vc{Q} + 5\, \vc{k} \times \vs{1} \Big) \ps{\vs{2}}{\vci{q}{2}} +
\vci{q}{2} \times \vs{1}\, \ps{\vs{2}}{\vc{k}\, }
\nonumber\\
&&
- 2i \vci{q}{2}\, \ps{\vs{2}}{\vc{Q}} \, \Big]
\Big\}\prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \,,
\label{intjpspro-}\\
\jrcq{F}{mes}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{32}{4}\, \vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}\
\Big\{\vcs{q}\Big[\ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}}
+ \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}}\Big]
\nonumber\\
&&
+ 4 \Big(\vcs{q} + \vcs{Q} -
\frac{\ps{\vc{k}}{\vc{Q}}\ps{\vc{q}}{\vc{Q}}}{\ps{\vc{k}}{\vc{q}\, }}
\Big) \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
+ 2\ps{\vci{q}{2}}{\vc{Q}} \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{Q}}
+ 2\ps{\vci{q}{1}}{\vc{Q}} \ps{\vs{1}}{\vc{Q}} \ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
-i\,(\vec k \times \vec q\,)\cdot \vec Q\,\Big(\ps{\vs{1}}{\vci{q}{1}}+
\ps{\vs{2}}{\vci{q}{2}}\Big) \Big\} \, ,
\label{intjpsmes-}\\
\jrcq{F}{ret}{PS}{-} &=& - \conm{PS}{64}{4} \Big\{ \Ff{e,1}{-}\,
\Big[
\Big( 4 \, \ps{\vci{q}{2}}{\vc{Q}}^2 - \ps{\vc{k}}{\vci{q}{2}}^2 \, \Big)
\Big( \vs{1} - \vc{q}\,\frac{\ps{\vs{1}}{\vci{q}{1}}}{\ps{\vc{k}}{\vc{q}\,}}
\Big)
\nonumber\\
&& + \ps{\vs{1}}{\vci{q}{2}}
\Big( \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} - 2
\vc{Q}\, \ps{\vci{q}{2}}{\vc{Q}} \, \Big) \, \Big] \nonumber\\
&&
+ \Gm{-} \vc{k} \times \Big[ 2i \vci{q}{2}\,
\ps{\vci{q}{2}}{\vc{Q}}
+ \vs{1} \times \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}}\,
\Big] \Big\}\ps{\vs{2}}{\vci{q}{2}} \propp{PS}{2}
\nonumber\\
&& + \mbox{$(1 \leftrightarrow 2)$} \, .
\label{intjpsret-}
\end{eqnarray}
Finally, with respect to the continuity equation (\ref{cont23}), it is easy
to see that the ``ret''-part satisfies (\ref{contret}). The
verification for the remaining current is straightforward but lengthy. Since
it is desirable to see in detail the correspondence between the various
commutators and the associated currents, we have outlined it
in Appendix E.
\section{Summary and Outlook}
In this work we have
consistently derived a complete set of intrinsic one- and two-body
e.m.\ charge and current density operators, including the leading-order
relativistic contributions, for a one-boson-exchange potential consisting
of scalar, pseudoscalar and vector meson exchanges. These operators have to
be evaluated between intrinsic rest-frame wave functions only, since they
already include the effects of the wave function boosts. Furthermore, they
respect gauge and Lorentz invariance up to terms of leading relativistic
order.
The definition of the intrinsic operators is based
upon the general separation of the center-of-mass motion from the intrinsic
one as described in \cite{AdAr95}. It relies on the general properties of
the Poincar\'{e} generators in conjunction with a $1/m$-expansion.
Although the explicit construction of the operators has been done here
in the framework of the extended S-matrix method of \cite{ATA}, other
methods are known to give unitarily equivalent results.
These operators now allow one to study relativistic effects in e.m.\ processes
in nuclei in a more systematic and consistent way, at least at not too high
energy and momentum transfer, since they rely on the $1/m$-expansion.
They will also allow a systematic comparison with covariant approaches
\cite{Tj} currently being applied only for the simplest two-nucleon
system. Confrontation of their results \cite{HuTj} with more conventional
ones using the $1/m$-expansion seems to indicate that it might be more
important to incorporate relativistic effects into the
nuclear currents than into the nuclear dynamics.
At present, there are only a few consistent treatments within the
$1/m$-expansion, namely for deuteron photodisintegration
using a simplified dynamical model of a pure one-pion-exchange interaction
\cite{GAr} and
for electrodisintegration using a realistic interaction model \cite{TrA,Tam}.
However, additional simplifications have been introduced in \cite{BWAr} by
considering only quasifree kinematics where the interaction currents in
general are negligible, and in \cite{TrA} by keeping only local
velocity-independent terms. In those approximate treatments,
effects due to genuine relativistic components of the intrinsic wave
functions have been neglected, too.
However, a thorough treatment has also to include the relativistic effects
in the internal dynamics. This would in particular
imply a readjustment of realistic
potential models, because at least some relativistic effects
are, implicitly or explicitly, accounted for by fitting some
phenomenological parameters of
a realistic nucleon-nucleon potential to experimental scattering data,
even though the potential is then often inserted into the nonrelativistic
Schr\"{o}dinger equation.
Therefore, the next task will be to include all
leading order terms in a realistic calculation for the two-body system,
which is not very difficult for the charge operator.
However, when one turns to the spatial current,
the number of terms increases enormously. But with present-day methods, it
is not impossible. Particularly interesting will be the study of
polarization observables because some of them appear to be very
sensitive to relativistic e.m.\ currents already at rather
small momentum transfer \cite{BWAr,ArL95}. In this region it seems reasonable
to include them in the framework of the $1/m$-expansion.
\section*{Acknowledgements}
This work has been supported by the Deutsche Forschungsgemeinschaft
(SFB 201), by grants GA AV CR 148410 and GA CR 202/94/037 and by the
US Department of Energy under contract \#DE-AC05-84ER40150.
J.~A.\ thanks the Alexander von Humboldt-Foundation for
supporting his stay in Mainz.
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix A: Foldy-Wouthuysen currents}
In this appendix we list the FW one-nucleon and exchange currents
as obtained by the extended S-matrix technique of \cite{ATA}.
The equations from \cite{ATA} are
referred to as ATA-(number of eq.). The one-nucleon currents and the
one-pion-exchange ones were carefully compared to the results of \cite{GAr}.
A couple of misprints in \cite{ATA} were found and we correct
them below.
We follow the notation of \cite{ATA} for particle momenta
($i= 1,\,2$)
\begin{eqnarray}
\vci{q}{i} &=& \vcpi{p}{i} - \vci{p}{i} \, , \\
\vci{Q}{i} &=& \vcpi{p}{i} + \vci{p}{i} \, ,
\end{eqnarray}
and for the e.m.\ form factors
\begin{eqnarray}
\hat{e}_i &=& \frac{1}{2}
\Big( F_1^{is} (k^2) + F_1^{iv} (k^2) \, \tau_i^3 \, \Big) \, , \\
\hat{\kappa}_i &=& \frac{1}{2}
\Big( F_2^{is} (k^2) + F_2^{iv} (k^2) \, \tau_i^3 \, \Big) \, ,
\end{eqnarray}
where $\tau_i^3$\ is the third isospin component of the $i$th nucleon.
$F_{1,2}^{is/iv}$\ denote the isoscalar and isovector Dirac-Pauli nucleon form
factors, respectively.
In passing to the Breit frame, one finds for the two-nucleon system
\begin{eqnarray}
\vci{q}{1,2} & = & \frac{1}{2} \vc{k} \pm \vc{q} \, ,
\label{q12b} \\
\vci{Q}{1,2} & = &\pm \vc{Q} \, ,
\end{eqnarray}
with $\vc{q} = \vcp{p} - \vc{p}$, $\vc{Q} = \vcp{p} + \vc{p}$ and
$\vec p^{\,(\prime)}=(\vec p^{\,(\prime)}_1-\vec p^{\,(\prime)}_2)/2$.
However, we keep the notation $ \vci{q}{1,2}$\ in the expressions for
the intrinsic currents instead of (\ref{q12b}), whenever it
simplifies the formulas. The time components of $q_i$, which
describe the energy transferred at the corresponding vertex, are
expressed in terms of the on-shell nucleon kinetic energies,
i.e., up to the order considered
\begin{equation}
q_{i0} = \frac{1}{2m} \ps{\vci{q}{i}}{\vci{Q}{i}} +\ord{-3}\, ,
\label{QBreit}
\end{equation}
where $m$ denotes the nucleon mass.
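The relation (\ref{QBreit}) is just the difference of nonrelativistic kinetic energies: $(\vec p_i^{\,\prime\,2} - \vec p_i^{\,2})/2m = \vec q_i\cdot\vec Q_i/2m$, since $\vec p_i^{\,\prime\,2} - \vec p_i^{\,2} = (\vec p_i^{\,\prime} - \vec p_i)\cdot(\vec p_i^{\,\prime} + \vec p_i)$. A numeric check (arbitrary momenta, $m = 1$ purely for illustration):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

m = 1.0
p  = [0.2, -0.4, 0.7]   # initial nucleon momentum
pp = [0.9, 0.3, -0.1]   # final nucleon momentum p'

qi = [a - b for a, b in zip(pp, p)]  # q_i = p' - p
Qi = [a + b for a, b in zip(pp, p)]  # Q_i = p' + p

dE = (dot(pp, pp) - dot(p, p)) / (2 * m)  # kinetic-energy transfer
assert abs(dE - dot(qi, Qi) / (2 * m)) < 1e-12
```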
With respect to the meson exchange currents associated with the various
mesons of a given OBE potential, it is useful to separate the isospin
dependence in the potential contribution, writing
$V_B = T\, \tilde{v}_B$,
where $T= \ps{\vci{\tau}{1}}{\vci{\tau}{2}}$ for an
isovector meson, and $T= 1$\ for an isoscalar one.
Because the isospin dependence of the MECs originates from the e.m.\ form
factors and from the isospin operators of the potential,
it is convenient to separate the MECs into pieces proportional
to $F_{e/\kappa,i}^{\pm}$\ and $G_{M,i}^{\pm}$ as defined by
\begin{eqnarray}
\Ff{e,i}{\pm} &=& \hat{e}_i \, T \, \pm \, T \, \hat{e}_i \, ,
\label{fisoe}\\
\Ff{\kappa,i}{\pm} &=& \hat{\kappa}_i \, T \, \pm \, T \, \hat{\kappa}_i \, ,
\label{fisok}\\
G_{M,i}^{\pm} &=& \Ff{e,i}{\pm} \, + \, \Ff{\kappa,i}{\pm} \, .
\label{gisom}
\end{eqnarray}
We would like to emphasize that this is {\em not} the separation into
isoscalar and isovector parts.
Notice also that $F_{e/\kappa,i}^{-}$\ and $G_{M,i}^{-}$ are
odd under nucleon interchange and thus vanish for isoscalar exchange.
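The stated properties of the ``$-$'' combinations can be verified with explicit Pauli matrices. The sketch below is our illustrative addition (the representation $\hat{e}_1=(1+\tau_1^3)/2$ for the charge operator is an assumption for the check): with $T=\ps{\vci{\tau}{1}}{\vci{\tau}{2}}$ one finds $[\hat{e}_1,T]=-[\hat{e}_2,T]$, i.e.\ oddness under $1\leftrightarrow 2$, while for $T=1$ the commutator vanishes trivially.

```python
import numpy as np

# Pauli matrices; tau1_a = tau_a (x) 1, tau2_a = 1 (x) tau_a act in the
# two-nucleon isospin space (4x4 matrices via Kronecker products).
tau = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2, dtype=complex)
tau1 = [np.kron(t, I2) for t in tau]
tau2 = [np.kron(I2, t) for t in tau]

T = sum(a @ b for a, b in zip(tau1, tau2))   # tau_1 . tau_2 (isovector case)
e1 = 0.5 * (np.eye(4) + tau1[2])             # charge operator of nucleon 1
e2 = 0.5 * (np.eye(4) + tau2[2])             # charge operator of nucleon 2

Fm1 = e1 @ T - T @ e1                        # F_{e,1}^- (isovector exchange)
Fm2 = e2 @ T - T @ e2                        # F_{e,2}^-
```

Since the total charge $\hat{e}_1+\hat{e}_2$ commutes with the isoscalar $\ps{\vci{\tau}{1}}{\vci{\tau}{2}}$, the two commutators are equal and opposite.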
\subsection*{One-nucleon currents}
For completeness we will begin with the well-known
expressions for the one-nucleon current in two-nucleon space
in an arbitrary frame of reference. To this end we remind the reader
of the general definition of the matrix element of an operator $a(\vc{k})$
in two-nucleon momentum space transferring a momentum $\vec k$
\begin{equation}
\langle \vcp{P}\,\vcp{p}|a(\vc{k})|\vc{P}\,\vc{p}\,\rangle
= a(\vc{k},\vc{K},\vc{q},\vc{Q})\,\delta(\vcp{P}-\vc{P}-\vc{k}) \, ,
\end{equation}
with $\vc{K} = \vcp{P} + \vc{P}$.
For a one-body operator $a(1;\vc{k})$ one finds
\begin{eqnarray}
\langle \vcp{P}\,\vcp{p}|a(1;\vc{k})|\vc{P}\,\vc{p}\,\rangle &=&
\langle \vcp{p_1}\,\vcp{p_2}|a(1;\vc{k})|\vc{p_1}\,\vc{p_2}\rangle
\nonumber\\
&=& a(1;\vc{k},\vc{Q_1})_1 \delta(\vc{q_2}) + \mbox{$(1 \leftrightarrow 2)$}
\nonumber\\
&=& \Big(a(1;\vc{k},\vc{Q_1})_1 + \mbox{$(1 \leftrightarrow 2)$}\Big)
\delta(\vcp{P}-\vc{P}-\vc{k})\,.
\end{eqnarray}
The last line follows from the fact that $a(1;\vc{k},\vc{Q_1})_1$ contains
$\delta(\vc{q_1}-\vc{k})$.
Therefore one gets
\begin{equation}
a(1;\vc{k},\vc{K},\vc{q},\vc{Q}) = a(1;\vc{k},\vc{Q_1})_1
+ \mbox{$(1 \leftrightarrow 2)$}\,,\label{akQ}
\end{equation}
where $\vci{q}{1,2}$ is given in (\ref{q12b}) and
$\vci{Q}{1,2}=\vc{K}/2\pm \vc{Q}$.
Thus it is sufficient to list the contribution of nucleon ``1'', i.e.,
$a(1;\vc{k},\vc{Q_1})_1$, since the contribution of the other one is
obtained by adding the exchange \mbox{$(1 \leftrightarrow 2)$}.
For the one-body current including the leading
relativistic order, one has
\begin{eqnarray}
\rho_{FW}^{(0)} (1;\vc{k},\vci{Q}{1})_1 &=& \mbox{$\hat{e}_1$} \delta(\vc{q_1}-\vc{k})\, ,
\label{r10}\\
\rho_{FW}^{(2)} (1;\vc{k},\vci{Q}{1})_1 &=& - \frac{\mbox{$\hat{e}_1$} + 2 \mbox{$\hat{\kappa}_1$}}{8m^2}
\Big( \vcs{k} + i \psso{\vc{Q}_1}{\vc{k}} \Big)\, \delta(\vc{q_1}-\vc{k})
\, ,\label{r12} \\
\vc{\jmath}_{FW}^{\, \, (1)} (1;\vc{k},\vci{Q}{1})_1 &=& \frac{1}{2m}
\Big( \mbox{$\hat{e}_1$} \vci{Q}{1} + i (\mbox{$\hat{e}_1$} + \mbox{$\hat{\kappa}_1$} )\, \pvso{\vc{k}} \, \Big)\,
\delta(\vc{q_1}-\vc{k})\, ,\label{j11}\\
\vc{\jmath}_{FW}^{\, \, (3)} (1;\vc{k},\vci{Q}{1})_1 &=&
- \frac{\vcsi{Q}{1} + \vcs{k}}{8m^2}
\vc{\jmath}_{FW}^{\, \, (1)} (1;\vc{k},\vci{Q}{1})_1
\nonumber\\
&& - \frac{1}{16m^3}\Big[
\Big( \mbox{$\hat{e}_1$} \,\ps{\vc{k}}{\vci{Q}{1}} + 4 \mbox{$\hat{\kappa}_1$} m k_0 \Big)
(\vc{k} + i \pvso{\vci{Q}{1}} ) \nonumber\\
&&
+\mbox{$\hat{\kappa}_1$} \, \vc{k} \times\Big[\vci{Q}{1} \times (\vc{k}+i\pvso{\vci{Q}{1}} )
\Big] \Big]
\delta(\vc{q_1}-\vc{k}) \, .\label{j13}
\end{eqnarray}
Note that ATA-(4.1d), corresponding to (\ref{j13}),
has a wrong power of $m$ and the vector product is missing in the
last term. In the second line of (\ref{j13}), $k_0$ stands for the total
energy transfer. This means that the corresponding one-nucleon current
contains implicitly some interaction effects.
The divergence of this part of the current equals
the commutator of the full nonrelativistic Hamiltonian with
that part of the Darwin-Foldy and spin-orbit charge density of (\ref{r12})
which is proportional to \mbox{$\hat{\kappa}_1$}.
For the free nucleon one has the relation $k_0 = \ps{\vc{k}}{\vci{Q}{1}}/
2m$. It is possible to redefine the
one-nucleon current by replacing
$k_0$\ by its free-nucleon value or, alternatively, by introducing the
full momentum transfer also in the preceding term proportional
to \mbox{$\hat{e}_1$}. In this case, the MECs have to be redefined consistently \cite{ATA}.
In this paper we adopt the particular form (\ref{j13}), since then the
corresponding MECs have the simplest form.
In order to get the currents in the Breit frame, one simply replaces
$\vci{Q}{1}$\ by $\vc{Q}$, and the $\delta$-function in (\ref{akQ})
then becomes $\delta (\frac{\vc{k}}{2} - \vc{q}\, )$. Furthermore, for the
$\mbox{$(1 \leftrightarrow 2)$}$-exchange one has to make the replacements $\vec Q \rightarrow -\vec
Q$ and $\delta (\frac{\vc{k}}{2} - \vc{q}\, )\rightarrow
\delta (\frac{\vc{k}}{2} + \vc{q}\, )$.
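As a simple numerical illustration of the one-body expressions above (our addition; the momentum values, charge, and anomalous moment are arbitrary stand-ins, and a c-number vector replaces the spin operator), the nonrelativistic current (\ref{j11}) satisfies the free continuity relation $\ps{\vc{k}}{\vc{\jmath}^{\,(1)}} = k_0\,\rho^{(0)}$ with $k_0 = \ps{\vc{k}}{\vci{Q}{1}}/2m$, because the magnetization term is transverse:

```python
import numpy as np

m = 938.9                         # nucleon mass in MeV (illustrative)
e, kap = 1.0, 1.79                # charge and anomalous moment of nucleon "1"
k  = np.array([0.3, -0.1, 0.2])   # photon momentum (arbitrary units)
Q1 = np.array([0.5, 0.4, -0.2])   # Q_1 = p_1' + p_1 (arbitrary)
s1 = np.array([0.0, 0.0, 1.0])    # c-number stand-in for the spin operator

# eq. (j11): convection plus magnetization current of nucleon "1"
j1 = (e * Q1 + 1j * (e + kap) * np.cross(s1, k)) / (2.0 * m)
rho0 = e                          # eq. (r10)
k0 = np.dot(k, Q1) / (2.0 * m)    # free-nucleon energy transfer

# k.(s1 x k) = 0 identically, so only the convection term
# contributes to the longitudinal part k.j1 = k0 * rho0.
```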
\subsection*{Meson exchange currents}
The various contributions from the exchange of a given meson type to
the OBE potential are derived in \cite{ATA} for an arbitrary reference frame.
Here we need only the intrinsic (\vc{P}-independent) parts
of the potential. Their momentum representation
is denoted by $\tilde{v}_B (\vc{q}, \vc{Q})$ where the isospin dependence
has been separated. Thus, up to the order considered one has
\begin{equation}
\tilde{v}_B (\vc{q}, \vc{Q}) = \vto{B} + \vtt{B} + \ord{-5} \, ,
\label{potform}
\end{equation}
where \vto{B}\ is an even function of \vc{q}.
Unlike in \cite{ATA}, we include here explicitly
the hadronic form factors into the potentials and exchange currents. This is
done by modifying expressions containing a single propagator of a meson
of mass $m_B$ by
\begin{eqnarray}
{\Delta}_B (z)= \frac{1}{m_B^2 + z } \rightarrow
\tilde{\Delta}_B (z) & = &
\frac{f_B^2 (z)}{m_B^2 + z} \, , \label{prop} \\
{\Delta}_B^{\, \prime} (z)= - \frac{1}{(m_B^2 + z)^2} \rightarrow
\tilde{\Delta}_B^{\, \prime} (z) & = &
\frac{d}{d \, z} \ \tilde{\Delta}_B (z) \, ,
\label{propp}
\end{eqnarray}
where $z$ stands for $\vcs{q}$ and $f_B$ is the
strong form factor at the meson-nucleon vertex.
For the meson-in-flight or mesonic currents the introduction
of a hadronic form factor
consistent with gauge invariance is achieved by the replacement
\begin{equation}
{\Delta}_B (z_1){\Delta}_B (z_2) \rightarrow
f_B(z_1,\,z_2) =
- \, \frac{1}{z_1-z_2} \Big(\tilde{\Delta}_B (z_1) -
\tilde{\Delta}_B (z_2) \Big) \, ,
\label{fqq}
\end{equation}
where $z_i$ stands for $\vcs{q_i}$ ($i=1,\,2$). This is the minimal form
which fulfills the continuity equation \cite{Ris84}.
If the hadronic form factor is assumed to be of the form
\begin{equation}
f_{B} (z) = \Big( \frac{\Lambda^2 - m_B^2}{\Lambda^2 + z}
\Big)^{n/2} ,\,\, n = 1, 2, \ldots
\end{equation}
where $\Lambda$ is a range parameter, it can be shown that
(\ref{fqq}) then corresponds to the minimal substitution in the meson
propagator, yielding
\begin{eqnarray}
f_B(z_1,\,z_2) &=& (-)^{n-1} \frac{(\Lambda^2 - m_B^2)^{n}}{(n-1)!}
\nonumber\\
&&
\frac{d^{n-1}}{d (\Lambda^2)^{n-1}}
\Big[ \frac{1}{\Lambda^2- m_B^2}
\Big( \frac{1}{z_1 + m_B^2} \frac{1}{z_2 + m_B^2}
- \frac{1}{z_1 +\Lambda^2} \frac{1}{z_2 + \Lambda^2} \Big)\Big] \,.
\end{eqnarray}
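For $n=1$ the equivalence of the definition (\ref{fqq}) with the closed minimal-substitution form can be checked numerically. The sketch below is our illustrative addition (the helper names and the values of $m_B^2$ and $\Lambda^2$ are assumptions); it uses the partial-fraction form of the dressed propagator:

```python
mB2, Lam2 = 0.6, 2.5        # m_B^2 and Lambda^2, arbitrary units (illustrative)

def prop_ff(z):
    """Dressed meson propagator, eq. (prop), with the n = 1 form factor:
    f_B(z)^2 = (Lambda^2 - m_B^2) / (Lambda^2 + z)."""
    return (Lam2 - mB2) / ((Lam2 + z) * (mB2 + z))

def f_mes(z1, z2):
    """Mesonic propagator combination, eq. (fqq)."""
    return -(prop_ff(z1) - prop_ff(z2)) / (z1 - z2)

def f_mes_closed(z1, z2):
    """Closed minimal-substitution form for n = 1."""
    return (1.0 / ((z1 + mB2) * (z2 + mB2))
            - 1.0 / ((z1 + Lam2) * (z2 + Lam2)))
```

The agreement follows from the partial-fraction decomposition $\tilde{\Delta}_B(z) = 1/(m_B^2+z) - 1/(\Lambda^2+z)$ for $n=1$.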
All mesonic currents are of nonrelativistic order.
In order to get higher order terms, one has to consider the
higher order contributions to the meson-nucleon vertices and the
effects of retardation of the exchanged mesons, i.e., dependence
of the propagator function on the energies transferred. For the general
expression (\ref{fqq}) this means using the Taylor decomposition
\begin{eqnarray}
f_B(z_1-\delta_1, z_2-\delta_2)& =& f_B(z_1, z_2)
+ \, \frac{1}{(z_1- z_2)} \Big(
(\delta_1 - \delta_2) f_B(z_1, z_2) \nonumber\\
& &+ \delta_1 \tilde{\Delta}_B^{\, \prime} (z_1)
-\delta_2 \tilde{\Delta}_B^{\, \prime} (z_2) \Big)+{\cal O}(\delta^2) \,
,\label{ffret}
\end{eqnarray}
where $\delta_i=q_{i0}^2$. This relation generalizes ATA-(4.10e).
The first term on the r.h.s.\ of (\ref{ffret}) contributes to
the nonrelativistic mesonic current, the second one also has a
propagator structure of the mesonic current and thus will be added to the
relativistic ``mes'' part. It can be shown that
the divergence of the corresponding current saturates the commutator
of the kinetic energy with the mesonic charge density. The last
terms have to be combined with the retardation currents.
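The Taylor decomposition (\ref{ffret}) can likewise be verified numerically for small retardation corrections $\delta_i = q_{i0}^2$. The following sketch is our addition (the $n=1$ form factor and the parameter values are illustrative assumptions); it checks that the terms neglected on the r.h.s.\ are indeed of ${\cal O}(\delta^2)$:

```python
mB2, Lam2 = 0.6, 2.5                 # illustrative m_B^2 and Lambda^2

def prop_ff(z):                      # dressed propagator for n = 1
    return 1.0 / (mB2 + z) - 1.0 / (Lam2 + z)

def dprop_ff(z):                     # its derivative, eq. (propp)
    return -1.0 / (mB2 + z)**2 + 1.0 / (Lam2 + z)**2

def f_mes(z1, z2):                   # eq. (fqq)
    return -(prop_ff(z1) - prop_ff(z2)) / (z1 - z2)

def f_ret(z1, z2, d1, d2):
    """First-order retardation expansion, r.h.s. of eq. (ffret)."""
    return f_mes(z1, z2) + ((d1 - d2) * f_mes(z1, z2)
                            + d1 * dprop_ff(z1)
                            - d2 * dprop_ff(z2)) / (z1 - z2)
```

For $\delta_i \sim 10^{-4}$ the first-order formula reproduces the exact shifted combination $f_B(z_1-\delta_1, z_2-\delta_2)$ to roughly $\delta^2$ accuracy, while the zeroth-order term alone misses it at ${\cal O}(\delta)$.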
The MECs as derived in \cite{ATA} are associated with particular
relativistic Feynman diagrams: nucleon Born, contact, and mesonic
contribution. In other techniques, the same currents formally
arise from a different set of
(time-ordered) diagrams. We therefore group the currents
according to their propagator structure; this also allows us to
combine several contributions and to present the results in a
more compact form. The MECs with a single meson propagator
$\prop{B}{2}$ attached are labelled
``pro'', those corresponding to meson-in-flight currents
containing $f_B(\vec q_1^{\,\,2},\, \vec q_2^{\,\,2})$
``mes'', and finally those containing
the derivative of a propagator ``ret'', since they arise
from the retardation or boost contributions.
The ``pro'' and ``ret'' operators can be separated
according to which of the two nucleons interacts with the e.m.\ field. We always
give only the terms corresponding to the e.m.\ interaction of
the first nucleon.
The retardation of the exchanged mesons gives rise to contributions
both to the potential and to the MECs. The different prescriptions
of how to treat the retardation yield different, but unitarily equivalent
results \cite{Fr80}. The unitary transformation acts in the
intrinsic space only.
This unitary equivalence can be parametrized by
a parameter $\nu $. The value $\nu = 1/2$\ corresponds to the nonretardation
potential in the c.m.\ frame, but it is impossible to find a suitable
value for $\nu$\
which would simplify {\em all} MECs. Nevertheless, the representation
corresponding to $\nu = 1/2$\ is most frequently used and we give
all intrinsic operators for this choice. For completeness, we list the
additional operators for a general $\nu $\ in Appendix D
where we also have corrected several misprints of \cite{ATA}. For this
reason, we will directly refer to this appendix with respect to the
``ret'' contributions.
For pseudoscalar meson exchange,
there is an additional unitary freedom related to
various off-energy shell continuations of the energy dependence
of the $\pi NN$ vertex. The corresponding unitary transformation
is parametrized by a parameter \mbox{$\tilde{\mu}$}.
The simplest form for potential and MEC results
from the choice $\mbox{$\tilde{\mu}$} = 0$, and we have adopted this choice
in the main body of this paper. However, the widely used
OBEPQ Bonn potentials correspond to the value $\mbox{$\tilde{\mu}$} = -1$
\cite{AGA}.
Therefore, we present the additional \mbox{$\tilde{\mu}$}-dependent
operators in Appendix A.
We have already mentioned in Sect.\ 3
that the MECs simplify for all exchanges if
a part of the relativistic one-nucleon current density is considered
to contain interaction effects through its dependence on
the total energy transfer $k_0$ (see (\ref{j13})). Then, the currents, denoted
as ``external'' in \cite{ATA}, are implicitly included and should
not be added to the MECs (see section 5 of \cite{ATA}).
For simplicity, all MECs are multiplied by the Dirac e.m.\
form factor $F_1$. For this reason, one has to replace in the explicit
expressions of \cite{ATA} the term
$-iF_B^{mes/con}(\vec \tau_1 \times \vec \tau_2\,)^3$
by $F_{e,1}^-$.
It is shown in Appendix B of \cite{ATA} how one can modify the MECs
in the $1/m$ expansion framework in order to incorporate
independent e.m.\ form factors as suggested in
\cite{GrR}. The resulting additional currents of ATA-(B.17)
are proportional to the differences $F_{\gamma BB}- F^V_1$ or
$F_{\gamma NNB}- F^V_1$.
These differences are of the order
of $\vcs{k}/\Lambda^2$, where $\Lambda$ is the corresponding
cut-off parameter. The currents ATA-(B.17) are therefore of
leading relativistic order. Moreover, they are frame-independent
and therefore can be added to the intrinsic currents considered
in this paper without any further modification.
With respect to our notation, we would like to remark that the currents in the
momentum space representation depend in general on \vc{k}, \vc{K}, \vc{q},
and \vc{Q}, but for brevity only their \vc{k}-dependence
is retained in our notation of the operators, while in the explicit
expressions on the r.h.s.\ of the equations we use \vc{k},
$\vci{q}{1/2}=\frac{1}{2}\vc{k}\pm \vc{q}$, and $\vci{Q}{1/2}=\frac{1}{2}
\vc{K}\pm \vc{Q}$ for convenience.
\subsubsection*{Scalar meson exchange}
With respect to the ``$+$'' part of the MECs for scalar exchange,
there is a single nonretardation current from ATA-(4.4b)
\begin{equation}
\jrcFq{pro}{S}{+} =
- \Ff{e,1}{+} \mpw{4}{2} \vtos{S} ( \vci{Q}{1} + i \pvso{\vc{k}} )
+ \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{j2spro+}
\end{equation}
where $\vtos{S}$ is given in (\ref{vs1}).
Furthermore, one gets the retardation charge and current densities from
(\ref{rhret})-(\ref{jretSV}) for $\tilde \nu = 0$
\begin{eqnarray}
\rhFqQ{ret}{S}{+} &=& \Ff{e,1}{+}\, \conm{S}{8}{ } \,
\ps{\vc{k}}{\vci{q}{2}}\, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2sret+}\\
\jrcFq{ret}{S}{+} &=& \conm{S}{16}{2}
\Big\{
\Ff{e,1}{+}\, \Big( \vci{Q}{1}\, \ps{\vc{k}}{\vci{q}{2}} +
\vci{q}{2}\, (\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2}\, \Big)
\nonumber\\
&& + i G_{M,1}^{+}\, \pvso{\vc{k}}\, \ps{\vc{k}}{\vci{q}{2}}\,
\Big\} \, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{j2sret+}
\end{eqnarray}
Similarly, one obtains for the ``$-$'' part from
ATA-(4.3b) one nonrelativistic current
\begin{equation}
\jnrFq{mes}{S} =
\Ff{e,1}{-} \con{S} \, (\vci{q}{1} - \vci{q}{2} )\,
\mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, . \label{j2smes1-}
\end{equation}
The relativistic ``pro'' and ``mes'' contributions follow from
ATA-(4.4b, 4.7d, 4.10a)
\begin{eqnarray}
\jrcFq{pro}{S}{-} &=&
\Ff{e,1}{-} \mpw{4}{2} \vtos{S} ( \vc{k} + i \pvso{\vci{Q}{1}} ) +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{j2spro-}\\
\rhFqQ{mes}{S}{-} &=&
\Ff{e,1}{-} \con{S} \, (q_{10} - q_{20}) \, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, , \label{rh2smes-}\\
\jrcFq{mes}{S}{-} &=&
- \Ff{e,1}{-}\, \conm{S}{8}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, (\vci{q}{1} - \vci{q}{2}\, )
\Big(
\vcsi{Q}{1} + \vcsi{Q}{2}
\nonumber\\
&&
+ i \psso{\vci{q}{1}}{\vci{Q}{1}} + i \psst{\vci{q}{2}}{\vci{Q}{2}}
-8m^2\frac{q_{10}^2 - q_{20}^2}{\vcsi{q}{1} - \vcsi{q}{2}} \Big) \,
\, , \label{j2smes-}
\end{eqnarray}
while again the ``ret'' currents come from
(\ref{rhret})-(\ref{jretSV}) for $\tilde \nu = 0$
\begin{eqnarray}
\rhFqQ{ret}{S}{-} &=&
- \Ff{e,1}{-} \conm{S}{8}{ }\, (\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2}
\ \propp{S}{2} + \mbox{$(1 \leftrightarrow 2)$}
\, , \label{rh2sret-}\\
\jrcFq{ret}{S}{-} &=& - \conm{S}{16}{2} \,
\Bigg\{ \Ff{e,1}{-}\, \Big( \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} +
\vci{Q}{1}\, (\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2}
\, \Big) \nonumber\\
&&
+ iG_{M,1}^{-}\, \pvso{\vc{k}} \
(\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2}
+ \Ff{e,1}{-}\,16m^2
\frac{\vci{q}{1} - \vci{q}{2}}{(\vcsi{q}{1} - \vcsi{q}{2})} \,
q_{20}^2 \, \Bigg\} \propp{S}{2}\nonumber\\
&& + \mbox{$(1 \leftrightarrow 2)$} \, . \label{j2sret-}
\end{eqnarray}
\subsubsection*{Vector meson exchange}
Large parts of the MEC for vector meson exchange can
be obtained from the scalar case by the replacements
$m_S \rightarrow m_V$, $g_S^2 \rightarrow - g_V^2$.
Those parts will not be listed again, rather we shall refer to the
corresponding expressions for scalar exchange keeping in mind the
aforementioned replacements. Only the additional currents will now be listed
explicitly.
We start again with the ``$+$'' part for which we have only
one additional current from ATA-(4.4c)
\begin{equation}
\jrcFq{pro}{V}{+} =
- \Ff{e,1}{+} \mpw{4}{2} \vtos{V} \Big( \vci{Q}{2} +
i(1+\kappa_V )\, \pvsp{\vci{q}{2}} \Big) +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{j2vpro+}
\end{equation}
where $\kappa_V$\ is the usual ratio of the anomalous (tensor) to the
normal (vector) $VNN$\ coupling constants. The retardation currents
follow from (\ref{rh2sret+}) and (\ref{j2sret+}).
For the ``$-$'' part, we have first the retardation terms from (\ref{rh2sret-})
and (\ref{j2sret-}). For the additional currents we collect
from the expressions in
ATA-(4.4c, 4.9a)
\begin{eqnarray}
\jrcFq{pro}{V}{-} &=&
\Ff{e,1}{-} \mpw{4}{2} \vtos{V}
\Big( (1+\kappa_V) \vci{q}{2} - \kappa_V \vci{q}{1}
- (1+\kappa_V)^2 \, \vs{1} \times (\vs{2} \times \vci{q}{2} )
\nonumber\\
&&
- i \kappa_V \pvso{\vci{Q}{1}} + i (1+\kappa_V) \pvso{\vci{Q}{2}}
\Big) +\mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2vpro-}
\end{eqnarray}
For the ``mes'' currents, one has two contributions from ATA-(4.7c, 4.8).
First from (\ref{rh2smes-}) the charge density $ \rhFqQ{mes}{V}{-}$,
and from ATA-(4.10b) plus the last term of (\ref{j2smes-}) the current
density
\begin{eqnarray}
\jrcFq{mes}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{4}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, (\vci{q}{1} - \vci{q}{2}\, )
\nonumber\\
&&
\Big[ (\frac{1}{2} + \kappa_V) ( \vcsi{q}{1} + \vcsi{q}{2}) +
\ps{\vci{Q}{1}}{\vci{Q}{2}}
- (1+\kappa_V)^2
\pf{\vs{1}}{\vci{q}{1}}{\vs{2}}{\vci{q}{2}}
\nonumber\\
&& + i(\frac{1}{2} + \kappa_V) \Big(
\psso{\vci{Q}{1}}{\vci{q}{1}} + \psst{\vci{Q}{2}}{\vci{q}{2}} \Big)
\nonumber\\
&& - i(1+\kappa_V) \Big(
\psso{\vci{Q}{2}}{\vci{q}{1}} + \psst{\vci{Q}{1}}{\vci{q}{2}} \Big)
-4m^2\frac{q_{10}^2 - q_{20}^2}{\vcsi{q}{1} - \vcsi{q}{2}}
\Big] \, . \label{j2vmes-}
\end{eqnarray}
Second, from the transverse part of the $\gamma VV$\ vertex as given in
ATA-(4.7e, 4.10c)
\begin{eqnarray}
\rhFqQ{mes-tr}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{2}{} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \nonumber\\
&& \Big( \vc{k} \cdot (\vci{Q}{1}-\vci{Q}{2}) +
i(1+\kappa_V) \pssp{\vci{q}{1}}{\vci{q}{2}} \Big) \, ,
\label{rh2vmestr-} \\
\jrcFq{mes-tr}{V}{-} &=&
\Ff{e,1}{-}\, \conm{V}{4}{2} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \nonumber\\
&& \Big\{ 2m k_0 \Big[ \vci{Q}{1} - \vci{Q}{2} +
i(1+\kappa_V) (\pvso{\vci{q}{1}} - \pvst{\vci{q}{2}} ) \Big]
\nonumber\\
&&
+ \vc{k} \times \Big[ (1+\kappa_V)^2 \, (\vs{1} \times \vci{q}{1})
\times (\vs{2} \times \vci{q}{2}) - \vci{Q}{1} \times \vci{Q}{2}
\nonumber\\
&& + i (1+\kappa_V) \Big(
(\vs{2} \times \vci{q}{2} ) \times \vci{Q}{1} -
(\vs{1} \times \vci{q}{1} ) \times \vci{Q}{2} \Big) \Big] \Big\}
\, .
\label{j2vmestr-}
\end{eqnarray}
The usual minimal form of this part
of the vertex is adopted here, in accordance with ATA. The discussion
and implications of a nonminimal form can be found in Appendix
B of ATA and in \cite{GrR}.
\subsubsection*{Pseudoscalar meson exchange}
In the case of pseudoscalar meson exchange, a unitary freedom
shows up in the potential and in the interaction currents which
is not present for scalar and vector exchange up to the order
considered here. Let us recall that in order to derive the
nuclear potential and e.m.\ current operators from a
meson-nucleon Lagrangian, one has to fix the energy transfer
at the meson-nucleon vertices. This is done in different ways
within various techniques \cite{Fr80}. In particular, in \cite{ATA}
the vertex energy transfer was replaced by the difference of the
on-mass-shell nucleon energies. For the off-energy-shell
parameter $\beta$\ introduced in \cite{AGA} this means $\beta = 0$. Most of the
other techniques, e.g.\ those of \cite{GAr,Fr77}, correspond to a symmetric
off-energy-shell continuation with $\beta = 1/2$.
Whatever choice is adopted, the final results
are unitarily equivalent \cite{Fr80}, the unitary parameter \mbox{$\tilde{\mu}$}\ being
given by
\begin{equation}
\mbox{$\tilde{\mu}$} = 4 \beta \, \Big( \frac{1-c}{2} (\mu -1 ) + 1 \Big) - 1 \, ,
\label{mut}
\end{equation}
where $\mu = 0\, (1) $\ for $PS$ ($PV$) $\pi NN$ coupling, and
$c$ is the so-called Barnhill parameter (for a more detailed discussion
see \cite{Fr80,GAr,Fr77,AGA}).
This unitary freedom does not affect the nonrelativistic parts of the
potential and the MECs, but only the leading-order relativistic contributions. It
means that $\tilde v^{(3)}$ and $j_\lambda^{(3)}(2;k)$ for an arbitrary
choice of the parameter $\mbox{$\tilde{\mu}$}$ are obtained from the operators for
$\mbox{$\tilde{\mu}$}=0$ by the approximate unitary transformation $(1-i\mbox{$\tilde{\mu}$} U_{PS})$ where
\begin{eqnarray}
\langle \vcpi{p}{1} \vcpi{p}{2} \, |\, U_{PS} \, |
\vci{p}{1} \vci{p}{2} \, \rangle & = & \tilde{U}_{PS} (\vc{q},
\vc{K}, \vc{Q}) \ps{\vci{\tau}{1}}{\vci{\tau}{2}}\, \delta(\vci{q}{1} +
\vci{q}{2}\, ) \, ,
\label{vosu1}\\
i \tilde{U}_{PS} (\vc{q},\vc{K}, \vc{Q}) &=& \conm{PS}{32}{3}
\Big[ \frac{1}{2}\Sigma^{(-)}(\vc{q},\, \vec K) - \Sigma^{(+)}(\vc{q},\, \vec Q)
\Big] \prop{PS}{}
\label{vosu2}\,,
\end{eqnarray}
where we have introduced, as an abbreviation for two vectors $\vec a$ and $\vec b$,
\begin{equation}
\Sigma^{(\pm)}(\vec a,\,\vec b) = \ps{\vec\sigma_1}{\vec a}
\ps{\vec\sigma_2}{\vec b} \pm
\ps{\vec\sigma_1}{\vec b}\ps{\vec\sigma_2}{\vec a}\,. \label{Bab}
\end{equation}
Note that here $\vec P^{\,\prime}=\vec P$ and
$\vec K=2\vec P=2(\vec p_1+\vec p_2)$. In detail one finds
\begin{equation}
\vtt{PS (\mbox{$\tilde{\mu}$})} = \vtt{PS} + \vtt{\mbox{$\tilde{\mu}$}}\,,
\end{equation}
where
\begin{eqnarray}
\vtt{\mbox{$\tilde{\mu}$}} &=& i\mbox{$\tilde{\mu}$}\langle \vcpi{p}{1} \vcpi{p}{2} \, | [t^{(1)},\, U_{PS}]
| \vci{p}{1} \vci{p}{2} \, \rangle
\nonumber\\
&=&- \mbox{$\tilde{\mu}$} \conm{PS}{32}{4} \ps{\vc{q}}{\vc{Q}} \Sigma^{(+)}(\vc{q},\, \vec Q)
\prop{PS}{}\, ,
\label{v3psmut}
\end{eqnarray}
and
\begin{equation}
j_{\lambda,FW} (\mbox{2};\mbox{$\tilde{\mu}$};\vc{k}\, )_{PS} =
j_{\lambda,FW} (\mbox{2};\vc{k}\, )_{PS}+
j_{\lambda,FW\,\mbox{$\tilde{\mu}$}} (\mbox{2};\vc{k}\, )_{PS}\,,
\label{voscom}
\end{equation}
with
\begin{equation}
j_{\lambda,FW\,\mbox{$\tilde{\mu}$}} (\mbox{2};\vc{k}\, )_{PS} =
i\mbox{$\tilde{\mu}$}\Big[ j_{\lambda,FW} (1;\vc{k}\, ) ,\, U_{PS} \Big] \, .
\label{voscom1}
\end{equation}
Evaluating (\ref{voscom1}) with (\ref{vosu2}),
one finds for the additional $\mbox{$\tilde{\mu}$}$-dependent FW currents
\begin{eqnarray}
\rho^{(2)}_{FW\,\mbox{$\tilde{\mu}$}}(2;\vec k\,)_{PS} &=& - \mbox{$\tilde{\mu}$} \conm{PS}{32}{3} \, \Big\{ \,
\Ff{e,1}{+}\, \ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}} \nonumber\\
&&
+ \Ff{e,1}{-}\, \Big[ \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{Q}{2}}
- \ps{\vs{1}}{\vci{Q}{1}} \ps{\vs{2}}{\vci{q}{2}} \Big]\, \Big\}
\prop{PS}{2}
+\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2psvos}\\
\vec \jmath^{\,\,(3)}_{FW\,\mbox{$\tilde{\mu}$}} (2;\vec k\,)_{PS}
&=& - \mbox{$\tilde{\mu}$} \conm{PS}{64}{4} \nonumber\\
&&
\Big\{\, \Ff{e,1}{+}\, \Big[
\vci{Q}{1} \ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}} - \vci{q}{2}
\Big(\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{Q}{2}}
- \ps{\vs{1}}{\vci{Q}{1}} \ps{\vs{2}}{\vci{q}{2}} \Big) \Big]
\nonumber\\
&&
+ \Gm{+}\, \vc{k} \times \Big[
\vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vci{Q}{2}}
- \vci{Q}{1} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}}\, \Big] \nonumber\\
&&
+ \Ff{e,1}{-}\, \Big[
\vci{Q}{1} \Big\{\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{Q}{2}}
- \ps{\vs{1}}{\vci{Q}{1}} \ps{\vs{2}}{\vci{q}{2}} \Big\}
- \vci{q}{2}\, \ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}}
\Big] \nonumber\\
&&
+ \Gm{-}\, \vc{k} \times \Big[
\vc{k} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}} +
i \vci{Q}{1} \ps{\vs{2}}{\vci{q}{2}} -
i \vci{q}{2} \ps{\vs{2}}{\vci{Q}{2}} \, \Big] \Big\} \prop{PS}{2}
\nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \, . \label{j2psvos}
\end{eqnarray}
After these remarks concerning the unitary freedom, we will now proceed to
give the explicit expressions for the FW currents. We will start with those
given in \cite{ATA} which correspond to the unitary
representation $\mbox{$\tilde{\mu}$} = -1$ \cite{Fr80,AGA}. However, this unitary freedom
affects only the ``pro''-currents, for which we will characterize the
expressions from \cite{ATA} by an additional subscript ``$ATA$''.
The FW currents for the
unitary representation $\mbox{$\tilde{\mu}$} = 0$ are then obtained by adding
(\ref{rh2psvos}) or (\ref{j2psvos}), respectively, for $\mbox{$\tilde{\mu}$} =1$
to the corresponding expressions, i.e.,
\begin{eqnarray}
\rhFqQ{pro}{PS}{\pm} &=& \rhFqQ{pro}{PS,\,ATA}{\pm} + \rho^{(2)}_{FW\,\mbox{$\tilde{\mu}$}=1}
(2;\vec k\,)_{PS}^{\pm}\,,\label{rprounit}
\\
\jrcFq{pro}{PS}{\pm} &=& \jrcFq{pro}{PS,\,ATA}{\pm} + \vec \jmath^{\,\,(3)}
_{FW\,\mbox{$\tilde{\mu}$}=1} (2;\vec k\,)_{PS}^{\pm}\,. \label{jprounit}
\end{eqnarray}
All other FW currents are \mbox{$\tilde{\mu}$}-independent.
Considering first the ``$+$'' terms,
their ``pro'' part follows from ATA-(4.4d) and (4.19a,b). Explicitly, one finds
\begin{eqnarray}
\rhFqQ{pro}{PS,\,ATA}{+} &=& \Ff{e,1}{+}\, \conm{PS}{8}{3} \,
\ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}}\,
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2pspro+x}\\
\jrcFq{pro}{PS,\,ATA}{+} &=& \conm{PS}{16}{4}
\Big\{ \Ff{e,1}{+}\, \Big[ \vci{Q}{1} \ps{\vs{1}}{\vci{q}{2}} +
\vs{1} \ps{\vc{k}}{\vci{Q}{1}} + \vs{1} \ps{\vci{q}{2}}{\vci{Q}{2}}
- i \pv{\vc{k}}{\vci{q}{2}} \, \Big] \nonumber\\
&&
+ \Ff{\kappa,1}{+}\, \vc{k} \times (\vs{1} \times \vci{Q}{1} ) \Big\}
\ps{\vs{2}}{\vci{q}{2}}\, \prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2pspro+x}
\end{eqnarray}
The transformations (\ref{rprounit}) and (\ref{jprounit}) then give the
FW-``pro''-current for $\mbox{$\tilde{\mu}$} = 0$
\begin{eqnarray}
\rhFqQ{pro}{PS}{+} &=& \Ff{e,1}{+}\, \conm{PS}{32}{3} \,
3\, \ps{\vs{1}}{\vc{k}} \ps{\vs{2}}{\vci{q}{2}}\,
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2pspro+}\\
\jrcFq{pro}{PS}{+} &=& \conm{PS}{16}{4}
\Big\{ \Ff{e,1}{+}\,\Big( \frac{1}{4}\vec q_2\,\Big[
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{Q}{2}}
- \ps{\vs{1}}{\vci{Q}{1}}\ps{\vs{2}}{\vci{q}{2}}\Big]\nonumber\\
&&
+ \Big[ \vci{Q}{1} \ps{\vs{1}}{(\vci{q}{2}+
\frac{3}{4}\vc{k}\,)} + \vs{1} \ps{\vci{q}{2}}{\vci{Q}{2}}
- i \pv{\vc{k}}{\vci{q}{2}}\Big]
\ps{\vs{2}}{\vci{q}{2}} \,\Big) \nonumber\\
&&
+ \frac{1}{4}\Gm{+}\, \vc{k} \times
\Big[3\,\vs{1} \times \vci{Q}{1}\,\ps{\vs{2}}{\vci{q}{2}}
- \vs{1} \times \vci{q}{2}\,\ps{\vs{2}}{\vci{Q}{2}} \Big] \Big\}
\prop{PS}{2} \nonumber\\
&&
+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2pspro+}
\end{eqnarray}
We take the ``ret'' part from (\ref{rhret})-(\ref{jret}) of Appendix
D.
\begin{eqnarray}
\rhFqQ{ret}{PS}{+} &=& \Ff{e,1}{+}\, \conm{PS}{32}{3} \,
\ps{\vc{k}}{\vci{q}{2}} \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}\,
\propp{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2psret+}\\
\jrcFq{ret}{PS}{+} &=& \conm{PS}{64}{4}
\Big\{ \Ff{e,1}{+}\, \ps{\vs{1}}{\vci{q}{2}}
\Big[ \vci{Q}{1}\, \ps{\vc{k}}{\vci{q}{2}} +
\vci{q}{2}\, (\vci{Q}{1} + 3 \vci{Q}{2} ) \cdot \vci{q}{2}
\, \Big] \nonumber\\
&&
+ \Gm{+}\, \vc{k} \times \, \Big[
\vs{1} \times \vci{q}{2}\,
(\vci{Q}{1} + 3 \vci{Q}{2} ) \cdot \vci{q}{2}
-i\vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} \Big] \Big\}
\ps{\vs{2}}{\vci{q}{2}}\, \propp{PS}{2}
\nonumber\\
&&+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2psret+}
\end{eqnarray}
With respect to the ``$-$'' part, there are first of all two well-known
\mbox{$\tilde{\mu}$}-independent nonrelativistic currents from ATA-(4.3d,e), namely
\begin{eqnarray}
\jnrFq{pro}{PS} &=&
\Ff{e,1}{-} \conm{PS}{4}{2} \, \vs{1} \ps{\vs{2}}{\vci{q}{2}} \prop{PS}{2}
+ \mbox{$(1 \leftrightarrow 2)$} \, , \label{j2pspro1-}\\
\jnrFq{mes}{PS} &=&
- \Ff{e,1}{-} \conm{PS}{4}{2} \, (\vci{q}{1} - \vci{q}{2} )\,
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, .
\label{j2psmes1-}
\end{eqnarray}
For the relativistic MECs the ``pro'' parts are taken from
ATA-(4.4d, 4.7b, 4.9b, 4.19).
Notice that the first ``$-$'' term in ATA-(4.4d) should read $\vs{1}
\vcsi{q}{2}$ and
the sign in front of the $G^{-}$-terms in ATA-(4.19b) should be changed.
Explicitly, one finds that $ \rhFqQ{pro}{PS}{-} $ vanishes since ATA-(4.7b)
and ATA-(4.19a) cancel exactly. For the current one has
\begin{eqnarray}
\jrcFq{pro}{PS,\,ATA}{-} &=& - \conm{PS}{32}{4} \Big\{
\Ff{e,1}{-} \Big[ \Big( \vs{1} ( 3 \vcsi{q}{2} +
\vcsi{Q}{1} + \vcsi{Q}{2}) + 2\vc{k} \ps{\vs{1}}{\vci{q}{2}}
+ \vci{q}{1} \ps{\vs{1}}{\vci{q}{1}}
\nonumber\\
&&
+ \vci{Q}{1} \ps{\vs{1}}{\vci{Q}{1}}
+ i (\vc{k} + \vci{q}{2}) \times \vci{Q}{1} \Big)
\ps{\vs{2}}{\vci{q}{2}}
+ \vs{1} \ps{\vs{2}}{\vci{Q}{2}} \ps{\vci{q}{2}}{\vci{Q}{2}} \Big]
\nonumber\\
&&
- 2 \Gm{-}\, \vc{k} \times \Big( i \vci{Q}{1} + \vc{k} \times \vs{1} \Big)
\ps{\vs{2}}{\vci{q}{2}} \Big\} \prop{PS}{2}
+\mbox{$(1 \leftrightarrow 2)$} \, .\label{j2pspro3-x}
\end{eqnarray}
Again, the transformations (\ref{rprounit}) and (\ref{jprounit}) then give the
FW-``pro''-current for $\mbox{$\tilde{\mu}$} = 0$
\begin{eqnarray}
\rhFqQ{pro}{PS}{-} &=&- \Ff{e,1}{-}\,\conm{PS}{32}{3} \,
\Big[ \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{Q}{2}}
- \ps{\vs{1}}{\vci{Q}{1}} \ps{\vs{2}}{\vci{q}{2}} \Big]
\prop{PS}{2}
\nonumber\\
&&
+\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2pspro-}\\
\jrcFq{pro}{PS}{-} &=& - \conm{PS}{32}{4} \Big\{
\Ff{e,1}{-} \Big[ \Big( \vs{1} ( 3 \vcsi{q}{2} +
\vcsi{Q}{1} + \vcsi{Q}{2}) + 2\vc{k} \ps{\vs{1}}{\vci{q}{2}}
+ \vci{q}{1} \ps{\vs{1}}{\vci{q}{1}}
\nonumber\\
&&
-\frac{1}{2}\vci{q}{2} \ps{\vs{1}}{\vc{k}}
+ i (\vc{k} + \vci{q}{2}) \times \vci{Q}{1} \Big)
\ps{\vs{2}}{\vci{q}{2}}
+ \vs{1} \ps{\vci{q}{2}}{\vci{Q}{2}}\ps{\vs{2}}{\vci{Q}{2}} \Big]
\nonumber\\
&&
+\frac{1}{2}\vci{Q}{1}\Big(
\ps{\vs{1}}{\vci{Q}{1}}\ps{\vs{2}}{\vci{q}{2}}
+\ps{\vs{1}}{\vci{q}{2}}\ps{\vs{2}}{\vci{Q}{2}}\Big)
\nonumber\\
&&
- \frac{1}{2}\Gm{-}\, \vc{k} \times \Big[
3\Big( i \vci{Q}{1} + \vc{k} \times \vs{1} \Big)
\ps{\vs{2}}{\vci{q}{2}} +i \vci{q}{2}\ps{\vs{2}}{\vci{Q}{2}}
\Big] \Big\} \prop{PS}{2}
\nonumber\\
&&
+\mbox{$(1 \leftrightarrow 2)$} \, .\label{j2pspro3-}
\end{eqnarray}
The ``mes'' currents follow from ATA-(4.7f, 4.10d,e)
\begin{eqnarray}
\rhFqQ{mes}{PS}{-} &=&
- \Ff{e,1}{-} \conm{PS}{4}{2} \, (q_{10} - q_{20}) \,
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, , \label{rh2psmes-}\\
\jrcFq{mes}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{32}{4}\,
\mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}\, (\vci{q}{1} - \vci{q}{2}) \bigg\{
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{Q}{2}} \ps{\vci{q}{2}}{\vci{Q}{2}}
\nonumber\\
&&
+ \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \Big[ \vcsi{q}{1}
+ \vcsi{Q}{1} -4m^2\frac{q_{10}^2 - q_{20}^2}{\vcsi{q}{1} - \vcsi{q}{2}}
\Big]
+\mbox{$(1 \leftrightarrow 2)$} \bigg\} \, , \label{j2psmes3-}
\end{eqnarray}
and finally the corrected retardation operators from (\ref{rhret})-(\ref{jret})
for $\tilde \nu = 0$
\begin{eqnarray}
\rhFqQ{ret}{PS}{-} &=&
- \Ff{e,1}{-} \conm{PS}{32}{3}\, (\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2}
\, \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
\ \propp{PS}{2}
\nonumber\\
&& + \mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2psret-}\\
\jrcFq{ret}{PS}{-} &=& - \conm{PS}{64}{4} \bigg\{ \Ff{e,1}{-}\,
\Big[ 16 m^2\, q_{20}^2\,
\Big( \vs{1} -
\frac{\vci{q}{1} - \vci{q}{2}}{\vcsi{q}{1} - \vcsi{q}{2}}
\, \ps{\vs{1}}{\vci{q}{1}}\, \Big)
\nonumber\\
&&
+ \ps{\vs{1}}{\vci{q}{2}}
\Big( \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} +
\vci{Q}{1}\, (\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2} \, \Big)
\, \Big] \nonumber\\
&&
- G_{M,1}^{-}\, i \vc{k} \times \Big( \vci{q}{2}\,
(\vci{Q}{1} + 3\vci{Q}{2}) \cdot \vci{q}{2} +
i \vs{1} \times \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}}\,
\Big) \bigg\} \ps{\vs{2}}{\vci{q}{2}} \propp{PS}{2}
\nonumber\\
&&+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2psret-}
\end{eqnarray}
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix B: Boost and separation contributions for scalar and
pseudoscalar meson exchange currents}
In this appendix, we collect the separation and boost current
contributions to the interaction currents for scalar and pseudoscalar
exchange only, since for vector exchange they are formally equal to the
scalar ones. The separation and kinematic boost contributions arise from
the nonrelativistic meson exchange current, whereas a potential-dependent
boost appears from the one-body current for pseudoscalar exchange only.
\subsection*{Scalar meson exchange}
Taking the only nonrelativistic current contribution from (\ref{j2smes1-}),
we find, after separation into the ``pro'', ``mes'', and ``ret'' parts,
\begin{eqnarray}
\jrcq{\mbox{$\chi_{\sigma}$}}{mes}{S}{-} &=&
i \Ff{e,1}{-} \, \conm{S}{8}{2} \, \vc{q} \, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \,
\pssm{\vc{Q}}{\vc{k}} \, , \label{jschs}\\
\jrcq{\mbox{$\chi_r$}}{pro}{S}{-} &=&
-\Ff{e,1}{-} \, \conm{S}{32}{2} \, \vc{k} \, \prop{S}{2} +\mbox{$(1 \leftrightarrow 2)$}
\, , \label{jschrpro}\\
\jrcq{\mbox{$\chi_r$}}{ret}{S}{-} &=&
- \Ff{e,1}{-} \, \conm{S}{16}{2} \, \vc{q} \,
\ps{\vc{k}}{\vci{q}{2}} \, \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$}
\, , \label{jschrret}\\
\jrcq{sep}{ret}{S}{-} &=&
\Ff{e,1}{-} \, \conm{S}{32}{2} \,\vc{q}\,\,\vcs{k} \,
\frac{\ps{\vc{k}}{\vci{q}{2}}}
{\ps{\vc{k}}{\vc{q}\, }} \propp{S}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{jssep}
\end{eqnarray}
\subsection*{Pseudoscalar meson exchange}
We first list the contributions from the kinematic boost, starting with
the \mbox{$\chi_{\sigma}$}-currents, separated according to the propagator structure
\begin{eqnarray}
\jrcq{\mbox{$\chi_{\sigma}$}}{pro}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{64}{4}\,
\nonumber\\
&&
\Big\{ \Big[ \vec k\,\ps{\vs{1}}{\vc{q}\,} -
i \pv{\vc{k}}{\vc{Q}} \Big] \ps{\vs{2}}{\vci{q}{2}}
-\vec q\,\ps{\vs{1}}{\vci{q}{1}}\ps{\vs{2}}{\vc{k}}
\nonumber\\
&& \hspace*{-0.1truecm}
+ \vs{1}\, \Big[ \pf{\vs{2}}{\vci{q}{2}}{\vc{k}}{\vc{q}\, } +
i \pv{\vc{k}}{\vc{q}\, } \cdot \vc{Q} \Big] \Big\} \,
\prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpschispro} \\
\jrcq{\mbox{$\chi_{\sigma}$}}{mes}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{32}{4}
\vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \Big\{
\vcs{q}\ps{\vs{1}}{\vc{k}}\ps{\vs{2}}{\vc{k}}
+ \vcs{k}\ps{\vs{1}}{\vc{q}\,}\ps{\vs{2}}{\vc{q}\, }
\nonumber\\
&&
- \pv{\vc{k}}{\vc{q}\, } \cdot
\Big[ i \vc{Q}\, ( \vs{1} \cdot \vci{q}{1} + \vs{2} \cdot \vci{q}{2} )
- \pv{\vs{1}}{\vs{2}} \ps{\vci{q}{1}}{\vci{q}{2}}\Big] \Big\} \, .
\label{jpschismes}
\end{eqnarray}
Analogously, we get from (\ref{chir}) and (\ref{sepj3}) for the
\mbox{$\chi_r$}- and separation currents
\begin{eqnarray}
\jrcq{\mbox{$\chi_r$}}{pro}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{128}{4}
\Big\{\vs{1}\Big(\vcs{k}\ps{\vs{2}}{\vci{q}{2}}
- \ps{\vc{k}}{\vc{q}\,} \ps{\vs{2}}{\vc{k}}\Big)
\nonumber\\
&&
-\vc{q}\,\Big( \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}} -
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big)
\nonumber\\
&&
- \vc{k}\, \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}}
\Big\} \prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpschirrpro} \\
\jrcq{\mbox{$\chi_r$}}{ret}{PS}{-} &=& -\Ff{e,1}{-}\, \conm{PS}{64}{4}
\Big[ \vs{1} - \vc{q} \,
\frac{\ps{\vs{1}}{\vci{q}{1}}}{\ps{\vc{k}}{\vc{q}\, }} \Big]\,
\nonumber\\
&& \ps{\vs{2}}{\vci{q}{2}}\ps{\vc{k}}{\vc{q}\,} \ps{\vc{k}}{\vci{q}{2}}
\propp{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsrsrret}\\
\jrcq{sep}{pro}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{128}{4} \vs{1}\,
\vcs{k}\ps{\vs{2}}{(\vc{k}-\vc{q}\,)}\prop{PS}{2}+ \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsrsrpro1}\\
\jrcq{sep}{mes}{PS}{-} &=& - \Ff{e,1}{-}\, \conm{PS}{128}{4}\,
\vc{q}\,\,\vcs{k}\mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \nonumber\\
&& \Big[ \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}} +
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big] \, ,
\label{jpsrsrmes}\\
\jrcq{sep}{ret}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{128}{4}
\Big[ \vs{1} - \vc{q} \,
\frac{\ps{\vs{1}}{\vci{q}{1}}}{\ps{\vc{k}}{\vc{q}\, }} \Big]\,
\nonumber\\
&& \ps{\vs{2}}{\vci{q}{2}}\,\vcs{k}\, \ps{\vc{k}}{\vci{q}{2}}
\propp{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsrsrret1}
\end{eqnarray}
where we have again separated the contributions with respect to their
propagator structure.
Finally, we list the interaction dependent boost contribution
of the one-body current
\begin{equation}
j_{\lambda,\chi_V} (2;\vc{k}\, )_{PS} =
-i \Big[ j_{\lambda,FW} (1;\vc{k}\, ) , \, (\mbox{$\tilde{\mu}$}-1)\chi_V + \delta
\chi_V\Big] \, ,
\label{chivcom}
\end{equation}
where $\chi_V$ is given by
\begin{eqnarray}
\langle \vcpi{p}{1} \vcpi{p}{2} \, |\, \chi_V \, |
\vci{p}{1} \vci{p}{2} \, \rangle & = & \tilde{\chi}_V (\vc{q},
\vc{K}, \vc{Q}) \ps{\vci{\tau}{1}}{\vci{\tau}{2}}\, \delta(\vci{q}{1} +
\vci{q}{2}\, ) \, ,
\label{chiv1}\\
i \tilde{\chi}_V (\vc{q},\vc{K}, \vc{Q}) & = &
\frac{g^2_{PS}}{(2\pi )^3 64 m^3}
\Sigma^{(-)}(\vec q,\,\vec K)
\prop{PS}{}
\, ,\label{chiv2}
\end{eqnarray}
and $\delta \chi_V $ is the model-independent
interaction boost which vanishes for a two-particle system with equal
masses \cite{Fr75,Fr77}.
Evaluation of (\ref{chivcom}) gives for the boost contributions
\begin{eqnarray}
\rho_{\chi_V}^{(2)} (2;\vc{k}, \vec q,\vec K, \vec Q\, )_{PS}
&=& (\mbox{$\tilde{\mu}$}-1) \conm{PS}{64}{3} \, \Big\{ \,
\Ff{e,1}{+}\, \pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}} \nonumber\\
&&
- \Ff{e,1}{-}\,
\pf{\vc{K}}{\vci{q}{2}}{\vs{1}}{\vs{2}} \Big\} \,
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2pschiv}\\
\vec{\jmath}_{\chi_V}^{\,\,(3)} (2;\vc{k}, \vec q,\vec K, \vec Q\, )_{PS}
&=& (\mbox{$\tilde{\mu}$} - 1) \conm{PS}{128}{4} \nonumber\\
&&
\Big\{\, \Ff{e,1}{+}\, \Big[
\vci{Q}{1}\, \pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}}
+ \vci{q}{2}\, \pf{\vc{K}}{\vci{q}{2}}{\vs{1}}{\vs{2}} \Big]
\nonumber\\
&&
+ \Gm{+}\, \vc{k} \times \Big[
\vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vc{K}}
- \vc{K} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}}
+ i \vci{q}{2} \ps{\vs{2}}{\vc{k}} \, \Big] \nonumber\\
&&
- \Ff{e,1}{-}\, \Big[
\vci{Q}{1}\, \pf{\vc{K}}{\vci{q}{2}}{\vs{1}}{\vs{2}}
+ \vci{q}{2}\, \pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}} \Big]
\nonumber\\
&&
+ \Gm{-}\, \vc{k} \times \Big[
\vc{k} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}} -
\vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vc{k}}
\nonumber\\
&&+ i \vc{K} \ps{\vs{2}}{\vci{q}{2}} -
i \vci{q}{2} \ps{\vs{2}}{\vc{K}} \, \Big] \Big\} \prop{PS}{2}
+\mbox{$(1 \leftrightarrow 2)$} \, . \label{j2pschiv}
\end{eqnarray}
In the Breit frame, these expressions then yield
\begin{eqnarray}
\rhqQa{\chi_V}{pro}{PS}{+}
&=& (\mbox{$\tilde{\mu}$}-1) \Ff{e,1}{+}\,\conm{PS}{64}{3} \,
\pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}}\prop{PS}{2} \nonumber\\
&& +\mbox{$(1 \leftrightarrow 2)$} \label{rh2pschiv+}\, ,\\
\jrcq{\chi_V}{pro}{PS}{+} &=& (\mbox{$\tilde{\mu}$} -1) \conm{PS}{128}{4}\, \Big\{
\Ff{e,1}{+}\, \vc{Q}\, \pf{\vc{k}}{\vci{q}{2}}{\vs{1}}{\vs{2}}
\nonumber\\
&& + i\Gm{+}\, \vc{k} \times
\vci{q}{2}\, \ps{\vs{2}}{\vc{k}} \, \Big\}\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \,,
\label{jpschiv+}\\
\jrcq{\chi_V}{pro}{PS}{-} &=& -(\mbox{$\tilde{\mu}$}-1) \conm{PS}{128}{4}\, \Big\{
\Ff{e,1}{-}\, \vci{q}{2}\,
\pf{\vs{1}}{\vs{2}}{\vc{k}}{\vci{q}{2}} \nonumber\\
&&
+ \Gm{-}\, \vc{k} \times \big[
\vci{q}{2} \times \vs{1} \, \ps{\vs{2}}{\vc{k}\, } -
\vc{k} \times \vs{1} \, \ps{\vs{2}}{\vci{q}{2}}\, \big] \Big\}\,
\prop{PS}{2} \nonumber\\
&&+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{jpschiv-}
\end{eqnarray}
\subsection*{Alternative representation of \mbox{$\chi_r$}\ and separation currents}
We remind the reader that the separation currents
were introduced because, for the one-body currents, a
useful cancellation occurs between them and the \mbox{$\chi_r$}-currents.
For the exchange currents it is likewise convenient to consider the
following combination of the $\mbox{$\chi_r$}$, $sep$ and $recoil$ currents,
\begin{equation}
\vc{\jmath}_{\chi s r}^{\, \, (3)} (\rm{2};\vc{k}\, )=
\vc{\jmath}_{\mbox{$\chi_r$}}^{\, \, (3)} (\rm{2};\vc{k}\, )
+ \vc{\jmath}_{sep}^{\, \, (3)} (\rm{2};\vc{k}\, )
- \vc{\jmath}_{rec}^{\, \, (3)} (\rm{2};\vc{k}\, )\,,
\end{equation}
where the recoil current is given by
\begin{equation}
\vc{\jmath}^{\,\,(3)}_{rec} (\vc{k}) = - \frac{\vc{k}}{32 m^2} \, \,
\vec k \cdot \vc{\jmath}^{\,\, (1)} (\vc{k}) \, .
\label{jrec}
\end{equation}
Explicitly, one finds from (\ref{sepj3}), (\ref{chir}) and (\ref{jrec})
the following expression for this combination
\begin{eqnarray}
\vc{\jmath}_{\chi s r}^{\, \, (3)} (\rm{2};\vc{k}\, )
& = &
\frac{\vc{k}}{32m^2}\, \vc{k} \cdot
\vc{\jmath}^{\, \, (1)} (2;\vc{k}\, ) +
\frac{\vcs{k}}{16m^2}\, \vc{\jmath}^{\, \, (1)} (2;\vc{k}\, )
\nonumber\\
&&
+ \frac{1}{32m^2}
\Big[ \ps{\vc{k}}{\vci{q}{1}} \ps{\vc{k}}{\vnab{q_1}\,} +
\ps{\vc{k}}{\vci{q}{2}} \ps{\vc{k}}{\vnab{q_2}\,} \Big]\,
\vc{\jmath}^{\, \, (1)} (\rm{2};\vc{k}\, ) \, .
\label{jrsr}
\end{eqnarray}
For a ``mes''-type current, having the generic form
\begin{equation}
\vc{\jmath}^{\, \, (1)} (\rm{2;mes};\vc{k}\, ) =
(\vci{q}{1} - \vci{q}{2} )\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \, g(\vci{q}{1}, \vci{q}{2}) \, ,
\label{jmesgen}
\end{equation}
the expression (\ref{jrsr}) becomes
\begin{eqnarray}
\vc{\jmath}_{\chi s r}^{\, \, (3)}
(\rm{2};\vc{k}\, )
& = & \frac{\vc{k}}{16m^2}\, \vc{k} \cdot
\vc{\jmath}^{\, \, (1)} (\rm{2;mes};\vc{k}\, ) \nonumber\\
&&
+ \frac{1}{16m^2}\, g(\vci{q}{1}, \vci{q}{2})\,
\frac{\vc{q}}{\ps{\vc{k}}{\vc{q}\, }}\, \ps{\vc{k}}{\vci{q}{2}}^2\,
\propp{B}{2} \nonumber\\
&& + 2\, \vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \,
\Big[ \ps{\vc{k}}{\vci{q}{1}} \ps{\vc{k}}{\vnab{q_1}\,} +
\ps{\vc{k}}{\vci{q}{2}} \ps{\vc{k}}{\vnab{q_2}\,} \Big]\,
g(\vci{q}{1}, \vci{q}{2}) \, .
\label{jrsrmes}
\end{eqnarray}
For scalar and vector
exchanges there is only a ``mes''-type nonrelativistic exchange current
given in (\ref{j2smes1-}) where
$g(\vci{q}{1}, \vci{q}{2})$\ of (\ref{jmesgen}) is just a constant. One finds
without separating into ``pro'', ``mes'' and ``ret'' parts
\begin{eqnarray}
\vc{\jmath}_{\chi s r}^{\, \, (3)}
(\rm{2};\vc{k},\,\vc{q},\,\vc{Q} )_{B}^{-} &=&
\frac{\vc{k}}{16m^2}\, \vc{k} \cdot \jnrq{mes}{B} \nonumber\\
&& \pm \Ff{e,1}{-} \, \conm{B}{16}{2} \,\vc{q}\,\,
\frac{\ps{\vc{k}}{\vci{q}{2}}^2}
{\ps{\vc{k}}{\vc{q}\, }} \propp{B}{2} +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jsrsr}
\end{eqnarray}
where $B=S$ or $V$ and the minus sign applies for vector exchange.
For pseudoscalar exchange one has to apply (\ref{jrsr}) and
(\ref{jrsrmes}) to the nonrelativistic currents (\ref{j2pspro1-}) and
(\ref{j2psmes1-}), respectively. Collecting all terms one finds
\begin{eqnarray}
\vc{\jmath}_{\chi s r}^{\, \, (3)}
(\rm{2};\vc{k},\,\vc{q},\,\vc{Q} )_{PS}^{-} &=&
\jrcq{sep}{mes}{PS}{-} \nonumber\\
&&
+ \Ff{e,1}{-} \, \conm{PS}{64}{4} \Big[ \vs{1} -\ps{\vs{1}}{\vci{q}{1} }
\frac{\vc{q}}{\ps{\vc{k}}{\vc{q}\, }}\Big] \ps{\vs{2}}{\vci{q}{2} }
\ps{\vc{k}}{\vci{q}{2}}^2 \, \propp{PS}{2}\nonumber \\
&&
+ \Ff{e,1}{-} \, \conm{PS}{128}{4} \, \vc{D} \prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsrsrpro}
\end{eqnarray}
where
$\jrcq{sep}{mes}{PS}{-}$ is given in (\ref{jpsrsrmes}) and
\begin{eqnarray}
\vc{D} &=& \vs{1} \Big[ - 2 \vcs{k} \ps{\vs{2}}{\vc{q}\, } +
\Big(\frac{3}{2} \vcs{k} - \ps{\vc{k}}{\vc{q}\, }\Big)
\ps{\vs{2}}{\vc{k}} \Big]
- 2 \vc{k} \, \ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vci{q}{2}} \nonumber\\
&& - \vc{q}\, \Big[ \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}} -
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big] \label{D1}
\\
&=& 2 \vc{k}\, \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
+ \vci{q}{2}\, \Sigma^{(+)}(\vc{k},\,\vci{q}{2}) +
\frac{1}{2} \vc{k} \, \Big[ \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}}
+ \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big] \nonumber\\
&& - \vc{k} \times
\Big[\, 2\, \vc{k} \times \vs{1} \, \ps{\vs{2}}{\vci{q}{2}}
+ \vci{q}{2} \times \vs{1} \, \ps{\vs{2}}{\vc{k}} \Big] \, . \label{D2}
\end{eqnarray}
The first form of $\vec{D}$\ in (\ref{D1}) is more convenient for
comparison with the results given earlier, whereas the second one in (\ref{D2})
is used in Appendix E in order to obtain the ``pro-IV'' current.
\renewcommand{\theequation}{C.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix C: \mbox{$\tilde{\mu}$}-dependent currents}
Collecting the \mbox{$\tilde{\mu}$}-dependent terms from (\ref{rh2psvos}), (\ref{j2psvos}),
and (\ref{rh2pschiv+})-(\ref{jpschiv-}), we get for the total \mbox{$\tilde{\mu}$}-dependent
current
\begin{eqnarray}
\rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{Q}\, )_{PS}
&=& \mbox{$\tilde{\mu}$} \conm{PS}{32}{3} \, \Big[ \,
- \frac{1}{2} \Ff{e,1}{+}\, \Sigma^{(+)}(\vci{q}{2},\, \vec k)
+ \Ff{e,1}{-}\, \Sigma^{(+)}(\vci{q}{2},\, \vec Q) \Big]
\, \prop{PS}{2} \nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2psmut}\\
\vec{\jmath}_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{K},\vc{Q}\,)_{PS}
&=& \frac{\vci{Q}{1}}{2m} \rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{Q}\, )_{PS}
\nonumber\\
&&+\mbox{$\tilde{\mu}$} \conm{PS}{64}{4} \Big\{\, \vci{q}{2}\,\Big(
\frac{1}{2} \Ff{e,1}{-}\,\Sigma^{(+)}(\vci{q}{2},\, \vec k)
- \Ff{e,1}{+} \Sigma^{(+)}(\vci{q}{2},\, \vec Q) \Big)
\nonumber\\
&&
+ \Gm{+}\, \vc{k} \times \Big[
\vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vc{Q}}
+ \vc{Q} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}}
+ \frac{i}{2} \vci{q}{2} \ps{\vs{2}}{\vc{k}} \, \Big] \nonumber\\
&&
- \Gm{-}\, \vc{k} \times \Big[
\frac{1}{2} \vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vc{k}} +
\frac{1}{2} \vc{k} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}} +
i \vci{q}{2} \ps{\vs{2}}{\vc{Q}} \nonumber\\
&&+ i \vc{Q} \ps{\vs{2}}{\vci{q}{2}}
\, \Big] \Big\} \prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{j2psmut}
\end{eqnarray}
The charge density $\rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k}\, )_{PS}$ is frame-independent
and adds to the intrinsic charge density operator.
The current $\vec{\jmath}_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{K},\vc{Q})_{PS}$\
depends on the reference frame, i.e. on \vc{K}, only through
$\vci{Q}{1} = \vc{Q} + \vc{K}/2$ in the first term of (\ref{j2psmut}).
Obviously,
\begin{equation}
\vec{\jmath}_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{K},\vc{Q})_{PS} -
\vec{\jmath}_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{0},\vc{Q})_{PS} =
\frac{\vc{K}}{4m} \, \rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{q},\vc{Q}\, )_{PS} \, ,
\end{equation}
which is consistent with the general expression (\ref{fj3I}). Furthermore,
for the divergence of the current one easily finds from
(\ref{rh2psmut}) and (\ref{j2psmut})
\begin{eqnarray}
\vc{k} \cdot \vec{\jmath}_{\mbox{$\tilde{\mu}$} } (2;\vc{k},\vc{0})_{PS} &=&
\frac{\ps{\vc{q}}{\vc{Q}}}{m} \rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k}\, )_{PS} \nonumber\\
&&+ \mbox{$\tilde{\mu}$} \conm{PS}{32}{4}
\Big\{ - \frac{1}{2} \Ff{e,1}{+}\, \Big[
\ps{\vc{k}}{\vci{q}{2}} \Sigma^{(+)}(\vci{q}{2},\, \vec Q)
+ \ps{\vc{Q}}{\vci{q}{2}} \Sigma^{(+)}(\vci{q}{2},\, \vec k) \Big]
\nonumber\\
&&
+ \Ff{e,1}{-}\, \Big[ \ps{\vc{Q}}{\vci{q}{2}} \Sigma^{(+)}(\vci{q}{2},\, \vec
Q)
+ \frac{1}{4} \ps{\vc{k}}{\vci{q}{2}} \Sigma^{(+)}(\vci{q}{2},\, \vec k)
\Big]\,
\Big\} \prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \nonumber\\
&=& \langle \vcp{p}\, |\Big[t^{(1)}\,,\,\rho_{\mbox{$\tilde{\mu}$} } (2;\vc{k}\, )_{PS}
\Big]+ \Big[ v^{(3)}_{\mbox{$\tilde{\mu}$} } \, , \,
\rho^{(0)} (1;\vc{k}\, ) \Big] \, | \vc{p} \, \rangle \, ,
\end{eqnarray}
where the \mbox{$\tilde{\mu}$}-dependent intrinsic potential
$\tilde{v}^{(3)}_{\mbox{$\tilde{\mu}$}} (\vc{q},\vc{Q})$ is given in (\ref{v3psmut}).
According to (\ref{voscom1}) and (\ref{chivcom}), the
\mbox{$\tilde{\mu}$}-dependent operators of (\ref{v3psmut}), (\ref{rh2psmut}) and
(\ref{j2psmut}) as well as the foregoing continuity equation
can be obtained from the nonrelativistic one-nucleon current and
the kinetic energy with the help of another (approximate)
unitary transformation $(1-i\mbox{$\tilde{\mu}$} U_{\mbox{$\tilde{\mu}$} })$ \cite{AGA}, where
\begin{equation}
U_{\mbox{$\tilde{\mu}$} }= U_{PS} -\chi_V\,.
\end{equation}
In detail, this gives for $U_{\mbox{$\tilde{\mu}$} }$
\begin{eqnarray}
\langle \vcpi{p}{1} \vcpi{p}{2} \, |\, U_{\mbox{$\tilde{\mu}$}} \, |
\vci{p}{1} \vci{p}{2} \, \rangle & = & \tilde{U}_{\mbox{$\tilde{\mu}$}} (\vc{q},
\vc{Q}) \ps{\vci{\tau}{1}}{\vci{\tau}{2}}\, \delta(\vci{q}{1} +
\vci{q}{2}\, ) \, ,\\
i\tilde{U}_{\mbox{$\tilde{\mu}$} } (\vc{q},\vc{Q}) &=& - \conm{PS}{32}{3}\,
\Sigma^{(+)}(\vec q,\, \vec Q) \prop{PS}{}\,,
\end{eqnarray}
where we have already used the fact that there is no
$\vec K$-dependence.
Also for more than two nucleons, the difference
$U_{\mbox{$\tilde{\mu}$} }= U_{PS} -\chi_V$\
yields an operator depending only on the relative momenta
$\vci{p}{a} - \frac{m}{M} \vc{P}$.
Any unitary transformation acting only in the intrinsic space,
with a generator $U = {\cal O}(1/m^2)$, can be used in two ways. First, one
can apply it only to the intrinsic Hamiltonian and currents. The transformed
operators still obey the intrinsic continuity equation.
Then, inserting these into the full operators
(\ref{frh0I})-(\ref{fj3I}) and inspecting the $\vc{K}$-dependent
terms, one finds that only the convection current
$(\vc{K}/2M)\,\rho^{(2)}(\vc{k}\, )$\ is affected by the unitary
transformation, resulting in an additional contribution
$(\vc{K}/2M)\,\rho^{(2)}_U (2;\vc{k}\, )$ with
\begin{equation}
\rho^{(2)}_U (2;\vc{k}\, ) = i\Big[\, \rho^{(0)} (1;\vc{k}\, )\, , \, U\,
\Big] \, .
\label{intunit}
\end{equation}
Since the general parametrization
(\ref{frh0I})-(\ref{fj3I}) is preserved, the full current is still
conserved and
transforms properly under Poincar\'{e} transformations.
Alternatively, one may let the transformation act in the full
Hilbert space including the c.m.\ motion part. That is, one also transforms
the nonrelativistic convection current
$(\vc{K}/2M)\,\rho^{(0)}(\vc{k}\, )$\ and obtains the correction to
its relativistic part as given by (\ref{intunit}). Both views are
equivalent and consistent, which implies that it is sufficient to
transform only the conserved intrinsic current.
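The two procedures just described are both instances of the standard
first-order similarity transformation: for any generator
$U = {\cal O}(1/m^2)$ one has, schematically,
\begin{displaymath}
O \, \rightarrow \, (1 - iU)\, O\, (1 + iU) =
O + i\, \Big[\, O \, , \, U \, \Big] + {\cal O}(U^2) \, ,
\end{displaymath}
which, applied to $\rho^{(0)} (1;\vc{k}\, )$, reproduces (\ref{intunit});
all higher-order commutators are beyond the order considered here.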
Another example of such a unitary transformation is encountered
in the next appendix.
\renewcommand{\theequation}{D.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix D: Retardation currents for $\nu \neq 1/2 $ }
In the framework of the $1/m$-expansion techniques \cite{Fr80}, the effects
of retardation of the exchanged mesons are included via the Taylor
expansion of the meson propagators (see also (\ref{propp}) and
(\ref{ffret}))
\begin{equation}
\tilde{\Delta}_B (\vcs{q} - q_0^2 ) \simeq
\prop{B}{} - q_0^2\, \propp{B}{} \, ,
\end{equation}
and expressing the meson energy in terms of the energies of the
on-mass-shell nucleons. This procedure is unambiguous for the nonrelativistic
reduction of the Feynman amplitudes corresponding to the MEC operators,
but in order to define the general nuclear potential one has to allow for
an
off-energy-shell continuation of the corresponding amplitude, and then
$q_0$ is no longer fixed by energy conservation at the vertex. It has been
argued
in \cite{ATA} that up to the order considered the most general expression
symmetric with respect to nucleon interchange reads
\begin{equation}
q_0^2 \rightarrow - q_{10}\, q_{20} + \frac{1-\nu}{2}
(q_{10} + q_{20} )^2 =
\mpw{4}{2} \Big[ \ps{\vc{q}}{\vc{P}}^2 - 2 \mbox{$\tilde{\nu}$} \ps{\vc{q}}{\vc{Q}}^2
\Big] +\ord{-4} \, ,
\end{equation}
where $\nu$\ is an arbitrary parameter and $\tilde \nu = \nu - 1/2$.
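As a simple algebraic check, note that for $\nu = 1/2$, i.e.\
$\tilde \nu = 0$, the off-shell continuation reduces to
\begin{displaymath}
- q_{10}\, q_{20} + \frac{1}{4}\, (q_{10} + q_{20} )^2 =
\frac{1}{4}\, (q_{10} - q_{20} )^2 \, ,
\end{displaymath}
so that only the difference of the vertex energies enters, in accordance
with the fact that only the $\ps{\vc{q}}{\vc{P}}^2$ retardation term
survives at $\mbox{$\tilde{\nu}$} = 0$.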
In \cite{ATA} it is described in detail
how this freedom translates into the $\nu$-dependent retardation
contribution to the MECs from the positive-energy nuclear pole Born diagram.
It is convenient to introduce the following notation
\begin{eqnarray}
\wt{S}{} &=& - \frac{g_S^2 }{(2\pi )^3} \, \propp{S}{} \, , \\
\wt{V}{} &=& \frac{g_V^2 }{(2\pi )^3} \, \propp{V}{} \, , \\
\wt{PS}{} &=& - \conm{PS}{4}{2} \ps{\vs{1}}{\vc{q}\, }
\ps{\vs{2}}{\vc{q}\, } \, \propp{PS}{} \, .
\end{eqnarray}
The retardation contribution to the potential in an arbitrary frame
is then given by
\begin{equation}
\tilde{v}_{B,ret}^{\,(3)} (\vc{q},\vc{Q},\vc{P}) = -
\mpw{4}{2} \wt{B}{} \ \Big[ \ps{\vc{q}}{\vc{P}}^2
- 2 \mbox{$\tilde{\nu}$} \ps{\vc{q}}{\vc{Q}}^2 \Big] \, ,\label{potretnu}
\end{equation}
where the first \vc{P}-dependent term is required by the Foldy condition
and the second one is the retardation contribution to the intrinsic potential
\begin{equation}
\tilde{v}_{B,\mbox{$\tilde{\nu}$}}^{(3)} (\vc{q},\vc{Q}) =
\frac{\mbox{$\tilde{\nu}$}}{2m^2} \wt{B}{} \ \ps{\vc{q}}{\vc{Q}}^2 \, .
\label{vnut}
\end{equation}
Obviously, (\ref{vnut}) vanishes for $\mbox{$\tilde{\nu}$} = 0$, i.e.\ $\nu = 1/2$, which is
the choice for the intrinsic potentials and the retardation e.m.\ operators
adopted in the main part of the paper. It is also the common choice in
the construction of realistic $NN$ potentials, which usually do not
include retardation terms in the c.m.\ frame.
In this appendix we present for completeness the currents for arbitrary
\mbox{$\tilde{\nu}$}.
Since the expressions for the retardation operators ATA-(4.21-4.24)
contain a number of misprints and do not contain the strong form factors,
we find it useful to present the correct form here
\begin{eqnarray}
\rhFqQ{ret}{B}{} &=& - \mpw{4}{}\, \wt{B}{2}\,
\Big( \Ff{e,1}{+} R_k - \Ff{e,1}{-} R_Q \Big) + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{rhret}\\
\jrcFq{ret}{B}{} &=& - \mpw{8}{2}\, \wt{B}{2}\,
\Big( \Ff{e,1}{+} [R_k \vci{Q}{1} + R_Q \vci{q}{2} ] \nonumber\\
&& - \Ff{e,1}{-} [R_Q \vci{Q}{1} + R_k \vci{q}{2} ] \Big)
+ \mbox{$(1 \leftrightarrow 2)$} + \jrcFq{ret-tr}{B}{}\nonumber\\
&=&
\jrcFq{ret-tr}{B}{} +
\frac{\vci{Q}{1}}{2m}\rhFqQ{ret}{B}{}
\nonumber\\
&&
- \mpw{8}{2}\, \wt{B}{2}\, \vec q_2\,
\Big( \Ff{e,1}{+} R_Q - \Ff{e,1}{-} R_k \Big) + \mbox{$(1 \leftrightarrow 2)$}
\, ,
\label{jretB}\\
\jrcFq{ret-tr}{S,V}{} &=& - \mpw{8}{2}\, \wt{S,V}{2}\,
\Big( \Gm{+} R_k - \Gm{-} R_Q \Big) i \pv{\vs{1}}{\vc{k}} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jretSV}\\
\jrcFq{ret-tr}{PS}{} &=& - \conm{PS}{32}{4}\, i \vc{k} \times
\Big\{ \Gm{+} [ R_k \vci{q}{2} + R_Q i \vs{1} \times \vci{q}{2} ]
\nonumber\\
&& - \Gm{-} [ R_Q \vci{q}{2} + R_k i \vs{1} \times \vci{q}{2} ]
\Big\}\, \ps{\vs{2}}{\vci{q}{2}} \propp{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\end{eqnarray}
with
\begin{eqnarray}
R_k &=& (\frac{1}{2} - \mbox{$\tilde{\nu}$}) \ps{\vc{k}}{\vci{q}{2}} \, , \\
R_Q &=& \frac{1}{2} (\vci{Q}{1} + 3 \vci{Q}{2}) \cdot \vci{q}{2} -
\mbox{$\tilde{\nu}$} \, (\vci{Q}{1} - \vci{Q}{2}) \cdot \vci{q}{2} \, .
\label{jret}
\end{eqnarray}
The \mbox{$\tilde{\nu}$}-independent part of these operators is included in the retardation
currents in Appendix A and then added to the intrinsic currents in the
main part of the paper. Separating the \mbox{$\tilde{\nu}$}-dependent part we get
\begin{eqnarray}
\rho_{\mbox{$\tilde{\nu}$} }^{(2)} (\mbox{2;ret};\vc{k}\, )_{B}
&=& \frac{\mbox{$\tilde{\nu}$}}{4m} \wt{B}{2} \, \Big[
\Ff{e,1}{+}\, \ps{\vc{k}}{\vci{q}{2}}
- 2 \Ff{e,1}{-} \ps{\vc{Q}}{\vci{q}{2}} \Big]
+\mbox{$(1 \leftrightarrow 2)$} \, , \label{rh2nut}\\
\vec{\jmath}_{\mbox{$\tilde{\nu}$} }^{\,\,(3)} (\mbox{2;ret};\vc{k},\vc{K})_{B}
&=&
\vec{\jmath}_{\mbox{$\tilde{\nu}$} } (\mbox{2;ret-tr};\vc{k}\, )_{B} +
\frac{\vci{Q}{1}}{2m}\rho_{\mbox{$\tilde{\nu}$} }^{(2)} (\mbox{2;ret};\vc{k}\, )_{B}
\nonumber\\
&&
+\frac{\mbox{$\tilde{\nu}$}}{8m^2} \wt{B}{2} \,\vec q_2\, \Big(
2 \Ff{e,1}{+}\,\ps{\vc{Q}}{\vci{q}{2}}
- \Ff{e,1}{-}\,\ps{\vc{k}}{\vci{q}{2}} \Big)
+ \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{j2nut}\\
\vec{\jmath}_{\mbox{$\tilde{\nu}$} }^{\,\,(3)} (\mbox{2;ret-tr};\vc{k}\, )_{S,V} &=&
\frac{\mbox{$\tilde{\nu}$}}{8m^2} \wt{S,V}{2} \, \Big[ \Gm{+}\, \ps{\vc{k}}{\vci{q}{2}} -
2\Gm{-} \ps{\vc{Q}}{\vci{q}{2}} \Big] \, i \vs{1} \times \vc{k}
\nonumber\\
&&
+ \mbox{$(1 \leftrightarrow 2)$} \, , \label{j2btrnut}\\
\vec{\jmath}_{\mbox{$\tilde{\nu}$} }^{\,\,(3)} (\mbox{2;ret-tr};\vc{k}\, )_{PS} &=&
\mbox{$\tilde{\nu}$}\, \conm{PS}{32}{4}\, i \vc{k} \times
\Big\{ \Gm{+} \Big[ \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} +
2 i \vs{1} \times \vci{q}{2} \ps{\vc{Q}}{\vci{q}{2}} \Big]
\nonumber\\
&&
- \Gm{-} \Big[ 2\vci{q}{2}\, \ps{\vc{Q}}{\vci{q}{2}} +
i \vs{1} \times \vci{q}{2}\, \ps{\vc{k}}{\vci{q}{2}} \Big]
\Big\}\, \ps{\vs{2}}{\vci{q}{2}} \propp{PS}{2} \nonumber\\
&&+ \mbox{$(1 \leftrightarrow 2)$} \, .
\label{j2pstrnut}
\end{eqnarray}
It is easy to show that
\begin{equation}
\vc{k} \cdot \vec{\jmath}_{\mbox{$\tilde{\nu}$} }^{\,\,(3)} (\mbox{2;ret};\vc{k},\vc{0})_{B}
= \langle \vcp{p}\, |\Big[ t^{(1)},\, \rho_{\mbox{$\tilde{\nu}$} }^{(2)}
(\mbox{2;ret};\vc{k}\,)_{B} \Big]+ \Big[ v^{(3)}_{B,\mbox{$\tilde{\nu}$} } \, , \,
\rho^{(0)} (1;\vc{k}\, ) \Big]| \vc{p} \, \rangle \, ,
\end{equation}
because one finds for the commutator
\begin{eqnarray}
\langle \vcp{p}\, |\Big[ \tilde v^{(3)}_{B,\mbox{$\tilde{\nu}$} } \, , \,
\rho^{(0)} (1;\vc{k}\, ) \Big] | \vc{p} \, \rangle
&=& \frac{\mbox{$\tilde{\nu}$}}{2m^2} \wt{B}{2} \Big( \Ff{e,1}{+}
\ps{\vc{k}}{\vci{q}{2}} \ps{\vc{Q}}{\vci{q}{2}} - \Ff{e,1}{-}
( \ps{\vc{Q}}{\vci{q}{2}}^2 + \frac{1}{4} \ps{\vc{k}}{\vci{q}{2}}^2) \Big)
\nonumber\\
&& + \mbox{$(1 \leftrightarrow 2)$} \, .
\end{eqnarray}
One can also check that, similar to the \mbox{$\tilde{\mu}$}-dependent operators discussed at
the end of the previous appendix,
all \mbox{$\tilde{\nu}$}-dependent operators can be generated with
the help of a unitary transformation $(1-i U_{B,\mbox{$\tilde{\nu}$}})$ from the
nonrelativistic one-nucleon currents and the kinetic energy
\begin{eqnarray}
j_{\lambda,\mbox{$\tilde{\nu}$}}^{(3)} (2;\vc{k}\, )_{B} &=&
i \Big[ j_{\lambda,FW}^{(1)} (1;\vc{k}\, ) \, , \, U_{B,\mbox{$\tilde{\nu}$}} \Big] \, , \\
\tilde{v}_{B,\mbox{$\tilde{\nu}$}}^{(3)} &=& i\Big[ \frac{\vcs{p}}{m} \, , \, U_{B,\mbox{$\tilde{\nu}$}}
\Big]
\, ,
\end{eqnarray}
where
\begin{eqnarray}
i \tilde{U}_{B,\mbox{$\tilde{\nu}$}} (\vc{q},\vc{Q}) &=& \tilde \nu
\frac{\ps{\vc{q}}{\vc{Q}}}{2m} \, \wt{B}{} \nonumber\\
&=& \frac{\tilde \nu}{2}
\langle \vcp{p}\, |\Big[ t^{(1)},\, \wt{B}{}
\Big]| \vc{p} \, \rangle \, .
\end{eqnarray}
\renewcommand{\theequation}{E.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix E: Continuity equation for the \mbox{$F^-$}\ currents
for pseudoscalar exchange}
In this appendix we identify the various pieces of the ``pro''
and ``mes'' currents corresponding to the various commutators in
(\ref{cont23}). To this end
we split the total ``pro'' and ``mes'' currents into several
parts (labelled I, II, \ldots), which will be specified in the following
\begin{eqnarray}
\jrcq{F}{pro}{PS}{-} &=& \sum_{i= I}^{V} \jrcq{F}{pro-i}{PS}{-} \, ,
\\
\jrcq{F}{mes}{PS}{-} &=& \sum_{i= I}^{IV} \jrcq{F}{mes-i}{PS}{-} \, .
\end{eqnarray}
As the first current, we single out the purely transverse $G_M^-$ term,
since it drops out of the continuity equation
\begin{eqnarray}
\jrcq{F}{pro-V}{PS}{-} &=& \Gm{-} \, \conm{PS}{128}{4}\,
\vc{k} \times \Big[
\Big( 6i\, \vc{Q} + 5\, \vc{k} \times \vs{1} \Big) \ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&& + \vci{q}{2} \times \vs{1}\, \ps{\vs{2}}{\vc{k}\, }
- 2i \vci{q}{2}\, \ps{\vs{2}}{\vc{Q}} \, \Big]
\prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, .
\label{intjpsproIV-}
\end{eqnarray}
Now we start with
the first commutator in (\ref{cont23}) which has two contributions
from the ``pro'' and ``mes'' parts of $\rho_F^{(2)}(2;\vec k\,)$,
namely
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ t^{(1)}, \, \rho^{(2)}_{F} (2;\mbox{pro};\vc{k}\, )
\Big]^- | \vc{p} \, \rangle&=&\Ff{e,1}{-}\conm{PS}{32}{4}\ps{\vec q\,}{\vec Q}
\,\Sigma^{(+)}(\vec q_2, \,\vec Q)
\prop{PS}{2}
\nonumber\\
&&
+\mbox{$(1 \leftrightarrow 2)$} \label{cpst1pro}
\end{eqnarray}
and
\begin{equation}
\langle \vcp{p} \, | \Big[ t^{(1)}, \, \rho^{(2)}_{F} (2;\mbox{mes};\vc{k}\, )
\Big]^- | \vc{p} \, \rangle = -\Ff{e,1}{-}\conm{PS}{8}{4}
\ps{\vec k}{\vec Q}\ps{\vec q}{\vec Q}
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$} \label{cpst1mes}
\,.
\end{equation}
Their current counterparts are easy to identify, namely
\begin{eqnarray}
\jrcq{F}{pro-I}{PS}{-} &=&
- \Ff{e,1}{-}\, \conm{PS}{64}{4}\, \Big\{ \vc{Q}\,
\Sigma^{(-)} ( \vc{Q}, \vci{q}{2} ) \, +
2\, \vs{1}\, \ps{\vc{Q}}{\vci{q}{2}} \ps{\vs{2}}{\vc{Q}} \, \Big\}\,
\nonumber\\
&& \prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{intjpsproII-}\\
\jrcq{F}{mes-I}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{16}{4}\, \vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}
\nonumber\\
&&
\Big\{ \ps{\vci{q}{2}}{\vc{Q}} \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{Q}}
+ \ps{\vci{q}{1}}{\vc{Q}} \ps{\vs{1}}{\vc{Q}} \ps{\vs{2}}{\vci{q}{2}}
\Big\} \, ,
\label{intjpsmesII-}
\end{eqnarray}
with the divergence
\begin{equation}
\vc{k} \cdot\Big( \jrcq{F}{pro-I}{PS}{-}+ \jrcq{F}{mes-I}{PS}{-}\Big)=
\langle \vcp{p} \, | \Big[ t^{(1)}, \, \rho^{(2)}_{F} (2;\mbox{pro};\vc{k}\, )
\Big]^- | \vc{p} \, \rangle\,,
\end{equation}
and
\begin{equation}
\jrcq{F}{mes-II}{PS}{-} = - \Ff{e,1}{-}\, \conm{PS}{8}{4}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}\
\frac{\vc{q}}{\ps{\vc{k}}{\vc{q}\, }}\, \,
\ps{\vc{k}}{\vc{Q}} \ps{\vc{q}}{\vc{Q}}
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \, ,
\label{intjpsmesIII-}
\end{equation}
with
\begin{equation}
\vc{k} \cdot \jrcq{F}{mes-II}{PS}{-} =
\langle \vcp{p} \, | \Big[ t^{(1)}, \, \rho^{(2)}_{F} (2;\mbox{mes};\vc{k}\, )
\Big]^- | \vc{p} \, \rangle\,.
\label{conpsmesIII}
\end{equation}
Furthermore, the divergence of the ``recoil'' current
\begin{equation}
\jrcq{F}{pro-II}{PS}{-} =
- \Ff{e,1}{-}\, \conm{PS}{128}{4}\, \vc{k}\, \,
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}} \,
\prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, ,
\label{intjpsproIII-}
\end{equation}
yields the recoil contribution to (\ref{cont23}), i.e.\
\begin{equation}
\vc{k} \cdot \jrcq{F}{pro-II}{PS}{-} =
- \frac{\vcs{k}}{32m^2}\,
\langle \vcp{p} \, | \Big[ v^{(1)}_{PS}
, \, \rho^{(0)}_{F} (1;\vc{k}\, )\Big]^- | \vc{p} \, \rangle \, .
\label{conpsproIII}
\end{equation}
Of the two remaining commutators of (\ref{cont23}) we will first
consider the commutator of the nonrelativistic potential with the
relativistic charge density $\rho^{(2)}_{F,e} (1;\vc{k}\, )$
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v^{(1)}_{PS} \, , \, \rho^{(2)}_{F,e} (1;\vc{k}\, )
\Big]^- | \vc{p} \, \rangle &=&
- \Ff{e,1}{-} \conm{PS}{64}{4}\, \Big\{ i \pth{\vc{k}}{\vci{q}{2}}{\vc{Q}}
(\vs{1} + \vs{2}) \cdot \vci{q}{2}
+ \vcsi{q}{2} \Sigma^{(+)}(\vc{k},\vci{q}{2})
\nonumber\\
&&
+ 2 \ps{\vc{k}}{\vci{q}{1}} \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}
\Big\} \prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, . \label{conpsIa}
\end{eqnarray}
Recall that the current satisfying the continuity equation with the
similar commutator of the $\kappa$-part
$[ v^{(1)}_{PS} \, , \, \rho^{(2)}_{F,\kappa} (1;\vc{k}\, ) ] $\ has been
absorbed into the single-nucleon current, as discussed in Sect.\ IIIA.
Explicitly, it is given in (ATA-4.17b). Note that this current does
not change in the transition from the FW currents to the intrinsic ones, since
$ \rho^{(2)}_{FW,\kappa} (1;\vc{k}\, ) = \rho^{(2)}_{F,\kappa} (1;\vc{k}\, )$.
We can obtain the corresponding FW current proportional to \mbox{$\hat{e}_1$}\
by replacing $ \Ff{\kappa,1}{-} \rightarrow \Ff{e,1}{-}/2$\ in (ATA-4.17b),
giving only a ``pro'' contribution
\begin{eqnarray}
\jrcq{FW,e}{pro}{PS}{-} &=& - \Ff{e,1}{-}\, \conm{PS}{32}{4}\,
\Big[ \vci{q}{1}\, \ps{\vs{1}}{\vci{q}{2}} \nonumber\\
&&
+ \vs{1} \, \vcsi{q}{2} + i \vci{q}{2} \times \vc{Q} \Big] \,
\ps{\vs{2}}{\vci{q}{2}}\, \prop{PS}{2} + \mbox{$(1 \leftrightarrow 2)$} \, .
\label{jpsexte}
\end{eqnarray}
But since $\rho^{(2)}_{F,e} (1;\vc{k}\, ) = \rho^{(2)}_{FW,e} (1;\vc{k}\, )
+ \rho^{(2)}_{\mbox{$\chi_{\sigma}$}} (1;\vc{k}\, )$\ one has to add to (\ref{jpsexte})
in this case the \mbox{$\chi_{\sigma}$}\ currents from (\ref{jpschispro}) and
(\ref{jpschismes}) in order to obtain
the intrinsic ``pro'' and ``mes'' currents saturating (\ref{conpsIa})
\begin{eqnarray}
\jrcq{F}{pro-III}{PS}{-} &=& \jrcq{FW,e}{pro}{PS}{-} +
\jrcq{\mbox{$\chi_{\sigma}$}}{pro}{PS}{-} \, , \nonumber\\
&=& - \Ff{e,1}{-} \,
\conm{PS}{64}{4} \,
\Big\{\vs{1} \Big[ (2 \vcsi{q}{2} - \ps{\vc{k}}{\vci{q}{2}} )
\ps{\vs{2}}{\vci{q}{2}} + \vcsi{q}{2} \ps{\vs{2}}{\vc{k}\, } \nonumber\\
&&
- i \vc{k} \times \vc{q} \cdot \vc{Q}\, \Big]
+ \vc{q}\, \Big[ \, 2 \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}} +
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big] \nonumber\\
&&
+ \Big[ \vc{k}\, \vs{1} \cdot ( 2 \vci{q}{2} - \frac{1}{2} \vc{k}\, )
+ 2i (\vc{k} - \vc{q}\, ) \times \vc{Q} \, \Big] \ps{\vs{2}}{\vci{q}{2}}
\Big\}
\, \prop{PS}{2} \nonumber\\
&&+\mbox{$(1 \leftrightarrow 2)$} \label{jpsproIc} \\
&=& - \Ff{e,1}{-} \,
\conm{PS}{64}{4} \,
\Big\{\vs{1} \Big[ 2 \ps{\vc{q}}{\vci{q}{2}} \ps{\vs{2}}{\vc{q}\, }
+ ( \vcsi{q}{2} - \ps{\vc{q}}{\vci{q}{2}} ) \ps{\vs{2}}{\vc{k}\, }
\nonumber\\
&& - i \vc{k} \times \vc{q} \cdot \vc{Q}\, \Big]
+ 2 \vci{q}{1} \, \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}} +
\vc{q}\, \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \nonumber\\
&& - \vc{k}\, \ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vci{q}{2}}
+ 2i (\vc{k} - \vc{q}\, ) \times \vc{Q} \, \Big] \ps{\vs{2}}{\vci{q}{2}}
\Big\}
\, \prop{PS}{2}\nonumber\\
&&
+\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsproIa}
\\
\jrcq{F}{mes-III}{PS}{-} &=& \Ff{e,1}{-}\, \conm{PS}{32}{4}\, \vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}\
\nonumber\\
&&
\Big\{ \vcs{q}\, \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vc{k}\, } +
\vcs{k}\, \ps{\vs{1}}{\vc{q}\, } \ps{\vs{2}}{\vc{q}\, } \nonumber\\
&&
+ \vc{k} \times \vc{q} \cdot
\Big[ (\vs{1} \times \vs{2}\, ) \ps{\vci{q}{1}}{\vci{q}{2}}
- i \vc{Q}\, (\ps{\vs{1}}{\vci{q}{1}} + \ps{\vs{2}}{\vci{q}{2}} )
\Big] \Big\} \, .
\label{jpsmesIa}
\end{eqnarray}
The first form (\ref{jpsproIc}) is convenient for reconstructing
the total ``pro'' current, the second one (\ref{jpsproIa}) for
verifying the continuity equation.
Finally, the currents belonging to the remaining commutator
of (\ref{cont23})
\begin{eqnarray}
\langle \vcp{p} \, | \Big[ v^{(3)}_{PS} , \,
\rho^{(0)}_{F} (1;\vc{k}\, )\Big]^- | \vc{p} \, \rangle &=&
- \Ff{e,1}{-} \conm{PS}{32}{4}\,
\Big[ 2 \vcs{Q} + 2 \vcsi{q}{2} + \frac{1}{2} \vcs{k} \Big]
\nonumber\\
&&
\ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}} \prop{PS}{2}
+\mbox{$(1 \leftrightarrow 2)$} \, , \label{conpsIb}
\end{eqnarray}
are made up from the remaining pieces of the total ``pro'' and
``mes'' currents that have not yet been singled out.
Let us write them in the form
\begin{eqnarray}
\jrcq{F}{pro-IV}{PS}{-} &=& - \Ff{e,1}{-} \,
\conm{PS}{32}{4} \, \Big\{ \Big[\, 2 \, \vs{1}\, ( \vcsi{q}{2} + \vcs{Q} )
+ \frac{1}{2} \vc{k} \, \ps{\vs{1}}{\vci{q}{2}}
+ 2 \vc{q}\, \ps{\vs{1}}{\vci{q}{1}} \nonumber\\
&&
+ i \vc{k} \times \vc{Q} \Big]
\ps{\vs{2}}{\vci{q}{2}}
- \frac{1}{8} \vc{k}\,
\Big[ \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}}\, +
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, }\, \Big] \nonumber\\
&&
+ \frac{1}{2} \vc{k} \times \Big[ \,
\vc{k} \times \vs{1} \ps{\vs{2}}{\vci{q}{2}}\,
+ \frac{1}{2} \vci{q}{2} \times \vs{1} \ps{\vs{2}}{\vc{k}\, }\, \Big]
\Big\} \nonumber\\
&&
\prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \label{jpsproId}\\
&=& - \Ff{e,1}{-} \,
\conm{PS}{32}{4} \, \Big\{ \, \vs{1}\,
\Big[\, ( 2 \vcsi{q}{2} + 2 \vcs{Q} - \frac{1}{2} \vcs{k} )
\ps{\vs{2}}{\vci{q}{2}} \nonumber\\
&&
- \frac{1}{4} \ps{\vc{k}}{\vci{q}{2}} \ps{\vs{2}}{\vc{k}\, } \, \Big]
+ i \vc{k} \times \vc{Q} \ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
+ \vc{q}\, \Big[\, 2 \ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}}
- \frac{1}{4} \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vc{k}\, }\, \Big]
\nonumber\\
&&
+ \vc{k} \, \Big[\,
\frac{1}{2} \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vci{q}{2}}\, +
\frac{3}{8} \ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}}
\nonumber\\
&&
+ \frac{1}{8} \ps{\vs{1}}{\vci{q}{2}} \ps{\vs{2}}{\vc{k}\, }\,
\Big]\, \Big\} \prop{PS}{2} +\mbox{$(1 \leftrightarrow 2)$} \, ,
\label{jpsproIb}\\
\jrcq{F}{mes-IV}{PS}{-} &=& \Ff{e,1}{-} \,
\conm{PS}{32}{4} \, \vc{q}\, \mbox{$f(\vec{q}_1^{\, 2},\vec{q}_2^{\, 2})$}
\Big\{ \Big[ 4 \vcs{q} + 4 \vcs{Q} + \vcs{k}\, \Big]
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vci{q}{2}} \nonumber\\
&& - \frac{\vcs{k}}{4} \Big[
\ps{\vs{1}}{\vc{k}\, } \ps{\vs{2}}{\vci{q}{2}} +
\ps{\vs{1}}{\vci{q}{1}} \ps{\vs{2}}{\vc{k}\, } \Big] \Big\} \, .
\label{jpsmesIb}
\end{eqnarray}
Again, the first form (\ref{jpsproId}) is more convenient for
checking the continuity equation, the second one (\ref{jpsproIb})
for constructing the total current of (\ref{intjpspro-}).
\begin{table}
\caption{Survey on the contributions to the two-body charge density
$\rho (2;\vec k\,)$ with a listing of the corresponding equations.
}
\begin{center}
\begin{tabular}{cccccc}
meson & type & {$\rho^{(2)\,+}_{FW}$} & {$\rho^{(2)\,+}_{\chi_V}$} &
{$\rho^{(2)\,+}_{F}$} &
{$\rho^{(2)\,-}_{F}$} \\
\hline
S & mes & & & &
(\ref{intrhsmes-}) \\
S & ret & (\ref{rh2sret+}) & & (\ref{intrhsret+}) &
(\ref{intrhsret-}) \\
\hline
V & mes & & & &
(\ref{intrhsmes-}) \\
V & ret & (\ref{rh2sret+}) & & (\ref{intrhsret+}) &
(\ref{intrhsret-}) \\
V & mes-tr & & & & (\ref{intrhvmestr-}) \\
\hline
PS & pro & (\ref{rh2pspro+}) & (\ref{rh2pschiv+}) & (\ref{intrhpspro+}) &
(\ref{intrhpspro-}) \\
PS & mes & & & &
(\ref{intrhpsmes-}) \\
PS & ret & (\ref{rh2psret+}) & & (\ref{intrhpsret+}) &
(\ref{intrhpsret-})\\
\end{tabular}
\end{center}
\label{tab1}
\end{table}
\begin{table}
\caption{Survey on the contributions to the
two-body current density $\vec\jmath\,(2;\vec k\,)$ with a
listing of the corresponding equations.
}
\begin{center}
\begin{tabular}{cccccccccccc}
meson & type &
{$\vec \jmath^{\,\,(1)\,-}_{F}$} &
{$\vec \jmath^{\,\,(3)\,+}_{FW}$} &
{$\vec \jmath^{\,\,(3)\,+}_{\chi_V}$} &
{$\vec \jmath^{\,\,(3)\,+}_{F}$} &
{$\vec \jmath^{\,\,(3)\,-}_{FW}$} &
{$\vec \jmath^{\,\,(3)\,-}_{sep}$} &
{$\vec \jmath^{\,\,(3)\,-}_{\chi_r}$} &
{$\vec \jmath^{\,\,(3)\,-}_{\chi_\sigma}$} &
{$\vec \jmath^{\,\,(3)\,-}_{\chi_V}$} &
{$\vec \jmath^{\,\,(3)\,-}_{F}$}
\\
\hline
S & pro & & (\ref{j2spro+}) & & (\ref{intjspro+}) & (\ref{j2spro-}) & &
(\ref{jschrpro}) &
& & (\ref{intjspro-}) \\
S & mes & (\ref{j2smes1-}) & & & & (\ref{j2smes-}) & & & (\ref{jschs}) &
& (\ref{intjsmes3-}) \\
S & ret & & (\ref{j2sret+}) & & (\ref{intjsret+}) & (\ref{j2sret-}) &
(\ref{jssep}) & (\ref{jschrret}) & & & (\ref{intjsret-}) \\
\hline
V & pro & & (\ref{j2vpro+}) & & (\ref{intjvpro+}) & (\ref{j2vpro-}) & &
(\ref{jschrpro}) &
& & (\ref{intjvpro-}) \\
V & mes & (\ref{j2smes1-}) & & & & (\ref{j2vmes-}) & & & (\ref{jschs}) &
& (\ref{intjvmes-}) \\
V & ret & & (\ref{j2sret+}) & & (\ref{intjsret+}) & (\ref{j2sret-}) &
(\ref{jssep}) & (\ref{jschrret}) & & & (\ref{intjsret-}) \\
V & mes-tr & & & & & (\ref{j2vmestr-}) & & & & &
(\ref{intjvmestr-}) \\
\hline
PS & pro & (\ref{j2pspro1-}) & (\ref{j2pspro+}) & (\ref{jpschiv+}) &
(\ref{intjpspro+}) &
(\ref{j2pspro3-}) & (\ref{jpsrsrpro1}) & (\ref{jpschirrpro}) &
(\ref{jpschispro}) & (\ref{jpschiv-}) &
(\ref{intjpspro-}) \\
PS & mes & (\ref{j2psmes1-}) & & & & (\ref{j2psmes3-}) & (\ref{jpsrsrmes}) &
& (\ref{jpschismes}) & & (\ref{intjpsmes-}) \\
PS & ret & & (\ref{j2psret+}) & & (\ref{intjpsret+}) & (\ref{j2psret-}) &
(\ref{jpsrsrret1}) & (\ref{jpsrsrret}) & & & (\ref{intjpsret-}) \\
\end{tabular}
\end{center}
\label{tab2}
\end{table}
\section{Introduction}
In recent years many attempts have been made to understand
the nucleon structure functions measured in lepton deep-inelastic
scattering (DIS).
Although perturbative QCD is successful in describing the
variation of structure functions with the squared momentum transfer,
their magnitude and shape is governed by the non-perturbative physics of
composite particles, and is, so far, not calculable directly from QCD.
A variety of models has been invoked to describe
nucleon structure functions.
Bag model calculations, for example,
which are driven by the dynamics of quarks bound
in a nucleon bag,
describe non-singlet unpolarized and polarized
structure functions quite successfully (see e.g. \cite{Jaffe75,ScThLo89}
and references therein).
However, such calculations are not relativistically covariant.
A covariant approach to nucleon structure functions is given by
so-called ``spectator models'' \cite{MeyMul91,MeScTh94,MePiTh95}.
Here the leading twist, non-singlet quark distributions are
calculated from the process in which the target nucleon splits into
a valence quark, which is scattered by the virtual photon,
and a spectator system carrying baryon number $2/3$.
Furthermore, the spectrum of spectator states is assumed to be
saturated by single scalar and vector diquarks. Thus
the main ingredients of these models are covariant quark-diquark
vertex functions.
Until now, vertex functions have merely been parameterized such that the
measured quark distributions are reproduced; no attempt has been made
to connect them to a dynamical model of the nucleon.
In this work we construct the vertex functions from a model Lagrangian
by solving the Bethe--Salpeter equation (BSE) for the quark-diquark
system.
However, we do not aim
at a detailed, quantitative description of nucleon structure functions
in the present work.
Rather we outline how to extract quark-diquark vertex functions from
Euclidean solutions of the BSE.
In this context several simplifications are made.
We consider only scalar diquarks as spectators and restrict ourselves
to the $SU(2)$ flavor group.
The inclusion of pseudo-vector diquarks and the generalization to
$SU(3)$ flavor are relatively straightforward extensions
and will be left for future work.
It should be mentioned that the quark-diquark Lagrangian used here
does not account for quark confinement inside nucleons.
However, the use of a confining quark-diquark interaction
should also be possible within the scheme that we use.
As an important result of our work
we find that the vertex function of the nucleon is
highly relativistic even in the case of weak binding.
Furthermore we observe that the
nucleon structure function, $F_1$, is determined to a large extent by
the relativistic kinematics of the quark-diquark system and is
not very sensitive to its dynamics as long as the spectator system is
treated as a single particle.
The outline of the paper is as follows. In Sec.~\ref{spectatorModel}
we introduce the spectator model for deep-inelastic scattering.
Section \ref{diquarkModel} focuses on the scalar diquark model
for the nucleon, which yields the quark-diquark vertex function as a
solution of a ladder BSE.
In Sec.~\ref{numerical} we present numerical results for the
quark-diquark vertex function and the nucleon structure function,
$F_1$.
Finally, we summarize and conclude in Sec.~\ref{summary}.
\section{Deep-Inelastic Lepton Scattering in the
\protect \\Spectator Model}
\label{spectatorModel}
Inclusive deep-inelastic scattering of leptons from hadrons is described
by the hadronic tensor
\begin{eqnarray}
W^{\mu \nu}(P,q)
&=& \frac{1}{2\pi} \int d^4\xi\, e^{iq\cdot \xi}
\left\langle P\left|J^{\mu}(\xi)\, J^{\nu}(0)
\right|P
\right\rangle,
\end{eqnarray}
where $P$ and $q$ are the four-momenta of the target and the exchanged
virtual photon, respectively, and $J^{\mu}$ is the hadronic
electromagnetic current.
In unpolarized scattering processes only the symmetric piece of
$W^{\mu \nu} = W^{\nu \mu}$ is probed.
It can be expressed in terms of two structure functions $F_1$
and $F_2$, which depend on the Bjorken scaling variable, $x = Q^2/2 P\cdot q$,
and the squared momentum transfer, $Q^2 = -q^2$:
\begin{equation}
W^{\mu \nu}(q,P) =
\left( -g^{\mu\nu}+\frac{q^\mu q^\nu}{q^2} \right)
F_1(x,Q^2) +
\left( P^\mu - q^\mu \frac{P\cdot q}{q^2} \right)
\left( P^\nu - q^\nu \frac{P\cdot q}{q^2} \right)
\frac{F_2(x,Q^2)}{P\cdot q}.
\end{equation}
In the Bjorken limit ($Q^2, P\cdot q \rightarrow \infty$ at finite $x$),
in which we work throughout,
both structure functions depend, up to logarithmic corrections, on $x$ only,
and are related via the Callan--Gross relation:
$F_2 = 2 x F_1$.
The hadronic tensor, $W^{\mu \nu}$, is connected via the optical theorem to
the amplitude $T^{\mu \nu}$ for virtual photon-nucleon forward
Compton scattering:
\begin{equation}
\frac{1}{\pi}\hbox{Im}\,T^{\mu\nu}(q,P) = W^{\mu\nu}(q,P).
\label{dispersion}
\end{equation}
In the Bjorken limit the interaction of the virtual photon with a
valence quark from the target leads to a spectator system carrying
diquark quantum numbers, i.e. baryon number $2/3$ and spin $0$ or $1$.
In the spectator model it is assumed that the spectrum of spectator
states can be saturated through a single scalar and pseudo-vector
diquark \cite{MeyMul91,MeScTh94,MePiTh95}.
In the following we will restrict ourselves to contributions from
scalar diquarks only. The generalization to include a pseudo-vector
diquark contribution is left for future work.
The corresponding Compton amplitude is
(Fig. \ref{fig:diquarkSpectatorForT}):
\begin{eqnarray}
T^{\mu\nu}_S(q,P) & = &
\VEV{\frac{5}{6}+\frac{\tau_3}{2}}_N
\int \frac{d^4k}{(2\pi)^4 i} \bar u(P)\bar\Gamma(k,P-k)
S(k)\gamma^\mu S(k+q)\gamma^\nu S(k)
\label{compton}\\
& & \qquad\qquad\qquad\qquad\qquad\qquad
\times D(P-k) \Gamma(k,P-k) u(P),
\nonumber
\end{eqnarray}
where the flavor matrix has to be evaluated in the nucleon iso-spin space.
The integration runs over the quark momentum $k$.
The Dirac spinor of the spin averaged nucleon target with momentum $P$
is denoted by $u(P)$.
Furthermore $S(k)=1/(m_q-\myslash{k} - i\epsilon)$ and
$D(k)=1/(m_D^2-k^2-i\epsilon)$ are the propagators of the quark
and diquark respectively, while $\Gamma$ is the quark-diquark vertex
function. To obtain the hadronic tensor,
the scattered quark and the diquark spectator have to be put on mass-shell
according to Eq.(\ref{dispersion}):
\begin{eqnarray}
S(k+q) & \rightarrow & i\pi \delta(m_q^2-(k+q)^2)
(m_q+\myslash{k}+\myslash{q}),
\nonumber \\
D(P-k) & \rightarrow & i\pi \delta(m_D^2-(P-k)^2).
\label{onshell}
\end{eqnarray}
The vertex function $\Gamma$ for the target, which in our
approach is a positive energy, spin-1/2 composite state of a quark and a
scalar diquark, is given by two independent Dirac structures:
\begin{equation}
\left. \Gamma(k, P-k)\right|_{(P-k)^2=m_D^2} =
\left( f^{\rm on}_1(k^2) +
\frac{2\myslash{k}}{M}
f^{\rm on}_2(k^2)
\right)\Lambda^{(+)}(P),
\label{DISpara}
\end{equation}
where $\Lambda^{(+)}(P) = 1/2 + \myslash{P}/2M$ is the
projector onto positive energy spin-1/2 states and
$M=\sqrt{P^2}$ is the invariant mass of the nucleon target.
Note that according to the on-shell condition in Eq.(\ref{onshell})
the scalar functions, $f^{\rm on}_{1/2}$, will depend on
$k^2$ only.
{}From Eqs.(\ref{dispersion} -- \ref{DISpara}) we then obtain
for the valence quark contribution to the structure function $F_1$:
\begin{eqnarray}
F_{1}^{\rm val.}(x)&=&
\VEV{\frac{5}{6}+\frac{\tau_3}{2}}_N
\frac{1}{16\pi^3}\int_{-\infty}^{k^2_{\rm max}}\,
\frac{d k^2}{m_q^2 - k^2}
\nonumber\\
& &\times\left\{ \left( 1 - x +
\frac{( m_q + M )^2 - m_D^2}{ m_q^2 - k^2 } x
\right) \frac{f^{\rm on}_1(k^2)^2}{4}
\right.
\label{FofX}\\
& &\qquad
- \left( 1+x + \frac{2\,m_q}{M} x +
\left( 1 - \frac{2\,m_q}{M} \right) \,
\frac{ ( m_q + M )^2 - m_D^2 }
{ m_q^2 - k^2 } x
\right) \frac{f^{\rm on}_1(k^2)\,f^{\rm on}_2(k^2)}{2}
\nonumber\\
& &\qquad
+ \left(
4 \frac{m_q^2-k^2}{M^2}
+ \left(1 - \left({2m_q\over M}\right)^2\right)(1+2x)
\right.
\nonumber\\
& &\qquad\qquad\left.\left.
+ \left(1-\frac{2\,m_q}{M} \right)^2\,x
+ \left(1-\frac{2\,m_q}{M} \right)^2\,
\frac{ ( m_q + M )^2 - m_D^2 }{m_q^2-k^2}\,x
\right) \frac{f^{\rm on}_2(k^2)^2}{4}
\right\}.
\nonumber
\end{eqnarray}
The upper limit of the $k^2$ integral is given by
\begin{equation}
k^2_{\rm max}=x\left( M^2 -\frac{m_D^2}{1-x}\right).
\label{k2max}
\end{equation}
Note that $k^2_{\rm max} \rightarrow -\infty$ for
$x\rightarrow 1$. This implies that for any
regular vertex function
$F_{1}^{\rm val.} \rightarrow 0 $ for $x\rightarrow 1$ and thus the
structure function automatically has the correct support.
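As a rough numerical illustration of this endpoint behavior, the $f^{\rm on}_1$ piece of Eq.(\ref{FofX}) (with $f^{\rm on}_2$ set to zero) can be evaluated with a hypothetical Gaussian vertex function. The masses, the width \texttt{Lam\_f}, and the function \texttt{f1\_on} below are illustrative assumptions, not the Bethe--Salpeter solution:

```python
import numpy as np
from scipy.integrate import quad

m_q, m_D = 1.0, 1.0     # quark and diquark masses (paper's units, m_bar = 1)
M = 1.8                 # bound-state mass, M < m_q + m_D
Lam_f = 1.0             # assumed width of the illustrative vertex function

def f1_on(k2):
    # hypothetical on-shell vertex function, NOT the model solution
    return np.exp(k2 / Lam_f**2)

def k2_max(x):
    # upper limit of the k^2 integration, Eq. (k2max)
    return x * (M**2 - m_D**2 / (1.0 - x))

def F1_val(x):
    # f_1^2 term of Eq. (FofX) only (f_2^on set to zero)
    c = (m_q + M)**2 - m_D**2
    def integrand(k2):
        D = m_q**2 - k2
        return (1.0 - x + c * x / D) * f1_on(k2)**2 / (4.0 * D)
    val, _ = quad(integrand, -np.inf, k2_max(x))
    return val / (16.0 * np.pi**3)

# k^2_max runs to -infinity as x -> 1, so F1 vanishes at the endpoint
print(k2_max(0.5), k2_max(0.9))    # approx 0.62 and -6.08
print(F1_val(0.9) < F1_val(0.5))   # True
```

Any vertex function that falls off for large spacelike $k^2$ produces the same suppression, since the integration region itself shrinks away as $x\rightarrow 1$.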
Since the spectator model of the nucleon is valence-quark dominated,
the structure function $F_1^{\rm val.}$ in Eq.(\ref{FofX}) is
identified with the leading twist part of $F_1$ at some typical
low momentum scale, $\mu^2 \lsim 1$ GeV$^2$.
The physical structure function at large
$Q^2 \gg \mu ^2$ is then to be generated via $Q^2$-evolution.
It should be mentioned that the Compton amplitude in Eq.(\ref{compton})
and also the expression for the structure function in (\ref{FofX})
contain poles from the quark propagators attached to the
quark-diquark vertex functions.
{}From Eq.(\ref{k2max}) it follows that these
poles do not contribute when $M < m_D + m_q$.
This condition is automatically satisfied
if the nucleon is considered as a bound state of the quark and
diquark, as done in the following.
In the next section we shall determine the vertex function $\Gamma$,
or equivalently $f_1^{\rm on}$ and $f_2^{\rm on}$ from Eq.(\ref{DISpara})
as solutions of a ladder BSE.
\section{Scalar Diquark Model for the Nucleon}\label{diquarkModel}
We now determine the vertex function (\ref{DISpara})
as the solution of a BSE for a quark-diquark system.
We start from the following Lagrangian:
\begin{eqnarray}
&{\cal L} = & \bar\psi_a\left(i \partial \llap / - m_q\right)\psi_a
+ \partial_\mu\phi_a^*\partial^\mu\phi_a - m_D^2 \phi_a^*\phi_a
\label{action}\\
& & + i \frac{g}{2\sqrt{2}} \epsilon^a_{bc}
\psi^T_b C^{-1}\gamma_5 \tau_2\psi_c \, \phi^*_a
-i \frac{g}{2\sqrt{2}} \epsilon^a_{bc}
\bar\psi_b C^{-1}\gamma_5 \tau_2\bar\psi^T_c \, \phi_a,
\nonumber
\end{eqnarray}
where we have explicitly indicated $SU(3)$ color indices but
have omitted flavor indices.
We restrict ourselves to flavor $SU(2)$,
where $\tau_2$ is the antisymmetric generator which acts on the
iso--doublet quark field, $\psi$, with mass $m_q$.
The charged scalar field, $\phi$, represents the flavor--singlet
scalar diquark carrying an invariant mass $m_D$.
Similar Lagrangians have been used recently to
describe some static properties of the nucleon, such as its mass
and electromagnetic charge (see e.g. \cite{BuAlRe92,HuaTjo94,AsIsBeYa95}).
The nucleon with 4--momentum $P$ and spin $S$
is described by the bound state Bethe-Salpeter (BS) vertex function $\Gamma$:
\begin{equation}
\hbox{F.T.} \left\langle 0\left|
T \psi_a(x)\phi_b(y)
\right|P,S\right\rangle
= \delta_{ab} S(k) D(P-k) i \Gamma(k,P-k) u(P,S).
\end{equation}
Here $u(P,S)$ is the nucleon Dirac spinor and
F.T. stands for the Fourier transform.\footnote{
We use the normalization
$ \VEV{P^\prime | P}= 2 P^0(2\pi)^3 \delta^{(3)}(\vec P^\prime-\vec P)$
and $\sum_{S} u(P,S)\bar u(P,S)= \sqrt{P^2} + \myslash{P}= M+\myslash{P}$.}
(Again we have omitted $SU(2)$ flavor indices.)
We will now discuss the integral equation for the vertex function
$\Gamma$ in the framework of the ladder approximation.
\subsection{Ladder BSE}
For the following discussion of the integral equation for the vertex function
$\Gamma$ we write the quark momentum as $q + \eta_q P$ and the diquark
momentum as $-q + \eta_D P$.
The weight factors $\eta_q$ and $\eta_D$ are
arbitrary constants between 0 and 1 and satisfy $\eta_q + \eta_D = 1$.
Within the ladder approximation the BSE for the vertex function
of a positive energy, spin-1/2 model nucleon can be written as
(see Fig.(\ref{fig:ladderBSE})):
\begin{equation}
\Gamma(q,P)u(P,S)=g^2 \int\frac{d^4k}{(2\pi)^4 i}
S(-k-q-(\eta_q-\eta_D)P) S(\eta_q P+k) D(\eta_D P-k)
\Gamma(k,P)u(P,S),
\label{ladderBSE}
\end{equation}
where the flavor and color factors have already been worked out.
The scattering kernel is given by a $u$--channel quark exchange
according to the interaction Lagrangian in Eq.(\ref{action}).
Since we are only interested in positive energy solutions,
we may write the vertex function as:
\begin{equation}
\Gamma(q,P)=\left( a f_1(q,P) + b f_2(q,P)
+ \frac{\myslash{q}}{M} f_2(q,P) \right)
\Lambda^{(+)}(P).
\label{BSvertexPara}
\end{equation}
The arguments of the scalar functions $f_\alpha(q,P)$
are actually $q^2$ and $P\cdot q$, but we use this shorthand
notation for brevity.
With $a$ and $b$ we denote as yet unspecified scalar functions of
$q^2$ and $P\cdot q$ which will be chosen later for convenience.
(The definition of $f_{1/2}^{\rm on}$ in Eq.(\ref{DISpara}) corresponds
to a specific choice of $a$ and $b$.)
\subsection{Wick Rotation}\label{wickRotation}
After multiplying the BSE in (\ref{ladderBSE}) with appropriate
projectors (which depend on $a$ and $b$), we obtain a pair of coupled
integral equations for the scalar functions $f_1(q,P)$ and $f_2(q,P)$:
\begin{eqnarray} \label{prjctd_BSE}
&f_\alpha(q,P)=g^2 \int \frac{d^4k}{(2\pi)^4i}\,
& \tilde D_q(-q-k-(\eta_q-\eta_D)P)\\
& &\times D_q(\eta_qP+k)\,D_D(\eta_D P-k)\,
K_{\alpha\beta}(q,k,P)\, f_\beta(k,P), \nonumber
\end{eqnarray}
where $D_q(p)\equiv 1/(m_q^2-p^2-i\epsilon)$ and
$D_D(p)\equiv 1/(m_D^2-p^2-i\epsilon)$
are the denominators of the quark and diquark propagators, respectively.
The indices $\alpha$ and $\beta$ stand for the independent Dirac structures
of the vertex function $\Gamma$,
i.e. in the scalar--diquark model they run from 1 to 2
according to (\ref{BSvertexPara}).
Consequently the function
$K_{\alpha \beta}(q,k,P)$ is a $2\times 2$ matrix,
where its explicit form depends on the
definition of the scalar functions $f_\alpha(q,P)$.
We use a form factor for the quark--diquark coupling which
weakens the short-range interaction between the quark and the diquark and
ensures the existence of solutions with a positive norm.
For simplicity, we use a $u$--channel form factor which can be
conveniently absorbed into
the denominator of the exchanged quark propagator as follows:
\begin{equation}
D_q(p) \rightarrow \tilde D_q(p) \equiv
D_q(p) \frac{\Lambda^2}{\Lambda^2 - p^2-i\epsilon}.
\label{exchD}
\end{equation}
As a next step let us
analyze the singularities of the integrand in Eq.(\ref{prjctd_BSE}).
For this purpose we choose the nucleon rest frame
where $P_\mu=P^{(0)}_\mu\equiv(M,\vec 0\,)$
and put the weight constants $\eta_q$ and $\eta_D$ to the classical values:
\begin{eqnarray}
\eta_q & \equiv & \frac{m_q}{m_q+m_D} \equiv \frac{1-\eta}{2}\;,
\label{etaq} \\
\eta_D & \equiv & \frac{m_D}{m_q+m_D} \equiv \frac{1+\eta}{2}\;.
\label{etad}
\end{eqnarray}
Here we have introduced the asymmetry parameter
$\eta\equiv\frac{m_D-m_q}{m_q+m_D}$, such that the invariant quark and
diquark mass is given by $m_q=\bar m(1-\eta)$ and $m_D=\bar m(1+\eta)$
respectively, where $\bar m=(m_q+m_D)/2$.
In the complex $k_0$ plane $D_q(\eta_qP+k)$ and $D_D(\eta_D P-k)$
will be singular for:
\begin{eqnarray}
& k^0 & = -\eta_q M \pm E_q(\vec k) \mp i\epsilon,
\label{qcut} \\
& k^0 & = \eta_D M \pm E_D(\vec k) \mp i\epsilon,
\label{dcut}
\end{eqnarray}
where $E_q(\vec k)=\sqrt{m_q^2+{\vec k\,}^2}$ and
$E_D(\vec k)=\sqrt{m_D^2+ {\vec k\,}^2}$.
The cuts lie in the second and fourth quadrants of the complex
$k_0$-plane.
However, for a bound state, $0 < M < m_q+m_D$, a gap which includes the
imaginary $k_0$ axis opens between these two cuts.
Next, consider the singularities of the exchanged quark propagator:
\begin{equation}
k^0 = -q^0 +\eta M \pm E_{m_i}(\vec q+\vec k) \mp i\epsilon,
\label{exch_pole}
\end{equation}
where $E_{m_i}(\vec k)=\sqrt{m_i^2+{\vec k\,}^2}$ and $m_i = m_q, \Lambda$
for $i=1, 2$, respectively.
The sum of the second and third terms on the RHS of
Eq.(\ref{exch_pole}) is bounded by:
\begin{eqnarray}
\eta M + E_{m_i}(\vec q+\vec k) & \ge (m_D-m_q)
\frac{M}{m_q+m_D} + m_i,
\label{bound1} \\
\eta M - E_{m_i}(\vec q+\vec k) & \le (m_D-m_q)
\frac{M}{m_q+m_D} - m_i.
\label{bound2}
\end{eqnarray}
The diquark should be considered as a bound state of
two quarks, which implies $m_D < 2 m_q$.
Together with $m_i \geq m_q$, i.e. setting the form factor mass
$\Lambda$ larger than $m_q$,
we have $m_D-m_q+m_i>0$ and $m_D-m_q-m_i<0$.
Consequently we find from Eqs.(\ref{bound1},\ref{bound2})
$\eta M + E_{m_i}(\vec q+\vec k) >0$ and
$\eta M - E_{m_i}(\vec q+\vec k) < 0$ for any momenta
$\vec q$ and $\vec k$.
Therefore, if $-q^0 +\eta M - E_{m_i}(\vec q+\vec k) > 0$ or
$-q^0 +\eta M + E_{m_i}(\vec q+\vec k) <0$ a so-called ``displaced''
pole will occur in the first or third quadrant, respectively.
In other words, the displaced-pole-free condition is:
\begin{equation}
\eta M - E_{m_i}(\vec q+\vec k) < q^0 <
\eta M + E_{m_i}(\vec q+\vec k),
\label{cndition0}
\end{equation}
for any $\vec k$. Since $\vec k$ is an integration variable,
$E_{m_i}(\vec q+\vec k)$ attains its minimum value, $m_q$, at
$\vec k = -\vec q$ for $i=1$. The above condition therefore simplifies to:
\begin{equation}
(q^0-\eta M)^2 < m_q^2.
\label{condition_halfWR}
\end{equation}
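This displaced-pole-free condition, Eq.(\ref{condition_halfWR}), can be checked directly for real external energies; the masses below are sample values in the paper's units ($\bar m = 1$), chosen for illustration only:

```python
def displaced_pole_free(q0, M, m_q, m_D):
    # Eq. (condition_halfWR): no displaced poles iff (q0 - eta*M)^2 < m_q^2
    eta = (m_D - m_q) / (m_q + m_D)
    return (q0 - eta * M)**2 < m_q**2

# equal masses (eta = 0): the condition reduces to |q0| < m_q
m_q = m_D = 1.0
M = 1.8
print(displaced_pole_free(0.5, M, m_q, m_D))   # True
print(displaced_pole_free(1.5, M, m_q, m_D))   # False
```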
If $q^0$ is Wick rotated to pure imaginary values, i.e.
$q^{\mu} \rightarrow
\tilde q^\mu = (iq^4, \vec q\,)$ with real $q^4 \in (-\infty,\infty)$,
the displaced poles will move to the second and fourth quadrants.
Then, after also rotating the momentum
$k^{\mu} \rightarrow \tilde k^\mu = (ik^4, \vec k)$,
we obtain the Euclidean vertex functions $f_\beta(\tilde k,P^{(0)})$
{}from the Wick rotated BSE:
\begin{eqnarray}
&f_\alpha(\tilde q,P^{(0)})=g^2 \int\frac{d^4k_E}{(2\pi)^4}\,
&\tilde D_q(-\tilde q-\tilde k-(\eta_q-\eta_D)P^{(0)})\label{WR_BSE}\\
& &D_q(\eta_qP^{(0)}+\tilde k)\,D_D(\eta_D P^{(0)}-\tilde k)\,
K_{\alpha \beta}(\tilde q,\tilde k,P^{(0)})\,
f_\beta(\tilde k,P^{(0)}),
\nonumber
\end{eqnarray}
where $d^4k_E = d k^4 d^3\vec k$.
If we are in a kinematic situation where no displaced poles
occur, i.e. Eq.(\ref{condition_halfWR}) is fulfilled, we may
obtain the Minkowski space vertex function $f_\alpha(q,P)$ from
the Euclidean solution through:
\begin{eqnarray}
&f_\alpha(q,P^{(0)})=g^2 \int_{}^{}\frac{d^4k_E}{(2\pi)^4}\,
& \tilde D_q(-q-\tilde k-(\eta_q-\eta_D)P^{(0)}) \label{halfWR_BSE}
\\
& &\times D_q(\eta_qP^{(0)}+\tilde k)\,D_D(\eta_D P^{(0)}-\tilde k)\,
K_{\alpha\beta}(q,\tilde k,P^{(0)})\, f_\beta(\tilde k,P^{(0)}).
\nonumber
\end{eqnarray}
It should be emphasized that for a given Euclidean solution
$f_\beta(\tilde k,P^{(0)})$, Eq.(\ref{halfWR_BSE})
is not an integral equation but merely an algebraic relation
between $f_\alpha(q,P^{(0)})$ and $f_\beta(\tilde k,P^{(0)})$.
If however displaced poles occur,
i.e. Eq.(\ref{condition_halfWR}) is not fulfilled,
one needs to add contributions from the displaced poles to the RHS of
Eq.(\ref{halfWR_BSE}).
This will lead to an inhomogeneous integral equation for the
function $f_\alpha(q,P^{(0)})$, where the inhomogeneous term
is determined by the Euclidean solution $f_\beta(\tilde k,P^{(0)})$.
Since the Euclidean solutions $f_\alpha(\tilde q,P^{(0)})$ are functions
of $\tilde q^2=-q_E^2\equiv-((q_4)^2+|\vec q|^2)$ and
$\tilde q\cdot P^{(0)}=i q_4 M$ for a fixed $M$, it is convenient to introduce
4-dimensional polar coordinates:
\begin{eqnarray}
q^4 & = & q_E \cos\alpha_q\,,
\nonumber \\
q^i & = & |\vec q\,|\, \hat q^i\,,
\label{Euclidian_vec} \\
|\vec q\,| & = & q_E \sin\alpha_q\,.
\nonumber
\end{eqnarray}
Here $0 < \alpha_q <\pi$ and the 3-dimensional unit vector
$\hat q^i$ is parameterized by the usual polar and azimuthal angles
$\hat q^i=(\sin\theta_q\cos\phi_q,\sin\theta_q\sin\phi_q,\cos\theta_q)$.
In the following we therefore consider
$f_\alpha(\tilde q,P^{(0)})$ as a function of $q_E$ and $\cos\alpha_q$.
Furthermore it is often convenient (and traditional) to factor out
the coupling constant $g^2$ together with a factor $(4\pi)^2$, and
define the ``eigenvalue'' $\lambda^{-1}=(g/4\pi)^2$.
Then the BSE in (\ref{WR_BSE}) is solved as an eigenvalue
problem for a fixed bound state mass $M$.
\subsection{$O(4)$ Expansion} \label{O4expansion}
In the following we will define the scalar functions $f_\alpha(q,P)$
for positive energy ($M > 0$) bound states via:
\begin{equation}
\Gamma (q,P)
=\left[ f_1(q,P)+
\left(
-\frac{P\cdot q}{M^2}
+\frac{\myslash{q}}{M}
\right) f_2(q,P)
\right]\Lambda^{(+)}(P),
\label{define_f}
\end{equation}
i.e., we now make a specific choice for the scalar functions $a$ and
$b$ in Eq.(\ref{BSvertexPara}).
In the rest frame of this model nucleon, this leads to:
\begin{equation}
\Phi^{J^P=1/2^+}_{J_3=S/2}(q,P^{(0)})=
\Gamma (q,P^{(0)}) u(P^{(0)},S) =
\left(\matrix{
f_1(q,P^{(0)})
\cr
\frac{\vec q\cdot\vec\sigma}{M}
f_2(q,P^{(0)})
\cr}
\right) \chi_S.
\label{diracrep}
\end{equation}
Here we have explicitly used the Dirac representation.
The Pauli matrices $\vec\sigma$ act on the two component spinor
$\chi_S$, where the spin label $S=\pm 1$ is the
eigenvalue of $\sigma_3$: $\sigma_3\chi_S=S\,\chi_S$.
In terms of the O(3) spinor harmonics ${\cal Y}^J_{lm}$
\cite{Edomonds},
\begin{equation}
{\cal Y}^{1/2}_{0S}(\hat q)=\frac{1}{\sqrt{4\pi}}\chi_S
\qquad\hbox{and}\qquad
{\cal Y}^{1/2}_{1S}(\hat q)=-\hat q\cdot
\vec \sigma{\cal Y}^{1/2}_{0S}(\hat q),
\label{O3spnrharmonics}
\end{equation}
we have:
\begin{equation}
\Phi^{J^P=1/2^+}_{J_3=S/2}(q,P^{(0)})=\sqrt{4\pi}
\left(\matrix{
f_1(q,P^{(0)})\,{\cal Y}^{1/2}_{0S}(\hat q)
\cr
-\frac{|\vec q\,|}{M}f_2(q,P^{(0)})
\,{\cal Y}^{1/2}_{1S}(\hat q)
\cr}
\right).
\label{diracrep2}
\end{equation}
{}From Eq.(\ref{diracrep2}) we observe that
$f_1$ and $f_2$ correspond to the upper and
lower components of the model nucleon, respectively.
After the Wick rotation, as discussed in the previous subsection,
the scalar functions
$f_\alpha$ become functions of $q_E$ and $\cos\alpha_q$.
Therefore we can expand them in terms of Gegenbauer polynomials $C^1_n(z)$
\cite{Betheman}:
\begin{equation}
f_\alpha(q_E, \cos\alpha_q)
= \sum_{n=0}^{\infty}\,i^n\,f^n_\alpha(q_E)
C^1_n(\cos\alpha_q).
\label{expanf}
\end{equation}
We have introduced the phase $i^n$ to ensure that the coefficient functions
$f^n_\alpha$ are real.
The integral measure in $O(4)$ polar coordinates is:
\begin{equation}
\int\frac{d^4k_E}{(2\pi)^4}=\frac{1}{(4\pi)^2}
\int_{0}^{\infty}dk_E\,k_E^3\,\frac{2}{\pi}
\int_{0}^{\pi}d\alpha_k\sin^2\alpha_k\,
\frac{1}{2\pi}\int d\Omega_{\hat k}.
\label{measure}
\end{equation}
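The projection of the BSE onto $O(4)$ radial functions rests on the orthogonality relation $(2/\pi)\int_0^\pi d\alpha\,\sin^2\!\alpha\, C^1_n(\cos\alpha)\,C^1_m(\cos\alpha)=\delta_{nm}$, which can be verified numerically (a sketch using SciPy's Gegenbauer evaluation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer

def overlap(n, m):
    # (2/pi) * int_0^pi sin^2(a) C^1_n(cos a) C^1_m(cos a) da
    f = lambda a: (np.sin(a)**2
                   * eval_gegenbauer(n, 1, np.cos(a))
                   * eval_gegenbauer(m, 1, np.cos(a)))
    val, _ = quad(f, 0.0, np.pi)
    return 2.0 / np.pi * val

print(round(overlap(2, 2), 8), round(overlap(2, 3), 8))  # 1.0 0.0
```

Note that the normalization is $n$-independent for $\lambda=1$ Gegenbauer polynomials, which is why the $(2/\pi)\sin^2\alpha$ weight in the measure projects out each $f^n_\alpha$ without extra factors.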
Multiplying the BSE in Eq.(\ref{WR_BSE})
with the Gegenbauer polynomial $C^1_n(\cos\alpha_q)$
and integrating over the hyper-angle $\alpha_q$,
reduces the BSE to an integral equation
for the $O(4)$ radial functions $f^n_\alpha$:
\begin{equation}
\lambda(M)\,f^n_\alpha(q_E)= \sum_{\beta=1}^{2}\sum_{m=0}^{\infty}\,
\int_{0}^{\infty}dk_E\,
{\cal K}^{n\,m}_{\alpha\,\beta}(q_E,k_E)\; f^m_\beta(k_E).
\label{fn_eq}
\end{equation}
Here $\lambda(M)$ is the eigenvalue which corresponds to a
fixed bound state mass $M$.
Furthermore note that the integral kernel
\begin{eqnarray}
& {\cal K}^{n\,m}_{\alpha\,\beta}(q_E,k_E) = (-i)^n&i^m\,\frac{2}{\pi}
\int_{0}^{\pi}d\alpha_q\sin^2\alpha_q\,
C^1_n(\cos\alpha_q)\,\frac{2}{\pi}\,
\int_{0}^{\pi}d\alpha_k\sin^2\alpha_k\,C^1_m(\cos\alpha_k)
\nonumber\\
& &\times\frac{1}{2\pi}\int d\Omega_{\hat k}\, k_E^3\,
\tilde D_q(-\tilde q-\tilde k-(\eta_q-\eta_D)P^{(0)})\,
K_{\alpha \beta}(\tilde q,\tilde k,P^{(0)})
\label{kernelmatrix}\\
& &\qquad\times\,D_q(\eta_qP^{(0)}+\tilde k)\,
D_D(\eta_D P^{(0)}-\tilde k)
\nonumber
\end{eqnarray}
is real, so that we can restrict ourselves
to real $O(4)$ radial functions $f^n_\alpha$.
To close this section we shall introduce normalized $O(4)$ radial functions.
Since the scalar functions $f_1(q, P)$ and $f_2(q, P)$
correspond to the upper and lower components of the model nucleon
respectively, one may expect that $f_2(q, P)$ becomes negligible when the
quark-diquark system forms a weakly bound state.
Thus one can use the relative magnitude of the
two scalar functions, $f_{2}(q,P)/f_{1}(q,P)$, as a measure of
relativistic contributions to the model nucleon.
To compare the magnitude of the Wick rotated scalar functions
$f_1(\tilde q, P)$ and $f_2(\tilde q, P)$
we introduce normalized $O(4)$ radial functions.
Recall the $O(4)$ spherical spinor harmonics \cite{Rothe,Ladanyi}:
\begin{equation}
{\cal Z}_{njlm}(\alpha,\theta,\phi)=
\left[\frac{2^{2l+1}(n+1)(n-l)!}{\pi(n+l+1)!}\right]^{1/2}\,
l!\,(\sin\alpha)^l\,C^{1+l}_{n-l}(\cos\alpha)\,
{\cal Y}^j_{l\,m}(\theta,\phi).
\label{O4harmocics}
\end{equation}
The integers $n$ and $l$ denote the $O(4)$ angular momentum and
the ordinary $O(3)$ orbital angular momentum respectively.
The half--integer quantum numbers
$j$ and $m$ stand for the usual $O(3)$ total angular momentum and the magnetic
quantum number.
We rewrite the Wick rotated solution
$\Phi^{J^P=1/2^+}_{J_3=S/2}(\tilde q,P^{(0)})$ in terms of
the spinor harmonics
${\cal Z}_{njlm}(\alpha,\theta,\phi)$ and define the normalized
$O(4)$ radial functions $F_n(q_E)$ and $G_n(q_E)$ as:
\begin{equation}
\Phi^{J^P=1/2^+}_{J_3=S/2}(\tilde q,P^{(0)})
\equiv \sqrt{2}\;\pi
\left(\matrix{\sum_{n=0}^{\infty}i^n\,F_n(q_E)\,
{\cal Z}_{n\,\frac{1}{2}\,0\,S}(\alpha_q,\hat q)\cr
\sum_{n=1}^{\infty}i^{n-1}\,G_n(q_E)\,
{\cal Z}_{n\,\frac{1}{2}\,1\,S}(\alpha_q,\hat q) \cr}
\right).
\label{O4diracrep}
\end{equation}
The extra factor $\sqrt{2}\;\pi$ is introduced for convenience.
The normalized $O(4)$ radial functions $F_n$ and $G_n$ are
then linear combinations of the $f_\alpha^n$:
\begin{eqnarray}
F_n(q_E) & = & f_1^n(q_E),
\label{Ffunc} \\
G_n(q_E) & = & -\frac{q_E}{ 2M }\sqrt{n(n+2)}
\left( \frac{f_2^{n-1}(q_E)}{n}
+ \frac{f_2^{n+1}(q_E)}{n+2}
\right).
\label{Gfunc}
\end{eqnarray}
Equivalently we can express the Wick rotated scalar functions as:
\begin{eqnarray}
f_1(q_E,\cos\alpha)&=&
\sum_{n=0}^{\infty}i^n\,F_n(q_E)\,C^1_n(\cos\alpha),
\label{f1andF}\\
f_2(q_E,\cos\alpha)&=&
-\sum_{n=1}^{\infty}\frac{2M}{q_E}
\frac{i^{n-1}}{\sqrt{n(n+2)}}\,
G_n(q_E)\,C^2_{n-1}(\cos\alpha).
\label{f2andG}
\end{eqnarray}
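These Gegenbauer sums are straightforward to evaluate numerically. The sketch below (plain Python; the coefficient list is purely illustrative and stands in for computed values $F_n(q_E)$ at fixed $q_E$) builds $C^a_n(x)$ from the standard three-term recurrence and sums the series for $f_1$:

```python
def gegenbauer(n, a, x):
    """Gegenbauer polynomial C^a_n(x) via the three-term recurrence
    k C^a_k = 2(k+a-1) x C^a_{k-1} - (k+2a-2) C^a_{k-2}."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * a * x
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * (k + a - 1.0) * x * c
                        - (k + 2.0 * a - 2.0) * c_prev) / k
    return c

def f1_from_F(F, cos_alpha):
    """f_1(q_E, cos a) = sum_n i^n F_n(q_E) C^1_n(cos a), Eq. (f1andF);
    F is a list of radial-function values F_n at a fixed q_E."""
    return sum((1j ** n) * Fn * gegenbauer(n, 1.0, cos_alpha)
               for n, Fn in enumerate(F))
```

For $a=1$ the recurrence reproduces the Chebyshev polynomials of the second kind, $C^1_n(\cos\theta)=\sin((n+1)\theta)/\sin\theta$, which provides a quick consistency check.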
\subsection{Euclidean Solutions}\label{Euclidean}
In this section we present our results for the integral equation
in (\ref{fn_eq}).
For simplicity we considered the quark and diquark masses to be equal:
$m_q=m_D=\bar m$. In this case the kernel ${\cal K}_{\alpha \beta}^{n m}$
can be evaluated analytically in a simple manner,
since for $\eta =0$ the denominator
of the propagator for the exchanged quark does not depend on
the nucleon momentum $P$.
We fixed the scale of the system by setting the mass $\bar m$ to unity.
The ``mass'' parameter in the form factor was fixed at $\Lambda=2\bar m$.
We solved Eq.(\ref{fn_eq}) as follows.
First we terminated the infinite sum in Eq.(\ref{expanf})
at some fixed value $n_{\rm max}$.
Then the kernel in Eq.~(\ref{kernelmatrix}) for the truncated system
becomes a finite matrix with dimension $(2\times n_{\rm max})^2$.
Its elements are functions of $q_E$ and $k_E$.
Next we discretized the Euclidean momenta and performed the integration
over $k_E$ numerically together with some initially assumed
radial functions $f^n_\alpha$.
In this way new radial functions and
an ``eigenvalue'' $\lambda$ associated with them were generated.
The value of $\lambda$ was determined by imposing the
normalization condition on $f^n_\alpha$ such that the resultant valence
quark distribution is properly normalized.
We then used the generated radial functions as an input and
repeated the above procedure until the radial functions and
$\lambda$ converged.
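The iteration just described can be sketched in code. The following is a minimal illustration (plain Python; we assume the kernel has already been reduced to a dense matrix acting on the stacked, discretized radial functions, and a simple max-norm replaces the valence-distribution normalization described in the text):

```python
def power_iterate(K, f, tol=1e-12, max_iter=500):
    """Fixed-point iteration for lambda * f = K f.

    K : dense kernel matrix (list of rows), obtained by truncating the
        O(4) sum at n_max and discretizing the k_E integration
    f : initial guess for the stacked radial functions f_alpha^n
    """
    lam = 0.0
    for _ in range(max_iter):
        # apply the (quadrature-weighted) kernel to the current iterate
        g = [sum(Kij * fj for Kij, fj in zip(row, f)) for row in K]
        lam_new = max(abs(x) for x in g) or 1.0  # "eigenvalue" from normalization
        g = [x / lam_new for x in g]
        if abs(lam_new - lam) < tol and all(abs(a - b) < tol
                                            for a, b in zip(g, f)):
            return lam_new, g
        f, lam = g, lam_new
    return lam, f
```

The stopping criterion mirrors the procedure above: iteration ends when both the radial functions and $\lambda$ have converged.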
Note that our normalization differs from the commonly used
one \cite{Nakanishi_survey},
since we are going to apply the vertex function only
to processes with the diquark as a spectator, i.e., we do not consider
the coupling of the virtual photon
directly to the diquark. The ordinary choice of
normalization would not lead to an integer charge for the model
nucleon in the spectator approximation. We therefore normalize the
valence quark distribution itself.
Regarding Eq.(\ref{fn_eq}) as an eigenvalue equation,
we found the ``eigenvalue'' $\lambda$ (coupling constant) as a
function of $M^2$, varying the latter over the range
$0.85\,\bar m^2 \le M^2 \le 1.99 \,\bar m^2$.
The eigenvalue $\lambda$ was stable, i.e. independent
of the number of grid points and the maximum value of $k_E$.
Furthermore, for a weakly bound state, $M> 1.6 \,\bar m$,
the solutions were independent of the choice of the starting functions
and the iteration converged in typically $10\sim 25$ cycles.
However, for a strongly bound state, $M < 1.4 \,\bar m$,
we found that the choices of the starting functions were crucial for
convergence. A possible reason for this instability in strongly
bound systems is that our numerical calculations used the functions
$f_\alpha^n$ defined in Eq.~(\ref{expanf}) rather than the $O(4)$
eigenfunctions $F_n$ and $G_n$.
Since strongly bound systems, $M \sim 0$, are approximately
$O(4)$ symmetric, a truncated set of functions $f_\alpha^n$ may be an
inappropriate basis for numerical studies of the BSE.
We found that the eigenvalue $\lambda$ converges quite rapidly
when $n_{\rm max}$, the upper limit of the
$O(4)$ angular momentum, is increased.
This stability of our solution with respect to
$n_{\rm max}$ is independent of $M$.
We observe that contributions to the eigenvalue $\lambda$ from
$f_\alpha^n(q_E)$ with $n>4$ are negligible.
This dominance of the lowest $O(4)$ radial functions has also been observed
in the scalar--scalar--ladder model \cite{L+M} and utilized
as an approximation for solving the BSE in a
generalized fermion--fermion--ladder approach \cite{Jain+Munczek}.
To compare the magnitude of the two scalar functions
$f_1(\tilde q, P)$ and $f_2(\tilde q, P)$ we show in Fig.\ref{fig:FandG}
the normalized $O(4)$ radial functions $F_n$ and $G_n$.
As the dependence of $\lambda$ on $n_{\rm max}$ suggests,
radial functions with $O(4)$ angular momenta $n>4$ are
quite small compared to the lower ones.
Together with the fast convergence of $\lambda$,
this observation justifies the truncation
of Eq.(\ref{fn_eq}) at $n=n_{\rm max}$.
Note that even for very weakly bound systems ($\sim0.5\%$ binding energy)
the magnitude of the ``lower--component'' $f_2(\tilde q, P)$ remains
comparable to that of the ``upper--component'' $f_1(\tilde q, P)$.
This suggests that the spin structure of relativistic bound states
is non-trivial, even for weakly bound systems.
So-called ``non--relativistic'' approximations, in which one neglects
the non--leading components of the vertex function ($f_2(\tilde q, P)$ in
our model), are therefore valid only in the extreme weak-binding limit,
$M \rightarrow 2\bar m$.
\subsection{Analytic Continuation}\label{analyticContinuation}
In the previous section we obtained the quark-diquark vertex
function in Euclidean space.
Its application to deep-inelastic scattering, as discussed in
Sec.\ref{spectatorModel}, demands an analytic continuation to
Minkowski space.
Here the scalar functions $f_\alpha$, which determine the
quark-diquark vertex function through Eq.(\ref{BSvertexPara}),
will depend on the Minkowski space momenta $q^2$ and $P\cdot q$.
Recall that our Euclidean solution is based on the expansion
of the scalar functions $f_{\alpha}$ in terms of Gegenbauer polynomials
in Eq.(\ref{expanf}).
This expansion was defined in Sec.\ref{O4expansion}
for real hyper-angles $\alpha_q$, with $-1<\cos\alpha_q<1$.
Consequently the infinite sum over the $O(4)$ angular momenta $n$
in (\ref{expanf}) is absolutely convergent for
pure imaginary energies $q^0$.
Now we would like to analytically continue $q^0$ to physical, real values.
The hyper-angle $\alpha_q$ is defined in Euclidean space by:
\begin{equation}
\cos\alpha_q=\frac{q^0}{i\sqrt{-q^2}}.
\label{alpha}
\end{equation}
In Minkowski space $\cos\alpha_q$
is then purely imaginary ($\cos\alpha_q = -i\,q^0/\sqrt{-q^2}$)
for space-like $q$, and real ($\cos\alpha_q = -q^0/\sqrt{q^2}$)
if $q$ is time-like.
Note that the angular momentum sum (\ref{expanf}) converges even
for complex values of $\cos\alpha_q$ as long as $|\cos\alpha_q|<1$.
Then an analytic continuation of $f_\alpha$ to Minkowski space
is possible.
In terms of the Lorentz invariant scalars
$q^2$ and $P\cdot q$ we obtain:
\begin{equation}
z =\cos\alpha_q = -
\hbox{sgn}(q^2)\,\frac{P\cdot q}{\sqrt{q^2\,M^2}}.
\label{zMinkowski}
\end{equation}
Then the convergence condition for the sum over the $O(4)$ angular
momenta in Eq.(\ref{expanf}) reads:
\begin{equation}
(P\cdot q)^2 < M^2 \;|q^2|.
\label{condition1}
\end{equation}
Even if Eq.~(\ref{condition1}) is satisfied
the radial functions $f_\alpha^n$
themselves may contain singularities which prevent us from performing the
analytic continuation by numerical methods.
However, note that the Euclidean solutions for $f_\alpha^n$ are
regular everywhere on the imaginary $q^0$-axis.
Consequently the RHS of the ``half Wick rotated'' equation (\ref{halfWR_BSE})
contains no singularities if the
displaced-pole-free condition in Eq.(\ref{condition_halfWR}) is met.
Therefore, in Minkowski space,
the radial functions $f_\alpha$ are regular everywhere
in the momentum region where the displaced-pole-free condition
(\ref{condition_halfWR}) and the convergence condition (\ref{condition1})
are satisfied.
Here the analytic continuation to Minkowski space is straightforward.
Recall the normalized $O(4)$ radial functions $F_n$
and $G_n$ from Eqs.(\ref{Ffunc},\ref{Gfunc}),
which are linear combinations of $f_\alpha^n$.
Writing them as $F_n(q_E)=q_E^n\,\tilde F_n(q_E^2)$ and
$G_n(q_E)=q_E^n\,\tilde G_n(q_E^2)$, we find for the
scalar functions $f_\alpha(q^2,P\cdot q)$ from
Eqs.(\ref{f1andF},\ref{f2andG}):
\begin{eqnarray}
& & f_1(q^2, P\cdot q)=\sum_{n=0}^{\infty}
\frac{\tilde F_n(-q^2)}{M^{n}}\,
\left(\sqrt{q^2\,M^2}\right)^n\,C^1_n(z),
\label{f1Minkowski} \\
& & f_2(q^2, P\cdot q)=-\sum_{n=1}^{\infty}
\frac{2}{\sqrt{n(n+2)}}
\frac{\tilde G_n(-q^2)}{M^{n-2}}\,
\left(\sqrt{q^2\,M^2}\right)^{n-1}\,
C^{2}_{n-1}(z).
\label{f2Minkowski}
\end{eqnarray}
Note that the Gegenbauer polynomials $C^1_n$ ($C^{2}_{n-1}$)
together with the square root factors $(\sqrt{q^2\,M^2})^n$
($(\sqrt{q^2\,M^2})^{n-1}$) are $n$-th ($(n-1)$-th) order polynomials of
$q^2$, $M^2$, and $P\cdot q$ and contain therefore no $\sqrt{q^2\,M^2}$
factors.
Since in the kinematic region under consideration, $f_1(q^2, P\cdot q)$ and
$f_2(q^2, P\cdot q)$ are regular, it is possible to extrapolate
$\tilde F_n(-q^2)$ and $\tilde G_n(-q^2)$ numerically from space-like $q^2$ to
time-like $q^2$ as necessary.
Finally we are interested in the quark-diquark vertex function
as it appears in the handbag diagram for deep-inelastic scattering.
Therefore we need the functions $f_\alpha$ for on-shell diquarks only.
The squared relative momentum $q^2$ and the Lorentz scalar $P\cdot q$
are then no longer independent but related by:
\begin{equation}
P\cdot q = -\frac{m_q+m_D}{2 m_D}
\left[ -q^2 + \left( \frac{m_D}{m_q+m_D} \right) ^2
\left( (m_q+m_D)^2 - M^2 \right) \right].
\label{on-shellPq}
\end{equation}
Then $f_1$ and $f_2$ from Eq.(\ref{f1Minkowski}) and (\ref{f2Minkowski})
are functions of the squared relative momentum $q^2$ only.
In Sec.\ref{spectatorModel} the parameterization (\ref{DISpara})
for the Dirac matrix structure of the vertex function was more convenient
to use.
The corresponding functions $f_{\alpha}^{\rm on}$
which enter the nucleon structure function in Eq.(\ref{FofX})
are given by:
\begin{eqnarray}
& & f^{\rm on}_1(k^2)=f_1(q^2,P\cdot q)
+\frac{m_D^2-k^2}{2M^2}f_2(q^2,P\cdot q),
\label{fon1}\\
& & f^{\rm on}_2(k^2)=\frac{1}{2}f_2(q^2,P\cdot q).
\label{fon2}
\end{eqnarray}
Here the arguments $q^2$ and $P\cdot q$, of the scalar functions
$f_\alpha$ on the RHS should be understood as functions of $k^2$
through Eq.~(\ref{on-shellPq}), together with the relation:
\begin{equation}
q^2 = \frac{m_D}{m_q+m_D}\left(k^2 - m_q\left(\frac{M^2}{m_q+m_D} - m_D
\right)\right).
\end{equation}
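These on-shell relations are simple to encode. The sketch below (plain Python; the function name is ours) returns $q^2$ and $P\cdot q$ for a given $k^2$ and also reports whether the convergence condition (\ref{condition1}) holds at that point:

```python
def onshell_kinematics(k2, mq, mD, M):
    """q^2 and P.q for an on-shell diquark, Eq. (on-shellPq) together
    with the q^2(k^2) relation; also checks condition (condition1)."""
    m = mq + mD
    q2 = mD / m * (k2 - mq * (M ** 2 / m - mD))
    Pq = -m / (2.0 * mD) * (-q2 + (mD / m) ** 2 * (m ** 2 - M ** 2))
    converges = Pq ** 2 < M ** 2 * abs(q2)  # O(4) sum converges here
    return q2, Pq, converges
```

For a weakly bound state such as $M = 1.8\,\bar m$ with $m_q = m_D = \bar m$, the condition holds at moderate space-like $k^2$, consistent with the statement that the continuation is possible for weakly bound states at moderate $|k^2|$.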
As already mentioned, the procedure just described yields radial
functions $f_{\alpha}^{\rm on}$ in Minkowski space only in the kinematic
region where the conditions Eqs.(\ref{condition_halfWR},\ref{condition1})
are met.
These are fulfilled for weakly bound states $M^2 \lsim (m_D + m_q)^2$
at moderate values of $|k^2|$.
On the other hand,
the nucleon structure function in (\ref{FofX})
at small and moderate values of $x$ is dominated
by contributions from small quark momenta $|k^2| < m_q^2$.
Consequently, the Minkowski space vertex function obtained in the
kinematic region specified by the displaced-pole-free condition
(\ref{condition_halfWR}) and the convergence condition (\ref{condition1})
determines the valence quark distribution of a weakly bound nucleon
at small and moderate $x$.
In the case of strong binding, $M^2 \ll (m_q + m_D)^2$, or at
large $x$ the nucleon structure function is dominated by
contributions from large space-like $k^2$.
Here the above analytic continuation to Minkowski space is
not possible and
the sum over the $O(4)$ angular momenta in Eq.(\ref{expanf})
should be evaluated first.
In principle this is possible through
the Watson-Sommerfeld method \cite{collision_th,Domokos,G+S+G1}
where the leading power behavior of $f_\alpha(q^2, P\cdot q)$
for asymptotic $P\cdot q$ can be deduced by
solving the BSE at complex $O(4)$ angular momenta \cite{G+S+G2},
or by assuming conformal invariance of the amplitude
and using the operator product expansion technique
as outlined in ref.\cite{G+S+G1}.
However, the use of the operator product expansion is questionable in our
case, since the quark-diquark model introduces a form factor for the
quark-diquark coupling and does not correspond to an asymptotically free
theory.
Existence of the form factor also makes the analysis of
complex $O(4)$ angular momenta complicated.
Therefore a simpler approach is used.
It can be shown from a general analysis
that BS vertex functions which satisfy a ladder BSE are regular
for space-like $k^2$, when one of the constituent particles is on
mass shell. Furthermore, from the numerical solution studied in the
previous section, we found that the magnitude of the $O(4)$ partial
wave contributions to the function $f^{\rm on}_\alpha$
decreases reasonably fast for large $O(4)$ angular momenta $n$,
except at very large $k^2$.
We therefore use the expansion formulae
(\ref{f1Minkowski}) and (\ref{f2Minkowski}) with an upper limit
on $n \leq n_{\rm max}$ to evaluate $f^{\rm on}_\alpha$
defined by Eqs.(\ref{fon1},\ref{fon2}) as an approximation.
Nevertheless, this application of BS vertex functions to deep-inelastic
scattering emphasizes the need to solve Bethe-Salpeter equations
in Minkowski space from the very beginning, as has been
done recently for scalar theories without derivative coupling \cite{K+W}.
\section{Numerical Results}\label{numerical}
In this section we present results for the valence contribution to
the nucleon structure function, $F_{1}$, from Eq.(\ref{FofX}),
based on the numerical solutions discussed above.
First we show in Fig. \ref{fig:fon} the physical, on--shell scalar functions
$ f^{\rm on}_\alpha$ for a bound state mass
$M = 1.8\,\bar m$.
The maximal $O(4)$ angular momentum is fixed at $n_{\rm max}=4$.
Figure \ref{fig:fon} demonstrates that
the magnitude of $f^{\rm on}_1$ and
$f^{\rm on}_2$ is quite similar, even for a weakly bound
quark-diquark system.
Furthermore we find that for weakly bound states
($M \gsim 1.8 \,\bar m$)
the dependence of $f^{\rm on}_\alpha$
on $n_{\rm max}$ is negligible in the region of moderate, space-like
$-k^2 \lsim 5\,\,\bar m^2$.
However for larger space-like values of $k^2$ the convergence of
the $O(4)$ expansion in Eq.(\ref{expanf})
deteriorates for any $M^2$, and numerical results
for fixed $n_{\rm max}$ become less accurate.
In Fig.~\ref{fig:struct} the structure function, $F_1^{\rm val.}$, is shown
for various values of $M^2$ using $n_{\rm max}=4$.
The distributions are normalized to unity.
One observes that for weakly bound systems ($M = 1.99\,\bar m $)
the valence quark distribution peaks around $x\sim 1/2$.
On the other hand, the distribution becomes flat if binding is strong
($M = 1.2\,\bar m$).
This behavior turns out to be mainly of kinematic origin.
To see this, remember that $F_1^{\rm val.}$ is given by
an integral (c.f., Eq~ (\ref{FofX}))
over the squared quark momentum, $k^2$, bounded by
$k^2_{\rm max} = x(M^2 - m_D^2/(1-x))$.
The latter has a maximum at $x=1-m_D/M$.
Therefore the peak of the valence distribution for weakly bound
systems occurs at $x\approx 1/2$ for $m_D = m_q$.
For a more realistic choice $m_D \sim 2\, m_q$ the valence distribution
would peak at $x\sim1/3$.
The more strongly the system is bound, the less $k^2_{\rm max}$ varies
with $x$. This leads to a broad distribution in the case of
strong binding.
Thus the global shape of $F_1^{\rm val.}$ is determined to a large extent
by relativistic kinematics.
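This kinematic statement is easy to check numerically. A short sketch (plain Python; names are ours) scans $k^2_{\rm max}(x) = x(M^2 - m_D^2/(1-x))$ on a grid and locates its maximum at $x = 1 - m_D/M$:

```python
def k2_max(x, M, mD):
    # upper limit of the k^2 integration entering F_1^val
    return x * (M ** 2 - mD ** 2 / (1.0 - x))

def peak_position(M, mD, steps=20000):
    # brute-force scan of k2_max over 0 < x < 1
    xs = [i / float(steps) for i in range(1, steps)]
    return max(xs, key=lambda x: k2_max(x, M, mD))
```

For $m_D = m_q$ and $M$ close to $2\bar m$ the maximum sits near $x \approx 1/2$, reproducing the peak position quoted above.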
To investigate the role of the relativistic spin
structure of the vertex function we discuss the contribution of the
``relativistic'' component, $f^{\rm on}_2$, to
the nucleon structure function $F_{1}^{\rm val.}$.
Figure \ref{fig:weaklybound} shows that
the contribution from $f^{\rm on}_2$ is negligible for
a very weakly bound quark-diquark state $(M = 1.99\,\bar m)$.
Here the ``non--relativistic'', leading component, $f^{\rm on}_1$,
determines the structure function.
However, even for moderate binding the situation is different.
In Fig.\ref{fig:modbound} one observes that
the contribution from the ``relativistic'' component is quite significant
for $M = 1.8\,\bar m$.
Nevertheless, the characteristic $x$ dependence, i.e. the
peak of the structure function at $x \approx 1/2$, is still due to the
``non--relativistic'' component.
\section{Summary}\label{summary}
The aim of this work was to outline a scheme whereby
structure functions can be obtained from a relativistic
description of a model nucleon as a quark-diquark bound state.
For this purpose we solved the BSE for the
nucleon starting from a simple quark-diquark Lagrangian.
From the Euclidean solutions of the BSE we extracted
the physical quark-diquark vertex functions.
These were applied to the spectator model for DIS, and the
valence quark contribution to the structure function
$F_1$ was calculated.
Although the quark-diquark Lagrangian used here is certainly not
realistic, and the corresponding BSE was solved by applying
several simplifications,
some interesting and useful observations were made.
We found that the spin structure of the nucleon,
seen as a relativistic quark-diquark bound state, is non-trivial,
except in the case of very weak binding. Correspondingly, the valence
quark contribution to the structure function is governed by the
``non--relativistic'' component of the nucleon vertex function only
for a very weakly bound state.
Furthermore, we observed that the shape of the unpolarized
valence quark distribution is mainly determined by
relativistic kinematics and does not depend on details
of the quark-diquark dynamics.
However at large quark momenta
difficulties in the analytic continuation of
the Euclidean solution for the vertex function
to Minkowski space emphasize the need to treat
Bethe-Salpeter equations in Minkowski space from
the very beginning.
\section*{Acknowledgments}
This work was supported in part by the Australian Research Council,
the BMBF, and Scientific Research Grant \#1491 of the Japan Ministry of
Education, Science and Culture.
\section{Introduction}
In a previous paper \cite{Zhao94a}, we undertook a numerical implementation of methods proposed by Munn and Silbey \cite{Munn85} for the determination of polaron properties in the presence of simultaneous local and nonlocal exciton-phonon coupling, local coupling being defined as a nontrivial dependence of the exciton site energies on lattice coordinates and nonlocal coupling as a nontrivial dependence of the exciton transfer integral on lattice coordinates.
Those methods were essentially perturbative, with the coefficients of a canonical transformation being fixed so as to limit the secular growth with the temperature of the perturbation remaining after transformation.
The present work is motivated by several limitations of those methods that were found in the course of our prior work \cite{Zhao94a}:
1) Since the Munn-Silbey method was motivated by an interest in controlling perturbations at high temperatures, one should not necessarily expect the method to produce the best description of polaron states at low temperatures.
2) Since the method is perturbative, it is reasonable to expect that the description of the polaron ground state so obtained could be improved.
3) Since the self-consistency equations on which the method is based do not involve the exciton transfer integral, it is questionable whether the method can be relied upon beyond the very narrow band regime.
4) Since the results of our numerical investigation showed some significant differences with analytical results based on the same underlying methods, as well as with some other recent analyses
\cite{Bryksin91}-\cite{Capone96},
it would appear desirable to have corroboration of our results by independent methods.
The central findings of our prior work \cite{Zhao94a}, that polaron binding energies may be much larger than previously thought, and polaron bands much narrower, underscores the importance of independent corroboration, since such findings are of central importance to understanding the influence of nonlocal exciton-phonon coupling on the nature of polaron states and of polaron transport.
Simultaneous local and nonlocal coupling appears to be particularly important in the characterization of solid-state excimers, where a variety of experimental and theoretical considerations suggest that a strong dependence of electronic tunneling upon certain coordinated distortions of neighboring molecules (nonlocal exction-phonon coupling) is crucial to the formation of excited bound states
\cite{Bryksin91, Sewell63, Tanaka63, Murrell64, Song69, Tanaka74, Umehara79, Toyozawa80, Fischer83, Umehara83, Port84, Walker85, Sumi89, Wu90a, Wu90b, Wu91, Schopka91, Weiss96, Wu93, Zhao94b, Silinsh94}.
In this paper we approach the problem of simultaneous local and nonlocal exciton-phonon coupling by variational methods.
Our central interest in this paper is in the polaron energy band, computed as
\begin{equation}
E^\kappa = \langle \Psi ( \kappa ) | \hat{H} | \Psi ( \kappa ) \rangle
\end{equation}
wherein $\kappa$ is the total crystal momentum label, $| \Psi ( \kappa ) \rangle$ is an appropriately normalized delocalized trial state, and $\hat{H}$ is the system Hamiltonian.
It should be noted that the trial states we use are eigenfunctions of the appropriate total momentum operator, so that variations for distinct $\kappa$ are independent, and the set of $E^\kappa$ so produced constitute a variational estimate (upper bound) for the polaron energy band \cite{Lee53,Toyozawa61}.
As our system Hamiltonian, we choose perhaps the simplest one embracing exciton tunneling and simultaneous local and nonlocal exciton-phonon coupling, a slight generalization of the traditional Holstein Hamiltonian \cite{Holstein59i,Holstein59ii}:
\begin{equation}
\hat{H} = \hat{H}^{ex} + \hat{H}^{ph} + \hat{H}^{ex-ph}
\end{equation}
\begin{equation}
\hat{H}^{ex} = - J \sum_n a_n^{\dagger} ( a_{n+1} + a_{n-1} )
\end{equation}
\begin{equation}
\hat{H}^{ph} = \omega \sum_n b_n^{\dagger} b_n
\end{equation}
\begin{equation}
\hat{H}^{ex-ph} = g \omega \sum_n a_n^{\dagger} a_n ( b_n^{\dagger} + b_n ) + \frac {\phi} {2} \omega \sum_n ( a_n^{\dagger} a_{n-1} + a_{n-1}^{\dagger} a_n - a_{n+1}^{\dagger} a_n - a_n^{\dagger} a_{n+1} ) ( b_n^{\dagger} + b_n )
\end{equation}
in which $a_n^\dagger$ creates an exciton in the rigid-lattice Wannier state at site $n$, and $b_n^\dagger$ creates a quantum of vibrational energy in the Einstein oscillator at site $n$.
Both excitons and phonons are treated as bosons.
The Einstein frequency is given by $\omega$, $J$ is the exciton transfer integral between nearest neighbor sites, $g$ is the local coupling strength, and $\phi$ is the nonlocal coupling strength characterizing phonon assisted transfers between nearest neighbor sites.
All of the methods used in this paper can be applied as well to common generalizations of this Hamiltonian, involving, for example, phonon dispersion, exciton transfers beyond nearest neighbors, and/or different exciton-phonon coupling geometry.
In Ref~\cite{Zhao94a} we considered both {\it antisymmetric} nonlocal coupling, as above, and {\it symmetric} nonlocal coupling, in which all nonlocal coupling terms would be of the same algebraic sign.
These two kinds of coupling represent different physical circumstances.
Antisymmetric coupling, for example, would be appropriate to the description of certain librations that promote exciton transfers between neighboring molecules when these molecules tilt toward each other, effectively decreasing the gap through which tunneling must take place.
This has the consequence that tunneling between a molecule and its neighbor to the right (for example) is promoted (and tunneling on the left inhibited) when the librator tilts to the right, and tunneling on the left is promoted (and tunneling on the right inhibited) when the librator tilts to the left.
Symmetric coupling, on the other hand, describes the circumstance in which tunneling between a molecule and its neighbors on both the left and right is promoted during the same phase of oscillation and inhibited during the complementary phase; this may happen, for example, if the strength with which a mobile exciton is bound varies with the coordinate described by the oscillator.
In this paper, we restrict our attention to antisymmetric nonlocal coupling.
In momentum space, these several terms take the form
\begin{equation}
\hat{H}^{ex} = \sum_k J_k a_k^{\dagger} a_k
\end{equation}
\begin{equation}
\hat{H}^{ph} = \sum_q \omega b_q^{\dagger} b_q
\end{equation}
\begin{equation}
\hat{H}^{ex-ph} = N^{-1/2} \sum_{k q} \omega f _{- k}^q a_{k+q}^{\dagger} a_k ( b_q + b_{- q}^{\dagger} )
\end{equation}
where
\begin{equation}
J_k = - 2 J \cos k , ~~~
f^q_k = g+i\phi[\sin k-\sin(k-q)],
\end{equation}
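As a small illustration (plain Python; the function name is ours), the coupling function is trivial to code; at $q = 0$ the antisymmetric nonlocal part vanishes and $f^0_k = g$ for all exciton momenta $k$:

```python
import math

def f_coupling(k, q, g, phi):
    """f^q_k = g + i*phi*[sin(k) - sin(k - q)] for antisymmetric
    nonlocal exciton-phonon coupling."""
    return complex(g, phi * (math.sin(k) - math.sin(k - q)))
```

With $g = 0$ the coupling is purely imaginary, reflecting the antisymmetric character of the nonlocal term.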
Throughout this paper, we use the following Fourier conventions for ladder operators $c^{\dagger} = a^{\dagger}, b^{\dagger}$, etc., and scalars $\gamma = \alpha, \beta$, etc.:
\begin{equation}
c^{\dagger}_n = N^{-1/2} \sum_p e^{-ipn} c^{\dagger}_p, ~~~
c^{\dagger}_p = N^{-1/2} \sum_n e^{ipn} c^{\dagger}_n,
\end{equation}
\begin{equation}
\gamma_n ~=~ N^{-1} \sum_p e^{ipn} \gamma_p, ~~~~
\gamma_p ~=~ \sum_n e^{-ipn} \gamma_n,
\end{equation}
To assist in the interpretation of the various formulas to follow, we denote exciton wave vectors by latin $k$'s, phonon wave vectors by latin $q$'s, and reserve the greek $\kappa$ for the total crystal momentum label.
Also throughout this paper, we use the Einstein frequency as the scale of the energy, so that all energies are dimensionless.
This leaves us with a three-dimensional parameter space in which a general point may be identified as $( J , g , \phi )$.
A complete exploration of the $(J, g, \phi)$ parameter space is far beyond the scope of this paper, so we must narrow our focus significantly.
Since our principal interest here is in nonlocal exciton-phonon coupling, we defer an in-depth discussion of the $( J, g , 0 )$ plane to other work.
Since we wish to compare our present results with those of the Munn-Silbey method, we present results for the $( 0, g, \phi )$ plane in a manner that closely follows prior work.
Since the regime of strong nonlocal coupling has received much less attention than that of local coupling, we present a more systematic survey of the $(J, 0, \phi )$ plane, including a phase diagram and a discussion of characteristic polaron structures and the nonlocal-coupling version of the self-trapping transition.
Finally, we present sample results for general points in the $(J, g, \phi )$ space.
The layout of the paper is as follows.
As a preliminary study, we briefly touch upon the application of the small polaron Ansatz to nonlocal exciton-phonon coupling, as was originally suggested by Merrifield in his 1964 paper \cite{Merrifield64}, but was never carried out.
Toyozawa's Ansatz is then introduced to examine the same problem in greater depth.
The rest of the paper is devoted to exploring the parameter space under Toyozawa's Ansatz, and a comparison of our present results with comparable results of the Munn-Silbey approach \cite{Zhao94a} is made.
We must emphasize that these comparisons are not to the original analytical calculations of Munn and Silbey \cite{Munn85}, but to our own implementation of their approach by numerical methods \cite{Zhao94a}.
Since the results of our numerical study differed quantitatively and qualitatively from the original calculations of Munn and Silbey in some significant respects and thereby generalized them, it is important in the following to distinguish between the Munn-Silbey {\it method} and our own numerical {\it results} obtained by this method.
\section{The Small Polaron Ansatz}
As in Merrifield's original calculation, the normalized small polaron trial state may be written
\begin{equation}
| \Psi (\kappa) \rangle = N^{-1/2} \sum_n e^{i \kappa n} a^{\dagger}_n \exp [ - \sum_{n_2} ( \beta_{n_2 -n}^\kappa b^{\dagger}_{n_2} - \beta_{n_2 -n}^{\kappa \ast} b_{n_2 } )] |0\rangle,
\end{equation}
\begin{equation}
| \Psi ( \kappa ) \rangle = N^{-1/2} \sum_n e^{i \kappa n} a^{\dagger}_n \exp [ - N^{-1/2} \sum_q ( \beta^\kappa_q e^{-iqn}
b^{\dagger}_q - \beta^{\kappa \ast}_q e^{iqn} b_q )] |0\rangle .
\end{equation}
This trial state may be viewed as a phased sum over ``form factors'' describing local exciton-phonon correlations.
The form factor in this case is the product of an exciton completely localized on a single lattice site and a lattice function defined {\it relative to} the exciton.
Using this trial state, we form the expectation value of the Hamiltonian and minimize the total energy $ E^\kappa $ with respect to the phonon amplitudes, yielding the self-consistency equations
\begin{equation}
\label{eq:sce}
\beta^\kappa_q = \frac { g + i 2 \phi S^\kappa \cos(\kappa - \Phi^\kappa - \frac q 2 ) \sin \frac q 2 }
{ 1 - 4 J S^\kappa \sin (\kappa- \Phi^\kappa - \frac q 2 ) \sin \frac q 2
- 8 \phi S^\kappa R^\kappa_q \sin \frac q 2 } ,
\end{equation}
\begin{equation}
S^\kappa = \exp [ N^{-1} \sum_q | \beta_q^\kappa |^2 (\cos q -1) ] ,
\end{equation}
\begin{equation}
\Phi^\kappa = N^{-1} \sum_q | \beta_q^\kappa |^2 \sin q ,
\label{eq:phi}
\end{equation}
\begin{equation}
R^\kappa_q = N^{-1} \sum_{q ^\prime}
Im ( \beta_{q ^\prime}^\kappa ) \sin ( \kappa - \Phi^\kappa - \frac q 2
- \frac {q ^\prime} 2 ) \sin \frac {q ^\prime} 2 .
\end{equation}
in which $S^\kappa$ and $\Phi^\kappa$ are the magnitude and phase of the Debye-Waller factor (see Section III).
It is evident from Eq.~(\ref{eq:sce}) that, unlike in the original work of Merrifield \cite{Merrifield64}, which was restricted to local coupling only, $\beta^\kappa_q$ here cannot be taken to be real, owing to the presence of nonlocal exciton-phonon coupling.
The phonon mode amplitude $\beta^\kappa_q$ is real for all $\kappa$ and $q$ when $\phi = 0$; when $\phi \not =0$, $\beta_q^{\kappa}$ is real only along the lines defined by $ [ \sin ( \kappa - \Phi^\kappa ) - \sin ( \kappa -\Phi^\kappa - q ) ] = 0 $, among which is the $ q = 0 $ line.
Using a numerical iteration scheme, we may solve the above set of self-consistency equations to desired precision without any artificial constraints.
For all values of local and nonlocal coupling strengths, we found our final solutions to be insensitive to our choice of both numerical method and states used to initialize calculation.
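A minimal version of such an iteration is sketched below (plain Python; the grid size, iteration count, and the absence of mixing or a convergence test are our simplifications). Each cycle recomputes $S^\kappa$, $\Phi^\kappa$, and $R^\kappa_q$ from the current amplitudes and then updates $\beta^\kappa_q$ via Eq.~(\ref{eq:sce}):

```python
import math

def iterate_beta(g, phi, J, kappa, N=32, n_iter=300):
    """Fixed-point iteration for the phonon amplitudes beta_q^kappa."""
    qs = [2.0 * math.pi * i / N - math.pi for i in range(N)]
    beta = [complex(g)] * N                  # local-coupling starting guess
    S, Phi = 1.0, 0.0
    for _ in range(n_iter):
        S = math.exp(sum(abs(b) ** 2 * (math.cos(q) - 1.0)
                         for b, q in zip(beta, qs)) / N)
        Phi = sum(abs(b) ** 2 * math.sin(q) for b, q in zip(beta, qs)) / N
        new = []
        for q in qs:
            R = sum(b.imag * math.sin(kappa - Phi - q / 2 - qp / 2)
                    * math.sin(qp / 2) for b, qp in zip(beta, qs)) / N
            num = (g + 2j * phi * S * math.cos(kappa - Phi - q / 2)
                   * math.sin(q / 2))
            den = (1.0
                   - 4.0 * J * S * math.sin(kappa - Phi - q / 2)
                   * math.sin(q / 2)
                   - 8.0 * phi * S * R * math.sin(q / 2))
            new.append(num / den)
        beta = new
    return beta, S, Phi
```

For $\phi = J = 0$ the iteration reduces to the familiar local-coupling result $\beta^\kappa_q = g$ with Debye-Waller magnitude $S^\kappa = e^{-g^2}$, a useful sanity check.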
For the purpose of illustration, we may express $ \beta^\kappa_q $ in terms of two real matrices $\xi^ \kappa_q, ~\eta^\kappa_q $ as in Ref~\cite{Zhao94a}:
\begin{equation}
\beta^\kappa_q = g \xi^\kappa_q + i \phi \eta^\kappa_q
[ \sin ( \kappa - \Phi^\kappa ) - \sin ( \kappa -\Phi^\kappa - q ) ].
\end{equation}
From Eq.~(\ref{eq:sce}) it follows that $\eta^\kappa_q = \xi^\kappa_q S^\kappa$.
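This identity is a one-line consequence of the product-to-sum relation
\begin{equation}
\sin ( \kappa - \Phi^\kappa ) - \sin ( \kappa - \Phi^\kappa - q )
= 2 \cos ( \kappa - \Phi^\kappa - \frac q 2 ) \sin \frac q 2 ,
\end{equation}
which casts the numerator of (\ref{eq:sce}) as $g + i \phi S^\kappa [ \sin ( \kappa - \Phi^\kappa ) - \sin ( \kappa - \Phi^\kappa - q ) ]$; since the denominator of (\ref{eq:sce}) is real (recall that $R^\kappa_q$ is real), identifying real and imaginary parts gives $\xi^\kappa_q$ as the inverse of that denominator and $\eta^\kappa_q = \xi^\kappa_q S^\kappa$.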
The $q$-independence of $S^\kappa$ suggests that the $\xi$ and $\eta$ matrices may be more similar in shape than we found in our numerical implementation of the Munn-Silbey method \cite{Zhao94a}; moreover, since the $\kappa$-dependence of $S^\kappa$ can be relatively weak in some regimes, the difference between $\xi$ and $\eta$ can be quite small.
Results of such calculations for weak nonlocal coupling, $( J, g, \phi ) = ( 0, 1, 0.1)$, are shown in Fig.~\ref{f4.1}.
Only $ \xi^\kappa_q $ is displayed since $S^\kappa$ in this case varies by only a fraction of a percent across the entire Brillouin zone so that $\eta_q^\kappa$ would be indistinguishable from $\xi_q^\kappa$.
Since nonlocal coupling is weak relative to local coupling in this particular example, it follows as well that $\beta_q^\kappa ~ \approx ~ \xi_q^\kappa$.
The weak asymmetry of $\beta_q^\kappa$ ($\xi_q^\kappa$) with respect to $q$ at intermediate values of $\kappa$ implies, through Eq.~(\ref{eq:phi}), that the Debye-Waller phase $\Phi^\kappa$ varies slightly with $\kappa$; however, this weak non-constancy has little direct influence on the properties of the solution, both because $\Phi^\kappa$ itself is small and because the nonlocal coupling constant is small.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 4.8in
\epsffile{f4.1.eps}
\end{center}
\caption{
The phonon displacement factor $\xi^\kappa_q$ calculated from the small polaron Ansatz for $( J, g, \phi ) = ( 0, 1, 0.1 )$.
The variation in $\xi_q^\kappa$ is small in this case because the nonlocal coupling strength $\phi$ is weak relative to the local coupling strength $g$.
}
\label{f4.1}
\end{figure}
Antisymmetric nonlocal exciton-phonon coupling introduces characteristic distortions in quantities such as $\xi^\kappa_q$ and the polaron energy band.
These distortions are ``bimodal'' with respect to the total crystal momentum $ \kappa $ in the sense that the modulation is characteristically most pronounced at intermediate values of $| \kappa |$.
With respect to the phonon wave vector, these distortions characteristically are most pronounced at $ |q|=\pi$, indicating that the strongest exciton-phonon coupling involves lattice dimerization.
On the other hand, $\xi^\kappa_q$ is weakly structured at low phonon wave vectors, and is in fact equal to unity along $q=0$, consistent with the lattice ``sum rule''
\begin{equation}
\label{eq:sumrule}
\beta_{q = 0} = \sum_n \beta_n = g ~ .
\end{equation}
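This sum rule can be read off directly from Eq.~(\ref{eq:sce}): at $q = 0$ every factor of $\sin \frac q 2 $ vanishes, so that
\begin{equation}
\beta^\kappa_{q=0} = \frac {g + 0}{1 - 0 - 0} = g
\end{equation}
for all $\kappa$, independent of $J$, $\phi$, and the converged values of $S^\kappa$ and $\Phi^\kappa$.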
The polaron band energy $ E^\kappa $ can be calculated from
\begin{eqnarray}
E^\kappa & = &
N^{-1} \sum_q | \beta^\kappa_q |^2 - 2J
S^\kappa \cos(\kappa - \Phi^\kappa ) - 2N^{-1} g \sum_q Re( \beta^\kappa_q ) \nonumber \\
& & - 4 N^{-1} \phi S^\kappa \sum_q Im( \beta^\kappa_q )
\cos ( \kappa - \Phi^\kappa - \frac q 2 ) \sin ( \frac q 2 ) .
\end{eqnarray}
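As a check on this expression, in the trivial limit $J = \phi = 0$ the self-consistent amplitudes are $\beta^\kappa_q = g$, and the band energy collapses to $E^\kappa = g^2 - 2g^2 = -g^2$, the familiar small-polaron binding energy. A minimal numerical sketch (the grid size is an illustrative choice):

```python
import numpy as np

# Sketch of the band-energy expression above, for phonon amplitudes
# beta^kappa_q given on an N-point Brillouin-zone grid.
def band_energy(beta, J, g, phi, kappa):
    N = len(beta)
    q = 2.0 * np.pi * np.arange(N) / N
    S = np.exp(np.sum(np.abs(beta)**2 * (np.cos(q) - 1.0)) / N)
    Phi = np.sum(np.abs(beta)**2 * np.sin(q)) / N
    return (np.sum(np.abs(beta)**2) / N                       # phonon energy
            - 2.0 * J * S * np.cos(kappa - Phi)               # transfer term
            - 2.0 * g * np.sum(beta.real) / N                 # local coupling
            - 4.0 * phi * S * np.sum(beta.imag * np.cos(kappa - Phi - q/2)
                                     * np.sin(q/2)) / N)      # nonlocal coupling
```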
Comparison between the small polaron band and that calculated numerically from the Munn-Silbey approach \cite{Zhao94a} is shown in Fig.~\ref{f4.2} for the case of $( J, g, \phi ) = ( 0, 1, 1 )$.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.2.eps}
\end{center}
\caption{
Comparison of the polaron bands between the Munn-Silbey approach \protect\cite{Zhao94a}\protect (points) and the small polaron Ansatz (solid line) for $( J, g, \phi ) = ( 0, 1, 1 )$ at $T=0$.
The small polaron Ansatz gives a higher ground state energy and a larger bimodal variation in the polaron band.
}
\label{f4.2}
\end{figure}
The Munn-Silbey method yields a flatter and lower energy band than does the small-polaron method.
Since the perturbative result falls below the variational result, however, no conclusion can be drawn as to which approach gives the better polaron band; for example, without further information we cannot exclude the possibility that the actual ground state band lies between these two results and closer to the variational band.
Higher-quality variational treatments are needed to make such determinations.
Through its Fourier transform, the nonuniformity of $ \xi^\kappa_q $ with respect to $q$ (though small on an absolute scale in this case) implies a spreading of phonon distortion in site space even in the absence of a direct transfer integral $J$.
With increasing nonlocal coupling, the amplitude of this distortion increases, as does its spread in real space.
Since nonlocal exciton-phonon coupling provides the only transport mechanism when $J=0$, it is reasonable to expect in analogy with the usual small polaron picture (e.g., as in Ref.~\cite{Merrifield64}) that a small polaron approach such as this one may break down when nonlocal coupling becomes comparable in strength to local coupling.
At the least, the approach sketched here needs to be generalized to admit wave function symmetries more consistent with the dimeric structures favored by nonlocal coupling.
This could be done in a minimal way by building up trial Bloch states from dimeric form factors; we elect, however, to skip this step and proceed to a still more flexible trial state introduced by Toyozawa that is well known in the context of local-coupling polaron theory.
\section{Toyozawa's Ansatz}
The major shortcoming of the small polaron Ansatz is the fact that it is built up from form factors in which the electronic component is completely localized on a single lattice site.
In the presence of significant exciton tunneling, or, as we have observed, in the presence of significant nonlocal exciton-phonon coupling, it would appear reasonable to expect self-consistent local exciton-phonon correlations to reflect at least some of the characteristics of spreading wave packets.
To accommodate this expectation, the completely-localized exciton component of the small polaron form factor can be generalized to a superposition of exciton amplitudes to be determined self-consistently with the amplitudes of the lattice oscillators.
This is the basis of many theories characterizable as ``large polaron'' theories.
Important among these for our purposes are delocalized-state theories based on phased sums of such generalized form factors (historically identified with Toyozawa), and localized-state theories based upon using such generalized form factors as trial states in their own right without invoking the phased-sum construction that assures delocalization (historically identified with Pekar
\cite{Pekar46a,Pekar46b,Landau48,Pekar54}
and Davydov
\cite{Davydov69,Davydov73,Davydov76,Davydov79,Fischer84,Venzl85,Skrinjar88,Zhang88,Brown89,Ivic89,Wang89,Brown90,Wang90}).
Toyozawa's Ansatz may be viewed as a delocalization of the Davydov Ansatz $| \psi \rangle$
\begin{equation}
| \psi \rangle ~=~ | \alpha \rangle \otimes | \beta \rangle ,
\end{equation}
where $\otimes$ denotes the direct product, and $| \alpha \rangle$ and $| \beta \rangle$ are the exciton and phonon parts of the form factor respectively, built from exciton and phonon vacua $|0\rangle_{ex}$ and $|0\rangle_{ph}$ as
\begin{equation}
| \alpha \rangle = \sum_n \alpha_n a_n^{\dagger} |0\rangle_{ex} ,
\end{equation}
\begin{equation}
| \beta \rangle = \exp [ { \sum_n ( \beta_n b_n^{\dagger} -
\beta_n^\ast b_n )} ] |0\rangle_{ph} .
\end{equation}
After delocalization one obtains Toyozawa's Ansatz state, given by
\begin{equation}
| \Psi ( \kappa ) \rangle = | \kappa \rangle \langle \kappa | \kappa \rangle^{-1/2}
\end{equation}
\begin{equation}
| \kappa \rangle = \sum_{n n_1} e^{i \kappa n} \alpha_{n_1 - n}^\kappa
a_{n_1}^{\dagger}
\exp [ - \sum_{ n_2} (\beta_{n_2 -n}^\kappa
b_{n_2}^{\dagger} -\beta_{n_2 -n}^{ \kappa \ast} b_{n_2 } )] |0\rangle ,
\end{equation}
or, equivalently, in the momentum representation,
\begin{equation}
| \kappa \rangle = N^{-1/2} \sum_{nk} e^{i( \kappa -k)n} \alpha_k^\kappa a_k^{\dagger}
\exp [ - N^{-1/2} \sum_q ( \beta_q^\kappa e^{-iqn}
b_q^{\dagger} -\beta_q^{ \kappa \ast} e^{iqn} b_q )] |0 \rangle ,
\end{equation}
where $|0\rangle = | 0 \rangle_{ex} \otimes | 0 \rangle_{ph}$.
The auxiliary vector $| \kappa \rangle$ is not normalized, but simplifies presentation of some results.
It may be worth noting that many localized-state approaches, such as the Davydov theory, take the exciton component $| \alpha \rangle$ to be normalized such that $\sum_n | \alpha_n |^2 = 1$, since in those approaches the norm of the exciton amplitudes has a physical interpretation as the exciton number.
In the delocalized-state approach, however, the norm of the exciton amplitudes does not have this physical meaning, and by contributing only a multiplicative factor to $\langle \kappa | \kappa \rangle$, makes no contribution to the normalized trial state.
Consequently, the norm of the amplitudes $\alpha$ is arbitrary, and in fact drifts in the course of numerical variation; since this drift has no physical consequences, we have found it convenient throughout this paper to normalize the exciton amplitudes such that $\alpha_{n=0} = 1$.
For variational calculations we require the total energy associated with each trial state, for which we evaluate the expectation values of the three principal terms in the Holstein Hamiltonian:
\begin{equation}
\langle \kappa | \hat{H}^{ex} | \kappa \rangle = -2 J N^{-1} \sum_k S_{ \kappa - k
}^\kappa \cos k | \alpha_k^\kappa |^2 ,
\end{equation}
\begin{equation}
\langle \kappa | \hat{H}^{ph} | \kappa \rangle = N^{-2} \sum_{kq}
S_{ \kappa - k - q }^\kappa | \alpha_k^\kappa |^2
|\beta_q^\kappa |^2 ,
\end{equation}
\begin{equation}
\langle \kappa | \hat{H}^{ex-ph} | \kappa \rangle = - N^{-2} \sum_{kq}
f^{-q}_{-k-q}
\alpha_k^{\kappa \ast} \alpha_{k+q}^\kappa
( S_{ \kappa - k - q }^\kappa\beta_q^{\kappa \ast} + S_{ \kappa - k }^\kappa\beta_{-q}^\kappa ) .
\end{equation}
The normalization integral appearing with these expectation values is
\begin{equation}
\langle \kappa | \kappa \rangle = N^{-1} \sum_k
S_{ \kappa - k }^\kappa | \alpha_k^\kappa |^2 .
\end{equation}
Here, $S_k^\kappa$ is the Fourier transform of the generalized Debye-Waller factors $S_n^\kappa$
\begin{equation}
\label{eq:skk}
S_k^\kappa= \sum_n e^{-ikn} S_n^\kappa,
\end{equation}
\begin{equation}
\label{eq:skn}
S_n^\kappa = \exp [ N^{-1} \sum_q |\beta_q^\kappa |^2 (e^{iqn} -1) ],
\end{equation}
which quantifies the overlap between the lattice components of polaron wavefunctions displaced from each other by $n$ lattice sites.
These are to be distinguished from the Franck-Condon factor
\begin{equation}
_{ph}\langle 0 | \beta \rangle = e^{- \frac 1 2 \sum_n |\beta_n |^2}
,
\end{equation}
which quantifies the overlap between the lattice component of the polaron wavefunction with the undistorted phonon ground state.
The Debye-Waller factors $S_{n = \pm 1}^\kappa$ ($= S^{\kappa} e^{\pm i \Phi^\kappa}$ of Section II) appear routinely in the transport terms of effective (small) polaron Hamiltonians, where they strongly influence the renormalization of the effective mass.
In the general case we address here, the spread of the exciton amplitudes $\alpha_n$ causes Debye-Waller factors between non-nearest-neighbor sites to contribute to polaron structure in complex ways.
Owing to the large number of Debye-Waller factors that may be involved in the general case, their resolution into magnitudes and phases as in section II ceases to be advantageous.
One can show from (\ref{eq:skk}) and (\ref{eq:skn}) that $S^{\kappa}_{k}$ is strictly real, so that computation is simplified by abandoning the magnitude and phase characterization and working in momentum space with the real quantities $S_k^\kappa$.
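The reality of $S_k^\kappa$ follows from the fact that $|\beta_q^\kappa|^2$ is real, which forces $S_{-n}^\kappa = ( S_n^\kappa )^\ast$ on a periodic lattice, so that the terms of the Fourier sum pair into real combinations. A quick numerical check of this property (the lattice size and the random amplitudes used in testing are illustrative):

```python
import numpy as np

# Numerical check that S_k^kappa is real for arbitrary complex phonon
# amplitudes beta_q on an N-site periodic lattice.
def debye_waller_k(beta):
    N = len(beta)
    q = 2.0 * np.pi * np.arange(N) / N
    n = np.arange(N)
    # S_n = exp[N^{-1} sum_q |beta_q|^2 (e^{iqn} - 1)]
    S_n = np.exp((np.exp(1j * np.outer(n, q)) - 1.0) @ (np.abs(beta)**2) / N)
    # S_k = sum_n e^{-ikn} S_n, with k on the same grid as q
    return np.exp(-1j * np.outer(q, n)) @ S_n
```

For any amplitudes the imaginary part of $S_k^\kappa$ vanishes to machine precision, and the routine also recovers $N^{-1} \sum_k S_k^\kappa = S_{n=0}^\kappa = 1$.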
Minimization of the total energy $E^\kappa$ with respect to the phonon amplitudes $\beta_q^{ \kappa \ast}$ leads to
\begin{equation}
\label{eq:beta}
\beta_q^\kappa ~=~ \frac { L_q^\kappa}
{ M_q^\kappa ~+~ H_q^{\kappa} ~-~ M_q^\kappa E^\kappa }
\end{equation}
where
\begin{equation}
L_q^\kappa = N^{-1} \sum_k f^{-q}_{-k-q} S_{ \kappa - k - q }^\kappa
\alpha_k^{ \kappa \ast} \alpha_{k+q}^\kappa,
\end{equation}
\begin{equation}
M_q^\kappa = N^{-1} \sum_k S_{ \kappa - k - q }^\kappa
| \alpha_k^\kappa |^2,
\end{equation}
and $H_q^\kappa$ is the sum of three terms:
\begin{equation}
H_q^{\kappa} = H_q^{ex} + H_q^{ph} + H_q^{ex-ph},
\end{equation}
\begin{equation}
H_q^{ex} = -2J N^{-1} \sum_k
S_{ \kappa - k - q }^\kappa
\cos k | \alpha_k^\kappa |^2,
\end{equation}
\begin{equation}
H_q^{ph} = N^{-2} \sum_{kq^\prime}
S_{ \kappa - k - q - q^\prime }^\kappa
| \alpha_k^\kappa |^2 |\beta_{q^\prime} ^\kappa |^2,
\end{equation}
\begin{equation}
H_q^{ex-ph} = - N^{-2} \sum_{kq^\prime}
f^{-q^\prime}_{-k-q^\prime}
\alpha_k^{ \kappa \ast} \alpha_{k+q^\prime}^\kappa
( S_{ \kappa - k - q - q^\prime }^\kappa\beta_{q^\prime}^{ \kappa \ast}
+ S_{ \kappa - k - q }^\kappa\beta_{-q^\prime}^\kappa ).
\end{equation}
Similarly, after minimizing $E^\kappa$ with respect to $ \alpha_k^{ \kappa \ast }$ we arrive at
\begin{equation}
\alpha_k^\kappa = \frac { L_k^\kappa }
{ M_k^\kappa - ( E^\kappa + 2J \cos k ) S_{ \kappa - k }^\kappa }
\end{equation}
where
\begin{equation}
L_k^\kappa = N^{-1} \sum_q f^{-q}_{-k-q} \alpha_{k+q}^\kappa
( S_{ \kappa - k - q }^\kappa\beta_q^{ \kappa \ast} + S_{\kappa-k}^\kappa\beta_{-q}^\kappa )
\end{equation}
and
\begin{equation}
M_k^\kappa = N^{-1} \sum_q S_{ \kappa - k - q }^\kappa
|\beta_q^\kappa |^2 .
\end{equation}
We may recover the lattice sum rule (\ref{eq:sumrule}) from (\ref{eq:beta}) by noting that
\begin{equation}
H_{q=0}^{\kappa} = E^\kappa \langle \kappa | \kappa \rangle ~,~~~~~
g^{-1} L_{q=0}^\kappa = M^\kappa_{q=0} = \langle \kappa | \kappa \rangle .
\end{equation}
\section{\lowercase{$g$} and $\phi$}
The effect of nonlocal exciton-phonon coupling is most evident when $J$ is small or absent.
Moreover, since local and nonlocal coupling influence polaron structure in distinct ways, it is well to examine the influence of nonlocal coupling both in isolation and in concert with local coupling.
The first panel of Fig.~\ref{f4.3} shows the exciton and phonon amplitudes $\beta^{\kappa}_n$ and $\alpha^{\kappa}_n$ for the pure nonlocal coupling case $( J , g , \phi ) = ( 0 , 0 , 1 )$ at $\kappa = 0$ where both of these quantities are real.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.3a.eps}
\vspace{.1in}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.3b.eps}
\end{center}
\caption{
Variational parameters for the $\kappa~=~0$ state calculated from Toyozawa's Ansatz.
First panel: the exciton amplitude $\alpha_n^{\kappa=0}$ (dashed line) and the phonon displacement $\beta_n^{\kappa=0}$ (solid line) for $( J, g, \phi ) = ( 0, 0, 1 )$;
second panel: the exciton amplitude $\alpha_n^{\kappa=0}$ (dashed line) and the phonon displacement $\beta_n^{\kappa=0}$ (solid line) for $( J, g, \phi ) = ( 0, 0.3, 1 )$.
}
\label{f4.3}
\end{figure}
In the absence of local coupling, the exciton amplitudes $\alpha^{\kappa=0}_n$ are bond-centered and even under inversion through the occupied bond, while $\beta^{\kappa=0}_n$ is bond-centered and odd under inversion, reflecting the antisymmetry of the nonlocal exciton-phonon coupling.
Approximately 84\% of the total exciton density is shared equally by the two sites defining the central bond.
This is characteristic of the strong nonlocal coupling regime, and justifies some of the assumptions of our previous work.
The first panel of Fig.~\ref{f4.3} should be compared to the exciton-phonon correlation function $\bar{C}_l$ produced as a diagnostic of polaron structure in our previous paper \cite{Zhao94a};
$\bar{C}_l$ was constructed around a dimeric exciton function $\tilde{\psi}_n$ restricted to two sites (i.e., 100\% of the total exciton density shared equally between $n=0,1$) that we presumed to be representative of the exciton amplitudes $\alpha^{\kappa=0}_n$ in actual polaron states.
The similarity between $\beta^{\kappa=0}_n$ of Fig.~\ref{f4.3}a and $\bar{C}_l$ is striking.
(The previous calculation included a small local coupling strength that does not significantly affect the comparison.)
The second panel of Fig.~\ref{f4.3} shows $\beta^{\kappa=0}_n$ and $\alpha^{\kappa=0}_n$ for $( J, g, \phi ) = ( 0, 0.3, 1.0 )$, reflecting the addition of a moderate amount of local coupling to the previous scenario.
It is still the case that most of the total exciton density resides on the two sites defining the central bond (this does not change materially until $J$ becomes significant); however, the addition of a non-negligible amount of local coupling destroys the bond-centered symmetry by introducing a site-centered structure localized on only one of the two central sites.
Fig.~\ref{f4.8} displays the energy band comparison between the Munn-Silbey approach \cite{Zhao94a} and Toyozawa's Ansatz for $( 0, 0.03, 1)$ (essentially Figure~\ref{f4.3}a) and $(0,1,1)$, both of which were studied in Ref.~\cite{Zhao94a}.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.8.eps}
\end{center}
\caption{
Comparison between the polaron bands calculated from the Munn-Silbey approach \protect\cite{Zhao94a}\protect at $T=0$ (diamonds: $( 0, 1, 1 )$; crosses: $( 0, 0.03, 1)$) and those from Toyozawa's Ansatz (solid line: $( 0, 1, 1 )$; dashed line: $( 0, 0.03, 1)$).
}
\label{f4.8}
\end{figure}
Though still imperfect, the energy band comparison is much more favorable here than we found in section II.
The consonance between our present results and our previous calculations by the Munn-Silbey method appears to be due to the ability of both approaches to embrace dimeric exciton-phonon correlations essential to nonlocal-coupling polarons.
Our present results based on Toyozawa's Ansatz give the best estimate of the polaron ground state energy among the three we consider, and can be viewed as corroborating our prior numerical calculations by the Munn-Silbey method, at least when nonlocal coupling dominates.
Due to the variational nature of Toyozawa's Ansatz, it is now safe to say that the Munn-Silbey approach overestimates the ground state energy at $T=0$.
\section{$J$ and $\phi$}
In this section we put local coupling aside (i.e., we set $g=0$), and focus on the interplay between exciton tunneling ($J$) and nonlocal coupling ($\phi$).
We have seen that although nonlocal coupling is an exciton-phonon interaction that supports polaron formation, it is also a transport mechanism and competes with local coupling both by promoting transport and by driving the exciton-lattice correlations toward dimeric structures rather than the site-localized structures preferred by local coupling interactions.
On the other hand, although nonlocal coupling is a transport mechanism, it also competes with direct, phonon-free exciton transfers since the lattice distortions inherent in phonon-assisted transfers inhibit direct transfers.
It is convenient to use the device of a phase diagram to organize our discussion.
Fig.~\ref{f4.5} depicts the $J - \phi$ plane at $g=0$, in which we have indicated two narrow, tongue-shaped regions that divide the plane into two more-or-less distinct domains: a {\it strong} nonlocal coupling region occupying the upper portion of the diagram and a {\it weak} nonlocal coupling region occupying the lower.
The phenomena indicated by these tongues can be characterized as a kind of self-trapping.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.5.eps}
\end{center}
\caption{
Phase diagram on the $J - \phi$ plane for Toyozawa's Ansatz.
The two tongues correspond to the discontinuity near the Brillouin zone center (upper tongue) and that near the Brillouin zone boundary (lower tongue), respectively.
}
\label{f4.5}
\end{figure}
In the more familiar self-trapping phenomenon associated with local exciton-phonon coupling, a polaron is understood to be characterized by exciton-phonon correlations that are more-or-less spatially compact.
For ``weak'' local coupling, correlations may extend over many lattice sites, corresponding to a ``large'' polaron.
With increasing local coupling strength, the width of the correlated region diminishes until the excitation is essentially confined to a single lattice site, corresponding to a ``small'' polaron.
Along the way, other polaron properties change as well, most notably the polaron effective mass growing from the free mass at weak coupling to values arbitrarily large in the strong coupling region.
Whether this change as a function of system parameters is necessarily smooth or may occur discontinuously is a subject of some contention \cite{Gerlach87i,Gerlach87ii,Lowen88,Gerlach91}; nonetheless, it is common for approximate treatments to find the change to be discontinuous in some regimes, and hence the imprecise notion of a discrete ``self-trapping transition'' is widespread.
More important than the question of continuity is the fact that in the vicinity of this transition, exciton-phonon correlations and related physical quantities such as the effective mass undergo strong changes with relatively small changes in control parameters.
The qualitative characteristics of this familiar self-trapping transition apply as well to our treatment of nonlocal coupling.
We find a discrete transition to occur in the vicinity of the Brillouin zone {\it center}, bearing some resemblance to the more familiar local coupling phenomenon; however, we also find a discrete transition occurring in the vicinity of the zone {\it edge}.
Moreover, whereas the usual conception of self-trapping focusses on dramatic changes that occur at the Brillouin zone center (e.g., the jump in the effective mass) leading to the notion of a {\it sharp} transition, we find the discrete transitions in both the inner and outer zones to be {\it broad} in a sense to be described presently.
We must now attempt to be precise in specifying the meaning of ``discrete transition'' in the present context, for clarity focussing on the inner-zone transition.
At an arbitrarily chosen point $(J,g,\phi)$ in parameter space, there exist one or more minima in the variational energy $E^\kappa$ for each $\kappa$, the lowest of which identifies the global energy minimum in each total momentum sector; the collection of energies for all $\kappa$ identifies the polaron energy band $E^\kappa$ at $(J,g,\phi)$, and the collection of {\it states} associated with these global energy minima constitute the polaron Bloch states.
The distinct classes of relative minima that may coexist over certain intervals of $\kappa$ are associated with distinct classes of variational states corresponding to different polaron structures.
In the present problem, typically, either:
1) one class of minima exists for all $\kappa$, or
2) one class of minima exists over an interval $| \kappa | \in ( 0, \kappa_1 )$, and another exists over an interval $| \kappa | \in ( \kappa_2 , \pi )$, with the two classes of relative minima coexisting for $| \kappa | \in ( \kappa_2 , \kappa_1 )$.
In the latter case, there exists a particular total crystal momentum $\kappa^* \in ( \kappa_2 , \kappa_1 )$ at which the polaron energy band changes from being defined by one class of global energy minima in the inner zone ($| \kappa | < \kappa^*$) to being defined by the other class of global energy minima in the outer zone ($| \kappa | > \kappa^*$); this $\kappa^*$, like the energy band itself, is a function of the system parameters $( J,g,\phi )$.
We describe the low-$\kappa$ states as ``large-polaron-like'' and the high-$\kappa$ states as ``small-polaron-like'' because through sequences of infinitesimal steps in $\kappa$ and/or exciton-phonon coupling parameters it is possible to smoothly deform any large-polaron-like state into a traditional large polaron state at $\kappa = 0$, and any small-polaron-like state into a traditional small polaron state at $\kappa = 0$.
Significant for the explication of the discrete transition phenomenon is the fact that it is {\it not} generally possible to smoothly deform large-polaron-like states into small-polaron-like states (or vice versa) through such sequences.
In particular, at fixed $(J,g,\phi )$ smooth changes in $\kappa$ are followed by smooth changes in all polaron properties until $\kappa$ reaches $\kappa^*$.
At $\kappa^*$, discontinuities appear in at least some polaron properties, consequent to the switching of the global energy minimum from one class of state in one part of the variational state space to another class of state finitely separated from the first.
The appearance of a $\kappa$-dependent discrete transition at $\kappa^*$ is unambiguous in our results, and by our description is a discrete transition between small- and large-polaron-like states.
What may be less clear to this point is what relationship, if any, exists between this transition phenomenon and the more traditional notion of a discrete self-trapping transition.
The ``order parameter'' commonly used as an indicator of the traditional self-trapping transition is the polaron effective mass, a quantity defined at $\kappa = 0$; as such, its characteristic jump indicates for us the point in parameter space at which the $\kappa=0$ polaron Bloch state switches from being large-polaron-like to small-polaron-like.
It is possible to connect this traditional perspective of a $\kappa=0$ transition with our present $\kappa$-dependent transition through the dependence of $\kappa^*$ upon the system parameters $(J,g,\phi)$.
The general result is that whenever a $\kappa$-dependent transition exists between small- and large-polaron-like states at a finite $\kappa^*$, there exist sequences of infinitesimal changes in the parameters $(J,g,\phi)$ that cause $\kappa^*$ to vanish.
This vanishing of $\kappa^*$ has the consequence that the large-polaron-like region ceases to exist, rendering the entire polaron band small-polaron-like, {\it including} the $\kappa = 0$ state.
This shows the usual self-trapping line to mark a {\it boundary} of the finite region of parameter space over which the $\kappa$-dependent transition exists.
This smooth connection also allows us to characterize the $\kappa$-dependent transition as a {\it continuation} to finite $\kappa$ of the traditional self-trapping transition at $\kappa~=~0$.
Where such finite-$\kappa$ transitions occur, some surgery must be performed to arrive at the polaron band that at every $\kappa$ identifies the global energy minimum.
This surgery consists of discarding the higher-lying of the coexisting variational energy minima in the interval $( \kappa_2 , \kappa_1 )$ and the states associated with them, retaining only those yielding the lowest energy at every $\kappa$.
The result is a polaron band that is large-polaron-like for $| \kappa | < \kappa^*$ and small-polaron-like for $| \kappa | > \kappa^*$.
Outside of the transition regions, polaron bands are determined without the necessity of surgery; however, the qualitative characterization of the $\kappa$ dependence of the polaron band as being more large-polaron-like at long wavelengths and more small-polaron-like at short wavelengths continues to apply.
The outer-zone transition is qualitatively similar in most respects, except that instead of going to zero in some limit, the $\kappa^*$ associated with the transition can be pushed to the Brillouin zone edge, and the polaron states on each side of the transition differ from those of the inner-zone transition in some characteristic ways addressed later in this section.
The tongue-shaped regions indicated on the phase diagram in Figure \ref{f4.5} identify parameters associated with such discrete transitions; the upper, wider tongue is associated with the inner-zone transition, and the lower, more narrow tongue with the outer-zone transition.
Where the tongues overlap, discrete transitions are found to occur in the inner and outer zones simultaneously, such that the same energy band embraces distinct polaron states in the inner, intermediate, and outer Brillouin zone.
The way in which these discrete transitions depend on the parameters $J$ and $\phi$ can be sketched qualitatively as follows:
For fixed $J$ sufficiently large, a finite $\kappa^*$ first appears as $\phi$ is increased from zero to the weak-coupling boundary of the zone-center transition region.
As $\phi$ is further increased, this finite $\kappa^*$ decreases until at the strong-coupling boundary of the zone-center transition $\kappa^*$ vanishes, identifying the {\it strong-coupling} boundary with the usual notion of a discrete self-trapping transition occurring at the zone center.
Transits across the outer-zone transition region have a similar structure.
We can study the nature of the ``large'' and ``small'' polaron states associated with nonlocal coupling by examining the structure of typical states at selected points in the system parameter space.
We turn first to the inner-zone transition and specifically the point $( J, g, \phi ) = ( 6, 0, 4 )$.
This point is near the strong-coupling edge of the transition region, where two distinct classes of stable states exist in a small interval around $\kappa = 0$.
The structure of the two stable $\kappa = 0$ solutions is illustrated in Fig.~\ref{f4.6}.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.6a.eps}
\vspace{.1in}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.6b.eps}
\end{center}
\caption{
Two types of convergent solutions for $Re[ \beta^{\kappa=0}_n] $ (first panel) and $ Re[ \alpha^{\kappa=0}_n ]$ (second panel).
$( J, g, \phi ) = ( 6, 0, 4 )$.
The solid line is obtained when this point is approached from the strong coupling regime, and the dashed line is obtained when this point is approached from the weak coupling regime.
}
\label{f4.6}
\end{figure}
The solid lines are iterative results obtained when this point is approached from the upper portion of the phase diagram.
The dashed lines are results obtained upon approach from the lower portion of the phase diagram.
Consequently, the solid lines show the polaron structure typical of the upper portion of the $J - \phi$ plane where nonlocal exciton-phonon coupling is relatively strong, and the dashed lines show the polaron structure typical of the lower portion of the phase diagram where exciton-phonon coupling is relatively weak.
As seen in Fig.~\ref{f4.6}, the strong-coupling polaron state (solid curves) is characterized by a larger phonon displacement $\beta_n^{\kappa=0}$ and a slightly more localized exciton amplitude $\alpha_n^{\kappa=0}$ than those of the weak-coupling polaron state (dashed curves), consistent with the notions of small and large polarons, respectively.
The ``sizes'' of the polarons represented by the two solutions in Fig.~\ref{f4.6} do not differ markedly, although upon close inspection the exciton and phonon amplitudes of the ``small polaron'' state (solid curves) do decay somewhat more rapidly than those of the large polaron state (dashed curves).
A feature more characteristic of the difference between the large and small polaron states in the presence of nonlocal coupling is the fact that amplitudes in the small polaron state are characterized by more or stronger alternations of sign than are found in the large polaron state.
Although these alternations are ultimately due to the antisymmetric nature of the nonlocal coupling used in this paper, their greater prominence on the strong-coupling side of the zone-center transition is consistent with the more compact exciton-phonon correlations typical of small polarons.
Next we turn to the outer-zone transition, and specifically the point $( J, g, \phi ) = ( 6, 0, 3.2 )$.
This point is near the strong-coupling edge of the transition region, where two distinct classes of stable states exist in the neighborhood of $| \kappa | = \pi$.
Structures typical of these two kinds of solution are illustrated in Fig.~\ref{f4.10}.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.10a.eps}
\vspace{.1in}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.10b.eps}
\end{center}
\caption{
Two types of convergent solutions for $Re[ \beta ^{\kappa=\pi}_n]$ (first panel) and $ Re[ \alpha^{\kappa=\pi}_n ]$ (second panel).
$( J, g, \phi ) = ( 6, 0, 3.2 )$.
The solid line is obtained when this point is approached from the strong coupling regime, and the dashed line is obtained when this point is approached from the weak coupling regime.
In the second panel, the irregular tails of the dashed line are caused by numerical difficulties in the weak coupling regime.
}
\label{f4.10}
\end{figure}
As in Fig.~\ref{f4.6}, the solid curves in Fig.~\ref{f4.10} are obtained when this point $(J,g, \phi ) = (6,0,3.2)$ is approached from the strong-coupling regime, while the dashed curves are obtained by approaching from the weak-coupling regime.
A significant feature of the zone-edge solutions is that the phonon amplitudes can be resolved into two components: one is a strongly localized, asymmetric lattice distortion such as we have seen in all the other nonlocal coupling scenarios, and the other is a nearly uniform plane wave.
The zone-edge transition can be viewed as a binding or unbinding of the free-phonon component suggested by this plane wave.
In the large polaron regime {\it below} the transition (higher $\kappa$, weaker coupling), the plane-wave component dominates the phonon amplitudes, while after crossing over into the small polaron regime (lower $\kappa$, stronger coupling), this plane-wave component is absent.
In the particular solution illustrated, the plane-wave component has the wave vector $q~=~\pi$, consistent with the notion of a zone-edge solution in which the phonon component carries all the crystal momentum.
More generally, however, the wave vector of the plane wave component is not given strictly by $q = \pi$, but by $q = \kappa$, still reflecting a state in which the phonon component carries all the crystal momentum.
The latter property suggests that outer-zone states on the weak coupling side of this transition are not simply large polaron states as we might otherwise characterize them, but mixtures of large polarons essentially at rest and ``unbound'' free phonons.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.7a.eps}
\vspace{.1in}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.7b.eps}
\end{center}
\caption{
$\beta_{n}^{\kappa=0}$ (first panel) and $ \alpha_{n}^{\kappa=0}$ (second panel) for $( J, g, \phi ) = ( 2, 0, 1.6 )$ (solid line), and $( J, g, \phi) = ( 1, 0, 1.6)$ (dashed line).
}
\label{f4.7}
\end{figure}
The changes in polaron structure that occur as $J$ is varied from small through large values at (relatively) fixed $\phi$ can be seen in Figures \ref{f4.3}a and \ref{f4.7}; corresponding changes in the polaron energy band are illustrated in Figure \ref{f4.11}.
The particular value of $\phi$ used in this illustration was chosen to be significant on an absolute scale, but to fall below both the zone-center and zone-edge transitions; this corresponds to a large polaron region in the sense that applies to nonlocal coupling.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.11.eps}
\end{center}
\caption{
Polaron bands for $g=0, \phi=1.6$, and $J=0$ (top, dashed), $J=1$ (middle, solid), and $J=2$ (bottom, dotted), calculated from Toyozawa's Ansatz.
}
\label{f4.11}
\end{figure}
At $(J,g,\phi ) = (0,0,1.6)$, the structure of the polaron state is essentially identical to that shown in Fig.~\ref{f4.3}a, with slightly larger amplitudes and weaker decays due to the somewhat larger value of $\phi$.
The energy band at this point (see Figure \ref{f4.11}) has a strongly bimodal variation characterized by a negative effective mass at the zone center.
These qualitative features of the polaron state and energy band are characteristic of the small-$J$ regime.
At $(1,0,1.6)$, on the other hand, the exciton density is almost completely localized on the two central sites and essentially vanishes elsewhere.
This coincides with a critical flattening of the polaron energy band at the zone center, resulting in a loss of the bimodal band structure and a divergence of the effective mass through negative values.
With further increases in $J$, e.g. to $(2,0,1.6)$, the exciton amplitudes again spread in space with a structure dominated by $J$, and the energy band is unimodal with a finite, positive effective mass.
\section{$J$, \lowercase{$g$}, and $\phi$}
Allowing the simultaneous action of local and nonlocal coupling in the presence of finite transfer integrals should lead to polaron structures and energy bands that blend the qualities seen in Figures \ref{f4.3} and \ref{f4.8} ($J=0$; $g, \phi$ finite) and Figures \ref{f4.6}, \ref{f4.10}, \ref{f4.7}, and \ref{f4.11} ($g=0$; $J, \phi$ finite).
Examples of such mixed results for general $(J,g,\phi )$ are shown in Figures~\ref{f4.12} and \ref{f4.9}.
Figure~\ref{f4.12} shows how the exciton amplitudes and the phonon displacements vary with $J$ for fixed, moderate values of both the local and nonlocal coupling coefficients.
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.12a.eps}
\vspace{.1in}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.12b.eps}
\end{center}
\caption{
Variational parameters for the $\kappa~=~0$ state calculated from Toyozawa's Ansatz.
First panel: the phonon displacement $\beta_n^{\kappa=0}$ for $g=\phi=1$, $J=0$ (dashed line), $J=1$ (solid line) and $J=3$ (dotted line);
Second panel: the exciton amplitude $\alpha_n^{\kappa=0}$ for $g=\phi=1$, $J=0$ (dashed line), $J=1$ (solid line) and $J=3$ (dotted line).
}
\label{f4.12}
\end{figure}
In the absence of the transfer integral ($J=0$), the phonon displacement is mostly localized on a single site due to the dominance of local exciton-phonon coupling in this case.
The effect of nonlocal coupling shows both in the nontrivial spread of the exciton and phonon amplitudes in space (phonon-assisted transport) and in the alternations in sign that persist, though weakened by competition with local coupling.
Increasing the transfer integral $J$ results in increased spreading that further smoothes both the exciton and phonon amplitudes.
At $J=3$, all amplitudes are positive, and the influence of nonlocal exciton-phonon coupling is evident only in the asymmetry of the exciton distribution and its lattice distortion.
In the absence of the transfer integral, Toyozawa's Ansatz and the Munn-Silbey approach yield very similar polaron energy bands, though the variational approach based on Toyozawa's Ansatz proves quantitatively superior.
It is well to ask, however, {\it how} superior the present method is, and under what circumstances the difference matters in practice.
The first {\it semi}-quantitative conclusion in this regard is that the quantitative discrepancies increase with the transfer integral, confirming one of our expectations that motivated this work; the nature of this trend can be seen in a comparison of the lower curves of Figure~\ref{f4.8} (both methods for $(0,1,1)$) with the curves of Figure~\ref{f4.9} (both methods for $(0.5,1,1)$).
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfxsize = 3.2in
\epsffile{f4.9.eps}
\end{center}
\caption{
The polaron bands for $(J, g, \phi ) = ( 0.5, 1, 1 )$ at $T=0$, calculated from Toyozawa's Ansatz (solid line) and the Munn-Silbey approach \protect\cite{Zhao94a}\protect (points).
}
\label{f4.9}
\end{figure}
Whether the discrepancies illustrated in Figure~\ref{f4.9} are significant depends on the purpose to which a band structure calculation is to be put.
The typical ``gap'' between the two results is roughly 0.18 in units of the Einstein frequency, 0.09 in units of the rigid lattice energy bandwidth, or about 0.75 in units of the resulting polaron energy band width.
For Einstein frequencies typical of molecular phonons, a shift of 0.18 is easily resolved by optical spectroscopies, so for such purposes one would certainly want to use the present method.
On the other hand, the absolute position of the energy band is of less importance to other purposes; for example, energy transport in the band regime is more sensitive to the {\it shape} of the energy band than to its position.
The gross shapes of the two energy bands in Figure~\ref{f4.9} are similar, though there are significant differences; for example, the ``energy barrier'' separating the two finite-$\kappa$ minima is more than twice as large when computed by the present method, while the ``effective mass'' (at $\kappa = 0$ or at the finite-$\kappa$ minima) is less than half that computed by the Munn-Silbey method.
\section{Conclusions}
In this paper we have obtained variational estimates of the ground state energy bands for the Holstein Hamiltonian incorporating simultaneous local and nonlocal exciton-phonon coupling.
We have examined the interplay between the transfer integral $J$ and local and nonlocal exciton-phonon coupling, with an emphasis on nonlocal coupling effects.
The most obvious effect of nonlocal exciton-phonon coupling is a bimodal distortion of the polaron energy band, most pronounced in the limit of small exciton transfer integral and small local exciton-phonon coupling.
Moreover, since nonlocal coupling is a transfer mechanism, a non-trivial polaron energy band remains even when both the transfer integral and local coupling vanish.
The polaron structure induced by local coupling is ``site centered'', while that induced by nonlocal coupling is ``bond centered''.
When local and nonlocal coupling act in concert, the microscopic forces associated with each superpose with the result that polaron structure is neither completely site-centered nor completely bond centered.
The nature of these interactions is such, however, that nonlocal coupling must be relatively strong before a bond-centered component becomes noticeable against the typically more prominent site-centered component driven by local coupling.
This latter quality is consistent with findings of other analyses of solid-state excimers, where strong nonlocal coupling was found to be an essential element in producing the dimeric lattice distortions characteristic of excimers as well as other excimeric properties.
Like polarons born of local coupling only, polarons embracing simultaneous local and nonlocal coupling experience self-trapping transitions, manifested in our present calculations as in many other approximate treatments by discontinuities in the dependence of some polaron properties on system parameters.
Unlike the traditional perspective, which focuses on the change in the $\kappa = 0$ polaron state as some ``self-trapping line'' is crossed in the $J-g$ plane, our analysis shows that the traditional notion of self-trapping is a limiting one: the self-trapping line corresponds to a one-dimensional boundary of a three-dimensional volume in $(J,g,\phi)$ space in which a more general, $\kappa$-dependent self-trapping occurs.
This more general self-trapping phenomenon is associated with polaron bands that
are large-polaron-like in the inner portion of the Brillouin zone and small-polaron-like in the outer portion of the Brillouin zone, the traditional self-trapping corresponding to the limit in which the large-polaron-like region vanishes.
This characterization of self-trapping continues to hold in the $J-\phi$ plane, where the local coupling underlying the traditional self-trapping concept no longer exists.
In addition to generalizing the traditional self-trapping concept, we find a second type of transition phenomenon occurring near the Brillouin zone boundary.
Rather than constituting a small-to-large polaron type of transition, this phenomenon appears to reflect the binding or unbinding of a free phonon.
While conveniently described as ``transitions'', each of these phenomena is best understood as reflecting rapid but smooth changes in polaron structure that are ``marked'' by discontinuities only because of the approximate nature of our method.
For example, there is no reason to expect that the ``critical point'' of one of these transitions has any special meaning other than marking the point at which the changes in polaron structure first occur too rapidly to be accurately represented by our particular calculation method, permitting discontinuities to appear.
This quality of self-trapping transitions will receive greater attention elsewhere \cite{Zhao}.
For small exciton transfer integrals ($J \ll 1$), Toyozawa's Ansatz and the Munn-Silbey approach yield very similar polaron energy bands; however, these bands grow increasingly dissimilar with increasing transfer integral.
In all cases studied, we have found the variational approach based on Toyozawa's Ansatz to yield polaron energy bands lower than those of the Munn-Silbey method at all $\kappa$, establishing the variational energy bands as the quantitatively superior results.
Without generalization, the methods used in this paper apply only in the limit of zero temperature.
The Munn-Silbey method, on the other hand, was constructed with the aim of maintaining a sound perturbation theory at finite temperatures.
Considering the quality of the comparison between the results of our present method and those of the Munn-Silbey method at zero temperature where the latter is {\it not} at its best, we would speculate that a direct, numerical implementation of the Munn-Silbey method would provide a sound approach at finite temperatures, provided that exciton transfer integrals are small relative to both local and nonlocal coupling.
The variational approach we have employed here needs to be generalized in order to reach more reliable conclusions about the finite-temperature scenario \cite{Emin73,Gosar75,Yarkony76}.
\section*{Acknowledgments}
One of the authors (Y.Z.) would like to thank Prof. Yu Lu for encouragement.
\section{Introduction}
The cosmological origin and evolution of quasars (QSOs) has remained an
outstanding issue in astrophysics. QSOs first appear at a high redshift
$z>4$ and their evolution appears very slow between $z\sim 3$ and
the critical redshift $z_c\sim 2$. Below $z_c$, bright QSOs seem
to disappear rapidly (for a review, see Hartwick \& Shade 1991). In the optical,
the evolution is often described as pure luminosity evolution
(Mathez 1976,1978, Weedman 1986) with the luminosity decreasing
as $(1+z)^K$ with $K\sim 3-4$ depending on the cosmological
model (Marshall 1985, Weedman 1986, Boyle et al. 1988, Hartwick
\& Shade 1991). In the X-ray, a similar evolution is seen although
the optical evolution appears somewhat steeper (Maccacaro et al. 1991,
Della Ceca et al. 1992, Boyle et al. 1993, 1994, Page et al. 1996).
In the conventional picture (for a review, see e.g. Rees 1990, Frank et al.
1992), QSOs are powered by massive accreting black holes which
grow through accretion of gas. It remains largely
unexplained why QSOs are suddenly quenched at $z<2$ (cf. Turner 1991,
Fukugita \& Turner 1996). A tempting possibility that low-$z$ Seyfert galaxies
may be low luminosity remnants of QSOs is ruled out simply because
the former are more abundant than the latter by at least an order of
magnitude (Huchra \& Burg 1992).
Why do QSOs suddenly turn off? What drives the evolution of the QSO luminosity
function? What are the present remnants of bright QSOs? We address these
issues.
We assume that engines of QSOs are indeed massive accreting black holes.
The accretion flows have often been modeled by a geometrically thin,
optically thick disk (e.g. Frank et al. 1992, Pringle 1981)
and the so-called blue bumps have been interpreted as emission from such disks
(e.g. Sun \& Malkan 1988, Wandel \& Petrosian 1988, Laor \& Netzer 1989, and
references therein). For the optical QSO evolution,
some phenomenological prescriptions have been proposed (Heisler \& Ostriker
1988, Koo \& Kron 1988, Boyle et al. 1988,1993, Pei 1995).
For the X-ray evolution, it has been shown that the pure luminosity evolution
can account for the observed evolution (Boyle et al. 1993,1994,
Page et al. 1996). Although there have been various models
on the possible driving mechanism for the QSO cosmological evolution
(e.g. Caditz et al. 1991, Small \& Blandford 1992, Haehnelt \& Rees 1993),
a definite conclusion has not been reached.
For a typical QSO luminosity $L\sim 10^{46} erg/s$, a black hole of
mass $m\equiv M/10^8 M_{\sun}$ accreting at the Eddington rate with the nominal
radiative efficiency $\eta_n=0.1$,
\begin{equation}
{\dot M}_{Edd}=L_{Edd}/(\eta_n c^2)=4\pi G M /\eta_n \kappa_{es} c
=[1.4\times 10^{26} g/s] m,
\end{equation}
where $\kappa_{es}$ is the electron scattering opacity,
would exponentially grow on a time scale
\begin{equation}
t_{Edd}=\left|{\dot M}_{Edd}/M\right|^{-1}=[4.5\times 10^{7} yr] (\eta_n/0.1).
\end{equation}
This suggests that even when accretion-powered QSO activity is short-lived
($<10^9 yr$), massive black holes residing in the most luminous QSOs
($L>10^{46} erg/s$) could grow to $M>10^{9} M_{\sun}$.
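The two scales above are easy to check numerically. The sketch below (our own illustration, not from the paper) uses standard cgs constants; the opacity $\kappa_{es}\approx 0.4\ cm^2/g$ assumes pure hydrogen:

```python
import math

# cgs constants (assumed standard values)
G        = 6.674e-8     # gravitational constant
c        = 2.998e10     # speed of light
kappa_es = 0.4          # electron-scattering opacity, cm^2/g, pure hydrogen
M_sun    = 1.989e33     # solar mass, g
yr       = 3.156e7      # seconds per year

def mdot_eddington(M, eta_n=0.1):
    """Eddington accretion rate in g/s: 4*pi*G*M / (eta_n * kappa_es * c)."""
    return 4.0 * math.pi * G * M / (eta_n * kappa_es * c)

M = 1e8 * M_sun                 # i.e. m = 1 in the text's notation
mdot = mdot_eddington(M)        # ~1.4e26 g/s, as quoted
t_edd = M / mdot / yr           # e-folding growth time, ~4.5e7 yr
print(f"Mdot_Edd = {mdot:.2e} g/s, t_Edd = {t_edd:.2e} yr")
```

Both quoted numbers are recovered to within rounding of the input constants.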
We suggest that this rapid growth itself may directly affect the QSOs'
evolution. The basic idea is that the massive black holes, accreting at a
scaled accretion rate ${\dot m}\equiv {\dot M}/{\dot M}_{Edd}$ below a
certain critical rate ${\dot m}_c\sim 10^{-2}$ (Narayan \& Yi 1995b),
emit radiation with an efficiency $\eta\propto {\dot m}\ll\eta_n$.
Although the idea of the low accretion efficiency (Rees et al. 1982)
has occasionally been mentioned, the lack of a self-consistent model has been
a major obstacle.
Since ${\dot M}_{Edd}\propto M$ increases due to accretion
(even when ${\dot M}$ remains nearly constant), ${\dot m}\propto {\dot M}/M$
decreases and the QSOs' luminosity decrease could be a direct result of the
rapid mass accretion. We suggest that the transition in accretion flow
from high efficiency ($\eta\sim \eta_n=0.1$) to low efficiency
($\eta\ll \eta_n$) flows could naturally explain the major
properties of the X-ray luminosity evolution of QSO population (Yi 1996).
\section{Cosmological Evolution of QSO Luminosity: Transition in Accretion Flows}
The idea of transition in accretion flow
hinges on the recently proven stability of the advection-dominated flows
(Narayan \& Yi 1994, 1995ab, Abramowicz et al. 1995, Chen et al. 1995).
As the accretion rate falls, the density of the accreting plasma drops to
a point where the Coulomb ion-electron energy transfer
becomes inefficient compared with the rate of viscous heating on ions
(Rees et al. 1982). In the absence of other couplings,
the accretion flow becomes
a two-temperature plasma with a very low radiative cooling efficiency
(Narayan \& Yi 1995b, Rees et al. 1982), and
most of accretion energy is lost into black holes through advection.
Optically thin (magnetized) advection-dominated flows emit synchrotron
(in radio) and Comptonized synchrotron and
bremsstrahlung emission (in X-ray) resulting in characteristically flat
spectra. Detailed spectral calculations and results have been successfully
applied to low luminosity systems such as Galactic black hole systems
(Narayan et al. 1996), the Galactic center (Narayan et al. 1995), and the
extragalactic system NGC 4258 (Lasota et al. 1996).
In the case of the Galactic center, the estimated accretion efficiency
$\eta\sim 10^{-5}\alpha^{-1}$ and in NGC 4258
$\eta\sim 10^{-4}\alpha^{-1}$ where $\alpha\sim 10^{-2}-1$ is the
dimensionless viscosity parameter (e.g. Pringle 1981, Frank et al. 1992).
Fabian and Rees (1995) have also suggested that dim galactic nuclei
in large elliptical galaxies (due to low efficiency advection-dominated flows)
contain remnants of luminous QSOs.
An accretion flow enters the low efficiency regime at a critical accretion rate,
${\dot m}_{c}\equiv {\dot M}_{c}/{\dot M}_{Edd}$,
which is primarily determined by $\alpha (\le 1)$ (Narayan \& Yi 1995b),
\begin{equation}
{\dot m}_{c}\approx {\dot M}_{c}/{\dot M}_{Edd}=0.3\alpha^2
\end{equation}
for $R<10^3R_s$ where $R$ is the radial distance and $R_s=2GM/c^2$.
For $R>10^3 R_s$,
\begin{equation}
{\dot m}_{c}\approx 0.3\alpha^2(R/10^3R_s)^{-1/2}.
\end{equation}
Since most of the accretion energy is dissipated at $R<10^3 R_s$, the
critical rate given in eq. (2-1) is relevant.
When ${\dot m}>{\dot m}_{c}$, the accretion flow
takes the form of high efficiency (i.e. $\eta\sim\eta_n=0.1$) or
\begin{equation}
L=\eta_n {\dot M} c^2.
\end{equation}
For $\dot m<\dot m_{c}$,
the low efficiency is approximately described by the luminosity
\begin{equation}
L\approx 30{\dot m}^xL_{Edd}
\end{equation}
with $x\approx 2-2.2$. We take $\alpha=0.3$ or ${\dot m}_{c}\sim 0.03$
and $x=2.2$ for our discussions (Narayan \& Yi 1995b).
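The two branches of the luminosity can be combined into a single function of ${\dot m}$. The sketch below (ours) uses the paper's choices $\alpha=0.3$ (so ${\dot m}_c\approx 0.03$) and $x=2.2$, returning $L$ in units of $L_{Edd}$; note that, taken at face value, the two branches do not join exactly at ${\dot m}_c$, since eq. (2-4) is only an approximate fit:

```python
def luminosity_over_ledd(mdot, alpha=0.3, x=2.2):
    """L/L_Edd as a function of mdot = Mdot/Mdot_Edd (eqs. 2-3 and 2-4)."""
    mdot_c = 0.3 * alpha**2           # critical rate, ~0.027 for alpha = 0.3
    if mdot > mdot_c:
        return mdot                   # high efficiency: L = eta_n * Mdot * c^2 = mdot * L_Edd
    return 30.0 * mdot**x             # advection-dominated, low-efficiency branch

# the efficiency eta = L / (Mdot c^2) falls as ~mdot^(x-1) below mdot_c
for md in (0.1, 0.03, 0.01, 0.001):
    print(md, luminosity_over_ledd(md))
```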
Unless the accretion rate rapidly increases toward the present epoch,
the growth of black holes always tends to decrease
${\dot m}={\dot M}/{\dot M}_{Edd}\propto {\dot M}/M$,
which naturally drives the transition. The overall evolution takes the
form of the pure luminosity evolution.
The direct comparison with the observed X-ray evolution is complicated by
the uncertainties in X-ray emission from luminous QSOs. The conventional
thin disks (with high efficiency) have the effective disk temperatures
$\le 10^6$ K and hence cannot emit X-rays efficiently (e.g. Frank et al. 1992).
We assume that a significant fraction of the bolometric
luminosity is emitted as X-rays (presumably from accretion disk coronae) and
that $\eta_n=0.1$ represents this uncertainty.
The advection-dominated hot flows differ from the thin disks as the former
emit hard X-rays up to several $100$ keV.
As long as most of the accretion flow is advection-dominated,
the emission spectra are characteristically flat, i.e. luminosity
per decade of frequency is roughly constant (e.g. Narayan et al. 1995,
Lasota et al. 1996). The X-ray luminosity can be approximated as a fixed
fraction of the total luminosity. In our discussions, we simply assume that the
observed X-ray luminosity is proportional to the bolometric luminosity.
We caution that our assumption could introduce serious errors if the
X-ray to bolometric luminosity ratio is a complex function of some yet
unknown parameters such as $L$, ${\dot M}$, or ${\dot m}$.
For instance, if the accretion flow contains a thin disk
in the outer region (eq. (2-2)), the resulting
emission spectra could be significantly affected. In this sense, it is
uncertain how our suggestion on the X-ray luminosity evolution could
be applied to the optical evolution (see below).
We begin with the simplest scenario:
(i) the black holes are initially accreting at their
Eddington rates (cf. Padovani 1989); (ii) the accretion rates decrease
sufficiently slowly from the initial epoch at cosmic time $t=t_i$ (or $z=z_i$).
For a flat universe ($\Omega_0=1$) with no cosmological constant
($\Omega_{\Lambda}=0$), the cosmic time $t$ at $z$ is given by
\begin{equation}
H_o(t-t_i)={2\over 3}\left[(1+z)^{-3/2}-(1+z_i)^{-3/2}\right]
\end{equation}
where $t_i=t(z=z_i)$. When ${\dot M}\approx constant$,
or
\begin{equation}
M\approx M_i+{\dot M}(t-t_i)
\end{equation}
where $M_i=M(z=z_i)$,
\begin{equation}
{\dot m}\propto M^{-1}\propto \left[1+{2{\dot M}\over 3M_i H_o}\left((1+z)^{-3/2}
-(1+z_i)^{-3/2}\right)\right]^{-1}.
\end{equation}
where $H_o=50km/s/Mpc$ is the assumed Hubble constant.
While ${\dot m}>{\dot m}_c$, the observed slow luminosity evolution in X-rays
and optical could be purely due to the gradual decrease of ${\dot M}$
(eq. (2-3)). When ${\dot m}<{\dot m}_c$, the luminosity evolves as (eq. (2-4))
\begin{equation}
L\propto \left[1+{2{\dot M}\over 3M_i H_o}\left((1+z)^{-3/2}
-(1+z_i)^{-3/2}\right)\right]^{-x+1}.
\end{equation}
In the limit that the accreted mass over Hubble time is larger than the
initial mass, i.e. ${\dot M}/H_o\gg M_i$,
\begin{equation}
L\propto (1+z)^{3(x-1)/2}\left[(1+z_i)^{3/2}-(1+z)^{3/2}\right]^{-x+1}
\end{equation}
Similarly, for an empty universe ($\Omega_0=\Omega_{\Lambda}=0$),
\begin{equation}
H_o(t-t_i)=\left[(1+z)^{-1/2}-(1+z_i)^{-1/2}\right]
\end{equation}
and
\begin{equation}
L\propto \left[1+{{\dot M}\over M_i H_o}\left((1+z)^{-1/2}
-(1+z_i)^{-1/2}\right)\right]^{-x+1}.
\end{equation}
or in the limit ${\dot M}/H_o\gg M_i$
\begin{equation}
L\propto (1+z)^{(x-1)/2}\left[(1+z_i)^{1/2}-(1+z)^{1/2}\right]^{-x+1}
\end{equation}
The simple behavior of luminosity as a function of redshift is often
approximated as
\begin{equation}
L(z)\propto (1+z)^K,
\end{equation}
with a constant $K$. In our scenario, $K(z)$ is a function of $z$ and $z_i$ and
depends on the cosmological model. We get for the flat universe,
\begin{equation}
K(z)={d\ln L(z)\over d\ln (1+z)}={3(x-1)\over 2}\left[1+{1\over
((1+z_i)/(1+z))^{3/2}-1}\right]
\end{equation}
and for the empty universe,
\begin{equation}
K(z)={x-1\over 2}\left[1+{1\over ((1+z_i)/(1+z))^{1/2}-1}\right]
\end{equation}
where eqs. (2-9),(2-12) have been used. The redshift dependence of $K$ is
shown in Fig. 1.
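The two expressions for $K(z)$ above are easily evaluated. The sketch below (ours) uses $x=2.2$ and $z_i=3.5$, the value adopted later in the text, and reproduces the qualitative behavior of Fig. 1:

```python
def K_flat(z, z_i, x=2.2):
    """Effective power-law index K(z) for the flat (Omega_0 = 1) universe."""
    r = ((1.0 + z_i) / (1.0 + z))**1.5
    return 1.5 * (x - 1.0) * (1.0 + 1.0 / (r - 1.0))

def K_empty(z, z_i, x=2.2):
    """Same for the empty (Omega_0 = Omega_Lambda = 0) universe."""
    r = ((1.0 + z_i) / (1.0 + z))**0.5
    return 0.5 * (x - 1.0) * (1.0 + 1.0 / (r - 1.0))

# K diverges as z -> z_i and decreases monotonically toward z = 0;
# for z ~ 1-1.5 the flat-universe values fall in the observed K ~ 2-3 range
for z in (0.0, 1.0, 1.5):
    print(z, K_flat(z, 3.5), K_empty(z, 3.5))
```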
In the two cosmological models considered, which cover various other possible
models, the evolution at $z<z_i$ is shown to give a wide range
of $K$ which includes $K\sim 2-3$. $K$ varies from $K>3$ to $K<3$ as $z$
decreases away from $z_i$. The derived expressions for $K(z)$ become invalid
for $z\rightarrow z_i$ as the accretion flows become those with a high
efficiency in our scenario. The derived values of $K$ are interestingly
close to those obtained by Boyle et al. (1994) and Page et al. (1996)
using the recent X-ray observations (see below).
Another important aspect of the QSO evolution is the sudden cutoff in the
luminosity evolution near $z=z_c\sim 2$ which we identify as the critical
redshift at which $\dot m=\dot m_c$ first occurs.
For the flat universe with no cosmological constant, we get
\begin{equation}
{\dot m}_c=0.3\alpha^2={\eta_{n} \kappa_{es} c\over 4\pi G}{\dot M\over M_i}
\left[1+{2{\dot M}\over 3H_o M_i}\left[(1+z_c)^{-3/2}-(1+z_i)^{-3/2}\right]
\right]^{-1}.
\end{equation}
If the initial accretion rate is a fraction $\delta(\le 1)$ of the initial
Eddington rate or
\begin{equation}
{\dot M}/M_i=\delta t_{Edd}^{-1}= (4.5\times 10^7 yr)^{-1} (\eta_{n}/0.1)^{-1}
\delta,
\end{equation}
\begin{equation}
1+z_c=\left[\left(t_{Edd}\over t_{age}\right)\left({1\over {\dot m}_c}
-{1\over\delta}\right)+(1+z_i)^{-3/2}\right]^{-2/3}
\end{equation}
where $t_{age}=2t_H/3=2/3H_o$ is the age of the Universe.
Similarly, for the empty universe ($\Omega_0=\Omega_{\Lambda}=0$)
\begin{equation}
1+z_c=\left[\left(t_{Edd}\over t_{age}\right)\left({1\over
{\dot m}_c}-{1\over\delta} \right)+(1+z_i)^{-1/2}\right]^{-2}
\end{equation}
where $t_{age}=t_H=1/H_o$.
Unless $\delta\le {\dot m}_c\sim 0.03$ (which is unlikely
given the high luminosities of QSOs at high redshift $z=z_i$ and a relatively
short time scale allowed for growth of black holes from $z>z_i$),
$z_c$ only weakly depends on $\delta\sim 1$. Assuming $\alpha=0.3$,
$\delta=1$, and $H_o=50km/s/Mpc$, we get $z_c\approx 1.6 (1.8)$ for
$z_i=3 (4)$ in the flat universe and $z_c\approx 1.9 (2.6)$ for $z_i=3 (4)$
in the empty universe.
Increasing $H_o$ to $H_o=65km/s/Mpc$ would change $z_c$ roughly by
$\sim 10-20$\%. For ${\dot m}_c$ well below $\sim 3\times 10^{-2}$ (i.e. $\alpha<0.3$),
however, $z_c$ could be much lower than the observed $z_c\sim 2$
unless the initial accretion rate is already significantly sub-Eddington.
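The quoted values of $z_c$ follow from the expressions above. The sketch below (ours) uses $\alpha=0.3$, $\delta=1$, $H_o=50\,km/s/Mpc$, and $t_{Edd}=4.5\times 10^7\,yr$; small differences from the quoted numbers reflect rounding in these inputs:

```python
def z_crit(z_i, flat=True, alpha=0.3, delta=1.0, H_o=50.0, t_edd=4.5e7):
    """Critical redshift z_c at which mdot first reaches mdot_c.
    H_o in km/s/Mpc, t_edd in years."""
    mpc_km = 3.086e19                    # km per Mpc
    yr_s   = 3.156e7                     # s per yr
    t_hubble = mpc_km / H_o / yr_s       # 1/H_o in yr, ~1.96e10 for H_o = 50
    mdot_c = 0.3 * alpha**2
    if flat:                             # Omega_0 = 1, t_age = 2/(3 H_o)
        t_age = (2.0 / 3.0) * t_hubble
        base = (t_edd / t_age) * (1.0 / mdot_c - 1.0 / delta) + (1.0 + z_i)**-1.5
        return base**(-2.0 / 3.0) - 1.0
    t_age = t_hubble                     # empty universe, t_age = 1/H_o
    base = (t_edd / t_age) * (1.0 / mdot_c - 1.0 / delta) + (1.0 + z_i)**-0.5
    return base**-2.0 - 1.0

print(z_crit(3.0), z_crit(4.0))                          # flat universe
print(z_crit(3.0, flat=False), z_crit(4.0, flat=False))  # empty universe
```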
Our proposed evolutionary model suggests a possible explanation for the
cutoff at $z_c\sim 2$. A definitive answer requires the fine details of the
cosmological model and the initial conditions. Nevertheless, it remains
valid that the transition driven by mass growth could always result in
rapid evolution of QSOs regardless of any specific cosmological scenario
such as the present one.
Now we consider luminosity evolution in general cosmological models relaxing
${\dot M}=constant$. We adopt the mass accretion rate of the form
\begin{equation}
{\dot M}={\dot M}_i\exp\left(-(t-t_i)/t_{evol}\right)
\end{equation}
with the evolutionary time scale as $t_{evol}=\beta t_{age}$
($\beta\le 1$). In order to see the effects of the cosmological model,
we consider four different cosmological models (e.g. Kolb \& Turner 1990);
(i) an open universe $\Omega_0=0.4$, $\Omega_{\Lambda}=0$ for which
$t_{age}=0.779(1/H_o)$ and
\begin{equation}
t(z)={1\over 2H_o}{\Omega_0\over (1-\Omega_0)^{3/2}}
\left[{2(1-\Omega_0)^{1/2}(1+\Omega_0z)^{1/2}\over\Omega_0(1+z)}-\ln\left[
{\Omega_0(1+z)+2(1-\Omega_0)+2(1-\Omega_0)^{1/2}(1+\Omega_0z)^{1/2}\over
\Omega_0(1+z)}\right]\right],
\end{equation}
(ii) an empty universe $\Omega_0=\Omega_{\Lambda}=0$ (eq. (2-10)),
(iii) a flat universe $\Omega_0=1$, $\Omega_{\Lambda}=0$ (eq. (2-5)),
(iv) a flat universe $\Omega_0=0.4$, $\Omega_{\Lambda}=0.6$ for which
$t_{age}=0.888(1/H_o)$ and
\begin{equation}
t(z)={2\over 3H_o}{1\over (1-\Omega_0)^{1/2}}\ln\left[\left(1-\Omega_0\over
\Omega_0\right)^{1/2}\left(1\over 1+z\right)^{3/2}+\left(\left(1-\Omega_0\over
\Omega_0\right)\left(1\over 1+z\right)^3+1\right)^{1/2}\right].
\end{equation}
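The quoted ages of the open and flat-$\Lambda$ models can be checked by evaluating the two $t(z)$ formulas above at $z=0$. The sketch below (function names ours) works in units of $1/H_o$:

```python
import math

def t_open(z, omega0=0.4):
    """t(z) in units of 1/H_o for an open universe (Omega_Lambda = 0)."""
    s = math.sqrt(1.0 - omega0)
    q = math.sqrt(1.0 + omega0 * z)
    pre = 0.5 * omega0 / (1.0 - omega0)**1.5
    term = 2.0 * s * q / (omega0 * (1.0 + z))
    logarg = (omega0 * (1.0 + z) + 2.0 * (1.0 - omega0) + 2.0 * s * q) \
             / (omega0 * (1.0 + z))
    return pre * (term - math.log(logarg))

def t_flat_lambda(z, omega0=0.4):
    """t(z) in units of 1/H_o for a flat universe with Omega_Lambda = 1 - Omega_0."""
    r = (1.0 - omega0) / omega0
    a3 = (1.0 / (1.0 + z))**3
    arg = math.sqrt(r * a3) + math.sqrt(r * a3 + 1.0)
    return (2.0 / 3.0) / math.sqrt(1.0 - omega0) * math.log(arg)

print(t_open(0.0))         # ~0.779, the quoted age of the open model
print(t_flat_lambda(0.0))  # ~0.888, the quoted age of the flat-Lambda model
```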
In each cosmological model, we need to specify initial conditions for QSO
evolution. Our simple initial conditions could be partly motivated by the
optical observation that the QSO luminosity function is already established
by $z\sim 3-4$ and changes little from $z\sim 3$ to $z\sim 2$
(Hartwick \& Shade 1990). The present model simply posits that the massive
black holes arrive at the peak of their activities ($z\sim 3-4$), in a nearly
synchronous manner, accreting from surrounding gas at a rate close to
${\dot M}_{Edd}$.
In Fig. 2, we show evolution of QSOs in the four cosmological
models marked by numbers. We have taken $z_i=3.5$ in this example.
In our model, the luminosity evolution is obviously scale-free for our
chosen initial conditions.
That is, the luminosities of QSOs are simply determined by the black hole
masses with an identical redshift dependence. The most significant uncertainty
is the possible wide scatter in the initial conditions (i.e. $z_i$, $\delta$,
$t_{evol}$, etc.).
At $z>z_c$, the evolution is slow (eqs. (2-3),(2-20)) and the time scale of
evolution is set by $t_{evol}$ as $\eta_n=0.1$ is maintained.
After the transition at $z=z_c$, the difference in the power-law among the
shown models is small.
It is encouraging that the observed constant $K\sim 3$ between
$z=0$ and $z=z_c\sim 2$ could be accounted for with small effects from the
cosmological model. For the same set of parameters and the initial conditions,
$z_c$ significantly increases from the flat universe to the empty universe.
Such an effect is, however, indistinguishable from that due to uncertainties
in ${\dot m}_c$ or $\alpha$. $t_{evol}\ll t_{age}$ or $\beta\ll
0.5$ could result in much faster evolution than that observed.
A faster decrease of ${\dot M}$ near $z=0$ would inevitably lead to the
further acceleration of evolution which might be directly observable as
deviation from the observed power-law evolution with a constant $K$.
X-ray observations have established a well defined evolutionary
trend although details differ in several analyses. Della Ceca
et al. (1992) and Maccacaro et al. (1991) obtained $K=2.56$ and Boyle
et al. (1993) derived $K=2.75-2.80$ for the empty universe and
$K=2.76-2.73$ with $z_c\sim 2$ for the flat universe with no cosmological
constant. More recently, Boyle et al. (1994) analyzed a larger sample
and re-derived $K=3.34$ ($z_c=1.79$) for the empty universe and
$K=3.25$ ($z_c=1.60$) or $K=3.17$ ($z_c=1.48$) for the flat universe.
Page et al. (1996) note that no evolution for $z>1.8$ is entirely compatible
with the observed evolution with $K=2.35-2.94$ \& $z_c=1.41-1.82$
depending on the cosmological model. Despite lacking details of evolution,
our proposed model is largely consistent with the observations. The X-ray
evolution is similar to the optical evolution except that the latter is
slightly faster (cf. Boyle et al. 1988, Boyle et al. 1994).
In the optical, the derived $K$ depends rather sensitively on the assumed
spectral slope which could be a major source of uncertainty (see below).
Caditz et al. (1991) have suggested that optical evolution could be
explained by the accretion disk model where the monochromatic luminosity
at a given frequency evolves rapidly as the fixed frequency crosses the Wien
cutoff frequency due to the increase of the black hole mass through
accretion. Interestingly, such an explanation also results in a luminosity
evolution of the form $L\propto {\dot m}^2$ (cf. eq. (2-4)) in the optical.
In other words, the observed $(1+z)^K$ ($K\sim 3-4$) luminosity evolution
could be the result of $L\propto {\dot m}^2$ arising from the two entirely
different origins. In our model, the long-wavelength emission is uncertain due
to the uncertain nature of the outer region of the flow where both
a thin disk and an advection-dominated flow are allowed in principle.
For instance, according to eqs. (2-1),(2-2), there could be an outer
thin disk while the inner region is fully advection-dominated when
\begin{equation}
0.3\alpha^2(R/10^3R_s)^{-1/2}<{\dot m}(R) <0.3\alpha^2
\end{equation}
at a distance $R>10^3R_s$.
If the outer disk continues to exist at $z<z_c$, we can make a rough estimate
on the bolometric luminosity evolution at long wavelengths by directly
applying eq. (2-2).
Unless the outer disk extends to a region well inside $\sim 10^3R_s$, however,
the optical luminosity is expected to be very low and the following evolutionary
trend is not directly applicable to the brightest QSOs.
As ${\dot m}\propto {\dot M}/M$ decreases, the inner region of the disk first
becomes advection-dominated while the outer thin disk, emitting in the optical,
is maintained.
As ${\dot m}$ drops further, a larger fraction of the inner disk becomes
advection-dominated and the thin disk is pushed outward. The
inner radius of the outer thin disk is
\begin{equation}
0.3\alpha^2\left(R_{in}\over 10^3 R_s\right)^{-1/2}={\dot m}\propto {1\over M}
\end{equation}
or
\begin{equation}
R_{in}\approx [2.7\times 10^{13} cm] m {\dot m}^{-2}
\end{equation}
for ${\dot M}\sim constant$ and $\alpha=0.3$ as before.
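The inversion leading to the expression for $R_{in}$ above can be checked numerically. The following sketch is purely illustrative; it keeps radii in units of $10^3 R_s$, so no choice of the absolute $R_s$ normalization (which sets the $2.7\times10^{13}$ cm prefactor) is needed.

```python
# Invert 0.3*alpha^2*(R_in/10^3 R_s)^(-1/2) = mdot for R_in and verify
# that the solution satisfies the original transition condition.
alpha = 0.3
mdot = 0.03

def transition_lhs(r):
    # LHS of the transition condition, with r = R_in/(10^3 R_s)
    return 0.3 * alpha**2 * r ** -0.5

# closed-form inversion: r = (0.3*alpha^2/mdot)^2
r_in = (0.3 * alpha**2 / mdot) ** 2
print(r_in)  # ~0.81, i.e. R_in sits just inside 10^3 R_s for these values
```

Since $r_{in}\propto {\dot m}^{-2}$ in units of $R_s$, and $R_s\propto M$, the physical inner radius grows as $M^3$ when ${\dot M}$ is held fixed, which is the scaling used below.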
The bolometric disk luminosity from the outer disk would decrease as
\begin{equation}
L_{disk}\sim {GM{\dot M}\over R_{in}}\propto {M\over R_{in}}\propto
{M\over M^2R_s}\propto M^{-2}
\end{equation}
as $M$ increases.
The disk emission would have the evolving peak temperature
\begin{equation}
T_{disk}\sim \left(M{\dot M}\over R_{in}^3\right)^{1/4}\propto M^{-2}\propto
L_{disk}
\end{equation}
or
\begin{equation}
T_{disk}\approx [2.6\times 10^3K]\left(m\over 0.1\right)^{-1/4}\left({\dot m}
\over 0.03\right)^{7/4}.
\end{equation}
The peak wavelength is
\begin{equation}
\lambda_{p}\approx [1.9\times 10^4{\AA}]\left(m\over 0.1\right)^{1/4}
\left({\dot m}\over 0.03\right)^{-7/4}\approx [2\times 10^5{\AA}]
\left(M\over 10^{8}\right)^{1/4}\left(M/M_i\over 3\times 10^2\right)^{7/4}.
\end{equation}
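The quoted peak wavelength can be reproduced with a short script. This is an illustrative check only; it assumes the peak is that of $B_\nu$, i.e. $h\nu_p = 2.82\,kT$, which recovers the $1.9\times 10^4\,${\AA} value to within rounding.

```python
# Reproduce T_disk -> lambda_p for the fiducial values m = 0.1, mdot = 0.03.
h = 6.626e-27   # Planck constant, erg s
c = 2.998e10    # speed of light, cm/s
k = 1.381e-16   # Boltzmann constant, erg/K

m, mdot = 0.1, 0.03
T_disk = 2.6e3 * (m / 0.1) ** -0.25 * (mdot / 0.03) ** 1.75   # K, as quoted above

# peak of B_nu: h*nu_p = 2.82 k T, so lambda_p = h*c/(2.82 k T)
lam_p_cm = h * c / (2.82 * k * T_disk)
lam_p_ang = lam_p_cm * 1e8
print(round(lam_p_ang))  # ~1.9e4 Angstrom
```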
The emission from the outer disk is mostly in the infrared regime and the optical
emission from the disk will occur in the Wien tail of the emission at
$\sim T_{disk}$. The monochromatic disk luminosity at a fixed optical wavelength
$\lambda_{opt}<\lambda_p$ would rapidly decline as $\lambda_{p}\propto
M^2$ increases and $\lambda_{opt}\ll \lambda_{p}$. In this case,
the observed optical-UV
emission is dominated by that from the inner advection-dominated flow and
the optical-UV evolution is similar to the X-ray evolution.
At a longer infrared wavelength $\lambda_{IR}>\lambda_p$, the monochromatic
luminosity
\begin{equation}
L_{IR}\propto R_{in}^2\,T_{disk}/\lambda_{IR}^2\propto M^4
\end{equation}
where we have used the Rayleigh-Jeans law. In other words, the infrared
luminosity at $\lambda_{IR}\gg \lambda_p$ would increase as $\propto M^4$
until $\lambda_{IR}\sim \lambda_p$ is reached. When $\lambda_p>\lambda_{IR}$
occurs subsequently, the infrared luminosity also rapidly declines.
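The chain of exponents in this section can be bookkept mechanically. The sketch below assumes ${\dot M}$ held fixed (so ${\dot m}\propto M^{-1}$ and $R_s\propto M$) and includes the emitting-area factor $R_{in}^2$ in the monochromatic Rayleigh-Jeans luminosity.

```python
# Track powers of M through the scalings of this section (Mdot held fixed).
# Each entry is the exponent of M in the corresponding quantity.
p = {"mdot": -1, "Rs": 1}
p["Rin"] = 1 - 2 * p["mdot"]               # R_in ∝ m mdot^-2        -> M^3
p["Ldisk"] = 1 - p["Rin"]                  # L_disk ∝ M/R_in         -> M^-2
p["Tdisk"] = (1 + 0 - 3 * p["Rin"]) / 4.0  # T ∝ (M Mdot/R_in^3)^1/4 -> M^-2
p["LIR"] = 2 * p["Rin"] + p["Tdisk"]       # L_IR ∝ R_in^2 T/lam^2   -> M^4
print(p)
```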
One of the major uncertainties in this qualitative prediction is the effects
of dust and irradiation from the central X-ray emission region which could
seriously affect the long wavelength emission from the outer region.
\section{Implications}
If the QSO evolution is indeed due to a single population,
the entire QSO population shifts to a mass range $\ge 10^9M_{\odot}$ by $z=0$,
which suggests that massive elliptical galaxies are the most likely hosts for QSO
remnants (Fabian \& Canizares 1988, Fabian \& Rees 1995).
Such massive black holes have not been clearly identified, with the exception
of M87 (Kormendy and Richstone 1996). When the radiative efficiency is
a sensitive function of the mass accretion rate, the mass estimates based
on $\eta_n\sim 0.1$ could be seriously misleading (Narayan \& Yi 1995b,
Fabian \& Rees 1995). In our model, the mass of a black hole at $z=0$,
based on its luminosity, could be underestimated by more than two
orders of magnitude. If such massive black holes reside in galactic nuclei
at the present epoch, they may be distinguishable from lower mass black holes
accreting at $\dot m>\dot m_{c}$ (e.g. Seyfert galaxies), primarily
due to their hard X-rays and synchrotron radio emission.
The proposed model could be roughly compatible with the observed
X-ray luminosity evolution if (i) the transition in accretion flow occurs at
$L=10^{-2}L_{Edd}$ or ${\dot m}={\dot m}_c=0.3\alpha^2\sim 10^{-2}$ or
$\alpha\sim 0.3$, (ii) $\eta_n=0.1$ before transition and
$L\propto {\dot m}^2$ after transition, and (iii) QSO remnants have
black holes with $M>10^9M_{\odot}$. These requirements are based on our
particular set of initial conditions. For a more realistic model, some
scatter in $z_i$ and/or $\delta$ is inevitable, for which we lack
detailed information. Despite these serious uncertainties,
the simple model demonstrates a natural evolutionary trend which
essentially needs only the second point (independent of our specific proposal)
both in the optical and in X-rays.
If QSOs at $z\sim 2-3$ are powered by black holes accreting at near Eddington
rates, our explanation indicates $\dot m_{c}\sim$ a few $\times 10^{-2}$ or
$\alpha\sim O(0.1)$ where the latter value is interestingly close to those
often quoted in Galactic accretion systems (Yi 1996, Narayan et al. 1996).
The critical accretion rate in our model is related to several theoretical
problems including the necessity for the two temperature plasma
(Narayan \& Yi 1995a, Rees et al. 1982, Phinney 1981), which merit further
investigation.
Alternatively, QSOs could be driven by electromagnetic power
extracted from rapidly spinning black holes (Blandford \& Znajek 1978).
Although the spin-up of massive black holes from their formation is not
clearly understood (cf. Thorne 1974), the spin-down of black holes does
provide a time scale comparable to the QSOs' cosmological evolution time
scale (Park \& Vishniac 1988) when ${\dot M}\sim {\dot M}_{Edd}$.
Such an evolutionary possibility still lacks an explanation
as to why the accretion luminosity, which is comparable to the electromagnetic
power (Park \& Vishniac 1994), also decreases on a similar time scale.
\acknowledgments
The author thanks Masataka Fukugita, Ramesh Narayan, Martin Rees, Ed Turner,
and Ethan Vishniac for useful discussions and comments.
The author acknowledges support from SUAM Foundation.
\section{Introduction}
As confirmed by the discovery of the top quark, the Standard Model (SM)
continues to do an excellent job at describing essentially all existing
data{\cite {tev,blondel}}. In addition to unravelling the source of symmetry
breaking, one of the most crucial remaining set of tests of the
structure of the SM will occur at future colliders when precision measurements
of the various triple gauge boson vertices (TGVs) become
available{\cite {rev}}. Such analyses are in their infancy today at both the
Tevatron and LEP. If new physics arises at or near the TeV scale,
then on rather general grounds one expects the deviations of the
TGVs from their canonical SM values, {\it i.e.}, the anomalous
couplings, to be {\it at most} ${\cal O}(10^{-3}-10^{-2})$ with the smaller
end of this range of values being the most likely. To get to
this level of precision and beyond, for all of the TGVs, a number of
different yet complementary reactions
need to be studied using as wide a variety of observables as possible.
In the present analysis we concentrate on the
CP-conserving $\gamma WW$ and $\gamma ZZ$ anomalous couplings that can be
probed in the reactions $\gamma e \rightarrow W\nu ,Ze$ at the NLC using polarized
electrons and polarized backscattered laser photons{\cite {old}}. In the
$\gamma WW$ case, the anomalous
couplings modify the magnitude and structure of the already existing SM tree
level vertex. No corresponding tree level $\gamma ZZ$ vertex exists in
the SM, although it will appear at the one-loop level. One immediate
advantage of the $\gamma e\rightarrow W\nu$ process over, {\it e.g.},
$e^+e^-\rightarrow W^+W^-$ is that the $\gamma WW$ vertex can be trivially isolated
from the corresponding $ZWW$ vertex, thus allowing us to probe
this particular vertex in a model-independent fashion. In addition, the
$\gamma e\rightarrow W\nu$ process probes the TGVs for on-shell photons whereas
$e^+e^-\rightarrow W^+W^-$ probes the couplings at $q^2 \geq 4M_W^2$.
To set the notation for
what follows, we recall that the $CP-$conserving $\gamma WW$ and
$\gamma ZZ$ anomalous couplings are traditionally
denoted by $\Delta \kappa$, $\lambda$ and $h_{3,4}^0${\cite {rev}},
respectively. We will assume that the $\gamma WW$ and $\gamma ZZ$
anomalous couplings are unrelated; the full details of our analysis can
be found in Ref.{\cite {old}}.
\section{Analysis}
The use of both polarized electron and photon beams available at the NLC
allows one to construct a polarization asymmetry, $A_{pol}$. As we will see,
this asymmetry provides a new handle on possibly anomalous TGVs of both the
$W$ and $Z$. In general the $\gamma e \rightarrow W\nu ,Ze$
(differential or total) cross sections can be written schematically
as
\begin{equation}
\sigma=(1+A_0P)\sigma_{un}+\xi(P+A_0)\sigma_{pol} \,,
\end{equation}
where $P$ is the electron's polarization (which we take to be $>0$ for
left-handed beam polarization),
$-1\leq \xi \leq 1$ is the Stokes parameter for the circularly polarized
photon, and $A_0$ describes the electron's coupling to the relevant gauge
boson [$A_0=2va/(v^2+a^2)=1$ for $W$'s and $\simeq 0.150$ for $Z$'s, with $v,a$
being the vector and axial-vector couplings of the electron].
$\sigma_{pol}(\sigma_{un})$
represents the polarization (in)dependent contribution to the cross section,
both of which are functions of only a single dimensionless variable at the
tree level
after angular integration, {\it i.e.}, $x=y^2=s_{\gamma e}/M_{W,Z}^2$,
where $\sqrt {s_{\gamma e}}$ is the $\gamma -e$ center of mass energy.
Taking the ratio of the $\xi$-dependent to $\xi$-independent terms in the
expression for $\sigma$ above gives us the asymmetry $A_{pol}$.
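A toy numerical version of this construction is given below. Only the algebraic structure is taken from the formula above; the value chosen for $\sigma_{pol}/\sigma_{un}$ is invented for the example.

```python
# Toy evaluation of A_pol = xi*(P+A0)*sigma_pol / ((1+A0*P)*sigma_un),
# the ratio of the xi-dependent to the xi-independent pieces of sigma.
def a_pol(sigma_un, sigma_pol, P, xi, A0):
    return xi * (P + A0) * sigma_pol / ((1.0 + A0 * P) * sigma_un)

# W case: A0 = 1, so the electron-polarization factors cancel and
# A_pol reduces to xi * sigma_pol/sigma_un for any P.
print(a_pol(1.0, 0.25, P=0.9, xi=1.0, A0=1.0))  # 0.25
```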
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=sjbfig2a.ps,height=9.1cm,width=9.1cm,angle=90}
\hspace*{-5mm}
\psfig{figure=sjbfig2b.ps,height=9.1cm,width=9.1cm,angle=90}}
\vspace*{-1cm}
\caption{Separate $\ifmmode \Delta\kappa\else $\Delta\kappa$\fi$ and $\lambda$ dependence of the
value of $y_0$, the zero position for the process $\gamma e \rightarrow W\nu$.}
\end{figure}
\vspace*{0.4mm}
One reason to believe {\it a priori} that $A_{pol}$, or $\sigma_{pol}$ itself,
might be sensitive to modifications in the TGVs due to the presence of the
anomalous couplings is the Drell-Hearn-Gerasimov (DHG) sum rule{\cite {dhg}}.
In its $\gamma e \rightarrow W \nu, Ze$ manifestation, the DHG sum rule implies that
\begin{equation}
\int_{1}^{\infty} {\sigma_{pol}(x)\over {x}} dx = 0 \,,
\end{equation}
for the tree level SM cross section when the couplings of all the
particles involved in the process are `canonical', {\it i.e.}, gauge invariant.
That this integral is zero results from ($i$) the fact that
$\sigma_{pol}$ is well
behaved at large $x$ and ($ii$) a delicate cancellation
between the two
regions where the integrand takes on opposite signs. This observation is
directly correlated with the existence of a single, unique
value of $x$ (or $y$), {\it i.e.}, $x_0$ ($y_0$), where $\sigma_{pol}$ (and,
hence, $A_{pol}$) vanishes. For
this reason $A_{pol}$ is sometimes referred to as $A_{DHG}$.
For the $W(Z)$ case this asymmetry `zero' occurs at approximately
$\sqrt {s_{\gamma e}}\simeq 254(150)$ GeV, both
of which correspond to energies which are easily accessible at the NLC. In the
$Z$ boson case the SM position of the zero can be obtained analytically as a
function of the cut on the angle of the outgoing electron. In the
corresponding $W$ case,
the exact position of the zero can only be determined numerically.
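The structure described here, a single sign change in $\sigma_{pol}$ whose two lobes cancel exactly in the weighted integral, can be mimicked with a toy profile. The example below is purely illustrative: $\sigma_{pol}(x)=(3-2x)/x^3$ is not the physical cross section, but it has one zero and a vanishing DHG-type integral.

```python
def sigma_pol(x):
    # toy profile: positive for x < 1.5, negative beyond, ~x^-2 at large x
    return (3.0 - 2.0 * x) / x**3

def F(x):
    # antiderivative of sigma_pol(x)/x, so F(inf) - F(1) is the DHG integral
    return -x**-3 + x**-2

# the weighted integral over [1, X] tends to 0 as the cutoff X grows
for X in (10.0, 1.0e3, 1.0e6):
    print(X, F(X) - F(1.0))

# locate the single zero by bisection (sigma_pol changes sign on [1, 10])
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sigma_pol(mid) > 0.0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)
print(x0)  # ~1.5
```

Replacing the toy profile with one that tends to a nonzero constant at large $x$ makes $\int \sigma_{pol}/x\,dx$ logarithmically divergent, mirroring the effect of the anomalous couplings discussed below.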
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=y0pos.ps,height=9.1cm,width=11cm,angle=-90}}
\vspace*{-1cm}
\caption{Position of the SM polarization asymmetry zero in
$\gamma e \rightarrow Ze$ as a function of $h_{3,4}^0$ for $P=90\%$ with a $10^\circ$
angular cut. The dotted(dashed,
dash-dotted, solid) curve corresponds to the case $h_4^0=0$($h_3^0=0$,
$h_3^0=h_4^0$, $h_3^0=-h_4^0$).}
\end{figure}
\vspace*{0.4mm}
As discussed in detail in Ref.~{\cite {old}}, the inclusion of anomalous
couplings not only moves the position of the zero but also forces the
integral to become non-vanishing and, in most cases,
{\it logarithmically divergent}. In fact,
the integral is only finite when $\Delta \kappa+\lambda=0$, the same condition
necessary for the existence of the radiation amplitude zero{\cite {raz}}. The
reason for the divergence stems from the fact that the most divergent terms
in $\sigma_{pol}$ proportional to the anomalous couplings become constants in
the large $x$ limit; see Ref.~{\cite {old}} for complete expressions. It is
interesting that the anomalous couplings do not induce additional zeros or
extinguish the zero completely.
Unfortunately, since we cannot go to infinite energies we cannot test the DHG
sum rule directly, but we {\it are} left with the position of the zero, or more
generally, the asymmetry itself as a probe of TGVs. In the
$W$ case the zero position, $y_0$, is found to be far more sensitive to
modifications in the TGVs than is the zero position in the $Z$ case. The
zero position as a
function of $\Delta \kappa$ and $\lambda$ for the $\gamma e\rightarrow W\nu$ process
is shown in Fig.1 whereas the corresponding $Z$ case is shown in Fig.2. In
either situation, the position of the zero
{\it alone} does not offer sufficient sensitivity to the existence of anomalous
couplings for us to obtain useful constraints (see Ref.~{\cite {old}}).
\noindent
\vspace*{0.1mm}
\hspace*{-0.5cm}
\begin{figure}[htbp]
\centerline{\psfig{figure=sjbfig5.ps,height=9.1cm,width=11cm,angle=90}}
\vspace*{-1.0cm}
\caption{95 $\%$ CL bounds on the $W$ anomalous couplings from the
polarization asymmetry. The
solid(dashed, dash-dotted) curves are for a 500 GeV NLC
assuming complete $y$ coverage using 22(22, 44) bins and an integrated
luminosity per bin of 2.5(5, 1.25)$fb^{-1}$, respectively. The corresponding
bin widths are $\Delta y=$0.2(0.2, 0.1). The dotted curve
corresponds to a 1 TeV NLC using 47 $\Delta y=0.2$ bins with 2.5 $fb^{-1}$/bin.
`s' labels the SM prediction.}
\end{figure}
Our analysis begins by examining the energy, {\it i.e.}, $y$ dependence of $A_{pol}$
for the two processes of interest; we consider the $W$ case first. For a
500(1000) GeV collider, we see that only the range $1\leq y\leq 5.4(10.4)$ is
kinematically accessible since the laser photon energy
maximum is $\simeq 0.84E_e$. Since we are interested in bounds on the
anomalous couplings, we will assume that the SM is valid and generate a set
of binned $A_{pol}$ data samples via Monte Carlo taking
only the statistical errors into account. We further assume that
the electrons are
90$\%$ left-handed polarized as right-handed electrons do not interact
through the $W$ charged current couplings. Our bin width will be assumed to be
$\Delta y=$0.1 or 0.2.
We then fit the resulting distribution to
the $\Delta \kappa$- and $\lambda$-dependent functional form of $A_{pol}(y)$
and subsequently
extract the 95$\%$ CL allowed ranges for the anomalous couplings. The results
of this procedure are shown in Fig. 3, where we see that reasonable
constraints are obtained although only a single observable has been used in
the fit.
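The grid-scan logic behind such a fit can be sketched as follows. This is entirely illustrative: the model function, the bin errors, and the parameter grid are invented stand-ins, not the actual $A_{pol}(y;\Delta\kappa,\lambda)$ of Ref.~[4]; only the $\Delta\chi^2 < 5.99$ criterion (95\% CL for two parameters) is standard.

```python
# Toy chi^2 grid scan for two anomalous couplings (k, l).
def model(y, k, l):
    # stand-in for A_pol(y); NOT the real functional form
    return 0.3 + k * y + l * y * y

ys = [1.1 + 0.2 * i for i in range(22)]   # 22 bins of width 0.2 on 1 <= y <= 5.4
err = 0.01                                # assumed per-bin statistical error
data = [model(y, 0.0, 0.0) for y in ys]   # pseudo-data at the SM point (noiseless)

def chi2(k, l):
    return sum(((d - model(y, k, l)) / err) ** 2 for y, d in zip(ys, data))

grid = [(i * 1e-3, j * 1e-3) for i in range(-20, 21) for j in range(-20, 21)]
chi2_min = min(chi2(k, l) for k, l in grid)
allowed = [(k, l) for k, l in grid if chi2(k, l) - chi2_min < 5.99]  # 95% CL
print(len(allowed), chi2_min)
```

A real study would smear each bin by its statistical error before minimizing; the noiseless version above is kept deterministic for clarity.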
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=newres.ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=newres1tev.ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{-1cm}
\caption{Same as the previous figure, but now for a (0.5)1 TeV NLC on the
left(right) and combined with data
on the total cross section and angular distribution in a simultaneous fit. The
dotted(solid) curve uses the polarization
asymmetry and total cross section(all) data. Only
statistical errors are included. The dashed lines are the corresponding
bounds from the LHC from the $pp\rightarrow W\gamma +X$ process with an integrated
luminosity of 100 $fb^{-1}$.}
\end{figure}
\vspace*{0.4mm}
Clearly, to obtain stronger limits we need to make a combined fit with other
observables, such as the energy dependence of the total cross section, the
$W$ angular distribution, or the net $W$ polarization. As an example we
show in Fig. 4, based on the results of our Monte Carlo study, that the size of
the $95\%$ CL allowed region shrinks drastically in both the 0.5 and
1 TeV cases when the $W$ angular distribution and energy-dependent total
cross section data are included in a simultaneous fit together
with the polarization asymmetry. For this analysis the angular distribution
was placed into 10 bins and energy averaged over the accessible kinematic
region. The total cross section data was binned in exactly the same way as
was the polarization asymmetry. Note that the constraints obtained by this
analysis are superior to those of the LHC{\cite {rev}} with an integrated
luminosity of
100 $fb^{-1}$. (The LHC constraints on $\Delta \kappa$ are rather poor whereas
the $\lambda$ bounds are somewhat better.)
As is well known, both the total cross section and the $W$ angular
distributions are highly sensitive to
$\Delta \kappa$ and thus the allowed region is highly compressed in that
direction. At 500 GeV(1 TeV), we find that $\Delta \kappa$ is bounded
to the range
$-1.2\cdot 10^{-3}\leq \Delta \kappa \leq 1.4(0.4)\cdot 10^{-3}$ while the
allowed $\lambda$ range is still rather large. Further improvements in these
limits will result from data taken at a 1.5 TeV NLC.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=for050.xmasps,height=9.1cm,width=11cm,angle=-90}}
\vspace*{-1cm}
\caption{$95\%$CL allowed region for the anomalous coupling parameters
$h_3^0$ and $h_4^0$ from a combined fit to the energy dependencies of the total
cross section and polarization asymmetry at a 500 GeV NLC assuming $P=90\%$
and an integrated luminosity of $3(6)fb^{-1}$/bin corresponding to the solid
(dashed) curve. 18 bins of width $\Delta y$=0.2 were chosen to cover the $y$
range $1\leq y \leq 4.6$. The corresponding bounds for negative values of
$h_3^0$ are obtainable by remembering the invariance of the polarization
dependent cross section under the reflection $h_{3,4}^0\rightarrow -h_{3,4}^0$.}
\end{figure}
\vspace*{0.4mm}
With this experience in mind, in the $Z$ case we will follow a similar approach
but we
will simultaneously fit both the energy dependence of $A_{pol}$ as well as
that of
the total cross section. (Later, we will also include the $Z$ boson's angular
distribution
in the fit.) In this $Z$ analysis we make a $10^{\circ}$ angular cut on the
outgoing electron and keep a finite form factor scale, $\Lambda=1.5$ TeV, so
that we may more readily compare with other existing analyses. (The angular
cut also gives us a finite cross section in the massless electron limit;
this cut is not required in the case of the $W$ production process.) We again
assume that $P=90\%$ so that data taking for this analysis can take place
simultaneously with that for the $W$. The accessible $y$ ranges are now
$1\leq y \leq 4.6(9.4)$ for a 500(1000) GeV collider. Fig.5 shows our results
for the 500 GeV NLC while Fig.6 shows the corresponding 1 TeV case. For a
given energy and fixed total integrated luminosity we learn from these figures
that it is best to
take as much data as possible at the highest possible values of $y$.
Generally, one finds that increased sensitivity to the existence of anomalous
couplings occurs at the highest possible collision energies.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=for060.xmasps,height=9.1cm,width=11cm,angle=-90}}
\vspace*{-1cm}
\caption{Same as Fig. 5 but for a 1 TeV NLC. The solid(dashed) curve
corresponds to a luminosity of $4(8)fb^{-1}$/bin for 42 bins of width
$\Delta y$=0.2 which covered the range $1\leq y \leq 9.4$. The dotted curve
corresponds to a luminosity of $8fb^{-1}$/bin but only for the last 21 bins.
The dash-dotted curve corresponds to the case of $16.8fb^{-1}$/bin in only the
last 10 bins.}
\end{figure}
\vspace*{0.4mm}
Even these anomalous coupling bounds can be significantly
improved by including the $Z$ boson angular information in
the fit. To be concrete we examine the case of a 1 TeV NLC with
16.8$fb^{-1}$/bin of integrated luminosity taken in the last 10 $\Delta y$
bins(corresponding to the dash-dotted curve in Fig.6). Deconvoluting the
angular integration and performing instead the integration over the 10
$\Delta y$ bins we obtain the energy-averaged angular distribution. Placing
this distribution into 10 (almost) equal-sized $\cos\theta$ bins while still
employing our $10^\circ$ cut, we can use this
additional data in performing our overall simultaneous $\chi^2$ fit. The
result of this procedure is shown in Fig.7 together with
the anticipated result from the LHC using the $Z\gamma$ production mode. Note
that the additional angular distribution data has reduced the size of the
$95\%$ CL allowed region by almost a factor of two.
Clearly both machines are complementary in their abilities to probe small
values of the $\gamma ZZ$ anomalous couplings. As in the $W$ case, if
the NLC and LHC results were
to be combined, an exceptionally small allowed region would remain.
The NLC results themselves may be further improved by considering
measurements of the polarization of the final state $Z$ as well as
by an examination of, {\it e.g.}, the complementary $e^+e^- \rightarrow Z\gamma$ process.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=fullfit.resps,height=9.1cm,width=11cm,angle=-90}}
\vspace*{-1cm}
\caption{The solid curve is the same as dash-dotted curve
in Fig. 6, but now including in the fit
the $Z$ boson angular distribution obtained from the highest
10 bins in energy. The
corresponding result for the 14 TeV LHC with 100$fb^{-1}$ of integrated
luminosity from the process $pp\rightarrow Z\gamma+X$ is shown as the dotted curve.}
\end{figure}
\vspace*{0.4mm}
\section{Discussion and Conclusions}
The collision of polarized electron and photon beams at the NLC offers an
exciting opportunity to probe for anomalous gauge couplings of both the $W$
and the $Z$ through the use
of the polarization asymmetry. In the case of $\gamma e \rightarrow W\nu$ we can
cleanly isolate the $\gamma WW$ vertex in a model independent fashion. When
combined with other observables, extraordinary sensitivities to such
couplings for $W$'s are achievable at the NLC in the $\gamma e$ mode. These are
found to be quite complementary to those obtainable in $e^+e^-$ collisions as
well as at the LHC. In the case of the
$\gamma ZZ$ anomalous couplings, we found constraints comparable to those
which can be obtained at the LHC.
\section*{Acknowledgments}
The author would like to thank S. J. Brodsky, I. Schmidt, J.L. Hewett, and
S. Godfrey for discussions related to this work.
\def\MPL #1 #2 #3 {Mod.~Phys.~Lett.~{\bf#1},\ #2 (#3)}
\def\NPB #1 #2 #3 {Nucl.~Phys.~{\bf#1},\ #2 (#3)}
\def\PLB #1 #2 #3 {Phys.~Lett.~{\bf#1},\ #2 (#3)}
\def\PR #1 #2 #3 {Phys.~Rep.~{\bf#1},\ #2 (#3)}
\def\PRD #1 #2 #3 {Phys.~Rev.~{\bf#1},\ #2 (#3)}
\def\PRL #1 #2 #3 {Phys.~Rev.~Lett.~{\bf#1},\ #2 (#3)}
\def\RMP #1 #2 #3 {Rev.~Mod.~Phys.~{\bf#1},\ #2 (#3)}
\def\ZP #1 #2 #3 {Z.~Phys.~{\bf#1},\ #2 (#3)}
\def\IJMP #1 #2 #3 {Int.~J.~Mod.~Phys.~{\bf#1},\ #2 (#3)}
\section{Introduction}\label{s1}
Let $M$ be a $C^1$ compact manifold of finite dimension $m \geq 1$, equipped with a distance $\mbox{dist}$ induced by a Riemannian metric. The manifold $M$ may or may not have boundary.
Let $C^0(M)$ be the space of continuous maps $f: M \rightarrow M$ with the uniform norm:
$$\|f-g\|_{C^0} := \max_{x \in M} \mbox{ dist}(f(x), g(x)), \qquad \forall \ f,g \in C^0(M).$$
We denote by $\mbox{Hom}(M)$ the space of homeomorphisms $f: M \rightarrow M$ with the uniform norm:
$$\|f-g\|_{Hom} := \max \Big \{\|f-g\|_{C^0}, \ \|f^{-1} - g^{-1}\|_{C^0} \Big \} \ \ \ \forall \ f,g \in \mbox{Hom}(M).$$
Since the metric spaces $C^0(M)$ and $\mbox{Hom}(M)$ are complete, they are Baire spaces.
A subset ${\mathcal S} \subset C^0(M)$ (or ${\mathcal S} \subset \mbox{Hom}(M)$) is called \em a $G_{\delta}$-set \em if it is the countable intersection of open subsets of $C^0(M)$ (resp.\ $ \mbox{Hom}(M)$).
We say that a property $P$ of the maps $f \in C^0(M)$ (or $f \in \mbox{Hom}(M)$) is \em generic, \em or that generic maps satisfy $P$, if the set of maps that satisfy $P$ contains a \em dense $G_{\delta}$-set \em in $C^0(M)$ (resp.\ $ \mbox{Hom}(M)$).
The main result of this article is the following theorem.
\begin{maintheorem}
\label{Theorem1}
The generic map $f \in C^0(M)$ has an ergodic Borel probability measure $\mu$ such that $h_{\mu}(f) = + \infty$;
furthermore, there exists $p \ge 1$ such that $\mu$ is mixing for the map $f^p$.
\end{maintheorem}
\noindent{\bf Remark.} In the case that $M$ is a compact interval, Theorem \ref{Theorem1} was proved in {\cite[Theorem 3.9 and p.33, para 2]{CT2017}}. So, in this paper we will prove it only for $m$-dimensional manifolds where $m \geq 2$.
Yano proved that generic continuous maps of compact manifolds with or without boundary have infinite topological entropy \cite{Yano}. Therefore, from the variational principle, there exist invariant measures with metric entropies as large as required. Nevertheless, this property alone does not imply the existence of invariant measures with infinite metric entropy. In fact, it is well known that the metric entropy function $\mu \rightarrow h_{\mu}(f)$ is not upper semi-continuous for $C^0$-generic systems. Moreover, we prove that it is \em strongly \em non upper semi-continuous in the following sense:
\begin{maintheorem}
\label{Theorem2}
For a generic map $f \in C^0(M)$ there exists a sequence of ergodic measures $\mu_n$ such that for all $n \ge 1$ we have
$h_{\mu_n} = +\infty$ and $$ {\lim _{n \rightarrow +\infty}\!\!\! }^*\, \mu_n = \mu \mbox{ with } h_{\mu }= 0,$$
where $\lim^*$ denotes the limit in the space of probability measures endowed with the weak$^*$ topology.
\end{maintheorem}
Even if we had a priori some $f$-invariant measure $\mu$ with infinite metric entropy, we do not know if this property alone implies the existence of ergodic measures with infinite metric entropy as Theorems \ref{Theorem1} and \ref{Theorem2} state. Actually, if $\mu$ had infinitely many ergodic components, the proof that the metric entropy of at least one of those ergodic components must be greater than or equal to the entropy of $\mu$ uses the upper semi-continuity of the metric entropy function (see for instance \cite[Theorem 4.3.7, p. 75]{Keller}).
Yano also proved that generic homeomorphisms on manifolds of dimension 2 or larger have infinite topological entropy \cite{Yano}. Thus one wonders if Theorems \ref{Theorem1} and \ref{Theorem2} hold also for homeomorphisms. We give a positive answer to this question.
\begin{maintheorem}
\label{Theorem3}
If $\text{dim}(M) \ge2$, then the generic homeomorphism $f \in \mbox{Hom}(M)$ has an ergodic Borel probability measure $\mu$
satisfying $h_{\mu}(f) = + \infty$; furthermore, there exists $p \ge 1$ such that $\mu$ is mixing for the map $f^p$.
\end{maintheorem}
\begin{maintheorem}
\label{Theorem4}
If $\text{dim}(M) \ge2$, then for a generic homeomorphism $f \in \mbox{Hom}(M)$ there exists a sequence of ergodic measures $\mu_n$ such that
for all $n \ge 1$ we have
$h_{\mu_n} = +\infty$ and $$ {\lim _{n \rightarrow +\infty}\!\!\! }^*\, \mu_n = \mu \mbox{ with } h_{\mu }= 0.$$
\end{maintheorem}
To prove the above theorems in dimension two or larger we construct a family ${\mathcal H}$, called models, of continuous maps in the cube $[0,1]^m$, including some homeomorphisms of the cube onto itself, which have a complicated behavior on a Cantor set (Definition \ref{DefinitionModel}).
A periodic shrinking box is a compact set $K \subset M$ that is homeomorphic to the cube $[0,1]^m$ and such that for some $p \geq 1$: $K, f(K), \ldots, f^{p-1}(K)$ are pairwise disjoint and $f^p(K) \subset \mbox{int}(K)$ (Definition \ref{DefinitionPerShrBox}). The main steps of the proofs of Theorems \ref{Theorem1} and \ref{Theorem3} are the following results.
\begin{enumerate}
\item{\bf Lemma \ref{LemmaMain}} \em Any model $\Phi \in {\mathcal H}$ in the cube $[0,1]^m$ has a $\Phi$-invariant mixing measure $\nu$ such that $h_{\nu}(\Phi) = +\infty$. \em
\item {\bf Lemmas \ref{Lemma1} and \ref{Lemma1b}}
\em Generic maps in $C^0(M)$, and generic homeomorphisms, have a periodic shrinking box. \em
\item {\bf Lemmas \ref{Lemma2} and \ref{Lemma2b}} \em Generic maps $f \in C^0(M)$, and generic homeomorphisms for $m \ge 2$, have a periodic shrinking box $K$ such that the return map $f^p|_K$ is topologically conjugated to a model $\Phi \in {\mathcal H}$. \em
\vspace{.2cm}
A \em good sequence of periodic shrinking boxes \em is a sequence $ \{K_n\}_{n \geq 1} $ of periodic shrinking boxes that accumulate (in the Hausdorff distance) on a periodic point $x_0$, and moreover their iterates $f^{j}(K_n) $ also accumulate on the periodic orbit of $x_0$, uniformly in $j \geq 0$ (see Definition \ref{definitionCleanSequence}).
The main step in the proof of Theorems \ref{Theorem2} and \ref{Theorem4} is Lemma \ref{LemmaMain} together with
\item {\bf Lemma \ref{lemma4}}
\em Generic maps $f \in C^0(M)$, and if $m \geq 2$ also generic homeomorphisms, have a good sequence $\{K_n\}$ of boxes, such that the return maps $f^{p_n}|_{K_n}$ are topologically conjugated to models $\Phi_n \in {\mathcal H}$. \em
\end{enumerate}
\section{Construction of the family of models.} \label{SectionModels}
We call a compact set $K \subset D^m:=[0,1]^m$ or more generally $K \subset M$, where $M$ is an $m$-dimensional manifold with $m \geq 2$, \em a box \em if it is homeomorphic to $D^m$.
Models are certain continuous maps of $D^m:= [0,1]^m$ that we will define in this section.
We denote by $\mbox{RHom}(D^m)$ the space of relative homeomorphisms $\Phi: D^m \rightarrow D^m$ (i.e., $\Phi$ is a homeomorphism onto its image included in $D^m$), with the topology induced by:
$$\|\Phi - \Psi \| _{\mbox{RHom}} := \max \Big \{\|\Phi -\Psi \|_{C^0(D^m)}, \ \|\Phi^{-1} - \Psi^{-1}\|_{C^0(\Phi(D^m) \cap \Psi(D^m))} \Big \} $$ if $\Phi(D^m) \cap \Psi(D^m) \neq \emptyset$.
Throughout the article we will consider the distance between two relative homeomorphisms $\Phi, \Psi$ only if $ \Phi(D^m) \cap \Psi(D^m) \neq \emptyset$, and we mostly use the above distance in the case $ \Phi(D^m) = \Psi(D^m)$.
\begin{definition}
{\bf ($\Phi$-relation from a box to another).} \em \label{definition-hRelation}\hspace*{\fill} \\
Let $\Phi \in C^0(D^m)$. Let $B,C \subset \mbox{int}(D^m) $ be two boxes. We write $$B \stackrel{\Phi}{\rightarrow} C \mbox{ if }
\Phi(B) \cap \mbox{int}(C) \neq \emptyset.$$ Observe that this condition is open in $C^0(D^m)$ and also in $\mbox{RHom}(D^m)$.
\end{definition}
\begin{definition} {\bf (Atoms of generation 0 and 1)} \em (See Figure \ref{FigureAtomsGen-0-1}) \label{definitionAtomsOfGeneration0-1}\hspace*{\fill} \\
We call a box $A \subset \mbox{int}(D^m)$ \em an atom of generation 0 \em for $\Phi$, if $A\stackrel{\Phi}{\rightarrow} A$. We call two disjoint boxes $B_1, B_2 \subset \mbox{int}(A)$, \em atoms of generation 1, \em if
$$B_i \stackrel{\Phi}{\rightarrow} B_j, \qquad \forall \ i,j \in \{1,2\}.$$
\begin{figure}[t]
\centering
\hspace{-0.2in}\vspace{0.2in}\includegraphics[scale=.40]{Figure1.jpg}
\caption{An atom $A$ of generation 0 and two atoms $B, C$ of generation 1 for a map $\Phi$ of $D^2$. \label{FigureAtomsGen-0-1}}
\end{figure}
\begin{figure}[t]
\centering
\hspace{-0.2in}\includegraphics[scale=.55]{Figure2.jpg}
\caption{An atom $A$ of generation 0, two atoms $B,C$ of generation 1, and $16$ atoms of generation~2. In particular the two atoms $G, H$ of generation 2 satisfy $\Gamma_2(C,B,C) = \{G, H\}$.} \label{FigureAtomsGen-n}
\end{figure}
If $A$ is an atom of generation 0 and $B_1, B_2$ are the two atoms of generation 1, we denote ${\mathcal A}_0:= \{A\}$, ${\mathcal A}_1 := \{B_1,B_2\}.$
\end{definition}
\begin{definition} \label{definitionAtomsGeneration-n}
{\bf (Atoms of generation $\mathbf{n \ge 2}$)} \em (See Figure \ref{FigureAtomsGen-n}) \hspace*{\fill} \\
Assume by induction that the finite families ${\mathcal A}_0, {\mathcal A}_1 \ldots, {\mathcal A}_{n-1}$ of atoms for $\Phi \in C^0(D^m)$ of generations up to $n-1$ are already defined, such that the atoms of the same generation $j = 1,\ldots,n-1 $ are pairwise disjoint, contained in the interior of the $(j-1)$-atoms in such a way that all the $(j-1)$-atoms contain the same number of $j$-atoms and
$$\#{\mathcal A}_{j} = 2^{ j^2}, \qquad \forall \ j= 0, 1, \ldots, n-1.$$ Assume also that for all $j \in \{1, \ldots, n-1\}$ and for all $B \in {\mathcal A}_j$:
$$ \#\{C \in {\mathcal A}_j \colon B \stackrel{\Phi}{\rightarrow} C\}= 2^j, \ \ \ \ \ \ \#\{D \in {\mathcal A}_j \colon D \stackrel{\Phi}{\rightarrow} B\}= 2^j.$$
Denote
$${\mathcal A}_j^{2*}: = \{(B, C) \in {\mathcal A}^2_j: \ \ B \stackrel{\Phi}{\rightarrow} C\}, $$
$${\mathcal A}_j^{3*}: = \{(D,B, C) \in {\mathcal A}^3_j: \ \ D \stackrel{\Phi}{\rightarrow} B, \ \ \ B \stackrel{\Phi}{\rightarrow} C\}. $$
For fixed $(D, B, C ) \in {\mathcal A}^{3 *}_{j-1}$ denote
$$\Omega_j(B): = \{G \in {\mathcal A}_j \colon G \subset \mbox{int}(B)\},$$
$$\Omega_j(D, B) := \{G \in \Omega_j(B) \colon D \stackrel{\Phi}{\rightarrow} G\},$$
$$\Gamma_j(D, B, C) := \{G \in \Omega_j(D, B) \colon G \stackrel{\Phi}{\rightarrow} C\}.$$
We call the sets of a finite collection ${\mathcal A}_n$ of pairwise disjoint boxes, \em atoms of generation $n$, \em or \em $n$-atoms \em if they satisfy the following conditions (see Figure \ref{FigureAtomsGen-n}):
\noindent {\bf a)} Each atom of generation $n$ is contained in the interior of an atom of generation $n-1$, and the interior of each atom $ B \in {\mathcal A}_{n-1}$ contains exactly $2^n \cdot 2^{n-1} = 2^{2n-1}$ pairwise disjoint $n$-atoms, which we call \em the children of $B$.
\em In other words:
$$\#\Omega_n(B) = 2^{2n-1} \qquad \forall \ B \in {\mathcal A}_{n-1}.$$
Therefore $$
{\mathcal A}_n = \bigcup_{B \in {\mathcal A}_{n-1}} \Omega_n(B),$$ where the families of atoms in the union are pairwise disjoint. Therefore \begin{equation}
\label{eqn100}\# {\mathcal A}_n = (\#{\mathcal A}_{n-1}) (\#\Omega_n(B)) =2^{(n-1)^2} \cdot 2^{2n-1} = 2 ^{n^2}. \end{equation}
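For concreteness, equality (\ref{eqn100}) gives the following counts for the first generations; the case $n=2$ agrees with the $16$ atoms of generation 2 shown in Figure \ref{FigureAtomsGen-n}:

```latex
\[
\#{\mathcal A}_0 = 1, \quad
\#{\mathcal A}_1 = 2^{1^2} = 2, \quad
\#{\mathcal A}_2 = 2^{1^2} \cdot 2^{2\cdot 2-1} = 2^{4} = 16, \quad
\#{\mathcal A}_3 = 2^{2^2} \cdot 2^{2\cdot 3-1} = 2^{9} = 512.
\]
```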
\noindent {\bf b)} For each $B \in {\mathcal A}_{n-1}$, the collection of children of $B$ is partitioned in the $2^{n-1}$ sub-collections $\Omega_n(D,B)$, where the atoms $D \in {\mathcal A}_{n-1}$ are such that $D \stackrel{\Phi}{\rightarrow} B$. Besides, $$\# \Omega_n(D, B) = 2^n, \quad \forall (D,B) \in {\mathcal A}_{n-1}^{2*}.$$
In other words
$$\Omega_n(B) = \bigcup_{D \colon (D,B) \in {\mathcal A}_{n-1}^{2*}} \Omega_n(D,B), \qquad \forall \ B \in {\mathcal A}_{n-1}, $$
where the families of atoms in the above union are pairwise disjoint. Thus for any $B \in {\mathcal A}_{n-1}$ we have
$$\#\Omega_n(B) =\big( \# \{D \in {\mathcal A}_{n-1} \colon D \stackrel{\Phi}{\rightarrow} B\} \big) \cdot \big (\# \Omega_n(D,B)\big)=
2^{n-1} \cdot 2^n = 2^{2n-1}.
$$
\noindent {\bf c)} For each 2-tuple $(D,B) \in {\mathcal A}_{n-1}^{2*}$ the collection $\Omega_n(D, B)$ is partitioned in the $2^{n-1}$ sub-collections $\Gamma_n(D,B,C)$, where the atoms $C \in {\mathcal A}_{n-1}$ are such that $B \stackrel{\Phi}{\rightarrow} C$. Besides,
$$\#\Gamma_n (D,B, C) = 2\ \ \forall \ (D,B,C) \in {\mathcal A}_{n-1}^{3*}.$$
For example, in Figure \ref{FigureAtomsGen-n} we have $\{G,H\} = \Gamma_2(C,B,C) $.
\noindent {\bf d)} For each $(D, B,C) \in {\mathcal A}_{n-1}^{3*}$ and for each $G \in \Gamma_n(D, B, C)$
$$ G \stackrel{\Phi}{\rightarrow} E, \qquad \forall \ E \in \Omega_n(B,C), $$
$$\mbox{and } \ \Phi(G) \cap E' = \emptyset \ \forall \ E' \in {\mathcal A}_n \setminus \Omega_n(B,C).$$
From the above conditions a) to d) we deduce:
\begin{equation}
\label{eqn99a}\Omega_n(D,B) = \bigcup_{C \colon (B,C) \in {\mathcal A}_{n-1}^{2*}} \Gamma_n(D,B,C),\end{equation}
\begin{equation}
\label{eqn99}
{\mathcal A}_n = \bigcup_{(D, B, C) \in {\mathcal A}_{n-1}^{3*} } \Gamma_n(D, B,C),\end{equation}
where the families of atoms in the unions are pairwise disjoint.
Besides, for any pair $(G,E) \in {\mathcal A}_n^2$:
$$ G \stackrel{\Phi}{\rightarrow} E \mbox{ if and only if } \ \exists \ (D, B,C) \in {\mathcal A}_{n-1} ^{3*} \mbox{ such that }$$ $$ G \in \Gamma_n(D,B, C), \ E \in \Omega_n(B,C). $$
We also deduce the following properties for any atom $G \in {\mathcal A}_n$:
\begin{equation}
\label{eqn101}\#\{E \in {\mathcal A}_n \colon G \stackrel{\Phi}{\rightarrow} E\} = 2^n, \ \ \ \ \#\{E \in {\mathcal A}_n \colon E \stackrel{\Phi}{\rightarrow} G\} = 2^n.\end{equation}
In fact, on the one hand, from condition d) we deduce that for each atom $G \in {\mathcal A}_n$, the number of atoms $E \in {\mathcal A}_n$ such that $G \stackrel{\Phi}{\rightarrow} E$ equals the number $\# \Omega_n(B,C) = 2^n$, where $B$ and $C$ are the unique $(n-1)$-atoms such that $G \in \Gamma_n(D,B,C)$ for some $D \in {\mathcal A}_{n-1}$.
On the other hand, the number of atoms $E\in {\mathcal A}_n$ such that $E \stackrel{\Phi}{\rightarrow} G$ equals $$(\# \Gamma_n (D,B,C)) \cdot (\# \{D \in {\mathcal A}_{n-1}: \ D \stackrel{\Phi}{\rightarrow} B\}) = 2 \cdot 2^{n-1} = 2^n, $$ where $B$ and $C$ are the unique $(n-1)$-atoms such that $G \in \Omega_n(B,C)$.
Since $2^n < 2^{n^2}$ if $n > 1 $, we conclude that not all the pairs $(G,E)$ of atoms of generation $n \geq 2$ satisfy $G \stackrel{\Phi}{\rightarrow} E$.
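For example, for $n = 2$ equality (\ref{eqn101}) says that each of the $16$ atoms of generation $2$ is $\Phi$-related to exactly $2^2 = 4$ of them, so only

```latex
\[
2^{n^2} \cdot 2^{n} \Big|_{n=2} = 16 \cdot 4 = 64
\]
```

of the $16^2 = 256$ ordered pairs $(G,E)$ of $2$-atoms satisfy $G \stackrel{\Phi}{\rightarrow} E$.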
\end{definition}
\begin{definition} {\bf (Models)}
\label{DefinitionModel} \em
We call $\Phi \in C^0(D^m)$ \em a model \em if
$\Phi(D^m) \subset \mbox{int}(D^m),$
and there exists a sequence $\{{\mathcal A}_n\}_{n \geq 0}$
of finite families of pairwise disjoint boxes contained in $\mbox{int}(D^m)$ that are atoms of generation $0,1, \ldots, n, \ldots$ respectively for $\Phi$ (according to Definitions \ref{definitionAtomsOfGeneration0-1}, and \ref{definitionAtomsGeneration-n}) such that
\begin{equation}
\label{eqnLimitDiamAtoms=0}
\lim_{n \rightarrow + \infty} \max_{A \in {\mathcal A}_n} \mbox{diam}A = 0.\end{equation}
\end{definition}
Denote by ${\mathcal H}$ the family of all the models in $C^0(D^m)$.
\begin{remark}
\label{RemarkHisG_delta} \em For each fixed $n \geq 1$ the four conditions a) to d) of Definition
\ref{definitionAtomsGeneration-n}, are open conditions. So, the family ${\mathcal H}_n$ of maps that have atoms up to generation $n\geq 1$,
is open in $C^0(D^m)$ and also in $\mbox{RHom}(D^m)$. Moreover, the conditions $\Phi(D^m) \subset \mbox{int}(D^m)$ and
$\max_{A \in {\mathcal A}_n} \mbox{diam}(A) < \varepsilon_n$ are also open. Therefore, the intersection of the families satisfying
the conditions for all $n \geq 1$, which is the family ${\mathcal H}$ of model maps, is a $G_{\delta}$-set in $C^0(D^m)$
and
${\mathcal H} \cap \mbox{RHom}(D^m)$ is a $G_{\delta}$-set in $\mbox{RHom}(D^m)$.
\end{remark}
\noindent {\bf Construction of models.} \label{SectionFamilyModelsIsGood}\\
The rest of this section is dedicated to the proof of the following lemma.
\begin{lemma}
\label{LemmaModelHRHomNonempty}
The family ${\mathcal H}$ of models is a nonempty $G_{\delta}$-set in $C^0(D^m)$ and ${\mathcal H} \cap \mbox{RHom}(D^m)$ is a nonempty $G_{\delta}$-set in $\mbox{RHom}(D^m)$.
\end{lemma}
In Remark \ref{RemarkHisG_delta} we have noted that $\mathcal H$ is a $G_{\delta}$-set. The fact that it is nonempty is
a consequence of the following result.
\begin{lemma}
\label{LemmaConstruccionModelPsifisPhi}
For all $f \in \mbox{RHom}(D^m)$ such that $f (D^m) \subset \mbox{int}(D^m),$ there exist $\psi \in \mbox{Hom}(D^m)$ and $\Phi \in {\mathcal H} \cap \mbox{RHom}(D^m)$ such that \em
$$\psi|_{\partial D^m} = \mbox{id}|_{\partial D^m} \ \ \mbox{ \em and } \ \ \psi \circ f = \Phi.$$
\end{lemma}
We begin by outlining the strategy of the proof of Lemma \ref{LemmaConstruccionModelPsifisPhi}. The homeomorphisms $\psi$ and $\Phi$ are constructed as uniform limits of convergent sequences $\psi_n$ and $\Phi_n$ of homeomorphisms, such that $\psi_n \circ f= \Phi_n$ for all $n \geq 0$. The homeomorphism $\Phi_n$ has atoms up to generation $n$, and $\Phi_{n+1} $ coincides with $\Phi_n$
outside the interiors of its atoms of generation $n$. Therefore the collection of atoms of generation up to $n$ for $\Phi_n$ is also
a collection of atoms for $\Phi_{n+1}$. To change $\Phi_n$ inside each atom $A$ of generation $n$ we change $\psi_n$ only inside some adequately defined boxes $R \subset \mbox{int}( f(A))$, constructing $\psi_{n+1}|_R $ in such a way that $ \psi_{n+1} |_{\partial R} = \psi_{n}|_{\partial R}$, and finally extending $\psi_{n+1}(x) := \psi_n(x)$ for all $x$ in the complement of the union of all the boxes $R$. Defining several boxes $R$ inside each atom $A$ of generation $n$ is useful to construct the atoms of generation $n+1$ for $\Phi_{n+1} = \psi_{n+1} \circ f$.
To prove Lemma \ref{LemmaConstruccionModelPsifisPhi}, we need several technical lemmas and some more definitions.
For each $(P,Q) \in {\mathcal A}_n^{2*}$, in the proof of Lemma \ref{LemmaConstruccionModelPsifisPhi} we will recursively choose a connected component $S(P,Q)$ of $\Phi_n(P) \cap Q $.
For each $(D,B,C) \in {\mathcal A}_n^{3*}$ let $G_0(D,B,C), \ G_1(D,B,C)$ be two disjoint boxes contained in $\mbox{int}(S(D,B)) \cap \Phi_n^{-1}(S(B,C))$. Denote:
$${\mathcal A}_{n+1} := \{G_j(D,B,C): j \in \{0,1\}, (D,B,C) \in {\mathcal A}_n^{3*}\};$$
$$\Omega_{n+1}(B) := \{G_j(D,B,C): j \in \{0,1\}, \ D, C \in {\mathcal A}_n, \ (D,B,C) \in {\mathcal A}_n^{3*}\} $$ for each fixed $B \in {\mathcal A}_n$;
$$\Omega_{n+1}(D, B) := \{G_j(D,B,C): j \in \{0,1\}, \ C \in {\mathcal A}_n, \ B \stackrel{\Phi_n}{\rightarrow} C \}$$ for each fixed $(D,B) \in {\mathcal A}_n^{2*} $;
$$\Gamma_{n+1}(D, B,C) := \{G_j(D,B,C): j \in \{0,1\} \} $$ for each fixed $(D,B,C) \in {\mathcal A}_n^{3*}$.
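For instance, for $n=1$ Definition \ref{definitionAtomsOfGeneration0-1} gives $B_i \stackrel{\Phi_1}{\rightarrow} B_j$ for all $i,j \in \{1,2\}$, so ${\mathcal A}_1^{3*} = {\mathcal A}_1^3$ and the construction above produces

```latex
\[
\#{\mathcal A}_2 = 2 \cdot \#{\mathcal A}_1^{3*} = 2 \cdot 2^3 = 16
\]
```

boxes $G_j(D,B,C)$, in agreement with Figure \ref{FigureAtomsGen-n}.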
\begin{lemma}
\label{LemmaConstruction(n+1)atoms}
Fix $n \ge 0$. Let $\Phi_n \in RHom(D^m)$. Assume that there exist families ${\mathcal A}_0, {\mathcal A}_1, \ldots, {\mathcal A}_n$ of atoms of generation up to $n$ for $\Phi_n$ and that
$$\mbox{int}(S(D,B)) \cap \Phi_n^{-1}(S(B,C)) \neq \emptyset \ \ \forall \ (D,B,C) \in {\mathcal A}_n^{3*}.$$
Then
\begin{enumerate}[label=\alph*)]
\item $\#{\mathcal A}_n^{3*} = 2^{n^2 + 2n}$.
\item $\#{\mathcal A}_{n+1} = 2^{(n+1)^2}$ and $E \cap F = \emptyset$ for all $E, F \in {\mathcal A}_{n+1}$ such that $E \neq F.$
\item The family ${\mathcal A}_{n+1}$ is partitioned in the $2^{n^2}$ different subfamilies $\Omega_{n+1}(B)$ where $B \in {\mathcal A}_n$. Besides, $\# \Omega_{n+1}(B) = 2^{2n+1} = 2^{n+1} \cdot 2^n $ for all $B \in {\mathcal A}_n$.
\item For all $B \in {\mathcal A}_n$ the family of boxes $\Omega_{n+1}(B)$ is partitioned in the $2^n$ subfamilies $\Omega_{n+1} (D,B)$ where $D \in {\mathcal A}_n$ such that $D \stackrel{\Phi_n}{\rightarrow} B$.
\item $\#\Omega_{n+1}(D, B) = 2^{n+1}$ for all $(D,B) \in {\mathcal A}_n^{2*}$.
\item For all $(D, B) \in {\mathcal A}_n^{2*}$ the family of boxes $\Omega_{n+1}(D,B)$ is partitioned in the $2^n$ subfamilies $\Gamma_{n+1} (D,B, C)$ where $C \in {\mathcal A}_n$ such that $B\stackrel{\Phi_n}{\rightarrow} C$.
\item $\#\Gamma_{n+1}(D,B,C)= 2$ for all $(D,B,C) \in {\mathcal A}_n^{3*}$.
\item For all $(B,C) \in {\mathcal A}_n^{2*}$ and for all $E \in {\mathcal A}_{n+1}$,
the intersection $E \cap B \cap \Phi_n^{-1}(C)$ is nonempty if and only if $E \in \Gamma_{n+1}(D, B, C)$ for some $D \in {\mathcal A}_n$ such that $D\stackrel{\Phi_n}{\rightarrow} B$.
\end{enumerate}
\end{lemma}
\begin{remark} \label{remarkConditionsabc}
\em Note that properties b) to g) in Lemma \ref{LemmaConstruction(n+1)atoms} imply that the family ${\mathcal A}_{n+1}$ of boxes thus constructed satisfies conditions a), b) and c) of Definition \ref{definitionAtomsGeneration-n} of atoms of generation $n+1$ for $\Phi_n$. Thus, for ${\mathcal A}_{n+1}$ to be a family of atoms of generation $n+1$, it only remains to modify, if necessary, $\Phi_n$ in the interior of the boxes of ${\mathcal A}_n$ in such a way that these boxes also satisfy condition d) of Definition \ref{definitionAtomsGeneration-n}.
\end{remark}
\begin{proof} {\em of Lemma \ref{LemmaConstruction(n+1)atoms}.}
a) All the triples $(D,B,C) \in {\mathcal A}_n^{3*}$ can be constructed choosing freely $D \in {\mathcal A}_n$, for each $D$ choosing $B \in {\mathcal A}_n$ such that $D\stackrel{\Phi_n}{\rightarrow}B$, and for each such $B$ choosing $C \in {\mathcal A}_n$ such that $B\stackrel{\Phi_n}{\rightarrow}C$. Taking into account that by hypothesis ${\mathcal A}_n$ is a family of atoms, equalities (\ref{eqn100}) and (\ref{eqn101}) of Definition \ref{definitionAtomsGeneration-n} imply
$$\#{\mathcal A}_n^{3*}= \#\{(D,B,C)\in {\mathcal A}_n^3\colon D\stackrel{\Phi_n}{\rightarrow}B, B \stackrel{\Phi_n}{\rightarrow}C \}= $$
$$(\#{\mathcal A}_n) \cdot (\#\{B \in {\mathcal A}_n \colon D\stackrel{\Phi_n}{\rightarrow}B\}) \cdot (\#\{C \in {\mathcal A}_n \colon B \stackrel{\Phi_n}{\rightarrow}C\}) = $$ $$ 2^{n^2} \cdot 2^n \cdot 2^n = 2^{n^2 + 2n}. $$
g) By construction, $\Gamma_{n+1}(D,B,C) = \{G_0(D,B,C), \ G_1(D,B,C)\}$, where the two boxes inside the family are disjoint, hence different. Thus the cardinality of the family is 2.
b) By construction, $E= G_j(D,B,C)$ and $F= G_{j'}(D', B', C')$. If $E \neq F$ then
either $( D, B, C)= ( D', B', C')$ and $j \neq j'$, or $( D, B, C)\neq ( D', B', C')$. In the first case, by construction $G_0(D, B, C) \cap G_1(D, B, C) = \emptyset$, in other words $E \cap F = \emptyset$. In the second case, either $D \neq D'$, $B \neq B'$ or $C \neq C'$. By construction $G_j(D, B, C) \subset \Phi_n(D)\cap B \cap \Phi_n^{-1}(C)$ and $G_{j'}(D', B', C') \subset \Phi_n(D')\cap B' \cap \Phi_n^{-1}(C')$. Since two different atoms of generation $n$ are pairwise disjoint, and $\Phi_n \in RHom(D^m)$, we deduce that $G_{j }(D , B , C ) \cap G_{j'}(D', B', C') = \emptyset$, hence $E \cap F = \emptyset$, as required.
By construction ${\mathcal A}_{n+1}= \bigcup_{(D, B, C)\in {\mathcal A}_n^{3*}} \Gamma_{n+1}(D,B,C),$
where the families in the union are pairwise disjoint and each contains 2 different boxes of ${\mathcal A}_{n+1}$. Therefore, taking into account part a), we deduce that $$\#{\mathcal A}_{n+1} = 2 \cdot(\#{\mathcal A}_n^{3*}) = 2 \cdot 2^{n^2 + 2n}= 2^{(n+1)^2}.$$
c) By construction of ${\mathcal A}_{n+1}$ and $\Omega_{n+1}(B)$, we have
$${\mathcal A}_{n+1} = \bigcup_{B \in {\mathcal A}_n} \Omega_{n+1}(B).$$
Besides $G \subset B$ for all $G \in \Omega_{n+1} (B)$, because by construction $G \subset S(D,B) \subset \Phi_n(D) \cap B$ for some $D \in {\mathcal A}_n$. Since two different atoms $B \neq B'$ of generation $n$ are pairwise disjoint, we deduce that $\Omega_{n+1}(B) \cap \Omega_{n+1}(B') = \emptyset$ if $B \neq B'$. We conclude that the above union of different subfamilies $\Omega_{n+1}(B)$ is a partition of ${\mathcal A}_{n+1}$, as required.
Note that
$$\Omega_{n+1}(B) = \bigcup_{D \in {\mathcal A}_n, D \stackrel{\Phi_n}{\rightarrow}B} \ \ \ \bigcup_{C \in {\mathcal A}_n, B \stackrel{\Phi_n}{\rightarrow}C} \Gamma_{n+1} (D,B,C),$$ where the families in the union are pairwise disjoint and each of them has two different boxes. Therefore, taking into account that ${\mathcal A}_n$ is a family of atoms (by hypothesis), equality (\ref{eqn101}) of Definition \ref{definitionAtomsGeneration-n} implies:
$$\#\Omega_{n+1}(B) = 2 \cdot (\#\{ D \in {\mathcal A}_n, D \stackrel{\Phi_n}{\rightarrow}B\}) \cdot \#\{C \in {\mathcal A}_n, B \stackrel{\Phi_n}{\rightarrow}C\} =
$$ $$2 \cdot 2^{n} \cdot 2^{n} = 2^{2n+1}. $$
d)
By construction, $\Omega_{n+1}(B) = \bigcup_{D \in {\mathcal A}_n, D \stackrel{\Phi_n}{\rightarrow} B}\ \ \Omega_{n+1}(D, B)$. Besides, $\Omega_{n+1}(D,B) \cap \Omega_{n+1}(D', B) = \emptyset$ if $D \neq D'$ in ${\mathcal A}_n$, since different atoms of generation $n$ are pairwise disjoint, and $G \in \Omega_{n+1}(D, B) $ implies $G \subset \Phi_n(D)$, which is disjoint from $\Phi_n(D')$ since $\Phi_n$ is a homeomorphism.
f) By construction,
$\Omega_{n+1} (D,B) = \bigcup_{C \in {\mathcal A}_n, B \stackrel{\Phi_n}{\rightarrow} C}\ \ \Gamma_{n+1}(D, B, C)$. Besides, $\Gamma_{n+1}(D,B,C) \cap\Gamma_{n+1}(D,B,C') = \emptyset $ if $C \neq C'$ in ${\mathcal A}_n$, because two different atoms of generation $n$ are pairwise disjoint and $G \in \Gamma_{n+1}(D,B,C)$ implies $ G \subset \Phi_{n}^{-1}(C)$.
e) From Assertions f) and g) proved above, and from equality (\ref{eqn101}) of the definition of atoms of generation $n$, we deduce that
$$\#\Omega_{n+1} (D,B) = 2 \cdot (\# \{C \in {\mathcal A}_n\colon B \stackrel{\Phi_n}{\rightarrow} C\}) = 2 \cdot 2^{n} = 2^{n+1}.$$
h) By construction, $E= G_j(D,B,C)$ for some $j \in \{0,1\}$ and some $(D,B,C) \in {\mathcal A}_n^{3*}$, i.e., $E \in \Gamma_{n+1}(D,B,C)$. Besides $E=G_j(D,B,C) \subset \mbox{int} (S(D,B) \cap \Phi_{n}^{-1}(S(B,C))) \subset \mbox{int}(\Phi_n(D) \cap B \cap \Phi_{n}^{-1}(C))$. But
for $(B,C) \neq (B',C')$ in ${\mathcal A}_n^{2*}$ the sets $B \cap \Phi_{n}^{-1}(C)$ and $B' \cap \Phi_{n}^{-1}(C')$ are disjoint,
thus the box $E$ belongs to $ \Gamma_{n+1}(D,B,C)$ for some $D \in {\mathcal A}_n$ if and only if it intersects $B \cap \Phi_n^{-1}(C)$.
\end{proof}
\begin{lemma}
\label{LemmaPermutation}
Assume the hypotheses of Lemma \ref{LemmaConstruction(n+1)atoms}.
Let $\widetilde L_{n+1} \subset D^m$ be a finite set with cardinality $2^{(n+1)^2} \cdot 2^{n+1}$, with a unique point $\widetilde e_i (E)\in \widetilde L_{n+1} $ for each $(i,E) \in \{1,2, \ldots, 2^{n+1}\}\times {\mathcal A}_{n+1}$.
Assume that $$\widetilde e_i(E) \in \mbox{int }(E) \ \ \forall \ (i,E) \in \{1,2, \ldots 2^{n+1}\} \times {\mathcal A}_{n+1}.$$
Then, there exists a permutation $\theta: \widetilde L_{n+1} \mapsto \widetilde L_{n+1}$ such that
\begin{enumerate}[label=\alph*)]
\item For all $ (i,E) \in \{1,2, \ldots 2^{n+1}\} \times \Gamma_{n+1}(D, B, C) $
$$\theta(\widetilde e_i(E)) = \widetilde e_{i'}(E') $$ for a unique $ i' \in \{1, 2, \ldots, 2^{n+1}\} $ and a unique $ E' \in \Omega_{n+1}(B, C).$
\item For all $(D,B,C) \in {\mathcal A}_n^{3*}$, for all $ E \in \Gamma_{n+1}(D, B, C) $ and for all $F \in \Omega_{n+1}(B, C)$
there exists unique $(i, i') \in \{1,2, \ldots 2^{n+1}\}^2$ such that $\theta(\widetilde e_i(E)) = \widetilde e_{i'}(F) $.
\item For all $(B,C) \in {\mathcal A}_n^{2*}$ $$\theta \Big(\Big \{\widetilde e_i(E)\colon E\in \bigcup_{D \in {\mathcal A}_n \colon (D,B) \in {\mathcal A}_n^{2*}} \ \ \Gamma_{n+1}(D,B,C), \ \ i\in \{1,2, \ldots, 2^{n+1}\}\Big\} \Big) = $$ $$\Big \{\widetilde e_{i'}(F)\colon F \in \Omega_{n+1}(B,C), \ i'\in \{1,2, \ldots, 2^{n+1}\} \Big \} = \widetilde L_{n+1} \cap S(B,C).$$
\end{enumerate}
\end{lemma}
\begin{proof}
From the construction of the family ${\mathcal A}_{n+1}$ (see Lemma \ref{LemmaConstruction(n+1)atoms}), we deduce that for all $E \in {\mathcal A}_{n+1}$ there exist a unique $j \in \{0,1\}$ and a unique $(D,B,C) \in {\mathcal A}_{n}^{3*}$ such that $E= G_j(D,B,C)\in \Gamma_{n+1}(D,B,C)$. Therefore, for all $(i,E) \in \{1, 2, \ldots, 2^{n+1}\} \times {\mathcal A}_{n+1}$, we have $$\widetilde e_i(E) = \widetilde e_i(G_j(D,B,C)).$$
By hypothesis ${\mathcal A}_n$ is the family of atoms of generation $n$ for $\Phi_n$, thus we can apply Equalities (\ref{eqn101}) of Definition \ref{definitionAtomsGeneration-n}. So, for each $B \in {\mathcal A}_n$, we can index the different atoms $D \in {\mathcal A}_n$ such that $D \stackrel{\Phi_n}{\rightarrow} B$ as follows:
\begin{equation}
\label{eqn105D-}
\{D \in {\mathcal A}_n \colon D \stackrel{\Phi_n}{\rightarrow} B\} = \{D_1^-(B), D_2^-(B), \ldots D_{2^n}^-(B)\},
\end{equation}
where $D^-_{k_1}(B) \neq D^-_{k_2}(B)$ if $k_1 \neq k_2$ (actually, they are disjoint atoms of generation $n$).
Analogously
\begin{equation}
\label{eqn105C+}
\{C \in {\mathcal A}_n \colon B\stackrel{\Phi_n}{\rightarrow} C\} = \{C_1^+(B), C_2^+(B), \ldots C_{2^n}^+(B)\},
\end{equation}
where $C^+_{l_1}(B) \neq C^+_{l_2}(B)$ if $l_1 \neq l_2$.
Now, we index the distinct points of $\widetilde L_{n+1}$ as follows:
$$\widehat e_{i, j}(k, B, l):= \widetilde e_i(G_j(D, B, C))= \widetilde e_i(G_j(D_k^-(B), B, C_l^+(B))),$$
$$\mbox{for all } (i,j,B, k, l) \in \{1,2,\ldots, 2^{n+1}\} \times \{0,1\} \times {\mathcal A}_n \times \{1,2, \ldots, 2^n\}^2. $$
Define the following correspondence $\theta: \widetilde L_{n+1} \rightarrow \widetilde L_{n+1}$:
$$\theta (\widehat e_{i,j}(k, B, l)) =\widehat e_{i',j'}(k', B', l'), \mbox{ where} $$
\noindent{$\bullet$} $ B' := C_l^+(B),$
\noindent{$\bullet$} $k'$ is such that $B= D^-_{k'}(C)$ (such $k'$ exists and is unique because $B \stackrel{\Phi_n}{\rightarrow} C$, using (\ref{eqn105D-})),
\noindent{$\bullet$} $l' = i \ (\mbox{mod. } 2^n),$
\noindent{$\bullet$} $j' = 0$ if $i \leq 2^n$ and $j'= 1$ if $i > 2^n,$
\noindent{$\bullet$} $i' = k + j \cdot 2^n.$
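The index bookkeeping in these five bullet points can be checked mechanically. The following sketch (ours, not part of the paper's construction; the helper name \texttt{theta\_indices} is hypothetical, and we normalize $i \ (\mbox{mod. } 2^n)$ to lie in $\{1, \ldots, 2^n\}$) verifies, for a fixed transition $B \stackrel{\Phi_n}{\rightarrow} C$, that the map $(i,j,k) \mapsto (i',j',l')$ is a bijection of the index set, which is the combinatorial core of the surjectivity of $\theta$:

```python
# Sketch of the index part of the permutation theta, for a fixed transition
# B -> C.  A point e_i(G_j(D_k^-(B), B, C_l^+(B))) is sent to a point whose
# indices are:
#   i' = k + j * 2^n
#   j' = 0 if i <= 2^n else 1
#   l' = ((i - 1) % 2**n) + 1   # "i (mod 2^n)", normalized to {1, ..., 2^n}

def theta_indices(i, j, k, n):
    """Image indices (i', j', l') of the point with indices (i, j, k)."""
    i_new = k + j * 2**n
    j_new = 0 if i <= 2**n else 1
    l_new = ((i - 1) % 2**n) + 1
    return (i_new, j_new, l_new)

def is_bijection(n):
    """Check that (i, j, k) -> (i', j', l') is a bijection of the index set."""
    domain = [(i, j, k)
              for i in range(1, 2**(n + 1) + 1)
              for j in (0, 1)
              for k in range(1, 2**n + 1)]
    images = {theta_indices(i, j, k, n) for (i, j, k) in domain}
    # The target set {1..2^{n+1}} x {0,1} x {1..2^n} has the same cardinality,
    # so injectivity (pairwise distinct images) is equivalent to bijectivity.
    return len(images) == len(domain)

assert all(is_bijection(n) for n in range(1, 6))
```

The factorization visible in the code, $(j,k) \mapsto i'$ and $i \mapsto (j',l')$ each bijective, is exactly the inverse computation carried out in the proof below.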
Let us prove that $\theta$ is surjective; hence it is a permutation of the finite set $\widetilde L_{n+1}$.
Let $\widehat e_{i',j'}(k', B', l') \in \widetilde L_{n+1}$ be given, where
$$(i',j',B', k', l') \in \{1,2,\ldots, 2^{n+1}\} \times \{0,1\} \times {\mathcal A}_n \times \{1,2, \ldots, 2^n\}^2.$$
Construct
\noindent{$\bullet$} $i: = l' + j' \cdot 2^n$. Then
$l'= i \ (\mbox{mod. }2^n)$, $j' = 0$ if $i \leq 2^n$ and $j'=1$ if $i > 2^n$.
\noindent{$\bullet$} $B := D^{-}_{k'}(B')$. Then $B \stackrel{\Phi_n}{\rightarrow}B'$. So, there exists $l$ such that $B' = C^+_{l}(B)$.
\noindent {$\bullet$} $k:= i' \ (\mbox{mod.} 2^n)$, $j:= 0$ if $i' \leq 2^n$ and $j:=1$ if $i' > 2^n$. Therefore $i' = k + 2^n j$.
By the above equalities we have constructed some $\theta^{-1}$ such that $\theta \circ \theta^{-1} $ is the identity map. So, $\theta$ is surjective, hence also one-to-one in the finite set $\widetilde L_{n+1}$, as required.
Now, let us prove that $\theta$ satisfies assertions a), b) and c) of Lemma \ref{LemmaPermutation}.
a) Fix $\widetilde e_i(E) \in {\widetilde L_{n+1}}$, where $(i,E) \in \{1,2, \ldots, 2^{n+1}\} \times {\mathcal A}_{n+1}$. By construction $\theta(\widetilde e_i(E)) = \widetilde e_{i'}(E') \in \mbox{int}(E')$ for some $(i',E') \in \{1,2, \ldots, 2^{n+1}\} \times {\mathcal A}_{n+1}$. Since two different boxes of ${\mathcal A}_{n+1}$ are pairwise disjoint (recall Lemma \ref{LemmaConstruction(n+1)atoms}-b)), the box $E'$ is unique. Besides, by hypothesis, $\widetilde e_{i'}(E') \neq \widetilde e_{j'}(E') $ if $i' \neq j'$. So, the index $i'$ is also unique. Therefore, to finish the proof of a), it is enough to check that $E ' \in \Omega_{n+1}(B,C)$ if $E \in \Gamma_{n+1}(D,B,C)$.
By the definition of the family $\Gamma_{n+1}(D,B,C) $ in Lemma \ref{LemmaConstruction(n+1)atoms}, if $E \in \Gamma_{n+1}(D,B,C)$, there exists $j \in \{0,1\}$ such that $E = G_j(D,B,C)$. Thus, using the notation at the beginning
$\widetilde e_i(E) = \widetilde e_i (G_j(D,B,C)) = \widehat e_{i,j}(k, B, l)$, where $D= D^-_{k}(B)$ and $C= C^+_{l}(B)$. Then, using the definition of the permutation $\theta$, and the computation of its inverse $\theta^{-1}$, we obtain $\widetilde e_{i'}(E') = \theta (\widetilde e_i(E)) = \widehat e_{i', j'}(k', B', l')$, where
$$ B' = C_l^+ (B) = C, \ \ \ D^-_{k'}(B') = B. $$
We have proved that $\widetilde e_{i'}(E') = \widetilde e_{i'}(G_{j'} (B,C, C'))$, where $C' = C^+_{l'}(C)$.
Finally, from the definition of the family $\Omega_{n+1}(B,C)$ in Lemma \ref{LemmaConstruction(n+1)atoms} we conclude that $E' \in \Omega_{n+1}(B,C) $ as asserted in part a).
b) Fix $(D, B, C) \in {\mathcal A}_{n}^{3*}$ and $E \in \Gamma_{n+1}(D,B,C)$. Then, using the definition of the family
$\Gamma_{n+1}(D,B,C)$ in Lemma \ref{LemmaConstruction(n+1)atoms}, there exists a unique
$(j, k, l) \in \{0,1\} \times \{1,2, \ldots, 2^n\}^2$ such that $E= G_j(D,B,C)$, $D= D_k^-(B)$, $C= C^+_l(B)$.
Consider the finite set $Z$ of $2^{n+1}$ distinct points $\widetilde e_i(E) = \widehat e_{i,j}(k, B, l)$,
with $j,k,B,l $ fixed as above and $i\in \{1,2, \ldots, 2^{n+1}\}$.
The image of each point in $Z$ by the permutation $\theta$ is $\theta(\widetilde e_i(E)) = \widetilde e_{i'}(G_{j'}(B, C, C')) $
(here we use assertion a)), satisfying the equality $i' = k + 2^n j$. Since $k,j$ are fixed, we deduce that there exists
a unique $i'$ such that all the points of $\theta(Z) $ are of the form $\widetilde e_{i'}(F)$, $F= G_{j'}(B,C,C')$ with $j'\in \{0, 1\}$,
$C' = C^+_{k'}(C), \ k' \in \{1,2, \ldots, 2^{n}\}$. We have proved that the permutation $\theta|_Z$ is equivalent to
$$ i \in \{1,2, \ldots, 2^{n+1}\} \rightarrow (j', k') \in \{0,1\} \times \{1,2, \ldots, 2^n\}$$ such that
$\theta (\widetilde e_i(E)) = \widetilde e_{i'}(G_{j'}(B, C, C^+_{k'}(C))) $ with $i'$ fixed.
Since $\#\{1,2, \ldots, 2^{n+1} \}= $ $\#( \{0,1\} \times \{1,2, \ldots, 2^n\})$, from the injectiveness of $\theta$ we deduce that the correspondence $i \mapsto (j',k')$ is a bijection onto $\{0,1\} \times \{1,2, \ldots, 2^n\}$. In other words, for every $F \in \Omega_{n+1}(B,C)$ there exists a unique
$i$ such that $\theta(\widetilde e_i(E)) = \widetilde e_{i'}(F)$ (where $i'$ is uniquely determined by $E$). This ends the proof of assertion b).
c) For fixed $(B,C) \in {\mathcal A}_n^{2*}$, denote $$P:= \Big\{\widetilde e_i(E) \colon \ \ E \in \bigcup_{ D \in {\mathcal A}_n, D \stackrel{\Phi_n}{\rightarrow} B} \ \ \Gamma_{n+1}(D,B,C), \ \ i\in\{1, 2, \ldots, 2^{n+1}\}\Big \},$$
$$Q:= \{\widetilde e_{i'}(F) \colon \ \ F\in \Omega_{n+1}( B,C), \ \ i'\in\{1, 2, \ldots, 2^{n+1}\} \} \subset \widetilde L_{n+1}. $$
Applying assertion a) we deduce that $\theta(P) \subset Q$. So, to prove that $\theta(P)=Q$ it is enough to prove that $\#P = \#Q$. In fact, applying parts e) and g) of Lemma \ref{LemmaConstruction(n+1)atoms} for the family of boxes ${\mathcal A}_{n+1}$, and equality (\ref{eqn101}) of Definition \ref{definitionAtomsGeneration-n} for the family of atoms ${\mathcal A}_n$, we obtain
\begin{align*}
\#P &= 2^{n+1} \cdot (\#\Gamma_{n+1}(D,B,C)) \cdot (\#\{D \in {\mathcal A_n} \colon D \stackrel{\Phi_n}{\rightarrow}B \}) = 2^{n+1} \cdot 2 \cdot 2^n, \\
\#Q & = 2^{n+1} \cdot (\#\Omega_{n+1}(B,C))= 2^{n+1}\cdot 2^{n+1},
\end{align*}
which proves that $\#P = \#Q$ and thus that $\theta(P)=Q$.
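As a numerical check of this count, take $n = 1$; then

```latex
\[
\#P = 2^{2} \cdot 2 \cdot 2^{1} = 16, \qquad
\#Q = 2^{2} \cdot 2^{2} = 16,
\]
```

and in general both sides equal $2^{2n+2}$.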
Finally, let us prove that $Q=\widetilde L_{n+1} \cap S(B,C).$
On the one hand, if $F \in \Omega_{n+1}(B,C)$, then $F = G_j(B,C,C')$ for some $(j, C')$. Applying the construction of the boxes of ${\mathcal A}_{n+1}$ in the hypothesis of Lemma \ref{LemmaConstruction(n+1)atoms}, we obtain $ F\subset S(B,C) $, hence $\widetilde e_{i'} (F) \in \widetilde L_{n+1} \cap \mbox{int}(F) \subset \widetilde L_{n+1} \cap S(B,C) $. This proves that $Q \subset \widetilde L_{n+1} \cap S(B,C)$.
On the other hand, if $\widetilde e_{i'}(F) \in \widetilde L_{n+1} \cap S(B,C) $, then $F \in {\mathcal A}_{n+1}$. Applying Lemma \ref{LemmaConstruction(n+1)atoms}, we obtain $F = G_j(D', B', C') \subset S(D', B')$ for some $(D', B', C') \in {\mathcal A}_n^{3*}$. Since $S (D',B') \subset \Phi_n(D') \cap B'$ and $S(B,C) \subset \Phi_n(B) \cap C$, we obtain $S(D',B') \cap S(B,C) = \emptyset$ if $(D', B') \neq (B,C)$. But $\widetilde e_{i'} (F) \in \mbox{int}(F) \cap S(B,C)\subset S(D', B') \cap S(B,C) $. We conclude that $(D',B')= (B,C)$, thus $F = G_j(B, C, C')\in \Omega_{n+1}(B,C)$, hence $\widetilde e_{i'}(F) \in Q$. We have proved that $ \widetilde L_{n+1} \cap S(B,C) \subset Q$. Therefore the proof of assertion c) is complete.
\end{proof}
\begin{lemma}
\label{LemmaSON(n+1)-atomos}
Assume the hypotheses of Lemmas \ref{LemmaConstruction(n+1)atoms} and \ref{LemmaPermutation}. Let $\Phi_{n+1} \in RHom (D^m)$ be such that
$$\Phi_{n+1}(x) = \Phi_n(x) \ \ \forall \ x \not \in \bigcup_{A \in {\mathcal A}_n} A, \ \ \ \Phi_{n+1}(\widetilde e)= \theta(\widetilde e) \ \ \forall \ \widetilde e \in \widetilde L_{n+1},$$
where $\theta$ is the permutation of $\widetilde L_{n+1}$ constructed in Lemma \ref{LemmaPermutation}.
Then,
\begin{enumerate}[label=\alph*)]
\item ${\mathcal A}_0, {\mathcal A}_1, \ldots, {\mathcal A}_{n+1} $ are collections of atoms up to generation $n+1$ for $\Phi_{n+1}$.
\item For each $(E,F) \in {\mathcal A}_{n+1}^2$ such that $E\stackrel{\Phi_{n+1}}{\rightarrow}F$, there exists exactly one point $\widetilde e_i(E) \in \widetilde L_{n+1} \cap \mbox{int}(E) $, and exactly one point $\widetilde e_{i'}(F) \in \widetilde L_{n+1} \cap \mbox{int}(F)$, such that $$\Phi_{n+1}(\widetilde e_i(E)) = \widetilde e_{i'}(F).$$
\end{enumerate}
\end{lemma}
\begin{proof} a) Since $\Phi_{n+1}(x) = \Phi_n(x)$ for all $x \not \in \bigcup_{B \in {\mathcal A}_n}B$, the same families ${\mathcal A}_0, {\mathcal A}_1, \ldots,{\mathcal A}_n,$ of atoms up to generation $n$ for $\Phi_n$ are families of atoms up to generation $n$ for $\Phi_{n+1}$.
As observed in Remark \ref{remarkConditionsabc}, the family ${\mathcal A}_{n+1}$ of pairwise disjoint boxes satisfies conditions a), b) and c) of Definition \ref{definitionAtomsGeneration-n} with $n+1$ instead of $n$, for $\Phi_{n+1}$. So, to prove that ${\mathcal A}_{n+1}$ is a family of atoms of generation $n+1$ for $\Phi_{n+1}$ it is enough to prove that its boxes also satisfy condition d) of Definition \ref{definitionAtomsGeneration-n}.
Take $(D,B,C) \in {\mathcal A}_{n}^{3*}$, $E \in \Gamma_{n+1}(D,B,C)$ and $F \in \Omega_{n+1}(B,C)$. Applying Lemma \ref{LemmaPermutation}-b), there exists $(i, i')$ such that $\theta(\widetilde e_i(E)) = \widetilde e_{i'}(F)$. Therefore $$\Phi_{n+1}(\widetilde e_i(E)) = \widetilde e_{i'}(F).$$
Since $\widetilde e_i(E) \in \mbox{int}(E)$ and $\widetilde e_{i'}(F) \in \mbox{int}(F) $, we conclude that $\Phi_{n+1}(E) \cap \mbox{int}(F) \neq \emptyset $, namely, $E \stackrel{\Phi_{n+1}}{\rightarrow}F$. We have proved that
$$E \stackrel{\Phi_{n+1}}{\rightarrow}F \ \ \forall E \in \Gamma_{n+1}(D,B,C), \ \forall F\in \Omega_{n+1}(B,C).$$
This proves the first half of condition d) of Definition \ref{definitionAtomsGeneration-n}.
Let us prove the second half. Let $E \in \Gamma_{n+1}(D,B,C)$ and $F \in {\mathcal A}_{n+1} \setminus \Omega_{n+1}(B,C)$. Then $ F \in \Omega_{n+1}(B',C')$ with $(B',C') \neq (B,C)$ (recall that by Lemma \ref{LemmaConstruction(n+1)atoms}, the union of all the families $\Omega_{n+1}(\cdot, \cdot)$ is ${\mathcal A}_{n+1}$). We have $F \in \Omega_{n+1}(B', C')$; therefore $ F\subset \Phi_n(B') \cap C'$. Besides $\Phi_n(B') =\Phi_{n+1}(B')$ (recall that $\Phi_n$ and $\Phi_{n+1}$ are relative homeomorphisms that coincide outside the atoms of generation $n$; so, by continuity, they coincide on the boundaries of all those atoms, and therefore the images by $\Phi_n$ and $\Phi_{n+1}$ of each atom of generation $n$ coincide). We deduce that
\begin{equation}\label{eqn106a} F \subset \Phi_{n+1}(B') \cap C'\mbox{ with } (B',C') \neq (B,C).\end{equation}
Since $E \in \Gamma_{n+1}(D,B,C)$, we know that $E \subset B \cap \Phi_{n}^{-1}(C) = B \cap \Phi_{n+1}^{-1}(C)$ (recall the construction of the atoms of the family ${\mathcal A}_{n+1}$ in the hypothesis of Lemma \ref{LemmaConstruction(n+1)atoms}). Therefore
\begin{equation}\label{eqn106b} \Phi_{n+1}(E) \subset \Phi_{n+1}(B) \cap C. \end{equation}
Since two different atoms of generation $n$ are disjoint, and $\Phi_{n+1}$ is a relative homeomorphism, we have $(\Phi_{n+1}(B) \cap C) \cap (\Phi_{n+1}(B') \cap C') = \emptyset$ if $(B,C) \neq (B',C')$. We conclude
that $$\Phi_{n+1}(E) \cap F = \emptyset \ \ \forall \ F \in {\mathcal A}_{n+1} \setminus \Omega_{n+1}(B,C),$$
ending the proof of condition d) of Definition \ref{definitionAtomsGeneration-n} for the family ${\mathcal A}_{n+1}$. The proof of assertion a) is complete.
b) Take $(E,F) \in {\mathcal A}_{n+1}^2$ such that $E \stackrel{\Phi_{n+1}}{\rightarrow}F$. Then
\begin{equation}
\label{eqn107}
\Phi_{n+1}(E) \cap F \neq \emptyset. \end{equation}
We have $E \in {\mathcal A}_{n+1}$. Therefore, by the construction of this family in Lemma \ref{LemmaConstruction(n+1)atoms}, there exists $(D,B,C) \in {\mathcal A}_{n}^{3*}$ such that $$E \in \Gamma_{n+1}(D,B,C). $$
Combining Assertion (\ref{eqn107}) with property d) of Definition \ref{definitionAtomsGeneration-n} (putting $n+1$ instead of $n$), we obtain
$$F \in \Omega_{n+1}(B,C).$$
Applying Lemma \ref{LemmaPermutation}-b) there exists a unique $(i, i') \in \{1,2, \ldots, 2^{n+1}\}^2$ such that
$\Phi_{n+1}(\widetilde e_i(E)) = \theta(\widetilde e_i(E)) = \widetilde e_{i'}(F)$.
Since $\widetilde L_{n+1} \cap \mbox{int}(E) = \{\widetilde e_i(E)\colon i\in \{1,2, \ldots, 2^{n+1}\}\}$ and analogously for $\widetilde L_{n+1} \cap \mbox{int}(F)$, we conclude that there exists a unique point $\widetilde e_i(E) \in \widetilde L_{n+1} \cap\mbox{int}(E)$ and a unique point $\widetilde e_{i'}(F) \in \widetilde L_{n+1} \cap\mbox{int}(F)$ such that
$\Phi_{n+1}(\widetilde e_{i}(E)) = \widetilde e_{i'}(F)$, as required. The proof of part b) is complete.
\end{proof}
\begin{lemma}
\label{LemmaHomeosEspecificandoFinitosPuntos}
Let $\psi \in Hom(D^m)$, $P, Q \subset D^m$ be boxes such that $\psi(P)=Q$, $p_1, \ldots, p_k \in \mbox{int} (P)$ be distinct points and $q_1, \ldots, q_k \in \mbox{int} (Q)$ also be distinct points.
Then, there exists $\psi^* \in Hom(D^m)$ such that
$$\psi^*(x) = \psi(x) \quad \forall \ x \not \in \mbox{int} (P) \mbox{ and }
\psi^*(p_i) = q_i \quad \forall \ i \in \{1, \ldots, k\}.$$
\end{lemma}
\begin{proof}
We argue by induction on $k \geq 1$.
For $k=1$, it suffices to construct a homeomorphism $\chi: Q \mapsto Q$ such that $\chi|_{\partial Q}$ is the identity map, and $\chi(\psi(p_1))= q_1$. Once $\chi$ is constructed, the homeomorphism defined by $\psi^*(x) := \psi(x)$ if $x \not \in \mbox{int}(P)$ and $\psi^*(x) = \chi \circ \psi (x)$ if $x \in \mbox{int}(P)$, satisfies the required properties.
Since $Q$ is a box we can consider a homeomorphism $\xi: Q \mapsto D^m$. Then, it is enough to construct $\chi' \in Hom(D^m)$
such that
\begin{equation}
\label{eqn111a} \chi'(x) = x \ \ \forall x \in \partial D^m \mbox{ and }
\end{equation}
\begin{equation}
\label{eqn111b} \chi'(\xi \circ \psi (p_1)) = \xi(q_1),
\end{equation}
since $\chi:= \xi^{-1} \circ \chi' \circ \xi$ satisfies the required properties.
Let $p'_1 := \xi(\psi(p_1)) \in \mbox{int}(D^m)$ and $q'_1 := \xi(q_1) \in \mbox{int}(D^m)$.
For each point $r \in \partial D^m$ denote by $S_p(r)$ the segment in $D^m$ joining the point $p'_1$ to $r$, and by $S_q(r)$
the segment joining $q'_1$ to $r$. For each $r \in \partial D^m$, let $\chi'|_{S_p(r)} : S_p(r) \mapsto S_q(r)$ be the affine map from one segment to the other
leaving $r$ fixed. Define $\chi'(p'_1):= q'_1$
and, for all $x \in D^m \setminus \{p'_1\}$, define $\chi'(x) := \chi'|_{S_p(r_x)} (x)$, where $r_x$ is the unique point in $\partial D^m$ such that $x \in S_p(r_x)$. It is not hard to check that $\chi': D^m \mapsto D^m$ is a homeomorphism. By construction, $\chi'$ fixes the points of $\partial D^m$ and $\chi'(p'_1) = q'_1$, as required.
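The radial construction of $\chi'$ is concrete enough to compute. The following Python sketch implements it on the closed unit disc ($m=2$): given $x$, it finds the boundary point $r_x$ by intersecting the ray from $p'_1$ through $x$ with the circle, and returns the point with the same affine parameter on the segment $S_q(r_x)$. The function name and the chosen coordinates are ours, purely for illustration; this is a minimal numerical sketch, not part of the text.

```python
import math

def chi(x, p, q):
    """Radial homeomorphism of the closed unit disc D^2 that fixes the
    boundary circle pointwise and sends the interior point p to the
    interior point q: a point with affine parameter t on the segment
    [p, r] is sent to the point with parameter t on [q, r]."""
    if x == p:
        return q
    dx, dy = x[0] - p[0], x[1] - p[1]
    # Intersect the ray p + s*(x - p), s > 0, with the unit circle |r| = 1.
    a = dx * dx + dy * dy
    b = 2.0 * (p[0] * dx + p[1] * dy)
    c = p[0] ** 2 + p[1] ** 2 - 1.0
    s = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # positive root
    r = (p[0] + s * dx, p[1] + s * dy)  # the boundary point r_x
    t = 1.0 / s                         # affine parameter of x on [p, r]
    return (q[0] + t * (r[0] - q[0]), q[1] + t * (r[1] - q[1]))

p, q = (0.3, 0.2), (-0.4, 0.1)
print(chi(p, p, q))           # p is sent to q
print(chi((1.0, 0.0), p, q))  # boundary points are fixed
```

Continuity at $p'_1$ corresponds to the parameter $t$ tending to $0$ as $x \to p'_1$, while boundary points have $t = 1$ and are fixed exactly.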
We turn to the inductive step: assume that the assertion of the lemma is true for $k-1$; let us prove it for $k$. We add the subscript $j$ to all the objects associated to step $j$ (we will use $j=1$, $j=k-1$ and of course $j=k$ in the proof).
First, the induction hypothesis yields a homeomorphism $\psi^*_{k-1}: D^m \mapsto D^m$ such that
$\psi^*_{k-1}(x) = \psi(x)$ for all $x \not \in \mbox{int}(P_{k})$ and $\psi^*_{k-1}(p_i) = q_i$ for all $1 \le i \le k-1$.
Since the points $p_1,\dots, p_k$ are distinct we can choose a box
$P_1 \subset \mbox{int}(P_{k})$ such that $p_k \in \mbox{int}(P_1)$ and $p_1, \ldots, p_{k-1} \not \in P_1$.
Consider the box $Q_1:= \psi^*_{k-1}(P_1) \subset \psi^*_{k-1}(P_k) = \psi(P_k) = Q_k$ and choose a point $q'_k \in \mbox{int}(Q_1)$.
Using that the lemma is true for $j=1$ (as proved above), construct a homeomorphism $\psi^*_1: D^m \mapsto D^m$
such that $\psi^*_1(p_k) = q'_k$ and $\psi^*_1(x) = \psi^*_{k-1}(x)$ for all $x \not \in P_1$ (hence $\psi^*_1(x) = \psi (x)$
for all $x \not \in P$).
We claim that $q'_k, q_k \not \in \{q_1,\ldots, q_{k-1} \} $. By hypothesis $q_1, q_2, \ldots, q_k$ are all distinct points;
on the other hand, $q_i = \psi^*_{k-1}(p_i) = \psi^*_1(p_i)$ for all $i=1, 2, \ldots, k-1$ and $q'_k = \psi^*_1(p_k)$.
We have $q'_k \not \in \{q_1, \ldots, q_{k-1}\}$ since $\psi^*_1$ is a homeomorphism
and $p_1, p_2, \ldots, p_{k-1}, p_{k}$ are all distinct points.
Now, consider a box $Q'_1 \subset \mbox{int}(Q_k)$ such that $q'_k, q_k \in \mbox{int}(Q'_1)$ and $q_1,\ldots, q_{k-1} \not \in Q'_1 $.
Since the lemma was already proved in the case $k= 1$, we can construct a homeomorphism
$\xi: D^m \mapsto D^m$ such that $\xi(x) = x$ for all $x \not \in \mbox{int}(Q'_1)$ and $\xi (q'_k)= q_k$.
Define $$\psi^*_k := \xi \circ \psi^*_{1} .$$
Then, since $\psi(x) \not \in \mbox{int} (\psi(P))= \mbox{int}(Q) \supset Q'_1$ for all $x \not \in \mbox{int}(P)$, we have
$$\psi^*_k(x) = \xi \circ \psi^*_1 (x) = \xi \circ \psi (x) = \psi(x), \quad \hbox{ for all } x \not \in \mbox{int}(P). $$
Besides, since $q_i \not \in Q'_1$ if $i \leq k-1$, we have
$$ \psi^*_k (p_i) = \xi \circ \psi^*_1 (p_i) = \xi(q_i) = q_i, \quad \forall \ i \in \{1, \ldots, k-1\}.$$
Finally, $$\psi^*_k (p_k) = \xi \circ \psi^*_1 (p_k) = \xi(q'_k) = q_k,$$
ending the proof of Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos}.
\end{proof}
\begin{lemma}
\label{LemmaHomeosEspecificandoFinitosPuntos2}
Let $\psi \in Hom(D^m)$, $r \geq 1$, $P_1, P_2, \ldots, P_r \subset D^m$ be pairwise disjoint boxes, and
$Q_j :=\psi(P_j)$ for all $j\in \{1, 2, \ldots, r\}$. For $k \geq 1$ and $j \in \{1,2, \ldots, r\}$, let
$p_{1,j}, \ldots, p_{k,j} \in \mbox{int} (P_j)$ be distinct points and $q_{1,j}, \ldots, q_{k,j} \in \mbox{int} (Q_j)$ also
be distinct points.
Then, there exists a $\psi^* \in Hom(D^m)$ such that
$$\psi^*(x) = \psi(x) \ \ \forall \ x \not \in \bigcup_{j=1}^{r}\mbox{int} (P_j) \mbox{ and }$$
$$\psi^*(p_{i,j}) = q_{i,j} \ \ \forall \ (i,j) \in \{1, \ldots, k\}\times \{1, 2, \ldots, r\}.$$
\end{lemma}
\begin{proof}
Applying Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos} for each $j \in \{1,2, \ldots, r\}$ yields homeomorphisms $\psi_j: D^m \mapsto D^m$ such that
$$\psi_j(x) = \psi(x) \ \ \forall \ x \in D^m \setminus \mbox{int}( P_j) \qquad \mbox{and} \qquad
\psi_j(p_{i,j})= q_{i,j} \ \ \forall \ i \in \{1, \ldots, k\}.$$
Construct the homeomorphism
$$\psi^*(x) := \psi(x) \mbox { if } x \not \in \bigcup_{j=1}^{r} \mbox{int}(P_j),$$
{and for each } $j \in \{1, 2, \ldots, r\}$: $$\psi^*(x) := \psi_j(x) \mbox { if } x \in \mbox{int}(P_j). $$
It is immediate to check that $\psi^*: D^m \mapsto D^m$ is a homeomorphism that satisfies the required properties.
\end{proof}
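A one-dimensional toy version of the gluing used in the two lemmas above can be sketched in a few lines, under the extra assumption, needed only in dimension $1$ where homeomorphisms of an interval fixing the endpoints are increasing, that the prescribed points and their images appear in the same order. All names below are illustrative choices of ours, not from the text.

```python
def glue(psi, boxes, pts):
    """1-D sketch of the gluing: psi is an increasing homeomorphism of
    [0, 1]; boxes is a list of pairwise disjoint open intervals (a, b);
    pts[j] lists pairs (p_i, q_i) with p_i in (a, b), q_i in
    (psi(a), psi(b)), in matching order.  Returns psi* equal to psi
    outside the boxes and sending each p_i to q_i, by piecewise-linear
    interpolation inside each box."""
    def psi_star(x):
        for (a, b), pairs in zip(boxes, pts):
            if a < x < b:
                # Breakpoints: the box endpoints (where psi* must agree
                # with psi) together with the prescribed interior points.
                knots = sorted([(a, psi(a)), (b, psi(b))] + list(pairs))
                for (p0, q0), (p1, q1) in zip(knots, knots[1:]):
                    if p0 <= x <= p1:
                        t = (x - p0) / (p1 - p0)
                        return q0 + t * (q1 - q0)
        return psi(x)  # unchanged outside the union of the boxes
    return psi_star

psi = lambda x: x  # start from the identity homeomorphism of [0, 1]
f = glue(psi, [(0.1, 0.4), (0.6, 0.9)],
         [[(0.2, 0.15), (0.3, 0.35)], [(0.7, 0.8)]])
```

For $m \geq 2$, as in the lemmas, no ordering constraint is needed: the radial construction of the case $k=1$ moves any interior point to any other.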
\begin{proof} {\em of Lemma \ref{LemmaConstruccionModelPsifisPhi}}
We divide the construction of $\psi$ and $\Phi \in {\mathcal H}$ into several steps:
\noindent {\bf Step 1. Construction of the atom of generation 0. }
Since $f(D^m) \subset \mbox{int}(D^m)$, there exists a box $A_0 \subset \mbox{int}(D^m)$ such that $f(D^m) \subset \mbox{int}(A_0)$.
The box $A_0$ is the {\em atom of generation 0 } for the relative homeomorphism $\Phi_0 := f$ which satisfies $\Phi_0= \psi_0 \circ f$ where $\psi_0$ is the identity map.
Applying the Brouwer Fixed Point Theorem, there exists a point
$ e_0 \in \mbox{int} (\Phi_0(A_0)) $
such that $\Phi_0(e_0)= e_0$.
Define $S(A_0,A_0)$ to be the connected component of $A_0 \cap \Phi_0(A_0)$ containing $e_0$.
\noindent {\bf Step 2. Construction of the atoms of generation {\em n+1}}.
Inductively assume that we have constructed families
${\mathcal A}_0, {\mathcal A}_1, $ $ \ldots, {\mathcal A}_{n}$ of atoms up to generation $n$ for $\Phi_n = \psi_n \circ f$, where $\psi_n\in \mbox{Hom}(D^m)$ is a homeomorphism such that $\psi_n| _{\partial D^m}$ is the identity map, satisfying:
\begin{enumerate}[label=\Roman*)]
\item $\label{eqn19-n} \max_{B \in {\mathcal A}_i} \max\{\mbox{diam} (B), \mbox{diam}(f(B))\}< \frac{1}{2^i} \ \ \forall i \in \
\{0,1,\ldots n\};$
\item $\Phi_i(x) = \Phi_{i-1}(x), \qquad \forall \ x \in D^m \setminus \bigcup_{B \in {\mathcal A}_{i-1}} B, \ \forall i\in \{1,\ldots, n\};$
\item for all $(D,B,C) \in {\mathcal A}_n ^{3*}$ there exists a point $e (D,B,C) $ such that
$$ L_n:= \{e(D,B,C) \colon (D,B,C) \in {\mathcal A}_n^{3*}\}$$ is $\Phi_n$-invariant, and
\begin{equation} \label{eqn34} e(D,B,C) \in \mbox{int}\big (S(D,B) \cap \Phi_n^{-1}(S(B,C)) \big),\end{equation} where $S(D,B)$ and $S(B,C)$ are previously defined connected components of $B \cap \Phi_n(D)$ and of $C \cap \Phi_n(B)$ respectively;
\item
$\label{eqeqeq} \text{the sets } S(D,B) \text{ and } S(D',B') \text{ are disjoint if } (D,B) \ne (D',B').$
\end{enumerate}
Let us construct the family ${\mathcal A}_{n+1}$ of atoms of generation $n+1$ and the homeomorphisms $\Phi_{n+1}$ and $\psi_{n+1}$. First, for each $(D,B) \in ({\mathcal A}_n)^{2^*}$ we choose a box $R(D,B)$ such that
\begin{equation}
\label{eqn31-n}e(D,B,C) \in \mbox{int}(R(D,B)) \subset \mbox{int}\big (S (D,B) \big) \end{equation} $$\forall \ C \in {\mathcal A}_n \mbox{ such that } B {\stackrel{\Phi_n}{\rightarrow}} C.$$
By \eqref{eqeqeq} such boxes $R(\cdot, \cdot)$ are pairwise disjoint.
Recall that $e(D,B,C) \in L_n$, the set $L_n$ is $\Phi_n$-invariant and moreover
$e(D,B, C) \in \mbox{int}(\Phi_n^{-1}(S(B,C))).$ Thus
$$e(D,B,C) \in \mbox{int} \Big(R(D,B) \cap \Phi_n^{-1}(R(B,C))\Big) \neq \emptyset$$
because
$$\Phi_n(e(D,B,C)) = e(B,C, \cdot) \in \mbox{int}(R(B,C)).$$
Next, for each $(D,B,C) \in {\mathcal A}_n^{3*}$ we choose two pairwise disjoint boxes, $G_0(D,B,C)$ and $ G_1(D,B,C)$,
contained in the interior of $ R(D,B) \cap \Phi_n^{-1}(R(B,C)) $, satisfying
\begin{equation} \label{eqn26a}\max\{\mbox{diam}(G_i(D,B,C)) , \mbox{diam}(f(G_i(D,B,C))) \} < \frac{1}{2^{n+1}}\end{equation} for $i= 0,1$.
Now, we use Lemma \ref{LemmaConstruction(n+1)atoms} to define
the family ${\mathcal A}_{n+1}$ of all the boxes $G_i(D,B,C)$ and use its properties. The boxes of the family ${\mathcal A}_{n+1}$ will be the $(n+1)$-atoms of two new homeomorphisms $\widetilde \Phi_{n+1}$ and $\Phi_{n+1}$ that we will construct as follows.
First, in the interior of each box $E \in {\mathcal A}_{n+1}$ we choose $2^{n+1}$ distinct points $\widetilde e_i(E), i= 1, 2 \ldots, 2^{n+1}$, and denote $$\widetilde L_{n+1} := \{\widetilde e_i(E)\colon E \in {\mathcal A}_{n+1}, \ 1 \leq i \leq 2^{n+1}\}.$$ Second, we build a permutation $ \widetilde \theta $ of $\widetilde L_{n+1}$ satisfying the properties of Lemma \ref{LemmaPermutation}.
Third, applying Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos2}, we construct $\widetilde \psi_{n+1} \in Hom(D^m) $ satisfying the following constraints.
\noindent{(a)} $$\widetilde \psi_{n+1}|_{\psi_n^{-1}(R(B,C))}: \psi_n^{-1}(R(B,C)) \rightarrow R(B,C) \ \ \forall (B,C) \in {\mathcal A}_n^{2^*},$$
\noindent{(b)} $$ \widetilde \psi_{n+1} (x) = \psi_n(x) , $$ $$\forall x \not \in \bigcup_{(B,C)\in {\mathcal A}_n^{2*}} \psi_n^{-1} ( R(B,C)) = \bigcup_{(B,C)\in {\mathcal A}_n^{2*}} f \circ \Phi_n^{-1}( R(B,C)), $$
\noindent{(c)} $$ \widetilde \psi_{n+1}(f(\widetilde e)) = \widetilde \theta(\widetilde e), \qquad \forall \ \widetilde e \in \widetilde L_{n+1}.$$
To prove the existence of such a homeomorphism $\widetilde \psi_{n+1}$ we must verify the hypotheses of Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos2}.
On the one hand, the boxes $R(B, C)$, where $(B,C) \in {\mathcal A}_n^{2*}$, are pairwise disjoint, so their preimages by
the homeomorphism $\psi_n$ are also pairwise disjoint. On the other hand,
the finite set $$\{f(\widetilde e): \ \ \ \widetilde e \in \widetilde L_{n+1}\, \cap \, \mbox{int}(\Phi_n^{-1}(R(B,C)))\}$$ is contained in the interior of $f \circ \Phi_n^{-1}(R(B,C)) =\psi_n^{-1} (R(B,C)).$
Besides, it coincides with $$ \{f(\widetilde e_i(E)) \colon E \in \Gamma_{n+1}(D, B, C) \mbox{ for some } D \in {\mathcal A}_n, \ i = 1, \ldots, 2^{n+1}\} $$ (recall that $\widetilde e \in E \cap B \cap \Phi_n^{-1}(C)$ and apply Lemma \ref{LemmaConstruction(n+1)atoms}-h)). So, its image by the permutation $\widetilde \theta $
is the finite set $$\{\widetilde \theta(\widetilde e_i(G))\colon G \in \Gamma_{n+1}(D, B, C) \ \ \mbox{ for some } D \in {\mathcal A}_n, \ i=1,2, \ldots, 2^{n+1}\}.$$
Applying Lemma \ref{LemmaPermutation}-c), the latter set is
$$ \{\widetilde e_k(F) \colon F \in \Omega_{n+1}(B,C), \ k = 1,2, \ldots, 2^{n+1} \} = \widetilde L_{n+1} \cap R(B,C), $$ which is contained in the interior of $R(B,C)$. The hypothesis of Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos2} is satisfied.
We
construct $$\widetilde \Phi_{n+1} :=\widetilde \psi_{n+1} \circ f.$$
Since $\widetilde \Phi_{n+1}(x) = \Phi_n (x), \ \forall x \not \in \cup_{A \in {\mathcal A}_n} A,$ the same atoms up to generation $n$ for $\Phi_n$ are still atoms up to generation $n$ for $\widetilde \Phi_{n+1}$.
But moreover, applying Lemma \ref{LemmaSON(n+1)-atomos}-a), the boxes of the new family ${\mathcal A}_{n+1}$ are now $(n + 1)$-atoms for $\widetilde \Phi_{n+1}$.
\noindent {\bf Step 3. Construction of $\Phi_{n+1}$ and $\psi_{n+1}$.}
To argue by induction, we will not use the homeomorphisms $\widetilde \Phi_{n+1}$ and $\widetilde \psi_{n+1}$, rather we
need to modify them to obtain new homeomorphisms $\Phi_{n+1}$ and $\psi_{n+1}$ such that Assertion (\ref{eqn34}) also holds for
$n+1$ instead of $n$.
Precisely, modifying $\widetilde \psi_{n+1}$ only in the interiors of the boxes $f(G)$ for all the atoms $G \in {\mathcal A}_{n+1}$, we
construct a new homeomorphism $\psi_{n+1}$ such that $\Phi_{n+1} := \psi_{n+1} \circ f$ has the same atoms up to generation $n+1$
as $\widetilde \Phi_{n+1}$.
From the above construction of $\widetilde \psi_{n+1}$ and $\widetilde \Phi_{n+1}$, and from Lemma \ref{LemmaSON(n+1)-atomos}-b), we know that for each $(G,E) \in {\mathcal A}_{n+1}^{2*}$ there exists a unique point $\widetilde e_i(G) \in \mbox{int}(G) $, and a unique point $\widetilde e_k(E)$, such that $$\widetilde \Phi_{n+1}(\widetilde e_i(G)) = \widetilde \psi_{n+1} \circ f (\widetilde e_i(G)) = \widetilde e_k(E) \in \mbox{int}(E).$$ Therefore
$$ \widetilde e_k(E) \in \mbox{int} ( E \cap \widetilde \Phi _{n+1}(G) ).$$
\noindent Denote by $$S(G,E)$$ the connected component of $ E\, \cap \, \widetilde \Phi_{n+1}(G) $ that contains the point $\widetilde e_k(E) $.
Choose $2^{n+1}$ distinct points $$e_i(G, E) \in \mbox{int}(S(G,E)), \ \ i= 1, \ldots, 2^{n+1} $$
and a permutation $\theta $ of the finite set \begin{equation}\label{eqnLn+1} L_{n+1}:= \{e_i(G,E): \ \ (G,E) \in {\mathcal A}_{n+1} ^{2*}, \ \ i= 1, \ldots, 2^{n+1}\}\end{equation} such that
for each fixed $(G,E,F) \in ({\mathcal A}_{n+1})^{3*}$ there exists a unique point $e_i(G,E)$, and a unique point $e_k(E,F)$, satisfying $$\theta (e_i(G,E)) = e_k(E,F).$$
(The proof of the existence of such a permutation is similar to the proof of Lemma \ref{LemmaPermutation}.)
Applying Lemma \ref{LemmaHomeosEspecificandoFinitosPuntos2}, construct a homeomorphism $$ \psi_{n+1}\in Hom(D^m)$$
such that $$\psi_{n+1}|_{f(G)}: f(G)\rightarrow \widetilde \psi_{n+1}(f(G)) = \widetilde \Phi_{n+1}(G) \ \ \forall \ G \in {\mathcal A}_{n+1},$$
$$ \psi_{n+1}(x) = \widetilde \psi_{n+1}(x) \ \ \forall \ x \not \in \bigcup_{G \in {\mathcal A}_{n+1}} f(G), $$
and
$$ \psi_{n+1} (f(e_i(G,E))) = \theta (e_i(G,E)) $$ $$ \forall \ (G, E)\in {\mathcal A}_{n+1}^2 \mbox{ such that } G \stackrel{\widetilde \Phi_{n+1}}{\rightarrow} E, \qquad \forall \ i= 1, \ldots, 2^{n+1}. $$
In particular $$ \psi_{n+1}|_{\partial D^m} = \widetilde \psi_{n+1}|_{\partial D^m} = \mbox{id}|_{\partial D^m}.$$
Define$$ \Phi_{n+1} := \psi_{n+1} \circ f.$$ As said above, the property that $\Phi_{n+1} $ coincides with $ \widetilde \Phi_{n+1}$ outside all the atoms of ${\mathcal A}_{n+1}$ implies that the boxes of the families ${\mathcal A}_0, \ldots, {\mathcal A}_{n+1}$ are also atoms up to generation $n+1$ for $\Phi_{n+1}$. But now, they have the following additional property:
there exists a one-to-one correspondence between the 3-tuples
$(G, E, F) \in ({\mathcal A}_{n+1})^{3*}$ and the points of the set $L_{n+1}$ of Equality (\ref{eqnLn+1}), such that
\begin{equation} \label{eqn30z}e(G,E,F) := e_i(G, E) \in \mbox{int}\big (S(G,E) \cap \Phi_{n+1}^{-1}(S(E,F)) \big), \end{equation}
where $S(G,E)$ and $S(E,F)$ are the previously chosen connected components of $E \cap \widetilde \Phi_{n+1}(G)$ and of
$F \cap \widetilde \Phi_{n+1}(E)$ respectively. Besides, by construction the finite set $L_{n+1}$ is $\Phi_{n+1}$-invariant. In fact $\Phi_{n+1}(L_{n+1}) = \psi_{n+1} (f(L_{n+1})) = L_{n+1}$.
Therefore, Assertion (\ref{eqn34}) holds for $n+1$ and the inductive construction is complete.
\noindent{\bf Step 4. The limit homeomorphisms. }
From the above construction we have:
$$\psi_{n+1}(x) = \widetilde \psi_{n+1}(x) = \psi_n(x) \mbox{ if } x \not \in \bigcup_{ B,C } \psi_n^{-1} ( R(B,C)) \subset \bigcup_{B} f(B) $$
$$\psi_{n+1}\circ \psi_n^{-1}(R(B,C)) = \widetilde \psi_{n+1} \circ \psi_n^{-1} (R(B,C)) =$$ $$ \psi_n \circ \psi_n^{-1}( R(B,C)) = R(B,C) \subset C.$$
Therefore,
$$\mbox{dist}(\psi^{-1}_{n+1}(x), \psi^{-1}_n(x)) \leq \max_{B \in {\mathcal A}_n} \mbox{diam}(f(B)) < \frac{1}{2^{n}}, \qquad \forall \ x \in D^m; $$
$$\mbox{dist}(\psi_{n+1}(x), \psi_n(x)) \leq \max_{C \in {\mathcal A}_n} \mbox{diam}(C) < \frac{1}{2^n}, \qquad \forall \ x \in D^m, $$
\begin{equation} \label{eqn27a} \| \psi_{n+1} - \psi_n\|_{\mbox{Hom}} < \frac{1}{2^n}.\end{equation}
From Inequality (\ref{eqn27a}) we deduce that the sequence $\psi_n $ is Cauchy in $Hom(D^m)$. Therefore, it converges to a homeomorphism $\psi $. Moreover, by construction $\psi_n|_{\partial D^m} = \mbox{id}|_{\partial D^m}$ for all $n \geq 1$. Then $\psi |_{\partial D^m} = \mbox{id}|_{\partial D^m}.$
The convergence of $\psi_n$ to $\psi$ in $Hom(D^m)$ implies that
$ \Phi_n = \psi_n \circ f \in \mbox{RHom}(D^m) $ converges to $\Phi = \psi \circ f \in \mbox{RHom}(D^m)$ as $n \rightarrow + \infty$. Since $f(D^m) \subset \mbox{int}(D^m)$ and $\psi \in \mbox{Hom}(D^m)$, we deduce that $\Phi(D^m) \subset \mbox{int}(D^m).$
Moreover, by construction ${\mathcal A}_0, {\mathcal A}_1, \ldots, {\mathcal A}_n$ are families of atoms up to generation $n$ for $\Phi_n$, and $\Phi_{m}(x)= \Phi_n (x) $ for all $ x \in D^m \setminus \bigcup_{B \in {\mathcal A}_n} B$ and for all $ m \geq n .$ Since $\lim_m \Phi_m = \Phi$, the boxes of the family $ {\mathcal A}_n $ are $n$-atoms for $\Phi$ for all $n \geq 0$.
Finally, from Inequality (\ref{eqn19-n}), the diameters of the $n$-atoms converge uniformly to zero as $n \rightarrow + \infty$. Thus $\Phi $ is a model according to Definition \ref{DefinitionModel}.
\end{proof}
\section{Infinite metric entropy and mixing property of the models.} \label{SectionMainLemma}
The purpose of this section is to prove the following Lemma.
\begin{lemma} {\bf (Main Lemma)}
\label{LemmaMain}
Let ${\mathcal H} \subset C^0(D^m)$ be the family of models with $m \ge 2$ (Definition \em \ref{DefinitionModel}). \em
Then, for each $\Phi \in {\mathcal H}$ there exists a $\Phi$-invariant mixing (hence ergodic) measure $\nu$ supported on a $\Phi$-invariant Cantor set $\Lambda \subset D^m $ such that
$ h_{\nu}(\Phi) = + \infty.$
\end{lemma}
\begin{remark} \em
\label{RemarkMainLemma} Lemma \ref{LemmaMain} holds, in particular, for ${\mathcal H} \cap \mbox{RHom}(D^m)$.
\end{remark}
To prove Lemma \ref{LemmaMain} we need to define the paths of atoms and to discuss their properties. We also need to define the invariant Cantor set $\Lambda$ that will support the measure $\nu$ and prove some of its topological dynamical properties.
\begin{definition} {\bf (Paths of atoms)}
\label{definitionPathOfAtoms} \em
Let $\Phi \in {\mathcal H} \subset C^0(D^m)$, $l \geq 2$ and let $(A_1, A_2, \ldots, A_l)$ be an $l$-tuple of atoms for $\Phi$ of the same generation $n$, such that
$$A_i \stackrel{\Phi}{\rightarrow} A_{i+1}, \qquad \forall \ i \in \{1,2, \ldots, l-1\}.$$
We call $(A_1, A_2, \ldots, A_l)$ an \em $l$-path of $n$-atoms from $A_1$ to $A_l$. \em
Let ${\mathcal A}_n ^{l*} $ denote the family of all the $l$-paths of atoms of generation~$n$.
\end{definition}
\begin{lemma}
\label{lemmaPathsOfAtoms}
For all $n \geq 1$, for all $l \geq 2 n$, and for all $ A_1,A_2 \in {\mathcal A}_n$ there exists an $l$-path of $n$-atoms from $A_1$ to $A_2$.
\end{lemma}
\begin{proof}
For $n= 1$, the result is trivial for all $l\geq 2$ (see Definition \ref{definitionAtomsOfGeneration0-1}).
Let us assume by induction that the result holds for some $n-1 \geq 1$ and let us prove it for $n$.
Let $E, F \in {\mathcal A}_{n}$. From equality (\ref{eqn99}) of Definition \ref{definitionAtomsGeneration-n}, there exist unique atoms $B_{-1}, B_0, B_1 \in {\mathcal A}_{n-1} $ such that $E \in \Gamma_n(B_{-1}, B_0, B_1).$ Then
$B_{-1} \stackrel{\Phi}{\rightarrow} B_0, $ $ E \subset B_0 $ and, by condition d) of Definition \ref{definitionAtomsGeneration-n}:
\begin{equation} \label{eqn20}E \stackrel{\Phi}{\rightarrow} E_1, \qquad \forall \ E_1 \in \Omega_n(B_0, B_1).\end{equation}
Analogously, there exist unique atoms $B_{*}, B_{*+1} \in {\mathcal A}_{n-1} $ such that $F \in \Omega_n(B_*, B_{*+1}).$ Then $B_* \stackrel{\Phi}{\rightarrow} B_{*+1}, $ $ F \subset B_{*+1} $ and \begin{equation} \label{eqn21}E_{*} \stackrel{\Phi}{\rightarrow} F, \qquad \forall \ E_* \in \bigcup_{\substack {B_{*-1} \in {\mathcal A}_{n-1}: \\ B_{*-1} \stackrel{\Phi}{\rightarrow} B_*}} \Gamma_n(B_{*-1}, B_*, B_{*+1})\end{equation}
Since $B_1, B_{*} \in {\mathcal A}_{n-1}$, the induction hypothesis ensures that for all $l \geq 2n-2 $ there exists an $l$-path $(B_1, B_2, \ldots, B_{l})$ from $B_1$ to $B_l= B_*$. We write $B_{*-1} = B_{l-1}$, $B_{*} = B_l, \ B_{* + 1} = B_{l+1}$.
So Assertion (\ref{eqn21}) becomes
\begin{equation} \label{eqn21b}E_{l} \stackrel{\Phi}{\rightarrow} F, \qquad \forall \ E_l \in \Gamma_n(B_{l-1}, B_l, B_{l+1})\end{equation}
Taking into account that $B_{i-1} \ \stackrel{\Phi}{\rightarrow} B_i$ for $1 <i \leq l$, and applying condition d) of Definition \ref{definitionAtomsGeneration-n}, we deduce that, if $ E_{i-1} \in \Gamma_n( B_{i-2} , B_{i-1}, B_{i} ) \subset {\mathcal A}_n, $ then
\begin{equation}
\label{eqn23}E_{i-1} \stackrel{\Phi}{\rightarrow} E_{i}, \qquad \forall \ E_i \in \Omega_n(B_{i-1}, B_{i}), \qquad \forall \ 1 <i \leq l.\end{equation}
Combining (\ref{eqn20}), (\ref{eqn21b}) and (\ref{eqn23}) yields an $(l+2)$-path $(E, E_1, \ldots, E_{l}, F) $ of atoms of generation $n$ from $E$ to $F$, as required.
\end{proof}
\begin{lemma}
{\label{lemmaPathsOfAtoms2}}
Let $n, l \geq 2$. For each $l$-path $(B_1, \ldots, B_{l})$ of $(n-1)$-atoms
there exists an $l$-path $(E_1, E_2, \ldots, E_{l})$ of $n$-atoms such that
$E_i \subset \mbox{int}(B_i) $ for all $i= 1, 2, \ldots, l.$
\end{lemma}
\begin{proof}
In the proof of Lemma \ref{lemmaPathsOfAtoms}, for each $l$-path $(B_1, B_2, \ldots, B_l)$ of $(n-1)$-atoms we have constructed an $l$-path $(E_1, E_2, \ldots, E_l)$ of $n$-atoms as required.
\end{proof}
\begin{definition}{\bf (The $\mathbf \Lambda$-set)} \em \label{definitionLambdaSet}
Let $\Phi \in {\mathcal H} \subset C^0(D^m)$ be a model map. Let
${\mathcal A}_0, {\mathcal A}_1, \ldots , {\mathcal A}_n, \ldots$ be its sequence
of families of atoms. The subset
$$\Lambda := \bigcap_{n \geq 0} \bigcup_{A \in {\mathcal A}_n} A$$
of $\mbox{int}(D^m)$ is called \em the $\Lambda$-set \em of the map $\Phi$.
\end{definition}
From Definitions \ref{definitionAtomsOfGeneration0-1} and \ref{definitionAtomsGeneration-n}, we know that, for each fixed $n \geq 0$, the set
$\Lambda_n := \bigcup_{A \in {\mathcal A}_n} A,$
is nonempty, compact, and $\mbox{int}(\Lambda_n) \supset \Lambda_{n+1}.$ Therefore, $\Lambda$ is also non\-empty and compact. Moreover, $\Lambda_n$ is composed of a finite number of connected components $A \in {\mathcal A}_n$, which by Definition \ref{DefinitionModel}, satisfy
$\lim_{n \rightarrow + \infty} \max_{A \in {\mathcal A}_n} \mbox{diam}A = 0.$
Since $\Lambda := \bigcap_{n \geq 0}\Lambda_n,$
we deduce that the $\Lambda$-set is \em a Cantor set \em contained in $\mbox{int}(D^m)$.
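A one-dimensional caricature of this nesting may help fix the picture. In the sketch below (our construction, for illustration only, not the paper's atoms), every closed interval spawns two closed sub-intervals contained in its interior; generation $n$ then consists of $2^n$ pairwise disjoint components whose diameters shrink like $0.2^n$, so the intersection over all $n$ is a Cantor set, exactly as for $\Lambda$.

```python
def atoms(n):
    """Generation-n boxes of a 1-D caricature of the nested construction:
    each closed interval [a, b] spawns two closed sub-intervals contained
    in its interior, at relative positions [0.1, 0.3] and [0.7, 0.9] of
    its length, so every diameter shrinks by the factor 0.2."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        intervals = [child
                     for a, b in intervals
                     for child in ((a + 0.1 * (b - a), a + 0.3 * (b - a)),
                                   (a + 0.7 * (b - a), a + 0.9 * (b - a)))]
    return intervals

gen = atoms(5)
print(len(gen))                    # 2**5 = 32 pairwise disjoint components
print(max(b - a for a, b in gen))  # every diameter equals 0.2**5
```

The intersection $\bigcap_n \bigcup_{I \in \mbox{atoms}(n)} I$ is compact, totally disconnected and without isolated points, mirroring the argument given above for $\Lambda$.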
\begin{lemma}
\label{lemmaAtomCapLambda}
Let $n,l \geq 1$ and $A_1, A_2 \in {\mathcal A}_n$. If there exists an $(l+1)$-path from $A_1$ to $A_2$, then $\Phi^l(A_1 \cap \Lambda) \cap (A_2 \cap \Lambda) \neq \emptyset$.
\end{lemma}
\begin{proof}
Assume that there exists an $(l+1)$-path from $A_1$ to $A_2$. So, from Lemma
\ref{lemmaPathsOfAtoms2}, for all $j \geq n$ there exist atoms $B_{j,1}, B_{j,2} \in {\mathcal A}_j$ and an $(l+1)$-path from $B_{j,1}$ to $B_{j,2}$ (with constant length $l +1$) such that
$$B_{n, i}= A_i, \ \ \ B_{j+1, i} \subset B_{j,i}, \qquad \forall \ j \geq n, \ \ \ i= 1,2.$$
Construct the following two points $x_1$ and $x_2$:
$$\{x_i\}= \bigcap_{j \ge n} B_{j,i}, \ \ \ \ i= 1,2.$$
By Definition \ref{definitionLambdaSet}, $x_i \in A_i \cap \Lambda.$
So, to finish the proof of Lemma \ref{lemmaAtomCapLambda} it is enough to prove that $\Phi^l(x_1) = x_2$.
Recall that $l $ is fixed.
Since $\Phi$ is uniformly continuous, for any $\varepsilon >0$ there exists $\delta > 0$ such that if
$(y_0, y_1, \ldots, y_{l}) \in (D^m)^{l+1} $ satisfies $d(\Phi(y_i), y_{i+1}) < \delta$ for $0 \le i \le l-1$, then the points $y_0$ and $y_l$ satisfy
$d(\Phi^l(y_0), y_l) < \varepsilon$.
We choose $\delta$ small enough such that, additionally, $d(\Phi^l(x), \Phi^l(y)) < \varepsilon$ if
$d(x,y) < \delta$.
From (\ref{eqnLimitDiamAtoms=0}), there exists $j \geq n$ such that
$\mbox{diam}(B_{j,i}) < \delta$ for $i = 1,2$.
Since there exists an $(l+1)$-path $(C_0, C_1, \ldots, C_l)$ of $j$-atoms with $C_0 = B_{j,1}$ and $C_l = B_{j,2}$, we can choose $y_0 \in C_0$ and, inductively, $y_{i+1} \in \Phi(C_i) \cap C_{i+1} \neq \emptyset$; taking $j$ large enough that also $\mbox{diam}(\Phi(C_i)) < \delta$ for all $i$, the tuple $(y_0, \ldots, y_l)$ is as in the previous paragraph, with
$y_0 \in B_{j,1}$ and $ y_l \in B_{j,2}.$
Thus
$$d(\Phi^l(x_1), x_2) \leq d(\Phi^l(x_1), \Phi^l(y_0)) + d(\Phi^l(y_0), y_l) + d(y_l, x_2) $$ $$<\mbox{diam}(\Phi^l(B_{j,1})) + \varepsilon + \mbox{diam}(B_{j,2}) < 3 \varepsilon.$$
Since $\varepsilon>0$ is arbitrary, we obtain
$\Phi^l(x_1) = x_2$, as required.
\end{proof}
\begin{lemma}{\bf (Topological dynamical properties of $\mathbf \Lambda$)}
\label{lemmaLambda}
\begin{enumerate}[label=\alph*)]
\item The $\Lambda$-set of a model map $\Phi \in {\mathcal H}$ is $\Phi$-invariant, i.e., $\Phi(\Lambda) = \Lambda$.
\item The map $\Phi$ restricted to the $\Lambda$-set is topologically mixing.
\item In particular,
$\Phi^l(A_1 \cap \Lambda) \cap (A_2 \cap \Lambda) \neq \emptyset,$
for all $n \geq 1$, for any two atoms $A_1, A_2 \in {\mathcal A}_n$ and for all $l \geq 2n -1$.
\end{enumerate}
\end{lemma}
\begin{proof}
{a) } Let $x \in \Lambda$ and let $\{A_n(x)\}_{n \geq 0}$ be the unique sequence of atoms such that
$x \in A_n(x)$ and $ A_n(x) \in {\mathcal A}_n$ for all $ n \geq 0. $ Then, $ \Phi(x) \in \Phi(A_n(x))$ for all $ n \geq 0.$
From Definition \ref{definitionAtomsGeneration-n}, for all $n \geq 0$ there exists an atom $B_n \in {\mathcal{A}_n}$ such that $A_n(x) \stackrel{\Phi}{\rightarrow} B_n.$ Therefore $ \Phi(A_n(x)) \cap B_n \neq \emptyset$. Letting $d$ denote the distance from a point to a subset of $D^m$,
we deduce
$$d(\Phi(x), B_n) \leq \mbox{diam}\big (\Phi(A_n(x))\big) + \mbox{diam}\big(B_n\big) .$$
Moreover, Equality (\ref{eqnLimitDiamAtoms=0}) and the continuity of $\Phi$ imply
$$\lim_{n \rightarrow + \infty} \max\big \{\mbox{diam}\big(\Phi(A_n(x))\big), \mbox{diam}\big (B_n\big )\big \} = 0.$$
Then, for all $\varepsilon >0$ there exists $n_0 \geq 0$
such that, for all $n \geq n_0$,
$d(\Phi(x), B_n) < \varepsilon $ for some atom $B_n \in {\mathcal A}_n$.
Since any atom of any generation intersects $\Lambda$, we deduce that
$ d(\Phi(x), \Lambda) < \varepsilon$ for each $\varepsilon >0$. Since $\Lambda$ is compact, this implies
$\Phi(x) \in \Lambda$. We have proved that
$\Phi(\Lambda) \subset \Lambda.$
Now, let us prove the other inclusion.
Let $y \in \Lambda $ and let $\{B_n(y)\}_{n \geq 0}$ be the unique sequence of atoms such that
$y \in B_n(y)$ and $ B_n(y) \in {\mathcal A}_n$ for all $n \geq 0. $
From Definition \ref{definitionAtomsGeneration-n}, for all $n \geq 0$ there exists an atom $A_n \in {\mathcal{A}_n}$ such that $A_n \stackrel{\Phi}{\rightarrow} B_n(y).$ Therefore $\Phi(A_n) \cap B_n(y) \neq \emptyset.$
We deduce that, for all $n \geq 0$, there exists a point $x_n \in A_n \in {\mathcal A}_n$ such that $\Phi(x_n) \in B_n(y)$. Since any atom $A_n$ contains points of $\Lambda$, we obtain
$$d(x_n,\Lambda) \leq \mbox{diam}(A_n) \hbox{ and }
d(\Phi(x_n), y) \leq \mbox{ diam} (B_n(y)), \qquad \forall \ n \geq 0.$$
Let $x$ be the limit of a convergent subsequence of $\{x_n\}_{n \geq 0}$. Applying Equality (\ref{eqnLimitDiamAtoms=0}) and the continuity of $\Phi$, we deduce that
$d(x,\Lambda) = 0$ and $d(\Phi(x), \, y) = 0$.
This means that $y =\Phi(x) $ and $x \in \Lambda$. We have proved that $y \in \Phi(\Lambda)$ for all $y \in \Lambda$; namely $\Lambda = \Phi(\Lambda)$, as required.
c) We will prove a stronger assertion: for any two atoms, even of different generations, there exists $l_0 \geq 1$ such that
\begin{equation}\label{next} \Phi^l(A_1 \cap \Lambda) \cap (A_2 \cap \Lambda) \neq \emptyset \ \ \forall \ l \geq l_0.\end{equation}
It is not restrictive to assume that $A_1$ and $A_2$ are atoms of the same generation $n_0$ (if not, take $n_0$ equal to the larger of the two generations and substitute $A_i$ by an atom of generation $n_0$ contained in $A_i$).
Applying Lemma \ref{lemmaPathsOfAtoms}, for all $l \geq 2n_0-1$ there exists an $(l+1)$-path from $A_1$ to $A_2$. So, from Lemma \ref{lemmaAtomCapLambda}, $\Phi^l(A_1 \cap \Lambda) \cap (A_2 \cap \Lambda) \neq \emptyset$ for all $l \geq l_0 := 2n_0 - 1$, as required.
b) The intersection of $\Lambda$ with the atoms of all the generations generates its topology; thus Equation \eqref{next}
implies that $\Phi|_{\Lambda}$ is topologically mixing.
\end{proof}
For fixed $ (A_0, A_l) \in {\mathcal A}_n^{2}$ we set
$${\mathcal A}_n^{l+1\,*}(A_0, A_l):=\{(A_0, A_1, \ldots, A_{l-1}, A_l) \in {\mathcal A}_n^{l+1\,*}\}.$$
\begin{lemma}
\label{LemmaCardinalA_n^(l+1)}
Let $l,n \geq 1$. Then
\begin{enumerate}[label=\alph*)]
\item $ \label{eqn51}\#{\mathcal A}_n^{l+1\,*} = 2^{nl} \cdot (\#{\mathcal A}_n).$
\item
$\#{\mathcal A}_n^{l+1\,*}(A_0, A_l) = \frac{2^{nl}}{\#{\mathcal A}_n} \ \ \forall \ (A_0, A_l) \in {\mathcal A}^2_n$,
for all $l \geq 2 n-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
a) Each $(l+1)$-path $(A_0, A_1, \ldots, A_l) $ of $n$-atoms is determined by a free choice of the final atom $A_l \in {\mathcal A}_{n}$, followed, for $j = l, l-1, \ldots, 1$, by the choice of an atom $A_{j-1} \in {\mathcal A}_n$ such that $A_{j-1} \stackrel{\Phi}{\rightarrow} A_{j}$. From equality (\ref{eqn101}) of Definition \ref{definitionAtomsGeneration-n}, we know that for any fixed $A \in {\mathcal A}_n$ the number of atoms $B \in {\mathcal A}_n$ such that $B \stackrel{\Phi}{\rightarrow} A$ is $2^n$. This implies assertion a), as required.
b) We argue by induction on $n$. Fix $n=1$ and $l \geq 1$. Since any two atoms $A_j, A_{j+1} \in {\mathcal A}_1$ satisfy $A_j \stackrel{\Phi}{\rightarrow} A_{j+1}$, the number of $(l+1)$-paths $$(A_0, A_1, \dots,A_j, A_{j+1}, \ldots A_{l-1}, A_l)$$ of $1$-atoms with $(A_0, A_l)$ fixed equals $(\#{\mathcal A}_1)^{l-1} = 2^{l-1}= 2^{l}/2 = 2^{nl}/\#{\mathcal A}_n$ with $n=1$.
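The base case can be checked by brute force. The sketch below (the function name and the encoding of the transition relation are ours, for illustration only) enumerates the $(l+1)$-tuples with both endpoints fixed in an arbitrary digraph; for the complete digraph with loops on the two $1$-atoms it returns $2^{l-1}$, in agreement with the formula $2^{nl}/\#{\mathcal A}_n$ for $n=1$.

```python
from itertools import product

def count_paths(adj, l, a0, al):
    """Brute-force count of the (l+1)-paths (A_0, A_1, ..., A_l) with both
    endpoints fixed, in the digraph whose edge set adj contains the
    pairs (B, C) meaning B -> C."""
    nodes = {b for b, _ in adj} | {c for _, c in adj}
    total = 0
    for mid in product(sorted(nodes), repeat=l - 1):
        path = (a0,) + mid + (al,)
        if all((path[j], path[j + 1]) in adj for j in range(l)):
            total += 1
    return total

# n = 1: two 1-atoms, and every ordered pair (loops included) is an edge.
adj1 = {(b, c) for b in (0, 1) for c in (0, 1)}
print(count_paths(adj1, 5, 0, 1))  # 2**(5-1) = 16 paths
```

The same routine can be pointed at any finite transition relation, which makes it easy to experiment with the inductive step as well.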
Now, let us assume that assertion b) holds for some $n \geq 1$ and let us prove it for $n+1$.
Let $l \geq 2(n+1) -1 = 2n + 1 \geq 3$ and let $(B_0, B_l) \in {\mathcal A}_{n+1}^{2}$. From equality (\ref{eqn99}) and conditions a) and b) of Definition \ref{definitionAtomsGeneration-n}, there exist a unique $(A_{-1}, A_0, A_1) \in {\mathcal A}_n^{3*}$ and a unique $(A_{l-1}, A_l) \in {\mathcal A}_n^{2*}$ such that
$$B_0 \in \Gamma_{n+1}(A_{-1}, A_0, A_1), \ \ \ \ B_l \in \Omega_{n+1}(A_{l-1}, A_l).$$
As $(A_1, A_{l-1}) \in {\mathcal A}_n^2$ and $l-2 \geq 2n-1$, the induction hypothesis ensures that the number of $(l-1)$-paths $(A_1, A_2, \ldots, A_{l-1})$ from $A_1$ to $A_{l-1}$ is
\begin{equation} \label{eqn113}\#{\mathcal A}_n^{l-1\,*}(A_1, A_{l-1}) = \frac{2^{n(l-2)}}{\#{\mathcal A}_n} = \frac{2^{n(l-2)}}{2^{n^2}}= 2^{n l-2n-n^2}. \end{equation}
Let ${\mathcal C}(B_0,B_l)$ be the set
$$
\hspace{-2cm} \bigcup_{\hspace{2cm}(A_1, \ldots, A_{l-1}) \in {\mathcal A}_n^{l-1\,*}(A_1, A_{l-1})} \hspace{-2.5cm}
\big \{(B_0, B_1, \ldots, B_l)\in{\mathcal A}_{n+1}^{l+1} \colon
B_j \in \Gamma_{n+1} (A_{j-1}, A_j, A_{j+1}) \ \forall j \big \}.
$$
The families in this union are pairwise disjoint, because for $A \neq \widetilde A$ in ${\mathcal A}_n$ the families $\Gamma_{n+1}(\cdot,A,\cdot)$ and $\Gamma_{n+1}(\cdot,\widetilde A,\cdot)$ are disjoint.
We assert that
\begin{equation}
\label{eqn114ToBeProved}
{\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l) = {\mathcal C}(B_0,B_l).
\end{equation}
First, let us prove that
${\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l) \subset {\mathcal C}(B_0, B_l)$.
In fact, if $(B_0, B_1, \ldots, B_{l-1}, B_l) \in {\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l)$, then for each $j\in \{0,1, \ldots, l\}$ there exists a unique $A_j \in {\mathcal A}_n$ such that $B_j \subset A_j$. Since $B_{j-1} \stackrel{\Phi}{\rightarrow} B_j \stackrel{\Phi}{\rightarrow} B_{j+1}$ for all $j \in \{1, \ldots, l-1\}$, we deduce that $A_{j-1} \stackrel{\Phi}{\rightarrow} B_{j} \stackrel{\Phi}{\rightarrow} A_{j+1}$; hence, by definition of the families $\Gamma_{n+1}$ we have $$B_{j} \in \Gamma_{n+1}(A_{j-1}, A_j, A_{j+1}) \ \ \forall \ j \in \{1, \ldots, l-1\}.$$ Thus, $(B_0, B_1, \ldots, B_{l-1}, B_l) \in {\mathcal C}(B_0, B_l)$, as required.
Now, let us prove that
${\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l) \supset {\mathcal C}(B_0, B_l)$.
If $(B_0, B_1, \ldots, B_{l-1}, B_l) \in {\mathcal C}(B_0, B_l)$, then there exists an $(l-1)$-path $(A_1, \ldots, A_{l-1})$ of $n$-atoms from $A_1$ to $A_{l-1}$ such that $$B_j \in \Gamma_{n+1}(A_{j-1}, A_j, A_{j+1})\subset \Omega_{n+1}(A_{j-1}, A_j)$$ for all $j =1,2, \ldots, l-1$. By construction we have $$B_0 \in \Gamma_{n+1}(A_{-1}, A_0, A_1) \ \ \mbox{and} \ \ B_l \in \Omega_{n+1}(A_{l-1}, A_l).$$
Therefore, applying condition d) of Definition \ref{definitionAtomsGeneration-n}, we deduce
that $$B_j \stackrel{\Phi}{\rightarrow} B_{j+1} \ \ \forall \ j= 0, 1, \ldots, l-1.$$
In other words, $(B_0, B_1, \ldots, B_{l-1}, B_l) \in {\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l)$, ending the proof of Equality (\ref{eqn114ToBeProved}).
Now, applying (\ref{eqn113}) and (\ref{eqn114ToBeProved}), we obtain $\#{\mathcal A}_{n+1}^{l+1\,*}(B_0, B_l) =$
\begin{align*}
& \hspace{-2.3cm} \sum_{\hspace{2.3cm} (A_1, \ldots, A_{l-1}) \in {\mathcal A}_n^{l-1\,*}(A_1, A_{l-1})} \hspace{-2.9cm} \#\big \{(B_0, B_1, \ldots, B_l)\in{\mathcal A}_{n+1}^{l+1} \colon
B_j \in \Gamma_{n+1} (A_{j-1}, A_j, A_{j+1}) \ \forall j \big \}\\
& = \hspace{-2cm}
\sum_{\hspace{2cm} (A_1, \ldots, A_{l-1}) \in {\mathcal A}_n^{l-1\,*}(A_1, A_{l-1})}\prod_{j=1}^{l-1} \#\Gamma_{n+1} (A_{j-1}, A_j, A_{j+1}) \\
& = (\#{\mathcal A}_n^{l-1\,*}(A_1, A_{l-1})) \cdot 2^{l-1} = 2^{nl-2n-n^2+l-1}= 2^{(n+1)l-(n+1)^2} =
\frac{2^{(n+1)l}}{\#{\mathcal A}_{n+1}}
\end{align*}
as required.
\end{proof}
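As a sanity check (ours, not part of the construction), the base case $n=1$ of Lemma \ref{LemmaCardinalA_n^(l+1)} can be verified by brute force: $\#{\mathcal A}_1 = 2$ and, as used in the proof, every $1$-atom $\Phi$-covers every $1$-atom, so the transition graph of the $1$-atoms is the complete digraph with loops on two vertices. The function `path_counts` below is a toy model of the abstract counting, not an object of the paper.

```python
# Toy check of the path counts for n = 1: two 1-atoms, every 1-atom
# Phi-covers every 1-atom (complete digraph with loops).
def path_counts(adj, l):
    """counts[i][j] = number of (l+1)-paths (A_0, ..., A_l) with A_0 = i, A_l = j."""
    n = len(adj)
    # start from adj^0 (identity) and multiply l times to get adj^l
    counts = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(l):
        counts = [[sum(counts[i][k] * adj[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)]
    return counts

adj = [[1, 1], [1, 1]]  # transition graph of the two 1-atoms
for l in range(1, 8):
    c = path_counts(adj, l)
    total = sum(sum(row) for row in c)
    assert total == 2 ** l * 2       # part a): 2^{nl} * #A_n with n = 1
    assert c[0][1] == 2 ** l // 2    # part b): 2^{nl} / #A_n (here l >= 2n-1 = 1)
print("counts verified")
```

The dynamic programming above is just repeated multiplication by the adjacency matrix, so `counts` after `l` steps is the matrix of numbers of $(l+1)$-paths between each pair of atoms.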
\begin{lemma} {\bf (Intersection of $\mathbf \Lambda$ with $\mathbf l$-paths)}
\label{lemmaLambda2}
Fix $l,n \geq 1$. Then
\begin{enumerate}[label=\alph*)]
\item For any $G \in {\mathcal A}_{n+l}$, there exists a unique $(l+1)$-path $(A_0, A_1, \ldots, A_l)$ of $n$-atoms such that
$G \cap \Lambda \subset \bigcap_{j=0}^{l} \Phi^{-j}(A_j).$
\item For any atoms $G \in {\mathcal A}_{n+l}$, $A \in
{\mathcal A}_n$ and $j \in \{ 0,1, \ldots, l\}$:
$$ (G \cap \Lambda) \cap \Phi^{-j}(A) \neq \emptyset \ \ \Leftrightarrow \ \ G \cap \Lambda \subset \Phi^{-j}(A).$$
\item For any $(l+1)$-path $\vec{A}_n^l := (A_0, A_1, \ldots, A_l)$ of $n$-atoms,
\begin{equation}
\label{eqn39b}
\Lambda \cap \bigcap_{j=0}^l \Phi^{-j}(A_j) = \bigcup_{G \in {\mathcal F}_{n, l} (\vec{A}_n^l)} G \cap \Lambda, \end{equation}
where
${\mathcal F}_{n, l} (\vec{A}_n^l ) := \Big \{G \in {\mathcal A}_{n+l} \colon G \cap \Lambda \subset \bigcap_{j=0}^l \Phi^{-j}(A_j) \Big\}. $
\item For any atom $G \in {\mathcal A}_{n+l}$ and any path $\vec A_n^l \in {\mathcal A}_n^{l+1 \, *}$:\\
$ G\in {\mathcal F}_{n,l}(\vec A_n^l) $ if and only if there exists $(G_0, G_1, \ldots, G_l) \in {\mathcal A}_{n+l}^{l+1 \, *} $ such that $G_0= G$ and $ G_j \subset A_j $ for all $ j= 0, 1, \ldots, l.$
\item For any $(l+1)$-path $(A_0, A_1, \ldots, A_l)$ of $n$-atoms,
$$\displaystyle {\#{\mathcal F}_{n, l } (\vec{A}_n^l ) =\frac{1}{2^{nl}} \cdot \frac{\#{\mathcal A}_{n+l}}{\#{\mathcal A}_{n}}}.$$
\end{enumerate}
\end{lemma}
\begin{proof} a) From equalities (\ref{eqn99a}) and (\ref{eqn99}) of Definition \ref{definitionAtomsGeneration-n}, for any atom $G$ of generation $n+l$ there exist two unique atoms $B, C$ of generation $n+ l -1$ such that $B \stackrel{\Phi}{\rightarrow} C$, $ G \subset B$ and $G \stackrel{\Phi}{\rightarrow} E $ for all $ E \in \Omega_{n+l}(B,C) $. Moreover, from condition d) of Definition \ref{definitionAtomsGeneration-n}, we have
\begin{equation}\label{eqn40}
\Phi(G) \cap F \neq \emptyset \ \mbox{ if and only if } \ F \in \Omega_{n+l}(B,C), \qquad \forall \ F \in {\mathcal A}_{n+l}.\end{equation}
We claim that
\begin{equation}
\label{eqn41} \Phi(G \cap \Lambda) \subset \mbox{int}(C).\end{equation}
Since $\Lambda$ is $\Phi$-invariant, for any $x \in G \cap \Lambda$, we have $\Phi(x) \in \Phi(G) \cap \Lambda$. Therefore $\Phi(x)$ is in the interior of some atom $E(x)$ of generation $n+l$ (see Definition \ref{definitionLambdaSet}). From (\ref{eqn40}), $E(x) \in \Omega_{n+l}(B,C)$. Thus $E(x) \subset \mbox{int}(C)$ and $\Phi(x) \in \mbox{int}(C)$ for all $ x \in G \cap \Lambda $ proving (\ref{eqn41}).
So, there exists $C_1 \in {\mathcal A}_{n+l-1}$ such that $\Phi(G \cap \Lambda) \subset \mbox{int}(C_1) \cap \Lambda$. Applying the same assertion to $C_1$ instead of $G$, we deduce that there exists $C_2 \in {\mathcal A}_{n+l-2}$ such that $\Phi(C_1 \cap \Lambda) \subset \mbox{int}(C_2) \cap \Lambda$. So, by induction, we construct atoms
$$C_1, C_2, \ldots, C_l \ \ \mbox{ such that } \ \ C_j \in {\mathcal A}_{n+l-j} \ \mbox{ and }$$ $$ \ \Phi^j(G \cap \Lambda) \subset \mbox{int} ( C_j) \cap \Lambda, \qquad \forall \ j= 1, \ldots, l.$$
Since any atom of generation larger than $n$ is contained in a unique atom of generation $n$, there exist $A_0, A_1, \ldots, A_l \in {\mathcal A}_n$ such that $ A_0 \supset G$ and $ A_i \supset C_i, \qquad \forall \ i= 1, \ldots, l.$ We obtain
$$\Phi^j (G \cap \Lambda) \subset \mbox{int}(A_j), \qquad \forall \ j= 0,1, \ldots, l. $$
Besides, $(A_0, A_1, \ldots, A_l)$ is an $(l+1)$-path since
$\emptyset \neq \Phi^j(G \cap \Lambda) \subset \Phi(A_{j-1}) \cap \mbox{int}(A_j)$; hence $A_{j-1} \stackrel{\Phi}{\rightarrow} A_j$ for all $ j= 1, \ldots, l.$ Then,
$G \cap \Lambda \subset \Phi^{-j} (A_j)$ for all $ j= 0,1, \ldots, l$; proving the existence statement in a).
To prove uniqueness assume that $(A_0, A_1, \ldots, A_l)$ and $(A'_0, A'_1, \ldots, A'_l)$ are paths of $n$-atoms such that $$G \cap \Lambda \subset \Phi^{-j}(A_j) \cap \Phi^{-j}(A'_j)\ \ \forall \ j \in \{0,1, \ldots, l\}.$$
Then $A_j \cap A'_j \neq \emptyset$ for all $j \in \{0,1, \ldots, l\}$. Since two different
atoms of the same generation are pairwise disjoint, we deduce that $A_j= A'_j$ for all $j \in \{0,1, \ldots, l\}$ as required.
b) Trivially, if $G \cap \Lambda \subset \Phi^{-j}(A)$, then $(G \cap \Lambda) \cap \Phi^{-j}(A) \neq \emptyset$. Now, let us prove the converse assertion. Fix $G \in {\mathcal A}_{n+l}$ and $A \in {\mathcal A}_n$ satisfying $(G \cap \Lambda) \cap \Phi^{-j}(A) \neq \emptyset$. Applying part a) there exists $\widetilde A \in {\mathcal A}_n$ such that $G \cap \Lambda \subset \Phi^{-j}(\widetilde A)$. Therefore $G \cap \Lambda \cap \Phi^{-j}(A) \subset \Phi^{-j}(\widetilde A \cap A) \neq \emptyset$. Since $A$ and $\widetilde A$ are atoms of generation $n$, and two different atoms of the same generation are disjoint, we conclude that $\widetilde A = A$, hence $G \cap \Lambda \subset \Phi^{-j}(A)$, as required.
c) For the $(l+1)$-path $\vec{A}_n^l = (A_0, A_1, \ldots, A_l)$ of $n$-atoms, construct
\begin{equation}
\label{eqn112}
\widetilde {\mathcal F}_{n, l}(\vec{A}_n^l) := \big\{G \in {\mathcal A}_{n+l} \colon G \cap \Lambda \cap \Phi^{-j}(A_j ) \neq \emptyset \ \ \forall j \in \{0,1, \ldots, l\} \big \}.\end{equation}
From the definitions of the families ${\mathcal F}_{n,l}$ and $\widetilde {\mathcal F}_{n,l}$, and taking into account that $\Lambda$ is contained in the union of the $(n+l)$-atoms, we obtain:
$$ \bigcup_{G \in {\mathcal F}_{n, l}(\vec{A}_n^l)} G \cap \Lambda \ \ \subset \ \ \Lambda \cap \big(\bigcap_{j=0}^l \Phi^{-j}(A_j)\big) \ \ \subset \ \ \bigcup_{G \in \widetilde {\mathcal F}_{n, l}(\vec{A}_n^l)} G \cap \Lambda . $$
Therefore, to prove Equality (\ref{eqn39b}) it is enough to show that \begin{equation}
\label{eqn112b}
\widetilde {\mathcal F}_{n, l}(\vec{A}_n^l) = {\mathcal F}_{n, l}(\vec{A}_n^l), \end{equation}
but this equality immediately follows from the construction of the families ${\mathcal F}_{n, l}(\vec{A}_n^l)$ and $\widetilde{\mathcal F}_{n, l}(\vec{A}_n^l)$ by assertion b).
d) For each $(l+1)$-path $\vec A_n^l = (A_0, A_1 \ldots, A_l)$ of $n$-atoms construct the family ${\mathcal G}_{n,l}(\vec A_n^l) := $ $$\big\{G_0 \in {\mathcal A}_{n+l}\colon \ \exists (G_0, G_1, \ldots, G_l) \in {\mathcal A}_{n+l}^{l+1 \,*} \mbox{ such that } G_j \subset A_j \ \forall j \big\}. $$
We will first prove that ${\mathcal G}_{n,l}(\vec A_n^l) \supset {\mathcal F}_{n,l}(\vec A_n^l) $. In fact, take $G \in {\mathcal F}_{n,l}(\vec A_n^l) $, and take any point $x \in G \cap \Lambda$. We have $\Phi^j(x) \in A_j \cap \Lambda$ for all $j \in \{0,1, \ldots, l\}$ (recall that $\Lambda$ is $\Phi$-invariant). Since any point in $\Lambda$ is contained in the interior of some atom of any generation, there exists an atom $G_j$ of generation $n+l$ such that $\Phi^j(x) \in \mbox{int}(G_j)$. Recall that each atom of generation $n+l$ is contained in a unique atom of generation $n$. As $\Phi^j(x) \in G_j \cap A_j \neq \emptyset$, and different atoms of the same generation are disjoint, we conclude that $G_j \subset A_j$. Besides, $G_0= G$ because $x \in G \cap G_0$. Finally, $(G_0, G_1, \ldots, G_l)$ is an $(l+1)$-path because $\Phi^{j+1}(x) = \Phi (\Phi^{j}(x)) \in \Phi(G_j) \cap \mbox{int}(G_{j+1})$ for all $j \in \{ 0, 1, \ldots, l-1\}$; namely $G_j \stackrel{\Phi}{\rightarrow} G_{j+1}$. We have proved that $G \in {\mathcal G}_{n,l}(\vec A_n^l)$, as required.
Now, let us prove that ${\mathcal G}_{n,l}(\vec A_n^l) \subset {\mathcal F}_{n,l}(\vec A_n^l) $. Assume that $G_0 \in {\mathcal A}_{n+l}$ and $(G _0, G_1, \ldots, G_l) \in {\mathcal A}_{n+l}^{l+1\,*} $ satisfy $G_j \subset A_j$ for all $j \in \{0,1, \ldots, l\}$. Then $(G_0, G_1, \ldots, G_j)$ is a $(j+1)$-path of $(n+l)$-atoms for all $j \in \{1, 2, \ldots, l\}$. Applying Lemma \ref{lemmaAtomCapLambda}, we obtain
$ G_0 \cap \Lambda \cap \Phi^{-j}( G_j )\neq \emptyset$. Therefore, taking into account that $G_j \subset A_j$, we deduce that
$$G_0 \cap \Lambda \cap \Phi^{-j}( A_j )\neq \emptyset \ \ \forall \ j \in \{0,1, \ldots, l\}.$$
Therefore $G_0 \in \widetilde {\mathcal F}_{n,l}(\vec A_n^l) = {\mathcal F}_{n,l}(\vec A_n^l)$ (recall (\ref{eqn112}) and (\ref{eqn112b})). This holds for any $G_0 \in {\mathcal G}_{n,l}(\vec A_n^l)$, thus ${\mathcal G}_{n,l}(\vec A_n^l) \subset {\mathcal F}_{n,l}(\vec A_n^l) $, as required.
e) From Assertion a) we obtain:
\begin{equation} \label{eqn50} {\mathcal A}_{n+l} = \bigcup_{\vec{A}_n^l\in {\mathcal A}_{n}^{l+1 \,*}} {\mathcal F}_{n, l}(\vec{A}_n^l) ,\end{equation}
where the families in the above union are pairwise disjoint, due to the uniqueness property of assertion a).
Recall the characterization of the family ${\mathcal F}_{n, l}(\vec{A}_n^l)$ given by Assertion d). From Definition \ref{definitionAtomsGeneration-n} (condition a) and equality (\ref{eqn101})), the number of atoms of each generation larger than $n$ that are contained in each $A_j \in {\mathcal A}_n$, and also the number of atoms $G_{j} \in {\mathcal A}_{n+l}$ such that $G_j \stackrel{\Phi}{\rightarrow} G_{j+1}$, are constants that depend only on the generations but not on the chosen atoms. Therefore, there exists a constant $k_{n, l}$ such that $\#{\mathcal F}_{n, l}(\vec{A}_n^l)= \#{\mathcal G}_{n, l}(\vec{A}_n^l) = k_{n, l}$ for all the $(l+1)$-paths of $n$-atoms. So, from Equality (\ref{eqn50}) we obtain:
$$ \#{\mathcal A}_{n+l} = (\# {\mathcal{A} }_{n}^{l+1\,*}) \cdot (\# {\mathcal F}_{n,l}(\vec{A}_n^l)),$$
and applying Lemma \ref{LemmaCardinalA_n^(l+1)}, we conclude
$$ \#{\mathcal A}_{n+l} = 2^{nl} \cdot (\# {\mathcal{A} }_{n}) \cdot (\# {\mathcal F}_{n,l}(\vec{A}_n^l)); \ \mbox{ hence } \ \#{\mathcal F}_{n,l}(\vec{A}_n^l) = \frac{1}{2^{nl}} \cdot \frac{\#{\mathcal A}_{n+l}}{\#{\mathcal A}_{n}},$$
as required.
\end{proof}
We turn to the proof of Lemma \ref{LemmaMain}. We will first construct the measure $\nu$ and then prove that it has the required properties.
We start by defining an additive pre-measure on the $\Lambda$-set of $\Phi$ by
$$\nu^*(A \cap \Lambda) := \frac{1}{\#{\mathcal A}_n}, \qquad \forall \ A \in {\mathcal A}_n, \qquad \forall \ n \geq 0.$$
Since $\nu^*$ is a pre-measure defined in a family of sets that generates the Borel $\sigma$-algebra of $\Lambda$,
there exists a unique Borel probability measure $\nu$ supported on $\Lambda$ such that
\begin{equation}
\label{eqnCosntructionOfNu}
\nu (A \cap \Lambda) := \frac{1}{\#{\mathcal A}_n}, \qquad \forall \ A \in {\mathcal A}_n, \qquad \forall \ n \geq 0.\end{equation}
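As a consistency check (ours), definition (\ref{eqnCosntructionOfNu}) is additive across generations: each $n$-atom $A$ contains exactly $\#{\mathcal A}_{n+1}/\#{\mathcal A}_{n}$ atoms of generation $n+1$, and with $\#{\mathcal A}_n = 2^{n^2}$,

```latex
\[
\sum_{\substack{B \in {\mathcal A}_{n+1} \\ B \subset A}} \nu(B \cap \Lambda)
= \frac{\#{\mathcal A}_{n+1}}{\#{\mathcal A}_{n}} \cdot \frac{1}{\#{\mathcal A}_{n+1}}
= 2^{2n+1} \cdot \frac{1}{2^{(n+1)^2}}
= \frac{1}{2^{n^2}} = \nu(A \cap \Lambda),
\]
```

so the pre-measure $\nu^*$ is indeed well defined on the atoms of all generations.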
In the following lemmas we will prove that
$\nu$ is $\Phi$-invariant,
mixing,
and that the metric entropy $h_{\nu}(\Phi)$ is infinite.
\begin{lemma} \label{lemmaNuInvariante}
$\nu$ is invariant by $\Phi$.
\end{lemma}
\begin{proof}
Since the atoms of all generations intersected with $\Lambda$ generate the Borel $\sigma$-algebra of $\Lambda$, it is enough to prove that
\begin{equation}
\label{eqn37ToBeProved}
\nu(C \cap \Lambda) = \nu(\Phi^{-1}(C \cap \Lambda)), \qquad \forall \ C \in {\mathcal A}_n, \qquad \forall \ n \geq 0.\end{equation}
From (\ref{eqn99}), taking into account that $\Lambda$ is invariant and that any point in $\Lambda$ belongs to an atom of generation $n+1$, we obtain:
$$\Phi^{-1}(C \cap \Lambda) = \bigcup_{\substack {(D, B) \in {\mathcal A}^{2\,*}_n \\ B \stackrel{\Phi}{\rightarrow} C}} \ \bigcup_ {G \in \Gamma_{n+1}(D,B,C)} (G \cap \Lambda),$$
where both unions are of pairwise disjoint sets. Using (\ref{eqn101}) we obtain
\begin{align}
\nu (\Phi^{-1}(C \cap \Lambda)) &= \sum_{\substack {B \in {\mathcal A}_n \\ B \stackrel{\Phi}{\rightarrow} C}}\ \ \ \ \sum_{\substack {D \in {\mathcal A}_n \\ D \stackrel{\Phi}{\rightarrow} B}} \ \ \ \sum_ {G \in \Gamma_{n+1}(D,B,C)} \nu (G \cap \Lambda)\nonumber \\
&= N_C \cdot N_B \cdot (\#\Gamma_{n+1}(D,B,C)) \cdot \frac{1}{\#{\mathcal A}_{n+1}},
\end{align}
where
$N_X:= \#\{Y \in {\mathcal A}_n \colon Y \stackrel{\Phi}{\rightarrow} X \} = 2^n$ for all $X \in {\mathcal A}_n$. Since $ \#\Gamma_{n+1}(D,B,C) = 2 $ (this is part c) of Definition \ref{definitionAtomsGeneration-n}) and $ {\#{\mathcal A}_{n+1}}= {2^{(n+1)^2}} $, we conclude $$\nu (\Phi^{-1}(C \cap \Lambda)) = 2^{n} \cdot 2^{n} \cdot 2 \cdot \frac{1}{2^{(n+1)^2}} = \frac{1}{2^{n^2}} = \frac{1}{\#{\mathcal A}_{n }} = \nu(C \cap \Lambda),
$$ proving Equality (\ref{eqn37ToBeProved}) as required.
\end{proof}
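For instance, in the smallest case $n=1$ the computation above reads:

```latex
\[
\nu(\Phi^{-1}(C \cap \Lambda))
= \underbrace{2^{1}}_{N_C} \cdot \underbrace{2^{1}}_{N_B} \cdot
  \underbrace{2}_{\#\Gamma_{2}} \cdot \frac{1}{\#{\mathcal A}_{2}}
= \frac{8}{2^{4}} = \frac{1}{2} = \frac{1}{\#{\mathcal A}_{1}}
= \nu(C \cap \Lambda).
\]
```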
\begin{lemma} \label{lemmaNuMixing}
$\nu$ is mixing.
\end{lemma}
\begin{proof}
The family of atoms of all generations intersected with $\Lambda$ generates the Borel $\sigma$-algebra of $\Lambda$, thus it is enough to prove that for any pair $(C_0, D_0) $ of atoms (of equal or different generations) there exists $l_0 \geq 1$ such that
\begin{equation}
\label{eqn200ToBeProved}\nu(\Phi^{-l}(D_0 \cap \Lambda) \cap (C_0\cap \Lambda))= \nu(C_0 \cap \Lambda) \cdot \nu(D_0 \cap \Lambda) \ \ \forall \ l \geq l_0.\end{equation}
\vspace{.3cm}
Let us first prove this in the case that $C_0$ and $D_0$ are atoms of the same generation $n$. Applying Lemma \ref{lemmaLambda}-c), we have
$\Phi^{-l}(D_0 \cap \Lambda) \cap (C_0\cap \Lambda) \neq \emptyset \ \ \forall \ l \geq 2n-1. $
Fix $l \geq 2n-1$. Let
$$\vec A^l_n :=(C_0, A_1, \ldots, A_{l-1}, D_0) \in {\mathcal A}_n^{l+1\,*}(C_0, D_0)$$
denote one of the
$2^{nl}/(\#{\mathcal A}_{n})$ different $(l+1)$-paths of $n$-atoms from $C_0$ to $D_0$
(see Lemma \ref{LemmaCardinalA_n^(l+1)}-b)).
We assert that
\begin{equation}
\label{eqn201ToBeProved}
\Phi^{-l} (D_0 \cap \Lambda) \cap (C_0 \cap \Lambda) = T := \bigcup_{\vec A^l_n \in {\mathcal A}_n^{l+1\,*}(C_0, D_0)} \ \ \bigcup_{B \in {\mathcal F}_{n,l}(\vec A_n^l)} (B \cap \Lambda), \end{equation}
where the family ${\mathcal F}_{n,l}(\vec A_n^l)$ of $(n+l)$-atoms is defined in Lemma \ref{lemmaLambda2}-c).
First, let us prove that
$\Phi^{-l} (D_0 \cap \Lambda) \cap (C_0 \cap \Lambda) \subset T $. Fix $x \in \Phi^{-l}(D_0 \cap \Lambda) \cap (C_0 \cap \Lambda) $. Then
$C_0$ and $D_0$ are the unique atoms of generation $n$ that contain $x$ and $\Phi^l(x)$ respectively (note that $\Phi^l(x) \in \Phi^l(\Lambda) = \Lambda$). Since
$x \in \Lambda$, there exists a unique atom $B$ of generation $n+l$ that contains $x$. Applying Lemma \ref{lemmaLambda2}-a) there
exists a unique $(A_0, A_1, \ldots, A_l)\in{\mathcal A}_n^{l+1 \,*} $ such that $B \cap \Lambda\subset \Phi^{-j}(A_j)$ for all $j\in \{0,1,\ldots, l\}$. Since the $n$-atom that contains $x$ is $C_0$, and two different $n$-atoms are disjoint, we deduce that $A_0= C_0$. Analogously, since the $n$-atom that contains $\Phi^l(x)$ is $D_0$ and the preimages of two different $n$-atoms are disjoint, we deduce that $A_l = D_0$.
Therefore we have found $\vec A_n^l= (C_0, A_1, \ldots, A_{l-1}, D_0)$ and $B \in {\mathcal F}_{n,l}(\vec A_n^l)$ such that $x \in B \cap \Lambda$. In other words, $x \in T$, as required.
Next, let us prove that
$\Phi^{-l} (D_0 \cap \Lambda) \cap (C_0 \cap \Lambda) \supset T $. Take $B \in {\mathcal F}_{n,l} (\vec A_n^l)$ for some $\vec A_n^l = (C_0, A_1, \ldots, A_{l-1}, D_0)$. From the definition of the family ${\mathcal F}_{n,l} (\vec A_n^l)$ in Lemma \ref{lemmaLambda2}-c), we have
$B \cap \Lambda \subset (C_0 \cap \Lambda) \cap \Phi^{-l}(D_0)$. Besides, $\Phi^{l}(B \cap \Lambda) \subset \Lambda$ because $\Phi^l(\Lambda)= \Lambda$. We conclude that
$B \cap \Lambda \subset (C_0 \cap \Lambda) \cap \Phi^{-l}(D_0 \cap \Lambda)$, proving that
$T \subset \Phi^{-l} (D_0 \cap \Lambda) \cap (C_0 \cap \Lambda) $, as required.
This ends the proof of equality (\ref{eqn201ToBeProved}).
Two different atoms of the same generation are disjoint and, by the uniqueness in Lemma \ref{lemmaLambda2}-a), each $(n+l)$-atom belongs to ${\mathcal F}_{n,l}(\vec A_n^l)$ for at most one path $\vec A_n^l$; thus the sets in the union constructing $T$ are pairwise disjoint. Therefore, from (\ref{eqn201ToBeProved}), and applying Lemma \ref{LemmaCardinalA_n^(l+1)}-b) and Lemma \ref{lemmaLambda2}-e), we deduce
\begin{align*}
\nu ((C_0 \cap \Lambda) \cap \Phi^{-l}(D_0 \cap \Lambda)) & =
\sum_{\vec A_n^l \in {\mathcal A}_n^{l+1\,*}(C_0, D_0)} \ \ \ \sum_{B \in {\mathcal F}_{n,l}(\vec A_n^l)} \nu(B \cap \Lambda)\\
& = (\# {\mathcal A}_n^{l+1\,*}(C_0, D_0)) \cdot (\# {\mathcal F}_{n,l}(\vec A_n^l)) \cdot \frac{1}{\#{\mathcal A}_{n+l}}\\
& =\frac{2^{nl}}{\#{\mathcal A}_n} \cdot \frac{1}{2^{nl}} \cdot \frac{\#{\mathcal A}_{n+l}}{\#{\mathcal A}_{n}} \cdot \frac{1}{\#{\mathcal A}_{n+l}}= \frac{1}{\#{\mathcal A}_{n}} \cdot \frac{1}{\#{\mathcal A}_{n}} \\
& = \nu(C_0 \cap \Lambda) \cdot \nu(D_0 \cap \Lambda).
\end{align*}
This ends the proof of equality (\ref{eqn200ToBeProved}) in the case that $C_0$ and $D_0$ are atoms of the same generation $n$, taking $l_0= 2n-1$.
\vspace{.3cm}
Now, let us prove equality (\ref{eqn200ToBeProved}) when $C_0$ and $D_0$ are atoms of different generations. Let $n$ equal the
maximum of both generations. Take $l \geq 2n-1$. Since $\Lambda$ is contained in the union of the atoms of any generation, we have
$$C_0 \cap \Lambda = \bigcup_{C \in {\mathcal A}_n, C \subset C_0} C \cap \Lambda, $$ where the sets in the union are pairwise
disjoint.
Analogously $$\Phi^{-l}(D_0 \cap \Lambda) = \bigcup_{D \in {\mathcal A}_n, D \subset D_0} \Phi^{-l}(D \cap \Lambda),$$ where also the
sets in this union are pairwise disjoint.
So,
\begin{align*}
(C_0 \cap \Lambda)\cap \Phi^{-l}(D_0 \cap \Lambda) = \bigcup_{C \in {\mathcal A}_n, C \subset C_0} \ \bigcup_{D \in {\mathcal A}_n, D \subset D_0}\ \hspace{-0.5cm} (C \cap \Lambda) \cap \Phi^{-l}(D \cap \Lambda).
\end{align*}
Since the sets in the union are pairwise disjoint, we deduce
$$\nu((C_0 \cap \Lambda)\cap \Phi^{-l}(D_0 \cap \Lambda)) = \sum_{C \in {\mathcal A}_n, C \subset C_0} \ \sum_{D \in {\mathcal A}_n, D \subset D_0} \hspace{-0.3cm} \nu((C \cap \Lambda) \cap \Phi^{-l}(D \cap \Lambda)).$$
As $C,D$ are atoms of the same generation $n$, and $l \geq 2n-1$, we can apply the first case proved above, to deduce that
\begin{equation}
\label{eqn202}
\nu((C_0 \cap \Lambda)\cap \Phi^{-l}(D_0 \cap \Lambda)) = \#\{C \in {\mathcal A}_n \colon C \subset C_0\} \cdot \#\{D \in {\mathcal A}_n \colon D \subset D_0\}\cdot \frac{1}{(\#{\mathcal A}_n)^2}. \end{equation}
The number of atoms of generation $n$ contained in an atom $C_0$ of generation $n_1$ smaller than or equal to $n$ does not depend on the chosen atom $C_0$. Therefore,
$$\#\{C \in {\mathcal A}_n, C \subset C_0\}= \frac{\#{\mathcal A}_n}{\#{\mathcal A}_{n_1}}=(\#{\mathcal A}_{n }) \cdot \nu (C_0 \cap \Lambda). $$
Analogously
$$\#\{D \in {\mathcal A}_n, D \subset D_0\}= (\#{\mathcal A}_{n }) \cdot \nu (D_0 \cap \Lambda). $$
Finally, substituting in equality (\ref{eqn202}) we conclude that
$$\nu(\Phi^{-l}(D_0 \cap \Lambda) \cap (C_0\cap \Lambda))= \nu(C_0 \cap \Lambda) \cdot \nu(D_0 \cap \Lambda) \ \ \forall \ l \geq 2n-1,$$
ending the proof.
\end{proof}
\begin{lemma} \label{lemmaNuEntropyInfty}
$h_\nu(\Phi) = +\infty$.
\end{lemma}
\begin{proof}
For $n \geq 1$ we consider the partition $\mathcal A_n$ of $\Lambda $ consisting of all the $n$-atoms intersected with $\Lambda$. By the definition of metric entropy
\begin{equation} \label{eqn52a}h_\nu (\Phi) := \sup_{\mathcal P} h({\mathcal P}, \nu) \geq h({\mathcal A}_n, \nu) , \mbox{ where }\end{equation}
\begin{equation} \label{eqn52b}h({\mathcal A}_n, \nu):= \lim_{l \rightarrow + \infty} \frac{1}{l} H\Big (\bigvee_{j=0}^l (\Phi^{-j}{\mathcal A}_n), \nu\Big), \end{equation}
$$ {\mathcal Q}_l:= \bigvee_{j=0}^l \Phi^{-j}{\mathcal A }_n = \Big \{\Lambda \cap \bigcap_{j=0}^l \Phi^{-j}(A_j) \colon \ \ \ A_j \in {\mathcal A}_n \Big\} \setminus \{\emptyset\}, $$
\begin{equation} \label{eqn52} H ({\mathcal Q}_l, \nu) := - \sum_{X \in {\mathcal Q}_l} \nu(X) \log \nu(X). \end{equation}
For any nonempty
$X := \Lambda \cap \Big(\bigcap_{j=0}^l \Phi^{-j}A_j \Big) \in \nolinebreak {\mathcal Q}_l ,$ Lemma \ref{lemmaLambda2}-c)
yields
$$\nu(X) = \nu \Big( \Lambda \cap \bigcap_{j=0}^l \Phi^{-j}(A_j) \Big) = \sum_{G \in {\mathcal F}_{n, l}(\vec{A}_n^l)} \nu(G \cap \Lambda), \qquad \vec{A}_n^l := (A_0, A_1, \ldots, A_l).$$
Since $G $ is an atom of generation $n+l$, we have $\nu(G \cap \Lambda) = {1}/( \#{\mathcal A}_{n+l})$; thus applying Lemma \ref{lemmaLambda2}-e) yields
$$\nu(X) = \frac{ \#{\mathcal F}_{n, l}(\vec{A}_n^l)}{\#{\mathcal A}_{n+l}} = \frac{1}{2^{ nl } \cdot \#{\mathcal A}_{n}}. $$
Since all nonempty elements of ${\mathcal Q}_l$ have the same measure $1/(2^{nl} \cdot \#{\mathcal A}_n)$ and their measures add up to $1$, the partition ${\mathcal Q}_l$ has exactly $2^{nl} \cdot \#{\mathcal A}_n$ nonempty elements. Combining this with (\ref{eqn52}) yields $H ({\mathcal Q}_l, \nu) = \log (\#{\mathcal A}_n) + nl \cdot \log 2. $
Finally, substituting in Equality (\ref{eqn52b}), we conclude
$$ h({\mathcal A}_n, \nu) = \lim_{l \rightarrow + \infty} \frac{1}{l} H\Big ( {\mathcal Q}_l, \nu\Big) = n \log 2.$$
Combining with \eqref{eqn52a} yields
$h_{\nu}(\Phi)\geq n \log 2$, for all $n \geq 1$; hence $h_{\nu}(\Phi)= + \infty.$
\end{proof}
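In terms of the explicit cardinality $\#{\mathcal A}_n = 2^{n^2}$, the computation in the last proof reads, for every $n$ and $l$:

```latex
\[
\nu(X) = 2^{-nl-n^2} \ \ \forall \ X \in {\mathcal Q}_l, \qquad
H({\mathcal Q}_l, \nu) = (nl + n^2) \log 2, \qquad
\frac{H({\mathcal Q}_l, \nu)}{l} \xrightarrow[l \to +\infty]{} n \log 2,
\]
```

so the lower bounds $n \log 2$ grow without bound with the generation $n$ of the partition.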
\begin{proof}[Proof of Lemma \ref{LemmaMain}]
As proved in Lemmas \ref{lemmaNuInvariante}, \ref{lemmaNuMixing} and \ref{lemmaNuEntropyInfty}, the probability measure $\nu$ constructed by equality (\ref{eqnCosntructionOfNu}) is $\Phi$-invariant, mixing and has infinite metric entropy, as required.
\end{proof}
\section{Periodic Shrinking Boxes} \label{sectionPeriodicShrinkingBoxes}
In this section we will prove Theorems \ref{Theorem1} and \ref{Theorem3}. The proofs are based on the properties of the models proved in the previous sections, and on the existence of the periodic shrinking boxes which we construct here.
\begin{definition} \em \label{DefinitionPerShrBox}
{\bf (Periodic shrinking box)}
Let $f \in C^0(M)$ and $K \subset M$ be a box.
We call $K $ \em periodic shrinking with period $p \geq 1$ \em for $f$, if
$K, \ f(K), \ f^2(K), \ \ldots, f^{p-1}(K)$ are pairwise disjoint, and
$f^p(K) \subset \mbox{int}(K)$.
If so, we call
$f^p|_{K} : K \rightarrow \mbox{ int} (K)$ the \em return map. \em
\end{definition}
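As a minimal illustration (our example, not part of the constructions below): take $M = [0,1]$ and $f(x) = x/2$. Then $K = [0, \tfrac{1}{2}]$ is a periodic shrinking box of period $p = 1$, since

```latex
\[
f(K) = \big[0, \tfrac{1}{4}\big] \subset \mbox{int}(K) = \big[0, \tfrac{1}{2}\big),
\]
```

where the interior is taken relative to $M$, and the return map $f|_K$ is a homeomorphism onto its image.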
\begin{lemma}
\label{Lemma1} For any $\delta>0$, there exists an open and dense set of maps $f \in C^0(M)$ that have a periodic shrinking box $K$ with $\mbox{diam}(K)< \delta$. For a dense set of $f \in C^0(M)$ the return map to $K$ is a homeomorphism onto its image.
\end{lemma}
The proof of this lemma uses the following definition and technical result.
\begin{definition} \em \label{DefinitionHomothetic}
Let $H \subset K$ be two boxes in the manifold $M$. We say that $H$ is \em homothetic \em to $K$ if there exists a homeomorphism $\phi: K \mapsto [0,1]^m$ such that $\phi(H) = [a,b]^m \subset [0,1]^m$ for some $0 \leq a < b \leq 1$.
\end{definition}
\begin{lemma}
\label{Lemma1bis}
Let $f \in C^0(M)$ and $x_0 \in M$. For all $\varepsilon>0$, there exist $g \in C^0(M)$ and a neighborhood $H$ of $x_0$ such that $\|g-f\|_{C^0} < \varepsilon$ and $g|_H$ is a homeomorphism onto its image. Moreover, $g$ can be constructed to coincide with $f$ except in an arbitrarily small neighborhood of $x_0$.
\end{lemma}
\begin{proof} Fix $0< \delta < \varepsilon/2 $ small enough such that if $\mbox{dist}(x,y)<\delta$, then $\mbox{dist}(f(x), f(y)) < \varepsilon/2$. Construct boxes $H$ and $K$ such that $\mbox{diam}(K)< \delta$, $H$ is homothetic to $K$, and
$x_0 \in \mbox{int}(H) \subset H \subset \mbox{int}(K)$.
Let $\phi: K \mapsto [0,1]^m$ be the homeomorphism of Definition \ref{DefinitionHomothetic}.
Consider the compact neighborhood $K' \subset M$ of radius $\varepsilon/2$ and center $f(x_0)$, and a homeomorphism $\phi': K' \mapsto [0,1]^m$ (if necessary reduce $\varepsilon$ from the very beginning so the compact balls of radius $\varepsilon/2$ in the manifold are homeomorphic to $[0,1]^m$).
Consider a box $H' \subset \mbox{int}(K')$.
Fix a homeomorphism $\chi: H \mapsto H'$. Define $g \in C^0(M)$ by
$$g(x) := f(x) \mbox{ if } x \not \in \mbox{int}(K)$$
$$g(x) := \chi (x) \mbox{ if } x \in H$$
$$g(x) := {\phi'}^{-1} \circ \xi\circ \phi(x) \mbox { if } x \in \mbox{int}(K) \setminus H,$$
where $\xi: [0,1]^m \setminus [a,b]^m \to [0,1]^m$ is constructed as follows. Fix $z_0 \in \mbox{int}([a,b]^m)$. For each point $z \in [0,1]^m \setminus [a,b]^m$, consider the half-line starting at $z_0$ and containing $z$. Consider the segment $S(z) = [s_1(z), s_2(z)]$ contained in this half-line, where $s_1(z)$ is the unique point of the half-line in $\partial [0,1]^m$, and $s_2(z)$ is the unique point in $\partial [a,b]^m$. Now define $\xi$ such that $\xi|_{S(z)}: S(z) \to [\phi'\circ f \circ \phi^{-1}(s_1(z)), \ \ \phi'\circ \chi \circ \phi^{-1}(s_2(z))]$ is the affinity mapping $s_1(z)$ to $\phi'\circ f \circ \phi^{-1}(s_1(z))$ and $s_2(z)$ to $\phi'\circ \chi \circ \phi^{-1}(s_2(z))$.
It is standard to check that $\xi$ is continuous and that ${\phi'}^{-1} \circ \xi \circ \phi|_{\partial K}= f|_{\partial K}$. Therefore $g \in C^0(M)$. By construction, $g|_H= \chi: H \mapsto H'$ is a homeomorphism. Besides, $g(x)$ may differ from $f(x)$ only at points $x \in K$; but both images lie inside $K'$, which is a ball of radius $\varepsilon/2$ in $M$. Therefore $\|g-f\|_{C^0} < \varepsilon$, as required.
The final statement holds since $H$ can be chosen arbitrarily small.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lemma1}]
According to Definition \ref{DefinitionPerShrBox}, a periodic shrinking box $K$ for $f$ is also a periodic shrinking box, with the same period, for every $g \in C^0(M)$ close enough to $f$; this proves the openness assertion.
We turn to the denseness assertion. Let $f \in C^0(M)$ and $\varepsilon>0$. We will construct $g \in C^0(M)$ and a periodic shrinking box $K$ for $g$ with $\mbox{diam}(K)< \delta$, such that $\|g-f\|_{C^0} < \varepsilon.$
We take $\delta>0$ smaller than the $\varepsilon$-modulus of continuity of $f$, i.e., such that $\mbox{dist}(x,y)<\delta$ implies $\mbox{dist}(f(x),f(y))<\varepsilon$.
By the Krylov-Bogolyubov theorem invariant measures exist, and thus, by the Poincar\'{e} recurrence theorem, there exists a recurrent point $x_0 \in M$ for $f$. First assume that $x_0 \not \in \partial M$. So, there exists a box $ B \subset M$ with $\mbox{diam}( B) < \delta$ such that $x_0 \in \mbox{int}( B)$.
Since $x_0$ is a recurrent point, there exists a smallest $p \in \mathbb{N}$ such that $f^p(x_0) \in \mbox{int}(B)$.
{ Taking ${B}$ slightly smaller if necessary, we can assume that $f^j(x_0) \not \in B$ for all $j=1,2, \ldots, p-1$.}
So, there exists a small compact box $ U \subset \mbox{int}(B) $ as in Figure \ref{FigureLemma1}, such that $x_0 \in \mbox{int}( U), $ the sets $U, f( U), \ldots, f^{p-1}( U) $ are pairwise disjoint, and
$f^p( U) \subset \mbox{int}(B)$.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{Figure3.jpg}
\caption{Construction of $g$ near $f$ with a periodic shrinking box $K$ for $g$.\label{FigureLemma1}}
\end{center}
\end{figure}
We can choose ${U}$ homothetic to ${B}$.
Since $ U, f^p( U) \subset \mbox{int}( B)$, there exists a box ${K}$ such that $ U, f^p( U) \subset \mbox{int}({K}) \subset {K} \subset \mbox{int}( B).$
We also choose ${K}$ homothetic to $ B$. Therefore, there exists a homeomorphism $\psi: B \rightarrow B$ such that
$\psi(x)= x$ for all $x \in \partial { B}$, and $\psi (K) = U$.
Finally, we construct $g \in C^0(M)$ as follows:
$$g(x):= f(x), \qquad \forall \ x \not \in B, \ \ \ \ g(x) = f \circ \psi (x), \qquad \forall \ x \in B.$$
By construction, $K$ is a periodic shrinking box of $g$; by the choice of $\delta$, we have $\|g-f\|_{C^0} < \varepsilon$.
Now, let us study the case for which $M$ is a manifold with boundary and all the recurrent points of $f$ belong to $\partial M$. Choose one such recurrent point $x_0 \in \partial M$. For any $\delta>0$, there exists a compact box $ B \subset M$, with $\mbox{diam}( B) \leq \delta$ such that $x_0 \in \partial M \cap B$. Since $x_0$ is recurrent, there exists a smallest natural number $p \geq 1$ such that $f^p(x_0) \in B$. But $f^p(x_0)$ is also recurrent. So, $f^p(x_0) \in \partial M \cap B$.
The previous construction does not work as is, because it requires the recurrent point to lie in $M \setminus \partial M$.
To overcome the problem, we choose a new point $\widetilde x_0 \neq x_0$, near enough $x_0$, such that $\widetilde x_0 \in \mbox{int}( B) \setminus \partial M$ and $f^p(\widetilde x_0) \in B.$ By applying Lemma \ref{Lemma1bis} and slightly perturbing $f$, if necessary, we can assume that the restriction of $f $ to a small neighborhood of $\widetilde x_0$ is a local homeomorphism onto its image. Hence, $f^p(\widetilde x_0) \in \mbox{int}( B) \setminus \partial M.$
To conclude, we repeat the construction of $g$ and $K$ above
replacing the recurrent point $x_0$ by $\widetilde x_0$.
Now, let us show that, densely in $C^0(M)$, we can construct a periodic shrinking box $K$ such that the return map $f^p|_{K}$ is a homeomorphism onto its image. We repeat the beginning of the proof, up to the construction of the points
$x_0, f (x_0), \ldots, f^p(x_0)$ such that
$ x_0, f^p(x_0) \in \mbox{int}( B)$ and $ f^j(x_0) \not \in B$ for all $j = 1, \ldots, p-1$.
Apply Lemma \ref{Lemma1bis}, slightly perturb $f$, if necessary, inside small open neighborhoods
$W_0, W_1, \ldots, W_{p-1}$ of the points $x_0, f (x_0), \ldots, f^{p-1}(x_0)$ respectively, so that
$f|_{\overline W_i} $ is a homeomorphism onto its image for all $i=0, 1, \ldots, p-1$.
Finally, construct the box $U$ (Figure \ref{FigureLemma1}), but small enough so $f^j(U) \subset W_j$ for all
$j= 0,1,\ldots, p-1$, and repeat the construction of $K$ and $g$ as above.
\end{proof}
\begin{remark}
\label{RemarkPerturbInsideB}
\em Note that to obtain the dense property in the proof of Lemma \ref{Lemma1}, we only need to perturb the map $f$ in the interior of the initial box $ {B}$ with diameter smaller than $\delta$.
\end{remark}
The following lemma is the homeomorphism version of Lemma \ref{Lemma1}.
\begin{lemma}
\label{Lemma1b}
For any $\delta>0$, there exists an open and dense set of maps $f \in \mbox{Hom}(M)$ that have a periodic shrinking box $K$ with $\mbox{diam}(K)< \delta$.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{Lemma1} also works in the case that $f \in \mbox{Hom}(M)$: in fact, the $\varepsilon$-perturbed map $g$ constructed there is a homeomorphism, and to obtain $\|g-f\|_{\mbox{Hom}(M)} < \varepsilon$ it is enough to reduce $\delta>0$ to be smaller than the $\varepsilon$-continuity modulus of $f$ and $f^{-1}$.
\end{proof}
\begin{remark}
\label{RemarkRecurrentPointIsPeriodic}
\em In the proof of Lemmas \ref{Lemma1} and \ref{Lemma1b}, if the starting recurrent point $x_0$ were a periodic point of period $p$, then the periodic shrinking box $K$ so constructed would contain $x_0$ in its interior and have the same period $p$.
\end{remark}
\begin{lemma}
\label{Lemma2} Let $\delta>0$. A generic map $f \in C^0(M)$ has a periodic shrinking box $K$ with $\mbox{diam}(K)< \delta$ such that the return map $f^p|_K$ is topologically conjugated to a model map $\Phi \in {\mathcal H}$ \em (recall Definition \ref{DefinitionModel}).
\end{lemma}
\begin{proof} Let $K \subset M$ be a periodic shrinking box for $f$. Fix a homeomorphism $\phi: K \rightarrow D^m$.
To prove the $G_{\delta}$ property, assume that $f \in C^0(M)$ has a periodic shrinking box $K$ with $\mbox{diam}(K)< \delta$, such that
$\phi \circ f^p|_K \circ \phi^{-1} =\Phi \in {\mathcal H}. $ From Definition \ref{DefinitionPerShrBox}, the same box $K$ is also periodic shrinking with period $p$ for all $g \in {\mathcal N}$, where ${\mathcal N}\subset C^0(M)$ is an open neighborhood of $f$.
From Lemma \ref{LemmaModelHRHomNonempty}, ${\mathcal H}$ is a $G_{\delta}$-set in $C^0(D^m)$, i.e., it is the countable intersection of open families ${\mathcal H}_n \subset C^0(D^m)$. We define
$${\mathcal V}_n:= \{f \in {\mathcal N} \colon \phi \circ f^p|_{K} \circ \phi^{-1} \in {\mathcal H}_n\}. $$
Since the restriction to $K$ of a continuous map $f$, and the composition of continuous maps, are continuous operations in $C^0(M)$, we deduce that ${\mathcal V}_n $ is an open family in $ C^0(M)$. Besides
\begin{equation}
\label{eqn13}
\phi \circ g^p|_K \circ \phi^{-1} \in {\mathcal H} = \bigcap_{n \geq 1} {\mathcal H}_n \ \ \mbox{ if } \ \ g \in \bigcap_{n \geq 1} {\mathcal V}_n \subset C^0(M) . \end{equation}
In other words, the set of maps $g\in C^0(M)$ that have periodic shrinking box $K$ with $\mbox{diam}(K)< \delta$, such that the return map $g^p|_K$ coincides, up to a conjugation, with a model map $\Phi$, is a $G_{\delta}$-set in $C^0(M)$.
To show denseness, fix $f \in C^0(M)$ and $\varepsilon >0$. Applying Lemma \ref{Lemma1}, it is not restrictive to assume that $f$ has a periodic shrinking box $K$ with $\mbox{diam}(K) < \min\{\delta, \varepsilon\} $, such that $f^p|_K$ is a homeomorphism onto its image. We will construct $g \in C^0 (M)$ to be $\varepsilon$-near $f$ and such that $\phi \circ g^p|_K \circ \phi^{-1} \in {\mathcal H}$.
Choose a box $W $ such that $ f^{p-1}(K) \subset \mbox{int}(W)$. If $p \geq 2$, take $W$ disjoint with $f^j(K)$ for all $j\in \{0,1, \ldots, p-2\}$ (Figure \ref{FigureLemma2}). Reducing $\delta$ if necessary, we can take $W$ with an arbitrarily small diameter.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{Figure4.jpg}
\caption{Perturbation $g$ of $f$ such that $g^p|_K = \Phi$. \label{FigureLemma2}
}
\end{center}
\end{figure}
To construct $g \in C^0(M)$ (see Figure \ref{FigureLemma2}) take $\Phi \in {\mathcal H}$ and let $g(x) := f(x) $ if $ x \not \in W$~and
$$g(x) := \phi^{-1} \, \circ \, \Phi \, \circ \, \phi\, \circ \, (f^p|_{K})^{-1} \circ f (x), \qquad \forall \ x \in f^{p-1}(K).$$
This defines a continuous map $g: f^{p-1}(K) \cup (M \setminus W) \rightarrow M$ such that $|g(x) - f(x)| < \mbox{diam}(K) < \varepsilon $ for all $x \in f^{p-1}(K) \subset W $ and $ g(x) = f(x) $ for all $ x \in M \setminus W.$
Applying the Tietze Extension Theorem, there exists a continuous extension of $g$ to the whole compact box $W$, hence to $M$, such that $\|g - f\|_{C^0} < \varepsilon.$
Finally, by construction we obtain $$g^p|_K = g|_{f^{p-1}(K)} \circ f^{p-1}|_{K}=
\phi^{-1} \, \circ \, \Phi \, \circ \, \phi\, \circ \, (f^p|_{K})^{-1} \circ f \circ f^{p-1}|_{K}= \phi^{-1} \, \circ \, \Phi \, \circ \, \phi, $$
ending the proof of Lemma \ref{Lemma2}.
\end{proof}
\begin{lemma}
\label{Lemma2b} Let $\delta>0$.
A generic homeomorphism $f \in \mbox{Hom}(M)$ has a periodic shrinking box $K$ with $\mbox{diam}(K) < \delta$,
such that the return map $f^p|_K$ is topologically conjugated to a model homeomorphism $\Phi \in {\mathcal H} \cap \mbox{RHom}(D^m)$.
\end{lemma}
\begin{proof}
We repeat the proof of the $G_{\delta}$-set property of Lemma \ref{Lemma2}, using ${\mathcal H} \cap \mbox{RHom}(D^m)$ instead of ${\mathcal H}$, and $\mbox{Hom}(M)$ instead of $C^0(M)$ (notice that taking the inverse is also a continuous operation in $\mbox{Hom}(M)$).
To show denseness, fix $f \in \mbox{Hom}(M)$ and $\varepsilon >0$. Let $\delta \in (0,\varepsilon)$ be smaller than the $\varepsilon$-modulus of continuity of $f$ and $f^{-1}$, and consider a periodic shrinking box $K$ with $\mbox{diam}(K) < \delta$ (Lemma \ref{Lemma1b}).
Fix a homeomorphism $\phi: K \rightarrow D^m$. We will construct $g \in \mbox{Hom} (M)$ to be $\varepsilon$-near $f$ in $\mbox{Hom}(M)$, with $\phi \circ g^p|_K \circ \phi^{-1} =\Phi \in {\mathcal H} \cap \mbox{RHom}(D^m)$.
From Definition \ref{DefinitionPerShrBox} we know that the boxes $K, f(K), \ldots, f^{p-1}(K) $ are pairwise disjoint and that $f^p(K) \subset \mbox{int}(K)$. Denote $W := f^{-1}(K)$. Since $f$ is a homeomorphism, we deduce that $W$ is a box as in Figure \ref{FigureLemma2}, such that
$ W \cap f^j( K) = \emptyset $ for all $ j = 0,1, \ldots, p-2$ if $ p \geq 2,$ and $ f^{p-1}(K) \subset \mbox{int}(W)$. Since $\mbox{diam}(K) < \delta$ we have
$\mbox{diam}(W) < \varepsilon.$
Consider $\phi \circ f^p|_K \circ \phi^{-1} \in \mbox{RHom}(D^m)$. Applying part b) of Lemma \ref{LemmaModelHRHomNonempty}, there exists a homeomorphism $\psi: D^m \rightarrow D^m$ such that
$$\psi|_{\partial D^m} = \mbox{id}|_{\partial D^m}, \ \ \ \ \ \psi \circ \phi \circ f^p|_K \circ \phi^{-1} = \Phi \in {\mathcal H} \cap \mbox{RHom}(D^m).$$
So, we can construct $g \in \mbox{Hom}(M)$ such that $g(x) := f(x) $ for all $ x \not \in W$, and $ g(x) := \phi^{-1}\circ \psi \circ \phi \circ f (x) $ for all $x \in W.$
Since $\psi|_{\partial D^m} $ is the identity map, we obtain $g|_{\partial W} = f|_{\partial W}$. Thus, the above equalities define a continuous map $g : M \rightarrow M$. Moreover $g$ is invertible because $g|_W : W \rightarrow K$ is a composition of homeomorphisms, and $g|_{M \setminus W} = f|_{M \setminus W}: M \setminus W \rightarrow M \setminus K$ is also a homeomorphism. So, $g \in \mbox{Hom}(M)$. Moreover, by construction we have $|g(x) - f(x)| < \mbox{diam}(K) < \varepsilon $ for all $ x \in W , $ and $ g(x) = f(x) $ for all $x \not \in W. $ Also the inverse maps satisfy
$|g^{-1}(x) - f^{-1}(x)| < \mbox{diam}(f^{-1}(K)) = \mbox{diam}(W) < \varepsilon$ for all $ x \in K, $ and $g^{-1}(x) = f^{-1}(x)$ for all $ x \not \in K. $
Therefore $\|g -f\|_{\mbox{Hom}(M)} < \varepsilon.$
Finally, let us check that $g^p|_K$ is topologically conjugated to $\Phi$:
$$g^p|_K = g|_{f^{p-1}(K)} \circ f^{p-1}|_ K = g|_{W} \circ f^{p-1}|_ K =$$
$$ \phi^{-1}\circ \psi \circ \phi \circ f \circ f^{p-1}|_K = $$ $$ \phi^{-1}\circ (\psi \circ \phi \circ f ^p|_K \circ \phi^{-1}) \circ \phi = \phi^{-1} \circ \Phi \circ \phi, $$
ending the proof of Lemma \ref{Lemma2b}.
\end{proof}
\begin{remark}
\label{RemarkPerturbInsideK}
\em In the proof of the dense property in Lemmas \ref{Lemma2} and \ref{Lemma2b}, once a periodic shrinking box $K$ is constructed with period $p \geq 1$, we only need to perturb the map $f$ inside $W \cup \bigcup_{j=0}^{p-1} f^j(K)$, where $W= f^{-1}(K)$ if $f$ is a homeomorphism, and $\mbox{int}(W) \supset f^{p-1}(K)$ otherwise. In both cases, by reducing $\delta>0$ from the very beginning, if necessary, we can construct $W$ such that $\mbox{diam}(W)< \varepsilon$ for a previously specified small $\varepsilon>0$.
\end{remark}
\begin{proof}[Proof of Theorems \ref{Theorem1} and \ref{Theorem3}]
From Lemmas \ref{Lemma2} and \ref{Lemma2b}, a generic map $f \in C^0(M)$ and also a generic $f \in \mbox{Hom}(M)$, has a periodic shrinking box $K$ such that the return map
$f^p|_K: K \rightarrow \mbox{int}(K) $ is conjugated to a model map $ \Phi \in {\mathcal H}$; that is, there exists a homeomorphism $ \phi: D^m \to K$
such that
$\phi^{-1} \circ f^p|_K \circ \phi = \Phi \in {\mathcal H}$.
Lemma \ref{LemmaMain} states that every map $\Phi \in {\mathcal H}$ has a $\Phi$-invariant mixing measure $\nu$ with infinite metric entropy for $\Phi$. Consider the push-forward measure $\phi _* \nu$, defined by
$(\phi_* \nu) (B):= \nu (\phi^{-1}(B \cap K))$ for all the Borel sets $B \subset M$.
By construction, $\phi_* \nu$ is supported on $K \subset M$.
Since $\phi$ is a conjugation between $\Phi$ and $f^p|_K$, the push-forward measure $\phi_* \nu$ is $f^p$-invariant and mixing for $f^p$ and moreover $h_{\phi_* \nu}(f^p) = + \infty$.
From $\phi_*\nu$, we will construct an $f$-invariant and $f$-ergodic measure $\mu$ supported on $\bigcup_{j=0}^{p-1} f^j(K)$, with infinite metric entropy for $f$. Precisely, for each Borel set $B \subset M$, define
\begin{equation} \label{eqn15} \mu(B):= \frac{1}{p} \sum_{j= 0}^{p-1} (f^j)_* (\phi_* \nu) (B \cap f^j(K)).\end{equation}
Applying Equality (\ref{eqn15}), and the fact that $\phi_* \nu$ is $f^p$-invariant and $f^p$-mixing, it is standard to check that $\mu$ is $f$-invariant and $f$-ergodic.
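For instance, the $f$-invariance of $\mu$ can be checked directly: writing $\nu_j := (f^j)_* (\phi_* \nu)$, so that $\mu = \frac{1}{p} \sum_{j=0}^{p-1} \nu_j$ by Equality (\ref{eqn15}), the $f^p$-invariance of $\nu_0 = \phi_* \nu$ yields
$$f_* \mu = \frac{1}{p} \sum_{j=0}^{p-1} f_* \nu_j = \frac{1}{p} \Big( \sum_{j=1}^{p-1} \nu_j + (f^p)_* (\phi_* \nu) \Big) = \frac{1}{p} \sum_{j=0}^{p-1} \nu_j = \mu.$$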
Since the metric entropy is an affine function of the invariant measure, we deduce that $$h_{\mu}(f^p) = \frac{1}{p}\sum_{j=0}^{p-1} h_{(f^j)_* (\phi_* \nu)} (f^p) = + \infty.$$
Finally, recalling that $ h_{\mu}(f^p) \leq p \, h_{\mu}(f)$ for any $f$-invariant measure $\mu$ and any natural number $p \geq 1$, we conclude that $h_{\mu}(f)= + \infty$.
\end{proof}
\section{Good sequences of periodic shrinking boxes} \label{sectionCleanSequences}
\noindent The purpose of this section is to prove Theorems \ref{Theorem2} and \ref{Theorem4}.
\begin{definition} \em
\label{definitionCleanSequence} Let $f \in C^0(M)$ and let
$K_1, K_2, \ldots, K_n, \ldots$ be a sequence of periodic shrinking boxes for $f$.
We call $\{K_n\}_{n \ge 1}$ \emph{good} if it has the following properties (see Figure \ref{FigureLemma3}):
\noindent $\bullet$ $\{K_n\}_{n \geq 1}$ is composed of pairwise disjoint boxes.
\noindent $\bullet$ There exists a natural number $p \geq 1$, independent of $n$, such that $K_n$ is a periodic shrinking box for $f$ with a period $p_n$, a multiple of $p$.
\noindent $\bullet$ There exists a sequence $\{H_n\}_{n \geq 0}$ of periodic shrinking boxes, all with period $p$, such that $K_n \cup H_n \subset H_{n-1} , \ K_n \cap H_n = \emptyset$ for all $n \geq 1$, and $\mbox{diam}(H_n) \rightarrow 0$ as $n \rightarrow + \infty$.
\end{definition}
\noindent {\bf Remark.} Definition \ref{definitionCleanSequence} implies that $ \bigcap_{n \geq 1} H_n = \{x_0\}$,
where $x_0$ is periodic with period $p$. Furthermore, for any $j \ge 0$ we have
$$d(f^j(K_n), f^j(x_0)) \leq \mbox{diam}(f^j(H_{n-1})) \leq \displaystyle \max_{0 \leq k \leq p-1} \mbox{diam}(f^k(H_{n-1})) \stackrel{\scriptscriptstyle n \to \infty}{\to} 0,$$
and thus
\begin{equation} \label{eqnUniformLimitCleanSequence} \lim_{n \rightarrow + \infty}\sup_{j \geq 0} d(f^j(K_n), f^j(x_0))= 0.\end{equation}
\begin{lemma} \label{lemma3} \label{lemma4}
Generic maps $f \in C^0(M)$, and generic $f \in \mbox{Hom}(M)$ if $m \geq 2$, have good sequences $\{K_n\}_{n \geq 1}$ of periodic shrinking boxes such that the return maps $f ^{p_n}|_{K_n}$ are topologically conjugated to model maps.
\end{lemma}
\begin{proof} To see the $G_{\delta}$ property assume that $f $ has a good sequence $\{K_n\}_n$ of periodic shrinking boxes. For each fixed $n$, the boxes $K_n$ and $H_n$ are also periodic shrinking with periods $p_n $ and $p$ respectively, for all $g$ in an open set in $C^0(M)$ or in $\mbox{Hom}(M)$ (see Definition \ref{DefinitionPerShrBox}). Taking the intersection of such open sets for all $n \geq 1$, we deduce that the same sequence $\{K_n\}$ is also a good sequence of periodic shrinking boxes for all $g$ in a $G_{\delta}$-set. Now, also assume that $f^{p_n}|_{K_n}$ is a model map for all $n \geq 1$. From Lemmas \ref{Lemma2} and \ref{Lemma2b}, for each fixed $n \geq 1$, the family of continuous maps $g$ such that the return map $g^{p_n}|_{K_n}$ is topologically conjugated to a model, is a $G_{\delta}$-set in $C^0(M)$ or in $\mbox{Hom}(M)$. The (countable) intersection of these $G_{\delta}$-sets, produces a $G_{\delta}$-set, as required.
To prove denseness, fix $f \in C^0(M)$ or $f \in \mbox{Hom}(M)$, and $\varepsilon>0$. We will construct $g$ in the $\varepsilon$-neighborhood of $f$ and a good sequence of periodic shrinking boxes $K_n$ for $g$ such that $g^{p_n}|_{K_n} =_{\phi} \Phi_n \in {\mathcal H}$ for all $n \geq 1$.
Generic maps $\widetilde f\in C^0(M)$ and generic $\widetilde f\in \mbox{Hom}(M)$ have a periodic shrinking box $H_0$
with period $p \geq 1$, such that $\widetilde f^p|_{H_0} $ is conjugate to a model map $ \Phi\in {\mathcal H}$
(Lemmas \ref{Lemma2} and \ref{Lemma2b}). Take such $\widetilde f$ in the $(\varepsilon/6)$-neighborhood of $f$.
Since $\widetilde f^p: H_0 \rightarrow \mbox{int}(H_0) \subset H_0$ is continuous, by the Brouwer Fixed Point Theorem
there exists a periodic point $x_0 \in \mbox{int}(H_0)$ of period $p$.
Lemma \ref{LemmaMain} and the argument at the end of the proofs of Theorems \ref{Theorem1} and \ref{Theorem3} show that the map $\widetilde f$ has an ergodic measure $\mu$ supported on $\bigcup_{j=0}^{p-1} \widetilde f^j(H_0)$ such that $h_{\mu}(\widetilde f) = + \infty$. Therefore, by the Poincar\'{e} Recurrence Lemma, there exists some recurrent point $y_1 \in \mbox{int}(H_0) $ for $\widetilde f$ such that $y_1 \neq x_0$ (see Figure \ref{FigureLemma3}).
Choose $\delta_1>0$ small enough and
construct a box $B_1$ such that $y_1 \in \mbox{int}(B_1)$, $\mbox{diam}(B_1) < \delta_1$, $x_0 \not \in B_1 \mbox{ and } B_1 \subset \mbox{int} (H_0). $ We repeat the proofs of the dense property of
Lemmas \ref{Lemma1} and \ref{Lemma1b}, using the recurrent point $y_1$ instead of $x_0$, and the box $B_1$ instead of $B$ (see Figure \ref{FigureLemma1}).
So, we deduce that there exists an $(\varepsilon/6)$-perturbation $f^*$ of $\widetilde f$, and a periodic shrinking box $K_1 \subset B_1$ for $f^*$, with some period $p_1\geq p$ (see Figure \ref{FigureLemma3}). Moreover, $f^*$ coincides with $ \widetilde f$ in $H_0 \setminus \mbox{int}(B_1)$
(recall Remark \ref{RemarkPerturbInsideB}). Therefore, the same periodic point $x_0$ of $\widetilde f$ survives for $f^*$, and the same initial box $H_0$ is still periodic shrinking with period $p$ for $f^*$. So, the compact sets of the family $\{{f^*}^j(H_0)\}_{j= 0,1,\ldots, p-1} $ are pairwise disjoint, and ${f^*}^p(H_0) \subset \mbox{int}(H_0)$. This implies that the period $p_1$ of the new periodic shrinking box $K_1$ for $f^*$, is a multiple of $p$.
Now, we apply the proofs of the dense property of
Lemmas \ref{Lemma2} and \ref{Lemma2b}, using the shrinking box $K_1$ instead of $K$ (see Figure \ref{FigureLemma2}).
We deduce that there exists an $(\varepsilon/6)$-perturbation $g_1$ of $f^*$, such that $K_1$ is still a periodic shrinking box for $g_1$ with the same period $p_1$, and moreover the return map $g_1^{p_1}|_{K_1}$ is now topologically conjugated to $\Phi_1 \in {\mathcal H} $. Taking into account Remark \ref{RemarkPerturbInsideK}, we can construct $g_1$ to coincide with $ f^*$ in the complement of $W_1 \cup \Big( \bigcup_{j=0}^{p_1-1}{f^*}^j(K_1) \Big)$, where $W_1 \supset {f^*}^{p_1-1}(K_1)$ is a box, small enough not to contain the periodic point $x_0$, and contained in the interior of the shrinking box $H_0$.
Therefore, $x_0$ and $H_0$ are still periodic with period $p$ for $g_1$.
To summarize, we have built the periodic shrinking boxes $H_0$ and $ K_1$ for a continuous map or homeomorphism $g_1$, with periods $p$ and $p_1$ respectively, where $p_1$ is a multiple of $p$, and a periodic point $x_0 \in \mbox{int}(H_0)$ (see Figure \ref{FigureLemma3}), such that: $$K_1 \subset H_0\setminus \{x_0\}, \ \ \ \ g_1^{p_1}|_{K_1} =_{\phi} \Phi_1 \in {\mathcal H} \ \ \mbox{ and } $$ $$\|g_1 - f\| \leq \|g_1 - f^*\| + \|f^* - \widetilde f\|+ \|\widetilde f - f\| < \frac{\varepsilon}{6} +\frac{\varepsilon}{6} + \frac{\varepsilon}{6} = \frac{\varepsilon}{2}.$$
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{Figure5.jpg}
\caption{Construction of a good sequence of periodic shrinking boxes. \label{FigureLemma3}}
\end{center}
\end{figure}
We proceed by induction on $n \geq 1$: assume that $H_0, \ldots, H_{n-1}$ and $ K_1, \ldots, K_{n}$ are periodic shrinking boxes (see Figure \ref{FigureLemma3}) of $g_n \in C^0(M)$ or $g_n \in \mbox{Hom}(M)$, with periods $p$ and $p_1, \ldots, p_n$ respectively, where each $p_i$ is a multiple of $p$, and that $x_{n-1} \in \mbox{int}(H_{n-1})$ is a periodic point of period $p$ for $g_n$. Assume also that $K_n \subset H_{n-1} \setminus\{x_{n-1}\}$, that for $1\le j \le n-1$
\begin{equation}\label{ind.eq}
H_j, K_j \subset H_{j-1}; \quad H_j \cap K_{j} = \emptyset; \quad
\mbox{diam}(H_j) < \frac{\varepsilon}{2^j}; \quad
g_n^{p_j}|_{K_j} \mbox{ is topologically conjugated to } \Phi_j \in {\mathcal H},
\end{equation}
and that we have a finite number of the previously constructed maps $g_1, \ldots, g_n$ such that
\begin{equation}\label{ind1.eq} \|g_1 - f\| < \frac{\varepsilon}{2}, \ \ \|g_{j} - g_{j-1}\| < \frac{\varepsilon}{2^j}, \qquad \forall \ j= 2, \ldots, n.\end{equation}
We will construct $g_{n+1}$ and the boxes $H_{n}$ and $ K_{n+1} $ that satisfy the above properties for $n+1$ instead of $n$, and such that for all $j= 1, \ldots, n$, the boxes $H_{j-1}$ and $K_j$ are still periodic shrinking for $g_{n+1}$ with the same periods $p, p_j$.
From the inductive hypothesis, $g_n$ has a periodic shrinking box $H_{n-1}$ of period $p$, a periodic point $x_{n-1} \in \mbox{int} (H_{n-1})$ of period $p $, and a periodic shrinking box $K_n \subset H_{n-1}\setminus \{x_{n-1}\}$ of period $p_n$, a multiple of $p$. We choose $0<\widetilde \delta_n < {\varepsilon}/{2^n}$ small enough, and construct a box ${\widetilde B_{n}} \subset H_{n-1}$ containing the periodic point $x_{n-1}$ in its interior, disjoint from $K_{n}$, and such that $\mbox{diam}({\widetilde B_{n}}) < \widetilde \delta_n$.
Repeating the proof of the density properties in Lemmas \ref{Lemma1} and \ref{Lemma1b} (putting $x_{n-1}$ instead of $x_0$,
and $\widetilde \delta_n >0$ small enough), we construct an $\varepsilon/(3 \cdot 2^{n+1})$-perturbation $\widetilde g_n$ of $g_n$
and a periodic shrinking box
$H_n \subset \mbox{int} (\widetilde B_{n})$
for $\widetilde g_n$. Moreover, since $x_{n-1}$
is a periodic point with period $p$, the period of $H_n$ can be made equal to $p$
(see Remark \ref{RemarkRecurrentPointIsPeriodic}). By construction $H_n \subset \widetilde B_n \subset H_{n-1}$ is disjoint from $K_n$
and from $\partial H_{n-1}$. To construct $\widetilde g_n$ we only need to modify $g_n$ inside $\widetilde B_n$
(recall Remark \ref{RemarkPerturbInsideB}). Therefore the same periodic shrinking boxes $H_0, H_1, \ldots, H_{n-1}$
and $K_1, K_2, \ldots, K_n$ of
$g_n$, are preserved for $\widetilde g_n$, with the same periods.
Now, as in the proof of Lemmas \ref{Lemma2} and \ref{Lemma2b}, we construct a new $\varepsilon/(3 \cdot 2^{n+1}) $-perturbation $ g_n^*$ of $\widetilde g_n$, such that $ {g^*_n}^{p}|_{H_n}$ is conjugated to a map in ${\mathcal H}$. To construct $ g^*_n$ we only need to modify $\widetilde g_n$ in $ \widetilde W_n \cup \bigcup_{j=0}^{p-1} \widetilde g_n^{j} (H_n)$, where $ \widetilde W_n $ is a neighborhood of $\widetilde g_n^{p-1}(H_n)$ that can be taken arbitrarily small by choosing $H_n$ small enough from the beginning (see Remark \ref{RemarkPerturbInsideK}).
Therefore we do not need to modify $\widetilde g_n$ or $g_n$ outside $H_{n-1}$ or inside $K_n$. We conclude that the same shrinking boxes $K_1, \ldots, K_n; H_0, \ldots, H_{n-1}$ for $\widetilde g_n$ and $g_n$, are still periodic shrinking for $g^*_n$, with the same periods and that ${g_n^*}^{p_j}|_{K_j}= \widetilde g_n^{p_j}|_{K_j} $ which is conjugated to $\Phi_j \in {\mathcal H}$ for all $j=1, \ldots, n$.
When modifying $g_n$ inside $H_{n-1} \setminus K_n$ to obtain $\widetilde g_n$ and $ g^*_n$, the periodic point $x_{n-1} $ of period $p$ for $g_n$, may not be preserved. But since $H_n \subset H_{n-1} \setminus K_n$ is a periodic shrinking box with period $p$ for $ g^*_n$, by the Brouwer Fixed Point Theorem, there exists a periodic point $x_n \in \mbox{int}(H_n)\setminus K_n$ for $g^*_n$, with the same period $p$.
Since the return map ${g^*_n}^p|_{H_n}$ is conjugated to a model, there exists an ergodic measure with infinite entropy (see Lemma \ref{LemmaMain}), supported on the $g^*_n$-orbit of $H_n$. Therefore, there exists a recurrent point $y_n \in \mbox{int}(H_n)$ such that $y_n \neq x_{n}$. We choose $\delta_n>0$ small enough, and a compact box $B_n \subset \mbox{int}(H_n)\setminus \{x_n\}$ such that $y_n \in \mbox{int} (B_n)$ and $\mbox{diam}( B_n) < \delta_n$. Repeating the above arguments, and if $\delta_n$ is small enough, we construct a new $\varepsilon/(3 \cdot 2^{n+1})$-perturbation $g_{n+1}$ of $ g^*_n$ and a box $K_{n+1} \subset \mbox{int}(B_n)$ that is periodic with some period $p_{n+1}$ for $g_{n+1}$, and such that $g_{n+1}^{p_{n+1}}|_{K_{n+1}}$ is conjugate to a map in
${\mathcal H}$.
To construct such a perturbation $g_{n+1}$ of $ g_n^*$, we only need to modify $g_n^*$ in $\mbox{int}(B_{n})$, and in a box $W_{n+1} $ containing ${g_n^*}^{p_{n+1}-1}(K_{n+1})$ in its interior (recall Remarks \ref{RemarkPerturbInsideB} and \ref{RemarkPerturbInsideK}). Recall that $W_{n+1}$ is a small set, provided that $\delta_n >0$ is small enough. Therefore, $g_{n+1}$ can be constructed so the point $x_{n}$ is still periodic with period $p$ for $g_{n+1}$, and the same boxes $H_0, H_1, \ldots, H_n,$ $K_1, \ldots, K_n$ are still shrinking periodic for $g_{n+1}$, with the same periods. Moreover, $g_{n+1}^{p_j}|_{K_j} = g_j^{p_j}|_{K_j} $ is conjugated to $ \Phi_j \in {\mathcal H}$ for all $j = 1, \ldots, n$.
In particular $ H_n$ is periodic shrinking with period $p$ for $g_{n+1}$, and it contains $K_{n+1}$ by construction. This implies that the period $p_{n+1}$ of $K_{n+1}$ is a multiple of $p$.
By construction we have,
$$\|g_{n+1}- g_n\| \leq \|g_{n+1}- g^*_n\| + \|g_n^* - \widetilde g_n\| + \|\widetilde g_n - g_n\| < 3 \cdot \frac{\varepsilon}{3 \cdot 2^{n+1}} = \frac{\varepsilon}{2^{n+1}}.$$
Moreover $\mbox{diam}(H_n) < \widetilde \delta_n< \varepsilon/ 2^n$.
We have constructed $g_{n+1}$ and the boxes $H_{n}$ and $ K_{n+1} $ that satisfy the inductive properties for $n+1$ instead of $n$, as required.
We have defined a sequence $\{g_n\}_{n \geq 1}$ of
continuous maps or homeomorphisms on $M$, and sequences $\{H_n\}_{n\geq1}$, $\{K_n\}$ of compact boxes such that the Properties \eqref{ind.eq} and \eqref{ind1.eq} are satisfied for all $n \geq 1$. Since
$\|g_{n+1} - g_{n}\| \leq {\varepsilon}/{2^{n+1}}$ for all $n \geq 1$
the sequence $\{g_n\}_{n \geq 1}$ is Cauchy. So, there exists a limit map $g$. Since $g_n$ is an $\varepsilon$-perturbation of $f$ for all $n \geq 1$, the limit map $g$ also is.
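Indeed, the bound on $\|g - f\|$ follows from the inductive estimates (\ref{ind1.eq}) and a geometric series:
$$\|g - f\| \leq \|g_1 - f\| + \sum_{j=2}^{\infty} \|g_j - g_{j-1}\| < \frac{\varepsilon}{2} + \sum_{j=2}^{\infty} \frac{\varepsilon}{2^{j}} = \varepsilon.$$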
Finally, by construction we have $g_k (x) = g_n(x)$ for all $x \in \bigcup_{j=0}^{p_n} g_n^j(K_n)$ and all $k \geq n$; hence, for every $n \geq 1$, the return map $g^{p_n}|_{K_n} = g_n^{p_n}|_{K_n}$ is topologically conjugated to $ \Phi_n \in {\mathcal H}$. Thus
$\{K_n\}_{n \geq 1}$ is a good sequence of periodic shrinking boxes for $g$, as required.
\end{proof}
\begin{remark}
\label{RemarkLemma4} \em As a consequence of Lemmas \ref{lemma4} and \ref{LemmaMain} (after applying the same arguments at the end of the proof of Theorems \ref{Theorem1} and \ref{Theorem3}), generic continuous maps and homeomorphisms $f$ have a sequence of ergodic measures $\mu_n$, each one supported on the $f$-orbit of a box $K_n$ of a good sequence $\{K_n\}_{n \geq 1}$ of periodic shrinking boxes for $f$, satisfying $h_{\mu_n}(f) = + \infty $ for all $n \geq 1$.
\end{remark}
Let ${\mathcal M}$ denote the metrizable space of Borel probability measures on a compact metric space $M$, endowed with the weak$^*$ topology. Fix a metric $\mbox{dist}^*$ in $\mathcal M$.
\begin{lemma}
\label{LemmaMeasureDistance}For all $\varepsilon >0$ there exists $\delta>0$ that satisfies the following property:
if $\mu, \nu \in {\mathcal M}$ and $\{ B_1, B_2, \ldots, B_r\} $ is a finite family of pairwise disjoint compact balls $ B_i \subset M$, and if \em
$\mbox{supp}(\mu) \cup \mbox{supp}(\nu) \subset \bigcup_{i=1}^r B_i ,$
and $ \mu( B_i) = \nu( B_i) , $ $ \mbox{diam}( B_i) < \delta $ for all $ i= 1,2, \ldots, r, $
\em then \em $\mbox{dist}^*(\mu, \nu) < \varepsilon.$
\end{lemma}
\begin{proof}
If $M= [0,1]$ the proof can be found for instance in \cite[Lemma 4]{CT2017}. If $M$ is any other compact manifold of finite dimension $m \geq 1$, with or without boundary, just copy the proof of \cite[Lemma 4]{CT2017} by substituting the pairwise disjoint compact intervals $I_1, I_2, \ldots, I_r \subset [0,1]$ in that proof, by the family of pairwise disjoint compact boxes $ B_1, B_2, \ldots, B_r \subset M$.
\end{proof}
\begin{proof}[Proofs of Theorems \ref{Theorem2} and \ref{Theorem4}]
For any $\varepsilon>0$, take $\delta>0$ as in Lemma \ref{LemmaMeasureDistance}.
Applying Lemma \ref{lemma4}, generic continuous maps or homeomorphisms $f $ have a good sequence of periodic shrinking boxes $\{K_n\}_{n \geq 1}$, and a sequence $\{\mu_n\}$ of ergodic $f$-invariant measures such that $h_{\mu_n}(f) = + \infty$ (see Remark \ref{RemarkLemma4}) and such that $
\mbox{supp}(\mu_n) \subset \bigcup_{j= 0}^{p_n-1} f^j(K_n),$ where $p_n = l_n \cdot p$, a multiple of $p$, is the period of the shrinking box $K_n$.
Taking into account that $\{f^j({K_n})\}_{0 \leq j \leq p_n-1}$ is a family of pairwise disjoint compact sets, and $f^{p_n}(K_n) \subset \mbox{int}(K_n)$, we obtain for each $j \in \{0,1,\dots,p_n\}$
$$ \mu_n(f^j(K_n)) = \mu_n(f^{-j}(f^j(K_n)))= \mu_n\big(f^{-j}(f^j(K_n))\cap \mbox{supp}(\mu_n) \big)= \mu_n(K_n).$$
Since
$1= \sum_{j=0}^{p_n-1} \mu_n(f^j(K_n)) = p_n \cdot \mu_n (K_n);$ we obtain
$$\mu_n(f^j(K_n)) = \mu_n(K_n) = \frac{1}{p_n} = \frac{1}{l_n \, p}, \qquad \forall \ j= 0,1, \ldots, p_n.$$
From Definition \ref{definitionCleanSequence}, there exists a periodic point $x_0$ of period $p $ such that $\lim _{n \rightarrow + \infty} \sup_{j \geq 0} \mbox{Hdist}(f^j(K_n), f^j(x_0)) = 0,$ where Hdist denotes the Hausdorff distance. Therefore, there exists $n_0 \geq 1$ such that
$ \mbox{Hdist} (f^j(K_n), f^j(x_0)) < {\delta'} $ for all $ j \geq 0 $ and for all $ n \geq n_0,$ where $\delta' < \delta/2$ is chosen such that
the family of compact balls $ B_0$, $ B_1$, \ldots, $ B_{p-1}$, centered at the points $ f^j(x_0)$ and with radius $\delta'$, are pairwise disjoint. We obtain
$
f^j(K_n) \subset B_{j \, \mbox{\footnotesize (mod } p)} $ for all $ j \geq 0 $ and for all $ n \geq n_0. $
Therefore,
$$\mu_n ( B_j) = \frac{1}{p}, \qquad \forall \ j=0,1, \ldots, p-1, \ \ \ \ \forall \ n \geq n_0.$$
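This value follows from a counting argument: for each $j$, the ball $B_j$ contains exactly $l_n$ of the $p_n$ pairwise disjoint boxes $f^i(K_n)$, namely those with $0 \leq i \leq p_n-1$ and $i \equiv j \ (\mbox{mod } p)$, so that
$$\mu_n(B_j) = \sum_{\substack{0 \leq i \leq p_n-1 \\ i \equiv j \ (\mathrm{mod}\ p)}} \mu_n\big(f^i(K_n)\big) = l_n \cdot \frac{1}{l_n \, p} = \frac{1}{p}.$$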
Finally, applying Lemma \ref{LemmaMeasureDistance}, we conclude
$\mbox{dist}^*(\mu_n, \mu_0) < \varepsilon $ for all $n \geq n_0,$
where $\mu_0 := (1/p) \sum_{j= 0}^{p-1} \delta_{f^j(x_0)}$
is the $f$-invariant probability measure supported on the periodic orbit of $x_0$, which has zero entropy.
\end{proof}
\section{Open questions}
If $f$ is Lipschitz then no invariant measure has infinite entropy, since its topological entropy is finite. The following question
arises: do Theorems \ref{Theorem1} and \ref{Theorem3} hold also for maps with more regularity than continuity but lower regularity than Lipschitz? For instance, do they hold for H\"{o}lder-continuous maps?
A priori, there is a chance of answering this question positively for one-dimensional H\"{o}lder-continuous endomorphisms, because in that case the topological entropy is generically infinite. This is a simple corollary of the arguments in \cite{FHT1}. Also for bi-H\"{o}lder homeomorphisms on manifolds of dimension 2 or larger, there is a chance of a positive answer, because their topological entropy is also generically infinite \cite{FHT}, \cite{FHT1}. In this article we focus only on the $C^0$-case, and leave for further research the eventual adaptation of our proofs, if such an adaptation is possible, to $C^{\alpha}$-maps or homeomorphisms with $0< \alpha <1$.
The hypothesis of Theorems \ref{Theorem1} and \ref{Theorem3} states that $M$ is a compact manifold. This raises the following question: do some of the results also hold in other compact metric spaces that are not manifolds? For instance, do they hold if the space is a Cantor set $K$?
If the aim were just to construct $f \in \mbox{Hom}(K)$ with ergodic measures with infinite metric entropy, the answer would be positive. But if the purpose were to prove that such homeomorphisms are generic in $Hom(K)$, the answer would be negative.
In fact, Theorem \ref{Theorem3} holds in particular for the 2-dimensional square $D^2:=[0,1]^2$. One of the steps of the proofs consists in constructing some fixed Cantor set $\Lambda \subset D^2$, and a homeomorphism $\Phi$ on $D^2$ that leaves $\Lambda$ invariant and possesses a $\Phi$-invariant ergodic measure supported on $\Lambda$ with infinite metric entropy (see Lemma \ref{LemmaMain} and Remark \ref{RemarkMainLemma}).
Since any pair of Cantor sets $K$ and $\Lambda$ are homeomorphic, we deduce that any Cantor set $K$ supports a homeomorphism $f$ and an $f$-ergodic measure with infinite metric entropy.
Nevertheless, the above phenomenon is not generic on a Cantor set $K$. On the one hand, there also exist homeomorphisms on $K$ with finite, and even zero, topological entropy. (Take for instance $f \in \mbox{Hom}(K)$ conjugated to the homeomorphism on the attractor of a Smale horseshoe, or to the attractor of the $C^1$-Denjoy example on the circle.) On the other hand, it is known that each homeomorphism on a Cantor set $K$ is topologically locally unique; i.e., it is conjugated to any of its small perturbations \cite{AGW}. Therefore, the topological entropy is locally constant in $\mbox{Hom}(K)$. We conclude that the homeomorphisms on the Cantor set $K$ with infinite metric entropy, which do exist, are not dense in $\mbox{Hom}(K)$; hence they are not generic.
\section{Introduction}
Following the successful demonstration of nanopore sequencing via engineered protein pores\cite{clarke2009continuous}, the next research frontier in nanopore physics is the development of solid-state nanopore devices with sequencing or diagnostic capability \cite{lindsay2016promises}. Solid-state pores are mechanically more robust, admit cheaper and more scalable fabrication, have greater compatibility with CMOS semiconductor technology, possess enhanced micro/nanofluidic integration potential \cite{miles2013single} and could potentially increase sensing resolution \cite{lindsay2016promises}. Yet, despite the great interest in solid-state pore devices, approaches for fabricating solid-state pores, especially with diameters below 10\,nm, are limited. The main challenge is a lack of scalable processes permitting integration of single solid-state pores with the other nanoscale elements required for solid-state sequencing schemes, such as transverse nanoelectrodes \cite{gierhart2008nanopore, ivanov2010dna}, surface plasmonic structures \cite{jonsson2013plasmonic, nicoli2014dna, pud2015self, belkin2015plasmonic, shi2018integrating} and micro/nanochannels \cite{zhang2015fabrication, zhang2018single, liu2018controlling, tahvildari2015integrating}. The main pore production approaches, such as milling via electron beams in a transmission electron microscope (TEM) \cite{storm2003fabrication} and focused-ion beam (FIB) milling \cite{lo2006fabrication, yang2011rapid, xia2018rapid}, use high-energy beam etching of the substrate material. While these techniques can produce sub-10\,nm pores with nm positioning precision, they require expensive tools and lack scalability.
In 2014 Kwok \emph{et al.}\cite{kwok2014nanopore, briggs2014automated} showed that by directly applying a voltage across an insulating membrane in electrolyte solution, they could form single nanopores down to 2\,nm in size. The applied voltage induces an electric field across the thin membrane strong enough to cause dielectric breakdown, leading to pore formation. The dielectric breakdown method is fast, inexpensive and potentially highly scalable, yet it has a critical disadvantage: the pore position is random. When a high trans-membrane voltage is applied, electric breakdown occurs at a ``weak'' location on the insulating membrane, a position determined randomly by the intrinsic inhomogeneity of the nitride film. As the pore can form anywhere on the membrane upon voltage application, the breakdown technique cannot form pores at precisely determined positions; creating multiple pores with well-defined spacing is likewise unfeasible. This is a serious limitation, particularly given that many solid-state sensing and sequencing schemes require precise pore positioning (e.g. between transverse electrodes \cite{gierhart2008nanopore, ivanov2010dna}, carbon nanotubes \cite{jiang2010fabrication}, graphene nanoribbons \cite{saha2011dna}, or within a micro/nanofluidic channel \cite{zhang2015fabrication, zhang2018single, liu2018controlling}). Multiple closely spaced pores show promise for translocation control \cite{pud2016mechanical, zhang2018single, liu2018controlling}. Critically, the breakdown approach may also inadvertently produce more than one nanopore over the membrane area \cite{carlsen2017solid, zrehen2017real, wang2018fabrication, ying2018formation}, leading to a drastic loss of signal-to-noise ratio and an inability to resolve single-molecule translocation events.
A recent variation of the breakdown approach uses a pipette tip to control voltage application \cite{arcadia2017situ}, increasing pore positioning precision to the micron scale (the pipette tip opening diameter is 2\,$\mu$m), but nanometer positioning precision is in fact required for many solid-state sequencing schemes, due to the small size of sensing elements required to interface with the pores. In addition, the pipette-tip approach does not prevent the potential formation of multiple pores over the still large (micron scale) region of voltage application.
We have developed a new approach for forming solid-state pores that combines the positioning advantages of particle-beam milling and the simplicity/low cost of the electric breakdown approach with the powerful imaging capabilities of Atomic Force Microscopy (AFM). In our approach, which we call Tip-Controlled Local Breakdown (TCLB), a conductive AFM tip is brought into contact with a nitride membrane and used to apply a local voltage to the membrane (figure~\ref{fig:1}). The local voltage induces electric breakdown at a position on the membrane determined by the AFM tip, forming a nanopore at that location, which we demonstrate via I-V measurement, TEM characterization and single-molecule translocation. Firstly, in TCLB, the nanoscale curvature of the AFM tip (r\,$\sim$\,10\,nm) localizes the electric field to a truly nanoscale region, eliminating the possibility of forming undesirable additional nanopores on the membrane as well as preventing the pore-free region of the membrane from being damaged by high electric fields. Secondly, TCLB can form pores with a spatial precision determined by the nanoscale positioning capability of the AFM instrument (an improvement in spatial precision from the microscale to the nanoscale). Thirdly, TCLB drastically shortens the fabrication time of a single nanopore from the order of seconds to the order of 10\,ms (an improvement of at least two orders of magnitude). Fast pore fabrication implies that arrays can be written with extremely high throughput (over $\sim$100 pores in half an hour, compared to $\sim$100 in a day \cite{arcadia2017situ}). Fourthly, as TCLB is AFM based, it can harness the topographic, chemical and electrostatic scanning modalities of an AFM to image the membrane before and after pore formation, enabling precise alignment of pores to existing features.
The scanning capabilities of the AFM tool can be used to automate fabrication of arrays of precisely positioned pores, with the successful fabrication of each pore automatically verified by a current measurement at the tip following voltage application. The precise control of the contact force, made possible by AFM, is essential for establishing reliable contact between the tip and the membrane. As AFMs are benchtop tools that operate in ambient conditions (e.g. at atmospheric pressure and normal indoor humidity), they are inherently low-cost and can be readily scaled. The ability to work in ambient conditions also implies that the approach is compatible with materials possessing sensitive chemical functionalization (e.g. functional layers that might be damaged by the vacuum conditions used in FIB and TEM). Finally, while classic dielectric breakdown requires that both sides of the membrane be in contact with aqueous electrolyte reservoirs, our approach requires that only one side of the membrane be in contact with a liquid reservoir, considerably easing the scaling of our method and increasing the speed of nanopore formation, as the AFM scanning takes place in a dry environment.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{Figure1.pdf}
\caption{Nanopore fabrication via tip controlled local electric breakdown (TCLB). A 3D schematic of the experimental setup depicting an AFM cantilever with a conductive tip positioned over a silicon nitride membrane. Application of a voltage pulse leads to formation of a nanopore at the tip position. Nanopore arrays can be readily formed via control of the AFM tip location, with \emph{in situ} current measurement at each pore verifying successful pore fabrication at that location. Note that our setup requires only one side of the membrane to be in contact with electrolyte, while the other side of the membrane is exposed to air.}
\label{fig:1}
\end{figure}
\section{Results}
\subsection{Nanopore Fabrication}
The schematic of the experimental setup is illustrated in figure~\ref{fig:1}. Using a bench-top AFM setup operated in ambient laboratory conditions, a conductive AFM tip is brought into contact with a thin nitride membrane sitting on top of an electrolyte reservoir. The conductive AFM tip is initially positioned a distance of $\sim$\,100\,$\mu$m from the membrane (figure~\ref{fig:2} a). To initiate pore fabrication, the tip approaches the membrane at a speed of $\sim$5\,$\mu$m/s until it engages with the surface (figure~\ref{fig:2} b). A small loading force (typically on the order of 1\,nN) is applied to the tip in order to minimize contact resistance between the tip and the membrane. This force is set sufficiently small to avoid tip-induced mechanical damage to the membrane. To initiate the breakdown process, the tip is positioned at the desired location in the scanning region and a single rectangular pulse is applied (figure~\ref{fig:2} c). The pulse has an amplitude of $V_\text{pulse}$ and a duration of $t_\text{pulse}$. The applied voltage pulse initiates the breakdown process and creates a nanoscale pore in the membrane at the tip position. After nanopore formation, the tip is retracted from the membrane (figure~\ref{fig:2} d). A representative breakdown event is shown in figure~\ref{fig:2} e-g. A voltage pulse of $V_\text{pulse}$=24\,V, $t_\text{pulse}$=100\,ms is applied (figure~\ref{fig:2} e). After voltage application, the current increases to $\sim$\,50\,pA and remains roughly constant (figure~\ref{fig:2} f inset). After a time delay of $t_\text{BD}$=36.2\,ms (figure~\ref{fig:2} f), the current increases sharply to a few nA, indicating successful breakdown and nanopore formation. If the pores are large, successful nanopore fabrication at the tip location can additionally be confirmed by a subsequent topographic AFM scan (figure~\ref{fig:2} h,i).
When the nanopore diameter is smaller than or comparable to the tip radius of curvature (d$\le$10\,nm), the nanopore may not be resolved in the AFM scan.
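The time-to-breakdown $t_\text{BD}$ is extracted from the recorded current trace as the delay between pulse onset and the sharp current rise. A minimal threshold-crossing sketch of this analysis in Python; the 1\,nA threshold and the synthetic trace are illustrative assumptions, not the exact values used in our analysis code:

```python
import numpy as np

def extract_t_bd(current, sample_rate_hz=5000.0, threshold_a=1e-9):
    """Time-to-breakdown (s): first sample where the current exceeds
    the threshold, or None if the pulse ended without breakdown."""
    idx = np.flatnonzero(np.asarray(current) > threshold_a)
    if idx.size == 0:
        return None
    return idx[0] / sample_rate_hz

# synthetic trace: ~50 pA leakage current, jumping to 3 nA near 36.2 ms
t = np.arange(0, 0.1, 1 / 5000.0)
i_trace = np.where(t < 0.0362, 50e-12, 3e-9)
print(extract_t_bd(i_trace))  # first crossing, close to 0.0362 s
```

In practice the leakage level before breakdown varies between sites, so a robust version would set the threshold relative to the pre-breakdown baseline rather than as an absolute current.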
We have developed a custom script enabling automatic control of the pore fabrication process. Using this script we can readily create pore arrays, iterating the single-pore formation process over a $5\times5$ grid with the pores spaced evenly by 500\,nm. Using the same tip, we have successfully fabricated over 300 nanopores on the same membrane, demonstrating the scalability of our TCLB technique (see supplementary S2 for more information).
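The control flow of such an automation script can be sketched as follows. The instrument calls (`move_tip`, `engage`, `apply_pulse`, `retract`) are hypothetical stand-ins for the actual Nanoscript routines, stubbed out here so the loop is runnable:

```python
# Sketch of the automated array-fabrication loop. The instrument calls
# below are hypothetical stand-ins for Nanoscript routines, not a real API.
def move_tip(x_nm, y_nm):
    pass  # stub: position the tip at (x, y) on the membrane

def engage(force_nN):
    pass  # stub: bring the tip into contact at ~1 nN load

def retract():
    pass  # stub: lift the tip off the membrane

def apply_pulse(v_pulse, t_pulse_s):
    return [50e-12, 50e-12, 3e-9]  # stub: fake current trace (A) with breakdown

def fabricate_array(rows=5, cols=5, pitch_nm=500,
                    v_pulse=15.0, t_pulse_s=0.1, i_threshold=1e-9):
    """Raster over a rows x cols grid, pulse at each site, and record
    whether the current spike signalled successful pore formation."""
    results = {}
    for r in range(rows):
        for c in range(cols):
            move_tip(c * pitch_nm, r * pitch_nm)
            engage(force_nN=1.0)
            trace = apply_pulse(v_pulse, t_pulse_s)
            retract()
            results[(r, c)] = max(trace) > i_threshold
    return results

print(sum(fabricate_array().values()))  # 25 with the always-breakdown stub
```

The per-site success flag is what allows failed attempts (as in figure~\ref{fig:3}) to be logged and, if desired, retried without operator intervention.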
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{Figure2.pdf}
\caption{Fabrication process of a single nanopore by conductive tip induced local electric breakdown. (a) Schematic showing the conductive AFM tip located above a thin nitride membrane. The bottom side of the membrane is in contact with electrolyte. (b) To minimize contact resistance between the tip and membrane, the tip is pressed against the membrane in contact mode. (c) A voltage pulse is applied across the membrane through the tip, initiating the breakdown process, resulting in the formation of a single nanopore. (d) Tip is retracted from the membrane once a nanopore is formed. The \textbf{voltage pulse} (e) and the \textbf{current} across the membrane (f) during a typical nanopore fabrication event. The membrane thickness is 20\,nm, the pulse height $V_\text{pulse}=24$\,V, the pulse width $t_\text{pulse}=100$\,ms and the tip radius is $10 \pm 5$\,nm. (g) TEM image of a 9.2\,nm pore corresponding to the current and voltage trace shown in e-f. (h) AFM scan of a larger sized single nanopore fabricated on nitride membrane using TCLB with accompanying topographic scans of bare membrane (i-red, surface roughness $Ra=0.66$\,nm) and across the pore (i-blue). Note that small nanopores (d$\le$10 nm) may not show up on an AFM scan due to the tip radius.}
\label{fig:2}
\end{figure}
\subsection{Probing the breakdown threshold}
Our automated pore fabrication protocol enables efficient variation of process parameters to optimize pore fabrication. In particular, we vary the pulse amplitude across the nanopore array to probe the threshold at which membrane breakdown occurs. A pulse train of five subsets, each containing five rectangular pulses of fixed duration (100\,ms) but increasing amplitude (11\,V to 15\,V, in increments of 1\,V), is applied across the membrane (figure~\ref{fig:3} a, blue trace). Each pulse is applied to a different location on the membrane. The detected current is shown in figure~\ref{fig:3} b (trace in red). The locations are arrayed spatially in a $5\times5$ square grid, with the pulse location in the array given by figure~\ref{fig:3} g. The fabrication process starts from location A1 and ends at location E5, rastering in the $y$ direction (figure~\ref{fig:3} g, A1$\rightarrow$A5, B1$\rightarrow$B5, C1$\rightarrow$C5, D1$\rightarrow$D5, E1$\rightarrow$E5). The spacing between each fabrication site is 500\,nm. Spikes in the detected current, which occur for pulse amplitudes greater than $13$\,V, indicate successful electric breakdown. At $V_\text{pulse}$=14\,V, 2 out of 5 attempts induce breakdown. A further increase of the voltage to 15\,V leads to a 100\% breakdown probability (5 out of 5). Magnified views of a no-breakdown event and a successful breakdown event are shown in figure~\ref{fig:3} (c-f), corresponding to locations A1 ($V_\text{pulse}$=11\,V) and D4 ($V_\text{pulse}$=14\,V).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{Figure3.pdf}
\caption{Automatic probing of pore fabrication conditions. (a) The voltage pulse train applied to different membrane locations and (b) the resulting current. (c-d) Magnified view of voltage pulse (c) and resulting current (d) that does not correspond to pore fabrication. (e-f) Magnified view of voltage pulse (e) and resulting current (f) that does correspond to pore fabrication. (g) Pore formation conditions across the 5$\times$5 array. Location A1 corresponds to (c-d); Location D4 corresponds to (e-f).}
\label{fig:3}
\end{figure}
\subsection{TEM characterization}
TEM imaging allowed for a detailed characterization of the nanopores made by TCLB. Figure~\ref{fig:4} shows three TEM micrographs of nanopore arrays. In agreement with our AFM settings (figure~\ref{fig:3} g), the nanopores are spaced evenly by 500\,nm in an array format. Figure~\ref{fig:4} a and b show a 3$\times$3 nanopore array fabricated using $V_\text{pulse}$= 15\,V, $t_\text{pulse}$= 100\,ms. Figure~\ref{fig:4} c and d show two nanopore arrays made on a new membrane with a new tip under exactly the same fabrication conditions ($V_\text{pulse}$= 15\,V, $t_\text{pulse}$= 100\,ms). Despite using different tips and membranes (12-14\,nm thick) from different chips, nanopores fabricated by TCLB with the same parameters have similar diameters (below or close to 5\,nm).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{Figure4.pdf}
\caption{TEM characterization of nanopore arrays. (a) TEM micrograph of a nanopore array containing 9 nanopores. Nanopores are located at the center of the dashed circles. The pore-to-pore spacing is $\sim$500\,nm. (b) Zoomed-in TEM micrograph of a nanopore with an opening diameter of 4.1\,nm. (c)-(d) TEM micrograph of nanopore arrays fabricated on a different membrane from (a). Insets showing magnified micrographs of different nanopores with diameter close to or under 5\,nm. Fabrication condition: $V_\text{pulse}=15$\,V, $t_\text{pulse}=100$\,ms, membrane thickness $l=$12-14\,nm, tip radius $r=10\pm5$\,nm. Additional examples of nanopore arrays are shown in supplementary figure~S3.}
\label{fig:4}
\end{figure}
\subsection{Pore Formation Mechanism}
\subsubsection{Weibull versus Log-normal}
Nanopore fabrication time (time-to-breakdown, $t_\text{BD}$) can provide insight into the pore formation mechanism. Nanopores fabricated via classic dielectric breakdown have a time-to-breakdown following a Weibull probability distribution \cite{briggs2015kinetics, arcadia2017situ, yanagi2018two}. The Weibull distribution is used extensively to model the time-to-failure of semiconductor devices \cite{dissado1984weibull, degraeve1995consistent}. It arises from the ``weakest-link'' nature of the typical dielectric breakdown process, where breakdown happens at the weakest position over a large membrane area: the nanopore fabrication time is dominated by the time to form a pore at this weakest position.
In contrast, we find that our time-to-breakdown distribution, obtained from forming over 300 nanopores using our automatic process, yields better agreement with a \emph{log-normal} probability distribution. Figure \ref{fig:5} shows the cumulative distribution of time-to-breakdown plotted with a log-normal scaling. In this form, data distributed according to a log-normal distribution follows a straight line. Our time-to-breakdown results, linearized by this rescaling, are thus clearly consistent with a log-normal distribution. In figure \ref{fig:S4}, we plot the same results rescaled appropriately for a Weibull, and it is apparent that the Weibull is not as good a description. See supplementary materials section 4 for more detail on log-normal, Weibull distribution and appropriate rescalings (probability plot forms).
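The log-normal rescaling used in figure~\ref{fig:5} amounts to plotting standard-normal quantiles of the empirical CDF against $\ln t_\text{BD}$; log-normal data then falls on a straight line whose slope and intercept recover the underlying parameters. A minimal sketch with a synthetic sample (the sample parameters below are illustrative, not fitted values from our data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic time-to-breakdown sample in ms (illustrative parameters)
t_bd = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=308)

# empirical CDF plotting positions and their standard-normal quantiles
t_sorted = np.sort(t_bd)
cdf = (np.arange(1, t_sorted.size + 1) - 0.5) / t_sorted.size
z = stats.norm.ppf(cdf)

# log-normal data is linear in (ln t, z); slope and intercept recover the
# underlying normal parameters: sigma = 1/slope, mu = -intercept/slope
slope, intercept, r, _, _ = stats.linregress(np.log(t_sorted), z)
print(f"sigma ~ {1/slope:.2f}, mu ~ {-intercept/slope:.2f}, r = {r:.3f}")
```

A correlation coefficient $r$ close to 1 on this probability plot is the quantitative counterpart of the visual linearity in figure~\ref{fig:5}.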
The better agreement with a log-normal distribution suggests that the physical mechanism of pore formation is different for TCLB than for classic breakdown. Under tip control, the membrane location where dielectric breakdown occurs is set by the tip position, and is thus highly defined rather than random. In this case the statistics of membrane breakdown is no longer a weakest-link problem (i.e. determined by the time to breakdown of some randomly located ``weak point''), but instead is determined by the degradation of a ``typical'' location on the membrane reflecting average film properties. Theoretical and experimental work demonstrates that the overall time-scale of a degradation process arising from the multiplicative action of many small degradation steps (regardless of physical mechanism) can be modelled via a log-normal distribution \cite{peck1974reliability, berman1981time, lloyd2005simple, mcpherson2010reliability}. Possible degradation mechanisms for our pore-formation process include electromigration, diffusion and corrosion \cite{strong2009reliability}.
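The multiplicative-degradation picture can be illustrated with a toy Monte Carlo: if each of many small, independent degradation steps rescales the lifetime by a random factor, then $\ln t_\text{BD}$ is a sum of many i.i.d. terms and is approximately normal by the central limit theorem, i.e. $t_\text{BD}$ is approximately log-normal. A sketch under these toy assumptions (all numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pores, n_steps = 5000, 200
# each degradation step rescales the remaining lifetime by a random factor;
# the product of many independent factors produces a log-normal spread
factors = rng.uniform(0.9, 1.1, size=(n_pores, n_steps))
times = 20.0 * factors.prod(axis=1)  # ms; the 20 ms base scale is arbitrary

# ln(times) is a sum of 200 i.i.d. terms, hence approximately normal (CLT):
# its sample skewness should be close to zero
print(f"skew of ln(t) = {stats.skew(np.log(times)):.3f}")
```

This toy model is agnostic to the physical identity of the degradation steps, which is consistent with the observation that several mechanisms (electromigration, diffusion, corrosion) all lead to log-normal lifetimes.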
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Figure5.pdf}
\caption{Log-normal probability plot of time-to-breakdown ($t_\text{BD}$) for a total of 308 nanopores under different pulse voltages. (a) Cumulative distribution of $t_\text{BD}$ presented with a log-normal rescaling under the following conditions: $V_\text{pulse}=$15\,V, $t_\text{pulse}=$100\,ms, membrane thickness $l = $12--14\,nm. The average nanopore fabrication time is $\langle t_\text{BD} \rangle=20.9\pm 1.4$\,ms. (b)--(d) Cumulative distributions of $t_\text{BD}$ with $V_\text{pulse}=$16, 17, 18\,V respectively. The dashed lines give the best fit to a log-normal distribution. All experiments are performed with the same tip on one membrane. Tip radius of curvature: $\sim$10\,nm. Membrane thickness: 12--14\,nm. Window size: 50$\times$50\,$\mu$m$^{2}$. (See supplementary section S4 for more details regarding the log-normal distribution, the Weibull distribution and probability plots.)}
\label{fig:5}
\end{figure}
\subsubsection{Voltage dependence of time-to-breakdown}
In figure \ref{fig:6} a we show the mean time-to-breakdown ($\langle t_\text{BD} \rangle$) versus voltage on a semi-log scale. The mean time-to-breakdown decreases exponentially with voltage. This behaviour is predicted by the \emph{E}-model of time dependent dielectric breakdown (TDDB) \cite{mcpherson1998underlying}, which predicts that the mean time-to-breakdown should depend exponentially on the local electric field (proportional to applied voltage at the tip). The \emph{E}-model arises fundamentally from a thermochemical \cite{mcpherson1998underlying, mcpherson2003thermochemical} rather than a direct tunnelling mechanism (Fowler-Nordheim tunnelling) \cite{mcpherson1998comparison}. In thermochemical breakdown, high voltage across the dielectric material induces strong dipolar coupling of local electric field with intrinsic defects in the dielectric. Weak bonding states can be thermally broken due to this strong dipole-field coupling, which in turn serves to lower the activation energy required for thermal bond-breakage and accelerates the degradation process, resulting in a final dielectric breakdown \cite{mcpherson1998underlying, mcpherson2003thermochemical}.
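On a semilog plot the \emph{E}-model prediction $\langle t_\text{BD} \rangle = t_0 e^{-\gamma V}$ is a straight line, so the field-acceleration factor $\gamma$ follows from a linear fit of $\ln \langle t_\text{BD} \rangle$ versus $V$. A minimal sketch with synthetic means: only the 20.9\,ms point echoes the measured average at 15\,V (figure~\ref{fig:5} a); the remaining values are illustrative, not our data.

```python
import numpy as np

# synthetic (voltage, mean time-to-breakdown) pairs; only the 20.9 ms
# point echoes the measured value at 15 V, the rest are illustrative
v = np.array([15.0, 16.0, 17.0, 18.0])      # V
t_mean = np.array([20.9, 9.8, 4.6, 2.2])    # ms

# E-model: <t_BD> = t0 * exp(-gamma * V)  =>  ln<t_BD> is linear in V
slope, ln_t0 = np.polyfit(v, np.log(t_mean), 1)
gamma = -slope
print(f"gamma = {gamma:.2f} 1/V")  # prints: gamma = 0.75 1/V
```

The fitted $\gamma$ quantifies how strongly the pulse voltage accelerates breakdown; comparing it across membrane thicknesses converts the fit into a field (V/nm) dependence.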
We have also investigated whether we can use tip-controlled breakdown to produce pores in thicker (20\,nm) nitride membranes. We are able to form pores with high probability, but with a corresponding increase in the required voltage, as demonstrated in figure~\ref{fig:6} b. The mean time-to-breakdown as a function of voltage in the thicker membranes also follows the \emph{E}-model (figure~\ref{fig:6} c).
In figure~\ref{fig:6} d we compare the average time-to-breakdown for our tip-controlled approach versus classic dielectric breakdown. We find that our approach gives pore formation times two orders of magnitude lower than classic breakdown, by comparison with a wide range of experimental studies \cite{kwok2014nanopore, briggs2015kinetics, yanagi2014fabricating, pud2015self, arcadia2017situ, ying2018formation, yanagi2018two, yamazaki2018photothermally, bandara2019push} exploring classic breakdown for different film thicknesses (10-30\,nm, 75\,nm), pH (2-13.5) and voltages (1-24\,V).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Figure6.pdf}
\caption{(a) Semilog plot of the mean breakdown time ($\langle t_\text{BD} \rangle$) versus voltage for 12-14\,nm thick nitride membrane with an exponential fit. (b) Breakdown probability versus voltage for 12-14\,nm and 20\,nm thick nitride. (c) Semilog plot of the mean breakdown time versus voltage for a 20\,nm thick nitride membrane. (d) Comparison of average nanopore fabrication time of this work versus range of studies exploring classical breakdown \cite{kwok2014nanopore, briggs2015kinetics, yanagi2014fabricating, pud2015self, arcadia2017situ, ying2018formation, yanagi2018two, yamazaki2018photothermally}. Note that if the average fabrication time is not given or can not be estimated from the reference, a range is then plotted for comparison (see more details in supplementary section S5).}
\label{fig:6}
\end{figure}
\subsection{Single Molecule DNA Detection}
Lastly, we show that nanopores produced using our tip-controlled approach can be used for single-molecule detection. Figure \ref{fig:7} shows results for 100\,bp ladder DNA (100-2000\,bp) translocating through a 9.9\,nm pore ($V_\text{pulse}$=20\,V, $t_\text{pulse}$=150\,ms, membrane thickness 10\,nm, tip radius $r$=10$\pm$5\,nm). To perform single-molecule detection, the chip is transferred to a fluidic cell with DNA-containing 1\,M KCl buffer added to the $cis$ chamber and DNA-free buffer added to the $trans$ chamber. A potential drop of 200\,mV is applied across the nanopore, so that DNA molecules are pulled from $cis$ to $trans$ through the pore.
Figure \ref{fig:7} a-b shows typical signatures of ionic blockades induced by translocating DNA, composed of a mixture of single- and multi-level events. A histogram of current blockades, including 587 translocation events measured with the same nanopore, is shown in figure \ref{fig:7} d. Prior to performing this DNA translocation experiment, an I-V trace was obtained to characterize the pore size (figure \ref{fig:7} e), which yielded a nanopore resistance of 23.0\,M$\Omega$. The strong linearity between current and applied voltage demonstrates that the TCLB-fabricated nanopore has excellent Ohmic behaviour. Using a membrane thickness $l$=10\,nm and an electrolyte conductivity $\sigma$=10\,S/m, the pore conductance model\cite{kowalczyk2011modeling} gives an estimated effective pore diameter of 9.9\,nm.
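The effective diameter follows from the conductance model of \cite{kowalczyk2011modeling}, $G = \sigma\left[4l/(\pi d^{2}) + 1/d\right]^{-1}$, which combines the cylindrical channel resistance with the access resistance. Setting $x=1/d$ turns this into the quadratic $(4l/\pi)x^{2} + x - \sigma R = 0$; a short sketch solving it with the numbers above:

```python
import math

def pore_diameter(R_ohm, l_m, sigma_S_per_m):
    """Effective pore diameter from the conductance model
    G = sigma * [4l/(pi d^2) + 1/d]^(-1) (channel + access resistance).
    With x = 1/d this is the quadratic (4l/pi) x^2 + x - sigma*R = 0."""
    a = 4.0 * l_m / math.pi
    x = (-1.0 + math.sqrt(1.0 + 4.0 * a * sigma_S_per_m * R_ohm)) / (2.0 * a)
    return 1.0 / x

d = pore_diameter(R_ohm=23.0e6, l_m=10e-9, sigma_S_per_m=10.0)
print(f"{d * 1e9:.1f} nm")  # prints: 9.9 nm
```

The positive root is taken since $d>0$; for thin membranes the access-resistance term $1/d$ contributes comparably to the channel term, so neglecting it would noticeably underestimate the diameter.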
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{Figure7.pdf}
\caption{DNA translocation through a nanopore fabricated using TCLB ($V_\text{pulse}$=20\,V, $t_\text{pulse}$=150\,ms, membrane thickness 10\,nm, tip radius $r$=10$\pm$5\,nm). (a) Typical ionic current traces of DNA translocating through a 9.9\,nm pore in a 10\,nm thick nitride membrane. Experiment was performed with 0.5\,$\mu$g/mL 100\,bp ladder DNA (100-2000\,bp) in 1\,M KCl buffered with 10\,mM Tris, 1\,mM EDTA, at pH=8.0. Observed events are labelled as 1-3, corresponding to different DNA configurations/folding states while translocating through the pore. (b) Zoomed-in current trace of event 1, 2 and 3, corresponding to the cartoon translocation types shown in (c). (d) Current blockade histogram including over 500 events. (e) I-V characterization of the nanopore. The nanopore displays an Ohmic I-V curve with a resistance of 23.0\,M$\Omega$, leading to an effective pore diameter of 9.9\,nm. Power spectral density (PSD) of the nanopore is shown in supplementary figure \ref{fig:S5}. }
\label{fig:7}
\end{figure}
\section{Discussion and Conclusion}
In summary, we show that tip-controlled local breakdown can be used to produce pores with nm positioning precision (determined by the AFM tip), high scalability (hundreds of pores over a single membrane) and fast formation (100$\times$ faster than classic breakdown) using a bench-top tool. These capabilities will greatly accelerate the field of solid-state nanopore research. In particular, nm positioning is crucial for a wide range of sensing and sequencing applications where there is a need to interface nanopores with additional nanoscale elements. Sequencing approaches based on tunneling require positioning a pore between two electrodes \cite{gierhart2008nanopore, ivanov2010dna}. Plasmonic devices with interfaced pores require positioning pores at the optimal distance ($10-20$\,nm) from nano-antennas in order to maximize plasmonic coupling \cite{jonsson2013plasmonic, nicoli2014dna, pud2015self, belkin2015plasmonic, shi2018integrating}. In devices utilizing nanofluidic confinement (e.g. nanochannels, nanocavities), pores need to be aligned with etched sub-100\,nm features \cite{zhang2015fabrication, zhang2018single, liu2018controlling, larkin2017length}. In addition to producing pores, our AFM-based approach can exploit multiple scanning modalities (topographic, chemical, electrostatic) to map the device prior to pore production and so align pores precisely to existing features.
TCLB can be integrated into an automated wafer-scale AFM system, ensuring nm alignment of each pore with simultaneous mass pore production. Thus, not only can TCLB drive novel nanopore sensing applications, it can simultaneously drive the industrial scaling of these applications. As an example, consider combining TCLB with photo-thermally assisted thinning\cite{yamazaki2018photothermally, gilboa2018optically, ying2018formation}. In a photo-thermally assisted thinning process, a laser beam is focused on a silicon nitride membrane, leading to formation of a locally thinned-out region, with thinning achieved down to a few nm \cite{yamazaki2018photothermally}. If only one thinned well is formed, classic dielectric breakdown will tend to form a pore at this ``thinned-out'' weakest position. Classic dielectric breakdown, however, is limited to forming only \emph{one} pore in \emph{one} well across an entire membrane. In contrast, TCLB can position pores in each member of a large-scale array of photo-thermally thinned wells, with the wells packed as closely as the photo-thermal thinning technique allows. Specifically, AFM topographic scans will determine the center point of each well and TCLB will then form pores at these positions.
TCLB may also have applications beyond nanopore fabrication, providing an AFM-based approach to locally characterize the dielectric strength of thin membranes and 2D materials. This application, which could be useful for the MEMS and the semiconductor industry, could enable mapping of dielectric strength across large membranes and semiconductor devices, leading to enhanced material performance (e.g. for high-$\kappa$ gate dielectrics \cite{okada2007dielectric}).
\section{Methods}
\indent \textbf{Materials.}
The nitride membranes we use are commercially available from Norcada (part \# NBPT005YZ-HR and NT002Y). The membrane is supported by a circular silicon frame (2.7\,mm diameter, 200\,$\mu$m thickness) with a window size of 10$\times$10, 20$\times$20 or 50$\times$50\,$\mu$m$^{2}$. The membrane thickness is 10\,nm, 12-14\,nm or 20\,nm.
The AFM probes used are obtained from Adama Innovations (part \# AD-2.8-AS) and have a tip radius of curvature of 10$\pm$5\,nm.
Nanopore fabrication experiments are performed in 1\,M sodium perchlorate dissolved in propylene carbonate (PC), with a conductivity of 2.82\,S/m \cite{daprano1996conductance}. DNA translocation experiments are performed in a 3D-printed fluidic cell with 100\,bp ladder DNA (Sigma-Aldrich, 100-2000\,bp) diluted to a final concentration of 0.5\,$\mu$g/mL in 1\,M KCl buffered with 10\,mM Tris and 1\,mM EDTA at pH=8.0.
\noindent \textbf{Instrumentation.} The atomic force microscope used in our experiments is a MultiMode Nanoscope III from Digital Instruments (now Bruker). Nanoscript is used for automated fabrication of nanopores. The TEM images are acquired using the JEM-2100F TEM from JEOL.
\noindent \textbf{Current Data Acquisition and Analysis.} The current signal during nanopore fabrication is recorded using a custom current amplifier with 5\,kHz detection bandwidth. Analysis of dielectric breakdown events in the current signal was performed using a custom Python code. The ionic trans-pore current during DNA translocations was recorded using an Axopatch 200B with a 250\,kHz sampling rate, low-pass filtered at 100\,kHz. DNA translocation data analysis was carried out using Transalyzer\cite{plesa2015data}.
\begin{acknowledgement}
This work is financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants Program (Grant No. RGPIN 386212 and RGPIN 05033), Idea to Innovation (I2I) Grant (I2IPJ 520635-18) and joint CIHR funded Canadian Health Research Projects grant (CIHRR CPG-140199). The authors acknowledge useful discussions with Prof. Robert Sladek and Hooman Hosseinkhannazer. The authors acknowledge Norcada for material supplies (nitride membranes). The authors acknowledge Facility for Electron Microscopy Research (FEMR) at McGill and Centre de Caract\'erisation Microscopique des Mat\'eriaux (CM)$^{2}$ at \,Ecole Polytechnique de Montr\'eal for access to electron microscopes.
\end{acknowledgement}
\newpage
\begin{suppinfo}
\renewcommand{\thefigure}{S\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{S\arabic{table}}
\setcounter{table}{0}
\renewcommand{\theequation}{S\arabic{equation}}
\setcounter{equation}{0}
\subsection{S1-Experimental Setup}
Here we discuss the detailed experimental setup for TCLB nanopore fabrication. The fluidic cell assembly and accompanying schematic are shown in figure \ref{fig:S1} a and b. Prior to pore fabrication, the circular nitride TEM window is mounted in the fluidic cell with the cell body filled by electrolyte. The cell is then placed inside the AFM headstage (figure \ref{fig:S1} c). Alignment of the conductive AFM tip to the nitride membrane is monitored via two optical microscopes with an external light source.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{FigureS1.pdf}
\caption{Experimental setup for nanopore fabrication. (a) Assembled fluidic cell with nitride window mounted. (b) Fluidic cell assembly with nitride windows sandwiched between the top cover and O-ring. (c) AFM setup. The whole setup is mounted on a vibration isolation table.}
\label{fig:S1}
\end{figure}
\newpage
\subsection{S2-Reliability and scalability of TCLB}
To demonstrate the reliability and scalability of the TCLB technique, we fabricated over 300 nanopores using the same AFM tip on one membrane. All data presented in figure \ref{fig:5} are collected from a total of 308 nanopores, fabricated using a single tip on the same membrane window (12-14\,nm thick, window size 50$\times$50\,$\mu$m$^{2}$) with a total time of around 30\,min. The location of the arrays (17 in total) in relation to the window position are mapped out in figure \ref{fig:S2} a. Each array contains a maximum possible of 25 nanopores (5$\times$5 array). Figure \ref{fig:S2} b shows another example of 11 arrays (in total 68 nanopores) located on a 20\,nm thick membrane, window size 20$\times$20\,$\mu$m$^{2}$.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{FigureS2.pdf}
\caption{Mapping the location of nanopore arrays across the membrane. (a) Schematic showing the position of 17 nanopore arrays on a 50$\times$50\,$\mu$m membrane window (12--14\,nm thick). Each array contains at most 25 nanopores (5$\times$5 array). A total of 353 nanopores were successfully fabricated under various fabrication conditions. (b) The position of 11 nanopore arrays on a 20$\times$20\,$\mu$m membrane window (20\,nm thick). A total of 68 nanopores were successfully fabricated. }
\label{fig:S2}
\end{figure}
\newpage
\subsection{S3-TEM characterization}
Two additional TEM micrographs of nanopore arrays are presented in figure \ref{fig:S3}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{FigureS3.pdf}
\caption{TEM micrographs of nanopore arrays. (a) TEM micrograph showing a 3$\times$5 nanopore array fabricated on a 20\,nm nitride membrane. All nanopores are fabricated using the same conditions: $V_\text{pulse}$=25\,V, $t_\text{pulse}$=100\,ms. (b) TEM micrograph showing a 2$\times$5 nanopore array fabricated on a 20\,nm nitride membrane. The pores in the top row are fabricated using $V_\text{pulse}$=24\,V and $t_\text{pulse}$=100\,ms. The yield is 100\% (5 out of 5). The pores in the bottom row are fabricated using the same pulse width ($t_\text{pulse}$=100\,ms) but lower voltage ($V_\text{pulse}$=23\,V). The yield is only 60\% (red circles indicate failed breakdown attempts). The scale bar is 500\,nm. Note that TEM images are taken slightly under-focused in order to better visualize the nanopores.}
\label{fig:S3}
\end{figure}
\newpage
\subsection{S4-Log-normal distribution and Weibull distribution}
The probability density function (\textit{pdf}) and cumulative distribution function (\textit{cdf}) of the \emph{log-normal} distribution and the 2-parameter \emph{Weibull} distribution are given by:\\
\text{Log-normal \textit{pdf}:}
\begin{equation}
f(t; \mu, \sigma)=\frac{1}{t \sigma \sqrt{2 \pi}} e^{-\frac{(\ln t- \mu)^2}{2 {\sigma}^2}}
\label{Lognormal:pdf}
\end{equation}
\text{Log-normal \textit{cdf}:}
\begin{equation}
F(t; \mu, \sigma)= \frac{1}{2}+\frac{1}{2} \text{erf}[\frac{\ln t-\mu}{\sqrt{2}\sigma}]
\label{Lognormal:cdf}
\end{equation}
\text{Weibull \textit{pdf}:}
\begin{equation}
f(t; \lambda, \beta)=\frac{\beta}{\lambda} (\frac{t}{\lambda})^{\beta-1} e^{-{(\frac{t}{\lambda})}^{\beta}}
\label{Weibull:pdf}
\end{equation}
\text{Weibull \textit{cdf}:}
\begin{equation}
F(t; \lambda, \beta)=1- e^{-(\frac{t}{\lambda})^{\beta}}
\label{Weibull:cdf}
\end{equation}
where $\mu$ and $\sigma$ are the scale and shape parameters of the log-normal distribution, and $\beta$ and $\lambda$ are the shape and scale parameters of the Weibull distribution. The symbol erf designates the error function, $\text{erf}(x)=\frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}dt$. \emph{F}(\emph{t}) is the cumulative failure rate at time \emph{t} (t$\ge$0).
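The four expressions above translate directly into code. The following Python sketch (standard library only, no fitting logic; the function names are ours) implements the log-normal and Weibull \textit{pdf} and \textit{cdf} exactly as defined above:

```python
import math

# Log-normal pdf and cdf, as defined above
def lognormal_pdf(t, mu, sigma):
    return math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)) \
           / (t * sigma * math.sqrt(2 * math.pi))

def lognormal_cdf(t, mu, sigma):
    return 0.5 + 0.5 * math.erf((math.log(t) - mu) / (math.sqrt(2) * sigma))

# 2-parameter Weibull pdf and cdf, as defined above
def weibull_pdf(t, lam, beta):
    return (beta / lam) * (t / lam) ** (beta - 1) * math.exp(-(t / lam) ** beta)

def weibull_cdf(t, lam, beta):
    return 1.0 - math.exp(-(t / lam) ** beta)
```

A quick consistency check: for any shape $\beta$, the Weibull \textit{cdf} evaluates to $1-e^{-1}\approx 0.632$ at $t=\lambda$, and the log-normal \textit{cdf} equals $1/2$ at $t=e^{\mu}$.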
\subsubsection*{Probability plots of log-normal and Weibull}
The probability plot for a distribution is constructed by rescaling the axes so as to linearize the cumulative distribution function (\emph{cdf}) of that distribution. For example, after rescaling both the X and Y axes, log-normally distributed data fall on a straight line in the log-normal probability plot; likewise, Weibull-distributed data fall on a straight line in the Weibull probability plot.
The X scale type and Y scale type for the log-normal and Weibull probability plots are given by:
\begin{table}[!ht]
\begin{tabular}{lll}
Distribution & X scale type & Y scale type \\ \hline
Log-normal & Ln & Probability \\ \hline
Weibull & Log10 & Double Log Reciprocal
\end{tabular}
\caption{Axis scale types for the log-normal and Weibull probability plots.}
\label{probabilityplots}
\end{table}
\noindent where \emph{Probability} scaling is given by the inverse of a cumulative Gaussian distribution: $X^{-1}=\Phi^{-1}(X/100)$. The quantity $\Phi$ is the cumulative Gaussian distribution function, $\Phi=\frac{1}{2} [1+\text{erf}(\frac{x-\mu}{\sigma \sqrt{2}})]$. \emph{Double log reciprocal} scaling is given by $X^{-1}=\ln(-\ln(1-X))$.
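These two rescalings can be sketched as a pair of coordinate transforms. The helper below (our own naming, not part of any plotting package) maps a data point $(t, F)$ to probability-plot coordinates, using Python's standard-library inverse normal \textit{cdf} for the \emph{Probability} scale; here $F$ is a fraction in $(0,1)$, so percentages should be divided by 100 first:

```python
import math
from statistics import NormalDist

def lognormal_plot_coords(t, F):
    # X: natural-log scale; Y: "Probability" scale = inverse standard-normal cdf
    return math.log(t), NormalDist().inv_cdf(F)

def weibull_plot_coords(t, F):
    # X: log10 scale; Y: "double log reciprocal" scale, ln(-ln(1 - F))
    return math.log10(t), math.log(-math.log(1.0 - F))
```

Points drawn from an exact Weibull \textit{cdf} become collinear under `weibull_plot_coords`, which is precisely what the dashed best-fit lines in the probability plots exploit.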
An example of the log-normal probability plot of time-to-breakdown ($t_\text{BD}$) is shown in figure \ref{fig:5}. An example of the Weibull probability plot for the same data set is shown in figure \ref{fig:S4}. Comparing the probability plots for the log-normal and Weibull distributions, one can conclude that the time-to-breakdown ($t_\text{BD}$) fits a log-normal distribution better.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{FigureS4.pdf}
\caption{Weibull probability plot of time-to-breakdown ($t_\text{BD}$) for the same data set presented in figure~\ref{fig:5}. (a) Cumulative distribution of $t_\text{BD}$ presented with a Weibull rescaling under following conditions: $V_\text{pulse}=$15\,V, $t_\text{pulse}=$100\,ms, membrane thickness $l = $12--14\,nm. The average nanopore fabrication time is $\langle t_\text{BD} \rangle$=20.9$\pm$1.4\,ms. (b)--(d) Cumulative distributions of $t_\text{BD}$ with $V_\text{pulse}=$16, 17, 18\,V respectively. The dashed lines give the best fit to a Weibull distribution. All experiments are performed with the same tip on one membrane. Tip radius of curvature: $\sim$10\,nm.}
\label{fig:S4}
\end{figure}
\newpage
\subsection{S5-Nanopore fabrication time comparison}
The following table compares dielectric breakdown based nanopore fabrication approaches in greater detail, including average pore fabrication time, membrane thickness, breakdown voltage, min/max fabrication time and number of nanopores analyzed.
\begin{table}[!ht]
\small\addtolength{\tabcolsep}{-4.5pt}
\begin{tabular}{ccccccc}
\hline
Methods & \begin{tabular}[c]{@{}c@{}}Average pore\\ fabrication time\end{tabular} & \begin{tabular}[c]{@{}c@{}}Membrane\\ thickness\end{tabular} & \begin{tabular}[c]{@{}c@{}}Breakdown\\ voltage\end{tabular} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Min/Max\\ fabrication time\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Number of \\ nanopores analyzed\end{tabular} \\ \hline
{\color[HTML]{000000} This work} & {\color[HTML]{000000} 20\,ms} & {\color[HTML]{000000} \begin{tabular}[c]{@{}c@{}}10\,nm, 12-14\,nm, \\ 20\,nm\end{tabular}} & {\color[HTML]{000000} 13-25\,V} & {\color[HTML]{000000} 1\,ms} & {\color[HTML]{000000} 85\,ms} & {\color[HTML]{000000} $\sim$400} \\ \hline
CBD \cite{kwok2014nanopore, briggs2015kinetics} & NA & 10\,nm, 30\,nm & 5-17\,V & 4\,s & 10$^{5}$\,s & $\sim$50 \\ \hline
Micro pipette \cite{arcadia2017situ} & 8.9\,s & 10\,nm & up to 24\,V & 1\,s & 17\,s & 169 \\ \hline
Two-step BD \cite{yanagi2018two} & 265.5\,s & 20\,nm & 10, 20\,V & 150\,s & 350\,s & 50 \\ \hline
\begin{tabular}[c]{@{}c@{}}Multilevel pulse\\ injection \cite{yanagi2014fabricating} \end{tabular} & $\sim$1\,s & 10\,nm & 2.5, 7\,V & 0.1\,s & 20\,s & 40 \\ \hline
\begin{tabular}[c]{@{}c@{}}Optically \\ controlled BD \cite{pud2015self} \end{tabular} & NA & 20\,nm & 6\,V & 30\,s & 300\,s & NA \\ \hline
\begin{tabular}[c]{@{}c@{}}Laser-assisted \\ controlled BD \cite{ying2018formation} \end{tabular} & $\sim$35\,s & 30\,nm & 18\,V & 10\,s & 80\,s & 33 \\ \hline
\begin{tabular}[c]{@{}c@{}}Photothermally\\ assisted BD \cite{yamazaki2018photothermally} \end{tabular} & 165\,s & 75\,nm & 1\,V & NA & NA & 29 \\ \hline
\end{tabular}
\caption{Comparison of TCLB nanopore fabrication with classical dielectric breakdown approaches.}
\label{table:S1}
\end{table}
\newpage
\subsection{S6-Nanopore PSD}
The figure below shows the current power spectral density (PSD) plot of the nanopore presented in figure \ref{fig:7}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\linewidth]{FigureS5.pdf}
\caption{Current power spectral density (PSD) of the nanopore presented in figure \ref{fig:7}. The PSD was obtained from a 10\,s ionic current trace, low-pass filtered at 100\,kHz and sampled at 250\,kHz with 200\,mV applied voltage.}
\label{fig:S5}
\end{figure}
\end{suppinfo}
\newpage
\section{Introduction}
The main goal of existing wireless networks is to provide the highest possible spectral efficiency and the best possible data rate for human users. However, machine-type communications (MTC) will become a major challenge for next-generation wireless networks. Traffic patterns for MTC differ completely from human-generated traffic and can be characterized by the following features: (a) a huge number of autonomous devices connected to one access point, (b) low energy consumption as a vital requirement, (c) short data packets and (d) low traffic intensity generated by a single device. 3GPP has proposed multiple candidate solutions for massive MTC (mMTC). The main candidates are multi-user shared access (MUSA, \cite{yuan2016non}), sparse coded multiple access (SCMA, \cite{nikopour2013sparse}) and resource shared multiple access (RSMA, \cite{3gpp.R1-164688, 3gpp.R1-164689}), but the lack of implementation details does not allow one to select the most preferable solution. At the same time, we note that none of the 3GPP solutions is based on polar codes \cite{Arikan}, despite the fact that these codes in combination with the Tal--Vardy list decoder \cite{TalVardyList2015} perform extremely well at short code lengths and low code rates. In this paper, we fill this gap.
Polar codes \cite{Arikan} are the first class of error-correcting codes proved to achieve the capacity of any binary memoryless symmetric channel with low-complexity encoding and decoding procedures. However, constructing (optimizing) such codes for the finite-blocklength regime turned out to be a challenging problem. This question was addressed in \cite{TalVardyConstr2013, MoriTanaka2009, Trifonov2012}. These methods, as well as the Tal--Vardy list decoder \cite{TalVardyList2015}, significantly improved the practical performance of such codes. As a result, polar codes were selected as the coding scheme for the control channel of the enhanced mobile broadband (eMBB) service \cite{3gpp.finrep, 3gpp.R1-1611109}.
In \cite{Telatar2user, Onay2013} polar codes were proved to achieve the full admissible capacity region of the two-user binary input MAC. In \cite{TelatarMuser} the results were generalized for the $K$-user case. At the same time, there are no efficient decoding and optimization methods for the case of finite blocklength. In this paper, we address this question and investigate the practical performance of polar codes in $K$-user MAC.
Our contribution is as follows. We compare two possible decoding techniques: joint successive cancellation algorithm and joint iterative algorithm. In order to optimize the codes (choose frozen bits) we propose a special and efficient design algorithm. We investigate the performance of the resulting scheme in the Gaussian multiple access channel (GMAC) by means of simulations. The scheme is shown to outperform LDPC based solution by approximately $1$~dB and to be close to the achievability bound for GMAC.
\section{Preliminaries}
\subsection{Polar Codes}
Let us consider the Arikan's kernel
\[
G_2 \triangleq \begin{bmatrix} 1 & 0 \\
1 & 1
\end{bmatrix},
\]
then the \textit{polar transform} of size $N = 2^n$ is defined as follows
\[
G_N \triangleq B_N G_2^{\otimes n},
\]
where $\otimes$ is the Kronecker power and $B_N$ is called a \textit{shuffle reverse} operator (see \cite{Arikan}).
In order to construct an $(N, k)$ polar coset code, let us denote the set of frozen positions by $\mathcal{F}$, $|\mathcal{F}| = N - k$. By $\mathbf{u}_\mathcal{F}$ we denote the projection of the vector $\mathbf{u}$ onto the positions in $\mathcal{F}$. Now we can define a \textit{polar coset code} $\mathcal{C}$ as follows
\[
\mathcal{C}(N, k, \mathcal{F}, \mathbf{f}) = \left\{ \mathbf{c} = \mathbf{u} G_N \:\: | \:\: \mathbf{u} \in \{0,1\}^N, \:\: \mathbf{u}_\mathcal{F} = \mathbf{f} \right\}.
\]
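As an illustration (not taken from any reference implementation), the polar transform $G_N = B_N G_2^{\otimes n}$ and the coset-code encoding $\mathbf{c} = \mathbf{u} G_N$ can be sketched in a few lines of Python/NumPy, with the shuffle-reverse operator $B_N$ realized as a bit-reversal row permutation:

```python
import numpy as np

def polar_transform_matrix(n):
    """G_N = B_N G_2^{(kron) n} for N = 2^n, with B_N the bit-reversal permutation."""
    G2 = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, G2)          # Kronecker power G_2^{(kron) n}
    N = 1 << n
    # shuffle-reverse B_N: row i of G_N is row bitreverse(i) of G_2^{(kron) n}
    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return G[perm, :]

def encode(u, G):
    """Coset-code encoding c = u G_N over GF(2); frozen entries of u are set beforehand."""
    return (np.asarray(u, dtype=int) @ G) % 2
```

For $n=2$ this reproduces the familiar $4\times 4$ Arikan matrix, since $B_4$ swaps the two middle rows of $G_2^{\otimes 2}$.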
\subsection{System model}
Let us describe the system model. There are $K$ active users in the system. Communication proceeds in a frame-synchronized fashion. The length of each frame is $N$ and coincides with the codeword length. Each user has $k$ bits to transmit during a frame. All users have equal powers and code rates.
Let us describe the channel model
\begin{equation*}
\mathbf{y} = \sum_{i=1}^{K} \mathbf{x}_i + \mathbf{z},
\end{equation*}
where $\mathbf{x}_i \in \mathbb{R}^n$ is a codeword transmitted by the $i$-th user and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is an additive white Gaussian noise (AWGN).
We note that non-asymptotic achievability and converse bounds for this channel were derived in \cite{polyanskiy2017perspective}. These bounds were proved for the case of a common codebook and decoding up to a permutation, but they can easily be adapted to the different-codebook case. In what follows we compare the performance of our codes to these bounds.
In our system the users utilize \textit{different} polar coset codes $\mathcal{C}_i(N, k, \mathcal{F}_i, \mathbf{f}_i)$, $i=1,\ldots, K$. Let us consider the \mbox{$i$-th} user. In order to send the information word $\mathbf{u}_i$, the user first encodes it with the code $\mathcal{C}_i(N, k, \mathcal{F}_i, \mathbf{f}_i)$ and obtains a codeword $\mathbf{c}_i$. Then the user performs BPSK modulation, or equivalently
\[
\mathbf{x}_i = \tau(\mathbf{c}_i), \quad \tau(\mathbf{c}_i) = (\tau(c_{i,1}), \ldots, \tau(c_{i,N})),
\]
where $\tau:\{0, 1\} \rightarrow \{\sqrt{P}, -\sqrt{P}\}$.
The probability of error (per user) is defined as follows
\begin{equation}
\label{eq:p_e}
P_e = \frac{1}{K} \sum\limits_{i=1}^{K} \Pr(\mathbf{u}_i \ne \hat{\mathbf{u}}_i),
\end{equation}
where $\hat{\mathbf{u}}_i$ is the estimate of $\mathbf{u}_i$ provided by the decoder.
As energy efficiency is of critical importance for the mMTC scenario, we focus on minimizing the required energy per bit ($E_b/N_0$). Recall that it is calculated as follows
\[
E_b/N_0 = \frac{N P}{2 k}.
\]
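A minimal simulation of this system model might look as follows. The parameter values ($K$, $N$, $k$, $P$) and the random placeholder codewords are chosen only for illustration; in the actual scheme the codewords come from the polar coset codes above:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, k, P = 2, 512, 128, 0.5           # illustrative values only

def tau(c, P):
    # BPSK map: bit 0 -> +sqrt(P), bit 1 -> -sqrt(P)
    return np.sqrt(P) * (1.0 - 2.0 * np.asarray(c, dtype=float))

# each user sends a codeword; the channel adds them plus unit-variance AWGN
codewords = [rng.integers(0, 2, N) for _ in range(K)]
y = sum(tau(c, P) for c in codewords) + rng.standard_normal(N)

EbN0 = N * P / (2 * k)                   # energy per bit, per the formula above
EbN0_dB = 10 * np.log10(EbN0)
```

With these illustrative numbers, $E_b/N_0 = 512 \cdot 0.5 / 256 = 1$, i.e.\ $0$\,dB.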
\section{Decoding algorithms}
\subsection{Joint Successive Cancellation Decoding}
Let us first explain the main idea using a toy example with $N=2$ (see \Fig{fig:PolarRepr}). We see that instead of working with the bits of different users and several polar codes, we can work with a single polar code over $\mathbb{Z}_2^K$. In our example we first decode the bit configuration (tuple) $(u_1, v_1)$, the first bits of the users, and then the tuple $(u_2, v_2)$, the second bits of the users. We assume the decoder works with tuple distributions rather than with probabilities of single bits.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{polar_jsc.eps}
\caption{Representation as a polar code over $\mathbb{Z}_2^K$ for $K=2$.}
\label{fig:PolarRepr}
\end{figure}
The input of the decoder is the vector $\mathbf{P} = ({\mu}_1,\ldots,{\mu}_N)$ of length $N$ consisting of a priori probability mass functions (pmf) ${\mu}_i \in [0, 1]^{2^K}$, $i = 1, \ldots, N$. Let us show how to initialize the $k$-th pmf. Recall that the channel output is a vector $\mathbf{y}$; consider its $k$-th component $y_k$ and let $g = (b_1, \ldots, b_K) \in \mathbb{Z}_2^K$.
\begin{flalign}
&{\mu}_k(g) = \Pr[g = (b_1, \ldots, b_K) | y_k] \nonumber \\
&\propto \exp \left\{-\frac{(y_k - \sum_{i=1}^K \tau(b_i))^2}{2\sigma^2}\right\}, \label{eq:initialization}
\end{flalign}
where the noise variance is $\sigma^2 = 1$ in our case.
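A direct implementation of the initialization \eqref{eq:initialization} is straightforward. The sketch below is our own helper (not the authors' code); it enumerates the tuples $g$ in lexicographic order and returns the normalized pmf over $\mathbb{Z}_2^K$ for one channel output sample:

```python
import itertools, math

def init_pmf(y_k, P, K, sigma2=1.0):
    """Normalized a priori pmf over Z_2^K for one channel output sample y_k."""
    tau = lambda b: math.sqrt(P) if b == 0 else -math.sqrt(P)
    # tuples g = (b_1, ..., b_K) in lexicographic order
    w = [math.exp(-(y_k - sum(tau(b) for b in g)) ** 2 / (2.0 * sigma2))
         for g in itertools.product((0, 1), repeat=K)]
    s = sum(w)
    return [x / s for x in w]
```

For $K=2$ and $P=1$, an output $y_k$ near $+2$ concentrates the pmf on the tuple $(0,0)$, while the tuples $(0,1)$ and $(1,0)$ always receive equal mass, since both map to a zero sum.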
Let us first consider the decoding of the basic block shown in \Fig{fig:PolarRepr}. Assume that we are given two a priori pmfs $\mathbf{\mu}_1$ and $\mathbf{\mu}_2$. Let us describe the operations.
We start with decoding the tuple corresponding to the first bits of the users ($(u_1, v_1)$ in our example). To do this, we need to calculate the distribution of the sum of two random variables over $\mathbb{Z}_2^K$. In what follows we refer to this operation as the \textit{check-node operation (cnop)}. Clearly, this can be done by means of a convolution, i.e., $\hat{\mu}_1 = \mu_1 \ast \mu_2$. Since we are working in the abelian group $\mathbb{Z}_2^K$, a Fourier transform (FT) $\mathcal F$ exists. In what follows, in order to perform the convolution we use the FFT-based technique proposed in \cite{Declercq} for LDPC codes over Abelian groups. Thus, the final rule is as follows
\begin{equation*}
\hat{\mu}_1 \propto \mathcal F^{-1}\left(\mathcal F(\mu_1) \odot \mathcal F(\mu_2)\right),
\end{equation*}
where $\odot$ denotes the element-wise multiplication.
After we have calculated the pmf $\hat{\mu}_1$, we can make a hard decision $\hat{g}_1$, taking into account the values of the frozen bits in this position.
After $\hat{g}_1$ is found we proceed with \textit{variable-node operation (vnop)}. The rule is as follows
\begin{equation*}
\hat{\mu}_2(g) \propto \mu_1(g + \hat{g}_1) \mu_2(g)\ \ \ \forall g\in \mathbb Z_2^K.
\end{equation*}
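Since the Fourier transform over $\mathbb{Z}_2^K$ is the Walsh--Hadamard transform, the $\mathop{cnop}$ and $\mathop{vnop}$ rules above can be sketched as follows. This is an illustrative implementation under the convention that group addition on $\mathbb{Z}_2^K$ acts as XOR on lexicographic tuple indices; it is not the authors' code:

```python
import numpy as np

def wht(v):
    """Fast Walsh-Hadamard transform: the Fourier transform over Z_2^K."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v

def cnop(mu1, mu2):
    """Check-node op: convolution over Z_2^K = elementwise product in WHT domain."""
    out = wht(wht(mu1) * wht(mu2)) / len(np.asarray(mu1))
    out = np.clip(out, 0.0, None)        # guard against tiny negative float error
    return out / out.sum()

def vnop(mu1, mu2, g_hat):
    """Variable-node op: mu2_hat(g) = mu1(g + g_hat) * mu2(g), normalized."""
    mu1, mu2 = np.asarray(mu1, dtype=float), np.asarray(mu2, dtype=float)
    out = np.array([mu1[g ^ g_hat] * mu2[g] for g in range(len(mu1))])
    return out / out.sum()
```

As a sanity check, convolving two point masses at tuples $g$ and $g'$ yields a point mass at $g + g'$ (XOR of indices), and convolving any pmf with the uniform pmf yields the uniform pmf.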
The final joint successive cancellation (JSC) decoding algorithm utilizes the $cnop$ and $vnop$ functions in a recursive manner. See Algorithm~\ref{alg:jsc} for a full description.
\begin{algorithm}
\caption{Joint Successive Cancellation Decoding (JSC)}
\label{alg:jsc}
\begin{algorithmic}[1]
\INPUT{$N$ -- code length, $K$ -- number of users, $\mathbf{F} \in \{0, 1, \text{inf}\}^{K \times N}$ -- matrix of frozen bits, $\mathbf{y} \in \mathbb{R}^N$ -- received signal.}
\State Initialize $\mathbf{P} = ({\mu}_1,\ldots,{\mu}_N)$ according to \eqref{eq:initialization}.
\Function{PolarDecode}{$\mathbf{P}$, $\mathbf{F}$}
\If{$ \mathop{len}(P) = 1$}
\State $u, x = \mathop{decision}(\mathbf{P}, \mathbf{F})$
\Comment{Make decision based on probabilities and the matrix of frozen bits}
\Else
\State $\mathbf{P}_o = ({\mu}_1, {\mu}_3, \ldots)$, $\mathbf{P}_e = ({\mu}_2, {\mu}_4, \ldots)$
\State $\mathbf{P}_{1} = \mathop{cnop}(\mathbf{P}_e, \mathbf{P}_o)$
\State $\mathbf{u}_1, \mathbf{x}_1 = \mathop{PolarDecode}(\mathbf{P}_{1}, \mathbf{F})$
\State $\mathbf{P}_{2} = \mathop{vnop}(\mathop{cnop}(\mathbf{x}_1, \mathbf{P}_o), \mathbf{P}_e)$
\State $\mathbf{u}_2, \mathbf{x}_2 = \mathop{PolarDecode}(\mathbf{P}_{2}, \mathbf{F})$
\State $\mathbf{u} = \mathop{concat}(\mathbf{u}_1, \mathbf{u}_2)$
\State $\mathbf{x} = \mathop{merge}(\mathbf{x}_1, \mathbf{x}_2)$
\EndIf
\EndFunction
\OUTPUT{$\mathbf{u}$, $\mathbf{x}$}
\end{algorithmic}
\end{algorithm}
\begin{remark}
It is worth noting that we can easily improve the decoding procedure by using list decoding method \cite{TalVardyList2015}. It means we will take into account not only the most probable path (the path consists of tuples in our case) but $L$ different paths with the highest metric.
\end{remark}
\subsection{Iterative Decoding}
Now let us describe the iterative decoding algorithm. The aim of this decoder is to update the log-likelihood ratios (LLRs) for every bit transmitted by every user. During an iteration the algorithm selects the next user from the list in a round-robin manner, fixes the remaining LLRs and updates the LLR vector only for the user under consideration. Every iteration consists of a message passing algorithm on the graph shown in Figure~\ref{fig:IterScheme}. This graph has the following nodes: a) the polar list decoder node, which uses the LLR values as inputs and generates $L$ candidate codewords as the output, b) LLR evaluation nodes (circles), which perform the per-bit LLR evaluation given the input candidate codeword list~\citep{Pyndiah}, and c) the functional nodes (triangles), which perform the per-user LLR update given the LLR vectors for every user and the received signal vector $\mathbf{y}$.
Since polar list decoding is a well-known procedure~\citep{TalVardyList2015}, as is the method of constructing the LLR vector from the candidate codeword list~\citep{Pyndiah}, we only need to describe in detail the message passing procedure to and from the functional nodes.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{polar_iterative.eps}
\caption{Iterative decoding scheme for $K=2$.}
\label{fig:IterScheme}
\end{figure}
\begin{algorithm}
\caption{Iterative Decoding}
\label{alg:polar_iterative}
\begin{algorithmic}[1]
\INPUT{$N$ - code length, $K$ - number of users, $\mathbf{F}\in \{0, 1, inf\}^{K\times N}$ - matrix of frozen bits, $\mathbf{y}\in \mathbb{R}^N$ - received signal.}
\State initialize the LLR values of variable nodes for each user code with zero values assuming equal probability for $\sqrt{P}$ and $-\sqrt{P}$ values
\For {$i = 1, \ldots, K\times I$} \Comment perform $I$ iterations
\State {$u = \mathrm{mod}(i, K)$} \Comment round robin user selection
\State Update the LLR vector for user $u$ assuming all other users' LLRs are fixed, eq.~\eqref{eq:update_func_nodes} \Comment from functional nodes to polar decoder
\State Perform single user list polar decoding~\citep{TalVardyList2015} given the input LLR vector \Comment corresponds to orange arrow on Figure~\ref{fig:IterScheme}
\State Derive output LLR vector given the decoded candidate list \Comment corresponds to magenta arrow on Figure~\ref{fig:IterScheme}
\EndFor
\State Make decisions given the output LLR vector for every user
\OUTPUT{u, x}
\end{algorithmic}
\end{algorithm}
Every functional node corresponds to a single channel use. As mentioned above, every user's LLR vector is updated under fixed LLR vectors for all other users. Consider an arbitrary functional node (its index is omitted for brevity) and the first user. The goal of the functional node is to marginalize out the uncertainty about the signals transmitted by users $j=2,\ldots,K$
\begin{equation}
L(x_1) = \log \left(\frac{
\sum\limits_{x_1 = +\sqrt{P}, x_2, \ldots x_K} p\left(y \bigg| \sum\limits_{j=1}^{K} x_j\right)\prod\limits_{j=2}^{K} \Pr(x_j)
}
{
\sum\limits_{x_1 = -\sqrt{P}, x_2, \ldots x_K} p\left(y \bigg| \sum\limits_{j=1}^{K} x_j\right)\prod\limits_{j=2}^{K} \Pr(x_j)
}
\right),
\label{eq:update_func_nodes}
\end{equation}
where the numerator corresponds to the total probability that user $1$ transmitted the signal $x_1=+\sqrt{P}$ and the denominator to the probability that $x_1=-\sqrt{P}$ was transmitted (the subscript is the user index), and $L(x_1)$ is the output LLR for the first user. The density $p(y|a) = \frac{1}{\sqrt{\pi}}\exp{\left(-(y-a)^2\right)}$ corresponds to the AWGN channel assumption. The full algorithm is presented in~Algorithm~\ref{alg:polar_iterative}.
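The marginalization \eqref{eq:update_func_nodes} can be sketched as follows for BPSK inputs. This is our own illustrative helper: \texttt{priors} holds $\Pr(x_j = +\sqrt{P})$ for the interfering users $j=2,\ldots,K$, and the unnormalized density $e^{-(y-a)^2}$ from the text is used (constant factors cancel in the ratio):

```python
import itertools, math

def functional_node_llr(y, P, priors):
    """Output LLR L(x_1) for one channel use, marginalizing over users 2..K."""
    def total(x1):
        s = 0.0
        # enumerate all sign patterns of the interfering users
        for signs in itertools.product((+1, -1), repeat=len(priors)):
            a = x1 + sum(sg * math.sqrt(P) for sg in signs)
            pr = 1.0
            for sg, q in zip(signs, priors):
                pr *= q if sg > 0 else (1.0 - q)
            s += math.exp(-(y - a) ** 2) * pr
        return s
    return math.log(total(+math.sqrt(P)) / total(-math.sqrt(P)))
```

With no interferers the expression collapses to $L(x_1) = 4 y \sqrt{P}$, and with symmetric (uniform) priors the LLR vanishes at $y=0$, as expected.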
\section{Design of Polar Codes for GMAC}
In this section, we propose a method to optimize polar codes for use in the $K$-user GMAC. First, let us mention that the GMAC is not a symmetric channel. To see this, consider $K=2$ and the noiseless case. The tuples $(0,1)$ and $(1,0)$ both lead to the channel output $0$, and it is not possible to distinguish between these two hypotheses given $y = 0$. At the same time, $(0,0)$ and $(1,1)$ lead to $y = 2$ and $y = -2$, respectively, and the decoder can easily find the transmitted tuple. Thus, the zero-codeword assumption (so popular in the single-user case) does not work in our case. In order to construct the codes we apply the approach of \cite{SGMAC} and ``symmetrize'' the channel (see \Fig{fig:SGMAC}). The main idea is to add and then subtract (during the demodulation process (\ref{eq:initialization})) a random element $\mathbf{h}$ distributed uniformly on $\mathbb{Z}_2^K$ (a different $\mathbf{h}$ for each channel use).
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{SGMAC.eps}
\caption{Equivalent symmetric channel}
\label{fig:SGMAC}
\end{figure}
It is easy to see that the resulting channel (see \Fig{fig:SGMAC}) is symmetric. In what follows we refer to it as sGMAC and construct the codes for it. For this channel, we can use a zero codeword assumption.
Initially, we intended to derive density evolution rules for our case, in close analogy to density evolution for non-binary LDPC codes \cite{NBDE_LDPC}. To be precise, we mean a Gaussian approximation, i.e., the pdfs of the messages are approximated by multidimensional (of dimension $2^K$) Gaussian mixtures. However, we found that this procedure requires much more computational resources than a simple Monte-Carlo simulation to determine the good subchannels. The problem lies in the $\mathop{cnop}$ operation: it is rather involved and requires sampling and fitting operations.
Finally, we found that the major problem for the decoder is non-unique decoding rather than the noise, and we propose a construction method for the noiseless adder MAC which also works well for the sGMAC. Let us briefly describe the method. We suppose that zero tuples are transmitted through the symmetric noiseless MAC (see \Fig{fig:SGMAC}). First, we need to calculate the initial pmf that goes to the decoder. It can be done as follows
\[
{\mu}_0(\mathbf{x}) = \frac{1}{2^K}\sum\limits_{\mathbf{h}: \:\: \weight{\mathbf{h} + \mathbf{x}} = \weight{\mathbf{h}}} \frac{1}{\binom{K}{\weight{\mathbf{h}}}},
\]
by $\weight{\cdot}$ we mean the Hamming weight, i.e. a number of non-zero elements in a vector.
\begin{example}
Let $K = 2$ and assume, that the users send a tuple $(0, 0)$. Then consider $4$ cases of $\mathbf{h}$ and calculate $\mu_0$ for this case:
\begin{enumerate}
\item $\mathbf{h} = (0,0), \quad \mu = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$;
\item $\mathbf{h} = (0,1), \quad \mu = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \end{bmatrix}$;
\item $\mathbf{h} = (1,0), \quad \mu = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \end{bmatrix}$;
\item $\mathbf{h} = (1,1), \quad \mu = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$;
\end{enumerate}
Thus, the resulting initial distribution (averaged over $\mathbf{h}$) is $\mu_0 = \begin{bmatrix} 3/4 & 0 & 0 & 1/4 \end{bmatrix}$. The elements of $\mu_0$ are indexed by the tuples in lexicographic order.
\end{example}
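The formula for $\mu_0$ and the worked example above can be checked numerically. The short sketch below (our own helper) evaluates the sum over all $2^K$ values of $\mathbf{h}$ and reproduces $\mu_0 = \begin{bmatrix} 3/4 & 0 & 0 & 1/4 \end{bmatrix}$ for $K=2$:

```python
import itertools, math

def initial_pmf(K):
    """mu_0 for the noiseless adder MAC, elements indexed lexicographically."""
    tuples = list(itertools.product((0, 1), repeat=K))
    wt = lambda v: sum(v)                     # Hamming weight
    mu = []
    for x in tuples:
        s = 0.0
        for h in tuples:
            hx = tuple(a ^ b for a, b in zip(h, x))   # h + x over Z_2^K
            if wt(hx) == wt(h):
                s += 1.0 / math.comb(K, wt(h))
        mu.append(s / 2 ** K)
    return mu
```

The tuples $(0,1)$ and $(1,0)$ indeed receive zero mass: flipping exactly one bit always changes the Hamming weight of $\mathbf{h}$, so the condition $\weight{\mathbf{h}+\mathbf{x}} = \weight{\mathbf{h}}$ is never met for them.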
At each step $\nu = 0, \ldots, n-1$, where $n = \log_2 N$, we construct two new pmfs, $\mu_{\nu+1}^-$ and $\mu_{\nu+1}^+$:
$$
\mu_{\nu+1}^- = \mathop{cnop} (\mu_{\nu}, \mu_{\nu}),\quad \mu_{\nu+1}^+ = \mathop{vnop} (\mu_{\nu}, \mu_{\nu}),
$$
where $\mu_{\nu+1}^{(2i-1)} = \mu_{\nu+1}^{-,(i)}$, $\mu_{\nu+1}^{(2i)} = \mu_{\nu+1}^{+, (i)}$.
To choose the subchannels we compare the values $\mu_{n}(\mathbf{0})$ obtained at the $n$-th step.
\section{Numerical Results and Experiments}
We conducted a series of experiments with the proposed algorithms and compared the results with the GMAC random coding bound \cite{polyanskiy2017perspective} and with the PEXIT optimized LDPC code ($15$ inner and $15$ outer iterations) proposed in \cite{10.1007/978-3-030-01168-0_15}.
Let us describe how we constructed the polar codes for our experiments. In order to choose the frozen positions we utilized the proposed design procedure. We selected a common set of frozen tuples for all users, while the values of the frozen bits were selected at random for different users, because identical frozen values lead to poor performance.
The first experiment was conducted for $K=2$ users. The probability of decoding error~\eqref{eq:p_e} performance is shown in~\Fig{fig:comparison_2users}. Both JSC and iterative decoding schemes were tested with list sizes $L=8, 16, 32$. We used $15$ decoding iterations for the iterative scheme. One can observe that the JSC scheme outperforms the iterative one, and that the LLR estimation procedure does not gain significantly from an increased list size. As the list size grows, JSC also shows a larger performance gain in comparison with the iterative scheme. We note that for the JSC algorithm we plotted the probability that the correct word belongs to the output list. The choice of the codeword can be made by means of a cyclic redundancy check (CRC), and we expect a $3$--$5$ bit CRC to be enough. Another interesting approach is dynamically frozen or parity-check frozen bits. We also note that we do not need a CRC for the iterative decoder, as the list is used only for LLR calculation.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{comparison_2users.eps}
\caption{Probability of decoding error for $K=2$ users with code parameters $N=512, k=128$ with different list size $L$.}
\label{fig:comparison_2users}
\end{figure}
In the second experiment we ran the same schemes for $K=4$ users. We found the performance of iterative decoding to be quite poor, so we used only the JSC method with the same list sizes as in the $K=2$ case. The results of this experiment are presented in \Fig{fig:comparison_4users}. One can easily see that the JSC algorithm substantially improves decoding efficiency in both setups. In both cases JSC achieves a $10^{-3}$ probability of error at an energy-per-bit at least $1$\,dB lower than the PEXIT optimized LDPC code, and our best-performing solution is less than $0.8$\,dB away from the random coding bound at the $10^{-3}$ probability-of-error level. The list size also affects the performance: in the case of JSC, we see a significant performance gain when increasing the list size.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{comparison_4users.eps}
\caption{Probability of decoding error for $K=4$ users with code parameters $N=512, k=128$ with different list size $L$.}
\label{fig:comparison_4users}
\end{figure}
Another important practical point is that JSC has no tunable parameters, unlike the iterative decoding Algorithm~\ref{alg:polar_iterative} (see \cite{Pyndiah}). We also performed a search for the best parameters when running the iterative decoding algorithm.
\section{Conclusions and Future Work}
\vskip -0.028cm
In this paper, NOMA schemes based on polar codes are discussed. We proposed two different decoding algorithms and derived a code design procedure that optimizes polar codes for the $K$-user GMAC. We then compared our schemes with an existing NOMA technique based on PEXIT optimized LDPC codes. We conclude that the JSC decoding algorithm for the designed polar codes outperforms iterative decoding for both the considered LDPC and polar coding schemes and comes within $0.8$\,dB of the random coding bound~\citep{polyanskiy2017perspective}. We considered a single-antenna AWGN model in this work and leave MIMO and fading channels for future research.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
A number of wireless applications exist involving echo-assisted communication, wherein messages transmitted by the source arrive at the destination as multiple noisy copies. Typical examples include communication over frequency-selective channels \cite{HSY}, relay networks \cite{NTW}, and multiple receive antennas \cite{mimo_ref}. In such scenarios, it is well known that suitably combining these copies can increase the effective signal-to-noise ratio, thereby facilitating a higher transmission rate.
In this work, we consider attack models on echo-assisted communication wherein a subset of the copies collected at the destination might have been manipulated by an adversary. Attacks on only a subset of copies are attributed to practical limitations on the adversary to manipulate all the copies. For instance, in the case of frequency-selective channels with delay spreads, the adversary may have processing-delay constraints to manipulate the first copy, but not the subsequent ones \cite{HSY}. We study a specific adversarial attack referred to as the flipping attack \cite{DJLS} wherein the message bits of the attacked copy are flipped at 50\% rate independently. With such attacks, the dilemma at the destination is whether to use the vulnerable copies or discard them when recovering the messages. To gain insights on the attack model, we focus on the case of two received copies, out of which the second copy might have been manipulated by an adversary. Although adversarial models on binary channels have been studied by the information-theory community \cite{DJLS, BudJ}, flipping attacks on echo-assisted communication involving binary input and continuous output have not been studied hitherto. Henceforth, throughout the paper, we refer to the source and the destination as Alice and Bob, respectively.
\begin{center}
\begin{figure}[h]
\includegraphics[scale=0.44]{for_ITW_2019}
\vspace{-0.5cm}
\caption{\label{fig:compound_channel} Compound channel comprising the source, adversarial echo-assisted channel, and the combining strategy, which is aided by the attack-detection block at the destination. In this work, we characterize the mutual information $I(x^{n};y^{n}_{c} ~|~ \hat{A})$ of the compound channel, where $x^{n} \in \{-1, +1\}^{n}$ is the input frame, $y^{n}_{1} \in \mathbb{R}^{n}$ and $y^{n}_{2} \in \mathbb{R}^{n}$ are the two received copies at the destination, $\hat{A}$ is the binary variable which represents the decision of the attack detector, and $y^{n}_{c} \in \mathbb{R}^{n}$ is the output of the combining block. }
\end{figure}
\end{center}
\subsection{Motivation}
\label{subsec:motivation}
Consider an echo-assisted communication setting, as shown in Fig. \ref{fig:compound_channel}, wherein a binary codeword of large block-length is transmitted from Alice to Bob as a sequence of several frames, each of length $n$. Upon transmission of a frame, denoted by $x^{n}$, Bob receives two noisy copies of it, denoted by $y^{n}_{1} \in \mathbb{R}^{n}$ and $y^{n}_{2} \in \mathbb{R}^{n}$, in the presence of additive white Gaussian noise (AWGN). It is well known that appropriately combining these two copies can yield a higher signal-to-noise-ratio at Bob, which in turn assists Alice to transmit at a higher rate than when only one of the copies is used to decode the codeword. The adversarial model in our setting is that the second copy is vulnerable to the flipping attack but not the first one.
Specifically, we consider a non-persistent attack model, wherein the second copy is vulnerable to the flipping attack on 50\% of the frames chosen at random in an i.i.d. fashion.\footnote{Persistent adversarial model, wherein all the frames of the second copy are under attack, is relatively straightforward to handle, as Bob may detect the attack accurately when the block-length of the code is large.} A conservative strategy to handle this adversarial setting is as follows:
\begin{itemize}
\item Bob discards $y^{n}_{2}$ irrespective of the attack, and only uses $y^{n}_{1}$ to recover the message, i.e., $y^{n}_{c} = y^{n}_{1}$ as per Fig. \ref{fig:compound_channel}.
\item Alice uses a codebook designed for Gaussian channels to achieve the rate $I(x; y_{1})$, wherein $y_{1} = \gamma_{1}x + z_{1}$ such that $x \in \{-1, +1\}$, $z_{1} \sim \mathcal{N}(0, \sigma^2)$, and $\gamma_{1}$ is a constant known to both Alice and Bob.
\end{itemize}
\indent In view of the above conservative baseline, we are interested in designing a combining strategy at Bob which can assist Alice in transmitting at a higher rate than $I(x; y_{1})$. Towards achieving a higher rate, it is clear that Bob must first observe $y^{n}_{2}$, detect whether $y^{n}_{2}$ is attacked, and then decide to combine it with $y^{n}_{1}$ to recover the message. Since the frames are under attack in an i.i.d. fashion, Bob has to detect the attack locally by observing the $n$ samples of the frame, and this detection problem can be challenging especially when $n$ is small. Given that a practical detection strategy is typically imperfect, the combining strategy may lead to degraded performance either (i) when the flipping attack on $y^{n}_{2}$ is misdetected, or (ii) when a legitimate frame $y^{n}_{2}$ is categorized as under attack. While the performance of detectors can be evaluated by miss-detection and false-positive rates, these traditional metrics do not capture any rate-loss incurred by the source in aiding the detection strategy. As a result, there is a need to characterize the achievable rates of this adversarial echo-assisted channel in terms of the performance of the underlying attack-detectors.
\begin{table}
\begin{scriptsize}
\caption{Mutual Information Computation of Attack-Detectors in Echo-assisted Communication}
\vspace{-0.2cm}
\begin{center}
\begin{tabular}{|c|c|}
\hline \textbf{Operating Region of the Detector} & \textbf{Mutual Information}\\
& \textbf{Computation}\\
\hline $p_{md} = 0, p_{fa} = 0$ (Genie Detector) & Tractable\\
\hline $p_{md} = 0, p_{fa} = 1$ (Conservative Strategy) & Tractable\\
\hline $0 < p_{md} < 1, 0 < p_{fa} < 1$ & Intractable\\
\hline A special case of the regime & We propose \\
$0 < p_{md} < 1, 0 < p_{fa} < 1$ & an approximation in Theorem 1\\
\hline
\end{tabular}
\end{center}
\label{table:contribution_table}
\end{scriptsize}
\end{table}
\subsection{Contributions}
The main contributions of this work are listed below:
\begin{itemize}
\item On the echo-assisted communication model discussed in Section \ref{subsec:motivation}, we quantify the performance of attack detectors by computing the mutual information of the compound channel, which comprises the source, the adversarial echo-channel, and the combining strategy, which is guided by the attack detector at Bob, as shown in Fig. \ref{fig:compound_channel}. This way, we incorporate the traditional metrics of miss-detection rates, denoted by $p_{md}$, and false-positive rates, denoted by $p_{fa}$, indirectly into the mutual information of the compound channel. Henceforth, the mutual information of the compound channel, as shown in Fig. \ref{fig:compound_channel}, is referred to as the mutual information of the underlying attack-detector.
\item Although the adversarial model has memory, we show that the compound channel involving a Genie detector (which corresponds to $p_{md} = 0$ and $p_{fa} = 0$) is memoryless by virtue of perfect knowledge of the attack event at Bob. As a result, we show that computation of mutual information of Genie detectors is tractable (as listed in Table \ref{table:contribution_table}). However, it is well known that Genie detectors cannot be realized in practice especially when the frame-length is not sufficiently large. We show that the compound channel comprising a practical (imperfect) attack detector, such that $0 < p_{md} < 1, 0 < p_{fa} < 1$, continues to have memory, and this, in addition to the finitary constraint on the input alphabet \cite{HS}, renders mutual information computation intractable. To circumvent this issue, we provide a new framework to approximate the mutual information of imperfect detectors. Specifically, we provide sufficient conditions on (i) the miss-detection and false-positive rates of detectors (as shown in Fig. \ref{fig:motivation}), and (ii) on the channel parameters such that the proposed approximation holds (see Theorem \ref{thm1}).
\item In the last part of this work (see Section \ref{sec:det_techniques}), we propose two attack detectors, namely: (i) a k Nearest Neighbor (KNN) estimator, which measures the mutual information between $y^{n}_{1}$ and $y^{n}_{2}$ to detect the flipping attack, and (ii) a Neural Network (NN) classifier, which uses two hidden layers to solve the detection problem as a supervised classification problem. We present our approximations on the mutual information of these detectors and show that the NN classifier is capable of accurately detecting the attacks on frame-lengths as short as $100$ and $40$ symbols at low signal-to-noise-ratios of $0$ dB and $5$ dB, respectively.
\end{itemize}
\begin{center}
\begin{figure}[h]
\includegraphics[scale=0.46]{Approx_label}
\vspace{-0.3cm}
\caption{\label{fig:motivation}Plot of $\{(p_{md | \bar{x}}, 1- p_{fa | \bar{x}})\}$ of two detectors, where $p_{md | \bar{x}}$ and $p_{fa | \bar{x}}$ denote the miss-detection and false-positive rates conditioned on input codewords $x^{n} = \bar{x}$ for $n = 3$. We propose a framework to approximate the achievable rates of detectors which have $\{(p_{md | \bar{x}}, 1- p_{fa | \bar{x}}) ~|~ \bar{x} \in \{-1, 1\}^{n}\}$ below the line with slope $\frac{\mu}{1 - \mu}$ for some small $0 < \mu < 1$. To exemplify, given a small $\mu > 0$, our framework can approximate the rate of the detector marked with symbol $\times$ in green but not the one with $\circ$ in red.}
\end{figure}
\end{center}
\emph{Notations}: For an $n$-dimensional random vector $y^{n} \in \mathbb{R}^{n}$ with joint probability distribution function $P(y^{n})$, its differential entropy, denoted by $h(y^{n})$, is represented as $-\mathbb{E}[\mbox{log}_{2}(P(y^{n}))]$, where the expectation is over $P(y^{n})$. A Gaussian random variable with zero mean and variance $\sigma^{2}$ is denoted by $\mathcal{N}(0, \sigma^{2})$. An $n \times n$ identity matrix, an $n$-length vector of zeros, and an $n$-length vector of ones are denoted by $\mathbf{I}_{n}$, $\mathbf{0}_{n}$, and $\mathbf{1}_{n}$, respectively. For a given $n$-length vector, denoted by $y^{n}$, the notation $y^{n'}$ for $n' \leq n$, denotes the $n'$-length vector containing the first $n'$ components of $y^{n}$. The notation $\mbox{prob}(\cdot)$ denotes the usual probability operator.
\section{System Model}
\label{sec:system_model}
Alice transmits an $n$-length frame $x^{n} \in \{-1, +1\}^{n}$ such that the components of $x^{n}$ are i.i.d. over the Probability Mass Function (PMF) $\{\alpha, 1 - \alpha\}$ for some $0 < \alpha < 1$. Meanwhile, Bob receives two copies of $x^{n}$ over the Additive White Gaussian Noise (AWGN) channels as
\begin{equation}
\label{eq:signal_model}
y^{n}_{1} = \gamma_{1}x^{n} + z^{n}_{1} \in \mathbb{R}^{n} \mbox{ and } y^{n}_{2} = \gamma_{2}(b^{n} \circ x^{n}) + z^{n}_{2} \in \mathbb{R}^{n},
\end{equation}
where $\gamma_{1} \in \mathbb{R}$ and $\gamma_{2} \in \mathbb{R}$ are non-zero constants known to both Alice and Bob, and $z^{n}_{1}$ and $z^{n}_{2}$ represent the additive white Gaussian noise vectors distributed as $\mathcal{N}(\mathbf{0}_{n}, \sigma^{2}\mathbf{I}_{n})$. We assume that $z^{n}_{1}$ and $z^{n}_{2}$ are statistically independent. Between the two copies, we assume that $y^{n}_{2}$ is vulnerable to the flipping attack, whereas $y^{n}_{1}$ is not. To model the flipping attack on $y^{n}_{2}$, we introduce the Hadamard product, denoted by $\circ$, between $b^{n} \in \{-1, +1\}^{n}$ and $x^{n}$. When the frame is under attack, the components of $b^{n}$ are i.i.d. over the PMF $\{0.5, 0.5\}$, and are unknown to both Alice and Bob. However, without attack, $b^{n} = \mathbf{1}_{n}$. In this adversarial setting, the attacker executes the flipping attack on a frame chosen randomly in an i.i.d. fashion with probability $0.5$. By using $A = 1$ and $A = 0$ to denote the events of attack and no-attack, respectively, we have $\mbox{prob}(A = 0) = \mbox{prob}(A = 1) = 0.5$.
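As an illustrative numerical sketch of the signal model in \eqref{eq:signal_model} (the parameter values $\gamma_{1} = \gamma_{2} = 1$ and $\sigma = 0.5$ below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def transmit_frame(n, gamma1=1.0, gamma2=1.0, sigma=0.5, attack=False, rng=None):
    """Simulate one frame of the echo-assisted channel.

    x^n is i.i.d. uniform over {-1, +1} (alpha = 0.5).  Under attack,
    b^n is i.i.d. uniform over {-1, +1}; otherwise b^n = 1_n.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.choice([-1.0, 1.0], size=n)
    b = rng.choice([-1.0, 1.0], size=n) if attack else np.ones(n)
    z1 = rng.normal(0.0, sigma, size=n)
    z2 = rng.normal(0.0, sigma, size=n)
    y1 = gamma1 * x + z1                 # first (protected) copy
    y2 = gamma2 * (b * x) + z2           # second (vulnerable) copy
    return x, y1, y2
```

The attack event $A$ itself would be drawn as an independent Bernoulli$(0.5)$ variable per frame.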
With no knowledge of $A$ at Bob, characterizing the mutual information (MI) of the adversarial channel is intractable due to the memory-property introduced by the attacker. However, when $A$ is perfectly known at Bob, we can compute the MI of the compound channel shown in Fig. \ref{fig:compound_channel}, wherein the underlying detector is the Genie detector, which assigns $\hat{A} = A$ for each frame.
\begin{proposition}
\label{prop_genie}
The MI of the compound channel involving the Genie detector is
\begin{eqnarray}
\label{eq_mi_m_genie_prior}
\mathcal{M}^{Genie} & = & I(x; y_{c, na})\frac{n}{2} + I(x; y_{1})\frac{n}{2},
\end{eqnarray}
where $y_{c, na} = (|\gamma_{1}|^2 + |\gamma_{2}|^2)x + z_{c} \mbox{ and } y_{1} = \gamma_{1}x + z_{1}$ are the scalar channels such that $x \in \{-1, +1\}$ with PMF $\{\alpha, 1 - \alpha\}$, and the additive noise $z_{c} = \gamma_{1}z_{1} + \gamma_{2}z_{2}$ is distributed as $\mathcal{N}(0, \sigma^{2}_{eq})$ with $\sigma^{2}_{eq} = (|\gamma_{1}|^{2} + |\gamma_{2}|^{2})\sigma^{2}$.
\end{proposition}
\begin{IEEEproof}
The average MI offered by the compound channel comprising the Genie detector is
\begin{eqnarray}
\label{eq:Genie_ergodic}
\mathcal{M}^{Genie} & = & I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 0)\mbox{prob}(A = 0)\nonumber \\
& & + ~I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 1)\mbox{prob}(A = 1),\nonumber \\
& = & \frac{1}{2} I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 0)\nonumber \\
& & + ~ \frac{1}{2}I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 1).
\end{eqnarray}
When $A = 1$, each bit of $x^{n}$ on the second copy is flipped by the attacker with probability $0.5$ in an i.i.d. fashion, and as a result, it is straightforward to prove that $I(x^{n}; y^{n}_{2} ~|~ A = 1) = 0$. As a consequence, we have
\begin{eqnarray}
\label{eq:Genie_ergodic_1}
I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 1) = I(x^{n}; y^{n}_{1}) = nI(x; y_{1}),
\end{eqnarray}
where the last equality is applicable due to the memoryless property of the channel on the first copy. This implies that discarding $y^{n}_{2}$ is the optimal strategy at Bob when $A = 1$. On the other hand, when $A = 0$, the mutual information of the compound channel is given by
\begin{eqnarray}
\label{eq:Genie_ergodic_2}
I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ A = 0) & = & I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ b^{n} = \mathbf{1}_{n}),\nonumber\\
& = & I(x^{n}; y^{n}_{c, na}),\nonumber\\
& = & nI(x; y_{c, na}),
\end{eqnarray}
where $y^{n}_{c, na}$ in the second equality is obtained by combining $y^{n}_{1}$ and $y^{n}_{2}$ as $y^{n}_{c, na} = \gamma_{1}y^{n}_{1} + \gamma_{2}y^{n}_{2} = (|\gamma_{1}|^2 + |\gamma_{2}|^2)x^{n} + z^{n}_{c},$ such that the additive noise vector $z^{n}_{c} = \gamma_{1}z^{n}_{1} + \gamma_{2}z^{n}_{2}$ is distributed as $\mathcal{N}(\mathbf{0}_{n}, \sigma^{2}_{eq}\mathbf{I}_{n})$, where $\sigma^{2}_{eq} = (|\gamma_{1}|^{2} + |\gamma_{2}|^{2})\sigma^{2}$. It is straightforward to verify that $I(x^{n}; y^{n}_{c, na}) = I(x^{n}; y^{n}_{1}, y^{n}_{2} ~|~ b^{n} = \mathbf{1}_{n})$, which implies that the combining strategy is optimal without the attack. Note that the last equality is applicable by using the memoryless nature of the channel, attributed to the perfect knowledge of $A$ at Bob. Finally, by using \eqref{eq:Genie_ergodic_1} and \eqref{eq:Genie_ergodic_2} in \eqref{eq:Genie_ergodic}, we get the expression for mutual information in \eqref{eq_mi_m_genie_prior}. This completes the proof.
\end{IEEEproof}
\begin{figure*}
\begin{equation}
\label{eq:pdf_y_no_attack}
P(y_{c, na}) = \frac{1}{\sqrt{2\pi \sigma^{2}_{eq}}}\left(\alpha e^{-\frac{(y_{c, na} - (|\gamma_{1}|^2 + |\gamma_{2}|^2))^{2}}{2\sigma^{2}_{eq}}} + (1-\alpha) e^{-\frac{(y_{c, na} + (|\gamma_{1}|^2 + |\gamma_{2}|^2))^{2}}{2\sigma^{2}_{eq}}}\right)\\
\end{equation}
\hrule
\end{figure*}
Since $x$ takes values from a finite input alphabet, $\mathcal{M}^{Genie}$ in \eqref{eq_mi_m_genie_prior} can be numerically computed as a function of the input PMF $\{\alpha, 1 - \alpha\}$, constants $\gamma_{1}$ and $\gamma_{2}$, and $\sigma^{2}$ \cite{HS}. Specifically, $I(x; y_{c, na})$ is given by
\begin{equation}
\label{eq:mi_no_attack}
I(x; y_{c, na}) = h(y_{c, na}) - h(y_{c, na} |x),
\end{equation}
where $h(y_{c, na}) = -\mathbb{E}[\mbox{log}_{2}(P(y_{c, na}))]$ such that $P(y_{c, na})$ is as given in \eqref{eq:pdf_y_no_attack}. The conditional entropy $h(y_{c, na}|x)$ can be computed using the distribution $P(y_{c, na} | x = \beta)$ given by
\begin{eqnarray*}
P(y_{c, na} | x = \beta) = \frac{1}{\sqrt{2\pi \sigma^{2}_{eq}}} e^{-\frac{(y_{c, na} - \beta(|\gamma_{1}|^2 + |\gamma_{2}|^2))^{2}}{2\sigma^{2}_{eq}}},
\end{eqnarray*}
for $\beta \in \{-1, +1\}$.
Similarly, we can also compute $I(x; y_{1})$.
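As a sketch, the scalar MI terms can be estimated by Monte Carlo integration of \eqref{eq:mi_no_attack}, using the mixture density in \eqref{eq:pdf_y_no_attack}; this is an assumed implementation of the standard numerical approach \cite{HS}, with illustrative parameters.

```python
import numpy as np

def mi_scalar_awgn(gain, sigma2, alpha=0.5, num=200_000, seed=0):
    """Monte Carlo estimate (in bits) of I(x; y) for y = gain*x + z,
    with prob(x = +1) = alpha and z ~ N(0, sigma2).

    h(y) = -E[log2 P(y)] is averaged over samples of the two-component
    Gaussian mixture; h(y | x) = 0.5*log2(2*pi*e*sigma2) is exact.
    """
    rng = np.random.default_rng(seed)
    x = np.where(rng.random(num) < alpha, 1.0, -1.0)
    y = gain * x + rng.normal(0.0, np.sqrt(sigma2), size=num)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma2)
    p = norm * (alpha * np.exp(-(y - gain) ** 2 / (2.0 * sigma2))
                + (1.0 - alpha) * np.exp(-(y + gain) ** 2 / (2.0 * sigma2)))
    h_y = -np.mean(np.log2(p))
    h_y_given_x = 0.5 * np.log2(2.0 * np.pi * np.e * sigma2)
    return h_y - h_y_given_x
```

For instance, $I(x; y_{1})$ uses gain $\gamma_{1}$ and noise variance $\sigma^{2}$, while $I(x; y_{c, na})$ uses gain $|\gamma_{1}|^{2} + |\gamma_{2}|^{2}$ and noise variance $\sigma^{2}_{eq}$.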
Although the mutual information of Genie detectors can be computed based on Proposition \ref{prop_genie}, it is well known that practical detectors are not perfect. Therefore, in the next section, we address the challenges involved in computing the MI of (imperfect) practical attack-detectors.
\section{Mutual Information with Practical Detection Strategy}
\begin{figure}
\includegraphics[scale=0.45]{fig_detection_strategy}
\caption{\label{fig:pract_detection_strategy}Depiction of the combining strategy with a practical detection algorithm.}
\end{figure}
We consider a practical attack-detection strategy, as shown in Fig. \ref{fig:pract_detection_strategy}, which uses the received samples $\{y^{n}_{1}, y^{n}_{2}\}$ to detect the flipping attack on every frame. Based on the detector's output, represented by the variable $\hat{A} \in \{0, 1\}$, Bob decides either to combine $y^{n}_{1}$ and $y^{n}_{2}$, or discard $y^{n}_{2}$. Note that this detector is typically imperfect, and as a result, it has its associated miss-detection and false-positive rates, defined as $p_{md} \triangleq \mbox{prob}(\hat{A} = 0 ~|~ A = 1)$ and $p_{fa} \triangleq \mbox{prob}(\hat{A} = 1 ~|~ A = 0)$, respectively. When the detector outputs $\hat{A} = 1$, Bob drops the samples $y^{n}_{2}$, and only uses the samples $y^{n}_{1}$ to recover the message. On the other hand, when the detector outputs $\hat{A} = 0$, Bob combines $y^{n}_{1}$ and $y^{n}_{2}$ to obtain $y^{n}_{c} = \gamma_{1}y^{n}_{1} + \gamma_{2}y^{n}_{2}$ and then uses it to recover the message.
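A minimal sketch of this detector-driven combining rule, with $\hat{A} = 1$ declaring an attack:

```python
import numpy as np

def combine(y1, y2, a_hat, gamma1=1.0, gamma2=1.0):
    """Detector-driven combining at Bob: drop y2 when an attack is
    declared (a_hat = 1), otherwise form y_c = gamma1*y1 + gamma2*y2."""
    y1 = np.asarray(y1, dtype=float)
    y2 = np.asarray(y2, dtype=float)
    return y1 if a_hat == 1 else gamma1 * y1 + gamma2 * y2
```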
In the event of miss-detection, i.e., when $A = 1$ and $\hat{A} = 0$, we know that $b^{n} \in \{-1, +1\}^{n}$ is random and unknown to Bob. Therefore, $y^{n}_{c}$ is denoted as $y^{n}_{c, a}$, and is given by
\begin{eqnarray}
\label{eq:rx_combine_when_attack}
y^{n}_{c, a} = (|\gamma_{1}|^2 + b^{n}|\gamma_{2}|^2) \circ x^{n} + z^{n}_{c}.
\end{eqnarray}
However, when $A = 0$ and $\hat{A} = 0$, we have $b^{n} = \textbf{1}_{n}$, and therefore, $y^{n}_{c}$ is denoted as $y^{n}_{c, na}$, and is given by
\begin{eqnarray}
\label{eq:rx_combine_when_no_attack}
y^{n}_{c, na} = (|\gamma_{1}|^2 + |\gamma_{2}|^2)x^{n} + z^{n}_{c}.
\end{eqnarray}
The MI of this detection strategy, denoted by $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$, is
\begin{eqnarray}
\label{eq:rate_non_genie}
\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}} & = & I(x^{n}; y^{n}_{c} ~|~ \hat{A} = 0)\mbox{prob}(\hat{A} = 0) \nonumber\\
& & + ~I(x^{n}; y^{n}_{1})\mbox{prob}(\hat{A} = 1),
\end{eqnarray}
where $\mbox{prob}(\hat{A} = 0) = \frac{1}{2}(1 + p_{md} - p_{fa})$ and $\mbox{prob}(\hat{A} = 1) = \frac{1}{2}(1 - p_{md} + p_{fa})$.
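These expressions follow from averaging the detector's conditional statistics over the equiprobable attack events, since $\mbox{prob}(\hat{A} = 0) = \frac{1}{2}(1 - p_{fa}) + \frac{1}{2}p_{md}$. A small simulation sketch (with illustrative $p_{md}$ and $p_{fa}$ values) confirms the formula:

```python
import numpy as np

def prob_a_hat_zero(p_md, p_fa, num=200_000, seed=1):
    """Empirical prob(A_hat = 0) for a detector with the given
    miss-detection and false-positive rates, with prob(A = 1) = 0.5."""
    rng = np.random.default_rng(seed)
    attack = rng.random(num) < 0.5          # A = 1: frame under attack
    u = rng.random(num)
    # under attack: A_hat = 0 (miss) w.p. p_md; otherwise: A_hat = 1 w.p. p_fa
    a_hat = np.where(attack, (u >= p_md).astype(int), (u < p_fa).astype(int))
    return np.mean(a_hat == 0)
```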
To compute $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$, we have to compute $I(x^{n}; y^{n}_{c} ~|~ \hat{A} = 0)$ for a given frame-length $n$. However, this requires us to evaluate the differential entropy of the probability distribution function $P(y^{n}_{c} ~|~ \hat{A} = 0)$ given in \eqref{pdf_y_combine_delta_A_0_expanded}. Since the input alphabet is finite in size, the corresponding differential entropy can only be computed using numerical methods, and as a result, computing $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$ is intractable for sufficiently large $n$ (of the order of hundreds). In a nutshell, the above computational issue arises because the equivalent channel when $\hat{A} = 0$ is not memoryless. To circumvent this problem, we show that the MI value $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$ of some detectors can be computed using an approximation under special conditions on $p_{md}$ and $p_{fa}$.
\begin{figure*}
\begin{eqnarray}
P(y^{n}_{c} ~|~ \hat{A} = 0) & = & \frac{P(y^{n}_{c} | A = 1, \hat{A} = 0)\mbox{prob}(A = 1, \hat{A} = 0) + P(y^{n}_{c} | A = 0, \hat{A} = 0)\mbox{prob}(A = 0, \hat{A} = 0)}{\mbox{prob}(\hat{A} = 0)}\nonumber \\
\label{pdf_y_combine_delta_A_0}
& = & \frac{P(y^{n}_{c, a})p_{md} + P(y^{n}_{c, na})(1-p_{fa})}{p_{md} + 1 - p_{fa}} \\
\label{pdf_y_combine_delta_A_0_expanded}
P(y^{n}_{c} ~|~ \hat{A} = 0) & = & \frac{p_{md}}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}} (p_{md} + 1 - p_{fa})} \frac{1}{2^{n}} \sum_{x^{n} = \bar{x}} \mbox{prob}(x^{n} = \bar{x}) \left(\sum_{b^{n} = \bar{b}} e^{- \frac{||y^{n}_{c} - (|\gamma_{1}|^{2} + \bar{b}|\gamma_{2}|^2)\circ \bar{x}||^{2}_{F}}{2\sigma^{2}_{eq}}}\right) \\
& & + ~ \frac{1 - p_{fa}}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}(p_{md} + 1 - p_{fa})} \sum_{x^{n} = \bar{x}} \mbox{prob}(x^{n} = \bar{x}) e^{- \frac{||y^{n}_{c} - (|\gamma_{1}|^{2} + |\gamma_{2}|^2) \bar{x}||^{2}_{F}}{2\sigma^{2}_{eq}}} \nonumber
\end{eqnarray}
\hrule
\end{figure*}
The following sequence of definitions and lemmas are useful to present our results on approximation in Theorem \ref{thm1}.
\begin{definition}
\label{def2}
For $0 \leq x, y \leq 1$, let a set $\mathcal{R}_{\mu}$, for some negligible $\mu > 0$, be defined as
\begin{equation*}
\mathcal{R}_{\mu} \triangleq \left\lbrace(x, y) ~|~ x \leq \frac{\mu}{1 - \mu}y\right\rbrace.
\end{equation*}
\end{definition}
\begin{definition}
\label{def3}
For a given attack-detector, we define its performance profile as
\begin{equation*}
\mathcal{P} \triangleq \left\lbrace (p_{md | \bar{x}}, 1 - p_{fa | \bar{x}}) ~|~ \forall ~\bar{x} \in \{-1, 1\}^{n} \right\rbrace,
\end{equation*}
where $p_{md | \bar{x}} = \mbox{prob}(\hat{A} = 0 | A = 1, x^{n} = \bar{x})$ and $p_{fa | \bar{x}} = \mbox{prob}(\hat{A} = 1 | A = 0, x^{n} = \bar{x})$.
\end{definition}
\begin{definition}
\label{def1}
For a given $\bar{x} \in \{-1, 1\}^{n}$, let $\mathcal{S}_{\bar{x}} = \{(|\gamma_{1}|^{2} + \bar{b} |\gamma_{2}|^2) \circ \bar{x} ~|~ \bar{b} \in \{-1, 1\}^{n}\}$ denote an $n$-dimensional discrete constellation in $\mathbb{R}^{n}$ obtained by varying $\bar{b}$ over $\{-1, +1\}^{n}$. On $\mathcal{S}_{\bar{x}}$, we define,
\begin{itemize}
\item $d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}}) = \displaystyle \min_{s^{n} \in \mathcal{S}_{\bar{x}}} ||y^{n} - s^{n}||^{2}_{F}$
\item $d^{2}_{max}(y^{n}, \mathcal{S}_{\bar{x}}) = \displaystyle \max_{s^{n} \in \mathcal{S}_{\bar{x}}} ||y^{n} - s^{n}||^{2}_{F}$
\item $d^{2}_{max}(\mathcal{S}_{\bar{x}}) = \displaystyle \max_{s^{n}_{1}, s^{n}_{2} \in \mathcal{S}_{\bar{x}}} ||s^{n}_{1} - s^{n}_{2}||^{2}_{F},$
\end{itemize}
where $y^{n} \in \mathbb{R}^{n}$ and $||\cdot||^{2}_{F}$ denotes the squared Euclidean distance.
\end{definition}
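For small $n$, $\mathcal{S}_{\bar{x}}$ can be enumerated by brute force; since flipping one component of $\bar{b}$ moves the corresponding coordinate by $2|\gamma_{2}|^{2}$, one obtains $d^{2}_{max}(\mathcal{S}_{\bar{x}}) = 4n|\gamma_{2}|^{4}$. A sketch, with the illustrative assumption $\gamma_{1} = \gamma_{2} = 1$:

```python
import itertools
import numpy as np

def constellation(x_bar, gamma1=1.0, gamma2=1.0):
    """Enumerate S_{x_bar} = {(|g1|^2 + b*|g2|^2) o x_bar : b in {-1,+1}^n}."""
    x_bar = np.asarray(x_bar, dtype=float)
    return np.array([(gamma1 ** 2 + np.array(b) * gamma2 ** 2) * x_bar
                     for b in itertools.product([-1.0, 1.0], repeat=len(x_bar))])

def d2_max(S):
    """d^2_max(S): largest squared Euclidean distance between points of S."""
    return max(float(np.sum((p - q) ** 2)) for p in S for q in S)
```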
\begin{lemma}
\label{lemma1}
If $a, b, \mu$ are such that $0 \leq a \leq 2b$ and $\mu > 0$ is a negligible number, then we have $\mu a + (1 - \mu) b \approx b.$
\end{lemma}
\begin{IEEEproof}
The convex combination $\mu a + (1 - \mu) b$ can be written as $b - \mu(b-a)$. This implies that $\mu a + (1 - \mu) b = b - \lambda$, where $0 \leq \lambda \leq \mu b$ when $a \leq b$, and $-\mu b \leq \lambda < 0$ when $b < a \leq 2b$. Since $\mu$ is negligible, $b - \lambda \approx b$ for every $b \geq 0$.
\end{IEEEproof}
Since the accuracy of the approximation depends on $\mu$, we henceforth denote $\approx$ by $\approx_{\mu}$.
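A quick numerical sanity check of Lemma \ref{lemma1}: the gap $|\mu a + (1 - \mu)b - b| = \mu|b - a|$ is at most $\mu b$ whenever $0 \leq a \leq 2b$.

```python
def lemma1_gap(a, b, mu):
    """Gap between the convex combination mu*a + (1-mu)*b and b."""
    return abs(mu * a + (1.0 - mu) * b - b)
```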
\begin{lemma}
\label{lemma2}
If $\gamma_{1}$, $\gamma_{2}$ and $\sigma^{2}_{eq}$ are such that $d^{2}_{max}(\mathcal{S}_{\bar{x}}) \leq 2\mbox{log}_{e}(2)\sigma^{2}_{eq}$ for each $\bar{x} \in \{-1, +1\}^{n},$ then we have
\begin{eqnarray}
\label{pdf_bound1}
P(y^{n}_{c, a} = y^{n} | \bar{x}) & \leq & 2P(y^{n}_{c, na} = y^{n} | \bar{x}),\\
\label{pdf_bound2}
P(y^{n}_{c, a} = y^{n}) & \leq & 2P(y^{n}_{c, na} = y^{n}),
\end{eqnarray}
for every $y^{n} \in \mathbb{R}^{n}$.
\end{lemma}
\begin{IEEEproof}
We only show the applicability of \eqref{pdf_bound1}. Since $P(y^{n}_{c, a} = y^{n})$ can be written as a weighted sum of $P(y^{n}_{c, a} = y^{n} | \bar{x})$ over all $\bar{x}$, \eqref{pdf_bound1} can be used to show the applicability of \eqref{pdf_bound2}. Given $x^{n} = \bar{x}$, the $n$-dimensional distribution of $y^{n}_{c, a}$ is given by $P(y^{n}_{c, a} | \bar{x}) = \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} \frac{1}{2^{n}} \sum_{b^{n} = \bar{b}} e^{- \frac{||y^{n}_{c, a} - (|\gamma_{1}|^{2} + \bar{b}|\gamma_{2}|^2)\circ \bar{x}||^{2}_{F}}{2\sigma^{2}_{eq}}}.$ When evaluated at $y^{n} \in \mathbb{R}^{n}$, we can upper bound the above term as
\begin{equation}
\label{eq:term1}
P(y^{n}_{c, a} = y^{n}| \bar{x}) \leq \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}})}{2\sigma^{2}_{eq}}},
\end{equation}
where $d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}})$ is as given in Definition \ref{def1}.
Meanwhile, the $n$-dimensional distribution of $y^{n}_{c, na}$ is given by
\begin{eqnarray}
\label{eq:term2}
P(y^{n}_{c, na} = y^{n}| \bar{x}) & = & \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{||y^{n} - (|\gamma_{1}|^{2} + |\gamma_{2}|^2)\bar{x}||^{2}_{F}}{2\sigma^{2}_{eq}}}, \nonumber \\
& \geq & \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{d^{2}_{max}(y^{n}, \mathcal{S}_{\bar{x}})}{2\sigma^{2}_{eq}}}, \nonumber \\
& \geq & \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}}) + d^{2}_{max}(\mathcal{S}_{\bar{x}})}{2\sigma^{2}_{eq}}}
\end{eqnarray}
where the first inequality holds since $(|\gamma_{1}|^{2} + |\gamma_{2}|^2)\bar{x} \in \mathcal{S}_{\bar{x}}$. The second inequality holds because of the triangle inequality. Finally, if $d^{2}_{max}(\mathcal{S}_{\bar{x}}) \leq 2\mbox{log}_{e}(2)\sigma^{2}_{eq}$ for each $\bar{x} \in \{-1, +1\}^{n},$ then \eqref{eq:term2} can be further lower bounded as
\begin{eqnarray}
\label{eq:term3}
P(y^{n}_{c, na} = y^{n}| \bar{x}) & \geq & \frac{1}{(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}}) + 2 \small{\mbox{log}}_{e}(2)\sigma^{2}_{eq}}{2\sigma^{2}_{eq}}},\nonumber \\
& = & \frac{1}{2(2 \pi \sigma^{2}_{eq})^{\frac{n}{2}}} e^{- \frac{d^{2}_{min}(y^{n}, \mathcal{S}_{\bar{x}})}{2\sigma^{2}_{eq}}},\\
& \geq & \frac{1}{2} P(y^{n}_{c, a} = y^{n}| \bar{x}),
\end{eqnarray}
where the last inequality is due to the bound in \eqref{eq:term1}. This implies that $P(y^{n}_{c, a} = y^{n} | \bar{x}) \leq 2P(y^{n}_{c, na} = y^{n} | \bar{x})$ for each $y^{n}$. This completes the proof.
\end{IEEEproof}
Using the results of Lemma $\ref{lemma1}$ and Lemma $\ref{lemma2}$, we are now ready to present our result on approximation.
\begin{theorem}
\label{thm1}
If $\gamma_{1}$, $\gamma_{2}$ and $\sigma^{2}_{eq}$ are such that $d^{2}_{max}(\mathcal{S}_{\bar{x}}) \leq 2\mbox{log}_{e}(2)\sigma^{2}_{eq}$ for each $\bar{x} \in \{-1, +1\}^{n},$ and if the detection strategy is such that $\mathcal{P} \subseteq \mathcal{R}_{\mu}$, for a fixed small $\mu > 0$, then we have $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}} \approx_{\mu, pdf} \mathcal{M}^{approx}_{p_{fa}}$, where
\begin{equation}
\label{eq:rate_non_genie_pmd_0}
\mathcal{M}^{approx}_{p_{fa}} = \frac{n}{2}I(x; y_{c, na})(1-p_{fa}) + \frac{n}{2}I(x; y_{1})(1+ p_{fa}),
\end{equation}
and the notation $\approx_{\mu, pdf}$ captures the notion that the approximation on MI is a result of approximating the underlying distributions using $\approx_{\mu}$.
\end{theorem}
\begin{IEEEproof}
Based on the expression of $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$ in \eqref{eq:rate_non_genie}, it is straightforward to show that $I(x^{n}; y^{n}_{1}) = nI(x; y_{1})$. In this proof, we only address the computation of $I(x^{n}; y^{n}_{c} ~|~ \hat{A} = 0)$. From first principles, we have
\begin{equation*}
I(x^{n}; y^{n}_{c} ~|~ \hat{A} = 0) = h(y^{n}_{c} ~|~ \hat{A} = 0) - h(y^{n}_{c} ~|~ x^{n}, \hat{A} = 0),
\end{equation*}
where $h(y^{n}_{c} ~|~ \hat{A} = 0)$ can be obtained using $P(y^{n}_{c} ~|~ \hat{A} = 0)$ as
\begin{equation*}
h(y^{n}_{c} ~|~ \hat{A} = 0) = -\mathbb{E}\left[\mbox{log}_{2}\left(P(y^{n}_{c} ~|~ \hat{A} = 0)\right)\right],
\end{equation*}
where $P(y^{n}_{c} ~|~ \hat{A} = 0)$ is as given in \eqref{pdf_y_combine_delta_A_0}.
When the attack-detection technique operates with $\mathcal{P} \subseteq \mathcal{R}_{\mu}$, we can show that $(p_{md}, p_{fa}) \in \mathcal{R}_{\mu}$, where $p_{fa} = \mathbb{E}[p_{fa|\bar{x}}]$ and $p_{md} = \mathbb{E}[p_{md|\bar{x}}]$ such that the expectation is over $x^{n}$. By applying the results of Lemma \ref{lemma1} and Lemma \ref{lemma2} on \eqref{pdf_y_combine_delta_A_0}, we get
\begin{equation*}
P(y^{n}_{c} ~|~ \hat{A} = 0) \approx_{\mu} P(y^{n}_{c, na}).
\end{equation*}
The above approximation holds because $\frac{p_{md}}{p_{md} + 1 - p_{fa}}$ plays the role of $\mu$ in Lemma \ref{lemma1}, and the condition $a \leq 2b$ of Lemma \ref{lemma1} is satisfied because of \eqref{pdf_bound2} in Lemma \ref{lemma2}. As a result $h(y^{n}_{c} ~|~ \hat{A} = 0) \approx_{\mu, pdf} -\mathbb{E}[\mbox{log}_{2}(P(y^{n}_{c, na}))]$. Furthermore, since each component of $y^{n}_{c, na}$ is independent across $n$, we have
\begin{equation}
\label{eq:appro_diff_entropy}
h(y^{n}_{c} ~|~ \hat{A} = 0) \approx_{\mu, pdf} h(y^{n}_{c, na}) = nh(y_{c, na}),
\end{equation}
where $h(y_{c, na}) = -\mathbb{E}[\mbox{log}_{2}(P(y_{c, na}))]$ such that $P(y_{c, na})$ is given by \eqref{eq:pdf_y_no_attack}. Similarly, the conditional differential entropy $h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n})$ is given by
\begin{equation}
\label{eq:diff_cond_entropy}
h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n}) = \sum_{x^{n} = \bar{x}} p(\bar{x} | \hat{A} = 0)h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x}),
\end{equation}
where $p(\bar{x} | \hat{A} = 0) \triangleq \mbox{prob}(x^{n} = \bar{x}|\hat{A} = 0)$ and $h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x}) = -\mathbb{E}[\mbox{log}_{2}(P(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x}))]$ such that $P(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x})$ can be written as
\begin{eqnarray}
\label{eq:cond_pdf}
\frac{P(y^{n}_{c, a}~|~x^{n} = \bar{x})p_{md | \bar{x}} + P(y^{n}_{c, na}~|~x^{n} = \bar{x})(1-p_{fa | \bar{x}})}{p_{md | \bar{x}} + 1 - p_{fa | \bar{x}}}.
\end{eqnarray}
To arrive at \eqref{eq:cond_pdf}, we assume that $A$ and $x^{n}$ are statistically independent. Again, applying the results of Lemma \ref{lemma1} and Lemma \ref{lemma2} on \eqref{eq:cond_pdf}, we have the approximation
\begin{equation*}
P(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x}) \approx_{\mu} P(y^{n}_{c, na}~|~x^{n} = \bar{x}),
\end{equation*}
for every $x^{n} = \bar{x}$. As a result, we have $h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n} = \bar{x}) \approx_{\mu, pdf} h(y^{n}_{c, na} ~|~ x^{n} = \bar{x})$. Finally, using the above expression in \eqref{eq:diff_cond_entropy}, we get
\begin{small}
\begin{eqnarray}
\label{eq:appro_diff_cond_entropy}
h(y^{n}_{c} ~|~ \hat{A} = 0, x^{n}) & \approx_{\mu, pdf} & \sum_{x^{n} = \bar{x}} p(\bar{x} | \hat{A} = 0)h(y^{n}_{c, na} ~|~ x^{n} = \bar{x})\nonumber \\
& = & h(z^{n}_{c}) = nh(y_{c, na} ~|~ x),
\end{eqnarray}
\end{small}
\noindent where the last equality is due to i.i.d. nature of $z_{c}^{n}$. Overall, using \eqref{eq:appro_diff_cond_entropy} and \eqref{eq:appro_diff_entropy} in \eqref{eq:rate_non_genie}, we get the expression in \eqref{eq:rate_non_genie_pmd_0}.
\end{IEEEproof}
$~~$\\
\indent The proposed sufficient condition on the performance profile of attack detectors is also depicted in Fig. \ref{fig:motivation}. Due to intractability in evaluating $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$, Theorem \ref{thm1} approximates the MI of a special class of detectors when (i) the detectors operate in the region $\mathcal{P} \subseteq \mathcal{R}_{\mu}$, and (ii) the channel parameters $\gamma_{1}, \gamma_{2}, \sigma^{2}$ satisfy Lemma \ref{lemma2}. For such a class of detectors, the MI $\mathcal{M}^{approx}_{p_{fa}}$, given in \eqref{eq:rate_non_genie_pmd_0}, is now easy to evaluate since $I(x; y_{c, na})$ and $I(x; y_{1})$ can be computed using standard numerical methods \cite{HS}. Note that the Genie detector trivially belongs to this special class, and as a result, \eqref{eq:rate_non_genie_pmd_0} is upper bounded by $\mathcal{M}^{Genie}$ in \eqref{eq_mi_m_genie_prior}. Also note that \eqref{eq:rate_non_genie_pmd_0} is lower bounded by $nI(x; y_{1})$, which is the MI offered by the conservative strategy of unconditionally dropping $y^{n}_{2}$ when recovering the message.
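As a worked sketch with assumed (illustrative) scalar MI values, \eqref{eq:rate_non_genie_pmd_0} linearly interpolates between the Genie rate at $p_{fa} = 0$ and the conservative rate $nI(x; y_{1})$ at $p_{fa} = 1$:

```python
def m_approx(n, i_cna, i_1, p_fa):
    """Approximate MI of a detector in the regime of Theorem 1, given the
    scalar MIs I(x; y_{c,na}) and I(x; y_1) in bits per channel use."""
    return 0.5 * n * i_cna * (1.0 - p_fa) + 0.5 * n * i_1 * (1.0 + p_fa)

# illustrative MI values (assumptions, not taken from the paper)
n, i_cna, i_1 = 100, 0.95, 0.70
m_genie = 0.5 * n * (i_cna + i_1)   # recovered at p_fa = 0
m_conservative = n * i_1            # recovered at p_fa = 1
```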
\section{Experiment Results}
\label{sec:det_techniques}
To conduct experiments on the performance of attack detection in echo-assisted communication, we use the system model in Section \ref{sec:system_model} with $\alpha = 0.5$, $\gamma_{1} = \gamma_{2} = 1$, and $\mbox{SNR} = 10\mbox{log}_{10}(\frac{1}{\sigma^{2}}) \in \{0, 5, 10, 15\}$ in dB. We propose the following two detectors which are designed to detect the flipping attack by using the first $n'$ samples of the received frames, namely $\{y^{n'}_{1}, y^{n'}_{2}\}$, for some $n' \leq n$.
\textit{1) k Nearest-Neighbor (KNN) MI Estimation:} Based on the attack model in Section \ref{sec:system_model}, we observe that $I(y_{1}; y_{2} ~|~ A = 0) > I(y_{1}; y_{2} ~|~ A = 1)$, and both these terms can be calculated off-line. As a result, we use a detection strategy that measures the MI between $y^{n'}_{1}$ and $y^{n'}_{2}$ by using scikit-learn \cite{Scikit} library's MI calculation method using $k$ nearest neighbors \cite{MI1}. The proposed detection strategy feeds an appropriate value of $\hat{A}$ to the combining block depending on whether the MI estimate is above or below the threshold, which in turn is empirically chosen such that $p_{md}$ is bounded by 0.1\%.
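A sketch of this detector using scikit-learn's \texttt{mutual\_info\_regression} (a KNN-based MI estimator); the threshold below is hypothetical and would be calibrated off-line so that $p_{md}$ meets the target:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def knn_mi_detect(y1, y2, threshold, n_neighbors=3):
    """Output A_hat = 1 (attack) when the KNN estimate of I(y1; y2),
    computed over the first n' samples of the frame, falls below threshold."""
    mi = mutual_info_regression(np.asarray(y1, dtype=float).reshape(-1, 1),
                                np.asarray(y2, dtype=float).ravel(),
                                n_neighbors=n_neighbors, random_state=0)[0]
    return int(mi < threshold)
```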
\textit{2) Neural Network (NN) Classifier:} In this method, we pose attack detection as a supervised classification problem. The proposed NN uses two hidden layers with ReLU activations followed by a sigmoid output layer. The inputs to the training phase constitute the channel outputs, namely $\{y^{n'}_{1}, y^{n'}_{2}\}$ (with 50\% of the frames under attack), along with the respective ground truths on $A$. Based on these inputs, the NN estimates the probability of attack by minimizing a binary cross-entropy loss. We train for eight epochs with a batch size of $512$ using the Adam optimizer \cite{adam}, which suffices for convergence on the training set. To satisfy the constraint $p_{md} = 0.1\%$, we empirically find a threshold that gives a 0.1\% miss-detection rate on the training data set, and then measure $p_{md}$ and $p_{fa}$ on the validation data set.
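The pipeline above can be sketched as follows. We hedge that this is a toy stand-in: the synthetic sign-flip channel, the feature map, the layer sizes, and the use of scikit-learn's \texttt{MLPClassifier} (which likewise trains two ReLU hidden layers with a logistic output under binary cross-entropy and Adam) are illustrative substitutes for our actual network and data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_prime, n_frames = 20, 2000
x = rng.standard_normal((n_frames, n_prime))
A = rng.integers(0, 2, n_frames)                     # roughly 50% of frames attacked
y1 = x + 0.3 * rng.standard_normal((n_frames, n_prime))
echo = np.where(A[:, None] == 1, -x, x)              # toy flipping attack on the echo
y2 = echo + 0.3 * rng.standard_normal((n_frames, n_prime))
features = np.hstack([y1 * y2, y1, y2])              # products expose the sign flip

clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    solver="adam", max_iter=300, random_state=0)
clf.fit(features[:1500], A[:1500])                   # training split

# Pick the threshold empirically so that ~99.9% of training attacks are caught,
# then measure p_md and p_fa on the held-out validation split.
train_scores = clf.predict_proba(features[:1500])[:, 1]
threshold = np.quantile(train_scores[A[:1500] == 1], 0.001)
val_scores = clf.predict_proba(features[1500:])[:, 1]
A_val = A[1500:]
p_md = np.mean(val_scores[A_val == 1] < threshold)
p_fa = np.mean(val_scores[A_val == 0] >= threshold)
```

The key design point carried over from our experiments is that the threshold is fit on training data to meet the $p_{md}$ budget, and the reported $(p_{md}, p_{fa})$ come from the validation split.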
For each combination of $n' \in \{10, 20, \ldots, 100\}$ and $\mbox{SNR} \in \{0, 5, 10, 15\}$, we repeat the experiments to compute the $p_{fa}$ of the above detectors while driving their $p_{md}$ to $0.1\%$. Subsequently, we substitute the corresponding $p_{fa}$ in \eqref{eq:rate_non_genie_pmd_0} to obtain $\mathcal{M}^{approx}_{p_{fa}}$, as presented in Fig. \ref{fig:ar}. The plots show that the NN classifier outperforms the KNN detector significantly at $\mbox{SNR} = 0 \mbox{ dB}$, whereas its benefits are marginal at $\mbox{SNR} = 5 \mbox{ dB}$. Furthermore, we highlight that the $\mathcal{M}^{approx}_{p_{fa}}$ offered by the NN detector is close to that of the Genie detector for frame-lengths as short as $100$ and $40$ symbols at $0$ dB and $5$ dB, respectively. For more details on our experiments, we refer the reader to \cite{github}, where the source codes of the detectors are also available.
\begin{figure}
\centering
\includegraphics[scale=0.47]{ComparisionFinal}
\vspace{-0.4cm}
\caption{\label{fig:ar}$\mathcal{M}^{approx}_{p_{fa}}$ of the attack detectors based on the KNN and NN classifiers for $n' \in \{10, 20, \ldots, 100\}$ and $\mbox{SNR} \in \{0, 5, 10, 15\}$ dB. We omit the results for $\mbox{SNR} = 10, 15$ dB since both detectors achieve the Genie bound there.}
\end{figure}
\subsection{Discussion on Relevance of Theorem \ref{thm1}}
\label{sec:discussion}
For each $n'$ and $\mbox{SNR}$, we can evaluate the tightness of the MI values in Fig. \ref{fig:ar} by first computing $\mathcal{P}$, and then determining an appropriate $\mu'$ such that $\mathcal{P} \subseteq \mathcal{R}_{\mu'}$. With that, \eqref{eq:rate_non_genie_pmd_0} qualifies as an approximation with accuracy $\mu'$. Although obtaining the performance profile $\mathcal{P}$ through exhaustive experiments is computationally challenging for large $n$, sampling techniques can be used to estimate $\mu'$. For instance, at $n' = 50$ and $\mbox{SNR} = 0 \mbox{ dB}$, we have used the NN classifier to empirically compute the pairs $\{(p_{md | \bar{x}}, 1 - p_{fa | \bar{x}})\}$ for $10000$ randomly chosen codewords, and have verified that more than $99\%$ of them lie inside $\mathcal{R}_{\mu'}$ with $\mu' = 3 \times 10^{-3}$.
As the second caveat, we recall that Theorem \ref{thm1} is applicable only if $\gamma_{1}, \gamma_{2}$ and $\sigma^{2}$ satisfy the conditions in Lemma \ref{lemma2}. However, for arbitrary values of $\gamma_{1}, \gamma_{2}$ and $\sigma^{2}$, we do not have a proof of the applicability of the upper bound in \eqref{pdf_bound1} for all $y^{n} \in \mathbb{R}^{n}$, nor can we verify \eqref{pdf_bound1} for a given $y^{n} \in \mathbb{R}^{n}$ due to the intractable distributions involved. Acknowledging these limitations, we caution the reader not to interpret the plots in Fig. \ref{fig:ar} as exact MI values. Nevertheless, we present $\mathcal{M}^{approx}_{p_{fa}}$ as it serves as a benchmark for comparison with tighter approximations of $\mathcal{M}^{non-Genie}_{p_{md}, p_{fa}}$ in the future.
\section{Introduction}
This work is concerned with the large time asymptotic behavior of a class of branching Markov processes in continuous time, which we call {\em growth-fragmentation processes}. These may be used to model the evolution of a population, for instance of bacteria,
in which an individual reproduces by fission into two or more new
individuals.
Each individual
grows continuously, with the growth depending
deterministically on the current mass of the individual, up to
a random instant at which fission occurs.
This individual, which may be thought of as a mother,
is then replaced by a family of new individuals, referred to as her
daughters. We assume that mass is preserved at fission, meaning
that the mass of the mother immediately before the division
is equal to the sum of the masses of her daughters immediately afterwards.
The time at which the fission occurs and the masses of her daughters
at fission are both random, and depend on the mass of the mother
individual.
After a fission event, the daughters are in turn viewed as mothers of
future generations,
and evolve according to the same dynamics,
independently of the other individuals.
Mathematically, we represent this as
a process in continuous time, $\Zz=(\Zz_t, t\geq 0)$,
with values in the space of point measures on $(0,\infty)$.
Each individual is represented as an atom in $\Zz_t$,
whose location is the individual's mass.
That is, if at time $t$ there are $n\in \NN \cup\{\infty\}$ individuals
present, with masses $z_1,z_2,\dotsc$, then
$\Zz_t = \sum_{i=1}^n \delta_{z_i}$, with $\delta_z$ the Dirac delta
at $z \in (0,\infty)$.
Growth-fragmentation processes are members of the family of structured population models, which were first studied using analytic methods in the framework of linear integro-differential equations. To demonstrate this connection,
consider the \emph{intensity measure} $\mu_t$ of $\Zz_t$, defined by
$\crochet{\mu_t,f} = \EE[\crochet{ \Zz_t,f}]$ for all $f\in{\mathcal C}_c$. That is, $f$ is a continuous function on $(0,\infty)$ with compact support, and the notation $\crochet{m,f} = \int f \, \dd m$ is used for the integral of a function $f$ against a Radon measure $m$ on $(0,\infty)$, whenever this makes sense.
In words, $\mu_t(A)$ describes the concentration of individuals at time $t$ with masses in the set $A\subset (0,\infty)$, and, informally, the evolution of the branching Markov process $\Zz$ entails that
the family $(\mu_t)_{t\geq 0}$ solves an evolution equation
(see \cite{EN00} for background) of the form
\begin{equation}\label{E:GFE}
\frac{\dd }{\dd t} \crochet{\mu_t ,f}= \crochet{ \mu_t, {\mathcal A} f}, \qquad f\in{\mathcal C}_c^1,
\end{equation}
where the infinitesimal generator
\[
{\mathcal A}f(x)
=
c(x) f'(x) + {B}(x) \int_{\Pp} \left( \sum_{i=1}^{\infty}f(xp_i)-f(x)\right)\kappa(x, \dd \pp)
\]
is naturally associated to the dynamics of $\Zz$
and $f$ is a smooth function in the domain of ${\mathcal A}$.
The meaning of this operator will be described precisely later,
when we derive it in equation \eqref{E:GFG}.
Briefly, $c\colon (0,\infty)\to (0,\infty)$ is a continuous function
representing the growth rate, $B\colon (0,\infty) \to [0,\infty)$ is a
bounded measurable function representing the fission rate,
and $\kappa$ a measurable probability kernel describing
the relative masses of the daughters obtained at fission.
That is, an individual of mass $x$ grows at rate $c(x)$,
experiences fission at rate $B(x)$ and, if fission occurs,
then the relative masses of the daughters are drawn from
the distribution $\kappa(x,\cdot)$.
We shall refer to \eqref{E:GFE} as the {\em growth-fragmentation equation}.
A fundamental problem in this analytic setting is to determine explicit conditions on the parameters governing the evolution of the system that ensure the so-called (asynchronous) {\em Malthusian behavior}: for all $f\in{\mathcal C}_c$,
\begin{equation} \label{E:Malthus}
\EE[\crochet{ \Zz_t,f}]=\crochet{\mu_t,f} \sim \e^{\lambda t}\crochet{\mu_0,h } \crochet{\nu,f}\qquad \text{as }t \to \infty,
\end{equation}
where $\lambda \in \RR$, $h$ is a positive function, and $\nu$ is a Radon measure on $(0,\infty)$ with $\crochet{\nu,h }=1$.
When \eqref{E:Malthus} holds, we call $\lambda$ the
\emph{Malthus exponent} and $\nu$ the \emph{asymptotic profile}.
There exists a vast literature on this topic, and we content ourselves here with citing a few contributions \cite{BerGab2, CalvoDoumicPerthame, DDGW, Esco} amongst the most recent ones, in which many further references can be found.
Spectral analysis of the infinitesimal generator ${\mathcal A}$ often plays a key role for establishing \eqref{E:Malthus}. Indeed, if
there exist $\lambda \in \RR$, a positive function $h$ and a Radon measure $\nu$ that solve the eigenproblem
\begin{equation} \label{E:eigenp}
{\mathcal A}h =\lambda h \ ,\ {\mathcal A}'\nu =\lambda \nu\ ,\
\crochet{\nu,h }=1,
\end{equation}
with $\mathcal{A}'$ the adjoint operator to $\mathcal{A}$,
then \eqref{E:Malthus} follows rather directly. In this direction, the Perron-Frobenius paradigm, and more specifically
the Krein-Rutman theorem (which requires compactness of certain operators related to ${\mathcal A}$) yield
a powerful framework for establishing the existence of solutions to the eigenproblem \eqref{E:eigenp}. This method
has been widely used in the literature; see, for instance, \cite{Per07, BCG-fine, DoumGab, MS16}.
Then $\lambda$ arises as the leading eigenvalue of ${\mathcal A}$, i.e., the eigenvalue with the maximal real part, and
$h$ and $\nu$ respectively as a corresponding positive eigenfunction and dual eigenmeasure.
A stochastic approach for establishing \eqref{E:Malthus}, which is based on the Feynman-Kac formula and circumvents spectral theory, has been
developed by the authors in \cite{BW, BeFK} and Cavalli in \cite{Cavalli}.
To carry out this programme, we introduce, under the assumption
$ \sup_{x>0} c(x)/x<\infty$, the unique
strong Markov process $X$ on $(0,\infty)$ with generator
\[ \mathcal{G} f (x)
= \frac{1}{x} \mathcal{A}\bar{f}(x) - \frac{c(x)}{x} f(x), \]
where $\bar{f}(x) = x f(x)$.
Assume that $X$ is irreducible and aperiodic, and
define the Feynman-Kac weight
$$\mathcal{E}_t = \exp\left(\int_0^t \frac{c(X_s)}{X_s} \, \dd s\right),$$
and the Laplace transform
$$L_{x,y}(q)
= \EE_x[ \e^{- qH(y)} \mathcal{E}_{H(y)} \Indic{H(y)<\infty}], $$
where $H(y)=\inf\{t>0: X_t=y\}$ denotes the first hitting time of $y$ by $X$.
A weaker version of Theorem 1.2 in \cite{BeFK} (see also Theorem 1.1 in \cite{BW}) can then be stated as follows.
\setcounter{dummy}{-1}
\begin{theorem}\label{t:0}
Fix $x_0 > 0$. Define
\[ \lambda = \inf\{q\in \RR: L_{x_0,x_0}(q) < 1\}. \]
Then $\lambda \in\RR$, and its value does not depend on $x_0$.
If
\begin{equation} \label{E:main}
\limsup _{x\to 0+} \frac{c(x)}{x}< \lambda \quad\text{and}\quad \limsup _{x\to \infty} \frac{c(x)}{x}<\lambda,
\end{equation}
then the Malthusian behavior \eqref{E:Malthus} holds (so $\lambda$ is the Malthus exponent)
with
$$h(x) = xL_{x,x_0}(\lambda) \quad \text{and} \quad
\nu(\dd y) = \frac{\dd y}{h(y)c(y) \lvert L'_{y,y}(\lambda)\rvert}.$$
\end{theorem}
Indeed, in \cite{BeFK}, it was even shown that \eqref{E:main} implies
that the convergence in \eqref{E:Malthus} occurs at an exponential rate.
\autoref{t:0} will form the basis of our work,
the purpose of which is to investigate the analog of \eqref{E:Malthus} for the random variable
$\crochet{ \Zz_t,f}$ itself, rather than merely its expectation.
More precisely, assuming for simplicity that the growth-fragmentation process $\Zz$ starts from a single individual with mass $x>0$ and writing $\PP_x$ for the corresponding probability law, we prove the following result:
\begin{theorem}\label{t:main}
Under the assumptions of \autoref{t:0},
the process $\Zz$ exhibits \emph{strong Malthusian behavior}: for all $x>0$ and for $f$ any continuous function satisfying $\lVert f/h \rVert_{\infty}<\infty$, one has
\begin{equation} \label{E:MalthusF}
\lim_{t\to \infty} \e^{-\lambda t} \crochet{ \Zz_t,f}= \crochet{\nu,f} W_{\infty} \qquad \text{ in }L^1(\PP_x),
\end{equation}
where
$$W_\infty=\lim_{t\to \infty} \e^{-\lambda t} \crochet{ \Zz_t,h} \quad\text{and} \quad \EE_x[W_{\infty}]=h(x).$$
\end{theorem}
The criterion \eqref{E:main} involves the Malthus exponent $\lambda$, which is itself usually not explicitly known. It might therefore appear unsatisfactory.
However, one can easily obtain lower bounds for $\lambda$ solely in terms of the characteristics of the growth-fragmentation process, and these yield a fully explicit criterion.
We give an example of such a result as a conclusion to this work.
Of course, even though the Malthusian behavior \eqref{E:Malthus} suggests that its strong version \eqref{E:MalthusF} might hold, this is by no means automatic. For instance, it should be plain that \eqref{E:MalthusF} cannot hold when $\lambda$ is negative.
The question of strong Malthusian behavior
has been considered in the literature on branching processes for several different models, including
general Crump-Mode-Jagers branching processes \cite{Nerman, Jagers,JN84},
branching random walks \cite{Biggins92},
branching diffusions \cite{BBHHR, EnHaKy, GiHaHa, HaHa, HaHeKy},
branching Markov processes \cite{AsHer, CRY, ChYu, Shio},
pure fragmentation processes \cite{Beresty, BeRou, Ber-frag}
and certain other growth-fragmentation processes \cite{BBCK, Dadoun, ShiQ-ou}.
A notable recent development is the study of the neutron transport equation
and associated stochastic processes
\cite{CHHK, HKV, HHK}, which uses a different probabilistic approach based on the notion of quasi-stationarity, as in \cite{ChVil}.
Of course, these are just a sample of works on this topic,
and many more references can be found cited within them.
In particular, we can view $(\Zz_t, t\ge 0)$ as a general branching process
in the sense of Jagers \cite{Jagers}. This means that, rather than tracking
the mass of individuals at a given time, we instead track the birth time, birth
mass and death (i.e., fission) time
of every individual
in each successive generation. This process can be characterised in terms
of a \emph{reproduction kernel}; given the birth time and mass of an
individual, this describes the distribution of the
birth times and masses of its daughters.
Assuming that this general branching
process is Malthusian and supercritical (as defined in Section 5 of \cite{Jagers} in terms of the reproduction kernel), and that a certain $x\log x$ integrability condition and some further technical assumptions are fulfilled, Theorem 7.3 in \cite{Jagers} essentially states that
\eqref{E:MalthusF} holds with $W_{\infty}$ the terminal value of the so-called {\em intrinsic martingale}. However, the assumptions and the quantities appearing in Theorem 7.3 in \cite{Jagers} are defined in terms of the reproduction kernel, sometimes in an implicit way. It appears to be rather difficult to understand the hypotheses and conclusions of \cite{Jagers} in terms of the parameters of the growth-fragmentation process; for instance, it does not seem to be straightforward to connect the general branching
process with the eigenproblem \eqref{E:eigenp}.
Our approach combines classical elements with some more recent ingredients.
Given the Malthusian behaviour recalled in \autoref{t:0},
the main technical issue is
to find explicit conditions, in terms of the characteristics of the growth-fragmentation, which ensure the uniform integrability of the intrinsic martingale.
However, the intrinsic martingale is defined in terms of the generations of the
associated general branching process rather than in natural time (see Section 5 of \cite{Jagers}), and it is difficult to connect this to the dynamics of the growth-fragmentation process.
We will circumvent this difficulty as follows.
As \autoref{t:0} may suggest, we first establish a so-called many-to-one
(or Feynman-Kac) formula, which provides an expression for the intensity measure $\mu_t$ of the point process $\Zz_t$ in terms of a functional of the (piecewise deterministic) Markov process $X$. Making use of results in \cite{BW}, this enables us to confirm that $\mu_t$ indeed solves the growth-fragmentation equation \eqref{E:GFE},
and to construct a remarkable additive martingale associated with the growth-fragmentation process $\Zz$, namely
$$W_t=\e^{-\lambda t} \crochet{\Zz_t, h},\qquad t\geq 0, $$
where the Malthus exponent $\lambda$ and the function $h$ are defined in terms of the Markov process $X$.
In fact, $W$ is nothing but the version in natural time of the intrinsic martingale indexed by generations, as defined in Section 5 of \cite{Jagers}.
We shall then prove that the boundedness in $L^2(\PP_x)$, and hence the uniform integrability, of the martingale $W$ follows from \eqref{E:main} by adapting the well-known spinal decomposition technique
(described in \cite{BigKyp} for branching random walks) to our framework.
The spine process, which is naturally associated to the intrinsic martingale, plays an important role in the proof of the strong Malthusian behavior \eqref{E:MalthusF}. Specifically, it yields a key tightness property for the random point measures $\Zz_t$, which then enables us to focus on individuals with masses bounded away from $0$ and from $\infty$. This is crucial to extend the original method of Nerman \cite{Nerman} to our setting.
The rest of this paper is organized as follows. In Section 2, we describe the precise construction of the growth-fragmentation process $\Zz$, which is needed in Section 3 to establish a useful many-to-one formula for the intensity measure $\mu_t$ of $\Zz_t$. In particular, a comparison with results in \cite{BW} makes the connection with the growth-fragmentation equation \eqref{E:GFE} rigorous.
The $L^2$-boundedness of the intrinsic martingale is established in Section 4 under the assumption \eqref{E:main},
and we then prove the strong Malthusian behavior \eqref{E:MalthusF} in Section 5. Section 6 is devoted to providing explicit conditions on the characteristics of the growth-fragmentation that ensure \eqref{E:main}.
\section{Construction of the growth-fragmentation process}
To start with, we introduce the three characteristics $c, {B}$ and $\kappa$ which govern the dynamics of the growth-fragmentation process. First, let $c\from (0,\infty) \to (0,\infty)$ be a continuous function with
\begin{equation}\label{e:condc}
\sup_{x>0} c(x)/x<\infty,
\end{equation}
which describes the growth rate of individuals as a function of their masses.
For every $x_0>0$, the initial value problem
\begin{equation}\left\{
\begin{aligned}
\dot{x}(t) &= c(x(t)), \qquad t \geq 0, \\
x(0) &= x_0,
\end{aligned}
\right.
\label{e:ode}
\end{equation}
has a unique solution, which we interpret as the mass at time $t$ of an individual with initial mass $x_0$ when no fission has occurred before time $t$.
Next, we consider a bounded,
measurable function ${B}\colon (0,\infty)\to [0,\infty)$,
which specifies the rate at which a particle breaks (or branches) as a function of its mass. That is, the probability that no fission event has occurred by time $t>0$ when the mass at the initial time is $x_0$, is given by
$$\PP_{x_0}[\text{no fission before time }t]=\exp\left(-\int_0^t {B}(x(s))\dd s\right)=\exp\left(-\int_{x_0}^{x(t)} \frac{{B}(y)}{c(y)}\dd y\right).$$
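This survival probability suggests a simple inverse-transform scheme for sampling the fission time: draw a standard exponential variable and run the cumulative hazard $\int_0^t B(x(s))\dd s$ along the flow until it first exceeds that value. A minimal numerical sketch, assuming the purely illustrative characteristics $c(x)=ax$ and constant $B(x)=b$ (under which the fission time is simply exponential with rate $b$):

```python
import numpy as np

def sample_fission_time(x0, c, B, rng, dt=1e-3, t_max=50.0):
    """Draw E ~ Exp(1) and return the first t with int_0^t B(x(s)) ds >= E."""
    target = rng.exponential(1.0)
    x, t, H = x0, 0.0, 0.0
    while t < t_max:
        H += B(x) * dt          # accumulate hazard along the deterministic flow
        x += c(x) * dt          # Euler step for x'(t) = c(x(t))
        t += dt
        if H >= target:
            return t
    return np.inf               # no fission observed before t_max

rng = np.random.default_rng(0)
a, b = 0.5, 1.0                 # toy choices: c(x) = a*x, B(x) = b
times = [sample_fission_time(1.0, lambda x: a * x, lambda x: b, rng)
         for _ in range(2000)]
mean_time = np.mean(times)      # should be close to 1/b = 1 for this toy model
```

For non-constant $B$ the same loop applies unchanged; only the accumulated hazard ceases to be linear in $t$.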
To complete the description and specify the statistics at fission events, we need to introduce some further notation. We call a non-increasing sequence $\pp=(p_1, p_2, \ldots)$ in the unit sphere of $\ell^1$, i.e.,
$$ p_1\geq p_2\geq \dotsb \geq 0 \text{ and } \sum_{i\geq 1}p_i= 1,$$
a (proper) {\em mass partition}. In our setting, we interpret a mass partition as the sequence (ranked in non-increasing order) of the daughter-to-mother mass ratios at a fission event, agreeing that $p_i=0$ when the mother begets fewer than $i$ daughters.
The space $\Pp$ of mass partitions is naturally endowed with the $\ell^1$-distance
and we write ${\mathcal B}(\Pp)$ for its Borel $\sigma$-algebra. We consider a probability kernel
$$\kappa \from (0,\infty)\times {\mathcal B}(\Pp) \to [0,1],$$
and think of $\kappa (x,\dd \pp)$ as the distribution of the random mass partition resulting from a fission event that occurs when the mother has mass $x>0$. We always implicitly assume that $\kappa(x, \dd \pp)$ has no atom at the trivial mass partition $(1,0,0, \ldots)$, as the latter corresponds to a fictive fission.
We next provide some details on the construction of growth-fragmentation processes and make the framework rigorous. We denote by $\Uu = \bigcup_{n\ge 0} \NN^n$ the Ulam-Harris tree of finite sequences of positive integers, which will
serve as labels for the individuals.
As usual, we interpret the length $|u|=n$ of a sequence $u\in\NN^n$ as a generation, and for $i\in\NN$,
write $ui$ for the sequence in $\NN^{n+1}$ obtained by appending $i$ to $u$ as its $(n+1)$-th element, viewing $ui$ as the $i$-th daughter of $u$.
The unique element of $\NN^0$, written $\varnothing$, will represent
an initial individual.
We fix $x_0>0$ and aim at constructing the growth-fragmentation process $(\Zz_t, t\geq 0)$ started from
a single atom at $x_0$, which we understand to represent a single
progenitor individual, \emph{Eve}. We denote by $\PP_{x_0}$ the corresponding
probability measure.
First consider a random variable
$\zeta $ in $(0, \infty]$ with cumulative distribution function
$$\PP_{x_0}[\zeta \leq t]= 1-\exp\left(-\int_{x_0}^{x(t)} \frac{{B}(y)}{c(y)}\dd y\right), \qquad t\geq 0,$$
where $x(\cdot)$ denotes the solution to the flow velocity \eqref{e:ode} started from $x_0$. We view $\zeta $ as the fission time of Eve, and thus the trajectory of Eve is
$$Z_t^{\varnothing}=x(t) \text{ for }t<\zeta .$$
We further set $b^{\varnothing}= 0$ and $d^{\varnothing}=\zeta $, so $[b^{\varnothing}, d^{\varnothing})$ is the time interval during which Eve is alive. We also view
$d^{\varnothing}$ as the birth-time of the daughters of Eve and thus set $b^i=d^{\varnothing}$ for every $i\in\NN$.
Next, conditionally on $d^{\varnothing}=s<\infty$, that is, equivalently, on $Z_{d^{\varnothing}-}^{\varnothing}=x$
with $x=x(s)$, we pick a random mass partition $\pp=(p_1, \ldots)$ according to the law $\kappa(x,\dd \pp)$. We view
$xp_1, xp_2, \ldots$ as the masses at birth of the daughters of Eve and continue the construction iteratively in an obvious way. That is, conditionally on $xp_i=y>0$, the lifetime $\zeta^i$ of the $i$-th daughter of Eve has the same distribution as $\zeta $ under $\PP_{y}$.
Further set $d^i=b^i+\zeta^i$, and the trajectory of the $i$-th daughter of Eve is thus
$$Z_{t}^{i}=x(t-b^i) \text{ for }t\in[b^i,d^i),$$
with $x(\cdot)$ now denoting the solution to \eqref{e:ode} started from $y$.
We stress that, thanks to \eqref{e:condc}, the boundary point $0$ is a trap for the flow velocity, in the sense that
the solution to \eqref{e:ode} with initial value $x(0)=0$ is $x(t)=0$ for all $t$. Thus $0$ serves as a cemetery state for particles, and individuals with zero mass can simply be discarded.
This enables us to construct recursively a trajectory $(Z^u_t: t\in[b^u, d^u))$ for every $u\in \Uu$, and the state of the growth-fragmentation process at time $t$ is then given by the point measure on $(0,\infty)$ with atoms at the locations of the individuals alive at time $t$, viz.
$$\Zz_t = \sum_{u\in\Uu} \Ind_{\{t\in[b^u, d^u)\}} \delta_{Z^u_t}.$$
We stress that the number of individuals may explode at a finite time even in situations when every mother always begets finitely many children (see, e.g. \cite{Savits}), and then infinitely many fission events may occur on any non-degenerate time interval. On the other hand, it is readily seen from
our key assumption \eqref{e:condc}
that the total mass process
increases at most exponentially fast, specifically
$$\crochet{\Zz_t, {\mathrm {Id}}}\leq x\e^{\gamma t}, \qquad \PP_x{\text{-a.s.}} $$
where $\gamma=\sup_{x>0} c(x)/x$. Thus the point process $\Zz_t$ is always locally finite; however the growth-fragmentation is not always a continuous time Markov chain.
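The construction above is straightforward to simulate. The sketch below uses purely illustrative characteristics, not tied to any particular application: $c(x)=ax$ (so $x(t)=x_0\e^{at}$ between fissions), a constant fission rate $B(x)=b$, and binary fission into two equal halves. With these choices the bound is attained with $\gamma=a$: mass conservation at fission together with proportional growth forces $\crochet{\Zz_t,\mathrm{Id}}=x_0\e^{at}$ exactly, even though the number of individuals is random.

```python
import numpy as np

def simulate(x0, a, b, t, rng):
    """Masses of the individuals alive at time t, for c(x)=a*x, B(x)=b,
    and binary fission into two equal halves (illustrative toy model)."""
    alive = []
    stack = [(0.0, x0)]                  # (birth time, birth mass) of pending individuals
    while stack:
        birth, mass = stack.pop()
        zeta = rng.exponential(1.0 / b)  # lifetime: fission at constant rate b
        if birth + zeta > t:
            alive.append(mass * np.exp(a * (t - birth)))   # still alive at time t
        else:
            m = mass * np.exp(a * zeta)                    # mass at fission
            stack.append((birth + zeta, m / 2))            # two equal daughters
            stack.append((birth + zeta, m / 2))
    return alive

rng = np.random.default_rng(1)
masses = simulate(x0=1.0, a=0.4, b=1.0, t=3.0, rng=rng)
total = sum(masses)        # equals x0 * exp(a*t) = e^{1.2} up to rounding
```

Note that the number of atoms in the resulting point measure is random, while the total mass is deterministic for this particular toy model.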
\section{A many-to-one formula}
The first cornerstone of our analysis is a useful expression for the expectation of the integral of a function with respect to the random point measure $\Zz_t$ in terms of a certain Markov process $X$ on $(0,\infty)$. In the literature, such identities are commonly referred to as
many-to-one formulas; they go back to \cite{KaP, Pey} and are known to play a crucial role in the analysis of branching processes.
Recall that a size-biased pick from a mass partition $\pp=(p_1, \ldots)$ refers to a random element
$p_K$, where the distribution of the random index $K$ is $\PP(K=i)=p_i$ for $i\in\NN$. Size-biased picking enables us to map the probability kernel $\kappa$ on $(0,\infty)\times \Pp$
into a kernel $\bar k$ on $(0,\infty)\times (0,1)$ by setting for every $x>0$
$$\int_{(0,1)} g(r)\bar k(x,\dd r)= {B}(x) \int_{\Pp} \sum_{i=1}^{\infty} p_ig(p_i) \kappa(x, \dd \pp)$$
for a generic measurable function $g\from (0,1)\to \RR_+$.
We stress that $\int_{(0,1)} \bar k(x,\dd r)= {B}(x)$ since $\kappa$ is a probability kernel on the space of proper mass partitions.
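For instance, for binary fission into two equal halves, i.e. $\kappa(x, \dd \pp)=\delta_{(\frac12,\frac12,0,\ldots)}$, the size-biased kernel reduces to
$$\int_{(0,1)} g(r)\bar k(x,\dd r)= {B}(x)\left(\tfrac12 g(\tfrac12)+\tfrac12 g(\tfrac12)\right)={B}(x)g(\tfrac12),$$
that is, $\bar k(x,\dd r)={B}(x)\delta_{1/2}(\dd r)$, which indeed has total mass ${B}(x)$.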
We then introduce the operator
$${\mathcal G} f (x) = c(x) f'(x) +\int_{(0,1)} (f(rx) - f(x)) \bar k(x, \dd r),$$
defined, say, for functions $f: (0,\infty)\to \RR$ that are bounded and possess a bounded and continuous derivative. It is easily seen that ${\mathcal G}$
is the infinitesimal generator of a unique Markov process, say $X=(X_t, t\geq 0)$.
Recall that we have assumed condition \eqref{e:condc}
and that $B$ is bounded.
By a slight abuse of notation, we also use $\PP_{x_0}$ for the probability measure under which this piecewise deterministic Markov process starts from $X_0=x_0$.
The evolution of $X$ can be described in words as follows. The process is driven by the flow velocity \eqref{e:ode} until it makes its first downward jump; more precisely, the total jump rate at state $x$ is $\int_{(0,1)}\bar k(x,\dd r)={B}(x)$. Further, conditionally on the event that a jump occurs when the process is about to reach $x$, the position after the jump is distributed according to the image of the probability law ${B}(x)^{-1}\bar k(x, \dd r)$ under the dilation $r\mapsto rx$. An alternative formulation, which makes the connection to the growth-fragmentation process more transparent, is that $X$ follows the path of Eve up to her fission, then picks a daughter at random by size-biased sampling and follows her path, and so on, and so forth.
We now state a useful representation of the intensity measure of $\Zz_t$ in terms of the Markov process $X$.
\begin{lemma}[Many-to-one formula -- Feynman-Kac representation] \label{L1}
Define, for every $t\geq 0$,
\[
\Ee_t = \exp\biggl\{\int_0^t \frac{c(X_s)}{X_s} \, \dd s\biggr\}.
\]
For every measurable $f\from (0,\infty)\to \RR_+$ and every $x_0>0$, we have
\[
\EE_{x_0} \left[ \langle \Zz_t, f\rangle \right]
= x_0\EE_{x_0}\left[ \frac{f(X_t)}{X_t}\Ee_t \right].
\]
\end{lemma}
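Lemma \ref{L1} is easy to check numerically in the toy self-similar case $c(x)=ax$, $B(x)=b$ constant, with binary fission into equal halves (illustrative choices, not required by the lemma). There the spine grows exponentially and halves at rate $b$, so $X_t = x_0\e^{at}2^{-N_t}$ with $N_t$ Poisson of parameter $bt$, while $\Ee_t=\e^{at}$ exactly. Taking $f\equiv 1$, the right-hand side becomes $x_0\EE[\Ee_t/X_t]=\EE[2^{N_t}]=\e^{bt}$, which matches the expected number of individuals in a binary branching process with branching rate $b$.

```python
import numpy as np

# Monte Carlo check of the many-to-one formula for the toy model described
# in the lead-in: c(x) = a*x, B(x) = b, binary fission into equal halves.
rng = np.random.default_rng(2)
x0, a, b, t = 1.0, 0.4, 1.0, 1.0
n_mc = 200_000

N = rng.poisson(b * t, size=n_mc)        # number of spine jumps by time t
X_t = x0 * np.exp(a * t) * 0.5 ** N      # spine position at time t
E_t = np.exp(a * t)                      # Feynman-Kac weight (deterministic here)
rhs = x0 * np.mean(E_t / X_t)            # many-to-one estimate of E[<Z_t, 1>]
exact = np.exp(b * t)                    # expected number of individuals, e^{b t}
```

The estimate `rhs` concentrates around `exact` as the Monte Carlo sample grows, in agreement with the lemma.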
Lemma \ref{L1} is closely related to Lemma 2.2 in \cite{BW}, which provides a representation of the solution to the growth-fragmentation equation \eqref{E:GFE} by Feynman-Kac formula. Specifically, introduce the growth-fragmentation operator ${\mathcal A}$
given for every $f\in{\mathcal C}^1_c$ by
\begin{eqnarr}\label{E:GFG}
{\mathcal A}f(x) &=&c(x) f'(x) + \int_{(0,1)} r^{-1} f(rx) \bar k(x, \dd r) -{B}(x) f(x) \nonumber \\
&=&
c(x) f'(x) + {B}(x) \int_{\Pp} \left( \sum_{i=1}^{\infty}f(xp_i)-f(x)\right)\kappa(x, \dd \pp),
\end{eqnarr}
then comparing Lemma \ref{L1} above and Lemma 2.2 in \cite{BW} shows that the intensity measure $\mu_t$ of $\Zz_t$ solves \eqref{E:GFE} with $\mu_0=\delta_{x_0}$.
A fairly natural approach for establishing Lemma \ref{L1} would be to argue first that the intensity measure of $\Zz_t$ solves the growth-fragmentation equation for ${\mathcal A}$ given by \eqref{E:GFG} and then invoke Lemma 2.2 in \cite{BW}. This idea is easy to implement when the number of daughters after a fission event is bounded (for instance, when fissions are always binary);
however, making this analytic approach fully rigorous in the general case would be rather tedious, as the total number of individuals may explode in finite time and thus fission events accumulate. We rather follow a classical probabilistic approach and refer to the treatise by Del Moral \cite{Delmoral} and the lecture notes of Shi \cite{Shi} for background.
\begin{proof}
We set $T_0=0$ and then write $T_1<T_2<\dotsb$ for the sequence of the jump times of the piecewise deterministic Markov process $X$.
We claim that for every generation $n\geq 0$, there is the identity
\begin{equation}\label{e:m21gen}
\EE_{x_0} \left[ \sum_{|u|=n} f(Z^u_t) \Ind_{\{b^u \leq t < d^u\}} \right]
= x_0\EE_{x_0}\left[ \Ind_{\{T_n \leq t < T_{n+1}\}}\frac{f(X_t)}{X_t}\Ee_t \right].
\end{equation}
The many-to-one formula of Lemma \ref{L1} then follows by summing over all generations.
We shall now establish \eqref{e:m21gen} by iteration.
The identity
\begin{equation}\label{e:Eexp}
\exp\left( \int_0^t \frac{c(x(s))}{x(s)}\dd s\right) = \frac{x(t)}{x(0)}
\end{equation}
for the solution to the flow velocity \eqref{e:ode}
makes \eqref{e:m21gen} obvious for the generation $n=0$.
Next, by considering the fission rates of Eve, we get that for every measurable function $g\from [0,\infty)\times [0,\infty)\to \RR_+$ with $g(t,0)=0$, we have
\begin{equation}
\EE_{x_0}\left[ \sum_{i=1}^{\infty} g(b^i, Z^i_{b^i})\right]
=\int_0^{\infty} \dd t {B}(x(t)) \exp\left(-\int_0^t {B}(x(s)) \dd s\right) \int_{\Pp} \kappa(x(t), \dd \pp)
\sum_{i=1}^{\infty}g(t,x(t)p_i).
\label{e:interm1}
\end{equation}
We then write
$$ \sum_{i=1}^{\infty}g(t,x(t)p_i)=x(t) \sum_{i=1}^{\infty}p_i\frac{g(t,x(t)p_i)}{x(t)p_i},$$
so that by comparing with the jump rates of $X$, we see that
the right-hand side of \eqref{e:interm1}
equals
$$
\EE_{x_0}\left[ \frac{g(T_1, X_{T_1})}{X_{T_1}}X_{T_1-}\right] =
x_0 \EE_{x_0}\left[ \frac{g(T_1, X_{T_1})}{X_{T_1}}\Ee_{T_1}\right],
$$
where the identity stems from \eqref{e:Eexp}. Putting the pieces together, we have shown that
\begin{equation} \label{e:interm}
\EE_{x_0}\left[ \sum_{i=1}^{\infty} g(b^i, Z^i_{b^i})\right]= x_0 \EE_{x_0}\left[ \frac{g(T_1, X_{T_1})}{X_{T_1}}\Ee_{T_1}\right].
\end{equation}
We then assume that \eqref{e:m21gen} holds for a given $n\geq 0$. Applying the branching property at the fission event of Eve, we get
$$ \EE_{x_0} \left[ \sum_{|u|=n+1} f(Z^u_t) \Ind_{\{b^u \leq t < d^u\}} \right]=
\EE_{x_0}\left[ \sum_{i=1}^{\infty} g(b^i, Z^i_{b^i})\right],
$$
with
$$g(s,y)= \EE_{y} \left[ \sum_{|u|=n} f(Z^u_t) \Ind_{\{b^u \leq t-s < d^u\}} \right] = y\EE_y\left[ \Ind_{\{T_n \leq t-s < T_{n+1}\}}\frac{f(X_{t-s})}{X_{t-s}}\Ee_{t-s}
\right]
$$
for $s\leq t$ and $g(s,y)=0$ otherwise. We conclude from the strong Markov property at the first jump time $T_1$ of $X$,
the fact that the functional $\Ee$ is multiplicative, and \eqref{e:interm},
that the many-to-one formula \eqref{e:m21gen} holds for the generation $n+1$.
By induction, \eqref{e:m21gen} holds for any $n$.
\end{proof}
In the final section of this work, we shall also need a version of Lemma \ref{L1} extended to the situation where, roughly speaking, individuals are frozen at
times which are observable from their individual trajectories.
Specifically, we define a \emph{simple stopping line} to be a functional $T$ on the space of piecewise continuous trajectories $z=(z_t)_{t\geq 0}$ and with values in $[0,\infty]$, such that for every $t\geq 0$ and every trajectory $z$, if $T(z)\leq t$, then $T(z)=T(z')$ for any trajectory $z'$ that coincides with $z$ on the time-interval $[0,t]$. Typically, $T(z)$ may be the instant of the $j$-th jump of $z$, or the first entrance time $T(z)=\inf\{t>0: z_t\in A\}$ in some measurable set $A\subset (0,\infty)$.
The notion of a simple stopping line is a particular case of the more general
stopping line introduced by Chauvin \cite{Cha-stop}. The restriction simplifies
the proofs somewhat, and will be sufficient for our applications later.
We next introduce the notion of ancestral trajectories. Recall from the preceding section the construction of the trajectory $(Z^u_t: t\in[b^u, d^u))$ for an individual $u=u_1\ldots u_n\in\Uu$.
The sequence of prefixes $u^j=u_1 \ldots u_j$ for $j=1, \ldots, n$ forms the ancestral lineage of that individual. Note that, as customary for many branching models, the death-time of a mother always coincides with the birth-time of her children, so every individual $u$ alive at time $t>0$ (i.e. with $b^u\leq t < d^u$) has a unique ancestor alive at time $s\in[0,t)$, which is the unique prefix $u^j$ with $b^{u^j}\leq s < d^{u^j}$. We can thus define unambiguously the mass at time $s$ of the unique ancestor of $u$ which is alive at that time, viz.\ $Z^u_s=Z^{u^j}_s$. This way, we extend $Z^u$ to $[0,d^u)$, and get the ancestral trajectory of the individual $u$.
For the sake of simplicity, for any simple stopping line $T$ and any trajectory $z$, we write $z_T=z_{T(z)}$, and define
the point process of individuals frozen at $T$ as
$$\Zz_T = \sum_{u\in\Uu} \Ind_{\{T(Z^u)\in[b^u, d^u)\}} \delta_{Z^u_T}.$$
\begin{lemma}\label{L1'} Let $T$ be a simple stopping line.
For every measurable $f\from (0,\infty)\to \RR_+$ and every $x_0>0$, we have
\[
\EE_{x_0} \left[ \langle \Zz_T, f\rangle \right]
= x_0\EE_{x_0}\left[ \frac{f(X_T)}{X_T}\Ee_{T(X)}, T(X)<\infty \right].
\]
\end{lemma}
\begin{proof} The proof is similar to that of Lemma \ref{L1}, and we use the same notation as there.
In particular, we write $x(\cdot)$ for the solution to the flow velocity \eqref{e:ode} started from $x(0)=x_0$,
and set $T(x(\cdot))=t_0\in[0,\infty]$. By the definition of a simple stopping line, it is plain that under $\PP_{x_0}$,
$T(Z^{\varnothing}) = t_0$ a.s.\ on the event $0\leq T(Z^{\varnothing})\leq d^{\varnothing}$, and
also $T(X)=t_0$ a.s.\ on the event $0\leq T(X)\leq T_1$.
Using \eqref{e:Eexp}, we then get
$$\EE_{x_0} \left[f(Z^{\varnothing}_T) \Ind_{\{b^{\varnothing} \leq T(Z^{\varnothing}) < d^{\varnothing}\}} \right] =x_0\EE_{x_0}\left[ \Ind_{\{0 \leq T < T_{1}\}}\frac{f(X_T)}{X_T}\Ee_T \right].$$
Just as in the proof of Lemma \ref{L1}, it follows readily by induction
that for every generation $n\geq 0$, there is the identity
$$
\EE_{x_0} \left[ \sum_{|u|=n} f(Z^u_T) \Ind_{\{b^u \leq T(Z^u) < d^u\}} \right]
= x_0\EE_{x_0}\left[ \Ind_{\{T_n \leq T < T_{n+1}\}}\frac{f(X_T)}{X_T}\Ee_T \right],
$$
and we conclude the proof by summing over generations.
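In more detail, since the jump times $T_n$ of $X$ increase to infinity (the total jump rate being finite), the events $\{T_n\leq T<T_{n+1}\}$, $n\geq 0$, form a partition of $\{T(X)<\infty\}$ up to a null set, and summing the displayed identity over all generations yields
$$\EE_{x_0}\left[ \langle \Zz_T, f\rangle \right]= \sum_{n=0}^{\infty} x_0\EE_{x_0}\left[ \Ind_{\{T_n \leq T < T_{n+1}\}}\frac{f(X_T)}{X_T}\Ee_T \right] = x_0\EE_{x_0}\left[ \frac{f(X_T)}{X_T}\Ee_{T(X)}, T(X)<\infty \right].$$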
\end{proof}
\section{Boundedness of the intrinsic martingale in \texorpdfstring{$L^2(\PP)$}{L\textasciicircum 2(P)} }
In order to apply results from \cite{BW,BeFK}, we shall now make some further fairly mild assumptions that will be enforced throughout the rest of this work.
Specifically, we suppose henceforth that
\begin{equation} \label{e:assump}
\text{the Markov process $X$, with generator $\Gg$, is irreducible and aperiodic.}
\end{equation}
Although \eqref{e:assump} is expressed in terms of the Markov process $X$ rather than the characteristics of the growth-fragmentation process, it is easy to give some fairly general and simple conditions
in terms of $c, B$ and $\kappa$ that guarantee \eqref{e:assump}; see notably Lemma 3.1 of \cite{BeFK} for a discussion of irreducibility. We further stress that aperiodicity should not be taken for granted when the jump kernel $\bar k$ is not assumed to be absolutely continuous.
\begin{remark} We mention that a further assumption is made in \cite{BW,BeFK}, namely that the kernel $\bar k(x,\dd y)$ is absolutely continuous with respect to the Lebesgue measure, and that the function $(0,\infty) \ni x \mapsto \bar{k}(x,\cdot) \in L^1(0,\infty)$ is continuous. However, this is only needed in \cite{BW} to ensure some analytic properties (typically, the Feller property of the semigroup, or the connection with the eigenproblem \eqref{E:eigenp}), but plays no role in the probabilistic arguments developed there.
We can safely drop this assumption here, and apply results of \cite{BW,BeFK} for which it was irrelevant.
\end{remark}
Following \cite{BW}, we introduce the Laplace transform
\[
L_{x,y}(q) = \EE_x\bigl[ \e^{-qH(y)} \Ee_{H(y)} \Indic{H(y) < \infty} \bigr],\qquad q\in\RR,
\]
where $H(y) = \inf\{ t > 0: X_t = y \}$.
For any $x_0>0$, the map $L_{x_0,x_0}\from \RR \to (0,\infty]$ is a convex non-increasing function
with $\lim_{q\to \infty}L_{x_0,x_0}(q)=0$. We then {\em define} the {\em Malthus exponent} as
$$\lambda \coloneqq \inf\{q\in\RR: L_{x_0,x_0}(q)<1\}.$$
Recall that the value of $\lambda$ does not depend on the choice of $x_0$, and that although our definition of the Malthus exponent apparently differs from that in Section 5 of \cite{Jagers}, Proposition 3.3 of \cite{BW} strongly suggests that the two should actually yield the same quantity.
With this in place, we define the functions $\ell, h \from (0,\infty) \to (0,\infty)$ by
\[ \ell(x) = L_{x, x_0}(\lambda) \text{ and } h(x) = x \ell(x), \]
and may now state the main result of this section.
\begin{theorem} \label{T1}
Assume
\begin{equation}\label{e:cc}
\limsup _{x\to 0+} \frac{c(x)}{x}< \lambda \quad\text{and}\quad \limsup _{x\to \infty} \frac{c(x)}{x}<\lambda.
\end{equation}
Then for every $x>0$, the process
\[ W_t = \e^{-\lambda t} \crochet{\Zz_t, h}, \qquad t\geq 0\]
is a martingale bounded in $L^2(\PP_x)$.
\end{theorem}
Before tackling the core of the proof of Theorem \ref{T1}, let us first recall some features proved in \cite{BW,BeFK} and their immediate consequences. From Section 3.5 in \cite{BeFK}, it is known that \eqref{e:cc} ensures the existence of some $q<\lambda$ with $L_{x_0,x_0}(q)<\infty$.
Since the function $L_{x_0,x_0}$ is continuous and non-increasing on its domain, this guarantees that
\begin{equation} \label{e:assump:malthus}
L_{x_0 , x_0}(\lambda)=1.
\end{equation}
Theorem 4.4 in \cite{BW} then shows that $\e^{-\lambda t} \ell(X_t) \Ee_t$ is a $\PP_{x}$-martingale,
and we can combine the many-to-one formula of Lemma \ref{L1} and the branching property of $\Zz$ to conclude that $W_t$ is indeed a $\PP_x$-martingale. We
therefore call $h$ a \emph{$\lambda$-harmonic function}; in this vein,
recall also from Corollary 4.5 and Lemma 4.6 in \cite{BW} that
$h$ is an eigenfunction for the eigenvalue $\lambda$ of (an extension of) the growth-fragmentation operator ${\mathcal A}$ which has been defined in \eqref{E:GFG}.
We call $W=(W_t: t\geq 0)$ the \emph{intrinsic martingale}, as it bears a close connection to the process with the same name that has been defined in Section 5 of \cite{Jagers}. To explain this connection, it is convenient to view the atomic measure $\e^{-\lambda t} \Zz_t$ as a weighted version of the point measure $\Zz_t$, where the weight of any individual at time $t$ is $\e^{-\lambda t}$. In this setting, $W_t$ is given by the integral of the $\lambda$-harmonic function $h$ with respect to the weighted atomic measure $\e^{-\lambda t} \Zz_t$.
Next, consider for each $k\in \NN$, the simple stopping line $T_k$ at which a trajectory makes its $k$-th jump, and recall from the preceding section that $\Zz_{T_k}$ then denotes the point measure obtained from $\Zz$ by freezing individuals at the instant when their ancestral trajectories jump for the $k$-th time. In other words, $\Zz_{T_k}$ is the point process that describes the position at birth of the individuals of the $k$-th generation. Just as above, we further discount the weight assigned to each individual at rate $\lambda$, so that the weight of an individual of the $k$-th generation which is born at time $b$ is $\e^{-\lambda b}$ (of course, individuals of the same generation are born at different times, and thus have different weights). The integral, say ${\mathcal W}_k$, of the $\lambda$-harmonic function $h$ with respect to this atomic measure, is precisely the intrinsic martingale as defined in \cite{Jagers}. Using more general stopping line techniques, one can check that the boundedness in $L^2$ of $(W_t: t\geq 0)$ can be transferred to $({\mathcal W}_k: k\in\NN)$. Details are left to the interested reader.
\begin{remark} Actually \eqref{e:assump:malthus}, which is a weaker assumption than \eqref{e:cc}, not only ensures that the process in continuous time $W_t=\e^{-\lambda t} \crochet{\Zz_t, h}$ is a martingale, but also
that the same holds for the process indexed by generations $({\mathcal W}_k: k\in\NN)$. Indeed,
from the very definition of the function $L_{x_0,x_0}$, \eqref{e:assump:malthus} states that the expected value under $\PP_{x_0}$
of the nonnegative martingale $\e^{-\lambda t} \ell(X_t) \Ee_t$, evaluated at the first return time $H(x_0)$, equals $1$, and therefore the stopped martingale
$$\e^{-\lambda (t\wedge H(x_0))} \ell(X_{t\wedge H(x_0)}) \Ee_{t\wedge H(x_0)}, \qquad t\geq 0$$ is uniformly integrable. Plainly, the first jump time $T_1$ of $X$ occurs before $H(x_0)$, and the optional sampling theorem yields
$$\EE_{x_0}[\e^{-\lambda T_1} \Ee_{T_1} \ell(X_{T_1})]=1.$$
One concludes from the many-to-one formula of Lemma \ref{L1'} (or rather, an easy pathwise extension of it)
that $\EE_{x}[{\mathcal W}_1]=h(x)$ for all $x>0$, and the martingale property of ${\mathcal W}$ can now be seen from the branching property. \end{remark}
The rest of this section is devoted to the proof of Theorem \ref{T1}; in particular we assume henceforth that \eqref{e:cc} is fulfilled.
To start with, we recall from Lemma 4.6 of \cite{BW} that the function $\ell$ is bounded and continuous, and
as a consequence
\begin{equation}\label{e:bornh}
\sup_{y>0} h(y)/y = \sup_{y>0} \ell(y)=\| \ell \|_{\infty}<\infty.
\end{equation}
Moreover, $\ell$ and $h$ are strictly positive, and thus bounded away from $0$ on compact subsets of $(0,\infty)$. We shall often use these facts in the sequel.
The heart of the matter is thus to establish boundedness of $(W_t)_{t\ge 0}$
in $L^2(\PP_x)$, for which we follow the classical path based on probability tilting and the spine decomposition; see e.g. \cite{BigKyp} and references therein.
For an arbitrary time $t>0$, one defines a probability measure $\tilde \PP_x$ on an augmented probability space by further distinguishing an individual $U_t$, called the spine, in such a way that
\[
\tilde{\PP}_x[ \Lambda \cap \{ U_t = u \}]
= h(x)^{-1}\e^{-\lambda t} \EE_x[ {h}(Z^u_t) \Ind_{\Lambda}\Ind_{\{b^u \leq t < d^u\}}]
\]
for $\Lambda \in \FF_t = \sigma( \Zz_s: s \le t)$ and $u\in \Uu$.
The projection of $ \tilde{\PP}_x$ on $\FF_t$ is then absolutely continuous with respect to $\PP_x$ with density
$W_t/W_0$.
Recall that the martingale property of $W$ ensures the consistency of the definition of $\tilde \PP_x$. Precisely,
under the conditional law $ \tilde{\PP}_x[\cdot \mid \FF_t]$, the spine is picked at random amongst the individuals alive at time $t$ according to an $h$-biased sampling, and the ancestor $U_s$ of $U_t$ at time $s\leq t$ serves as spine at time $s$.
In order to describe the dynamics of the mass of the spine $\tilde X_t=Z^{U_t}_t$ as time passes, we introduce first for every $x>0$
$$w(x)=\int_{\Pp}\sum_{i=1}^{\infty} h(xp_i) \kappa(x, \dd \pp)$$
and set
\begin{equation}\label{e:ratespine}
\tilde {B}(x)= \frac{w(x)}{h(x)}{B}(x)\ \text{and}\ \tilde \kappa(x, \dd \pp)= w(x)^{-1} \sum_{i=1}^{\infty} h(xp_i) \kappa(x, \dd \pp).
\end{equation}
In short, one readily checks that, just like $X$, the process $\tilde X$ increases steadily and has only negative jumps. Its growth is driven by the flow velocity \eqref{e:ode}, and the total rate of negative jumps at location $x$ is
$\tilde {B}(x)$, which is the total fission rate of the spine when its mass is $x$. Further, $\tilde \kappa(x, \dd \pp)$
gives the distribution of the random mass partition resulting from the fission of the spine, given that the mass of the latter immediately before that fission event is $x$.
At the fission event of the spine, a daughter is selected at random by $h$-biased sampling and becomes the new spine. We now gather some facts about the spine which will be useful later on.
\begin{lemma}\label{L20}
Set $\tilde X_t=Z^{U_t}_t$ for the mass of the spine at time $t\geq 0$.
\begin{enumerate}
\item The process
$\tilde X$ is Markovian and exponentially point recurrent, in the sense that
if we write $\tilde H(y) =\inf\{t>0: \tilde X_t=y\}$ for the first hitting time of $y>0$ by $\tilde X$, then there exists
$\varepsilon>0$ such that
$\tilde \EE_x[\exp(\varepsilon \tilde H(y))] <\infty$.
\item The following many-to-one formula holds: for every nonnegative measurable function $f$ on $(0,\infty)$, we have
\begin{equation}\label{e:mtoY}
\EE_x[\crochet{\Zz_t, f}]=\e^{\lambda t} h(x)\tilde \EE_x[f(\tilde X_t)/h(\tilde X_t)].
\end{equation}
\item Any function $g\from (0,\infty)\to \RR$ such that $g\ell$ is continuously differentiable belongs to the domain
of the extended infinitesimal generator $\tilde {\mathcal G}$ of $\tilde X$, and
\begin{equation}\label{E:genY}
\tilde {\mathcal G}g(x)
= \frac{1}{\ell(x)} {\mathcal G}(g\ell)(x) +(c(x)/x- \lambda) g(x),
\end{equation}
in the sense that the process
$$
g(\tilde X_t)-\int_0^t \tilde {\mathcal G}g(\tilde X_s) \, \dd s
$$
is a $\tilde \PP_x$-local martingale for every $x>0$.
\end{enumerate}
\end{lemma}
\begin{proof} It follows immediately from the definition of the spine and the many-to-one formula of Lemma \ref{L1} that for every $t\geq 0$, the law of $\tilde X_t$ under $\tilde \PP_x$ is absolutely continuous with respect to that of $X_t$ under $\PP_x$, with density $\e^{-\lambda t} \Ee_t \ell(X_t)/\ell(X_0)$.
Recall that the latter is a $\PP_x$-martingale, which served in Section 5 of \cite{BW} to construct a point-recurrent Markov process $Y=(Y_t, t\geq 0)$ by probability tilting. Hence $Y$ has the same one-dimensional marginals as $\tilde X$,
and since the two processes are Markovian, they have the same law.
The claim that $\tilde X$ (or equivalently, $Y$) is exponentially point recurrent stems from the proof of Theorem 2 of \cite{BeFK} and Lemma 5.2(iii) of \cite{BW}. The many-to-one formula \eqref{e:mtoY} merely rephrases the very definition of the spine.
Finally, the third claim about the infinitesimal generator follows from Lemma 5.1 in \cite{BW}.
\end{proof}
\begin{remark}
The description of the dynamics governing the evolution of the spine entails that
its infinitesimal generator can also be expressed by
\begin{eqnarray}\label{E:gentilde}
\tilde {\mathcal G}f(x) &=& c(x) f'(x) + \frac{B(x)}{h(x)} \int_{\Pp} \left(\sum_{i=1}^{\infty}h(xp_i)f(xp_i)-f(x)\right) \kappa(x, \dd \pp),
\end{eqnarray}
say for any $f\in{\mathcal C}^1_c$. The agreement between
\eqref{E:genY} and \eqref{E:gentilde} can be seen from the identity
$\mathcal{G}\ell(x) = (\lambda - c(x)/x)\ell(x)$, which is proved in
Corollary 4.5(i) of \cite{BW}.
\end{remark}
We readily deduce from Lemma \ref{L20} that the intensity measure of the growth-fragmentation process satisfies the Malthusian behavior \eqref{E:Malthus} uniformly on compact sets.
\begin{corollary}\label{C1} For every compact set $K\subset (0,\infty)$ and every continuous function $f$ with $\|f/h\|_{\infty}<\infty$, we have
$$\lim_{t\to \infty} \e^{-\lambda t}\EE_x[\crochet{ \Zz_t,f}]= h(x) \crochet{\nu,f} \qquad \text{uniformly for }x\in K,$$
where the asymptotic profile is given by $\nu=h^{-1}\pi$, with $\pi$
the unique stationary law of the spine process $\tilde X$.
\end{corollary}
\begin{proof} Suppose $K\subset [b,b']$ for some $0<b<b'<\infty$, and fix $\varepsilon \in (0,1)$.
For every $0<x<y$, let $s(x,y)$ denote the instant when the flow velocity \eqref{e:ode} started from $x$ reaches $y$.
Since the total jump rate $\tilde B$ of $\tilde X$ remains bounded on $K$ and the growth rate $c$ is bounded away from $0$ on $K$, we can find a finite sequence $b=x_1< x_2 < \ldots < x_j=b'$
such that for every $i=1, \ldots, j-1$ and every $x\in(x_i, x_{i+1})$:
$$s(x,x_{i+1})<1 \text{ and } s(x_i,x) < 1,$$
as well as
$$\tilde \PP_{x}(\tilde H(x_{i+1})=s(x,x_{i+1})) >1-\varepsilon \quad \text{and}\quad \tilde \PP_{x_i}(\tilde H(x)=s(x_i,x)) >1-\varepsilon. $$
An immediate application of the simple Markov property now shows that for every $i=1, \ldots, j-1$, every $x\in(x_i, x_{i+1})$, every $t\geq 1$, and every nonnegative measurable function $g$, the following bounds hold
\[
(1-\varepsilon) \tilde \EE_{x_{i+1}}(g(\tilde X_{t-s(x,x_{i+1})}))
\le
\tilde \EE_{x}(g(\tilde X_{t}))
\le
(1-\varepsilon)^{-1} \tilde \EE_{x_i}(g(\tilde X_{t+s(x_i,x)})).
\]
On the other hand, we know that $\tilde X$ is irreducible, aperiodic and ergodic, with stationary law $\pi$ (recall Lemma \ref{L20}(i)). Since $\varepsilon$ can be chosen arbitrarily small, it follows from above that $\tilde X$ is uniformly ergodic on $K$, in the sense that for every continuous and bounded function $g$,
$$\lim_{t\to \infty} \tilde \EE_{x}(g(\tilde X_{t}))= \crochet{\pi, g} \qquad \text{uniformly for }x\in K.$$
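Spelling out the last step: for nonnegative continuous and bounded $g$, letting $t\to\infty$ in the sandwich bounds above and using ergodicity of $\tilde X$ at the fixed points $x_i, x_{i+1}$ gives, uniformly for $x\in(x_i,x_{i+1})$,
$$(1-\varepsilon)\crochet{\pi,g}\leq \liminf_{t\to\infty}\tilde \EE_{x}(g(\tilde X_{t})) \leq \limsup_{t\to\infty}\tilde \EE_{x}(g(\tilde X_{t}))\leq (1-\varepsilon)^{-1}\crochet{\pi,g},$$
and since $\varepsilon$ is arbitrary, the claimed uniform convergence on $K$ follows.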
We conclude the proof with an appeal to the many-to-one formula of Lemma \ref{L20} (ii), taking $g=f/h$.
\end{proof}
The next lemma is a cornerstone of the proof of Theorem \ref{T1}.
\begin{lemma}\label{L2}
We have:
\begin{enumerate}
\item There exists $a<\infty$ such that, for all $x>0$ and $t\geq 0$,
$$\tilde \EE_x[1/\ell(\tilde X_t)]\leq at + 1/\ell(x).$$
\item There exists some $\lambda'<\lambda$ such that, for all $x>0$,
$$\lim_{t\to \infty} \e^{-\lambda' t} \tilde X_t=0\quad \text{ in }
L^{\infty}(\tilde \PP_x).$$
\end{enumerate}
\end{lemma}
\begin{proof} (i) We apply Lemma \ref{L20}(iii)
to $g=1/\ell$, with
$$\tilde {\mathcal G}\left(\frac{1}{\ell}\right) (x)= (c(x)/x-\lambda)/\ell(x).$$
Our assumption \eqref{e:cc} ensures that the right-hand side above is negative for all $x$ outside some compact subset of $(0,\infty)$.
Taking $a=\sup_{x>0} \tilde {\mathcal G}\left(1/{\ell}\right)(x) <\infty$,
we deduce from Lemma \ref{L20}(iii) by optional sampling that
$\tilde \EE_x[1/\ell(\tilde X_t)]-a t\leq 1/\ell(x)$, which entails our claim.
(ii) Recall from the description of the spine dynamics preceding the statement of the lemma that $\tilde X$ increases continuously with velocity $c$ and has only negative jumps. As a consequence, $\tilde X$ is bounded from above by the solution to the flow velocity
\eqref{e:ode}. One readily deduces that $\lim_{t\to \infty} \e^{-\lambda' t}x(t)=0$ for every $\lambda' >\limsup_{x\to \infty} c(x)/x$, and since $\limsup_{x\to \infty} c(x)/x<\lambda$ according to our standing assumption \eqref{e:cc}, this establishes our claim.
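For the reader's convenience, here is the estimate in detail. Pick $\lambda''$ with $\limsup_{x\to\infty}c(x)/x<\lambda''<\lambda'$, and then $x_1>0$ such that $c(y)\leq \lambda'' y$ for all $y\geq x_1$. Gr\"onwall's lemma applied to the flow \eqref{e:ode} yields
$$x(t)\leq \max(x(0), x_1)\,\e^{\lambda'' t}\qquad \text{for all }t\geq 0,$$
and therefore $\e^{-\lambda' t}x(t)\leq \max(x(0),x_1)\e^{-(\lambda'-\lambda'')t}\to 0$ as $t\to\infty$.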
\end{proof}
We now have all the ingredients needed to prove Theorem \ref{T1}.
\begin{proof}[Proof of Theorem \ref{T1}]
Since the projection on ${\mathcal F}_t$ of $\tilde \PP_x$ is absolutely continuous with density $W_t/W_0$,
the process $W$ is bounded in $L^2(\PP_x)$ if and only if $
\sup_{t\geq 0} \tilde \EE_x [W_t] <\infty$. We already know from Lemma \ref{L2}(ii) that $\sup_{t\geq 0} \tilde \EE_x [\e^{-\lambda t} \tilde X_t] <\infty$,
and we are thus left with checking that
\begin{equation}\label{e:goal}
\sup_{t\geq 0} \tilde \EE_x [W'_t] <\infty,
\end{equation}
where
$$W'_t=W_t-\e^{-\lambda t} \tilde X_t.$$
In this direction, it is well-known and easily checked that
the law of $\Zz$ under the new probability measure $\tilde{\PP}_x$ can be constructed by the following procedure, known as the spine decomposition. After each fission event of the spine, all the daughters except the new spine
start independent growth-fragmentation processes following the genuine dynamics of $\Zz$ under $\PP$.
This spine decomposition
enables us to estimate the conditional expectation of $W'_t$ under $\tilde \PP_x$, given the spine and its sibling. At each time, say $s>0$, at which a fission occurs for the spine, we write
$\pp(s)=(p_1(s), \ldots)$ for the resulting mass partition, and $I(s)$ for the index of the daughter spine. Combining this with the fact that $(W_s: s\geq 0)$ is a $\PP_y$-martingale for all $y>0$ entails the identity
$$\tilde \EE_x\left[W'_t\mid (\tilde X_s, \pp(s), I(s))_{s\geq 0}\right] =
\sum_{s\in \tilde F, s\leq t} \e^{-\lambda s} \sum_{i\neq I(s)} h(\tilde X_{s-} p_i(s))\,,$$
where $\tilde F$ denotes the set of fission times of the spine. Note from \eqref{e:bornh}
that $\sum h(xp_i) \leq x \| \ell\|_{\infty}$ for every $x>0$ and every mass-partition ${\mathbf p}=(p_1, \ldots)$, so the right-hand side is bounded from above by
$$ \| \ell \|_{\infty} \sum_{s\in \tilde F, s\leq t} \e^{-\lambda s} \tilde X_{s-},$$
and to prove \eqref{e:goal}, we now only need to check that
$$\tilde \EE_x \left[ \sum_{s\in \tilde F} \e^{-\lambda s} \tilde X_{s-}\right] <\infty . $$
Recall from \eqref{e:ratespine} that $\tilde {B} = w{B}/h$ describes the fission rate of the spine, and observe from
\eqref{e:bornh} that $w(x)\leq \| \ell\|_{\infty} x$, so that
$$\tilde {B}(x) \leq \| \ell\|_{\infty} \| {B} \|_{\infty}
\frac{1}{\ell(x)}
\qquad \text{for all }x>0.$$
This entails that
$$\tilde \EE_x \left[ \sum_{s\in \tilde F} \e^{-\lambda s} \tilde X_{s-}\right] \leq
\| \ell\|_{\infty} \| {B} \|_{\infty}\tilde \EE_x\left[ \int_0^{\infty} \e^{-\lambda s} \frac{\tilde X_s}{\ell(\tilde X_s)}\dd s\right]. $$
We now see that the expectation on the right-hand side is indeed finite by writing first
$$\int_0^{\infty} \e^{-\lambda s} \frac{\tilde X_s}{\ell(\tilde X_s)}\dd s
= \int_0^{\infty} \frac{1}{\ell(\tilde X_s)} \cdot \e^{-\lambda' s}\tilde X_s
\cdot \e^{-(\lambda-\lambda')s} \, \dd s
$$
and then applying Lemma \ref{L2}.
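Specifically, writing $C=\sup_{s\geq 0}\|\e^{-\lambda' s}\tilde X_s\|_{L^{\infty}(\tilde\PP_x)}$, which is finite by Lemma \ref{L2}(ii), Lemma \ref{L2}(i) gives
$$\tilde\EE_x\left[\int_0^{\infty} \e^{-\lambda s} \frac{\tilde X_s}{\ell(\tilde X_s)}\dd s\right] \leq C\int_0^{\infty} \e^{-(\lambda-\lambda')s}\,\tilde\EE_x\left[\frac{1}{\ell(\tilde X_s)}\right] \dd s \leq C\int_0^{\infty} \e^{-(\lambda-\lambda')s}\left(as+\frac{1}{\ell(x)}\right) \dd s<\infty.$$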
\end{proof}
\section{Strong Malthusian behavior}
We assume again throughout this section that the assumption \eqref{e:cc} is fulfilled.
We will prove Theorem \ref{t:main}, namely that the strong Malthusian behavior \eqref{E:MalthusF} holds.
The proof relies on a couple of technical lemmas. Recall from Section 3 the notation $Z^u\colon [0, d^u)\to(0,\infty)$ for the ancestral trajectory of the individual $u$, and agree for the sake of simplicity that $f(Z^u_t)=0$ for every function $f$ whenever $t\geq d^u$.
The first lemma states a simple tightness result.
\begin{lemma} \label{L:B} For every $x>0$ and $\varepsilon >0$, there exists a compact $K\subset (0,\infty)$ such that for all $t\geq 0$:
$$ \e^{-\lambda t} \EE_{x}\left[\sum_{u\in\Uu: b^u\leq t < d^u } h(Z^u_t) \Indic{Z^u_t\not\in K}
\right]< \varepsilon.$$
\end{lemma}
\begin{proof} From the very definition of the spine $\tilde X$, there is the identity
$$ \e^{-\lambda t} \EE_{x}\left[ \sum_{u\in\Uu: b^u\leq t < d^u } h(Z^u_t) \Indic{Z^u_t\not\in K}
\right] = h(x) \tilde \PP_{x}\left[ \tilde X_t\not\in K\right].$$
Recall from Lemma \ref{L20}(i) that $\tilde X$ is positive recurrent; as a consequence the family of its one-dimensional marginals under $\tilde \PP_{x}$ is tight, which entails our claim.
\end{proof}
The second lemma reinforces the boundedness in $L^2$ of the intrinsic martingale, cf.\ Theorem \ref{T1}.
\begin{lemma} \label{L:C} For every compact set $K\subset(0,\infty)$, we have
$$ \sup_{x\in K} \sup_{t\geq 0} \EE_x\left[ W^2_t \right] < \infty.$$
\end{lemma}
\begin{proof} We may assume that $K=[b,b']$ is a bounded interval. For any $x\in(b,b']$, we write $s(x)$ for the time when the flow velocity \eqref{e:ode} started from $b$ reaches $x$. We work under
$\PP_{b}$ and consider the event $\Lambda_x$ that the Eve individual hits $x$ before a fission event occurs.
On the one hand, the law of $\Zz_{s(x)+t}$ conditionally on $\Lambda_x$ is the same as that of $\Zz_t$ under $\PP_x$.
In particular, the law of $W_t$ under $\PP_x$ is the same as that of $\e^{\lambda s(x)} W_{s(x)+t}$ under
$\PP_b[\cdot \mid \Lambda_x]$, and thus
$$\sup_{t\geq 0} \EE_x\left[ W^2_t \right] \leq \frac{\e^{\lambda s(x)}}{\PP_b[\Lambda_x]} \EE_b\left[ W^2_{\infty} \right].$$
On the other hand, for every $x\in(b,b']$, we have $s(x)\leq s(b')<\infty$ and
$$ \PP_b[\Lambda_x]\geq \PP_b[\Lambda_{b'}]=\exp\left(-\int_{b}^{b'} \frac{{B}(y)}{c(y)}\dd y\right)>0,$$
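For the record, the formula for $\PP_b[\Lambda_{b'}]$ comes from the fact that, along the deterministic flow $x(\cdot)$ started from $b$, fission occurs at time-dependent rate $B(x(t))$, so that by the change of variables $y=x(t)$, $\dd y=c(y)\dd t$,
$$\PP_b[\Lambda_{b'}]=\exp\left(-\int_0^{s(b')} B(x(t))\,\dd t\right)=\exp\left(-\int_b^{b'} \frac{B(y)}{c(y)}\,\dd y\right).$$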
and our claim is proven.
\end{proof}
We have now all the ingredients to prove Theorem \ref{t:main}.
\begin{proofthm}{Proof of Theorem \ref{t:main}}
We suppose that $0\leq f \leq h$, which of course induces no loss of generality. Our aim is to check that
$\e^{-\lambda(t+s)} \crochet{\Zz_{t+s},f} $ is arbitrarily close to $\crochet{\nu,f}W_t $ in $L^1(\PP_x)$ when $s$ and $t$ are sufficiently large. In this direction, recall that ${\mathcal F}_t$ denotes the sigma-field generated by $(\Zz_s: s\leq t)$, and use
the branching property at time $t$ to express the former quantity as
$$ \e^{-\lambda (t+s)}\crochet{\Zz_{t+s},f}=
\sum_{u\in\Uu: b^u\leq t < d^u}\e^{-\lambda t} h(Z^u_t) \cdot \frac{1}{h(Z^u_t)}\e^{-\lambda s}\crochet{\Zzu_{s},f},
$$
where conditionally on ${\mathcal F}_t$, the processes $\Zzu$ are independent versions of the growth-fragmentation $\Zz$ started from $Z^u_t$.
Fix $\varepsilon>0$.
To start with, we choose a compact subset $K\subset (0,\infty)$
as in Lemma \ref{L:B}, and restrict the sum above to individuals $u$ with $Z^u_t\not \in K$.
Observe first that, since $\crochet{\Zzu_s, f}\leq \crochet{\Zzu_s, h}$ and $h$ is $\lambda$-harmonic, taking the conditional expectation given ${\mathcal F}_t$ yields
\begin{multline*}
\EE_x\!\left[ \sum_{\substack{u\in\Uu, \\ b^u\leq t < d^u}}\e^{-\lambda t} h(Z^u_t) \Indic{Z^u_t\not \in K}\cdot \frac{1}{h(Z^u_t)}\e^{-\lambda s}\crochet{\Zzu_{s},f}\right]
\leq \EE_x\!\left[ \sum_{\substack{u\in\Uu, \\ b^u\leq t < d^u}}\e^{-\lambda t} h(Z^u_t) \Indic{Z^u_t\not \in K}\right].
\end{multline*}
From the very choice of $K$, there is the bound
\begin{equation}\label{E:P1}
\EE_x\left[ \sum_{u\in\Uu: b^u\leq t < d^u}\e^{-\lambda t} h(Z^u_t) \Indic{Z^u_t\not \in K}\cdot \frac{1}{h(Z^u_t)}\e^{-\lambda s}\crochet{\Zzu_{s},f}\right]\leq \varepsilon.
\end{equation}
Next, recall from Lemma \ref{L:C} that
$$C(K)\coloneqq
\sup_{y\in K} \sup_{s\geq 0} \EE_y\left[ W_s^2 \right] < \infty,
$$
and consider
$$A(u,t,s)=\frac{1}{h(Z^u_t)}\e^{-\lambda s}\crochet{\Zzu_{s},f} $$
together with its conditional expectation given ${\mathcal F}_t$
$$a(u,t,s)=\EE_x[A(u,t,s)\mid {\mathcal F}_t].$$
Again, since $0\leq f \leq h$,
for every $u$ with $Z^u_t\in K$, we have
$$\EE_x[(A(u,t,s)-a(u,t,s))^2\mid {\mathcal F}_t] \leq 4 C(K).
$$
Since conditionally on ${\mathcal F}_t$, the variables $A(u,t,s)-a(u,t,s)$ for $u\in{\mathcal U}$ are independent and centered, there is the identity
\begin{multline*}
\EE_x\left[ \left| \sum_{u\in\Uu: b^u\leq t < d^u}\e^{-\lambda t} h(Z^u_t) \Indic{Z^u_t \in K}\cdot (A(u,t,s)-a(u,t,s))\right |^2
\right]\\
= \EE_x\left[ \sum_{u\in\Uu: b^u\leq t < d^u}\e^{-2\lambda t} h^2(Z^u_t) \Indic{Z^u_t \in K}\cdot \EE_x\bigl[(A(u,t,s)-a(u,t,s))^2\bigm\vert {\mathcal F}_t\bigr]
\right],
\end{multline*}
and we deduce from above that this quantity is bounded from above by
$$4 C(K)\e^{-\lambda t} h(x) \max_{y\in K} h(y).$$
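Indeed, bounding $h^2(y)\Indic{y\in K}\leq h(y)\max_{z\in K}h(z)$ and recalling that $\e^{-\lambda t}\crochet{\Zz_t,h}$ has constant expectation $h(x)$ under $\PP_x$, we get
$$\EE_x\left[ \sum_{u\in\Uu: b^u\leq t < d^u}\e^{-2\lambda t} h^2(Z^u_t) \Indic{Z^u_t \in K}\right]\leq \e^{-\lambda t}\max_{z\in K}h(z)\, \EE_x\left[\e^{-\lambda t}\crochet{\Zz_t,h}\right]=\e^{-\lambda t}h(x)\max_{z\in K}h(z).$$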
This upper-bound tends to $0$ as $t\to \infty$, and it thus holds that
$$\EE_x\left[ \left| \sum_{u\in\Uu: b^u\leq t < d^u}\e^{-\lambda t} h(Z^u_t) \Indic{Z^u_t \in K}\cdot (A(u,t,s)-a(u,t,s))\right |
\right]< \varepsilon$$
for all $t$ sufficiently large.
On the other hand, writing $y=Z^u_t$, we have from the branching property
$$a(u,t,s)= \frac{1}{h(y)} \e^{-\lambda s} \EE_{y} \left[\crochet{\Zz_{s},f}\right],$$
and Corollary \ref{C1} entails that for all $s$ sufficiently large,
$|a(u,t,s)-\crochet{\nu,f}|\leq \varepsilon$ for all individuals $u$ with $Z^u_t\in K$. Using the bound \eqref{E:P1} with $h$ in place of $f$ for individuals $u$ with $Z^u_t\not \in K$ and
putting the pieces together, we have shown that for all $s, t$ sufficiently large,
$$\EE_x\left[ \left| \e^{-\lambda (t+s)}\crochet{\Zz_{t+s},f} - \crochet{\nu,f} W_t
\right | \right] \leq (2+h(x))\varepsilon,
$$
which completes the proof.
\end{proofthm}
\section{Explicit conditions for the strong Malthusian behavior}
The key condition for strong Malthusian behavior, \eqref{e:cc}, is given in terms of the Malthus exponent $\lambda$, which is not known explicitly in general. In this final section, we discuss explicit criteria in terms of the characteristics of $\Zz$ ensuring that $\lambda > 0$, so that condition \eqref{e:cc} then immediately follows from the simpler requirement
$$\lim_{x\to 0+}c(x)/x = \lim_{x\to \infty}c(x)/x =0.$$
In this direction, we recall first that Theorem 1 in \cite{DoumGab} already gives sufficient conditions for the strict positivity of the leading eigenvalue in the eigenproblem \eqref{E:eigenp}. More recently, it has been pointed out in Proposition 3.4(ii) of \cite{BeFK} that if the Markov process $X$ is recurrent, and the uninteresting case when $c(x)=ax$ is linear is excluded,
then $\lambda > \inf_{x>0}c(x)/x$.
(If $c(x) = ax$, then $\lambda=a$, $h(x)=x$, and one readily checks that the martingale $W$ is actually constant.)
It was further argued in Section 3.6 in \cite{BeFK} that sufficient conditions warranting recurrence for $X$ are easy to formulate.
For instance, it suffices that there exist some $q_{\infty}>0$ and $x_{\infty}>0$ such that
$$
q_{\infty} c(x)/x + \int_{(0,1)} (r^{q_{\infty}}-1)\bar k(x,\dd r) \leq 0 \qquad\text{ for all }x\geq x_{\infty},
$$
and also some $q_0>0$ and $x_0>0$ such that
$$-q_0 c(x)/x + \int_{(0,1)} (r^{-q_0}-1) \bar k (x,\dd r) \leq 0 \qquad\text{ for all }x\leq x_0.
$$
In this section, we shall show that a somewhat weaker condition actually suffices. For the sake of simplicity, we focus on the situation when fissions are binary (which puts $\Zz$ in the class of Markovian growth-fragmentations defined in \cite{BeGF}). However, it is straightforward to adapt the argument to the general case.
Assume that, for all $x>0$, $\kappa(x, \dd \pp)$ is supported by the set of binary mass partitions $\pp=(1-r,r, 0, \ldots)$ with $r\in(0,1/2]$. It is then more convenient to represent the fission kernel $\kappa$ by a probability kernel $\varrho(x,\dd r)$ on $(0,1/2]$, such that for all functionals $g\geq 0$ on $\Pp$,
$$\int_{\Pp} g(\pp) \kappa(x, \dd \pp) = \int_{(0,1/2]} g(1-r,r, 0, \ldots) \varrho(x,\dd r).$$
In particular, there is the identity
$$\int_{(0,1)} f(r) \bar k (x,\dd r)= B(x)\int_{(0,1/2]} ((1-r) f(1-r)+ r f(r)) \varrho(x,\dd r).$$
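For comparison, the identity above shows that, in the binary setting, the first condition recalled from \cite{BeFK} at the beginning of this section reads
$$q_{\infty} c(x)/x + B(x)\int_{(0,1/2]} \Bigl((1-r)\bigl((1-r)^{q_{\infty}}-1\bigr)+r\bigl(r^{q_{\infty}}-1\bigr)\Bigr) \varrho(x,\dd r) \leq 0.$$
Since $r^{q}\leq (1-r)^{q}$ for $r\in(0,1/2]$ and $q>0$, the integrand $r^{q_{\infty}}-1$ appearing below is bounded from above by the integrand in the display, so the condition of \cite{BeFK} implies that of Proposition \ref{P1}, which is thus indeed weaker.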
\begin{proposition}\label{P1} In the notation above, assume that
there exist $q_{\infty}, x_{\infty}>0$ such that
$$
q_{\infty} c(x)/x + B(x) \int_{(0,1/2]} (r^{q_{\infty}}-1) \varrho (x, \dd r) \leq 0 \qquad\text{ for all }x\geq x_{\infty},
$$
and $q_0, x_0>0$ such that
$$-q_0 c(x)/x + B(x) \int_{(0,1/2]} ((1-r)^{-q_0}-1) \varrho(x,\dd r) \leq 0 \qquad\text{ for all }x\leq x_0.
$$
Then, the Malthus exponent $\lambda$ is positive.
\end{proposition}
\begin{proof}
Let $a \in (x_0,x_\infty)$.
By the definition of the Malthus exponent in Section 4 and the right-continuity of the function $L_{a,a}$, we see that $\lambda>0$ if and only if $L_{a,a}(0)\in(1,\infty]$. We thus have to check that
$$\EE_a\left[\Ee_{H(a)}, H(a)<\infty\right]>1,$$
that is, thanks to the many-to-one formula of Lemma \ref{L1'}, that
$$\EE_a[\crochet{\Zz_{H(a)}, \mathbf{1}}]>1,$$
where $\mathbf{1}$ is the constant function with value $1$.
{\em A fortiori}, it suffices to check that $\crochet{\Zz_{H(a)}, \mathbf{1}}\geq 1$ $\PP_a$-a.s., and that this inequality is strict with positive $\PP_a$-probability. In words, we freeze individuals
at their first return time to $a$; it is easy to construct an event with positive probability on which there are two or more frozen individuals, so we only need to verify that we get at least one frozen individual $\PP_a$-a.s.
In this direction, we focus on a specific ancestral trajectory, say $X^*$, which is defined as follows. Recall that any trajectory is driven by the flow velocity \eqref{e:ode} between consecutive times of downward jumps, so we only need to explain how we select daughters at fission events. When a fission event occurs at a time $t$
with $X^*_{t-}=x^*>a$, producing two daughters, say with masses $rx^*$ and $(1-r)x^*$ for some $r\in(0,1/2]$, then we choose the smaller daughter, i.e. $X^*_t=rx^*$, whereas if $x^*<a$ then we choose the larger daughter, i.e. $X^*_t=(1-r)x^*$.
The process $X^*$ is then Markovian with infinitesimal generator
$${\mathcal G}^*f(x)=c(x) f'(x) +B(x) \int_{(0,1/2]} (\Indic{x>a} f(rx) + \Indic{x<a}f((1-r)x) -f(x))\varrho(x,\dd r).$$
We now outline a proof that $X^*$ is point-recurrent.
Since the process has only negative jumps,
it is sufficient to show that $X^*_t \to \infty$ and $X^*_t \to 0$,
as $t\to\infty$, are both impossible.
For the former, consider starting the process at $x>x_\infty$
and killing it upon passage below $x_\infty$. Denote this process by
$X^\circledast$ and its generator by
$${\mathcal G}^\circledast f(x)=c(x) f'(x) +B(x) \int_{(0,1/2]} (\Indic{rx > x_\infty} f(rx) -f(x))\varrho(x,\dd r).$$
(The dependence on $a$ in the integral
vanishes since $x_\infty>a$.)
Now, let $V(x) = x^{q_\infty}$, for $x\ge x_\infty$.
The conditions in the statement imply that
$\mathcal{G}^\circledast V \le 0$, so
$V(X^\circledast)$ is a supermartingale. This ensures that
$X^\circledast$ cannot converge to $+\infty$, and hence
neither can $X^*$ itself.
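To spell out the supermartingale claim, note that for $x\geq x_\infty$, bounding the indicator $\Indic{rx > x_\infty}$ by $1$ gives
\begin{align*}
{\mathcal G}^\circledast V(x) &\leq q_\infty c(x) x^{q_\infty-1} + B(x) \int_{(0,1/2]} (r^{q_\infty}-1) x^{q_\infty} \varrho(x,\dd r)\\
&= x^{q_\infty} \Big( q_\infty c(x)/x + B(x) \int_{(0,1/2]} (r^{q_\infty}-1) \varrho(x,\dd r)\Big) \leq 0,
\end{align*}
where the final inequality is precisely the first condition in the statement.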
To show $X^*$ cannot converge to $0$, we start it at $x<x_0$
and kill it upon passing above $x_0$, and follow the same
argument with $V(x) = x^{-q_0}$.
To conclude, we have shown
that $X^*$ is point-recurrent, and therefore $\PP_a$-almost surely hits $a$.
This shows that
$\PP_a[\crochet{\Zz_{H(a)}, \mathbf{1}}\geq 1]=1$, and completes the proof.
\end{proof}
We remark that a similar argument was carried out in Section 3.6 of \cite{BeFK},
using instead the Markov process $X$. The Markov process $X$
can be selected from the process $\mathbf{Z}$ by making a size-biased
pick from the offspring at each branching event; that is, from offspring
of sizes $rx$ and $(1-r)x$, following the former with probability $r$
and the latter with probability $1-r$. On the other hand, in the
process $X^*$ in the proof above, we pick from the offspring more carefully
in order to follow a line of descent which is more likely to stay close
to the point $a$. This accounts for the improvement in conditions
between \cite{BeFK} and this work.
\section{Introduction}
\label{sec-intro}
\emph{Bisimulation equivalence} plays a central role among the many
notions of semantic equivalence studied in verification and
concurrency theory~\citep{vanglabbeek01}. Indeed, two bisimilar
processes always satisfy exactly the same specifications written in
modal logics~\citep{vanbenthem75} or in the modal
$\mu$-calculus~\citep{janin96}, allowing one to replace for instance a
naive implementation with a highly optimised one without breaking the
conformance. As a toy example, the two recursive Erlang functions
below implement the same stateful message relaying service, that
either receives \lstinline[language=erlang]!{upd, M1}! and updates its
internal message from~M to~M1, or receives
\lstinline[language=erlang]!{rel,C}! and sends the message~M to the
client~C.
\ifams\hspace*{10pt}\fi\begin{minipage}[b]{\linewidth}\begin{lstlisting}[language=erlang,basicstyle=\small,literate=
{->}{$\rightarrow{}$}{1}{,,}{\hspace{7.3pt}}{1},columns=flexible,numbers=left,firstnumber=1,numberstyle=\tiny,xleftmargin=-10pt,numbersep=2pt]
serverA(M) -> serverB(M) ->
receive M2 = receive
{upd, M1} -> serverA(M1); {upd, M1} -> M1;
{rel, C } -> C!M, {rel, C } -> C!M, M;
serverA(M); end,
end. ,,serverB(M2).
\end{lstlisting}\end{minipage}
The two programs are weakly bisimilar if we only observe
the input (\lstinline[language=erlang]{receive}) and output
(\lstinline[language=erlang]{C!M}) actions, but the one on the left is
not tail-recursive and might perform poorly compared to the one on the
right.
In a landmark \citeyear{senizergues98} paper,
\citet{senizergues98,senizergues05} proved the decidability of
bisimulation equivalence for rooted equational graphs of finite
out-degree. The proof extends his previous seminal
result~\citep{senizergues97,senizergues01}, which is the decidability
of language equivalence for deterministic pushdown automata (DPDA),
and entails that weak bisimilarity of pushdown processes where silent
actions are deterministic is decidable; a silent action
(also called an $\varepsilon$-step) is deterministic if it has
no alternative when enabled.
Because the control flow of a first-order recursive program is readily
modelled by a pushdown process, one can view this result as showing
that the equivalence of recursive programs (like the two Erlang
functions above) is decidable as far as their observable behaviours
are concerned, provided silent moves are deterministic.
Regarding decidability, \citeauthor{senizergues98}' result is
optimal in the sense that bisimilarity becomes undecidable if
we consider either
nondeterministic (popping) $\varepsilon$-steps~\citep{jancar08},
or second-order pushdown processes with no
$\varepsilon$-steps~\citep{broadbent12}. Note that the decidability
border was also refined in~\citep{yin14} by considering
branching bisimilarity, a stronger version of weak bisimilarity.
\paragraph{Computational Complexity}
While this delineates the decidability border for equivalences of
pushdown processes, the computational complexity of the bisimilarity
problem is open. \Citeauthor{senizergues98}' algorithm consists in
two semi-decision procedures, with no clear means of bounding its
complexity, and subsequent works like~\citep{jancar14} have
so far not proven easier to analyse. We know however that this
complexity must be considerable, as the problem is \ComplexityFont{TOWER}-hard in the
real-time case (i.e., without silent actions, hence for
\emph{strong} bisimilarity)~\citep{benedikt13} and
\ComplexityFont{ACKERMANN}-hard in the general case (with deterministic silent actions)~\citep{jancarhard}---we are employing
here the `fast-growing' complexity classes defined
in~\citep{schmitz16}, where $\ComplexityFont{TOWER}=\F 3$ is the
lowest non elementary class and $\ComplexityFont{ACKERMANN}=\F\omega$ the
lowest non primitive-recursive one.
In fact, the precise complexity of deciding equivalences for pushdown
automata and their restrictions is often not known---as is commonplace
with infinite-state processes~\citep{srba04}. For instance,
language equivalence of deterministic pushdown automata is
\P-hard and was shown to be in \ComplexityFont{TOWER}\ by
\citet{stirling02} (see~\citep{jancarhard} for an explicit upper
bound), and
bisimilarity of BPAs (i.e., real-time pushdown
processes with a single state) is
\ComplexityFont{EXPTIME}-hard~\citep{kiefer13} and in
\ComplexityFont{2EXPTIME}~\citep{burkart95} (see~\citep{jancar13} for an
explicit proof).
There are also a few known completeness results in restricted cases:
bisimilarity of
normed BPAs is \P-complete~\citep{hirshfeld96}
(see~\citep{czerwinski10} for the best known upper bound),
bisimilarity of real-time one-counter
processes (i.e., of pushdown processes with a singleton stack
alphabet) is \PSPACE-complete~\citep{bohm14}, and bisimilarity of
visibly pushdown processes is
\ComplexityFont{EXPTIME}-complete~\citep{srba09}.
\paragraph{Contributions}
In this paper, we prove that the bisimilarity problem for pushdown
processes
is in $\ComplexityFont{ACKERMANN}$,
even the weak bisimilarity problem when
silent actions are deterministic.
Combined with the
already mentioned lower bound
from~\citep{jancarhard},
this shows
the problem to be \ComplexityFont{ACKERMANN}-complete.
This is the first instance of a
complexity completeness result in the line of research originating from
\citeauthor{senizergues97}'
work~\citep{senizergues97,senizergues98,senizergues01,senizergues05};
see \cref{tab-cmplx}.
\begin{table}[tbp]
\begin{threeparttable}
\caption{\ifams The complexity of equivalence problems over pushdown
processes.\else The Complexity of Equivalence Problems over Pushdown Processes\fi}
\label{tab-cmplx}
\centering
\begin{tabular}{lcc}
\toprule
Problem & Lower bound & Upper bound \\
\midrule
DPDA lang.\ equ.\!\!\!\! & \P & \ComplexityFont{TOWER}~\citep{stirling02,jancarhard} \\
strong bisim.\ & \!\!\ComplexityFont{TOWER}~\citep{benedikt13}\!\! &\!\! \ComplexityFont{ACKERMANN}~[this paper] \\
weak bisim.\tnote{$a$} & \!\!\ComplexityFont{ACKERMANN}~\citep{jancarhard}\!\! &\!\! \ComplexityFont{ACKERMANN}~[this paper] \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$a$] silent actions must be deterministic
\end{tablenotes}
\end{threeparttable}
\end{table}
Rather than working with rooted equational graphs of finite out-degree
or with pushdown processes with deterministic silent actions, our
proof is cast in the formalism of
\emph{first-order grammars} (see \cref{sec-fog}), which are term
rewriting systems with a head rewriting semantics, and are known to
generate the same class of graphs~\cite{caucal95}.
Our proof heavily relies on the main novelty from~\citep{jancar14}:
the bisimilarity of two arbitrary terms according to a first-order
grammar essentially hinges on a finite
\emph{basis}
of pairs of \emph{non-equivalent terms},
which can be constructed from the grammar independently of the terms
provided as input. The basis provides a number that allows us to
compute a bound on the `equivalence-level' of two non-equivalent
terms; this is the substance of the decision procedure
(see \cref{sec-bisim}). Both in~\citep{jancar14} and in its reworked
version in~\citep{jancar18},
such a
basis is obtained through a brute force argument, which yields no
complexity statement. In \cref{sec-algo} we exhibit a concrete
algorithm computing the
basis, and we analyse its complexity
in the framework of~\citep{schmitz14,schmitz16,schmitz17}
in \cref{sec-upb}, yielding the \ComplexityFont{ACKERMANN}\ upper bound.
Finally, although our results do not match the \ComplexityFont{TOWER}\ lower bound
of~\citet{benedikt13} in the case of real-time pushdown processes, we
nevertheless show in \cref{sec-pda} that bisimilarity becomes
primitive-recursive in that case if additionally the number of
control
states
of the pushdown processes is fixed.
\section{First-Order Grammars}
\label{sec-fog}
\emph{First-order grammars} are labelled term rewriting systems with a
head rewriting semantics. They are a natural model of first-order
functional programs with a call-by-name semantics, and were shown to
generate the class of rooted equational graphs of finite out-degree by
\citet{caucal92,caucal95}, where they are called \emph{term
context-free grammars}. Here we shall use the terminology and
notations from~\citep{jancar18}.
\subsection{Regular Terms}
Let $\?N$ be a finite ranked alphabet, i.e., where each symbol $A$
in~$\?N$ comes with an arity $\ar(A)$ in~$\+N\eqby{def}\{0,1,2,\dots\}$,
and
$\textsc{Var}\eqby{def}\{x_1,x_2,\dots\}$
a countable set of variables, all with
arity~zero. We work with possibly infinite \emph{regular terms}
over~$\?N$ and~$\textsc{Var}$, i.e., terms with finitely many distinct subterms.
Let $\opfont{Terms}_{\N}$ denote the set of all regular terms over~$\?N$ and~$\textsc{Var}$.
We further use $A,B,C,D$ for nonterminals, and $E,F$ for
terms, possibly primed and/or with subscripts.
\paragraph{Representations}
Such terms can be represented by finite directed graphs as shown
in \cref{fig-terms}, where each node has a label in $\?N\cup\textsc{Var}$ and a
number of ordered outgoing arcs equal to its arity. The unfolding of
the graph representation is the desired term, and there is a bijection
between the nodes of the \emph{least} graph representation of a
term~$E$ and the set of subterms of~$E$.
\begin{figure}[tbp]
\centering\vspace*{-.1cm}
\begin{tikzpicture}[auto,on grid]
%
\node[square](1) {$A$};
\node[square,below left =.75 and .7 of 1](2){$D$};
\node[square,below =.75 of 1](3){$x_5$};
\node[square,below right=.75 and .7 of 1](4){$B$};
\node[square,below left =.75 and .35 of 2](5){$x_5$};
\node[square,below right=.75 and .35 of 2](6){$C$};
\node[square,below left =.75 and .35 of 6](7){$x_2$};
\node[square,below right=.75 and .35 of 6](8){$B$};
\path[->,every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(1) edge[swap] node {1} (2)
(1) edge node {2} (3)
(1) edge node {3} (4)
(2) edge[swap] node {1} (5)
(2) edge node {2} (6)
(6) edge[swap] node {1} (7)
(6) edge node {2} (8);
\node[above left=.7 and .7 of 1](r){$\rt{E_1}$};
\path[color=black!70]
(r) edge (1);
%
\node[square,right=3 of 1](11) {$A$};
\node[square,below left =.75 and .7 of 11](12){$D$};
\node[square,below =.75 of 11](13){$x_5$};
\node[square,below right=.75 and .7 of 11](14){$B$};
\node[square,below left =.75 and .35 of 12](15){$x_5$};
\node[square,below right=.75 and .35 of 12](16){$C$};
\node[square,below right=.75 and .35 of 16](18){$B$};
\path[->,every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(11) edge[swap] node {1} (12)
(11) edge node {2} (13)
(11) edge node {3} (14)
(12) edge[swap] node {1} (15)
(12) edge node {2} (16)
(16) edge node {2} (18);
\node[above left=.7 and .7 of 11](1r){$\rt{E_2}$};
\path[color=black!70]
(1r) edge (11);
\draw[->,>=stealth',shorten >=1pt,thin]
(16.south west) .. controls (1.5,-4.1) and (1.5,1) .. (1.east);
\node[font=\tiny,color=black!70,below left=.25 and .35 of 16]{1};
%
\node[square,right=3 of 11](21) {$A$};
\node[square,below left =.75 and .7 of 21](22){$D$};
\node[square,below =.75 of 21](23){$x_5$};
\node[square,below right=.75 and .7 of 21](24){$B$};
\node[square,below left =.75 and .35 of 22](25){$x_5$};
\node[square,below right=.75 and .35 of 22](26){$C$};
\node[square,below right=.75 and .35 of 26](28){$B$};
\path[->,every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(21) edge[swap] node {1} (22)
(21) edge node {2} (23)
(21) edge node {3} (24)
(22) edge[swap] node {1} (25)
(22) edge node {2} (26)
(26) edge node {2} (28);
\node[above left=.7 and .7 of 21](2r){$\rt{E_3}$};
\path[color=black!70]
(2r) edge (21);
\draw[->,>=stealth',shorten >=1pt,thin]
(26.south west) .. controls (4,-4) and (4,0) .. (21.west);
\node[font=\tiny,color=black!70,below left=.25 and .35 of 26]{1};
\end{tikzpicture}\vspace*{-1.5cm}
\caption{Graph representations of two finite terms $E_1$ and $E_2$,
and of an infinite regular term~$E_3$.}
\label{fig-terms}
\end{figure}
\paragraph{Size and Height}
We define the \emph{size} $\size E$ of a term~$E$ as its number of
distinct subterms. For instance, $\size{E_1}=6$, $\size{E_2}=9$, and
$\size{E_3}=5$ in
\cref{fig-terms}. For two terms $E$ and~$F$, we also denote by $\size{E,F}$
the number of distinct subterms of $E$ and~$F$; note that $\size{E,F}$
can be smaller than $\size E+\size F$, as they might share some
subterms. For instance, $\size{E_1,E_2}=9$ in \cref{fig-terms}. We
let $\ntsize E$ denote the number of distinct subterms of~$E$ with
root labels in~$\?N$; e.g., $\ntsize{E_1}=4$ in \cref{fig-terms}.
A term~$E$ is thus \emph{finite} if and only if
its graph representation is
acyclic, in which case it has a \emph{height} $\height E$,
which is
the maximal length of a path from the root to a leaf; for instance
$\height{E_1}=3$ in \cref{fig-terms}. Finally, we let $\vars E$
denote the set of variables occurring in~$E$, and let
$\vars{E,F}\eqby{def}\vars E\cup\vars F$; e.g.,
$\vars{E_1,E_2}=\{x_2,x_5\}$ in \cref{fig-terms}.
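As an illustration (not part of the formal development), the counting of distinct subterms can be made concrete in a short Python sketch, where a finite term is encoded as a nested tuple $(\textit{label}, \textit{child}_1,\dots,\textit{child}_k)$ and variables are plain strings; this encoding is ours, chosen only for the example.

```python
def subterms(term, seen=None):
    """Collect the distinct subterms of a finite term.

    A term is a variable name such as 'x5', or a tuple
    (label, child1, ..., childk) for a symbol of arity k.
    """
    if seen is None:
        seen = set()
    if term not in seen:
        seen.add(term)
        if isinstance(term, tuple):
            for child in term[1:]:
                subterms(child, seen)
    return seen

def size(term):
    """|E|: the number of distinct subterms of E."""
    return len(subterms(term))

def height(term):
    """Maximal length of a root-to-leaf path in a finite term."""
    if not isinstance(term, tuple) or len(term) == 1:
        return 0
    return 1 + max(height(child) for child in term[1:])

# E1 = A(D(x5, C(x2, B)), x5, B) and E2 = E1 with x2 replaced by E1,
# as in the figure above
B = ('B',)
E1 = ('A', ('D', 'x5', ('C', 'x2', B)), 'x5', B)
E2 = ('A', ('D', 'x5', ('C', E1, B)), 'x5', B)

print(size(E1), height(E1))              # 6 3
print(size(E2))                          # 9
print(len(subterms(E1) | subterms(E2)))  # joint size |E1, E2| = 9
```

The shared subterms of $E_1$ and $E_2$ are counted once, which is why the joint size equals $\size{E_2}$ here.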
\subsection{Substitutions}
A \emph{substitution}~$\sigma$ is a map $\textsc{Var}\to\opfont{Terms}_{\N}$
whose \emph{support}
$\dom\sigma\eqby{def}\{x\in\textsc{Var}\mid \sigma(x)\neq x\}$
is finite. This map is lifted to act
over terms by
\begin{align*}
x\sigma&\eqby{def}\sigma(x)\;,&
A(E_1,\dots,E_{\ar(A)})\sigma&\eqby{def}
A(E_1\sigma,\dots,E_{\ar(A)}\sigma)
\end{align*}
for all $x$ in~$\textsc{Var}$, $A$ in~$\?N$, and
$E_1,\dots,E_{\ar(A)}$ in~$\opfont{Terms}_{\N}$. For instance, in \cref{fig-terms},
$E_2=E_1\sigma$ if
$\sigma(x_2)= E_1$ and $\sigma(x_5)= x_5$.
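A minimal Python sketch of this lifting (illustrative only; the tuple encoding of terms and all names are ours): a substitution is a dict with finite support, applied homomorphically below each symbol.

```python
def apply_subst(term, sigma):
    """Apply a substitution sigma (dict: variable -> term) to a term.

    Terms are nested tuples (label, child1, ..., childk); variables
    are strings.  Variables outside the support are left fixed.
    """
    if isinstance(term, tuple):  # A(E1, ..., Ek) -> A(E1 sigma, ..., Ek sigma)
        return (term[0],) + tuple(apply_subst(c, sigma) for c in term[1:])
    return sigma.get(term, term)

# E2 = E1 sigma with sigma(x2) = E1 and sigma(x5) = x5, as in the example
B = ('B',)
E1 = ('A', ('D', 'x5', ('C', 'x2', B)), 'x5', B)
E2 = apply_subst(E1, {'x2': E1})
print(E2 == ('A', ('D', 'x5', ('C', E1, B)), 'x5', B))  # True
```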
\subsection{Grammars}
A \emph{first-order grammar} is a tuple $\?G=\tup{\?N,\Sigma,\?R}$
where $\?N$ is a finite ranked alphabet of nonterminals, $\Sigma$ a
finite alphabet of actions, and $\?R$ a finite set of labelled term
rewriting rules of the form
$A(x_1,\dots,x_{\ar(A)})\step a E$
where
$A\in\?N$, $a\in\Sigma$, and $E$ is a finite term in
$\opfont{Terms}_{\N}$ with $\vars{E}\subseteq\{x_1,\dots,x_{\ar(A)}\}$.
\paragraph{Head Rewriting Semantics}
A first-order grammar $\?G=\tup{\?N,\Sigma,\?R}$ defines an infinite
\emph{labelled transition system}
\begin{equation*}
\?L_\?G\eqby{def}\tup{\opfont{Terms}_{\N},\Sigma,({\step a})_{a\in\Sigma}}
\end{equation*}
over $\opfont{Terms}_{\N}$ as set of
states, $\Sigma$ as set of actions, and with a transition
relation
${\step a}\subseteq\opfont{Terms}_{\N}\times\opfont{Terms}_{\N}$ for each $a\in\Sigma$,
where each rule $A(x_1,\dots,x_{\ar(A)})\step a E$ of~$\?R$ induces a
transition
\begin{equation*}
A(x_1,\dots,x_{\ar(A)})\sigma\step a E\sigma
\end{equation*}
for every
substitution~$\sigma$. This means that rewriting steps can only occur
at the root of a term, rather than inside a context. For instance,
the rules $A(x_1,x_2,x_3)\step a C(x_2,D(x_2,x_1))$ and
$A(x_1,x_2,x_3)\step b x_2$ give rise on the terms of
\cref{fig-terms} to the transitions
$E_1\step a C(x_5,D(x_5,D(x_5,C(x_2,B))))$ and $E_1\step b x_5$.
The transition relations $\step{a}$ are extended to $\step{w}$ for
words $w\in\Sigma^\ast$ in the standard way.
Note that variables $x\in\textsc{Var}$ are `dead', in that no
transitions can be fired from a variable.
In fact, as discussed in \cref{subsec:eqlevels}, for
technical reasons we
could formally assume that each variable~$x$ has its own action~$a_x$
and a transition $x\step{a_x}x$.
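To illustrate the head rewriting semantics (a sketch in our own toy encoding, not the paper's formalism), a rule $A(x_1,\dots,x_k)\step{a}E$ fires on any term with root~$A$ by matching the arguments against $x_1,\dots,x_k$:

```python
def apply_subst(term, sigma):
    """Apply a substitution (dict: variable -> term) homomorphically."""
    if isinstance(term, tuple):
        return (term[0],) + tuple(apply_subst(c, sigma) for c in term[1:])
    return sigma.get(term, term)

def head_steps(term, rules):
    """Enumerate the transitions enabled at the root of a term.

    rules: list of (head, arity, action, rhs); a rule fires on
    head(E1, ..., Ek) with sigma(xi) = Ei, yielding rhs sigma.
    Variables are 'dead': no transitions can be fired from them.
    """
    if not isinstance(term, tuple):
        return
    head, args = term[0], term[1:]
    for (h, k, action, rhs) in rules:
        if h == head and k == len(args):
            sigma = {'x%d' % (i + 1): args[i] for i in range(k)}
            yield action, apply_subst(rhs, sigma)

# the two example rules for A given in the text
rules = [('A', 3, 'a', ('C', 'x2', ('D', 'x2', 'x1'))),
         ('A', 3, 'b', 'x2')]
B = ('B',)
E1 = ('A', ('D', 'x5', ('C', 'x2', B)), 'x5', B)
for action, succ in head_steps(E1, rules):
    print(action, succ)
# a-step to C(x5, D(x5, D(x5, C(x2, B)))) and b-step to x5, as in the text
```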
\paragraph{Grammatical Constants} Let us fix a first-order grammar
$\?G=\tup{\?N,\Sigma,\?R}$. We define its size as
\begin{align}\label{eq-gsize}
|\?G|&\eqby{def}\sum_{A(x_1,\dots,x_{\ar(A)})\step a E\,\,\in\?R}\ar(A)+1+\size
E\;.
\intertext{%
Let $\opfont{rhs}$ be the set of terms appearing on the right-hand sides
of~$\?R$ (which are finite terms by definition). We let}
m&\eqby{def}\max_{A\in\?N}\ar(A)\;,\label{eq-m}\\
\opfont{hinc}&\eqby{def}\max_{E\in\opfont{rhs}}\height E-1\;,\label{eq-hinc}\\
\opfont{sinc}&\eqby{def}\max_{E\in\opfont{rhs}}\ntsize E\label{eq-sinc}
\end{align}
bound respectively the maximal arity of its nonterminals, its maximal
height increase in one transition step, and its maximal size increase in one
transition step.
If $A(x_1,\dots,x_{\ar(A)})\step{w}x_i$ in~$\?L_\?G$ for some $i$
in~$\{1,\dots,\ar(A)\}$ and $w$ in~$\Sigma^\ast$, then we call~$w$
an \emph{$(A,i)$-sink word}.
Observe that $w\neq\varepsilon$, hence $w=aw'$ with
$A(x_1,\dots,x_{\ar(A)})\step{a}E$ in~$\?R$ and $E\step{w'}x_i$, where
either $w'=\varepsilon$ and $E=x_i$ or $E$ `sinks' to~$x_i$ when
applying~$w'$. Thus, for each
$A\in\?N$ and $i\in\{1,\dots,\ar(A)\}$
we can compute some shortest $(A,i)$-sink
word $w_{[A,i]}$ by dynamic programming;
in the cases where no $(A,i)$-sink
word exists, we formally put $w_{[A,i]}\eqby{def}\varepsilon$.
In turn, this
entails that the maximal length of shortest sink words satisfies
\begin{equation}\label{eq-d0}
d_0\eqby{def} 1+\!\!\max_{A\in\?N,1\leq i\leq\ar(A)}\!|w_{[A,i]}|\leq
1+(2+\opfont{hinc})^{|\?N|m}\;;
\end{equation}
here and in later instances, we let $\max\emptyset\eqby{def} 0$.
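The dynamic programming computation can be sketched as a least-fixpoint relaxation (an illustration in our own encoding; only the lengths $|w_{[A,i]}|$ are computed here, and, deviating from the convention above, a missing sink word is reported as infinite rather than as $\varepsilon$):

```python
def sink_word_lengths(rules):
    """Lengths |w_[A,i]| of shortest (A,i)-sink words, as a least fixpoint.

    rules: list of (head, arity, action, rhs), with rhs a finite term
    encoded as nested tuples over variables 'x1', 'x2', ...
    Returns a dict (A, i) -> length, float('inf') if no sink word exists.
    """
    INF = float('inf')
    L = {(A, i): INF for (A, ar, _, _) in rules for i in range(1, ar + 1)}

    def dist(term, x):
        """Shortest |w| with term -w-> x, under the current estimates L."""
        if term == x:
            return 0
        if not isinstance(term, tuple) or len(term) == 1:
            return INF
        head, args = term[0], term[1:]
        return min((L.get((head, j + 1), INF) + dist(arg, x)
                    for j, arg in enumerate(args)), default=INF)

    changed = True
    while changed:  # Bellman-Ford style relaxation until stable
        changed = False
        for (A, ar, _action, rhs) in rules:
            for i in range(1, ar + 1):
                d = 1 + dist(rhs, 'x%d' % i)  # one step for the rule itself
                if d < L[(A, i)]:
                    L[(A, i)] = d
                    changed = True
    return L

# a small hypothetical grammar: A sinks to x1 in three steps (a, c, d)
rules = [('A', 3, 'a', ('C', 'x2', ('D', 'x2', 'x1'))),
         ('A', 3, 'b', 'x2'),
         ('C', 2, 'c', 'x2'),
         ('D', 2, 'd', 'x2')]
L = sink_word_lengths(rules)
print(L[('A', 1)], L[('A', 2)], L[('A', 3)])  # 3 1 inf
```

Recovering the words themselves amounts to storing, alongside each length, the rule and argument choices realising the minimum.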
Finally, the following grammatical constant $n$ from~\citep{jancar18}
is important for us:
\begin{equation}\label{eq-n}
n\eqby{def} m^{d_0}\;;
\end{equation}
note that~$n$ is at most doubly exponential in the size of~$\?G$.
This $n$ was chosen in~\citep{jancar18} so that each $E$
can be written as $E'\sigma$ where
$\height{E'}\leq d_0$ and $\vars{E'}\subseteq\{x_1,\dots,x_n\}$, and it
is guaranteed that each path $E\step{w}F$ where $|w|\leq d_0$
can be presented as
$E'\sigma\step{w}F'\sigma$ where $E'\step{w}F'$.
Put simply: $n$ bounds the number of depth-$d_0$ subterms for each term
$E$.
\section{Bisimulation Equivalence}
\label{sec-bisim}
Bisimulation
equivalence has been introduced independently in the
study of modal logics~\citep{vanbenthem75} and in that of concurrent
processes \citep{milner80,park81}. We recall here the basic notions
surrounding bisimilarity before we introduce the key notion
of \emph{candidate bases} as defined in~\citep{jancar18}.
\subsection{Equivalence Levels}\label{subsec:eqlevels}
Consider a labelled transition system
\begin{equation*}
\?L=\tup{\?S,\Sigma,({\step a})_{a\in\Sigma}}
\end{equation*}
like the one defined
by a first-order grammar, with set of states~$\?S$, set of
actions~$\Sigma$, and a transition relation
${\step a}\subseteq\?S\times\?S$ for each $a$ in~$\Sigma$. We work in
this paper with \emph{image-finite} labelled transition systems, where
$\{s'\in\?S\mid s\step a s'\}$ is finite for every~$s$ in~$\?S$ and
$a$ in~$\Sigma$. In this setting, the coarsest (strong)
\emph{bisimulation}~$\sim$ can be defined through a chain
\begin{equation*}
{\sim_0}\supseteq{\sim_1}\supseteq\cdots\supseteq{\sim}
\end{equation*}
of
equivalence relations over $\?S\times\?S$: let
${\sim_0}\eqby{def}\?S\times\?S$ and for each~$k$ in~$\+N$, let
$s\sim_{k+1} t$ if $s\sim_k t$ and
\begin{description}[\IEEEsetlabelwidth{(zag)}]
\item[(zig)] if $s\step a s'$ for some $a\in\Sigma$,
then there exists $t'$ such that $t\step a t'$ and $s'\sim_k t'$, and
\item[(zag)] if $t\step a t'$ for some $a\in\Sigma$,
then there exists $s'$ such that $s\step a s'$ and $s'\sim_k t'$.
\end{description}
We put
$\sim_\omega\eqby{def}\bigcap_{k\in\+N}{\sim_k}$;
hence
${\sim}={\sim_\omega}$.
For each pair $s,t$ of states in~$\?S$, we
may then define its \emph{equivalence level} $\el{s,t}$
in~$\omega+1=\+N\uplus\{\omega\}$
as
\begin{equation}\label{eq-el}
\el{s,t}\eqby{def}\sup\{k\in\+N\mid s\sim_k t\}\;.
\end{equation}Here we should add that---to be
consistent with~\citep{jancar18}---we stipulate that $\el{x,E}=0$ when
$E\neq x$; in particular $\el{x_i,x_j}=0$ when $i\neq j$. This would
automatically hold if we equipped each $x\in\textsc{Var}$ with a special
transition $x\step{a_x}x$ in $\?L_\?G$, as we already mentioned. This
stipulation guarantees that $\el{E,F}\leq\el{E\sigma,F\sigma}$.
Two states $s,t$ are (strongly) \emph{bisimilar} if $s\sim t$, which
holds if and only if $\el{s,t}=\omega$.
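On a finite image-finite system the chain $\sim_0\supseteq\sim_1\supseteq\cdots$ can be computed directly; the following Python sketch (our own toy encoding) refines the full relation until it stabilises and records the equivalence levels along the way:

```python
def equivalence_levels(states, trans, cap):
    """Compute el(s, t) on a finite LTS, with levels capped at `cap`.

    trans: dict state -> set of (action, successor) pairs.
    Once the chain ~_k stabilises, the surviving pairs are bisimilar;
    for them the returned value only means el >= the level reached.
    """
    rel = {(s, t) for s in states for t in states}  # ~_0 = S x S
    el = {p: 0 for p in rel}
    for k in range(cap):
        def zig(s, t):
            # every move of s is matched by a move of t into rel (~_k)
            return all(any(b == a and (s2, t2) in rel for (b, t2) in trans[t])
                       for (a, s2) in trans[s])
        new = {(s, t) for (s, t) in rel if zig(s, t) and zig(t, s)}  # ~_{k+1}
        for p in new:
            el[p] = k + 1
        if new == rel:  # chain stabilised: rel is the bisimilarity ~
            break
        rel = new
    return el

# s can move to a deadlocked s2, which t cannot mimic beyond one round
states = ['s', 's1', 's2', 't', 't1']
trans = {'s': {('a', 's1'), ('a', 's2')}, 's1': {('b', 's1')},
         's2': set(), 't': {('a', 't1')}, 't1': {('b', 't1')}}
el = equivalence_levels(states, trans, cap=10)
print(el[('s', 't')])  # 1: s ~_1 t holds, but s ~_2 t fails
```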
We will later show an algorithm computing the equivalence level
of two given terms in the labelled transition
system defined by a given first-order grammar.
The main decision problem in which we are interested
is the following.
\begin{problem}[Bisimulation]
\hfill\\[-1.5em]\begin{description}[\IEEEsetlabelwidth{question}]
\item[input] A first-order grammar $\?G=\tup{\?N,\Sigma,\?R}$ and two
terms $E,F$ in~$\opfont{Terms}_{\N}$.
\item[question] Is $\el{E,F}=\omega$ in the labelled transition
system~$\?L_\?G$?
\end{description}\end{problem}
\subsection{Bisimulation Game}\label{sub-game}
Observe that the following variant of the bisimulation problem is decidable.
\begin{problem}[Bounded Equivalence Level]
\hfill\\[-1.5em]\begin{description}[\IEEEsetlabelwidth{question}]
\item[input] A first-order grammar $\?G=\tup{\?N,\Sigma,\?R}$, two
terms $E,F$ in~$\opfont{Terms}_{\N}$, and $e$ in~$\+N$.
\item[question] Is $\el{E,F}\leq e$ in the labelled transition
system~$\?L_\?G$?
\end{description}\end{problem}
Indeed, as is well-known, the zig-zag condition can be recast as a
\emph{bisimulation game} between two players called Spoiler and
Duplicator. A position of the game is a pair $(s_1,s_2)\in\?S\times\?S$.
Spoiler wants to prove that the two states are not bisimilar, while
Duplicator wants to prove that they are bisimilar. The game proceeds
in rounds; in each round,
\begin{itemize}
\item Spoiler chooses $i\in\{1,2\}$ and a
transition $s_i\step a s'_i$ (if no such transition exists, Spoiler
loses), then
\item Duplicator chooses a transition $s_{3-i}\step a s'_{3-i}$ with
the same label~$a$ (if no such transition exists, Duplicator loses);
\end{itemize}
the game then proceeds to the next round from position $(s'_1,s'_2)$.
Then $\el{s_1,s_2}\leq k$ if and only if Spoiler has a strategy to win
in the $(k{+}1)$th round at the latest
when starting the game from $(s_1,s_2)$.
Note that this game is determined, and memoryless strategies suffice.
Thus, the bounded equivalence level problem can be solved by an
alternating Turing machine that first writes the representation of~$E$
and~$F$ on its tape, and then plays at most~$e$ rounds of the
bisimulation game, where each round requires at most a polynomial
number of computational steps in the size of the grammar (assuming a
somewhat reasonable tape encoding of the terms).
\begin{fact} The bounded equivalence level problem is in
\label{eq-bel}
$\ComplexityFont{ATIME}\big(\size{E,F}+\poly(|\?G|)\cdot e\big)$.
\end{fact}
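The game characterisation behind this fact also yields a naive (exponential-time) recursive procedure for the bounded problem; the sketch below (our encoding, mirroring the alternation rather than the \ComplexityFont{ATIME} bound) tests whether Spoiler wins within a given number of rounds:

```python
def spoiler_wins(trans, s, t, rounds):
    """Does Spoiler win the bisimulation game from (s, t) within
    `rounds` rounds?  trans: state -> iterable of (action, successor)."""
    if rounds == 0:
        return False
    for (u, v) in ((s, t), (t, s)):  # Spoiler picks a side ...
        for (a, u2) in trans[u]:     # ... and one of its transitions
            # Spoiler's move wins if every matching answer of Duplicator
            # leads to a position Spoiler wins with one round fewer
            if all(not (b == a and not spoiler_wins(trans, u2, v2, rounds - 1))
                   for (b, v2) in trans[v]):
                return True
    return False

def eq_level(trans, s, t, cap):
    """el(s, t), or cap if el(s, t) >= cap; uses the characterisation
    el(s, t) <= k iff Spoiler wins within k+1 rounds."""
    for k in range(cap):
        if spoiler_wins(trans, s, t, k + 1):
            return k
    return cap

trans = {'s': (('a', 's1'), ('a', 's2')), 's1': (('b', 's1'),),
         's2': (), 't': (('a', 't1'),), 't1': (('b', 't1'),)}
print(eq_level(trans, 's', 't', cap=10))  # 1
```

The alternating machine of the fact avoids the exponential blow-up that this deterministic recursion incurs when exploring both players' choices.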
\subsection{Candidate Bases}
Consider some fixed first-order grammar $\?G=\tup{\?N,\Sigma,\?R}$.
Given three numbers $n$, $s$, and~$g$ in~$\+N$---which will depend on~$\?G$---an \emph{$(n,s,g)$-candidate basis} for
non-equivalence is a set of pairs of terms
$\?B\subseteq\opfont{Terms}_{\N}\times\opfont{Terms}_{\N}$ associated with two sequences of
numbers $(s_i)_{0\leq i\leq n}$ and $(e_i)_{0\leq i\leq n}$ such that
\begin{enumerate}
\item $\?B\subseteq{\nsim}$,
\item for each $(E,F)\in\?B$ there is $i\in\{0,\dots,n\}$ such that
$\vars{E,F}=\{x_1,\dots,x_i\}$ and $\size{E,F}\leq s_i$,
\item $s_n\eqby{def} s$, and the remaining numbers are defined inductively
by
\end{enumerate}
\begin{align}
e_i &\eqby{def}\max_{(E,F)\in\?B\mid\size{E,F}\leq s_i}\el{E,F}\;,\label{eq-ei}\\
s_{i-1}&\eqby{def} 2s_i+g+e_i(\opfont{sinc}+g)\;.\label{eq-si}
\end{align}
Note that the numbers $(s_i)_{0\leq i\leq n}$ and $(e_i)_{0\leq i\leq
n}$ are entirely determined by~$\?B$ and $n$, $s$, and $g$.
An $(n,s,g)$-candidate basis $\?B$ yields a \emph{bound}
$\?E_\?B$ defined by
\begin{equation}\label{eq-EB}
\?E_\?B\eqby{def} n+1+\sum_{i=0}^n e_i\;.
\end{equation}
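Since the sequences $(s_i)_{0\leq i\leq n}$ and $(e_i)_{0\leq i\leq n}$ are determined by~$\?B$, $n$, $s$, and~$g$, the bound $\?E_\?B$ is plain arithmetic; here is a sketch (our encoding, with the basis abstracted to its pairs' joint sizes and equivalence levels):

```python
def candidate_bound(basis, n, s, g, sinc):
    """Compute the sequences (s_i), (e_i) and the bound E_B.

    basis: list of (size, eq_level) pairs, one per pair of terms in B.
    The recursion runs downward from s_n = s, with
    e_i   = max eq_level over pairs of joint size <= s_i, and
    s_{i-1} = 2 s_i + g + e_i (sinc + g).
    """
    s_seq = [0] * (n + 1)
    e_seq = [0] * (n + 1)
    s_seq[n] = s
    for i in range(n, -1, -1):
        e_seq[i] = max((el for (sz, el) in basis if sz <= s_seq[i]), default=0)
        if i > 0:
            s_seq[i - 1] = 2 * s_seq[i] + g + e_seq[i] * (sinc + g)
    return s_seq, e_seq, (n + 1) + sum(e_seq)

# hypothetical toy values, only to exercise the recursion
s_seq, e_seq, bound = candidate_bound([(3, 2), (10, 5)], n=2, s=4, g=1, sinc=2)
print(s_seq, e_seq, bound)  # [46, 15, 4] [5, 5, 2] 15
```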
\paragraph{Full Bases}For $0\leq i\leq n$, let
\begin{multline}\label{eq-pairs}
\pairs\eqby{def}\{(E,F)\mid\exists j\leq i\mathbin.\vars{E,F}=\{x_1,\dots,x_j\}\ifams\relax\else\\\fi\wedge
\size{E,F}\leq s_i\}\;.\end{multline}
An $(n,s,g)$-candidate basis~$\?B$ is \emph{full below} some
equivalence level $e\in\omega+1$ if, for all $0\leq i\leq n$ and all
$(E,F)\in\pairs$ such that $\el{E,F}<e$ we have $(E,F)\in\?B$.
We
say that~$\?B$ is \emph{full} if it is full below~$\omega$. In other
words, since $\?B\subseteq{\nsim}$, the basis~$\?B$ is full if and only
if, for all $0\leq i\leq n$, $\pairs\setminus\?B\subseteq{\sim}$.
\begin{proposition}[{\citep[Prop.~9]{jancar18}}]
For any $n,s,g$, there is a unique full $(n,s,g)$-candidate basis,
denoted by $\?B_{n,s,g}$.
\end{proposition}
\begin{proof}
The full candidate basis~$\?B_{n,s,g}$ is constructed by induction
over~$n$. Let $s_n\eqby{def} s$ and consider the finite set
$S_n\eqby{def}\{(E,F)\in\opfont{Terms}_{\N}\times\opfont{Terms}_{\N}\mid E\nsim
F\wedge\exists j\leq n\mathbin.\vars{E,F}=\{x_1,\dots,x_j\}\wedge\size{E,F}\leq
s_n\}$;
%
%
%
$S_n$ has a maximal equivalence level
$e_n\eqby{def}\max_{(E,F)\in S_n}\el{E,F}$.
%
%
%
If $n=0$, we define
$\?B_{0,s,g}\eqby{def} S_0$. Otherwise, we let $s_{n-1}\eqby{def}
2s_n+g+e_n(\opfont{sinc}+g)$ as in~\eqref{eq-si}; by induction hypothesis
there is a unique full $(n-1,s_{n-1},g)$-candidate basis
$\?B_{n-1,s_{n-1},g}$ and we set $\?B_{n,s,g}\eqby{def}
S_n\cup\?B_{n-1,s_{n-1},g}$.
\end{proof}
The main result from~\citep{jancar18} can now be stated.
\begin{theorem}[{\citep[Thm.~7]{jancar18}}]\label{th-el}
Let $\?G=\tup{\?N,\Sigma,\?R}$ be a first-order grammar. Then one
can compute a grammatical constant~$g$ exponential in~$|\?G|$ and
grammatical constants $n$, $s$, and $c$ doubly exponential
in~$|\?G|$ such that, for all terms $E,F$ in $\opfont{Terms}_{\N}$ with
$E\nsim F$, \begin{equation*} \el{E,F}\leq
c\cdot\big(\?E_{\?B_{n,s,g}}\cdot\size{E,F}+\size{E,F}^2\big)\;. \end{equation*}
\end{theorem}
\Cref{th-el} therefore shows that the bisimulation problem can be
reduced to the bounded equivalence level problem, provided one can
compute the full $(n,s,g)$-candidate basis for suitable $n$, $s$,
and~$g$---see \cref{tab-cst} in the appendix for details on how the
grammatical constants $n$, $s$, $c$, and~$g$ are defined
in~\citep{jancar18}. Our goal in \cref{sec-algo} will thus be to
exhibit a concrete algorithm computing the full candidate
basis~$\?B_{n,s,g}$, in order to derive an upper bound
on~$\?E_{\?B_{n,s,g}}$.
The proof of \citep[Thm.~7]{jancar18} relies on the following insight,
which we will also need in order to prove the correctness of our algorithm.
\begin{lemma}[{\citep[Eq.~39]{jancar18}}]\label{cl-el}
Let $\?G=\tup{\?N,\Sigma,\?R}$ be a first-order grammar, $g,n,s,c$ be
defined as in \cref{th-el}, $E,F$ be two terms in~$\opfont{Terms}_{\N}$ with $E\not\sim F$,
and $\?B$ be an $(n,s,g)$-candidate basis full below~$\el{E,F}$.
Then
\begin{equation*}
\el{E,F}\leq c\cdot\big(\?E_{\?B}\cdot\size{E,F}+\size{E,F}^2\big)\;.
\end{equation*}
\end{lemma}
\section{Computing Candidate Bases}
\label{sec-algo}
\Cref{th-el} shows that, in order to solve the bisimulation problem,
it suffices to compute~$c$ and~$\?E_{\?B_{n,s,g}}$ and then solve the
bounded equivalence problem, for which \cref{eq-bel} provides a
complexity upper bound. In this section, we show how to compute
$\?E_{\?B_{n,s,g}}$ for an input first-order grammar
$\?G=\tup{\?N,\Sigma,\?R}$. Note that this grammatical constant was
shown computable in~\cite{jancar14,jancar18} through a brute-force
argument, but here we want a concrete algorithm, whose complexity will
be analysed in \cref{sec-upb}. We proceed in two steps, by first
considering a non effective version\ifams\relax\else\ of the
algorithm\fi\ in \cref{sub-neff}, whose correctness
is straightforward, and then the
actual algorithm in \cref{sub-eff}.
\subsection{Non Effective Version}\label{sub-neff}
Throughout this section, we consider~$n$ as a fixed parameter. We
first assume that we have an oracle
$\textsc{EqLevel}(\?G,\?E_{\?B},c,E,F)$ at our disposal,
that returns the equivalence
level $\el{E,F}$ in~$\?L_\?G$; the parameters $\?E_{\?B},c$ will be used
in the effective version in \cref{sub-eff}. The following procedure
then constructs the full $(n,s,g)$-candidate basis $\?B_{n,s,g}$ and its associated
bound~$\?E_{\?B_{n,s,g}}$, by progressively adding pairs from the
sets~$\pairs$ until the candidate basis is full. In order not to
clutter the presentation too much, we implicitly assume that the
equivalence level $e$~of each pair $(E,F)$ added to~$\?B$ on
line~\ref{al-b-up} is stored alongside it, so that it does not need to be
recomputed on line~\ref{al-ei-up}.
\medskip
\begin{algorithmic}[1]%
\Procedure {CandidateBound$_n$}{$\?G$, $s$, $g$, $c$}
\State $\?B\leftarrow\emptyset$\Comment{Initialisation}\label{al-ini-start}
\For {$i\leftarrow 0,\dots,n$}
\State $e_i\leftarrow 0$
\EndFor
\State $s_n\leftarrow s$
\For {$i\leftarrow n-1,\dots,0$}
\State $s_i\leftarrow 2s_{i+1}+g$
\EndFor
\State $\?E_{\?B}\leftarrow n+1$\label{al-ini-end}
\For {$i\leftarrow n,\dots,0$}\label{al-pi-start}%
\State
$\?P_i\leftarrow\pairs\setminus\bigcup_{i<j\leq n}\?P_j$\label{al-pi-end}
\EndFor
\While{$\exists i\in\{0,1,\dots,n\}\mathbin,\exists (E,F)\in\?P_i:$
\ifams\relax\else\\
\hspace*{1.8em}\fi
$\textsc{EqLevel}(\?G,\?E_{\?B},c,E,F)<\omega$}\ifams\relax\else\Comment{Main loop}\fi
\label{al-ml-start}
\State
$e\leftarrow\textsc{EqLevel}(\?G,\?E_{\?B},c,E,F)$\label{al-el-inc}\ifams\Comment{Main loop}\fi
\State $\?P_i\leftarrow \?P_i\setminus\{(E,F)\}$\label{al-pi-dec}
\State $\?B\leftarrow \?B\cup\{(E,F)\}$\label{al-b-up}
\If{$e > e_i$}\Comment{If so, then update}\label{al-up-start}
\State $e_i\leftarrow e$\label{al-up-ei}
\For{$j\leftarrow i-1,\dots,0$}\label{al-up-for}
\State $s_j\leftarrow 2s_{j+1}+g+e_{j+1}(\opfont{sinc}+g)$\label{al-up-sj}
\State $e_j\leftarrow\max_{(E,F)\in\?B\mid\size{E,F}\leq
s_j}\el{E,F}$\label{al-ei-up}
\State $\?P_j\leftarrow \pairs[j]\setminus(\?B\cup\bigcup_{i<k\leq n}\?P_k)$\label{al-pi-up}
\EndFor
\State $\?E_{\?B}\leftarrow n+1+\sum_{0\leq j\leq n}e_j$\label{al-up-end}
\EndIf
\EndWhile
\State\Return $\?E_{\?B}$
\EndProcedure
\end{algorithmic}
\paragraph{Invariant}
The procedure $\textsc{CandidateBound}_n$ maintains as an invariant of
its main loop on lines~\ref{al-ml-start}--\ref{al-up-end} that $\?B$
is an $(n,s,g)$-candidate basis associated with the
numbers~$(s_i)_{0\leq i\leq n}$ and~$(e_i)_{0\leq i\leq n}$, and that
$\?E_{\?B}$ is its associated bound. This indeed holds after the
initialisation phase on lines~\ref{al-ini-start}--\ref{al-ini-end},
and is then enforced in the main loop by the update instructions on
lines~\ref{al-up-start}--\ref{al-up-end}.
\paragraph{Correctness}
Let us check that, if it terminates, this non-effective version does
indeed return the bound~$\?E_{\?B_{n,s,g}}$ associated with the unique
full $(n,s,g)$-candidate basis~$\?B_{n,s,g}$. By the previous
invariant, it suffices to show that~$\?B$ is full when the procedure
terminates. Consider for this some index $0\leq i\leq n$ and a pair
$(E,F)\in\pairs$ with $\el{E,F}=e$ for some $e<\omega$. By definition
of the sets $(\?P_i)_{0\leq i\leq n}$ on
lines~\ref{al-pi-start}--\ref{al-pi-end} and their updates on
lines~\ref{al-pi-dec} and~\ref{al-pi-up} in the main loop, the pair
$(E,F)$ must have been added to some~$\?P_j$ for $j\geq i$. Then the
pair must have been selected by the condition of the main loop on
line~\ref{al-ml-start}, and added to~$\?B$.
\paragraph{Termination}
Although we are still considering a non-effective version of the
algorithm, the proof that it always terminates is the same as the one
for the effective version in \cref{sub-eff}. We exhibit a ranking
function on the main loop, thereby showing that it must stop
eventually. More precisely, each time we enter the main loop on
line~\ref{al-ml-start}, we associate to the current state of the
procedure the ordinal rank below $\omega^{n+1}$ defined by\footnote{
Note that this is equivalent to defining the rank as the tuple
$\tup{|\?P_n|,\dots,|\?P_0|}$ in~$\+N^{n+1}$, ordered
lexicographically, but ordinal notations are more convenient for our
analysis in \cref{sec-upb}.}
\begin{equation}\label{eq-rank}
\alpha\eqby{def}\omega^n\cdot|\?P_n|+\cdots+\omega^0\cdot|\?P_0|\;.
\end{equation}
This defines a descending sequence of ordinals
\begin{equation}\label{eq-ranks}
\alpha_0>\alpha_1>\cdots
\end{equation}
where $\alpha_\ell$ is the rank after $\ell$~iterations of the
main loop.
Indeed, each time we enter the loop, the cardinality
$|\?P_i|$ of the set under consideration strictly decreases on
line~\ref{al-pi-dec}, and is not modified by the updates on
line~\ref{al-pi-up}, which only touch the sets~$\?P_j$ for $j<i$.
Hence $\textsc{CandidateBound}_n$ terminates.
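For intuition, the lexicographic reading of the rank~\eqref{eq-rank} mentioned in the footnote can be made concrete. The following Python sketch (the sets are toy stand-ins for the actual pair sets~$\?P_i$, not part of the formal development) shows that removing an element from~$\?P_i$ strictly decreases the rank even when the sets~$\?P_j$ for $j<i$ grow:

```python
def rank(P):
    """Rank of the current state as the tuple (|P_n|, ..., |P_0|),
    ordered lexicographically -- this matches the ordinal
    omega^n * |P_n| + ... + omega^0 * |P_0| of the text."""
    return tuple(len(P[i]) for i in range(len(P) - 1, -1, -1))

# Toy state with n = 1: P[0] plays the role of P_0, P[1] of P_1.
P = [set(), {('E1', 'F1'), ('E2', 'F2')}]
before = rank(P)                       # (2, 0)
P[1].discard(('E1', 'F1'))             # line al-pi-dec: |P_1| decreases
P[0] |= {('A', 'B'), ('C', 'D')}       # line al-pi-up: P_0 may grow
assert rank(P) < before                # (1, 2) < (2, 0) lexicographically
```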
\subsection{Effective Version}\label{sub-eff}
In order to render $\textsc{CandidateBound}_n$ effective, we provide
an implementation of $\textsc{EqLevel}$ that does not require an
oracle for the bisimulation problem, but relies instead
on \cref{cl-el} and the bounded equivalence level problem, which as we
saw in \cref{sub-game} is decidable.
\begin{algorithmic}[1]%
\Procedure{EqLevel}{$\?G$, $\?E_\?B$, $c$, $E$, $F$}
\If{$\el{E,F}\leq
c\cdot\big(\?E_\?B\cdot\size{E,F}+\size{E,F}^2\big)$}
\State\Return $\el{E,F}$
\Else
\State\Return $\omega$
\EndIf
\EndProcedure
\end{algorithmic}
We establish the correctness of this effective variant in the
following theorem, which uses the same reasoning as the proof
of~\citep[Thm.~7]{jancar18}.
\begin{theorem}\label{th-algo}
The effective version of procedure
$\textsc{CandidateBound}_n(\?G,s,g,c)$ terminates and, provided $n$,
$s$, $c$, and $g$ are defined as in \cref{th-el}, returns the
bound~$\?E_{\?B_{n,s,g}}$.
\end{theorem}
\begin{proof}
Termination is guaranteed by the ranking function defined
by~\eqref{eq-rank}. Regarding correctness, assume the provided $g$,
$n$, $s$, and $c$ are defined as in \cref{th-el}, and let us define
a (reflexive and symmetric) relation $\dot\sim_k$ on $\opfont{Terms}_{\N}$ by
$E\mathrel{\dot\sim_k} F$ if and only if $\el{E,F} >
c\cdot\big(k\cdot\size{E,F}+\size{E,F}^2\big)$. Clearly,
${\sim}\subseteq{\dot\sim_k}$ for all $k$~in $\+N$. We say that an
$(n,s,g)$-candidate basis is \emph{$k$-complete} if, for all $0\leq
i\leq n$, $\pairs\setminus\?B\subseteq{\dot\sim_k}$. We
call~$\?B$ \emph{complete} if it is $\?E_{\?B}$-complete. By the
reasoning we used for showing the correctness of the non-effective
version, when the effective version of $\textsc{CandidateBound}_n$
terminates, $\?B$ is complete.
It remains to show that $\?B$ is complete if and only if it is full.
First observe that, if $\?B$ is full, then it is complete: indeed,
$\?B$ being full entails that,
for all $E\nsim F$ in~$\pairs$, $(E,F)$ is
in~$\?B\subseteq{\nsim}$, hence
$\pairs\setminus\?B\subseteq{\sim}\subseteq{\dot\sim_{\?E_\?B}}$.
Conversely, assume that~$\?B$ is complete, and let us show that it
is full; it suffices to show that, in that case,
${\dot\sim_{\?E_\?B}}\subseteq{\sim}$. By contradiction, consider a
pair $E\nsim F$ with $E\mathrel{\dot\sim_{\?E_\?B}}F$; without loss
of generality, $\el{E,F}$ can be assumed minimal among all such pairs.
Then $\?B$ is full below~$\el{E,F}$: indeed, if $(E',F')\in\pairs$ and
$\el{E',F'}<\el{E,F}$, since $\el{E,F}$ was taken minimal,
$E'\mathrel{\dot{\nsim}_{\?E_\?B}}F'$ and therefore $(E',F')$ belongs
to~$\?B$ since~$\?B$ is complete. Thus \cref{cl-el} applies and
shows that $E\mathrel{\dot\nsim_{\?E_\?B}}F$, a contradiction.
\end{proof}
\section{Complexity Upper Bounds}
\label{sec-upb}
In this section, we analyse the procedure $\textsc{CandidateBound}_n$
to derive an upper bound on the computed $\?E_{\?B}$. In turn,
by \cref{eq-bel,th-el}, this bound will allow us to bound the
complexity of the bisimulation problem. The idea is to analyse the
ranking function defined by \eqref{eq-rank} in order to bound how many
times the main loop of $\textsc{CandidateBound}_n$ can be executed.
We rely for this on a so-called `length function theorem'
from~\cite{schmitz14} to bound the length of descending sequences of
ordinals like~\eqref{eq-ranks}. Finally, we classify the final upper
bound using the `fast-growing' complexity classes defined
in~\citep{schmitz16}. A general introduction to
these techniques can be found in~\citep{schmitz17}.
Throughout this section, we
assume that the values of $g$, $n$, $s$, and $c$ are the ones needed
for \cref{th-el} to hold.
\subsection{Controlled Descending Sequences}
Though all descending
sequences of ordinals are finite, we cannot bound their lengths in
general; e.g., $K+1>K>K-1>\cdots>0$ and
$\omega>K>K-1>\cdots>0$ are descending sequences of length $K+2$ for
all~$K$ in~$\+N$. Nevertheless, the sequence~\eqref{eq-ranks}
produced by $\textsc{CandidateBound}_n$ is not arbitrary: the
successive ranks are either determined by the input and the
initialisation phase, or are the result of some computation, so one
cannot use an arbitrary~$K$ as in these examples.
This intuition is captured by the notion of \emph{controlled
sequences}. For an ordinal $\alpha<\omega^\omega$ (like the ranks
defined by~\eqref{eq-rank}), let us write $\alpha$ in Cantor normal
form as
\begin{equation*}
\alpha=\omega^{n}\cdot c_n+\cdots+\omega^0\cdot c_0
\end{equation*}
with
$c_0,\dots,c_n$ and $n$ in~$\+N$, and define its \emph{size} as
\begin{equation}
\|\alpha\|\eqby{def}\max\{n,\max_{0\leq i\leq n}c_i\}\;.\label{eq-norm}
\end{equation}
Let $N_0$ be a natural number in~$\+N$ and $h{:}\,\+N\to\+N$ a
monotone inflationary function, i.e.,
$x\leq y$ implies $h(x)\leq h(y)$, and $x\leq h(x)$.
A sequence $\alpha_0,\alpha_1,\dots$
of ordinals below~$\omega^\omega$ is \emph{$(N_0,h)$-controlled} if,
for all $\ell$ in~$\+N$,
\begin{equation}\label{eq-ctrl}
\|\alpha_\ell\|\leq h^\ell(N_0)\;,
\end{equation}
i.e., the size of the $\ell$th ordinal $\alpha_\ell$ is bounded by the
$\ell$th iterate of~$h$ applied to~$N_0$; in particular,
$\|\alpha_0\|\leq N_0$. Because for each~$N\in\+N$, there are only
finitely many ordinals below~$\omega^\omega$ of size at most~$N$, the
length of controlled descending sequences is bounded~\citep[see,
e.g.,][]{schmitz14}. One can actually give a precise bound on this
length in terms of \emph{subrecursive functions}, whose definition we
are about to recall.
\subsection{Subrecursive Functions}
Algorithms shown to terminate via an ordinal ranking function can have
a very high worst-case complexity. In order to express such large
bounds, a convenient tool is found in subrecursive hierarchies, which
employ recursion over ordinal indices to define faster and faster
growing functions. We define here two such hierarchies.
\paragraph{Fundamental Sequences}
A \emph{fundamental sequence} for a limit ordinal $\lambda$ is a
strictly ascending sequence $(\lambda(x))_{x<\omega}$ of ordinals
$\lambda(x)<\lambda$ with supremum~$\lambda$.
We use the standard assignment of fundamental
sequences to limit ordinals $\lambda\leq\omega^\omega$, defined
inductively by
\begin{align*}
\omega^\omega(x)&\eqby{def} \omega^{x+1}\;,&
(\beta+\omega^{k+1})(x)&\eqby{def} \beta+\omega^k\cdot(x+1)\;,
\end{align*}
where $\beta+\omega^{k+1}$ is in Cantor normal form.
This particular assignment satisfies, e.g., $0 < \lambda(x)
< \lambda(y)$ for all $x < y$. For instance, $\omega(x) = x + 1$ and
$(\omega^{3}+\omega^3+\omega)(x)=\omega^3+\omega^3+x+1$.
\paragraph{Hardy and Cicho\'n Hierarchies}
In the context of controlled sequences, the hierarchies of Hardy and
Cicho\'n turn out to be especially well-suited~\citep{cichon98}. Let
$h{:}\,\+N\to\+N$ be a function. For each such~$h$, the \emph{Hardy
hierarchy} $(h^\alpha)_{\alpha\leq\omega^\omega}$ and the \emph{Cicho\'n hierarchy}
$(h_\alpha)_{\alpha\leq\omega^\omega}$ relative to~$h$ are two families of functions
$h^\alpha,h_\alpha{:}\,\+N\to\+N$ defined by induction over~$\alpha$ by
\begin{align*}
h^0(x)&\eqby{def} x\;,&
h_0(x)&\eqby{def} 0\;,\\
h^{\alpha+1}(x)&\eqby{def} h^\alpha(h(x))\;,
&h_{\alpha+1}(x)&\eqdef1+h_\alpha(h(x))\;,\\
h^\lambda(x)&\eqby{def} h^{\lambda(x)}(x)\;,&
h_\lambda(x)&\eqby{def} h_{\lambda(x)}(x)\;.
\end{align*}
The Hardy functions are well-suited for expressing a large number of
iterations of the provided function~$h$. For instance, $h^k$ for some
finite $k$ is simply the $k$th iterate of~$h$. This intuition carries
over: $h^\alpha$ is a `transfinite' iteration of the function~$h$,
using a kind of diagonalisation in the fundamental sequences to handle
limit ordinals. For instance, if we use the successor function $H(x)
= x+1$ as our function~$h$, we see that a first diagonalisation yields
$H^\omega(x) = H^{x+1}(x) = 2x+1$. The next diagonalisation occurs at
$H^{\omega\cdot 2}(x) = H^{\omega+x+1}(x)=H^\omega(2x + 1) = 4x +
3$. Fast-forwarding a bit, we get for instance a function of
exponential growth $H^{\omega^2}(x) = 2^{x+1} (x + 1) - 1$, and later
a non-elementary function $H^{\omega^3}$ akin to a tower of
exponentials, and a non primitive-recursive function
$H^{\omega^\omega}$ of Ackermannian growth.
In the following, we rely on two properties of Hardy
functions~\citep{wainer72,cichon98}, which can be checked by induction provided
$\alpha+\beta$ is in Cantor normal form (the first one
justifies the use of superscripts):
\begin{gather}
h^\alpha\circ h^\beta (x)=h^{\alpha+\beta}(x)\;,\label{eq-hardy-comp}\\
\intertext{and if $h$ is monotone inflationary, then so
is $h^\alpha$:}
\textnormal{if $x\leq y$, then } x\leq h^\alpha(x)\leq h^\alpha(y)\;.\label{eq-hardy-mono}
\end{gather}
Regarding the Cicho\'n functions, an easy induction on~$\alpha$ shows
that $H^\alpha(x) = H_\alpha(x) + x$ for the hierarchy relative to
$H(x)\eqby{def} x+1$. But the main interest of
Cicho\'n functions is that they capture how many iterations are
performed by Hardy functions~\citep{cichon98}:
\begin{equation}\label{eq-hardy-cichon}
h^{h_\alpha(x)}(x)=h^\alpha(x)\;.
\end{equation}
\paragraph{Length Function Theorem}
We can now state a `length function theorem' for controlled descending
sequences of ordinals.
\begin{theorem}[{\citep[Thm.~3.3]{schmitz14}}]\label{th-lft}
Let $N_0\geq n+1$. The maximal length of $(N_0,h)$-controlled
descending sequences of ordinals in $\omega^{n+1}$ is
$h_{\omega^{n+1}}(N_0)$.
\end{theorem}
\subsection{Controlling the Candidate Computation}
\paragraph{General Approach}
Consider an execution of $\textsc{CandidateBound}_n$ entering the main
loop at line~\ref{al-ml-start} and let us define
\begin{align}\label{eq-N}
N&\eqby{def}\max\{n+1,\?E_\?B,\max_{0\leq i\leq n}s_i,\max_{0\leq i\leq
n}|\?P_i|\}\;.
\intertext{We are going to exhibit $h{:}\,\+N\to\+N$ monotone and inflationary
such that, along any execution of $\textsc{CandidateBound}_n$, the
sequence of successive values $N_0,N_1,\dots$ defined by~\eqref{eq-N}
each time the execution enters the main loop on line~\ref{al-ml-start}
satisfies}
N_\ell&\leq h^\ell(N_0)\label{eq-N-ctrl}
\intertext{for all~$\ell$ in~$\+N$. By
definition of the ordinal size in~\eqref{eq-norm} of the ranks
from~\eqref{eq-rank}, $\|\alpha_\ell\|\leq N_\ell$. Hence, this will
show that the corresponding sequence of ranks
$\alpha_0>\alpha_1>\cdots$ is $(N_0,h)$-controlled.
Therefore, \cref{th-lft} can be applied since furthermore $N_0\geq
n+1$, showing that the number of loop iterations is bounded by}
L&\eqby{def} h_{\omega^{n+1}}(N_0)\;.
\intertext{By \eqref{eq-hardy-cichon}, this will entail an upper bound
on the returned~$\?E_\?B$ when $\textsc{CandidateBound}_n$ terminates:}
\?E_\?B&\leq N_L\leq h^L(N_0)=h^{\omega^{n+1}}(N_0)\;.\label{eq-EB-h}
\end{align}
\paragraph{Controlling one Loop Execution}
As a preliminary, let us observe that, for all $0\leq i\leq n$, the
number of elements of $\pairs$ (defined in~\eqref{eq-pairs}) can be
bounded by
\begin{equation}\label{eq-pairs-size}
|\pairs|\leq \big((|\?N|+i)\cdot s_i^m\big)^{s_i}\cdot s_i^2\leq
2^{3s_i|\?G|\log n\log s_i}\;.
\end{equation}
Indeed, the graph representation of some pair $(E,F)$ in $\pairs$ has
at most~$s_i$ vertices,
each labelled by a nonterminal symbol
from~$\?N$ or a variable from~$\{x_1,\dots,x_i\}$ and with at most~$m$
outgoing edges; finally the two roots must be distinguished.
Let us turn our attention to the contents of the main loop.
\begin{lemma}\label{lem-ctrl}
For all $\ell$ in~$\+N$ we have $N_{\ell+1}\leq G_{\?G}(N_\ell)$ where
$$G_{\?G}(x)\eqby{def} 2^{2^{2n+6}c^2g^2|\?G|^3x^4}\;.$$
\end{lemma}
\begin{proof}
Assume we enter the main loop for the $\ell$th time with $N_\ell$ as
defined in~\eqref{eq-N}. On line~\ref{al-el-inc}, a new equivalence
level~$e$ is introduced, with $e\leq 2cN_\ell^2$ since $\?E_\?B\leq
N_\ell$ and $\size{E,F}\leq N_\ell$, thus in case of an update on
line~\ref{al-up-ei}, we have $e_i\leq 2cN_\ell^2$.
Consider now the \textbf{for} loop on
lines~\ref{al-up-for}--\ref{al-pi-up}. Regarding
line~\ref{al-ei-up}, observe that
$\max_{(E,F)\in\?B}\el{E,F}\leq\max\{e,\?E_{\?B}\}\leq 2cN_\ell^2$,
thus
\begin{align}
e_j&\leq 2cN_\ell^2\label{eq-ej}
\intertext{for all $j$ in~$\{i,\dots,0\}$ and
$s_i\leq N_\ell$ by assumption. Thus, regarding
line~\ref{al-up-sj}, for all $j$ in~$\{i-1,\dots,0\}$,}
s_j&\leq 2^{i-j}N_\ell+(2^{i-j}-1)(g+2cN_\ell^2(\opfont{sinc}+g))\notag\\
&\leq 2^{n+2}cg|\?G|N_\ell^2\;.\label{eq-sj}
\intertext{%
Regarding line~\ref{al-pi-up}, by \eqref{eq-pairs-size}, \eqref{eq-sj}
entails that for all $j$ in~$\{i-1,\dots,0\}$,}
|\?P_j|&\leq 2^{2^{2n+6}c^2g^2|\?G|^3N_\ell^4}\;.
\end{align}
Finally, regarding line~\ref{al-up-end}, by~\eqref{eq-ej}, $\?E_\?B\leq
2(n+1)cN_\ell^2$.
\end{proof}
\paragraph{Final Bound}
Let us finally express \eqref{eq-EB-h} in terms of~$n$ and~$|\?G|$.
First observe that, at the end of the initialisation phase of
lines~\ref{al-ini-start}--\ref{al-ini-end}, $e_i=0$, $s_i\leq
2^{n+1}g$, $|\?P_i|\leq 2^{2^{2n+5}s^2g^2\log|\?G|}$, and $\?E_\?B=n+1$, thus
\begin{equation}\label{eq-N0}
N_0\leq 2^{2^{2n+5}s^2g^2\log|\?G|}\;.
\end{equation}
Then, because the bounds in \cref{lem-ctrl,eq-N0} are in terms of
$|\?G|$ (recall that the grammatical constant $g$ is exponential and
$n$, $s$, and $c$ are doubly exponential in terms of~$|\?G|$), there
exists a constant~$d$ independent from~$\?G$ such that
$|\?G|\leq N_0\leq H^{\omega^2\cdot d}(|\?G|)$ and
$G_{\?G}(x)\leq H^{\omega^2\cdot d}(\max\{x,|\?G|\})$ for all~$\?G$
and~$x$, where according to~\eqref{eq-hardy-comp}
$H^{\omega^2\cdot d}$ is the $d$th iterate of
$H^{\omega^2}(x)=2^{x+1}(x+1)-1$. Then by~\eqref{eq-hardy-mono},
$h(x)\eqby{def} H^{\omega^2\cdot d}(x)$ is a suitable control function
that satisfies~\eqref{eq-N-ctrl} and therefore~\eqref{eq-EB-h}.
Finally, because $n\leq N_0\leq h(|\?G|)$ and
by~\eqref{eq-hardy-mono},
$h^{\omega^{n+1}}(N_0)\leq h^{\omega^\omega}(h(|\?G|))$. We have just
shown the following upper bound.
\begin{lemma}\label{lem-upb}
Let $\?G$ be a first-order grammar and $n$, $s$, and $g$ be defined
as in \cref{th-el}. Then $\?E_{\?B_{n,s,g}}\leq
h^{\omega^{\omega}}(h(|\?G|))$ where $h(x)\eqby{def} H^{\omega^2\cdot
d}(x)$ for some constant~$d$.
\end{lemma}
\subsection{Fast-Growing Complexity}
It remains to combine \cref{eq-bel} with \cref{lem-upb} in order to
provide an upper bound for the bisimilarity problem. We will employ
for this the \emph{fast-growing} complexity classes defined
in~\citep{schmitz16}. This is an ordinal-indexed hierarchy of
complexity classes $(\F\alpha)_{\alpha<\varepsilon_0}$,
that uses the Hardy functions
$(H^\alpha)_\alpha$ relative to $H(x)\eqby{def} x+1$ as a standard against
which we can measure high complexities.
\paragraph{Fast-Growing Complexity Classes}
Let us first define
\begin{align}
\FGH\alpha&\eqby{def}\bigcup_{\beta<\omega^\alpha}\ComplexityFont{FDTIME}\big(H^\beta(n)\big)
\intertext{%
as the class of functions computed by deterministic Turing
machines in time $O(H^\beta(n))$ for some $\beta<\omega^\alpha$.
This captures for instance the class of Kalmar elementary
functions as $\FGH 3$ and the class of primitive-recursive
functions as $\FGH\omega$~\citep{lob70,wainer72}. Then we let}
\F\alpha&\eqby{def}\bigcup_{p\in\FGH\alpha}\ComplexityFont{DTIME}\big(H^{\omega^\alpha}\!(p(n))\big)
\end{align}
denote the class of decision problems solved by deterministic Turing
machines in time $O\big(H^{\omega^\alpha}\!(p(n))\big)$ for some
function~$p\in\FGH\alpha$. The intuition behind this quantification
over~$p$ is that, just like e.g.\
$\ComplexityFont{EXPTIME}=\bigcup_{p\in\poly}\ComplexityFont{DTIME}\big(2^{p(n)}\big)$
quantifies
over polynomial functions to provide enough `wiggle room' to account
for polynomial reductions, $\F\alpha$ is closed under $\FGH\alpha$
reductions~\citep[Thms.~4.7 and~4.8]{schmitz16}.
\begin{figure}[tbp]
\centering\scalebox{.87}{
\begin{tikzpicture}[every node/.style={font=\small}]
%
\shadedraw[color=black!90,top color=black!20,middle
color=black!5,opacity=20,shading angle=-15](-1,0) arc (180:0:4.8cm);
\draw[color=black!90,thick,fill=black!10](-.7,0) arc (180:0:3.8cm);
\draw[color=black!90,fill=black!7,thick](-.65,0) arc (180:0:3.5cm);
\draw[color=violet!90!black,fill=violet!20,thick](-.6,0) arc (180:0:3.25cm);
\shadedraw[color=black!90,top color=black!20,middle color=black!5,opacity=20,shading angle=-15](-.5,0) arc (180:0:3cm);
\draw[color=blue!90,fill=blue!20,thick](-.1,0) arc (180:0:1.7cm);
\shadedraw[color=black!90,top color=black!20,middle
color=black!5,opacity=20,shading angle=-15,thin](0,0) arc (180:0:1.5cm);
\draw (1.5,.5) node{$\ComplexityFont{ELEMENTARY}$};
\draw (4,1.2) node[color=blue]{$\F3=\!\ComplexityFont{TOWER}$};
\draw[color=blue!90,thick] (3.15,1) -- (3.05,.9);
\draw (2.5,1.9) node{$\bigcup_k\!\F{k}{=}\ComplexityFont{PRIMITIVE\text-RECURSIVE}$};
\draw (5.32,1.5) node[color=violet!90!black]{$\F\omega$};
\draw (5.73,1.6) node[color=black!70]{$\F{\!\omega^{\!2}}$};
\draw (6.21,1.7) node[color=black!60]{$\F{\!\omega^3}$};
%
\draw (3.95,4) node{$\bigcup_k\!\F{\omega^{k}}=\ComplexityFont{MULTIPLY\text-RECURSIVE}$};
%
%
\draw[loosely
dotted,very thick,color=black!70](6.7,1.8) --
(7.2,1.92); \end{tikzpicture}} \caption{Pinpointing
$\F{\omega}=\ComplexityFont{ACKERMANN}$ among the complexity
classes beyond \ComplexityFont{ELEMENTARY}~\citep{schmitz16}.\label{fig-fg}}
\end{figure}
For instance, $\ComplexityFont{TOWER}\eqby{def}\F 3$ defines the class of
problems that can be solved using computational resources bounded by a
tower of exponentials of elementary height in the size of the input,
$\bigcup_{k\in\+N}\F k$ is the class of primitive-recursive decision
problems, and $\ComplexityFont{ACKERMANN}\eqby{def}\F\omega$ is the class
of problems that can be solved using computational resources bounded
by the Ackermann function applied to some primitive-recursive function
of the input size---here it does not matter for $\alpha>2$ whether we
are considering deterministic, nondeterministic, alternating, time, or
space bounds~\citep[Sec.~4.2.1]{schmitz16}. See \cref{fig-fg} for a
depiction.
\begin{theorem}\label{th-upb}
The bisimulation problem for first-order grammars is in
\ComplexityFont{ACKERMANN}, and in $\F{n+4}$ if~$n$ is fixed.
\end{theorem}
\begin{proof}
This is a consequence of \cref{eq-bel} combined with
\cref{th-el,lem-upb}; the various overheads on top of the
bound on~$\?E_{\?B_{n,s,g}}$ are of course negligible for such
high complexities~\citep[Lem.~4.6]{schmitz16}. We rely here on
\citep[Thm.~4.2]{schmitz16} to translate from $h^{\omega^{n+1}}$
with $h=H^{\omega^2\cdot d}\in\FGH 3$ into a bound in terms
of~$H^{\omega^{n+4}}$.
\end{proof}
\section{Pushdown Processes}
\label{sec-pda}
The complexity upper bounds obtained in \cref{sec-upb} are stated in
terms of first-order grammars. In this section, we revisit the known
reduction from pushdown systems to first-order grammars (as given
in~\citep{jancar12,jancar16}), and we also give a direct reduction
from first-order grammars to pushdown systems (instead of giving just
a general reference to~\citep{CourcelleHandbook,caucal95}). We do
this first to make clear that the reductions are primitive recursive
(in fact, they are polynomial-time reductions), and second to show
that, in the real-time case, \cref{th-upb} provides
primitive-recursive bounds for pushdown systems with a fixed number of
states.
\paragraph{Pushdown Systems}
Let us first recall that a \emph{pushdown system} (\emph{PDS}) is a
tuple $M=\tup{Q,\Sigma,\Gamma,\Delta}$ of finite sets where the
elements of $Q,\Sigma,\Gamma$ are called \emph{control states},
\emph{actions} (or \emph{terminal letters}), and \emph{stack symbols},
respectively; $\Delta$ contains \emph{transition rules} of the form
$pY \step{a}q\gamma$ where $p,q\in Q$, $Y\in\Gamma$,
$a\in \Sigma\uplus\{\varepsilon\}$, and $\gamma\in \Gamma^\ast$. A
pushdown system is called \emph{real-time} if $a$ is restricted to be
in~$\Sigma$, i.e., if no $\varepsilon$ transition rules appear
in~$\Delta$.
A PDS $M=\tup{Q,\Sigma,\Gamma,\Delta}$ generates the labelled
transition system
\begin{equation*}
\?L_M\eqby{def}(Q\times \Gamma^\ast,\Sigma\uplus\{\varepsilon\},
(\step{a})_{a\in\Sigma\cup\{\varepsilon\}})
\end{equation*}
where each rule $pY \step{a}q\gamma$ induces transitions
$pY\gamma' \step{a}q\gamma\gamma'$ for all $\gamma'\in \Gamma^\ast$.
Note that~$\?L_M$ might feature \emph{$\varepsilon$-transitions} (also
called \emph{$\varepsilon$-steps})
$pY\gamma'\step\varepsilon q\gamma\gamma'$ if the PDS is not real-time.
\subsection{From PDS to First-Order Grammars}
We recall
a construction already presented in the appendix of the
extended version of~\citep{jancar16}. The idea is that, although
first-order grammars lack the notion of control state, the behaviour
of a pushdown system can nevertheless be captured by a first-order
grammar that uses $m$-ary terms where $m$~is the number of control
states.
\begin{figure}[tbp]
\centering\scalebox{.9}{\hspace*{-1pt}%
\begin{tikzpicture}[auto,on grid]
%
\node[square](A) {$A$};
\node[square,left=.43 of A]{$p$};
\node[square,below=.9 of A](C){$C$};
\node[square,below=.9 of C](B){$B$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(A) edge (C)
(C) edge (B);
%
\node[lsquare,right=2 of A](pA){$pA$};
\node[lsquare,below left =.9 and .8 of pA](q1C){$q_1C$};
\node[lsquare,below =.9 of pA](q2C){$q_2C$};
\node[lsquare,below right=.9 and .8 of pA](q3C){$q_3C$};
\node[lsquare,below left =.9 and .8 of q2C](q1B){$q_1B$};
\node[lsquare,below =.9 of q2C](q2B){$q_2B$};
\node[lsquare,below right=.9 and .8 of q2C](q3B){$q_3B$};
\node[lsquare,below left =.9 and .8 of q2B](q1){$q_1$};
\node[lsquare,below =.9 of q2B](q2){$q_2$};
\node[lsquare,below right=.9 and .8 of q2B](q3){$q_3$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(pA) edge (q1C)
(pA) edge (q2C)
(pA) edge (q3C)
(q1C) edge (q1B)
(q1C) edge (q2B)
(q1C) edge (q3B)
(q2C) edge (q1B)
(q2C) edge (q2B)
(q2C) edge (q3B)
(q3C) edge (q1B)
(q3C) edge (q2B)
(q3C) edge (q3B)
(q1B) edge (q1)
(q1B) edge (q2)
(q1B) edge (q3)
(q2B) edge (q1)
(q2B) edge (q2)
(q2B) edge (q3)
(q3B) edge (q1)
(q3B) edge (q2)
(q3B) edge (q3);
%
\node[square,right=3.5 of pA](a) {$A$};
\node[square,left=.43 of a]{$p$};
\node[right=.8 of a]{$\step a$};
\node[square,right=2 of a](c){$C$};
\node[square,left=.43 of c]{$q$};
\node[square,below=.9 of c](ca){$A$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(c) edge (ca);
%
\node[lsquare,below left =1.8 and .4 of a](pa){$pA$};
\node[lsquare,below left =.9 and .8 of pa](x1){$x_1$};
\node[lsquare,below =.9 of pa](x2){$x_2$};
\node[lsquare,below right=.9 and .8 of pa](x3){$x_3$};
\node[right=1.2 of pa]{$\step a$};
\node[lsquare,right=2.6 of pa](qc){$qC$};
\node[lsquare,below left =.9 and .8 of qc](q1a){$q_1A$};
\node[lsquare,below =.9 of qc](q2a){$q_2A$};
\node[lsquare,below right=.9 and .8 of qc](q3a){$q_3A$};
\node[lsquare,below left =.9 and .8 of q2a](x1a){$x_1$};
\node[lsquare,below =.9 of q2a](x2a){$x_2$};
\node[lsquare,below right=.9 and .8 of q2a](x3a){$x_3$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(pa) edge (x1)
(pa) edge (x2)
(pa) edge (x3)
(qc) edge (q1a)
(qc) edge (q2a)
(qc) edge (q3a)
(q1a) edge (x1a)
(q1a) edge (x2a)
(q1a) edge (x3a)
(q2a) edge (x1a)
(q2a) edge (x2a)
(q2a) edge (x3a)
(q3a) edge (x1a)
(q3a) edge (x2a)
(q3a) edge (x3a);
%
\node[right=.55 of C]{\Large$\rightsquigarrow$};
\node[below left=.37 and 1.2 of ca,rotate=-90]{\Large$\rightsquigarrow$};
\end{tikzpicture}}
\caption{The PDS configuration $pACB$ encoded as a term (left), and
the translation of the PDS rule $pA\step a qCA$ into a
first-order rule (right).}\label{fig:pdatofo}
\end{figure}
\Cref{fig:pdatofo} (left) presents a configuration of a PDS---i.e., a
state in $\?L_M$---as a term; here we assume that $Q=\{q_1,q_2,q_3\}$.
The string $pACB$, depicted on the left in a convenient vertical form,
is translated into a term presented by an acyclic graph in the
figure. On the right in \cref{fig:pdatofo} we can see the translation
of the PDS transition rule $pA\step{a}qCA$ into a rule of a first-order
grammar.
\subsubsection{Real-Time Case}
Let us first assume that $M$ is a real-time PDS, i.e., that each PDS
transition rule $pY\step a q\gamma$ is such that $a$ is in~$\Sigma$.
We are interested in the following decision problem.
\begin{problem}[Strong Bisimulation]
\hfill\\[-1.5em]\begin{description}[\IEEEsetlabelwidth{question}]
\item[input] A real-time pushdown system
$M=\tup{Q,\Sigma,\Gamma,\Delta}$ and two configurations $pY,qZ$
in~$Q\times\Gamma$.
\item[question] Is $pY\sim qZ$ in the labelled transition
system~$\?L_M$?
\end{description}\end{problem}
Formally, for a real-time PDS $M=(Q,\Sigma,\Gamma,\Delta)$, where
$Q=\{q_1,q_2,\dots,q_m\}$, we can define the first-order grammar
\begin{equation*}
\?G_M\eqby{def}(\?N,\Sigma,\?R)
\end{equation*}
where $\?N\eqby{def} Q\cup (Q\times \Gamma)$,
with $\ar(q)\eqby{def} 0$ and $\ar((q,X))\eqby{def} m$ for all $q$ in~$Q$ and
$X$ in~$\Gamma$; the set $\?R$ is defined below. We write $[q]$ and
$[qY]$ for nonterminals $q$ and $(q,Y)$, respectively, and we map each
configuration $p\gamma$ to a (finite) term $\?T(p\gamma)$ in $\opfont{Terms}_{\N}$ defined
by structural induction:
\begin{align}
\?T(p\varepsilon)&\eqby{def}[p]\;,\label{eq-Teps}\\
\?T(pY\gamma)&\eqby{def}
[pY](\?T(q_1\gamma),\?T(q_2\gamma),\dots,\?T(q_m\gamma))\;.\label{eq-TY}
\intertext{For a smooth translation of rules, we introduce a special `stack
variable' $x$, and we set }\?T(q_ix)&\eqby{def} x_i\label{eq-Tx}
\end{align} for all $i\in\{1,\dots,m\}$.
A PDS transition rule $pY\step{a}q\gamma$
in~$\Delta$ with $a$ in~$\Sigma$ is then translated into the
first-order grammar rule
\begin{align}\label{eq-ru}
\?T(pYx)&\step{a}\?T(q\gamma x)
\intertext{%
in~$\?R$. Hence $pY\step{a}q_i$ is
translated into}
\notag
[pY](x_1,\dots,x_m)&\step{a}x_i
\intertext{%
and
$pY\step{a}qZ\gamma$ is translated into}
\notag
[pY](x_1,\dots,x_m)&\step{a} [qZ](\?T(q_1\gamma x),\dots,\?T(q_m\gamma
x))\;.
\end{align}
It should be obvious that the labelled transition system~$\?L_M$ is
isomorphic with the restriction of the labelled transition system
$\?L_{\?G_M}$
to the states $\?T(p\gamma)$ where $p\gamma$
are configurations of~$M$; moreover, the set
$\{\?T(p\gamma)\mid p\in Q, \gamma\in\Gamma^\ast\}$ is closed w.r.t.\
reachability in
$\?L_{\?G_M}$: %
if $\?T(p\gamma)\step{a}F$
in $\?L_{\?G_M}$, then $F=\?T(q\gamma')$ where $p\gamma\step{a}q\gamma'$
in $\?L_M$.
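The encoding~\eqref{eq-Teps}--\eqref{eq-TY} can be sketched as follows, using nested Python tuples for terms (a representation chosen only for the sketch; it builds trees rather than the shared acyclic graphs of \cref{fig:pdatofo}, which would be obtained by memoising on the pair of state and stack suffix):

```python
def encode(p, gamma, states):
    """T(p gamma): a configuration as a term over nonterminals [p]
    (arity 0) and [pY] (arity m = len(states)).  Terms are nested
    tuples ('q',) or ('qY', t_1, ..., t_m)."""
    if not gamma:
        return (p,)                                   # T(p eps) = [p]
    Y, rest = gamma[0], gamma[1:]
    return (p + Y,) + tuple(encode(q, rest, states) for q in states)

states = ['q1', 'q2', 'q3']
# T(pA) = [pA](T(q1 eps), T(q2 eps), T(q3 eps))
assert encode('p', 'A', states) == ('pA', ('q1',), ('q2',), ('q3',))
```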
\begin{corollary}\label{cor-pds}
The strong bisimulation problem for real-time pushdown systems is in
\ComplexityFont{ACKERMANN}, and in $\F{|Q|+4}$ if the number $|Q|$ of states is fixed.
\end{corollary}
\begin{proof}
What we have sketched above is a polynomial-time (in fact,
\ComplexityFont{logspace}) reduction from the strong bisimulation
problem in (real-time) pushdown systems to the bisimulation problem
in first-order grammars, for which we can apply \cref{th-upb}.
Observe that, in this translation and according to the discussion
after~\eqref{eq-n}, we may bound~$n$ by the number~$|Q|$ of states
of the given pushdown system, which justifies the
primitive-recursive $\F{|Q|+4}$ upper bound when the number of
states is fixed. (\Cref{fig:pdatofo} makes clear that all branches
in $\?T(p\gamma)$ have the same length, and there are
precisely~$|Q|$ depth-$d$ subterms of $\?T(p\gamma)$, for each
$d\leq\height{\?T(p\gamma)}$.)
\end{proof}
\subsubsection{General Case}
In the case of labelled transition systems
$\?L=\tup{\?S,\Sigma,({\step a})_{a\in\Sigma\uplus\{\varepsilon\}}}$
with a \emph{silent action}~$\varepsilon$,
by $s\dstep w t$,
for $w\in\Sigma^\ast$, we denote
that there are $s_0,s_1,\dots,s_\ell\in \?S$ and
$a_1,\dots,a_\ell\in\Sigma\uplus\{\varepsilon\}$
such that
$s_0=s$, $s_\ell=t$, $s_{i-1}\step{a_i}s_i$ for all
$i\in\{1,\dots,\ell\}$, and $w=a_1\cdots a_\ell$.
Thus $s\dstep\varepsilon t$ denotes
an arbitrary sequence of silent steps, and
$s\dstep{a} t$ for $a\in\Sigma$ denotes that there are $s',t'$ such
that $s\dstep{\varepsilon}s'\step{a}t'\dstep{\varepsilon}t$.
A relation $R\subseteq\?S\times\?S$ is a \emph{weak bisimulation}
if the following two conditions hold:
\begin{description}[\IEEEsetlabelwidth{(zag)}]
\item[(zig)] if $s\mathbin R t$ and $s\step a s'$ for some
$a\in\Sigma\uplus\{\varepsilon\}$, then there exists $t'$ such that
$t\dstep a t'$ and $s'\mathbin R t'$;
\item[(zag)] if $s\mathbin R t$ and $t\step a t'$ for some
$a\in\Sigma\uplus\{\varepsilon\}$, then there exists $s'$ such that
$s\dstep a s'$ and $s'\mathbin R t'$.
\end{description}
By $\approx$ we denote \emph{weak bisimilarity}, i.e., the largest
weak bisimulation (the union of all weak bisimulations), which is an
equivalence relation.
We are now interested in the following problem.
\begin{problem}[Weak Bisimulation]
\hfill\\[-1.5em]\begin{description}[\IEEEsetlabelwidth{question}]
\item[input] A pushdown system
$M=\tup{Q,\Sigma,\Gamma,\Delta}$ and two configurations $pY,qZ$
in~$Q\times\Gamma^\ast$.
\item[question] Is $pY\approx qZ$ in the labelled transition
system~$\?L_M$?
\end{description}\end{problem}
Unfortunately, in general the weak bisimulation problem for PDS is
undecidable, already for one-counter systems~\cite{mayr03}; we can
also refer, e.g., to~\cite{jancar08} for further discussion. As
already mentioned in the introduction, we thus consider PDS with
(very) \emph{restricted silent actions}: each rule
$pY\step{\varepsilon}q\gamma$ in $\Delta$ is \emph{deterministic}
(i.e., alternative-free), which means that there is no other rule with
the left-hand side $pY$.
From now on, by \emph{restricted PDS} we mean PDS whose
$\varepsilon$-rules are deterministic.
We aim to show that the weak bisimulation problem for restricted PDS
reduces to the (strong) bisimulation problem for first-order grammars
(where silent actions are not allowed by our definition). For this it
is convenient to make a standard transformation~\citep[see,
e.g.,][Sec.~5.6]{harrison78} of our restricted PDS that removes
non-popping $\varepsilon$-rules; an $\varepsilon$-rule
$pY\step{\varepsilon}q\gamma$ is called \emph{popping} if
$\gamma=\varepsilon$. This is captured by the next proposition.
(When comparing two states from different LTSs, we implicitly refer to
the disjoint union of these LTSs.)
\begin{proposition}\label{prop:restrtopop}
There is a polynomial-time transformation
of a restricted PDS $M=\tup{Q,\Sigma,\Gamma,\Delta}$ to
$M'=\tup{Q,\Sigma,\Gamma,\Delta'}$ in which each $\varepsilon$-rule is
deterministic and popping, and $pY$ in $\?L_M$ is weakly bisimilar
with $pY$ in $\?L_{M'}$.
\end{proposition}
\begin{proof}
Given a restricted PDS $M=\tup{Q,\Sigma,\Gamma,\Delta}$, we
proceed as follows.
First we find all $pY$ such that
\begin{align}\label{pop1}
pY&\step{\varepsilon}\cdots\dstep{\varepsilon}pY\gamma
\intertext{for some $\gamma\in\Gamma^\ast$, and remove the respective
rules $pY\step{\varepsilon}\cdots$.
Then for each $pY$ such that}\label{pop2}
pY&\step{\varepsilon}\cdots\dstep{\varepsilon}q\;,
\intertext{we add the popping rule
$pY\step{\varepsilon}q$, and for each
$pY$ where}
pY&\step{\varepsilon}\cdots\dstep{\varepsilon}qB\gamma\label{pop3}
\end{align}
and each rule $qB\step{a}q'\gamma'$ with $a\in\Sigma$ we add the rule
$pY\step{a}q'\gamma'\gamma$. Finally we remove all the non-popping
$\varepsilon$-rules. Thus $M'=\tup{Q,\Sigma,\Gamma,\Delta'}$ arises.
The configurations satisfying conditions
(\ref{pop1}--\ref{pop3}) can be identified in polynomial time through a
saturation algorithm.
The claim on the relation of $\?L_M$ and $\?L_{M'}$ is
straightforward.
\end{proof}
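The classification into cases (\ref{pop1})--(\ref{pop3}) can be made concrete by following the unique chain of deterministic $\varepsilon$-rules from a configuration $pY$. The sketch below uses our own encodings (states and stack symbols as strings, the stack as a string of symbols) and a conservative fuel bound in place of a full saturation argument.

```python
def eps_outcome(eps_rules, p, Y, fuel=10_000):
    """Follow the unique chain of deterministic epsilon-rules from pY and
    classify it into the cases (pop1)-(pop3).  eps_rules maps
    (state, symbol) to (state, pushed-string); encodings are our own."""
    state, stack = p, Y
    seen = set()
    for _ in range(fuel):
        if stack == '':
            return ('pops', state)                 # case (pop2)
        top, rest = stack[0], stack[1:]
        if (state, top) not in eps_rules:
            return ('stable', state, top, rest)    # case (pop3)
        if (state, stack) in seen:
            return ('diverges',)                   # case (pop1)
        seen.add((state, stack))
        q, gamma = eps_rules[(state, top)]
        state, stack = q, gamma + rest
    return ('diverges',)   # fuel exhausted: conservatively non-terminating

# pY --eps--> qZY and qZ --eps--> q (popping): the chain stabilises at qY.
eps_rules = {('p', 'Y'): ('q', 'ZY'), ('q', 'Z'): ('q', '')}
print(eps_outcome(eps_rules, 'p', 'Y'))   # ('stable', 'q', 'Y', '')
```

In the `'stable'` case one would add the rules of form (\ref{pop3}), in the `'pops'` case the popping rule of (\ref{pop2}), and in the `'diverges'` case remove the offending rule as for (\ref{pop1}).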
A \emph{stable configuration} is either a configuration
$p\varepsilon$, or a configuration $pY\gamma$ where there is no
$\varepsilon$-rule of the form
$pY\step{\varepsilon}q\gamma'$.
In a
restricted PDS with only popping $\varepsilon$-rules, any unstable
configuration $p\gamma$ can only perform a finite sequence of
silent popping steps until it reaches a stable configuration.
It is
natural to restrict our attention to the transitions
$p\gamma\step{a}q\gamma'$ with $a\in\Sigma$ between stable
configurations; such transitions might encompass sequences of
popping $\varepsilon$-steps.
When defining the grammar $\?G_M$, we can avoid the explicit use of
deterministic popping silent steps, by `preprocessing'
them: we apply the inductive definition of the translation
operator~$\?T$ from (\ref{eq-Teps}--\ref{eq-Tx}) to stable
configurations, while if $pY$ is unstable, then there is exactly one
applicable rule,
$pY\step{\varepsilon}q$, and in this case we let
\begin{equation}\label{eq-Tuns}
\?T(pY\gamma)\eqby{def}\?T(q\gamma)\;.
\end{equation}
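The `preprocessing' in~\eqref{eq-Tuns} amounts to repeatedly applying the unique popping $\varepsilon$-rule at the exposed top before translating. A small sketch, with our own encoding of configurations as (state, stack-string) pairs:

```python
def stabilise(config, eps_pop):
    """'Preprocess' deterministic popping eps-rules before translation
    (Eq. eq-Tuns): while the exposed top pY is unstable, apply the unique
    rule pY --eps--> q.  eps_pop maps (state, symbol) to a state; the
    function returns the stable configuration that T would translate."""
    p, gamma = config
    while gamma and (p, gamma[0]) in eps_pop:
        p, gamma = eps_pop[(p, gamma[0])], gamma[1:]
    return (p, gamma)

eps_pop = {('q2', 'A'): 'q3'}                # the rule q2 A --eps--> q3
print(stabilise(('q2', 'AB'), eps_pop))      # ('q3', 'B')
```

Since each silent step is popping, the loop terminates after at most $|\gamma|$ iterations.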
\begin{figure}[tbp]
\centering
\centering\scalebox{.9}{\hspace*{-1pt}%
\begin{tikzpicture}[auto,on grid]
%
\node[square](a) {$A$};
\node[square,left=.43 of a]{$p$};
\node[right=.6 of a]{$\step a$};
\node[square,right=1.6 of a](c){$C$};
\node[square,left=.43 of c]{$q$};
\node[square,below=.9 of c](ca){$A$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(c) edge (ca);
%
\node[square,below=1.8 of a](A) {$A$};
\node[square,left=.43 of A]{$q_2$};
\node[right=.6 of A]{$\step\varepsilon$};
\node[square,right=1.17 of A](q3){$q_3$};
%
\node[lsquare,right=4.2 of a](pa){$pA$};
\node[lsquare,below left =.9 and .8 of pa](x1){$x_1$};
\node[lsquare,below =.9 of pa](x2){$x_2$};
\node[lsquare,below right=.9 and .8 of pa](x3){$x_3$};
\node[right=1.2 of pa]{$\step a$};
\node[lsquare,right=2.6 of pa](qc){$qC$};
\node[lsquare,below left =.9 and .8 of qc](q1a){$q_1A$};
\node[lsquare,below right=.9 and .8 of qc](q3a){$q_3A$};
\node[lsquare,below left =1.8 and .8 of qc](x1a){$x_1$};
\node[lsquare,below =1.8 of qc](x2a){$x_2$};
\node[lsquare,below right=1.8 and .8 of qc](x3a){$x_3$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(pa) edge (x1)
(pa) edge (x2)
(pa) edge (x3)
(qc) edge (q1a)
(qc) edge[bend right=15] (x3a)
(qc) edge (q3a)
(q1a) edge (x1a)
(q1a) edge (x2a)
(q1a) edge (x3a)
(q3a) edge (x1a)
(q3a) edge (x2a)
(q3a) edge (x3a);
%
\node[right=.9 of ca]{\Large$\rightsquigarrow$};
\end{tikzpicture}}
\caption{Deterministic popping silent steps are
`preprocessed.'}\label{fig:swalloweps}
\end{figure}%
\Cref{fig:swalloweps} (right) shows the grammar-rule
\begin{equation*}
\?T(pAx)\step{a}\?T(qCAx)
\end{equation*}
(arising from the PDS-rule
$pA\step{a}qCA$), when $Q=\{q_1,q_2,q_3\}$ and
there is a PDS-rule $q_2A\step{\varepsilon}q_3$, while $q_1A$, $q_3A$
are stable.
\begin{corollary}\label{cor-e-pds}
The weak bisimulation problem for restricted pushdown systems (i.e.,
where $\varepsilon$-rules are deterministic) is in~$\ComplexityFont{ACKERMANN}$.
\end{corollary}
\begin{proof}
By Proposition~\ref{prop:restrtopop} it suffices to consider
a PDS $M=(Q,\Sigma,\Gamma,\Delta)$ where each
$\varepsilon$-rule is deterministic and popping.
Since it is clear that $pY\approx qZ$ in $\?L_M$ iff
$\?T(pY)\sim \?T(qZ)$ in $\?L_{\?G_M}$, the claim follows from
\cref{th-upb}.
\end{proof}
Note that, due to our preprocessing, the terms~$\?T(p\gamma)$ may
have branches of varying lengths, which is why~$n$ as defined
in~\eqref{eq-n} might not be bounded by the number of states as in
\cref{cor-pds}.
\subsection{From First-Order Grammars to PDS}
We have shown the $\ComplexityFont{ACKERMANN}$-membership for bisimilarity of first-order
grammars (\cref{th-upb}), and thus also for weak bisimilarity of
pushdown processes with deterministic $\varepsilon$-steps
(\cref{cor-e-pds}). By adding the lower bound from~\cite{jancarhard},
we get the $\ComplexityFont{ACKERMANN}$-completeness for both problems.
In fact, the $\ComplexityFont{ACKERMANN}$-hardness in~\cite{jancarhard} was shown in the
framework of first-order grammars. The case of pushdown processes was
handled by a general reference to the equivalences that are known,
e.g., from~\cite{CourcelleHandbook} and the works referred there;
another relevant reference for such equivalences is~\cite{caucal95}.
Nevertheless, in our context it seems more appropriate to show a
direct transformation from first-order grammars to pushdown processes
(with deterministic $\varepsilon$-steps), which can be argued to be
primitive-recursive;
in fact, it is
a \ComplexityFont{logspace} reduction.
\medskip
Let $\?G=\tup{\?N,\Sigma,\?R}$ be a first-order grammar. For
a term $F\in\opfont{Terms}_{\N}$ such that $F\not\in\textsc{Var}$ (hence the root of $F$ is a
nonterminal $A$) we define its \emph{root-substitution} to be the
substitution $\sigma$ where $F=A(x_1,\dots,x_{\ar(A)})\sigma$ and
$x\sigma=x$ for all $x\not\in\{x_1,\dots,x_{\ar(A)}\}$. A
substitution $\sigma$ is an \emph{rhs-substitution} for~$\?G$ if it is
the root-substitution of a subterm $F$ of the right-hand side $E$ of a
rule $A(x_1,\dots,x_{\ar(A)})\step{a}E$ in $\?R$ (where $F\not\in\textsc{Var}$);
we let $\textsc{RSubs}_\?G$ denote the set of
rhs-substitutions for~$\?G$.
\begin{figure}[!t]
\centering\scalebox{.87}{
\begin{tikzpicture}[auto,on grid]
\node[square](A){A};
\node[square,below left =.9 and .8 of A](x1){$x_1$};
\node[square,below =.9 of A](x2){$x_2$};
\node[square,below right=.9 and .8 of A](x3){$x_3$};
\node[right=1.1 of A]{$\step a$};
\node[square,right=2.2 of A](C){$C$};
\node[square,below right=.9 and .4 of C](D){$D$};
\node[square,below left =.9 and .4 of D](C1){$x_2$};
\node[square,below right=.9 and .4 of D](D2){$x_1$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(A) edge (x1)
(A) edge (x2)
(A) edge (x3)
(C) edge[bend right] (C1)
(C) edge (D)
(D) edge (C1)
(D) edge (D2);
%
\node[right=1.3 of D]{\Large$\rightsquigarrow$};
%
\node[square,above right=.9 and 3.2 of C](a){$A$};
\node[square,left=.43 of a]{$q_1$};
\node[right=.6 of a]{$\step a$};
\node[square,right=1.6 of a](c){$C$};
\node[square,left=.43 of c]{$q_1$};
\node[square,below=.9 of c](sub){$\sigma$};
\node[square, below=1.8 of a](s){$\sigma$};
\node[square,left=.43 of s]{$q_1$};
\node[right=.6 of s]{$\step\varepsilon$};
\node[square,right=1.17 of s](q2){$q_2$};
\node[square, below=.9 of s](s2){$\sigma$};
\node[square,left=.43 of s2]{$q_2$};
\node[right=.6 of s2]{$\step\varepsilon$};
\node[square,right=1.6 of s2](d){$D$};
\node[square,left=.43 of d]{$q_1$};
\node[square,below=.9 of d](sub2){$\sigma'$};
\path[every node/.style={font=\tiny,inner sep=1pt,color=black!70}]
(c) edge (sub)
(d) edge (sub2);
\end{tikzpicture}}
\caption{\label{fig-fog2pda}The transformation from first-order
grammars to pushdown processes with deterministic
$\varepsilon$-steps. In this example,
$x_1\sigma=x_2, x_2\sigma=D(x_2,x_1)$,
and $x_1\sigma'=x_2, x_2\sigma'=x_1$.}
\end{figure}
We define the PDS
$M_{\?G}\eqby{def}(Q,\Sigma,\Gamma,\Delta)$
where
\ifams%
\begin{align*}
Q&\eqby{def}\{q_1,\dots,q_m\}
\intertext{for $m$ as defined in~\eqref{eq-m}---or
$Q\eqby{def}\{q_1\}$ if $m=0$---,}
\Gamma&\eqby{def} \?N\uplus\textsc{RSubs}_\?G\;,\\
\Delta&\eqby{def}\{q_1A\step a q_i\mid
(A(x_1,\dots,x_{\ar(A)})\step{a}x_i)\in\?R\}\\
&\,\cup\,\{q_1A\step{a}q_1B\sigma\mid \sigma\in\textsc{RSubs}_\?G\wedge(A(x_1,\dots,x_{\ar(A)})\step{a}B(x_1,\dots,x_{\ar(B)})\sigma)\in\?R\}\\
&\,\cup\,\{q_i\sigma\step\varepsilon
q_j\mid 1\leq i\leq
m\wedge\sigma\in\textsc{RSubs}_\?G\wedge\sigma(x_i)=x_j\}\\
&\,\cup\,\{q_i\sigma\step{\varepsilon}q_1C\sigma'\mid 1\leq i\leq
m\wedge\sigma,\sigma'\in\textsc{RSubs}_\?G\wedge\sigma(x_i)=C(x_1,\dots,x_{\ar(C)})\sigma'\}\;.
\end{align*}\else%
\begin{align*}
Q&\eqby{def}\{q_1,\dots,q_m\}
\intertext{for $m$ as defined in~\eqref{eq-m}---or
$Q\eqby{def}\{q_1\}$ if $m=0$---,}
\Gamma&\eqby{def} \?N\uplus\textsc{RSubs}_\?G\;,\\
\Delta&\eqby{def}\{q_1A\step a q_i\mid
(A(x_1,\dots,x_{\ar(A)})\step{a}x_i)\in\?R\}\\
&\,\cup\,\{q_1A\step{a}q_1B\sigma\mid \sigma\in\textsc{RSubs}_\?G\:\wedge\\
&\qquad\qquad(A(x_1,\dots,x_{\ar(A)})\step{a}B(x_1,\dots,x_{\ar(B)})\sigma)\in\?R\}\\
&\,\cup\,\{q_i\sigma\step\varepsilon
q_j\mid 1\leq i\leq
m\wedge\sigma\in\textsc{RSubs}_\?G\wedge\sigma(x_i)=x_j\}\\
&\,\cup\,\{q_i\sigma\step{\varepsilon}q_1C\sigma'\mid 1\leq i\leq
m\wedge\sigma,\sigma'\in\textsc{RSubs}_\?G\:\wedge\\
&\qquad\qquad\sigma(x_i)=C(x_1,\dots,x_{\ar(C)})\sigma'\}\;.
\end{align*}\fi%
See \cref{fig-fog2pda} for an example. Note that the $\varepsilon$-rules
are indeed deterministic; moreover, any non-popping
$\varepsilon$-step, necessarily of the form
$q_i\sigma\gamma\step{\varepsilon}q_1C\sigma'\gamma$, cannot be
followed by another $\varepsilon$-step.
It should be obvious that a state $A(x_1,\dots,x_{\ar(A)})$ in $\?L_\?G$
is weakly bisimilar with the state $q_1A$ in $\?L_{M_{\?G}}$.
In particular we note that $q_1A\dstep{w}q_i\gamma$ in $\?L_{M_{\?G}}$
(where $\varepsilon$-steps may also be included)
entails
that $\gamma=\sigma_0\sigma_1\dots\sigma_\ell$
(in which case $q_i\gamma$ represents the term $x_i\sigma_0\sigma_1\dots\sigma_\ell$), or
$\gamma=B\sigma_1\dots\sigma_\ell$ when $i=1$
(in which case $q_1\gamma$ represents the term
$B(x_1,\dots,x_{\ar(B)})\sigma_1\dots\sigma_\ell$).
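The last two sets of~$\Delta$, which turn one rhs-substitution into silent steps, can be generated mechanically. The following sketch uses assumed encodings: a substitution maps each variable index either to another variable index or to an application $C(x_1,\dots,x_{\ar(C)})\sigma'$, and stack strings concatenate symbol names.

```python
def eps_rules_for_subst(name, sigma, m):
    """Silent rules contributed by one rhs-substitution sigma (the last two
    sets in the definition of Delta).  sigma maps i in 1..m either to
    ('var', j), meaning x_i sigma = x_j, or to ('app', C, name2), meaning
    x_i sigma = C(x_1,...,x_ar(C)) sigma'.  All encodings are our own."""
    rules = []
    for i in range(1, m + 1):
        kind, *rest = sigma[i]
        if kind == 'var':                 # q_i sigma --eps--> q_j
            rules.append(((f'q{i}', name), '', (f'q{rest[0]}', '')))
        else:                             # q_i sigma --eps--> q_1 C sigma'
            C, name2 = rest
            rules.append(((f'q{i}', name), '', ('q1', C + name2)))
    return rules

# The substitutions of Fig. fog2pda: x1 sigma = x2, x2 sigma = D(x2, x1) sigma'
sigma = {1: ('var', 2), 2: ('app', 'D', "s'")}
rules = eps_rules_for_subst('s', sigma, m=2)
for rule in rules:
    print(rule)
```

For the example of \cref{fig-fog2pda} this yields the rules $q_1\sigma\step{\varepsilon}q_2$ and $q_2\sigma\step{\varepsilon}q_1 D\sigma'$.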
We could add a technical discussion about how to represent all the
terms from~$\opfont{Terms}_{\N}$ (including the infinite regular terms) in an
enhanced version of~$\?L_{M_{\?G}}$, but this is not necessary since
the lower bound construction in~\cite{jancarhard} uses only the
states of~$\?L_\?G$ that are reachable from `initial' terms of the
form~$A(x_1,\dots,x_{\ar(A)})$ (more precisely, of the form
$A(\bot,\dots,\bot)$ for a nullary nonterminal~$\bot$).
\begin{corollary}\label{cor:pdshard}
The weak bisimulation problem for
pushdown systems whose $\varepsilon$-rules are deterministic and
popping is~$\ComplexityFont{ACKERMANN}$-hard.
\end{corollary}
\begin{proof}
In~\citep{jancarhard}, the
\ComplexityFont{ACKERMANN}-hardness of the control-state reachability problem for reset
counter machines is recalled~\citep{schnoebelen10}, and its
polynomial-time
(in fact, \ComplexityFont{logspace}) reduction to the bisimulation problem for
first-order grammars is shown.
The reduction guarantees that a given control state is reachable from the initial
configuration of a given reset counter machine $R$ iff
$A(\bot,\dots,\bot)\not\sim B(\bot,\dots,\bot)$
in $\?L_{\?G_R}$ for the constructed grammar $\?G_R$.
As shown above, the question whether $A(\bot,\dots,\bot)\sim B(\bot,\dots,\bot)$
in $\?L_{\?G_R}$
can be
further reduced to an instance of the weak bisimulation problem for
the pushdown system $M_{\?G_{R}}$.
\end{proof}
\section{Concluding Remarks}
\label{sec-concl}
\Cref{th-upb,cor-e-pds} provide the first known worst-case upper
bounds, in \ComplexityFont{ACKERMANN}, for the strong bisimulation equivalence of
first-order grammars and the weak bisimulation equivalence of pushdown
processes restricted to
deterministic silent steps. By the lower
bound shown in~\citep{jancarhard} and \cref{cor:pdshard}, this is moreover optimal. An
obvious remaining problem is to close the complexity gap in the case of
strong bisimulation for real-time pushdown processes, which is only
known to be \ComplexityFont{TOWER}-hard~\citep{benedikt13}, and for which we do not
expect \cref{cor-pds} to provide tight upper bounds.
\section{Introduction}
A granular material, such as a soil, possesses an inherent
fabric that is imprinted
during its formation. This initial fabric usually imparts a directional,
an\-iso\-tropic character, so that strength and stiffness depend upon
the direction of loading relative to the original deposition
\citep{Arthur:1972a}.
\citet{Oda:1972a} defined fabric as the spatial arrangement of particles
and voids, and he showed that the microscopic fabric
can exhibit two forms of anisotropy: a preferred orientation of elongated
or flat particles, and a prevalence of inter-particle contacts in
preferred directions. In pioneering experiments, he also found
that the subsequent loading and deformation
of a soil alters the arrangement of its
particles and causes contacts to become increasingly aligned in the
direction of the major principal stress~---~a phenomenon now termed
\emph{stress-induced fabric anisotropy }\citep{Oda:1972b,Oda:1972c}\emph{.}
These early studies, which derived from a geometric view of fabric,
were later augmented to include a kinetic or statical aspect of anisotropy.
Photo-elastic experiments have shown that forces between particles
become largest at contacts that are aligned with the loading direction;
moreover, Cundall and others have shown that deviatoric stress is
largely an expression of such \emph{stress-induced force anisotropy}
\citep{Cundall:1983a,Thornton:1986a}.
\par
The current study addresses the induced anisotropies of contact orientation
and contact force. The Paper develops a systematic means of tracking
and analyzing the evolution of anisotropy. That is, the focus is on
the underlying source and rate of induced anisotropy~--- on the rate of its
evolution rather than its state at a particular instant.
Section~2 begins with a brief presentation of
conventional measures of fabric and stress and
then presents a rational, mathematical approach to the
manner in which fabric anisotropy emerges and evolves during loading.
The basis is an idea recently proposed by \citet{Ma:2006a}, and the
Paper goes beyond this original idea by supplying additional terms
and developing expressions for all terms. Although the first part
of the paper is primarily analytical, in Section~\ref{sec:Quantifying}
the Paper proposes specific forms for the various mathematical terms
and uses discrete element (DEM) simulations to verify and quantify
these terms.
The DEM simulations are of densely packed durable spheres,
and the data analysis concentrates on soil behavior at large, failure
strains, specifically on fabric and strength at the critical state.
Once the theory has been developed and quantified,
Section~\ref{sec:pi_plane}
applies the theory to predict the effect of the intermediate principal
stress on soil strength at the critical state.
The theory provides
an explanation for the shape of the critical state yield surface,
and the predictions are compared with published results and with DEM
simulations.
\section{\label{sec:Theory}Fabric rate equations}
The Paper pursues two anisotropies, of contact orientation and of
contact force, and models their evolution during bulk loading. Fabric
anisotropy will be expressed in terms of the \emph{contact density}
$\widehat{g}(\mathbf{n})$, a function that describes contact orientation
within an assembly of particles. The individual orientation $\mathbf{n}^{m}$
of a single $m^{\mathrm{th}}$ contact is the unit vector normal
to the surfaces of two particles at their contact point (Fig.~\ref{fig:particles}).%
\begin{figure}
\centering
\includegraphics{Fig_1.eps}
\caption{Two particles at a contact $m$.}
\label{fig:particles}
\end{figure}
The density $\widehat{g}(\mathbf{n})$ within an entire assembly is
the average number of contacts having a common orientation $\mathbf{n}$,
as expressed per particle and per unit of area on the unit sphere
(i.e., per steradian of solid angle). If a granular assembly contains
$N$ particles having $M$ contacts, then the bulk average of contacts
per particle, $M/N$, is the integral of density $\widehat{g}(\mathbf{n})$
across the unit sphere,\[
M/N=\int_{\Omega}\widehat{g}(\mathbf{n})\, d\Omega=4\pi\widehat{g_{\text{avg}}}\]
where $\Omega$ is the unit sphere surface and $\widehat{g_{\text{avg}}}$
is the average density. Note that the more commonly used \emph{average
coordination number} is simply twice $M/N$. An initially isotropic
particle arrangement, with no preferred direction of contact orientation,
has a uniform density $\widehat{g}(\mathbf{n})=(M/N)/(4\pi)$ across
all of $\Omega$, measured in contacts per particle per steradian
of solid angle. Upon loading, the density becomes anisotropic, with
larger values in the direction of compressive loading.
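The normalization $M/N=\int_{\Omega}\widehat{g}\,d\Omega$ can be checked numerically. The sketch below uses a hypothetical anisotropic density with a second-order (deviatoric) term whose amplitude $a$ is an assumed value; since the deviatoric part integrates to zero over the sphere, the total contact count is preserved. The assembly size matches the DEM runs described later in the Paper (4096 particles, about 8190 contacts).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4096, 8190              # assembly size, as in the Paper's DEM runs
g_avg = (M / N) / (4 * np.pi)  # average density, contacts/particle/steradian

# Hypothetical anisotropic density with a deviatoric (second-order) term;
# its sphere integral still equals M/N because the deviatoric part averages
# to zero over all orientations.
a = 0.4                        # assumed anisotropy amplitude
def g(n):
    return g_avg * (1.0 + a * (3.0 * n[:, 0]**2 - 1.0) / 2.0)

v = rng.normal(size=(400_000, 3))                # uniform directions on the sphere
n = v / np.linalg.norm(v, axis=1, keepdims=True)
integral = 4 * np.pi * g(n).mean()               # Monte-Carlo integral over Omega
print(abs(integral - M / N) < 1e-2)              # recovers M/N ~= 2.0
```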
\par
In a similar manner, we can consider a {}``$\:\widehat{\:}\:$''
\emph{density of contact force}, $\widehat{\mathbf{f}}(\mathbf{n})$, also
a function of contact orientation $\mathbf{n}$. This vector density
has units of force per particle per steradian on the unit sphere.
The \emph{average force} among those contacts having a particular
orientation $\mathbf{n}$ will be written as $\overline{\mathbf{f}}(\mathbf{n})$
and is simply the force density divided by the contact density:
\begin{equation}
\overline{\mathbf{f}}(\mathbf{n})=\widehat{\mathbf{f}}(\mathbf{n})\,/\,\widehat{g}(\mathbf{n})
\label{eq:avg_f}
\end{equation}
\par
To relate these densities to stress, we begin with the Cauchy formula
for the average stress $\boldsymbol{\sigma}$ within a granular region\[
\boldsymbol{\sigma}=\frac{1}{V}\sum_{m=1}^{M}\mathbf{l}^{m}\otimes\mathbf{f}^{m}\]
a discrete sum of dyadic products $\mathbf{l}^{m}\otimes\mathbf{f}^{m}$
(= $l_{i}^{m}f_{j}^{m}$) for the $M$ contacts within the region's
volume $V$. Vector $\mathbf{f}^{m}$ is the contact force, and $\mathbf{l}^{m}$
is the branch vector between the centers of two contacting particles
(Fig.~\ref{fig:particles}). For spherical particles, vector $\mathbf{l}^{m}$
is the product $\ell^{m}\mathbf{n}^{m}$ of the branch length $\ell^{m}$
and the contact's unit normal vector $\mathbf{n}^{m}$, so that
\begin{equation}
\boldsymbol{\sigma}=\frac{1}{V}\sum_{m=1}^{M}\ell^{m}\mathbf{n}^{m}\otimes\mathbf{f}^{m}\approx\frac{\overline{\ell}}{V}\sum_{m=1}^{M}\mathbf{n}^{m}\otimes\mathbf{f}^{m}
\label{eq:stress2}
\end{equation}
where the approximation includes the average branch length $\overline{\ell}$
among all contacts. In making this approximation, we ignore the small
correlation between individual lengths $\ell^{m}$ and the corresponding
contact directions and forces. Noting that $\widehat{\mathbf{f}}(\mathbf{n})$
is the density of contact force per particle, the average stress within
a sphere assembly can be expressed as
\begin{equation}
\boldsymbol{\sigma}\approx\frac{\overline{\ell}N}{V}\int_{\Omega}\mathbf{n}\otimes\widehat{\mathbf{f}}(\mathbf{n})\, d\Omega=\frac{\overline{\ell}}{\overline{v}}\int_{\Omega}\mathbf{n}\otimes\widehat{\mathbf{f}}(\mathbf{n})\, d\Omega
\label{eq:Stress3}
\end{equation}
with $\mathbf{n}\otimes\widehat{\mathbf{f}}(\mathbf{n})=n_{i}\widehat{f}_{j}(\mathbf{n})$.
In this form, we replace the discrete sum in Eq.~(\ref{eq:stress2})
with an integral of density on the unit sphere $\Omega$, and we introduce
$\overline{v}$, the average volume of a particle and its associated
void space ($\overline{v}=V/N$). Because the force density $\widehat{\mathbf{f}}(\mathbf{n})$
is the product $\widehat{g}(\mathbf{n})\overline{\mathbf{f}}(\mathbf{n})$,
deviatoric stress is the result of anisotropies in both contact orientation
$\widehat{g}(\mathbf{n})$ and average contact force $\overline{\mathbf{f}}(\mathbf{n})$
(see \citealp{Rothenburg:1989a}).
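The discrete Cauchy sum of Eq.~(\ref{eq:stress2}) is straightforward to evaluate from micro-data. A minimal sketch with hypothetical branch vectors and forces (the larger force on the $\mathbf{e}_{1}$-oriented contact mimics force anisotropy producing deviatoric stress):

```python
import numpy as np

def cauchy_stress(branches, forces, volume):
    """Average stress as the discrete dyadic sum (1/V) sum_m l^m (x) f^m."""
    return sum(np.outer(l, f) for l, f in zip(branches, forces)) / volume

# Hypothetical micro-data: three contacts in a unit volume.
branches = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]
forces   = [np.array([2.0, 0.0, 0.0]),   # larger force in the e1 direction
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]
sigma = cauchy_stress(branches, forces, volume=1.0)
print(np.diag(sigma))   # deviatoric stress from force anisotropy: [2. 1. 1.]
```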
\par
The above principles apply to the \emph{state} of anisotropy at any instant
and are well established in the literature
(for reviews, see \citealt{Oda:1999b,Nemat-Nasser:2001b}).
Equation~(\ref{eq:Stress3}) will henceforth be used to investigate
the rates of fabric and stress evolution
during loading, the primary intent of the Paper. The stress
rate will depend, in part, upon the rate at which the contact force
density $\widehat{\mathbf{f}}(\mathbf{n})$ evolves, which will be
written as $\left.\partial\widehat{\mathbf{f}}(\mathbf{n},t)/\partial t\right|_{\mathbf{n}}$,
or simply $\partial\widehat{\mathbf{f}}(\mathbf{n})/\partial t$.
The partial derivatives emphasize that the rate of density $\widehat{\mathbf{f}}(\mathbf{n})$
is measured at fixed orientations $\mathbf{n}$, even though the motions
of individual particles will cause contacts to pass through any given
orientation. Focusing attention on small fixed portions $d\Omega$
of the unit sphere, the stress rate $\dot{\boldsymbol{\sigma}}$ is
computed from the rate at which force density $\widehat{\mathbf{f}}(\mathbf{n})$
changes within these regions:
\begin{equation}
\dot{\boldsymbol{\sigma}}=-\frac{\dot{v}}{\overline{v}}\boldsymbol{\sigma}+\frac{\overline{\ell}}{\overline{v}}\int_{\Omega}\mathbf{n}\otimes\left.\frac{\partial}{\partial t}\widehat{\mathbf{f}}(\mathbf{n})\right|_{\mathbf{n}}\, d\Omega
\label{eq:stress_rate1}
\end{equation}
again assuming spherical particles. This rate expression includes
a possible volume change during the loading process ($\dot{v}=d(V/N)/dt=d\overline{v}/dt$)
but assumes that the average branch length $\overline{\ell}$ remains
constant,
an assumption that is appropriate for hard particles having a small
ratio of stress to shear modulus, $\sigma/G$.
\par
Particles will roll and slide across each other during loading, and
the orientation of any particular contact, say $\mathbf{n}^{m}$,
can shift to a new orientation $\mathbf{n}^{m}+\dot{\mathbf{n}}^{m}dt$.
These contact movements can be highly erratic, but when tracked by
the thousands, contacts are observed to migrate from directions of
bulk compression toward directions of extension. This average, prevailing
migration rate, denoted as $\dot{\mathbf{n}}(\mathbf{n})$, is a vector
field tangent to (and on) the unit sphere and has units of radians
per time. The migration $\dot{\mathbf{n}}(\mathbf{n})$ is a function
of orientation $\mathbf{n}$ and will depend upon the loading
process. As an example, Fig.~\ref{fig:migration} illustrates the
average migration vectors among over 800,000 contacts, as measured
in DEM simulations of sustained flow during biaxial plane-strain compression
at the critical (steady) state (see Sections~\ref{section:DEM}--\ref{section:Migration_rate}). %
\begin{figure}
\centering
\includegraphics{Fig_2.eps}
\caption{Migration vectors $\dot{\mathbf{n}}$ and
$\dot{\mathbf{n}}^{\mathrm{proj}}$
on the unit sphere during plane-strain biaxial compression at the
critical state, from DEM simulations
(Sections~\ref{section:DEM} and~\ref{section:Migration_rate}).}
\label{fig:migration}
\end{figure}
As permitted by the symmetry of these loading conditions, the results
have been folded into a single octant of the unit sphere. Contacts
are seen to migrate (flow) from the compression direction, $\mathbf{n=\mathbf{e}}_{1}$,
toward the zero-strain direction $\mathbf{\mathbf{e}}_{2}$ and
toward the extension direction $\mathbf{\mathbf{e}}_{3}$. When
considered alone, this migration will be seen to have a softening
effect: by transporting contacts and their forces from the direction
of compression toward the direction of extension, migration usually diminishes
both fabric anisotropy and deviatoric stress. A functional form of
the migration $\dot{\mathbf{n}}(\mathbf{n})$ is proposed in Section~\ref{section:Migration_rate}.
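As a purely illustrative stand-in for the measured migration field (not the fitted form proposed later), the sketch below evolves a cloud of contact normals under an assumed affine, mean-field rule $\dot{\mathbf{n}}=\mathbf{D}\mathbf{n}-(\mathbf{n}\cdot\mathbf{D}\mathbf{n})\mathbf{n}$, which is tangent to the unit sphere by construction and reproduces the qualitative drift of Fig.~\ref{fig:migration} from the compression axis toward extension.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=(5000, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic contact normals

# Assumed affine (mean-field) migration: ndot = D n - (n . D n) n
D = np.diag([-1.0, 0.0, 1.0])   # compression along e1, extension along e3
dt, steps = 0.01, 50
for _ in range(steps):
    Dn = n @ D.T
    ndot = Dn - np.sum(n * Dn, axis=1, keepdims=True) * n
    n = n + dt * ndot
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # re-project to the sphere

# Normals drift away from the compression axis toward extension:
print(np.mean(n[:, 0]**2) < 1/3 < np.mean(n[:, 2]**2))   # True
```

Real contact migration is far more erratic than this affine picture, which is why the Paper extracts $\dot{\mathbf{n}}(\mathbf{n})$ statistically from DEM data.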
\par
The stress evolution in Eq.~(\ref{eq:stress_rate1}) depends on the
rate of force density $\partial\widehat{\mathbf{f}}(\mathbf{n})/\partial t$,
a rate that will depend upon the interactions of the particles and
also upon the prevailing contact migration $\dot{\mathbf{n}}(\mathbf{n})$.
The density rate can be viewed as a transport problem on the surface
of the unit sphere. In this sense, the rate $\partial\widehat{\mathbf{f}}(\mathbf{n},t)/\partial t$
at a given, fixed orientation $\mathbf{n}$ is
described by the Fokker-Planck equation with an additional source density:
\begin{equation}
\left.\frac{\partial\widehat{\mathbf{f}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=\frac{d\widehat{\mathbf{f}}(\mathbf{n})}{dt}\,-\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\left(\dot{\mathbf{n}}\otimes\widehat{\mathbf{f}}(\mathbf{n})\right)+\left(\frac{\partial\widehat{\mathbf{f}}(\mathbf{n})}{\partial t}\right)_{\mathrm{diff}}
\label{eq:f_rate}
\end{equation}
where $\boldsymbol{\nabla}_{\!_{\Omega}}$ is the gradient $\partial(\bullet)/\partial\mathbf{n}$
on the unit sphere, and $\boldsymbol{\nabla}_{\!_{\Omega}}\cdot(\bullet)$
is the corresponding divergence, $\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\mathbf{a}=a_{k,k}$.
This general form, except for the final (diffusion) term
and certain details among the other terms, was recently
proposed by \citet{Ma:2006a}. We describe the nature of each term
in the remainder of this section, noting that all are amenable to
rational analysis and to experimental measurement. In Section~\ref{sec:Quantifying},
we propose more specific forms for each term, verify these forms with
DEM data, and quantify the relevant material parameters that appear
within the specific forms. As a complement to the force rate in Eq.~(\ref{eq:f_rate}),
we must also consider the rate of contact density, which is also the
result of three terms:%
\begin{equation}
\left.\frac{\partial\widehat{g}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=\frac{d\widehat{g}(\mathbf{n})}{dt}\,-\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\left(\dot{\mathbf{n}}\,\widehat{g}(\mathbf{n})\right)+\left(\frac{\partial\widehat{g}(\mathbf{n})}{\partial t}\right)_{\mathrm{diff}}
\label{eq:g_rate}
\end{equation}
where the gradient $\boldsymbol{\nabla}_{\!_{\Omega}}\cdot(\dot{\mathbf{n}}\,\widehat{g})=(\dot{n}_{i}\widehat{g})_{,i}$.
\citet{Didwania:2001a} proposed an expression
similar to Eq.~(\ref{eq:g_rate}) for the evolution of
contact density, although the mean-field rotation of particle pairs was used
in place of the more general migration $\dot{\mathbf{n}}$.
\par
The first terms on the right of Eqs.~(\ref{eq:f_rate}) and~(\ref{eq:g_rate}),
both $d\widehat{\mathbf{f}}/dt$ and
$d\widehat{g}/dt$, are \emph{source densities}, representing the
rates of force and contact density that are generated by the particle
interactions. The other terms account for convection and diffusion
processes which are associated with contact migration, as will be
discussed later.
\par
Because contact density $\widehat{g}(\mathbf{n})$ is a scalar field,
its {}``$d\,$'' source density $d\widehat{g}(\mathbf{n})/dt$ is
somewhat easier to describe and measure than is the vector force rate
$d\widehat{\mathbf{f}}/dt$. Contacts are continually created and
broken during deviatoric loading,
producing the net rate $d\widehat{g}(\mathbf{n})/dt$
at any given orientation $\mathbf{n}$. Unlike the force rate, the
contact rate is entirely a material rate,
$d\widehat{g}(\mathbf{n})/dt=
(\partial\widehat{g}(\mathbf{n})/\partial t)_{\text{matl}}$,
and will be expressed as such in the remainder of the paper. As an
example, in our DEM simulations of dense particle packings, 4096 spherical
particles would typically touch at about 8190 contacts while the assembly
was flowing at the critical (steady) state in sustained biaxial plane-strain
compression (Section~\ref{section:DEM}). During such flow, over
6000 contacts were created and about the same number were broken with
each 1\% of continued deviatoric strain, yet maintaining the nearly
constant 8190 contacts at any instant. Contacts were predominantly
created in the compressive strain direction~--- at orientation $\mathbf{n=\mathbf{e}}_{1}$
in the simulations (Fig.~\ref{fig:migration})~--- and were predominantly
extinguished in the direction of extension, $\mathbf{n=\mathbf{e}}_{3}$.
This pattern of contact activity produces the net material rate and
is responsible for induced fabric anisotropy. In Section~\ref{section:contact_matl_rate},
we quantify the material rate, which is shown to be a function of
orientation $\mathbf{n}$, the loading conditions, and the material
characteristics of the particles.
\par
The source density of force, the $d\widehat{\mathbf{f}}/dt$ rate
in Eq.~(\ref{eq:f_rate}), is similar to the contact rate $d\widehat{g}/dt$
but also involves rotation of a vector field~--- a rotation that will
always accompany contact migration. For example, a single contact
force $\mathbf{f}^{m}$ between two particles will rotate as the
particles roll across each other, inducing a certain force increment
during the contact rotation $d\mathbf{n}^{m}$ (see \citealp{Kuhn:2005b}).
This \emph{induced rotational increment} must be added to any \emph{material
increment} in the force that might result from a changing indentation
at the contact:%
\begin{align}
\begin{split}d\mathbf{f}^{m}= & \left(d\mathbf{f}^{m}\right)_{\text{matl}}+\mathbf{f}^{m}\times(d\mathbf{n}^{m}\times\mathbf{n}^{m})
\\
& +\frac{1}{2}\left[\left(d\boldsymbol{\theta}^{i}+d\boldsymbol{\theta}^{j}\right)\cdot\mathbf{n}^{m}\right]\left(\mathbf{n}^{m}\times\mathbf{f}^{m}\right)
\end{split}
\label{eq:dfm_induced}
\end{align}
using the cross product $\mathbf{a}\times\mathbf{b}=e_{ijk}a_{j}b_{k}$
and inner product $\mathbf{a}\cdot\mathbf{b}=a_{i}b_{i}$.
The first term is the material change produced by the indentation process;
the second term is an induced increment produced
by any \emph{tilting increment} $d\mathbf{n}^{m}$
of the particle pair; and the final term is an induced \emph{twirling
increment} which accompanies any rigid rotation of the two particles,
$d\boldsymbol{\theta}^{i}$ and $d\boldsymbol{\theta}^{j}$, about
their common normal axis $\mathbf{n}^{m}$.
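The two induced terms of Eq.~(\ref{eq:dfm_induced}) can be evaluated directly. A small sketch with hypothetical contact data (the function name and the particular numbers are illustrative):

```python
import numpy as np

def induced_increment(f, n, dn, dtheta_i, dtheta_j):
    """Rotation-induced part of Eq. (dfm_induced): the tilting term
    f x (dn x n) plus the twirling term about the common normal n."""
    tilt  = np.cross(f, np.cross(dn, n))
    twirl = 0.5 * np.dot(dtheta_i + dtheta_j, n) * np.cross(n, f)
    return tilt + twirl

n   = np.array([0.0, 0.0, 1.0])      # contact normal
f   = np.array([0.2, 0.0, -1.0])     # contact force, mostly compressive
dn  = np.array([1e-3, 0.0, 0.0])     # small tilt of the contact pair
dth = np.array([0.0, 0.0, 2e-3])     # both particles twirl about n
df = induced_increment(f, n, dn, dth, dth)
print(df)   # sum of the tilting and twirling contributions
```

The material increment produced by changing indentation would be added separately, as the first term of Eq.~(\ref{eq:dfm_induced}).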
The source density $d\widehat{\mathbf{f}}(\mathbf{n})/dt$
can likewise be viewed as the sum of a material rate and an induced
rotation.
This analysis is made easier by first treating force density
as the sum of two parts: a normal force density and a tangential force
density,
\begin{equation}
\widehat{\mathbf{f}}(\mathbf{n})=-\widehat{f^{\textrm{n}}}(\mathbf{n})\,\mathbf{n}+\widehat{f^{\textrm{t}}}(\mathbf{n})\,\mathbf{t}(\mathbf{n})\label{eq:fn_plus_ft}
\end{equation}
where $\widehat{f^{\textrm{n}}}(\mathbf{n})$ and $\widehat{f^{\textrm{t}}}(\mathbf{n})$
are scalar densities on the unit sphere, and $\mathbf{t}(\mathbf{n})$
is the average unit direction of tangential force at the particular
orientation $\mathbf{n}$. A compressive normal force $\widehat{f^{\textrm{n}}}$
is considered positive. With this approach, the source density in
Eq.~(\ref{eq:f_rate}) is the sum of two rates,
\begin{equation}
\frac{d\widehat{\mathbf{f}}(\mathbf{n})}{dt}
= \frac{d\widehat{\mathbf{f}^{\textrm{n}}}(\mathbf{n})}{dt}
+\frac{d\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{dt}\label{eq:df_normal_tangent}
\end{equation}
and each vector rate is the sum of a material rate and an induced
rotation, as in Eq.~(\ref{eq:dfm_induced}),
\begin{align}
\begin{split}
\frac{d\widehat{\mathbf{f}^{\textrm{n}}}(\mathbf{n})}{dt}= & -\left(\frac{\partial\widehat{f^{\textrm{n}}}(\mathbf{n})}{\partial t}\right)_{\textrm{matl}}\!\!\!\mathbf{n}
\\
& -\widehat{f^{\textrm{n}}}(\mathbf{n})\,\mathbf{n}\times\left(\dot{\mathbf{n}}(\mathbf{n})\times\mathbf{n}\right)
\end{split}
\label{eq:fn_source_rate}
\end{align}
\begin{multline}
\frac{d\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{dt}=\left(\frac{\partial\widehat{f^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\textrm{matl}}\!\!\!\!\!\mathbf{s}(\mathbf{n})\\
+\widehat{f^{\textrm{t}}}(\mathbf{n})\left[\mathbf{t}(\mathbf{n})\times\left(\dot{\mathbf{n}}(\mathbf{n})\times\mathbf{n}\right)\right]+\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{twirl}}
\label{eq:ft_source_rate}
\end{multline}
\noindent%
The vector field $\mathbf{s}(\mathbf{n})$ is the unit direction
of the tangential material rate, a direction that might differ from
the current direction $\mathbf{t}(\mathbf{n})$ of tangential force
$\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})$. The final term in
Eq.~\eqref{eq:ft_source_rate} is the net effect of the contacts'
twirling upon the force density. Twirling takes place within the tangent
plane and does not alter the normal force density, so this effect
is absent in Eq.~(\ref{eq:fn_source_rate}).
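The split in Eq.~(\ref{eq:fn_plus_ft}) is easily inverted: given a force-density vector and its orientation, the scalar densities follow from projections. A brief sketch with illustrative values:

```python
import numpy as np

def decompose_force_density(f_hat, n):
    """Recover the scalar densities of Eq. (fn_plus_ft) from a
    force-density vector: f_hat = -fn*n + ft*t, compressive fn > 0."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    fn = -np.dot(f_hat, n)            # compressive normal density
    tangential = f_hat + fn * n       # remove the normal part
    ft = np.linalg.norm(tangential)
    t = tangential / ft if ft > 0.0 else np.zeros(3)
    return fn, ft, t

n = np.array([0.0, 0.0, 1.0])
f_hat = np.array([0.3, 0.0, -1.5])    # mostly compressive
fn, ft, t = decompose_force_density(f_hat, n)
```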
\par
We now consider the second terms on the right of
Eqs.~(\ref{eq:f_rate}) and~(\ref{eq:g_rate}), which
include the gradient $\boldsymbol{\nabla}_{\!_{\Omega}}$
applied on the curved surface of the unit sphere. These terms account
for divergence and convection effects.
For example, the gradient of
contact density $\widehat{g}(\mathbf{n})$ in Eq.~(\ref{eq:g_rate}),
as introduced by \citet{Ma:2006a},
can be expanded as two sub-terms: $\boldsymbol{\nabla}_{\!_{\Omega}}\cdot(\dot{\mathbf{n}}\,\widehat{g}(\mathbf{n}))=(\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\dot{\mathbf{n}})\,\widehat{g}(\mathbf{n})+\dot{\mathbf{n}}\cdot\left(\boldsymbol{\nabla}_{\!_{\Omega}}\widehat{g}(\mathbf{n})\right)$.
The first of these sub-terms, the divergence rate $(\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\dot{\mathbf{n}})\,\widehat{g}(\mathbf{n})=\dot{n}_{i,i}\widehat{g}(\mathbf{n})$,
accounts for either a spreading or converging migration that will
rarefy or accumulate contacts at particular orientations $\mathbf{n}$.
In Fig.~\ref{fig:migration}, for example, migration spreads (diverges)
from the compressive $\mathbf{\mathbf{e}}_{1}$ direction and converges
into the extensional $\mathbf{\mathbf{e}}_{3}$ direction. The other
sub-term, the convective rate $\dot{\mathbf{n}}\cdot\boldsymbol{\nabla}_{\!_{\Omega}}\widehat{g}(\mathbf{n})=\dot{n}_{i}\widehat{g}_{,i}$,
addresses the drift of contact density: for example, the convection
of a higher contact density, moving at rate $\dot{\mathbf{n}}$,
toward a lower density, thereby increasing contact density at the
latter orientation.
\par
The gradient of force density that appears in Eq.~(\ref{eq:f_rate})
is similar to the gradient of contact density in Eq.~(\ref{eq:g_rate}),
except that force density is a vector field, whose gradient
can be analyzed by separating force into its normal
and tangential parts, as in Eq.~(\ref{eq:fn_plus_ft}),
\begin{equation}
\begin{split}
\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\left(\dot{\mathbf{n}}\otimes\widehat{\mathbf{f}}(\mathbf{n})\right)= & -\dot{n}_{j,j}\widehat{f^{\textrm{n}}}n_{i}-\dot{n}_{j}\widehat{f^{\textrm{n}}}_{,j}n_{i}-\dot{n}_{j}\widehat{f^{\textrm{n}}}n_{i,j}\\
& +\dot{n}_{j,j}\widehat{f^{\textrm{t}}}t_{i}+\dot{n}_{j}\widehat{f^{\textrm{t}}}_{,j}t_{i}+\dot{n}_{j}\widehat{f^{\textrm{t}}}t_{i,j}
\end{split}
\label{eq:grad_force}
\end{equation}
The third and sixth terms can be written as
\begin{align}
-\dot{n}_{j}\widehat{f^{\textrm{n}}}\, n_{i,j}= & -\widehat{f^{\textrm{n}}}(\mathbf{n})\,\dot{\mathbf{n}}(\mathbf{n})
\label{eq:grad_fn_simple}
\\
\dot{n}_{j}\widehat{f^{\textrm{t}}}\, t_{i,j}= & \widehat{f^{\textrm{t}}}(\mathbf{n})\,\dot{\mathbf{t}}(\mathbf{n})
\label{eq:grad_ft_simple}
\end{align}
Because $\dot{\mathbf{n}}$ is normal to $\mathbf{n}$, the
quantity $\widehat{f^{\textrm{n}}}(\mathbf{n})\,\dot{\mathbf{n}}$
represents a tangential rotation of normal force along a migration
path $\dot{\mathbf{n}}$. The vector field $\dot{\mathbf{t}}(\mathbf{n})$
in Eq.~(\ref{eq:grad_ft_simple}) is the change in direction of the tangential
force density along contact migration paths. This vector rate is orthogonal
to $\mathbf{t}(\mathbf{n})$, so that $\dot{\mathbf{t}}(\mathbf{n})$
will have a normal component and, possibly, a tangential component.
A tangential component of $\dot{\mathbf{t}}(\mathbf{n})$ occurs at any
orientation where $\mathbf{t}$ veers in direction along a path $\dot{\mathbf{n}}$:
in Fig.~\ref{fig:migration}, for example, migration arrows are seen
to veer toward the east as they approach the
$\mathbf{e}_{2}$--$\mathbf{e}_{3}$ equator.
The two parts of $\widehat{f^{\textrm{t}}}(\mathbf{n})\dot{\mathbf{t}}(\mathbf{n})$,
normal and veering, can be written as
\begin{equation}
\widehat{f^{\textrm{t}}}(\mathbf{n})\,\dot{\mathbf{t}}(\mathbf{n})=-\widehat{f^{\text{t}}}(\mathbf{n})\,\left[\mathbf{t}\cdot\dot{\mathbf{n}}(\mathbf{n})\right]\mathbf{n}+\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{veer}}
\label{eq:ft_w_veer}
\end{equation}
The final, veering part lies in the tangent plane but is orthogonal to
the tangential direction $\mathbf{t}(\mathbf{n})$, as will be demonstrated
in an analysis of DEM data in Section~\ref{sub:Twirl_veer}.
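Because $\mathbf{t}\cdot\mathbf{n}=0$ implies $\dot{\mathbf{t}}\cdot\mathbf{n}=-\mathbf{t}\cdot\dot{\mathbf{n}}$, the split of Eq.~(\ref{eq:ft_w_veer}) can be sketched by projecting $\widehat{f^{\text{t}}}\dot{\mathbf{t}}$ onto $\mathbf{n}$. A numeric illustration with arbitrary values:

```python
import numpy as np

def veer_decomposition(ft_density, tdot, n):
    """Split ft*tdot into a normal part and a veering part, as in
    Eq. (ft_w_veer); tdot is the rate of the unit direction t and is
    orthogonal to t, so the remainder lies in the tangent plane."""
    n = np.asarray(n, float)
    normal_part = ft_density * np.dot(tdot, n) * n
    veer_part = ft_density * np.asarray(tdot, float) - normal_part
    return normal_part, veer_part

n = np.array([0.0, 0.0, 1.0])
t = np.array([1.0, 0.0, 0.0])
tdot = np.array([0.0, 0.2, -0.5])     # orthogonal to t by construction
normal_part, veer_part = veer_decomposition(0.8, tdot, n)
```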
\par
The full rate of force density that produces the stress rate in Eq.~(\ref{eq:stress_rate1})
can be separated into normal and tangential parts,
\begin{equation}
\left.\frac{\partial\widehat{\mathbf{f}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=\left.\frac{\partial\widehat{\mathbf{f}^{\textrm{n}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}+\left.\frac{\partial\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}
\label{eq:ftfn_rate}
\end{equation}
The two parts are expanded by combining Eqs.~(\ref{eq:f_rate})
and~(\ref{eq:df_normal_tangent})--\eqref{eq:ft_w_veer}:
\begin{multline}
\left.\frac{\partial\widehat{\mathbf{f}^{\textrm{n}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=-\left[\left(\frac{\partial\widehat{f^{\textrm{n}}}(\mathbf{n})}{\partial t}\right)_{\textrm{matl}}\!\!\!-\widehat{f^{\textrm{n}}}(\mathbf{n})\left(\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\dot{\mathbf{n}}(\mathbf{n})\right)\right.\\
\left.-\dot{\mathbf{n}}(\mathbf{n})\cdot\left(\boldsymbol{\nabla}_{\!_{\Omega}}\widehat{f^{\textrm{n}}}(\mathbf{n})\right)+\left(\frac{\partial\widehat{f^{\textrm{n}}}(\mathbf{n})}{\partial t}\right)_{\mathrm{diff}}\right]\mathbf{n}
\label{eq:rate_fn}
\end{multline}
and
\begin{align}
\begin{split}
\left.\frac{\partial\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}= & \left(\frac{\partial\widehat{f^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\textrm{matl}}\!\!\mathbf{s}(\mathbf{n})\\
& -\widehat{f^{\textrm{t}}}(\mathbf{n})\left[\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\dot{\mathbf{n}}(\mathbf{n})\right]\mathbf{t}(\mathbf{n})\\
& -\dot{\mathbf{n}}(\mathbf{n})\cdot\left(\boldsymbol{\nabla}_{\!_{\Omega}}\widehat{f^{\textrm{t}}}(\mathbf{n})\right)\,\mathbf{t}(\mathbf{n})+\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{twirl}}\\
& -\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{veer}}+\left(\frac{\partial\widehat{f^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\mathrm{diff}}\!\!\mathbf{t}(\mathbf{n})
\end{split}
\label{eq:rate_ft}
\end{align}
in which we have applied the identity $\mathbf{a}\times(\mathbf{b}\times\mathbf{c})=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{a}\cdot\mathbf{b})$,
noting that $\mathbf{t}\cdot\mathbf{n}=0$, $\dot{\mathbf{n}}\cdot\mathbf{n}=0$,
and $\mathbf{n}\cdot\mathbf{n}=1$.
The forms of the material force
rates $\left(\bullet\right)_{\mathrm{matl}}$ and of the twirling
and veering rates are settled by analyzing DEM results in
Sections~\ref{section:normal_force} and~\ref{section:tangent_force}.
Equations~(\ref{eq:rate_fn}) and~(\ref{eq:rate_ft})
differ from those of \citet{Ma:2006a}, with the inclusion
of the twirling, veering, and diffusion terms and the mutual cancelling
of the tilting terms that arise in
Eqs.~(\ref{eq:df_normal_tangent})--\eqref{eq:ft_w_veer}.
\par
Until now, we have considered the average contact migration $\dot{\mathbf{n}}$
and its effect on fabric and stress rates.
We must also consider the random fluctuations
among individual contact motions~--- fluctuations that produce the
diffusion terms in Eqs.~(\ref{eq:g_rate}), (\ref{eq:rate_fn}),
and (\ref{eq:rate_ft}). The tangential rate $\dot{\mathbf{n}}^{m}$
of an individual contact $m$, oriented in the direction $\mathbf{n}^{m}$,
is the sum of the prevailing (mean) migration $\dot{\mathbf{n}}$ and
the individual's fluctuation $\delta\dot{\mathbf{n}}^{m}$ from the mean:
\begin{equation}
\dot{\mathbf{n}}^{m}=\dot{\mathbf{n}}(\mathbf{n}^{m})+\delta\dot{\mathbf{n}}^{m}
\label{eq:Fluctuations}
\end{equation}
\noindent These fluctuations are quite large and can produce a diffusion
of the contact density $\widehat{g}$ during bulk deformation.
Contact diffusion at an orientation $\mathbf{n}$~--- the final term
in Eq.~(\ref{eq:g_rate})~--- is driven by the ongoing deformation
of the bulk material, causing contacts to diffuse (disperse) from
orientations of high contact concentration toward orientations of
lower concentration. This phenomenon is distinct from convection and
divergence~--- the {}``$\boldsymbol{\nabla}_{\!_{\Omega}}$'' term
in Eq.~(\ref{eq:g_rate})~--- in which contacts are swept along
by the prevailing rate $\dot{\mathbf{n}}$. Contact diffusion
can be modelled with the classical diffusion equation,
in the form
\begin{equation}
\left(\frac{\partial\widehat{g}(\mathbf{n},t)}{\partial t}\right)_{\text{diff}}=D_{g}\nabla^{2}\widehat{g}(\mathbf{n},t){\,\dot{\epsilon}}_{\text{oct}}
\label{eq:ContactDiffusion}
\end{equation}
where $\nabla^{2}\widehat{g}=\widehat{g}_{,kk}$ is the Laplacian
of $\widehat{g}$ on the surface of the unit sphere and $D_{g}$ is
the diffusion coefficient that quantifies the process.
Because bulk deformation drives the diffusion,
we use the instantaneous octahedral
strain rate $\dot{\epsilon}_{\text{oct}}$ as a scalar measure of
the distortional strain ($\dot{\epsilon}_{\text{oct}}=\sqrt{D_{ij}^{\prime}D_{ij}^{\prime}/3}$;
$D_{ij}^{\prime}=D_{ij}-\frac{1}{3}D_{kk}\delta_{ij}$), although
other measures could be used as well. In Section~\ref{section:Diffusion_rates},
we provide further reasoning for the form (\ref{eq:ContactDiffusion})
and present the means of extracting the diffusion coefficient $D_{g}$
from DEM simulations.
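As a concrete illustration of this scalar measure, the octahedral rate in Eq.~(\ref{eq:ContactDiffusion}) can be computed directly from a deformation-rate tensor. In the sketch below, the diffusion coefficient and Laplacian are placeholders, not measured values:

```python
import numpy as np

def octahedral_strain_rate(D):
    """eps_oct = sqrt(D'_ij D'_ij / 3), with D' the deviator of D."""
    D = np.asarray(D, float)
    Dp = D - np.trace(D) / 3.0 * np.eye(3)
    return np.sqrt(np.tensordot(Dp, Dp) / 3.0)

# uniaxial compression at rate edot in the x1 direction
edot = 1.0e-3
D = np.diag([-edot, 0.0, 0.0])
eps_oct = octahedral_strain_rate(D)

# diffusion source term of Eq. (ContactDiffusion), with a placeholder
# diffusion coefficient D_g and Laplacian of g at some orientation
D_g, lap_g = 0.1, -2.5
g_diff_rate = D_g * lap_g * eps_oct
```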
\par
A diffusion of force density $\widehat{\mathbf{f}}(\mathbf{n})$
will accompany the diffusion of contact density $\widehat{g}(\mathbf{n})$:
contact forces are dispersed with their contacts during deformation.
The following forms of force diffusion accrue from Eq.~(\ref{eq:avg_f})
and complement the contact diffusion of Eq.~(\ref{eq:ContactDiffusion}):
\begin{equation}
\left(\frac{\partial\widehat{f^{\mathrm{n}}}(\mathbf{n},t)}{\partial t}\right)_{\text{diff}}=D_{g}\overline{f^{\text{n}}}\left(\mathbf{n},t\right)\nabla^{2}\widehat{g}(\mathbf{n},t)\,\dot{\epsilon}_{\mathrm{oct}}
\label{eq:fn_diffusion}
\end{equation}
\begin{equation}
\left(\frac{\partial\widehat{f^{\mathrm{t}}}(\mathbf{n},t)}{\partial t}\right)_{\text{diff}}=D_{g}\overline{f^{\text{t}}}\left(\mathbf{n},t\right)\nabla^{2}\widehat{g}(\mathbf{n},t)\,\dot{\epsilon}_{\mathrm{oct}}
\label{eq:ft_diffusion}
\end{equation}
\noindent which define the scalar diffusion rates that appear in
Eqs.~(\ref{eq:rate_fn}) and~(\ref{eq:rate_ft}).
\par
To summarize this section,
the rates of contact and force densities include material
rates, combined with divergence, twirling,
and diffusion effects (Eqs.~\ref{eq:f_rate} and~\ref{eq:g_rate}).
As will be seen, a sort of competition exists
between the material rate and the other effects. Anisotropy is usually
reduced by divergence, twirling, and diffusion,
as these effects tend to diminish
$\widehat{g}$ and $\widehat{\mathbf{f}}$ at those orientations $\mathbf{n}$
where the densities are large. At the same orientations, the densities
are usually replenished by the material rates, which represent a generation
of contacts and of contact force. The material rate will be seen to
dominate at the start of loading, inducing fabric anisotropy and deviatoric
stress. During failure, the material rate becomes
quite small and is counteracted
by the other rate effects, eventually leading to a steady, critical
state of fabric and stress.
\section{\label{sec:Quantifying}Quantifying fabric evolution}
In the previous section, we found that the stress rate $\dot{\boldsymbol{\sigma}}$
results from changes in the contact force density
$\widehat{\mathbf{f}}(\mathbf{n})$~--- changes
that can be separated into several rate fields.
In this section, specific forms of the various fields are adopted,
guided by DEM observations of granular behavior.
Taken together, these forms comprise the rudiments
of a constitutive model for soils and other granular materials, a
model based on micro-mechanics and informed by DEM results.
The paper focuses on behavior at the critical state,
when granular materials
reach a stationary condition: during sustained steady state flow,
the volume, the stress, the fabric, and, most notably,
the density functions $\widehat{g}(\mathbf{n})$,
$\widehat{f^{\text{n}}}(\mathbf{n})$,
and $\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})$
remain constant.
To aid understanding of the critical state behavior,
we also consider the rate of $\widehat{f^{\text{n}}}(\mathbf{n})$
at the other extreme of deformation~--- at the start of loading~---
which will provide a basis for quantifying
the corresponding critical state terms.
We begin with a brief description of the simulations.
\subsection{\label{section:DEM}DEM simulations}
DEM simulations were performed on twenty small assemblies of spherical
particles.
The simulations permitted observing a sufficiently large
number of contacts to resolve their motions, force rates,
and net creation rates across the entire unit sphere of orientations.
The assemblies contained the same set of 4096 particles,
randomly packed into cube containers with periodic boundaries
on all sides and with the dense initial conditions listed in Table~\ref{table:DEM}.%
\begin{table*}
\centering
\begin{tabular}{lc}
\hline
Characteristic & Value\\
\hline
Assemblies & 20\\
Assembly shape & cube\\
Assembly particles & 4096\\
Assembly dimension & 13.4$D_{50}$\\
Assembly boundaries & periodic\\
Particle shape & spherical\\
Particle size range & 0.4$D_{50}$ -- 1.2$D_{50}$\\
Particle shear modulus, $G$ & 29~GPa\\
Particle Poisson ratio, $\nu$ & 0.15\\
Inter-particle friction ratio & 0.50\\
Initial particle arrangement & dense, isotropic\\
Initial void ratio, solids fraction & 0.510, 0.662\\
Initial avg. pressure, $p_{\text{o}}=-\sigma_{ii}/3$ & 320~kPa\\
Initial avg. coord. no., $2M/N$ & 5.48\\
\hline
\end{tabular}
\caption{Characteristics of DEM assemblies}
\label{table:DEM}
\end{table*}
The spheres were polydisperse with diameters ranging from 0.4$D_{50}$
to 1.5$D_{50}$, where $D_{50}$ is the median diameter. The hard
non-breaking particles interact at Hertz contacts having a frictional
limit $\mu=0.5$ and a modified Mindlin tangential stiffness, as described
by \citet{Lin:1997a}. The cubic assemblies had dimensions of about
$13.4\times13.4\times13.4$ particle diameters: small enough to prevent
shear bands, yet large enough to capture the average, bulk material behavior.
Because a small assembly of only 4096 particles will exhibit substantial
spikes in stress during deviatoric loading, twenty different assemblies
were randomly created and then loaded. Their averaged behavior is
reported herein. Although several loading paths are studied, the primary
loading was slow biaxial plane-strain compression: the assemblies
were compressed by continually reducing their $x_{1}$ dimension at
a constant rate ($\dot{\epsilon}_{11}=\textrm{constant}$) while
maintaining a constant normal stress in the $x_{3}$ direction and
a constant assembly width in the $x_{2}$ direction ($\sigma_{33}$
constant $=p_{\text{o}}=320\:\text{kPa}$, and $\epsilon_{22}=0$;
see inset in Fig.~\ref{fig:migration}). Because expansion was permitted
against a constant stress $\sigma_{33}$, the simulations can be considered
as drained tests in the geotechnical sense.
The averaged results are displayed in Fig.~\ref{fig:crs},
which shows the normalized deviator stress $(\sigma_{11}-\sigma_{33})/p_{\text{o}}$
and the fabric anisotropy $F_{11}-F_{33}$ during the course of plane-strain
biaxial compression (the fabric measure $F_{ij}$
is defined as
$\int_{\Omega}n_{i}n_{j}\widehat{g}(\mathbf{n})\,d\Omega$,
as in \citealt{Nemat-Nasser:2001b}).%
\begin{figure}
\centering
\includegraphics{Fig_3.eps}
\caption{Average behavior of twenty DEM assemblies in biaxial
plane-strain compression (see inset in Fig.~\ref{fig:migration}).}
\label{fig:crs}
\end{figure}
Because we are interested in the rates of fabric and stress evolution
at both the macro (bulk) and micro scales, contact statistics were
compiled during loading at the critical state (at $-\epsilon_{11}>0.25$).
{}``Snapshots'' of the contacts and their rates were taken at several
such strains, and by doing the same for all twenty assemblies, we were
able to analyze over 800,000 contacts, as discussed below.
\subsection{\label{section:Migration_rate}Migration rate $\dot{\mathbf{n}}$,
convection, and divergence}
During deviatoric loading, particles roll and slide across each other,
causing contact orientations to migrate. DEM simulations were used
to determine a functional form of the average
migration $\dot{\mathbf{n}}(\mathbf{n})$
in relation to the bulk deformation rate.
As an example, the migrations
of 800,000 contacts were measured at the critical state in simulations
of plane-strain biaxial compression, and the average rates are depicted
in Fig.~\ref{fig:migration}. Although somewhat obscured, two arrows
emanate from each grid point. The heavier arrows are the actual, measured
rates. These arrows are closely aligned with lighter arrows that represent
a certain projection $\dot{\mathbf{n}}^{\mathrm{proj}}$ of the instantaneous
deformation rate $\mathbf{D}$ onto the unit sphere:
\begin{align}
\begin{split}
\dot{\mathbf{n}}(\mathbf{n})\:\overset{\mathrm{\stackrel{{\scriptstyle aligned}}{with}}}{\longleftrightarrow}\:\dot{\mathbf{n}}^{\mathrm{proj}}(\mathbf{n}) & \equiv\mathbf{P}^{\text{n}}(\mathbf{n})\cdot(\mathbf{D}\cdot\mathbf{n})
\\
& =(\mathbf{I}-\mathbf{n}\otimes\mathbf{n})\cdot(\mathbf{D}\cdot\mathbf{n})
\end{split}
\label{eq:n_align}
\end{align}
with $\mathbf{P}^{\text{n}}\cdot\mathbf{D}\cdot\mathbf{n}=P_{ij}^{\text{n}}D_{jk}n_{k}$.
In this equation, $\mathbf{D}$ is the rate of deformation tensor
(instantaneous strain rate), and the tensor $P_{ij}^{\text{n}}(\mathbf{n})=\delta_{ij}-n_{i}n_{j}$
projects the rate vector $\mathbf{D}\cdot\mathbf{n}$ ($=D_{jk}n_{k}$)
onto the tangent plane. The rate $\dot{\mathbf{n}}^{\mathrm{proj}}$
represents the ideal tangential motion of two contacting spheres whose
centers move in perfect accord with the mean, bulk deformation $\mathbf{D}$.
Although individual contacts migrate in widely varying directions
and rates, the alignment of the average rate $\dot{\mathbf{n}}$ and
the ideal rate $\dot{\mathbf{n}}^{\mathrm{proj}}$ is quite close:
the two directions differ, on average, by less than $3^{\circ}$.
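A short sketch of the projection in Eq.~(\ref{eq:n_align}) follows; this is our own illustration, and the loading numbers are arbitrary:

```python
import numpy as np

def n_dot_projected(n, D):
    """Ideal tangential migration rate of Eq. (n_align):
    (I - n otimes n) . (D . n)."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    P = np.eye(3) - np.outer(n, n)   # projector onto the tangent plane
    return P @ (np.asarray(D, float) @ n)

# biaxial-type deformation: compress x1, extend x3, hold x2
D = np.diag([-1.0, 0.0, 0.8])
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
ndot = n_dot_projected(n, D)   # tangential: ndot . n = 0
```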
\par
Although the observed and ideal rates are aligned, they differ in
magnitude. Different scales have been applied in Fig.~\ref{fig:migration}
for displaying the measured rates $\dot{\mathbf{n}}$ and the projected
rates $\dot{\mathbf{n}}^{\mathrm{proj}}$, and although obscured in
the small figure, the lengths of $\dot{\mathbf{n}}$ and $\dot{\mathbf{n}}^{\mathrm{proj}}$
are consistently in about the same proportion: the two vector fields
are nearly aligned and proportional. This observation suggests that
the migration rate $\dot{\mathbf{n}}$ can be approximated as
\begin{equation}
\dot{\mathbf{n}}\approx\alpha\,\dot{\mathbf{n}}^{\mathrm{proj}}
\label{eq:ndot_alpha}
\end{equation}
a condition that is closely held throughout the loading process. At
the start of loading, the simulations show that the factor $\alpha$
is equal to 1.0, so that the measured and ideal rates are about equal
(the assumption $\dot{\mathbf{n}}=\dot{\mathbf{n}}^{\mathrm{proj}}$
was made by \citealt{Didwania:2001a}).
As deformation progresses to larger strains,
the actual rate exceeds the ideal projected
rate, with $\alpha=1.7$ at the critical state.
\par
The approximation of Eq.~(\ref{eq:ndot_alpha})
can be used to estimate the convection and divergence
rates in Eqs.~(\ref{eq:g_rate}), (\ref{eq:rate_fn}), and~(\ref{eq:rate_ft}).
For example, the scalar divergence of $\dot{\mathbf{n}}$ is
\begin{equation}
\boldsymbol{\nabla}_{\!_{\Omega}}\cdot\dot{\mathbf{n}}\approx\boldsymbol{\nabla}_{\!_{\Omega}}\cdot(\alpha\dot{\mathbf{n}}^{\mathrm{proj}})=-3\alpha\mathbf{n}\cdot(\mathbf{D}^{\prime}\cdot\mathbf{n})\label{eq:divergence_ndot}
\end{equation}
or $-3\alpha D_{ij}^{\prime}n_{i}n_{j}$, where $\mathbf{D}^{\prime}$
($D_{ij}^{\prime}=D_{ij}-\frac{1}{3}D_{kk}\delta_{ij}$) is the deviatoric part of deformation.
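Equation~(\ref{eq:divergence_ndot}) can be verified numerically: if the projected field is extended off the sphere as constant along rays, its surface divergence equals its ordinary three-dimensional divergence evaluated on the sphere, which can be differenced. The check below is our own illustration, with $\alpha=1$:

```python
import numpy as np

def v_ext(x, D):
    """Tangential field (I - uu^T) D u, extended radially (constant
    along rays), so that the surface divergence on the unit sphere
    equals the ordinary 3-D divergence evaluated there."""
    u = x / np.linalg.norm(x)
    return (np.eye(3) - np.outer(u, u)) @ (D @ u)

def surface_divergence(n, D, h=1.0e-6):
    """Central-difference 3-D divergence of v_ext at a sphere point."""
    div = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        div += (v_ext(n + e, D)[i] - v_ext(n - e, D)[i]) / (2.0 * h)
    return div

D = np.array([[-1.0, 0.2, 0.0],
              [ 0.2, 0.3, 0.1],
              [ 0.0, 0.1, 0.8]])
n = np.array([1.0, 2.0, 2.0]) / 3.0
Dp = D - np.trace(D) / 3.0 * np.eye(3)
closed_form = -3.0 * n @ Dp @ n      # Eq. (divergence_ndot), alpha = 1
numeric = surface_divergence(n, D)
```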
\subsection{\label{section:normal_force}Rate of normal force}
Particles press against each other with changing force
as a granular material undergoes bulk deformation. The material rate
of normal force density, $(\partial\widehat{f^{\textrm{n}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$,
is the net effect of these many changes at particular orientations
$\mathbf{n}$. To quantify this material rate, several hundreds of
thousands of contacts were observed in DEM simulations of biaxial
plane-strain loading at two extremes of deformation: at the start
of loading and during sustained flow at the critical state. In general,
the material rate exhibits the following characteristic: the rate
is usually positive (increasingly compressive) at orientations $\mathbf{n}$
in which the bulk strain produces compression; whereas, the material
rate is negative (tensile) in directions of extension. This observation,
although not surprising,
suggests that the \emph{average} ``$\,\overline{\rule{0em}{1ex}\;\;}\,$'' compressive rate,
$(\partial\overline{f^{\textrm{n}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$,
might be approximated as the product of an average normal stiffness
$\overline{k^{\textrm{n}}}$ and the average rate of approach between
the centers of contacting particles:
\begin{equation}
\left(\frac{\partial\overline{f^{\textrm{n}}}(\mathbf{n})}{\partial t}\right)_{\text{matl}}=-\overline{k^{\text{n}}}(\mathbf{n})\,\frac{\partial\ell(\mathbf{n})}{\partial t}
\label{eq:dfn_avg}
\end{equation}
If the particle motions were to conform to the mean, bulk rate of
deformation field $\mathbf{D}$,
the rate of approach would be $\partial\ell(\mathbf{n})/\partial t=\overline{\ell}(\mathbf{n})\,\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})=\overline{\ell}(\mathbf{n})\, n_{i}D_{ij}n_{j}$,
where $\overline{\ell}(\mathbf{n})$ is the average distance between
the centers of particles oriented in direction $\mathbf{n}$.
Numerous studies have sought expressions for the bulk elastic moduli
of granular media by starting with this mean-field assumption.
This approach over-estimates the moduli and is usually amended by
considering motion fluctuations from the mean
(e.g.~\citealt{Jenkins:2005a}).
Successful estimates are only achieved, however, at small strains and while
behavior is elastic.
Observations have shown that the normal motions between particles,
$\partial\ell(\mathbf{n})/\partial t$,
are suppressed during deformation~--- especially at large strain~---
as particles tend to roll and slide in a manner that minimizes such
motion \citep{Kuhn:2004k}. In this regard, we will introduce a
factor $\beta^{\text{n}}$ to reduce the indentation rate between particles.
With this change, the scalar rate $\partial\ell(\mathbf{n})/\partial t$
is approximated as
\begin{equation}
\frac{\partial\ell(\mathbf{n})}{\partial t}\approx\beta^{\text{n}}\,\overline{\ell}\,\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})
\label{eq:dl_dt}
\end{equation}
In this approximation, we also use the average branch length $\overline{\ell}$
in place of the function $\overline{\ell}(\mathbf{n})$, ignoring
the small correlation between branch length and orientation.
\par
The average contact stiffness $\overline{k^{\text{n}}}$ in Eq.~(\ref{eq:dfn_avg})
will depend upon the stiffness characteristics of the particles themselves.
For a single $m$th contact between two isotropic elastic spheres
of equal size, the Hertz stiffness is
\begin{equation}
k^{\textrm{n},m}=\left[\frac{3G^{2}R^{m}f^{\textrm{n},m}}{(1-\nu)^{2}}\right]^{1/3}
\label{eq:k_nm}
\end{equation}
where $f^{\textrm{n},m}$ is the pair's current normal (compressive)
force, $G$ is the particle shear modulus, $\nu$ is Poisson's ratio, and
$R^{m}$ is the shared radius. This contact stiffness depends upon
the contact force $f^{\textrm{n},m}$. Because the average normal
force $\overline{f^{\textrm{n}}}(\mathbf{n})$ within an entire assembly
is known to be anisotropic, we would expect the average contact
stiffness $\overline{k^{\text{n}}}$ to depend on orientation. The
average stiffness among all contacts that share an orientation $\mathbf{n}$
can be approximated as
\begin{equation}
\overline{k^{\textrm{n}}}(\mathbf{n})\approx\left[\frac{3G^{2}\overline{\ell}\,\overline{f^{\textrm{n}}}(\mathbf{n})}{2(1-\nu)^{2}}\right]^{1/3}
\label{eq:kn}
\end{equation}
using $\overline{\ell}/2$ in place of $R$. By combining Eqs.~(\ref{eq:dfn_avg}),
(\ref{eq:dl_dt}), and~(\ref{eq:kn}) and twice applying Eq.~(\ref{eq:avg_f}),
we can estimate the material rate of normal force density as
\begin{equation}
\left(\frac{\partial\widehat{f^{\text{n}}}(\mathbf{n})}{\partial t}\right)_{\text{matl}}
\approx
-\beta^{\text{n}}\,\overline{\ell}\,\widehat{k^{\text{n}}}(\mathbf{n})\left[\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})\right]
\label{eq:fn_rate_matl}
\end{equation}
with the \emph{stiffness density}
\begin{equation}
\widehat{k^{\text{n}}}(\mathbf{n})=\overline{k^{\textrm{n}}}(\mathbf{n})\,\widehat{g}(\mathbf{n})=\left[\frac{3G^{2}\overline{\ell}}{2(1-\nu)^{2}}\,\widehat{f^{\text{n}}}(\mathbf{n})\left(\widehat{g}(\mathbf{n})\right)^{2}\right]^{1/3}
\label{eq:kn_hat}
\end{equation}
Coefficient $\beta^{\text{n}}$ was investigated for two extremes
of loading: during initial loading and during sustained plastic flow
at the critical state.
\subsubsection{Rate of normal force at small strains}
Upon initial loading of dense DEM sphere packings,
the value $\beta^{\text{n}}$
was measured as 0.94: the actual stiffness was slightly smaller than
the ideal stiffness that would apply if each particle moved
in perfect accord with the mean deformation field. Figure~\ref{fig:fn_Zero}
compares Eq.~(\ref{eq:fn_rate_matl}) and the DEM data, with both
plotted in a dimensionless form.%
\begin{figure}
\centering
\includegraphics{Fig_4.eps}
\caption{Material rates of normal force creation for
plane-strain biaxial compression at the start of loading: data from
DEM simulations (symbols~$\circ$), and Eq.~(\ref{eq:fn_rate_matl})
with $\beta^{\text{n}}=0.94$ (lines~---). Data is arranged along
meridians of $30{}^{\circ}$ spacing (see Fig.~\ref{fig:migration}).}
\label{fig:fn_Zero}
\end{figure}
Four meridians of the unit sphere are shown in the figure, corresponding
to the angle $\xi$ in Fig.~\ref{fig:migration}.
Equation~(\ref{eq:fn_rate_matl}) closely fits the DEM data.
\par
With $\beta^{\text{n}}=0.94$ at the start of loading, the material
rate of normal force density is more than two hundred times larger
than the other rates that contribute to the force density: the divergence,
convection, and diffusion terms in Eq.~(\ref{eq:rate_fn}). The rapid
evolutions of fabric and stress at the start of loading are, therefore,
dominated by the material rate $(\partial\widehat{f^{\text{n}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$,
with the other rates nearly inconsequential.
The situation changes, however, upon further loading.
Because $\beta^{\text{n}}$ decreases with increasing
strain, its hardening effect is progressively diminished,
and the influences of divergence,
convection, and diffusion become increasingly more significant~--- nearly
dominant~--- as will be seen in the next paragraphs.
\subsubsection{\label{sub:Normal_large}Rate of normal force at the critical state}
DEM simulations were also used to
measure the material parameter $\beta^{\text{n}}$
during failure at the critical state.
Because the total rate of normal force $\left.\partial\widehat{\mathbf{f}^{\text{n}}}/\partial t\right|_{\mathbf{n}}$
is zero at the critical state, the corresponding material rate can
be readily computed from the remaining terms in Eq.~(\ref{eq:rate_fn}).
A value $\beta^{\text{n}}=0.0037$ fits the DEM data, although this value
must be slightly amended, as described below. Such a small value of
$\beta^{\text{n}}$ indicates that the average normal motion between
particles, the rate $\partial\ell(\mathbf{n})/\partial t$ in Eq.~(\ref{eq:dl_dt}),
is much smaller than would be anticipated by assuming that the particle
motions conform to a uniform, affine deformation field.
On the other hand,
we had previously noted that the average tangential motions
become somewhat \emph{greater} than those of uniform deformation,
with an $\alpha$ of 1.7 in Eq.~(\ref{eq:ndot_alpha}).
These contrasting
results are consistent with other evidence that the large-strain motions
of particles are dominated by the tangential rolling of particle pairs,
but with minimal average normal motions at the contacts \citep{Kuhn:2004k}.
\par
The DEM simulations also reveal an unexpected aspect of the material
rate $(\partial\widehat{f^{\textrm{n}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$:
although one might expect a force rate of zero at those neutral orientations
$\mathbf{n}$ where particles neither approach nor withdraw (where
$\partial\ell(\mathbf{n})/\partial t=0$), we find instead that the
material rate is usually slightly negative
(depletive or tensile) at these orientations.
This anomalous situation is most noticeable at neutral orientations
that also have large migration rates $\dot{\mathbf{n}}(\mathbf{n})$. To
account for this observation, we apply a small adjustment to the material
rate of Eq.~(\ref{eq:fn_rate_matl}), replacing that equation as
follows:
\begin{align}
\begin{split}
\left(\frac{\partial\widehat{f^{n}}(\mathbf{n})}{\partial t}\right)_{\text{matl}}\approx & -\beta^{\text{n}}\,\widehat{k^{\text{n}}}(\mathbf{n})\,\overline{\ell}\left[\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})\right]
\\
& -\,\beta_{2}^{\text{n}}\,\widehat{f^{\textrm{n}}}(\mathbf{n})\frac{\left|\dot{\mathbf{n}}(\mathbf{n})\right|^{2}}{\dot{\epsilon}_{\mathrm{oct}}}
\end{split}
\label{eq:fn_rate_matl2}
\end{align}
in which the new, subtracted $\beta_{2}^{\text{n}}$ term produces
a small depletive bias in the material rate. The magnitude $\left|\dot{\mathbf{n}}(\mathbf{n})\right|$
is approximated with Eq.~(\ref{eq:ndot_alpha}) and is normalized
by dividing by the average rate $\dot{\epsilon}_{\mathrm{oct}}=\sqrt{D_{ij}^{\prime}D_{ij}^{\prime}/3}$.
The factor $\beta_{2}^{\text{n}}$ was measured as 0.65.
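The two terms of Eq.~(\ref{eq:fn_rate_matl2}) can be sketched numerically. In the following Python fragment, only $\beta^{\text{n}}$ and $\beta_{2}^{\text{n}}$ are the measured DEM values; the densities $\widehat{k^{\text{n}}}$, $\widehat{f^{\text{n}}}$, and $\overline{\ell}$ are hypothetical stand-ins chosen only to illustrate the structure of the rate.

```python
import numpy as np

# Measured DEM factors from the text; all other field values below are
# hypothetical stand-ins used only to illustrate the two terms.
beta_n, beta2_n = 0.0037, 0.65

def fn_rate_matl(kn_hat, fn_hat, ell_bar, n, D, ndot, eps_oct):
    """Material rate of normal force density, Eq. (fn_rate_matl2):
    an elastic term driven by n.(D.n) plus a small depletive bias
    proportional to |ndot|^2."""
    term1 = -beta_n * kn_hat * ell_bar * (n @ D @ n)
    term2 = -beta2_n * fn_hat * (ndot @ ndot) / eps_oct
    return term1 + term2

D = np.diag([-1.0, 0.5, 0.5])                 # constant-volume compression
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # an off-axis orientation
P = np.eye(3) - np.outer(n, n)                # tangent-plane projector
ndot = P @ D @ n                              # migration rate (alpha omitted)
eps_oct = np.sqrt(np.sum(D * D) / 3.0)        # octahedral rate (D is deviatoric)
rate = fn_rate_matl(1.0, 1.0, 1.0, n, D, ndot, eps_oct)
```

At a neutral orientation, where $\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})=0$, the first term vanishes but the second remains, reproducing the slightly depletive rates observed in the simulations.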
\par
We must also consider the role of the mean stress in generating contact
force density. For spherical particles, the mean stress depends exclusively
on the normal components of the contact forces \citep{Cundall:1983a},
such that the bulk pressure $p$ is proportional to the average normal
force density $\widehat{f^{\text{n}}}(\mathbf{n})$. At small strains,
the total rate of normal force density, the rate $\partial\widehat{f^{\text{n}}}(\mathbf{n})/\partial t$
on the left of Eq.~(\ref{eq:rate_fn}), is dominated by the material
rate, and its approximation with Eqs.~(\ref{eq:kn_hat}) and~(\ref{eq:fn_rate_matl2})
would suggest that the bulk stiffness is proportional to $G^{2/3}$
and to $p^{1/3}$. This scaling at small strains is in fair agreement
with small-strain vibrational experiments which show that the elastic
moduli are proportional to $p^{n}$ with an exponent $n$ between
1/3 and 1/2 (see \citealt{Goddard:1990a} for a review). Granular
behavior at large strains scales quite differently.
At the critical state, strength is proportional to the confining pressure,
$p^{1}$, and is nearly independent of the particle stiffness $G$. If
left unmodified, the stiffness density in Eq.~(\ref{eq:kn_hat})
would produce a strength proportional to $p^{1/3}$ and to $G^{2/3}$,
contrary to the observed behavior of soils and other granular materials.
We should expect, therefore,
that at large strains, the factor $\beta^{\text{n}}$ will depend
upon $p$ and upon the particles' elastic properties in the following
manner:
\begin{equation}
\beta^{\text{n}}=B^{\text{n}}\left(\frac{p\,(1-\nu)}{G}\right)^{2/3}
\label{eq:beta_B}
\end{equation}
where the dimensionless factor $B^{\text{n}}$ effects a proper scaling
of granular strength at large strains. We tested this hypothesis by
running DEM simulations of biaxial compression that were identical
to those previously described, except that the initial confining pressure
was increased about six-fold. At the critical state, the mean stress
increased from 490~kPa to 3200~kPa and the strength was found to
increase by the same factor, but $\beta^{\text{n}}$ had only increased
from 0.0037 to 0.012~--- not a six-fold increase, but roughly in
accord with Eq.~(\ref{eq:beta_B}) and a $B^{\text{n}}=6.3$. The
second parameter in Eq.~(\ref{eq:fn_rate_matl2}), $\beta_{2}^{\text{n}}$,
remained about the same for the two confining pressures, as would
be expected, since the last term in Eq.~(\ref{eq:fn_rate_matl2})
is proportional to $\widehat{f^{\text{n}}}$, a form that is consistent
with strength being proportional to mean stress.
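The scaling argument of Eq.~(\ref{eq:beta_B}) can be checked with one line of arithmetic, taking the two critical-state mean stresses as 490 and 3200~kPa:

```python
# Quick check of Eq. (beta_B): beta^n should scale as p^(2/3).
# Pressures taken as the two critical-state mean stresses, both in kPa.
p1, p2 = 490.0, 3200.0
beta1_meas, beta2_meas = 0.0037, 0.012
beta2_pred = beta1_meas * (p2 / p1) ** (2.0 / 3.0)   # ~0.0129
```

The predicted value of about 0.0129 is within a few percent of the measured 0.012, whereas a linear scaling in $p$ would have predicted about 0.024.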
\par
In Fig.~\ref{fig:fn_ft_rates},
the combination of Eqs.~(\ref{eq:fn_rate_matl2})
and~(\ref{eq:beta_B})
is compared with data from 800,000 contacts in DEM simulations. %
\begin{figure}
\begin{centering}
\includegraphics{Fig_5.eps}
\par\end{centering}
\caption{Material rates of normal force creation during plane-strain
biaxial compression at the critical state: data from DEM simulations
(symbols~$\circ$), and Eqs.~(\ref{eq:fn_rate_matl2}) and~(\ref{eq:beta_B}),
with $B^{\text{n}}=6.3$ and $\beta_{2}^{\text{n}}=0.65$ (lines~---).
Data is arranged along meridians of $30{}^{\circ}$
spacing (see Fig.~\ref{fig:migration}).}
\label{fig:fn_ft_rates}
\end{figure}
The simulations are of plane-strain biaxial compression at the critical
state, and data is presented along four meridians (angle $\xi$, Fig.~\ref{fig:migration}).
All results are reported in a dimensionless form: force density is
divided by the stress $p$, and its time rate is normalized with respect
to the average octahedral rate $\dot{\epsilon}_{\mathrm{oct}}$.
Equations~(\ref{eq:fn_rate_matl2}) and~(\ref{eq:beta_B})
are in close agreement with the DEM data.
\subsection{\label{section:tangent_force}Rates of tangential force}
DEM simulations can be used to resolve a realistic form for the material
rate of tangential force at the critical state~--- the rate $(\partial\widehat{f^{\text{t}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$
in Eq.~(\ref{eq:rate_ft}). We will quantify this material rate,
after first considering the tangential force rates of twirling and
veering.
\subsubsection{\label{sub:Twirl_veer}Twirling and veering rates at the critical
state}
During sustained flow at the critical state, the DEM simulations show
that the tangential force density
$\widehat{f^{\text{t}}}(\mathbf{n})\mathbf{t}(\mathbf{n})$
becomes closely aligned with the direction of contact migration $\dot{\mathbf{n}}(\mathbf{n})$,
such that the unit direction is approximated as
\begin{equation}
\mathbf{t}(\mathbf{n})\approx\dot{\mathbf{n}}\left/\left|\dot{\mathbf{n}}\right|\right.
\label{eq:t_ndot}
\end{equation}
where $\dot{\mathbf{n}}(\mathbf{n})$ is the field depicted in Fig.~\ref{fig:migration}
and given by Eqs.~(\ref{eq:n_align}) and~(\ref{eq:ndot_alpha}).
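As a minimal numeric sketch of Eq.~(\ref{eq:t_ndot}), assuming the projector form $\dot{\mathbf{n}}\propto\mathbf{P}^{\text{n}}\cdot\mathbf{D}\cdot\mathbf{n}$ of Eq.~(\ref{eq:n_align}) (the scalar $\alpha$ cancels in the normalization):

```python
import numpy as np

def tangent_direction(n, D):
    """Unit tangential direction t(n) of Eq. (t_ndot), taking the
    migration rate (up to the scalar alpha) as ndot = P^n . D . n."""
    P = np.eye(3) - np.outer(n, n)     # projector onto the tangent plane at n
    ndot = P @ D @ n
    return ndot / np.linalg.norm(ndot)

D = np.diag([-1.0, 0.5, 0.5])          # triaxial-like compression, D_kk = 0
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
t = tangent_direction(n, D)            # lies in the tangent plane at n
```

By construction, $\mathbf{t}$ is orthogonal to $\mathbf{n}$ and, for this compressive $\mathbf{D}$, points away from the compression direction.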
The total rate of tangential force that appears on the left of Eq.~(\ref{eq:rate_ft})
results from various effects: a material rate combined with convection,
twirling, veering, and diffusion effects. The density rates of twirling
and veering involve rotations of tangential force within the tangent
plane.
We can isolate and directly measure the twirling rate by considering other
DEM simulations that eliminate the veering effect: by using triaxial
rather than plane-strain loading.
With triaxial compression, assemblies
are compressed in the $\mathbf{e}_{1}$ direction while
constant stress is maintained in the
$\mathbf{e}_{2}$ and $\mathbf{e}_{3}$ directions
(see inset, Fig.~\ref{fig:Twirl_sphere_eps}).
This symmetric loading condition, in which two principal strains are
equal, produces contact migrations $\dot{\mathbf{n}}$, as defined
in Eq.~(\ref{eq:n_align}), along meridians (geodesics) that emanate
from the $\mathbf{e}_{1}$ pole
and approach the $\mathbf{e}_{2}$--$\mathbf{e}_{3}$ equator.
As stated above, $\mathbf{t}(\mathbf{n})$ and $\dot{\mathbf{n}}(\mathbf{n})$
are aligned at the critical state, and because neither direction veers
under triaxial loading, we can use triaxial DEM data to directly measure
any bulk twirling of the tangential forces.
\par
The twirling rates (i.e., from
the final term in Eq.~\ref{eq:dfm_induced}) of over 800,000 contacts
were averaged, and Fig.~\ref{fig:Twirl_sphere_eps} illustrates these
average density rates.%
\begin{figure}
\centering
\includegraphics{Fig_6.eps}
\caption{DEM simulations of triaxial compression
at the critical state: (1)~the average twirling rates of tangential
contact forces (upward, light arrows), and (2)~the contact migration
rates (downward, darker arrows).}
\label{fig:Twirl_sphere_eps}
\end{figure}
Although somewhat obscured in the small monochrome figure, two vector
fields are displayed: thinner arrows correspond to the twirling density
$(\partial\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})/\partial t)_{\textrm{twirl}}$;
thicker arrows are the contact migration field $\dot{\mathbf{n}}(\mathbf{n})$.
Some scatter is apparent in the data, but the results indicate that
the twirling rate vectors are consistently in a direction opposite
the migration field. For example, in the northern octant
of Fig.~\ref{fig:Twirl_sphere_eps},
tangential forces tend to rotate toward
the north, even as their contacts are migrating toward the south.
The magnitude of the twirling density was also found to be roughly proportional
to the product of the tangential force density $\widehat{f^{\text{t}}}(\mathbf{n})$
and the magnitude of $\dot{\mathbf{n}}(\mathbf{n})$.
These observations
suggest the following form of the twirling density field:
\begin{equation}
\left(\frac{\partial\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\textrm{twirl}}\approx-\gamma_{\text{twirl}}\,\widehat{f^{\text{t}}}(\mathbf{n})\,\dot{\mathbf{n}}(\mathbf{n})
\label{eq:ft_twirl}
\end{equation}
The factor $\gamma_{\text{twirl}}$ was about 1.0 in the DEM simulations.
Figure~\ref{fig:ft_twirl} compares this equation with the twirling
data for triaxial compression at the critical state. %
\begin{figure}
\centering
\includegraphics{Fig_7.eps}
\caption{Twirling rates of tangential force in triaxial
compression at the critical state:
comparison of DEM data and Eq.~(\ref{eq:ft_twirl}),
with $\gamma_{\text{twirl}}=1.0$. }
\label{fig:ft_twirl}
\end{figure}
Equation~(\ref{eq:ft_twirl}) and a $\gamma_{\text{twirl}}=1.0$
fit the data, although the figure
also indicates that the twirling rate of force is relatively small
when compared with the normal force rates that are shown in
Fig.~\ref{fig:fn_ft_rates}.
\par
Having resolved the twirling effect, we can now investigate possible
veering of the tangential force direction $\mathbf{t}(\mathbf{n})$.
To this end, we return to biaxial plane-strain compression simulations.
The veering rate of tangential force is the tangential component of
the rate $\widehat{f^{\text{t}}}\dot{\mathbf{t}}$ along migration
paths $\dot{\mathbf{n}}$ (from Eqs.~\ref{eq:grad_ft_simple}
and~\ref{eq:ft_w_veer}):
\begin{equation}
\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{veer}}=\widehat{f^{\text{t}}}(\mathbf{n})\,\mathbf{P}^{\text{n}}(\mathbf{n})\cdot\left(\frac{\partial\mathbf{t}(\mathbf{n})}{\partial\mathbf{n}}\cdot\dot{\mathbf{n}}(\mathbf{n})\right)
\notag
\end{equation}
where the projection matrix $\mathbf{P}^{\text{n}}(\mathbf{n})$ extracts
the tangential component of this rate (see Eq.~\ref{eq:n_align}).
As an intermediate step in deriving the veering rate,
the unit vector $\mathbf{t}(\mathbf{n})$
can be expressed as the product of a matrix $\mathbf{Q}(\mathbf{n})$
and the unit normal $\mathbf{n}$, as
\mbox{$\mathbf{t}(\mathbf{n})=\mathbf{Q}(\mathbf{n})\cdot\mathbf{n}/|\mathbf{Q}(\mathbf{n})\cdot\mathbf{n}|$},
in which $\mathbf{Q}\cdot\mathbf{n}=Q_{ij}n_{j}$. After differentiating
$\mathbf{t}(\mathbf{n})$ with respect to $\mathbf{n}$,
\begin{multline}
\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{veer}}=\\
\widehat{f^{\text{t}}}(\mathbf{n})\,\mathbf{P}^{\text{n}}(\mathbf{n})\cdot\mathbf{P}^{\text{t}}(\mathbf{n})\cdot\left(\frac{\frac{\partial\mathbf{Q}(\mathbf{n})}{\partial\mathbf{n}}\cdot\mathbf{n}+\mathbf{Q}(\mathbf{n})}{\left|\mathbf{Q}(\mathbf{n})\cdot\mathbf{n}\right|}\cdot\dot{\mathbf{n}}(\mathbf{n})\right)
\label{eq:veer_Q}
\end{multline}
where the new operator $\mathbf{P}^{\text{t}}(\mathbf{n})=\mathbf{I}-\mathbf{t}\otimes\mathbf{t}=\delta_{ij}-t_{i}t_{j}$
projects vectors onto a plane that is perpendicular to the tangent
direction $\mathbf{t}$. Combined with $\mathbf{P}^{\text{n}}(\mathbf{n})$,
the matrix product $\mathbf{P}^{\text{n}}\cdot\mathbf{P}^{\text{t}}=P_{ik}^{\text{n}}P_{kj}^{\text{t}}$
produces a veering rate that is orthogonal to both $\mathbf{n}$ and
$\mathbf{t}$.
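The orthogonality claim is easy to verify numerically. In this sketch (an arbitrary symmetric $\mathbf{D}$ and a random vector, all values hypothetical), the doubly projected vector is perpendicular to both $\mathbf{n}$ and $\mathbf{t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                        # random unit normal
D = rng.normal(size=(3, 3))
D = 0.5 * (D + D.T)                           # symmetric stretching tensor

Pn = np.eye(3) - np.outer(n, n)               # P^n: removes the n-component
t = Pn @ D @ n
t /= np.linalg.norm(t)                        # t(n), aligned with migration
Pt = np.eye(3) - np.outer(t, t)               # P^t: removes the t-component

v = rng.normal(size=3)                        # an arbitrary rate vector
veer = Pn @ Pt @ v                            # double projection, as in Eq. (veer_Q)
```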
\par
As expressed in Eq.~(\ref{eq:t_ndot}),
the simulations show that during sustained flow
at the critical state, the tangential force density is closely aligned
with the migration direction $\dot{\mathbf{n}}=\mathbf{P}^{\text{n}}\cdot\mathbf{D}\cdot\mathbf{n}$,
defined in Eqs.~(\ref{eq:n_align}) and (\ref{eq:ndot_alpha}): that is,
matrix $\mathbf{Q}(\mathbf{n})$ is equal to the matrix product
$\mathbf{P}^{\text{n}}(\mathbf{n})\cdot\mathbf{D}$.
The corresponding veering rate in Eq.~(\ref{eq:veer_Q})
is, therefore,
\begin{multline}
\left(\frac{d\widehat{\mathbf{f}^{\text{t}}}\left(\mathbf{n}\right)}{dt}\right)_{\text{veer}}=\widehat{f^{\text{t}}}(\mathbf{n})\,\mathbf{P}^{\text{n}}(\mathbf{n})\cdot\mathbf{P}^{\text{t}}(\mathbf{n})\\
\cdot\left(\frac{\mathbf{D}-2\left(\mathbf{n}\otimes\mathbf{n}\right)\cdot\mathbf{D}-\left[\mathbf{n}\cdot(\mathbf{D}\cdot\mathbf{n})\right]\mathbf{I}}{\left|\mathbf{P}^{\text{n}}(\mathbf{n})\cdot\mathbf{D}\cdot\mathbf{n}\right|}\cdot\dot{\mathbf{n}}(\mathbf{n})\right)
\label{eq:df_veer}
\end{multline}
\par
The plane-strain DEM data reveals behavior
that is similar to the triaxial simulations that were discussed earlier.
The twirling rate
$(\partial\widehat{\mathbf{f}^{\text{t}}}
(\mathbf{n})/\partial t)_{\textrm{twirl}}$
for plane-strain biaxial compression is also directed roughly opposite
$\dot{\mathbf{n}}$, but with one difference. Although the twirling
rate and $\dot{\mathbf{n}}(\mathbf{n})$ are roughly opposed in the
biaxial simulations, they are not perfectly counter-aligned: the DEM
data show that the twirling rate consistently includes a small tangential
component that is orthogonal to the $\mathbf{t}(\mathbf{n})$ direction~---
a small component that is aligned with the veering rate given in
Eq.~(\ref{eq:df_veer}).
With this observation,
we speculate that the original approximation in Eq.~(\ref{eq:ft_twirl})
can be applied to non-triaxial loading by making the following adjustment:
\begin{equation}
\left(\frac{\partial\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\textrm{twirl}}-\left(\frac{\partial\widehat{\mathbf{f}^{\textrm{t}}}(\mathbf{n})}{\partial t}\right)_{\textrm{veer}}\approx\:-\gamma_{\text{twirl}}\,\widehat{f^{\text{t}}}(\mathbf{n})\,\dot{\mathbf{n}}(\mathbf{n})
\label{eq:twirl_minus_veer}
\end{equation}
Allowing for considerable scatter in the DEM data, this approximation,
with $\gamma_{\text{twirl}}=1.0$, gives a reasonable fit to the DEM
data of both triaxial and plane-strain simulations.
\subsubsection{Rate of tangential force at the critical state}
In a previous section, DEM simulations were used to extract an approximation
of the material rate of normal force during sustained deformation
at the critical state (Section~\ref{sub:Normal_large}). The process
was aided by the use of Eq.~(\ref{eq:rate_fn}) and by the presumption
of a vanishing rate $\left.\partial\widehat{\mathbf{f}^{\text{n}}}/\partial t\right|_{\text{n}}$
during steady, critical state loading. We now apply a similar approach
to tangential force, by using Eq.~(\ref{eq:rate_ft}) to find the
tangential vector rate $(\partial\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})/\partial t)_{\text{matl}}$.
Because the other terms in Eq.~(\ref{eq:rate_ft}) are aligned
with the migration field $\dot{\mathbf{n}}(\mathbf{n})$, the unit
direction $\mathbf{s}(\mathbf{n})$ of the tangential material rate
must also be aligned with $\dot{\mathbf{n}}(\mathbf{n})$. The DEM
experiments show that at certain orientations, the directions $\mathbf{s}(\mathbf{n})$
and $\dot{\mathbf{n}}(\mathbf{n})$ do indeed coincide; however, at
other orientations, the simulations show that they are collinear but
in \emph{opposite} directions. This apparent paradox is resolved
by a further observation: at those orientations $\mathbf{n}$
where the particles approach each other~--- when the normal rate
$(\partial\widehat{f^{\text{n}}}(\mathbf{n})/\partial t)_{\textrm{matl}}$
is compressive~--- the direction of the tangential rate
coincides with that of $\dot{\mathbf{n}}(\mathbf{n})$. Contrarily,
for orientations $\mathbf{n}$ at which particles tend to withdraw
from each other, the tangential rate and $\dot{\mathbf{n}}(\mathbf{n})$
are counter-aligned. These observations resemble the behavior of
a frictional block system that is loaded with normal and tangential
forces, such as that described by \citet{Bazant:1991a}, \S10.7. The
DEM simulations also show that the magnitude of the tangential material
force rate
correlates with the magnitude of the migration rate,
$|\dot{\mathbf{n}}(\mathbf{n})|$:
the material rate is larger at orientations where the particles are
migrating more rapidly across each other. When considered together,
these observations suggest the following form for the tangential material
rate at large strains:
\begin{equation}
\left(\frac{\partial\widehat{f^{\text{t}}}(\mathbf{n})}{\partial t}\right)_{\mathrm{matl}}\mathbf{s}(\mathbf{n})\,\approx\,\zeta^{\text{t}}\left(\frac{\partial\widehat{f^{\text{n}}}(\mathbf{n})}{\partial t}\right)_{\mathrm{matl}}\frac{\left|\dot{\mathbf{n}}(\mathbf{n})\right|}{\dot{\epsilon}_{\mathrm{oct}}}\,\mathbf{t}(\mathbf{n})
\label{eq:ft_rate_matl}
\end{equation}
Although Eq.~(\ref{eq:ft_rate_matl}) is consistent with a frictional
system, the factor $\zeta^{\text{t}}$ was measured as only 0.067
in the simulations, a value much smaller than the 0.50
friction coefficient between particles.
A larger factor might have been realized
if frictional sliding were to occur at all contacts, but at the critical
state, the DEM simulations show that only 18\% of contacts
are sliding (see also \citealt{Thornton:2000a}), as particles tend
to roll rather than slide at their contacts (\citealt{Kuhn:2004k}).
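A sketch of Eq.~(\ref{eq:ft_rate_matl}), with the measured $\zeta^{\text{t}}$ and otherwise hypothetical inputs, makes the sign behavior explicit:

```python
import numpy as np

zeta_t = 0.067   # measured DEM factor; the remaining values are hypothetical

def ft_rate_matl(fn_rate, ndot, eps_oct):
    """Signed magnitude of the tangential material rate along t(n),
    Eq. (ft_rate_matl): its sign follows the normal material rate."""
    return zeta_t * fn_rate * np.linalg.norm(ndot) / eps_oct

ndot = np.array([0.3, -0.2, 0.0])              # hypothetical migration rate
eps_oct = 1.0
along_t = ft_rate_matl(+1.0, ndot, eps_oct)    # approaching pairs: s = +t
against_t = ft_rate_matl(-1.0, ndot, eps_oct)  # withdrawing pairs: s = -t
```

Like the frictional block, the tangential rate reverses with the normal rate, but with a coefficient far below the 0.50 interparticle friction.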
\par
Equation~(\ref{eq:ft_rate_matl}) is compared with data from DEM
simulations in Fig.~\ref{fig:Material_rate_ft_Steady}, which shows
results along two meridians of the unit sphere. Equation~(\ref{eq:ft_rate_matl})
is in general agreement with the experimental data.%
\begin{figure}
\begin{centering}
\includegraphics{Fig_8.eps}
\par\end{centering}
\caption{Material rates of tangential force
creation during plane-strain biaxial compression at the critical state: data
from DEM simulations (symbols), and rates from Eq.~(\ref{eq:ft_rate_matl})
with $\zeta^{\text{t}}=0.067$ (lines),
along two $\xi$ meridians (Fig.~\ref{fig:migration}). }
\label{fig:Material_rate_ft_Steady}
\end{figure}
\subsection{\label{section:contact_matl_rate}Rate of contact creation at the
critical state}
The material rate of contact density, ${(\partial\widehat{g}(\mathbf{n})/\partial t)}_{\mathrm{matl}}$,
is the net rate at which contacts are created or extinguished during
deformation, as in Eq.~(\ref{eq:g_rate}). The DEM simulations show
that contacts are predominantly created at orientations $\mathbf{n}$
in which the
deformation $\mathbf{D}$ produces compression between particle pairs,
whereas contacts are predominantly broken at orientations of extension.
These trends
resemble the situation with normal force density, so we begin by assuming
that the net contact creation rate is proportional to the
rate at which particles approach (or withdraw from) each other~---
the rate $\partial\ell(\mathbf{n})/\partial t$ in Eq.~(\ref{eq:dl_dt}).
Unlike the force rate, however, we now use the deviatoric
rate $\mathbf{D}^{\prime}$ ($=D_{ij}-D_{kk}\delta_{ij}/3$) instead
of the full rate $\mathbf{D}$, nullifying the influence of bulk volume
change on the rate of contact creation. This modification is justified
by two observations. The contact network among particles is most effectively
rearranged by bulk distortion: although pure isotropic compression
will increase the contact forces, it does not greatly alter the number
or orientations of the contacts.
Furthermore, the dilation that usually accompanies the
shearing of dense granular materials does not appreciably disengage
contacts: the number of contacts will typically remain nearly constant
during post-peak deformation, even as the material is vigorously dilating
\citep{Thornton:2000a}.
We should expect, however, that the contact
density rate $(\partial\widehat{g}(\mathbf{n})/\partial t){}_{\mathrm{matl}}$
will depend on the average normal force $\overline{f^{\textrm{n}}}(\mathbf{n})$:
larger contact forces imply greater contact indentations and require
a greater motion $d\ell$ to disengage the contacts. In this regard,
we introduce a reference movement for the Hertz contact between an
$m$th pair of elastic spheres (see Eq.~\ref{eq:k_nm}),
\begin{equation}
\frac{f^{\textrm{n},m}}{k^{\textrm{n},m}}\quad\textrm{or}\quad\left[\frac{\left(f^{\textrm{n},m}\right)^{2}(1-\nu)^{2}}{3G^{2}R^{m}}\right]^{1/3}
\label{eq:fn_over_kn}
\end{equation}
This reference movement would cause two spheres, initially pressed
together with force $f^{\textrm{n},m}$, to withdraw and disengage,
if the original stiffness $k^{\textrm{n},m}$ were active throughout
the withdrawal process. The material rate of contact density is approximated
as the particle withdrawal rate $\partial\ell(\mathbf{n})/\partial t$
divided by the average reference movement $\overline{f^{\textrm{n}}}(\mathbf{n})/\overline{k^{\text{n}}}$
and multiplied by the current density $\widehat{g}(\mathbf{n})$.
Applying Eqs.~(\ref{eq:avg_f}), (\ref{eq:dl_dt}), and~(\ref{eq:kn_hat}),%
\begin{equation}
\left(\frac{\partial\widehat{g}(\mathbf{n})}{\partial t}\right)_{\mathrm{matl}}\approx-\beta^{\text{n}}\,\widehat{k^{\text{n}}}(\mathbf{n})\frac{\widehat{g}(\mathbf{n})}{\widehat{f^{\text{n}}}(\mathbf{n})}\,\overline{\ell}\,\left[\mathbf{n}\cdot(\mathbf{D}^{\prime}\cdot\mathbf{n})\right]
\label{eq:g_rate_malt}
\end{equation}
where $\mathbf{D}^{\prime}$ is used in place of $\mathbf{D}$. This
material rate is negative at orientations $\mathbf{n}$ where particles
withdraw from each other~--- when $\mathbf{n}\cdot(\mathbf{D}^{\prime}\cdot\mathbf{n})$
is positive (tensile).
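The structure of Eq.~(\ref{eq:g_rate_malt}) can be illustrated with hypothetical density values (only $\beta^{\text{n}}$ is a measured quantity here); note that a purely volumetric $\mathbf{D}$ produces no contact creation at all:

```python
import numpy as np

beta_n = 0.0037   # the same beta^n measured for the force rate

def g_rate_matl(kn_hat, g_hat, fn_hat, ell_bar, n, D):
    """Material rate of contact density, Eq. (g_rate_malt): negative
    where n.(D'.n) is tensile, i.e. where particles withdraw."""
    Dp = D - np.trace(D) / 3.0 * np.eye(3)    # deviatoric rate D'
    return -beta_n * kn_hat * (g_hat / fn_hat) * ell_bar * (n @ Dp @ n)

D = np.diag([-1.0, 0.2, 0.2])                 # compression with contraction
r_gain = g_rate_matl(1.0, 1.0, 1.0, 1.0, np.array([1.0, 0.0, 0.0]), D)
r_loss = g_rate_matl(1.0, 1.0, 1.0, 1.0, np.array([0.0, 1.0, 0.0]), D)
r_iso  = g_rate_matl(1.0, 1.0, 1.0, 1.0, np.array([1.0, 0.0, 0.0]),
                     -0.5 * np.eye(3))        # isotropic compression: D' = 0
```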
\par
Figure~\ref{fig:g_rate} compares Eq.~(\ref{eq:g_rate_malt}) with
DEM data of biaxial plane-strain compression at the critical state,
showing that the equation closely fits the data.
\par
If we compare the material rates of contact force creation
and of contact creation (Eqs.~\ref{eq:fn_rate_matl2} and~\ref{eq:g_rate_malt}),
we see that the material rate of force is proportional to $\widehat{g}^{2/3}\widehat{f^{\text{n}}}^{1/3}$;
whereas the material rate of contact creation is proportional to $\widehat{g}^{5/3}\widehat{f^{\text{n}}}^{-2/3}$.
Despite the different scaling of these two rates, the DEM simulations
reveal that the same $\beta^{\text{n}}$ value applies to both rates:
the generation of contacts and the generation of contact force
apparently share a common origin.
\begin{figure}
\begin{centering}
\includegraphics{Fig_9.eps}
\par\end{centering}
\caption{Material rates of contact creation during
plane-strain
biaxial compression at the critical state: data from DEM simulations
(symbols~$\circ$) and rates
from Eqs.~(\ref{eq:g_rate_malt}) and~(\ref{eq:beta_B})
with $B^{\text{n}}=6.3$ (lines~---).}
\label{fig:g_rate}
\end{figure}
\subsection{\label{section:Diffusion_rates}Diffusion rate at the critical state}
Classical diffusion theory explains the diffusion of a molecular species,
either within itself (self-diffusion) or through other molecular species
(e.g., \citealp{Jeans:1962a}). The process is driven by random fluctuations
among the molecules' velocities, displacing them from their original
positions at time $t=0$. With these random motions, individual displacements
$\delta\mathbf{r}^{m}(t)$ increase with time, such that the collective
mean-square displacement is roughly proportional to time:
\begin{equation}
\left\langle \delta r_{1}^{2}\right\rangle +\left\langle \delta r_{2}^{2}\right\rangle +\ldots+\left\langle \delta r_{d}^{2}\right\rangle =2dDt
\label{eq:DiffusionRate}
\end{equation}
In this form of Einstein-Smoluchowski diffusion, $d$ is the spatial
dimension; the $\delta r_{j}$ are the separate
$j$-components of the randomly advancing displacements; and the diffusion
coefficient $D$ is a measure of the time rate of these growing displacements.
In our application, the displacements $\delta\mathbf{r}^{m}$ are
not of molecules or particles; rather, they are the tangential, angular
displacements of individual contact orientations on the unit sphere,
accumulated from the rate fluctuations $\delta\dot{\mathbf{n}}^{m}$
in Eq.~(\ref{eq:Fluctuations}) as particles slide or roll across each
other during bulk deformation.
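Equation~(\ref{eq:DiffusionRate}) can be demonstrated with a synthetic two-dimensional random walk (all parameters hypothetical), recovering the input coefficient from the slope of the mean-square displacement:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D_true, dt = 2, 0.03, 1e-3          # dimension, target coefficient, step
nsteps, nwalkers = 500, 4000

# Independent Gaussian increments: each component has variance 2*D*dt,
# so the summed mean-square displacement grows as 2*d*D*t.
steps = rng.normal(scale=np.sqrt(2.0 * D_true * dt),
                   size=(nsteps, nwalkers, d))
paths = np.cumsum(steps, axis=0)
msd = np.mean(np.sum(paths**2, axis=2), axis=1)
t = dt * np.arange(1, nsteps + 1)
D_est = np.polyfit(t, msd, 1)[0] / (2.0 * d)   # slope = 2*d*D, Eq. (DiffusionRate)
```

In the DEM measurement described below, the cumulative octahedral strain plays the role of the time variable.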
\par
Equation~(\ref{eq:DiffusionRate}) provides the means for experimentally
measuring a diffusion coefficient~--- in particular, the coefficient
$D_{g}$ of contact self-diffusion in
Eqs.~(\ref{eq:ContactDiffusion})--(\ref{eq:ft_diffusion}). %
\begin{figure*}
\begin{centering}
\mbox{%
\subfigure[]{\includegraphics{Fig_10a.eps}}%
\quad\quad%
\subfigure[]{\includegraphics{Fig_10b.eps}}%
}%
\par\end{centering}
\caption{Fluctuations of contact motions at the critical
state: (a)~movements of twenty-five contacts on the unit sphere and
(b)~estimation of the diffusion coefficient
$D_{g}$ (\emph{cf.} Eq.~\ref{eq:DiffusionRate}).}
\label{fig:Diffusion}
\end{figure*}
The tangential movements of several contacts are plotted in
Fig.~\ref{fig:Diffusion}a
over the course of 0.045 strain at the critical state. The figure
illustrates the erratic, zigzag nature of contact movement (i.e.,
a small mean free path).
To measure the coefficient of contact diffusion $D_{g}$,
we tracked the long-term movements,
$\int\delta\dot{\mathbf{n}}^{m}\, dt$,
of over 150,000 contacts as the assembly was being deformed at the
critical state.
The mean-square cumulative contact displacements are
plotted in Fig.~\ref{fig:Diffusion}b as a function of the advancing
octahedral strain $\epsilon_{\mathrm{oct}}$. The result is a nearly
linear relation, consistent with the conceptual Eq.~(\ref{eq:DiffusionRate}),
with cumulative strain replacing time.
The slope of 0.03 is the contact diffusion coefficient $D_{g}$ that
appears in Eqs.~(\ref{eq:ContactDiffusion})--(\ref{eq:ft_diffusion}),
having units of radians$^{2}$ per unit of strain $\epsilon_{\mathrm{oct}}$.
This value of $D_{g}$ is fairly small when compared with the material
rate of $\widehat{g}$ shown in Fig.~\ref{fig:g_rate}.
\citet{Didwania:2001a} also measured a relatively small rate for a
\emph{kinematic} diffusion derived from the relative translational
velocities between neighboring (but not necessarily contacting)
particle pairs.
\section{\label{sec:pi_plane}Effect of the intermediate principal stress}
The conventional measure of soil strength is the friction angle $\phi$,
based upon the major and minor principal stresses at failure:
$\phi=\sin^{-1}\left[(\sigma_{1}-\sigma_{3})/(\sigma_{1}+\sigma_{3})\right]$.
Tests using advanced true-triaxial and hollow ring torsion apparatus
demonstrate that strength depends on the intermediate principal stress
$\sigma_{2}$, whose relative magnitude is usually represented by
the $b$-value,
defined as \mbox{$b=(\sigma_{2}-\sigma_{3})/(\sigma_{1}-\sigma_{3})$}.
Certain phenomena, however, can obscure the influence of the intermediate
stress in laboratory tests. Soils exhibit a propensity for inhomogeneous
deformation in the form of shear bands. Shear bands usually appear
near the peak stress, and their emergence can
alter the subsequent stress-strain behavior
and the measured strength. The emergence
of shear bands can be either suppressed or promoted by the particular
specimen dimensions and boundary conditions, so that the measured
influence of the intermediate principal stress is subject to the vagaries
of the testing equipment (see \citealt{Lade:2006a} for a review).
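The two stress measures used here are simple functions of the principal stresses; a minimal sketch with hypothetical stress values:

```python
import numpy as np

def friction_angle_deg(s1, s3):
    """phi = arcsin[(s1 - s3) / (s1 + s3)], in degrees."""
    return np.degrees(np.arcsin((s1 - s3) / (s1 + s3)))

def b_value(s1, s2, s3):
    """b = (s2 - s3) / (s1 - s3): relative size of the intermediate stress."""
    return (s2 - s3) / (s1 - s3)

# hypothetical principal stresses at failure (kPa), with s1 >= s2 >= s3
s1, s2, s3 = 1200.0, 500.0, 400.0
phi = friction_angle_deg(s1, s3)   # 30 degrees
b = b_value(s1, s2, s3)            # 0.125
```

The $b$-value ranges from 0 in triaxial compression ($\sigma_2=\sigma_3$) to 1 in triaxial extension ($\sigma_2=\sigma_1$).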
\par
DEM simulations were conducted with cubical assemblies measuring about
13.4 particle diameters between periodic boundaries
(Section~\ref{section:DEM}).
Although deformation within an assembly is non-uniform, large-scale
localization, such as shear banding, is unable to develop within such
limited assembly dimensions. The behavior observed in the simulations
can, therefore, be considered close to the underlying material behavior
that would prevail during homogeneous deformation.
\par
Figure~\ref{fig:Results_b_DEM} shows results of these DEM simulations,
conducted with different intermediate stresses $\sigma_{2}$.
\begin{figure*}
\begin{centering}
\mbox{%
\subfigure[]{\includegraphics{Fig_11a.eps}}%
\quad%
\subfigure[]{\includegraphics{Fig_11b.eps}}%
}
\par\end{centering}
\caption{Results of DEM simulations of critical
state flow with different intermediate principal stresses.}
\label{fig:Results_b_DEM}
\end{figure*}
Ten sets of simulations were conducted: triaxial compression ($b=0$),
triaxial extension ($b=1$), and eight sets with intermediate conditions.
Each set involved loading the same twenty randomly generated assemblies
and averaging the results (see Section~\ref{section:DEM}).
During loading, mixed boundary conditions were applied in the following manner.
A constant lateral stress $\sigma_{3}$ was maintained while the specimens
were compressed at a constant rate in the $x_{1}$ direction ($\dot{\epsilon}_{11}=\text{constant}$).
Strain control was also maintained in the intermediate direction,
with $\epsilon_{22}$ advanced in fixed proportions of $\epsilon_{11}$
(i.e., engineering strain rates $\dot{\epsilon}_{22}=a\dot{\epsilon}_{11}$,
with the constant $a=-0.5$, $-0.375$, $-0.25$, $-0.125$, $0$, $0.25$,
$0.5$, $0.75$, and $1$).
Tests were stopped well after the critical state was
attained and the $b$-value had become stationary, usually at a compressive
engineering strain $-\epsilon_{11}$ of~$0.30$ to~$0.35$.
\par
Figure~\ref{fig:Results_b_DEM}a shows the failure conditions at
the peak and critical states. The principal stresses are plotted on
the deviatoric pi-plane and are normalized by dividing by the current
mean stress $p$.
Each smooth, curved envelope is a spline fit of ten data points.
Because the initial fabric was isotropic, an entire
failure envelope is symmetrically produced from the ten results that
are shown as dots in the first sextant.
Flattened Tresca hexagons
are also shown, since these envelopes would apply if strength were
independent of the intermediate principal stress.
Neither the peak nor
critical state strengths agree with this idealized condition.
The critical state results, however, form a slightly less rotund envelope,
lying closer to the Tresca condition.
\par
The results are shown in more detail in Fig.~\ref{fig:Results_b_DEM}b,
which gives the friction angles $\phi$ for the ten $b$-values.
The figure also shows two commonly used methods for fitting a strength
envelope to experimental soil data:
the Lade and Matsuoka methods \citep{Lade:1975a,Matsuoka:1976a},
which have both been calibrated to the triaxial compression strengths
($b=0$). The Lade method closely fits the DEM strength data at the
peak state, although neither fitting method captures strength at the
critical state.
\par
The micro-mechanical model in
Sections~\ref{sec:Theory} and~\ref{sec:Quantifying}
can be used for predicting strength at the critical state.
The three Eqs.~(\ref{eq:g_rate}), (\ref{eq:rate_fn}), and~(\ref{eq:rate_ft})
are general rate expressions for the densities
$\widehat{g}(\mathbf{n})$,
$\widehat{\mathbf{f}^{\text{n}}}(\mathbf{n})$,
and $\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})$.
At the critical state, the three densities
are stationary, and the total rates are zero:
\begin{equation}
\left.\frac{\partial\widehat{g}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=0\:,\quad\left.\frac{\partial\widehat{\mathbf{f}^{\text{n}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=0\:,\quad\left.\frac{\partial\widehat{\mathbf{f}^{\text{t}}}(\mathbf{n})}{\partial t}\right|_{\mathbf{n}}=0
\label{eq:Rates_equal_zero}
\end{equation}
Each of these differential equations can be expanded and expressed with the
functional forms that were developed in
Section~\ref{sec:Quantifying} (e.g. $\dot{\mathbf{n}}(\mathbf{n})$,
$(\partial\widehat{f^{\text{n}}}(\mathbf{n})/\partial t)_{\text{matl}}$,
etc.).
In principle, these three equations can be solved for the scalar density
functions $\widehat{g}(\mathbf{n})$, $\widehat{f^{\text{n}}}(\mathbf{n})$,
and $\widehat{f^{\text{t}}}(\mathbf{n})$, which then can be used
in Eqs.~(\ref{eq:fn_plus_ft}) and~(\ref{eq:Stress3}) to find the
stress tensor at the critical state.
Equations~(\ref{eq:Rates_equal_zero}) are complex:
they are coupled non-linear partial differential equations on the
surface of the unit sphere.
Looking beyond this difficulty, each of the three equations involves
the deformation rate $\mathbf{D}$, which can be considered an input
parameter. The solution of the three equations and the resulting stress
tensor will, therefore, depend upon the direction of deformation.
\par
Equations~(\ref{eq:Rates_equal_zero}) were solved for several constant-volume
deformation rates $\mathbf{D}$, as would apply at the critical state
($D_{kk}=0$). The coupled non-linear equations were solved using
the method of weighted residuals, by approximating each density function
as a series of even-powered spherical harmonics, for example,
\begin{equation}
\widehat{g}_{\text{A}}(\mathbf{n})=\!\!\!\sum_{\gamma=2,4,6,\ldots}\,\sum_{i_{1},i_{2},\ldots i_{\gamma}}\!\!\!\widehat{G}_{i_{1}i_{2}i_{3}\ldots i_{\gamma}}n_{i_{1}}n_{i_{2}}n_{i_{3}}\ldots n_{i_{\gamma}}
\notag
\end{equation}
where the $\widehat{G}_{\cdots}$ are scalar coefficients,
and where the direction index $i_{j}\in\{1,2,3\}$.
Approximations for $\widehat{g}(\mathbf{n})$,
$\widehat{f^{\text{n}}}(\mathbf{n})$,
and $\widehat{f^{\text{t}}}(\mathbf{n})$
were substituted into the appropriate forms of all terms
on the right sides of expressions~(\ref{eq:g_rate}),
(\ref{eq:rate_fn}), and~(\ref{eq:rate_ft}), and the harmonic coefficients
were sought to minimize these expressions over the unit sphere:
\begin{equation}
\text{min. }\!\!\!
\int_{\Omega}
\left[\left(\frac{\partial\widehat{g}_{\text{A}}}{\partial t}\right)^{2}
\!\!+\left(\frac{\partial\widehat{f^{\text{n}}}_{\text{A}}}{\partial t}\right)^{2}
\!\!+\left(\frac{\partial\widehat{f^{\text{t}}}_{\text{A}}}{\partial t}\right)^{2}\right]d\Omega
\notag
\end{equation}
thus approximating a simultaneous solution of all three
Eqs.~(\ref{eq:Rates_equal_zero}).
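As a toy illustration of this weighted-residual approach, the sketch below (our own construction, not from the Paper) fits the coefficients of a single even-powered term, $\widehat{g}_{\text{A}}(\mathbf{n})=\widehat{G}_{ij}n_{i}n_{j}$, by least squares over sampled directions on the unit sphere. The target is an isotropic density, for which an exact quadric representation exists, so the residual vanishes; the full critical-state problem additionally couples three densities through nonlinear rate terms.

```python
import numpy as np

# Toy sketch (ours): fit G_ij in g_A(n) = G_ij n_i n_j by least squares
# over sampled unit directions, mimicking the weighted-residual idea.
rng = np.random.default_rng(0)

def unit_directions(m):
    v = rng.normal(size=(m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def design_matrix(ns):
    # monomials n_i n_j (six independent entries of a symmetric G)
    n1, n2, n3 = ns[:, 0], ns[:, 1], ns[:, 2]
    return np.column_stack([n1 * n1, n2 * n2, n3 * n3,
                            2 * n1 * n2, 2 * n1 * n3, 2 * n2 * n3])

ns = unit_directions(500)
target = np.full(len(ns), 1.0 / (4.0 * np.pi))   # isotropic density
coef, *_ = np.linalg.lstsq(design_matrix(ns), target, rcond=None)
recon = design_matrix(ns) @ coef

# the fitted quadric reduces to (1/4pi)|n|^2 = 1/4pi on the sphere
print(np.max(np.abs(recon - target)) < 1e-8)
```

Higher even orders ($\gamma=4,6,\ldots$) enter the same way, as additional monomial columns in the design matrix.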
\par
The solution results are shown in Fig.~\ref{fig:Phi_vs_b_model}.
The differential equations were quantified with the particle properties
$E$ and $\nu$ and with the micro-mechanical transport properties
that were measured at the critical state: $\alpha$, $B^{\text{n}}$,
$\beta_{2}^{\text{n}}$, $\gamma_{\text{twirl}}$, $\zeta^{\text{t}}$,
and~$D_{g}$.
These values had been extracted from the single set of DEM simulations
of plane-strain biaxial loading ($a=0$), and we simply used the same values
in solving the differential equations
for other deformation directions~$\mathbf{D}$.
A common mean stress and average coordination number were imposed
as auxiliary constraints ($p=490$~kPa, $2M/N=4.0$).
The solutions of Eqs.~(\ref{eq:Rates_equal_zero}) imply stress tensors that
would, in theory, produce critical state flow under various deformation
directions $\mathbf{D}$. Figure~\ref{fig:Phi_vs_b_model} shows
that these solutions compare reasonably well with data from the ten
sets of true-triaxial simulation experiments.%
\begin{figure}
\centering
\includegraphics{Fig_12.eps}
\caption{Effect of the intermediate principal stress
on strength: comparison of micro-mechanical theory with DEM simulations.}
\label{fig:Phi_vs_b_model}
\end{figure}
\section{Conclusion}
The Paper presents a model for the evolution of fabric and stress
in granular materials. A distinction is made between evolution effects
produced by the interactions of particles~--- termed {}``material''
effects~--- and effects that are due to an \emph{en masse} shifting
of contacts and forces from one orientation to another. The latter
include convection, diffusion, and twirling effects. The DEM simulations
show that the material rate has a consistent hardening influence during
loading: by itself, the material rate increases the deviatoric stress
and induces fabric anisotropy. The other rates have a consistent softening
influence, reducing the anisotropies of contact force and contact
orientation. The model is most useful in assessing failure at large
strains, when the two competing changes are of roughly equal magnitude
and both must be tracked during the loading process. At the critical
state, the effects are in balance, producing stationary stress and
fabric. With this observation, the Paper develops one application
of the model: a prediction of the effect of the intermediate principal
stress on strength at the critical state.
\par
Although the model's predictions compare favorably with DEM simulations
of small assemblies of 4096 particles, such results may seem questionable
when applied to the scale of problems that are encountered in industrial
and geotechnical situations. In these large-scale problems, deformation
and failure are not homogeneous, but are instead concentrated within
shear bands and other localization zones. The Paper's model is based
upon an averaging of trends among the many contacts within a small
representative volume, and this averaged behavior can be thought of
as applying to continuum points or to small regions within a shear band,
rather than encompassing the entire band thickness. By pursuing a
point-based continuum view, the model might be used to develop advanced,
comprehensive continuum models that incorporate a length scale and
can predict the emergence and evolution of a shear band. Fulfilling
this promise requires work beyond that of the Paper. The effect of
the intermediate principal stress, suitably predicted by the model,
is but one phenomenon that is known
to influence the possibility and nature of shear bands.
At least three other phenomena are also known to be relevant,
and these phenomena
are listed as possible future applications of the model, which might
culminate in a more comprehensive continuum description.
\begin{enumerate}
\item%
As shearing progresses within a granular material, shear bands begin
to form before the peak stress is attained, but these early bands
are usually transitory and will form, grow, and then disappear.
Once the peak stress is reached and the soil begins to soften, the
numerous bands coalesce into a few persistent shear bands~--- perhaps
a single shear band~--- and further deformation becomes concentrated
and captive within these persistent features. The theoretical framework
in the Paper could be used to model the onset and evolution of softening.
The divergence, convection, and diffusion processes reduce the pronounced
anisotropies of fabric and force that are attained at the peak state,
which might account, at least partially, for the strength softening.
\item%
Incremental strains within a shear band are often not aligned with
the stress increments: granular materials exhibit a non-coaxiality
of the principal strain and stress increments.
Such non-coaxiality
can favor an abrupt change in the direction of deformation, as at
the start of shear banding.
The non-coaxiality of stress and strain
increments is likely due to induced fabric and force anisotropies,
which favor stress increments in certain directions, regardless
of the direction of deformation.
The model is naturally suited to account for such effects of
fabric anisotropy.
\item%
Shear bands have a characteristic thickness that is related to particle
size. A comprehensive and verifiable explanation of this characteristic
remains an open problem in granular mechanics. Several continuum theories
have been proposed as a rationale for band thickness, but to the author's
knowledge, only one such explanation has been confirmed: a dependence
of the stress increment upon the spatial gradients of strain \citep{Kuhn:2002a}.
The model in the Paper can, perhaps, be extended to understand and
explain the effect of strain gradients, since such gradients will
alter the contact migration pattern of Eqs.~(\ref{eq:n_align})
and~(\ref{eq:ndot_alpha}),
changing the evolution of fabric and stress.
\end{enumerate}
An application of the model to these three phenomena will likely require
an understanding of density rates during unloading, a matter not addressed
in the Paper. Although extending the model to include unloading rates
presents a further challenge, once achieved, the model might also
provide a micro-mechanical alternative to the notion of a yield surface.
\bibliographystyle{MofM}
\section{Introduction}
\label{sec:intro}
\input{Tex/intro.tex}
\section{Related Works}
\label{sec:relwork}
\input{Tex/relwork.tex}
\section{Proposed Method}
\label{sec:method}
\input{Tex/method.tex}
\section{Experiment}
\label{sec:exp}
\input{Tex/exp.tex}
\section{Conclusion}
\label{sec:disc}
\input{Tex/disc.tex}
\vfill
\bibliographystyle{IEEEbib}
\subsection{Model architecture}
Our model architecture, shown in Fig.~\ref{fig:model}, is a modification of the CAE proposed by \cite{theis2017lossy}. The encoder and decoder are composed of convolutional layers as described in Section~\ref{subs:sel}. The input image is first down-sampled by three blocks, each containing a convolutional layer, a batch-normalization layer and a PReLU layer. After 15 residual blocks, two more down-sampling convolutional blocks and a final convolutional block are applied, generating ${\mathbf z}$. The quantizer $Q$ then quantizes ${\mathbf z}$, and the result is fed into the decoder, whose architecture mirrors that of the encoder.
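The down-sampling arithmetic can be sanity-checked with the standard convolution output-size formula; the kernel size, stride, and padding below are illustrative assumptions (the text does not specify them), chosen so that each down-sampling block halves the spatial size.

```python
def conv_out(n, kernel=3, stride=2, pad=1):
    """Spatial size after one strided convolution; kernel/stride/padding
    here are illustrative assumptions, chosen so each down-sampling
    block halves the spatial size."""
    return (n + 2 * pad - kernel) // stride + 1

size = 128                     # training crop size
for _ in range(3):             # three down-sampling blocks
    size = conv_out(size)
# the 15 residual blocks preserve the spatial size
for _ in range(2):             # two more down-sampling blocks
    size = conv_out(size)
print(size)                    # a 128x128 crop maps to a 4x4 latent grid
```

In the actual model each down-sampling step is a conv/batch-norm/PReLU block; only the spatial bookkeeping is shown here.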
\subsection{Training}
We use the Adam optimizer \cite{kingma2014adam} with the batch size set to 32 to solve the first sub-problem. The learning rate is set to $4 \cdot 10^{-3}$ and is halved each time the loss has not dropped for ten epochs. Every 20 epochs, the second and third steps of the ADMM pruning method are applied. The distance function used in back-propagation is a linear combination of MSE and differentiable versions of PSNR/SSIM/MS-SSIM, and the training is first warmed up with a scaled MSE alone. The ratio of elements to retain in step two is set to $10\%$. To enable fine-grained tuning of bpp, we modify the last layer of the encoder. All procedures are implemented in PyTorch and open-sourced\footnote{https://github.com/JasonZHM/CAE-ADMM}. Each model is trained for 300 epochs on 4 NVIDIA GeForce GTX 1080Ti GPUs.
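The retention step of the pruning schedule admits a compact sketch. The helper below (a hypothetical function, not taken from the released code) zeroes all but the largest-magnitude 10\% of latent entries, i.e., the Euclidean projection onto a cardinality constraint that ADMM-style pruning alternates with the gradient updates.

```python
import numpy as np

def project_topk(z, keep_ratio=0.10):
    """Keep the largest-magnitude entries of z and zero the rest:
    the Euclidean projection onto {x : ||x||_0 <= keep_ratio * size}.
    Hypothetical helper; keep_ratio matches the 10% retention above."""
    k = max(1, int(keep_ratio * z.size))
    flat = z.ravel().copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # k largest magnitudes
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(z.shape)

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 8, 8))          # stand-in latent tensor
zp = project_topk(z)
print(np.count_nonzero(zp) / z.size)    # roughly the keep ratio
```

In the full ADMM loop this projection updates the auxiliary variable, while the dual variable accumulates the gap between the latent code and its projection.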
\subsection{Datasets and preprocessing}
We use BSDS500 \cite{martin2001database} as the training set, which contains five hundred $481\times 321$ natural images. The images are randomly cropped to $128\times 128$, horizontally and vertically flipped, and then normalized. For the test set, we use the Kodak PhotoCD dataset\footnote{http://r0k.us/graphics/kodak/}, which contains twenty-four $768\times 512$ images.
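The preprocessing can be sketched without any deep-learning dependencies. The helper below is our own illustration (the actual pipeline likely uses torchvision transforms): a random $128\times 128$ crop, the two flips, and a normalization to $[-1, 1]$ (the exact normalization constants are an assumption).

```python
import numpy as np

def augment(img, rng, size=128):
    """Random size x size crop with horizontal/vertical flips and
    normalization to [-1, 1] (illustrative sketch of the pipeline)."""
    h, w, _ = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    patch = img[top:top + size, left:left + size]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]           # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]           # vertical flip
    return (patch / 255.0 - 0.5) / 0.5   # map [0, 255] to [-1, 1]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(321, 481, 3))   # BSDS500-sized image
patch = augment(img, rng)
print(patch.shape)                               # (128, 128, 3)
```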
\begin{figure*}[htbp]
\centering
\subfigure{
\centering
\includegraphics[width=0.9\linewidth]{Fig/legend-new.pdf}
}
\subfigure{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{Fig/ssim.pdf}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{Fig/msssim.pdf}
\end{minipage}
}
\caption{Comparison of different methods with respect to SSIM and MS-SSIM on the Kodak PhotoCD dataset. Note that Toderici et al.~\cite{toderici2017full} used an RNN structure instead of entropy coding, while CAE-ADMM (ours) replaces entropy coding with a pruning method.}
\label{fig:compare}
\end{figure*}
\subsection{Results and discussion}
We test CAE-ADMM (our method), JPEG (implemented by libjpeg\footnote{http://libjpeg.sourceforge.net/}) and JPEG 2000 (implemented by Kakadu Software\footnote{http://kakadusoftware.com/}) on the Kodak PhotoCD dataset. For the distance metric, we use an open-source implementation of SSIM and MS-SSIM\footnote{https://github.com/jorge-pessoa/pytorch-msssim}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig/compare_03-new.pdf}
\caption{Performance of different methods on image \textit{kodim21} from Kodak dataset. Bpp is set to be about 0.3.}
\label{fig:cat}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig/latent.pdf}
\caption{Comparison of latent code before and after pruning for \textit{kodim21}. For the sake of clarity, we marked zero values in the feature map before normalization as black.}
\label{fig:latent}
\end{figure}
Fig.~\ref{fig:compare} shows a comparison of the performance achieved by the aforementioned methods on Kodak. Our method (CAE-ADMM) outperforms all the other methods in both SSIM and MS-SSIM, especially the original CAE, which uses entropy coding. Note that the blue curve represents the RNN-based method proposed by Toderici et al., which is optimized without an entropy estimator.
In Fig.~\ref{fig:cat}, we compare the visual effect of different compression methods: the original image (top left), JPEG (top right), CAE-ADMM (ours, bottom left) and JPEG 2000 (bottom right). From the figure, we can see that JPEG breaks down at a bpp of about 0.3, while CAE-ADMM and JPEG 2000 remain satisfactory.
\begin{table}[htbp]
\centering
\begin{tabular}{lcc}
\toprule
Model &bpp &ratio of zeros\\
\midrule
Before pruning & $1.684 \pm 0.012$ & $7.80\% \pm 3.44\%$ \\
After pruning & $1.257 \pm 0.011$ & $17.65\% \pm 4.90\%$ \\
\bottomrule
\end{tabular}
\caption{Bpp and the proportion of zero elements in $\hat{{\mathbf z}}$ relative to the total number of elements in $\hat{{\mathbf z}}$, before and after pruning. For both statistics, a 95\% confidence interval is established with a sample size of 233 (the size of the mixed dataset).}
\label{tab:pruning}
\end{table}
For the ablation study, we test the effectiveness of the ADMM module by applying the same training procedure to the same model, once with the pruning schedule and once without. We then calculate the average bpp as well as the ratio of zero elements on a mixed dataset ($768\times512$ crops of images from Urban100 \cite{MSLapSRN}, Manga109 \cite{MSLapSRN} and Kodak PhotoCD). Results are shown in Table~\ref{tab:pruning}, and a more direct visualization of a sample image can be found in Figure~\ref{fig:latent}.
As for inference speed, Table~\ref{tab:inference} shows that CAE-ADMM achieves an acceptable inference speed compared to traditional codecs while maintaining superior quality in terms of SSIM.
\begin{table}[htbp]
\centering
\begin{tabular}{lccc}
\toprule
Model & $\overline{\text{bpp}}$ & SSIM & Second/image\\
\midrule
bpp\_0.5 &$\mathbf{0.597}$ & $\mathbf{0.871\!\pm\!0.003}$ &$0.140\!\pm\!0.008$\\
JPEG &0.603 & $0.828\!\pm\!0.006$ &$\mathbf{0.033\!\pm\!0.001}$\\
JPEG2000 &0.601 & $0.793\!\pm\!0.013$ &$0.177\!\pm\!0.020$\\
\bottomrule
\end{tabular}
\caption{95\% confidence intervals of SSIM and inference speed for the CAE-ADMM model (inference time is the sum over all $128 \times 128$ patches with batch size = 8) on the mixed dataset with 2 GPUs, compared to traditional codecs.}
\label{tab:inference}
\end{table}
\subsection{Selection of $E$, $D$ and $Q$}
\label{subs:sel}
\input{Tex/method/ED.tex}
\subsection{Solution to the optimization problem}
\label{subs:opt}
\input{Tex/method/opt.tex}
\section{Introduction}
Due to the surge of interest in networks, such as the Internet
(e.g., The Internet Mapping Project by Hal Burch and Bill Cheswick),
resistor networks~\cite{newman2004finding}, the World Wide Web
(WWW)~\cite{barabasi1999emergence}, and social networks (e.g.,
friendship
network~\cite{moody2001race}), a plethora of network models have
been
proposed and studied in the last several decades. In this paper, we
investigate a network model that recently caught researchers'
attention---{\em Apollonian networks} (ANs). ANs arise from the
problem of space-filling packing of spheres, proposed by the ancient
Greek mathematician Apollonius of Perga. ANs possess a variety of
typical network characteristics,
which are summarized in the title of~\cite{andrade2005apollonian}:
scale free,
small world, Euclidean, space filling and matching graphs. Each of
these phrases is a significant area of modern network research
itself. In practice, ANs have found a lot of applications in
different scientific disciplines~\cite{almeida2013quantum,
huang2006walks,
lima2012nonequilibrium, pawela2015generalized, serva2013ising,
silva2013critical, souza2013discrete, wong2010partially,
xu2008coherent}. The wide application of this class of networks
motivates us to conduct the present research.
The counterpart of AN in the field of random network analysis is
called {\em Random Apollonian Network} (RAN). The study of RAN first
appeared in~\cite{zhou2005maximal}, where the power-law and the
clustering
coefficient were investigated. Since then, many more properties of
RANs have been uncovered by applied mathematicians and probabilists:
The degree distribution was characterized by~\cite{frieze2014some};
the diameter was calculated by~\cite{ebrahimzadeh2014onlongest,
frieze2014some}; the length
of the longest path in RANs was determined
by~\cite{collevecchio2016longest, cooper2015long,
ebrahimzadeh2014onlongest}. All these research papers, however,
only
focused on planar RANs, the evolution of which is based on
continuing triangulation. Triangulated RANs are a special class of
(more general) RANs with network {\em index} taking value $3$. It
can be
shown that triangulated RANs are maximal planar graphs by the {\em
Kuratowski criterion}~\cite{kuratowski1930remarques}. It is
evident
that there is an underlying theory of {\em preferential attachment}
(PA)~\cite{massen2007preferential} in the evolution of RANs, where
PA is a critical manifestation in social sciences, rendering the
potential application of RANs in a wider range of fields (than what
has been found in the literature).
In this paper, we consider
a class of high-dimensional networks generalized from triangulated
RANs, i.e., high-dimensional random Apollonian Networks (HDRANs)
that refer to the RANs with a general network index $k \ge 3$.
HDRANs were first introduced by~\cite{zhang2006high}, where an
iterative
algorithm was designed to characterize several network properties
including degree distribution, clustering coefficient and diameter.
The exact
degree distribution of a vertex with a fixed label and the total
weight (a macro metric) were determined
by~\cite{zhang2016thedegree}. A follow-up study embedding RANs into
continuous time was given by~\cite{zhang2016distributions}. To the
best of our knowledge, almost no other work has been done on
HDRANs in the literature.
The goal of this paper is to give a comprehensive study of HDRANs,
with specific focus on the investigation of several network
properties of common interest by utilizing some well-developed
methods; for instance, stochastic recurrences and \polya\ urns.
Some of the results, such as the degree distribution, are
directly extended from their counterparts for triangulated RANs. For
better readability, only the results and the underlying theory are
presented in the main body of the paper, but the associated
mathematical derivations are given in the appendix. A couple of
novel network properties, such as the sparsity and the
depth of HDRANs, are rigorously uncovered as well. Details will be
given in the
sequel.
The rest of the paper is
organized as follows. In Section~\ref{Sec:evolution}, we briefly
review the evolutionary process of HDRANs as well as some basic
graph invariants thereof. In the next five sections, the main
results of the analyzed properties are given; see a summary in
Table~\ref{Table:summary}.
\begin{table}[h!]
\begin{center}
\renewcommand*{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|}
\hline
Section & Property & Method(s)
\\ \hline
$3$ & Degree profile {\rm I} & Two-dimensional induction
(extended from~\cite{frieze2014some})
\\ \hline
\multirow{2}{*}{$4$} & \multirow{2}{*}{Degree profile
{\rm II}} & Analytic
combinatorics~\cite{flajolet2006some}
\\ & & Triangular urns~\cite{zhang2016thedegree}
\\ \hline
$5$ & Small world & Local clustering coefficient
\\ \hline
$6$ & Sparsity & A proposed Gini index
\\ \hline
\multirow{3}{*}{$7$} & Total depth & Recurrence methods
\\ & Diameter & Results directly
from~\cite{cooper2014theheight}
\\ & The Wiener index & Numeric experiments
\\ \hline
\end{tabular}
\caption{Summary of the article}
\label{Table:summary}
\end{center}
\end{table}
In Section~\ref{Sec:concluding}, we give some concluding remarks and
propose some future work.
\section{Evolution of random Apollonian networks}
\label{Sec:evolution}
In this section, we review the evolution of a RAN of index $k \ge
3$. At time $n = 0$, we start with a {\em complete
graph}\footnote{In graph theory, a complete graph is a graph
such
that each pair of vertices therein is connected by an edge. A
complete graph on $k$ vertices is also called $k$-clique or
$k$-simplex. We shall interchangeably use these terms through
the
manuscript.} on $k$ vertices all of which are labeled with $0$.
At
each subsequent time point $n \ge 1$, a $k$-clique is chosen
uniformly at random among all active cliques in the network. A new
vertex labeled with $n$ is linked by $k$ edges to all the vertices
of the chosen clique. Then, the recruiting clique is deactivated. An
explanatory example of a RAN with index $k = 5$ is given in
Figure~\ref{Fig:evol}.
\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}[scale=2.63]
\draw
(-0.25,0) node [circle=0.1,draw] {0}
(0.25,0) node [circle=0.1,draw] {0}
(0,0.866) node [circle=0.1,draw] {0}
(-0.45, 0.52) node [circle=0.1,draw] {0}
(0.45, 0.52) node [circle=0.1,draw] {0}
(-0.133,0) -- (0.133,0)
(-0.33,0.52) -- (0.33,0.52)
(-0.158, 0.085) -- (-0.03, 0.755)
(0.158, 0.085) -- (0.03, 0.755)
(-0.41, 0.405) -- (-0.33, 0.09)
(0.41, 0.405) -- (0.33, 0.09)
(-0.357, 0.448) -- (0.141, 0.036)
(0.357, 0.448) -- (-0.141, 0.036)
(-0.357, 0.6) -- (-0.077, 0.78)
(0.357, 0.6) -- (0.077, 0.78);
\draw [->,>=stealth,thick] (0.625,.5) -- +(0.25,0);
\draw
(1.25,0) node [circle=0.1,draw] {0}
(1.75,0) node [circle=0.1,draw] {0}
(1.5,0.866) node [circle=0.1,draw] {0}
(1.05, 0.52) node [circle=0.1,draw] {0}
(1.95, 0.52) node [circle=0.1,draw] {0}
(1.85, 0.866) node [circle=0.1,draw] {1};
\draw[dashed]
(1.367,0) -- (1.633,0)
(1.17,0.52) -- (1.83,0.52)
(1.342, 0.085) -- (1.47, 0.755)
(1.658, 0.085) -- (1.53, 0.755)
(1.09, 0.405) -- (1.17, 0.09)
(1.91, 0.405) -- (1.83, 0.09)
(1.143, 0.448) -- (1.641, 0.036)
(1.857, 0.448) -- (1.359, 0.036)
(1.143, 0.6) -- (1.423, 0.78)
(1.857, 0.6) -- (1.577, 0.78);
\draw
(1.625, 0.866) to[bend left=15] (1.735, 0.866)
(1.143, 0.6) to[bend left=75] (1.8, 0.97)
(1.342, 0.085) to[bend left=45] (1.75, 0.8)
(1.658, 0.085) to[bend left=15] (1.8, 0.75)
(1.9, 0.63) to[bend right=30] (1.9, 0.75);
\draw [->,>=stealth,thick] (2.12,.5) -- +(0.25,0);
\draw
(2.75,0) node [circle=0.1,draw] {0}
(3.25,0) node [circle=0.1,draw] {0}
(3,0.866) node [circle=0.1,draw] {0}
(2.55, 0.52) node [circle=0.1,draw] {0}
(3.45, 0.52) node [circle=0.1,draw] {0}
(3.35, 0.866) node [circle=0.1,draw] {1}
(3.55, 0.2) node [circle=0.1,draw] {2};
\draw[dashed]
(2.867,0) -- (3.133,0)
(2.67,0.52) -- (3.33,0.52)
(2.842,0.085) -- (2.97, 0.755)
(3.158, 0.085) -- (3.03, 0.755)
(2.59, 0.405) -- (2.67, 0.09)
(3.41, 0.405) -- (3.33, 0.09)
(2.643, 0.448) -- (3.141, 0.036)
(3.357, 0.448) -- (2.859, 0.036)
(2.643, 0.6) -- (2.923, 0.78)
(3.357, 0.6) -- (3.077, 0.78)
(3.125, 0.866) to[bend left=15] (3.235, 0.866)
(2.643, 0.6) to[bend left=75] (3.3, 0.97)
(2.842, 0.085) to[bend left=45] (3.25, 0.8)
(3.158, 0.085) to[bend left=15] (3.3, 0.75);
\draw
(3.4, 0.63) to[bend right=30] (3.4, 0.75)
(3.6, 0.32) to [bend right=45] (3.43, 0.78)
(3.52, 0.32) to [bend left=30] (3.05, 0.765)
(3.44, 0.25) to [bend left=15] (2.66,0.49)
(3.436, 0.195) to [bend right=10] (2.842, 0.085)
(3.47, 0.115) to [bend left=15] (3.37, 0.02);
\end{tikzpicture}
\caption{An example of the evolution of a HDRAN of index 5
in two steps; active cliques are those containing at
least
one solid edge.}
\label{Fig:evol}
\end{center}
\end{figure}
According to the evolutionary process described above, we obtain
some basic and deterministic graph invariants of a RAN with index
$k$ at time $n$: the number of vertices $V_{n}^{(k)} = k + n$, the
number of edges $E_{n}^{(k)} = k + nk$, and the number of active
cliques $\clique_n^{(k)} = 1 + (k - 1)n$. We note that RANs of
indices $1$ and $2$ are not considered in this paper, as their
structure lacks research interest. A RAN of index $1$ at time $n$ is
a single vertex labeled with $n$, while a RAN of index $2$ at time
$n$ is a path of length $n$.
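The evolutionary process can be simulated in a few lines. The sketch below (ours, illustrative) grows an index-$k$ RAN and checks the invariant counts: the initial $k$-clique contributes $\binom{k}{2}$ edges, each newcomer adds $k$ edges, and each step deactivates one clique while activating $k$ new ones.

```python
import random

def build_ran(k, n, seed=0):
    """Grow a random Apollonian network of index k for n steps
    (an illustrative sketch of the evolution described above)."""
    rng = random.Random(seed)
    num_vertices = k                       # initial k-clique
    edges = {(i, j) for i in range(k) for j in range(i + 1, k)}
    active = [tuple(range(k))]             # active k-cliques
    for _ in range(n):
        clique = active.pop(rng.randrange(len(active)))  # deactivated
        v = num_vertices                   # newcomer's label
        num_vertices += 1
        edges.update((u, v) for u in clique)
        # every (k-1)-subset of the old clique plus v becomes active
        for drop in clique:
            active.append(tuple(c for c in clique if c != drop) + (v,))
    return num_vertices, edges, active

k, n = 5, 10
V, E, A = build_ran(k, n)
print(V, len(E), len(A))   # k + n, C(k,2) + nk, 1 + (k-1)n
```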
\section{Degree profile {\rm I}}
\label{Sec:degree}
In this section, we investigate the degree profile of a RAN of index
$k \ge 3$. The random variable of prime interest is $X_{n,
j}^{(k)}$, the number of vertices of degree $j$ in a RAN of
index
$k$ at time $n$, for $j \ge k$, where the boundary condition arises
from the natural lower bound of the degree of vertices in
RANs\footnote{Upon joining into the network, every newcomer is
connected with $k$ existing vertices, leading to minimal
possible
degree $k$.}. It is also worthy of noting that the natural upper
bound for $j$ at time $n$ is $k + n - 1$.
The degree random variable that we consider in this section is
different from that investigated in~\cite{zhang2016thedegree}, and
the methods
developed in~\cite{zhang2016thedegree} are not amenable to this
study, which will
be explained in detail in the sequel. To distinguish the two kinds
of degree profiles, we call the one discussed in this section degree
profile {\rm I}. Specifically, we present two results of $X_{n,
j}^{(k)}$, which are respectively shown in
Theorems~\ref{Thm:L1bound} and~\ref{Thm:pbound}. In
Theorem~\ref{Thm:L1bound}, we prove that the difference between the
expectation of $X_{n, j}^{(k)}$ and a linear function of $n$ is
uniformly bounded, where the bound is determined. In
Theorem~\ref{Thm:pbound}, we show that $X_{n, j}^{(k)}$ concentrates
on its expectation with high probability, i.e., a focusing property.
\begin{theorem}
\label{Thm:L1bound}
Let $X_{n, j}^{(k)}$ be the number of vertices of degree $j$ in
a RAN of index $k$ at time $n$, for $j \ge k$. For each $n \in
\mathbb{N}$ and any $k \ge 3$, there exists a constant $b_{j,
k}$ such that
\begin{equation}
\label{Eq:degreeL1}
\left|\E \left[X_{n, j}^{(k)}\right] - b_{j, k} \, n\right|
\le \frac{2k^2}{2k - 1}.
\end{equation}
In particular, we have $b_{j, k} = \frac{\Gamma(j)\Gamma(2k -
1)}{\Gamma(j + k) \Gamma(k - 1)}$.
\end{theorem}
The proof of Theorem~\ref{Thm:L1bound} is based on an elementary
mathematical tool---induction. As suggested
in~\cite{frieze2014some}, we
split the cases of $j = k$ and $j > k$ in the proof. For the case of
$j = k$, we apply the traditional mathematical induction directly,
whereas we develop a two-dimensional induction based on an infinite
triangular array for the case of $j > k$. For the better readability
of the paper, we present the major steps of the proof in
Appendix~\ref{App:L1bound}.
In the proof of Theorem~\ref{Thm:L1bound}, we show that the mean of
$X_{n, j}^{(k)}$ scaled by $n$ converges to $b_{j, k}$ when $n$ is
large. When $j$ goes to infinity as well, we discover that $b_{j, k}
\sim j^{-k}$ according to the {\em Stirling's approximation}. This
implies that the degree distribution in HDRANs follows a {\em
power-law} property, where the exponent is the network index
$k$.
Consequently, HDRANs are {\em scale-free} networks. The power-law
property for planar RANs (i.e., $k = 3$) has been recovered
in~\cite{zhou2005maximal} numerically and in~\cite{frieze2014some}
analytically.
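Both properties can be checked numerically. The sketch below (our own check) evaluates $b_{j, k}$ via log-gamma functions, confirms that it sums to one over $j \ge k$ (consistent with a limiting degree distribution), and estimates the log-log tail slope, which approaches $-k$.

```python
from math import exp, lgamma, log

def b(j, k):
    """b_{j,k} = Gamma(j)Gamma(2k-1) / (Gamma(j+k)Gamma(k-1)),
    evaluated via log-gamma for numerical stability."""
    return exp(lgamma(j) + lgamma(2 * k - 1) - lgamma(j + k) - lgamma(k - 1))

k = 3
# the limiting proportions sum to one over j >= k (numerical check)
total = sum(b(j, k) for j in range(k, 100_000))

# power-law tail: the log-log slope tends to -k as j grows
slope = (log(b(2000, k)) - log(b(1000, k))) / log(2.0)
print(round(total, 3), round(slope, 2))   # 1.0 and about -3
```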
In addition, we are interested in the deviation of the random
variable $X_{n, j}^{(k)}$ from its expectation. In
Theorem~\ref{Thm:pbound}, we develop a Chebyshev-type inequality.
\begin{theorem}
\label{Thm:pbound}
Let $X_{n, j}^{(k)}$ be the number of vertices of degree $j$ in
a RAN of index $k$ at time $n$, for $j \ge k$. For any $\lambda>
0$, we have
$$\Prob\left(\left|X_{n, j}^{(k)} - \E\left[X_{n, j}^{(k)}
\right]\right| \ge \lambda \right) \le e^{-\lambda^2/(8kn)}.$$
\end{theorem}
The proof of Theorem~\ref{Thm:pbound} is presented in
Appendix~\ref{App:pbound}. The main idea is to employ the {\em
Azuma-Hoeffding inequality}~\cite{azuma1967weighted} based on a
martingale
sequence. We remark that the exact same concentration result is
found for {\em random $k$-trees}~\cite{gao2009thedegree}. The author
of~\cite{gao2009thedegree} tackled the problem by using the methods
from tree
realization theory. The intrinsic reason of the identicality is
similarity in the evolutionary processes of HDRANs with index
$k$ and random $k$-trees.
Before ending this section, we would like to point out that the
methods in the proofs of Theorems~\ref{Thm:L1bound}
and~\ref{Thm:pbound} are extended from the ideas
in~\cite{frieze2014some}.
The results for planar RANs (a special case for $k = 3$) can be
found in~\cite[Theorem 1.1]{frieze2014some}.
\section{Degree profile {\rm II}}
\label{Sec:degree2}
Another type of degree profile that we look into is node-specified.
Let $D_{n, j}^{(k)}$ denote the degree of the node labeled with $j$
in a HDRAN of index $k$ at time $n$. This property was investigated
in~\cite{zhang2016thedegree}, where the growth of HDRANs was
represented by a
two-color \polya\ urn scheme~\cite{mahmoud2009polya}. \polya\ urn
appears to
be an appropriate model since it successfully captures the
evolutionary characteristics of highly dependent structures.
Noticing that the degree of a vertex is equal to the number of
cliques incident with it, the authors of~\cite{zhang2016thedegree}
introduced a
color code such that the active cliques incident with the node
labeled with $j$ were colored white, while all the rest were colored
blue. The associated urn scheme is governed by the {\em replacement
matrix}
$$\begin{pmatrix}
k - 2 & 1
\\ 0 & k - 1
\end{pmatrix}.$$
This replacement matrix is triangular, so the associated \polya\ urn
is called {\em triangular urn}. This class of urns has been
extensively studied in~\cite{flajolet2006some, janson2006limit,
zhang2015explicit}. The next
proposition specifies the exact distribution of $D_{j, n}^{(k)}$ as
well as its moments.
\begin{prop}
\label{Thm:degreedist}
Let $D_{n, j}^{(k)}$ be the degree of the node labeled with $j$
in a RAN of index $k$ at time $n$, for $n \ge j$. The
distribution of $D_{n, j}^{(k)}$ is given by
\begin{align*}
\Prob\left(D_{n, j}^{(k)} = k + \delta\right) &=
\frac{\Gamma(n - j + 1)\Gamma\left(j + \frac{1}{k -
1}\right)}{\Gamma\left(n + \frac{1}{k -
1}\right)}{{\delta +
\frac{2}{k - 2}} \choose \delta}
        \\ &\qquad{}\times \sum_{i = 1}^{\delta} (-1)^{i}
        {\delta \choose i} {{n - 2 - \frac{k - 2}{k - 1} i} \choose
{n - j}},
\end{align*}
for $\delta = 1, 2, \ldots, n - j$. The $s$-th moment of $D_{n,
j}^{(k)}$ is
\begin{align}
\E\left[\left(D_{n, j}^{(k)}\right)^{s\,}\right] &=
\frac{1}{(k - 2)^s} \left[(k(k - 3))^s + \sum_{r = 1}^{s}{s
\choose r} \frac{(k(k - 3))^{s - r}(k -
2)^r}{\left\langle j
+ 1/(k - 1)\right\rangle_{n - j}}\right. \nonumber
\\ &\qquad{}\left.\times \sum_{i = 1}^{r} (-1)^{r - i} {r
\brace i} \left\langle \frac{k}{k - 2} \right\rangle_i
\left\langle j + \frac{1}{k - 1} + \frac{k - 2}{k - 1}i
\right\rangle_{n - j}\right], \label{Eq:degreemoment}
\end{align}
where $\langle \cdot \rangle_{\cdot}$ represents the Pochhammer
symbol of rising factorial, and ${{\cdot} \brace {\cdot}}$
represents Stirling numbers of the second kind.
\end{prop}
The probability distribution function of $D_{n, j}^{(k)}$ is
obtained by exploiting the results in~\cite[Proposition
14]{flajolet2006some}, and the moments are recovered
from~\cite[Proposition
1]{zhang2016thedegree}. The asymptotic moments of $D_{n, j}^{(k)}$
are obtained
directly by applying the Stirling's approximation to
Equation~(\ref{Eq:degreemoment}); namely,
$$\E\left[\left(\frac{D_{n, j}^{(k)}}{n^{(k - 2)/(k - 1)}}\right)^{s
\,}\right] = \frac{\Gamma\left(j + \frac{1}{k - 1}\right)
\Gamma\left( s + \frac{k}{k - 2}\right)}{\Gamma\left(j + \frac{k
-
2}{k - 1}s + \frac{1}{k - 1}\right) \Gamma\left(\frac{k}{k -
2}\right)}.$$
In particular, the asymptotic mean of $D_{n, j}^{(k)}$ is given by
$$\E\left[D_{n, j}^{(k)}\right] \sim \frac{\frac{k}{k - 2}
\Gamma\left(j + \frac{1}{k - 1}\right)}{\Gamma(j + 1)} \, n^{(k
-
2)/(k - 1)},$$
implying a phase transition in $j = j(n)$:
$$\E\left[D_{n, j}^{(k)}\right] \sim
\begin{cases}
\frac{k}{k - 2} \left(\frac{n}{j(n)}\right)^{(k - 2)/(k - 1)},
\qquad &j = o(n),
\\ \frac{k}{k - 2} \left(k - 3 + \alpha^{-(k - 2)/(k - 1)}
\right), \qquad &j \sim \alpha n,
\end{cases}
$$
for some $\alpha > 0$.
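The $j = o(n)$ regime of this phase transition rests on the Stirling-type asymptotics $\Gamma\left(j + \frac{1}{k-1}\right)/\Gamma(j + 1) \sim j^{-(k-2)/(k-1)}$, which is easy to confirm numerically (a quick check of ours, not from the source):

```python
from math import exp, lgamma

# Check Gamma(j + 1/(k-1)) / Gamma(j + 1) ~ j^{-(k-2)/(k-1)} numerically.
k = 5
a = (k - 2) / (k - 1)
ratios = [exp(lgamma(j + 1 / (k - 1)) - lgamma(j + 1)) * j**a
          for j in (10, 100, 1000)]
print([round(r, 4) for r in ratios])   # tends to 1 as j grows
```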
\section{Small-world}
\label{Sec:smallworld}
In
this section, we look into the {\em small-world} property of HDRANs.
The term ``small-world'' was coined
by~\cite{watts1998collective}, in which the
authors suggested using the average of {\em local
clustering coefficients} to assess the small-world effect of a
network; that is,
$$\hat{C}(n) = \frac{1}{V} \sum_{v} C_v(n),$$
where $V = V_n^{(k)}$ denotes the total number of vertices and
$C_v(n)$ is the
local clustering coefficient of vertex~$v$ at time $n$. The local
clustering coefficient of vertex $v$ is defined as the proportion of
realized edges among all possible edges in the {\em open
neighborhood} of $v$, i.e.,
$$C_v(n) = \frac{|\{e_{uw} : u, w \in
\mathcal{N}_v(n)\}|}{|\mathcal{N}_v(n)|\bigl(|\mathcal{N}_v(n)| -
1\bigr)/2},$$
where $\mathcal{N}_v(n)$ is the open neighborhood of $v$ at time
$n$, $e_{ij}$ denotes an edge between vertices $i$ and $j$, and
$|\cdot|$ represents the cardinality of a set.
For each newcomer, say $v^{*}$, to an HDRAN of index $k$, the open
neighborhood of $v^{*}$ consists of the $k$ vertices of the simplex
chosen for recruiting $v^*$. Thus, the order of the open
neighborhood of $v^{*}$ is $k$, and the number of edges in the
neighborhood of $v^{*}$ is ${k \choose 2}$. Upon the first
appearance of $v^{*}$ in the network, the degree of $v^{*}$, denoted
$d_{v^{*}}(n)$, is $k$. Whenever an active simplex containing
$v^{*}$ is selected for recruiting a newcomer at a subsequent time
point, $d_{v^{*}}(n)$ increases by $1$, and the number of edges in
the neighborhood of $v^{*}$ increases by $k - 1$. In general, for a
vertex $v$ with degree ${\rm deg}_v(n) = j$ at time $n$, the
clustering coefficient is given by
\begin{equation*}
C_v(n) = \frac{(k - 1)(j - k) + {k \choose 2}}{{j \choose 2}} =
\frac{(k - 1)(2j - k)}{j(j - 1)}.
\end{equation*}
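The simplification above can be verified exactly in rational arithmetic (our check, not part of the original text):

```python
from fractions import Fraction

def comb2(x):
    """Binomial coefficient C(x, 2) as an exact fraction."""
    return Fraction(x * (x - 1), 2)

# ((k-1)(j-k) + C(k,2)) / C(j,2) equals (k-1)(2j-k) / (j(j-1)) exactly.
for k in range(3, 8):
    for j in range(k, 40):
        lhs = ((k - 1) * (j - k) + comb2(k)) / comb2(j)
        rhs = Fraction((k - 1) * (2 * j - k), j * (j - 1))
        assert lhs == rhs
print("identity verified for k = 3..7 and j up to 39")
```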
Accordingly, the clustering coefficient of the entire network at
time $n$ is
\begin{equation*}
\hat{C}(n) = \frac{1}{n + k} \sum_{v}C_v(n) = \sum_{j = k}^{k +
n - 1} \frac{(k - 1)(2j - k)}{j(j - 1)} \times \frac{X_{n,
j}^{(k)}}{n + k},
\end{equation*}
where $X_{n, j}^{(k)}$ denotes the number of vertices of degree $j$
in the network at time $n$. When the network is large (i.e., $n \to
\infty$), the asymptotic clustering coefficient is given by
\begin{equation*}
\hat{C}(\infty) \approx \sum_{j = k}^{\infty} \frac{(k - 1)(2j
- k)}{j(j - 1)} \lim_{n \to \infty} \frac{\E \left[X_{n,
j}^{(k)}\right]}{n + k} = \sum_{j = k}^{\infty} \frac{(k
- 1)(2j
- k)}{j(j - 1)}
\frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)},
\end{equation*}
where the second equality in the last display holds according to
Theorem~\ref{Thm:L1bound}. We simplify the expression of
$\hat{C}(\infty)$ by applying several algebraic identities of the
gamma function, and get
\begin{equation*}
\label{Eq:asymcc}
\hat{C}(\infty) \approx \frac{(k - 1)\Gamma(2k - 1)}{\Gamma(k -
1)} \sum_{j = k}^{\infty} \frac{(2j - k) \Gamma(j - 1)}{j \,
\Gamma(j + k)} = \frac{(k - 1)\Gamma(2k - 1)}{\Gamma(k - 1)}
\sum_{j =
k}^{\infty} \left(\frac{2 \, \Gamma(j - 1)}{\Gamma(j + k)} -
\frac{k \, \Gamma(j - 1)}{j \, \Gamma(j + k)} \right).
\end{equation*}
We evaluate the two terms in the summand one after another. The
first sum is given by
$$\sum_{j = k}^{\infty} \frac{2 \, \Gamma(j - 1)}{\Gamma(j + k)} =
\frac{2(2k - 1)\Gamma(k - 1)}{k \, \Gamma(2k)}.$$
The second sum is simplified to
$$\sum_{j = k}^{\infty} \frac{k \, \Gamma(j - 1)}{j \, \Gamma(j +
k)} = \frac{\Gamma(k - 1)\Hypergeometric{3}{2}{1, k - 1, k}{2k,
k +
1}{1}}{\Gamma(2k)},$$
where $\Hypergeometric{3}{2}{\cdot}{\cdot}{\cdot}$ is a {\em
generalized hypergeometric function}. Putting them together, we
thus
have
$$\hat{C}(\infty) \approx \frac{k - 1}{2k - 1}\left(\frac{2(2k -
1)}{k} - \Hypergeometric{3}{2}{1, k - 1, k}{2k, k +
1}{1}\right).$$
Although hypergeometric functions cannot be written in closed form
in general, we derive the analytical results of $\hat{C}(\infty)$
for several small values of $k$, and present them in
Table~\ref{Table:asymcc}. In particular, the estimated clustering
coefficient for triangulated RANs (i.e., $k = 3$) based on our
calculation is $12 \pi^2 - 353/3 \approx 0.7686$, which is more
accurate than $46/3 - 36 \log(3/2) \approx 0.7366$~\cite[Equation
(6)]{zhou2005maximal}, according to a simulation
experiment ($0.7683$ based on the average of $50$ independent
samples, each of which is run over $10,000$ iterations).
\begin{table}[h]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Network index ($k$) & $\hat{C}(\infty)$
\\ \hline
3 & $12 \pi^2 - \frac{353}{3}$
\\ \hline 4 & $120 \pi^2 - \frac{2367}{2}$
\\ \hline
5 & $\frac{2800}{3} \pi^2 - \frac{138161}{15}$
\\ \hline 6 & $6300 \pi^2 - \frac{746131}{12}$
\\ \hline
7 & $38808 \pi^2 - \frac{134056533}{350}$
\\ \hline 8 & $224224 \pi^2 - \frac{663900367}{300}$
\\ \hline
9 & $1235520 \pi^2 - \frac{26887974331}{2205}$
\\ \hline
10 & $6563700 \pi^2 - \frac{253941996039}{3920}$
\\ \hline
\end{tabular}
\end{center}
\caption{Asymptotic clustering coefficients of HDRANs with small
indices $k$}
\label{Table:asymcc}
\end{table}
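The closed forms in Table~\ref{Table:asymcc} can be cross-checked by summing the series for $\hat{C}(\infty)$ directly (our numerical check; the truncation point is an arbitrary choice):

```python
import math

def clustering_limit(k, n_terms=200000):
    """Direct numerical summation of the series for hat{C}(infinity)."""
    pref = (k - 1) * math.exp(math.lgamma(2 * k - 1) - math.lgamma(k - 1))
    total = 0.0
    for j in range(k, k + n_terms):
        total += (2 * j - k) / (j * (j - 1)) * math.exp(
            math.lgamma(j) - math.lgamma(j + k))
    return pref * total

print(clustering_limit(3))  # 12*pi^2 - 353/3 ≈ 0.7686
print(clustering_limit(4))  # 120*pi^2 - 2367/2 ≈ 0.8525
```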
\section{Sparsity}
\label{Sec:sparsity}
{\em Sparsity} is a property of common interest in network
modeling~\cite{singh2015finding, verzelen2015community,
vinciotti2013robust}, as well as in data
analytics~\cite{arnold2010specifying, buluc2011implementing}. As
opposed to ``dense'' networks, sparse networks have far fewer links
than the maximum possible number of links in the (complete) network
of the same order. In computer science, sparse graph classes are
further classified as somewhere dense or nowhere dense. The
investigation of the sparsity of HDRANs is inspired by an article
recently published by the American Physical
Society~\cite{delgenio2011all}, in which it
was proven, both analytically and numerically, that the
probability of a scale-free network being dense is $0$, given that
the power-law coefficient falls between~$0$ and $2$.
One of the most commonly used metrics for measuring the
sparsity of a network $G(V, E)$ is the {\em link density} (also
known as {\em edge density} in the literature):
$${\rm density}(G) = \frac{|E|}{\binom{|V|}{2}}.$$
For an HDRAN of index $k$, denoted $\apol_{n}^{(k)}$, its link
density at time $n$ is a decreasing function of $n$, viz.,
$${\rm density}\left(\apol_{n}^{(k)}\right) =
\frac{E_n^{(k)}}{\binom{V_n^{(k)}}{2}} = \frac{\binom{k}{2} +
nk}{\binom{k + n}{2}} = \frac{k(k - 1) + 2nk}{(k + n)(k + n - 1)}.$$
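For illustration (our snippet, not from the text; it counts $\binom{k}{2}$ edges for the initial $k$-clique and $k$ new edges per step), the link density decays at rate roughly $2k/n$:

```python
def link_density(k, n):
    """Link density of an HDRAN of index k at time n."""
    edges = k * (k - 1) // 2 + n * k      # initial clique + k edges per step
    pairs = (k + n) * (k + n - 1) // 2    # C(k + n, 2) possible links
    return edges / pairs

for n in (10, 100, 1000, 10000):
    print(n, link_density(3, n))          # decreasing in n, roughly 2k/n
```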
Observing that the link density of an HDRAN in any form is
deterministic given $k$ and $n$, we assert that this metric indeed
fails to expose the randomness or to capture the structure of
HDRANs. Other metrics that have been proposed to measure the
sparsity of both nonrandom and random networks include degeneracy,
arboricity, maximum average degree, etc. We refer interested
readers to~\cite{nesetril2012sparsity} for textbook-style
expositions of these
metrics and their properties.
In this section, we measure the sparsity of HDRANs via a classical
metric---the {\em Gini index}~\cite{gini1921measurement}. The Gini
index, which
appears most often in economics, is commonly used to measure the
inequality of income or wealth~\cite{dalton1920themeasurement,
gini1921measurement}. The utilization
of the Gini index as a sparsity measurement originates in electrical
engineering~\cite{hurley2009comparing}. More recently, the Gini
index has been used to
evaluate the regularity of graphs~\cite{balaji2017thegini,
domicolo2020degree, zhang2019thedegree}. The Gini
index debuted as a sparsity measurement of networks
in~\cite{goswami2018sparsity}.
A graphical interpretation of the Gini index is the {\em Lorenz
curve}. As portrayed in Figure~\ref{Fig:exLorenz}, the Lorenz
curve
(thick black curve) splits the lower triangle of a unit square into
regions $A$ and $B$.
A well-established relationship between the Gini index and the
Lorenz curve is that the Gini index equals the ratio of
${\rm Area}(A)$ to ${\rm Area}(A + B)$,
which is equivalent to $1 - 2 \times {\rm Area}(B)$.
\begin{figure}[tbh]
\centering
\begin{tikzpicture}[scale=5]
\draw
(0, 0) -- (0, 1)
(0, 0) -- (1, 0)
(0, 1) -- (1, 1)
(1, 0) -- (1, 1)
(0, 0) -- (1, 1);
\draw[ultra thick]
(0, 0) to [bend right = 40] (1, 1);
\draw[fill=gray!20]
(0, 0) to [bend right = 40] (1, 1) -- (1, 0) -- (0, 0) --
cycle;
\node[text width = 0.1cm] at (0.57, 0.43) {$A$};
\node[text width = 0.1cm] at (0.8, 0.2) {$B$};
\end{tikzpicture}
\caption{An example of a typical Lorenz curve}
\label{Fig:exLorenz}
\end{figure}
We construct the Gini index of HDRANs based on vertex degrees. At
time $n$, there is a total of $k + n$ vertices in $\apol_n^{(k)}$,
and the {\em admissible} degree set is $\mathcal{J} = \{k, k + 1,
\ldots, k + n\}$. According to Theorem~\ref{Thm:L1bound}, the
expected proportion of vertices having degree $j \in
\mathcal{J}$ can be approximated by $\bigl(\Gamma(j) \Gamma(2k -
1)\bigr)/\bigl(\Gamma(j + k) \Gamma(k - 1)\bigr)$ when $n$ is
large. For simplicity, let us denote this mean proportion for each
pair of $j$ and $k$ by $\gamma(j, k)$. These $\gamma(j, k)$'s
together form the Lorenz curve after being rearranged in
ascending order. Note that
$$\frac{\partial}{\partial j} \, \gamma(j, k) = \frac{\bigl(
\Psi(j) - \Psi(j + k) \bigr) 2^{2k - 2} (k - 1) \Gamma\left(k -
\frac{1}{2}\right) \Gamma(j)}{\Gamma\left(\frac{1}{2}\right)
\Gamma(j + k)} < 0,$$
where $\Psi(\cdot)$ is the {\em digamma function}, known to be
increasing on the positive real line. Hence, the function $\gamma(j,
k)$ is decreasing with respect to $j$.
Specifically, we build the Lorenz curve as follows. The bottom edge
of the unit square is equispaced into $(n + 1)$ segments. The bottom
left vertex is marked $0$ along with vertical value $0$. The
cumulative proportion $\sum_{j = k + n - i + 1}^{k + n}
\gamma(j, k)$ is assigned to the $i$th segmentation point from the
left; there is a total of $n$ segmentation points between the bottom
left and bottom right vertices. Lastly, the vertical value for the
bottom right vertex is $\sum_{j = k}^{k + n} \gamma(j, k)$. The
Lorenz curve is formed by smoothly connecting these assigned values
in order, from left to right.
In the next lemma, we show that the Lorenz curve that we established
in the last paragraph is well defined, i.e., the two ends of the
Lorenz curve respectively coincide with the bottom left and the top
right corners of the unit square.
\begin{lemma}
\label{Lem:lorenz}
We claim that
$$\lim_{n \to \infty} \sum_{j = k}^{k + n} \frac{\Gamma(j)
\Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)} = 1 \quad
\mbox{\textit{and}} \quad
\lim_{n \to \infty} \frac{\sum_{j = k + n - i + 1}^{k + n}
\frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k -
1)}}{i/n} = 0.$$
\end{lemma}
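The first limit in Lemma~\ref{Lem:lorenz} is easy to probe numerically (our check; the truncation point is arbitrary, and the partial sums in fact telescope to exactly $1$):

```python
import math

def gamma_prop(j, k):
    """gamma(j, k) = Gamma(j) Gamma(2k - 1) / (Gamma(j + k) Gamma(k - 1))."""
    return math.exp(math.lgamma(j) + math.lgamma(2 * k - 1)
                    - math.lgamma(j + k) - math.lgamma(k - 1))

for k in (3, 5, 10):
    partial = sum(gamma_prop(j, k) for j in range(k, 100000))
    print(k, partial)   # approaches 1 for every index k
```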
The proof of Lemma~\ref{Lem:lorenz} is presented in
Appendix~\ref{App:lorenz}. Next, we calculate ${\rm Area}(B)$, which
amounts to integrating the Lorenz curve from $0$ to $1$. For
large values of $n$, the integral can be approximated by applying
the {\em trapezoid rule}; that is,
\begingroup
\allowdisplaybreaks
\begin{align*}
{\rm Area}(B) &\approx \frac{1}{2 (n + 1)} \left[\sum_{j = k +
n}^{k + n} \frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k)
\Gamma(k - 1)} \right.
\\ &\qquad{}+ \left(\sum_{j = k + n}^{k + n} \frac{\Gamma(j)
\Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)} + \sum_{j = k +
n -
1}^{k + n} \frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k)
\Gamma(k - 1)}\right)
\\ &\qquad{} + \cdots + \left.\left(\sum_{j = k + 1}^{k + n}
\frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)} +
\sum_{j = k}^{k + n} \frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j +
k) \Gamma(k - 1)} \right)\right]
\\&= \frac{1}{2(n + 1)}\left(\frac{3k - 2}{k - 2} - \frac{2^{2k
- 1}\bigl((k - 1)n + 2\bigr) \Gamma\left(k -
\frac{1}{2}\right)\Gamma(k + n + 1)}{(k -
2)\Gamma\left(\frac{1}{2}\right)\Gamma(2k + n)}\right)
\\&\sim n^{-1} - n^{1 - k}.
\end{align*}
\endgroup
It follows that the Gini index of an HDRAN of index $k$ at time $n$
is given by
\begin{align*}
&{\rm Gini}\left(\apol_{n}^{(k)}\right) = 1 - 2 \times {\rm
Area}(B)
\\ &\quad= 1 - \frac{1}{(n + 1)}\left(\frac{3k - 2}{k - 2} -
\frac{2^{2k - 1}\bigl((k - 1)n + 2\bigr) \Gamma\left(k -
\frac{1}{2}\right)\Gamma(k + n + 1)}{(k -
2)\Gamma\left(\frac{1}{2}\right)\Gamma(2k + n)}\right),
\end{align*}
which converges to $1$ as $n \to \infty$. A large value of the
Gini index (which ranges from $0$ to $1$) indicates an extremely
nonuniform distribution of vertex degrees, implying that the vertex
degrees are dominated by only a few classes, whereas a small value
of the Gini index suggests that vertex degrees are evenly distributed
across different degree classes. Thus, we conclude that HDRANs are
(asymptotically) highly sparse.
We further verify our conclusion by conducting some simulation
experiments. In general, each network $G(V, E)$ is associated with a
unique $|V| \times |V|$ {\em adjacency matrix}, denoted $\matA =
(A_{ij})$, in which $A_{ij} = 1$ if and only if there is an edge
linking vertices $i$ and $j$, for $i, j \in V$, and $A_{ij} = 0$
otherwise. If $G$ is
undirected, $\matA$ is symmetric. The degree of vertex $i$ can thus
be represented by the sum of the $i$th row or the $i$th column of
$\matA$, allowing us to compute the Gini index of each simulated
network~$G$ through $\matA$ accordingly. For each $k = 3, 10, 30$,
we generate $100$ independent HDRANs at time $n = 5000$. The
comparison of Lorenz curves (based on the average of cumulative
degree proportion sequences) is given in Figure~\ref{Fig:Lorenz}.
\begin{figure}[tbh]
\centering
\includegraphics[width=\textwidth]{lorenzcurve-eps-converted-to}
\caption{Comparison of Lorenz Curves for simulated HDRANs of $k
= 3, 10, 30$ at time $n = 5000$}
\label{Fig:Lorenz}
\end{figure}
In addition, we calculate the Gini index of each of the $100$
simulated
HDRANs (of $k = 3, 10, 30$) at time $50,000$, and take the average.
The estimated Gini indices are $0.9970330$ (for $k = 3$),
$0.9990327$ (for $k = 10$), and $0.9997262$ (for $k = 30$). We do
not show the corresponding Lorenz curves as they are not visually
distinguishable.
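The simulation just described can be sketched as follows (our implementation, with arbitrary seed; the Lorenz curve is built over degree classes from the highest class down, as in the construction above, and the class range is extended down to the minimum observed degree to accommodate the initial vertices of degree $k - 1$):

```python
import random

def simulate_hdran(k, n, seed=2024):
    """Grow an HDRAN of index k for n steps; return the degree sequence."""
    rng = random.Random(seed)
    deg = [k - 1] * k                      # the initial k-clique
    active = [tuple(range(k))]             # active k-cliques as vertex tuples
    for _ in range(n):
        i = rng.randrange(len(active))
        clique = active[i]
        newcomer = len(deg)
        deg.append(k)                      # newcomer joins all k vertices
        for v in clique:
            deg[v] += 1
        active[i] = active[-1]             # deactivate the chosen clique
        active.pop()
        for drop in range(k):              # ... and spawn k new active ones
            active.append(clique[:drop] + clique[drop + 1:] + (newcomer,))
    return deg

def degree_gini(deg, k, n):
    """Gini index from the Lorenz curve over degree classes."""
    counts = {}
    for d in deg:
        counts[d] = counts.get(d, 0) + 1
    total = len(deg)
    lorenz, cum = [0.0], 0.0
    for j in range(k + n, min(deg) - 1, -1):   # highest degree class first
        cum += counts.get(j, 0) / total
        lorenz.append(cum)
    # trapezoid rule for Area(B) under the Lorenz curve
    area = sum((lorenz[i] + lorenz[i + 1]) / 2
               for i in range(len(lorenz) - 1)) / (len(lorenz) - 1)
    return 1 - 2 * area

deg = simulate_hdran(3, 2000)
print(degree_gini(deg, 3, 2000))   # close to 1, as predicted
```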
\section{Depth, diameter and distance}
\label{Sec:diameter}
In this section, we investigate several distance-based properties of
HDRANs. The first measure that we look into is clique-based---{\em
depth}, which is defined (for HDRANs) recursively as follows. At
time $1$, the original $k$-clique is divided into $k$ simplexes, and
then is deactivated. The depth of each of the active $k$-cliques
equals $1$. At time $n > 1$, an existing active clique $\clique^{*}$
is chosen uniformly at random, and subdivided into $k$ new cliques
$\clique_1, \clique_2, \ldots, \clique_k$. Then, we have
$${\rm depth}(\clique_i) = {\rm depth}(\clique^{*}) + 1,$$
for all $i = 1, 2, \ldots, k$. An explanatory example of a RAN of
index $k = 5$ is shown in Figure~\ref{Fig:RANdepth}, where the
(active) cliques denoted by $\left(1, 0^{(1)}, 0^{(2)}, 0^{(3)},
0^{(5)}\right)$,
$\left(1, 0^{(1)}, 0^{(2)}, 0^{(4)}, 0^{(5)}\right)$, $\left(1,
0^{(1)}, 0^{(3)}, 0^{(4)}, 0^{(5)}\right)$, and $\left(1, 0^{(2)},
0^{(3)}, 0^{(4)}, 0^{(5)}\right)$ have depth $1$; all the rest have
depth $2$.
\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}[scale=3.77]
\draw
(2.75,0) node [circle,draw] {$0^{(2)}$}
(3.25,0) node [circle=0.1,draw] {$0^{(1)}$}
(3,0.866) node [circle=0.1,draw] {$0^{(4)}$}
(2.55, 0.52) node [circle=0.1,draw] {$0^{(3)}$}
(3.45, 0.52) node [circle=0.1,draw] {$0^{(5)}$}
(3.35, 0.866) node [circle,draw] {$\; \; 1 \; \;$}
(3.55, 0.2) node [circle=0.1,draw] {$\; \; 2 \; \;$};
\draw
(2.867,0) -- (3.133,0)
(2.67,0.52) -- (3.33,0.52)
(2.842,0.085) -- (2.97, 0.755)
(3.158, 0.085) -- (3.03, 0.755)
(2.59, 0.405) -- (2.67, 0.09)
(3.41, 0.405) -- (3.33, 0.09)
(2.643, 0.448) -- (3.141, 0.036)
(3.357, 0.448) -- (2.859, 0.036)
(2.643, 0.6) -- (2.923, 0.78)
(3.357, 0.6) -- (3.077, 0.78)
(3.125, 0.866) to[bend left=15] (3.235, 0.866)
(2.643, 0.6) to[bend left=75] (3.3, 0.97)
(2.842, 0.085) to[bend left=45] (3.25, 0.8)
(3.158, 0.085) to[bend left=15] (3.3, 0.75);
\draw
(3.4, 0.63) to[bend right=30] (3.4, 0.75)
(3.6, 0.32) to [bend right=45] (3.43, 0.78)
(3.52, 0.32) to [bend left=30] (3.05, 0.765)
(3.44, 0.25) to [bend left=15] (2.66,0.49)
(3.436, 0.195) to [bend right=10] (2.842, 0.085)
(3.47, 0.115) to [bend left=15] (3.37, 0.02);
\end{tikzpicture}
\caption{An example of an HDRAN of index 5 at step 2.}
\label{Fig:RANdepth}
\end{center}
\end{figure}
In contrast, {\em distance}, also known as {\em geodesic distance},
is a property based on pairwise vertices. In a given network $G(V,
E)$, the distance between a pair of arbitrary vertices $i, j \in V$,
denoted $d(i, j)$, is the number of edges in the shortest path (or
one of the shortest paths) connecting $i$ and $j$. A related
property, {\em diameter} of network $G$, denoted ${\rm
diameter}(G)$, is defined in a max-min manner: the greatest
length
of the shortest paths between every two vertices in $G$, i.e.,
$\max_{i, j \in V} \left\{d(i, j)\right\}$. See~\cite[page
82]{bondy2008graph} for fundamental properties of the diameter of a
graph.
For instance, the diameter of the HDRAN given in
Figure~\ref{Fig:RANdepth} is $2$, referring to the distance between
the vertices respectively labeled with $2$ and $0^{(5)}$.
It was shown in~\cite{darrasse2007degree} that there exists a
one-to-one
relation between the evolution of HDRANs (of index $k$) and that of
$k$-ary trees\footnote{See~\cite[page 224]{storer2002anintroduction}
for the
definition of a $k$-ary tree.}. An illustrative example is
presented in
Figure~\ref{Fig:karytree}.
\begin{figure}[tbh]
\begin{center}
\centering
\begin{tikzpicture}[scale = 1.08]
\node[ellipse, draw] at (0, 0) (v0){$0^{(1)}, 0^{(2)},
0^{(3)}, 0^{(4)}, 0^{(5)}$};
\node[rectangle, draw] at (-5, -2) (v11){$1, 0^{(1)},
0^{(2)}, 0^{(3)}, 0^{(5)}$};
\node[rectangle, draw] at (-1.7, -2) (v12){$1, 0^{(1)},
0^{(2)}, 0^{(4)}, 0^{(5)}$};
\node[ellipse, draw] at (0, -3) (v13){$1, 0^{(1)},
0^{(2)}, 0^{(3)}, 0^{(4)}$};
\node[rectangle, draw] at (1.7, -2) (v14){$1, 0^{(1)},
0^{(3)}, 0^{(4)}, 0^{(5)}$};
\node[rectangle, draw] at (5, -2) (v15){$1, 0^{(2)},
0^{(3)}, 0^{(4)}, 0^{(5)}$};
\node[rectangle, draw] at (-5, -5) (v21){$1, 0^{(1)},
0^{(2)}, 0^{(3)}, 2$};
\node[rectangle, draw] at (-1.7, -5) (v22){$1, 0^{(1)},
0^{(2)}, 0^{(4)}, 2$};
\node[rectangle, draw] at (0, -6) (v23){$1, 0^{(1)},
0^{(3)}, 0^{(4)}, 2$};
\node[rectangle, draw] at (1.7, -5) (v24){$1, 0^{(2)},
0^{(3)}, 0^{(4)}, 2$};
\node[rectangle, draw] at (5, -5) (v25){$0^{(1)},
0^{(2)}, 0^{(3)}, 0^{(4)}, 2$};
\draw (v0)--(v11);
\draw (v0)--(v12);
\draw (v0)--(v13);
\draw (v0)--(v14);
\draw (v0)--(v15);
\draw (v13)--(v21);
\draw (v13)--(v22);
\draw (v13)--(v23);
\draw (v13)--(v24);
\draw (v13)--(v25);
\end{tikzpicture}
\caption{The evolution of the $5$-ary tree corresponding to
that of the HDRAN of index $5$ given in
Figure~\ref{Fig:RANdepth}. Elliptic (internal) nodes refer
to inactive cliques, whereas rectangular (external) nodes
refer to active ones.}
\label{Fig:karytree}
\end{center}
\end{figure}
Active and inactive cliques in HDRANs (of index $k$) respectively
correspond to external and internal nodes in $k$-ary trees. Thus,
the total depth of active cliques in $\apol_{n}^{(k)}$ is equivalent
to the total depth\footnote{In tree structure, the depth of a node
is the number of links between the node and the root (of the
tree).}
of external nodes in the corresponding $k$-ary tree at time $n$,
denoted $\ktree{k}{n}$. In the literature, the total depth of
external nodes in $\ktree{k}{n}$ is also known as the {\em total
external path}, denoted by $\ext{k}{n}$ in our manuscript. For
uniformity, we use $\ext{k}{n}$ as the notation for the total depth
of active cliques in $\apol_{n}^{(k)}$ as well.
\begin{prop}
\label{Thm:ext}
Let $\ext{k}{n}$ be the total depth of active cliques in an HDRAN
of index $k$ at time $n$. The first two moments of $\ext{k}{n}$
are given by
\begin{align*}
\E\left[\ext{k}{n}\right] &= (kn - n + 1) \sum_{i = 0}^{n -
1} \frac{k}{k + (k - 1)i},
\\ \E\left[\left(\ext{k}{n}\right)^2\right] &= \bigl((k -
1)n + k\bigr) \bigl((k - 1)n + 1\bigr) k E(k, n) +
O\left(n^2 \log{n}\right),
\end{align*}
where $E(k, n)$ is a function of $k$ and $n$, given in
Appendix~\ref{App:ext}.
\end{prop}
The proof of Proposition~\ref{Thm:ext} can also be found in
Appendix~\ref{App:ext}. Since
$$\sum_{i = 0}^{n - 1} \frac{k}{k + (k - 1)i} \sim \frac{k}{k - 1}
\log{n}$$
for large $n$, we conclude that the leading order of the
asymptotic expectation of $\ext{k}{n}$ is $kn\log{n}$.
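As a quick numerical comparison (ours, not from the text), the exact expectation in Proposition~\ref{Thm:ext} can be set against the leading order $kn\log n$:

```python
import math

def expected_ext(k, n):
    """E[ext] = (kn - n + 1) * sum_{i=0}^{n-1} k / (k + (k - 1) i)."""
    harmonic = sum(k / (k + (k - 1) * i) for i in range(n))
    return (k * n - n + 1) * harmonic

for n in (10**3, 10**5):
    print(n, expected_ext(3, n) / (3 * n * math.log(n)))  # tends to 1
```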
The diameter of HDRANs is also considered. In~\cite{frieze2014some},
the
authors established an upper bound for the diameter of planar RANs
by utilizing a known result of the height of weighted $k$-ary
trees~\cite[Theorem 5]{broutin2006large}, i.e.,
$${\rm diameter} \left(\apol_{n}^{(3)}\right) \le \rho \log n,$$
where $\rho = 1/\eta$, and $\eta$ is the unique solution greater
than $1$ for $\eta - 1 - \log \eta = \log 3$. This upper bound can
be extended to $\apol_{n}^{(k)}$ effortlessly; that is,
$${\rm diameter} \left(\apol_{n}^{(k)}\right) \le \frac{2}{\rho^{*}
(k - 1)} \log n,$$
where $\rho^{*} = 1 / \eta^{*}$, and $\eta^{*}$ is the unique
solution greater than $1$ of $\eta^{*} - 1 - \log \eta^{*} = \log
k$. In addition, the
authors of~\cite{ebrahimzadeh2014onlongest} proved ${\rm diameter}
\left(\apol_{n}^{(3)}\right) \overset{a.s.}{\sim} c \log n$ by
estimating the height of a class of specifically-designed random
trees. The value of $c$ is approximately $1.668$. The asymptotic
expression of the diameter of more general $\apol_{n}^{(k)}$ was
developed by~\cite{cooper2014theheight} and
by~\cite{kolossvary2016degrees}. The
approach in~\cite{cooper2014theheight} was to utilize known results
on
continuous-time branching processes coupled with recurrence methods,
whereas the authors of~\cite{kolossvary2016degrees} overcame the
technical difficulties by
characterizing vertex generations. We only state (without repeating
the proof) the weak law of the diameter of $\apol_{n}^{(k)}$
from~\cite{cooper2014theheight} (with a minor tweak) in the next
theorem.
\begin{theorem}[\mbox{~\cite[Theorem 2]{cooper2014theheight}}]
For $k \ge 3$, with high probability, we have
$${\rm diameter} \left(\apol_{n}^{(k)}\right) \sim c \log n,$$
where $c$ is the solution of
$$\frac{1}{c} = \sum_{\ell = 0}^{k - 1} \frac{k - 1}{\ell + a(k
- 1)},$$
in which the value of $a$ is given by
$$\frac{\Gamma(k + 1) \Gamma(ka)}{\Gamma\bigl((k - 1)a +
k\bigr)} \exp\left\{ \sum_{\ell = 0}^{k - 1} \frac{(k - 1)(a
+
1) - 1}{\ell + (k - 1)a} \right\} = 1.$$
Especially, as $k \to \infty$,
$$c \sim \frac{1}{k \log 2}.$$
\end{theorem}
A topological measure related to distance is the {\em Wiener index},
which was proposed by the chemist Harry
Wiener~\cite{wiener1947structural} to
study molecular branching of chemical compounds. For a network $G(V,
E)$, the Wiener index is defined as the sum of the distances over
all unordered pairs of vertices, i.e., $W(G) = \sum_{i < j} d(i, j)$. The
Wiener index has been extensively studied for random
trees~\cite{dobrynin2001wiener, neininger2002thewiener}. For other
random structures, we
refer the readers to~\cite{bereg2007wiener, fuchs2015thewiener,
janson2003thewiener, wagner2006aclass, wagner2007ontheaverage,
wagner2012onthewiener}.
The methodologies for computing the Wiener index of random trees,
however, are
not adaptable to the study of RANs, as the bijection between RANs
and $k$-ary trees is based on a clique-to-node mapping. The high
dependency of active cliques (sharing vertices and edges)
substantially increases the challenge of formulating mathematical
relation between distance (vertex-based) and depth (clique-based).
There are only a few articles studying distance or related
properties
in RANs. In~\cite{kolossvary2016degrees}, the authors proved that
the distance
between two arbitrary vertices in an HDRAN has both mean and variance of
order $\log n$, and that this distance follows a Gaussian law
asymptotically. However, it seems difficult to extend this result to
the Wiener index, as the covariance structure of the distances (of
all paired vertices) is unspecified. Planar RANs ($\apol_{n}^{(3)}$)
were considered in~\cite{bodini2008distances}. In this article, the
dominant term
of the total distance of all pairs of vertices was shown to be
$\sqrt{3 \pi} n^{5/2}/22$. The main idea was to consider an
enumerative generating function of the total distance, and then
decompose the total distance into interdistance and extradistance.
This approach can be extended to HDRANs of small network index~$k$,
but seemingly not applicable to HDRANs with general index $k$.
Therefore, the Wiener index of HDRANs remains an open problem.
We numerically explore the Wiener index of HDRANs via a series of
simulations. For $k = 3, 5, 8, 10$, we generate $500$ independent
HDRANs at time $2,000$, calculate the Wiener index for each
simulated
HDRAN, and use the kernel method to estimate the density. The plots
of the estimated densities are presented in
Figure~\ref{Fig:densityest}, where we find that they are
approximately bell-shaped, but not symmetric (positively skewed). By
observing these patterns, we conjecture that the limiting
distribution of the Wiener index of HDRANs does not follow a
Gaussian law.
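A minimal version of this experiment (our sketch, not the authors' code; a small single sample, with exact pairwise distances obtained by breadth-first search) reads:

```python
import random
from collections import deque

def hdran_adjacency(k, n, seed=7):
    """Grow an HDRAN of index k for n steps; return its adjacency lists."""
    rng = random.Random(seed)
    adj = {v: set(range(k)) - {v} for v in range(k)}   # initial k-clique
    active = [tuple(range(k))]
    for t in range(n):
        i = rng.randrange(len(active))
        clique = active[i]
        newcomer = k + t
        adj[newcomer] = set(clique)
        for v in clique:
            adj[v].add(newcomer)
        active[i] = active[-1]
        active.pop()
        for drop in range(k):
            active.append(clique[:drop] + clique[drop + 1:] + (newcomer,))
    return adj

def wiener_index(adj):
    """Sum of shortest-path distances over unordered vertex pairs (BFS)."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total // 2   # each pair was counted twice

adj = hdran_adjacency(3, 300)
W = wiener_index(adj)
pairs = len(adj) * (len(adj) - 1) // 2
print(W, W / pairs)   # Wiener index and mean pairwise distance
```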
\begin{figure}[tbh]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[scale=0.23]{wiener3-eps-converted-to}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[scale=0.23]{wiener5-eps-converted-to}
\end{minipage}
\\
\begin{minipage}{0.48\textwidth}
\includegraphics[scale=0.23]{wiener8-eps-converted-to}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[scale=0.23]{wiener10-eps-converted-to}
\end{minipage}
\caption{Density estimation of the Wiener indices of HDRANs for
$k = 3, 5, 8, 10$}
\label{Fig:densityest}
\end{figure}
In addition, for each $k$, we apply the {\em Shapiro-Wilk} test to
the simulated data comprising $500$ Wiener indices, and receive the
following $p$-values: $0.0003$ for $k = 3$; $0.0024$ for $k = 5$;
$9.56 \times 10^{-8}$ for $k = 8$; and $0$ for $k = 10$. These
$p$-values are all statistically significant, in support of our
conjecture.
\section{Concluding remarks}
\label{Sec:concluding}
Finally, we give some concluding remarks and propose some
future work. We investigate several properties of high-dimensional
random Apollonian networks in this paper. Two types of degree
profiles are considered. For the first type, we show that the number
of vertices of a given degree concentrates on its expectation with
high probability. In the proof of Theorem~\ref{Thm:degreedist}, we
derive the $L_1$ limit of $X_{n, j}^{(k)}$, i.e.,
$$\lim_{n \to \infty} \frac{\E \left[X_{n, j}^{(k)} \right]}{n} =
\frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)},$$
which suggests that the asymptotic expectation of $X_{n, j}^{(k)}$
experiences a phase transition. There are two regimes. According to
Stirling's approximation, we have
$$\E \left[X_{n, j}^{(k)} \right] \sim
\begin{cases}
\frac{\Gamma(j) \Gamma(2k - 1)}{\Gamma(j + k) \Gamma(k - 1)} \,
n, &\qquad \mbox{for fixed }j;
\\ \frac{\Gamma(2k - 1)}{\Gamma(k - 1)} \frac{n}{j^k}, &\qquad
\mbox{for }j \to \infty,
\end{cases}
$$
as $n \to \infty$.
For the second type of degree profile, the degree of a vertex with a
given label, we develop the probability mass function and the exact
moments by applying analytic combinatorics methods and
results on triangular \polya\ urns.
The next two properties that we investigate are the small-world
property, measured by the local clustering coefficient, and the
sparsity, measured by a proposed Gini index. We conclude that HDRANs
are highly clustered and sparse.
The last several properties that we look into are based on node
distance. Exploiting the one-to-one relation between HDRANs and
$k$-ary trees, we compute the first two moments of the total depth
of active cliques in HDRANs. We also numerically study the Wiener
index, and conjecture that its limiting distribution is not normal
based on simulation results. The diameter of HDRANs is retrieved
from~\cite{cooper2015long}.
To end this section, we propose some future work. Our conjecture of
the non-normality of the Wiener index is based on numerical
experiments; a more rigorous proof is needed. There remain many open
problems for HDRANs, such as the length of the longest path and the
highest vertex degree. We also suggest carrying out studies of the
stochastic processes that take place on HDRANs, especially those
with applications to mathematical physics, such as percolation and
diffusion. We will look into these open problems and report the
results elsewhere.
\section{Introduction}
The Large Hadron Collider (LHC) made available a diverse data set of production cross sections of light nuclear clusters like deuterons (D), helions ($^3$He) and tritons ($^3$H)~\cite{Adam:2015vda,Acharya:2017fvb}.
The LHC also brought progress in femtoscopy, the study of the momentum-space correlations of particles emitted in hadronic collisions\footnote{Also known as Hanbury Brown-Twiss (HBT)~\cite{HanburyBrown:1956bqd,Brown:1956zza} analyses.}~\cite{Aamodt:2010jj,Abelev:2012sq,Abelev:2013pqa,Kisiel:2014upa,Adam:2015vja,Adam:2015pya,Szymanski:2016xia,Acharya:2018gyz}.
These measurements are a source of information on the state produced in heavy-ion collisions~\cite{Sato:1981ez,Mrowczynski:1987oid,Danielewicz:1992pei,Llope:1995zz,Scheibl:1998tk,Lednicky:2005tb,Mrowczynski:2016xqm,Bellini:2018epz,Braun-Munzinger:2018hat}.
A review of future prospects can be found in~\cite{Citron:2018lsq}.
In this paper we consider an interesting feature in the data~\cite{Blum:2017qnn}: the anti-correlation between the source homogeneity volume, probed in femtoscopy, and the coalescence factor of nuclear clusters. This correlation was predicted two decades ago in a seminal work by Scheibl \& Heinz~\cite{Scheibl:1998tk}. For a cluster with mass number $A$ and spin $J_A$, observed at vanishing transverse momentum $p_t=0$ in the collider frame, it is summarised by the relation~\cite{Blum:2017qnn,Bellini:2018epz}\footnote{See also~\cite{Llope:1995zz}.}\footnote{See, e.g.~\cite{Mekjian:1977ei,Mekjian:1978us,DasGupta:1981xx} for the appearance of a similar formula within a thermodynamic model.}:
\begin{eqnarray}
\label{eq:B2ScheiblHeinz}
\frac{\mathcal{B}_A}{m^{2(A-1)}}&\approx&\frac{2J_A+1}{2^A\,\sqrt{A}}\left(\frac{m\,R}{\sqrt{2\pi}}\right)^{3(1-A)}.
\end{eqnarray}
Here, the coalescence factor is defined as
$\mathcal{B}_A=\left(P_A^0\frac{dN_A}{d^3P_A}\right)/\left(p^0\frac{dN}{d^3p}\right)^A$,
where $p^0dN/d^3p$ is the Lorentz-invariant differential yield for constituent nucleons at $p=P_A/A$. The homogeneity volume is parametrised by the HBT radius $R$~\cite{Aamodt:2010jj,Abelev:2012sq,Abelev:2013pqa,Kisiel:2014upa,Adam:2015vja,Adam:2015pya,Szymanski:2016xia,Acharya:2018gyz}\footnote{More practical details about the definition of $R$ are given in Sec.~\ref{s:data}.}. $m\approx0.94$~GeV is the nucleon mass.
Eq.~(\ref{eq:B2ScheiblHeinz}) was predicted to apply in the limit that the size parameter $d_A$ of the cluster's wave function can be neglected compared to the source homogeneity radius: $d_A\ll R$. For small systems with $R\lesssim d_A$, Eq.~(\ref{eq:B2ScheiblHeinz}) receives a correction via $R^2\to R^2+(d_A/2)^2$.
At finite $p_t$, Ref.~\cite{Scheibl:1998tk} suggested that Eq.~(\ref{eq:B2ScheiblHeinz}) should be modified by $m\to m_t=\sqrt{m^2+p_t^2}$.
A comparison of Eq.~(\ref{eq:B2ScheiblHeinz}) to LHC data was presented in Ref.~\cite{Blum:2017qnn}, which used it to extrapolate measurements in Pb-Pb collisions into a prediction of the coalescence factor of D, $^3$He and $^3$H in p-p collisions. This extrapolation is nontrivial. The HBT radius characterising Pb-Pb collisions is $R\sim4$~fm, compared to $R\sim1$~fm measured in p-p collisions. Thus, Eq.~(\ref{eq:B2ScheiblHeinz}) predicts a large increase in $\mathcal{B}_A$ going from Pb-Pb to p-p: $\mathcal{B}_3^{\rm p-p}/\mathcal{B}_3^{\rm Pb-Pb}\sim4\times10^3$. Subsequent ALICE measurements~\cite{Acharya:2017fvb} in p-p collisions were consistent with this prediction:
Eq.~(\ref{eq:B2ScheiblHeinz}) appears to work, at least to $\mathcal{O}(1)$ accuracy, over orders of magnitude in $\mathcal{B}_A$. The question we ask (and answer) in this study is, why does it work?
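The size of this extrapolation follows directly from the $R$ scaling of Eq.~(\ref{eq:B2ScheiblHeinz}), $\mathcal{B}_A\propto R^{3(1-A)}$ at fixed $m$. A minimal numerical sketch, using the rounded radii quoted above:

```python
# Scaling implied by Eq. (B2ScheiblHeinz): at fixed m, B_A ~ R^{3(1-A)}.
def ba_ratio(A, R_small, R_large):
    """Ratio B_A(R_small) / B_A(R_large) from the R^{3(1-A)} scaling."""
    return (R_small / R_large) ** (3 * (1 - A))

# Rounded radii quoted in the text: R ~ 1 fm (p-p) vs R ~ 4 fm (Pb-Pb)
ratio_A3 = ba_ratio(3, 1.0, 4.0)
print(ratio_A3)  # 4**6 = 4096, i.e. the ~4e3 enhancement quoted above
```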
To substantiate this question, note that Ref.~\cite{Scheibl:1998tk} derived Eq.~(\ref{eq:B2ScheiblHeinz}) using a number of assumptions and approximations. A simple source model was used to describe the emission of particles produced in hadronic collisions. This model implemented collective flow with a specific velocity profile and a Gaussian density profile, limited to radial symmetry in the transverse direction. Using a saddle point approximation to evaluate Cooper-Frye integrals~\cite{Cooper:1974mv}, Ref.~\cite{Scheibl:1998tk} compared their analytic results to a parallel analysis that used the same assumptions to calculate HBT parameters~\cite{Chapman:1995nz}, and found Eq.~(\ref{eq:B2ScheiblHeinz}).
Given this procedure, it is natural to question the theoretical basis for Eq.~(\ref{eq:B2ScheiblHeinz}). For example, as noted in~\cite{Scheibl:1998tk}, it is unlikely that the source model adopted there can actually describe systems ranging from Pb-Pb to p-p in detail. Why then does Eq.~(\ref{eq:B2ScheiblHeinz}) work? Can we expect it to remain valid at $p_t>0$, at intermediate centrality, and so on?
The outline of our analysis and main results is as follows.
In Sec.~\ref{s:QM} we focus on D formation (Sec.~\ref{ss:D}) and two-proton correlations (Sec.~\ref{ss:pair}). Using non-relativistic quantum mechanics (QM) considerations, in idealised settings that ignore final-state interactions and other real-life complications, we derive a relation between D formation and two-particle spectra. In Sec.~\ref{ss:BfromC} we extend our results to a relativistic formulation. Our main result is Eq.~(\ref{eq:B2C2}), giving $\mathcal{B}_2$ as an integral of the two-particle correlation function weighted by the D probability density. The derivation does not require a detailed model of the particle emission source. In particular, we need not invoke the assumptions and approximations of~\cite{Chapman:1995nz,Scheibl:1998tk}. Another derivation is shown in App.~\ref{app:kinetic}.
In Sec.~\ref{s:comp} we show that adopting the same assumptions our formalism reproduces Eq.~(\ref{eq:B2ScheiblHeinz}) as found in~\cite{Scheibl:1998tk}\footnote{\label{fn:1}Apart from the fact that the natural definition we find for $R$ is in the so-called pair rest frame, compared to the longitudinal frame adopted in~\cite{Scheibl:1998tk}, and apart from the replacement $m\to m_t$. Please see Sec.~\ref{s:comp} for details.
}.
The upshot is that our work makes Eq.~(\ref{eq:B2ScheiblHeinz}) a generic prediction. If, above, we argued that the model dependence in~\cite{Scheibl:1998tk} makes it a surprise that Eq.~(\ref{eq:B2ScheiblHeinz}) successfully describes systems from Pb-Pb to p-p, then in light of the discussion in Sec.~\ref{s:QM} it becomes nontrivial to imagine a system for which Eq.~(\ref{eq:B2ScheiblHeinz}) would fail.
The downside is that Eq.~(\ref{eq:B2ScheiblHeinz}) is essentially a kinematical relation, and can teach us relatively little about the dynamics of the state produced in heavy-ion collisions.
Our analysis bears a connection to (being a less sophisticated version of) Ref.~\cite{Lednicky:2005tb}, which showed that the number of pion pairs produced in Coulomb bound states is related to the number of free pion pairs at small relative momentum. Our work is also close in spirit to work by Mrowczynski~\cite{Mrowczynski:1987oid,Mrowczynski:1989jd,Mrowczynski:1992gc,Mrowczynski:1993cx,Mrowczynski:1994rn,Maj:2004tb}.
In Sec.~\ref{s:data} we consider complications including final-state interactions and source chaoticity (Sec.~\ref{ss:lam}). We do not address these complications in detail, but show how experimental analyses that take these issues into account can be used to test the coalescence-correlation relation at the cost of some model-dependence. In Sec.~\ref{s:highA} we generalise our results to $A\geq2$, postponing some details to App.~\ref{app:wave}.
In Sec.~\ref{ss:data} we compare our theoretical results to data. In Sec.~\ref{ss:BAvsR} we recap the results of Ref.~\cite{Blum:2017qnn}, comparing the coalescence-correlation relation with data across systems.
While our results are consistent with available measurements, the uncertainties are large. Existing experimental analyses were not geared for a direct comparison of femtoscopy and cluster yields.
This lack motivates dedicated experimental work.
We conclude in Sec.~\ref{s:sum}.
\section{QM considerations}\label{s:QM}
Hadronic collisions produce a high-excitation state (HXS), characterised by a density matrix $\hat\rho_{\rm HX}$. QM allows one to calculate the probability density to find a certain non-relativistic state in the HXS by projecting that state onto $\hat\rho_{\rm HX}$. In this section we use the QM formalism to derive a relation between D and two-particle spectra. We then convert to Lorentz-invariant quantities.
We emphasise that the QM formulation we use is far from new. It has been utilised in different guises in many early studies, including (as a partial list) Refs.~\cite{Sato:1981ez,Llope:1995zz,Koonin:1977fh,Mrowczynski:1987oid,Pratt:1990zq,Mrowczynski:1994rn,Maj:2004tb,Chapman:1995nz,Scheibl:1998tk,Mrowczynski:2016xqm}. Our discussion in Secs.~\ref{ss:D} and~\ref{ss:pair} is merely intended to review the derivation of D and particle-pair formation, respectively, in the HXS, recalling that the two phenomena stem from building blocks that are closely related on general grounds. Our next step, in Sec.~\ref{ss:BfromC}, is to explicitly combine the expressions into a direct relation between coalescence and pair spectra, summarised in Eq.~(\ref{eq:B2C2}). This result, as far as we know, is new to the current work.
\subsection{Deuteron formation}\label{ss:D}
A D at lab-frame momentum $P_d$ is a two-particle (neutron-proton) bound state $|\psi_{P_d}\rangle$ with wave function
\begin{eqnarray}
\psi_{P_d}(x_1,x_2)&=&e^{i\vec P_d\vec X}\phi_d(\vec r),\end{eqnarray}
where
\begin{eqnarray}
&\vec X=(\vec x_1+\vec x_2)/2,&\vec r=\vec x_1-\vec x_2\label{eq:Xr}
\end{eqnarray}
and $\int d^3r|\phi_d(\vec r)|^2=1$.
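As a quick numerical illustration, the normalisation condition $\int d^3r\,|\phi_d(\vec r)|^2=1$ can be checked for the Gaussian wave function adopted later in Sec.~\ref{s:comp} (a minimal sketch; the Gaussian form and $d=3.2$~fm are taken from that section):

```python
import math

d = 3.2  # fm; Gaussian size parameter adopted later in the text

def phi_sq(r):
    """|phi_d(r)|^2 for phi_d(r) = exp(-r^2/(2 d^2)) / (pi d^2)^(3/4)."""
    return math.exp(-r * r / (d * d)) / (math.pi * d * d) ** 1.5

# int d^3r |phi_d|^2 = 4 pi int_0^inf r^2 |phi_d(r)|^2 dr, by the trapezoid rule
n, rmax = 20000, 40.0
h = rmax / n
norm = sum(4 * math.pi * (i * h) ** 2 * phi_sq(i * h) for i in range(1, n)) * h
print(norm)  # ~1.0
```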
The probability density of D in the HXS is~\cite{Sato:1981ez}
\begin{eqnarray}\label{eq:d1} \frac{dN_d}{d^3P_d}&=&(2\pi)^{-3}\langle\psi_{P_d}|\hat\rho_{\rm HX}|\psi_{P_d}\rangle\\
&=&\frac{G_d}{(2\pi)^3}\int d^3x_1\int d^3x_2\int d^3x'_1\int d^3x'_2\nonumber\\
&&\psi^*_{P_d}(x'_1,x'_2)\,\psi_{P_d}(x_1,x_2)\,\rho_{\rm 2}\left(x'_1,x'_2;x_1,x_2;t_f\right),\nonumber\end{eqnarray}
where $\rho_{\rm 2}\left(x'_1,x'_2;x_1,x_2;t_f\right)$ is the two-particle reduced HXS density matrix. $G_d$ is a dimensionless normalisation factor. In this section, for simplicity, we assume the existence of a well-defined freeze-out time $t_f$ and consider the HXS density matrix as being specified at the moment $t_f$. We emphasise that this simplification is not essential for the derivation, and our main result [Eq.~(\ref{eq:B2C2}) below] holds also if we allow a finite-duration freeze-out window. An alternative derivation that makes this point manifest is given in App.~\ref{app:kinetic}.
It is commonly assumed that the HXS density matrix can be factorised into 1-particle density matrices,
\begin{eqnarray}\label{eq:rhofct}\rho_{\rm 2}\left(x'_1,x'_2;x_1,x_2;t\right)&=&\rho_{\rm 1}\left(x'_1,x_1;t\right)\rho_{\rm 1}\left(x'_2,x_2;t\right),
\end{eqnarray}
that can in turn be described in terms of Wigner densities $f_1^W$,
\begin{eqnarray}\label{eq:fW}
\rho_1(x,x';t)&=&\int\frac{d^3k}{(2\pi)^3}e^{i\vec k\left(\vec x'-\vec x\right)}f_1^W\left(\vec k,\frac{\vec x+\vec x'}{2};t\right).
\end{eqnarray}
Inserting Eqs.~(\ref{eq:rhofct}) and~(\ref{eq:fW}) into Eq.~(\ref{eq:d1})
we obtain
\begin{eqnarray}\label{eq:d20} \frac{dN_d}{d^3P_d}&=&\frac{G_d}{(2\pi)^3}\int d^3R \int\frac{d^3q}{(2\pi)^3}\int d^3r\,\mathcal{D}_d\left(\vec q,\vec r\right)\,\times\\
&&f_1^W\left(\frac{\vec P_d}{2}+\vec q,\vec R +\frac{\vec r}{2};t_f\right)f_1^W\left(\frac{\vec P_d}{2}-\vec q,\vec R -\frac{\vec r}{2};t_f\right),\nonumber\end{eqnarray}
where $\mathcal{D}_d$ is the Wigner density of the D,
\begin{eqnarray}\mathcal{D}_d\left(\vec q,\vec r\right)&=&\int d^3\zeta\,e^{-i\vec q\vec\zeta}\,\phi_d\left(\vec r+\frac{\vec\zeta}{2}\right)\,\phi_d^*\left(\vec r-\frac{\vec\zeta}{2}\right).\end{eqnarray}
In terms of the original variables of Eq.~(\ref{eq:d1}), $\vec R =(\vec x_1+\vec x_1'+\vec x_2+\vec x_2')/4$ is the classical centre of mass coordinate of the two-nucleon system and $\vec r=(\vec x_1+\vec x_1')/2-(\vec x_2+\vec x_2')/2$ is the classical relative coordinate between the nucleons.
It can be shown that neglecting $\pm\vec q$ inside of the $f_1^W$ functions in Eq.~(\ref{eq:d20}) is a reasonable approximation, valid to $\sim10\%$ accuracy for Pb-Pb collisions~\cite{Scheibl:1998tk}.
With this approximation we can perform the $q$ integration which gives $\int d^3q\mathcal{D}_d\left(\vec q,\vec r\right)=(2\pi)^3|\phi_d(\vec r)|^2$.
Defining
\begin{eqnarray}\label{eq:D(k)}\left|\phi_d\left(\vec r\right)\right|^2&=&\int \frac{d^3k}{(2\pi)^3}\,e^{i\vec k\vec r}\,\mathcal{D}\left(\vec k\right)\end{eqnarray}
we obtain
\begin{eqnarray}\label{eq:d2}\frac{dN_d}{d^3P_d}&\approx&\frac{G_d}{(2\pi)^6}\int d^3q\,\mathcal{D}\left(\vec q\right)\int d^3R \int d^3r\,e^{i\vec q\vec r}\,\times\\
&&f_1^W\left(\frac{\vec P_d}{2},\vec R +\frac{\vec r}{2};t_f\right)\,f_1^W\left(\frac{\vec P_d}{2},\vec R -\frac{\vec r}{2};t_f\right).\nonumber\end{eqnarray}
Eq.~(\ref{eq:d2}) expresses a non-relativistic QM calculation of the Lorentz non-invariant quantity $dN/d^3P_d$. In Sec.~\ref{ss:BfromC} we return to the problem of connecting this result to the total Lorentz-invariant D yield obtained by integrating over different emission regions in an expanding HXS ``fireball".
\subsection{Nucleon pair emission}\label{ss:pair}
Consider a state $|\psi^s_{p_1,p_2}\rangle$ describing two free propagating protons in a spin-symmetric configuration. Ignoring final-state interactions (FSI), the position space representation of $|\psi^s_{p_1,p_2}\rangle$ is an antisymmetric function of the particle coordinates,
\begin{eqnarray}
\psi^s_{p_1,p_2}(x_1,x_2)&=&\frac{1}{\sqrt{2}}e^{2i\vec P\vec X}\left(e^{i\vec q\vec r/2}-e^{-i\vec q\vec r/2}\right),\nonumber\\&&
\end{eqnarray}
where the average pair momentum and the momentum difference are defined as
\begin{eqnarray}
&\vec P=\left(\vec p_1+\vec p_2\right)/2,\;\;&\vec q=\vec p_1-\vec p_2.
\end{eqnarray}
The probability density associated with $|\psi^s_{p_1,p_2}\rangle$ can be calculated as~\cite{Koonin:1977fh,Pratt:1990zq}
\begin{eqnarray}\!\!\!\!\!\!\!\label{eq:2pt1}\frac{dN^s}{d^3p_1d^3p_2}&=&(2\pi)^{-6}\langle\psi^s_{p_1,p_2}|\hat\rho_{\rm HX}|\psi^s_{p_1,p_2}\rangle\\
&=&\frac{G^s_2}{(2\pi)^6}\int d^3x_1\int d^3x_2\int d^3x'_1\int d^3x'_2\nonumber\\
&&\psi^{s*}_{p_1,p_2}(x'_1,x'_2)\,\psi^s_{p_1,p_2}(x_1,x_2)\,\rho_{\rm 2}\left(x'_1,x'_2;x_1,x_2;t_f\right).\nonumber\end{eqnarray}
Assuming unpolarised isospin-invariant HXS, we use the same $\rho_{\rm 2}\left(x'_1,x'_2;x_1,x_2;t_f\right)$ for the proton-proton and proton-neutron reduced density matrix, appearing in Eqs.~(\ref{eq:2pt1}) and~(\ref{eq:d1}). $G^s_2$ is a normalisation constant.
Inserting Eqs.~(\ref{eq:rhofct}) and~(\ref{eq:fW}) into Eq.~(\ref{eq:2pt1})
we obtain
\begin{eqnarray}\label{eq:Unit}\frac{dN^s}{d^3p_1d^3p_2}&=&G_2^s\left(\mathcal{A}_2\left(p_1,p_2\right)-\mathcal{F}_2\left(P,q\right)\right),\\
\label{eq:F2}\mathcal{F}_2\left(P,q\right)&=&\frac{1}{(2\pi)^6}\int d^3R\int d^3r\,e^{i\vec q\vec r}\,\times\nonumber\\
&&f_1^W\left(\vec P,\vec R+\frac{\vec r}{2};t_f\right)\,f_1^W\left(\vec P,\vec R-\frac{\vec r}{2};t_f\right),\nonumber\\
\mathcal{A}_2\left(p_1,p_2\right)&=&\frac{1}{(2\pi)^6}
\int d^3xf_1^W\left(\vec p_1,\vec x;t_f\right)\,\int d^3xf_1^W\left(\vec p_2,\vec x;t_f\right).\nonumber\end{eqnarray}
We could express $\mathcal{A}_2$ in Eq.~(\ref{eq:Unit}) in terms of $P,q$, but we keep $p_1,p_2$ for clarity. The $P,q$ notation is useful for the $\mathcal{F}_2$ term, which expresses the QM correlation.
We can repeat the same steps above for the spin anti-symmetric state $|\psi^a_{p_1,p_2}\rangle$, for which the wave function is a symmetric function of the particle coordinates. We find
\begin{eqnarray}\label{eq:Unita}\frac{dN^a}{d^3p_1d^3p_2}&=&G_2^a\left(\mathcal{A}_2\left(p_1,p_2\right)+\mathcal{F}_2\left(P,q\right)\right),\end{eqnarray}
with $G_2^a=G_2^s/3$.
\subsection{Coalescence from two-particle correlations}\label{ss:BfromC}
Eqs.~(\ref{eq:d2}) and~(\ref{eq:Unit}-\ref{eq:Unita}) give the number of D's and proton pairs, respectively, per differential momentum element when all momenta involved are small. The Lorentz-invariant versions of the quantities on the LHS of these equations are $\gamma_d\,dN_d/d^3P_d$ and $\gamma_1\gamma_2\,dN^{s,a}/d^3p_1d^3p_2$. Subtleties arise in the computation of the RHS because, for a relativistically expanding HXS, different parts of the particle emission region move relativistically w.r.t. other parts. This makes the spatial integrations
nontrivial~\cite{Cooper:1974mv}. In addition, instead of a homogeneous freeze-out time $t_f$ we expect a freeze-out surface $t_f=t_f(\vec R)$. We now consider these issues.
Inspecting Eqs.~(\ref{eq:d2}) and~(\ref{eq:Unit}), we can write a differential coalescence-correlation relation
\begin{eqnarray}\label{eq:d3qm}
\frac{d}{d^3R}\left(\frac{dN_d}{d^3P_d}\right)
&\approx&G_d\frac{d}{d^3R}\int d^3q\,\mathcal{D}(\vec q)\,
\mathcal{F}_2\left(\frac{\vec P_d}{2},\vec q\right).\nonumber\\&
\end{eqnarray}
The differential presentation makes manifest the model independence w.r.t. the details of freeze-out. By either plugging in Eq.~(\ref{eq:D(k)}), or proceeding directly from Eq.~(\ref{eq:d2}), we have
\begin{eqnarray}\label{eq:d20L1} \frac{d}{d^3R}\left(\frac{dN_d}{d^3P_d}\right)&=&\frac{G_d}{(2\pi)^3}\,f_1^W\left(\frac{\vec P_d}{2},\vec R;t_f\right)\,\times\nonumber\\
&&\int d^3r\left|\phi_d(\vec r)\right|^2f_1^W\left(\frac{\vec P_d}{2},\vec R-\vec r;t_f\right).\nonumber\\&&\end{eqnarray}
It is natural to regard the RHS of Eq.~(\ref{eq:d20L1}) as a Lorentz-invariant distribution function $f_d$.
This was done in Ref.~\cite{Scheibl:1998tk}, which used the Cooper-Frye prescription~\cite{Cooper:1974mv} to make the replacement $\gamma_d\int d^3R\,f_d\to(1/2m)\int \left[d^3\sigma_\mu P_d^\mu\right]f_d$, where $d^3\sigma^\mu$ is the volume element perpendicular to the HXS relativistic freeze-out surface.
While Ref.~\cite{Scheibl:1998tk} (which focused on D formation) arrived at this procedure directly from Eq.~(\ref{eq:d2}), the same implementation of freeze-out w.r.t. the integration over centre of mass coordinate $\vec R$ can be used in integrating the coalescence-correlation relation expressed by Eq.~(\ref{eq:d3qm}).
There is no need to specify the details of the freeze-out surface $t_f(\vec R)$
because Eq.~(\ref{eq:d3qm}) relates the pair emissivity and the D emissivity per differential volume element $d^3R$ in the HXS. Having noted this point, we can drop the differential $d^3R$ in Eq.~(\ref{eq:d3qm}) and consider it as a relation between total D and pair yields.
Let us now make contact with measurements. Experimental collaborations report the (Lorentz-invariant) coalescence factor
\begin{eqnarray}\label{eq:B2}\mathcal{B}_2(p)&=&\frac{P_d^0\,\frac{dN_d}{d^3P_d}}{\left(p^0\frac{dN}{d^3p}\right)^2},\end{eqnarray}
with $p=P_d/2$ and where $p^0\frac{dN}{d^3p}$ is the unpolarised proton yield. The two-particle correlation function is constructed as
\begin{eqnarray} \label{eq:C2base}C_2(P,q)&=&\frac{p_1^0\,p_2^0\frac{dN}{d^3p_1d^3p_2}}{\left(p_1^0\frac{dN}{d^3p_1}\right)\left(p_2^0\frac{dN}{d^3p_2}\right)}.
\end{eqnarray}
The numerator on the RHS of Eq.~(\ref{eq:C2base}) sums together the different spin states of the proton pair.
In the denominator, the unpolarised differential yields at $p_1$ and $p_2$ are obtained by event mixing, i.e. by pairing protons from different events.
Still provisionally neglecting FSI and other complications (which will be discussed later), Ref.~\cite{Adam:2015vja} parametrised two-proton correlation measurements in a form that can be written as
\begin{eqnarray}\label{eq:C2}C_2(P,q)&=&1-\frac{G_2^s-G_2^a}{G_2^s+G_2^a}\,\mathcal{C}_2(P,q).\end{eqnarray} %
By examining the $q$ dependence we see that the $\mathcal{C}_2$ term in Eq.~(\ref{eq:C2}) comes from the $\mathcal{F}_2$ term in Eq.~(\ref{eq:Unit}), while the 1 comes from the $\mathcal{A}_2$ term there. More precisely, in the non-relativistic limit we have
\begin{eqnarray} \mathcal{C}^{\rm PRF}_2\left(|\vec q|\ll m\right)&=&\frac{\mathcal{F}_2}{\mathcal{A}_2},\end{eqnarray}
where the superscript PRF instructs us that $q$ in $\mathcal{C}_2^{\rm PRF}$ is defined in the pair centre of mass frame.
In the same limit, Eqs.~(\ref{eq:d3qm}) and~(\ref{eq:B2}) show that
\begin{eqnarray}\mathcal{B}_2(p)&=&\frac{G_d}{G_2^s+G_2^a}\frac{2\,m}{m^2\mathcal{A}_2}\int d^3q\,\mathcal{D}(\vec q)\,\mathcal{F}_2(\vec p,\vec q).\end{eqnarray}
Assuming unpolarised isospin-symmetric HXS~\cite{Mattiello:1996gq}
we have
\begin{eqnarray}\frac{G_d}{G^s_2+G_2^a}&=&\frac{3}{3+1}.
\end{eqnarray}
Using these conventions and noting that $\gamma_1\approx\gamma_2\approx\gamma_d$ for $|\vec q|\ll m$, we are finally led to the result:
\begin{eqnarray}\label{eq:B2C2}\boxed{\;\mathcal{B}_2(p)\;\approx\;\frac{3}{2\,m}\int d^3q\,\mathcal{D}(\vec q)\,\mathcal{C}^{\rm PRF}_2\left(\vec p,\vec q\right).\;}\end{eqnarray}
Following the discussion around Eq.~(\ref{eq:d3qm}), this result is not limited to non-relativistic $p$. It is limited to non-relativistic $|\vec q|^2\ll m^2$, but that is not a real concern because both $\mathcal{C}_2$ and $\mathcal{D}$ cut off at $|\vec q|\sim0.1\,m$.
We comment that the coalescence factor $\mathcal{B}_2(p)$ is defined for on-shell D with $P_d^2=4p^2\approx(2m)^2$. Thus, there will actually be no on-shell proton pairs that satisfy $p_1^2=p_2^2=m^2$ along with $(p_1+p_2)/2=p$ at $q\neq0$. This problem comes from neglecting corrections of order $\vec q^2/m^2$ in the derivation of Eq.~(\ref{eq:B2C2}). We can find on-shell proton pairs to construct $\mathcal{C}_2^{\rm PRF}$ by allowing the energy component $P^0$ of the $P$ 4-vector in Eq.~(\ref{eq:C2base}) to deviate from $p^0$ of Eq.~(\ref{eq:B2}), while at the same time enforcing $\vec P=\vec P_d/2=\vec p$. In other words, we let $p$ on the LHS of Eq.~(\ref{eq:B2C2}) denote the 4-momentum per nucleon of the on-shell D, and we equate $\vec p$ between the LHS and the RHS, but we do not enforce $p^0$ on the RHS to match $p^0$ on the LHS. Corrections due to this approximation are of order $\vec q^2/m^2$.
\section{Comparison with previous work}\label{s:comp}
Scheibl \& Heinz~\cite{Scheibl:1998tk} used a Gaussian source model (GSM) of the HXS 1-particle Wigner densities to calculate coalescence and two-particle correlations (following~\cite{Chapman:1995nz} on the latter), and expressed the coalescence factor in terms of the HBT radius parameters computed in their model. To obtain analytic expressions, the D wave function was taken to be Gaussian,
\begin{eqnarray}\label{ref:dGauss}\phi_d(\vec r)&=&\frac{e^{-\frac{\vec r^2}{2d^2}}}{\left(\pi d^2\right)^{\frac{3}{4}}}\end{eqnarray}
with $d=3.2$~fm.
This leads to
\begin{eqnarray}\mathcal{D}(\vec k)&=&e^{-\frac{\vec k^2d^2}{4}}.\end{eqnarray}
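This transform follows from Eq.~(\ref{eq:D(k)}), i.e. $\mathcal{D}(\vec k)=\int d^3r\,e^{-i\vec k\vec r}\left|\phi_d(\vec r)\right|^2$; it can be verified numerically for the Gaussian profile above (a sketch, reducing the transform to a radial integral):

```python
import math

d = 3.2  # fm, Gaussian size parameter from the text

def phi_sq(r):
    """|phi_d(r)|^2 for the Gaussian wave function above."""
    return math.exp(-r * r / (d * d)) / (math.pi * d * d) ** 1.5

def D_numeric(k, n=20000, rmax=40.0):
    """D(k) = int d^3r exp(-i k.r) |phi_d(r)|^2, reduced to a radial integral
    via the angular average sin(kr)/(kr)."""
    h = rmax / n
    tot = 0.0
    for i in range(1, n):
        r = i * h
        tot += 4 * math.pi * r * r * phi_sq(r) * math.sin(k * r) / (k * r)
    return tot * h

k = 0.5  # fm^-1
print(D_numeric(k), math.exp(-k * k * d * d / 4))  # the two should agree
```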
For the HBT analysis, Ref.~\cite{Scheibl:1998tk} used the parameters
$R_{\perp}$ and $R_{||}$ in terms of which the correlation function in their model is given by
\begin{eqnarray}\label{eq:C2gauss}\mathcal{C}_2^{\rm PRF}&=&e^{-R_{\perp}^2\vec q^2_{\perp}-R^2_{||}\vec q^2_l},\;\;\;\;\;\;{\rm {\bf(GSM)}}\end{eqnarray}
where $\vec q_l$ is the component of $\vec q$ parallel to the beam axis and $\vec q_{\perp}$ spans the transverse direction. Plugging these expressions for $\mathcal{D}$ and $\mathcal{C}_2^{\rm PRF}$ into Eq.~(\ref{eq:B2C2}) we find\footnote{See also~\cite{Llope:1995zz,Murray:2000cw}.}
\begin{eqnarray}\label{eq:B2SP}\mathcal{B}_2&=&\frac{3\pi^{\frac{3}{2}}}{2\, m\left(R_{\perp}^2+\left(\frac{d}{2}\right)^2\right)\sqrt{R_{||}^2+\left(\frac{d}{2}\right)^2}},\;\;\;{\rm {\bf(GSM)}}.\nonumber\\&&\end{eqnarray}
This reproduces Eq.~(\ref{eq:B2ScheiblHeinz}) and the main result of~\cite{Scheibl:1998tk} (see Eqs.~(6.3) and~(4.12) there), up to the replacement $m\to m_t=\sqrt{m^2+\vec p_t^2}$.
Note that we have defined $R_\perp$ and $R_{||}$ in the PRF, while~\cite{Scheibl:1998tk} defined these parameters in the YKP frame~\cite{Chapman:1995nz,Yano:1978gk,Podgoretsky:1982xu,Wu:1996wk}, which is offset by a transverse boost compared to the PRF.
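The step from Eq.~(\ref{eq:B2C2}) to Eq.~(\ref{eq:B2SP}) is a product of Gaussian integrals; it can be verified by direct quadrature (a sketch with arbitrary illustrative parameter values, in units where $m=1$):

```python
import math

# Arbitrary illustrative parameters (units with m = 1; R's and d in 1/m)
m, Rp, Rl, d = 1.0, 4.0, 4.5, 3.2

# Closed form, Eq. (B2SP)
b2_closed = 3 * math.pi ** 1.5 / (
    2 * m * (Rp ** 2 + (d / 2) ** 2) * math.sqrt(Rl ** 2 + (d / 2) ** 2))

# Direct quadrature of Eq. (B2C2) with D(q) = exp(-q^2 d^2 / 4) and the
# Gaussian correlator C2 = exp(-Rp^2 q_perp^2 - Rl^2 q_l^2): the integrand
# factorises into a 2D transverse and a 1D longitudinal Gaussian.
a_perp = Rp ** 2 + d ** 2 / 4
a_long = Rl ** 2 + d ** 2 / 4
n, qmax = 40000, 5.0
h = qmax / n
I_perp = sum(2 * math.pi * (i * h) * math.exp(-a_perp * (i * h) ** 2)
             for i in range(1, n)) * h                       # int d^2 q_perp
I_long = h * (1.0 + 2 * sum(math.exp(-a_long * (i * h) ** 2)
                            for i in range(1, n)))           # int dq_l, real line
b2_quad = 3 / (2 * m) * I_perp * I_long
print(b2_quad, b2_closed)  # should agree to high accuracy
```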
Mrowczynski discussed the connection between coalescence and two-particle correlations in a series of papers~\cite{Mrowczynski:1987oid,Mrowczynski:1989jd,Mrowczynski:1992gc,Mrowczynski:1993cx,Mrowczynski:1994rn,Maj:2004tb}. This program resulted in a QM sum rule for the neutron-proton correlation function, that was proposed to give the D coalescence factor as a $q$-integral over the correlation function~\cite{Maj:2004tb}. The power of this idea was that there was no need to correct the measured correlation function for long- or short-range final-state interactions: the sum rule should apply directly to the observable correlation. In practice, this suggestion fails, apparently because the $q$-integral proposed in~\cite{Maj:2004tb} receives contributions from large-$q$ regions in the integration.
In comparison to the sum rule of~\cite{Mrowczynski:1994rn,Maj:2004tb}, Eq.~(\ref{eq:B2C2}) is less ambitious. The correlation function entering Eq.~(\ref{eq:B2C2}) does need to be corrected for final-state interactions, because it assumes a kinetic picture where an HXS density matrix can be defined and projected onto propagating particles. Eq.~(\ref{eq:B2C2}) also invokes assumptions such as isospin symmetry and smoothness of the HXS freeze-out surface. In return, however, the RHS of Eq.~(\ref{eq:B2C2}) receives no contributions from large-$q$ modes, because $\mathcal{D}(\vec q)$ in the integrand constrains the support to the small-$q$ region, $|\vec q|\lesssim0.1m$.
A QM derivation of the coalescence factor using a specific one-dimensional Gaussian source model was given in Ref.~\cite{Mrowczynski:2016xqm}. This derivation agrees with Eq.~(\ref{eq:B2SP}) up to the replacement $m\to p^0=m\gamma_d$.
\section{Real-life complications, $A>2$ clusters, and comparing to data}\label{s:data}
Eq.~(\ref{eq:B2C2}) is idealised. In practice we cannot simply take a directly measured correlation function $\mathcal{C}_2$, plug it into Eq.~(\ref{eq:B2C2}), and calculate $\mathcal{B}_2$. Two main complications prevent a direct implementation of Eq.~(\ref{eq:B2C2}): (i) Long-lived resonances, decaying outside of the freeze-out surface of the HXS, distort the correlations. (ii) Long-range Coulomb and short-range strong nuclear FSI cause the two-particle wave function to differ from the plane-wave form. For proton pairs, FSI actually dominate the correlation function, meaning that the QM statistics contribution must be extracted indirectly as a sub-leading contribution to the actual observable $\mathcal{C}_2$. To make things more difficult, different spin states exhibit different short-range FSI.
We will not address the complications above in detail in this paper, deferring such refinements to future work. Instead, we build on femtoscopy data analyses that explicitly treat items (i-ii). The price we pay is the introduction of model dependence, which enters via an assumed simple analytic form for the correlation function. Our procedure and results are explained in the next sections.
\subsection{The chaoticity parameter $\lambda$}\label{ss:lam}
The GSM assumed in~\cite{Chapman:1995nz,Scheibl:1998tk,Mrowczynski:2016xqm} predicts not only the shape, but also the normalisation of $\mathcal{C}_2$: it predicts $\mathcal{C}_2^{\rm PRF}(\vec q\to0)=1$. In reality, measurements show $\mathcal{C}_2^{\rm PRF}(\vec q\to0)\to\lambda<1$, where $\lambda$ is known as the chaoticity (or intercept) parameter~\cite{Wiedemann:1996ig,Akkelin:2001nd}. In HBT analyses of pions, $\lambda<1$ follows from the fact that a sizeable fraction of the pions come from the decay of long-lived resonances, leading to a non-Gaussian contribution to $\mathcal{C}_2$ that is concentrated at very small $|\vec q|$ and cannot be resolved experimentally~\cite{Wiedemann:1996ig}. In HBT analyses of proton pairs, hyperons are the resonant contamination~\cite{Wang:1999xq,Wang:1999bf,Szymanski:2016xia}.
Since strong FSI between $p\Lambda$ and $pp$ are crucial in shaping the $p\Lambda$ and $pp$ correlation functions, studies~\cite{Wang:1999xq,Wang:1999bf,Szymanski:2016xia,Adam:2015vja} separate the $p\Lambda\to pp$ and genuine $pp$ contributions entering the observed $pp$ correlation into different terms, that are fit in a combined analysis. In~\cite{Adam:2015vja,Szymanski:2016xia}, separate chaoticity parameters $\lambda_{pp},\,\lambda_{p\Lambda}$ were assigned to the genuine $pp$ pairs and the pairs coming from $p\Lambda\to pp$.
The value of $\lambda$ defined in this way could reflect intrinsic departures of the source functions from Gaussianity.
In Ref.~\cite{Chapman:1995nz} (and many other analyses in the literature), $\lambda$ was introduced as a free parameter. Thus, it did not enter into the coalescence-HBT correspondence of Ref.~\cite{Scheibl:1998tk}. However, Eq.~(\ref{eq:B2C2}) shows that $\mathcal{B}_2$ is directly proportional to a $q$-moment of $\mathcal{C}_2^{\rm PRF}$. If we adopt the Gaussian form together with the $\lambda$ modification as an empirical description of $\mathcal{C}_2$,
\begin{eqnarray}\label{eq:C2gaussl}\mathcal{C}_2^{\rm PRF}&=&\lambda\,e^{-R_{\perp}^2\vec q^2_{\perp}-R_{||}^2\vec q^2_l},\,{\rm {\bf(GSM,\,chaoticity\,\lambda)}}\nonumber\\&&\end{eqnarray}
then $\mathcal{B}_2$ should match Eq.~(\ref{eq:B2SP}) simply multiplied by the experimentally deduced value of $\lambda$:
\begin{eqnarray}\label{eq:B2SPlam}\mathcal{B}_2&=&\frac{3\pi^{\frac{3}{2}}\lambda}{2m\left(R_{\perp}^2+\left(\frac{d}{2}\right)^2\right)\sqrt{R_{||}^2+\left(\frac{d}{2}\right)^2}},\,{\rm {\bf(GSM,\,chaoticity\,\lambda)}}.\nonumber\\&&\end{eqnarray}
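For orientation, Eq.~(\ref{eq:B2SPlam}) can be evaluated with round-number inputs loosely inspired by central Pb-Pb parameters (illustrative values only, not the measured ones; $\hbar c\approx0.197$~GeV~fm restores GeV$^2$ units):

```python
import math

hbarc = 0.19733  # GeV fm, conversion constant
m = 0.938        # GeV, nucleon mass
d = 3.2          # fm, Gaussian deuteron size parameter from the text

def b2_gev2(R_perp, R_long, lam):
    """Eq. (B2SPlam), with R's in fm and the result converted to GeV^2."""
    vol = (R_perp ** 2 + (d / 2) ** 2) * math.sqrt(R_long ** 2 + (d / 2) ** 2)  # fm^3
    return 3 * math.pi ** 1.5 * lam * hbarc ** 3 / (2 * m * vol)

# Round-number inputs loosely inspired by central Pb-Pb (illustrative only)
print(b2_gev2(4.0, 4.0, 0.7))  # O(1e-4) GeV^2
```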
\subsection{$A\geq2$}\label{s:highA}
Eq.~(\ref{eq:B2C2}) can be generalised to clusters with $A\geq2$. Assuming an $(A-1)$-dimensional symmetric Gaussian form for the cluster's relative coordinate wave function, and assuming that the $A$-particle correlation function can be decomposed as a product of 2-particle Gaussian correlators described by the same HBT radii $R_{\perp}$ and $R_{||}$ and chaoticity $\lambda$, the analogue of Eq.~(\ref{eq:B2SPlam}) is:
\begin{eqnarray}\label{eq:BA}\frac{\mathcal{B}_A}{m^{2(A-1)}}&=&\lambda^{\frac{A}{2}}\frac{2J_A+1}{2^A\sqrt{A}}\,\times\nonumber\\
&&\left[\frac{(2\pi)^{\frac{3}{2}}}{m^3\left(R_{\perp}^2+\left(\frac{d_A}{2}\right)^2\right)\sqrt{R_{||}^2+\left(\frac{d_A}{2}\right)^2}}\right]^{A-1}.\nonumber\\&&\end{eqnarray}
The definition of the cluster wave function and its size parameter $d_A$, used in Eq.~(\ref{eq:BA}), are given in App.~\ref{app:wave}.
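As a consistency check, for $A=2$ and $J_2=1$ Eq.~(\ref{eq:BA}) must reduce to Eq.~(\ref{eq:B2SPlam}); a numerical sketch in units where $m=1$:

```python
import math

def bA_norm(A, J, R_perp, R_long, d, lam, m=1.0):
    """Eq. (BA): B_A / m^{2(A-1)}, in units where m = 1."""
    bracket = (2 * math.pi) ** 1.5 / (
        m ** 3 * (R_perp ** 2 + (d / 2) ** 2) * math.sqrt(R_long ** 2 + (d / 2) ** 2))
    return lam ** (A / 2) * (2 * J + 1) / (2 ** A * math.sqrt(A)) * bracket ** (A - 1)

def b2_spl(R_perp, R_long, d, lam, m=1.0):
    """Eq. (B2SPlam) in the same units."""
    return 3 * math.pi ** 1.5 * lam / (
        2 * m * (R_perp ** 2 + (d / 2) ** 2) * math.sqrt(R_long ** 2 + (d / 2) ** 2))

# For A=2, J=1 (deuteron), the general formula reduces to Eq. (B2SPlam)
args = dict(R_perp=4.0, R_long=4.5, d=3.2, lam=0.7)
print(bA_norm(2, 1, **args), b2_spl(**args))  # identical by construction
```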
\subsection{Comparing to data}\label{ss:data}
Experimental collaborations often report the results of HBT analyses in terms of empirical fit parameters $R$ and $\lambda$~\cite{Aamodt:2010jj,Abelev:2012sq,Abelev:2013pqa,Kisiel:2014upa,Adam:2015vja,Adam:2015pya,Szymanski:2016xia,Acharya:2018gyz}, assuming Eq.~(\ref{eq:C2gaussl}) and accounting explicitly for the spin symmetry of the pair wave function and for the distortion due to FSI~\cite{Koonin:1977fh,Lednicky:1981su,Lednicky:2005tb}.
To compare our theoretical results to data, we will therefore use Eqs.~(\ref{eq:B2SPlam}) and~(\ref{eq:BA}). We further adopt the simplification of a 1-dimensional HBT parametrisation with $R_{\perp}=R_{||}=R$.
Pion, kaon, and proton femtoscopy results in Pb-Pb collisions were reported in~\cite{Adam:2015vja,Szymanski:2016xia}. Results for proton and kaon femtoscopy in p-p collisions were given in~\cite{Acharya:2018gyz} and~\cite{Abelev:2012sq}, respectively. The kaon results are of potential use because Ref.~\cite{Adam:2015vja} showed compatible results for the parameters $R$ and $\lambda$ obtained in proton and kaon correlations at the same $m_t$.
Using these HBT analyses we can calculate the RHS of Eqs.~(\ref{eq:B2SPlam}) and~(\ref{eq:BA}), and compare to experimental data on the production of light nuclei~\cite{Adam:2015vda,Acharya:2017fvb}.
\subsubsection{Pb-Pb collisions}
The {\bf top two panels} in Fig.~\ref{fig:RlamPbPb} summarise experimental results for $R$ and $\lambda$ in central (0-10\%) Pb-Pb collisions at $\sqrt{s}=2.76$~TeV~\cite{Adam:2015vja}\footnote{Useful details can be found in Tables 7.4-7.9 in~\cite{Szymanski:2016xia}.}. The {\bf bottom two panels} show $R$ and $\lambda$ obtained in intermediate centrality (30-50\%) data. For $R$, we show the average values found for $pp$ and $\bar p\bar p$ pairs. The uncertainties are mostly systematic, and the width of the band neglects the statistical uncertainty. For $\lambda$, we show the sum $\lambda_{pp}+\lambda_{p\Lambda}$, take the average of the systematic uncertainty, and average the result between particles and anti-particles\footnote{The reason to use the sum $\lambda_{pp}+\lambda_{p\Lambda}$, and not just $\lambda_{pp}$, is that we wish to use Eq.~(\ref{eq:B2SPlam}), which assumes the same single-particle spectrum normalisation in the definition of $\mathcal{C}_2$ and $\mathcal{B}_2$. However, the single-particle spectrum entering the denominator of $\mathcal{B}_2$ in the experimental analysis includes only the prompt contribution, while the denominator in the $\mathcal{C}_2$ experimental analysis, with $pp$ and $p\Lambda$ terms explicitly separated, includes both prompt and secondary protons.}.
Plugging these values of $R,\,\lambda$ into the RHS of Eq.~(\ref{eq:B2SPlam}), we obtain a prediction for $\mathcal{B}_2$. The result for (0-10\%) centrality events is shown by the blue shaded band in the {\bf topmost panel} of Fig.~\ref{fig:B2PbPb}. The uncertainty of the theory prediction was obtained by using the lower value for $\lambda$ and the upper value for $R$ to calculate the lower value of the predicted $\mathcal{B}_2$, and vice versa.
An experimental measurement of $\mathcal{B}_2$~\cite{Adam:2015vda} is shown in the same plot as a grey band. We can also compare the data with the theoretical prediction of Ref.~\cite{Scheibl:1998tk}; this is done in the {\bf second from top panel} of Fig.~\ref{fig:B2PbPb}.
In the {\bf bottom two panels} of Fig.~\ref{fig:B2PbPb} we repeat the analysis using the intermediate centrality (30-50\%) HBT parameters, compared to $\mathcal{B}_2$ data from the (20-40\%) and (40-60\%) centrality classes\footnote{Note that the analysis of Ref.~\cite{Scheibl:1998tk} was restricted to radially symmetric HXS in the plane transverse to the beam axis. It should not, in principle, be valid for intermediate centrality.}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.495\textwidth]{ALICERPbPb}
\includegraphics[width=0.495\textwidth]{ALICElambdaPbPb}
\includegraphics[width=0.495\textwidth]{ALICERPbPb3050}
\includegraphics[width=0.495\textwidth]{ALICElambdaPbPb3050}
\end{center}
\caption{Experimental fit results for the 1-dimensional HBT radius $R$ and $\lambda$ parameters, extracted from correlations of $pp$, $p\Lambda$, and their anti-particles in central (0-10\%; {\bf panels (a) and (b)}) and intermediate centrality (30-50\%; {\bf panels (c) and (d)}) Pb-Pb collisions at $\sqrt{s}=2.76$~TeV~\cite{Adam:2015vja,Szymanski:2016xia}.}
\label{fig:RlamPbPb}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.495\textwidth]{B2vsdataPbPb}
\includegraphics[width=0.495\textwidth]{B2HeinzvsdataPbPb}
\includegraphics[width=0.495\textwidth]{B2vsdataPbPb2060}
\includegraphics[width=0.495\textwidth]{B2HeinzvsdataPbPb2060}
\end{center}
\caption{{\bf Panels (a) and (b):} experimental results for $\mathcal{B}_2$ from central (0-10\%) PbPb collisions at $\sqrt{s}=2.76$~TeV~\cite{Adam:2015vda}, shown by grey band, compared to Eq.~(\ref{eq:B2SPlam}) derived here (blue band) and to the prediction of Ref.~\cite{Scheibl:1998tk} (orange band). The coalescence calculation uses the experimentally extracted HBT $R$ and $\lambda$ parameters shown in Fig.~\ref{fig:RlamPbPb}. {\bf Panels (c) and (d):} experimental values of $\mathcal{B}_2$ from two intermediate centrality classes, (20-40\%) and (40-60\%), and the theoretical prediction calculated using HBT data from events at (30-50\%).}
\label{fig:B2PbPb}
\end{figure}
In Fig.~\ref{fig:B3PbPb} we consider experimental results for $\mathcal{B}_3$~\cite{Adam:2015vda} from centrality classes (0-20\%) and (20-80\%), shown in the {\bf top two} and {\bf bottom two panels}, respectively.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.495\textwidth]{B3vsdataPbPb}
\includegraphics[width=0.495\textwidth]{B3HeinzvsdataPbPb}
\includegraphics[width=0.495\textwidth]{B3vsdataPbPb2080}
\includegraphics[width=0.495\textwidth]{B3HeinzvsdataPbPb2080}
\end{center}
\caption{{\bf Panels (a) and (b):} Experimental results for $\mathcal{B}_3$ from central (0-20\%) PbPb collisions at $\sqrt{s}=2.76$~TeV~\cite{Adam:2015vda}, shown by grey band, compared to Eq.~(\ref{eq:BA}) derived here (blue band) and to the prediction of Ref.~\cite{Scheibl:1998tk} (orange band). {\bf Panels (c) and (d):} experimental values of $\mathcal{B}_3$ from the centrality class (20-80\%), and the theoretical prediction calculated using HBT data from events at (30-50\%).}
\label{fig:B3PbPb}
\end{figure}
\subsubsection{p-p collisions}
Ref.~\cite{Acharya:2018gyz} reported $R\approx1.14^{+0.07}_{-0.02}$~fm (comparable statistical and systematic uncertainties were added in quadrature) in a combined analysis of $pp$, $p\Lambda$, and other hyperon correlation data from $\sqrt{s}=7$~TeV p-p collisions at pair average momentum corresponding to $m_t=(1.2-1.6)$~GeV. That analysis effectively assumed $\lambda=1$. However, in an analysis that allowed $\lambda$ to vary as a free parameter, kaon correlations were found to give $\lambda\sim0.5$ at $m_t=1.4$~GeV, along with $R\approx0.8\pm0.3$~fm~\cite{Abelev:2012sq}. This is of potential interest because Ref.~\cite{Adam:2015vja} found that the HBT parameters agree, within measurement uncertainties, for kaon and proton final states at the same $m_t$.
Using $R\approx1.14^{+0.07}_{-0.02}$~fm as found in the $pp$ analysis~\cite{Acharya:2018gyz}, Eq.~(\ref{eq:B2SPlam}) predicts $\mathcal{B}_2=10^{-2}\times\left(0.8-0.9\right)\times\lambda~$GeV$^2$.
Using, instead, $R\approx0.8\pm0.3$~fm as found from kaon correlations~\cite{Abelev:2012sq}, Eq.~(\ref{eq:B2SPlam}) predicts $\mathcal{B}_2=10^{-2}\times\left(0.9-1.4\right)\times\lambda~$GeV$^2$.
These predictions can be compared to light cluster data from Ref.~\cite{Acharya:2017fvb}, which found the experimental result $\mathcal{B}_2^{\rm exp}\approx10^{-2}\times(1.6-2.2)$~GeV$^2$ at $m_t\approx1.4$~GeV.
Using $R\approx1.14^{+0.07}_{-0.02}$~fm~\cite{Acharya:2018gyz}, Eq.~(\ref{eq:BA}) predicts $\mathcal{B}_3=10^{-4}\times\left(2.1-2.8\right)\times\lambda^{\frac{3}{2}}~$GeV$^4$. For $R\approx0.8\pm0.3$~fm~\cite{Abelev:2012sq}, Eq.~(\ref{eq:BA}) predicts $\mathcal{B}_3=10^{-4}\times\left(3.1-23\right)\times\lambda^{\frac{3}{2}}~$GeV$^4$. The experimental result~\cite{Acharya:2017fvb} is $\mathcal{B}_3^{\rm exp}\approx10^{-4}\times(1-3)$~GeV$^4$ at $m_t\approx(1.1-1.4)$~GeV.
\subsection{Discussion: $\mathcal{B}_A$ vs. $R$, coalescence across systems}\label{ss:BAvsR}
Measurement uncertainties on the HBT $R$ and $\lambda$ parameters lead to large uncertainties on our theoretical prediction of $\mathcal{B}_2$ and $\mathcal{B}_3$, derived from Eqs.~(\ref{eq:B2SPlam}-\ref{eq:BA}). Part of this uncertainty is due to our crude treatment of the data. For example, our uncertainty estimate on $\mathcal{B}_{2}$ and $\mathcal{B}_{3}$ in the left panels of Figs.~\ref{fig:B2PbPb}-\ref{fig:B3PbPb} added together the effects of the systematic measurement uncertainties on $R$ and $\lambda$. As a result, while Eqs.~(\ref{eq:B2SPlam}-\ref{eq:BA}) are consistent with the data, there is much room to improve the analysis.
The coalescence-correlation correspondence motivates an experimental re-assessment of the data presented in Refs.~\cite{Abelev:2012sq,Adam:2015vja,Acharya:2018gyz} and~\cite{Adam:2015vda,Acharya:2017fvb}, aiming at a joint analysis of HBT and cluster yields in events sharing the same $p_t$ and centrality classes.
Before we conclude, in Fig.~\ref{fig:BR} we take a broader look at the data-theory comparison by considering the $\mathcal{B}_A-R$ (anti-)correlation across different systems~\cite{Blum:2017qnn}. In Fig.~\ref{fig:BR}, the grey shaded band shows the theoretical prediction for $\mathcal{B}_2$ ({\bf top}) and $\mathcal{B}_3$ ({\bf bottom}), calculated as a function of $R$ using Eqs.~(\ref{eq:B2SPlam}-\ref{eq:BA}). The calculation uses an estimate of the experimentally measured value of $\lambda$. To define the upper edge of the bands, we interpolate between $\lambda=\{1,0.7,0.7\}$ defined at $R=\{0.85,2.5,5\}$. To define the lower edge we interpolate between $\lambda=\{0.5,0.3,0.3\}$ defined at $R=\{0.85,2.5,5\}$. This range of $\lambda$ is roughly consistent with the experimental results found in Refs.~\cite{Adam:2015vja,Acharya:2018gyz,Abelev:2012sq}.
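The piecewise-linear interpolation that defines the band edges can be reproduced directly; a minimal sketch follows (the evaluation point $R=2$~fm is arbitrary):

```python
import numpy as np

# Band edges for lambda(R), anchored at R = {0.85, 2.5, 5} fm as quoted above.
R_nodes = [0.85, 2.5, 5.0]
lam_upper = [1.0, 0.7, 0.7]   # upper edge of the lambda band
lam_lower = [0.5, 0.3, 0.3]   # lower edge of the lambda band

R_eval = 2.0                  # arbitrary evaluation point, in fm
lam_hi = np.interp(R_eval, R_nodes, lam_upper)
lam_lo = np.interp(R_eval, R_nodes, lam_lower)
```

Outside the anchored range, \texttt{np.interp} holds the edge values constant, consistent with the flat $\lambda$ plateau at large $R$.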
The red horizontal bands in Fig.~\ref{fig:BR} show the (0-10\%) (for $\mathcal{B}_2$) and (0-20\%) (for $\mathcal{B}_3$) coalescence factor measurements for Pb-Pb. Each of the three red bands corresponds to a different bin in $m_t$, among the three bins shown in Ref.~\cite{Adam:2015vja}. The blue horizontal bands show the result for the (20-40\%) (for $\mathcal{B}_2$) and (20-80\%) (for $\mathcal{B}_3$) events, respectively. The green band shows the result for p-p collisions~\cite{Acharya:2017fvb}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.495\textwidth]{B2R}
\includegraphics[width=0.495\textwidth]{B3R}
\end{center}
\caption{Summary of data. {\bf Top:} $\mathcal{B}_2$ vs. $R$. {\bf Bottom:} $\mathcal{B}_3$ vs. $R$.}
\label{fig:BR}
\end{figure}
\section{Conclusions}\label{s:sum}
We considered the relation between nuclear cluster formation (defined via a coalescence factor $\mathcal{B}_A$) and two-particle correlation measurements (known as femtoscopy or Hanbury Brown-Twiss (HBT) analyses, with two-particle correlation function $\mathcal{C}_2$) in hadronic collisions. Scheibl \& Heinz~\cite{Scheibl:1998tk} derived a theoretical result, Eq.~(\ref{eq:B2ScheiblHeinz}), equating $\mathcal{B}_A$ to inverse powers of the source homogeneity radius $R$ measured in HBT analyses. Eq.~(\ref{eq:B2ScheiblHeinz}) is consistent with LHC data over several orders of magnitude in $\mathcal{B}_A$, albeit with large uncertainties~\cite{Blum:2017qnn}. Ref.~\cite{Scheibl:1998tk} based the derivation of Eq.~(\ref{eq:B2ScheiblHeinz}) on a specific, simplified model of collective flow. This model is unlikely to represent in detail the actual dynamics of systems ranging from Pb-Pb to p-p. The question we addressed is therefore: why does Eq.~(\ref{eq:B2ScheiblHeinz}) work?
Using an idealised quantum mechanical (QM) framework, we derived a direct integral relation between the coalescence factor and the two-particle correlation function. Our main result is Eq.~(\ref{eq:B2C2}), which gives $\mathcal{B}_2$ as an integral of $\mathcal{C}_2$ weighted by the D probability density. The derivation does not require a detailed model of the particle emission source. In particular, we need not invoke the assumptions and approximations of~\cite{Scheibl:1998tk}. If we specialise to the assumptions in~\cite{Scheibl:1998tk}, our formula essentially reproduces Eq.~(\ref{eq:B2ScheiblHeinz}). Importantly, Eq.~(\ref{eq:B2ScheiblHeinz}) also obtains under more general circumstances if the two-particle correlation function can be approximately described empirically by a Gaussian form, as commonly used in experimental HBT studies.
While our theoretical results are consistent with currently available measurements, the uncertainties are large. Existing experimental analyses were not geared for a direct comparison of femtoscopy and cluster yields. No HBT analysis precisely overlaps, in terms of, e.g., $p_t$ and centrality binning, with cluster yield measurements. The recent study in~\cite{Bellini:2018epz} (see also~\cite{Sun:2018mqq}) proposed to bypass this gap by replacing the HBT part in the coalescence-correlation comparison with multiplicity measurements that correlate with the HBT scales. We suggest, instead, that the coalescence-correlation relation offers a fundamental probe of the (generally defined) coalescence model, justifying dedicated experimental work aiming to test the relation directly.
\acknowledgments
We thank Francesca Bellini, Alexander Kalweit and Urs Wiedemann for discussions and JinJin Pan and Kenny Ng for ongoing collaboration on data analysis related to this work. We are grateful to Ulrich Heinz for discussions and especially for helping us find our way in the literature on coalescence and HBT in heavy-ion collisions. Finally, we thank Bhawani Singh for a diligent reading of our paper and for pointing out a normalisation error.
KB is incumbent of the Dewey David Stone and Harry Levine career development chair. The work of KB and MT was supported by grant 1937/12 from the I-CORE program of the Planning and Budgeting Committee and the Israel Science Foundation and by grant 1507/16 from the Israel Science Foundation.
\begin{appendix}
\section{Coalescence from correlation functions: kinetic theory}\label{app:kinetic}
Here we give another derivation of Eq.~(\ref{eq:B2C2}). The starting point of our analysis is equivalent to Eq.~(3.12) of Ref.~\cite{Scheibl:1998tk}, derived in Ref.~\cite{Danielewicz:1992pei}.
We assume that the 2-particle source can be factorised as a product of 1-particle source terms.
The production rate of deuterons (D) at momentum $P_d$, per four-dimensional volume of the source region parametrised by the D formation coordinates $R$, is given by
\begin{eqnarray}\label{eq:a0}
\frac{d}{d^4R}\frac{dN_d}{d^3P_d}
&=&\frac{3\cdot 2}{(2\pi)^3}
\int
\frac{d^3 r\,d^3 Q}{(2\pi)^3}\,
\mathcal{D}_d\left(\vec{Q},\vec{r}\right)\\
&& f(R_+;Q_+)\,\Gamma_{\rm free}(R_-;Q_-^*), \nonumber
\end{eqnarray}
where the factor $3$ is due to the deuteron spin and the factor $2$
is due to exchange of proton and neutron. $\Gamma_{\rm free}$ indicates the production rate of free nucleons. We have $Q_++Q_-^*=P_d$, and we take $Q_-^*$ slightly off-shell to ensure momentum conservation.
For small $\vec Q$, we can approximate
\begin{eqnarray}\label{eq:app1}
\frac{d}{d^4R}\frac{dN_d}{d^3P_d}
&\approx&\frac{3\cdot 2}{(2\pi)^3}\int{d^3 r}\,|\phi_d(\vec{r})|^2\\
&& f(R_+;P_d/2)\,\Gamma_{\rm free}(R_-;P_d/2). \nonumber
\end{eqnarray}
It is convenient to consider the coalescence problem in the D rest frame (DRF). In the DRF, we define the source function $S$ as
\begin{eqnarray} S(x)&=&\frac{m\,\Gamma_{\rm free}(x)}{(2\pi)^3},\end{eqnarray}
such that the free nucleon distribution function is given by
\begin{eqnarray}
f(y)&=&\frac{(2\pi)^3}{m}\int_{-\infty}^{y_0} dt\, S(t,\vec{y}).
\end{eqnarray}
For $|\vec Q|^2\ll m^2$, the constituent nucleon energies are $\approx m$
in the DRF, so the Lorentz invariant D yield is
\begin{eqnarray}
\left(E_d\frac{dN_d}{d^3P_d}\right)^{\rm DRF}
&\approx&
%
\frac{2m}{m^2}3\cdot 2(2\pi)^3\int d^4R\,\frac{1}{2}\int{d^4r}\,
|\phi_d(\vec{r})|^2 \nonumber\\
&&S\left(R^0-t,\vec{R}-\frac{\vec{r}}{2};m\right)S\left(R^0,\vec{R}+\frac{\vec{r}}{2};m\right).\nonumber\\
&&\label{eq:a1}
\end{eqnarray}
Now, consider the two-point correlation function $\mathcal{C}_2(P,q)$.
$\mathcal{C}_2(P,q)$ is frame dependent, and we take the pair centre-of-mass frame (PRF). For clarity, we use the symbol $\mathcal{C}_2^{\rm PRF}$ to denote the two-point function in this frame.
Under the same source factorisation assumption we considered for the coalescence problem, we have~\cite{Chapman:1995nz}
\begin{eqnarray}
\mathcal{C}_2^{\rm PRF}(P,q)&=&\frac{4\int d^4R\int d^4rS\left(R+\frac{r}{2};P\right)S\left(R-\frac{r}{2};P\right)e^{iq\cdot r}}{\left(E\frac{dN}{d^3P}\right)^2},\nonumber\\&&\label{eq:a2}
\end{eqnarray}
where the factor $4$ comes from the spin combinations.
Comparing Eqs.~(\ref{eq:a1}) and~(\ref{eq:a2}), and using Eq.~(\ref{eq:D(k)}), we reproduce Eq.~(\ref{eq:B2C2}).
\section{Cluster wave function}\label{app:wave}
We consider the cluster internal wave function to be a symmetric Gaussian function of the normalised Jacobi coordinates $\vec\xi_n$, $n=1,...,A-1$,
\begin{eqnarray}\phi_A\left(\vec \xi_1,...,\vec \xi_{A-1}\right)&=&\frac{\exp\left(-\frac{\sum_{i=1}^{A-1}\vec\xi_i^2}{2d_A^2}\right)}{A^{\frac{3}{4}}\left(\pi \,d_A^2\right)^{\frac{3(A-1)}{4}}},\end{eqnarray}
where~\cite{Shebeko:2006ud}
\begin{eqnarray}\vec\xi_n&=&\frac{n}{\sqrt{n^2+n}}\left(\vec r_{n+1}-\frac{1}{n}\sum_{m=1}^{n}\vec r_m\right)\end{eqnarray}
and where $\vec r_m$, $m=1,...,A$ are the Cartesian constituent nucleon coordinates.
The size parameter $d_A$ is related to the cluster rms charge radius via~\cite{Mattiello:1996gq,Scheibl:1998tk,Bellini:2018epz}
\begin{eqnarray} r^2_{\rm rms}&=&\frac{3(A-1)}{2A}d_A^2.\end{eqnarray}
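Inverting this relation gives the size parameter $d_A$ from a measured rms charge radius. A minimal sketch follows; the $r_{\rm rms}$ values used are placeholders for illustration, not values quoted in the text:

```python
import math

def size_parameter(A, r_rms):
    # Invert r_rms^2 = 3(A-1)/(2A) * d_A^2 for the Gaussian size parameter d_A.
    return r_rms * math.sqrt(2.0 * A / (3.0 * (A - 1)))

# Placeholder rms radii (fm), for illustration only:
d2 = size_parameter(2, 1.96)   # A = 2 (deuteron-like)
d3 = size_parameter(3, 1.76)   # A = 3; here the prefactor is exactly 1
```

Note that for $A=3$ the prefactor $\sqrt{2A/[3(A-1)]}$ equals unity, so $d_3=r_{\rm rms}$.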
\end{appendix}
\vspace{6 pt}
\section{Introduction} \suppressfloats
The paper concerns
the material behavior of granular media and examines questions
of internal stability, solution uniqueness,
and softening in these materials.
Granular materials can be viewed as systems of granules
that interact at their points of contact.
The incremental
boundary value problem for a granular system would involve
an entire multi-grain
body and the prescribed increments (rates) of
displacements and external forces
(Fig.~\ref{fig:bodies}a).
\begin{figure}
\centering
\includegraphics[scale=0.82]{Fig1.eps}
\caption{Region and sub-region of a granular material.}
\label{fig:bodies}
\end{figure}
When viewed as a system of nodes, connections, and
supports,
the problem resembles conventional problems in structural mechanics.
In an alternative approach,
we could treat the body as a continuum
and investigate uniqueness and stability by evaluating
the material behavior of the entire body or of a representative continuum point
in the manner of \citeN{Hill:1958a}, \citeN{Rice:1976a}, and others.
We suggest that questions of granular behavior can
be investigated by accepting these materials
as discrete systems, with the intent of
appraising their susceptibility to instability and softening.
The developments in the paper can be applied to the
problem of an entire body and its supports,
although the derivations are primarily directed toward
the problem of \emph{material behavior} within
the body, perhaps the behavior within isolated sub-regions
or representative volume elements (Fig.~\ref{fig:bodies}b).
In either case, the continuum notions of
stress and deformation are
replaced by discrete contact forces
and particle displacements within
the body or sub-region (Fig.~\ref{fig:bodies}c).
The purpose of this work is to derive the incremental stiffness of a
system of particles---a stiffness that accounts for the
particle shapes---and to provide stability, uniqueness,
and softening criteria for the system.
\par
In Section \ref{sec:mechanical}, we derive the incremental stiffness
matrix for a group of $N$ particles.
The primary contribution of this section is the inclusion
of geometric terms in the derivation, which account for the
shapes of the particles at their contacts.
By including these terms, we show that
the incremental stiffness of a granular material depends, in part,
on the current forces among the particles and not merely
on the contact stiffnesses alone.
The section includes an analysis of possible rigid rotations
of a sub-region when it is considered detached from the rest of
a granular body.
Section~\ref{sec:mechanical} ends with the presentation of
a sample, prototype contact model that can be used in
typical implementations.
In Section~\ref{sec:stability}, we present conditions
for stability, uniqueness, and softening of a granular
sub-region, with particular attention to the
incrementally nonlinear behavior of contacts within the sub-region.
Section~\ref{sec:example} presents examples of two-particle
and four-particle systems,
and we end by discussing implications of this work
and possible future directions.
A list of notation is given in Appendix~\ref{sec:notation}, and
some derivations are placed in Appendices~\ref{app:derive}--\ref{app:eigen}.
\section{Stiffness of a granular region}\label{sec:mechanical}
We consider the incremental motions and stiffness
of an assembly or cluster of particles
(Fig.~\ref{fig:bodies}b).
Particle positions, contact forces, and loading history are
assumed known at the current time $t$,
insofar as they affect the current incremental contact stiffnesses.
We address the incremental (or rate) problem in which
certain infinitesimal particle motions
and external force increments are prescribed,
and we seek the remaining, unknown motion and force increments.
The particles are assumed to be smooth and durable, with no
particle breaking,
and particles interact solely at their contacts (i.e., no long-range
inter-particle forces).
The particles are also assumed to be rigid except at their
compliant contacts, where the traction between a pair
of particles is treated as a point force that depends
on the relative motions of the two particles.
For example,
this assumption would be consistent with Hertz-type contact
models in which changes in force are produced by the relative
approach of two particles.
This compliant contact viewpoint
differs, however, from ``hard contact'' models that enforce
unilateral force and displacement constraints \cite{Moreau:2004a}.
Finally,
we assume slow deformations and rate-independent contact behavior.
\par
With these assumptions,
particle motions are governed by the mechanics of
rigid bodies with compliant contacts:
particle motions produce contact deformations; contact deformations
produce contact forces; and the forces on each particle
must be in equilibrium.
In this section, we derive the stiffness equation
for a three-dimensional group (or cluster) of $N$ particles in the form
\begin{equation}\label{eq:H}
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right]_{6N \times 6N}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N \times 1}
= \LB d\mathbf{b}\\:d\mathbf{w} \RB_{6N \times 1}
\end{equation}
where $[\mathbf{H}]$ is the incremental stiffness matrix,
vector $[d\mathbf{u}/ d\boldsymbol{\theta}]$
contains three incremental displacements
and three incremental rotations for each of the $N$ particles,
and vector $[d\mathbf{b}/ d\mathbf{w}]$ contains the
six infinitesimal
increments of external force and moment applied to each of the $N$
particles (Fig.~\ref{fig:particles}).
The derivation allows for both contact forces and contact moments,
as well as for both
external body forces $d\mathbf{b}$ and external body moments $d\mathbf{w}$.
These external forces may embody the influence
of surrounding particles on the cluster,
and the paper is primarily directed toward problems in which
the increments $[d\mathbf{b}/d\mathbf{w}]$ are prescribed and the displacements
$[d\mathbf{u}/d\boldsymbol{\theta}]$ must be solved.
In the derivations, we include
all stiffness terms of order $(du)^{1}$ but
exclude terms of higher order.
Even so, Eq.~(\ref{eq:H}) may lead to instabilities,
just as a small strain--finite rotation approach can uncover
instabilities in continuous systems.
The results show that the cluster stiffness
does not exclusively depend upon the
stiffnesses of the contacts (i.e., on the ``contact springs'');
instead, the incremental stiffness also
includes geometric contributions that
depend on the shapes of particles
at their contacts and on the current, accumulated contact forces.
\par
The stiffness matrix $[\mathbf{H}]$ can be assembled in a conventional
manner from the stiffness matrices of the assembly's elemental units---the
individual contacts between particle pairs---and this
section is primarily concerned with deriving the incremental
stiffness of a single pair of particles.
Consider two representative
particles, $p$ and $q$, that are in contact (Fig.~\ref{fig:particles}).
\begin{figure}
\centering
\includegraphics[scale=0.90]{Fig2.eps}
\caption{Two particles in contact.}
\label{fig:particles}
\end{figure}
The incremental stiffness contributed
by this one contact can be expressed in matrix form as
\begin{equation}\label{eq:piece}
\left[
\begin{MAT}(e){l:l}
\mathbf{H}^{p\text{--}p} & \mathbf{H}^{p\text{--}q} \\:
\mathbf{H}^{q\text{--}p} & \mathbf{H}^{q\text{--}q} \\
\end{MAT}
\right]_{12 \times 12}
\left[\begin{MAT}(r){l} d\mathbf{u}^{p}\\ d\boldsymbol{\theta}^{p}\\: d\mathbf{u}^{q}\\ d\boldsymbol{\theta}^{q} \\ \end{MAT}\right]_{12 \times 1}
=
\left[\begin{MAT}(r){l} d\mathbf{b}^{p,\,pq}\\
d\mathbf{w}^{p,\,pq}\\:
d\mathbf{b}^{q,\,qp}\\
d\mathbf{w}^{q,\,qp}\\ \end{MAT}\right]_{12 \times 1}
\end{equation}
where $d\mathbf{u}^{p}$, $d\mathbf{u}^{q}$, $d\boldsymbol{\theta}^{p}$, and $d\boldsymbol{\theta}^{q}$ are the translations and rotations
of $p$ and $q$.
Equation~(\ref{eq:piece}) expresses the effect that the single contact
between $p$ and $q$ will have upon the equilibrium of the two particles.
The external force increments on the right of Eq.~(\ref{eq:piece})
must be combined with the forces that are implied by
the other contacts in an assembly or cluster.
The stiffness matrices of all $M$ contacts within the cluster
can be assembled in the usual manner into
a global matrix---the matrix $[\mathbf{H}]$ of Eq.~(\ref{eq:H}).
The matrix assembly process has been described
elsewhere in the context of the
finite element method (FEM), discontinuous deformation
analysis (DDA), and the granular element method (GEM)
(see \citeNP{Bathe:1976a}, \citeNP{Shi:1993a}, and \shortciteNP{Kaneko:2003a},
respectively).
In the current work,
we do not consider boundary constraints
(prescribed displacements) on the cluster, and this absence
will, of course, leave $[\mathbf{H}]$ singular,
with rigid-body modes of motion.
The possibility of such rigid modes will affect our assessment
of stability, a matter that we consider in Section~\ref{sec:rigid_rotation}.
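The assembly step can be sketched compactly. Assuming each contact contributes a $12\times12$ stiffness partitioned as in Eq.~(\ref{eq:piece}), a scatter-add into the global $6N\times6N$ matrix (shown here with an illustrative identity block in place of a real contact stiffness) reads:

```python
import numpy as np

def assemble_global_stiffness(N, contacts):
    """Scatter-add 12x12 contact stiffness blocks into the 6N x 6N matrix [H].

    `contacts` is a list of (p, q, Hc): particle indices p, q and the 12x12
    contact stiffness Hc, partitioned into four 6x6 blocks as in Eq. (piece).
    This mirrors the standard FEM/DDA assembly procedure cited in the text.
    """
    H = np.zeros((6 * N, 6 * N))
    for p, q, Hc in contacts:
        dofs = np.r_[6 * p : 6 * p + 6, 6 * q : 6 * q + 6]
        H[np.ix_(dofs, dofs)] += Hc
    return H

# Two particles, one contact, with an illustrative (not physical) block:
Hc = np.eye(12)
H = assemble_global_stiffness(2, [(0, 1, Hc)])
```

With no prescribed displacements, the assembled $[\mathbf{H}]$ retains the rigid-body null space discussed above.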
\subsection{Objective incremental vectors}\label{eq:objectivity}
In deriving Eqs.~(\ref{eq:H}) and~(\ref{eq:piece}),
we preferentially use \emph{objective} incremental vectors,
since the response of a granular sub-region or contact should be
independent of the observer, even if the observer is moving
(\citeNP{Truesdell:1960a}, \S293).
An incremental vector is objective if it is assigned the same
measure by two moving observers who briefly share the
same frame at time $t$ but then rotate relative to one
another during the interval of $t$ to $t+dt$.
The increment $d\mathbf{y}$ between the initial and final
vectors $\mathbf{y}^{t}$ and $\mathbf{y}^{t+dt}$,
\begin{equation}
d\mathbf{y} = \mathbf{y}^{t+dt} - \mathbf{y}^{t}\;,
\end{equation}
is not objective, since an observer who rotates with $\mathbf{y}$
would observe a different $d\mathbf{y}$ than would a
stationary observer.
The discrepancy is corrected, of course, when the two observers
independently measure some other angular change $d\boldsymbol{\theta}$
that occurs during $dt$.
For example, if $d\boldsymbol{\theta}$ is the observed
rotation of the direction of $\mathbf{y}^{t+dt}$ relative
to $\mathbf{y}^{t}$, then the corotated force
\begin{equation}
\mathbf{y}^{t\text{, corotated}} = \mathbf{y}^{t} +
d\boldsymbol{\theta}\times\mathbf{y}^{t}
\end{equation}
can be subtracted from $\mathbf{y}^{t+dt}$ to compute an increment
$\Delta \mathbf{y}$ that would be assigned the same measure by both observers:
\begin{equation}\label{eq:Deltav}
\Delta \mathbf{y} = \mathbf{y}^{t+dt} - \mathbf{y}^{t\text{, corotated}}
= d\mathbf{y} - d\boldsymbol{\theta}\times\mathbf{y}^{t}\;.
\end{equation}
The increment $\Delta \mathbf{y}$ is objective.
Other objective increments can be extracted by referencing other
rotations $d\boldsymbol{\theta}$.
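The corotation construction of Eq.~(\ref{eq:Deltav}) is easily verified numerically. In the sketch below, a vector carried rigidly by the rotation has, to first order, zero objective increment:

```python
import numpy as np

def objective_increment(y_t, y_t_dt, dtheta):
    # Subtract the corotated part dtheta x y^t from the raw increment
    # dy = y^{t+dt} - y^t to obtain the objective increment (Eq. Deltav).
    return (y_t_dt - y_t) - np.cross(dtheta, y_t)

# A vector rotating rigidly with the frame:
y = np.array([1.0, 0.0, 0.0])
dth = np.array([0.0, 0.0, 1e-6])      # small rotation about the z axis
y_rotated = y + np.cross(dth, y)      # first-order rotated vector
dy_obj = objective_increment(y, y_rotated, dth)
```

A stationary observer and an observer rotating with $\mathbf{y}$ would both assign the measure zero to this increment.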
\par
In the paper,
we use four types of infinitesimal increments---designated by the
symbols $d$, $\delta$, $\mathfrak{d}$, and
$\mathbbm{d}$---with the following
distinctions:
\begin{itemize}
\item
``$d$'' increments are those seen by a distant (and possibly moving)
observer and are not objective.
\item
``$\delta$'' increments are those viewed by an
observer attached to (and moving with) a single particle
(the angle $ d\boldsymbol{\theta}$ in Eq.~\ref{eq:Deltav} is
taken as the particle rotation).
These increments are objective.
\item
``$\mathfrak{d}$'' increments are also objective but are tied to the
local material characteristics of two particles at their contact
(the angle $ d\boldsymbol{\theta}$ in Eq.~\ref{eq:Deltav} is
taken as the rotation of the contact frame as the particles rotate or twirl
across each other).
\item
``$\mathbbm{d}$'' increments are objective projections of force and
displacement onto certain objective subspaces
(Section~\ref{sec:rigid_rotation}, where
the angle $ d\boldsymbol{\theta}$ in Eq.~\ref{eq:Deltav} is taken as the
average rotation of a particle cluster).
\end{itemize}
\subsection{First geometric stiffness}\label{sec:firstgeometry}
The current contact forces $\mathbf{f}$ and the current contact
moments $\mathbf{m}$ on a single particle $p$ are assumed to be known
\emph{a priori} and to be in equilibrium
with the external force and moment:
\begin{equation}\label{eq:equil1}
-\sum_{q}\mathbf{f}^{pq} = \mathbf{b}^{p} \;, \quad -\sum_{q}\left(\mathbf{r}^{pq}\times\mathbf{f}^{pq} + \mathbf{m}^{pq} \right)= \mathbf{w}^{p} \;,
\end{equation}
where the sums are for all particles ``$q$'' that are in
contact with $p$,
and $\mathbf{b}^{p}$ and $\mathbf{w}^{p}$ are the current external body force and body moment
that act upon $p$ through the current position $\mathbf{x}^{p}$ of
its pre-assigned (material) reference point (Fig.~\ref{fig:particles}).
The internal contact force $\mathbf{f}^{pq}$ and contact moment $\mathbf{m}^{pq}$
act upon particle $p$ at its contact point with $q$, and
the radial vector $\mathbf{r}^{pq}$ is directed from the
reference point $\mathbf{x}^{p}$ of $p$ to the contact point with $q$.
In contrast, $\mathbf{f}^{qp}$ and $\mathbf{m}^{qp}$ act upon particle $q$, and
$\mathbf{r}^{qp}$ is directed from
the point $\mathbf{x}^{q}$ in particle $q$.
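A direct numerical check of the balance equations~(\ref{eq:equil1}) is straightforward. The sketch below uses an illustrative single-contact configuration, with \texttt{contacts} a list of $(\mathbf{f}^{pq},\mathbf{m}^{pq},\mathbf{r}^{pq})$ triples:

```python
import numpy as np

def equilibrium_residuals(contacts, b_p, w_p):
    # Residuals of the force and moment balances for particle p:
    # -sum(f) = b  and  -sum(r x f + m) = w  should both hold.
    f_res = -sum(f for f, m, r in contacts) - b_p
    w_res = -sum(np.cross(r, f) + m for f, m, r in contacts) - w_p
    return f_res, w_res

# Illustration: one contact force at a unit lever arm, no contact moment.
contacts = [(np.array([0., 0., -1.]), np.zeros(3), np.array([1., 0., 0.]))]
b_p = np.array([0., 0., 1.])    # balances the single contact force
w_p = np.array([0., -1., 0.])   # balances the moment r x f
f_res, w_res = equilibrium_residuals(contacts, b_p, w_p)
```

Both residuals vanish for this configuration, confirming the sign conventions above.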
\par
The incremental forms of Eqs.~(\ref{eq:equil1}$_{1}$)
and~(\ref{eq:equil1}$_{2}$) are
\begin{equation}\label{eq:equil2}
-\sum_{q}d\mathbf{f}^{pq} = d\mathbf{b}^{p} \;, \quad
-\sum_{q}\left(d\mathbf{r}^{pq}\times\mathbf{f}^{pq} + \mathbf{r}^{pq}\times d\mathbf{f}^{pq} + d\mathbf{m}^{pq}\right) = d\mathbf{w}^{p} \;,
\end{equation}
where we account for changes $d\mathbf{r}^{pq}$ in the radii as well
as changes $d\mathbf{f}^{pq}$ and $d\mathbf{m}^{pq}$ in the contact forces.
As such, we pursue a second-order theory which accounts for
equilibrium in the deflected shape.
An infinitesimal ``$d$'' increment is one seen by a distant, possibly
moving, observer.
None of the incremental ``$d$'' vectors
in Eq.~(\ref{eq:equil2}) are objective,
but we can identify
an objective ``$\delta$'' part of each increment:
\begin{align}
\label{eq:dr}
d\mathbf{r}^{pq} &= \delta\mathbf{r}^{pq} + d\boldsymbol{\theta}^{p}\times\mathbf{r}^{pq}
\\
\label{eq:df}
d\mathbf{f}^{pq} &= \delta\mathbf{f}^{pq} + d\boldsymbol{\theta}^{p}\times\mathbf{f}^{pq}
\\
\label{eq:dm}
d\mathbf{m}^{pq} &= \delta\mathbf{m}^{pq} + d\boldsymbol{\theta}^{p}\times\mathbf{m}^{pq}
\\
\label{eq:db}
d\mathbf{b}^{p} &= \delta\mathbf{b}^{p} + d\boldsymbol{\theta}^{p}\times\mathbf{b}^{p}
\\
\label{eq:dw}
d\mathbf{w}^{p} &= \delta\mathbf{w}^{p} + d\boldsymbol{\theta}^{p}\times\mathbf{w}^{p}
\end{align}
where $d\boldsymbol{\theta}^{p}$ is the incremental rotation of particle $p$.
The objective ``$\delta$'' increments
are those that would be viewed by an observer attached to
and moving with the particle $p$; whereas,
the cross products in Eqs.~(\ref{eq:dr})--(\ref{eq:dw}) are the increments
that would be seen by a stationary observer when viewing
a vector (say, a follower force $\mathbf{b}^{p}$) that happens to be
rotating in unison with the particle.
Although the force increments on particles $p$ and $q$ are self-equilibrating,
with $d\mathbf{f}^{pq}=-d\mathbf{f}^{qp}$ and $d\mathbf{m}^{pq}=-d\mathbf{m}^{qp}$,
the corotating increments $\delta\mathbf{f}^{pq}$ and $\delta\mathbf{m}^{pq}$ are not
necessarily equal to the negatives of their
counterparts, $-\delta\mathbf{f}^{qp}$ and $-\delta\mathbf{m}^{qp}$, since the
``$pq$'' and ``$qp$'' increments are viewed by different observers.
\par
The equilibrium Eqs.~(\ref{eq:equil2}$_1$) and~(\ref{eq:equil2}$_2$)
can also be expressed in terms of objective ``$\delta$'' increments,
\begin{align}
\label{eq:equil3}
&-\sum_{q}\delta\mathbf{f}^{pq} = \delta\mathbf{b}^{p} \\
\label{eq:equil4}
&-\sum_{q}\left(\delta\mathbf{r}^{pq}\times\mathbf{f}^{pq} + \mathbf{r}^{pq}\times\delta\mathbf{f}^{pq} + \delta\mathbf{m}^{pq}\right) = \delta\mathbf{w}^{p} \;,
\end{align}
as derived in Appendix~\ref{app:derive}.
As expected, incremental equilibrium is an objective relationship,
independent of the observer, and expressible in terms of objective
quantities.
\par
An infinitesimal change in the radial contact position,
$\delta\mathbf{r}^{pq}$ in Eq.~(\ref{eq:equil4}),
alters the moment equilibrium of particle $p$.
This effect is related to similar geometric effects in structural
mechanics, such as buckling and ``p-delta'' phenomena that arise
from the flexing or swaying of columns and frames.
The increment $\delta\mathbf{r}^{pq}$ is objective and
can be separated into normal and tangential parts, which are
both amenable to kinematic/geometric analysis:
\begin{equation}\label{eq:Drpq}
\delta\mathbf{r}^{pq} = \delta s^{pq\text{, n}}\mathbf{n}^{pq} + \delta s^{pq\text{, t}}\mathbf{t}^{pq}\;.
\end{equation}
In this equation, $\mathbf{n}^{pq}$ and $\mathbf{t}^{pq}$ are unit vectors
in directions normal and tangential to $p$ at its contact with $q$,
and $\delta s^{pq\text{, n}}$ and $\delta s^{pq\text{, t}}$ are the associated displacement magnitudes.
Note that $\mathbf{n}^{pq}=-\mathbf{n}^{qp}$,
but the increments $\delta\mathbf{r}^{pq}$ and $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$
might not equal the negatives of their counterparts
$\delta\mathbf{r}^{qp}$ and $\delta s^{qp\text{, t}}\mathbf{t}^{qp}$,
since the latter are viewed by an observer attached to $q$.
\par
For a compliant contact, the normal displacement
$\delta s^{pq\text{, n}}\mathbf{n}^{pq}$ can be taken as
the average incremental indentation of the two particles:
\begin{equation}\label{eq:dsn}
\delta s^{pq\text{, n}}\mathbf{n}^{pq} = \frac{1}{2}\left( \delta\mathbf{u}^{pq\text{, def}}\cdot\mathbf{n}^{pq}\right)\mathbf{n}^{pq}\;,
\end{equation}
where the objective vector $\delta\mathbf{u}^{pq\text{, def}}$ is the
translation of $p$ relative to $q$ near their contact,
\begin{equation}\label{eq:dudef}
\delta\mathbf{u}^{pq\text{, def}} = d\mathbf{u}^{q} -d\mathbf{u}^{p} + \left( d\boldsymbol{\theta}^{q}\times\mathbf{r}^{qp}-d\boldsymbol{\theta}^{p}\times\mathbf{r}^{pq}\right)\;,
\end{equation}
with $\delta\mathbf{u}^{pq\text{, def}}=-\delta\mathbf{u}^{qp\text{, def}}$.
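Equations~(\ref{eq:dsn}) and~(\ref{eq:dudef}) translate directly into code. The sketch below uses an illustrative head-on approach of two particles with contact normal along $x$:

```python
import numpy as np

def contact_translation(du_p, du_q, dth_p, dth_q, r_pq, r_qp):
    # Relative translation of p with respect to q near the contact (Eq. dudef).
    return du_q - du_p + (np.cross(dth_q, r_qp) - np.cross(dth_p, r_pq))

def normal_indentation(du_def, n_pq):
    # Average incremental indentation along the contact normal (Eq. dsn).
    return 0.5 * np.dot(du_def, n_pq) * n_pq

# Illustration: p translates toward q along the contact normal x; no rotations.
du_def = contact_translation(np.array([1e-3, 0., 0.]), np.zeros(3),
                             np.zeros(3), np.zeros(3),
                             np.array([1., 0., 0.]), np.array([-1., 0., 0.]))
ds_n = normal_indentation(du_def, np.array([1., 0., 0.]))
```

Each particle absorbs half of the relative approach, consistent with the factor $\frac{1}{2}$ in Eq.~(\ref{eq:dsn}).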
\par
The displacement $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$ is the tangential movement of the
contact point, as viewed by an observer attached to $p$,
a movement that is produced by a combination of sliding and rolling
motions, described by \citeN{Kuhn:2004b},
\begin{multline}\label{eq:rolling}
\delta s^{pq\text{, t}}\mathbf{t}^{pq} = \\
-\left( \mathbf{K}^{p} + \mathbf{K}^{q} \right)^{-1} \cdot
\left[
\delta\boldsymbol{\theta}^{pq\text{, def}}\times\mathbf{n}^{pq} -
\mathbf{K}^{q}\cdot\left( \delta\mathbf{u}^{pq\text{, def}} - (\delta\mathbf{u}^{pq\text{, def}}\cdot\mathbf{n}^{pq})\mathbf{n}^{pq}\right)
\right]
\end{multline}
where the objective rotational contact deformation $\delta\boldsymbol{\theta}^{pq\text{, def}}$ is
defined as
\begin{equation}\label{eq:dtdef}
\delta\boldsymbol{\theta}^{pq\text{, def}} = d\boldsymbol{\theta}^{q} - d\boldsymbol{\theta}^{p} \;,
\end{equation}
with $\delta\boldsymbol{\theta}^{pq\text{, def}}=-\delta\boldsymbol{\theta}^{qp\text{, def}}$.
Tensors $\mathbf{K}^{p}$ and $\mathbf{K}^{q}$ are the surface curvatures of particles $p$ and $q$
at their contact, with negative curvatures (eigenvalues) associated
with convex particles.
Both positive and negative curvatures are allowed in the paper,
provided that particle surfaces are sufficiently smooth---having
continuous curvatures at the contact points.
We note, however, that a pseudo-inverse should be used in place of
$(\mathbf{K}^{p} + \mathbf{K}^{q})^{-1}$,
so that the rolling displacement vector $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$
is projected onto the tangent plane \cite{Kuhn:2004b}.
\par
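The role of the pseudo-inverse in Eq.~(\ref{eq:rolling}) can be illustrated with a short \texttt{numpy} sketch (our own construction, assuming spherical particles whose curvature tensors are $\mathbf{K}=-(1/R)(\mathbf{I}-\mathbf{n}\otimes\mathbf{n})$, negative for convex surfaces as in the text). Because $\mathbf{K}^{p}+\mathbf{K}^{q}$ is singular in the normal direction, a plain inverse fails, whereas the pseudo-inverse returns a displacement in the tangent plane:

```python
import numpy as np

def tangential_displacement(Kp, Kq, n, du_def, dth_def):
    """Eq. (rolling), with a Moore-Penrose pseudo-inverse replacing
    (Kp + Kq)^{-1} so that the result lies in the tangent plane."""
    du_t = du_def - np.dot(du_def, n) * n      # tangential part of du^def
    rhs = np.cross(dth_def, n) - Kq @ du_t
    return -np.linalg.pinv(Kp + Kq) @ rhs
```

For two equal unit spheres with no relative rotation, the contact point moves by half of the relative tangential translation, as expected by symmetry.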
Both of the increments
$\delta s^{pq\text{, n}}\mathbf{n}^{pq}$ and $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$ are objective, since both
are linear combinations of the objective vectors
$\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$.
In presenting Eqs.~(\ref{eq:dsn}) and~(\ref{eq:rolling}),
we have intentionally ignored changes in the curvatures
that are produced by particle deformations, since such changes
would produce force increments of an order higher than $(du)^{1}$.
\par
Having developed expressions for the increments
$\delta\mathbf{r}^{pq}$ in Eq.~(\ref{eq:equil4}),
we anticipate that the contribution of the normal term
$\delta s^{pq\text{, n}}\mathbf{n}^{pq}\times\mathbf{f}^{pq}$
is likely small, and its effect is probably inconsequential
when compared with the product $\mathbf{r}^{pq}\times\delta\mathbf{f}^{pq}$
in Eq.~(\ref{eq:equil4}).
On the other hand,
the tangential terms $\delta s^{pq\text{, t}}\mathbf{t}^{pq}\times\mathbf{f}^{pq}$ will likely
become significant, perhaps dominant, at larger strains, since particle
rolling becomes a prevailing mechanism
during granular failure \cite{Kuhn:2004k}.
\par
Equation~(\ref{eq:equil4}) includes the effects of the
$\delta\mathbf{r}^{pq}$ increments on the equilibrium of the
single particle $p$, and the similar effects upon all $N$ particles
can be collected into a matrix form as
\begin{equation} \label{eq:Hg1}
-\sum_{q}\delta\mathbf{r}^{pq}\times\mathbf{f}^{pq}
\ \rightsquigarrow\ %
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--1}} \\ \end{MAT}\right]_{6N \times 6N}
\left[\begin{MAT}(r){l} d\mathbf{u}\\: d\boldsymbol{\theta} \\ \end{MAT}\right]_{6N \times 1}\;,
\end{equation}
where matrix $[\mathbf{H}^{\text{g--1}} ]$
is computed with Eqs.~(\ref{eq:equil4})--(\ref{eq:dtdef}).
When constructing the matrix $[ \mathbf{H}^{\text{g--1}} ]$,
one must include the separate contributions of
$\delta\mathbf{r}^{pq}\times\mathbf{f}^{pq}$ and $\delta\mathbf{r}^{qp}\times\mathbf{f}^{qp}$,
which pertain to the equilibrium of particles $p$ and $q$, respectively.
The symbol ``$\rightsquigarrow$'' connotes a matrix assembly
process that collects multiple equilibrium relations
in the form of Eq.~(\ref{eq:piece}) for all $N$ particles.
The six equilibrium equations~(\ref{eq:equil3}) and~(\ref{eq:equil4}),
which apply to any single particle,
can be gathered into the $6N$ equilibrium equations,
\begin{equation}\label{eq:Matrix2}
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--1}} \\ \end{MAT}\right]_{6N \times 6N}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N \times 1}
- \left[\begin{MAT}(r){l} \mathbf{A}_{1} \\ \end{MAT}\right]_{6N \times 2(6M)}
\left[\begin{MAT}(r){l} \delta\mathbf{f}\\: \delta\mathbf{m} \\ \end{MAT}\right]_{2(6M) \times 1}
= \left[\begin{MAT}(r){l} \delta\mathbf{b}\\: \delta\mathbf{w} \\ \end{MAT}\right]_{6N \times 1} \;,
\end{equation}
by collecting the contact force increments,
$\delta\mathbf{f}^{(\cdot )}$ and $\delta\mathbf{m}^{(\cdot )}$,
of all $M$ contacts.
The first matrix product,
$[\mathbf{H}^{\text{g--1}} ] [d\mathbf{u}/d\boldsymbol{\theta}]$,
corresponds to the quantities
$\delta\mathbf{r}^{pq}\times\mathbf{f}^{pq}$ in Eqs.~(\ref{eq:equil4})--(\ref{eq:Hg1});
the second product
$[\mathbf{A}_{1}][\delta\mathbf{f}/\delta\mathbf{m}]$
corresponds to the
$\delta\mathbf{f}^{pq}$ and $\delta\mathbf{m}^{pq}$ terms in Eqs.~(\ref{eq:equil3}) and~(\ref{eq:equil4}).
These latter terms will soon be investigated.
When assembling the contact forces and moments
into Eq.~(\ref{eq:Matrix2}),
we use a less conventional approach: the
contact forces $\delta\mathbf{f}^{pq}$ and $\delta\mathbf{f}^{qp}$ are treated as distinct objects,
since $\delta\mathbf{f}^{pq}$ and $\delta\mathbf{m}^{pq}$ are not usually equal to $-\delta\mathbf{f}^{qp}$ and $-\delta\mathbf{m}^{qp}$.
This distinction
leads to a total of $2(6M)$ contact force/moment components among
the $M$ contacts.
The statics matrix $[\mathbf{A}_{1}]$
combines these contact forces and moments, as with the
$\delta\mathbf{f}^{pq}$ and $\delta\mathbf{m}^{pq}$ sums of
Eqs.~(\ref{eq:equil3}) and~(\ref{eq:equil4}).
Although it may be impossible to entirely separate geometric and
mechanical effects,
the $[\mathbf{H}^{\text{g--1}} ]$ product in~(\ref{eq:Matrix2})
originates from the geometry (surface shapes) of the particles
and from the current contact forces and moments, $\mathbf{f}^{pq}$ and $\mathbf{m}^{pq}$.
The matrix $[\mathbf{H}^{\text{g--1}} ]$ would differ for the three
clusters in Fig.~\ref{fig:Geometry}
and would partially account for any differences in
their incremental responses.
Other geometric effects will arise
from the $[\delta\mathbf{f}/\delta\mathbf{m}]$ vector of
Eq.~(\ref{eq:Matrix2}), which will now be discussed.%
\begin{figure}%
\centering%
\includegraphics[scale=0.80]{Fig3.eps}%
\caption{Three clusters with the same topological arrangement, but
different particle curvatures at their contacts.}%
\label{fig:Geometry}%
\end{figure}%
\subsection{Mechanical stiffness;
second and third geometric stiffnesses}\label{sec:submechanical}
To achieve the form of Eq.~(\ref{eq:H}),
the product
$[\mathbf{A}_{1}][\delta\mathbf{f}/\delta\mathbf{m}]$
in Eq.~(\ref{eq:Matrix2}) must be expressed in terms of the $6N$
particle movements $[d\mathbf{u}/d\boldsymbol{\theta}]$.
The increments of a single contact's force and moment
will depend upon the contact deformations
of the two particles and also upon any change in the orientation
of their contact plane.
The increments of force and moment can be derived in terms of
either the ``$\delta$'' or ``$d$'' increments.
Using the simpler ``$d$'' increments, as viewed by a distant observer,
\begin{align}\label{eq:delfpq}
d\mathbf{f}^{pq} &= \mathfrak{d}\mathbf{f}^{pq} + \mathbf{f}^{pq}\times\left( d\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)
- \frac{1}{2}\left[ \left( d\boldsymbol{\theta}^{p} + d\boldsymbol{\theta}^{q}\right)\cdot\mathbf{n}^{pq}\right]\mathbf{f}^{pq}\times\mathbf{n}^{pq}
\\
\label{eq:delmpq}
d\mathbf{m}^{pq} &= \mathfrak{d}\mathbf{m}^{pq} + \mathbf{m}^{pq}\times\left( d\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)
- \frac{1}{2}\left[ \left( d\boldsymbol{\theta}^{p} + d\boldsymbol{\theta}^{q}\right)\cdot\mathbf{n}^{pq}\right]\mathbf{m}^{pq}\times\mathbf{n}^{pq}\;.
\end{align}
The increments $\mathfrak{d}\mathbf{f}^{pq}$ and $\mathfrak{d}\mathbf{m}^{pq}$ are the objective changes
in contact force and moment produced solely by material deformations
of the two particles near their contact.
These increments depend upon the objective deformation
vectors $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$,
and the possible nature of this dependence will be discussed later.
The terms $\mathbf{f}^{pq}\times\left( d\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)$
and $\mathbf{m}^{pq}\times\left( d\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)$ are the
increments of force and moment
produced by a rotation (tilting) of the contact plane, as seen
by a distant ``$d$'' observer.
These terms are typically computed in DEM algorithms in the
manner of \citeN{Lin:1997a}
and \shortciteN{Vuquoc:2000a}.
The final, subtracted terms in Eqs.~(\ref{eq:delfpq}) and~(\ref{eq:delmpq})
have not yet appeared in the DEM literature;
they are produced by a rigid-body twirling of the particle pair.
That is, a rigid twirling of two particles, with $d\boldsymbol{\theta}^{p}=d\boldsymbol{\theta}^{q}=d\theta\mathbf{n}^{pq}$,
will leave the normal direction $\mathbf{n}^{pq}$ unchanged but
will cause the tangential contact force to rotate
with the particles in the plane of their contact.
(Alternatively,
an apparent rotation of force would be seen in a stationary pair of
particles when viewed by a distant observer who is
twirling about the direction $\mathbf{n}^{pq}$.)
The rotations $d\boldsymbol{\theta}^{p}$ and $d\boldsymbol{\theta}^{q}$ are assigned equal weight in
Eq.~(\ref{eq:delfpq}), so that $d\mathbf{f}^{pq}$
will equal $-d\mathbf{f}^{qp}$ when $p$ and $q$ are interchanged
(see \citeNP{Bagi:2005a}).
\par
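The twirling term can be checked numerically (a small \texttt{numpy} sketch of our own, not from the formulation): for a pure twirl $d\boldsymbol{\theta}^{p}=d\boldsymbol{\theta}^{q}=d\theta\,\mathbf{n}^{pq}$, the subtracted term in Eq.~(\ref{eq:delfpq}) reproduces the rigid rotation $d\theta\,\mathbf{n}^{pq}\times\mathbf{f}^{pq}$ of a tangential force, while a purely normal force is unaffected.

```python
import numpy as np

def twirl_increment(f, n, dth_p, dth_q):
    """Final subtracted term of Eq. (delfpq): force increment produced
    by rigid twirling of the particle pair about the normal n."""
    return -0.5 * np.dot(dth_p + dth_q, n) * np.cross(f, n)
```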
Equation~(\ref{eq:delfpq}) can also be written in terms of the
corotated, objective ``$\delta$'' vectors, as required
in Eqs.~(\ref{eq:equil3}) and~(\ref{eq:equil4}):
\begin{equation}\label{eq:Dfpq}
\delta\mathbf{f}^{pq} = \mathfrak{d}\mathbf{f}^{pq} + \mathbf{f}^{pq}\times\left( \delta\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)
- \frac{1}{2}\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{pq}\right)\mathbf{f}^{pq}\times\mathbf{n}^{pq}\;,
\end{equation}
which is derived in Appendix~\ref{app:derive}.
In Eqs.~(\ref{eq:delfpq}) and~(\ref{eq:Dfpq}),
the total change in the contact normal, $d\mathbf{n}^{pq}$, is the
sum of two parts,
\begin{equation}\label{eq:dn}
d\mathbf{n}^{pq} = \delta\mathbf{n}^{pq} + d\boldsymbol{\theta}^{p}\times\mathbf{n}^{pq}\;,
\end{equation}
in the manner of Eqs.~(\ref{eq:dr})--(\ref{eq:dw}),
and these two parts will be discussed later.
As expected, the objective, corotated increment
$\delta\mathbf{f}^{pq}$ in Eq.~(\ref{eq:Dfpq}) depends solely
on other objective quantities---those vectors on the right side
of Eq.~(\ref{eq:Dfpq}).
Likewise, the corotated moment increment is
\begin{equation}
\label{eq:Dmpq}
\delta\mathbf{m}^{pq} = \mathfrak{d}\mathbf{m}^{pq} + \mathbf{m}^{pq}\times\left( \delta\mathbf{n}^{pq}\times\mathbf{n}^{pq}\right)
- \frac{1}{2}\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{pq}\right)\mathbf{m}^{pq}\times\mathbf{n}^{pq}\;.
\end{equation}
The increments $\mathfrak{d}\mathbf{f}^{pq}$ and $\mathfrak{d}\mathbf{m}^{pq}$
depend upon the infinitesimal contact deformations $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$,
but the other increments
depend upon the local shapes of the two particles at their contact
and upon the accumulated, current contact force and moment, $\mathbf{f}^{pq}$ and $\mathbf{m}^{pq}$.
The $\delta\mathbf{n}^{pq}$ terms in Eqs.~(\ref{eq:Dfpq})
and~(\ref{eq:Dmpq}) are likely insignificant at
small strains, but they may become dominant when the
material is failing \cite{Kuhn:2003h}.
\par
Returning to Eq.~(\ref{eq:dn}),
the second term on its right is the change
in the normal $\mathbf{n}^{pq}$ that would be produced
by a rigid rotation of the particle pair that occurs with no
change in the contact point on the surface of particle $p$.
This term is not objective.
The objective
increment $\delta\mathbf{n}^{pq}$ in Eq.~(\ref{eq:dn}) is the change in the normal
that results from a relocation of the contact point on particle $p$, as viewed
by an observer attached to (and rotating with) $p$.
We note, however, that
an observer attached to $q$ will likely view a different reorientation
$\delta\mathbf{n}^{qp}$ of its contact point with $p$.
The increment $\delta\mathbf{n}^{pq}$ depends upon the
curvature of particle $p$ and is (see \citeNP{Kuhn:2004b})
\begin{equation}\label{eq:Dnpq}
\delta\mathbf{n}^{pq} = -\mathbf{K}^{p}\cdot(\delta s^{pq\text{, t}}\mathbf{t}^{pq})\;,
\end{equation}
where the contact displacement $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$ is given in Eq.~(\ref{eq:rolling}).
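As a brief numerical check of Eq.~(\ref{eq:Dnpq}) (our own \texttt{numpy} sketch, again assuming a spherical particle with $\mathbf{K}^{p}=-(1/R)(\mathbf{I}-\mathbf{n}\otimes\mathbf{n})$), rolling by $\delta s^{pq\text{, t}}\mathbf{t}^{pq}$ reorients the normal by $|\delta s^{pq\text{, t}}|/R$ within the tangent plane:

```python
import numpy as np

def normal_change(Kp, ds_t):
    """Eq. (Dnpq): objective reorientation of the contact normal on p."""
    return -Kp @ ds_t
```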
The force increments
in the final two terms of Eqs.~(\ref{eq:Dfpq}) and~(\ref{eq:Dmpq})
are collected into a matrix form by
applying Eqs.~(\ref{eq:Dnpq}) and~(\ref{eq:rolling}) to all
$M$ contacts:
\begin{multline}\label{eq:A3}
\left[
\begin{MAT}(r){l}
\mathbf{f}^{pq}\times(\delta\mathbf{n}^{pq}\times\mathbf{n}^{pq})- (1/2)\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{pq}\right)\mathbf{f}^{pq}\times\mathbf{n}^{pq}\\:
\mathbf{m}^{pq}\times(\delta\mathbf{n}^{pq}\times\mathbf{n}^{pq})- (1/2)\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{pq}\right)\mathbf{m}^{pq}\times\mathbf{n}^{pq}\\:
\mathbf{f}^{qp}\times(\delta\mathbf{n}^{qp}\times\mathbf{n}^{qp})- (1/2)\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{qp}\right)\mathbf{f}^{qp}\times\mathbf{n}^{qp}\\:
\mathbf{m}^{qp}\times(\delta\mathbf{n}^{qp}\times\mathbf{n}^{qp})- (1/2)\left(\delta\boldsymbol{\theta}^{pq\text{, def}}\cdot\mathbf{n}^{qp}\right)\mathbf{m}^{qp}\times\mathbf{n}^{qp}\\
\end{MAT}
\right]_{2(6M)\times 1}
\\
\rightsquigarrow\ %
\left[\begin{MAT}(r){l} \mathbf{A}_{2} \\ \end{MAT}\right]_{2(6M) \times 6N}
\left[\begin{MAT}(r){l} d\mathbf{u}\\: d\boldsymbol{\theta} \\ \end{MAT}\right]_{6N \times 1}\;.
\end{multline}
\par
We now consider the remaining terms, $\mathfrak{d}\mathbf{f}^{pq}$ and $\mathfrak{d}\mathbf{m}^{pq}$,
that appear in Eqs.~(\ref{eq:Dfpq}) and~(\ref{eq:Dmpq}).
A unique mapping is assumed from the full $\mathbb{R}^{6}$
space of incremental contact deformations,
$\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$, into the possibly smaller space of incremental
contact force and moment,
$\mathfrak{d}\mathbf{f}^{pq}$ and $\mathfrak{d}\mathbf{m}^{pq}$.
We also assume that the particles are rigid except
at their compliant contacts.
For such contact between two particles,
any objective increment of contact force or moment,
such as $\mathfrak{d}\mathbf{f}^{pq}$ or $\mathfrak{d}\mathbf{m}^{pq}$, must depend on the
objective, relative increments $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$ of their
movements \cite{Kuhn:2005c}.
The assumption of a unique mapping
$[\delta\mathbf{u}^{pq\text{, def}} / \delta\boldsymbol{\theta}^{pq\text{, def}}] \rightarrow [\mathfrak{d}\mathbf{f}^{pq} / \mathfrak{d}\mathbf{m}^{pq}]$
excludes Signorini models of contact behavior.
Finally, we assume that the mapping is homogeneous of degree one
in both $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$, perhaps in the restricted form
\begin{align}\label{eq:constitutive}
\mathfrak{d}\mathbf{f}^{pq}
&=
\mathbf{F}^{pq}\left( \frac{\delta\mathbf{u}^{pq\text{, def}}}{|\delta\mathbf{u}^{pq\text{, def}} |}\,,\, \mathbf{f}^{pq}\right)\cdot\delta\mathbf{u}^{pq\text{, def}}
\\
\label{eq:constitutiveM}
\mathfrak{d}\mathbf{m}^{pq}
&=
\mathbf{M}^{pq}\left( \frac{\delta\boldsymbol{\theta}^{pq\text{, def}}}{|\delta\boldsymbol{\theta}^{pq\text{, def}} |}\,,\, \mathbf{m}^{pq}\right)\cdot\delta\boldsymbol{\theta}^{pq\text{, def}}
\;,
\end{align}
where we introduce the
contact stiffness tensor functions
$\mathbf{F}^{pq}$ and $\mathbf{M}^{pq}$,
noting that $\mathbf{F}^{pq}=-\mathbf{F}^{qp}$
and $\mathbf{M}^{pq}=-\mathbf{M}^{qp}$.
We could also choose more
general forms of contact behavior than those in
Eqs.~(\ref{eq:constitutive}) and~(\ref{eq:constitutiveM}).
In these equations,
we have excluded viscous effects
(see \shortciteNP{Poschel:2001a}),
but we allow the incremental response to
depend on the current contact force $\mathbf{f}^{pq}$, as would apply with
frictional contacts.
The constitutive forms~(\ref{eq:constitutive})
and~(\ref{eq:constitutiveM}) depend upon the directions
of the deformations $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$ and are,
at best, incrementally nonlinear, as would be expected for
frictional contacts.
For general Mindlin--Cattaneo contacts, the form would additionally need to
include the history of the contact force.
We also note that in Eqs.~(\ref{eq:constitutive})
and~(\ref{eq:constitutiveM}),
a contact's force and moment are uncoupled from each other and
are also uncoupled from the forces and moments at the other
contacts of the same particle, although the latter condition may not
be suitable for very soft particles.
The forms in Eqs.~(\ref{eq:constitutive}) and~(\ref{eq:constitutiveM})
would also
not be appropriate for capturing the effects
of rolling friction, in which $\mathfrak{d}\mathbf{f}^{pq}$ and $\mathfrak{d}\mathbf{m}^{pq}$
depend on a combination of the translational
and rotational deformations, $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$
\shortcite{Iwashita:1998a,Vuquoc:2000a}.
Section~\ref{sec:epstiffness} recounts a specific example of the behavior
in Eq.~(\ref{eq:constitutive}).
\par
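A minimal linear-elastic instance of the stiffness tensor $\mathbf{F}^{pq}$ in Eq.~(\ref{eq:constitutive}) can be written as follows (our own sketch: it assigns a stiffness $k_{\text{n}}$ to normal indentation and $k_{\text{t}}$ to tangential slip, and deliberately omits the frictional, direction-dependent behavior that the text allows):

```python
import numpy as np

def linear_contact_stiffness(n, k_n, k_t):
    """A linear-elastic instance of F^{pq} in Eq. (constitutive):
    F = k_n (n(x)n) + k_t (I - n(x)n), with no direction dependence."""
    nn = np.outer(n, n)
    return k_n * nn + k_t * (np.eye(3) - nn)
```

The product $\mathbf{F}^{pq}\cdot\delta\mathbf{u}^{pq\text{, def}}$ then resolves the deformation into independent normal and tangential force increments.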
The general stiffness relations in
Eqs.~(\ref{eq:constitutive}) and~(\ref{eq:constitutiveM})
are collected
for all $M$ contacts into the matrix form
\begin{equation}\label{eq:constitMatrix}
\left[\begin{MAT}(r){l} \mathfrak{d}\mathbf{f}\\: \mathfrak{d}\mathbf{m} \\ \end{MAT}\right]_{2(6M) \times 1}
=
\left[
\begin{MAT}(r)[0pt,2.5em,2em]{c}
\mathbf{F}\\:
\mathbf{M}\\
\end{MAT}
\right]_{2(6M) \times 6M}
\left[\begin{MAT}(r){l} \delta\mathbf{u}^{\text{def}}\\: \delta\boldsymbol{\theta}^{\text{def}} \\ \end{MAT}\right]_{6M \times 1}\;,
\end{equation}
recognizing that the contents of matrix $[\mathbf{F}/\mathbf{M}]$
may depend upon the current contact forces, $\mathbf{f}^{pq}$ and
$\mathbf{m}^{pq}$,
and on the directions of the incremental contact deformations, $\delta\mathbf{u}^{pq\text{, def}}$
and $\delta\boldsymbol{\theta}^{pq\text{, def}}$.
That is, the mapping from $[\delta\mathbf{u}^{pq\text{, def}} / \delta\boldsymbol{\theta}^{pq\text{, def}} ]$
to $[\mathfrak{d}\mathbf{f}/ \mathfrak{d}\mathbf{m}]$
may be incrementally nonlinear in a manner explored
in Sections~\ref{sec:epstiffness} and~\ref{sec:stability}.
To be consistent with Eqs.~(\ref{eq:Matrix2}) and~(\ref{eq:A3}),
we treat the forces $\mathfrak{d}\mathbf{f}^{pq}$
and $\mathfrak{d}\mathbf{m}^{pq}$
as being distinct from $\mathfrak{d}\mathbf{f}^{qp}$
and $\mathfrak{d}\mathbf{m}^{qp}$,
even though
$\mathfrak{d}\mathbf{f}^{pq}=-\mathfrak{d}\mathbf{f}^{qp}$,
$\mathfrak{d}\mathbf{m}^{pq}=-\mathfrak{d}\mathbf{m}^{qp}$,
$\mathbf{F}^{pq}=-\mathbf{F}^{qp}$,
and $\mathbf{M}^{pq}=-\mathbf{M}^{qp}$.
\par
The contact deformations $\delta\mathbf{u}^{pq\text{, def}}$
and $\delta\boldsymbol{\theta}^{pq\text{, def}}$
in Eqs.~(\ref{eq:constitutive})--(\ref{eq:constitMatrix})
depend upon the motions of the two particles $p$ and $q$.
These kinematic relationships are supplied by
Eqs.~(\ref{eq:dudef}) and~(\ref{eq:dtdef}),
which can be collected in a matrix form as
\begin{equation}\label{eq:kinematics}
\left[\begin{MAT}(r){l} \delta\mathbf{u}^{\text{def}}\\: \delta\boldsymbol{\theta}^{\text{def}} \\ \end{MAT}\right]_{6M \times 1}
= \left[\begin{MAT}(r){l} \mathbf{B} \\ \end{MAT}\right]_{6M \times 6N}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N \times 1}
\end{equation}
for all $N$ particles and their $M$ contacts.
Matrix $[\mathbf{B}]$ is the kinematics matrix.
\par
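One contact's slice of $[\mathbf{B}]$ follows directly from Eqs.~(\ref{eq:dudef}) and~(\ref{eq:dtdef}). The sketch below (our own \texttt{numpy} construction, with an assumed column ordering) also verifies that rigid-body motions of the pair produce no contact deformation, the property invoked later for the stiffnesses built from $[\mathbf{B}]$:

```python
import numpy as np

def skew(v):
    """Skew matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def kinematics_block(r_pq, r_qp):
    """One contact's 6x12 slice of [B] in Eq. (kinematics), built from
    Eqs. (dudef) and (dtdef); columns ordered [du_p, dth_p, du_q, dth_q]."""
    I = np.eye(3)
    B = np.zeros((6, 12))
    B[:3, 0:3] = -I                # -du^p
    B[:3, 3:6] = skew(r_pq)        # -dth^p x r^{pq}
    B[:3, 6:9] = I                 # +du^q
    B[:3, 9:12] = -skew(r_qp)      # +dth^q x r^{qp}
    B[3:, 3:6] = -I                # dth^{def} = dth^q - dth^p
    B[3:, 9:12] = I
    return B
```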
Equations~(\ref{eq:Dfpq}), (\ref{eq:Dmpq}), (\ref{eq:A3}),
(\ref{eq:constitMatrix}) and~(\ref{eq:kinematics})
are substituted into Eq.~(\ref{eq:Matrix2})
to arrive at a matrix equation for all particle
motions within a granular assembly:
\begin{equation}\label{eq:prefinal}
\left(
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--1}} \\ \end{MAT}\right]
+ \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--2}} \\ \end{MAT}\right]
+
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{m}} \\ \end{MAT}\right]
\right)
\DMu
= \left[\begin{MAT}(r){l} \delta\mathbf{b}\\: \delta\mathbf{w} \\ \end{MAT}\right]_{6N \times 1} \;,
\end{equation}
where the ``mechanical'' stiffness $[ \mathbf{H}^{\text{m}} ]$ is
\begin{equation} \label{eq:Hm}
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{m}} \\ \end{MAT}\right]_{6N\times 6N} =
-\left[\begin{MAT}(r){l} \mathbf{A}_{1} \\ \end{MAT}\right]_{6N\times 2(6M)}
\left[
\begin{MAT}(r)[0pt,2em,2em]{c}
\mathbf{F}\\:
\mathbf{M}\\
\end{MAT}
\right]_{2(6M)\times 6M}
\left[\begin{MAT}(r){l} \mathbf{B} \\ \end{MAT}\right]_{6M\times 6N}
\end{equation}
and the second geometric stiffness $[\mathbf{H}^{\text{g--2}} ]$ is
\begin{equation}\label{eq:equilibrium4}
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--2}} \\ \end{MAT}\right]_{6N\times 6N}
=
-\left[\begin{MAT}(r){l} \mathbf{A}_{1} \\ \end{MAT}\right]_{6N\times 2(6M)}
\left[\begin{MAT}(r){l} \mathbf{A}_{2} \\ \end{MAT}\right]_{2(6M)\times 6N} \;.
\end{equation}
This geometric stiffness accounts for the rotations of
contact forces that accompany the rolling and twirling of particle pairs.
The stiffness
$[ \mathbf{H}^{\text{m}} ]$
in Eq.~(\ref{eq:Hm})
is the conventional mechanical stiffness matrix
for a system of $N$ nodes that interact through $M$ connections,
but in a granular system, the connections are through contacts whose
positions and orientations are altered by the particle movements---even
infinitesimal movements.
The geometric alterations are captured, in part,
with the matrices $[ \mathbf{H}^{\text{g--1}} ]$
and $[\mathbf{H}^{\text{g--2}}]$.
A third alteration is also required.
\par
To attain the desired form of Eq.~(\ref{eq:H}),
the corotating forces $\delta\mathbf{b}$ and
$\delta\mathbf{w}$ must be converted into the conventional increments
$d\mathbf{b}$ and $d\mathbf{w}$.
In view of Eqs.~(\ref{eq:db}) and~(\ref{eq:dw}),
\begin{equation}\label{eq:A4}
\left[\begin{MAT}(r){l} d\mathbf{b}\\: d\mathbf{w} \\ \end{MAT}\right]_{6N \times 1}
= \left[\begin{MAT}(r){l} \delta\mathbf{b}\\: \delta\mathbf{w} \\ \end{MAT}\right]_{6N \times 1}
+ \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--3}} \\ \end{MAT}\right]_{6N\times 6N}
\left[\begin{MAT}(r){l} d\mathbf{u}\\: d\boldsymbol{\theta} \\ \end{MAT}\right]_{6N \times 1}\;,
\end{equation}
where the third geometric stiffness
$[\mathbf{H}^{\text{g--3}}]$
collects the relations in Eqs.~(\ref{eq:equil1}),
(\ref{eq:db}), and~(\ref{eq:dw})
for all $N$ particles,
\begin{equation} \label{eq:Hg3}
\left.\begin{array}{ll}
d\boldsymbol{\theta}^{p}\times\mathbf{b}^{p} &= -d\boldsymbol{\theta}^{p}\times\sum_{q}\mathbf{f}^{pq} \\
d\boldsymbol{\theta}^{p}\times\mathbf{w}^{p} &= -d\boldsymbol{\theta}^{p}\times\sum_{q}\left(\mathbf{r}^{pq}\times\mathbf{f}^{pq} + \mathbf{m}^{pq} \right)
\end{array}
\right\}
\ \rightsquigarrow\ %
\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--3}} \\ \end{MAT}\right]_{6N\times 6N}
\left[\begin{MAT}(r){l} d\mathbf{u}\\: d\boldsymbol{\theta} \\ \end{MAT}\right]_{6N \times 1}\;.
\end{equation}
\subsection{Combined assembly stiffness matrix}\label{sec:combined}
Equation~(\ref{eq:A4}) can now be substituted into Eq.~(\ref{eq:prefinal})
to arrive at the stiffness relation for an assembly of
$N$ particles in the intended, target form of Eq.~(\ref{eq:H}):
\begin{equation}\tag{\ref{eq:H}}
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right]_{6N \times 6N}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N \times 1} = \LB d\mathbf{b}\\:d\mathbf{w} \RB_{6N \times 1}
\end{equation}
with
\begin{align}\label{eq:final1}
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right]
&=
\left(\left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--1}} \\ \end{MAT}\right]
+ \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--2}} \\ \end{MAT}\right]
+ \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--3}} \\ \end{MAT}\right]\right)
+ \left[\begin{MAT}(r){l} \mathbf{H}^{\text{m}} \\ \end{MAT}\right] \\
\label{eq:final2}
&= \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g}} \\ \end{MAT}\right] + \left[\begin{MAT}(r){l} \mathbf{H}^{\text{m}} \\ \end{MAT}\right] \;.
\end{align}
The geometric stiffness $[\mathbf{H}^{\text{g}}]$ is the sum of three parts,
which simply correspond to three steps in its derivation.
\par
Each of the
$6N\times 6N$ stiffnesses
in Eq.~(\ref{eq:final1}) can be constructed from
the $M$ corresponding $12\times 12$ contact stiffnesses.
That is, the $12\times 12$ stiffness in Eq.~(\ref{eq:piece})
for a single contact is the sum
of four $12\times 12$ contributions
that correspond to the matrices
$[\mathbf{H}^{\text{g--1}} ]$,
$[\mathbf{H}^{\text{g--2}}]$,
$[\mathbf{H}^{\text{g--3}}]$, and
$[ \mathbf{H}^{\text{m}} ]$
in Eq.~(\ref{eq:final1}).
We note, however, that the two submatrices
$[\mathbf{H}^{q\text{--}p}]$ and
$[\mathbf{H}^{q\text{--}q}]$ in Eq.~(\ref{eq:piece}) are formed
from the vectors $\mathbf{f}^{qp}$, $\mathbf{r}^{qp}$, and $\mathbf{n}^{qp}$,
etc. instead of their ``$pq$'' counterparts.
We also note that the inner product of $[\mathbf{H}^{\text{g--1}} ]$,
$[\mathbf{H}^{\text{g--2}}]$, or $[ \mathbf{H}^{\text{m}} ]$
with any rigid-body motion
$[d\mathbf{u}/d\boldsymbol{\theta}]^{\text{rigid}}$
will be zero, since these three
stiffnesses are constructed from the
contact deformations $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$,
which are zero for any rigid-body motion.
The product
$[\mathbf{H}^{\text{g--3}}] [d\mathbf{u}/d\boldsymbol{\theta}]^{\text{rigid}}$
might, however, not equal zero, an anomaly that is resolved
in Section~\ref{sec:rigid_rotation}.
\par
The assembly stiffness $[\mathbf{H}]$ in Eq.~(\ref{eq:final1})
embodies four stiffness components:
two geometric components, $[\mathbf{H}^{\text{g--1}} ]$
and $[\mathbf{H}^{\text{g--2}}]$,
that depend upon the particle shapes (surface curvatures)
and upon the current contact forces;
a third geometric component $[\mathbf{H}^{\text{g--3}}]$ that
depends upon the particle size (the radial vectors $\mathbf{r}^{pq}$)
as well as upon the current contact forces;
and a mechanical component
$[ \mathbf{H}^{\text{m}} ]$
that depends upon the contact stiffnesses.
The geometric stiffness $[\mathbf{H}^{\text{g}}]$
would be required to distinguish the different
incremental responses of the three clusters in
Fig.~\ref{fig:Geometry}.
Having derived the incremental stiffness $[\mathbf{H}]$,
we now consider two related matters that must
be resolved before applying $[\mathbf{H}]$ to questions
of stability, bifurcations, and softening.
\subsection{Cluster rotations}\label{sec:rigid_rotation}
Questions of stability and softening, discussed
in Section~\ref{sec:stability}, will depend upon
second-order work quantities, specifically, on the
signs of inner products such as
\begin{equation} \label{eq:InnerProducts}
\LB d\mathbf{b}\\:d\mathbf{w} \RB^{\text{T}} \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
\quad \text{and} \quad
\left[\begin{MAT}(r){l} \mathbbm{d}\mathbf{b}\\: \mathbbm{d}\mathbf{w} \\ \end{MAT}\right]^{\text{T}}
\left[\begin{MAT}(r){l} \mathbbm{d}\mathbf{u}\\: \mathbbm{d}\boldsymbol{\theta} \\ \end{MAT}\right] \;,
\end{equation}
where $[\mathbbm{d}\mathbf{b} / \mathbbm{d}\mathbf{w}]$ and
$[\mathbbm{d}\mathbf{u} / \mathbbm{d}\boldsymbol{\theta}]$ are defined in this section.
Although the first product (\ref{eq:InnerProducts}$_{1}$) is a standard matter
in structural stability analysis,
structures and machines are usually attached to foundations or
chassis, so that rigid-body motions are not explicitly considered.
To investigate the internal stability of a granular material,
as manifested in a granular cluster or a representative volume element,
we must reconcile possible rigid modes of rotation,
particularly when the cluster is analyzed as being
independent of the surrounding material.
We refer to such granular sub-systems as ``isolated clusters,''
and the second product, Eq.~(\ref{eq:InnerProducts}$_2$),
is more appropriate for their analysis.
\par
Consider the isolated two-particle cluster in Fig.~\ref{fig:rotate}.
\begin{figure}
\centering
\includegraphics[scale=0.90]{Fig4.eps}
\caption{Rigid rotation of an equilibrated system.}
\label{fig:rotate}
\end{figure}
The particles are initially in equilibrium with the opposing external
forces $\mathbf{b}$ and $-\mathbf{b}$ (Fig.~\ref{fig:rotate}a).
The pair is then rotated in a rigid manner, along with its forces,
through the angular increment $d\overline{\boldsymbol{\theta}}^{\text{rigid}}$,
as in Fig.~\ref{fig:rotate}b
(or, alternatively, the observer rotates by the angle
$-d\overline{\boldsymbol{\theta}}^{\text{rigid}}$).
The two increments of force, $d\mathbf{b}$ and $-d\mathbf{b}$,
are due entirely to the products
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}\times \mathbf{b}$
and $-d\overline{\boldsymbol{\theta}}^{\text{rigid}}\times \mathbf{b}$
of Eq.~(\ref{eq:db}),
which are generated by the stiffness contribution
$[\mathbf{H}^{\text{g--3}}]$ of Eqs.~(\ref{eq:A4}) and~(\ref{eq:Hg3}).
The simpler inner product
$[d\mathbf{b}]^{\text{T}}[d\mathbf{u}]$ equals $-2\,db\,du$
and is non-zero,
\emph{even though no second-order work is involved}.
A stability criterion that is tied to this inner product
must obviously be amended to neglect such rigid rotation modes.
A similar situation arises in continuum theories of
internal instability and bifurcation,
and these problems are typically corrected by using a corotational
or nominal stress rate in place of the Cauchy rate,
and by taking advantage of a symmetry of the stiffness tensor that negates
any spin component of the velocity gradient
\cite{Hill:1958a,Rice:1976a,Bazant:1991a}.
\par
When investigating the stability of a discrete system,
certain corotational ``$\mathbbm{d}$'' increments should be used,
as in the second inner product of Eq.~(\ref{eq:InnerProducts}).
To this end,
we first derive a projection of the particle motions $[d\mathbf{u}/d\boldsymbol{\theta}]$
onto the vector subspace of rigid rotations.
A rigid rotation of the entire system by an angle
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}$
produces the following motions,
$d\mathbf{u}^{p,\overline{\boldsymbol{\theta}}}$ and
$d\boldsymbol{\theta}^{p,\overline{\boldsymbol{\theta}}}$,
of a single particle $p$ having the position $\mathbf{x}^{p}$
(Fig.~\ref{fig:particles}):
\begin{align}\label{eq:dutrigid}
d\mathbf{u}^{p,\overline{\boldsymbol{\theta}}} &=
d\overline{\boldsymbol{\theta}}^{\text{rigid}} \times
\mathbf{x}^{p} \\
d\boldsymbol{\theta}^{p,\overline{\boldsymbol{\theta}}} &=
d\overline{\boldsymbol{\theta}}^{\text{rigid}} \;,
\end{align}
and the motions of all $N$ particles can be collected in a matrix
form as
\begin{equation}\label{eq:dtrigid}
\left[\begin{MAT}(r){l} d\mathbf{u}^{\overline{\boldsymbol{\theta}}}\\:
d\boldsymbol{\theta}^{\overline{\boldsymbol{\theta}}} \\ \end{MAT}\right]_{6N\times 1} =
\left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]_{6N\times 3}
\left[\begin{MAT}(r){l} d\overline{\boldsymbol{\theta}}^{\text{rigid}} \\ \end{MAT}\right]_{3\times 1} \;.
\end{equation}
Conversely, the rigid rotation $d\overline{\boldsymbol{\theta}}^{\text{rigid}}$
of a system of $N$ moving particles can be extracted from their
$6N$ motions $[d\mathbf{u}/d\boldsymbol{\theta}]$
by multiplying by
the Moore--Penrose inverse $[\mathbf{C}]^{+}$:
\begin{equation}\label{eq:Moore1}
\left[\begin{MAT}(r){l} d\overline{\boldsymbol{\theta}}^{\text{rigid}} \\ \end{MAT}\right]_{3\times 1} =
\left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]_{3\times 6N}^{+}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N\times 1} \;,
\end{equation}
with
\begin{equation}
\left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]^{+} =
\left( \left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]^{\text{T}} \left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]\right)^{-1}
\left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]^{\text{T}} \;.
\end{equation}
The rigid-rotation mode
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}$
can then be removed
from the original particle motions $[d\mathbf{u}/d\boldsymbol{\theta}]$
by projecting them onto the subspace that excludes
rigid rotations:
\begin{equation} \label{eq:ddu}
\left[\begin{MAT}(r){l} \mathbbm{d}\mathbf{u}\\: \mathbbm{d}\boldsymbol{\theta} \\ \end{MAT}\right]_{6N\times 1} =
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}} \\ \end{MAT}\right]_{6N\times 6N}
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB_{6N\times 1} \;.
\end{equation}
The ``$\mathbbm{d}$'' projected motions
$[\mathbbm{d}\mathbf{u} / \mathbbm{d}\boldsymbol{\theta}]$
are objective and contain no systematic rigid rotation
of the $N$ particles.
The ``no--rigid--rotation'' (n--r--r) projection matrix,
$[\mathbf{P}^{\text{n--r--r}}]$, is given by
\begin{equation}\label{eq:Pnrr}
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}} \\ \end{MAT}\right]_{6N\times 6N} =
\left[\begin{MAT}(r){l} \mathbf{I} \\ \end{MAT}\right]_{6N\times 6N}
- \left[\begin{MAT}(r){l} \mathbf{P}^{\text{r--r}} \\ \end{MAT}\right]_{6N\times 6N} \;,
\end{equation}
where the projection matrix $[\mathbf{P}^{\text{r--r}}]$
for ``rigid--rotations'' (r--r) is
\begin{equation}\label{eq:Prr}
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{r--r}} \\ \end{MAT}\right] =
\left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]^{+} \;.
\end{equation}
Both $[\mathbf{P}^{\text{r--r}}]$ and
$[\mathbf{P}^{\text{n--r--r}}]$ are symmetric and idempotent.
\par
The stiffness relation in Eq.~(\ref{eq:H}) can be rewritten by
substituting the motions
$[\mathbbm{d}\mathbf{u} / \mathbbm{d}\boldsymbol{\theta}]$
in Eq.~(\ref{eq:ddu})
for the motions
$[d\mathbf{u} / d\boldsymbol{\theta}]$:
\begin{equation}\label{eq:equilibrium5}
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right]
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}} \\ \end{MAT}\right]
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
= \left[\begin{MAT}(r){l} d\mathbf{b}\\: d\mathbf{w} \\ \end{MAT}\right]
- \left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{P}^{\text{r--r}} \\ \end{MAT}\right]
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB \;.
\end{equation}
The proper use of $[\mathbf{C}]$ and its related matrices requires that
the particle positions $\mathbf{x}^{p}$ in Eq.~(\ref{eq:dutrigid})
are measured from the center of the $N$-particle cluster,
so that $\sum_{p=1}^{N}\mathbf{x}^{p}=\mathbf{0}$.
If another origin is chosen,
the product $[\mathbf{P}^{\text{r--r}}][d\mathbf{u}/d\boldsymbol{\theta}]$ will mishandle
rigid-body translations and produce an apparent (and false) rotation
of the system.
If another origin must be used, three additional columns
should be appended to the matrix $[\mathbf{C}]$, so that
the column space of $[\mathbf{C}]$ spans both rigid rotations
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}$ and rigid translations
$d\overline{\mathbf{u}}^{\text{rigid}}$.
The following derivations use a central origin and
the simpler $6N\times 3$
matrix $[\mathbf{C}]$ of Eq.~(\ref{eq:dtrigid}).
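If a non-central origin is unavoidable, the augmented matrix described above can be sketched as follows (a hypothetical helper with our own column ordering). Its six columns span both rigid rotations and rigid translations, so that $[\mathbf{C}][\mathbf{C}]^{+}$ then projects onto all rigid-body modes:

```python
import numpy as np

def build_C_aug(X):
    """6N x 6 matrix whose columns span rigid rotations (first three
    columns) and rigid translations (last three), for use with a
    non-central origin.  X: (N, 3) particle positions."""
    N = len(X)
    C = np.zeros((6 * N, 6))
    e = np.eye(3)
    for p in range(N):
        for j in range(3):
            C[3*p:3*(p+1), j] = np.cross(e[j], X[p])  # du from rotation e_j
        C[3*(N + p):3*(N + p + 1), :3] = e            # dtheta from rotation
        C[3*p:3*(p+1), 3:] = e                        # du from translation
        # a rigid translation produces no particle rotations
    return C

# With a non-central origin, a pure translation is correctly recognized
# as a rigid-body motion (it lies in the column space of the matrix)
X = np.array([[2., 0., 0.], [3., 1., 0.]])            # not centered
C = build_C_aug(X)
P_rigid = C @ np.linalg.pinv(C)
motion = np.concatenate([np.tile([0.5, -0.2, 0.1], 2), np.zeros(6)])
assert np.allclose(P_rigid @ motion, motion)
```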
\par
Equation~(\ref{eq:equilibrium5}) is an alternative
to Eq.~(\ref{eq:H}),
and it effects two changes that are relevant
to stability analysis.
First, the product
$[\mathbf{P}^{\text{n--r--r}}][d\mathbf{u} / d\boldsymbol{\theta}]=[\mathbbm{d}\mathbf{u} / \mathbbm{d}\boldsymbol{\theta}]$
on the left of Eq.~(\ref{eq:equilibrium5})
removes rigid modes of rotation from the full $\mathbb{R}^{6N}$
space of particle motions $[d\mathbf{u}/d\boldsymbol{\theta}]$.
As such, the non-zero movements $d\mathbf{u}$ and $-d\mathbf{u}$
in Fig.~\ref{fig:rotate}c would be replaced with
$\mathbbm{d}\mathbf{u}=-\mathbbm{d}\mathbf{u}=0$.
Second, the force increments
$[d\mathbf{b}/d\mathbf{w}]$ on the right of Eq.~(\ref{eq:equilibrium5})
are reduced by the increments that are produced merely by a systematic
rigid rotation of the $N$ particles.
The matrix $[\mathbf{H}]$ in Eq.~(\ref{eq:equilibrium5})
is the sum of the four contributions given in
Eq.~(\ref{eq:final1}), but three of these contributions originate solely from
the objective contact deformations $\delta\mathbf{u}^{pq\text{, def}}$ and $\delta\boldsymbol{\theta}^{pq\text{, def}}$:
the matrices $[\mathbf{H}^{\text{g--1}} ]$,
$[ \mathbf{H}^{\text{g--2}} ]$, and
$[ \mathbf{H}^{\text{m}} ]$, as defined in
Eqs.~(\ref{eq:Hg1}), (\ref{eq:A3}), and~(\ref{eq:constitMatrix}).
These three contributions are unaffected by a systematic rigid rotation
of the assembly.
For example,
with the ``$\text{g--1}$'' contribution,
the product on the right of Eq.~(\ref{eq:equilibrium5}) is
$[\mathbf{H}^{\text{g--1}} ][\mathbf{P}^{\text{r--r}}] [d\mathbf{u}/d\boldsymbol{\theta}] =0$.
Only the $[\mathbf{H}^{\text{g--3}}]$ contribution
is affected by a rigid rotation,
as is seen by substituting a systematic rotation
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}$
into the definition in Eq.~(\ref{eq:Hg3}).
\par
We define the
force increments $\mathbbm{d}\mathbf{b}$ and $\mathbbm{d}\mathbf{w}$
as the expression on the right of Eq.~(\ref{eq:equilibrium5}),
which can also be written in the alternative forms
\begin{equation} \label{eq:ddb}
\begin{split}
\LB \D\mathbf{b}\\:\D\mathbf{w}\RB &\ =\ %
\LB d\mathbf{b}\\:d\mathbf{w} \RB
- \left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{P}^{\text{r--r}} \\ \end{MAT}\right]
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
\ =\ %
\LB d\mathbf{b}\\:d\mathbf{w} \RB - \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--3}} \\ \end{MAT}\right]
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{r--r}} \\ \end{MAT}\right]
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
\\
&\ =\ %
\LB d\mathbf{b}\\:d\mathbf{w} \RB
- \left[\begin{MAT}(r){l} \mathbf{H}^{\text{g--3}} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{C} \\ \end{MAT}\right]
\left[\begin{MAT}(r){l} d\overline{\boldsymbol{\theta}}^{\text{rigid}} \\ \end{MAT}\right] \;.
\end{split}
\end{equation}
That is,
the force increments $[d\mathbf{b}/d\mathbf{w}]$
are reduced by a common rotation of the current
external forces $\mathbf{b}^{p}$ and $\mathbf{w}^{p}$,
a rotation that produces the increments
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}\times\mathbf{b}^{p}$ and
$d\overline{\boldsymbol{\theta}}^{\text{rigid}}\times\mathbf{w}^{p}$ for $p=1\ldots N$.
The forces $d\mathbf{b}^{p}$ and $d\mathbf{b}^{q}$ in Fig.~\ref{fig:rotate}
would be eliminated by the subtracted
terms in Eq.~(\ref{eq:ddb}).
Increments $\mathbbm{d}\mathbf{b}$ and $\mathbbm{d}\mathbf{w}$ are objective.
\par
We define the modified stiffnesses $[\mathbbm{H}]$ and
$[\pmb{\mathcal{H}}]$ as
\begin{equation}\label{eq:newstiffnesses}
\left[\begin{MAT}(r){l} \mathbbm{H} \\ \end{MAT}\right] =
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}} \\ \end{MAT}\right]\;,\quad
\left[\begin{MAT}(r){l} \pmb{\mathcal{H}} \\ \end{MAT}\right] =
\left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}}\\ \end{MAT}\right]^{\text{T}}
\left[\begin{MAT}(r){l}\mathbf{H} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \mathbf{P}^{\text{n--r--r}} \\ \end{MAT}\right]\;.
\end{equation}
When combined with
the definitions~(\ref{eq:equilibrium5}) and~(\ref{eq:ddb}),
the stiffness relation~(\ref{eq:H}) can be written in the following
alternative forms
\begin{equation} \label{eq:problem}
\left[\begin{MAT}(r){l} \mathbf{H} \\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
= \LB d\mathbf{b}\\:d\mathbf{w} \RB \quad\text{or}\quad
\left[\begin{MAT}(r){l} \mathbbm{H} \\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
= \LB \D\mathbf{b}\\:\D\mathbf{w}\RB \;.
\end{equation}
Possible bifurcations in an isolated granular cluster
are resolved by seeking multiple solutions of the second form
(Section~\ref{sec:uniqueness}).
The possible instability or softening of an isolated granular
cluster is resolved by considering the following inner
product:
\begin{equation}\label{eq:newinner}
\LB \D\mathbf{b}\\:\D\mathbf{w}\RB^{\text{T}}
\left[\begin{MAT}(r){l} \mathbbm{d}\mathbf{u}\\: \mathbbm{d}\boldsymbol{\theta} \\ \end{MAT}\right]
=
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{\text{T}}
\left[\begin{MAT}(r){l} \pmb{\mathcal{H}} \\ \end{MAT}\right]
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB \;,
\end{equation}
as discussed in Section~\ref{sec:softening}.
Because the projection matrix $[\mathbf{P}^{\text{n--r--r}}]$
is symmetric and idempotent, the two matrices
$[\mathbbm{H}]$ and $[\pmb{\mathcal{H}}]$ share the same eigenvalues.
This characteristic is proven by supposing
that $\lambda$ and $[\boldsymbol{\nu}]$ are an eigenvalue
and eigenvector of $[\mathbbm{H}]$:
\begin{align} \label{eq:proof1}
\left[\begin{MAT}(r){l} \mathbbm{H} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right] &= \lambda\left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right] \\
\left[\begin{MAT}(r){l} \pmb{\mathcal{H}} \\ \end{MAT}\right] \left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right] &=
\lambda\left[\begin{MAT}(r){l}\mathbf{P}^{\text{n--r--r}}\\ \end{MAT}\right]\left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right]\\
\label{eq:proof3}
\left[\begin{MAT}(r){l} \pmb{\mathcal{H}} \\ \end{MAT}\right]\left[\begin{MAT}(r){l}\mathbf{P}^{\text{n--r--r}}\\ \end{MAT}\right]\left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right]
&=
\lambda\left[\begin{MAT}(r){l}\mathbf{P}^{\text{n--r--r}}\\ \end{MAT}\right]\left[\begin{MAT}(r){l} \boldsymbol{\nu} \\ \end{MAT}\right]
\end{align}
where we have substituted Eq.~(\ref{eq:newstiffnesses}) between
the first and second expressions and have used the idempotent property
of $[\mathbf{P}^{\text{n--r--r}}]$ to arrive at the third expression.
The result shows that $\lambda$ is also an eigenvalue of
$[\pmb{\mathcal{H}}]$, but with the associated eigenvector
$[\mathbf{P}^{\text{n--r--r}}][\boldsymbol{\nu}]$.
Stability depends, however, upon the eigenvalues of the symmetric part
of $[\pmb{\mathcal{H}}]$, which might differ from those of
$[\pmb{\mathcal{H}}]$ itself or of the symmetric part of
$[\mathbbm{H}]$ (Section~\ref{sec:softening}).
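The eigenvalue-sharing argument of Eqs.~(\ref{eq:proof1})--(\ref{eq:proof3}) is easy to confirm numerically. The sketch below (a generic stand-in: a random asymmetric stiffness and a random symmetric, idempotent projector, not the cluster matrices themselves) verifies that each eigenpair $(\lambda,[\boldsymbol{\nu}])$ of $[\mathbf{H}][\mathbf{P}]$ yields the eigenpair $(\lambda,[\mathbf{P}][\boldsymbol{\nu}])$ of $[\mathbf{P}]^{\text{T}}[\mathbf{H}][\mathbf{P}]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3                       # motion space R^n, k rigid-like modes

# A symmetric, idempotent projector removing a k-dimensional subspace,
# analogous to [P^nrr] = [I] - [C][C]^+
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = np.eye(n) - Q @ Q.T
assert np.allclose(P, P.T) and np.allclose(P @ P, P)

H = rng.standard_normal((n, n))    # a generic (asymmetric) stiffness
HP = H @ P                         # analog of [Hbb] = [H][P^nrr]
PHP = P.T @ H @ P                  # analog of [Hcal] = [P]^T [H][P]

# Every eigenpair (lam, v) of H P gives the eigenpair (lam, P v) of P^T H P
lam, V = np.linalg.eig(HP)
for j in range(n):
    w = P @ V[:, j]
    assert np.allclose(PHP @ w, lam[j] * w)
```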
\subsection{Elastic-plastic contact stiffness}\label{sec:epstiffness}
\citeN{Michalowski:1978a} and \shortciteN{Radi:1999a} have derived a
simple contact stiffness by applying concepts of elasto-plasticity
theory.
We briefly review this stiffness, as it will serve as a prototype
for investigating the stability and softening of particle
sub-regions (Section~\ref{sec:stability}).
The contact stiffness is incrementally nonlinear
with two branches:
an elastic branch characterized
by the normal and tangential stiffnesses
$k^{pq}$ and $\alpha k^{pq}$,
and a sliding branch characterized by a friction coefficient
$\mu^{pq}$.
Whenever sliding becomes possible,
the active branch is
determined by the direction of the contact
deformation $\delta\mathbf{u}^{pq\text{, def}}$.
Sliding occurs at a firm contact when two conditions are met:
\begin{enumerate}
\item
When the current contact force satisfies the
yield condition $Q^{pq}=0$:
\begin{equation}\label{eq:yield}
Q^{pq} = Q(\mathbf{f}^{pq} ) =
\left |
\mathbf{f}^{pq} - (\mathbf{n}^{pq} \cdot \mathbf{f}^{pq}) \mathbf{n}^{pq}
\right |
+
\mu \mathbf{f}^{pq} \cdot \mathbf{n}^{pq}
= 0 \;.
\end{equation}
This yield condition depends upon the current contact force $\mathbf{f}^{pq}$,
which is known \emph{a priori}.
With the isotropic frictional behavior
in Eq.~(\ref{eq:yield}), the yield condition is axisymmetric
within the contact plane
(see \citeNP{Michalowski:1978a} for alternative, asymmetric forms).
\item
When the contact deformation $\delta\mathbf{u}^{pq\text{, def}}$ is directed outward
from the yield surface in displacement space, satisfying the condition
$S^{pq}>0$:
\begin{equation}\label{eq:flow}
S^{pq} = S(\mathbf{f}^{pq} , \delta\mathbf{u}^{pq\text{, def}}) = \mathbf{g}^{pq} \cdot \delta\mathbf{u}^{pq\text{, def}} > 0 \;,
\end{equation}
where the yield surface $Q$ has the normal direction
\begin{equation}\label{eq:G}
\mathbf{g}^{pq} = k \left( \alpha \mathbf{h}^{pq} + \mu \mathbf{n}^{pq} \right)
\end{equation}
and the unit sliding direction $\mathbf{h}^{pq}$ is tangent to the contact plane
and aligned with the current contact force $\mathbf{f}^{pq}$:
\begin{equation}\label{eq:Hdirection}
\mathbf{h}^{pq} = \frac{ \mathbf{f}^{pq} - (\mathbf{n}^{pq} \cdot\mathbf{f}^{pq} ) \mathbf{n}^{pq} }
{| \mathbf{f}^{pq} - (\mathbf{n}^{pq} \cdot\mathbf{f}^{pq} ) \mathbf{n}^{pq} |} \;.
\end{equation}
\end{enumerate}
With this simple model and a hardening modulus of zero,
the contact stiffness tensor $\mathbf{F}^{pq}$
in Eq.~(\ref{eq:constitutive}) has two branches,
elastic and sliding, given by
\begin{equation}\label{eq:ContactF}
\mathbf{F}^{pq} = \begin{cases}
\mathbf{F}^{pq\text{, elastic}} =
k\left[ \alpha\mathbf{I} + (1-\alpha)\mathbf{n}^{pq}\otimes\mathbf{n}^{pq} \right]
& \text{if } Q^{pq}<0 \text{ or } S^{pq}\leq 0 \\
\mathbf{F}^{pq\text{, sliding}} =
\mathbf{F}^{pq\text{, elastic}} -
\mathbf{h}^{pq}\otimes\mathbf{g}^{pq}
& \text{if } Q^{pq}=0 \text{ and } S^{pq}>0
\end{cases}
\end{equation}
where $\mathbf{I}$ is the identity (Kronecker) tensor.
Because the sliding and yield directions do not coincide
($\mathbf{h}^{pq} \neq \mathbf{g}^{pq}$), sliding is non-associative and
the contact stiffness in Eq.~(\ref{eq:ContactF}$_2$) is
asymmetric and may lead to negative second-order work at the contact.
The sliding behavior possesses deviatoric associativity, however,
since the sliding direction $\mathbf{h}^{pq}$ is aligned with
the tangential component of the yield surface normal
$\mathbf{g}^{pq}$ \cite{Bigoni:2000a}.
The yield condition in Eq.~(\ref{eq:yield}) will likely
be met at multiple contacts within a granular assembly,
which will lead to a combined stiffness $\mathbf{H}^{m}([d\mathbf{u}/d\boldsymbol{\theta}])$ that is
incrementally nonlinear and has multiple stiffness branches
(Section~\ref{sec:stability}).
\par
The derivation of Eq.~(\ref{eq:ContactF}) assumes that the
two particles are in firm contact, as opposed to grazing contact
\shortcite{Radi:1999a}.
For a firm contact, the incremental stiffness
is piece-wise linear, having linear behavior within each
branch of Eq.~(\ref{eq:ContactF}).
Grazing contacts have thoroughly nonlinear behavior and are not treated
further in this work.
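The two-branch model can be transcribed almost directly from Eqs.~(\ref{eq:yield})--(\ref{eq:ContactF}). The sketch below (with assumed parameter values and our own function names) evaluates the yield function $Q$, the sliding direction $\mathbf{h}$, and the surface normal $\mathbf{g}$, and then selects the elastic or sliding branch from the deformation direction:

```python
import numpy as np

def contact_stiffness(f, n, k=1.0, alpha=0.5, mu=0.5, tol=1e-12):
    """Branch selector for the two-branch contact stiffness of
    Eq. (ContactF): returns a function du_def -> F (3x3).  f is the
    current contact force f^pq and n the unit contact normal n^pq.
    (Illustrative sketch; parameter values are assumptions.)"""
    F_el = k * (alpha * np.eye(3) + (1 - alpha) * np.outer(n, n))
    ft = f - (n @ f) * n                      # tangential force component
    Q = np.linalg.norm(ft) + mu * (f @ n)     # yield function, Eq. (yield)
    if Q < -tol:                              # strictly inside yield surface
        return lambda du_def: F_el
    h = ft / np.linalg.norm(ft)               # sliding direction, Eq. (Hdirection)
    g = k * (alpha * h + mu * n)              # yield-surface normal, Eq. (G)
    F_sl = F_el - np.outer(h, g)              # sliding branch
    def F(du_def):
        S = g @ du_def                        # flow condition, Eq. (flow)
        return F_sl if S > 0 else F_el
    return F

# A contact at yield (Q = 0): pure tangential sliding produces no
# tangential force increment, while reversed motion unloads elastically
F = contact_stiffness(np.array([-1., 0.5, 0.]), np.array([1., 0., 0.]))
du = np.array([0., 1., 0.])
assert np.allclose(F(du) @ du, 0.0)           # sliding branch, zero hardening
assert np.allclose(F(-du) @ (-du), [0., -0.5, 0.])  # elastic branch
```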
\section{Uniqueness, internal stability, and softening}\label{sec:stability}
With a typical structural system, questions of
uniqueness and stability can be resolved by investigating the
determinant and eigenvalues of its stiffness matrix.
Although we can use this approach with granular systems,
the incremental analysis will likely be complicated by two conditions:
(1) incrementally nonlinear stiffnesses $\mathbbm{H}$
and $\pmb{\mathcal{H}}$ having multiple branches,
and (2) the asymmetry of these stiffnesses.
Both factors are now considered.
We confine this study, however, to isolated particle clusters,
which lack any displacement constraints that would otherwise
prevent rigid motions of the cluster;
the more general problem of constrained granular
systems is left for future study.
With isolated clusters, the matrices $[\mathbbm{H}]$ and $[\pmb{\mathcal{H}}]$ in
Eqs.~(\ref{eq:problem}$_2$) and~(\ref{eq:InnerProducts}$_2$)
will be examined in place of matrix
$[\mathbf{H}]$ and Eqs.~(\ref{eq:H}) and~(\ref{eq:InnerProducts}$_1$),
and the inevitable (but less interesting)
rigid-body motions will be referred to as
\emph{trivial solutions} of Eq.~(\ref{eq:problem}$_2$).
\par
The geometric stiffness $[\mathbf{H}^{\text{g}}]$ of smooth particles is independent of the
loading direction,
but the mechanical stiffness $\mathbf{H}^{\text{m}}([d\mathbf{u}/d\boldsymbol{\theta}] )$ can be incrementally
nonlinear, having a finite number $L$ of stiffness branches,
represented by the matrices
$[\mathbf{H}^{\text{m, } 1}]$,
$[\mathbf{H}^{\text{m, } 2}]$,
$[\mathbf{H}^{\text{m, } 3}]$, $\ldots\,$, %
$[\mathbf{H}^{\text{m, } L}]$.
Because the contact behavior is assumed homogeneous of degree one
(Eqs. \ref{eq:constitutive}--\ref{eq:constitutiveM}),
the active branch of $\mathbf{H}^{\text{m}}([d\mathbf{u}/d\boldsymbol{\theta}])$ is determined by the unit loading
direction $[d\mathbf{u}/d\boldsymbol{\theta}]/\left| [d\mathbf{u}/d\boldsymbol{\theta}] \right|$.
Although incrementally nonlinear, we assume that the
incremental mapping
$\mathbf{H}^{\text{m}}:\,[d\mathbf{u}/d\boldsymbol{\theta}]\rightarrow[d\mathbf{b}/ d\mathbf{w}]$
is continuous and piece-wise linear, so that two adjacent branches share the
same stiffness along their shared boundary, and the behavior is linear within
each branch.
The example contact model in Section~\ref{sec:epstiffness}
would lead to incrementally nonlinear mappings
$\mathbf{H}^{\text{m}}([d\mathbf{u}/d\boldsymbol{\theta}])$ having these characteristics.
With this contact model,
a single contact has one stiffness if it is elastic
($Q<0$ in Eq.~\ref{eq:yield}), but it has two branches
when the yield surface has been reached.
If $M^{\text{s}}$ of the $M$ contacts are known to
be potentially sliding, having a current $Q=0$, then the combined stiffness
$\mathbf{H}^{\text{m}}([d\mathbf{u}/d\boldsymbol{\theta}] )$ has $L=2^{M^{\text{s}}}$ branches.
The active branch is determined by applying $M^{\text{s}}$
independent sliding conditions,
each in the form of Eq.~(\ref{eq:flow}).
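The branch bookkeeping can be sketched as below (illustrative only; a real assembly would sum each per-contact choice into the global $6N\times 6N$ stiffness, and the per-contact matrices here are generic stand-ins):

```python
import itertools
import numpy as np

def enumerate_branches(K_elastic, K_sliding):
    """Enumerate the L = 2**Ms stiffness branches of an assembly with
    Ms potentially sliding contacts (current Q = 0).  K_elastic and
    K_sliding list the per-contact stiffness alternatives; each branch
    is one elastic/sliding choice per contact."""
    Ms = len(K_elastic)
    for bits in itertools.product((0, 1), repeat=Ms):
        # bits[c] == 1 means contact c takes its sliding branch
        yield bits, sum(K_sliding[c] if bits[c] else K_elastic[c]
                        for c in range(Ms))

# Two potentially sliding contacts give 2**2 = 4 branches
Ke = [np.eye(2), np.eye(2)]
Ks = [0.5 * np.eye(2), 0.5 * np.eye(2)]
branches = list(enumerate_branches(Ke, Ks))
assert len(branches) == 4
```

The active branch is then the one whose `bits` agree with the signs of the $M^{\text{s}}$ flow conditions of Eq.~(\ref{eq:flow}) for the motion being tested.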
\par
The $i$th stiffness branches $[\mathbf{H}^{i}]$, $[\mathbbm{H}^{i}]$, and $[\pmb{\mathcal{H}}^{i}]$
will often be asymmetric.
Symmetry of the mechanical stiffness $[\mathbf{H}^{\text{m}} ]$ depends upon the symmetry
of the individual contact stiffnesses---the $\mathbf{F}^{pq}$ and $\mathbf{M}^{pq}$
in Eqs.~(\ref{eq:constitutive}) and~(\ref{eq:constitutiveM})---whose
symmetry is lost when contacts begin to slide.
The geometric stiffness $[\mathbf{H}^{\text{g}} ]$ is symmetric only if all $M$ contact forces
lack a tangential component.
\subsection{Uniqueness}\label{sec:uniqueness}
We now consider whether Eq.~(\ref{eq:problem}$_2$)
admits multiple non-trivial solutions for a given
force increment $[\mathbbm{d} \mathbf{b}/\mathbbm{d} \mathbf{w}]$.
For a \emph{linear} and possibly asymmetric structural system that is
constrained from rigid-body motions,
uniqueness is assured when the determinant
$\text{det}([\mathbf{H} ]) \neq 0$ or, alternatively, when
$[\mathbf{H} ]$ has no eigenvalues that are zero.
Isolated granular clusters are linear when no contacts are yet sliding,
but even then,
the usual criterion must be modified to exclude rigid
motions of the cluster as possible bifurcation modes.
Using the stiffness $[\mathbbm{H}]$
of Eq.~(\ref{eq:newstiffnesses}) in place of $[\mathbf{H}]$,
an isolated \emph{linear} granular cluster admits no non-trivial
bifurcations when $[\mathbbm{H} ]$ has only six eigenvalues that
are zero---the eigenvalues that correspond to the six independent
rigid-body motions.
A seventh zero-eigenvalue signals a
condition of \emph{neutral equilibrium} and the presence
of non-trivial, bifurcating solutions of the linear
equations.
In this case,
any multiple of the seventh eigenvector $[\boldsymbol{\nu}^{(7)}]$
can be added to a solution of the non-homogeneous
Eq.~(\ref{eq:problem}$_2$) to produce a family of solutions.
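The family of bifurcating solutions can be illustrated on a small singular system (a toy analog, not the cluster matrices themselves): any multiple of the null-space eigenvector may be added to a particular solution without disturbing the load:

```python
import numpy as np

# Toy singular stiffness: a small analog of a "seventh zero eigenvalue"
H = np.array([[2., -1., -1.],
              [-1., 2., -1.],
              [-1., -1., 2.]])    # singular: rows sum to zero
b = np.array([1., 0., -1.])       # compatible load (orthogonal to null space)

# The bifurcation eigenmode nu spans the null space (from the SVD)
U, s, Vt = np.linalg.svd(H)
nu = Vt[-1]                       # right singular vector of zero singular value

x0, *_ = np.linalg.lstsq(H, b, rcond=None)   # one particular solution
for gamma in (0.0, 1.0, -2.5):    # every member of the family solves H x = b
    assert np.allclose(H @ (x0 + gamma * nu), b)
```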
\par
When contacts are sliding, granular behavior is inelastic
and incrementally nonlinear, and multiple
branches of the stiffness $\mathbbm{H}([d\mathbf{u}/d\boldsymbol{\theta}])$ must be considered for admitting
solutions of Eq.~(\ref{eq:problem}$_2$).
For an isolated cluster, non-uniqueness arises when two non-trivial
solutions, $[d\mathbf{u}/d\boldsymbol{\theta}] ^{a}$ and $[d\mathbf{u}/d\boldsymbol{\theta}] ^{b}$, exist:
\begin{equation} \label{eq:nonunique}
\left[\begin{MAT}(r){l} \mathbbm{H}^{a}\\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{a} = \LB \D\mathbf{b}\\:\D\mathbf{w}\RB
\quad\text{and}\quad
\left[\begin{MAT}(r){l} \mathbbm{H}^{b}\\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{b} = \LB \D\mathbf{b}\\:\D\mathbf{w}\RB
\end{equation}
where the difference $[d\mathbf{u}/d\boldsymbol{\theta}] ^{a} - [d\mathbf{u}/d\boldsymbol{\theta}] ^{b}$ is not a rigid-body motion,
and where the two stiffness branches
$[\mathbbm{H}^{a}]$ and $[\mathbbm{H}^{b}]$ are consistent with the directions
of their solution vectors $[d\mathbf{u}/d\boldsymbol{\theta}]^{a}$ and $[d\mathbf{u}/d\boldsymbol{\theta}]^{b}$, respectively.
By \emph{consistent} we mean that a product
$[\mathbbm{H}^{i}][d\mathbf{u}/d\boldsymbol{\theta}]$ involves motions $[d\mathbf{u}/d\boldsymbol{\theta}]$
that lie within the particular domain of the
branch $[\mathbbm{H}^{i}]$, which could be verified by checking $M^{\text{s}}$
sliding conditions in the form of Eq.~(\ref{eq:flow}).
The non-uniqueness in Eq.~(\ref{eq:nonunique}) can arise in two
ways:
\begin{enumerate}
\item
\emph{Type~1 non-uniqueness} occurs
when $[d\mathbf{u}/d\boldsymbol{\theta}]^{a}$ and $[d\mathbf{u}/d\boldsymbol{\theta}]^{b}$ belong to different branches of
the stiffness $\mathbbm{H}([d\mathbf{u}/d\boldsymbol{\theta}])$, such that $[\mathbbm{H}^{a}]\neq[\mathbbm{H}^{b}]$.
\item
\emph{Type~2 non-uniqueness} occurs
when a single branch, say $[\mathbbm{H}^{a}]$
with solution $[d\mathbf{u}/d\boldsymbol{\theta}]^{a}$, satisfies
Eq.~(\ref{eq:nonunique}$_{1}$) and has a seventh eigenvalue that is
zero.
Because behavior within each branch is assumed to be linear,
a family of non-trivial solutions
$[d\mathbf{u}/d\boldsymbol{\theta}] ^{b} = [d\mathbf{u}/d\boldsymbol{\theta}] ^{a} + \gamma[\boldsymbol{\nu}^{(7)}]$
is associated with the solution
$[d\mathbf{u}/d\boldsymbol{\theta}] ^{a}$ (although the scalar $\gamma$ may need to
be restricted to keep $[d\mathbf{u}/d\boldsymbol{\theta}] ^{b}$ within the same branch
as $[d\mathbf{u}/d\boldsymbol{\theta}] ^{a}$).
\end{enumerate}
The first situation is possible when some of the contact stiffnesses
$\mathbf{F}^{pq}$ are not positive definite, as with the sliding contacts
of Eq.~(\ref{eq:ContactF}$_{2}$).
In this case, the Hill condition
$([d\mathbf{u}/d\boldsymbol{\theta}] ^{a} - [d\mathbf{u}/d\boldsymbol{\theta}] ^{b})^{\text{T}}
([\mathbbm{H}^{a}][d\mathbf{u}/d\boldsymbol{\theta}]^{a} - [\mathbbm{H}^{b}][d\mathbf{u}/d\boldsymbol{\theta}]^{b})>0$
might not be met for certain vectors
$[d\mathbf{u}/d\boldsymbol{\theta}] ^{a}$ and $[d\mathbf{u}/d\boldsymbol{\theta}] ^{b}$,
which can permit Type~1 non-uniqueness.
\par
The two types of non-uniqueness suggest an algorithm for seeking
possible bifurcating solutions of Eq.~(\ref{eq:problem}$_{2}$).
For the given loading $[\mathbbm{d}\mathbf{b}/\mathbbm{d}\mathbf{w}]$,
each of the $L=2^{M^{\text{s}}}$ branches of $[\mathbbm{H}^{i}]$, $i=1\ldots L$,
must be checked for a possible solution to Eq.~(\ref{eq:problem}$_{2}$).
If a solution appears to exist within the particular branch
$[\mathbbm{H}^{i}]$, this
solution $[d\mathbf{u}/d\boldsymbol{\theta}]$ must also be checked for its consistency with
the loading conditions of that branch
(e.g., by applying Eq.~\ref{eq:flow} to each of the $M^{\text{s}}$
potentially sliding contacts).
If multiple branches give non-trivial and consistent solutions,
then Type~1 non-uniqueness is present.
The number of zero-eigenvalues must also be counted for
each branch that yields a non-trivial and consistent solution.
If the matrix of any solution branch
has more than six zero-eigenvalues with consistent
eigenvectors, then Type~2
non-uniqueness is present.
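The branch-wise search can be organized as in the following schematic sketch (our own names; `consistent` stands for the per-branch check of the sliding conditions in Eq.~\ref{eq:flow}, and the Type-2 zero-eigenvalue count is omitted):

```python
import numpy as np

def bifurcation_search(branches, consistent, db, tol=1e-9):
    """Check every stiffness branch for a solution of [Hbb^i][du] = [db]
    that is consistent with that branch's sliding conditions.  Two or
    more hits signal Type-1 non-uniqueness."""
    hits = []
    for i, Hi in enumerate(branches):
        du, *_ = np.linalg.lstsq(Hi, db, rcond=None)
        if not np.allclose(Hi @ du, db, atol=tol):
            continue                     # no solution within branch i
        if consistent(i, du):
            hits.append((i, du))
    return hits

# Toy branches that both admit a (consistent) solution: Type-1 detected
H0 = np.eye(2)
H1 = np.array([[1., 0.], [0., 2.]])
hits = bifurcation_search([H0, H1], lambda i, du: True, np.array([0., 2.]))
assert len(hits) == 2
```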
\subsection{Stability and softening}\label{sec:softening}
We adopt the usual criterion of stability for time-invariant
systems:
a system is stable if positive work is required for all load increments
that maintain equilibrium
\cite{Kratzig:1995a,Petryk:2000a}.
If an isolated granular cluster is already in equilibrium under
the current external forces $[\mathbf{b}/\mathbf{w}]$,
then \emph{the system is stable} if the second-order work is positive
for all increments $[d\mathbf{u}/d\boldsymbol{\theta}]$:
\begin{equation} \label{eq:stab}
\left(\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{\text{T}}\left[\begin{MAT}(r){l} \pmb{\mathcal{H}}^{i} \\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB > 0,\;
\forall \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB \text{ consistent with }
\left[\begin{MAT}(r){l}\pmb{\mathcal{H}}^{i}\\ \end{MAT}\right]\right),\;i=1\ldots L
\;\Rightarrow
\text{Stability}
\end{equation}
where the inner product in Eqs.~(\ref{eq:InnerProducts}$_{2}$)
and~(\ref{eq:newinner})
is used in place of Eq.~(\ref{eq:InnerProducts}$_{1}$).
In verifying condition~(\ref{eq:stab}),
all branches $i=1\ldots L$ must be checked, and within each
branch $[\pmb{\mathcal{H}}^{i}]$, every loading direction
$[d\mathbf{u}/d\boldsymbol{\theta}]$ that is consistent with that
branch must be checked.
The condition (\ref{eq:stab}), however,
is sufficient but not necessary for stability,
since higher-order work terms are not considered in this study.
In the sense of Eq.~(\ref{eq:stab}),
a stable cluster can sustain the current dead
load $[\mathbf{b}/\mathbf{w}]$,
insofar as small disturbances $[\mathbbm{d}\mathbf{b}/\mathbbm{d}\mathbf{w}]$
produce only small displacements.
\par
Conditions for \emph{neutral stability} and \emph{instability} are likewise
given by the criteria
\begin{align}
\label{eq:neutral}
&\text{Neutral stability}\;\Rightarrow
\exists\;\text{n.t.}\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
\text{ consistent with }\left[\begin{MAT}(r){l}\pmb{\mathcal{H}}^{i}\\ \end{MAT}\right], \;
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{\text{T}}
\left[\begin{MAT}(r){l} \pmb{\mathcal{H}}^{i} \\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB = 0 \\
\label{eq:instab}
&\exists\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB
\text{ consistent with }\left[\begin{MAT}(r){l}\pmb{\mathcal{H}}^{i}\\ \end{MAT}\right], \;
\LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB^{\text{T}}\left[\begin{MAT}(r){l} \pmb{\mathcal{H}}^{i} \\ \end{MAT}\right] \LB d\mathbf{u}\\:d\boldsymbol{\theta} \RB < 0
\;\Rightarrow \text{Instability}
\end{align}
(e.g., \citeNP{Bazant:1991a}),
where ``n.t.'' denotes a non-trivial displacement---one that does
not lie in the sub-space of rigid-body motions
(Section~\ref{sec:rigid_rotation}).
As with Eq.~(\ref{eq:stab}),
$[\pmb{\mathcal{H}}^{i}]$ must be consistent with the displacement $[d\mathbf{u}/d\boldsymbol{\theta}]$
that is being tested.
Once unstable, a granular system becomes dynamic and the particles'
inertias influence their subsequent motions, unless, of course,
some of the motions in $[d\mathbf{u}/d\boldsymbol{\theta}]$ are externally constrained.
\par
\emph{Softening} occurs in any loading direction
$[d\mathbf{u}/d\boldsymbol{\theta}]$, perhaps constrained, that produces negative second-order work,
as in Eq.~(\ref{eq:instab}) (e.g., \citeNP{Valanis:1985a}).
\par
The stability conditions in Eqs.~(\ref{eq:stab})--(\ref{eq:instab})
are determined, of course, by the symmetric
part $[\widehat{\pmb{\mathcal{H}}}^{i}]$ of the stiffness
$[\pmb{\mathcal{H}}^{i}]$,
where
$[\widehat{\pmb{\mathcal{H}}}^{i}] = (1/2)([\pmb{\mathcal{H}}^{i}] + [\pmb{\mathcal{H}}^{i}]^{\text{T}})$.
These stability conditions
differ from the uniqueness criterion
in Section~\ref{sec:uniqueness}, since the
latter depends upon the determinant or eigenvalues of the full,
asymmetric stiffness $[\mathbbm{H}^{i}]$
(or of $[\pmb{\mathcal{H}}^{i}]$, since $[\mathbbm{H}^{i}]$ and $[\pmb{\mathcal{H}}^{i}]$
share the same eigenvalues, Eqs.~\ref{eq:proof1}--\ref{eq:proof3}).
Because the smallest real eigenvalue of $[\widehat{\pmb{\mathcal{H}}}^{i}]$ is no greater
than the smallest real eigenvalue of $[\pmb{\mathcal{H}}^{i}]$,
instability does not imply a loss of uniqueness.
On the other hand, the neutral equilibrium of Type~2 non-uniqueness
implies neutral stability, since
$[\mathbbm{H}][d\mathbf{u}/d\boldsymbol{\theta}]=0\;\Rightarrow\;[d\mathbf{u}/d\boldsymbol{\theta}]^{\text{T}}[\pmb{\mathcal{H}}][d\mathbf{u}/d\boldsymbol{\theta}]=0$.
That is, a granular cluster can be unstable and soften before
passing through neutral equilibrium.
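The gap between the two criteria is easy to demonstrate with a small asymmetric matrix (a generic toy, not a particular cluster stiffness): all eigenvalues of the full matrix can be positive while its symmetric part is indefinite:

```python
import numpy as np

# An asymmetric stiffness with strictly positive eigenvalues (so
# uniqueness holds) whose symmetric part is indefinite (so negative
# second-order work, i.e. instability, is possible)
H = np.array([[1., 4.],
              [0., 1.]])
assert np.all(np.linalg.eigvals(H).real > 0)     # eigenvalues {1, 1}

H_sym = 0.5 * (H + H.T)
assert np.min(np.linalg.eigvalsh(H_sym)) < 0     # eigenvalues {3, -1}

du = np.array([1., -1.])                         # a softening direction
assert du @ H @ du < 0                           # negative second-order work
```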
\par
The definitions in Eqs.~(\ref{eq:stab})--(\ref{eq:instab}) suggest
an algorithm for investigating the stability of an isolated
granular cluster.
Each of the $L=2^{M^{\text{s}}}$ branches $[\pmb{\mathcal{H}}^{i}]$,
$i=1\ldots L$, is examined by
finding the eigenvalues of its symmetric part $[\widehat{\pmb{\mathcal{H}}}^{i}]$.
At least six eigenvalues will be zero for every
$[\widehat{\pmb{\mathcal{H}}}^{i}]$, corresponding to its rigid-body modes.
A \emph{sufficient condition for stability} is that all
branches $[\widehat{\pmb{\mathcal{H}}}^{i}]$ have only positive eigenvalues,
except for the six zero-eigenvalues.
A \emph{sufficient condition for neutral stability or instability}
is the presence of a seventh zero-eigenvalue
or a negative eigenvalue, respectively,
provided that the corresponding eigenvector is consistent with the
presumed loading conditions of the branch
(i.e., by applying Eq.~\ref{eq:flow} to each
of the $M^{\text{s}}$ potentially sliding contacts).
If the eigenvector is consistent, then it represents an eigenmode
of neutral stability or of instability, respectively.
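The eigenvalue screening of one branch can be sketched as follows (names and tolerances are ours; the eigenvector consistency check against Eq.~\ref{eq:flow} remains a separate step, which is why the labels are only "possible" classifications):

```python
import numpy as np

def classify_branch(H_sym, n_rigid=6, tol=1e-9):
    """Screen one stiffness branch from the eigenvalues of the
    symmetric part of [Hcal^i].  n_rigid zero eigenvalues are expected
    from the rigid-body modes of an isolated cluster."""
    lam = np.sort(np.linalg.eigvalsh(H_sym))
    n_zero = int(np.sum(np.abs(lam) < tol))
    if lam[0] < -tol:
        return "possible instability"
    if n_zero > n_rigid:
        return "possible neutral stability"
    return "stable"

# Toy spectra: six rigid-body zeros plus positive, extra-zero, or
# negative eigenvalues
assert classify_branch(np.diag([0.]*6 + [1., 2., 3.])) == "stable"
assert classify_branch(np.diag([0.]*7 + [1., 2.])) == "possible neutral stability"
assert classify_branch(np.diag([0.]*6 + [-1., 2., 3.])) == "possible instability"
```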
\par
The sufficient conditions in this algorithm can be readily
applied by examining the eigenvalues and eigenvectors
of all branches $[\pmb{\mathcal{H}}^{i}]$, $i=1\ldots L$.
Implementation details are provided in
Appendix~\ref{app:eigen}.
The algorithm, however, provides a criterion that is over-sufficient
(i.e., not necessary) for instability:
even though all consistent eigenvectors of
a branch $[\widehat{\pmb{\mathcal{H}}}^{i}]$ may have positive eigenvalues,
a \emph{non-consistent} eigenvector having a negative eigenvalue
might be linearly combined with a consistent eigenvector
to produce a consistent motion $[d\mathbf{u}/d\boldsymbol{\theta}]$ that brings about
a negative inner product in Eq.~(\ref{eq:instab}).
Likewise, the algorithm provides conditions that are
over-sufficient for stability:
a negative eigenvalue might exist,
but if its corresponding eigenvector is non-consistent, the
presence of the negative eigenvalue does not imply instability.
\section{Examples}\label{sec:example}
\subsection{Two-particle system}\label{sec:twoparticle}
We consider an isolated cluster of two particles, ``$p$'' and ``$q$'',
and investigate its stability (Fig.~\ref{fig:Example}).
\begin{figure}
\centering
\includegraphics[scale=0.70]{Fig5.eps}
\caption{An example two-particle cluster.}
\label{fig:Example}
\end{figure}
The example system is simplified with the following four restrictions:
\begin{enumerate}
\item
Motions are restricted to the $x_{1}$--$x_{2}$ plane, with the basis
vectors $\mathbf{e}_{1}$ and $\mathbf{e}_{2}$.
\item
The radial vectors $\mathbf{r}^{pq}$ and $\mathbf{r}^{qp}$ are collinear,
such that
$\mathbf{x}^{p}$, $\mathbf{x}^{q}$, and the contact point lie on a common line.
The radii $\mathbf{r}^{pq}$ and $\mathbf{r}^{qp}$ are oriented along the $\mathbf{e}_{1}$ direction.
\item
The contact normal $\mathbf{n}^{pq}$ is aligned with the radii $\mathbf{r}^{pq}$ and $\mathbf{r}^{qp}$.
\item
No body moments are applied ($\mathbf{w}^{p} = \mathbf{w}^{q} = \mathbf{0}$), so that
the current body forces, $\mathbf{b}^{p}$ and $\mathbf{b}^{q}$, are collinear and
self-equilibrating: $\mathbf{b}^{p} = -\mathbf{b}^{q}$.
\end{enumerate}
We also adopt the simple contact model
of Section~\ref{sec:epstiffness}, and neglect any contact moment
resistance ($\mathfrak{d}\mathbf{m}^{pq} = -\mathfrak{d}\mathbf{m}^{qp} = \mathbf{0}$
in Eq.~\ref{eq:constitutiveM}).
Because the contact force $\mathbf{f}^{pq}$ is entirely normal,
the contact stiffness is elastic, as in Eq.~(\ref{eq:ContactF}$_{1}$):
\begin{equation}\label{eq:simple}
\mathfrak{d}\mathbf{f}^{pq} =
k \left[ \alpha\mathbf{I} + (1-\alpha)\mathbf{n}^{pq}\otimes\mathbf{n}^{pq}\right] \cdot \delta\mathbf{u}^{pq\text{, def}}\;,
\end{equation}
where the positive
stiffnesses $k$ and $\alpha k$ are in the normal and tangential directions.
The particles are pressed together with a current
compressive normal force $f$,
and the two particles have the convex radii of curvature
$\rho^{p}$ and $\rho^{q}$ at their contact.
\par
The stiffness $[\mathbf{H}]$ for the two-particle system is derived
in Appendix~\ref{app:example} with the following result:
\begin{align}\label{eq:exampleH}
\left[\begin{MAT}(r){l}\mathbf{H}\\ \end{MAT}\right] &\DUa = \left( \left[\begin{MAT}(r){l}\mathbf{H}^{m} \\ \end{MAT}\right] + \left[\begin{MAT}(r){l}\mathbf{H}^{g} \\ \end{MAT}\right] \right) \DUa \\
\label{eq:exampleH2}
=
&\left(k
\left[ \begin{MAT}(r)[2pt]{ccc:ccc}
1 & 0 & 0 & -1 & 0 & 0 \\
0 & \alpha & \alpha r^p & 0 & -\alpha & \alpha r^q \\
0 & \alpha r^p & \alpha (r^p)^2 & 0 & -\alpha r^p & \alpha r^p r^q \\:
-1 & 0 & 0 & 1 & 0 & 0 \\
0 & -\alpha & -\alpha r^p & 0 & \alpha & -\alpha r^q \\
0 & \alpha r^q & \alpha r^p r^q & 0 & -\alpha r^q & \alpha (r^q)^2 \\
\end{MAT} \right] \right.
\\
&\left. +\frac{f}{\Rhop+\Rhoq}
\left[ \begin{MAT}(r)[3pt]{ccc:ccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & \rho^{p} - r^p & 0 & 1 & \rho^{q} - r^q \\
0 & \rho^{p}-r^p & (\rho^{p}-r^p)(\rho^{q}+r^p) & 0 & r^p-\rho^{p} &
(\rho^{q}-r^q)(r^p - \rho^{p})\\:
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & r^p - \rho^{p} & 0 & -1 & r^q - \rho^{q} \\
0 & \rho^{q}-r^q & (\rho^{q}-r^q)(r^p - \rho^{p}) & 0 & r^q-\rho^{q} &
(\rho^{q}-r^q)(\rho^{p}+r^q)\\
\end{MAT} \right] \right)
\left[\begin{MAT}(r){l} du^p_1\\du^p_2 \\ d\theta^p_3\\: du^q_1\\du^q_2 \\ d\theta^q_3 \\ \end{MAT}\right]
\notag
\end{align}
Rather than give the full $12\times 12$ stiffness matrix for the pair,
we have discarded the $\mathbf{e}_{3}$ translation and the $\mathbf{e}_{1}$ and $\mathbf{e}_{2}$
rotations and have derived the remaining $6 \times 6$ stiffness components.
The rows of matrix $[\mathbf{H}]$ are arranged
to produce the force and moment increments $[d\mathbf{b}/d\mathbf{w}]$ in the following order:
$[db^p_1, db^p_2, dw^p_3, db^q_1, db^q_2, dw^q_3]^{\text{T}}$.
Both the mechanical and geometric stiffnesses are symmetric,
since
the mechanical stiffness is entirely elastic, and the
contact force lacks a tangential component.
The relative importance of the geometric and mechanical stiffnesses
is seen to depend upon the force-to-stiffness ratio $f/k$.
Moreover, if the two particles fit together like hand-in-glove,
with $\rho^{p}\approx -\rho^{q}$,
the quotient $f/(\rho^{p} + \rho^{q})$ is large, and the
geometric stiffness will dominate.
\par
Stability is investigated by finding the six eigenvalues
$\lambda^{(j)}$ of the matrix
$[\mathbbm{H}]=[\mathbf{H}][\mathbf{P}^{\text{n--r--r}}]$,
where the projection $[\mathbf{P}^{\text{n--r--r}}]$ is computed from
the rotation vector
$[\mathbf{C}]$ given
in Eq.~(\ref{eq:exampleC}) of Appendix~\ref{app:example}.
General expressions for some eigenvectors are too lengthy to
present here, but we make the following observations:
\begin{enumerate}
\item
Three eigenvalues are zero, corresponding to two
rigid translations and a rigid rotation (the eigenvectors
$\boldsymbol{\nu}^{(1)}$, $\boldsymbol{\nu}^{(2)}$,
and $\boldsymbol{\nu}^{(3)}$ in Fig.~\ref{fig:Modes}a).
\item
A fourth eigenvalue is a positive $\lambda^{(4)}=2k$,
corresponding to the mode of normal contact indentation
($\boldsymbol{\nu}^{(4)}=[1/\sqrt{2}, 0, 0, -1/\sqrt{2}, 0, 0]^{\text{T}}$).
\item
Another positive eigenvalue corresponds to a tangential shearing at the
contact (mode $\boldsymbol{\nu}^{(5)}$ in Fig.~\ref{fig:Modes}a).
\item
A sixth eigenvalue $\lambda^{(6)}$ can be positive, zero,
or negative depending on the radii and curvatures of the particles,
the two contact stiffnesses $k$ and $\alpha k$, and the force $f$.
\end{enumerate}
\begin{figure}
\centering
\parbox[b]{3.4in}{\centering%
\includegraphics[scale=0.90]{Fig6a.eps}\\
(a) Displacement modes}
\quad\quad\quad\quad
\parbox[b]{1.1in}{\centering%
\includegraphics[scale=0.90]{Fig6b.eps}\\
(b) {\small Unstable} \\[0.20in]
\includegraphics[scale=0.90]{Fig6c.eps}\\
(c) {\small Stable}}
\caption{Displacement modes and stability of two-particle systems.}
\label{fig:Modes}
\end{figure}
The sixth mode $\boldsymbol{\nu}^{(6)}$ is
the most interesting and corresponds to a rolling of the
particles at their contact (Fig.~\ref{fig:Modes}a).
This mode can be investigated by restricting the two particles to
the same size and shape, with $r^{p}=r^{q}$ and $\rho^{p}=\rho^{q}$
at their contact.
Figure~\ref{fig:Contours} is a contour plot of
the sixth eigenvalue
$\lambda^{(6)}$ for various combinations of curvature $\rho$
and compressive force $f$.
\begin{figure}
\centering
\includegraphics[scale=0.90]{Fig7.eps}
\caption{Contour plot of the eigenvalue $\lambda^{(6)}$ when
$r^p=r^q$, $\rho^p=\rho^q$, and $\alpha=1$.}
\label{fig:Contours}
\end{figure}
The dimensionless curvature $\rho/r$ ranges from
shapes that are relatively ``sharp'' at their contact
($\rho/r < 1$, Fig.~\ref{fig:Modes}b)
to shapes that are ``flat'' there ($\rho/r > 1$, Fig.~\ref{fig:Modes}c).
In the contour plot, we present a range of dimensionless force
$f/(kr)$ that is fairly narrow, from $-0.005$ to $0.02$.
The positive, compressive values are of a range typical
for hard particles; whereas, the negative values could
occur in dry powders when electrostatic and van der Waals
attractions are active.
As expected, sharp contacts are unstable ($\lambda^{(6)}<0$)
and flat contacts are stable ($\lambda^{(6)}>0$)
for any compressive force $f>0$.
This result, although limited to a simple two-particle system,
is consistent with the widely observed
tendency of granular materials toward stress-induced anisotropy,
in which contacts become predominantly flat-to-flat in the direction of
compressive loading \cite{Rothenburg:1993a}.
In regard to uniqueness,
Type~2 neutral equilibrium
occurs under conditions that produce $\lambda^{(6)}=0$:
either with circular disks ($\rho/r=1$) or with zero-force,
grazing contacts ($f=0$).
\par
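The sign pattern of the rolling-mode stiffness can be probed directly with the matrix of Eq.~(\ref{eq:exampleH2}). In the symmetric case used here ($r^p=r^q$, $\rho^p=\rho^q$, $\alpha=1$), the counter-rotation vector $[0,0,1,0,0,-1]^{\text{T}}$ turns out numerically to be an exact eigenvector of $[\mathbf{H}]$ itself, and the signs reproduce the sharp/flat and compression/zero-force trends described above; this is a sketch, not a substitute for the full $[\mathbf{H}][\mathbf{P}]$ analysis.

```python
# Sign of the rolling (counter-rotation) mode stiffness for two equal
# particles, built from Eq. (exampleH2); f_over_kr and rho_over_r match the
# dimensionless axes of the contour plot.
import numpy as np

def pair_stiffness(k, alpha, rp, rq, f, rhop, rhoq):
    a = alpha
    Hm = k * np.array([
        [ 1,  0,      0,        -1,  0,      0       ],
        [ 0,  a,      a*rp,      0, -a,      a*rq    ],
        [ 0,  a*rp,   a*rp**2,   0, -a*rp,   a*rp*rq ],
        [-1,  0,      0,         1,  0,      0       ],
        [ 0, -a,     -a*rp,      0,  a,     -a*rq    ],
        [ 0,  a*rq,   a*rp*rq,   0, -a*rq,   a*rq**2 ]])
    g = f / (rhop + rhoq)
    Hg = g * np.array([
        [0,  0,        0,                   0,  0,        0                  ],
        [0, -1,        rhop - rp,           0,  1,        rhoq - rq          ],
        [0,  rhop-rp, (rhop-rp)*(rhoq+rp),  0,  rp-rhop, (rhoq-rq)*(rp-rhop) ],
        [0,  0,        0,                   0,  0,        0                  ],
        [0,  1,        rp - rhop,           0, -1,        rq - rhoq          ],
        [0,  rhoq-rq, (rhoq-rq)*(rp-rhop),  0,  rq-rhoq, (rhoq-rq)*(rhop+rq) ]])
    return Hm + Hg

def rolling_mode_eigenvalue(rho_over_r, f_over_kr, k=1.0, r=1.0, alpha=1.0):
    """Eigenvalue of [H] for the counter-rotation mode of two equal particles."""
    H = pair_stiffness(k, alpha, r, r, f_over_kr * k * r,
                       rho_over_r * r, rho_over_r * r)
    nu = np.array([0, 0, 1, 0, 0, -1.0]) / np.sqrt(2)
    lam = nu @ H @ nu
    assert np.allclose(H @ nu, lam * nu)  # nu is an exact eigenvector here
    return lam

assert rolling_mode_eigenvalue(0.5, 0.02) < 0            # sharp + compression: unstable
assert rolling_mode_eigenvalue(2.0, 0.02) > 0            # flat + compression: stable
assert abs(rolling_mode_eigenvalue(1.0, 0.02)) < 1e-12   # circular disks: neutral
assert abs(rolling_mode_eigenvalue(2.0, 0.0)) < 1e-12    # zero-force contact: neutral
```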
When two \emph{circular} disks are pressed together, they are in
neutral equilibrium and neutral stability, with $\lambda^{(6)}=0$.
\begin{figure}
\centering
\includegraphics[scale=0.90]{Fig8.eps}
\caption{A gear-like bifurcation mode in a regular packing when the
rolling stiffness $\mathbf{M}^{pq}=\mathbf{0}$.}
\label{fig:gears}
\end{figure}
For example, a bifurcation of motions is readily available
to the system in Fig.~\ref{fig:gears}: a synchronized,
gear-like turning
of the disks can be superposed onto any other solution.
This bifurcation would, of course, be inhibited by any genuine rotational
stiffness at the contact, demonstrating that the possible bifurcation mode in
Fig.~\ref{fig:gears} is simply a consequence of the constitutive choice
$\mathbf{M}^{pq}=\mathbf{0}$ in Eq.~(\ref{eq:constitutiveM}).
\subsection{Four-disk system}\label{sec:fourdisks}
We now analyze an isolated cluster of four equal-size
disks having four contacts (Fig.~\ref{fig:fourdisk}a), noting that
this cluster might represent the repeating unit of a regular
2D assembly (Fig.~\ref{fig:fourdisk}b).
\begin{figure}
\centering
\mbox{%
\subfigure[]{\raggedright\includegraphics[scale=0.90]{Fig9a.eps}}\ \quad%
\subfigure[]{%
\parbox[b]{1.6in}{%
\includegraphics[scale=0.60]{Fig9b.eps}%
\\[0.3in]\rule{0in}{0.01in}}\quad%
}
\subfigure[]{\includegraphics[scale=0.90]{Fig9c.eps}}
}
\caption{Four-disk example.}
\label{fig:fourdisk}
\end{figure}
We assume that
the four disks have been compressed vertically
while they have expanded horizontally,
so that current opposing pairs of vertical and
horizontal external forces, $b^{\text{v}}$ and
$b^{\text{h}}$, produce a frictional sliding
at all four contacts (Fig.~\ref{fig:fourdisk}a).
The system would soften
under these loading conditions, as shown by plotting
the force ratio $b^{\text{v}} / b^{\text{h}}$ against the angle $\beta$
(Fig.~\ref{fig:fourdisk}c).
We examine the system at a given angle $\beta$ to determine
the eigenmodes of further (incremental) deformation.
Since all four contacts are known to be sliding
at angle $\beta$ ($M^{\text{s}}=4$), the subsequent motions present
$L=2^{4}=16$ possible combinations
(i.e. branches) of contact loading or unloading
(sliding or elastic sticking).
Each combination is a separate, $i$th, branch of the cluster
stiffness $\mathbf{H}([d\mathbf{u}/d\boldsymbol{\theta}])$.
We must construct the mechanical stiffness
$[\mathbf{H}^{\text{m, }i}]$ for each branch
and then add it to the shared geometric stiffness $[\mathbf{H}^{\text{g}} ]$,
which will be the same for all branches.
The sixteen combined stiffnesses $[\mathbf{H}^{i}]$
are $12\times 12$,
since every 2D particle has three degrees of freedom.
With each loading-unloading combination, we find the twelve
eigenvalues and eigenvectors
of its matrices $[\mathbbm{H}^{i}]$ and $[\widehat{\pmb{\mathcal{H}}}^{i}]$
and then determine which of the
eigenvectors are consistent with the presumed combination
of loading and unloading for this branch
(Sections~\ref{sec:uniqueness} and~\ref{sec:softening}).
Whether an eigenvector produces a consistent
loading-unloading combination is determined by applying
Eq.~(\ref{eq:flow}) to each of the four contacts.
Appendix~\ref{app:eigen} describes a search algorithm.
\par
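The branch search just described can be sketched as follows. The stiffness matrices below are random symmetric stand-ins, not the paper's four-disk assembly, and the consistency test is a placeholder for the per-contact check of Eq.~(\ref{eq:flow}); only the enumerate-eigendecompose-filter pattern is illustrated.

```python
# Schematic branch search: enumerate the L = 2^Ms sliding/sticking
# combinations, eigendecompose each branch stiffness, and keep eigenvectors
# consistent with the presumed combination. All matrices here are random
# symmetric STAND-INS for [H^{m,i}] and [H^g]; `consistent` is a placeholder.
import itertools
import numpy as np

Ms, ndof = 4, 12                    # four sliding contacts; 4 disks x 3 DOF
sym = lambda a: (a + a.T) / 2
Hg = sym(np.random.default_rng(0).standard_normal((ndof, ndof)))  # shared geometric part

branches = list(itertools.product(("slide", "stick"), repeat=Ms))
assert len(branches) == 2 ** Ms == 16

def mechanical_stiffness(i):
    """Stand-in for [H^{m,i}] of branch i (the real matrices come from the
    contact assembly in the paper's appendix)."""
    return sym(np.random.default_rng(100 + i).standard_normal((ndof, ndof)))

def consistent(branch, vec):
    """Placeholder for the loading/unloading check of Eq. (flow) per contact."""
    return True

admissible = []
for i, branch in enumerate(branches):
    H = mechanical_stiffness(i) + Hg
    eigvals, eigvecs = np.linalg.eigh(H)
    for lam, v in zip(eigvals, eigvecs.T):
        if consistent(branch, v):
            admissible.append((branch, lam))
```

With the real matrices and consistency test, only 30 non-zero eigenvalues survive the filter, as reported below.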
Numerical results were developed for the following conditions:
equal normal and tangential contact stiffnesses ($\alpha = 1$),
compressive contact forces that are much smaller than the contact
stiffness ($f/k = 1/1000$),
a friction coefficient $\mu=0.5$, and a particle orientation
$\beta=45^{\circ}$.
We assume that all four contacts are currently sliding
($Q=0$ in Eq.~\ref{eq:yield}),
but allow the possibility that all (or some) contacts cease
slipping during the subsequent motion $[d\mathbf{u}/d\boldsymbol{\theta}]$.
\par
The results show that each of the sixteen stiffness branches
$[\widehat{\pmb{\mathcal{H}}}^{i}]$ has four zero-eigenvalues:
three of these eigenvalues correspond to rigid-body motions;
the fourth corresponds to a gear-like rolling mode, such
as that depicted
in Fig.~\ref{fig:gears}.
Regardless of the branch that is active in a
loading increment $[\mathbbm{d}\mathbf{b}/\mathbbm{d}\mathbf{w}]$,
the system has no better than neutral stability
(Eq.~\ref{eq:neutral}), since the gear-like mode
presents a zero-work increment that can be superposed
on any solution.
The sixteen branches $[\widehat{\pmb{\mathcal{H}}}^{i}]$
possess a total of 30 non-zero eigenvalues whose
eigenvectors are consistent with the loading-unloading combination of
their respective branches
(Section~\ref{sec:stability} and Appendix~\ref{app:eigen}).
Twenty-one of these eigenvalues are positive; nine are negative.
The presence of multiple negative eigenvalues indicates
that the cluster is unstable: small changes in the external
forces $b^{\text{v}}$ and $b^{\text{h}}$ can produce large
displacements and a loss of the cluster's capacity to support a
sustained, dead load.
The negative eigenvalues also indicate that even if the displacements
can be controlled, the system will soften along numerous load paths,
such as the one shown in Fig.~\ref{fig:fourdisk}c.
\par
The cluster's instability and its potential for softening have two sources.
Frictional contact sliding is inherently unstable
and can produce softening by means of the cluster's mechanical
stiffness $[\mathbf{H}^{\text{m}} ]$.
The mechanical stiffness is a collection of contact
stiffnesses, and the symmetric part of the
frictional contact stiffness
$[\widehat{\mathbf{F}}^{pq}]$ in Eq.~(\ref{eq:ContactF}$_{2}$)
has a negative eigenvalue of $(1-\sqrt{1+\mu^{2}})/2$.
Ba{\v{z}}ant and Cedolin (1991, \S10.7) show that negative
second-order work is produced in a single-body frictional system
through the release of frictionally
blocked elastic energy, even though the system
is otherwise stable when the displacements are controlled.
We suspect that the softening observed in many granular
materials is due, in part, to this mechanical origin.
Instability and softening can also originate from the geometric
stiffness $[\mathbf{H}^{\text{g}}]$.
This origin is illustrated in Fig.~\ref{fig:fourdisk}c,
which shows the softening that ensues when the particles
do not rotate and sliding continues on all four contacts.
During such vertical compression, the magnitudes
of the normal and tangential forces can be maintained constant
(i.e. constant $f$ and $\mu f$ forces in Fig.~\ref{fig:fourdisk}a).
No frictionally blocked elastic energy is released during the softening
shown in Fig~\ref{fig:fourdisk}c.
All of this softening has a geometric origin.
\par
The two examples reveal the importance of including the geometric
stiffness $[\mathbf{H}^{\text{g}}]$ when evaluating stability.
In both examples, instability and softening are attributed
to the influence of $[\mathbf{H}^{\text{g}}]$.
\par
The two examples are readily amenable to analytical or computational
analysis, since the two systems have few particles and only a
few sliding contacts---the number of branches,
$L=2^{M^{\text{s}}}$, is one in the first example and sixteen in the second.
Similar eigenvalue analyses may be impossible for entire
systems of thousands of particles, although the methods in the
examples can be readily applied to clusters within larger systems.
\section{Discussion and Conclusion}
This work provides a conceptual framework for including the
influence of particle shape on granular stiffness and for
evaluating the potential for instability and softening.
This approach may be productive in investigating
granular behavior, particularly at large strains.
We foresee three applications:
(1) as a way of improving current numerical simulation methods
for granular assemblies,
(2) as an approach toward understanding granular failure
and localization, and
(3) as a means of analyzing and post-processing
simulation results for understanding granular behavior.
In regard to the first application,
GEM and DDA simulation methods currently use a
similar direct stiffness approach to simulate the
interactions of particles in a granular assembly,
and these methods could benefit from the full inclusion
of all stiffness terms of order $(du)^{1}$---terms
of both mechanical and geometric origin.
\par
With respect to the second application,
the formulations show that material stiffness depends upon the
contact stiffnesses and on a complex interaction of the contact
forces and particle shapes.
The influence of contact stiffness is embodied
in a mechanical stiffness $[ \mathbf{H}^{\text{m}} ]$,
and the effects of contact force and particle shape
are gathered into a geometric stiffness $[\mathbf{H}^{\text{g}}]$.
The latter stiffness likely has negligible influence at small strains,
but its effect may become substantial, perhaps dominant, during failure:
at large strains, the rotation and rolling among nearly rigid particles
become prevalent kinematic mechanisms---conditions in which
the geometric stiffness is most active.
Moreover, the bulk stiffness of
granular materials is small or even negative during failure,
and the otherwise small geometric stiffness likely becomes a relatively
larger contributor during failure.
Because the geometric stiffness is proportional to the current,
accumulated contact forces, our approach might also explain
why many aspects of granular failure are
influenced by the confining pressure.
The confining pressure is known to influence the strain at peak stress,
the friction angle at the peak stress,
the dilation rate at the peak stress,
the strain at which shear bands begin to appear,
the orientation and thickness of shear bands,
and the rate of softening at post-peak strains
\cite{Lee:1967a,Desrues:2004a}.
A comprehensive micro-mechanical explanation is
currently lacking for such observed behaviors,
and these phenomena should be examined in the context of the current work.
The work may also provide a basis for investigating
local stiffness, stability, and softening within granular regions,
perhaps within small representative elements of material.
For example, the shear bands that appear during failure
are thought to be an ongoing instability in which particle
chains continually buckle and then reorganize while a specimen
is being loaded \shortcite{Oda:1998b,Mair:2002a}.
Just as material behavior at small strains has been
successfully estimated by using simple micro-mechanical models,
the current approach might be useful in investigating material
behavior and instability within shear bands at larger strains.
\par
A third application is in post-processing the results of
DEM simulations to explore local behavior.
Unlike the GEM and DDA methods,
the DEM does not use a direct stiffness approach,
but instead uses an efficient dynamic relaxation algorithm to track
the interactions of particles while an assembly is being deformed
\cite{Cundall:1979a}.
Methods have already been proposed for
extracting the spatial distributions
of stress and strain from DEM results
\cite{Bagi:1996a,Satake:2004a}.
The current work provides a means of quantifying
local stiffness within granular materials,
so that questions of instability and softening can be studied
through DEM simulations:
the simulations would provide the state of a granular assembly;
whereas, the current methods could be used to explore the
stiffness characteristics in that state.
\par
Finally, we note that most existing simulation
methods---GEM, DDA, and DEM---are meant to solve
large boundary value problems that involve
a discrete, granular region,
and the success of a simulation is often
judged by the numerical stability of its algorithm.
These methods can provide a solution,
but without determining whether non-unique,
multiple solutions are possible at any stage of loading.
The proposed stability and uniqueness criteria provide a
framework for investigating the
stability and possible bifurcation of solutions during loading.
\section*{Acknowledgement}
Katalin Bagi assisted in the current work through her insightful
discussions.
She presents a parallel derivation of matrix $[\mathbf{H}]$
that complements the current work \cite{Bagi:2005a}.
\pagebreak
\section{Introduction}\label{intro}
Carbon dioxide (CO$_2$) is a greenhouse gas that contributes greatly to global warming. As the use of carbon-based fuel is a primary source of energy, it is desirable to develop technologies for the efficient capture and sequestration of CO$_2$ produced from such sources. Significant efforts have been devoted to studying the adsorption of CO$_2$ on different materials, including complicated structures such as covalent organic frameworks \cite{Zeng2016, Lan2010} and metal organic frameworks \cite{Zhang2016, Saha2010}. In this respect, CO$_2$ adsorption on boron clusters and surfaces offers an interesting alternative \cite{Sun2014PCCP, Sun2014JPCC} that deserves further investigation.
Boron, like its neighboring element carbon, possesses a remarkable variety of structures that could be of use in a wide range of applications \cite{Zhang2012, Endo2001, Carter2014}. Bulk boron polymorphs are mainly composed of 3D-icosahedral B$_{12}$ cage structures as basic building blocks \cite{Bullett1982, Perkins1996}, while small boron clusters prefer planar-type aromatic/antiaromatic structures \cite{Zhai2003, Sergeeva2014}.
In fact, neutral and charged clusters B$_{n}^{(+,-)}$, with ${n \leq 15}$, have been predicted theoretically \cite{Boustani1997, Ricca1996, Kiran2005,Tai2010}, and confirmed experimentally (or by combined experimental and theoretical studies) \cite{Zhai2003, Oger2007, Tai2010, Romanescu2012}, to be planar or quasiplanar. For ${{n} > 15}$, competing low-energy isomers start to occur, in particular for the positively charged clusters B$_{16}^+$ to B$_{25}^+$ which were reported to have ring-type structures, based on mobility measurements \cite{Oger2007}. On the other hand, the negatively charged B$_{n}^-$ clusters have been shown to systematically conserve planar-like structures up to at least ${{n}=25}$ by joint photoelectron spectroscopy and quantum chemistry calculations \cite{SerAveZha11,PiaLiRom12, PopPiaLi13,PiaPopLi14}. Moreover, the neutral B$_{16}$ and B$_{17}$ clusters are found to display planar-type geometries based on vibrational spectroscopy studies \cite{Romanescu2012}; in this case, the smallest 3D-like (tubular) structure was suggested to occur for B$_{20}$ \cite{Kiran2005PNAS}. Recently, B$_{27}^-$, B$_{30}^-$, B$_{35}$, B$_{35}^-$, B$_{36}$ and B$_{36}^-$ clusters have been discovered to possess quasiplanar geometries through combined experimental and theoretical studies \cite{PiaHiLi14, Li2014JACS, Li2014Ange, Li2015JCP}, while the B$_{40}$ cluster has been observed to occur with a fullerene structure \cite{Zhai2014}. Such quasiplanar clusters can be viewed as embryos for the formation of 2D boron sheets (borophenes) \cite{PiaHiLi14, Li2014JACS}. Several borophene polymorphs and boron nanotubes have been theoretically predicted \cite{Yang2008, Quandt2005, XWu2012, Weir1992} and also experimentally grown \cite{Ciuparu2004, Weir1992, Patel2015, Mannix2015}.
Previous computational studies have revealed an interestingly strong CO$_2$ adsorption behavior on some theoretical models of surfaces of solid $\alpha$-B$_{12}$ and $\gamma$-B$_{28}$ \cite{Sun2014PCCP} and relatively strong CO$_2$ binding energies on B$_{40}$ and B$_{80}$ fullerenes \cite{Dong2015, Gao2015, Sun2014JPCC}. For the most common boron planar type of clusters, as well as for 2D-boron sheets, however, chemical binding of CO$_2$ was theoretically predicted so far only in the case of chemically engineered systems, namely for charged transition metal (TM)-atom centered boron-ring clusters, TM\textendash B$_{8-9}^-$ \cite{Wang2015}, and for Ca-, Sc- coated boron sheets \cite{Tai2013}.
In the current work, we show the existence of strong chemical binding of the CO$_2$ molecule to the aromatic/antiaromatic planar-type B$_{n}$ clusters (${n}=10~\textrm{to}~13$). By means of first-principle calculations and by varying the CO$_2$ initial position, we identify various chemisorbed and physisorbed configurations. We find that the strong chemisorption occurs for all four clusters when the adsorbed CO$_2$ molecule is in the plane of the cluster, close to its edge, and that the strongest adsorption energy reaches 1.6~eV in the case of B$_{12}$. For B$_{11}$ and B$_{13}$ adsorption with dissociated CO$_2$ is also found to occur at some edge sites. We rationalize the mechanism of the strong adsorption as due to the strong and matching planar character of frontier orbitals of both the cluster and bent CO$_2$ molecule, together with the favorable redistribution of electronic charge in excess at the edges of the cluster, in the presence of the dipole moment of the bent CO$_2$.
\section{Methodology and systems}\label{method}
\subsection{Computational details}
All calculations were carried out using the first-principles plane-wave pseudopotential density functional theory (DFT) method, as implemented in the Quantum ESPRESSO package \cite{Giannozzi2009}. The spin-polarized Perdew-Burke-Ernzerhof (PBE) \cite{Perdew1996} exchange-correlation functional within the generalized gradient approximation (GGA) was employed. We used scalar-relativistic Vanderbilt ultrasoft pseudopotentials \cite{Vanderbilt1990} generated from the following atomic configurations: $2s^{2}2p^{1}$ for B, $2s^{2}2p^{2}$ for C and $2s^{2}2p^{4}$ for O. A non-linear core correction was included in the B pseudopotential. We employed a cubic supercell with sides of 21~\AA\ for all calculations to avoid cluster interactions. A 1~$\times$~1~$\times$~1 Monkhorst-Pack \textbf{k}-point mesh was used with a Gaussian level smearing of 0.001 Ry. The threshold for electronic convergence was set to 10$^{-7}$~Ry, and structures were optimized until the forces on each atom were below 10$^{-4}$~Ry/a.u.
The CO$_{2}$ adsorption energy ($E_\textrm{ads}$) on the B clusters was computed as \cite{Sun2014JPCC}:
\begin{equation} \label{eq:E_ads}
E_\textrm{ads}=E_{\textrm{B}_{n}-\textrm{CO$_{2}$}}-{E}_{\textrm{B}_{n}}-{E}_\textrm{CO$_{2}$},
\end{equation}
\noindent where $E_{\textrm{B}_n-\textrm{CO}_2}$ is the total energy of the atomically relaxed system consisting of the B$_{n}$ cluster and adsorbed CO$_{2}$ molecule, $E_{\textrm{B}_n}$ is the total energy of the isolated (relaxed) B$_{n}$ cluster, and $E_{\textrm{CO}_{2}}$ is the total energy of the CO$_2$ molecule in the gas phase. Convergence tests for the plane-wave expansion of the electronic orbitals indicated that changing the kinetic energy cut-off from 64~Ry to 96~Ry resulted in $E_\textrm{ads}$ changes within 1~meV. We used the former wave-function cut-off, together with a 384-Ry cut-off for the augmentation charge density, in all calculations reported here.
\subsection{Geometry and relative stability of the B$_\textrm{10-13}$ clusters}
The initial boron-cluster structural configurations were constructed based on previous work \cite{Tai2010} that catalogued the stable structures of B$_{n}$ clusters (for ${n \leq 13}$). We performed structural optimization, resulting in the lowest-energy cluster geometries and bond lengths shown in Fig.~\ref{fig:bondlengths}, which are consistent with the results in Ref.~\cite{Tai2010}. It can be seen that the B$_{10}$ and B$_{12}$ clusters exhibit quasiplanar structures, while the B$_{11}$ and B$_{13}$ clusters have planar structural geometries. Moreover, the B$_{12}$ and B$_{13}$ clusters are characterized by three inner atoms that are compactly bound, forming an inner triangle. The longest B\textendash B bonds of $\geq$1.8~\AA\ existing in these clusters belong to the B$_{11}$ and B$_{13}$ clusters, and form a square configuration within the cluster (see Fig.~\ref{fig:bondlengths}). Among the B$_n$ clusters studied, B$_{12}$ is the energetically most stable, with a binding energy of 5.37~eV/atom (calculated binding energies are given in Supplementary material, Part I: Table~S1).
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{bondlengths.png}
\caption{Obtained optimized structures of B$_{n}$ clusters (${n}=10-13$). Specific B\textendash B bond lengths, in \AA, are also indicated for each cluster. Insets show the side view of the cluster, demonstrating that B$_{11}$ and B$_{13}$ clusters exhibit planar structures, while B$_{10}$ and B$_{12}$ are quasiplanar with some of the atoms displaced by 0.31 and 0.34~\AA\ from the cluster plane for B$_{10}$ and B$_{12}$ clusters, respectively.}
\label{fig:bondlengths}
\end{figure}
\section{Results and discussions}\label{results}
\subsection{Chemisorption of CO$_2$ on B$_{n}$ clusters}
We considered different initial configurations of CO$_2$ relative to the B$_{n}$ clusters, including various adsorption sites and orientations of the molecule. We found strong chemisorption of the CO$_2$ molecule along the contour edges of the B$_{n}$ clusters. In addition, physisorption states of the CO$_2$ molecule were observed at larger distances from the cluster or when the molecule was placed on top of the B-cluster plane. The strong binding of CO$_2$, with adsorption energies between $-1.6$ and $-1$~eV, is a common feature of all four B$_{n}$ clusters.
Figure~\ref{fig:init_final} shows the obtained optimized configurations of the B$_{n}$\textendash CO$_{2}$ systems characterized by the strongest adsorption energy ($E_\textrm{ads}$, shown in Table~\ref{tab:E_ads}), together with their corresponding initial configurations, shown as insets. The strongest adsorption overall was found for the B$_{12}$\textendash CO$_{2}$ system with a chemisorption energy of $-1.60$~eV, followed by close values of about $-1.4$~eV for B$_{11}$ and B$_{13}$, and somewhat weaker, but still robust, chemisorption on B$_{10}$. Besides similarly strong CO$_{2}$ adsorption energies, all four B$_{n}$\textendash CO$_{2}$ systems share common features regarding the adsorption geometry. Thus, for all clusters, chemisorption occurs when the CO$_2$ molecule is initially placed near the edge sites and in-plane with respect to the B$_{n}$ cluster (see the insets in Fig.~\ref{fig:init_final}). Furthermore, the final configurations indicate that chemisorbed B$_{n}$\textendash CO$_2$ systems tend to keep a planar geometry. As it chemisorbs, the CO$_2$ molecule bends by a similar angle of $\sim$122\textdegree~for all B clusters considered. It should be noted that this angle corresponds to the equilibrium geometry predicted theoretically for the negatively charged CO$_2$ molecule \cite{GutBarCom98}. Following the formation of a C\textendash B and O\textendash B bond (with lengths of $\sim$1.6 and $\sim$1.4~\AA, respectively), the O\textendash C bond lengths of the molecule (initially 1.18~\AA) also elongate asymmetrically to $\sim$1.2~\AA\ (the O\textendash C bond further away from the cluster) and to $\sim$1.5~\AA\ (for the O\textendash C bond that is linked to the B cluster). Distances between B atoms at which O and C atoms are bound (denoted B$^{(1)}$ and B$^{(2)}$ respectively, in Fig.~\ref{fig:init_final}, with the binding O denoted O$^{(1)}$) increase by $0.3~-~0.7$~\AA\ with respect to their bond lengths in isolated clusters.
Other edge chemisorption sites were also found for all four clusters (with $E_\textrm{ads}$~$<$~$-1.10$~eV).
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{init_final.png}
\caption{Obtained optimized structures of CO$_2$ with B$_{n}$ clusters for the strongest adsorption, where B, C and O atoms are shown in grey, yellow and red, respectively. The distances between the cluster and molecule are given in angstroms. Insets represent initial positions prior to the interaction of the CO$_2$ molecule with B clusters, with the molecule placed in the cluster plane at less than 2~\AA\ distance from the cluster. Boron bonds shorter than 2~\AA\ are represented by rods.}
\label{fig:init_final}
\end{figure}
\begin{table}[!h]
\centering
\caption{Strongest adsorption energies (in eV) obtained for the relaxed configurations with adsorbed CO$_2$ molecule on the B$_{n}$, ${n}=10-13$, clusters and with dissociated molecule (CO + O) in the cases of B$_{11}$ and B$_{13}$ (second line). The adsorption energies correspond to the final configurations shown in Figs. \ref{fig:init_final} and \ref{fig:dissoc}. The adsorption energy of the dissociated CO$_2$, $E_\textrm{ads}^\textrm{dissociated}$, was obtained using Eq.~\ref{eq:E_ads}.}
\begin{tabular}{l c c c c}
\hline
& \textbf{B$_\textrm{10} $} & \textbf{B$_\textrm{11} $} & \textbf{B$_\textrm{12} $} & \textbf{B$_\textrm{13} $}\\
\hline
$E_\textrm{ads}$ (eV) & $-1.11$ & $-1.42$ & $-1.60$ & $-1.43$\\
\hline
$E_\textrm{ads}^\textrm{dissociated}$ (eV) & --- & $-2.19$ & --- & $-1.66$\\
\hline
\end{tabular}
\label{tab:E_ads}
\end{table}
Dissociation of CO$_2$ was also observed for the B$_{11}$ and B$_{13}$ clusters at some specific B sites, wherein some of the B\textendash B bonds broke in order for the dissociated O and C\textendash O fragments to bind to the (deformed) cluster, as shown in Fig.~\ref{fig:dissoc}. For the B$_{11}$ and B$_{13}$ clusters with dissociated CO$_2$, the chemisorption energies ($E_\textrm{ads}^\textrm{dissociated}$) are $-2.19$~eV and $-1.66$~eV, respectively.
We also found physisorbed CO$_2$ configurations, with physisorption energies ranging from $-11$ to $-30$~meV for distances between 3.5 and 4~\AA~from the B$_{n}$ cluster (measured as the smallest interatomic separation). The physisorption configurations include the CO$_2$ molecule placed above the cluster, or placed in the cluster plane with the C atom farther from the cluster and the O atoms in or out of the cluster plane (as shown in Fig.~\ref{fig:physi_correct} for the case of B$_{12}$). An example describing the in-plane physisorption and chemisorption states of CO$_2$ on the B$_{12}$ cluster is given in Part II of the Supplementary material.
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{dissoc.png}
\caption{Obtained optimized structures of the CO$_2$ molecule adsorbing on (a) B$_{11}$ and (b) B$_{13}$ clusters where dissociation of the molecule occurs. Insets show the initial position prior to the interaction with the molecule placed in the cluster plane at a distance of less than 2~\AA\ from the cluster.}
\label{fig:dissoc}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics[width=0.4\linewidth]{physi_correct.png}
\caption{Representative image of a typical physisorption state of CO$_2$ molecule on B$_{12}$ cluster obtained when the molecule is initially placed near an edge atom of the cluster, and rotated 90\textdegree ~out of the cluster plane. The CO$_2$ molecule maintains its linear structure as it moves away from the cluster.}
\label{fig:physi_correct}
\end{figure}
The binding energies we have found here for the chemisorbed CO$_2$ molecule on the neutral, metal-free planar-type clusters (in the range 1.1\textendash 1.6~eV for B$_{10-13}$) are significantly larger than previously obtained for 3D-type cluster structures ($\sim$0.4~eV for B$_{40}$ and $\sim$0.8~eV for B$_{80}$ \cite{Sun2014JPCC, Dong2015, Gao2015}). To the best of our knowledge, this is the first study that provides evidence of the strong chemical binding of the CO$_2$ molecule to planar-type B clusters, although good adsorption was theoretically found for a few diatomic molecules on selected B clusters \cite{ValFarTab15, SunWanLi13, SloKanPan10}. The CO$_2$ binding energies to B$_{11-13}$ we obtained are also larger than those reported for the chemically engineered TM\textendash B$_\textrm{8-9}^-$ clusters and for the metallated/charged fullerenes ($1.1-1.2$~eV) \cite{Wang2015,Dong2015, Gao2015}. We note that previous studies have indicated that negatively charging the boron fullerenes or the engineered TM\textendash B$_\textrm{8-9}$ clusters tends to enhance the adsorption of CO$_2$ \cite{Gao2015, Wang2015}, which suggests that even stronger adsorption could be obtained for B$_{n}^-$ planar clusters.
Furthermore, we expect the strong bonding character obtained here for CO$_2$ to B$_\textrm{10-13}$ to persist for the larger planar-type B clusters. In fact, we have examined the binding properties of CO$_2$ to a semi-infinite boron $\alpha$-sheet \cite{TanIsm07,note_sheet} and also found chemisorption with ${E_\textrm{ads}\approx-0.3}~\textrm{eV}$ and a similar type of CO$_2$ adsorption geometry (including the $\sim$122\textdegree~O$^{(1)}$\textendash C\textendash O bond angle) at the edge of the boron sheet \cite{note_sheet}. The latter may be viewed as the edge of a planar B$_{N}$ cluster in the limit of large $N$.
Finally, we stress that the large chemisorption energy we find is a robust feature of the system that persists even in the presence of Hubbard on-site interactions that are implemented via GGA + U calculations \cite{note_U}. The interactions provided by U increase the CO$_2$ HOMO-LUMO gap (next section), and are actually found to enhance the adsorption strength (binding energy) of the CO$_2$ molecule to the B clusters.
\subsection{Electronic properties of the distorted and undistorted isolated systems}
In order to better understand the strong chemisorption of CO$_2$ on all considered B planar-type clusters, we have examined the atomic-orbital-resolved density of states of the isolated clusters and bent molecule, focusing on the atoms participating in the formation of the chemisorption bonds. As we have seen in the previous section, the CO$_2$ bond angle changes from 180\textdegree~(free molecule) to approximately 122\textdegree~in the chemisorbed geometry, which is indicative of a negative charging of the molecule. Moreover, the bending itself of the CO$_2$ molecule significantly modifies its electronic spectrum, and in particular considerably reduces its HOMO-LUMO gap \cite{Tai2013b}. In fact, an important point to note is that, when the molecule bends, the previously degenerate (from linear CO$_2$) highest-occupied and lowest-unoccupied $\pi$ states of the molecule both split into in-plane and out-of-plane orbitals, leaving exclusively O and C $2p$-related in-plane molecular orbitals as the frontier orbitals of the 122\textdegree - bent molecule (see Supplementary material, Fig.~S2 and also Fig.~\ref{fig:pdos_mo}(a)).
The splitting, after the molecule bends, of the lowest-unoccupied $\pi$ ($p_{z}$,$p_{y}$) level, in particular, is very large (3.7~eV) compared to the HOMO level splitting (0.4~eV, Fig.~S2) and the overall HOMO-LUMO gap also drastically decreases (by 6.6~eV in our calculations, Fig.~S2(b) and Fig.~\ref{fig:pdos_mo}(a)) with respect to the linear molecule (Fig.~S2(a)). Figure~\ref{fig:pdos_mo}(a) shows, for the resulting bent CO$_2$, the in-plane components of the $2p$-related O$^{(1)}$ and C projected density of states (PDOS) along the B\textendash C bond direction ($p_{y}$ component) and perpendicular to it ($p_{x}$ component). The corresponding molecular orbitals for the levels closest to the gap are also displayed in the figure. As can be seen, the bent CO$_2$ molecule has fully planar-type HOMO and LUMO states (denoted as H$_\textrm{A1}$ and L$_\textrm{A1}$ in Fig.~\ref{fig:pdos_mo}), in strong contrast with the linear CO$_2$ molecule (Fig.~S2(a)). The PDOS in Fig.~\ref{fig:pdos_mo}(a) also shows that, while the HOMO of the bent molecule retains very strong O$^{(1)}$ and C $2p_{y}$-orbital character, the LUMO exhibits both a strong $2p_{y}$ component and a substantial $2p_{x}$ component (both antibonding) from the O$^{(1)}$ and C atoms.
In Fig.~\ref{fig:pdos_mo}(b), we display, for the isolated B$_{12}$ cluster, the same type of $p_{x}$ and $p_{y}$ in-plane components of the density of states projected on the $2p$-orbitals of B$^{(1)}$ and B$^{(2)}$ atoms (the $2p_{z}$ component is shown in the Supplementary material, Fig.~S3). Such in-plane states are the ones which may interact/hybridize with the frontier orbitals of the bent CO$_2$. In Fig.~\ref{fig:pdos_mo}(b), we also display for the levels closest to the HOMO-LUMO gap and having the highest in-plane PDOS, the corresponding molecular orbitals. These states are characterized by lobes protruding over the cluster's edge within the cluster plane.
It can be observed from Fig.~\ref{fig:pdos_mo}(b) (and comparison with the full $p$-state PDOS in Fig.~S3(b)) that there is an especially large density of in-plane orbitals of the {\it{peripheral B atoms}} (B$^\textrm{(1)}$ and B$^\textrm{(2)}$) in the upper (2 to 3~eV) region of the cluster occupied-state spectrum. We note that previous calculations indicated that the B clusters which we are considering have in total in the occupied spectrum only 3 to 4 $p_{z}$-type (out-of-plane) molecular orbitals \cite{Zubarev2007}, delocalized over all cluster atoms, which is also what we find. The high density of in-plane $p_{x}$ and $p_{y}$ orbitals from peripheral (B$^\textrm{(1)}$ and B$^\textrm{(2)}$) atoms in the top (2 to 3~eV) part of the cluster occupied-state spectrum is a feature common to all four clusters considered in this work.
The in-plane molecular states of the cluster in the energy region [$-5$~eV, $-1$~eV], in Fig.~\ref{fig:pdos_mo}(b), strongly contribute to the electronic charge density of the cluster along its contour edge. In Fig.~\ref{fig:B12distortedchargedens}, we display the electronic charge density of the isolated B$_{12}$ cluster with the distorted geometry as in the adsorbed B$_{12}$\textendash CO$_2$ system. The electronic charge distribution is similar to that of the free/undistorted B$_{12}$ cluster (Fig.~S1 in Supplementary material); it is largely concentrated at the contour edges of the cluster. This inhomogeneous electronic distribution makes the contour edges negatively charged and leaves the inner B atoms with a reduced electron density. These properties are observed in all four clusters investigated here (Fig.~S1 in the Supplementary material).
\begin{figure}[H]
\centering\includegraphics[width=0.7\linewidth]{pdos_mo.png}
\caption{Atomic $2p_{x}$ and $2p_{y}$ projected density of states (PDOS) of the isolated bent CO$_2$ molecule (a) and B$_{12}$ cluster (b) summed over the two atoms directly involved in the chemisorption bonds in the configuration shown in Fig.~\ref{fig:init_final}c, i.e., O$^{(1)}$ and C in panel (a) and B$^\textrm{(1)}$ and B$^\textrm{(2)}$ in panel (b). The bent molecule and cluster are also shown with the corresponding $\hat{{x}}$ and $\hat{{y}}$ directions: the $y$-direction is aligned with the B\textendash C bond and the $x$-axis is perpendicular to it, still remaining in the plane of the cluster. Some of the occupied, $E < 0$, and empty, $E > 0$, states (probability density with orbital phase change) of the bent CO$_2$ molecule and of the B$_{12}$ cluster are shown next to their respective PDOS. The isosurface level is set to 0.001 $e$~\AA$^{-3}$.}
\label{fig:pdos_mo}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics[width=0.35\linewidth]{B12distortedChargeDens.png}
\caption{Electronic charge density contour plot calculated for an isolated B$_{12}$ cluster with the same distorted atomic structure as in the adsorbed B$_{12}$\textendash CO$_2$ system. The distortion occurs mostly at the atoms which take part in the binding (the bottom two B atoms in the plot). It can be seen that the electronic charge density is systematically largest at the cluster contour edges, thus leaving an extended positively charged area in the central part of the cluster. One can also observe that the adsorption of the CO$_2$ molecule causes the cluster to lose its 3-fold symmetry.}
\label{fig:B12distortedchargedens}
\end{figure}
\subsection{Discussion of the chemisorption mechanism}
To identify the dominant CO$_2$ molecular orbital involved in the chemisorption, we examined the differential charge density, i.e., the difference between the charge density of the chemisorbed B$_{n}$\textendash CO$_2$ system and that of the isolated B$_{n}$ and CO$_2$. In Fig.~\ref{fig:chargediff}, we present differential charge-density isosurfaces illustrating the electronic charge difference associated with the chemisorption of CO$_2$ on B$_{12}$. The shape of the energy-gain isosurface in the region of the CO$_2$ molecule has strong similarities with the probability density isosurface of the LUMO of the bent CO$_2$ molecule (refer to L$_\textrm{A1}$ of Fig.~\ref{fig:pdos_mo}). The LUMO CO$_2$ orbital will interact with some planar high-energy occupied molecular orbital(s) of the cluster (in Fig.~\ref{fig:pdos_mo}(b)) and, based on the probability densities of the molecular orbitals of the interacting B$_{12}$\textendash CO$_2$ system (the highest occupied states are shown in Fig.~S4 in the Supplementary material), we find that the L$_\textrm{A1}$ molecular orbital of CO$_2$ interacts (hybridizes) predominantly with the H$_\textrm{B3}$ molecular orbital of the cluster (see Fig.~\ref{fig:pdos_mo}(b)). These molecular orbitals have lobes protruding from the edges of the cluster/molecule with substantial orbital overlap suggesting that strong interaction between cluster and molecule can take place in this region.
\begin{figure}[!h]
\centering\includegraphics[width=0.4\linewidth]{chargediff.png}
\caption{Differential electron density isosurface ($\Delta$$\rho$) for the B$_{12}$\textendash CO$_2$ system (see text). Gray color represents electron deficient region ($\Delta$$\rho < 0$), while orange denotes electron rich region ($\Delta$$\rho > 0$) with respect to the isolated B$_{12}$ cluster and CO$_2$ molecule. A large electron rich region can be observed for the adsorbed CO$_2$ molecule, indicating that CO$_2$ acquired excess electrons becoming effectively negatively charged. The isosurface level is set to 0.004 $e$~\AA$^{-3}$. It can be observed that the overall shape of the electron-gain (orange) differential charge density isosurface in the region of the CO$_2$ molecule resembles that of probability density of the LUMO of bent CO$_2$ (refer to L$_\textrm{A1}$ of Fig.~\ref{fig:pdos_mo}).}
\label{fig:chargediff}
\end{figure}
From Fig.~\ref{fig:chargediff} it can be inferred that the CO$_2$ molecule gained excess negative charge from the B cluster. We performed a L\"owdin charge analysis, which (although it cannot provide exact values of the atomic charges in a hybridized system and is basis dependent) is useful for identifying charging trends. Thus, the C atom (binding with B$^\textrm{(2)}$) gained $0.27~e$ and the O atom (binding with B$^\textrm{(1)}$) gained $0.04~e$, while only a very small charge transfer is seen for the other O atom ($\sim$0.001~$e$). A similar total amount of L\"owdin charge was lost by the B cluster. Strong charge transfer between B structures and the chemisorbed CO$_2$ molecule has been reported earlier and related to Lewis acid-base interactions \cite{Sun2014PCCP,Sun2014JPCC,Wang2015}. The electronic charge transfer from the edges of the cluster (with excess negative charge) to the molecule can also be rationalized by noting that the bent CO$_2$ molecule, unlike the linear one, has a substantial net dipole moment: 0.724~$ea_0$ \cite{MorHay79}. The positive end of the dipole is closer to the B cluster and the negative end farther away, facilitating the interaction with the edge sites of the B cluster that exhibit higher electronic density.
In addition to the strong chemisorption of the full CO$_2$ molecule on B clusters, we also found cases where the molecule dissociated into C\textendash O and O fragments (Fig.~\ref{fig:dissoc}), each of which is bound separately to the B cluster, having typical bond lengths of 1.2 and 1.5~\AA\ (for both B$_{11}$ and B$_{13}$), respectively. The dissociation is attributed to the presence of longer bond lengths and lower charge density of the clusters, together with the specific choice of adsorption sites closest to the long B\textendash B bonds. The dissociation of the molecule takes place at B\textendash B edges where the charge density of the cluster is relatively low (Fig.~S1 in the Supplementary material) and the B atoms have less bonding with other B atoms. Both B$_{11}$ and B$_{13}$ clusters have considerably smaller HOMO-LUMO gap values than the other two clusters which do not display dissociative adsorption (Table~S1 in Supplementary material). The smaller gap indicates higher chances of interaction between the cluster and molecular states, allowing also more varied types of adsorption configurations, as we observe in our calculations.
\section{Conclusion}
We investigated the adsorption of CO$_2$ on B$_{n}$ ($n=10-13$) clusters by using first-principles density-functional theory. These clusters have been predicted theoretically and confirmed experimentally to have planar or quasiplanar geometries. We obtained different chemisorbed and physisorbed configurations depending on the initial position of the CO$_2$ molecule. In particular, the chemisorption is obtained for an in-plane position of the molecule close to the cluster contour edges, with adsorption, thus, at the cluster edge sites. CO$_2$ chemisorbs strongly to all four clusters considered, while the strongest CO$_2$ binding energy, amounting to 1.6~eV, is calculated for B$_{12}$. The CO$_2$ chemisorption energies we found for the B$_{10-13}$ clusters are considerably larger than previously obtained for the neutral B$_{80}$ and B$_{40}$ fullerene-type clusters. To the best of our knowledge, this is the first time such strong chemical binding of CO$_2$ to the planar-type B clusters is evidenced. The CO$_2$ binding energies to B$_{11-13}$ we obtained are also larger than previously reported for the chemically engineered TM\textendash B$_{8-9}^-$ clusters and doped/charged B fullerenes. We explain the strong chemisorption by the planarity of the B clusters which are characterized by a high density of protruding occupied in-plane molecular-orbital states near the cluster gap, associated with peripheral B atoms, and excess electronic charge at the cluster edges. These properties facilitate binding with the bent CO$_2$ molecule, which has exclusively in-plane frontier orbitals and a non-vanishing dipole moment.
\section{Acknowledgements}\label{acknowledgements}
This work was funded by the UP System Enhanced Creative Work and Research Grant ECWRG 2018-1-009. A.B.S.-P. is grateful to the Abdus Salam International Centre for Theoretical Physics (ICTP) and the OPEC Fund for International Development (OFID) for the OFID-ICTP postgraduate fellowship under the ICTP/IAEA Sandwich Training Educational Programme, and to the Philippines Commission on Higher Education (CHEd) for the Faculty Development Program (FacDev)\textendash Phase II.
\bibliographystyle{unsrtnat}
\section{Conclusion}\label{sec:conclusion}
In this paper, we present the first PTAS for {\sc Capacitated Vehicle
Routing} in planar graphs. Although the approximation scheme takes polynomial
time, it is not an \emph{efficient} PTAS (one whose running time is
bounded by a polynomial whose degree is independent of the value of
$\epsilon$). It is an open question as to whether an efficient PTAS
exists. It is also open whether a PTAS exists when the
capacity $Q$ is unbounded.
\section{Embedding}\label{sec:embedding}
In this section, we prove Theorem~\ref{thm:embed}, which we restate
for convenience:
\addtocounter{theorem}{-1}
\embeddingthm
The proof uses as a black box the following result from~\cite{spanner_paper}:
\begin{lemma}[\cite{spanner_paper}]\label{lem:spanner}
There is a number $c$ and a polynomial-time algorithm that, given
a planar graph $G$ with specified root vertex $r$ and diameter $D$,
computes a graph $H$ of treewidth at most $(\frac{1}{\epsilon})^c$
and an embedding $\phi$ of $G$ into $H$ such that, for all vertices $u$ and $v$,
$$d_G(u,v)\leq d_H(\phi(u),\phi(v)) \leq d_G(u,v) + \epsilon D$$
\end{lemma}
For notational convenience, instead of
Inequality~\ref{eq:expected-error} of Theorem~\ref{thm:embed}, we prove
\begin{equation} \label{eq:expected-error3}
E[d_H(\phi(u),\phi(v))] \leq d_G(u,v) + 3\epsilon[d_G(u,r) +
d_G(v,r)]
\end{equation}
from which Theorem~\ref{thm:embed} can be proved by taking $\epsilon'=\epsilon/3$.
Our embedding partitions vertices of $G$ into \emph{bands} of vertices
defined by distances from $r$. Choose $x \in [0,1]$ uniformly at
random. Let $B_0$ be the set of vertices $v$ such that
$d_G(r,v) < {\frac{1}{\epsilon}}^{x\frac{1}{\epsilon}}$, and for
$i \in \{1,2,3,...\}$ let $B_{i}$ be the set of vertices $v$ such that
${\frac{1}{\epsilon}}^{(i+x-1)\frac{1}{\epsilon}} \leq d_G(r,v) <
{\frac{1}{\epsilon}}^{(i+x)\frac{1}{\epsilon}}$ (see Figure~\ref{fig:bands}).
Let $G_i$ be the subgraph induced by $B_{i}$, together with all
$u$-to-$v$ and $v$-to-$r$ shortest paths for all $u,v\in B_{i}$. Note
that although the $B_i$ partition $V$, the $G_i$ do not partition
$G$. Note also that the diameter of $G_i$ is at most
$4{\frac{1}{\epsilon}}^{(i+x)\frac{1}{\epsilon}}$. This takes into account the
paths included in $G_i$ that pass through vertices not in $B_i$.
For each $G_i$, let $\phi_i$ be the embedding and let $H_i$ be
the host graph resulting from applying Lemma~\ref{lem:spanner} and
using $\epsilon' = \epsilon^{\frac{1}{\epsilon}+1}$. Finally, let $H$ be the graph
resulting from adding a new vertex $r'$ and for all $i$ and all
$v \in B_i$ adding an edge $(\phi_i(v),r')$ of length $d_G(v,r)$. Set
$\phi(v)=\phi_i(v)$ for all $v \in B_i-\set{r}$ and set $\phi(r)=r'$. See
Figure~\ref{fig:embedding}.
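The band assignment above is easy to make concrete. The following Python helper is a minimal sketch, assuming the conventions of the text: all nonzero distances are at least 1, and band boundaries sit at $(1/\epsilon)^{(i+x)/\epsilon}$ for a random $x \in [0,1]$; it is not code from the paper.

```python
import math

def band_index(d, x, eps):
    """Band containing a vertex at distance d from the root r.
    B_0 holds d < (1/eps)^(x/eps); for i >= 1, B_i holds
    (1/eps)^((i+x-1)/eps) <= d < (1/eps)^((i+x)/eps)."""
    if d < 1.0:
        return 0  # the root itself; all nonzero distances are >= 1
    # Write d = (1/eps)^(t/eps); then band i satisfies i + x - 1 <= t < i + x.
    t = eps * math.log(d) / math.log(1.0 / eps)
    return 0 if t < x else math.floor(t - x) + 1

# With eps = 0.5 and x = 0.5 the band boundaries fall at 2, 8, 32, 128, ...
for d in (1, 3, 10, 100):
    print(d, band_index(d, x=0.5, eps=0.5))
```

Note that the band index of a fixed vertex is a random variable through $x$, which is the source of the expected-error guarantee.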
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{bands.png}
\caption{$G$ is divided into bands $B_0,B_1,...,B_{final}$ based on distance from $r$.}
\label{fig:bands}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{embedding.png}
\caption{Each subgraph $G_i$ of $G$ is embedded into a host graph $H_i$.
These graphs are joined via edges to a new depot $r'$ to form a
host graph for $G$.}
\label{fig:embedding}
\end{figure}
Let $H^-$ be the graph obtained from $H$ by deleting $r'$. The connected components of $H^-$ are $\{H_i\}_i$. By Lemma~\ref{lem:spanner}, the treewidth of each host graph $H_i$ is at most $(\frac{1}{\epsilon'})^{c_0}$ = $(\frac{1}{\epsilon})^{c_0(\epsilon^{-1} + 1)}$ for some constant $c_0$. This also bounds the treewidth of $H^-$. Adding a single vertex to a graph increases the treewidth by at most one, so after adding $r'$ back, the treewidth of $H$ is $(\frac{1}{\epsilon})^{c_0(\epsilon^{-1} + 1)}+1 = (\frac{1}{\epsilon})^{c_1\epsilon^{-1}}$ for some constant $c_1$.
As for the metric approximation, it is clear that $d_G(u,v) \leq
d_H(\phi(u),\phi(v))$ with probability 1.
We use the following lemma to prove Equation~\ref{eq:expected-error3}.
\begin{lemma}\label{lem:prob}
If $\epsilon d_G(v,r) < d_G(u,r) \leq d_G(v,r)$, then the probability that $u$ and $v$ are in different bands is at most $\epsilon$.
\end{lemma}
\begin{proof}
Let $i$ be the nonnegative integer such that $d_G(u,r) =
{\frac{1}{\epsilon}}^{(i+a)\frac{1}{\epsilon}}$ for some $a \in [0,1]$. Let $b$
be the number such that $d_G(v,r) = {\frac{1}{\epsilon}}^{(i+b)\frac{1}{\epsilon}}$.
$$\frac{1}{\epsilon} \geq \frac{d_G(v,r)}{d_G(u,r)} =
\frac{{\frac{1}{\epsilon}}^{(i+b)\frac{1}{\epsilon}}}{{\frac{1}{\epsilon}}^{(i+a)\frac{1}{\epsilon}}}
= {\frac{1}{\epsilon}}^{(b-a)\frac{1}{\epsilon}}$$
Therefore
$$b-a \leq \epsilon$$
Consider two cases. If $b\leq 1$, then the probability that $u$ and $v$ are in different bands is $\Pr[a\leq x < b] \leq \epsilon$.
If $b > 1$, then the probability that $u$ and $v$ are in different bands is $\Pr[x \geq a \text{ or } x \leq b-1] \leq 1-a + b-1 = b-a \leq \epsilon$.
\end{proof}
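A quick Monte Carlo sanity check of the lemma is below. The distances $d_G(u,r)=3.1$ and $d_G(v,r)=7.3$ with $\epsilon=0.25$ are arbitrary illustrative values chosen only to satisfy the hypothesis $\epsilon\, d_G(v,r) < d_G(u,r) \leq d_G(v,r)$; the band convention follows the text.

```python
import math
import random

def band_index(d, x, eps):
    # B_0: d < (1/eps)^(x/eps); B_i: (1/eps)^((i+x-1)/eps) <= d < (1/eps)^((i+x)/eps)
    t = eps * math.log(d) / math.log(1.0 / eps)
    return 0 if t < x else math.floor(t - x) + 1

eps = 0.25
d_u, d_v = 3.1, 7.3  # illustrative distances with eps*d_v < d_u <= d_v
assert eps * d_v < d_u <= d_v

rng = random.Random(0)  # fixed seed for a reproducible estimate
trials = 20000
split = 0
for _ in range(trials):
    x = rng.random()  # the same random x governs both vertices
    if band_index(d_u, x, eps) != band_index(d_v, x, eps):
        split += 1
frac = split / trials
print(f"fraction split across bands: {frac:.3f} (lemma bound: {eps})")
```

The empirical split fraction stays below the $\epsilon$ bound of the lemma, as it must.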
We now prove Equation~\ref{eq:expected-error3}. Let $u$ and $v$ be
vertices in $G$. Without loss of generality, assume $d_G(u,r) \leq d_G(v,r)$. First we address the case where $d_G(u,r) \leq \epsilon d_G(v,r)$. Since $\phi(u)$ and $\phi(v)$ are both adjacent to $r'$ in $H$, $d_H(\phi(u),\phi(v)) \leq d_H(\phi(u),r') + d_H(\phi(v),r') = d_G(u,r) + d_G(v,r) \leq 2d_G(u,r) + d_G(u,v) \leq d_G(u,v) + 2\epsilon d_G(v,r)$. Therefore $E[d_H(\phi(u),\phi(v))] \leq d_G(u,v) + 3\epsilon[d_G(u,r) + d_G(v,r)]$.
Now, suppose $d_G(u,r) > \epsilon d_G(v,r)$. If $u$ and $v$ are in the
same band $B_i$, then by Lemma~\ref{lem:spanner},
\begin{eqnarray*}
d_{H}(\phi(u),\phi(v))&\leq& d_{H_i}(\phi(u),\phi(v)) \leq d_G(u,v) +
\epsilon' \, 4{\frac{1}{\epsilon}}^{(i+x)\frac{1}{\epsilon}} \\
&=& d_G(u,v) + \epsilon^{\frac{1}{\epsilon}+1}
4{\frac{1}{\epsilon}}^{(i+x)\frac{1}{\epsilon}}\\
& = & d_G(u,v) +
4\epsilon{\frac{1}{\epsilon}}^{(i+x-1)\frac{1}{\epsilon}} \leq d_G(u,v) +
2\epsilon(d_G(u,r)+d_G(v,r))
\end{eqnarray*}
In the final inequality, when
$i=0$, we use the fact that all nonzero distances are at least one to
give a lower bound on $d_G(u,r)$ and $d_G(v,r)$.
If $u$ and $v$ are in different bands, then since $\phi(u)$ and $\phi(v)$ are both adjacent to $r'$ in $H$, $d_H(\phi(u),\phi(v)) \leq d_H(\phi(u),r') + d_H(\phi(v),r') = d_G(u,r) + d_G(v,r)$. By Lemma~\ref{lem:prob}, this case occurs with probability at most $\epsilon$.
Therefore $E[d_H(\phi(u),\phi(v))] \leq (d_G(u,v) +
2\epsilon(d_G(u,r)+d_G(v,r))) + \epsilon[d_G(u,r) + d_G(v,r)] \leq d_G(u,v) +
3\epsilon[d_G(u,r) + d_G(v,r)]$, which proves Inequality~\ref{eq:expected-error3}.
The construction does not depend on planarity only via Lemma~\ref{lem:spanner}.
For the sake of future uses of the construction
with other graph classes, we state a lemma.
\begin{lemma}\label{lem:reduction}
Let $\mathcal F$ be a family of graphs closed under
vertex-induced subgraphs. Suppose that there is a function $f$ and a
polynomial-time algorithm that, for any graph $G$ in $\mathcal F$,
computes a graph $H$ of treewidth at most $f(\epsilon)$ and an
embedding $\phi$ of $G$ into $H$ such that, for all vertices $u$ and $v$,
$$d_G(u,v)\leq d_H(\phi(u),\phi(v)) \leq d_G(u,v) + \epsilon D$$
Then there is a function $g$ and a randomized polynomial-time algorithm that, for any graph $G$ in $\mathcal F$,
computes a graph $H$ with treewidth at most $g(\epsilon)$ and
an embedding $\phi$ of $G$ into $H$, such that, for every pair of
vertices $u,v$ of $G$, with probability 1 $d_G(u,v) \leq
d_H(\phi(u),\phi(v))$, and
$$E[d_H(\phi(u),\phi(v))] \leq d_G(u,v) + \epsilon\left[d_G(u,r) +d_G(v,r)\right]$$
\end{lemma}
\section{Introduction}\label{sec:intro}
We define the {\sc Capacitated Vehicle Routing} problem with capacity $Q>0$
as follows. The input is an undirected graph with nonnegative
edge-lengths, a distinguished vertex $r$ (the \emph{depot}), and a set
$S$ of vertices (the \emph{clients}). The output is a set of tours
(closed walks), each including the depot, together with an assignment
of clients to tours such that each client belongs to the tour to which
it is assigned, and such that each tour is assigned at most $Q$
clients. The objective is to minimize the total length of the tours.
We refer to this quantity as the \emph{cost} of the solution.
This problem
arises in both public and commercial settings including
planning school bus routes and package delivery. {\sc Capacitated
Vehicle Routing} is NP-hard for any capacity greater than
two~\cite{asano1997}. We provide a polynomial-time approximation
scheme (PTAS) for {\sc Capacitated Vehicle Routing} when the capacity
is bounded and the underlying graph is planar.
An \emph{embedding} of a guest graph $G$ in a host graph $H$ is a mapping $\phi: V(G) \longrightarrow V(H)$. One seeks embeddings in which, for each pair $u,v$ of vertices of $G$, the $u$-to-$v$ distance in $G$ is in some sense approximated by the $\phi(u)$-to-$\phi(v)$ distance in $H$. One algorithmic strategy for addressing a metric problem is as follows: find an embedding $\phi$ from the input graph $G$ to a graph $H$ with simple structure; find a good solution in $H$; lift the solution to a solution in $G$. The success of this strategy depends on how easy it is to find a good solution in $H$ and how well distances in $H$ approximate corresponding distances in $G$.
In this paper, we give a randomized method for embedding a planar
graph $G$ into a bounded-treewidth host graph $H$ so as to achieve a
certain expected distance approximation guarantee. There is a
polynomial-time algorithm to find an optimal solution to {\sc
Capacitated Vehicle Routing} in bounded-treewidth graphs. This
algorithm is used to find an optimal solution to the problem induced
in $H$. This solution in the host graph is then lifted to obtain a near-optimal solution in $G$.
\subsection{Related Work}\label{sec:related_work}
\subsubsection*{Capacitated Vehicle Routing} There is a substantial body of work on approximation algorithms for {\sc Capacitated Vehicle Routing}. As the problem generalizes the {\sc Traveling Salesman Problem} (TSP), for general metrics and values of $Q$, {\sc Capacitated Vehicle Routing} is also APX-hard~\cite{papadimitriou}. Haimovich and Rinnooy Kan~\cite{haimovich1985} observe the following lower bound.
\begin{equation} \label{eq:lb}
\frac{2}{Q}\sum_{v\in S}{d(v,r)} \leq \text{cost}(OPT)
\end{equation}
where $\text{cost}(OPT)$ denotes the cost of the optimal solution. They
use this inequality to give a $1+(1-\frac{1}{Q})\alpha$-approximation, where $\alpha$ denotes the approximation ratio of TSP. Using Christofides' 1.5-approximation for TSP~\cite{christofides}, this gives an approximation ratio of $2.5-\frac{1}{Q}$. For general metrics and values of $Q$ this result has not been substantially improved upon. Even for tree metrics, the best known approximation ratio for arbitrary values of $Q$ is 4/3, due to Becker~\cite{becker_trees}. While no polynomial-time approximation schemes are known for arbitrary $Q$ for \emph{any} nontrivial metric, recently Becker and Paul~\cite{becker_paul} gave a bicriteria $(1,1+\epsilon)$ approximation scheme for tree metrics. It returns a solution of at most the optimal cost, but in which each tour is responsible for at most $(1+\epsilon)Q$ clients.
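The lower bound of Inequality~\ref{eq:lb} is straightforward to evaluate. The sketch below is illustrative only; the toy client distances are made up, and the bound says nothing about how the optimal tours look, only that no capacity-$Q$ solution can cost less.

```python
def radial_lower_bound(depot_distances, Q):
    """Haimovich--Rinnooy Kan radial lower bound (2/Q) * sum_{v in S} d(v, r):
    every solution with tour capacity Q costs at least this much."""
    return (2.0 / Q) * sum(depot_distances)

# Toy instance: three clients at distances 1, 2, 3 from the depot, capacity Q = 2.
print(radial_lower_bound([1.0, 2.0, 3.0], Q=2))
```

Intuitively, each client $v$ can be charged $2\,d(v,r)/Q$ of the cost of its tour, since a tour serving $k \leq Q$ clients has length at least twice the distance from the depot to any of them.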
One reasonable relaxation is to consider restricted values of $Q$. Even for $Q$ as small as 3, {\sc Capacitated Vehicle Routing} is APX-hard in general metrics~\cite{asano1997}. On the other hand, for fixed values of $Q$, the problem can be solved in polynomial time on trees and bounded-treewidth graphs.
Much attention has been given to approximation schemes for Euclidean metrics. In the Euclidean plane $\mathbb{R}^2$, PTASs are known for instances in which the value of $Q$ is constant~\cite{haimovich1985}, $O(\log n/\log\log n)$~\cite{asano1997}, and $\Omega(n)$~\cite{asano1997}. For $\mathbb{R}^3$, a PTAS is known for $Q=O(\log n)$ and for higher dimensions $\mathbb{R}^d$, a PTAS is known for $Q=O(\log^{1/d}n)$\cite{khachay2016}. For arbitrary values of $Q$, Mathieu and Das designed a quasi-polynomial time approximation scheme (QPTAS) for instances in $\mathbb{R}^2$~\cite{das2010}. No PTAS is known for arbitrary values of $Q$.
There have been a few recent advances in designing approximation
schemes for {\sc Capacitated Vehicle Routing} in non-Euclidean
metrics. Becker, Klein, and Saulpic \cite{bks_planar} gave a
QPTAS for bounded-capacity instances in planar and bounded-genus graphs. The same authors gave a PTAS for graphs of bounded highway dimension \cite{bks_hwy_dim}.
\subsubsection*{Metric embeddings}
There has been much work on metric embeddings. In particular,
Bartal~\cite{Bartal96} gave a randomized algorithm for selecting an
embedding $\phi$ of the input graph into a tree so that, for any
vertices $u$ and $v$ of $G$, the expected $\phi(u)$-to-$\phi(v)$
distance in the tree approximates the $u$-to-$v$ distance in $G$ to
within a polylogarithmic factor. Fakcharoenphol, Rao, and
Talwar~\cite{FRT04} improved the factor to $O(\log n)$.
Talwar~\cite{talwar2004bypassing} gave a randomized algorithm for
selecting an embedding of a metric space of bounded doubling dimension
and aspect ratio $\Delta$ into a graph whose treewidth is bounded by a
function that is polylogarithmic in $\Delta$; the distances are
approximated to within a factor of $1+\epsilon$. Feldman, Fung,
K\"onemann, and Post~\cite{feldmann20151+} built on this result to
obtain a similar embedding theorem for graphs of bounded highway
dimension.
What about planar graphs? Chakrabarti et al.~\cite{CJLV08} showed a
result that implies that unit-weight planar graphs cannot be embedded into
distributions over $o(\sqrt{n})$-treewidth graphs so as to achieve
approximation to within an $o(\log n)$ factor.
Let us consider distance approximation guarantees with absolute
(rather than relative) error. Becker, Klein, and
Saulpic~\cite{bks_hwy_dim} gave a deterministic algorithm that, given
a constant $\epsilon>0$, finds an embedding from a graph $G$ of bounded
highway dimension to a bounded-treewidth graph $H$ such that, for each
pair $u,v$ of vertices of $G$, the $\phi(u)$-to-$\phi(v)$ distance in
$H$ is at least the $u$-to-$v$ distance in $G$ and exceeds that
distance by at most $\epsilon$ times the $u$-to-$r$ distance
plus the $v$-to-$r$ distance, where $r$ is a given vertex of $G$.
This embedding was used to obtain the previously mentioned PTAS for
{\sc Capacitated Vehicle Routing} with bounded capacity on graphs of
bounded highway dimension.
Recently, Fox-Epstein, Klein, and Schild~\cite{spanner_paper} showed how to embed planar graphs into graphs of bounded treewidth, such that distances are preserved up to a small additive error of $\epsilon D$, where $D$ is the diameter of the graph. They show how such an embedding can be used to achieve efficient bicriteria approximation schemes for $k$-{\sc Center} and $d$-{\sc Independent Set}.
\subsection{Main Contributions}\label{sec:contributions}
In this paper we present the first known PTAS for {\sc Capacitated Vehicle Routing} on planar graphs. We formally state the result as follows.
\begin{theorem}\label{thm:vehicle_routing}
For any $\epsilon>0$ and capacity $Q$, there is a polynomial-time algorithm
that, given an instance of {\sc Capacitated Vehicle Routing} on planar
graphs with capacity $Q$, returns a solution whose cost is at most $1+\epsilon$ times optimal.
\end{theorem}
Prior to this work, only a QPTAS was known~\cite{bks_planar} for planar graphs. As described in Section~\ref{sec:related_work}, PTASs for {\sc Capacitated Vehicle Routing} are known only for very few metrics. Our result expands this small list to include planar graphs---a graph class that is quite relevant to vehicle-routing problems as many road networks are planar or near-planar.
The basis for our new PTAS is a new metric-embedding theorem. For a graph
$G$ with edge-lengths and vertices $u$ and $v$, let $d_G(u,v)$ denote the $u$-to-$v$
distance in $G$.
\begin{restatable}{theorem}{embeddingthm}\label{thm:embed}
There is a constant $c$ and a randomized polynomial-time algorithm
that, given a planar graph $G$ with specified root vertex $r$ and
given $0<\epsilon<1$, computes a graph $H$ with treewidth at most $(\frac{1}{\epsilon})^{c\epsilon^{-1}}$ and
an embedding $\phi$ of $G$ into $H$, such that, for every pair of
vertices $u,v$ of $G$, $d_G(u,v) \leq d_H(\phi(u),\phi(v))$ with probability 1, and
\begin{equation} \label{eq:expected-error}
E[d_H(\phi(u),\phi(v))] \leq d_G(u,v) + \epsilon[d_G(u,r) +d_G(v,r)]
\end{equation}
\end{restatable}
\noindent The expectation $E[\cdot]$ is over the random choices of the algorithm.
Why does this metric-embedding result give rise to an approximation
scheme for {\sc Capacitated Vehicle Routing}?
We draw on the following observation, which was also used in previous
approximation schemes~\cite{bks_planar,bks_hwy_dim}:
tours with clients far from the depot can accommodate a larger error. In particular, each client can be \emph{charged} error that is proportional to its distance to the depot. In designing an appropriate embedding, we can afford a larger \emph{error allowance} for the clients farther from the depot.
Our new embedding result builds on that of Fox-Epstein et al.~\cite{spanner_paper}. The challenge in directly applying their embedding result is that it gives an \emph{additive} error bound, proportional to the diameter of the graph. This error is too large for those clients close to the depot. Instead, we divide the graph into annuli (\emph{bands}) defined by distance ranges from the depot and apply the embedding result to each induced subgraph independently, with an increasingly large error tolerance for the annuli farthest from the depot. In this way, each client \emph{can} afford an error proportional to the diameter of the \emph{subgraph} it belongs to.
How can these subgraph embeddings be combined into a global embedding with the desired properties? In particular, clients that are close to each other in the input graph may be separated into different annuli. How can we ensure that the embedding approximately preserves these distances while still achieving bounded treewidth?
We show that by randomizing the choice of where to define the annuli boundaries, and connecting all vertices of all subgraph embeddings to a new, global depot, client distances are approximately preserved (to within their error allowance) \emph{in expectation} by the overall embedding, without substantially increasing the treewidth. Specifically, we ensure that the annuli are \emph{wide} enough that the probability of nearby clients being separated (and thus generating large error) is small. Simultaneously, the annuli must be \emph{narrow} enough that, within a given annulus, the clients closest to the depot can afford an error proportional to error allowance of the clients farthest from the depot.
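As an illustration, the randomized band assignment can be sketched as follows. This is a minimal sketch: the helper name and the logarithmic band scale are our own illustrative choices, not the exact construction used in the proof; the one point it captures is that a single random shift, shared by all vertices, makes nearby clients unlikely to straddle a band boundary.

```python
import math
import random

def annulus_partition(dist_to_depot, width, x=None):
    """Assign each vertex to an annulus (band) by its distance to the depot.

    Bands live on a logarithmic scale of base `width`, shifted by a single
    random offset x in [0, 1) shared by all vertices, so two vertices at
    similar distances are unlikely to straddle a band boundary.
    """
    if x is None:
        x = random.random()
    bands = {}
    for v, d in dist_to_depot.items():
        # vertices within distance 1 of the depot share the innermost band
        bands[v] = 0 if d <= 1 else math.floor(math.log(d, width) + x)
    return bands
```

Because the shift is shared, the map from distance to band index is monotone for any fixed draw of $x$, and the probability that two vertices at comparable distances are separated shrinks as the bands widen.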
Once the input graph $G$ is embedded in a bounded-treewidth host graph
$H$, a dynamic-programming algorithm can be used to find an optimal
solution to the instance of {\sc Capacitated Vehicle Routing} induced
in $H$, and the solution can be straightforwardly lifted to obtain a
solution in the input graph that in expectation is near-optimal.
Finally we describe how this result can be derandomized by trying all possible (relevant) choices for defining annuli and noting that for \emph{some} such choice, the resulting solution cost must be near-optimal.
\subsection{Outline}\label{sec:outline}
In Section~\ref{sec:prelims} we describe preliminary notation and definitions. Section~\ref{sec:embedding} describes the details of the embedding and provides an analysis of the desired properties. In Section~\ref{sec:ptas} we outline our algorithm and prove Theorem~\ref{thm:vehicle_routing}. We conclude with some remarks in Section~\ref{sec:conclusion}.
\section{Preliminaries}\label{sec:prelims}
\subsection{Basics}
Let $G=(V,E)$ denote a graph with vertex set $V$ and edge set $E$.
The graph comes equipped with a nonnegative length for each edge, and
let $n = |V|$. As mentioned earlier, for any two vertices $u,v \in
V$, we use $d_G(u,v)$ to denote the $u$-to-$v$ distance in $G$,
i.e.\ the minimum length of a $u$-to-$v$ path.
We might omit the subscript when the choice of graph is unambiguous. The \emph{diameter} of a graph $G$ is the maximum distance $d_G(u,v)$ over all choices of $u$ and $v$.
A graph is \emph{planar} if it can be drawn in the plane without any edge crossings.
We use $OPT$ to denote an optimal solution. For a minimization problem, an \emph{$\alpha$-approximation algorithm} is one that returns a solution whose cost is at most $\alpha$ times the cost of $OPT$. An \emph{approximation scheme} is a family of $(1+\epsilon)$-approximation algorithms, indexed by $\epsilon >0$. A \emph{polynomial-time approximation scheme} (PTAS) is an approximation scheme such that, for each $\epsilon >0$, the corresponding algorithm runs in $O(n^c)$ time, where $c$ is a constant independent of $n$ but may depend on $\epsilon$. A \emph{quasi-polynomial-time approximation scheme} (QPTAS) is an approximation scheme such that, for each $\epsilon >0$, the corresponding algorithm runs in $O(n^{\log^cn})$ time, where $c$ is a constant independent of $n$ but may depend on $\epsilon$.
An \emph{embedding} of a guest graph $G$ into a host graph $H$ is a mapping $\phi:V_G \rightarrow V_H$ of the vertices of $G$ to the vertices of $H$.
A \emph{tree decomposition} of a graph $G$ is a tree $T$ whose nodes (called \emph{bags}) correspond to subsets of $V$ with the following properties:
\begin{enumerate}
\item For each $v\in V$, $v$ appears in some bag in $T$
\item For each $(u,v) \in E$, $u$ and $v$ appear \emph{together} in some bag in $T$
\item For each $v \in V$, the subtree induced by the bags of $T$ containing $v$ is connected
\end{enumerate}
The \emph{width} of a tree decomposition is the size of the largest bag, and the \emph{treewidth} of a graph $G$ is the minimum width over all tree decompositions of $G$.
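For concreteness, the three properties above can be checked mechanically. The sketch below (illustrative names, small inputs only) verifies a candidate decomposition and computes its width using this paper's convention that the width is the size of the largest bag.

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition properties.

    `bags` maps a bag id to a set of vertices; `tree_edges` are the edges
    of the decomposition tree.  Illustrative sketch for small inputs.
    """
    # 1. every vertex appears in some bag
    for v in vertices:
        if not any(v in bag for bag in bags.values()):
            return False
    # 2. every edge has both endpoints together in some bag
    for u, v in edges:
        if not any(u in bag and v in bag for bag in bags.values()):
            return False
    # 3. the bags containing each vertex induce a connected subtree
    for v in vertices:
        ids = [i for i, bag in bags.items() if v in bag]
        seen, stack = {ids[0]}, [ids[0]]
        while stack:
            i = stack.pop()
            for a, b in tree_edges:
                for j in (b,) if a == i else (a,) if b == i else ():
                    if j in ids and j not in seen:
                        seen.add(j)
                        stack.append(j)
        if seen != set(ids):
            return False
    return True

def width(bags):
    # this paper's convention: width is the size of the largest bag
    return max(len(bag) for bag in bags.values())
```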
\subsection{Problem Statement}
A \emph{tour} in a graph $G$ is a closed path $v_0,v_1,v_2,...,v_L$ such that $v_0 = v_L$ and for all $i \in \{1,2,...,L\}$, $(v_{i-1},v_i)$ is an edge in $G$.
Given a capacity $Q>0$ and a graph $G = (V,E)$ with specified client set $S\subseteq V$ and depot vertex $r\in V$, the {\sc Capacitated Vehicle Routing} problem is to find a set of tours $\Pi = \{\pi_1,\pi_2,...\pi_{|\Pi|}\}$ that collectively cover all clients and such that each tour includes $r$ and covers at most $Q$ clients. The cost of a solution is the sum of the tour lengths, and the objective is to minimize this sum.
If a client $s$ is covered by a tour $\pi$, we say that $\pi$ \emph{visits} $s$. Note that $\pi$ may \emph{pass} many other vertices (including other clients) that it does not cover.
As stated, the problem assumes that each client has unit demand. In fact, the more general case, where clients have integral demand (assumed to be polynomially bounded) that is allowed to be covered across multiple tours (demand is \emph{divisible}) reduces to the unit-demand case as follows: For each client $s\in S$ with demand $dem(s) = k$, add $k$ new vertices $\{v_1,v_2,...,v_k\}$ each with unit demand and edges $(s,v_i)$ of length zero, and set $dem(s)$ to zero. Note that this modification does not affect planarity. Additionally, since demand is assumed to be polynomially-bounded, the increase in graph size is negligible for the purpose of a PTAS.
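The reduction above is a few lines of code. The sketch below (a hypothetical helper; edges are `(u, v, length)` triples) splits each client with demand $k$ into $k$ unit-demand vertices attached by zero-length edges.

```python
def split_demands(edges, demand):
    """Reduce divisible integral demands to unit demands.

    For each client s with demand k, attach k fresh unit-demand vertices
    to s by zero-length edges; planarity is clearly preserved.  Names
    are illustrative.
    """
    new_edges = list(edges)
    unit_clients = []
    for s, k in demand.items():
        for i in range(k):
            v = (s, i)                     # fresh vertex hanging off s
            new_edges.append((s, v, 0.0))  # zero-length connector
            unit_clients.append(v)
    return new_edges, unit_clients
```

Since each new vertex sits at distance zero from its client, any tour covering the new vertices has exactly the cost of the corresponding tour covering the original demands.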
For {\sc Capacitated Vehicle Routing} with \emph{indivisible} demands, each client's demand must be covered by a single tour, and a tour can cover at most $Q$ units of client demand.
We assume values of $\epsilon$ are less than one. If not, any $\epsilon \geq 1$ can be replaced with a number $\epsilon'$ slightly less than one. This only helps the approximation guarantee and does not significantly increase runtime. Of course for very large values of $\epsilon$, an efficient constant-factor approximation can be used instead (see Section~\ref{sec:related_work}).
\section{PTAS for Capacitated Vehicle Routing}\label{sec:ptas}
In this section, we show how to use the embedding of Section~\ref{sec:embedding} to give a PTAS for {\sc Capacitated Vehicle Routing}, proving Theorem~\ref{thm:vehicle_routing}.
\subsection{Randomized algorithm}\label{sec:relaxed}
We first prove a slight relaxation of
Theorem~\ref{thm:vehicle_routing} in which the algorithm is
randomized, and the solution value is near-optimal \emph{in expectation}. We then show in Section~\ref{sec:derand} how to derandomize the result.
\begin{theorem}\label{thm:expectation_PTAS}
For any $\epsilon>0$ and capacity $Q$, there is a randomized algorithm for {\sc Capacitated Vehicle Routing} on planar graphs that in polynomial time returns a solution whose expected value is at most $1+\epsilon$ times optimal.
\end{theorem}
Our result depends on the following lemma, which is proved in~\cite{BeckerKS17}, the full version of \cite{bks_hwy_dim}.
\begin{lemma}[Lemma 20 in \cite{bks_hwy_dim}, Lemma 15 in \cite{BeckerKS17}]\label{lem:dp}
Given an instance of {\sc Capacitated Vehicle Routing} with capacity $Q$ on a graph $G$ with treewidth $w$, there is a dynamic-programming algorithm that finds an optimal solution in $n^{O(wQ)}$ time.
\end{lemma}
Given the dynamic program of Lemma~\ref{lem:dp} and the embedding of Theorem~\ref{thm:embed} as black boxes, the algorithm is as follows. First, the graph $G$ is embedded as in Theorem~\ref{thm:embed} using $\hat{\epsilon}=\epsilon/3Q$ into a host graph $H$ with treewidth $(\frac{1}{\hat{\epsilon}})^{c\hat{\epsilon}^{-1}}$ for some constant $c$, and $d_G(u,v)\leq E[d_H(\phi(u),\phi(v))] \leq d_G(u,v) + 3\hat{\epsilon}(d_G(u,r) + d_G(v,r))$ for all vertices $u$ and $v$. The dynamic program of Lemma~\ref{lem:dp} is then applied to $H$. The resulting solution $SOL_H$ in $H$ is then mapped back to a solution $SOL_G$ in $G$ which is returned by the algorithm.
Note that the tours in any vehicle-routing solution can be defined by specifying the order in which clients are visited. In particular, we use $(u,v)\in SOL$ to denote that $u$ and $v$ are consecutive clients visited by the solution, noting that $u$ or $v$ may actually be the depot. In this way, a solution in $H$ is easily mapped back to a corresponding solution in $G$, as $(u,v)\in SOL_G$ if and only if $(\phi(u),\phi(v))\in SOL_H$.
\medskip
We now prove Theorem~\ref{thm:expectation_PTAS} by analyzing this algorithm.
\begin{lemma}\label{lem:performance}
For any $\epsilon>0$, the algorithm described above finds a solution whose expected value is at most $1+\epsilon$ times optimal.
\end{lemma}
\begin{proof}
Let $OPT$ be the optimal solution in $G$ and let $OPT_H$ be the corresponding induced solution in $H$. Since the dynamic program finds an optimal solution in $H$, we have $\text{cost}_H(SOL_H)\leq \text{cost}_H(OPT_H)$. Additionally, since distances in $H$ are no shorter than distances in $G$, $\text{cost}_G(SOL_G) \leq \text{cost}_H(SOL_H)$. Putting these pieces together, we have
\begin{eqnarray*}
E[\text{cost}_G(SOL_G)] &\leq& E[\text{cost}_H(SOL_H)]\\
&\leq& E[\text{cost}_H(OPT_H)]\\
& =& E[\sum_{(u,v)\in OPT}d_H(\phi(u),\phi(v))]\\
&=& \sum_{(u,v)\in OPT}E[d_H(\phi(u),\phi(v))] \\
&\leq & \sum_{(u,v)\in OPT}\left[d_G(u,v) + 3\hat{\epsilon}(d_G(u,r) + d_G(v,r))\right]\\
&=& \sum_{(u,v)\in OPT}d_G(u,v) + 6\hat{\epsilon}\sum_{v\in S}d_G(v,r)\\
&\leq &\text{cost}_G(OPT) + 6\hat{\epsilon}\frac{Q}{2}\text{cost}_G(OPT)\\
&=& (1+\epsilon)\text{cost}_G(OPT)
\end{eqnarray*}
where the final inequality comes from Inequality~\ref{eq:lb} (see Section~\ref{sec:related_work}).
\end{proof}
The following lemma completes the proof of Theorem~\ref{thm:expectation_PTAS}.
\begin{lemma}\label{lem:runtime}
For any $Q,\epsilon>0$, the algorithm described above runs in polynomial time.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:spanner}, computing $H$ and the embedding of $G$ into $H$ takes polynomial time.
By Lemma~\ref{lem:dp}, the dynamic program runs in $|V_H|^{O(wQ)}$ time, where $w$ is the treewidth of $H$. By Theorem~\ref{thm:embed}, $w = (\frac{1}{\hat{\epsilon}})^{c\hat{\epsilon}^{-1}} = (\frac{Q}{\epsilon})^{c'Q\epsilon^{-1}}$, where $c$ and $c'$ are constants independent of $|V_H|$.
The algorithm therefore runs in $|V_H|^{(Q\epsilon^{-1})^{O(Q\epsilon^{-1})}}$
time. Finally, since $|V_H|$ is polynomial in the size of $G$, for
fixed $Q$ and $\epsilon$, the running time is polynomial.
\end{proof}
\subsection{Derandomization}\label{sec:derand}
The algorithm can be derandomized using a standard technique. The
embedding of Theorem~\ref{thm:embed} partitions the vertices of the
input graph into rings depending on a value $x$ chosen
uniformly at random from $[0,1]$. However, the partition depends on
the distances of vertices from the root $r$. It follows that the
number of partitions that can arise from different choices of $x$ is
at most the number of vertices. The deterministic algorithm tries
each of these partitions, finding the corresponding solution, and
returns the least costly of these solutions.
In particular, consider the optimum solution $OPT$. As shown in Section~\ref{sec:relaxed},
\begin{eqnarray*}
E\Big[\sum_{(u,v) \in OPT}d_H(\phi(u),\phi(v))\Big] &=& \sum_{(u,v) \in OPT}E[d_H(\phi(u),\phi(v))] \\
&\leq& (1+\epsilon)\text{cost}_G(OPT).
\end{eqnarray*}
Therefore, for some choice of $x$, the induced cost of $OPT$ in $H$ is nearly
optimal, and the dynamic program will find a solution that costs at
most as much.
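In code, the try-all-shifts derandomization is just a minimum over candidate shifts; `solve_for_shift` below stands in for the whole randomized pipeline run with the shift fixed (names are illustrative).

```python
def derandomize(shifts, solve_for_shift):
    """Run the pipeline once per candidate shift and keep the cheapest
    solution.  Since the cost under a random shift is near-optimal in
    expectation, some deterministic shift attains a near-optimal cost.
    `solve_for_shift` returns a (cost, solution) pair.
    """
    return min((solve_for_shift(x) for x in shifts), key=lambda sol: sol[0])
```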
This completes the proof of Theorem~\ref{thm:vehicle_routing}.
\section{Introduction}
\label{sec:intro}
Optimal stopping problems are widespread in economics and finance (and other fields) where they are used to model asset sales, investment times and the exercise of American-style options. In typical applications an agent observes a stochastic process,
possibly representing the price of an asset, and chooses a stopping time in order to maximise the expected discounted value of a payoff which is contingent upon the process evaluated at that time.
Implicit in the classical version of the above problem is the idea that the agent can sell the asset (decide to invest, exercise the option) at any moment of their choosing, and for financial assets traded on an exchange this is a reasonable assumption. However, for other classes of assets, including those described as `real assets' by, for example, Dixit and Pindyck~\cite{DixitPindyck:94}, this assumption may be less plausible. Here we are motivated by an interpretation of the optimal stopping problem above in which an agent has an asset for sale, but can only complete the sale if they can find a buyer, and candidate buyers are only available at certain isolated instants of time.
In this work we model the arrival of candidate purchasers as the event times of a Poisson process. When a candidate purchaser arrives the agent can choose to sell to that purchaser, or not; if a sale occurs then the problem terminates, otherwise the candidate purchaser is lost, and the problem continues. If the Poisson process has a constant rate, then the analysis falls into the framework studied by Dupuis and Wang \cite{DupuisWang:03} and Lempa~\cite{Lempa:07}.
Dupuis and Wang \cite{DupuisWang:03} and Lempa~\cite{Lempa:07} discuss optimal stopping problems, but closely related is the work of Rogers and Zane~\cite{RogersZane:99} in the context of portfolio optimisation. Rogers and Zane consider an optimal investment portfolio problem under the hypothesis that the portfolio can only be rebalanced at event times of a Poisson process of constant rate, see also Pham and Tankov~\cite{PhamTankov:08} and Ang, Papanikolaou and Westerfield~\cite{AngPapanikolaouWesterfield:14}. The study of optimal stopping problems when the stopping times are constrained to be event times of an exogenous process is relatively unexplored, but Guo and Liu~\cite{GuoLiu:05} study a problem in which the aim is to maximise a payoff contingent upon the maximum of an exponential Brownian motion and Menaldi and Robin~\cite{MenaldiRobin:16} extend the analysis of Dupuis and Wang~\cite{DupuisWang:03} to consider non-exponential inter-arrival times. As a generalisation of optimal stopping, Liang and Wei~\cite{LiangWei:16} consider an optimal switching problem when the switching times are constrained to be event times of a Poisson process.
In this article we consider a more sophisticated model of optimal stopping under constraints in which the agent may expend effort in order to increase the frequency of the arrival times of candidate buyers. (Note that the problem remains an optimal stopping problem, since at each candidate sale opportunity the agent optimises between continuing and selling.) In our model the agent's instantaneous effort rate $E_t$ affects the instantaneous rate $\Lambda_t$ of the Poisson process, so that the candidate sale opportunities become the event times of an inhomogeneous Poisson process, where the agent chooses the rate. However, this effort is costly, and the agent incurs a cost per unit time which depends on the instantaneous effort rate. The objective of the agent is to maximise the expected discounted payoff net of the expected discounted costs. In particular, if $X=(X_t)_{t \geq 0}$ with $X_0=x$ is the asset price process, $g$ is the payoff function, $\beta$ is the discount factor, $E = (E_t)_{t \geq 0}$ is the chosen effort process, $\Lambda = (\Lambda_t)_{t \geq 0}$ given by $\Lambda_t = \Psi(E_t)$ is the instantaneous rate of the Poisson process, $C_E$ is the cost function so that the cost incurred per unit time is $C_E(E_t)$, and $T_\Lambda$ is the set of event times of a Poisson process, rate $\Lambda$, then the objective of the agent is to maximise the objective function
\begin{equation}
\label{eq:mainproblem}
\mathbb E^x \left[ e^{- \beta \tau} g(X_\tau) - \int_0^\tau e^{-\beta s} C_E(E_s) ds \right]
\end{equation}
over admissible effort processes $E$ and $T_\Lambda$-valued stopping times $\tau$.
Our goal is to solve for the value function, the optimal stopping time and the optimal effort, as represented by the optimal control process $E$.
In fact, typically it is possible to use the rate of the Poisson process as the control variable by setting $C(\Lambda_t) = C_E(E_t) = C_E \circ \Psi^{-1} (\Lambda_t)$. In the context of the problem it is natural to assume that $\Psi$ and $C_E$ are increasing functions, so that $\Psi^{-1}$ exists, and $C$ is increasing.
Our focus is on the case where $X$ is an exponential Brownian motion, but the general case of a regular, time-homogeneous diffusion can be reduced to this case at the expense of slightly more complicated technical conditions. See Lempa~\cite{Lempa:07} for a discussion in the constant arrival rate case. We begin by rigorously stating the form of the problem we will study. Then we proceed to solve for the effort process and stopping rule in \eqref{eq:mainproblem}. It turns out that there are two distinctive cases depending on the shape of $C$ or more precisely on the finiteness or otherwise of $\lim_{\lambda \uparrow \infty} \frac{C(\lambda)}{\lambda}$. Note that it is not clear {\em a priori} what shape $C= C_E \circ \Psi^{-1}$ should take, beyond the fact that it is increasing. Generally one might expect an increasing marginal cost of effort and a law of diminishing returns to effort which would correspond to convex $C_E$, concave $\Psi$ and convex $C$. But a partial reverse is also conceivable: effort expended below a threshold has little impact, and it is only once effort has reached a critical threshold that extra effort readily yields further stopping opportunities; in this case $\Psi$ would be convex and $C$ might be concave.
One outcome of our analysis is that the agent exerts effort to create a positive stopping rate only if they are in the region where stopping is optimal. Outside this region, they typically exert no effort, and there are no stopping opportunities. Typically therefore, (although we give a counterexample in an untypical case) the agent stops at the first occasion where stopping is possible and the optimal stopping element of the problem is trivial.
\section{The set-up}
We work on a filtered probability space $(\Omega, \mathcal F, \mathbb P, \mathbb F = (\mathcal F_t)_{t \geq 0})$ which satisfies the usual conditions and which supports a Brownian motion and an independent Poisson process. On this space there is a regular, time-homogeneous diffusion process $X= (X_t)_{t \geq 0}$ driven by the Brownian motion. We will assume that $X$ is exponential Brownian motion with volatility $\sigma$ and drift $\mu$ and has initial value $x$; then
\[ dX_t = \sigma X_t \, dW_t + \mu X_t \, dt, \hspace{10mm} X_0 = x. \]
Here $\mu$ and $\sigma$ are constants with $\mu<\beta$. The agent has a perpetual option with increasing payoff $g: \mathbb R_+ \mapsto \mathbb R_+$ of linear growth. In our examples $g$ is an American call: $g(x)=(x-K)_+$. Then, in the classical setting, the problem of the agent would be to maximise $\mathbb E[e^{-\beta \tau} g(X_\tau)]$ over stopping times $\tau$. Note that the linear growth condition, together with $\mu < \beta$, is sufficient to ensure that this classical problem is well-posed.
We want to introduce finite liquidity into this problem, in the sense that we want to incorporate the phenomena that in order to sell the agent needs to find a buyer, and such buyers are in limited supply. In the simplest case buyers might arrive at event times of a time-homogeneous Poisson process with rate $\lambda$, and then at each event time of the Poisson process the agent faces a choice of whether to sell to this buyer at this moment or not; if yes then the sale occurs and the optimal stopping problem terminates, if no then the buyer is irreversibly lost, and the optimal stopping problem continues. We want to augment this problem to allow the agent to expend effort (via networking, research or advertising) in order to increase the flow of buyers. There is a cost of searching in this way --- the higher the effort the higher the rate of candidate stopping times but also the higher the search costs. Note that once the asset is sold, effort expended on searching ceases, and search costs thereafter are zero by fiat.
Let $\mathcal A_E$ be the set of admissible effort processes. We assume that $E \in \mathcal A_E$ if $E = (E_t)_{t \geq 0}$ is an adapted process such that $E_t \in I_{E}$ for all $t \in [0,\infty)$ where $I_{E} \subset \mathbb R_+$ is an interval which is independent of time. Then, since $\Lambda_t = \Psi(E_t)$ we find $E \in \mathcal A_E$ if and only if $\Lambda \in \mathcal A$ where $\Lambda \in \mathcal A$ if $\Lambda$ is adapted and $\Lambda_t \in I$ for all $t$ where $I = \Psi(I_{E})$. Note that $I$ is an interval in $\mathbb R_+$, and we take the lower and upper endpoints to be $\underline{\lambda}$ and $\overline{\lambda}$ respectively.
Recall that $T_\Lambda$ is the set of event times of an inhomogeneous Poisson process with rate $\Lambda$. Then $T_\Lambda = \{ T^\Lambda_1, T^\Lambda_2, \ldots \}$ where $0<T^\Lambda_1$ and $T^{\Lambda}_n < T^{\Lambda}_{n+1}$ almost surely. Let $\mathcal T(T_\Lambda)$ be the set of $T_\Lambda$-valued stopping times and let $\mathcal A$ be the set of admissible rate functions. Then, after a change of independent variable the problem is to find
\begin{equation}
\label{eq:Hdef}
H(x) = \sup_{\Lambda \in \mathcal A} \sup_{\tau \in \mathcal T(T_\Lambda)} \mathbb E^x \left[ e^{- \beta \tau} g(X_\tau) - \int_0^\tau e^{- \beta s} C(\Lambda_s) ds \right],
\end{equation}
together with the optimal rate function $\Lambda^* = (\Lambda^*_t)_{t \geq 0}$ and optimal stopping rule $\tau^* \in \mathcal T(T_\Lambda)$.
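For intuition, event times of an inhomogeneous Poisson process with a bounded rate, such as the candidate sale times in $T_\Lambda$, can be simulated by Lewis--Shedler thinning. The following sketch is illustrative only, assuming a deterministic rate function bounded by `lam_max`; it returns the first event time.

```python
import random

def first_event_by_thinning(rate, lam_max, t_max, rng=random):
    """First event time of an inhomogeneous Poisson process whose rate
    function `rate` satisfies rate(t) <= lam_max, via Lewis-Shedler
    thinning.  Returns None if no event occurs before t_max.
    """
    t = 0.0
    while True:
        t += rng.expovariate(lam_max)         # candidate event at rate lam_max
        if t >= t_max:
            return None
        if rng.random() < rate(t) / lam_max:  # accept with prob rate(t)/lam_max
            return t
```

With a constant rate $\lambda$ every candidate is accepted and the first event time is simply exponential with mean $1/\lambda$, matching the constant-rate setting of Dupuis and Wang.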
In addition to the set of admissible controls, we also consider the subset of integrable controls $\mathcal I \subseteq \mathcal A$ where $\Lambda \in \mathcal I = \mathcal I(I,C)$ is an adapted process with $\Lambda_t \in I$ for which $\mathbb E^x[\int_0^\infty e^{- \beta s} C(\Lambda_s) ds ] < \infty$. As mentioned above we have that $\mathbb E^x \left[ e^{- \beta \tau} g(X_\tau) \right] < \infty$ for any admissible $\Lambda$ and any stopping rule, and hence there is no loss of generality in restricting the search for the optimal rate function to the set of integrable controls.
The stopping rule is easily identified in feedback form. Let $T^0_\Lambda = T_\Lambda \cup \{ 0 \}$ and let $H^0$ be the value of the problem {\em conditional on there being a buyer available at time 0}, so that
\[ H^0(x) = \sup_{\Lambda \in \mathcal A} \sup_{\tau \in \mathcal T(T^0_\Lambda)} \mathbb E^x \left[ e^{- \beta \tau} g(X_\tau) - \int_0^\tau e^{- \beta s} C(\Lambda_s) ds \right]. \]
Then, it is optimal to stop immediately if and only if the value of stopping is at least as large as the value of continuing and
\[ H^0(x) = \max \{ g(x), H(x) \} . \]
It follows that if $\Lambda = (\Lambda_t)_{t \geq 0}$ is a fixed admissible rate process, and if $H^0_\Lambda$ and $H_\Lambda$ denote the respective value functions then, writing $T_1 = T^\Lambda_1$ for the first event time of the Poisson process with rate $\Lambda$,
\begin{eqnarray*}
H_\Lambda(x) & = & \sup_{\tau \in \mathcal T(T_\Lambda)} \mathbb E^x \left[ e^{- \beta \tau} g(X_\tau) - \int_0^\tau e^{- \beta s} C(\Lambda_s) ds \right] \\
& = & \sup_{\tau \in \mathcal T(T_\Lambda)} \mathbb E^x \left[ e^{- \beta T_1} \mathbb E \left[ \left. e^{-\beta (\tau - T_1)} g(X_\tau) - \int_{T_1}^{\tau} e^{- \beta (s - T_1)} C(\Lambda_s) ds \right| \mathcal F_{T_1} \right] - \int_0^{T_1} e^{-\beta s} C(\Lambda_s) ds \right] \\
& = & \mathbb E^x \left[ e^{-\beta T_1} H^0_\Lambda(X_{T_1}) - \int_0^{\infty} I_{ \{ s < T_1 \} }e^{-\beta s} C(\Lambda_s) ds \right] \\
& = & \mathbb E^x \left[ \int_0^\infty \Lambda_s e^{- \int_0^s \Lambda_u du} e^{-\beta s} H^0_\Lambda(X_{s}) ds - \int_0^{\infty} e^{- \int_0^s \Lambda_u du} e^{-\beta s} C(\Lambda_s) ds \right] \\
& = & \mathbb E^x \left[ \int_0^\infty e^{- (\beta s + \int_0^s \Lambda_u du)} (\Lambda_s H^0_\Lambda(X_{s}) - C(\Lambda_s)) ds \right] .
\end{eqnarray*}
Taking a supremum over admissible rate processes $\Lambda \in \mathcal A$ we find
\[ H(x) = \sup_{\Lambda \in \mathcal A} \mathbb E^x \left[ \int_0^\infty e^{- \int_0^t (\beta + \Lambda_s) ds} \left( \Lambda_t H_\Lambda^0(X_t) - C(\Lambda_t) \right) dt \right] ,
\]
and this is the problem we aim to solve. Writing $\Lambda^*$ for the optimal rate process we expect $H$ to solve
\[ H(x) = \mathbb E^x \left[ \int_0^\infty e^{- \int_0^t (\beta + \Lambda^*_s) ds} \left( \Lambda^*_t \{ g(X_t) \vee H(X_t)\} - C(\Lambda^*_t) \right) dt \right] .
\]
\subsection{Some results for classical problems}
\label{ssec:classical}
For future reference we record some results for classical problems in which agents can stop at any instant.
First, let $\mathcal T([0,\infty))$ be the set of all stopping times and define
\[ w_K(x) := \sup_{\tau \in \mathcal T([0,\infty))} \mathbb E^x[ e^{-\beta \tau}(X_\tau - K)_+ ]. \]
(Imagine a standard, perpetual, American-style call option with strike $K$, though valuation is not taking place under the equivalent martingale measure.)
Classical arguments (McKean~\cite{McKean:65}, Peskir and Shiryaev~\cite{PeskirShiryaev:06}) give that $0 < w_K < x$ (the upper bound holds since we are assuming $\beta > \mu$) and that there exists a constant $L = \frac{\theta}{\theta - 1} K$ where $\theta = \left( \frac{1}{2} - \frac{\mu}{\sigma^2} \right) + \sqrt{\left( \frac{1}{2} - \frac{\mu}{\sigma^2} \right)^2 + \frac{2 \beta}{\sigma^2}} $ such that
\[ w_K(x) = \left\{ \begin{array}{ll} (x-K)_+, & x> L; \\
(L-K) L^{-\theta} x^\theta, & 0 < x \leq L .\end{array} \right. \]
For future reference set $\phi = \left( \frac{1}{2} - \frac{\mu}{\sigma^2} \right) - \sqrt{\left( \frac{1}{2} - \frac{\mu}{\sigma^2} \right)^2 + \frac{2 \beta}{\sigma^2}}$. Then $\phi<0<1<\theta$ and $\theta$ and $\phi$ are the roots of $Q_0=0$ where $Q_\lambda(\psi)=\frac{1}{2} \sigma^2 \psi(\psi-1) + \mu \psi - (\beta+ \lambda)$.
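The closed forms above are easy to sanity-check numerically. The sketch below (with illustrative parameter values) computes $\theta$, $\phi$ and the value function $w_K$, and can be used to confirm that $Q_0(\theta)=Q_0(\phi)=0$ and that value matching holds at the threshold $L$.

```python
import math

def roots_theta_phi(mu, sigma, beta):
    # closed-form roots of Q_0(psi) = (sigma^2/2) psi (psi - 1) + mu psi - beta
    a = 0.5 - mu / sigma**2
    s = math.sqrt(a * a + 2.0 * beta / sigma**2)
    return a + s, a - s

def Q(psi, mu, sigma, beta, lam=0.0):
    return 0.5 * sigma**2 * psi * (psi - 1.0) + mu * psi - (beta + lam)

def w_K(x, K, mu, sigma, beta):
    # perpetual-call value: exercise above L = theta K / (theta - 1)
    theta, _ = roots_theta_phi(mu, sigma, beta)
    L = theta * K / (theta - 1.0)
    return x - K if x > L else (L - K) * (x / L)**theta
```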
Second, define
\begin{equation} w_{K,\epsilon,\delta}(x) = \sup_{\tau \in \mathcal T([0,\infty))} \mathbb E^x \left[ e^{-\beta \tau} \{ (X_\tau - K)_+ - \epsilon \} - \delta \int_0^\tau e^{-\beta s} ds \right]. \label{eq:wKJD}
\end{equation}
(Imagine a perpetual, American-style call option with strike $K$, in which the agent pays a fee or transaction cost $\epsilon$ to exercise the option, and pays a running cost $\delta$ per unit time until the option is exercised.) Note that $w_{K,0,0} \equiv w_K$. It turns out that there are two cases. In the first case, $\epsilon \geq \delta/\beta$: when $X$ is small it is more cost-effective to pay the running cost indefinitely than to pay the exercise fee. We find
\[ w_{K,\epsilon,\delta}(x) = w_{K+\epsilon - \delta/\beta}(x) - \delta/\beta . \]
In the second case, $\epsilon < \delta/\beta$: when $X$ is small it is cost-effective to stop immediately, even though the payoff is zero, because paying the fee is cheaper than paying the running cost indefinitely. In this case we find that $w = w_{K,\epsilon,\delta}$, together with a pair of thresholds $l^* = l^*(K,\epsilon,\delta)$ and $L^*=L^*(K,\epsilon,\delta)$ with $0 < l^* < K + \epsilon < L^*$, satisfies the variational problem
\[ \{ \mbox{$w$ is $C^1$; $w = -\epsilon$ on $(0,l^*)$; $\mathcal L w - \beta w = \delta$ on $(l^*,L^*)$; $w=x-K - \epsilon$ on $(L^*, \infty)$ } \} \]
Returning to our problem with limited stopping opportunities, one immediate observation is that $H(x) \leq w_K(x)$. Conversely, if $\Lambda \equiv 0$ is admissible then $H(x) \geq - \frac{C(0)}{\beta}$.
\section{Heuristics}
From the Markovian structure of the problem we expect that the (unknown) value function $H$ and optimal rate function $\Lambda^*$ are time-homogeneous functions of the asset price only.
Let $M^\Lambda = (M^\Lambda_t)_{t \geq 0}$ be given by
\[ M^\Lambda_t = e^{-\int_0^t ( \beta + \Lambda_s) ds} H(X_t) + \int_0^t e^{-\int_0^u ( \beta + \Lambda_s) ds} \left[ \Lambda_u H^0(X_u) - C(\Lambda_u) \right] du, \]
and let $\mathcal L^X$ denote the generator of $X$ so that $\mathcal L^X f(x) = \frac{\sigma^2 x^2}{2} f''(x) + \mu x f'(x)$. Assume that the value function under the optimal strategy $H$ is $C^2$. Then, by It\^{o}'s formula,
\[ dM^{\Lambda}_t = e^{-\int_0^t ( \beta + \Lambda_s) ds}\left\{ \left( \mathcal L^X H(X_t) - (\beta + \Lambda_t) H(X_t) + \Lambda_t H^0(X_t) - C(\Lambda_t) \right) dt + \sigma X_t H'(X_t) dW_t \right\}. \]
We expect that $M^\Lambda$ is a super-martingale for any choice of $\Lambda$, and a martingale for the optimal choice. Thus we expect
\[ \mathcal L^X H(X_t) - \beta H(X_t) - \inf_{\Lambda_t} \left\{ C(\Lambda_t) - \Lambda_t [ H^0(X_t) - H(X_t) ] \right\} = 0. \]
Let $\tilde{C}: \mathbb R_+ \mapsto \mathbb R$ be the concave conjugate of $C$ so that $\tilde{C}(z) = \inf_{\lambda \geq 0} \{C(\lambda) - \lambda z \}$.
Then we find that $H$ solves
\begin{equation} \mathcal L^X H - \beta H - \tilde{C}(H^0 - H) = 0, \label{eq:Hheuristic} \end{equation}
and a best choice of rate function is
$ \Lambda^*_t = \Lambda^*(X_t)$ where
\begin{equation} \Lambda^*(x) = \Theta(H^0(x)-H(x)) \label{eq:controlq}
\end{equation}
and $\Theta(z) = \mbox{arginf}_\lambda \{ C(\lambda) - \lambda z \}$.
Note that $H^0-H = (g-H)_+$ and that \eqref{eq:Hheuristic} is a second order differential equation and will have multiple solutions. The boundary behaviour near zero and infinity will determine which solution fits the optimal stopping problem.
\subsection{First Example: Quadratic cost functions}
\label{ssec:quadratic}
Suppose $g(x)=(x-K)_+$ for fixed $K>0$. Using terminology from the study of American options and optimal stopping we say that if $X_t > K$ then the process is in-the-money, if $X_t<K$ then the process is out-of-the-money and the region in the domain of $X$ where $\Lambda^*(X)$ is zero is the continuation region $\mathcal C$, and $\mathcal S := \mathbb R^+ \setminus \mathcal C$ is the selling region.
Suppose the range of possible values for the rate process is $I=[0,\infty)$ and consider a quadratic cost function $C(\lambda) = a + b \lambda + c\frac{\lambda^2}{2}$ with $a \geq 0$, $b \geq 0$ and $c>0$. Then $\tilde{C}(z) = a - \frac{[(z-b)_+]^2}{2c}$.
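This closed form for $\tilde{C}$ is easy to sanity-check. The sketch below (illustrative parameter values, not taken from the text) compares it with a brute-force infimum over a grid of $\lambda$ values:

```python
# Brute-force check of the concave conjugate of the quadratic cost
# C(lambda) = a + b*lambda + (c/2)*lambda^2; parameter values are illustrative.
a, b, c = 1.0, 0.5, 2.0

def cost(lam):
    return a + b * lam + 0.5 * c * lam ** 2

def conjugate_numeric(z, lam_max=50.0, n=200000):
    # inf_{lambda >= 0} { C(lambda) - lambda * z } over a fine grid
    return min(cost(k * lam_max / n) - (k * lam_max / n) * z
               for k in range(n + 1))

def conjugate_closed_form(z):
    # tilde-C(z) = a - ((z - b)_+)^2 / (2c)
    return a - max(z - b, 0.0) ** 2 / (2.0 * c)

for z in (0.0, 0.3, 1.0, 3.0):
    assert abs(conjugate_numeric(z) - conjugate_closed_form(z)) < 1e-5
```

The minimiser $(z-b)_+/c$ is then exactly the map $\Theta$ of \eqref{eq:controlq} for this cost function.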
Consider first the behaviour of the value function near zero. If $a=0$ then $C(0)=0$, and when $X$ is close to zero the agent may choose not to search for buyers, a strategy which incurs zero cost. There is little chance of the process ever being in-the-money, but nonetheless the agent delays the sale indefinitely. We expect that the continuation region is $(0,L^*)$ for some threshold $L^*$.
Now suppose $a>0$. Then there is a cost to delaying the sale, even when $\Lambda = 0$. If $X$ is small then it is preferable to sell the asset even though the process is out-of-the-money, because in our problem there are no search costs once the asset is sold. In this case we expect the agent to search for buyers when $X$ is small, in order to reduce further costs. The continuation region will then be $(\ell^*, L^*)$ for some $0<\ell^* < K < L^* < \infty$.
Consider now the behaviour for large $x$. In this case we can look for an expansion for the solution of \eqref{eq:Hheuristic} of the form
\[ H(x) = A_1 x + A_{1/2} \sqrt{x} + A_0 + O(x^{-1/2}) \]
for constants $A_1$, $A_{1/2}$ and $A_0$ to be determined. Using the fact that $H(x) \leq w_K(x)$ so that $H$ is of at most linear growth we find
\begin{equation}
H(x) = x - \sqrt{2c(\beta - \mu)} \sqrt{x} - \left\{ K + b - c\left[ \beta - \frac{\mu}{2} + \frac{\sigma^2}{8} \right] \right\} + \ldots \label{eq:Gexpansion}
\end{equation}
Numerical results (see Figure~\ref{fig:valuefun}) show that this expansion is very accurate for large $x$.
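The expansion can also be checked directly: substituting the three displayed terms into the differential equation (in the form of \eqref{eq:Gdef} below) leaves a residual that stays bounded while each individual term grows linearly in $x$, so the relative error vanishes. A sketch with the parameter values of Figure~\ref{fig:valuefun}:

```python
import math

# Residual of the large-x expansion in the equation
#   (sigma^2/2) x^2 H'' + mu x H' - beta H = a - ((g - H - b)_+)^2 / (2c),
# with g(x) = x - K; parameters as in Figure 1.
beta, mu, sigma, K, a, b, c = 5.0, 3.0, 2.0, 1.0, 0.0, 0.0, 2.0

p = math.sqrt(2 * c * (beta - mu))                 # coefficient of sqrt(x)
q = K + b - c * (beta - mu / 2 + sigma ** 2 / 8)   # constant term

def residual(x):
    H = x - p * math.sqrt(x) - q
    dH = 1 - p / (2 * math.sqrt(x))
    d2H = p / (4 * x ** 1.5)
    z = max(x - K - H - b, 0.0)
    return (0.5 * sigma ** 2 * x ** 2 * d2H + mu * x * dH - beta * H
            - (a - z ** 2 / (2 * c)))

# the x and sqrt(x) orders cancel, so the residual is O(1) while H is O(x)
assert abs(residual(1e8)) < 100.0
assert abs(residual(1e8)) / 1e8 < abs(residual(1e4)) / 1e4
```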
\subsubsection{Purely quadratic cost: $a=0=b$}
\label{sssec:pure}
In this case we expect that the continuation region is $(0,L^*)$ for a threshold level $L^*$ to be determined. For a general threshold $L$, and writing $H_L$ for the solution to \eqref{eq:Hheuristic} with $H(0)=0$ and $H(L)=L-K$ we find that $H_L$ solves
\begin{equation}
\mathcal L^X h - \beta h = -\frac{1}{2c} (\{g-h\}_+)^2,
\label{eq:ODEa=0}
\end{equation}
and then that $H_L(x) = \frac{L-K}{L^\theta}x^\theta$ on $x \leq L$. On $(L,\infty)$, $H_L$ solves \eqref{eq:ODEa=0} subject to $H_L(L) = (L-K)$ and $H_L'(L) = \theta \frac{L-K}{L}$. This procedure gives us a family $(H_L)_{L \geq K}$ of potential value functions, each of which is $C^1$. Finally we can determine the threshold level $L$ we need by choosing the value $L^*$ for which $H_{L^*}$ has linear growth at infinity.
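The homogeneous part of this construction is easily checked mechanically. The sketch below (parameters as in Figure~\ref{fig:valuefun}; the threshold is a trial value, not the optimum) verifies that $(L-K)(x/L)^\theta$ solves $\mathcal L^X h - \beta h = 0$ on $(0,L)$ and satisfies the pasting conditions at $L$:

```python
import math

# On (0, L) we have H_L >= g, so the right-hand side of the ODE vanishes and
# H_L(x) = (L - K)(x/L)^theta, with theta the positive root of
# (sigma^2/2) t(t-1) + mu t - beta = 0. Parameters as in Figure 1; L is a
# trial threshold, not the optimal L*.
beta, mu, sigma, K = 5.0, 3.0, 2.0, 1.0
s2h = 0.5 * sigma ** 2
theta = (-(mu - s2h) + math.sqrt((mu - s2h) ** 2 + 4 * s2h * beta)) / (2 * s2h)

L = 1.35
H = lambda x: (L - K) * (x / L) ** theta
dH = lambda x: theta * (L - K) * x ** (theta - 1) / L ** theta
d2H = lambda x: theta * (theta - 1) * (L - K) * x ** (theta - 2) / L ** theta

for x in (0.1, 0.5, 1.0, 1.3):
    assert abs(s2h * x ** 2 * d2H(x) + mu * x * dH(x) - beta * H(x)) < 1e-9

assert abs(H(L) - (L - K)) < 1e-12               # value matching at L
assert abs(dH(L) - theta * (L - K) / L) < 1e-12  # smooth fit at L
```

On $(L,\infty)$ one would continue the solution numerically from these initial conditions and tune $L$ until the solution has linear growth.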
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{K=1zoomout.pdf}
\caption{For large $x$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{K=1zoomin.pdf}
\caption{For small $x$}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K,a,b,c) =(5, 3 ,2,1,0,0,2)$. In both sub-figures the solid curved line represents $H_{L^*}$; the straight line represents $g \vee H_{L^*}$ on $\{ x : g(x) \geq H_{L^*}(x) \}$ and the dashed line in the left sub-figure is the expansion for $H$ in \eqref{eq:Gexpansion}. The optimal threshold is seen in the right sub-figure to be at $L^*= 1.35$. }
\label{fig:valuefun}
\end{figure*}
The linear growth solution $H_{L^*}$ is shown in Figure~\ref{fig:valuefun}, both for large $x$ and for moderate $x$. From Figure~\ref{fig:valuefun}(b) we see that the continuation region is $\mathcal C = (0,1.35)$ and that the stopping region $\mathcal S = [1.35,\infty)$. We also see that the expansion for $H$ given in \eqref{eq:Gexpansion} gives a good approximation of our numerical solution for large $x$.
\begin{figure}[H] \center
\includegraphics [width=0.45\linewidth] {lambda.pdf}
\caption{ $(\beta,\mu,\sigma,K,a,b,c) =(5, 3 ,2,1,0,0,2)$; this figure plots the optimal control $\Lambda^*$ given by \eqref{eq:controlq} as a function of wealth level $x$. }
\label{fig:lambda}
\end{figure}
Figure~\ref{fig:lambda} shows the optimal control. We see that $\Lambda^*$ is zero on the continuation region $\mathcal C=(0,L^*)$ and that $\Lambda^*$ is increasing and concave on the stopping region $\mathcal S=[L^*,\infty)$. The agent behaves rationally: on the continuation region, where continuing is worth more than stopping, the agent is unwilling to stop, and this is reflected in the minimal effort spent on searching ($\Lambda^*(x)=0$ for all $x \in \mathcal C$). On the stopping region, stopping becomes more valuable relative to continuing as the price process moves deeper in-the-money, and the agent is incentivised to spend more effort searching for stopping opportunities.
We discuss the cases of $a>0$ and $b>0$ in Section~\ref{sec:examples}.
\section{Verification}
In this section we show that the heuristics are correct, and that the value to the stochastic problem is given by the appropriate solution of the differential equation. Although the details are different, the structure of the proof follows Dupuis and Wang~\cite{DupuisWang:03}.
Suppose, as throughout, that $X$ is exponential Brownian motion with $\mu <\beta$ and $g$ is of linear growth.
\begin{defn}
$(\tau, \Lambda)$ is admissible if $\Lambda$ is a non-negative, $I$-valued, adapted process and $\tau \in \mathcal T(T^\Lambda)$.
\end{defn}
Note that a consequence of the definition is that we insist that $\tau \leq T^{\Lambda}_\infty := \lim_n T^\Lambda_n$. Moreover, we may have $T^\Lambda_k = \infty$: in this case we may take $\tau = \infty$, whence $e^{-\beta \tau} g(X_\tau) = 0$, noting that $\lim_{t \uparrow \infty} e^{-\beta t} g(X_t)=0$ almost surely.
\begin{defn}
$(\tau, \Lambda)$ is integrable if $(\tau, \Lambda)$ is admissible and $\mathbb E[ \int_0^\tau e^{- \beta s} C(\Lambda_s) ds ] < \infty$.
\end{defn}
Clearly, if $(\tau,\Lambda)$ is integrable, then $(T^\Lambda_1, \Lambda)$ is integrable.
\begin{lem}
\label{lem:Gcomparison}
Let $G$ be an increasing, convex solution to
\begin{equation} \mathcal L^X G - \beta G - \tilde{C}((g - G)_+) = 0, \label{eq:Gdef} \end{equation}
and suppose that $G$ is of at most linear growth. Set $G^0 = G \vee g$.
Then for any integrable, admissible strategy $(\tau, \Lambda)$,
\begin{equation}
G(x)
\geq \mathbb E^x \left[ e^{-\beta T^\Lambda_1} G^0(X_{T^\Lambda_1}) I_{ \{ T^\Lambda_1 < \infty \} } - \int_0^{T^\Lambda_1} e^{- \beta s} C(\Lambda_s) ds \right]. \label{eq:Hleq}
\end{equation}
\end{lem}
\begin{proof}
Since $g$ and $G$ are of linear growth we may assume $G^0(x) \leq \kappa_0 + \kappa_1 x$ for some constants $\kappa_i \in (0,\infty)$.
Let $Z_t = e^{- \beta t - \int_0^t \Lambda_s ds}G(X_t) - \int_0^t e^{-\beta s - \int_0^s \Lambda_u du} F_s ds$ where
\[ F_s = F(g(X_s), G(X_s), \Lambda_s) := (g(X_s) -G(X_s))_+ \Lambda_s + \tilde{C}((g(X_s) -G(X_s))_+) \leq C(\Lambda_s). \]
Then, using the definition of $G$
\begin{eqnarray*}
dZ_t & = & e^{- \beta t - \int_0^t \Lambda_s ds} \left\{ -(\beta + \Lambda_t) G + \mathcal L^X G - (g-G)_+ \Lambda_t - \tilde{C}((g-G)_+) \right\} dt + dN_t \\
& = & e^{- \beta t - \int_0^t \Lambda_s ds} \left\{ - \Lambda_t [G + (g-G)_+] \right\} dt + dN_t \\
& = & - e^{- \beta t - \int_0^t \Lambda_s ds} \Lambda_t G^0(X_t) dt + dN_t
\end{eqnarray*}
where $N_t = \int_0^t e^{- \beta s - \int_0^s \Lambda_u du} \sigma X_s G'(X_s)dW_s$. Our hypotheses on $G$ allow us to conclude that $N=(N_t)_{t \geq 0}$ is a martingale.
It follows that $Z_0 = \mathbb E[Z_t + \int_0^t e^{- \beta s - \int_0^s \Lambda_u du} \Lambda_s G^0(X_s) ds]$ or equivalently
\begin{eqnarray} G(x) & = & \mathbb E^x \left[ e^{- \beta t - \int_0^t \Lambda_s ds} G(X_t) + \int_0^t e^{- \beta s - \int_0^s \Lambda_u du} \left( \Lambda_s(g(X_s) \vee G(X_s)) - F_s \right) ds \right] \nonumber \\
& \geq & \mathbb E^x \left[ e^{- \beta t - \int_0^t \Lambda_s ds} G(X_t)+ \int_0^t e^{- \beta s - \int_0^s \Lambda_u du} \left( \Lambda_s G^0(X_s) - C( \Lambda_s) \right) ds \right].
\label{eq:Ht}
\end{eqnarray}
Since $X$ is geometric Brownian motion and $\beta > \mu$ we have that $X^{\beta,*} := \sup_{u \geq 0} \{ e^{- \beta u} X_u \}$ is in $L^1$. Then
\begin{eqnarray*}
e^{- \beta t - \int_0^t \Lambda_s ds} G(X_t) &\leq& \kappa_0 + \kappa_1 X^{\beta,*}, \\
\int_0^t e^{- \beta s - \int_0^s \Lambda_u du} \Lambda_s G^0(X_s) ds & \leq & (\kappa_0 + \kappa_1 X^{\beta,*}) \int_0^t \Lambda_s e^{- \int_0^s \Lambda_u du} ds \leq\kappa_0 + \kappa_1 X^{\beta,*}, \\
\int_0^t e^{- \beta s - \int_0^s \Lambda_u du} C(\Lambda_s) ds &\leq& \int_0^\infty e^{- \beta s - \int_0^s \Lambda_u du} C(\Lambda_s) ds,
\end{eqnarray*}
and, since $(T^\Lambda_1, \Lambda)$ is integrable by hypothesis,
\[ \mathbb E \left[ \int_0^\infty e^{- \beta s - \int_0^s \Lambda_u du} C(\Lambda_s) ds \right] = \mathbb E \left[ \int_0^{T^\Lambda_1} e^{- \beta s} C(\Lambda_s) ds \right] < \infty. \]
Then Dominated Convergence, together with the fact that $e^{-\beta t}X_t \rightarrow 0$ gives \eqref{eq:Hleq}.
\end{proof}
\begin{lem}
Let $(\tau, \Lambda)$ be an integrable strategy.
Define $Y = (Y_n)_{n \geq 0}$ by
\[ Y_n = e^{-\beta (T^\Lambda_n \wedge \tau)} G^0(X_{T^\Lambda_n \wedge \tau}) I_{ \{ T^\Lambda_n \wedge \tau < \infty \}} - \int_0^{T^\Lambda_n \wedge \tau} e^{- \beta s} C(\Lambda_s) ds \]
where $T^\Lambda_0=0$. Define $\mathcal G_n = \mathcal F_{T^\Lambda_n}$ and set $\mathbb G = (\mathcal G_n)_{n \geq 0}$.
Then $Y$ is a uniformly integrable $(\mathcal G_n)_{n \geq 0}$-supermartingale.
\label{lem:Ysupermg}
\end{lem}
\begin{proof}
We have
\[ |Y_n| \leq \kappa_0 + \kappa_1 X^{\beta,*} + \int_0^\tau e^{-\beta s} C(\Lambda_s) ds \in L^1. \]
Moreover, on $T^\Lambda_{n-1}<\infty$ and $\tau > T^\Lambda_{n-1}$, writing $\tilde{T}$ as shorthand for $T^\Lambda_n-T^\Lambda_{n-1}$ and using $\tau \geq T^\Lambda_n$ and Lemma~\ref{lem:Gcomparison} for the crucial first inequality,
\begin{eqnarray*}
\mathbb E[Y_n | \mathcal G_{n-1}] & = & e^{- \beta T^\Lambda_{n-1}} \mathbb E \left[ \left. e^{- \beta \tilde{T}} G^0(X_{T^\Lambda_n}) I_{ \{ T^\Lambda_n < \infty \}} - \int_{T^\Lambda_{n-1}}^{T^\Lambda_n} e^{- \beta s} C(\Lambda_s) ds \right| \mathcal G_{n-1} \right] - \int_0^{T^\Lambda_{n-1}} e^{- \beta s} C(\Lambda_s) ds \\
& \leq & e^{- \beta T^\Lambda_{n-1}} G(X_{T^\Lambda_{n-1}}) - \int_0^{T^\Lambda_{n-1}} e^{- \beta s} C(\Lambda_s) ds \\
& \leq & e^{- \beta T^\Lambda_{n-1}} G^0(X_{T^\Lambda_{n-1}}) - \int_0^{T^\Lambda_{n-1}} e^{- \beta s} C(\Lambda_s) ds = Y_{n-1}.
\end{eqnarray*}
\end{proof}
\begin{prop}
\label{prop:HleqG}
Let $G$ be an increasing, convex solution to \eqref{eq:Gdef} of at most linear growth. Then $H \leq G$.
\end{prop}
\begin{proof} Let $(\tau, \Lambda)$ be any integrable strategy.
From Lemma~\ref{lem:Gcomparison} we have
\[ \mathbb E[Y_1] = \mathbb E \left[ e^{-\beta T^\Lambda_1} G^0(X_{T^\Lambda_1}) I_{\{ T^\Lambda_1 < \infty \} } - \int_0^{T^\Lambda_1} e^{-\beta s} C(\Lambda_s) ds \right] \leq G(x). \]
Moreover, since $Y$ is a uniformly integrable supermartingale,
\begin{eqnarray*} \mathbb E^x[Y_1] \geq \mathbb E^x[Y_\infty] & = & \mathbb E \left[ e^{-\beta \tau} G^0(X_\tau) I_{\{ \tau < \infty \} } - \int_0^\tau e^{-\beta s} C(\Lambda_s) ds \right] \\
& \geq & \mathbb E \left[ e^{-\beta \tau} g(X_\tau) I_{\{ \tau < \infty \} } - \int_0^\tau e^{-\beta s} C(\Lambda_s) ds \right].
\end{eqnarray*}
Taking a supremum over stopping times and rate processes we conclude that $H(x) \leq G(x)$.
\end{proof}
Our goal now is to show that $H=G$. We prove this result first in the simplest case, where the set of admissible rate processes is unrestricted (i.e. $\Lambda_t$ takes values in $I=[0,\infty)$ and the cost function $C$ is lower semi-continuous and convex, with $\lim_{\lambda \uparrow \infty} C(\lambda)/\lambda = \infty$). Then we argue that the same result holds true under weaker assumptions. Note that we allow for $\{ \lambda \in I : C(\lambda)=\infty \}$ to be non-empty, but our assumption that $C$ is lower semi-continuous means that if $\check{\lambda} = \inf \{ \lambda : C(\lambda)= \infty \}$ then $C(\check{\lambda}) = \lim_{\lambda \uparrow \check{\lambda}}C(\lambda)$.
\begin{thm}
Suppose $I = [0,\infty)$ and $C: I \mapsto [0,\infty]$ is increasing, convex and lower semi-continuous with $\lim_{\lambda \uparrow \infty} C(\lambda)/\lambda = \infty$.
Let $G$ be an increasing, convex solution to \eqref{eq:Gdef}
of at most linear growth. Then $H=G$.
\label{thm:H=G1}
\end{thm}
\begin{proof}
Let $C'$ denote the right-derivative of $C$ and set $C'=\infty$ on $\{ \lambda : C(\lambda)=\infty \}$.
Since $C'$ is increasing it has a left-continuous inverse $D : \mathbb R_+ \mapsto \mathbb R_+$. In particular, $D(y) = \sup \{ \lambda \in [0,\infty): C'(\lambda) < y \}$ with the convention that $D(y)=0$ if $C'(\lambda) \geq y$ on $(0,\infty)$. We note that our hypotheses mean that $D$ is well defined and finite on $(0,\infty)$ and we set $D(0)=0$.
Let $\hat{\Lambda} = (\hat{\Lambda}_s)_{s \geq 0}$ be given by $\hat{\Lambda}_s = D( (g(X_s) - G(X_s))_+)$. We will show that $\hat{\Lambda}$ is the optimal rate process.
Note first that there is equality in \eqref{eq:Ht}, and therefore in \eqref{eq:Hleq}, provided $F_s = F(g(X_s), G(X_s), \Lambda_s) = (g(X_s) -G(X_s))_+ \Lambda_s + \tilde{C}((g(X_s) -G(X_s))_+) = C(\Lambda_s)$. This is satisfied if $\Lambda_s = \hat{\Lambda}_s$.
Let $\mathcal X_> = \{ x : g(x) > G(x) \}$ and let $\mathcal X_{\leq} = \{ x : g(x) \leq G(x) \}$.
Then, under the hypothesis of the theorem, whilst $X_\cdot \in \mathcal X_\leq$ we have that $\hat{\Lambda}_\cdot \equiv 0$. Hence (almost surely) $X_{T^{\hat{\Lambda}}_1} \in \mathcal X_>$ and $G^0(X_{T^{\hat{\Lambda}}_1}) = g(X_{T^{\hat{\Lambda}}_1})$. Then, taking $T = T^{\hat{\Lambda}}_1$ we have from \eqref{eq:Hleq} that
\begin{eqnarray*}
G(x) & = &\mathbb E \left[ e^{-\beta T} G^0(X_{T}) I_{\{ T < \infty \} } - \int_0^{T} e^{-\beta s} C(\Lambda_s) ds \right] \\
& = &\mathbb E \left[ e^{-\beta T} g(X_{T}) I_{\{ T < \infty \} } - \int_0^{T} e^{-\beta s} C(\Lambda_s) ds \right] \leq H(x)
\end{eqnarray*}
and hence, combining with Proposition~\ref{prop:HleqG}, $G=H$.
\end{proof}
\begin{cor}
$\Lambda^* = (\Lambda_s^*)_{s \geq 0}$ given by $\Lambda^*_s = D((g(X_s) - G(X_s))_+)$ is an optimal strategy, and $\tau^* = T^{\Lambda^*}_1$ is an optimal stopping rule.
\label{cor:Lambda*}
\end{cor}
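For a concrete convex cost the inverse $D$, and hence the optimal rate $\Lambda^*_s = D((g(X_s) - G(X_s))_+)$ of Corollary~\ref{cor:Lambda*}, is straightforward to evaluate. A sketch (hypothetical quadratic cost; the grid-based supremum is for illustration only):

```python
# Left-continuous inverse D(y) = sup{ lambda : C'(lambda) < y } (with D = 0
# when the set is empty), computed by brute force and compared with the
# closed form ((y - b)_+)/c for the cost C(lam) = a + b*lam + (c/2)*lam^2.
b, c = 0.5, 2.0

def C_prime(lam):
    return b + c * lam           # right-derivative of the cost

def D_numeric(y, lam_max=100.0, n=100000):
    sup = 0.0
    for k in range(n + 1):
        lam = k * lam_max / n
        if C_prime(lam) < y:
            sup = lam
    return sup

for y in (0.0, 0.25, 0.5, 1.0, 5.0):
    assert abs(D_numeric(y) - max(y - b, 0.0) / c) < 2e-3
```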
Our goal now is to extend Theorem~\ref{thm:H=G1} to allow for more general admissibility sets and cost functions.
Let $c$ be a generic increasing, convex function $c : [0,\infty) \mapsto [0,\infty]$. If $c$ takes the value $+\infty$ on $(\check{\lambda},\infty)$ then we assume that $c(\check{\lambda}) = \lim_{\lambda \uparrow \check{\lambda}} c(\lambda)$, and set the right-derivative $c'$ equal to infinity on $(\check{\lambda},\infty)$ also. For such a $c$ define $D_c:[0,\infty) \mapsto [0,\infty]$ by $D_c(y) = \sup \{ \lambda \in (0,\infty): c'(\lambda) < y \}$, again with the conventions that $D_c(y)=0$ if $c'(\lambda) \geq y$ on $(0,\infty)$ and $D_c(0)=0$. Note that $D_c(y) \leq \sup \{ \lambda : c(\lambda) < \infty \}$.
Let $I$ with endpoints $\{\underline{\lambda}, \overline{\lambda} \}$ be a subinterval of $[0,\infty)$ with the property that $I$ is closed on the left and closed on the right if $\overline{\lambda} < \infty$.
Let $\gamma : I \mapsto \mathbb R_+$ be an increasing function. Let $\breve{\gamma}$ be the largest convex minorant of $\gamma$ on $I$. Then
define $\gamma^\dagger$ by $\gamma^\dagger(\lambda) = \gamma(\underline{\lambda})$ on $[0,\underline{\lambda})$ (if this interval is non-empty), $\gamma^\dagger(\lambda) = \breve{\gamma}(\lambda)$ on $[\underline{\lambda}, \overline{\lambda}]$ and $\gamma^{\dagger} = \infty$ on $(\overline{\lambda},\infty)$.
By construction $\gamma^\dagger:[0,\infty) \mapsto [0,\infty]$ is convex and we can define $D_{\gamma^\dagger}$.
Suppose that $C: I \mapsto \mathbb R_+$ is our increasing, lower semi-continuous cost function. Introduce $C^\dagger : \mathbb R_+ \mapsto [0,\infty]$, defined from $C$ exactly as $\gamma^\dagger$ is defined from $\gamma$, together with $D_{C^\dagger}$, which we abbreviate to $D^\dagger$. Note that if $D^\dagger(z) < \underline{\lambda}$ then $z=0$, $D^\dagger(z)=0$ and $C^\dagger(0)=C^\dagger(\underline{\lambda})=C(\underline{\lambda})$. Summarising the important results we have:
\begin{lem}
$\tilde{C} = \widetilde{C^{\dagger}}$. Moreover, for $z \in [0,\infty)$,
$C( (D^{\dagger}(z) \vee \underline{\lambda}) \wedge \overline{\lambda}) = C^\dagger(D^{\dagger}(z))$.
\label{lem:CCdagger}
\end{lem}
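The identity in Lemma~\ref{lem:CCdagger} can be checked in a concrete (hypothetical) case, say $I = [1,3]$ with $C(\lambda) = \lambda^2$ on $I$, for which $C^\dagger$ equals $C(1)$ on $[0,1)$, $\lambda^2$ on $[1,3]$ and $+\infty$ beyond:

```python
lam_lo, lam_hi = 1.0, 3.0        # endpoints of the admissible interval I

def C(lam):                      # cost function on I (hypothetical example)
    return lam ** 2

def C_dag(lam):                  # the extension C-dagger on [0, infinity)
    if lam < lam_lo:
        return C(lam_lo)
    return C(lam) if lam <= lam_hi else float("inf")

def C_dag_prime(lam):            # right-derivative of C-dagger
    if lam < lam_lo:
        return 0.0
    return 2.0 * lam if lam < lam_hi else float("inf")

def D_dag(y, n=100000):          # sup{ lam : C_dag'(lam) < y }, grid supremum
    sup = 0.0
    for k in range(n + 1):
        lam = 10.0 * k / n
        if C_dag_prime(lam) < y:
            sup = lam
    return sup

for z in (0.5, 1.0, 3.0, 5.0, 8.0):
    d = D_dag(z)
    lam_star = min(max(d, lam_lo), lam_hi)
    assert abs(C(lam_star) - C_dag(d)) < 1e-3    # the identity of the Lemma
```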
\begin{thm}
Suppose $I \subseteq [0,\infty)$ and let $C: I \mapsto \mathbb R$ be increasing, lower semi-continuous and such that $\lim_{\lambda \uparrow \infty} \frac{C(\lambda)}{\lambda} = \infty$. Let $G$ be an increasing, convex solution of \eqref{eq:Gdef} and suppose $G$ is of linear growth. Then $H=G$.
\label{thm:G=H2}
\end{thm}
\begin{proof}
Introduce $C^\dagger$, defined from $C$ as above, and let $H^\dagger$ be the solution of the unrestricted problem (i.e. $I^\dagger = [0,\infty)$) with (convex) cost function $C^\dagger$. Note that since $\tilde{C} = \widetilde{C^\dagger}$ we have by Theorem~\ref{thm:H=G1} that $H^\dagger = G$. It remains to show that $H = H^\dagger$.
The inequality $H \leq H^\dagger$ is straightforward: if $(\tau,\Lambda)$ is admissible for the interval $I$ and integrable for cost function $C$, then it is admissible for the interval $[0,\infty)$ and integrable for cost function $C^\dagger$; moreover $C \geq C^\dagger$, and so $H \leq H^\dagger$.
For the converse, let $\Lambda^\dagger = D^\dagger( (g(X_s) - G(X_s))_+)$ and $\tau^\dagger = T^{\Lambda^\dagger}_1$ be optimal for the problem with cost function $C^\dagger$. Note that $\Lambda^\dagger \leq \overline{\lambda}$ and that
\[ H^\dagger(x) = \mathbb E^x \left[ e^{- \beta \tau^\dagger} g(X_{\tau^\dagger}) - \int_0^{\tau^\dagger} e^{-\beta s} C^\dagger( \Lambda^\dagger_s) ds \right]. \]
Define $\Lambda^* =\underline{\lambda} \vee \Lambda^\dagger$ and $\tau^* = \tau^\dagger$. Then, by Lemma~\ref{lem:CCdagger},
\[ C(\Lambda^*_s) = C( (D^\dagger( (g(X_s) - G(X_s))_+)\vee \underline{\lambda} )\wedge \overline{\lambda}) = C^\dagger( D^\dagger( (g(X_s) - G(X_s))_+)) = C^\dagger(\Lambda^\dagger_s). \]
Moreover, $\Lambda^* \in [\underline{\lambda},\overline{\lambda}]$ and is admissible for the original problem with admissibility interval $I$. Then
\[ H^\dagger(x) = \mathbb E^x \left[ e^{- \beta \tau^*} g(X_{\tau^*}) - \int_0^{\tau^*} e^{-\beta s} C( \Lambda^*_s) ds \right] \leq H(x) . \]
\end{proof}
\begin{rem}
Note that $\Lambda^* \geq \Lambda^\dagger$ and we may have strict inequality if $\underline{\lambda}>0$. In that case, when $g(X_s) \leq G(X_s)$ we have $\Lambda^\dagger_s = 0$, but $\Lambda_s^* = \underline{\lambda}$. In particular, we may have $\tau^* > T^{\Lambda^*}_1$, and the agent does not sell at the first opportunity. See Section~\ref{ssec:subinterval}.
\end{rem}
\section{Concave cost functions}
In this section we provide a complementary result to Theorem~\ref{thm:H=G1} by considering a concave cost function $C$ (defined on $I = [0,\infty)$).
Suppose $C$ is increasing and concave on $[0,\infty)$. Then the greatest convex minorant $\breve{C}$ of $C$ is of the form
\[ \breve{C}(\lambda) = \delta + \epsilon \lambda \]
for some constants $\delta,\epsilon \in [0,\infty)$. Then $C$ and $\breve{C}$ have the same concave conjugate, given by $\tilde{C}(z) := \inf_{\lambda \geq 0} \{ C(\lambda) - \lambda z \}$ where $\tilde{C}(z) = \delta$ for $z \leq \epsilon$ and $\tilde{C}(z)= - \infty$ for $z>\epsilon$.
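For instance (a hypothetical concave cost), with $C(\lambda) = \delta + \sqrt{\lambda} + \epsilon \lambda$ the greatest convex minorant is $\delta + \epsilon\lambda$, and one can confirm numerically that the conjugate equals $\delta$ below $\epsilon$ and is unbounded below above it:

```python
import math

# Concave cost with greatest convex minorant delta + eps*lambda
# (illustrative values).
delta, eps = 1.0, 0.5

def C(lam):
    return delta + math.sqrt(lam) + eps * lam

def inf_over_grid(z, lam_max, n=100000):
    # inf_{lambda >= 0} { C(lambda) - lambda*z } over a grid of [0, lam_max]
    return min(C(k * lam_max / n) - (k * lam_max / n) * z for k in range(n + 1))

# z <= eps: the infimum is attained at lambda = 0 and equals delta
assert abs(inf_over_grid(0.3, 1e4) - delta) < 1e-12
# z > eps: enlarging the grid drives the infimum towards minus infinity
assert inf_over_grid(0.7, 1e6) < inf_over_grid(0.7, 1e4) < -10.0
```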
From the heuristics section we expect the value function to solve \eqref{eq:Hheuristic}. Then we might expect that on $g-H < \epsilon$ we have
\begin{equation}
\mathcal L^X H - \beta H - \delta = 0.
\label{eq:Hleqe}
\end{equation}
On the other hand some care is needed to interpret $\mathcal L^X H - \beta H = \tilde{C}((g-H)_+)$ on the set $g-H>\epsilon$. In fact, as we argue in the following theorem, $H \geq g-\epsilon$ and on the set $H=g - \epsilon$ \eqref{eq:Hleqe} needs to be modified. We show that
$H = w_{K,\epsilon,\delta}$ where (recall~\eqref{eq:wKJD})
\begin{equation} w_{K, \epsilon, \delta}(x) = \sup_{\tau \in \mathcal T([0,\infty))} \mathbb E^x \left[e^{-\beta \tau} \{(X_\tau - K)_+ - \epsilon \} - \delta \int_0^\tau e^{-\beta s} ds \right] . \label{eq:wKJD2} \end{equation}
The intuition is that when $H> g - \epsilon$ it is optimal to wait and to take $\Lambda=0$ at cost $\delta$ per unit time. However, on $H<g - \epsilon$ (and also when $H=g-\epsilon$) it is optimal to take $\Lambda$ as large as possible. Since there is no upper bound on $\Lambda$, this corresponds to taking $\Lambda$ infinite --- such a choice is inadmissible but can be approximated with ever larger finite values. Then, in the region where the agent wants to stop, if the stopping rate is large, say $N$, then the expected time to stop is $N^{-1}$, the cost incurred per unit time is $C(N) \approx \delta + \epsilon N$, and so the expected total cost of stopping is approximately $\frac{\delta + \epsilon N}{N} \approx \epsilon$. Effectively the agent can choose to sell (almost) instantaneously, for a fee or fixed transaction cost of $\epsilon$. This explains why the problem value is the same as the problem value for \eqref{eq:wKJD2}.
\begin{thm}
Let $I=[0,\infty)$ and let $C:I \mapsto \mathbb R_+$ be non-negative, increasing and concave. Suppose the greatest convex minorant $\breve{C}$ of $C(\lambda)$ is of the form $\breve{C}(\lambda) = \delta + \epsilon \lambda$ for non-negative constants $\delta$ and $\epsilon$.
Then $H(x) = w_{K,\epsilon,\delta}(x)$.
\label{thm:concave}
\end{thm}
\begin{proof}
First we show that for any integrable $\tau$ and $\Lambda$
\[ \mathbb E^x \left[ e^{-\beta \tau} (X_\tau - K)_+ - \int_0^\tau e^{-\beta s} C(\Lambda_s) ds \right] \leq w_{K,\epsilon,\delta}(x). \]
Then we show that there is a sequence of admissible strategies for which the value function converges to this upper bound.
We prove the result in the case $\epsilon \geq \delta/\beta$ when the cost of taking $\Lambda = 0$ is small relative to the proportional cost $C(\lambda)/\lambda$ associated with taking $\Lambda$ large. The proof in the case $\epsilon < \delta/\beta$ is similar, but slightly more complicated in certain verification steps, because the explicit form of $w_{K,\epsilon,\delta}$ is not so tractable.
When $\epsilon \geq \delta/\beta$ we have that $w = w_{K,\epsilon,\delta}$ is given by
\[ w(x) = \left\{ \begin{array}{ll} Ax^\theta - \frac{\delta}{\beta} & x \leq L \\
(x - K - \epsilon) & x > L \end{array} \right. , \]
where $L = \frac{\beta (K+\epsilon) - \delta}{\beta} \frac{\theta}{\theta - 1}$ and $A= \frac{1}{\theta} L^{1- \theta}$. Let $w^0(x) = w(x) \vee (x-K)_+$. Note that since $\frac{\beta}{\mu} > \theta$ we have $\frac{\theta}{\theta-1} > \frac{\beta}{\beta - \mu}$ and $L > \frac{\beta(K+\epsilon) - \delta}{\beta - \mu}$.
For fixed $\Lambda$ define $M^\Lambda = (M^\Lambda_t)_{t \geq 0}$ by $M^\Lambda_t = e^{- \int_0^t (\beta + \Lambda_s) ds} w(X_t) + \int_0^t e^{- \int_0^s (\beta + \Lambda_u) du} [\Lambda_s w^0(X_s) - C(\Lambda_s)] ds$ and set $N_t = \int_0^t e^{- \int_0^s (\beta + \Lambda_u) du} \sigma X_s w'(X_s) dW_s$. Then $N=(N_t)_{t \geq 0}$ is a martingale and
\begin{equation}
dM^\Lambda_t = dN_t + e^{- \int_0^t (\beta + \Lambda_s) ds} \left[ \mathcal L^X w - (\beta + \Lambda_t) w + \Lambda_t w^0 - C(\Lambda_t) \right] dt.
\label{eq:Mconcave}
\end{equation}
On $(0,L)$, $\mathcal L^X w - \beta w = \delta$, and \eqref{eq:Mconcave} becomes
\[ dM^\Lambda_t = dN_t + e^{- \int_0^t (\beta + \Lambda_s) ds}[\delta - \Lambda_t w + \Lambda_t w^0 - C(\Lambda_t) ] dt \leq dN_t + e^{- \int_0^t (\beta + \Lambda_s) ds} [ \Lambda_t (w^0 - w - \epsilon)] dt \leq dN_t, \]
since $w^0 \leq w + \epsilon$.
Similarly, on $(L,\infty)$, $w(x) = (x - K) - \epsilon$ and since $L>K+\epsilon$, \eqref{eq:Mconcave} yields
\begin{eqnarray*}
dM^\Lambda_t & \leq & dN_t + e^{- \int_0^t (\beta + \Lambda_s) ds}[ \mu X_t - (\beta + \Lambda_t)(X_t - K - \epsilon) + \Lambda_t (X_t - K) - (\delta + \epsilon \Lambda_t )] dt \\
& = & dN_t + e^{- \int_0^t (\beta + \Lambda_s) ds}[ (\mu - \beta) (X_t- L) + (\mu - \beta)L + \beta(K+ \epsilon) - \delta ] dt \leq dN_t.
\end{eqnarray*}
Putting the two cases together we see that $M^\Lambda$ is a supermartingale for any strategy $\Lambda$.
The rest of the proof that $H \leq w$ follows exactly as in the proofs of Lemma~\ref{lem:Gcomparison}, Lemma~\ref{lem:Ysupermg} and Proposition~\ref{prop:HleqG}, with $w$ replacing $G$.
Now we show that there is a sequence of strategies for which the value function converges to $w=w_{K,\epsilon,\delta}$. Since $\delta + \epsilon \lambda$ is the largest convex minorant of $C$ there exists $(\lambda_n)_{n \geq 1}$ with $\lambda_n \uparrow \infty$ such that $\frac{C(\lambda_n)}{\lambda_n} \rightarrow \epsilon$.
Consider first the strategy of a constant rate of search $\lambda_n$, with stopping at the first event time of the associated Poisson process. Let $\tilde{H}_n$ denote the associated value function. Then
\begin{eqnarray*}
\tilde{H}_n(x) & = & \mathbb E^x \left[ \int_0^\infty \lambda_n e^{-\lambda_n t} dt \left\{ e^{-\beta t} (X_t - K)_+ - \int_0^t e^{-\beta s} C(\lambda_n) ds \right\} \right] \\
& \geq & \int_0^\infty \lambda_n e^{-\lambda_n t} dt \left\{ e^{-\beta t} (xe^{\mu t} - K) - \int_0^t e^{-\beta s} C(\lambda_n) ds \right\} \\
& = & \int_0^\infty \lambda_n e^{-(\lambda_n + \beta) t} (xe^{\mu t} - K) dt - \int_0^\infty e^{-\beta s} C(\lambda_n) ds \int_s^\infty \lambda_n e^{-\lambda_n t} dt \\
& = & \frac{\lambda_n}{\lambda_n + \beta - \mu} x - \frac{\lambda_n}{\lambda_n + \beta} K - \frac{1}{\lambda_n + \beta} C(\lambda_n)
\end{eqnarray*}
and $\tilde{H}_n(x) \rightarrow x-K - \epsilon$ as $n \uparrow \infty$.
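As a sanity check on this computation, the lower bound can be evaluated for an assumed concave cost with asymptotic slope $\epsilon$, say $C(\lambda) = \sqrt{\lambda} + \epsilon\lambda$ (so that $C(\lambda_n)/\lambda_n \rightarrow \epsilon$):

```python
import math

# Lower bound lam/(lam+beta-mu) x - lam/(lam+beta) K - C(lam)/(lam+beta) for
# the constant-rate strategy; it should approach x - K - eps as lam grows.
# Parameter values are illustrative.
beta, mu, K, eps, x = 5.0, 3.0, 1.0, 0.4, 2.0

def C(lam):
    return math.sqrt(lam) + eps * lam    # assumed cost with C(lam)/lam -> eps

def lower_bound(lam):
    return (lam / (lam + beta - mu) * x
            - lam / (lam + beta) * K
            - C(lam) / (lam + beta))

assert abs(lower_bound(1e8) - (x - K - eps)) < 1e-3
assert abs(lower_bound(1e8) - (x - K - eps)) < abs(lower_bound(1e2) - (x - K - eps))
```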
Suppose $\epsilon \geq \delta/\beta$. Let $L = \frac{\beta (K+\epsilon) - \delta}{\beta} \frac{\theta}{\theta - 1}$ and let $\tau_L = \inf \{ u : X_u \geq L \}$. Consider the strategy with rate $\hat{\Lambda}_n = \lambda_n I_{ \{ t \geq \tau_L \} }$, for which selling occurs at the first event time of the Poisson process with this rate, and let $\hat{H}_n$ be the value function associated with this strategy.
For $x \geq L$ we have $\hat{H}_n(x) = \tilde{H}_n(x) \rightarrow x-K-\epsilon = w_{K,\epsilon,\delta}(x)$.
For $x < L$, we have $\mathbb E^x[ e^{- \beta \tau_L} ] = ( \frac{x}{L} )^\theta$ and
\begin{eqnarray*}
\hat{H}_n(x) & = & \mathbb E^x \left[ e^{- \beta \tau_L} \tilde{H}_n(L) - \int_0^{\tau_L} e^{-\beta s} C(0) ds \right] \\
& = & \mathbb E^x \left[ e^{- \beta \tau_L} \left( \tilde{H}_n(L) + \frac{C(0)}{\beta} \right) - \frac{C(0)}{\beta} \right] \\
& = & \left( \frac{x}{L} \right)^\theta \left[ \tilde{H}_n (L) + \frac{\delta}{\beta} \right] - \frac{\delta}{\beta} \\
& \rightarrow & w_{K, \epsilon, \delta}(x),
\end{eqnarray*}
where the last line follows from the definition of $L$ and some algebra.
\end{proof}
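The explicit form of $w = w_{K,\epsilon,\delta}$ used in the proof is easy to verify: with illustrative parameters satisfying $\epsilon \geq \delta/\beta$, the candidate exhibits value matching and smooth fit at the threshold $L$:

```python
import math

# Candidate w(x) = A x^theta - delta/beta on (0, L], w(x) = x - K - eps
# beyond, with L = (beta(K+eps) - delta)/beta * theta/(theta-1) and
# A = L^(1-theta)/theta. Parameter values are illustrative (eps >= delta/beta).
beta, mu, sigma, K, eps, delta = 5.0, 3.0, 2.0, 1.0, 0.5, 1.0

s2h = 0.5 * sigma ** 2
theta = (-(mu - s2h) + math.sqrt((mu - s2h) ** 2 + 4 * s2h * beta)) / (2 * s2h)

L = (beta * (K + eps) - delta) / beta * theta / (theta - 1)
A = L ** (1 - theta) / theta

assert abs((A * L ** theta - delta / beta) - (L - K - eps)) < 1e-9  # value matching
assert abs(A * theta * L ** (theta - 1) - 1.0) < 1e-9               # smooth fit
assert L > (beta * (K + eps) - delta) / (beta - mu)                 # stated bound on L
```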
\subsection{An example}
In this example we consider a cost function of the form $C(\lambda) = \sqrt{\lambda}$. Then a (plausibly) good strategy is to take $\Lambda_t = 0$ if $X_t < L^* = K\frac{\theta}{\theta - 1}$ and $\Lambda_t$ very large otherwise. It is immediate that the value function $H$ satisfies $H \leq w$; conversely, it is clear from Figure~\ref{fig:concave} that there exist strategies for which the value function is arbitrarily close to $w$.
\begin{figure}[H] \center
\includegraphics [width=0.5\linewidth] {concave.pdf}
\caption{ $(\beta,\mu,\sigma,K) =(5,3,3,1)$; the highest line is $w = w_{K,0,0}$, and the other lines are the value functions under the rate function $\Lambda_n(x) = n I_{ \{x \geq L^* \} }$.}
\label{fig:concave}
\end{figure}
\section{Further Examples}
\label{sec:examples}
\subsection{Addition of a linear cost}
\label{ssec:lc}
Let $C_0$ be a convex, lower semi-continuous, increasing cost function, and consider the impact of adding a linear cost to $C_0$; in particular, let $C_b : \mathbb R_+ \mapsto \mathbb R_+$ be given by $C_b(\lambda) = C_0(\lambda) + \lambda b$ for $b>0$.
Then the concave conjugates are such that $\tilde{C}_b(z ) = \tilde{C}_0((z-b)_+)$.
Suppose further that $G$, the solution of \eqref{eq:Gdef} of linear growth, is such that $G \geq 0$ on $\mathbb R_+$. The problem solution in the case of a purely quadratic cost function (recall Section~\ref{sssec:pure}) has this property. Then
\[ ( \{ (x -K)_+ - G \}_+ - b)_+ = \{ (x - (K+b))_+ - G \}_+ . \]
It follows that
\[ \tilde{C}_b( \{ (x-K)_+ - G \}_+ ) = \tilde{C}_0(( \{ (x -K)_+ - G \}_+ - b)_+) = \tilde{C}_0 (\{ (x - (K+b))_+ - G \}_+ ) \]
and then that the value function for a payoff $(x-K)_+$ with cost function $C_b$ is identical to the value function for the cost function $C_0$ but with modified payoff $(x - (K+b))_+$.
Note that we see a similar result in the expansion \eqref{eq:Gexpansion} for $G$ in the large $x$ regime.
\subsection{Quadratic costs with positive fixed cost}
In this section we seek to generalise the results of Section~\ref{sssec:pure} on purely quadratic cost functions to other quadratic cost functions. In view of the results in Section~\ref{ssec:lc} the focus is on adding a positive intercept term rather than a linear cost; in particular, we consider cost functions of the form $C(\lambda) = a + \frac{c}{2} \lambda^2$ for $a>0$.
In this section we will take $a$ and $c$ fixed and compare the cost functions $C_0(\lambda) = \frac{c}{2} \lambda^2$, $C_1(\lambda) = a + \frac{c}{2} \lambda^2$ and $C_>(\lambda) = a I_{ \{ \lambda > 0 \} } + \frac{c}{2} \lambda^2$. The difference between the last two cases is that in the final case, not searching at all incurs zero cost, whereas in the middle case, there is a fixed cost which applies irrespective of whether there is a positive rate of searching for offers or not.
In Section~\ref{sssec:pure} we saw that $H_0$, the value function for the cost $C_0(\lambda) = \frac{c}{2} \lambda^2$, solves
\[ \mathcal L^X H_0 - \beta H_0 = -\frac{[(g-H_0)_+]^2}{2c}. \]
There is a threshold $L$ with $L>K$, such that $H_0 > g$ on $(0,L)$ and $H_0<g$ on $(L,\infty)$. On $(0,L)$ we have that $H_0(x) = (L-K)\frac{x^\theta}{L^\theta}$; on $[L,\infty)$, $H_0$ solves $\frac{1}{2} \sigma^2 x^2 h'' + \mu x h' - \beta h = -\frac{1}{2c} (x-K - h)^2$ subject to initial conditions $H_0(L) = (L-K)$ and $H_0'(L) = \theta\frac{L-K}{L}$. We adjust $L$ until we find a solution for which $H_0$ is of linear growth at infinity.
Now consider $C_1$ with associated value function $H_1$. When $X$ is very small, there is little prospect of $X$ ever rising above $K$. Nonetheless the agent faces a fixed cost, even if she does not search for offers. It will be cheaper to search for offers, because although the payoff is zero when a candidate purchaser is found, it is then possible in our model to stop paying the fixed cost.
Suppose $X=0$. If the agent chooses to search for buyers at rate $\lambda$ then the expected time until a buyer is found is $\lambda^{-1}$. The expected discounted cost until a buyer is found is
\[ \int_0^\infty \lambda e^{-\lambda s} \int_0^s e^{-\beta u} \left( a + \frac{c}{2} \lambda^2 \right) du \, ds =\frac{a + \frac{c}{2} \lambda^2}{\beta + \lambda}. \]
This is minimised by the choice $\lambda= \lambda_*$ where $\lambda_* = \sqrt{{\beta^2} + \frac{2a}{c}} - \beta$ and the minimal cost is $h^-_*$ where
\[ h^-_* = \frac{a + \frac{c}{2} \lambda_*^2}{\beta + \lambda_*} = c \lambda_* = c \left[ \sqrt{{\beta^2} + \frac{2a}{c}} - \beta \right] . \]
Then $H_1(0)= - h^-_*$.
(Another way to see this is to note that at 0 we expect $\mathcal L^X H_1=0$ and therefore $H_1(0)$ to solve $- \beta h = \tilde{C}(- h) = a - \frac{h^2}{2c}$.)
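This one-dimensional minimisation is easily confirmed numerically (illustrative parameter values):

```python
import math

# Minimise (a + (c/2) lam^2)/(beta + lam) over lam >= 0; the minimiser should
# be lam_* = sqrt(beta^2 + 2a/c) - beta, with minimal value c * lam_*.
beta, a, c = 5.0, 1.0, 2.0

def expected_cost(lam):
    return (a + 0.5 * c * lam ** 2) / (beta + lam)

lam_star = math.sqrt(beta ** 2 + 2 * a / c) - beta
lam_best = min((k * 1e-4 for k in range(200001)), key=expected_cost)  # grid on [0, 20]

assert abs(lam_best - lam_star) < 1e-3
assert abs(expected_cost(lam_star) - c * lam_star) < 1e-9
```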
Then the value function $H_1$ is such that there exist $\ell$ and $L$ with $0<\ell < K < L < \infty$ such that $H_1$ is $C^1$, with $H_1<0$ on $(0,\ell)$, $H_1(x)>(x-K)_+$ on $(\ell,L)$ and $H_1(x)<(x-K)_+$ on $(L,\infty)$, and such that $H_1$ satisfies
\[ \mathcal L^X h - \beta h = \left\{ \begin{array}{lll} a - \frac{1}{2c} h^2 & & x < \ell; \\
a & \; &\ell < x < L; \\
a - \frac{1}{2c} (g-h)^2 & & L < x.
\end{array} \right. \]
See Figure~\ref{fig:C1}. Considering $H_1$ on $(\ell,L)$ we have $H_1(x) = A x^\theta + B x^\phi - \frac{a}{\beta}$ for some constants $A$ and $B$ chosen so that $H_1(\ell)=0$ and $H_1(L) = (L-K)$:
\[ A = \frac{L^{-\phi}(L-K + \frac{a}{\beta}) - \ell^{-\phi} \frac{a}{\beta}}{L^{\theta-\phi} - \ell^{\theta-\phi}}, \hspace{20mm} B = \frac{ \ell^{-\phi} L^{\theta - \phi} \frac{a}{\beta} - \ell^{\theta-\phi} L^{-\phi}(L-K + \frac{a}{\beta})}{L^{\theta-\phi} - \ell^{\theta-\phi}} . \]
Then for general $\ell$ and $L$ we can use value matching and smooth fit at $\ell$ and $L$ to construct a solution on $(0,\infty)$. Finally, we adjust $\ell$ and $L$ until $H_1(0)=-h^-_*$ and $H_1$ has linear growth.
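As a numerical sanity check on the displayed expressions for $A$ and $B$ (a sketch only, using the figures' parameters $\beta=5$, $\mu=3$, $\sigma=2$, $K=1$, $a=1$ and arbitrary trial thresholds $\ell$, $L$; this is not the solver used to produce the figures), one can verify that they satisfy the two value-matching conditions:

```python
import numpy as np

beta, mu, sigma, K, a = 5.0, 3.0, 2.0, 1.0, 1.0  # figure parameters; a = C_1(0)

# theta > 1 and phi < 0 are the roots of (1/2) sigma^2 p (p - 1) + mu p - beta = 0
Bc = mu - 0.5 * sigma**2
disc = np.sqrt(Bc**2 + 2 * sigma**2 * beta)
theta, phi = (-Bc + disc) / sigma**2, (-Bc - disc) / sigma**2

ell, L = 0.5, 2.0  # arbitrary trial thresholds with 0 < ell < K < L

den = L**(theta - phi) - ell**(theta - phi)
A = (L**(-phi) * (L - K + a / beta) - ell**(-phi) * a / beta) / den
B = (ell**(-phi) * L**(theta - phi) * a / beta
     - ell**(theta - phi) * L**(-phi) * (L - K + a / beta)) / den

def H1(x):
    return A * x**theta + B * x**phi - a / beta

# H1 should vanish at ell and equal L - K at L (up to rounding)
print(H1(ell), H1(L) - (L - K))
```

In the full construction one would additionally impose smooth fit at $\ell$ and $L$ and then tune the thresholds as described above.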
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig4.pdf}
\caption{The value function $H_1(x)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig4_lam.pdf}
\caption{The optimal rate $\Lambda_1^*(x)$.}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K) =(5, 3 ,2,1)$. The cost function is $C_1(\lambda)=1+\lambda^2$. The left figure shows the value function, and the right figure the optimal stopping rate. There are two critical thresholds $\ell= \ell^*$ and $L=L^*$.
}
\label{fig:C1}
\end{figure*}
In Figure~\ref{fig:C1} we plot the value function and optimal rate for the Poisson process for $C_1(\lambda) = 1 + \lambda^2$. There are two critical thresholds $\ell^*$ and $L^*$ with $0 < \ell^* < K < L^*$.
Above $L^*$ the agent would like to stop in order to receive the payoff $(x-K)$, and is willing to expend effort to try to generate selling opportunities in order to receive the payoff before discounting reduces the worth. Below $\ell^*$ the agent would like to stop, even though the payoff is zero, and is willing to expend effort to generate stopping opportunities in order to limit the costs they incur prior to stopping.
Between $\ell^*$ and $L^*$ the agent does not expend any effort searching for offers and would not accept any offers which were received.
Now consider the cost function $C_>(\lambda)=aI_{\{ \lambda>0 \} } + \frac{c}{2} \lambda^2$ with associated value function $H_>$. We have $\widetilde{C}_>(z) = 0$ for $z \leq \sqrt{2ac}$ and $\widetilde{C}_>(z) = a - \frac{z^2}{2c}$ for $z \geq \sqrt{2ac}$.
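The piecewise form of $\widetilde{C}_>$ is easy to confirm by brute force. The following sketch (our illustration, using the figure's values $a=1$, $c=2$) compares a grid minimisation of $C_>(\lambda) - \lambda z$ over $\lambda \geq 0$ with the stated formula:

```python
import numpy as np

a, c = 1.0, 2.0  # figure values, so that C_>(lambda) = 1_{lambda > 0} + lambda^2
lam = np.linspace(0.0, 5.0, 200001)  # grid of candidate rates, including lambda = 0
C = np.where(lam > 0, a + 0.5 * c * lam**2, 0.0)

def conj_grid(z):
    # tilde C_>(z) = inf over lambda of [ C_>(lambda) - lambda z ]
    return np.min(C - lam * z)

def conj_formula(z):
    return 0.0 if z <= np.sqrt(2 * a * c) else a - z**2 / (2 * c)

for z in (1.0, 2.0, 3.0):
    print(z, conj_grid(z), conj_formula(z))
```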
As in the pure quadratic case, there is always the option of taking $\Lambda \equiv 0$ at zero cost, so that the value function is non-negative. It follows that $H_>(0)=0$.
There is a threshold $L$ below which the agent does not search for offers. But, this threshold is not the boundary between the sets $\{x : H_>(x)>g(x) \}$ and $\{ x : H_>(x) < g(x) \}$, since when $g(x)-H_>(x)$ is small, it is still preferable to take $\Lambda = 0$, rather than to incur the cost of strictly positive $\lambda$. Instead $L$ separates the sets $\{x : H_>(x)>g(x) - \sqrt{2ac} \}$ and $\{ x : H_>(x) < g(x) - \sqrt{2ac} \}$.
We find that there is a threshold $L$ with $L > K$ such that on $(0,L)$, $H_>$ solves $\mathcal L^X h - \beta h = a$. At $L$ we have $H_>(L) = (L-K-\sqrt{2ac})$ and it follows that on $(0,L)$ we have $H_>(x) = \frac{L-K-\sqrt{2ac}}{L^\theta} x^\theta$.
Then, on $(L,\infty)$, $H_>$ solves $\mathcal L^X h - \beta h = a - \frac{(x-K-h)^2}{2c}$, subject to value matching and smooth fit conditions at $x=L$. Finally, we adjust the value of the threshold $L$ until $H_>$ is of linear growth for large $x$.
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig5.pdf}
\caption{The value function $H_>(x)$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig5_lam.pdf}
\caption{The optimal rate $\Lambda^*_>(x)$}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K) =(5,3,2,1)$. The cost function is $C_>(\lambda) = I_{\{ \lambda>0 \} } + \lambda^2$. The greatest convex minorant is $\breve{C}_>(\lambda) = 2\lambda + [(\lambda - 1)_+]^2$. (Here we use the fact that $\sqrt{2ac}=2$.)
}
\label{fig:C>}
\end{figure*}
In Figure~\ref{fig:C>} we plot the value function $H_>$ and optimal rate $\Lambda^*_>$. We see that $\Lambda^*_>$ never takes values in $(0,1)$ where $C_> > \breve{C}_>$. Either it is optimal to spend a non-negligible amount of effort on searching for candidate buyers, or it is optimal to spend no effort.
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig6.pdf}
\caption{Comparison of the value functions}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig6_lam.pdf}
\caption{Comparison of the optimal stopping rates}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K) =(5,3,2,1)$. The cost functions we consider are $C_0(\lambda) = \lambda^2$, $C_>(\lambda) = I_{\{ \lambda>0 \}} + \lambda^2$ and $C_1(\lambda) = 1 + \lambda^2$. The left figure plots the value functions under optimal behaviour, and the right figure plots the optimal rates for the Poisson process. For $x>5$ we have $\Lambda_1^* > \Lambda_>^* > \Lambda_0^*$. For small $x$, $\Lambda_1^* > 0 = \Lambda_>^* = \Lambda_0^*$.}
\label{fig:comparison}
\end{figure*}
Figure~\ref{fig:comparison} compares the value functions and optimal rates for the Poisson process for the three cost functions $C_0(\lambda) = \lambda^2$, $C_>(\lambda) = I_{\{ \lambda>0 \}} + \lambda^2$ and $C_1(\lambda) = 1 + \lambda^2$. Since $C_0 \leq C_> \leq C_1$ we must have that $H_0 \geq H_> \geq H_1$ and we see that away from $x=0$ this inequality is strict.
Indeed, especially for small $x$, $H_0$ and $H_>$ are close in value. The differences in optimal strategies are more marked. For large $x$ the fact that $H_0>H_> >H_1$ means that $\Lambda^*_0 < \Lambda^*_> < \Lambda^*_1$, and thus that even though $C_1>C_0$, the agent searches at a higher rate under $C_1$ than under $C_0$. Note that we only have $\Lambda^*_> >0$ for $x$ above a critical value (in our case, approximately 5). Conversely, for $C_1$ there is a second region where $\Lambda^*_1>0$, namely where $x$ is small.
\subsection{Cost functions defined on a subset of $\mathbb R_+$}
\label{ssec:subinterval}
In this section we consider the case where there is a strictly positive lower bound on the rate at which offers are received. In fact, in our example the optimal rate of offers takes values in a two-point set. Nonetheless, we see a rich range of behaviours.
Suppose $\Lambda$ takes values in $[\underline{\lambda},\overline{\lambda}]$ where $0<\underline{\lambda}<\overline{\lambda}<\infty$ and suppose $C : [\underline{\lambda},\overline{\lambda}] \to \mathbb R_+$ is increasing and concave.
Introduce $\breve{C}:[\underline{\lambda},\overline{\lambda}] \to [0,\infty)$ defined by $\breve{C}(\lambda) = C(\underline{\lambda}) + \frac{\lambda - \underline{\lambda}}{\overline{\lambda}-\underline{\lambda}} (C(\overline{\lambda})-C(\underline{\lambda}))$. Finally introduce $C^\dagger: [0,\infty) \to [0,\infty]$ by
\[ C^\dagger (\lambda) = \left\{ \begin{array}{lcl} C(\underline{\lambda}) & \; & \lambda < \underline{\lambda}, \\
\breve{C}(\lambda) && \underline{\lambda} \leq \lambda \leq \overline{\lambda}, \\
\infty && \overline{\lambda} < \lambda . \end{array} \right. \]
Write $a = C(\underline{\lambda})$ and $b = \frac{ (C(\overline{\lambda})-C(\underline{\lambda}))}{\overline{\lambda}-\underline{\lambda}}$. Then $C^\dagger$ has concave conjugate $\tilde{C}^\dagger (z) = a - \underline{\lambda} z$ for $z \leq b$ and $\tilde{C}^\dagger(z) = a - b \underline{\lambda} - (z-b) \overline{\lambda}$ for $z>b$.
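Again this conjugate can be checked directly. In the sketch below (our illustration, using the values $\underline{\lambda}=5$, $\overline{\lambda}=10$, $C(\underline{\lambda})=0$, $C(\overline{\lambda})=20$ of Figure~\ref{fig:subinterval1}, so that $a=0$ and $b=4$) the infimum defining $\tilde{C}^\dagger$ is computed over $[\underline{\lambda},\overline{\lambda}]$ only, which suffices for $z \geq 0$ since $C^\dagger$ is constant below $\underline{\lambda}$:

```python
import numpy as np

lam_lo, lam_hi = 5.0, 10.0
a, b = 0.0, 4.0  # a = C(lam_lo); b = chord slope (C(lam_hi) - C(lam_lo)) / (lam_hi - lam_lo)

lam = np.linspace(lam_lo, lam_hi, 100001)
C_dag = a + b * (lam - lam_lo)  # C^dagger on [lam_lo, lam_hi]; it is +infinity above lam_hi

def conj_grid(z):
    # For z >= 0 the infimum over [0, lam_hi] is attained on [lam_lo, lam_hi]
    return np.min(C_dag - lam * z)

def conj_formula(z):
    return a - lam_lo * z if z <= b else a - b * lam_lo - (z - b) * lam_hi

for z in (2.0, 4.0, 6.0):
    print(z, conj_grid(z), conj_formula(z))
```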
Suppose first that $C(\underline{\lambda})=a=0$. Then the value function $H$ is positive, increasing and $C^1$ and satisfies
\[ \mathcal L^X h - \beta h = \left\{ \begin{array}{lcl} 0 & \; & x < L, \\
- \underline{\lambda} (g - h) && L \leq x \leq M, \\
- b \underline{\lambda} - \overline{\lambda}(g - h - b) && M < x, \end{array} \right. \]
where $L$ and $M$ are constants satisfying $0<K<L<M$ which must be found as part of the solution, and are such that $h(x) > (x-K)$ on $(0,L)$, $(x-K) > h(x) > x-K-b$ on $(L,M)$ and $(x-K-b)>h(x)$ on $(M,\infty)$. See Figure~\ref{fig:subinterval1}.
Fix $L$ and consider constructing a solution to the above problem with $H(0)$ bounded. On $(0,L)$ we have that $H(x) = Ax^\theta +B x^\phi$ and the requirement that $H$ is bounded means that $B=0$ and then $A= (L- K)L^{-\theta}$. We then use the $C^1$ continuity of $H$ at $L$
to find the constants $C$ and $D$ in the expression for $H$ over $(L,M)$:
\begin{equation}
\label{eq:HLM} H(x) = C x^{\underline{\theta}} + Dx^{\underline{\phi}} + \frac{\underline{\lambda}}{\underline{\lambda} + \beta - \mu} x - \frac{K \underline{\lambda}}{\underline{\lambda} +\beta}.
\end{equation}
Here $\underline{\phi}$, $\underline{\theta}$ with $\underline{\phi} < 0 <1 < \underline{\theta}$ are solutions to $Q_{\underline{\lambda}}(\cdot)=0$ where
$Q_\lambda(\psi) = \frac{1}{2} \sigma^2 \psi(\psi - 1) + \mu \psi - (\beta + \lambda)$.
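With the parameter values used in the figures, the roots of $Q_\lambda$ happen to be rational for $\lambda \in \{5, 10\}$, which gives a convenient consistency check (an illustrative sketch, not part of the construction itself):

```python
import numpy as np

beta, mu, sigma = 5.0, 3.0, 2.0  # figure parameters

def roots_Q(lam):
    """Roots of Q_lam(psi) = (1/2) sigma^2 psi (psi - 1) + mu psi - (beta + lam)."""
    a2 = 0.5 * sigma**2          # coefficient of psi^2
    a1 = mu - 0.5 * sigma**2     # coefficient of psi
    disc = np.sqrt(a1**2 + 4 * a2 * (beta + lam))
    return (-a1 - disc) / (2 * a2), (-a1 + disc) / (2 * a2)  # (negative root, positive root)

print(roots_Q(5.0))   # underline-lambda = 5  -> (-2.5, 2.0)
print(roots_Q(10.0))  # overline-lambda = 10 -> (-3.0, 2.5)
```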
In turn, we can find the value of $M=M(L)$ where $H$ given by \eqref{eq:HLM} crosses the line $y(x)=x-K-b$, and then value matching at $M$ gives us the value of $E$ in the expression for $H$ over $[M,\infty)$:
\[ H(x) = E x^{\overline{\phi}} + \frac{\overline{\lambda}}{\overline{\lambda} + \beta - \mu} x - \frac{(K+b) \overline{\lambda}- b \underline{\lambda}}{\overline{\lambda} +\beta} \]
where $\overline{\phi}$ is the negative root of $Q_{\overline{\lambda}}(\cdot)=0$. (There is no term of the form $x^{\overline{\theta}}$ since $H$ must be of linear growth at infinity.) Finally, we can solve for $L$ by matching derivatives of $H$ at $M$.
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig7.pdf}
\caption{The value function $H$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig7_lam.pdf}
\caption{The optimal rate $\Lambda^*$.}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K,\underline{\lambda}, \overline{\lambda}, C(\underline{\lambda}), C(\overline{\lambda})) =(5, 3 ,2,1,5,10, 0, 20)$. Note that $b = \frac{ (C(\overline{\lambda})-C(\underline{\lambda}))}{\overline{\lambda}-\underline{\lambda}}=4$.
The left figure plots the value function and the right figure plots the optimal rate function.
$\Lambda$ is constrained to lie in $[5,10]$, and the cost function is $20 I_{ \{\lambda>5 \}}$. We see that $\Lambda^*$ takes values in $\{5,10\}$.}
\label{fig:subinterval1}
\end{figure*}
Figure~\ref{fig:subinterval1} plots the value function and the optimal rate function. The state space splits into three regions. For $x>M$ the asset is considerably in-the-money and the agent is prepared to pay the cost to generate a higher rate of selling opportunities. When $x$ is not quite so large, and $L<x<M$, the agent is not prepared to pay this extra cost, but will sell if opportunities arise. However, if $x<L$ then selling opportunities will arise (we must have $\Lambda \geq \underline{\lambda}$) but the agent will forgo them. Ideally the agent would choose $\Lambda=0$, but this is not possible. Instead the agent takes $\Lambda = \underline{\lambda}$, but synthesises a rate of zero by rejecting all offers.
When $C(\underline{\lambda})>0$, the agent will not pay the fixed cost indefinitely when $X$ is small. The behaviour for large $X$ is unchanged, but the agent will now stop if offers arrive when the value of continuing is negative, including when $X$ is near zero. There are two cases depending on whether $\frac{C(\underline{\lambda})}{\underline{\lambda}+\beta} \leq \frac{C(\overline{\lambda})}{\overline{\lambda}+\beta}$ or otherwise. In the former case, when $X$ is small it is cheaper to pay the lower cost and to stop if opportunities arise, than to pay the higher cost with the hope of stopping sooner. In the latter case, the comparison is reversed. We find that $H$ solves
\[ \mathcal L^X h - \beta h = \tilde{C}^\dagger((g-h)_+) \]
subject to $h(0) = - \min_{\lambda \in \{ \underline{\lambda}, \overline{\lambda} \} } \{ \frac{C(\lambda)}{\lambda + \beta} \}$ and the fact that $h$ is of linear growth at infinity. The solution is smooth, except at points where $\tilde{C}^\dagger((g-h)_+)$ is not differentiable. This may be at $K$ where $g$ is not differentiable, or when $g=h$, or, since $\tilde{C}^\dagger$ is non-differentiable at $b$, when $g-h=b$.
Figure~\ref{fig:subinterval2} shows the value function and the optimal search rate in the case where $\frac{C(\underline{\lambda})}{\underline{\lambda}+\beta} \leq \frac{C(\overline{\lambda})}{\overline{\lambda}+\beta}$. This means that when $x$ is small the agent expends as little effort as possible searching for offers, although they do accept any offers which arrive. There is also a critical threshold $M$, beyond which it is optimal to put maximum effort into searching for offers. There are then two sub-cases depending on whether costs are small or large. If costs are large then the agent will always accept any offer which comes along (Figure~\ref{fig:subinterval2}(c) and (d)). However, when costs are small (Figure~\ref{fig:subinterval2}(a) and (b)), there is a region $(\ell, L)$ over which $h(x)>g(x)=(x-K)_+$. Then, as in the region $(0,L)$ when $C(\underline{\lambda})=0$, even when there is an offer the agent chooses to reject it. Effectively, the agent creates a zero rate of offers by thinning out all the events of the Poisson process.
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig8a.pdf}
\caption{The value function $H$ in the case $(C(\underline{\lambda})=1,C(\overline{\lambda})=20)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig8b.pdf}
\caption{The optimal rate $\Lambda^*$ in the case $(C(\underline{\lambda})=1,C(\overline{\lambda})=20)$.}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig8c.pdf}
\caption{The value function $H$ in the case $(C(\underline{\lambda})=10,C(\overline{\lambda})=20)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig8d.pdf}
\caption{The optimal rate $\Lambda^*$ in the case $(C(\underline{\lambda})=10,C(\overline{\lambda})=20)$.}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K,\underline{\lambda}, \overline{\lambda}) =(5, 3 ,2,1,5,10)$.
The left panels plot the value function and the right panels plot the optimal rate function. In each row $\frac{C(\underline{\lambda})}{\underline{\lambda} + \beta} < \frac{C(\overline{\lambda})}{\overline{\lambda} + \beta}$.
In the case of lower costs ($C(\underline{\lambda})=1$) there is a region $(\ell,L)$ where $H(x)>g(x)$ and the agent chooses to continue rather than to stop.}
\label{fig:subinterval2}
\end{figure*}
Figure~\ref{fig:subinterval3} shows the value function and the optimal search rate in the case where $\frac{C(\underline{\lambda})}{\underline{\lambda}+\beta} > \frac{C(\overline{\lambda})}{\overline{\lambda}+\beta}$.
Then, necessarily, $b = \frac{C(\overline{\lambda})-C(\underline{\lambda})}{\overline{\lambda} - \underline{\lambda}} < \frac{C(\overline{\lambda})}{\overline{\lambda} + \beta}$. When $x$ is small the agent searches at the maximum rate to generate an offer as quickly as possible. Necessarily $H(0) < - b$. If costs are large enough, then $H(x) < (x-K)_+ - b$ for all $x$; see Figure~\ref{fig:subinterval3}(a) and (b). Then the agent wants to stop as soon as possible, and is prepared to pay the higher cost rate in order to facilitate this. As costs decrease, we may have $(x-K)_+ -b \leq H(x)$ for some $x$, whilst the inequality $H(x) < (x-K)_+$ remains true; see Figure~\ref{fig:subinterval3}(c) and (d). Then there is a region $(m,M)$ over which the optimal strategy is $\Lambda^*(x)=\underline{\lambda}$. The agent still accepts any offer which is made. Finally, if costs are small enough we find that there is a neighbourhood $(\ell, L)$ of $K$ for which $H(x)>(x-K)_+$. Then, on $(\ell, L)$ the agent takes $\Lambda^*(x) = \underline{\lambda}$, but chooses to continue rather than stop if any offers are made.
\begin{figure*}[!ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig9a.pdf}
\caption{The value function $H$ in the case $(C(\underline{\lambda})=15,C(\overline{\lambda})=20)$. $b=1$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig9b.pdf}
\caption{The optimal rate $\Lambda^*$ in the case $(C(\underline{\lambda})=15,C(\overline{\lambda})=20)$.}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig9c.pdf}
\caption{The value function $H$ in the case $(C(\underline{\lambda})=5,C(\overline{\lambda})=7)$. $b=0.4$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig9d.pdf}
\caption{The optimal rate $\Lambda^*$ in the case $(C(\underline{\lambda})=5,C(\overline{\lambda})=7)$.}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{fig9e.pdf}
\caption{The value function $H$ in the case $(C(\underline{\lambda})=2,C(\overline{\lambda})=2.5)$. $b=0.1$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig9f.pdf}
\caption{The optimal rate $\Lambda^*$ in the case $(C(\underline{\lambda})=2,C(\overline{\lambda})=2.5)$.}
\end{subfigure}
\caption{$(\beta,\mu,\sigma,K,\underline{\lambda}, \overline{\lambda}) =(5, 3 ,2,1,5,10)$.
The left column plots the value function and the right column plots the optimal rate function. In each row $\frac{C(\underline{\lambda})}{\underline{\lambda} + \beta} > \frac{C(\overline{\lambda})}{\overline{\lambda} + \beta}$. Near $x=0$ it is always preferable to choose the maximum possible rate process. Costs decrease as we move down the rows.
}
\label{fig:subinterval3}
\end{figure*}
As a limiting special case suppose $\overline{\lambda} = \underline{\lambda} = \hat{\lambda}$ and that $C(\hat{\lambda})=c \in [0,\infty)$. Then there is a single threshold $L$ to be determined and $H$ is of the form
\[ H(x) = \left\{ \begin{array}{ll} A x^{\theta} - \frac{c}{\beta} & x \leq L \\
B x^{\hat{\phi}} + \frac{\hat{\lambda}}{\hat{\lambda} + \beta - \mu} x - \frac{(c+\hat{\lambda} K)}{\beta + \hat{\lambda}} & x > L
\end{array} \right. \]
where $\hat{\phi}$ is the negative root of $Q_{\hat{\lambda}}(\cdot) = 0$.
The value matching condition $H(L)=(L-K)$ gives that $A = L^{-\theta}(L - K + \frac{c}{\beta})$ and
\[ B = L^{-\hat{\phi}}\left\{ \left( \frac{\beta - \mu}{\hat{\lambda} + \beta - \mu} \right) L + \frac{c - \beta K}{\beta + \hat{\lambda}} \right\}. \]
Then first order smooth fit at $L$ implies that
\[ L = (\beta K - c) \left[ \frac{\theta}{\beta} - \frac{\hat{\phi}}{\beta + \hat{\lambda}} \right]\left\{ \theta - \frac{\hat{\phi}(\beta - \mu)}{\hat{\lambda} + \beta - \mu} - \frac{\hat{\lambda}}{\hat{\lambda} + \beta - \mu} \right\}^{-1} . \]
Note that if we take $c=0$ we recover exactly the expressions in (3.12) and (3.13) of Dupuis and Wang~\cite{DupuisWang:03}.
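As a numerical sanity check on this closed form (a sketch with the figures' parameters and, for illustration, $\hat{\lambda}=5$ and $c=1$; these specific values are our choice, not the paper's), one can verify that the resulting threshold indeed delivers both value matching and smooth fit at $L$:

```python
import numpy as np

beta, mu, sigma, K = 5.0, 3.0, 2.0, 1.0
lam, c = 5.0, 1.0  # hat-lambda and the constant cost C(hat-lambda); illustrative choice

a1 = mu - 0.5 * sigma**2
# theta: positive root of (1/2) sigma^2 p (p - 1) + mu p - beta = 0;
# phi: negative root of the same quadratic with beta replaced by beta + lam
theta = (-a1 + np.sqrt(a1**2 + 2 * sigma**2 * beta)) / sigma**2
phi = (-a1 - np.sqrt(a1**2 + 2 * sigma**2 * (beta + lam))) / sigma**2

# The displayed closed form for the threshold L
L = (beta * K - c) * (theta / beta - phi / (beta + lam)) / (
    theta - phi * (beta - mu) / (lam + beta - mu) - lam / (lam + beta - mu))

A = L**(-theta) * (L - K + c / beta)
B = L**(-phi) * ((beta - mu) / (lam + beta - mu) * L + (c - beta * K) / (beta + lam))

left = lambda x: A * x**theta - c / beta                                   # x <= L
right = lambda x: B * x**phi + lam / (lam + beta - mu) * x - (c + lam * K) / (beta + lam)
dleft = lambda x: A * theta * x**(theta - 1)
dright = lambda x: B * phi * x**(phi - 1) + lam / (lam + beta - mu)

# Both differences should vanish up to rounding, and H(L) = L - K
print(L, left(L) - right(L), dleft(L) - dright(L))
```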
\section{Conclusion and discussion}
Our goal in this article is to extend the analysis of Dupuis and Wang~\cite{DupuisWang:03} who considered optimal stopping problems where the stopping time was constrained to lie in the event times of a Poisson process, to allow the agent to affect the frequency of those event times. The motivation was to model a form of illiquidity in trading and to consider problems in which the agent can exert effort in order to increase the opportunity set of candidate moments when the problem can terminate. This notion of effort is different to the idea in the financial economics literature of managers expending effort in order to change the dynamics of the underlying process, as exemplified by Sannikov~\cite{Sannikov:08}, but seems appropriate for the context.
Our work focuses on optimal stopping of an exponential Brownian motion under a perpetual call-style payoff, although it is clear given the work of Lempa~\cite{Lempa:07} how the analysis could be extended to other diffusion processes and other payoff functions. Nonetheless, even in this specific case we show how it is possible to generate a rich range of possible behaviours, depending on the choice of cost function. In our time-homogeneous, Markovian set-up, the rate of the Poisson process can be considered as a proxy for effort, and the problem can be cast in terms of this control variable. Then, the form of the solution depends crucially on the shape of the cost function, as a function of the rate of the inhomogeneous Poisson process.
One important quantity is the limiting value for large $\lambda$ of the average cost $\frac{C(\lambda)}{\lambda}$. If this limit is infinite, then the agent does not want to select very large rates for the Poisson process as they are too expensive. In this case we can replace $C$ with its convex minorant and solve the problem for that cost function. However, if $C$ is concave and the set of possible values for the rate process is unbounded then when the asset is sufficiently in the money, the agent wants to choose an infinite rate function, and thus to generate a stopping opportunity immediately. Choosing a very large rate function, albeit for a short time, incurs a cost equivalent to a fixed fee for stopping, and this is reflected in the form of the value function.
Another important quantity is the value of $C$ at zero. If a choice of zero stopping rate is feasible and incurs zero cost per unit time, then the agent always has a feasible, costless choice for the rate function, and the value function is non-negative. Then, when the asset price is close to zero we expect the agent to put no effort into searching for buyers, and to wait. However, if the cost of choosing a zero rate for the Poisson process is strictly positive, then the agent has an incentive to search for offers even when the asset price is small and the payoff is zero. When the agent receives an offer they accept, because this ends their obligation to pay costs. In this way we can have a range of optimal behaviours when the asset price is small.
When the range of possible rate processes includes zero and $C$ is strictly increasing, then the agent only exerts effort to generate selling opportunities in circumstances where they would accept those opportunities. The result is that the agent stops at the first event of the Poisson process, and the optimal stopping element of the problem is trivial.
However, an interesting feature arises when there is a lower bound on the admissible rate process. Then, the agent may receive unwanted offers, which they choose to decline. In this case the agent chooses whether to accept the first offer or to continue.
We model the cost function $C$ as increasing, which seems a natural requirement of the problem. (However, if $C$ is not increasing, we can introduce a largest increasing cost function which lies below $C$, and the value function for that problem will match the solution of the original problem.) We also assume that the interval of possible values for the rate process is closed (at any finite endpoints) and that $C$ is lower semi-continuous. Neither of these assumptions is essential although they do simplify the analysis. In particular, these assumptions ensure that the minimal cost is attained, and that we do not need to consider a sequence of approximating strategies and problems.
\section{Introduction}\label{sec:intro}
In this work we consider the reaction-diffusion equation
\Fi{eq:parabolic}
\partial_t u = \text{div} (A(x) \nabla u ) + f (x,u), \quad t \in \mathbb{R}, \ x \in \mathbb{R}^N ,
\end{formula}\noindent
where $N \geq 1$ is the space dimension.
The diffusion matrix field $A = (A_{i,j})_{1 \leq i,j \leq N}$ is always assumed to be smooth
and to satisfy the ellipticity condition
\Fi{eq:ellipticity}
\exists C_1, C_2 \in (0,\infty), \quad \forall x,\xi \in \mathbb{R}^N,
\quad C_1 |\xi|^2 \leq \sum_{i,j} A_{i,j} (x) \xi_i \xi_j \leq C_2 |\xi|^2.\end{formula}\noindent
As far as the regularity of the reaction term $f(x,u)$ is concerned, we assume that it is at least globally Lipschitz continuous (a stronger hypothesis will be made in the general multistable case; see below).
Equation~\eqref{eq:parabolic} is spatially heterogeneous. As our goal is to construct travelling fronts,
i.e., self-similar propagating solutions, we impose a spatial structure on the heterogeneity. More precisely, we assume that the
terms in the equation are all periodic in space, with the same period.
For simplicity and without loss of generality up to some change of variables, we choose the periodicity cell to be $[0,1]^N$, that is,
\Fi{eq:periodicity}
\forall L \in \mathbb{Z}^N , \quad A (\cdot + L) \equiv A (\cdot) ,
\ f(\cdot + L ,\cdot) \equiv f(\cdot, \cdot).
\end{formula}\noindent
From now on, when we say that a function is periodic, we always understand that
its period is $(1,\dots,1)$.
In the spatially periodic case, one can consider
the notion of \textit{pulsating travelling front}, which we shall recall precisely below. Roughly, these are entire in time solutions which connect periodic steady states of the parabolic equation~\eqref{eq:parabolic}. The existence of such solutions is therefore deeply related to the underlying structure of \eqref{eq:parabolic} and its steady states.
In this paper, we shall always assume that \eqref{eq:parabolic} admits at least two spatially periodic steady states:
the constant~0 and a positive state~$\bar p(x)$. Namely, we assume that
$$f( \cdot, 0)
\equiv 0,$$
as well as
$$\left\{ \begin{array}{l}
\text{div} (A(x) \nabla \bar p) + f(x,\bar p) = 0, \vspace{3pt}\\
\forall L \in \mathbb{Z}^N, \quad \bar p(\cdot + L) \equiv \bar p >0.
\end{array}
\right.$$
We shall restrict ourselves to solutions~$u(t,x)$ of \eqref{eq:parabolic} that satisfy the inequality
$$0 \leq u \leq \bar p.$$
Notice that, as far as the Cauchy problem is concerned, owing to the parabolic comparison principle, it is sufficient to assume that the above property is fulfilled by the initial datum (we restrict ourselves to bounded solutions,
avoiding in this way situations where the comparison principle fails).
Let us also mention that 0 could be replaced by a spatially periodic steady state; we make this choice to keep the presentation simpler.
The steady states $0$ and $\bar p$ will be assumed to be asymptotically stable; we shall recall what this means in a moment. Then we distinguish the situation where these are the unique periodic steady states (bistable case)
from that where there is a finite number of intermediate stable states (multistable case).
In the latter, we will strengthen the stability condition.
\begin{assump}[Bistable case]\label{ass:bi}
The functions $0$ and $\bar p$ are the unique asymptotically stable
periodic steady states of \eqref{eq:parabolic}.
Furthermore, there does not exist any pair $q$, $\tilde q$ of periodic steady states of
\eqref{eq:parabolic} such that $0<q<\tilde q<\bar p$.
\end{assump}
\begin{assump}[Multistable case]\label{ass:multi}
The function $\partial_u f(x,u) $ is well-defined and continuous. There is a finite number of asymptotically stable periodic steady states, among which~0 and~$\bar p$,
and they are all linearly stable.
Furthermore,
for any pair of ordered periodic steady states $q<\tilde q$, there is a linearly stable periodic steady state $p$ such that $q \leq p \leq\tilde q$.
\end{assump}
The main difference between these two assumptions is that only the latter allows the existence of intermediate stable steady states.
As we shall see, the presence of such intermediate states might prevent
the existence of a pulsating travelling front connecting directly the two extremal steady states 0 and $\bar p$. More complicated dynamics involving a family of travelling fronts, which we refer to as a \textit{propagating terrace}, may instead occur.
We emphasize that the stable states in Assumption~\ref{ass:multi} are not necessarily ordered.
Let us recall the different notions of stability.
A steady state $p$ is said to be {\em asymptotically stable} if its
{\em basin of attraction} contains an open neighbourhood of $p$
in the $L^\infty(\mathbb{R}^N)$ topology; the basin of attraction of $p$
refers to the set of initial data for which the solution of the Cauchy problem
associated with~\eqref{eq:parabolic} converges uniformly to $p$
as $t \to +\infty$.
A periodic state $p$ is said to be {\em linearly stable} (resp.~{\em unstable}) if the linearized operator around $p$, i.e.,
$$\mathcal{L}_p w := \text{div} (A(x)\nabla w ) +
\partial_u f (x, p(x)) w,$$
has a negative (resp.~positive)
principal eigenvalue in the space of periodic functions. Owing to the
regularity of $f$ from Assumption~\ref{ass:multi}, it is
rather standard to construct sub- and supersolutions using the principal eigenfunction
and to use them to show that linear stability implies asymptotic stability.
The converse is not true in general;
this is why the bistable Assumption~\ref{ass:bi} is not a particular case of Assumption~\ref{ass:multi}.
We also point out that
the second part of Assumption~\ref{ass:bi} automatically prevents the existence of intermediate
asymptotically stable steady states, thanks to a crucial result in dynamical systems due to Dancer and Hess~\cite{Tri}
known as ``order interval trichotomy''; see also~\cite{Matano}.
We recall this result in Theorem~\ref{DH} in the Appendix.
\begin{rmk}
In the case of the spatially-invariant equation
\Fi{eq:homo}
\partial_t u = \Delta u + f (u), \quad t \in \mathbb{R}, \ x \in \mathbb{R}^N,
\end{formula}\noindent
Assumption \ref{ass:bi} is fulfilled if and only if $\bar p$ is constant, say, equal to $1$,
and there exists $\theta\in(0,1)$
such that $f(u)<0$ for $u\in(0,\theta)$
and $f(u)>0$ for $u\in(\theta,1)$.
This is shown in Lemma \ref{lem:f0} below and the subsequent remark.
Then, with the same arguments, one can readily check that
Assumption~\ref{ass:multi}
is equivalent to requiring that $f\in C^1([0,1])$ and that it has an odd (finite) number of zeroes
such that, counting
from smallest to largest, the
odd ones (which include $0$, $1$) satisfy $f'<0$ (these are the only stable periodic steady~states).
\end{rmk}
With a slight abuse of terminology, in the sequel we shall simply refer to the
asymptotic stability as ``stability''.
Then a solution will be said to be ``unstable'' if it is not (asymptotically)~stable.
\paragraph{The notion of pulsating fronts and terraces} Let us first recall the notion of \textit{pulsating travelling front}, which is the extension to the periodic framework of the usual notion of travelling front. We refer to~\cite{Xin-91} for an early introduction of this concept.
\begin{defi}\label{def:puls_front}
A {\em pulsating travelling front} for~\eqref{eq:parabolic} is an entire in time solution of the type
$$u(t,x) = U ( x , x\cdot e -ct),$$
where $c \in \mathbb{R}$, $e \in \mathbb{S}^{N-1}$, the function $U(x,z)$ is periodic in the $x$-variable and satisfies
$$U (\cdot, - \infty) \equiv q_1 (\cdot) > U(\cdot, \cdot) > U (\cdot, +\infty) \equiv q_2 (\cdot).$$
Furthermore, we call $c$ the speed of the front, the vector $e$ its direction, and we say that $U$ connects
$q_1$ to $q_2$.
\end{defi}
\begin{rmk}\label{rmk:trap}
The functions $q_1$, $q_2$ in the above definition are necessarily two steady states
of \eqref{eq:parabolic}.
Let us also point out that the change of variables $(t,x) \mapsto (x , x\cdot e - ct)$ is only invertible when $c \neq 0$, so that one should a priori carefully distinguish both functions~$u$ and~$U$.
\end{rmk}
In the bistable case, our goal is to construct a pulsating front connecting
$\bar p$ to $0$. Let us recall a few earlier results. In~\cite{Xin-91}, a pulsating front was already constructed in the special case where coefficients are close to constants. Yet dealing with more general heterogeneities turned out to be much more difficult, and only recently was a pulsating front constructed in~\cite{FZ} in the one-dimensional case, through an abstract framework
which is similar to the one considered in the present work.
Higher dimensions were tackled in~\cite{Ducrot} under an additional nondegeneracy assumption and with a more PDE-oriented approach in the spirit of \cite{BH02}.
However, as mentioned before,
the notion of pulsating travelling front does not suffice to describe the dynamics
in the more general multistable case.
The appropriate notion in this case is that of a propagating terrace, as defined in~\cite{DGM,GM}.
An earlier equivalent notion, called
\textit{minimal decomposition}, was introduced in~\cite{FifMcL77} in
the homogeneous case.
\begin{defi}\label{def:terrace}
A {\em propagating terrace} connecting $\bar p$ to $0$
in the direction $e \in \mathbb{S}^{N-1}$ is a couple of two finite sequences $(q_j)_{0 \leq j \leq J}$ and $(U_j)_{1 \leq j \leq J}$ such that:
\begin{itemize}
\item the functions $q_j$ are periodic steady states of \eqref{eq:parabolic} and satisfy
$$\bar p \equiv q_0 > q_1 > \cdots > q_J \equiv 0;$$
\item for any $1 \leq j \leq J$, the function $U_j$ is a pulsating travelling front of
\eqref{eq:parabolic} connecting $q_{j-1}$ to $q_j$
with speed $c_j \in \mathbb{R}$ and direction $e$;
\item the sequence $(c_j)_{1 \leq j \leq J}$ satisfies
$$c_1 \leq c_2 \leq \cdots \leq c_J .$$
\end{itemize}
\end{defi}
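To fix ideas, consider the following homogeneous caricature (the specific nonlinearity is our own illustrative example, not part of the general framework): suppose that $f=f(u)$ admits exactly the stable zeroes $0$, $\tfrac12$, $1$, and let $c_{1,1/2}$, $c_{1/2,0}$ denote the bistable front speeds of the connections $1\to\tfrac12$ and $\tfrac12\to0$. If
$$c_{1,1/2}\,\leq\,c_{1/2,0},$$
then $\big((1,\tfrac12,0),(U_1,U_2)\big)$ is a propagating terrace with $J=2$: the lower front runs ahead and the upper one follows. If instead $c_{1,1/2}>c_{1/2,0}$, this pair violates the ordering of the speeds, and one rather expects the large-time dynamics to be described by a single front connecting $1$ to $0$, i.e.~$J=1$, in the spirit of the minimal decompositions of~\cite{FifMcL77}.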
Roughly speaking, a propagating terrace is a superposition of pulsating travelling fronts
spanning the whole range from $0$ to $\bar p$.
We emphasize that the ordering of the speeds of the fronts involved in a propagating terrace is essential.
Indeed, while there may exist many families of steady states and fronts satisfying the first two
conditions in Definition \ref{def:terrace},
only terraces can be expected to describe the large-time behaviour of solutions of the
Cauchy problem associated to \eqref{eq:parabolic}, see~\cite{DGM},
which makes them more meaningful.
\paragraph{Main results} Before stating our theorems, let us also recall a result by Weinberger.
\begin{theo}[Monostable case \cite{W02}]\label{thm:mono}
Let $p>q$ be two periodic steady states of \eqref{eq:parabolic}, and assume that
any periodic function $u_0\in C(\mathbb{R}^N)$ satisfying $q\leq u_0\leq p$, $u_0\not\equiv q$,
lies in the basin of attraction of $p$.
Then, for any $e \in \mathbb{S}^{N-1}$,
there is some $c^* \in \mathbb{R}$ such that
a pulsating travelling front in the direction~$e$ with speed $c$ connecting $p$ to $q$
exists if and only if $c \geq c^*$.
\end{theo}
Assumptions~\ref{ass:bi} or~\ref{ass:multi} allow us to apply this theorem
around any given unstable periodic state~$q$ between $0$ and $\bar p$.
To check the hypothesis of Theorem~\ref{thm:mono},
fix $x_0\in \mathbb{R}^N$ and let $p_+ >q$ be a stable state realizing the following minimum:
$$ \min\{p(x_0) : q<p<\bar p \ \ \mbox{and $p$ is a periodic stable state} \} .$$
Note that $p_+$ exists since we always assume that there is a finite number of stable
periodic steady states. By either Assumption~\ref{ass:bi} or~\ref{ass:multi},
there does not exist any periodic steady state between $q$ and $p_+$.
Because of this, and the stability of $p_+$,
only the case~$(b)$ of the order interval trichotomy Theorem~\ref{DH} is allowed.
Namely, there exists a spatially periodic solution $u$ of \eqref{eq:parabolic} such that $u(k, \cdot)\to q$ as $k\to-\infty$ and $u (k,\cdot) \to p_+$ as $k \to +\infty$. By comparison principles, this implies that any periodic initial datum $q \leq u_0 \leq p_+$ with $u_0 \not \equiv q$ lies in the basin of attraction of
$p_+$. We can therefore apply Theorem~\ref{thm:mono} and find a minimal
speed $\overline{c}_q$ of fronts in a given direction
$e\in\mathbb{S}^{N-1}$ connecting
$p_{+}$ to~$q$. Applying the same arguments to \eqref{eq:parabolic} with
$f(x,u)$ replaced by $-f(x,-u)$,
we find a minimal speed $c '_q$ of fronts $\tilde U$ in the direction $-e$ connecting $-p_{-}$ to $-q$,
where~$p_{-}$ is the largest stable periodic steady state lying below~$q$.
Hence, $\underline{c}_q:=-c '_q$ is the maximal speed of fronts $U(x,z):=-\tilde U(x,-z)$ for~\eqref{eq:parabolic}
in the direction $e$ connecting~$q$ to~$p_{-}$.
After these considerations, we are in a position to state our last assumption.
\begin{assump}\label{ass:speeds}
For any unstable periodic steady state $q$ between $0$ and $\bar p$ and any $e\in\mathbb{S}^{N-1}$, there holds that
$$\overline{c}_q > \underline{c}_q,$$
where $\overline{c}_q$ and $\underline{c}_q$ are defined above.
\end{assump}
Notice that under the bistable Assumption~\ref{ass:bi}, clearly $p_{+} \equiv \bar p$ and $p_{-}\equiv0$.
Therefore, in that case, Assumption~\ref{ass:speeds} means that pulsating fronts connecting $\bar p$ to
an intermediate state $q$ have to be strictly faster than pulsating fronts connecting~$q$ to~$0$. We point out that this hypothesis, though implicit, was already crucial in the earlier existence results for bistable pulsating fronts; see~\cite{Ducrot,FZ} where it was referred to as the \textit{counter-propagation} assumption.
When $u \mapsto f(x,u)$ is $C^1$, a sufficient condition ensuring Assumption~\ref{ass:speeds} is that $q$ is {\em linearly}
unstable.
In such a case there holds that $\overline{c}_q >0> \underline{c}_q$, as shown in
Proposition~\ref{prop:counter} in the Appendix.
We also show there for completeness that if $q$ is just unstable then
$\overline{c}_q \geq 0 \geq \underline{c}_q$.
The fact that the minimal speed in a monostable problem cannot be 0 seems
to be a natural property. Besides the non-degenerate ($q$ linearly unstable) case,
it is known to hold for homogeneous equations as well as
for some special (and more explicit) bistable equations,
c.f.~\cite{DHZ,FZ} and the references therein.
However, as far as we know, it remains an open problem in general.
Our first main result concerns the bistable case.
\begin{theo}[Bistable case]\label{th:bi}
If Assumptions~\ref{ass:bi} and~\ref{ass:speeds} are satisfied, then for any $e \in \mathbb{S}^{N-1}$, there exists a monotonic in time
pulsating travelling front connecting $\bar p$
to $0$ in the direction $e$ with some speed $c (e) \in \mathbb{R}$.
\end{theo}
This theorem slightly improves the existence result of~\cite{Ducrot}, which
additionally requires the stability or instability of the steady states to be linear (i.e.~non-degenerate).
However, we emphasize that our argument is completely different:
while in~\cite{Ducrot} the proof relies on an elliptic regularization technique,
here we proceed through a time discretization and a dynamical system approach.
\begin{rmk}
Our previous theorem includes the possibility of a front with zero speed.
However, there does not seem to be a unique definition of a pulsating front with zero speed in the
literature, mainly because the change of variables $(t,x) \mapsto (x,x\cdot e-ct)$
is not invertible when $c=0$.
Here, by Definition~\ref{def:puls_front} a front with zero speed is simply
a stationary solution $u(x)$ with asymptotics $u (x) - q_{1,2} \to 0$ as $x \cdot e \to \mp \infty$. As a matter of fact, in the zero speed case our approach provides the additional
property that there exists a function $U$ as in Definition~\ref{def:puls_front}, such that
$u(t,x) = U (x, x \cdot e + z)$
solves~\eqref{eq:parabolic} for any $z \in \mathbb{R}$. However, this function $U$ lacks any regularity, so that in particular it is not a standing pulsating wave in the sense of~\cite{Ducrot}.
\end{rmk}
\begin{theo}[Multistable case]\label{th:multi}
If Assumptions~\ref{ass:multi} and~\ref{ass:speeds} are satisfied, then for any $e\in \mathbb{S}^{N-1}$, there exists a propagating terrace $((q_j)_j, (U_j)_j)$ connecting $\bar p$
to $0$ in the direction~$e$.
Furthermore, all the $q_j$ are stable steady states and all the fronts $U_j$ are monotonic in time.\end{theo}
Earlier existence results for propagating terraces dealt only with the one-dimensional case, where a zero-number (Sturm-type) and steepness argument is available~\cite{DGM,GM}. We also refer to \cite{Risler}, where a similar phenomenon is studied by an energy method in the framework of systems with a gradient structure. As far as we know, our result is completely new in the heterogeneous, higher-dimensional case.
The stability of these pulsating fronts and terraces will be the subject of a forthcoming work. Let us point out that, quite intriguingly, the shape of the terrace may vary depending on the direction. More precisely, for different choices of the vector $e$, the terrace may involve different intermediate states $(q_j)_j$; it is even possible that the number of such states varies, as we state in the next proposition.
\begin{prop}\label{prop:asymmetric}
There exists an equation~\eqref{eq:parabolic} in dimension $N=2$
for which Assumptions~\ref{ass:multi},~\ref{ass:speeds} hold and moreover:
\begin{itemize}
\item in the direction $(1,0)$, there exists a unique propagating terrace connecting $\bar p$ to 0, and it consists of exactly two travelling fronts;
\item in the direction $(0,1)$, there exists a unique propagating terrace connecting $\bar p$ to 0, and it consists of a single travelling front.
\end{itemize}
\end{prop}
Uniqueness here is understood up to shifts in time of the fronts.
It will be especially interesting to study how this non-symmetric
phenomenon affects the large-time dynamics of solutions of the Cauchy~problem.
\paragraph{Plan of the paper}
We start in the next section with a sketch of our argument in the homogeneous case, to explain the main ingredients of our method. This relies on a time discretization,
in the spirit of Weinberger \cite{W02},
and on the study of an associated notion of a discrete travelling front.
For the sake of completeness, some of the arguments of~\cite{W02} will be recalled
in the course of the proofs. We also point out here that the resulting discrete problem shares similarities
with the abstract bistable framework considered in~\cite{FZ},
though we shall use a different method to tackle bistable and multistable equations
without distinction.
The proof of the general case is carried out in several steps:
\begin{enumerate}
\item Introduction of the iterative scheme (Sections \ref{sec:discrete}, \ref{sec:ac1}).
\item Definition of the speed of the front (Section \ref{sec:c^*}).
\item Capturing the iteration at the right moment and position (Section \ref{sec:subsequence}).
\item Derivation of the travelling front properties (Section \ref{sec:uppermost}).
\end{enumerate}
At this stage we shall have constructed a {\em discrete} pulsating travelling front
connecting~$\bar p$ to some stable periodic steady state~$0\leq p<\bar p$.
In the bistable case, one necessarily has that $p\equiv0$ and then it only remains to prove that
the front is actually a continuous front.
For the multistable case, we shall iterate our construction getting a family
of travelling fronts. In order to conclude that this is a propagating terrace, we need to show that
their speeds are ordered; this is the only point which requires
the linear stability in Assumption \ref{ass:multi}.
Summing up, the method proceeds as~follows:
\begin{enumerate}
\setcounter{enumi}{4}
\item Construction of the (discrete) pulsating terrace (Section \ref{sec:terrace}).
\item Passing to the continuous limit (Section \ref{sec:continuous}).
\end{enumerate}
Finally, Section~\ref{sec:asymmetric} is dedicated to the proof of
Proposition~\ref{prop:asymmetric},
which provides an example where the shape of the propagating terrace strongly
depends on its direction. To achieve this, we shall exhibit a bistable equation
for which pulsating fronts have different speeds, depending on their direction, see
Proposition \ref{pro:speeds} below.
\section{The 1-D homogeneous case}\label{sec:1D}
In order to illustrate our approach, let us consider the simpler (and, as far as travelling fronts are concerned, already well-understood~\cite{AW}) bistable homogeneous equation
\Fi{ref_frame}
\partial_t u = \partial_{xx} u + f(u),\quad t\in\mathbb{R},\ x\in\mathbb{R},
\end{formula}\noindent
with $f \in C^1 ([0,1])$ satisfying
$$f(0)=f(1)=0,\qquad f<0\ \text{ in }(0,\theta),\qquad f>0\ \text{ in }(\theta,1).$$
In this framework, pulsating fronts simply reduce to planar fronts, i.e., entire solutions
of the form $U(x-ct)$.
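For instance, for the cubic nonlinearity $f(u)=u(1-u)(u-\theta)$, which satisfies the sign conditions above, a direct computation shows that the front connecting $1$ to $0$ is explicit:
$$U(\xi)=\frac{1}{1+e^{\xi/\sqrt{2}}},\qquad c=\frac{1-2\theta}{\sqrt{2}},$$
so that, in particular, the front is stationary precisely when $\theta=\tfrac12$.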
The hypotheses on $f$ guarantee that Assumption \ref{ass:bi} is fulfilled with $\bar p\equiv1$.
They also entail the ``counter-propagation'' property, Assumption \ref{ass:speeds},
because in the homogeneous monostable case travelling fronts have positive speeds, see \cite{AW}.
Namely, fronts connecting $1$ to $\theta$ exist for speeds $c$ larger than some $\overline c>0$,
whereas fronts connecting~$\theta$ to~$0$ exist for speeds $c$ smaller than some $\underline c<0$
(the latter property is derived from~\cite{AW} by considering fronts moving leftward
for the equation for $\theta-u$).
The equation in the frame moving rightward with speed $c\in\mathbb{R}$ reads
\Fi{moving}
\partial_t u = \partial_{xx} u + c \partial_x u+f(u),\quad t\in\mathbb{R},\ x\in\mathbb{R}.
\end{formula}\noindent
\subsection{The dynamical system}
We start by placing ourselves in a more abstract framework which we shall use to define a candidate front speed $c^*$, in the same way as in~\cite{W02}. We shall then turn to the construction of a travelling front connecting 1 to 0. We point out that in~\cite{W02}, such a travelling front was only shown to exist in the monostable case, and that a different argument is needed to deal with bistable or more complicated situations.
For any given $c\in\mathbb{R}$, we call $\mathcal{F}_c$
the evolution operator after time 1 associated with~\eqref{moving}.
Namely, $\mathcal{F}_c [ \phi ](x):=v(1,x)$, where
$v$ is the solution of \eqref{moving} emerging from
the initial datum $v(0,x)=\phi(x)$. It follows from the
parabolic strong maximum principle that the operator~$\mathcal{F}_c$ is increasing.
Let us already point out that the profile $U$ of a usual travelling front
$U(x-ct)$ for~\eqref{ref_frame}
is a stationary solution of \eqref{moving} and thus
a fixed point for the operator $\mathcal{F}_c$. As a matter of fact, in the homogeneous case the converse is also true (this follows for instance from a uniqueness result
for almost planar fronts derived in \cite{BH12}).
Therefore, our goal in this section will be to construct such a fixed point. \\
Consider a function $\phi\in W^{1,\infty}(\mathbb{R})$ satisfying
\Fi{phi}
\phi \ \text{ is nonincreasing},\qquad
\phi(-\infty)\in(\theta,1),\qquad
\phi= 0\ \text{ in }\ [0,+\infty).
\end{formula}\noindent
We then define a sequence $(a_{c,n})_{n\in\mathbb{N}}$ through the following iterative procedure:
$$a_{c,0}:= \phi,$$
$$a_{c,n+1} := \max \{ \phi, \mathcal{F}_c [a_{c,n}]\},$$
where the maximum is taken at each $x\in\mathbb{R}$.
It follows from the monotonicity of $\phi$ and $\mathcal{F}_c$ (the latter being strict) that
$a_{c,n}(x)$ is nondecreasing with respect to $n$ and
nonincreasing with respect to $x$,
and that it satisfies $0<a_{c,n}<1$.
Then, observing that
\Fi{E0}
\mathcal{F}_c [V]=\mathcal{F}_0 [V] (\cdot+c),
\end{formula}\noindent
for any function $V$, we deduce that $a_{c,n}$ is nonincreasing with respect to $c$.
One also checks by iteration that $a_{c,n}(+\infty)=0$, thanks to standard parabolic arguments.
All these properties are summarized in the following.
\begin{lem}\label{lem:acn}
The sequence $(a_{c,n})_{n\in\mathbb{N}}$ is nondecreasing and satisfies $0<a_{c,n}<1$
and $a_{c,n}(+\infty)=0$ for all $n\geq1$.
Moreover, $a_{c,n}(x)$ is nonincreasing with respect to both $c$ and $x$,
the latter monotonicity being strict in the set where $a_{c,n}>\phi$.
\end{lem}
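Although the arguments in this paper are purely analytical, the monotonicity properties of Lemma~\ref{lem:acn} are easy to observe numerically. The following sketch (the grid, the time step, the truncation of $\mathbb{R}$ to a bounded window and the specific cubic nonlinearity are all our own illustrative choices) implements the iteration $a_{c,n+1}=\max\{\phi,\mathcal{F}_c[a_{c,n}]\}$ with an explicit finite-difference approximation of~$\mathcal{F}_c$:

```python
import numpy as np

# Rough illustration of the iteration a_{c,n+1} = max(phi, F_c[a_{c,n}]) for
# u_t = u_xx + c u_x + f(u), with the cubic bistable f(u) = u(1-u)(u-theta).
# Grid, time step and the truncation of R to [-L, L] are illustrative choices.
theta = 0.3
def f(u):
    return u * (1.0 - u) * (u - theta)

L, dx = 20.0, 0.25
x = np.arange(-L, L + dx, dx)
dt, steps = 0.02, 50                      # 50 explicit Euler steps = time 1

def F(v, c):
    """Approximate time-1 solution operator of u_t = u_xx + c u_x + f(u)."""
    for _ in range(steps):
        w = np.empty_like(v)
        w[1:-1] = ((v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
                   + c * (v[2:] - v[:-2]) / (2.0 * dx) + f(v[1:-1]))
        w[0], w[-1] = f(v[0]), f(v[-1])   # endpoints just follow the ODE u' = f(u)
        v = v + dt * w
    return v

# phi: nonincreasing, phi(-inf) = 0.8 in (theta, 1), phi = 0 on [0, +inf)
phi = 0.8 * np.clip(-x / 5.0, 0.0, 1.0)

def iterate(c, n):
    """Return the list [a_{c,0}, ..., a_{c,n}]."""
    hist = [phi]
    for _ in range(n):
        hist.append(np.maximum(phi, F(hist[-1], c)))
    return hist

hist = iterate(-1.0, 20)         # c well below c*: a_{c,n} should fill up to 1
a_lo = hist[-1]
a_hi = iterate(1.0, 20)[-1]      # c well above c*: the profile stays blocked by phi
i10 = np.searchsorted(x, 10.0)   # grid index of the point x = 10
```

For $c=-1$ the iterates sweep the whole window up to $1$, for $c=1$ they remain blocked near $\phi$, and the monotonicity in $n$ from Lemma~\ref{lem:acn} is preserved exactly, since the explicit scheme above is monotone for these step sizes.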
Lemma \ref{lem:acn} implies that $(a_{c,n})_{n\in\mathbb{N}}$ converges pointwise to some
nonincreasing function $\phi\leq a_c\leq 1$.
The convergence actually holds locally uniformly in~$\mathbb{R}$, because the~$a_{c,n}$ are
equi-uniformly Lipschitz-continuous, due to parabolic estimates.
We also know that the $a_c$ are nonincreasing with respect to $c$.
We then introduce
$$c^* := \sup \{ c \in \mathbb{R} \, : \ a_c \equiv 1 \}.$$
One may check that $c^*$ is indeed a well-defined real number.
Without going into the details (this is a particular case of either Section~\ref{sec:N} or~\cite{W02}), we simply point out that this can be proved using some super- and subsolutions which exist
thanks to the Lipschitz continuity of $f$
as well as to the choice of $\phi (-\infty)$ in the basin of attraction of $1$.
We further see that the definition of $c^*$ does not depend on the particular choice of the
initialising function $\phi$. Indeed, if $\tilde\phi$ satisfying \eqref{phi} is the initialisation of another sequence,
then for $c<c^*$ there holds that $a_{c,n}>\tilde\phi$ for $n$ sufficiently large.
From this and the monotonicity of $\mathcal{F}_c$
one deduces by iteration that the value of $c^*$ obtained starting from
$\phi$ is larger than or equal to the one provided by $\tilde\phi$.
Equality follows by exchanging the roles of $\phi$ and $\tilde\phi$.
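In the present homogeneous setting, the speed $c^*$ defined above will turn out to be the classical bistable front speed of~\cite{AW}, which for the cubic nonlinearity $f(u)=u(1-u)(u-\theta)$ is explicitly $(1-2\theta)/\sqrt{2}$. As a hedged numerical aside (the discretization below is our own crude choice and plays no role in the argument), this value can be recovered from the spreading of a step initial datum:

```python
import numpy as np

# Rough numerical sanity check (grid and step sizes are illustrative choices):
# for the cubic bistable f(u) = u(1-u)(u-theta) the unique front speed equals
# (1 - 2*theta)/sqrt(2); we evolve u_t = u_xx + f(u) from a step between the
# stable states 1 and 0 and track the displacement of the u = 1/2 level set.
theta = 0.3
def f(u):
    return u * (1.0 - u) * (u - theta)

L, dx, dt = 40.0, 0.25, 0.02
x = np.arange(-L, L + dx, dx)
u = (x < 0.0).astype(float)               # step between the stable states 1 and 0

def step(u):
    w = np.empty_like(u)
    w[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + f(u[1:-1])
    w[0], w[-1] = f(u[0]), f(u[-1])       # endpoints just follow the ODE u' = f(u)
    return u + dt * w

def front_pos(u, level=0.5):
    i = int(np.argmax(u < level))         # first grid point strictly below the level
    return x[i - 1] + dx * (u[i - 1] - level) / (u[i - 1] - u[i])

for _ in range(int(10 / dt)):             # transient: let the front profile form
    u = step(u)
x1 = front_pos(u)
for _ in range(int(20 / dt)):             # then measure the displacement over time 20
    u = step(u)
c_est = (front_pos(u) - x1) / 20.0
c_exact = (1.0 - 2.0 * theta) / np.sqrt(2.0)
```

With $\theta=0.3$ the measured speed lands close to $(1-2\theta)/\sqrt{2}\approx0.283$, positive as expected since $\theta<\tfrac12$.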
We shall also use the fact that
\begin{equation}\label{c*not1}
a_{c^*} \not \equiv 1.
\end{equation}
This comes from the openness of the set $\{c \in \mathbb{R} : a_c \equiv 1 \}$, which is established in either
Section~\ref{sec:N} or~\cite{W02} in the more general periodic case.
Let us briefly sketch a more direct proof. Let $c\in\mathbb{R}$ be such that $a_{c} \equiv 1$.
We can find $\bar n$ such that $a_{c,\bar n}(1)>\phi(-\infty)$.
Arguing by induction and exploiting \eqref{E0}, one sees that
$$\forall\delta>0,\ n\in\mathbb{N},\ x\in\mathbb{R},\quad
a_{c+\delta,n}(x)\geq a_{c,n}(x+n\delta).$$
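Indeed, the case $n=0$ is trivial, and for the inductive step one uses \eqref{E0} in the form $\mathcal{F}_{c+\delta}[V]=\mathcal{F}_{c}[V](\cdot+\delta)$, the invariance of $\mathcal{F}_c$ under translations (the equation being homogeneous), the monotonicity of $\mathcal{F}_c$ and the fact that $\phi$ is nonincreasing:
\[\begin{split}
a_{c+\delta,n+1}(x) & = \max \{ \phi(x), \mathcal{F}_{c}[a_{c+\delta,n}](x+\delta) \}\\
&\geq \max \{ \phi(x+(n+1)\delta), \mathcal{F}_{c}[a_{c,n}(\cdot+n\delta)](x+\delta) \}\\
&= \max \{ \phi(x+(n+1)\delta), \mathcal{F}_{c}[a_{c,n}](x+(n+1)\delta) \} = a_{c,n+1}(x+(n+1)\delta).
\end{split}\]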
Thus, $a_{c+\frac1{\bar n},\bar n}(0)>\phi(-\infty)$ which implies that
$a_{c+\frac1{\bar n},\bar n}>\phi$
because $a_{c+\frac1{\bar n},\bar n}$ and $\phi$ are
nonincreasing and $\phi$ is supported in $(-\infty,0]$.
Using the next result we eventually deduce that
$a_{c''} \equiv 1$ for all $c''$ in some neighborhood of $c$, and thus $c^* > c$.
\begin{lem}\label{lem:c*open}
Let $c'\in\mathbb{R}$ and $\bar n\in\mathbb{N}$ be such that $a_{c',\bar n}>\phi$.
Then $a_{c''}\equiv1$ for all~$c''<c'$.
\end{lem}
\begin{proof}
The monotonicities provided by Lemma~\ref{lem:acn}
yield $a_{c'',\bar n+m} > \phi$ for all $c''\leq c'$ and $m\in\mathbb{N}$, which,
recalling the definition of the sequences
$(a_{c,n})_{n\in\mathbb{N}}$, implies in turn that
$a_{c'',\bar n+m}=(\mathcal{F}_{c''})^m [a_{c'' ,\bar n}]$.
Then, taking $c''<c'$ and exploiting \eqref{E0}, we get
\[\begin{split}
\forall m\in\mathbb{N},\ x\in\mathbb{R},\quad
a_{c'',\bar n+m}(x) & = (\mathcal{F}_{c''})^m [a_{c'', \bar n}] (x) \\
&=(\mathcal{F}_{c'})^m [a_{c'' ,\bar n}] (x- (c'-c'') m)\\
&\geq
(\mathcal{F}_{c'})^m [a_{c',\bar n}] (x - (c'-c'') m)\\
&=a_{c',\bar n+m}(x- (c'-c'') m).
\end{split}\]
Passing to the limit as $m \to +\infty$ (and using again the monotonicity of the sequence)
we find that $a_{c''}(x)\geq a_{c',n}(-\infty)$
for all $x\in\mathbb{R}$ and $n\in\mathbb{N}$.
Observe that $(a_{c',n}(-\infty))_{n\in\mathbb{N}}$ is the solution of the ODE $U'=f(U)$ computed on the integers and starting from
$\phi(-\infty)>\theta$, whence it converges to $1$.
This shows that $a_{c''} \equiv 1$.
\end{proof}
\subsection{Capturing the sequence at the right moment and position}
From here we diverge from
Weinberger's scheme which, as we mentioned above,
does provide a front in the monostable case but not
in the bistable one.
Consider $c < c^*$. Because $a_c \equiv 1$,
we have seen before that we can find $n(c)$ such that
$a_{c,n(c)+m} > \phi$ for $m\in\mathbb{N}$. This means that, starting from $n(c)$,
the sequence $(a_{c,n})_{n\in\mathbb{N}}$ is simply given by iterations of $\mathcal{F}_c$, namely,
\Fi{nc1}
\forall m\in\mathbb{N},\quad
a_{c,n(c)+m}=(\mathcal{F}_c)^m [a_{c,n(c)}].
\end{formula}\noindent
Fix $\theta'\in(\theta,\phi(-\infty))$ and, for $n\geq n(c)$, define the point
$x(c,n)$ through the relation
$$a_{c, n} (x (c,n)) =\theta'.$$
Note that $x(c,n)$ exists because $ a_{c,n}(-\infty)\geq\phi(-\infty)>\theta'$
and $a_{c,n}(+\infty)=0$ by Lemma~\ref{lem:acn}.
Moreover we claim that, by construction of $c^*$, there holds that
\begin{equation}\label{claim1}
\limsup_{n \to \infty} \frac{x (c,n)}{n} \leq c^* - c.
\end{equation}
Let us postpone the proof of this for a moment and continue with our construction.
By \eqref{claim1}, one readily sees that, up to increasing $n (c)$ if need be,
the following holds:
\Fi{nc2}
\forall 0\leq m\leq 1/\sqrt{c^*-c},\quad
x (c, n (c)+m ) - x (c, n(c)) \leq 2\sqrt{c^* -c}.
\end{formula}\noindent
Conditions \eqref{nc1},\eqref{nc2} determine our choice of the diagonal sequence
$(a_{c,n(c)})_{c<c^*}$.
Let $u_c (t,x)$ denote the solution of the Cauchy problem for
\eqref{moving} with initial datum~$a_{c,n(c)}$ (notice that $u_c (t,x)$ satisfies parabolic estimates up to time $t=0$ because $a_{c,n(c)}=\mathcal{F}_c[a_{c,n(c)-1}]$). Property \eqref{nc1} and the monotonicity of $(a_{c,n})_{n\in\mathbb{N}}$ imply that
$$\forall n\in\mathbb{N},\quad
u_c(n+1,\cdot)\equiv a_{c,n(c)+n+1}\geq a_{c,n(c)+n}\equiv u_c(n,\cdot),$$
that is, the sequence $(u_c(n,\cdot))_{n\in\mathbb{N}}$ is nondecreasing.
Furthermore, the function $u_c$ inherits the monotonicity
in $x$ of the initial datum, which is strict by Lemma~\ref{lem:acn}
because $ a_{c,n}>\phi$.
We finally consider the translation $u_c(t,x+x(c, n(c)))$ of $u_c$.
By parabolic estimates up to $t=0$, we have that (up to subsequences)
$$u_c (t ,x + x(c, n(c))) \to a^* (t,x)\quad\text{ as } c \nearrow c^*,$$
locally uniformly in $(t,x)\in [0,+\infty) \times\mathbb{R}$, where~$a^*(t,x)$ satisfies the equation \eqref{moving} with $c=c^*$.
We further know that
$$a^*(0,0)=\theta'$$
and that $a^*(n,x)$ is nondecreasing in~$n\in\mathbb{N}$ and
nonincreasing in~$x\in\mathbb{R}$.
Let us now prove~\eqref{claim1}. First, the function $\phi $
being nonincreasing, for any $c < c^*$ we deduce from \eqref{E0} that
$$a_{c,1} = \max \{ \phi, \mc{F}_c [\phi] \} \leq
\max \{ \phi (\cdot + (c-c^*)), \mc{F}_{c^*} [\phi] (\cdot + (c-c^*)) \} = a_{c^*,1} (\cdot + (c-c^*)).$$
An iterative argument then shows that
\begin{equation}\label{claim22}
\forall n\in\mathbb{N},\quad a_{c,n} \leq a_{c^*,n} (\cdot + n (c-c^*)).
\end{equation}
Now it follows from \eqref{c*not1} that $\inf a_{c^*} \leq \theta $. Indeed, assume by contradiction that $\inf a_{c^*} > \theta$. Then by comparison with the ODE, we immediately conclude that $(\mathcal{F}_{c^*})^m [a_{c^*}] \to 1$ as $m \to +\infty$. However, by construction, $a_{c^*,n+1} \geq \mathcal{F}_{c^*} [a_{c^*,n}]$, hence $a_{c^*} \geq \mathcal{F}_{c^*} [a_{c^*}]$ and therefore
the monotonicity of $\mc{F}_{c^*}$ eventually yields
$$a_{c^*}\geq\lim_{m\to+\infty}(\mathcal{F}_{c^*})^m [a_{c^*}]=1,$$
contradicting \eqref{c*not1}.
We infer from the above that there exists $X_{\theta'} \in\mathbb{R}$ such that
$$\forall n \in \mathbb{N},\quad
\theta'>a_{c^*} (X_{\theta'})\geq a_{c^*,n} (X_{\theta'})\geq a_{c,n} (X_{\theta'}+n (c^* -c)),$$
where the last inequality follows from~\eqref{claim22}. This means that
$$\forall n \in \mathbb{N},\quad
x (c,n)<X_{\theta'} + n (c^* -c),$$
from which \eqref{claim1} immediately follows.
\subsection{The function $a^*$ converges to the profile of a front}
We recall that, by construction, the sequence $a^* (n, \cdot)$ is nondecreasing with respect to $n \in \mathbb{N}$. In particular, we can define
$$U^* (t,x) := \lim_{n \to +\infty} a^* (t+n,x).$$
By parabolic estimates, the above limit exists (up to subsequences)
locally uniformly in $(t,x)\in\mathbb{R}^2$ and
$U^*$ is a periodic in time solution of \eqref{moving} with $c = c^*$.
Moreover, $U^*$ satisfies $U^*(0,0)\geq\theta'$
and inherits from $a^*$ that it is nonincreasing with respect to $x$. Let us check that it is actually a travelling front.
Using parabolic estimates and the monotonicity with respect to $x$, we see that the sequences $(U^*(t,x\pm n))_{n\in\mathbb{N}}$ converge locally uniformly in $(t,x)\in \mathbb{R}^2$ (up to subsequences)
to two steady states $U^*_\pm$ of the same ODE \,$U'=f(U)$ (here we used that this
ODE does not admit non-trivial periodic solutions), i.e., $U^*_\pm$ are constantly equal
to~$0$,~$\theta$ or $1$.
The fact that $U^*(0,0)\geq\theta'>\theta$ and the monotonicity in $x$ then imply that
$$U^*_- = U^* (\cdot, -\infty) \equiv 1.$$
Next, we claim that $U^*_+ \equiv0$.
Once this claim is proved,
one may show by a sliding argument as in~\cite{BH12} that $U^*$
is actually independent of $t$, and thus
it is the profile of a front moving with speed $c^*$. Therefore, in order to conclude this preliminary section, we need to rule out the cases $U^*_+\equiv \theta$ and
$U^*_+\equiv 1$.
Condition \eqref{nc2}
is specifically devised to prevent the latter possibility. Indeed, it
yields
$$\forall 0\leq m\leq 1/\sqrt{c^*-c},\quad
u_c (m, x (c, n(c)) + 2\sqrt{c^* -c} ) \leq
u_c (m, x (c, n(c)+m))=\theta'.$$
Passing to the limit as $c \nearrow c^*$ in this inequality we get
\begin{equation*}
\forall m \in \mathbb{N},\quad a^* (m, 0) \leq \theta',
\end{equation*}
whence $U^* (0,0)\leq\theta'$. By the monotonicity in $x$, we then derive
$$U^*_+ = U^* (\cdot, +\infty)<1.$$
It remains to rule out the case $U^*_+ \equiv\theta$.
To achieve this, we shall compare $c^*$ with the spreading
speeds associated with the restrictions of $f$ to $[0,\theta]$ and $[\theta,1]$ respectively,
which are of the well-known (even in the periodic and multidimensional case) monostable type.
This is where the ``counter-propagation'' property comes into play.
We recall that such a property is guaranteed in the homogeneous case we are considering now,
but should be imposed in general through Assumption \ref{ass:speeds}.
We proceed by contradiction and suppose that $U^*_+ \equiv \theta$.
Thus $U^* (0,\cdot) \geq \theta$ and, since $U^*(0,0)\geq\theta'$ with $U^*(0,\cdot)$ nonincreasing, also $U^* (0,\cdot) \geq \overline{u}_0$ defined by
$$\overline{u}_0 = \theta' \, \mathbbm{1}_{(-\infty,0]} + \theta \,\mathbbm{1}_{(0,+\infty)} .$$
Consider now the solution $\overline{u}$ of \eqref{ref_frame} with initial datum $\overline{u}_0$. Since $\theta$ is an unstable steady state, we can use the well-known result about the spreading speed for solutions of the monostable equation from~\cite{AW}.
Namely, we find a speed $\overline{c}>0$ such that
$$\forall c < \overline{c}, \quad \overline{u} (t,ct) \to 1 \quad\text{as }\;t \to +\infty,$$
$$\forall c > \overline{c}, \quad \overline{u} (t,ct) \to \theta \quad\text{as }\;t \to +\infty.$$
It is also proved in~\cite{AW} that $\overline{c}$ coincides with the minimal speed of fronts,
c.f.~Theorem~\ref{thm:mono}, that is,
using the same notation as in the introduction, there holds that~$\overline{c} = \overline{c}_\theta$.
Since $U^* (t,x-c^*t)$ satisfies \eqref{ref_frame} and
$U^* (0,\cdot)\geq \overline{u}_0$, we infer by comparison that for all~$c< \overline{c}$,
there holds
$U^* (t,(c-c^*)t) \to 1 $ as $t \to +\infty$. Recalling that $U^*$ is
periodic in time and that we are assuming that $U^*_+ \equiv \theta$,
we eventually find that
$c^* \geq \overline{c} >0$.
Let us go back now to the construction of $a^*$, $U^*$. We have that, up to a subsequence,
$$U^* (0,x) = \lim_{k \to +\infty} \Big(\lim_{c \nearrow c ^*} a_{c, n(c) +k} \big( x + x(c,n(c))\big)\Big).$$
In particular, one can take a sequence $c_k \nearrow c^*$ such that, locally uniformly in $x$,
\begin{equation}\label{U*0}
U^* (0,x) = \lim_{k \to +\infty} a_{c_k,n(c_k)+k} (x + x(c_k, n(c_k))).
\end{equation}
Now for any $c<c^*$ and $n\in\mathbb{N}$,
let $x ' (c,n)$ be such that
$$a_{c,n} ( x' (c,n)) = \frac{\theta}{2} .$$
Let us extract another subsequence so that the solution of \eqref{moving} with initial datum
$$a_{c_k, n(c_k)+k} (x+x'(c_k, n(c_k)+k))$$
converges locally uniformly in $(t,x)\in\mathbb{R}^2$
to some $V^* (t,x)$,
which is an entire solution of \eqref{moving} with $c=c^*$. Moreover, $V^* (n,x)$ is nondecreasing in $n \in \mathbb{Z}$, nonincreasing in $x \in \mathbb{R}$, and satisfies $V^* (0,0 ) = \theta/2 .$
One can further see that $V^* (0,\cdot) \leq \theta$; this follows from the fact that
$x' (c_k, n(c_k)+k) - x (c_k, n(c_k)) \to +\infty$, which, in turn, is a consequence of~\eqref{U*0}
and of the contradictory assumption $U^*_+ \equiv \theta$.
In particular, we have that $V^* (0,\cdot) \leq \underline{u}_0$ defined by
$$\underline{u}_0 = \theta \, \mathbbm{1}_{(-\infty,0]} + \frac{\theta}{2} \,\mathbbm{1}_{(0,+\infty)} .$$
Owing again to the spreading result for the monostable equation,
there exists a speed $\underline{c} <0$ such that
the solution $\underline{u}$ of \eqref{ref_frame}
emerging from $\underline{u}_0$ satisfies
$$\forall c < \underline{c}, \quad \underline{u} (t,ct) \to \theta\quad\text{as }\;t \to +\infty,$$
$$\forall c > \underline{c}, \quad \underline{u} (t,ct) \to 0\quad\text{as }\;t \to +\infty.$$
On one hand,
by comparison we get that $V^* (t,x - c^* t) \leq \underline{u} (t,x)$. On the other hand, by monotonicity we know that $V^* (n,x) \geq \frac{\theta}{2}$ for all $n \in \mathbb{N}$,
$x\leq0$. One then easily infers that $c^* \leq \underline{c} <0$. We have finally reached a contradiction.
\section{The iterative scheme in the periodic, $N$-dimen\-sional case}\label{sec:N}
We now turn to the general periodic case in arbitrary dimension.
Because the equation is no longer invariant by any space translation, we need to introduce a more complicated operator involving also a somewhat artificial variable. This makes things more technical, though the overall strategy remains the same.
\subsection{A time discretization}\label{sec:discrete}
The main ingredient of our proofs is inspired by Weinberger \cite{W02}, and consists in looking for travelling fronts as fixed points of an appropriate family of mappings issued from a time discretization of \eqref{eq:parabolic}.
First, we use the notation
$$v(t,y;x\mapsto v_0(x))$$
to indicate the solution to \eqref{eq:parabolic} with initial datum $v_0$, evaluated at $(t,y)$. In the sequel, we shall often
omit ``$x\mapsto$'' and simply use $x$ as the variable of the initial datum.
Let us now recall (see Definition~\ref{def:puls_front}) that a \textit{pulsating travelling front} in a direction $e \in \mathbb{S}^{N-1}$ is a solution of \eqref{eq:parabolic} of the form
$$u(t,x) = U (x,x\cdot e - ct)$$
with $U (x,z)$ periodic in the $x$-variable and converging to two distinct steady states as~$z \to \pm \infty$.
In particular, one may look at a travelling front as a family $(U(x,z))_{z\in\mathbb{R}}$, using the second variable as an index.
Let us translate the notion of pulsating travelling front to the discrete setting.
\begin{defi}\label{def:discrete_front}
A {\em discrete travelling front} in a direction $e \in \mathbb{S}^{N-1}$ with speed $c \in \mathbb{R}$ is a function $U(y,z)$ which is periodic in its first variable,
satisfies
$$\forall(y,z)\in\mathbb{R}^{N+1},\quad
v(1,y; x \mapsto U(x,z + x \cdot e)) \equiv U(y,z + y \cdot e - c),$$
and connects two steady states $q_1$ and $q_2$, i.e.,
$$U(\cdot, - \infty) \equiv q_1 (\cdot) > U (\cdot, \cdot) > U (\cdot , +\infty) \equiv q_2 (\cdot),$$
where convergences are understood to be uniform.\end{defi}
Clearly, if $u(t,x) = U (x,x\cdot e - ct)$ is a (continuous)
pulsating travelling front then~$U(x,z)$ is a discrete travelling front, at least if $c \neq 0$ so that the change of variables $(t,x) \mapsto (x, x\cdot e - ct)$ is invertible. The converse is a priori not obvious: we immediately deduce from
Definition~\ref{def:discrete_front} that, for every $\tau \in \mathbb{R}$,
the function $U(x,x\cdot e - ct)$ coincides with a solution $u_\tau$ of the parabolic
equation~\eqref{eq:parabolic} on the 1-time-step set
$(\{\tau\} + \mathbb{Z} ) \times \mathbb{R}^N$, but to recover a pulsating front we
should have that the $u_\tau$ are time-translations of the same solution.
This difficulty will be overcome by instead considering different discretizations
with time steps converging to $0$.
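Let us nevertheless spell out the direct implication: if $u(t,x)=U(x,x\cdot e-ct)$ is a pulsating front with $c\neq0$ then, for any $z\in\mathbb{R}$, the function $x\mapsto U(x,z+x\cdot e)$ coincides with $u(t_z,\cdot)$ where $t_z:=-z/c$, whence, by uniqueness for the Cauchy problem,
$$v(1,y\,;U(x,z+x\cdot e)) = u(t_z+1,y) = U(y,\,y\cdot e-ct_z-c) = U(y,\,z+y\cdot e-c),$$
which is exactly the identity required in Definition~\ref{def:discrete_front}.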
\begin{rmk}
This part of the argument, going from discrete to continuous travelling fronts, was actually omitted by Weinberger in
the paper \cite{W02}
that we refer to in Theorem~\ref{thm:mono} above.
A proof in the homogeneous case can be found in~\cite{LWL}, and its extension to the periodic case does not seem to raise significant difficulties.
Let us also mention
that one can see that a discrete travelling front gives rise to an
``almost planar generalized transition front'' in the sense of Berestycki and Hamel \cite{BH12}.
Then, in some situations (typically under some strong stability assumptions and provided also that the front speed is not zero), it is shown
in \cite[Theorem 1.14]{BH12} that an almost planar transition front is also a travelling front in a usual sense.
\end{rmk}
Definition~\ref{def:discrete_front} leads us to define the
family of mappings $\mathcal{F}_{e,c}:
L^\infty(\mathbb{R}^{N+1})\to L^\infty(\mathbb{R}^{N+1})$
for $e\in \mathbb{S}^{N-1}$ and $c\in\mathbb{R}$ as follows:
\Fi{eq:def_mapping}
\mathcal{F}_{e,c}[V](y,z):=v(1,y;V (x , z+ x \cdot e - y \cdot e + c)).
\end{formula}\noindent
Rewriting the mapping $\mathcal{F}_{e,c}$ as
\Fi{eq:def_mapping1}
\mathcal{F}_{e,c}[V](y,z+y\cdot e-c)=v(1,y;V (x , z + x \cdot e)),
\end{formula}\noindent
we see that the discrete travelling fronts are given by the fixed points of
$\mathcal{F}_{e,c}$. Formula~\eqref{eq:def_mapping1} also allows one to use parabolic estimates to derive regularity of the function $y\mapsto\mathcal{F}_{e,c}[V](y,z+y\cdot e-c)$.
In a similar fashion, notice that any spatially periodic stationary state $p(y)$ of \eqref{eq:parabolic} is a $z$-independent fixed point of $\mathcal{F}_{e,c}$ for any $c$ and $e$. The converse is also true, as a consequence of the next result.
\begin{prop}\label{prop:energy1}
Let $u(t,x)$ be a solution of \eqref{eq:parabolic} which is $1$-periodic in time and periodic in space.
Then $u$ is actually stationary in time.
\end{prop}
\begin{proof}
Let us first introduce the energy
$$E (w): = \int_{[0,1]^N} \left(\frac{A |\nabla w |^2}{2} - F (x,w) \right)dx,$$
for any periodic function $w\in C^1(\mathbb{R}^N)$, where
$$F (x,s) := \int_0^s f(x,\sigma) d\sigma.$$
Then one may check that the solution $u(t,x)$ of \eqref{eq:parabolic} satisfies
$$\partial_t E (u (t,\cdot)) = - \int_{[0,1]^N} | \partial_t u |^2dx\leq0 .$$
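Indeed, differentiating in time and integrating by parts (the boundary terms cancel out by spatial periodicity), one formally computes
$$\partial_t E (u (t,\cdot)) = \int_{[0,1]^N} \big( A \nabla u \cdot \nabla \partial_t u - f(x,u)\, \partial_t u \big) dx
= - \int_{[0,1]^N} \big( \text{div} (A \nabla u) + f(x,u) \big) \partial_t u \, dx,$$
and the last integrand coincides with $-|\partial_t u|^2$ owing to \eqref{eq:parabolic}; the computation is justified by standard parabolic regularity.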
On the other hand, the mapping $t\mapsto E (u (t,\cdot))$ is $1$-periodic, whence
it is necessarily constant. This implies that
$\partial_t u \equiv 0$.
\end{proof}
We also derive several properties of the mapping
$\mathcal{F}_{e,c}$ which will be useful later.
\begin{prop}\label{prop:Fec}
For given $e \in \mathbb{S}^{N-1}$ and $c \in \mathbb{R}$, the mapping $\mathcal{F}_{e,c}$ satisfies the following properties.
\begin{enumerate}[$(i)$]
\item {\em Periodicity:} if $V (y,z)$ is periodic with respect to $y \in \mathbb{R}^N$ then
this holds true for
$\mathcal{F}_{e,c} [V] (y,z)$.
\item {\em Monotonicity:}
if $V_1 \leq V_2$ then $$\mathcal{F}_{e,c} [V_1] \leq \mathcal{F}_{e,c} [V_2];$$
if in addition $\sup_{y\in\mathbb{R}^N}(V_2-V_1)(y,z+y\cdot e)>0$ for all $z\in\mathbb{R}$, then
$$\mathcal{F}_{e,c} [V_1] < \mathcal{F}_{e,c} [V_2].$$
\item {\em Continuity:} if $V_n (y,z+ y \cdot e)\to V_\infty (y,z+ y \cdot e)$
as $n\to+\infty$ locally uniformly in~$y\in\mathbb{R}^N$, for some $z\in\mathbb{R}$, then
$$\F{c}[V_n] (y,z+ y \cdot e-c)\to \F{c}[V_\infty] (y,z+ y \cdot e-c)
\quad\text{as }\;n\to+\infty$$
locally uniformly in $y\in\mathbb{R}^N$.
\item {\em Compactness:} for any sequence $(V_n)_{n \in\mathbb{N}}$
bounded in $L^\infty(\mathbb{R}^{N+1})$ and any $z \in \mathbb{R}$, there exists a subsequence (depending on $z$) along which the function $y \mapsto \mathcal{F}_{e,c} [V_n] (y,z+y\cdot e)$ converges in $L^\infty_{loc}(\mathbb{R}^N)$ as $n\to+\infty$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $V(y,z)$ be a periodic function in its first variable. Then for any $y \in \mathbb{R}^N$, $z\in \mathbb{R}$ and $L \in \mathbb{Z}^N$, the periodicity of equation \eqref{eq:parabolic} yields
\[\begin{split}
\F{c}[V](y+L,z) &=v(1,y+L;V (x , z+ x \cdot e - y \cdot e - L\cdot e + c))\\
&=v(1,y;V (x+L , z+ x \cdot e - y \cdot e + c))\\
&=\F{c}[V](y,z).
\end{split}\]
This proves $(i)$.
Statement $(ii)$ simply follows from \eqref{eq:def_mapping1} and the parabolic weak and strong comparison principles.
The continuity property follows from standard parabolic estimates.
Indeed, take a sequence $(V_n (y,z+ y\cdot e))_{n \in\mathbb{N}}$ converging locally uniformly in $y$ and for some
$z\in\mathbb{R}$ to $V_\infty (y,z+ y\cdot e)$. Then the functions $(w_n)_{n \in\mathbb{N}}$ defined~by
$$w_n (t,y) := v (t,y;V_n (x,z+ x \cdot e)) - v (t,y; V_\infty (x,z+x \cdot e))$$
solve, for any fixed $z \in \mathbb{R}$, a linear parabolic equation of the type
$$\partial_t w_n =\text{div} (A(y) \nabla w_n) + g^{z,n} (t,y) w_n,$$
with $|g^{z,n}|$ less than or equal to the Lipschitz constant of $f$,
together with the initial condition $V_n (x, z + x \cdot e) - V_\infty (x,z+x \cdot e)$.
It follows from the comparison principle and parabolic estimates
that $(w_n)_{n\in\mathbb{N}}$ converges to 0 locally uniformly with respect to $t>0$ and $y \in \mathbb{R}^N$. In particular, $y\mapsto v(1,y;V_n(x,z+x \cdot e))$ converges locally uniformly as $n \to +\infty$ to $v(1,y;V_\infty (x,z + x \cdot e))$,
which owing to \eqref{eq:def_mapping1} translates into the desired property.
The last statement $(iv)$ is an immediate consequence of the parabolic estimates.
\end{proof}
Let us point out that the operators
$\mathcal{F}_{e,c}$ were initially introduced by Weinberger in \cite{W02},
who exhibited the existence of a spreading speed of solutions in a rather general context, but only proved the existence of pulsating fronts in the monostable case. These operators also fall into the scope of~\cite{FZ} (though they
lack the compactness property required in some of their results).
In particular, though one may proceed as in the aforementioned paper, at least in the bistable case, we suggest here a slightly different approach. In some sense, our method is actually closer to the initial argument of Weinberger in~\cite{W02}. Moreover, though we do not address this issue here, it also seems well-suited to check that the speed of the pulsating front (or the speeds of the propagating terrace) also determines the spreading speed of solutions
of the Cauchy problem associated with~\eqref{eq:parabolic}.
\subsection{Basic properties of the iterative scheme}\label{sec:ac1}
From this point until the end of Section~\ref{sec:discreteTF}, we assume that the following holds.
\begin{assump}\label{ass:mix}
The equation \eqref{eq:parabolic} admits a finite number of asymptotically stable steady states,
among which $0$ and $\bar p$.
Furthermore, for any pair of ordered periodic steady states $q < \tilde{q}$,
there is an asymptotically stable steady state $p$ such that $q \leq p \leq \tilde{q}$.
\end{assump}
This hypothesis is guaranteed by both the bistable Assumption~\ref{ass:bi} and the multistable Assumption~\ref{ass:multi}.
For the sake of completeness as well as for convenience (several of the following properties will play an important role here), we repeat some of the arguments of~\cite{W02}. In particular, we start by reproducing how to define the speed $c^*$ (depending on the direction $e\in \mathbb{S}^{N-1}$), which was shown in~\cite{W02} to be the spreading speed for planar-like solutions of the Cauchy problem. Roughly, for any $c<c^*$ we construct a time-increasing solution of the parabolic equation in the moving frame with speed $c$ in the direction $e$. Later we shall turn to a new construction of a pulsating travelling front connecting $\bar p$ to a stable periodic steady state $p <\bar p$ with~speed~$c^*$.
The construction starts with an $L^\infty$ function $\phi$ satisfying the following:
\Fi{ass:phi}
\begin{cases}
\displaystyle
\phi (y,z) \mbox{ is periodic in $y \in \mathbb{R}^N$, and nonincreasing in $z \in \mathbb{R}$},\vspace{5pt}\\
\phi (y,z) \mbox{ is uniformly continuous in $(y,z)\in\mathbb{R}^{N+1}$},
\vspace{5pt}\\
\displaystyle\phi(y,z)=0 \mbox{ for } y\in\mathbb{R}^N,\ z\geq0 , \vspace{5pt}\\
\displaystyle\phi(y,-\infty)<\bar p(y),\vspace{5pt}\\
\displaystyle\exists\delta>0\ \text{ such that } \phi(y,-\infty)-\delta\ \text{ lies
in the basin of attraction of $\bar p$.}
\end{cases}
\end{formula}\noindent
Observe that the limit $\phi(y,-\infty)$ exists uniformly with respect to $y$, and thus it is continuous (and periodic).
The last condition is possible due to the (asymptotic) stability of
$\bar p$. Owing to the comparison principle, it implies that $\phi(y,-\infty)$ lies
in the basin of attraction of $\bar p$ too.
Then, for any $e \in \mathbb{S}^{N-1}$ and $c \in \mathbb{R}$, we define the sequence $(a_{c,n})_{n\in\mathbb{N}}$~by
\Fi{eq:def_acn}
\begin{array}{c} a_{c,0} := \phi,\vspace{5pt}\\
a_{c,n+1} := \max \{ \phi , \mathcal{F}_{e,c} [a_{c,n}] \},
\end{array}
\end{formula}\noindent
where $\mathcal{F}_{e,c}$ was defined in \eqref{eq:def_mapping}. The maximum is to be taken at each point $(y,z)$.
\begin{lem}\label{lem:a1}
The sequence $(a_{c,n})_{n\in\mathbb{N}}$ defined by~\eqref{eq:def_acn} is nondecreasing and satisfies $0<a_{c,n}< \bar p$ for $n\geq1$.
Moreover, $a_{c,n}(y,z)$ is periodic in $y$, nonincreasing with respect to $c$ and $z$ and satisfies
$a_{c,n}(y,+\infty)\equiv0$ uniformly with respect to $y$.
Lastly, $a_{c,n} (y,z+y\cdot e)$ is uniformly continuous
in $y\in\mathbb{R}^N$, uniformly with respect to $z\in\mathbb{R}$, $n \in \mathbb{N}$ and $c\in\mathbb{R}$.
\end{lem}
\begin{proof}
Firstly, recall from Proposition~\ref{prop:Fec}$(ii)$ that the operator
$\F{c}$ is order-preserving.
By recursion, one readily checks that the sequence $(a_{c,n})_{n\in\mathbb{N}}$
is nondecreasing.
Moreover, $0 <a_{c,n}<\bar p$ for $n\geq1$, always by Proposition~\ref{prop:Fec}$(ii)$.
Another consequence of \eqref{eq:def_mapping1} and the comparison principle is that if $V(y,z)$ is monotone in $z$ then so is
$\F{c}[V](y,z)$; whence the monotonicity of $a_{c,n} (y,z)$ with respect to $z$.
Let us now investigate the monotonicity with respect to $c$. We derive it
by noting that if $c_1 < c_2$, then \eqref{eq:def_mapping1} yields
\Fi{c1c21}
\F{c_1}[V](y,z+y\cdot e-c_1)=\F{c_2}[V](y,z+y\cdot e-c_2),
\end{formula}\noindent
for any function $V$. If furthermore $V (y,z)$ is nonincreasing in its second variable, then so is~$\F{c_2} [V]$, whence, since $c_1<c_2$,
$$\mathcal{F}_{e,c_1} [V](y,z+y\cdot e-c_1)=\mathcal{F}_{e,c_2} [V](y,z+y\cdot e-c_2)
\geq \mathcal{F}_{e,c_2} [V](y,z+y\cdot e-c_1),$$
that is,
$$\mathcal{F}_{e,c_1} [V] \geq \mathcal{F}_{e,c_2} [V].$$
Thus, owing to the monotonicity of the $\F{c}$,
the monotonicity of $a_{c,n}$ with respect to~$c$ follows by iteration.
Next, we want to show that $a_{c,n} (y,+\infty) =0$.
This is an easy consequence of the same property for $\phi$,
but we now derive a quantitative estimate which will prove useful in the sequel.
For this, we observe that, for any fixed $\lambda>0$,
there exists a supersolution
of \eqref{eq:parabolic} of the type $e^{-\lambda ( x \cdot e - \overline{c} t)}$,
provided $\overline c$ is sufficiently large.
Namely, by bounding~$f(x,u)$ by a linear function $Ku$ and also using the boundedness of the components of the diffusion matrix and their derivatives,
we can find $\overline c$
such that $e^{-\lambda ( x \cdot e - \overline{c} t)}$ satisfies
$$\partial_t u\geq \text{div} (A (x) \nabla u) + Ku,\quad
t \in \mathbb{R},\ x \in \mathbb{R}^N.$$
Let us show that if $V$
and $C>0$ satisfy
$$\forall (y,z)\in\mathbb{R}^{N+1},\quad
V (y, z) \leq C e^{-\lambda z},$$
then there holds
$$\forall (y,z)\in \mathbb{R}^{N+1},\quad
\mathcal{F}_{e,c}[V](y,z) \leq C e^{\lambda ( \overline{c}-c) } e^{ - \lambda z}.$$
Indeed, we have that
$$V ( x , z+ x \cdot e - y \cdot e + c) \leq \big(C e^{ - \lambda (z - y \cdot e + c)}\big)
e^{ - \lambda x \cdot e},$$
whence
$$\mathcal{F}_{e,c}[V](y,z)=v(1,y;V (x , z+ x \cdot e - y \cdot e + c))
\leq C e^{-\lambda (z +c -\overline{c})}.$$
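Let us make the last inequality explicit. Since $f(x,u) \leq Ku$ and $e^{-\lambda(x\cdot e-\overline{c}t)}$ is a supersolution of the linear equation $\partial_t u = \text{div}(A\nabla u)+Ku$, the comparison principle yields, for any constant $C'>0$,
$$v(t,y;C' e^{-\lambda x\cdot e}) \leq C' e^{-\lambda(y\cdot e-\overline{c}t)},\quad t>0,\ y\in\mathbb{R}^N.$$
Applying this with $t=1$ and, for each fixed $y$, with $C'=Ce^{-\lambda(z-y\cdot e+c)}$, gives precisely the bound $Ce^{-\lambda(z+c-\overline{c})}$.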
Up to increasing $\overline{c}$, we can assume without loss of generality that $\overline{c} \geq c$.
Now, for any~$C \geq\max\bar p$, we have that
$$\forall (y,z)\in\mathbb{R}^{N+1},\quad
\phi (y, z) \leq C e^{-\lambda z}.$$
As a consequence
$$\forall (y,z)\in \mathbb{R}^{N+1},\quad
a_{c,1}(y,z) = \max \{ \phi , \mathcal{F}_{e,c} [\phi] \}
\leq C e^{\lambda ( \overline{c}-c) } e^{ - \lambda z},$$
and therefore, by iteration,
\Fi{acn<1}
\forall n \in \mathbb{N}, \ \forall (y,z)\in \mathbb{R}^{N+1},\quad a_{c,n}(y,z)
\leq C e^{n \lambda (\overline{c}-c) } e^{- \lambda z}.
\end{formula}\noindent
In particular $a_{c,n}(y,+\infty)=0$ uniformly with respect to $y$; however, this limit need not be uniform with respect to $c$ or $n$.
Finally, we point out that the uniform continuity in the crossed variables follows from our choice of $\phi$ and parabolic estimates. Indeed, the function
$$y \mapsto \mathcal{F}_{e,c} [a_{c,n-1}] (y, z + y \cdot e) = v (1,y; a_{c,n-1} (x,z + x \cdot e + c))$$
is not only uniformly continuous but also $C^2$,
and its derivatives are uniformly bounded by some constant which only depends on the terms in the equation \eqref{eq:parabolic}
as well as $\max\bar p$.
Recalling that $a_{c,n}$ is the maximum of $\mathcal{F}_{e,c} [a_{c,n-1}]$ and $\phi$, the latter being also uniformly continuous, we reach the desired conclusion.
\end{proof}
From Lemma~\ref{lem:a1} and the fact that the mapping $\mathcal{F}_{e,c}$ preserves spatial periodicity, one readily infers the following.
\begin{lem}\label{lem:a2}
The pointwise limit
$$a_c(y,z):=\lim_{n\to+\infty} a_{c,n}(y,z),$$
is well-defined,
fulfils $\phi \leq a_c \leq \bar p$ and $a_c(y,z)$ is periodic in $y$ and nonincreasing
with respect to both $z$ and $c$.
Moreover, the convergence
$$a_{c,n}(y,z+y\cdot e)\to a_c(y,z+y\cdot e)
\quad\text{as $n \to +\infty$}$$
holds locally uniformly in $y\in\mathbb{R}^N$, but still pointwise in $z\in \mathbb{R}$.
\end{lem}
We emphasize that no regularity properties can be expected for
$a_c$ with respect to the second variable.
Let us further note that, as a byproduct of the proof of Lemma~\ref{lem:a1}, and more specifically of \eqref{c1c21}, we deduce by iteration that
\begin{equation}\label{acc'1}
\forall c<c',\ n\in\mathbb{N},\quad
a_{c,n}(\cdot,\cdot + n(c'-c))\leq a_{c',n} .
\end{equation}
This will be used in later arguments, in particular in the proof of Lemma~\ref{lem:limit1} below.
\subsection{Defining $c^*$}\label{sec:c^*}
We want to define $c^*$ as the largest $c$ such that $a_c \equiv \bar p$, where $a_c$ comes from Lemma~\ref{lem:a2}. This is the purpose of the following lemma.
\begin{lem}\label{lem:ac1}
For any $c \in \mathbb{R}$, the function $a_c$ satisfies $a_c (y,-\infty) = \bar p (y)$
uniformly with respect to $y\in[0,1]^N$.
Moreover,
\begin{enumerate}[$(i)$]
\item $a_c \equiv \bar p$ for $-c$ large enough;
\item $a_c \not\equiv \bar p$ for $c$ large enough.
\end{enumerate}
In particular, the following is a well-defined real number:
$$c^* := \sup \{ c \in \mathbb{R} \ : \ a_c (y,z) \equiv \bar p(y) \}.$$
\end{lem}
\begin{proof}
We first prove that, for $-c$ large enough,
\Fi{Fn11}
(\mathcal{F}_{e,c})^n [\phi] (y,z) \to \bar p(y)\quad\text{as $n \to +\infty$},
\end{formula}\noindent
uniformly with respect to $y\in[0,1]^N$ and $z \in (-\infty,Z_0]$, for any $Z_0 \in \mathbb{R}$.
In particular, because $a_{c,n} \geq (\mathcal{F}_{e,c})^n [\phi]$ by the monotonicity of $\F{c}$,
this will yield statement $(i)$ of the lemma.
In order to show \eqref{Fn11}, we first introduce, in a similar fashion as in the proof of
Lemma~\ref{lem:a1}, two real numbers $\lambda >0$ and $\overline{c}$ large enough such that the function $e^{\lambda(x \cdot e + \overline{c} t)}$ satisfies the parabolic inequality
$$\partial_t u \geq \text{div} (A \nabla u ) + K u.$$
Here $K$ is the supremum with respect to $x$
of the Lipschitz constants of $u\mapsto f(x,u)$.
Next, we let $\psi (t,x)$ be the solution of \eqref{eq:parabolic}
emerging from the initial datum $\phi(x,-\infty)-\delta$, where~$\delta$ is the positive
constant in condition~\eqref{ass:phi}, that is, such that
$\phi(x,-\infty)-\delta$ lies in the basin of attraction of $\bar p$. Hence
$\psi (t, \cdot) \to \bar p$ uniformly as $t \to +\infty$.
The choice of $\lambda$ and $\overline{c}$ implies that, for any $\gamma>0$, the function
$$u_\gamma (t,x) := \psi (t,x) - \gamma e^{\lambda (x \cdot e + \overline{c} t)}$$
is a subsolution of~\eqref{eq:parabolic}. Let us now pick $C$ large enough such that
$$\forall(y,z)\in\mathbb{R}^{N+1},\quad
\phi (y,z) \geq \phi(y,-\infty)-\delta - C e^{\lambda z},$$
and thus, for any given $c\in\mathbb{R}$,
$$\phi (x , z+x\cdot e) \geq
\phi(x,-\infty)-\delta - C e^{\lambda (z+x \cdot e)}=u_{Ce^{\lambda z}}(0,x).$$
Now, iterating \eqref{eq:def_mapping1} one gets
$$
\forall n\in\mathbb{N},\quad
(\mathcal{F}_{e,c})^n[V](y,z+y\cdot e-nc)=v(n,y;V (x , z + x \cdot e)),
$$
as follows by induction from the semigroup property $v(n+1,y;u_0)=v\big(1,y;v(n,x;u_0)\big)$.
It then follows from the comparison principle that
$(\mathcal{F}_{e,c})^n [\phi ] (y,z-nc) \geq u_{Ce^{\lambda z}}(n,y)$, that~is,
\Fi{Fn>psi}
(\mathcal{F}_{e,c})^n [\phi ] (y,z) \geq
\psi (n,y) - C e^{\lambda[z+y\cdot e+n (c+ \overline{c})]}.
\end{formula}\noindent
On the one hand, this inequality implies that if
$c < - \overline{c}$ then \eqref{Fn11} holds uniformly with respect to $y\in[0,1]^N$ and
$z \in (-\infty,Z_0]$, for any $Z_0 \in \mathbb{R}$, whence statement $(i)$ of the lemma. On the other hand, if $c\geq - \overline{c}$ we derive
$$a_{c,n} (y,-2n(c+ \overline{c}+1)) \geq \psi (n,y) -
C e^{-n \lambda (c+ \overline{c}+2)+\lambda y\cdot e}
\geq \psi (n,y) - C e^{\lambda(-2n+y\cdot e)}.$$
Because the sequence $(a_{c,n})_{n\in\mathbb{N}}$ is nondecreasing and converges to $a_c$,
we get that
$$a_{c} (y,-2n(c+ \overline{c}+1)) \geq \psi (n,y) - C e^{\lambda(-2n+y\cdot e)} ,$$
for any $n \in \mathbb{N}$.
Passing to the limit as $n \to +\infty$ and
recalling that $a_c$ is monotone with respect to its second variable,
we infer that $a_c(y,-\infty)=\bar p(y)$ uniformly with respect to
$y\in[0,1]^N$.
It remains to prove statement $(ii)$. Fix $\lambda>0$.
Because $\phi$ satisfies \eqref{ass:phi},
for~$C:=\max\bar p$ there holds that
$\phi (y, z) \leq C e^{-\lambda z}$ for all $(y,z)\in\mathbb{R}^{N+1}$.
As seen in the proof of Lemma~\ref{lem:a1}, this implies that
\eqref{acn<1} holds for all $c$ smaller than or equal to
a suitable value~$\overline{c}$, and then
in particular for $c = \overline{c}$, i.e.,
$a_{\overline{c},n}(y,z)\leq Ce^{-\lambda z}$ for all $n\in\mathbb{N}$.
As a consequence, $a_{\overline{c}} \not\equiv \bar p$ and, by monotonicity with respect to~$c$,
we also have that $a_c \not \equiv \bar p$ if $c\geq\overline{c}$.
\end{proof}
We see now that, while $c^*$ is the supremum of the speeds $c$ such that
$a_c \equiv \bar p$, it actually holds that $a_{c^*} \not \equiv \bar p$.
This will be crucial for the construction of the~front.
\begin{lem}\label{lem:ac2}
The following properties are equivalent:
\begin{enumerate}[$(i)$]
\item $ c < c^*$,
\item $ a_c \equiv \bar p$,
\item $ \exists n_0 \in \mathbb{N}, \ \exists z_0 >0, \
\forall y\in[0,1]^N,\quad
a_{c,n_0} (y,z_0) > \phi (y,-\infty).$
\end{enumerate}
In particular, in the case $c = c^*$, we have that
for all $n\in\mathbb{N}$ and $z > 0$, there exists $y\in[0,1]^N$
such that $a_{c^*,n} (y,z)\leq\phi (y,-\infty)$.
\end{lem}
\begin{proof}
By definition of $c^*$ and monotonicity of $a_c$ with respect to $c$,
we already know that~$(i)$ implies $(ii)$.
We also immediately see that $(ii)$ implies $(iii)$, using the fact that
$a_{c,n}(y,z)$ is nonincreasing in $z$ and
$a_{c,n}(y,z+y\cdot e)\to a_c(y,z+y\cdot e)$ as $n\to+\infty$
uniformly with respect to $y\in[0,1]^N$ (see Lemma~\ref{lem:a2}).
It remains to prove that $(iii)$ implies $(i)$. We assume that $(iii)$ holds
and we start by showing~$(ii)$,
which will serve as an intermediate step.
Thanks to the monotonicity with respect to~$z$ and the fact that
$a_{c,n_0}>0$ and $\phi (\cdot ,z) =0$ for $z \geq 0$,
we get
$$\forall n \geq n_0, \ \forall (y,z) \in \mathbb{R}^{N+1},\quad
a_{c,n} (y, z + z_0) > \phi (y,z).$$
Since the operator $\mathcal{F}_{e,c}$ is order preserving, we also get that
$$\forall (y,z) \in \mathbb{R}^{N+1},\quad
a_{c,n_0 +1} (y,z+ z_0)\geq\mathcal{F}_{e,c} [a_{c,n_0}](y,z+ z_0)
\geq \mathcal{F}_{e,c} [\phi] (y,z) .$$
It follows from the two inequalities above that
$$\forall (y,z) \in \mathbb{R}^{N+1},\quad
a_{c,n_0 +1} (y,z+ z_0) \geq a_{c,1} (y,z).$$
A straightforward induction leads to
$$\forall m \geq 0,\
\forall (y,z) \in \mathbb{R}^{N+1},\quad
a_{c,n_0 +m} (y,z+z_0) \geq a_{c,m} (y,z).$$
Passing to the limit $m \to +\infty$ on both sides, we infer that
$$\forall (y,z) \in \mathbb{R}^{N+1},\quad
a_{c} (y,z+z_0) \geq a_c (y,z).$$
Recalling that $z_0 >0$ and that $a_c$ is nonincreasing with respect to $z$,
we find that $a_c (y,z) = a_c (y)$ does not depend on $z$. Since we know by Lemma~\ref{lem:ac1} that $a_c (\cdot,-\infty) \equiv \bar p(\cdot)$, we conclude that $a_c \equiv \bar p$. We have shown that $(iii)$ implies $(ii)$.
Next we show that the set of values of $c$ such that $(iii)$ holds is open. Using \eqref{eq:def_mapping1},
it is readily seen by iteration that, for any fixed $n \in \mathbb{N}$, the function $a_{c,n}$ inherits from~$\phi$ the continuity with respect to the variable $(y,z)$ (though
this is not uniform with respect to $n\in\mathbb{N}$).
From this, by another iterative argument and \eqref{c1c21}, one deduces that
$a_{c,n}(y,z)\to a_{c_0,n}(y,z)$ locally uniformly in $(y,z)$ as $c\to c_0$,
for every $n\in\mathbb{N}$. Openness follows.
We are now in the position to conclude the proof of Lemma~\ref{lem:ac2}.
Assume that $(iii)$ holds for some $c$. From what we have just proved, we know that $(iii)$ holds true for some $c' >c$, and thus $(ii)$ holds for $c'$ too. By the definition of $c^*$, we have that $c^* \geq c' > c$, that is, $(i)$ holds.
\end{proof}
Before proceeding we have to check that $c^*$ is intrinsic to \eqref{eq:parabolic} and does not
depend on~$\phi$. This will be useful later on, when going back to the continuous case and
more specifically to check that the speed
of the discrete front we shall obtain does not depend on the choice of the time step
of the discretization.
\begin{lem}\label{lem:indep1}
The speed $c^*$ does not depend on the choice of $\phi$ satisfying the
properties~\eqref{ass:phi}.
\end{lem}
\begin{proof}
Consider two admissible functions $\phi$ and $\hat{\phi}$ for the conditions~\eqref{ass:phi}.
Let $a_{c,n}$, $\hat{a}_{c,n}$ and $c^*$, $\hat{c}^*$
denote the functions and constants
constructed as above, starting from~$\phi$, $\hat{\phi}$ respectively.
Take an arbitrary $c\in\mathbb{R}$.
Using the first part of
Lemma~\ref{lem:ac1} and the fact that $a_{c,n} (y,z + y\cdot e) \to a_c (y , z + y \cdot e)$
locally uniformly in $y$ as $n \to +\infty$,
we can find $\bar z <0$ and $\bar n\in\mathbb{N}$ such that
$$\inf_{y\in[0,1]^N}\left(a_{c,\bar n}(y,\bar z+y\cdot e) - \hat\phi(y,-\infty)\right)>0.$$
Because $|y\cdot e|\leq \sqrt{N}$ if $y\in[0,1]^N$,
one readily deduces that $a_{c,\bar n} (y,z-\sqrt{N} + \bar z) > \hat \phi (y,z)$ for all $(y,z) \in \mathbb{R}^{N+1}$, whence $a_{c,n} (\cdot, \cdot - \sqrt{N} + \bar z ) > \hat \phi$ for all $n \geq \bar n$ by the monotonicity in~$n$.
It follows that
$$a_{c,\bar n+1}(\cdot,\cdot - \sqrt{N} +\bar z)\geq \max\{\hat\phi,\F{c}[a_{c,\bar n}](\cdot,\cdot - \sqrt{N} +\bar z)\}
\geq \max\{\hat\phi,\F{c}[\hat\phi]\}={\hat a}_{c,1}.$$
By iteration we eventually infer that $a_{c,\bar n+m}(\cdot,\cdot - \sqrt{N}+\bar z)\geq\hat a_{c,m}$
for all $m\in\mathbb{N}$. This implies that $c^*\geq \hat c^*$.
Switching the roles of $\phi$ and $\hat\phi$ we get the reverse inequality.
\end{proof}
\section{A discrete travelling front with speed $c^*$}
\label{sec:discreteTF}
Under Assumption~\ref{ass:mix},
we have constructed in the previous section a candidate speed~$c^*$ for the existence
of a pulsating travelling front.
In the current one we show that there exists a discrete travelling front in the direction $e$ with speed~$c^*$ connecting~$\bar p$ to some stable periodic steady state
(in the sense of Definition~\ref{def:discrete_front}).
To derive the stability of the latter we will make use of
the additional Assumption~\ref{ass:speeds}.
We recall that the minimal
speeds~$\overline{c}_{q}$ and~$\underline{c}_{q}$ appearing in
Assumption~\ref{ass:speeds} are well defined: after the statement of Theorem~\ref{thm:mono},
we showed that the hypothesis there is guaranteed by Assumption~\ref{ass:multi},
and this was achieved without using the linear stability hypothesis in Assumption~\ref{ass:multi};
therefore~$\overline{c}_{q}$ and~$\underline{c}_{q}$ are well
defined under Assumption~\ref{ass:mix}~too.
The strategy is as follows. For $c<c^*$, Lemma~\ref{lem:ac2} implies
that $a_{c,n} > \phi$ for $ n $ sufficiently large. We deduce that the nondecreasing sequence
$(a_{c,n})_{n\in\mathbb{N} }$ is eventually given by the recursion
$a_{c,n} = \mathcal{F}_{e,c} [a_{c,n-1}]$.
Roughly speaking, we have constructed a solution of \eqref{eq:parabolic}
which is nondecreasing with respect to $1$-time steps in the frame
moving with speed $c$ in the direction $e$. We now want to pass to the limit as $c \nearrow c^*$ in order to get a fixed point for $\mathcal{F}_{e,c^*}$ and, ultimately, a pulsating travelling front in the direction~$e$. To achieve this, we shall need to capture such solutions
at a suitable time step, and suitably translated.
\begin{rmk}
The continuous counterpart of the argument we
perform here would be to construct a family of functions $U_c$ such that $U_c (x, x\cdot e - ct)$ is a subsolution of \eqref{eq:parabolic}, and then to use this family and a limit argument to find a pulsating front. Notice that an inherent difficulty in such an argument is that a subsolution does not satisfy regularity estimates in general. We face a similar difficulty in the discrete framework.
\end{rmk}
\subsection{Choosing a diagonal sequence as $c \nearrow c^*$}\label{sec:subsequence}
Consider the function $\phi$ satisfying~\eqref{ass:phi}
from which we initialize the construction of the sequence~$(a_{c,n})_{n\in\mathbb{N}}$.
The first step in order to pass to the limit as $c \nearrow c^*$ is to capture the sequence at a suitable iteration, and roughly at the point where it `crosses' the limit~$\phi(\cdot,-\infty)$, which, we recall, lies in the basin of attraction
of $\bar p$.
\begin{lem}\label{lem:n(c)}
For $c < c^*$, there exists $n(c) \in \mathbb{N}$ such that, for all $n \geq n(c)$, the quantity
$$z_{c,n}:=\sup \{z\ :\ a_{c,n} (y,z+y\cdot e)> \phi(y,-\infty)\ \text{ for all }y\in[0,1]^N\}$$
is a well-defined real number. In addition, there holds
\Fi{eq:nc1}
\forall m\geq 0,\quad
a_{c,n(c)+m}=(\mathcal{F}_{e,c})^m [a_{c,n(c)}]\geq a_{c,n(c)+m-1},
\end{formula}\noindent
\Fi{eq:zcn1}
\forall 0\leq m\leq 1/\sqrt{c^* - c},\quad
0 \leq z_{c,n(c) + m} - z_{c,n(c)} \leq 2 \sqrt{c^* - c}.
\end{formula}\noindent
\end{lem}
While property \eqref{eq:nc1} holds for any $c<c^*$ provided $n(c)$ is sufficiently large,
the same is not true for \eqref{eq:zcn1}.
The latter will play a crucial role for getting a travelling front in the limit.
Loosely speaking, it guarantees that, as $c\nearrow c^*$, there exists an
index~$n(c)$ starting from which the ``crossing point''
$z_{c,n}$ moves very little along an arbitrarily large number of iterations.
\begin{proof}[Proof of Lemma \ref{lem:n(c)}]
Fix $c <c^*$.
First of all, from the equivalence between~$(i)$ and~$(iii)$ in Lemma~\ref{lem:ac2}, we know that there exists $n(c)$ such that $a_{c,n} > \phi$
for $ n \geq n(c)$. We deduce that the nondecreasing sequence
$(a_{c,n})_{n \geq n(c)}$ is simply given by the recursion $a_{c,n} = \mathcal{F}_{e,c} [a_{c,n-1}]$, that is
property \eqref{eq:nc1}.
Now, Lemma~\ref{lem:a1} implies that the set
$$\{z\ :\ a_{c,n} (y,z+y\cdot e)> \phi(y,-\infty)\
\text{ for all }y\in[0,1]^N\}$$ is either a left half-line or the empty set, while Lemmas~\ref{lem:a2}-\ref{lem:ac1}
show that it is nonempty for $n$ sufficiently large.
As a consequence,
up to increasing $ n(c)$ if need be,
its supremum $z_{c,n}$ is well-defined and finite for~$n \geq n(c)$.
It remains to prove \eqref{eq:zcn1}, for which we can assume that $c\geq c^*-1$.
We claim that
\Fi{eq:z/n}
\limsup_{n \to +\infty} \frac{z_{c,n}}{n} \leq c^* - c.
\end{formula}\noindent
Indeed by the definition of $z_{c,n}$ and \eqref{acc'1}, for $n\geq n(c)$ we get
\[\begin{split}
0 &< \min_{y\in[0,1]^N} \big(a_{c,n} (y,z_{c,n}-1+ y \cdot e) - \phi(y,-\infty)\big)\\
&\leq\min_{y\in[0,1]^N} \big(a_{c^*,n} (y,z_{c,n}+n(c-c^*) + y \cdot e -1 ) - \phi(y,-\infty)\big).
\end{split}\]
Hence if \eqref{eq:z/n} does not hold,
we would find a large $n$ contradicting the last statement of Lemma \ref{lem:ac2}.
Next, let $N(c)\geq1$ be the integer part of $1/\sqrt{c^* - c}$.
Owing to \eqref{eq:z/n}, we can further increase $n(c)$ to ensure that
$$z_{c,n(c) + N(c)} - z_{c,n(c)} \leq 2 N(c) (c^* - c).$$
Moreover, we know that $z_{c,n+1} \geq z_{c,n} $ for all $c$ and $n$, due to the monotonicity of $a_{c,n}$ with respect to $n$.
In particular, for any integer $0 \leq m \leq N(c)$, we also have that
$$0 \leq z_{c,n(c) + m} - z_{c,n(c)} \leq 2 N(c) (c^* - c),$$
from which we deduce \eqref{eq:zcn1}, because $N(c) \leq 1/\sqrt{c^* - c}$ yields $2 N(c) (c^* - c) \leq 2 \sqrt{c^* - c}$.
\end{proof}
In the next lemma, we state what we obtain when passing to the limit as $c \nearrow c^*$.
\begin{lem}\label{lem:limit1}
There exists a lower semicontinuous
function $a^*(y,z)$ satisfying the following properties:
\begin{enumerate}[$(i)$]
\item $a^* (y,z+y\cdot e)$ is uniformly continuous
in $y\in\mathbb{R}^N$, uniformly with respect to $z\in\mathbb{R}$;
\item $a^*(y,z)$ is periodic in $y$ and nonincreasing in $z$;
\item $(\mathcal{F}_{e,c^*})^n [a^*]$ is nondecreasing with respect to $n$;
\item $\lim_{n \to+\infty}
\Big( \max_{y \in [0,1]^N} \big(\bar p(y) -(\mathcal{F}_{e,c^*})^n [a^*] (y,y \cdot e)\big)\Big) > 0$;
\item $(\mathcal{F}_{e,c^*})^n [a^*](\cdot,-\infty)\nearrow \bar p$ uniformly
as $n\to+\infty$;
\item $(\mathcal{F}_{e,c^*})^n [a^*] (\cdot, +\infty) \nearrow p$ uniformly
as $n \to +\infty$, where
$0 \leq p < \bar p$ is a periodic steady state of~\eqref{eq:parabolic}.
\end{enumerate}
\end{lem}
Thanks to our previous results,
we know that the properties $(i)$-$(iii)$
are fulfilled with $c^*$ and $a^*$ replaced respectively by any $c < c^*$ and $a_{c,n}$
with $n$ sufficiently large.
In order to get $(iv)$-$(vi)$ we need to pass to the limit $c\nearrow c^*$ by picking
the $a_{c,n}$ at a suitable iteration~$n$.
The choice will be $n=n(c)$ given by Lemma \ref{lem:n(c)},
which fulfils the key property \eqref{eq:zcn1}.
When passing to the limit, we shall face the problem of the lack of regularity in the $z$-variable.
This will be handled by considering the following relaxed notion of limit.
\begin{lem}\label{lem:diagonal}
Let $(\alpha_n)_{n\in\mathbb{N}}$ be a bounded sequence of functions from $\mathbb{R}^N\times\mathbb{R}$ to $\mathbb{R}$ such that
$\alpha_n(y,z)$ is periodic in $y$ and nonincreasing in $z$,
and $\alpha_n(y,z+y\cdot e)$ is uniformly
continuous in $y\in\mathbb{R}^N$, uniformly with respect to $z$ and $n$.
Then there exists a subsequence $(\alpha_{n_k})_{k\in\mathbb{N}}$ such that the following double limit
exists locally uniformly in~$y\in\mathbb{R}^N$:
$$\beta(y,z):=\lim_{ \mathbb{Q}\ni \zeta \to z^+}
\Big(\lim_{k\to+\infty}\alpha_{n_k}(y,\zeta+y\cdot e)\Big).$$
Furthermore, $\beta(y,z)$ is uniformly continuous
in $y\in\mathbb{R}^N$ uniformly with respect to $z\in\mathbb{R}$.
Finally, the function
$\alpha^*(y,z):=\beta(y,z-y\cdot e)$ is periodic in $y$ and
nonincreasing and lower semicontinuous in $z$.
\end{lem}
\begin{proof}
Using a diagonal method, we can find a subsequence
$\alpha_{n_k}(y,\zeta+y\cdot e)$ converging locally uniformly in $y\in\mathbb{R}^N$ to some function $\tilde\beta(y,\zeta)$
for all $\zeta\in\mathbb{Q}$. The function $\tilde\beta(y,\zeta)$ is uniformly continuous in $y$
uniformly with respect to $\zeta\in\mathbb{Q}$.
We then define $\beta:\mathbb{R}^N\times\mathbb{R}\to\mathbb{R}$ by setting
$$\beta(y, z) := \lim_{ \mathbb{Q}\ni \zeta \to z^+} \tilde\beta (y, \zeta).$$
This limit exists thanks to the monotonicity with respect to $z$, and it is locally uniform with respect to $y$
by equicontinuity. We point out
that $\beta\leq\tilde \beta$ on $\mathbb{R}^N\times\mathbb{Q}$,
but equality may fail.
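To illustrate how the inequality can be strict, drop for a moment the $y$-dependence and the continuity hypotheses (so this is only a caricature of the mechanism) and consider the nonincreasing function
$$\tilde\beta(\zeta):=\begin{cases}
1 & \text{if }\zeta\leq0,\\
0 & \text{if }\zeta>0;
\end{cases}
\qquad\text{then}\qquad
\beta(0)=\lim_{\mathbb{Q}\ni\zeta\to0^+}\tilde\beta(\zeta)=0<1=\tilde\beta(0).$$
At the jump point the relaxed limit selects the lower value, consistently with the lower semicontinuity of $\beta$ in $z$.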
We also see that $\beta(y,z)$ is uniformly continuous
in $y\in\mathbb{R}^N$ uniformly with respect to $z\in\mathbb{R}$, and it is nonincreasing and
lower semicontinuous in $z$.
Next, we define $\alpha^*(y,z):=\beta(y,z-y\cdot e)$.
We need to show that $\alpha^*(y,z)$ is periodic in~$y$.
Fix $(y,z)\in\mathbb{R}^N\times\mathbb{R}$ and $L \in \mathbb{Z}^N$.
Then, using the periodicity of $\alpha_n$,
for every $\zeta,\zeta'\in\mathbb{Q}$ satisfying
$\zeta> z$ and $\zeta'+L \cdot e > \zeta$,
we get
$$\alpha_n(y, \zeta+ y \cdot e) \geq \alpha_n (y+L ,\zeta'+ (y+L) \cdot e).$$
Passing to the limit along the subsequence $\alpha_{n_k}$ we deduce
$$\tilde\beta (y, \zeta) \geq
\tilde\beta (y + L , \zeta').$$
Now we let $\mathbb{Q}\ni\zeta \to z^+$ and $\mathbb{Q}\ni\zeta' \to( z - L \cdot e)^+$ and we derive
$$\beta (y, z) \geq
\beta (y + L , z - L \cdot e).$$
That is, $\alpha^*(y, z+y \cdot e) \geq \alpha^* (y+L,z+y \cdot e)$.
Because $y$ and $z$ are arbitrary, this means that $\alpha^*\geq \alpha^* (\cdot+L,\cdot)$
for all $L\in\mathbb{Z}^N$, i.e., $\alpha^*$ is periodic in its first variable.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:limit1}]
Consider the family of functions $(a_{c,n(c)}(y,z+z_{c,n(c)}))_{c<c^*}$, with $n(c)$, $z_{c,n(c)}$ given by
Lemma \ref{lem:n(c)}. From Lemma~\ref{lem:a1}, we know that this family is uniformly bounded between $0$ and $\max\bar p$, and that each $a_{c,n(c)}$ is periodic in the first variable and nonincreasing in the second one.
Moreover, the functions
$a_{c,n(c)}(y,z+y\cdot e)$ are uniformly continuous in $y\in\mathbb{R}^N$,
uniformly with respect to $z\in\mathbb{R}$ and $c \in \mathbb{R}$, due to Lemma~\ref{lem:a1}.
In particular, any sequence extracted from this family fulfils the hypotheses of Lemma~\ref{lem:diagonal}.
Then, there exists a sequence $c^k\nearrow c^*$ such that
the following limits exist locally uniformly in $y\in\mathbb{R}^N$:
\Fi{def:a*}
a^*(y,z+y\cdot e):=\lim_{ \mathbb{Q}\ni \zeta \to z^+}\Big(\lim_{k\to+\infty}
a_{c^k,n(c^k)}(y,\zeta+z_{c^k,n(c^k)}+y\cdot e)\Big).
\end{formula}\noindent
We further know that the function $a^*$ satisfies the desired properties $(i)$-$(ii)$.
The definition of $z_{c,n(c)}$ translates into the following normalization conditions:
\Fi{atilde>}
\forall z<0,\quad
\min_{y\in[0,1]^N} \big(a^*(y,z+y\cdot e) - \phi(y,-\infty)\big) \geq 0,
\end{formula}\noindent
\Fi{atilde<}
\min_{y\in[0,1]^N}\big(a^*(y,y\cdot e) -\phi(y,-\infty)\big) \leq 0,
\end{formula}\noindent
where we have used the monotonicity in $z$ and for the second one also
the locally uniform convergence with respect to~$y$.
Let us check property $(iii)$. Using the continuity property of
Proposition \ref{prop:Fec} together with \eqref{c1c21} we obtain
\[\begin{split}
\mathcal{F}_{e,c^*} [a^*] (y, z + y \cdot e - c^*) &=
\lim_{ \mathbb{Q}\ni \zeta \to (z-c^*)^+}\Big(\lim_{k\to+\infty}
\mathcal{F}_{e,c^*} [a_{c^k,n(c^k)}](y,\zeta+z_{c^k,n(c^k)}+y\cdot e)\Big)\\
&=\lim_{ \mathbb{Q}\ni \zeta \to (z-c^*)^+}\Big(\lim_{k\to+\infty}
\mathcal{F}_{e,c^k} [a_{c^k,n(c^k)}](y,\zeta+z_{c^k,n(c^k)}+y\cdot e+c^*-c^k)\Big).
\end{split}\]
We now use property \eqref{eq:nc1} to deduce that the latter term is larger
than or equal to
\[
\limsup_{ \mathbb{Q}\ni \zeta \to (z-c^*)^+}\Big(\limsup_{k\to+\infty}
a_{c^k,n(c^k)}(y,\zeta+z_{c^k,n(c^k)}+y\cdot e+c^*-c^k)\Big),\]
which, in turn, is larger than or equal to
\[\lim_{k\to+\infty}
a_{c^k,n(c^k)}(y,\zeta'+z_{c^k,n(c^k)}+y\cdot e),
\]
for any rational $\zeta'>z-c^*$.
Letting $\mathbb{Q}\ni \zeta' \to (z-c^*)^+$, we eventually conclude that
$$\mathcal{F}_{e,c^*} [a^*] (y, z + y \cdot e - c^*)\geq
a^*(y, z + y \cdot e - c^*).$$
Property $(iii)$ then follows by iteration.
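In more explicit terms, writing the inequality just obtained as $\mathcal{F}_{e,c^*}[a^*]\geq a^*$, the iteration only uses the order preservation of $\mathcal{F}_{e,c^*}$ (a sketch; the comparison property is contained in Proposition~\ref{prop:Fec}):
$$(\mathcal{F}_{e,c^*})^{n+1}[a^*]
=(\mathcal{F}_{e,c^*})^{n}\big[\mathcal{F}_{e,c^*}[a^*]\big]
\geq(\mathcal{F}_{e,c^*})^{n}[a^*]
\quad\text{for all }n\in\mathbb{N},$$
that is, the sequence of iterates $((\mathcal{F}_{e,c^*})^{n}[a^*])_{n\in\mathbb{N}}$ is nondecreasing.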
Next, fix $m\in\mathbb{N}$ and a positive $\zeta\in\mathbb{Q}$.
We know by \eqref{eq:nc1} that, for every $k\in\mathbb{N}$ and~$y\in\mathbb{R}^N$,
\[\begin{split}
a_{c^k, n(c^k)+m} (y,\zeta+z_{c^k, n(c^k)}+y\cdot e ) &=
(\mathcal{F}_{e,c^k})^m [a_{c^k, n(c^k)}] (y,\zeta+z_{c^k, n(c^k)}+y\cdot e )\\
&\geq(\mathcal{F}_{e,c^*})^m [a_{c^k, n(c^k)}] (y,\zeta+z_{c^k, n(c^k)}+y\cdot e ).\\
\end{split}\]
Let $k$ be large enough so that $1/\sqrt{c^* -c^k }\geq m$ and
$2 \sqrt{c^* -c^k }<\zeta$. We deduce from
\eqref{eq:zcn1} that $\zeta+z_{c^k, n(c^k)}>z_{c^k,n(c^k) + m}$ and thus
$$\min_{y\in[0,1]^N}
\big((\mathcal{F}_{e,c^*})^m [a_{c^k, n(c^k)}] (y,\zeta+z_{c^k, n(c^k)}+y\cdot e ) - \phi(y,-\infty)\big)
\leq 0.$$
Letting now $k\to+\infty$ and next $\zeta\to0^+$ and using the continuity of $\mathcal{F}_{e,c}$ (hence of~$(\mathcal{F}_{e,c})^m$) in the locally uniform topology, we eventually obtain
$$\forall m\in\mathbb{N},\quad
\min_{y\in[0,1]^N}
\big((\mathcal{F}_{e,c^*})^m [a^*] (y,y\cdot e ) - \phi(y,-\infty)\big) \leq 0,$$
from which property $(iv)$ readily follows.
It remains to look into the asymptotics of $a^*$ as $z \to \pm \infty$.
We define the left limit
$$a_{\ell} (y) := \lim_{ z \to -\infty} a^* (y,z + y \cdot e),$$
which exists by the monotonicity of $a^*(y,z)$ with respect to $z$, and is
locally uniform in~$y$.
Then, by monotonicity and periodicity, we deduce that the limit $a^* (y,-\infty)=a_\ell(y)$
holds uniformly in $y$.
The function $a_\ell$ is continuous and periodic.
Moreover, the normalization condition \eqref{atilde>} yields $a_\ell(y)\geq\phi(y,-\infty)$.
Finally, by the continuity of~$\mathcal{F}_{e,c^*}$ we get, for all $y \in [0,1]^N$,
$$\mathcal{F}_{e,c^*}[a_{\ell}](y)=
\lim_{ z \to -\infty} \mathcal{F}_{e,c^*}[a^*] (y,z + y \cdot e)\geq a_{\ell}(y).$$
This means that the sequence $((\mathcal{F}_{e,c^*})^n[a_{\ell}])_{n\in\mathbb{N}}$ is
nondecreasing. Because $a_{\ell}$ is independent of $z$,
by the definition \eqref{eq:def_mapping} we see that
$(\mathcal{F}_{e,c^*})^n[a_{\ell}]$ reduces to $(\mathcal{F}_{e,0})^n[a_{\ell}]$,
that is, to the solution of~\eqref{eq:parabolic}
with initial datum $a_{\ell}$ computed at time $t=n$.
Then, because $a_{\ell}\geq\phi(y,-\infty)$ and recalling that
the latter lies in the basin of attraction of $\bar p$,
we infer that $(\mathcal{F}_{e,0})^n[a_{\ell}]\to \bar p$ as $n\to+\infty$,
and the limit is uniform thanks to
Proposition~\ref{prop:Fec}$(iv)$.
In a similar fashion, we define the (locally uniform) right limit
$$a_{r} (y) := \lim_{ z \to +\infty} a^* (y,z + y \cdot e).$$
As before, we see that the limit $a^* (y,+\infty)=a_{r}(y)$ is uniform
in $y$, that $a_r$ is continuous and periodic,
and that the sequence $((\mathcal{F}_{e,0})^n[a_{r}])_{n\in\mathbb{N}}$
is nondecreasing. Therefore, $(\mathcal{F}_{e,0})^n[a_{r}](y)$ converges
uniformly as $n\to+\infty$
to a fixed point $p(y)$ of $\mathcal{F}_{e,0}$.
This means that the solution $u$ of~\eqref{eq:parabolic}
with initial datum $p$ is 1-periodic in time and periodic in space and
therefore, by Proposition~\ref{prop:energy1}, it is actually stationary.
We conclude that $(vi)$ holds, completing the proof of the lemma.
\end{proof}
\subsection{The uppermost pulsating front}\label{sec:uppermost}
From now on, $a^*$ will denote the function provided by Lemma \ref{lem:limit1}
and more specifically defined by \eqref{def:a*} for a suitable sequence
$c^k\nearrow c^*$. Next we show that the discrete front
is given by the limit of the iterations
$(\mathcal{F}_{e,c^*})^n [a^*]$. We shall further show that its limit state as~$z\to+\infty$ is stable.
\begin{lem}\label{lem:U1}
There holds that
$$(\mathcal{F}_{e,c^*})^n [a^*](y,z+y\cdot e)\to U^* (y,z+y\cdot e)\quad\text{as }\;
n\to+\infty,$$
locally uniformly in $y$ and pointwise in $z$, where $U^*(y,z)$ is nonincreasing in $z$ and
is a discrete travelling front
connecting~$\bar p$ to some stable periodic steady state~$p^*< \bar p$, in the sense of
Definition~\ref{def:discrete_front}.
\end{lem}
\begin{proof}
Let us observe that, because $((\mathcal{F}_{e,c^*})^n [a^*])_{n\in\mathbb{N}}$ is a nondecreasing sequence, it is already clear that it converges pointwise to some function $U^*(y,z)$ which is periodic in~$y$ and nonincreasing in $z$. By writing
$$(\mathcal{F}_{e,c^*})^{n+1} [a^*] (y,z+y\cdot e) =
\mathcal{F}_{e,c^*}\circ(\mathcal{F}_{e,c^*})^n [a^*] (y,z+y\cdot e),$$
we deduce from Proposition \ref{prop:Fec}$(iv)$ that
$(\mathcal{F}_{e,c^*})^n [a^*] (y,z+y\cdot e)$ converges as $n \to +\infty$ locally uniformly in~$y$,
for any $z\in\mathbb{R}$. In particular, we can pass to the limit $n \to +\infty$ in the above equation and conclude that $U^*$ is a fixed point for $\mathcal{F}_{e,c^*}$.
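Schematically, this last step reads
$$\mathcal{F}_{e,c^*}[U^*]
=\lim_{n\to+\infty}\mathcal{F}_{e,c^*}\big[(\mathcal{F}_{e,c^*})^{n}[a^*]\big]
=\lim_{n\to+\infty}(\mathcal{F}_{e,c^*})^{n+1}[a^*]
=U^*,$$
where the first equality uses the continuity of $\mathcal{F}_{e,c^*}$ in the locally uniform topology together with the convergence established above.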
Let us now turn to the asymptotics as $z \to \pm \infty$. We know from
Lemma~\ref{lem:limit1}$(v)$ that
$(\mathcal{F}_{e,c^*})^n [a^*] (\cdot , -\infty) \to \bar p$ as $n \to +\infty$.
We can exchange these limits using
the continuity of $(\mathcal{F}_{e,c^*})^n$ and the uniformity
of the limit $a^*(\cdot,-\infty)$, together with the
monotonicity of $U^*$ in the second variable.
This yields $U^*(\cdot,-\infty) \equiv \bar p$.
Next, property~$(iv)$ of Lemma~\ref{lem:limit1} implies that
$$\max_{y\in[0,1]^N}\big(\bar p(y)-U^*(y,y\cdot e)\big) >0.$$
Writing $U^*(y,+\infty)=\lim_{z\to+\infty}U^*(y,z+y\cdot e)$,
we deduce that the limit $U^*(\cdot,+\infty)$ is uniform and therefore
$\mathcal{F}_{e,0}[U^*](\cdot,+\infty)\equiv
\mathcal{F}_{e,c^*}[U^*](\cdot,+\infty)\equiv U^*(\cdot,+\infty)$.
We also deduce from the previous inequality
that $U^*(\cdot,+\infty)\not\equiv\bar p $.
As seen in Proposition~\ref{prop:energy1},
any solution of~\eqref{eq:parabolic}
that is periodic in both time and space is actually constant in time.
Thus, $U^*(\cdot ,+\infty)$ is a periodic steady state of~\eqref{eq:parabolic},
denoted by~$p^*$,
that satisfies $0\leq p^*<\bar p$, where the second inequality is strict due to the
elliptic strong maximum principle.
It remains to check that $p^*$ is stable.
We shall do this using Assumption~\ref{ass:speeds}.
We proceed by contradiction and assume that~$p^*$ is unstable.
As seen after the statement of Theorem~\ref{thm:mono},
Assumption~\ref{ass:mix} guarantees the existence of a minimal (resp.~maximal)
stable periodic steady state above (resp.~below)~$p^*$, denoted by
$p_+$ (resp.~$p_-$), and also that~\eqref{eq:parabolic} is of the
monostable type between $p_-$ and $p^*$, as well as between $p^*$ and~$p_+$.
As a consequence, Theorem~\ref{thm:mono}
provides two minimal speeds of fronts~$\overline{c}_{p^*}$ and~$\underline{c}_{p^*}$
connecting $p_+$ to $p^*$ and $p^*$ to $p_-$ respectively.
Our Assumption~\ref{ass:speeds} states that~$\underline{c}_{p^*}<\overline{c}_{p^*}$.
According to Weinberger~\cite{W02}, these quantities coincide with the spreading speeds
for~\eqref{eq:parabolic} in the ranges between $p^*$ and $p_+$ and between
$p_-$ and $p^*$ respectively.
Namely, taking a constant $\delta>0$ such that
$ p^*+\delta < p_+$, and considering the Heaviside-type function
$$H(y,z):=\begin{cases} p^*(y)+\delta & \text{if }z<-\sqrt N,\\
p^* (y) & \text{if }z\geq-\sqrt N,
\end{cases}$$
we have that for any $Z \in \mathbb{R}$, the solution $v(t,y;H(x,Z+x\cdot e))$ of \eqref{eq:parabolic} spreads with speed $\overline{c}_{p^*}$ in the following sense: for any $\varepsilon >0$,
$$\lim_{t \to+\infty} \sup_{y \cdot e \leq (\overline{c}_{p^*}-\varepsilon) t}
|v(t,y; H(x,Z+x\cdot e)) - p_+ (y)| = 0,$$
$$\lim_{t \to +\infty} \sup_{y \cdot e \geq (\overline{c}_{p^*} + \varepsilon) t}
|v(t,y;H (x,Z+x\cdot e)) - p^*(y)| = 0.$$
A similar result holds when looking at solutions between $p_-$ and $p^*$.
Let us show that $c^* \geq \overline{c}_{p^*}$.
Since $U^* (\cdot, - \infty) \equiv \bar p \geq p_+$ and $U^* \geq p^*$, we can choose $Z >0$ large enough so that
$$U^* \geq H (\cdot, \cdot + Z).$$
Now we argue by contradiction and assume that $c^* < \overline{c}_{p^*}$.
Then, calling $\varepsilon:=(\overline{c}_{p^*}-c^*)/2$, we have that
$\overline{c}_{p^*}-\varepsilon=c^*+\varepsilon$ and thus, by comparison,
\begin{equation}\label{eq:spread11}
\liminf_{n \to +\infty} \inf_{y\cdot e \leq (c^*+\varepsilon)n}
\big(v(n,y; U^* (x,x\cdot e)) - p_+ (y )\big) \geq 0.
\end{equation}
Consequently, because
$$v(n,y ; U^* (x,x\cdot e)) = (\mathcal{F}_{e,c^*})^n [U^*] (y , y\cdot e - nc^*) = U^* (y, y \cdot e - nc^*),$$
we find that $U^*(y,y\cdot e - nc^*) > p_+ (y)-\delta$
for $n$ sufficiently large and for all $y$ such that
$y \cdot e \leq (c^*+ \varepsilon)n$.
Taking for instance $y = (c^* + \varepsilon )n\, e$
and passing to the limit as $n \to +\infty$ yields
$p^* (y_\infty) \geq p_+ (y_\infty)-\delta$, where $y_\infty$
is the limit of $(c^* + \varepsilon )n\, e$ (up to subsequences and modulo the periodicity;
recall that the limit $U^*(\cdot ,+\infty)$ is uniform). This is impossible because
$\delta$ was chosen in such a way that
$p^* + \delta < p_+$. As announced, there holds $c^* \geq \overline{c}_{p^*}$.
Let us now show that $c^* \leq \underline{c}_{p^*}$.
The strategy is to follow a level set between $p_-$ and~$p^*$ of a suitable iteration
$(\mathcal{F}_{e,c^*})^n[a_{c,n(c)}]$ and to pass again to the (relaxed) limit
as $c\nearrow c^*$. Notice that, if the state~$p$ coming from
Lemma~\ref{lem:limit1}$(vi)$ satisfied $p < p^*$, it would be sufficient to consider the sequence
$((\mathcal{F}_{e,c^*})^n [a^*])_n$ to capture such a level set;
however, it may happen that $p \equiv p^*$, and for this reason we need to
come back to the family~$a_{c,n(c)}$.
For $k\in\mathbb{N}$, we can find $n_k\in\mathbb{N}$ such that the following properties hold:
$$
\max_{y\in[0,1]^N}\big|(\F{c^*})^{n_k}[a^*](y,k+y\cdot e)-U^*(y,k+y\cdot e)\big|
<\frac1k,
$$
$$
\max_{y\in[0,1]^N}\big|(\F{c^*})^{n_k}[a^*](y,2k+y\cdot e)-U^*(y,2k+y\cdot e)\big|
<\frac1k.
$$
Then, recalling the definition \eqref{def:a*}
of $a^*$ and up to extracting a subsequence of the sequence $c^k\nearrow c^*$
appearing there,
we find that for every $k\in\mathbb{N}$, there holds
\Fi{eq:k+1}
\max_{y\in[0,1]^N}\big((\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,k+1+z_{c^k,n(c^k)}+y\cdot e)-
U^*(y,k+y\cdot e)\big)<\frac2k,
\end{formula}\noindent
\Fi{eq:2k-1}
\min_{y\in[0,1]^N}\big((\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,2k-1+z_{c^k ,n(c^k)}+y\cdot e)-
U^*(y,2k+y\cdot e)\big)>-\frac2k.
\end{formula}\noindent
Notice that in~\eqref{eq:k+1} and \eqref{eq:2k-1} we have translated by $z_{c^k, n(c^k)}\pm1$ instead of $z_{c^k,n(c^k)}$
because of the `relaxed'
limit in \eqref{def:a*}.
In order to pick the desired level set, take a constant $\delta>0$ small enough so that
$p_- +\delta< p^*$. We then define
$$\hat z_k :=\inf\{z\ :\ (\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,z+y\cdot e)- p_- (y)\leq \delta \text{ for all }y\in[0,1]^N\}.$$
Observe that $\hat z_k\in\mathbb{R}$ and actually $\hat z_k\geq z_{c^k ,n(c^k)}$,
as a consequence of the definition of $z_{c,n}$ in Lemma~\ref{lem:n(c)}
and the fact that $\phi (\cdot, - \infty) > p^*$, since it lies in the basin of attraction of $\bar p$. Because $U^*(y,+\infty)=p^*(y)>p_- (y)+\delta$
uniformly in $y$, we deduce from \eqref{eq:2k-1} that, for $k$ large enough,
$$\min_{y\in[0,1]^N}\big((\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,2k-1+z_{c^k ,n(c^k)}+y\cdot e)-
p_- (y)\big)>\delta,$$
whence $\hat z_k\geq2k-1+z_{c^k ,n(c^k)}$. It then follows from \eqref{eq:k+1} that,
for $k$ sufficiently large,
\Fi{eq:-k+2}
\max_{y\in[0,1]^N}\big((\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,\hat z_k-k+2+y\cdot e)-
U^*(y,k+y\cdot e)\big)<\frac2k.
\end{formula}\noindent
We now apply Lemma \ref{lem:diagonal} to the sequence
$\big((\F{c^*})^{n_k}[a_{c^k,n(c^k)}](y,z+\hat z_k +y\cdot e)\big)_{k\in\mathbb{N}}$.
This provides us with a function $\hat \alpha^*(y,z)$
periodic in $y$ and nonincreasing in $z$
and such that $\hat \alpha^*(y,z+y\cdot e)$ is uniformly continuous
in $y\in\mathbb{R}^N$, uniformly with respect to $z\in\mathbb{R}$.
Moreover, proceeding exactly as in the proof of Lemma \ref{lem:limit1},
we deduce from the inequality $\F{c^k}\circ(\F{c^*})^{n_k}[a_{c^k,n(c^k)}]\geq
(\F{c^*})^{n_k}[a_{c^k,n(c^k)}]$ that
$(\mathcal{F}_{e,c^*})^n [\hat \alpha^*]$ is nondecreasing with respect to~$n$.
The choice of $\hat z_k$ further implies that
\Fi{alpha>p+d}
\forall z< 0, \quad \max_{y \in [0,1]^N}
\Big(\hat \alpha^*(y, z + y \cdot e) -p_- (y)\Big)\geq\delta,
\end{formula}\noindent
$$\max_{y \in [0,1]^N}\Big(\hat \alpha^*(y,y\cdot e) - p_- (y)\Big)\leq\delta.$$
Finally, property \eqref{eq:-k+2} and the monotonicity in $z$ yield $\hat \alpha^*\leq p^*$.
We are now in a position to prove that $c^* \leq \underline{c}_{p^*}$. We again use a comparison argument with a Heaviside-type function. Indeed, from the above, we
know that $\hat \alpha^* \leq \hat{H}$, where
$$\hat{H} (y,z) :=
\begin{cases}
p^* (y) & \text{if } z \leq \sqrt{N},\\
p_- (y)+\delta & \text{if } z > \sqrt{N}.
\end{cases}$$
According to Weinberger's spreading result in \cite{W02}, the solution $v(t,y;\hat{H}(x,x\cdot e))$ of~\eqref{eq:parabolic} spreads with speed~$\underline{c}_{p^*}$, which implies in
particular that for any $\varepsilon >0$,
$$\lim_{t \to \infty} \sup_{y \cdot e \geq (\underline{c}_{p^*} + \varepsilon) t} |v(t,y;\hat{H} (x,x\cdot e)) - p_- (y)| = 0.$$
By comparison we obtain
$$\limsup_{n \to +\infty} \sup_{y \cdot e \geq (\underline{c}_{p^*} + \varepsilon) n}
\Big(v(n,y; \hat \alpha^* (x,x\cdot e)) - p_- (y)\Big)\leq 0.$$
However, because $(\mathcal{F}_{e,c^*})^n[\hat \alpha^*] $ is
nondecreasing in $n$, we have that
$$\hat \alpha^* (y,y\cdot e - n c^*)\leq
(\mathcal{F}_{e,c^*})^n [\hat \alpha^*] (y,y\cdot e - n c^*)=
v(n,y; \hat \alpha^*(x,x\cdot e)),$$
and thus
$$\limsup_{n \to +\infty} \sup_{y \cdot e \geq (\underline{c}_{p^*} + \varepsilon) n}
\Big(\hat \alpha^* (y,y\cdot e - n c^*) - p_- (y)\Big)\leq 0.$$
For $y \in[0,1]^N$ and $\xi_n:=[(\underline{c}_{p^*} + \varepsilon) n+\sqrt{N}]e$
there holds $(y+\xi_n)\cdot e
\geq (\underline{c}_{p^*} + \varepsilon) n$, whence
$$\limsup_{n \to +\infty}\max_{y\in[0,1]^N}
\Big( \hat \alpha^* (y+\xi_n, (\underline{c}_{p^*} + \varepsilon - c^*)n) -
p_- (y+\xi_n)\Big) \leq 0.$$
By periodicity, we can drop the $\xi_n$ in the above expression.
We eventually deduce from~\eqref{alpha>p+d} that
$\underline{c}_{p^*} + \varepsilon - c^*\geq0$, that is,
$\underline{c}_{p^*} \geq c^*$ due to the arbitrariness of $\varepsilon>0$.
In the end, we have shown that $\overline{c}_{p^*} \leq c^* \leq \underline{c}_{p^*}$, which directly contradicts Assumption~\ref{ass:speeds}.
Lemma~\ref{lem:U1} is thereby proved.
\end{proof}
\begin{rmk}
Under the bistable Assumption~\ref{ass:bi}, obviously $p^*$ has to be 0, and therefore we have constructed a discrete travelling front connecting $\bar p$ to $0$. In order to conclude the proof of Theorem~\ref{th:bi}, one may directly skip to Section~\ref{sec:continuous}.
\end{rmk}
\section{A (discrete) propagating terrace}\label{sec:terrace}
At this stage we have constructed the `highest floor' of the terrace.
Then in the bistable case we are done. In the multistable case it remains to construct the lower floors, and thus we place ourselves under the pair of Assumptions~\ref{ass:multi} and \ref{ass:speeds}.
To proceed, we apply
the previous argument to the restriction of \eqref{eq:parabolic} to the `interval' $[0,p^*]$, with~$p^*$ given by Lemma~\ref{lem:U1},
and we find a second travelling front connecting $p^*$ to another stable state smaller than~$p^*$.
For this the stability of $p^*$ is crucial.
The iteration ends as soon as we reach the $0$ state,
which happens in a finite number of steps because there is a finite number of stable periodic steady states.
This procedure provides us with some finite sequences $(q_j)_{0\leq j\leq J}$ and
$(U_j)_{1\leq j\leq J}$, where the $q_j$ are {\em linearly}
stable periodic steady states and the $U_j$ are discrete travelling fronts connecting $q_{j-1}$ to~$q_j$.
We need to show that the speeds are ordered, so that the family of travelling fronts we construct is a (at this point, discrete) propagating terrace.
It is here that we use the linear stability hypothesis in Assumption~\ref{ass:multi}.
As we mentioned in the introduction, the order of the speeds is a crucial property of the terrace,
which is not a mere collection of unrelated fronts but what should actually emerge
in the large-time limit of solutions of the Cauchy problem.
\begin{prop}
Under Assumptions~\ref{ass:multi} and~\ref{ass:speeds}, the speeds $c_j$ of the fronts $U_j$ are ordered:
$$c_1 \leq c_2 \leq \cdots \leq c_J .$$
\end{prop}
\begin{proof}
We only consider the two uppermost travelling fronts $U_1$ and $U_2$ and
we show that $c_1 \leq c_2$. The same argument applies to the subsequent speeds.
We first come back to the family $(a_{c,n})_{c,n}$ and the function $a^*$ used to construct the front~$U_1$
connecting $q_0 \equiv \bar p$ to~$q_1$. The main idea is that, by capturing another level
set between $q_2$ and $q_1$, we should obtain a solution which moves with a
speed larger than or equal to $c_1$ but stays below $q_1$.
Then, comparing it with the second front $U_2$, we expect to recover the desired
inequality $c_1 \leq c_2$.
In the proof of Lemma \ref{lem:U1}, we have constructed two sequences
$(n_k)_{k\in\mathbb{N}}$, $(c^k)_{k\in\mathbb{N}}$, with $c^k\nearrow c_1$,
such that \eqref{eq:k+1}, \eqref{eq:2k-1} hold with $c^*=c_1$ and $U^*=U_1$.
Take a small positive constant $\delta$ so that $q_j \pm \delta$ lie in the basin of attraction of $q_j$,
for $j=1,2$, and moreover $\min (q_1 - q_2) \geq 2 \delta$.
Then define
$$\hat{z}_k:=\inf\{z\ :\ (\F{c_1})^{n_k}[a_{c^k,n(c^k)}](y,z+y\cdot e)- q_2 (y)\leq
\delta \text{ for all }y\in[0,1]^N\}.$$
The inequality
\eqref{eq:2k-1} implies that, for $y\in[0,1]^N$ and $z\leq z_{c^k,n(c^k)} + 2k -1$, there holds
$$(\F{c_1})^{n_k}[a_{c^k,n(c^k)}](y,z+y\cdot e)>U_1(y,2k+y\cdot e)-\frac2k\to q_1(y)
\quad\text{as }\;k\to+\infty.$$
Because $q_1>q_2 +\delta$, we infer that, for $k$ large enough,
$$\hat{z}_k\geq z_{c^k,n(c^k)} + 2k -1,$$
whence, by \eqref{eq:k+1},
\Fi{zk-k}
\max_{y\in[0,1]^N}\big((\F{c_1})^{n_k}[a_{c^k,n(c^k)}](y, \hat{z}_k +2-k+y\cdot e)
- U_1(y,k + y \cdot e )\big)<\frac{2}{k}.
\end{formula}\noindent
We now consider the sequence of functions
$\big((\F{c_1})^{n_k}[a_{c^k,n(c^k)}](y,z+\hat{z}_k+y\cdot e)\big)_{k\in\mathbb{N}}$ and apply Lemma~\ref{lem:diagonal}. We obtain a function $\hat{\alpha} (y,z)$ which is periodic in $y$, nonincreasing in $z$. Moreover, it is such that $\hat{\alpha} (y,z+y\cdot e)$ is uniformly continuous in $y$, uniformly with respect to $z$, and $(\mathcal{F}_{e,c_1})^{n} [\hat \alpha]$ is nondecreasing with respect to $n$.
Our choice of $\hat{z}_k$ further implies
\begin{equation}\label{alphap'_norm}
\forall z< 0, \ \max_{ y \in [0,1]^N} \Big( \hat \alpha(y, z + y \cdot e) - q_2 (y) \Big) \geq \delta,
\end{equation}
\begin{equation}\label{alphap'_norm2}
\forall y \in [0,1]^N, \quad \hat \alpha (y,y\cdot e)\leq q_2 (y)+\delta .
\end{equation}
The latter property, together with the facts that $(\mathcal{F}_{e,c_1})^n [\hat \alpha](\cdot ,+\infty)$
is nondecreasing in $n \in \mathbb{N}$ and that $q_2+\delta$ lies in the basin of attraction of $q_2$, yield
$$\hat \alpha (\cdot , + \infty) \leq q_2 .$$
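Schematically, this follows from the chain (a sketch combining the monotonicity in $n$, the comparison principle, and the identification of $(\mathcal{F}_{e,c_1})^n$ with $(\mathcal{F}_{e,0})^n$ at $z$-independent data, as in the proof of Lemma~\ref{lem:limit1}):
$$\hat\alpha(\cdot,+\infty)
\leq(\mathcal{F}_{e,0})^{n}\big[\hat\alpha(\cdot,+\infty)\big]
\leq(\mathcal{F}_{e,0})^{n}[q_2+\delta]
\;\longrightarrow\;q_2
\quad\text{as }n\to+\infty.$$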
On the other hand, using \eqref{zk-k} one infers that
$$\hat \alpha (\cdot , - \infty) \leq q_1 .$$
Our aim is to compare $\hat \alpha$ with $U_2$ via the sliding method. To this end, we shall raise
$U_2$ slightly without affecting its asymptotic dynamics, exploiting the linear stability of
$q_1$ and $q_2$.
Let $\varphi_{q_1}$ and $\varphi_{q_2}$ denote the periodic principal eigenfunctions associated
with the linearization of \eqref{eq:parabolic} around $q_1$ and $q_2$ respectively,
normalized by $\max \varphi_{q_1} =\max \varphi_{q_2}=1$. Then consider
a smooth, positive function $\Phi=\Phi(y,z)$ which is periodic in $y$ and
satisfies
$$\Phi(y,z)=\begin{cases}
\varphi_{q_1}(y) & \text{if }z\leq-1 , \vspace{3pt}\\
\varphi_{q_2}(y) & \text{if }z\geq1 ,
\end{cases}$$
and define, for $\varepsilon \in (0,\delta)$,
$$U_{2,\varepsilon} (y,z):=U_2(y,z)+\varepsilon\,\Phi(y,z).$$
Now, because the limits as $z\to\pm\infty$ satisfy the following inequalities uniformly in $y$:
$$U_{2,\varepsilon}(y,-\infty)>q_1(y)\geq
\hat \alpha(y,-\infty),\qquad
U_{2,\varepsilon}(y,+\infty) > q_2(y)\geq
\hat \alpha(y,+\infty),$$
and using also \eqref{alphap'_norm},
we can define the following real number:
$$Z_\varepsilon:=\sup\big\{Z\ :\
U_{2,\varepsilon} (y,Z+z+y\cdot e)>\hat \alpha(y,z+y\cdot e)
\text{ for all }(y,z)\in\mathbb{R}^{N+1}\big\}.$$
Let us assume by way of contradiction that the speed of $U_2$ satisfies
$c_2<c_1$.
Then if we fix
$$\tilde Z_\varepsilon\in(Z_\varepsilon-c_1+c_2,Z_\varepsilon),$$
we can find $(y_\varepsilon,z_\varepsilon)\in\mathbb{R}^{N+1}$ such that
\Fi{Zeps}
U_{2,\varepsilon} (y_\varepsilon,\tilde Z_\varepsilon+c_1-c_2+z_\varepsilon+y_\varepsilon\cdot e)\leq
\hat \alpha(y_\varepsilon,z_\varepsilon+y_\varepsilon\cdot e).
\end{formula}\noindent
Consider the following functions:
$$u_\varepsilon(t,y):=v(t,y;\hat \alpha(x,c_1+z_\varepsilon+x\cdot e)),$$
$$w_\varepsilon(t,y):=v(t,y;U_2 (x,\tilde Z_\varepsilon+c_1+z_\varepsilon+x\cdot e))
+\varepsilon\,\Phi(y,\tilde Z_\varepsilon+c_1-c_2 t+z_\varepsilon+y\cdot e).$$
On the one hand, we find that
\[\forall y\in\mathbb{R}^N,\quad
w_\varepsilon(0,y)-u_\varepsilon(0,y)=
U_{2,\varepsilon}(y,\tilde Z_\varepsilon+c_1+z_\varepsilon+y\cdot e)
-\hat \alpha (y,c_1+z_\varepsilon+y\cdot e)>0\]
because $\tilde Z_\varepsilon<Z_\varepsilon$. Moreover, since
$U_{2,\varepsilon}$ and $\hat \alpha$ are periodic in $y$ and satisfy
$U_{2,\varepsilon}(y,\pm\infty)>\hat \alpha(y,\pm\infty)$, we have
\Fi{w-u}
\liminf_{y\cdot e\to\pm\infty}(w_\varepsilon-u_\varepsilon)(0,y)\geq
\varepsilon\min\left\{\min\varphi_{q_1},\min\varphi_{q_2}\right\}>0,
\end{formula}\noindent
whence $\inf_y(w_\varepsilon(0,y)-u_\varepsilon(0,y))>0$.
Then, by uniform continuity,
$w_\varepsilon>u_\varepsilon$ for~$t>0$ small enough.
On the other hand, using the fact that, for all $m\in\mathbb{N}$,
\Fi{ueps}
u_\varepsilon(m,y)=(\F{c_1})^m[\hat \alpha] (y,c_1+z_\varepsilon+y\cdot e-m c_1)
\geq \hat \alpha(y,c_1+z_\varepsilon+y\cdot e-m c_1),
\end{formula}\noindent
\Fi{weps}
w_\varepsilon(m,y)=U_{2,\varepsilon}(y,\tilde Z_\varepsilon+c_1+z_\varepsilon+y\cdot e-m c_2),
\end{formula}\noindent
we derive
\[
w_\varepsilon(1,y_\varepsilon)-u_\varepsilon(1,y_\varepsilon)\leq U_{2,\varepsilon}(y_\varepsilon,\tilde Z_\varepsilon+c_1-c_2+z_\varepsilon+y_\varepsilon\cdot e)
-\hat \alpha(y_\varepsilon,z_\varepsilon+y_\varepsilon\cdot e),
\]
which is nonpositive by \eqref{Zeps}. Let us point out that, if $w_\varepsilon$ were a supersolution on the whole domain, this would contradict the comparison principle; unfortunately, as we shall see below, we only know it to be a supersolution in some subdomains. We shall therefore first use a limiting argument as $\varepsilon \to 0$ to show that $\hat{\alpha}$ also lies below a shift of~$U_2$ itself, after which the comparison principle becomes available.
From the above we deduce the existence of a time $T_\varepsilon\in(0,1]$ such that
$w_\varepsilon>u_\varepsilon$ for $t\in[0,T_\varepsilon)$ and
$\inf_y(w_\varepsilon-u_\varepsilon)(T_\varepsilon,y)=0$.
There exists then a sequence $(y_\varepsilon^n)_{n\in\mathbb{N}}$ satisfying $(w_\varepsilon-u_\varepsilon)(T_\varepsilon,y_\varepsilon^n)\to0$
as $n\to+\infty$.
We observe that the sequence $(y_\varepsilon^n\cdot e)_{n\in\mathbb{N}}$ is necessarily bounded because
the inequalities \eqref{w-u} hold true for all times,
as a consequence of the fact that,
for solutions of parabolic equations such as \eqref{eq:parabolic}, the property of being bounded
from one side by a steady state in the limit in a given direction is preserved under the evolution.
The linear stability of $q_1$ and $q_2$ means that
the periodic principal eigenvalues $\lambda_{q_1}$, $\lambda_{q_2}$ of
the associated linearized operators are negative.
Then, for a given solution $u$ to~\eqref{eq:parabolic}, the function
$u+\varepsilon\varphi_{q_j}$, with $\varepsilon>0$ and $j=1,2$, satisfies for $t>0$, $x\in\mathbb{R}^N$,
\[\begin{split}
\partial_t(u+\varepsilon\varphi_{q_j}) - \text{div} (A(x) \nabla (u+\varepsilon\varphi_{q_j}) )
&=f (x,u)+(f_u(x,q_j)-\lambda_{q_j})\varepsilon\varphi_{q_j}\\
&=f(x,u+\varepsilon\varphi_{q_j})+(f_u(x,q_j)-f_u(x,s)-\lambda_{q_j})\varepsilon\varphi_{q_j},
\end{split}\]
for some $u(t,x)<s<u(t,x)+\varepsilon\varphi_{q_j}(x)$.
Thus, because $\lambda_{q_j}<0$, the regularity of~$f_u$
allows us to find $\gamma >0$ such that
$u+\varepsilon\varphi_{q_j}$ is a supersolution to \eqref{eq:parabolic}
whenever $|u-q_j|<\gamma$ and $\varepsilon\in(0,\gamma)$.
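For definiteness, one admissible choice of $\gamma$ is the following sketch, which only uses the regularity of $f_u$ and the normalization $\max\varphi_{q_j}=1$: take $\gamma>0$ so small that, for $j=1,2$,
$$|s-q_j(x)|\leq2\gamma
\;\Longrightarrow\;
|f_u(x,q_j(x))-f_u(x,s)|\leq-\lambda_{q_j}.$$
If $|u-q_j|<\gamma$ and $\varepsilon\in(0,\gamma)$, the intermediate point $s$ in the computation above satisfies $|s-q_j|<2\gamma$, whence $f_u(x,q_j)-f_u(x,s)-\lambda_{q_j}\geq0$ and the right-hand side there is bounded from below by $f(x,u+\varepsilon\varphi_{q_j})$.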
From now on, we restrict to $\varepsilon\in(0,\gamma)$.
Take $Z\geq1$ in such a way that
$$U_2 (\cdot,z)> q_1 -\gamma \ \text{ if }z\leq-Z,\qquad
U_2 (\cdot,z)< q_2+\gamma \ \text{ if }z\geq Z,$$
as well as, for all $0 \leq t \leq 1$,
$$v (t,y; U_2 (x, \tilde Z_\varepsilon + c_1 + z_\varepsilon + x \cdot e)) > q_1 - \gamma
\quad \text{if } y \cdot e \leq -Z - \tilde{Z}_\varepsilon - c_1 + c_2 t - z_\varepsilon ,$$
$$v (t,y; U_2 (x, \tilde Z_\varepsilon + c_1 + z_\varepsilon + x \cdot e)) < q_2 + \gamma
\quad \text{if } y \cdot e
\geq Z - \tilde{Z}_\varepsilon - c_1 + c_2 t - z_\varepsilon .$$
We have just seen that these conditions imply that $w_\varepsilon$ is a supersolution
to~\eqref{eq:parabolic} in the corresponding subdomains.
We claim that this implies that
\Fi{yeps}
\liminf_{n\to+\infty}
|\tilde Z_\varepsilon+c_1+z_\varepsilon+y_\varepsilon^n\cdot e|\leq Z+|c_2|+3\sqrt{N},
\end{formula}\noindent
which will in turn guarantee that the functions $u_\varepsilon$ and $w_\varepsilon$ do not become trivial as $\varepsilon \to 0$.
To prove \eqref{yeps}, consider $(k^n)_{n\in\mathbb{N}}$ in $\mathbb{Z}^N$ such that
$y_\varepsilon^n-k^n\in[0,1]^N$.
Clearly, $(k^n\cdot e)_{n\in\mathbb{N}}$ is bounded because $(y_\varepsilon^n\cdot e)_{n\in\mathbb{N}}$ is.
Let $y^\infty_\varepsilon$ be the limit of (a subsequence of) $(y_\varepsilon^n-k^n)_{n\in\mathbb{N}}$.
The functions
$w_\varepsilon(t,y+k^n)$ and $u_\varepsilon(t,y+k^n)$
converge as $n\to+\infty$ (up to subsequences) locally uniformly in
$[0,1)\times\mathbb{R}^N$ to some functions $\tilde w_\varepsilon$, $\tilde u_\varepsilon$
satisfying
$$\min_{[0,T_\varepsilon]\times\mathbb{R}^N}(\tilde w_\varepsilon-\tilde u_\varepsilon)=
(\tilde w_\varepsilon-\tilde u_\varepsilon)(T_\varepsilon,y^\infty_\varepsilon)=0.$$
The function $\tilde u_\varepsilon$ is a solution to \eqref{eq:parabolic}.
Instead, $\tilde w_\varepsilon$ is a supersolution to
\eqref{eq:parabolic} for $t\in(0,T_\varepsilon]$ and $y\cdot e<2\sqrt{N}$,
or for $t\in(0,T_\varepsilon]$ and $y\cdot e>-2\sqrt{N}$,
if respectively one or the other of the following inequalities holds for
infinitely many values of $n$:
$$\tilde Z_\varepsilon+c_1 +z_\varepsilon+k^n\cdot e<-Z-|c_2|-2\sqrt{N},
\qquad \tilde Z_\varepsilon+c_1 +z_\varepsilon+k^n\cdot e>
Z+|c_2|+2\sqrt{N}.$$
Hence if \eqref{yeps} does not hold we have that $\tilde w_\varepsilon$ is a supersolution of
\eqref{eq:parabolic} in a half-space orthogonal to $e$ containing the point $y^\infty_\varepsilon$,
and thus the parabolic strong maximum principle yields
$\tilde w_\varepsilon\equiv\tilde u_\varepsilon$ in such half-space for $t\leq T_\varepsilon$.
This is impossible because, by the boundedness of $(k^n\cdot e)_{n\in\mathbb{N}}$,
the property \eqref{w-u} holds true with $w_\varepsilon-u_\varepsilon$ replaced by~$\tilde w_\varepsilon-\tilde u_\varepsilon$. This proves~\eqref{yeps}.
Using \eqref{yeps} we can find a family $(\tilde y_\varepsilon)_{\varepsilon\in(0,\gamma)}$
such that $(\tilde Z_\varepsilon+c_1+z_\varepsilon+\tilde y_\varepsilon\cdot e)_{\varepsilon\in(0,\gamma)}$
is bounded and
$(w_\varepsilon-u_\varepsilon)(T_\varepsilon,\tilde y_\varepsilon)\to0$ as $\varepsilon\to0$.
Arguing as before, by considering the translations
$u_\varepsilon(t,y+k_\varepsilon)$, $w_\varepsilon(t,y+k_\varepsilon)$ with
$k_\varepsilon\in\mathbb{Z}^N$ such that $\tilde y_\varepsilon-k_\varepsilon\in[0,1]^N$, we obtain at the limit
$\varepsilon\searrow0$ (up to some subsequences) two functions
$\tilde u$ and $\tilde w$ which are now both solutions to \eqref{eq:parabolic} and
satisfy
$$\min_{y\in\mathbb{R}^N}(\tilde w-\tilde u)(\tilde T,y)=
(\tilde w-\tilde u)(\tilde T,\tilde y)=0,$$
where $\tilde T=\lim_{\varepsilon\to0}T_\varepsilon$ and $\tilde y=\lim_{\varepsilon\to0}(\tilde y_\varepsilon-k_\varepsilon)$.
If $\tilde T>0$ then $\tilde w\equiv\tilde u$, otherwise we can only infer that
$\tilde w\geq\tilde u$ for all times and that $(\tilde w-\tilde u)(0,\tilde y)=0$.
In both cases, roughly speaking, the spreading speed of $\tilde{u}$ cannot exceed that of $\tilde{w}$, which will ultimately contradict the inequality $c_2 < c_1$.
More precisely, since $(\tilde Z_\varepsilon+c_1+z_\varepsilon+k_\varepsilon\cdot e)_{\varepsilon\in(0,\gamma)}$
is bounded, we derive
$$\tilde u(0,\tilde y)=\tilde w(0,\tilde y)=
\lim_{\varepsilon\to0}w_\varepsilon(0,\tilde y +k_\varepsilon)=
\lim_{\varepsilon\to0}U_2 (\tilde y,\tilde Z_\varepsilon+c_1+z_\varepsilon+k_\varepsilon\cdot e+\tilde y \cdot e),$$
and thus $q_2 (\tilde y)<\tilde u(0,\tilde y)<q_1(\tilde y)$
because $q_2<U_2<q_1$ thanks to Proposition \ref{prop:Fec}$(ii)$.
Next, fix $c' \in(c_2,c_1)$ and consider a sequence $(h_m)_{m\in\mathbb{N}}$ satisfying
$c' m<h_m\cdot e<c_1 m$ for~$m$ larger than some~$m_0$.
On the one hand, using \eqref{ueps} and the monotonicity of $\hat \alpha$ with respect to its second variable, we get
$$\forall m\geq m_0,\ y\in\mathbb{R}^N,\quad
u_\varepsilon(m,y+h_m)\geq \hat \alpha ( y,c_1+z_\varepsilon+(y+h_m)\cdot e-mc_1)\geq
u_\varepsilon(0,y),$$
from which we deduce
\begin{equation}\label{eq:u>q2}
\tilde u(m,\tilde y+h_m)\geq\tilde u(0,\tilde y)>q_2(\tilde y).
\end{equation}
On the other hand, \eqref{weps} yields
$$\forall m\geq m_0,\quad
w_\varepsilon(m,\tilde y+h_m+k_\varepsilon)\leq
U_{2,\varepsilon}(\tilde y,\tilde Z_\varepsilon+c_1 +z_\varepsilon+(k_\varepsilon+ \tilde y) \cdot e+m(c'-c_2)),$$
whence, letting $L>0$ be such that $\tilde Z_\varepsilon+c_1+z_\varepsilon+k_\varepsilon\cdot e\geq-L$
for all $\varepsilon \in (0,\gamma)$, we find that
$$\tilde w(m,\tilde y+h_m)\leq
U_2(\tilde y,-L+\tilde y \cdot e+m(c'-c_2)).$$
The above right-hand side converges to $q_2(\tilde y)$ as $m\to+\infty$,
and therefore, by \eqref{eq:u>q2},
we derive for $m$ sufficiently large,
$$\tilde w(m,\tilde y +h_m)<\tilde u(0,\tilde y)
\leq \tilde u(m,\tilde y +h_m).$$
This contradicts the inequality $\tilde w\geq\tilde u$, concluding the proof of the proposition.
\end{proof}
\section{To the continuous case}\label{sec:continuous}
In this section, we place ourselves under Assumption~\ref{ass:speeds} and either
Assumption~\ref{ass:bi} or~\ref{ass:multi}.
In both situations, we have constructed in the previous sections a `discrete' travelling front
or terrace (i.e., a finite and appropriately ordered sequence of discrete travelling fronts)
in the sense of Definition~\ref{def:discrete_front}.
Clearly our argument may be performed with any positive time step (not necessarily equal to 1), and thus we can consider a sequence of `discrete' terraces associated with the time steps $2^{-k}$,
$k\in\mathbb{N}$.
By passing to the limit as $k \to +\infty$, we expect to recover an actual propagating terrace in the sense of Definition~\ref{def:terrace}.
\begin{rmk}
As we mentioned earlier, in some cases this limiting argument is not needed. Indeed, it is rather straightforward to show that a discrete travelling front, regardless of the time step, is also a generalized transition front in the sense of Berestycki and Hamel~\cite{BH12}; without going into the details, we recall that a transition front is an entire solution whose level sets remain at a uniformly bounded distance from one another for all times. Under an additional monotonicity assumption in a neighborhood of the limiting stable steady states, and provided that the speed is not 0, they have proved that any almost planar transition front is also a pulsating travelling front. However, this is not true in general, and therefore we proceed with a different approach.
\end{rmk}
For any direction $e \in \mathbb{S}^{N-1}$ and any $k \in \mathbb{N}$,
the discrete terrace associated with the time step $2^{-k}$ consists of
a finite sequence of ordered stable steady states
$$\bar p \equiv q_{0,k} > q_{1,k} > \cdots > q_{J(k),k} \equiv 0,$$
and a finite sequence of discrete travelling fronts
connecting these steady states with nondecreasing speeds.
Because the $(q_{j,k})_{0\leq j \leq J(k)}$ belong to the finite set of periodic stable steady states of \eqref{eq:parabolic}, we can extract from the sequence
of time steps $(2^{-k})_{k\in\mathbb{N}}$ a subsequence
$(\tau_k)_{k\in\mathbb{N}}$ along which
the family $(q_{j,\tau_k})_{0\leq j \leq J(\tau_k)}$
does not actually depend on $k$.
Therefore, we simply denote it by $(q_j)_{0 \leq j \leq J}$.
Let $(U_{j,k})_{0 \leq j \leq J,\ k\in\mathbb{N}}$ be the corresponding fronts, i.e.,
the $U_{j,k} (y,z)$ are periodic in $y$, nonincreasing in~$z$
and satisfy
$$\forall (y,z) \in \mathbb{R}^{N+1}, \quad
v(\tau_k,y; U_{j,k} (x, z + x \cdot e)) = U_{j,k} (y,z+y\cdot e - c_{j,k}),$$
with
$$c_{1,k} \leq c_{2,k}\leq \cdots \leq c_{J,k},$$
as well as
$$U_{j,k} (\cdot, - \infty) \equiv q_{j-1} , \quad U_{j,k} (\cdot , + \infty) \equiv q_{j}.$$
As a matter of fact, the speeds $c_{j,k}$
are proportional to the time step $\tau_k$, with a proportionality factor depending only on $j$.
This is the subject of the next lemma, whose proof exploits the link between the front and the spreading speed,
which is the heart of the method developed by Weinberger in~\cite{W02}
and used in the present paper.
\begin{lem}\label{lem:speed_k1}
There exists a sequence
$$c_1 \leq c_2\leq \cdots \leq c_J$$
such that
$$ \forall j\in\{1,\dots, J\},\ k\in\mathbb{N},\quad
c_{j,k}=\tau_k c_j.$$
\end{lem}
\begin{proof}
The proof amounts to showing that
$$ \forall j\in\{1,\dots, J\},\ k\in\mathbb{N},\quad
\frac{c_{j,k+1}}{c_{j,k}}=\frac{\tau_{k+1}}{\tau_{k}}.$$
We do it for $j=1$.
Then, since the intermediate states $q_{i}$ do not depend on $k$, and since the subsequent speeds were constructed in a similar fashion,
the case $j>1$ follows analogously.
Let us first show that $\frac{c_{1,k+1}}{c_{1,k}}\geq \frac{\tau_{1,k+1}}{\tau_{1,k}}$.
This easily follows from our earlier construction.
Let us consider the shifted evolution operators associated with the time steps $(\tau_k)_{k\in\mathbb{N}}$.
In analogy with \eqref{eq:def_mapping1}, these are defined by
$$\mathcal{F}_{e,c,k} [V] (y,z+y\cdot e-c) := v (\tau_k,y; V(x,z+ x \cdot e)).$$
Then, for $\phi$ satisfying \eqref{ass:phi}, we define the sequence
$(a_{c,n}^k)_{n\in\mathbb{N}}$ through \eqref{eq:def_acn} with
$\mathcal{F}_{e,c}$ replaced by $\mathcal{F}_{e,c,k}$.
Fix $k\in\mathbb{N}$ and call $\rho:=\frac{\tau_k}{\tau_{k+1}} \in \mathbb{N}^*$.
Because
$$(\mathcal{F}_{e,c,k+1})^\rho [V] (y,z+y\cdot e) = v (\rho\tau_{k+1},y; V(x,z+ x \cdot e+\rho c))
=\mathcal{F}_{e,\rho c,k} [V] (y,z+y\cdot e),$$
we find that
$a_{c,\rho}^{k+1} \geq a_{\rho c,1}^k$, where the inequality comes from the fact that
in the time step $\tau_k$
the sequence $(a_{c,n}^{k+1})_{n\in\mathbb{N}}$ is `boosted'
$\rho$ times by the function $\phi$,
while $(a_{\rho c,n}^k)_{n\in\mathbb{N}}$ only once. We can readily iterate this argument to get
that $a_{c,\rho n}^{k+1} \geq a_{\rho c,n}^k$ for all $n\in\mathbb{N}$.
It then follows from Lemma~\ref{lem:ac2} that if $\rho c<c_{1,k}$ then $c<c_{1,k+1}$.
This means that $\rho c_{1,k+1}\geq c_{1,k}$, which is the first desired inequality.
To prove the reverse inequality,
we shall use Lemma~\ref{lem:indep1}
which asserts that $c_{1,k}$ does not depend on the choice of $\phi$ satisfying
\eqref{ass:phi}.
Then we choose the function generating the sequences
$(a_{c,n}^k)_{n\in\mathbb{N}}$, $(a_{c,n}^{k+1})_{n\in\mathbb{N}}$ of a particular form.
Namely, we consider a solution $u$ of the Cauchy problem associated with
\eqref{eq:parabolic}, with a continuous periodic initial datum $u_0<\bar p$ such that $u_0-\delta$ lies in the
basin of attraction of $\bar p$, for some constant $\delta>0$.
In particular, there exists $T>0$ such that $u(t,\cdot)>u_0$ for $t\geq T$.
We then initialize $(a_{c,n}^k)_{n\in\mathbb{N}}$ with a function
$\phi$ satisfying $\phi(y,-\infty)=u(T,y)$.
It follows that $v(t,y;\phi(x,-\infty))>u_0(y)$ for all $t\geq0$, and thus,
by parabolic estimates,
$$\forall 0\leq t\leq\tau_k,\
y\in[0,1]^N,\quad
v(t,y;\phi(x,z+x\cdot e))>u_0(y),$$
provided $z$ is smaller than some $Z$.
Then, by the periodicity of $\phi$ and $u_0$, we get
$$\forall 0\leq t\leq\tau_k,\
z+y\cdot e\leq Z-\sqrt{N},\quad
v(t,y;\phi(x,z+x\cdot e))>u_0(y).$$
We now initialize $(a_{c,n}^{k+1})_{n\in\mathbb{N}}$
with a function $\phi'$ satisfying
$$\phi'(y,-\infty)=u(0,y),\qquad
\phi'(y,z)=0\ \text{ for }z\geq Z-\sqrt{N}-\rho|c|.$$
We deduce that
$$\forall j=1,\dots,\rho,\ (y,z)\in\mathbb{R}^{N+1},\quad
\phi' (y,z+y\cdot e- \rho|c|)\leq v (j\tau_{k+1},y; \phi (x, z + x \cdot e)).$$
We claim that $a_{c,\rho n}^{k+1}\leq a_{\rho c,n}^k$ for all $n\in\mathbb{N}$.
This property holds for $n=0$. Suppose that it holds for some $n\in\mathbb{N}$.
Using the property of $\phi'$, and recalling that
$\phi\leq a_{\rho c,n}^k$, we find that
\[\begin{split}
a_{c,\rho n+1}^{k+1}(y,z+y\cdot e- c) &=
\max \{ \phi' (y,z+y\cdot e- c), v (\tau_{k+1},y; a_{c,\rho n}^{k+1} (x, z + x \cdot e))\}\\
&\leq v (\tau_{k+1},y; a_{\rho c,n}^k(x, z + x \cdot e)).
\end{split}\]
Iterating $\rho$ times we get
\[\begin{split}
a_{c,\rho(n+1)}^{k+1}(y,z+y\cdot e- \rho c) &=
\max \{ \phi' (y,z+y\cdot e-\rho c),\\
&\qquad\qquad v ( \tau_{k+1},y; a_{c,\rho n+\rho-1}^{k+1} (x, z + x \cdot e-(\rho-1) c))\}\\
&\leq v (\rho \tau_{k+1},y; a_{\rho c,n}^k(x, z + x \cdot e))\\
&= \mathcal{F}_{e,\rho c,k}[a_{\rho c,n}^k](y, z + y \cdot e-\rho c)\\
&\leq a_{\rho c,n+1}^k(y, z + y \cdot e-\rho c).
\end{split}\]
The claim $a_{c,\rho n}^{k+1}\leq a_{\rho c,n}^k$ is thereby proved for all $n\in\mathbb{N}$.
Then, as before, owing to Lemma~\ref{lem:ac2} we conclude that
$c_{1,k}\geq \rho c_{1,k+1}$.
\end{proof}
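Although the proof is written for the genuine parabolic flow, the two mechanisms it relies on — the composition identity $(\mathcal{F}_{e,c,k+1})^{\rho}=\mathcal{F}_{e,\rho c,k}$ and the resulting scaling $c_{1,k}=\tau_k c_1$ — can be illustrated on a toy model in which $v(t,y;\cdot)$ is replaced by a pure transport semigroup at a hypothetical speed $\sigma$. The Python sketch below (all names and parameters are ours, not the paper's) checks both facts numerically in one space dimension.

```python
# Toy sketch: the role of the parabolic flow v(t,y; .) is played by pure
# transport at a hypothetical speed sigma, v(t; V)(y) = V(y - sigma*t).
# The 'discrete front' with time step tau then moves sigma*tau per step,
# i.e. c_{1,k} = tau_k * c_1 with c_1 = sigma.
import numpy as np

sigma = 0.7                       # hypothetical continuous front speed
taus = [1.0, 0.5, 0.25, 0.125]    # time steps 2^{-k}

def v(t, V):
    """Exact transport semigroup acting on a profile V."""
    return lambda y: V(y - sigma * t)

V0 = lambda y: 1.0 / (1.0 + np.exp(y))     # front-like profile, decreasing in y

def front_position(V, grid):
    """Abscissa where the profile crosses the level 1/2."""
    return grid[np.argmin(np.abs(V(grid) - 0.5))]

grid = np.linspace(-10.0, 10.0, 200001)
x0 = front_position(V0, grid)
for tau in taus:
    step = front_position(v(tau, V0), grid) - x0   # displacement in one step
    assert abs(step - sigma * tau) < 1e-3          # c_{1,k} = tau_k * c_1
    print(f"tau = {tau:5.3f}:  per-step displacement = {step:.4f}")

# composition identity: rho = 2 steps of size tau_{k+1} = one step of size tau_k
ys = np.linspace(-5.0, 5.0, 101)
assert np.allclose(v(0.5, v(0.5, V0))(ys), v(1.0, V0)(ys))
```

In the toy model the proportionality is exact at every step; for \eqref{eq:parabolic} the lemma asserts it for the discrete front speeds $c_{j,k}$, which is what the sketch mimics.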
We are now in a position to conclude the proofs of Theorems \ref{th:bi} and \ref{th:multi}.
Namely, in the next lemma we show that for each
level $1 \leq j \leq J$
of the discrete propagating terrace one can find a continuous propagating terrace
whose fronts have the same speed $c_j$ from Lemma~\ref{lem:speed_k1}.
Then, by `merging' the so obtained
$J$ terraces, one gets a propagating terrace
of \eqref{eq:parabolic} connecting $\bar p$ to~$0$.
In the bistable case, the terrace reduces to a single pulsating travelling front,
thanks to Assumptions \ref{ass:bi} and~\ref{ass:speeds}.
Instead, in the multistable case, our construction
leaves open the possibility that the continuous propagating terrace contains more fronts than the discrete terraces did. This does not actually occur in typical situations
(such as the already mentioned ones where the argument of Berestycki and Hamel
\cite{BH12} applies),
but it remains unclear whether it can happen in general.
\begin{lem}
For any $1 \leq j \leq J$, there exists a propagating terrace connecting $q_{j-1}$ to~$q_j$
in the sense of Definition \ref{def:terrace}.
Moreover, all the fronts in this terrace have the speed~$c_j$.
\end{lem}
\begin{proof}
The aim is to pass to the limit as $k \to +\infty$ in the sequence of
discrete terraces associated to the time steps $(\tau_k)_{k\in\mathbb{N}}$.
The first step consists in showing
that the profiles $U_{j,k}$ converge as $k \to +\infty$.
Due to the lack of regularity with respect to the second variable,
the limit will be taken in the relaxed sense of Lemma \ref{lem:diagonal}.
As usual, the argument is the same regardless of the choice of $j$,
and so, for simplicity of notation, we take $j=1$.
Beforehand, we shift $U_{1,k}$ so that
\Fi{U1k>}
\forall z<0,\quad
\min_{y\in[0,1]^N} \big(U_{1,k}(y,z+y\cdot e) - \eta (y)\big) \geq 0,
\end{formula}\noindent
\Fi{U1k<}
\min_{y\in[0,1]^N}\big(U_{1,k}(y,y\cdot e) -\eta(y)\big) \leq 0,
\end{formula}\noindent
where $q_1<\eta< \bar p$ is a given function lying in the basin of attraction of $\bar p$.
We know that~$U_{1,k}$ is a fixed point for $\mathcal{F}_{e,c_{1,k},k}$ by construction,
that is, it is a fixed point for $\mathcal{F}_{e,\tau_{k}c_1,k}$
owing to the previous lemma. Then, for $k'<k$, observing that
$$\mathcal{F}_{e,\tau_{k'}c_1,k'} =
\left( \mathcal{F}_{e,\tau_{k}c_1,k}\right)^{\frac{\tau_{k'}}{\tau_k}},$$
where $\frac{\tau_{k'}}{\tau_k}\in\mathbb{N}$,
we see that it is a fixed point for $\mathcal{F}_{e,\tau_{k'}c_1,k'}$ too.
We now apply Lemma~\ref{lem:diagonal}
to the sequence $(U_{1,k})_{k\in\mathbb{N}}$. We point out that the hypothesis
there that $U_{1,k}(y,z + y\cdot e)$ is uniformly
continuous in $y\in\mathbb{R}^N$, uniformly with respect to~$z$ and~$k$,
follows from parabolic estimates due to the fact that all the $U_{1,k}$ are fixed points of
$\mathcal{F}_{e,\tau_{1}c_1,1}$.
We obtain in the relaxed limit (up to subsequences)
a function $U_1 (y,z)$ which is periodic in~$y$, nonincreasing in~$z$ and such that $U_1 (y,z + y\cdot e)$ is uniformly continuous in~$y$, uniformly with respect to $z$. Moreover,~$U_1$ satisfies the normalization
\eqref{U1k>}-\eqref{U1k<}.
Finally, by the above consideration,
it also follows from Lemma~\ref{lem:diagonal} and the continuity of the operators~$\mathcal{F}_{e,\tau_{k}c_1,k}$ in the locally uniform topology, that $U_1$ fulfils
$$\forall k\in\mathbb{N},\quad
\mathcal{F}_{e,\tau_{k}c_1,k} [U_1] \equiv U_1.$$
Let $u(t,y)$ denote the solution of the problem~\eqref{eq:parabolic}
with initial datum~$U_1 (y, y \cdot e)$.
Then for any $k,m \in \mathbb{N}$, we have that
$$u(m\tau_{k},y) = \left( \mathcal{F}_{e,\tau_{k}c_1,k}\right)^m [U_1] (y,y \cdot e - m\tau_{k}c_1)
= U_1 (y,y \cdot e - m\tau_{k}c_1).$$
By continuity of the solution of~\eqref{eq:parabolic} with respect to time, together with the monotonicity of~$U_1$ with respect to its second variable, we immediately extend this identity to all positive times, i.e.,
$$u( t,y) = U_1 (y,y \cdot e - c_1 t) .$$
In particular, $U_1 (y, y\cdot e -c_1 t)$ solves \eqref{eq:parabolic} for positive times in the whole space; by periodicity in the first variable, it is straightforward to check that it solves \eqref{eq:parabolic} for negative times too.
\begin{rmk}
We have shown above that $U_1$ is continuous with respect to both its variables,
provided that $c_1 \neq 0$.
\end{rmk}
To show that $U_1 (y, y \cdot e- c_1 t)$ is a pulsating travelling front in the sense of Definition~\ref{def:puls_front}, it only remains to check that it satisfies the appropriate asymptotics. By monotonicity in the second variable, we already know that the limits $U_1 (\cdot , \pm \infty)$ exist, and moreover that they are periodic steady states of \eqref{eq:parabolic}.
We further have that $U_1 (\cdot, -\infty) \geq \eta $ and $U_1 (\cdot, +\infty)\not\equiv\bar p $, because
$U_1$ satisfies~\eqref{U1k>}-\eqref{U1k<}. Recalling that $\eta$ lies in the basin of attraction of $\bar p$, we find that
$U_1 (\cdot,-\infty) \equiv \bar p$.
Let us now deal with the limit as $z \to +\infty$. Let us call $p^*:=U_1 (\cdot, + \infty)$. This is a periodic
steady state satisfying $q_1\leq p^*<\bar p $;
note, however, that the first inequality may be strict as well.
We claim that $p^*$~is stable. Granting this claim for the moment, we change the normalization \eqref{U1k>}-\eqref{U1k<}
by taking $\eta<p^*$ in the basin of attraction of $p^*$,
and then, passing to the limit as before, we end up with a new function $U_2$.
Because of this normalization, together with the fact that
$U_1(\cdot,+\infty)=p^*$, it turns out that $U_2$
connects $p^*$ to another steady state $p^*_2\geq q_1$.
Then, by iteration, we eventually construct a terrace connecting $\bar p$ to $q_1$.
It remains to show that $p^*$ is stable.
We proceed by contradiction and assume that this is not the case. In particular, $p^* > q_1$, because $q_1$ is stable.
Let $p_{+}$,~$p_{-}$ denote respectively the smallest stable periodic steady state above $p^*$
and the largest stable periodic steady state below~$p^*$, and let
$\overline{c}_{p^*}$ and $\underline{c}_{p^*}$ be the minimal speeds of fronts connecting $p_{+}$ to~$p^*$
and $p^*$ to~$p_{-}$ respectively.
By the same comparison argument as in the proof of Lemma~\ref{lem:U1}
one readily sees that the speed $c_1$ of~$U_1$ satisfies
$$c_1 \geq \overline{c}_{p^*}.$$
We recall that the argument exploits Weinberger's result in~\cite{W02}
which asserts that $\overline{c}_{p^*}$ coincides with the
spreading speed for solutions between $p^*$ and $p_{+}$.
Next, one shows that
$$c_1 \leq \underline{c}_{p^*}.$$
This is achieved by choosing the normalization
$$
\forall z<0,\quad
\max_{y\in[0,1]^N} \big(U_{1,k}(y,z+y\cdot e) - \eta^* (y)\big) \geq 0,
$$
$$
\max_{y\in[0,1]^N}\big(U_{1,k}(y,y\cdot e) - \eta^*(y)\big) \leq 0,$$
with $\eta^*$ between $p_{-}$ and $p^*$ and in the basin of attraction of $p_{-}$, which is possible because
$U_{1,k}(\cdot,+\infty)\equiv q_1\leq p_{-}$.
One gets in the (relaxed) limit a solution $U^* (y, y\cdot e -c_1 t)$
satisfying $U^*(\cdot,-\infty)\leq p^*$ (because compared with $U_1$, the function $U^*$ is obtained as the limit of an infinite shift of the sequence $U_{1,k}$), as well as $U^*(\cdot,+\infty)\leq \eta^*$. Then the desired inequality follows again from the spreading result.
Finally, combining the previous two inequalities
one gets $\overline{c}_{p^*}\leq \underline{c}_{p^*}$, which
contradicts our Assumption~\ref{ass:speeds}. This concludes the proof.
\end{proof}
\begin{rmk}
As pointed out in Remark~\ref{rmk:trap}, finding a function~$U_1$ as above is in general not equivalent to finding a pulsating front solution.
The function $U_{1}$ constructed above actually gives rise to a whole family of pulsating fronts $U_1 (x, x \cdot e + z - c_1 t)$. When $c_1 \neq 0$, this family merely reduces to the time shifts of a single front. When $c_1 = 0$, however, it is much less clear how these fronts are related to each other: as observed earlier, the function $U_1 (x,z)$ may be discontinuous with respect to $z$, hence the resulting family may not be a continuum of fronts (and in general it is not).
\end{rmk}
\section{Highly non-symmetric phenomena}\label{sec:asymmetric}
It is clear that, because equation \eqref{eq:parabolic} is heterogeneous, the terrace
$((q_{j})_j, (U_j)_j)$
provided by Theorem~\ref{th:multi} depends in general on the direction $e$.
In this section, we shall go further and exhibit an example where not only the fronts $U_j$, but also the
intermediate states $q_{j}$ and even their number, i.e., the number of `floors'
of the terrace, change when~$e$ varies.
Obviously this cannot happen in the bistable case, where the stable
steady states reduce to $\bar p$ and $0$. Namely, we prove
Proposition~\ref{prop:asymmetric}.
The main idea is to stack a heterogeneous bistable problem below
a homogeneous one. Then in each direction there exists an ordered pair of
pulsating travelling fronts.
Whether this pair forms a propagating terrace depends on the order of their speeds.
If the latter is admissible for a terrace, that is, if the uppermost front is
not faster than the lowermost,
then the terrace consists of the two fronts; otherwise it reduces to a single front.
Since those speeds are given respectively by a function
$\mathbb{S}^{N-1}\ni e \mapsto c(e)$ and by a constant $c$,
and since the heterogeneity should make such a function
nonconstant, it should be possible to end up with a case
where the number of fronts of the terrace is nonconstant too.
Owing to the above consideration, the construction essentially amounts to finding a
heterogeneous bistable problem for which the speed of the pulsating travelling front $c(e)$
is nonconstant in $e$.
While such a property should be satisfied by a broad class of problems (perhaps even generically),
obtaining it in the context of a bistable equation (in the sense of Assumption~\ref{ass:bi})
is rather delicate. We were not able to find an example of this type in the literature.
We place ourselves in dimension $N=2$, denote a generic point in $\mathbb{R}^2$ by
$(x,y)$, and set $e_1:=(1,0)$, $e_2:=(0,1)$.
We derive the following.
\begin{prop}\label{pro:speeds}
There exists a function $f_1=f_1 (y,u)$ which is periodic in the variable $y\in\mathbb{R}$,
satisfies Assumptions~\ref{ass:bi},~\ref{ass:speeds} with $\bar p\equiv1$,
and for which the equation
\Fi{eq:f1}
\partial_t u = \Delta u + f_1(y,u), \quad t \in \mathbb{R}, \ (x,y) \in \mathbb{R}^2 ,
\end{formula}\noindent
admits a unique (up to shifts in time) pulsating travelling front
connecting $1$ to $0$ for any given direction $e\in\mathbb{S}^{1}$.
Furthermore, the corresponding speeds $c(e)$ satisfy $c(e_1)>c(e_2)>0$.
\end{prop}
The function $f_1 (y,u)$ we construct will be periodic in $y$
with some positive period, which one can then reduce to $1$ (to be coherent with
the rest of the paper) by simply rescaling the spatial variables.
We first introduce a smooth function $f_0:[0,1]\to\mathbb{R}$ with the following properties:
$$f_0(0)=f_0\Big(\frac12\Big)=f_0(1)=0,\qquad
f_0<0 \text{ in }\Big(0,\frac12\Big),\qquad
f_0>0 \text{ in }\Big(\frac12,1\Big),$$
$$ f_0'(0)=f_0'(1)=-1,\qquad
f_0'\Big(\frac12\Big)>0,\qquad
|f_0 '|\leq1,\qquad
\int_0^1 f_0>0.$$
We let $\frac12<S<1$ be the quantity identified by the relation
$$
\int_0^S f_0=0.
$$
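For concreteness, here is a hypothetical admissible candidate (our own choice; the paper only requires the properties listed above): taking $P(u)=u(1-u)(u-\frac12)$, which satisfies $\int_0^1 P=0$, the function $f_0=2P+P^2$ keeps the zeros at $0,\frac12,1$ and the sign pattern, has $f_0'(0)=f_0'(1)=-1$ and $f_0'(\frac12)=\frac12>0$, while the extra term $P^2$ makes $\int_0^1 f_0>0$. The short Python sketch below checks these properties numerically and locates the corresponding $S$.

```python
# Hypothetical concrete candidate for f_0 (not taken from the paper):
#   P(u) = u(1-u)(u-1/2),  f_0(u) = 2 P(u) + P(u)^2.
# Since int_0^1 P = 0, the P^2 term tilts the balance: int_0^1 f_0 > 0.
import numpy as np

def P(u):
    return u * (1.0 - u) * (u - 0.5)

def f0(u):
    return 2.0 * P(u) + P(u) ** 2

u = np.linspace(0.0, 1.0, 20001)
v = f0(u)

# sign pattern: f_0 < 0 on (0, 1/2) and f_0 > 0 on (1/2, 1)
assert np.all(v[(u > 0.0) & (u < 0.5)] < 0.0)
assert np.all(v[(u > 0.5) & (u < 1.0)] > 0.0)

# |f_0'| <= 1 on [0, 1] (checked on the grid, up to discretisation error)
assert np.max(np.abs(np.gradient(v, u))) <= 1.0 + 1e-3

# primitive F(s) = int_0^s f_0, computed with the trapezoidal rule
F = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0 * np.diff(u))))
assert F[-1] > 0.0                    # int_0^1 f_0 > 0 (unbalanced bistable)

# S is the unique zero of F in (1/2, 1)
S = u[np.argmax((u > 0.5) & (F >= 0.0))]
print(f"int_0^1 f_0 = {F[-1]:.6f},  S = {S:.3f}")
assert 0.5 < S < 1.0
```

For this candidate $\int_0^1 f_0=\int_0^1 P^2$ is small, so $S$ lies rather close to $1$; any other unbalance preserving the same zeros and signs would serve equally well.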
Next, we consider two smooth functions $\chi_i:\mathbb{R}\to\mathbb{R}$,
$i=1,2$, satisfying
$\chi_i\geq0,\not\equiv0$ and
$$\supp\chi_1\subset(0,1),\quad\chi_1=1\text{ on }\Big[\frac14,\frac34\Big],
\qquad\supp\chi_2\subset(S,1),$$
where $\supp$ denotes the closed support.
We then set
\Fi{def1}
\forall u \in\mathbb{R},\ y\in[0,2L],\quad
f_1 (y,u):=f_0(u) + M\chi_1\Big(\frac y L\Big)\,\chi_2(u),
\end{formula}\noindent
$L,M$ being positive constants that will be chosen later.
We finally extend $f_1 (y,u)$ to~$\mathbb{R}^2$ by periodicity
in the $y$-variable, with period $2L$.
Observe that $f_1 (y,u)\geq f_0(u)$, and that equality holds for $y\in[(2j-1)L,2jL]$, $j\in\mathbb{Z}$.
Until the end of the proof of Proposition~\ref{pro:speeds},
when we say that a function is periodic we mean that its period is $2L$.
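To make the construction \eqref{def1} fully explicit, the following Python sketch implements one possible choice of all the ingredients (the candidate $f_0$, the value $S=0.95$, and $L=2$, $M=1$ are our own placeholders, not values from the paper; $\chi_1,\chi_2$ are standard smooth cutoffs) and verifies the three properties just stated: $f_1\geq f_0$ everywhere, $2L$-periodicity in $y$, and $f_1=f_0$ for $y\in[(2j-1)L,2jL]$.

```python
# Schematic implementation of \eqref{def1} with placeholder parameters.
import numpy as np

def f0(u):
    # hypothetical unbalanced bistable candidate (our choice, not the paper's)
    Pu = u * (1.0 - u) * (u - 0.5)
    return 2.0 * Pu + Pu ** 2

def g(s):
    """Smooth (C-infinity) transition: 0 for s <= 0, 1 for s >= 1."""
    s = np.asarray(s, dtype=float)
    e1 = np.where(s > 0.0, np.exp(-1.0 / np.clip(s, 1e-12, None)), 0.0)
    e2 = np.where(s < 1.0, np.exp(-1.0 / np.clip(1.0 - s, 1e-12, None)), 0.0)
    return e1 / (e1 + e2)

S, L, M = 0.95, 2.0, 1.0             # placeholder values

def chi1(t):                         # supp chi1 in (0,1), chi1 = 1 on [1/4, 3/4]
    return g(4.0 * t) * g(4.0 * (1.0 - t))

def chi2(u):                         # supp chi2 in (S,1), chi2 >= 0, not identically 0
    return g((u - S) / (1.0 - S)) * g((1.0 - u) / (1.0 - S))

def f1(y, u):                        # 2L-periodic extension in y of \eqref{def1}
    return f0(u) + M * chi1(np.mod(y, 2.0 * L) / L) * chi2(u)

y = np.linspace(-4.0 * L, 4.0 * L, 801)[:, None]
u = np.linspace(0.0, 1.0, 401)[None, :]
assert np.all(f1(y, u) >= f0(u) - 1e-12)          # f_1 >= f_0 everywhere
assert np.allclose(f1(y + 2.0 * L, u), f1(y, u))  # 2L-periodicity in y
on_right_half = np.mod(y, 2.0 * L) >= L           # y in [(2j-1)L, 2jL]
assert np.allclose(np.where(on_right_half, f1(y, u), f0(u) + 0.0 * y),
                   f0(u) + 0.0 * y)               # there, f_1 = f_0
```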
Let us show that the equation~\eqref{eq:f1} is bistable in the sense of
Assumption~\ref{ass:bi}. We shall also check that it fulfils
Assumption~\ref{ass:speeds}, for which, owing to Proposition~\ref{prop:counter}
in the Appendix, it is sufficient to show that any intermediate state
is linearly unstable.
We shall need the following observations about the periodic steady states
of the homogeneous equation.
\begin{lem}\label{lem:f0}
For the equation
\Fi{eq:f0}
\partial_t u = \Delta u + f_0(u), \quad t \in \mathbb{R}, \ (x,y) \in \mathbb{R}^2 ,
\end{formula}\noindent
the following properties hold:
\begin{enumerate}[$(i)$]
\item the constant steady states $0$, $1$ are linearly stable, whereas
$\frac12$ is linearly unstable;
\item any periodic steady state which is not identically constant is linearly unstable;
\item there does not exist any pair $0<q<\tilde q<1$ of periodic steady states.
\end{enumerate}
\end{lem}
\begin{proof}
Statement $(i)$ is trivial, because the principal eigenvalue of the
linearized operator around the constant states $q_1\equiv0$, $q_2\equiv1$,
$q_3\equiv\frac12$ is equal to $f_0'(q_i)$.
Statement $(ii)$ is a consequence of the invariance of the equation by
spatial translation.
Indeed, if $q$ is a steady state which is not identically
constant then it admits a partial derivative $\partial_i q$ which is not identically
equal to $0$; if in addition $q$ is periodic then
$\partial_i q$ must change sign.
Then, differentiating the equation $\Delta q + f_0(q)=0$
with respect to~$x_i$ we find that
$\partial_i q$ is a sign-changing eigenfunction of the linearized operator
around~$q$, with eigenvalue 0.
It follows that the principal eigenvalue $\lambda_q$
of such operator in the space of periodic functions
(which is maximal, simple and associated with a positive eigenfunction) is positive,
that is, $q$ is linearly unstable.
We prove statement $(iii)$ by contradiction. Assume that \eqref{eq:f0} admits
a pair of periodic steady states $0<q<\tilde q<1$.
We know from $(i)$-$(ii)$ that such solutions are linearly unstable.
Then, calling $\varphi_q$ the principal eigenfunction associated with
$\lambda_q$, one readily checks that for $\varepsilon>0$ sufficiently small,
$q+\varepsilon\varphi_q$ is a stationary strict subsolution of~\eqref{eq:f0}.
Take $\varepsilon>0$ such that the above holds and in addition
$q+\varepsilon\varphi_q<\tilde q$.
It follows from the parabolic comparison principle
that the solution with initial datum $q+\varepsilon\varphi_q$ is strictly increasing in
time, and therefore converges as
$t\to+\infty$ to a steady state $\hat q$ satisfying $q<\hat q\leq\tilde q$.
This is impossible, because $\hat q$ is linearly unstable by $(i)$-$(ii)$,
and hence its basin of attraction
cannot contain the function $q+\varepsilon\varphi_q$.
\end{proof}
\begin{rmk}
Consider the homogeneous equation~\eqref{eq:homo}
with a general reaction term $f=f
(u)$.
Statement $(ii)$ of Lemma~\ref{lem:f0} holds true in such case,
because its proof only relies on the spatial-invariance of the equation.
Thus, if Assumption~\ref{ass:bi} holds, the uppermost steady state $\bar p$
must be constant. One then finds that
Assumption~\ref{ass:bi} necessarily implies that
$$
\exists \theta\in(0,\bar p),\quad
f(0)=f(\theta)=f(\bar p)=0,\quad f<0\text{ in }(0,\theta),\quad
f>0\text{ in }(\theta,\bar p).
$$
As a matter of fact, these conditions are equivalent to Assumption~\ref{ass:bi}.
Indeed, even though the constant state $\theta$ may not
be linearly unstable (if $f'(\theta)=0$), one sees that~$\theta$
is unstable in a strong sense:
$\theta+\varepsilon$ belongs to the basin of attraction of $0$ if $\varepsilon<0$ and
of $\bar p$ if $\varepsilon>0$.
This is enough for the proof of Lemma~\ref{lem:f0} $(iii)$ to work.
\end{rmk}
We can now derive the bistability character of \eqref{eq:f1}.
\begin{lem}\label{lem:f1}
Consider the equation~\eqref{eq:f1} with $f_1$ defined by \eqref{def1}.
The following properties hold:
\begin{enumerate}[$(i)$]
\item any periodic steady state $0<q<1$ is linearly unstable;
\item there does not exist any pair $0<q<\tilde q<1$ of periodic steady states.
\end{enumerate}
\end{lem}
\begin{proof}
The proof is achieved in several steps.
\smallskip
\setlength{\leftskip}{\parindent}
\noindent
{\em Step 1: any periodic steady state which is not $x$-independent is
linearly unstable.}
\setlength{\leftskip}{0cm}
\noindent
Because the equation \eqref{eq:f1} is invariant
by translation in the $x$-variable, we can proceed exactly as in the proof of
Lemma~\ref{lem:f0} $(ii)$.
\smallskip
\setlength{\leftskip}{\parindent}
\noindent
{\em Step 2: if $0\leq q<1$ is a periodic steady state which is
$x$-independent then $q\leq S$.}
\setlength{\leftskip}{0cm}
\noindent
We recall that $S$ is defined by $\int_0^S f_0=0$.
Suppose that $q=q(y)$ is not constant;
otherwise it is identically equal to $0$ or to $\frac12$, and both are smaller than $S$.
Consider an arbitrary $\eta\in\mathbb{R}$ with $q'(\eta)>0$.
Let $a<\eta<b$ be such that $q'(a)=q'(b)=0$ and $q'>0$ in $(a,b)$.
Multiplying the inequality $-q''=f_1 (y,q)\geq f_0(q)$ by $q'$ and integrating on $(a,b)$ we~get
$$0\geq\int_a^b f_0(q(y))q'(y) dy =\int_{q(a)}^{q(b)} f_0(u)du.
$$
This implies first that $q(a)\leq\frac12$,
and then that $\int_{0}^{q(b)} f_0(u)du\leq0$.
Recalling the definition of $S$, we find that $q(b)\leq S$, whence
$q(\eta)<S$. We have thereby shown that $q(\eta)<S$ whenever $q'(\eta)>0$,
and therefore that $q\leq S$.
\smallskip
\setlength{\leftskip}{\parindent}
\noindent
{\em Step 3: if \eqref{eq:f1} admits a pair of periodic steady states
$0<q<\tilde q<1$, then
there exists a periodic steady state
$q\leq\hat q\leq S$ which is not linearly unstable.}
\setlength{\leftskip}{0cm}
\noindent
If $\tilde q$ is not linearly unstable, then Steps 1-2 imply that
$\tilde q\leq S$,
which means that the conclusion holds
with $\hat q=\tilde q$ in this case.
Suppose now that $\tilde q$ is linearly unstable. It follows from
the same argument as in the proof of Lemma~\ref{lem:f0} $(iii)$
that for $\varepsilon>0$ sufficiently small, the
function $\tilde q-\varepsilon\varphi_{\tilde q}$
is a supersolution of~\eqref{eq:f1},
which is larger than~$q$,
where $\varphi_{\tilde q}$
is the principal eigenfunction of the linearized operator around $\tilde q$.
The comparison principle then implies
that the solution of \eqref{eq:f1}
with initial datum $\tilde q-\varepsilon\varphi_{\tilde q}$ is strictly decreasing in time, and therefore converges as
$t\to+\infty$ to a steady state $\hat q$ satisfying $q\leq\hat q<\tilde q-\varepsilon\varphi_{\tilde q}$.
Such a state cannot be linearly unstable, because its basin of attraction contains the function
$\tilde q-\varepsilon\varphi_{\tilde q}$. Then, as before, $\hat q\leq S$ by Steps 1-2.
\smallskip
{\em Step 4: conclusion.}
\noindent
Assume by contradiction that there is a periodic steady state $0<q<1$
which is not linearly unstable.
From Steps~1-2 we deduce that $q\leq S$.
This means that $q$ is a stationary solution of~\eqref{eq:f0}.
Lemma~\ref{lem:f0} then implies that $q$ is linearly unstable
for~\eqref{eq:f0}, and thus for \eqref{eq:f1} too.
This is a contradiction. We have thereby proved $(i)$.
Suppose now \eqref{eq:f1} admits a pair of periodic steady states
$0<q<\tilde q<1$.
Then Step~3 provides us with a periodic steady state $0<\hat q\leq S$
which is not linearly unstable, contradicting~$(i)$.
\end{proof}
Let $f_1$ be defined by \eqref{def1}, with $L,M>0$ still to be chosen.
Lemma~\ref{lem:f1} implies that the equation \eqref{eq:f1}
is bistable in the sense of Assumption~\ref{ass:bi} with $\bar p\equiv1$.
Moreover, thanks to Proposition~\ref{prop:counter} in the Appendix, it also entails
Assumption~\ref{ass:speeds}.
We can thus apply Theorem \ref{th:bi}, which provides us with a
monotonic in time pulsating travelling front connecting $1$
to $0$, for any given direction $e\in \mathbb{S}^{1}$.
Let~$c(e)$ be the associated speed.
Before showing that $c(e_1)>c(e_2)$, let us derive the uniqueness of the pulsating
travelling front and the positivity of its speed.
\begin{lem}\label{lem:uniqueness}
The equation~\eqref{eq:f1} with $f_1$ defined by \eqref{def1}
admits a unique (up to shifts in time) pulsating travelling front
connecting $1$ to $0$ for any given direction $e\in\mathbb{S}^{1}$.
Furthermore, the front is strictly increasing in time
and its speed $c(e)$ is positive.
\end{lem}
\begin{proof}
Firstly, the positivity of the speed of any front
connecting $1$ to $0$
is an immediate consequence of the facts that $f_1 \geq f_0$
and that equation~\eqref{eq:f0} admits solutions with compactly supported initial data
which spread with a positive speed~\cite{AW}.
Next, the fronts provided by Theorem~\ref{th:bi} are monotonic in time.
Applying the strong maximum principle to their temporal derivative (which
satisfies a linear parabolic equation) we infer that the
monotonicity is strict, unless they are constant in time. The
positivity of their speed then implies that they are necessarily strictly increasing in time.
Hence, the second part of the lemma holds for the fronts given by Theorem~\ref{th:bi}.
If we show that such fronts are the only existing ones, we are done.
Throughout this proof, we use the notation $x$ to indicate a point in $\mathbb{R}^2$.
Let $u_i(t,x)=U_i(x,x\cdot e-c_i t)$, $i=1,2$, be two
pulsating travelling fronts for~\eqref{eq:f1} connecting $1$ to $0$
in a given direction $e\in\mathbb{S}^{1}$.
We have seen before that necessarily $c_i>0$.
This means that the transformation $(x,t)\mapsto (x,x\cdot e-c_i t)$
is invertible and thus $U_i(x,z)$ enjoys the regularity in $(x,z)$
coming from the parabolic regularity for $u_i$
(at least $C^1$, with bounded derivatives).
Let us suppose to fix the ideas that $c_1\geq c_2$.
We shall also assume that
either $U_1$ or $U_2$ is the front provided by~Theorem \ref{th:bi},
so that we further know that it is decreasing in $z$.
We use a sliding method.
The conditions $U_i(\cdot,-\infty)=1$ and $U_i(\cdot,+\infty)=0$
imply that for any $\varepsilon\in(0,1)$, the following property holds for
$k<0$ with $\vert k\vert$ sufficiently large (depending on $\varepsilon$):
$$\forall x\in\mathbb{R}^2,\ z\in\mathbb{R},\quad
U_1(x,z)<U_2(x,z+k)+\varepsilon.$$
The above property clearly fails for $k>0$ large, thus we can define $k^\varepsilon\in\mathbb{R}$
as the supremum for which it is fulfilled. Call $U_2^\varepsilon(x,z):=
U_2(x,z+k^\varepsilon)$ and $u_2^\varepsilon(t,x):=U_2^\varepsilon(x,x\cdot e-c_2 t)$.
Observe that $u_2^\varepsilon$ is just a temporal translation of $u_2$, because $c_2\neq0$,
whence it is still a solution of~\eqref{eq:f1}.
We see that
$$\sup(U_1-U_2^\varepsilon)=\varepsilon.$$
Using again $U_i(\cdot,-\infty)=1$ and $U_i(\cdot,+\infty)=0$
one infers that a maximizing sequence $(x_n,z_n)_{n\in\mathbb{N}}$
for $U_1-U_2^\varepsilon$ has necessarily $(z_n)_{n\in\mathbb{N}}$ bounded.
By periodicity, we can assume that the sequence $(x_n)_{n\in\mathbb{N}}$
is contained in $[0,L]^2$. Hence, there exists $(x^\varepsilon,z^\varepsilon)$ such that
$$(U_1-U_2^\varepsilon)(x^\varepsilon,z^\varepsilon)=
\max(U_1-U_2^\varepsilon)=\varepsilon.$$
It follows that
$$u_1(t^\varepsilon_1,x^\varepsilon)=u_2^\varepsilon(t^\varepsilon_2,x^\varepsilon)+\varepsilon,\quad\text{with }\
t^\varepsilon_i:=\frac{x^\varepsilon\cdot e-z^\varepsilon}{c_i}.$$
Next, if $U_1(x,z)$ is the front decreasing in $z$ provided by~Theorem \ref{th:bi} then
for $t\leq0$ we find that
\[\begin{split}
u_1(t^\varepsilon_1+t,x)&=U_1(x,x\cdot e-c_1(t^\varepsilon_1+ t))\\
&\leq
U_1(x,x\cdot e-c_1 t^\varepsilon_1-c_2 t)\\
&\leq
U_2^\varepsilon(x,x\cdot e-c_2(t^\varepsilon_2+t))+\varepsilon=u_2^\varepsilon(t^\varepsilon_2+t,x)+\varepsilon,
\end{split}\]
where we have used the equality $c_1 t^\varepsilon_1=c_2 t^\varepsilon_2$.
Similarly, if $U_2(x,z)$ is decreasing in $z$ then, for $t\leq0$, we get
$$u_1(t^\varepsilon_1+t,x)\leq
U_2(x,x\cdot e-c_2 t^\varepsilon_2-c_1 t)+\varepsilon\leq
u_2^\varepsilon(t^\varepsilon_2+t,x)+\varepsilon.$$
Namely, in any case, $u_1(t^\varepsilon_1+t,x)$ lies below
$u_2^\varepsilon(t^\varepsilon_2+t,x)+\varepsilon$ until $t=0$, when the two
functions touch.
Both $u_1(t^\varepsilon_1+t,x)$ and
$u_2^\varepsilon(t^\varepsilon_2+t,x)$ are solutions of~\eqref{eq:f1}.
Moreover, because $f_1 =f_0$ for $u$ close to $0$ and $1$,
and $f_0'(0),f_0'(1)<0$, one readily checks that the function
$u_2^\varepsilon(t^\varepsilon_2+t,x)+\varepsilon$ is a supersolution of~\eqref{eq:f1}
in the regions where it is smaller than $\delta$ or larger than $1-\delta+\varepsilon$,
for some small $\delta$ depending on $f_0$ and~$S$.
If the contact point $x^\varepsilon$ were in one of such regions,
the parabolic strong maximum principle would imply that
$u_1(t^\varepsilon_1+t,x)\equiv u_2^\varepsilon(t^\varepsilon_2+t,x)+\varepsilon$ there, for $t\leq0$,
which is impossible because $U_i(\cdot,-\infty)=1$ and $U_i(\cdot,+\infty)=0$.
Therefore, we have that
$$\delta\leq u_1(t^\varepsilon_1,x^\varepsilon)=u_2^\varepsilon(t^\varepsilon_2,x^\varepsilon)+\varepsilon
=u_2\Big(t^\varepsilon_2-\frac{k^\varepsilon}{c_2},x^\varepsilon\Big)+\varepsilon\leq 1-\delta+\varepsilon.$$
Now, because $x^\varepsilon\in[0,L]^2$, the above bounds imply that both
$t^\varepsilon_1$ and $t^\varepsilon_2-\frac{k^\varepsilon}{c_2}$
stay bounded as $\varepsilon\searrow0$. Calling $\hat x,\hat t_1,\hat t_2$ the limits
as $\varepsilon\searrow0$ of (some converging subsequences of) $x^\varepsilon$, $t^\varepsilon_1$,
$t^\varepsilon_2-\frac{k^\varepsilon}{c_2}$ respectively,
we eventually deduce that
$$u_1(\hat t_1,\hat x)=u_2(\hat t_2,\hat x)\qquad\text{and }\
\forall t\leq0,\ x\in\mathbb{R}^2,\quad
u_1(\hat t_1+t,x)\leq u_2(\hat t_2+t,x).$$
The parabolic strong maximum principle finally yields
$u_1(\hat t_1+t,x)\equiv u_2(\hat t_2+t,x)$ for $t\in\mathbb{R}$, $x\in\mathbb{R}^2$.
This concludes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{pro:speeds}]
We need to show that $c(e_1)>c(e_2)$ for a suitable choice of~$L,M>0$.
The proof is divided into several parts.
\smallskip
{\em Step 1: for $L>8$ there holds that $c(e_1)\to+\infty$ as
$M\to+\infty$.}
\noindent
Fix an arbitrary $c>0$.
We want to construct a subsolution $\underline u$ of \eqref{eq:f1} of the form
$$\underline{u}(t,x,y):=w\circ\gamma(t,x,y),\qquad
\gamma(t,x,y):=2-\Big|\Big(x-ct,y-\frac L2\Big)\Big|,$$
for a suitable function $w\in W^{2,\infty}(\mathbb{R})$.
Let $S<\sigma_1<\sigma_2<1$ be such that
$$m:=\min_{[\sigma_1,\sigma_2]}\chi_2>0.$$
We then define $w$ as follows:
$$w(r):=\begin{cases}
r^{c+3} & \text{for }0\leq r\leq \rho,\\
r^{c+3} - \frac{Mm}{2}(r-\rho)^2 & \text{for } r>\rho,
\end{cases}
$$
with $\rho:=\sigma_1^{\frac1{c+3}}$ so that $w(\rho)=\sigma_1$.
For $r>\rho$ we compute
$$w'(r)=(c+3)r^{c+2}- Mm(r-\rho),$$
which is negative for $M$ large.
This implies that, for $M$ sufficiently large (depending on $c$),
there exists $R_M>\rho$ such that
$$w'>0\ \text{ in }(0,R_M),\qquad w'(R_M)=0.$$
It also yields that $R_M\searrow\rho$ as $M\to+\infty$.
Thus, for $M$ large enough there holds that
$\sigma_2>R_M^{c+3}>w(R_M)$.
From now on we restrict ourselves to such values of $M$.
Direct computation shows that
the function $\underline u$ satisfies (in the weak sense)
$$\partial_t \underline u - \Delta \underline u\leq
-w''\circ\gamma+\left(c+\frac1{|(x-ct,y-\frac L2)|}\right)|w'\circ\gamma|.
$$
Hence, if $0<\gamma(t,x,y)<R_M$, i.e., if $2-R_M<|(x-ct,y-\frac L2)|<2$,
recalling that $R_M<\sigma_2^{\frac1{c+3}}<1$, we get
$$\partial_t \underline u - \Delta \underline u \leq
-w''\circ\gamma+\left(c+1\right)w'\circ\gamma.$$
Consider first the case
$0<\gamma<\rho\ (<1)$.
We see that
$$w''\circ\gamma-\left(c+1\right)w'\circ\gamma=
(c+3)(c+2)\gamma^{c+1}-(c+1)(c+3)\gamma^{c+2}
>(c+3)\gamma^{c+2}>w\circ\gamma.$$
Recalling that $|f_0'|\leq 1$,
we get $w\circ\gamma\geq-f_0(w\circ\gamma)\geq-f_1 (y,w\circ\gamma)$.
This means that
$\underline u$ is a subsolution of \eqref{eq:f1} in the region $0<\gamma<\rho$.
Instead, if $\rho<\gamma<R_M$,
there holds that $\sigma_1<w\circ\gamma<\sigma_2$ and thus
$$w''\circ\gamma-\left(c+1\right)w'\circ\gamma>-Mm\geq-M\chi_2(w\circ\gamma).$$
Observe that $|y-\frac L2|<2$ because $\gamma(t,x,y)>0$, whence
$\frac14 L<y<\frac34L$ provided $L>8$.
Then, under such condition, it turns out that
$\underline u$ is a subsolution of \eqref{eq:f1} in the region
$\rho<\gamma<R_M$ too.
We finally extend $w$ to $0$ on $(-\infty,0)$ and we change it into the
constant $w(R_M)\ (<\sigma_2)$ on $[R_M,+\infty)$.
This is still of class $W^{2,\infty}$ and, for $L>8$
and $M$ large enough, the function
$\underline u:=w\circ\gamma$ is a generalized subsolution
of~\eqref{eq:f1} in the whole space.
Notice that $\underline u$ shifts in the direction $e_1$ with speed $c$.
Moreover, for fixed time, it is compactly supported and bounded from above by
$\sigma_2$. It follows that,
up to translation in time, it can be placed below the pulsating travelling front
in the direction $e_1$.
This readily implies by comparison that the speed
of the latter satisfies $c(e_1)\geq c$.
Step 1 is thereby proved due to the arbitrariness of $c$.
\smallskip
\setlength{\leftskip}{\parindent}
\noindent
{\em Step 2: for $L>\ln4$,
there exists $\tau>0$, depending on $L$ but not on $M$,
such that $c(e_2)\leq2L/\tau$.}
\setlength{\leftskip}{0cm}
\noindent
We introduce the following function:
$$\psi(t,y):=\frac14+e^{2t-y -L}.$$
This is a strict supersolution of \eqref{eq:f0}. Indeed, we have that
$$\partial_t \psi-\Delta \psi=\psi-\frac14>f_0(\psi),$$
where the last inequality holds because $f_0(\psi)<0$ if $\frac14 < \psi<\frac12$
and $f_0(\psi)\leq\psi-\frac12$ if $\psi > \frac12$.
We now let $\tau$ be such that $\psi(\tau,0)=\frac12$, that is,
$$\tau=\frac12\left(L-\ln4\right).$$
In order to have $\tau>0$ we impose $ L>\ln4$.
We finally define
$$\forall j\in\mathbb{N},\ t\in(0,\tau],\ y\in\mathbb{R},\quad
\bar u(j\tau+t,y):=\psi(jL+t,y).$$
The function $\bar u(t,y)$ is increasing and
lower semicontinuous in $t$, because $L>\tau$.
Consider now a pulsating travelling front $u(t,x,y)=U(x,y,y-c(e_2) t)$
for \eqref{eq:f1} in the direction $e_2$ connecting
$1$ to $0$. The functions $u$ and $U$ are periodic in the $x$ variable.
Moreover, there exists $k\in\mathbb{N}$ such that
$\bar u(k\tau,y)>U(x,y,y)=u(0,x,y)$ for all $(x,y)\in\mathbb{R}^2$.
Assume by contradiction that the inequality
$\bar u(k\tau+t,y)>u(t,x,y)$
fails for some positive time $t$ and
let $T\geq0$ be the infimum of such times. Then, because $\bar u$ is increasing in
the first variable and $u$ is continuous,
we have that
$\bar u(k\tau+t,y)\geq u(t,x,y)$ for all $t\in[0,T]$. Moreover,
there exist some sequences
$t_n\searrow T$ and $((x_n,y_n))_{n\in\mathbb{N}}$ such that $\bar u(k\tau+t_n,y_n)\leq
u(t_n,x_n,y_n)$ for all $n\in\mathbb{N}$. By the periodicity of $u$ in $x$,
it is not restrictive to assume that the sequence $(x_n)_{n\in\mathbb{N}}$ is bounded.
The sequence $(y_n)_{n\in\mathbb{N}}$ is also bounded because, on the one hand,
$$\bar u(k\tau+t_n,y)\geq \bar u(k\tau+T,y)\geq\psi(T,y)>e^{2T-y -L},$$
which is larger than $1=\sup u$ if $y<2T -L$, while, on the other hand,
$u(t,x,y)=U(x,y,y-c(e_2) t)$
which converges to $0<\inf\bar u$ as $y\to+\infty$,
uniformly in $x$ and locally uniformly in $t$.
Let $(\bar x,\bar y)$ be the limit of (a converging subsequence)
of $((x_n,y_n))_{n\in\mathbb{N}}$. The continuity of $u$ and the lower semicontinuity
of $\bar u$ yield $\bar u(k\tau+T,\bar y)\leq u(T,\bar x,\bar y)$,
whence in particular $T>0$.
Summing up, we have that
\Fi{contact}
\min_{{0\leq t\leq T}\atop{(x,y)\in\mathbb{R}^2}}
\big(\bar u(k\tau+t,y)-u(t,x,y)\big)=0=
\bar u(k\tau+T,\bar y)-u(T,\bar x,\bar y).
\end{formula}\noindent
Let $j\in\mathbb{N}$ be such that $k\tau+T\in(j\tau,(j+1)\tau]$.
Using the inequalities
\[\begin{split}
1 &>u(T,\bar x,\bar y)=\bar u(k\tau+T,\bar y)=\psi(k \tau+ T-j\tau+jL,\bar y)=
\frac14+e^{2(k \tau+ T-j\tau+jL)-\bar y -L}\\
&>\frac14+e^{(2 j-1)L-\bar y},
\end{split}\]
we find that $\bar y>(2j-1) L$.
We claim that
$f_1(y,\bar u(k\tau+t,y))=f_0(\bar u(k\tau+t,y))$ for $t\leq T$ and
$y>(2j-1) L$.
Clearly, the claim holds if $(2j-1) L < y< 2j L$, because $f_0$ and $f_1$ coincide there.
Take $t\leq T$ and $y\geq 2j L$.
We see that
$$\bar u(k\tau+t,y)\leq\frac14+e^{2(k\tau + T-j\tau+jL)-y- L}
\leq \frac14+e^{2(\tau+jL)-y - L}
\leq \frac14+e^{2\tau-L}=\frac12,$$
where the last equality follows from the definition of $\tau$.
In particular, $\bar u(k\tau+t,y)<S$ and therefore
$f_1(y,\bar u(k\tau+t,y))=f_0(\bar u(k\tau+t,y))$.
This proves the claim. Thus, the function~$\psi$ being
a strict supersolution of \eqref{eq:f0}, as seen before, we deduce that
$\bar u$ is a (continuous) strict supersolution of \eqref{eq:f1}
for $t\in(j\tau,k\tau+T]$, $x\in\mathbb{R}$, $y>(2j-1)L$.
Recalling that \eqref{contact} holds with $\bar y>(2j-1) L$,
a contradiction follows from
the parabolic strong maximum principle.
We have thereby shown that $u(t,x,y)<\bar u(k\tau + t,y)$ for all $t\geq0$,
$(x,y)\in\mathbb{R}^2$.
Now, the function $\bar u$ satisfies, for $j\in\mathbb{N}$, $j\geq k$,
$$\bar u(j\tau, 2jL)=\psi(jL,2jL)=
\frac14+e^{-L}<\frac12$$
(recall that $L>\ln4$).
From this and the fact that $u(t,x,y)<\bar u(k\tau + t,y)$ for $t>0$,
one easily infers that the speed of $u$ satisfies
$$c(e_2) \leq\lim_{j\to+\infty}\frac{2jL}{j\tau}=\frac{2L}\tau.$$
\smallskip
{\em Step 3: there exist $L,M>0$ such that $c(e_1)>c(e_2)$.}
\noindent
Take $L>8$, so that the conclusions of Steps 1 and 2 hold.
Hence we can choose $M$ large enough that
$c(e_1)$ is larger than
the upper bound $2L/\tau$ provided by Step 2. It follows that
$c(e_1)>c(e_2)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:asymmetric}]
Let $f_1=f_1(y,u)$ be the function provided by Proposition~\ref{pro:speeds}
and let $c(e)$ be the speed of the
unique (up to shifts in time) pulsating travelling front
connecting $1$ to $0$ in the direction $e\in\mathbb{S}^{1}$.
We know that $c(e_1)>c(e_2)>0$. Fix $c(e_2)<c<c(e_1)$.
We claim that there exists a bistable reaction term $f_2=f_2(u)$ satisfying
$f_2'(0)=-1$ and
such that the homogeneous equation
\Fi{eq:f2}
\partial_t u = \Delta u + f_2(u), \quad t \in \mathbb{R}, \ (x,y) \in \mathbb{R}^2 ,
\end{formula}\noindent
admits a (unique up to shift) planar front with a speed equal to $c$.
Such a reaction term can be obtained under the form
$$f_2(u)=2u\Big(u-\frac12\Big)(1-u)+M'\chi_2(u),$$
for a suitable choice of $M'$.
Indeed, for any $M'\geq0$, \eqref{eq:f2} admits a unique planar front, see~\cite{AW},
and it is not hard to check that its speed $c_{M'}$
depends continuously on~$M'$.
To conclude, we observe that $c_0=0$~\cite{AW} and that $c_{M'}\to+\infty$ as
$M'\to+\infty$, as we have shown in Step 1 of the proof
of Proposition~\ref{pro:speeds}. We point out that the proof of
Lemma~\ref{lem:uniqueness} still works for the homogeneous equation~\eqref{eq:f2}.
Namely, the planar front is the unique pulsating travelling front for~\eqref{eq:f2} (up to
shift in time or space).
We can now define the reaction $f$ as follows:
$$f(y,u):=\begin{cases}
f_1(y,u) & \text{if }0\leq u\leq 1,\\
f_2(u-1) & \text{if }1<u\leq 2.
\end{cases}$$
This function is of class $C^1$ because, we recall,
$\partial_u f_1(y,1)=f_0'(1)=-1= f_2 ' (0)$.
Moreover, it is a superposition of two reaction terms which are bistable in the sense of
Assumption~\ref{ass:bi}, due to Lemmas \ref{lem:f0},
\ref{lem:f1}.
Let us show that $f$ satisfies Assumption~\ref{ass:multi} with $I=2$
and $p_0\equiv\bar p\equiv2$, $p_1\equiv1$, $p_2\equiv0$.
We claim that any periodic steady state $q$ satisfying $0<q<2$ and $q \not \equiv 1$ is linearly unstable. By Lemmas~\ref{lem:f0} and~\ref{lem:f1}, we only need to consider the case when $\min q<1<\max q$. Assume by contradiction that such a~$q$ is not linearly unstable.
Because the equation is invariant in the direction $e_1$,
Step 1 of the proof of Lemma~\ref{lem:f1} implies that $q$ is $x$-independent,
i.e., $q=q(y)$.
On the level set $q=1$ we necessarily have that $q'\neq0$, because otherwise
$q\equiv1$. Then, the function $q$ being periodic,
there exists $\eta\in\mathbb{R}$ such that $q(\eta)=1$ and $q'(\eta)>0$.
Let $a<\eta$ be such that $q'(a)=0$ and $q'>0$ in $(a,\eta)$.
Then in $(a,\eta)$ there holds that
$-q''=f_1(y,q)\geq f_0(q)$.
Multiplying this inequality by $q'$ and integrating on $(a,\eta)$ we get
$$-\frac12(q')^2(\eta)\geq\int_a^\eta f_0(q(y))q'(y) dy =\int_{q(a)}^1 f_0(u)du.$$
This is impossible, because $\int_s^1 f_0>0$ for any $s\in[0,1]$, by definition of the function~$f_0$.
The claim is proved.
Summing up, we know that all periodic steady states of~\eqref{eq:parabolic}
are linearly unstable, except for the constant states $0,1,2$, which are linearly stable.
As shown in the proof of Lemma~\ref{lem:f0}, between any pair of
linearly unstable periodic steady states $q<\tilde q$ there must exist a
periodic steady state which is not linearly unstable. This implies that
Assumption~\ref{ass:multi} holds, as announced. It entails
Assumption~\ref{ass:speeds} too, owing to Proposition~\ref{prop:counter} in the Appendix.
We are in the position to apply Theorem~\ref{th:multi}.
This provides us with a propagating terrace in any direction~$e\in\mathbb{S}^{1}$.
Two situations may occur: either the terrace reduces to
one single front connecting $2$ to $0$,
or it consists of two fronts,
one connecting $2$ to $1$ and the other connecting $1$~to~$0$.
In the latter case, we have by uniqueness
that the two fronts are respectively given
(up to translation in time) by the unique planar front for~\eqref{eq:f2} increased by $1$,
which has speed $c$, and by the unique pulsating front of Proposition~\ref{pro:speeds},
having speed $c(e)$.
This case is ruled out if $c>c(e)$ because this violates the condition on the
order of the speeds
of the propagating terrace, see Definition~\ref{def:terrace}. Therefore, when $c > c(e)$ the terrace consists of a single front connecting 2 to 0, and proceeding as in the proof of Lemma~\ref{lem:uniqueness}, one can show that this front is unique up to time shift.
Conversely, let us show that if $c\leq c(e)$ then the case of a single front is
forbidden. Suppose that there exists
a pulsating travelling front $\tilde u$ connecting $2$ to $0$ in the direction~$e$
with some speed $\tilde c$. Observe that the argument for the
uniqueness result in the proof of Lemma~\ref{lem:uniqueness}
still works if $U_2(\cdot,-\infty)\geq1$ or if $U_1(\cdot,+\infty)\leq0$.
Hence, on one hand,
applying this argument with $u_1$ equal to the front connecting $1$ to $0$ and with
$u_2=\tilde u$ we get $\tilde c>c(e)$.
On the other hand, taking $u_1=\tilde u-1$ and $u_2$ equal to the planar front
for~\eqref{eq:f2} yields $\tilde c<c$. We eventually infer that $c>c(e)$, a contradiction. Therefore, when $c \leq c(e)$, a terrace necessarily consists of two fronts, and as we pointed out above each of them is unique up to time shift.
We have proved that there exists a unique
propagating terrace in any given direction~$e\in\mathbb{S}^{1}$
and that it consists of two fronts if and only if $c\leq c(e)$.
This concludes the proof of the proposition because $c(e_2)<c<c(e_1)$.
\end{proof}
\section{Introduction}
The effect of strong nonequilibrium conditions between electrons and phonons on solid-state properties has been investigated both experimentally and theoretically. It influences not only thermodynamic properties, such as the electron specific heat and density of states \cite{lin}, but also the lattice dynamics spectra (i.e., phonon dispersion relations). Since the values of the force constants for ions are determined by the adiabatic potential, which is a sum of the ion-ion repulsive and the ion-electron-ion interaction potential energies, it is possible to manipulate the lattice dynamics spectra by tuning the electron-mediated interaction potential. Based on density-functional theory (DFT) calculations, Recoules {\it et al}. have predicted that the phonon frequencies of Au increase over the entire Brillouin zone (BZ) when the electron temperature $T_{\rm e}$ is increased up to several eV \cite{recoules}. This is understood as a decrease in the electron-ion screening as a result of the thermal excitation of $5d$-electrons located a few eV below the Fermi level. The femtosecond pump-probe technique has confirmed the bond hardening as an increase in the melting temperature of Au \cite{ernstorfer}, which paves the way for understanding the fundamental properties of warm-dense aluminum \cite{leguay}, copper \cite{cho}, molybdenum \cite{dorchies}, and the electron gas \cite{groth}.
In light of this argument, recent DFT-based studies \cite{minakov,yan,harbour,bottin} are interesting: even when $d$-electrons are absent in a system, a noticeable change in the phonon dispersion relations has been predicted when $T_{\rm e}$ increases. For example, fcc-structured metals such as Al show a phonon hardening over the entire BZ, while bcc-structured metals such as Na show a phonon softening at the N point within the BZ \cite{yan}. A similar conclusion has been reached in a recent study \cite{harbour}, where the neutral pseudoatom model developed from DFT and molecular dynamics simulations has been used. Interestingly, Bottin {\it et al}. have shown that for both Al and Au crystals the monovacancy formation enthalpy increases with $T_{\rm e}$ (i.e., the bond hardening), while its origin is different: the largest contribution to the stress is from the kinetic energy part for Al and from the pseudopotential (ion-electron potential) energy part for Au \cite{bottin}. Since Al and Na are typical free-electron metals in the ground state, with supporting evidence at high $T_{\rm e}$ \cite{bottin}, it should be possible to develop a simple model to understand such a crystal-structure dependence of the phonon properties.
In this paper, we construct a minimal model for phonons in isochorically heated nearly-free-electron metals and calculate the $T_{\rm e}$-dependence of the phonon dispersion relations. Phonon hardening and softening occur in fcc- and bcc-structured metals, respectively, consistent with DFT-based calculations \cite{yan,harbour}. The phonon hardening originates from a significant increase in the force constant for the first nearest neighbor (NN) sites, while the phonon softening at the N point in bcc-structured metals originates from a delicate balance between the force constants for the first and second NN sites.
\section{Formulation}
To compute the phonon dispersion relations in metals, we extend the theory of the lattice dynamics for simple metals at $T_{\rm e}=$0 K \cite{hartmann} to the case $T_{\rm e}\ne$0 K. We consider a simple metal that consists of ions and conducting electrons, carrying charges $Ze$ and $-e$, respectively, where $Z$ is the valence of the ion. By charge neutrality, the number of electrons is uniquely determined once that of ions is given. We assume that the total potential energy between ions separated by a distance $R$ is
\begin{eqnarray}
V_{\rm tot}(R)
&=& v_{\rm d}(R)
+ v_{\rm ind}(R),
\label{eq:ion-ion}
\end{eqnarray}
where $v_{\rm d}$ is the direct interaction potential between ions and given by
\begin{eqnarray}
v_{\rm d}(R)
&=& \frac{Z^2 e^2}{4\pi \varepsilon_0 R}
\label{eq:d}
\end{eqnarray}
with the dielectric constant of vacuum $\varepsilon_0$. $v_{\rm ind}$ in Eq.~(\ref{eq:ion-ion}) is the indirect interaction potential that is derived from the electron-mediated ion-ion interaction. This is written as (see Appendix \ref{app} for the derivation)
\begin{eqnarray}
v_{\rm ind}(R)
&=&
\int_{0}^{\infty} dq C(q) \frac{\sin (qR)}{qR}
\label{eq:ind}
\end{eqnarray}
with the wavenumber $q$. The kernel $C(q)$ in Eq.~(\ref{eq:ind}) is
\begin{eqnarray}
C(q)
&=& - \left(\frac{\varepsilon_0 q^4}{2\pi^2 e^2}\right) v_{\rm ps}^{2}(q)
\frac{\chi(q,T_{\rm e})}{1+[1-G(q)]\chi(q,T_{\rm e})},
\label{eq:ind2}
\end{eqnarray}
where $v_{\rm ps}(q)$ is the Fourier component of the model pseudopotential. $\chi(q,T_{\rm e})$ is the $T_{\rm e}$-dependent response function, explicitly written as
\begin{eqnarray}
\chi(q, T_{\rm e})
&=& \frac{4}{\pi k_{\rm F}a_{\rm B}y^2}
\int_{0}^{\infty} dx \frac{x}{y} f(x,T_{\rm e})
\ln \left\vert \frac{2+y/x}{2-y/x} \right\vert
\nonumber\\
\label{eq:response}
\end{eqnarray}
with the Fermi wavenumber $k_{\rm F}$, the Bohr radius $a_{\rm B}$, $x=k/k_{\rm F}$, $y=q/k_{\rm F}$, and the Fermi-Dirac distribution function
\begin{eqnarray}
f(x,T_{\rm e}) = \left[ e^{(\varepsilon_{\rm F}x^2 - \mu)/(k_{\rm B}T_{\rm e})} + 1\right]^{-1}
\label{eq:fermi}
\end{eqnarray}
with the Fermi energy $\varepsilon_{\rm F}$ at $T_{\rm e}=0$ K, the chemical potential $\mu$, and the Boltzmann constant $k_{\rm B}$. When $T_{\rm e}=0$ K, Eq.~(\ref{eq:response}) can be reduced to the Hartree formula \cite{hartmann}
\begin{eqnarray}
\chi(q, 0)
&=& \frac{4}{\pi k_{\rm F}a_{\rm B}y^2}
\left(
\frac{1}{2} +
\frac{4-y^2}{8y}
\ln \left\vert \frac{2+y}{2-y} \right\vert
\right).
\label{eq:response0}
\end{eqnarray}
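For completeness, we note the elementary integral behind this reduction: at $T_{\rm e}=0$ K one has $f(x,0)=\theta(1-x)$, and, writing $u=y/2$ so that the logarithm in Eq.~(\ref{eq:response}) becomes $\ln\vert (x+u)/(x-u)\vert$,
\begin{eqnarray}
\int_{0}^{1} x \ln \left\vert \frac{x+u}{x-u} \right\vert dx
= u + \frac{1-u^2}{2}\ln \left\vert \frac{1+u}{1-u} \right\vert ,
\nonumber
\end{eqnarray}
which follows from $\frac{d}{dx}\big[\frac{x^2-u^2}{2}\ln\vert\frac{x+u}{x-u}\vert+ux\big]=x\ln\vert\frac{x+u}{x-u}\vert$; inserting this into Eq.~(\ref{eq:response}) reproduces Eq.~(\ref{eq:response0}).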
In our model, the effect of $T_{\rm e}$ (i.e., of the electron occupation) on the phonon dispersion relations enters only through $\chi(q,T_{\rm e})$. Finally, $G(q)$ in Eq.~(\ref{eq:ind2}) accounts for the effects of exchange and correlation. The model functions $v_{\rm ps}(q)$ and $G(q)$, together with the material parameters, will be given later.
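The finite-$T_{\rm e}$ response in Eq.~(\ref{eq:response}) is straightforward to evaluate numerically. The sketch below is an illustration only (not the authors' code), in reduced units where $\varepsilon_{\rm F}=1$: it first fixes $\mu(T_{\rm e})$ by electron-number conservation, requiring $\int_0^\infty x^2 f\,dx$ to keep its $T_{\rm e}=0$ value of $1/3$, and then performs the quadrature of Eq.~(\ref{eq:response}); at low $T_{\rm e}$ it reproduces the Hartree formula, Eq.~(\ref{eq:response0}).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def fermi(x, mu, kT):
    # Fermi-Dirac occupation, Eq. (6); energies in units of eps_F
    return 1.0 / (np.exp(np.clip((x**2 - mu) / kT, -60.0, 60.0)) + 1.0)

def chemical_potential(kT):
    # fix mu by electron-number conservation:
    # int_0^inf x^2 f(x, T_e) dx must keep its T_e = 0 value of 1/3
    def excess(mu):
        val, _ = quad(lambda x: x**2 * fermi(x, mu, kT), 0.0, 12.0,
                      points=[1.0], limit=200)
        return val - 1.0 / 3.0
    return brentq(excess, -50.0, 5.0)

def chi(y, kT, kF_aB):
    # Eq. (5); y = q/k_F.  The integrable log singularity at x = y/2
    # and the sharp Fermi edge near x = 1 are flagged to the quadrature.
    mu = chemical_potential(kT)
    def integrand(x):
        return (x / y) * fermi(x, mu, kT) * np.log(abs((2.0 * x + y) / (2.0 * x - y)))
    val, _ = quad(integrand, 0.0, 12.0, points=[y / 2.0, 1.0], limit=400)
    return 4.0 / (np.pi * kF_aB * y**2) * val

def chi_hartree(y, kF_aB):
    # Eq. (7): the T_e = 0 K (Hartree) limit
    return (4.0 / (np.pi * kF_aB * y**2)) * (
        0.5 + (4.0 - y**2) / (8.0 * y) * np.log(abs((2.0 + y) / (2.0 - y))))
```

With $k_{\rm F}a_{\rm B}\simeq0.93$ (roughly Al, $r_s=2.07$), the two routines agree to better than a percent at small $k_{\rm B}T_{\rm e}$, while $\mu$ drops below $\varepsilon_{\rm F}$ and the response weakens as $T_{\rm e}$ grows---the reshuffling of occupations that drives the $T_{\rm e}$-dependence of $C(q)$.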
\begin{figure*}[ttt]
\center
\includegraphics[scale=0.5]{Fig_1.eps}
\caption{\label{fig1}(a) The phonon dispersion relations of fcc-structured Al along symmetry lines for $k_{\rm B}T_{\rm e}=$0.025 (black), 2.0 (blue), and 4.0 (red) eV. $T_{\rm e}$-dependence of the force constants (b) $A_p$ and (c) $B_p$ in units of eV/\AA$^2$. (d) The total potential $V_{\rm tot}$ defined as Eq.~(\ref{eq:ion-ion}). The vertical dotted lines indicate the interatomic distance up to $p=4$. }
\end{figure*}
\begin{figure*}[ttt]
\center
\includegraphics[scale=0.5]{Fig_2.eps}
\caption{\label{fig2} Same as Fig.~\ref{fig1} but for Na. The phonon dispersion relations and $V_{\rm tot}$ are calculated for $k_{\rm B}T_{\rm e}=$0.025, 1.0, and 2.0 eV. The lower-frequency phonon at the N point, enclosed by a rounded rectangle, decreases with $T_{\rm e}$. The interatomic distance is indicated up to $p=3$.}
\end{figure*}
The phonon dispersion relations for the central potential of Eq.~(\ref{eq:ion-ion}) are calculated by a diagonalization of the dynamical matrix \cite{mermin}
\begin{eqnarray}
{\cal D} (\bm{q})= \sum_{l} \sin^2 \left( \frac{\bm{q}\cdot \bm{R}_l}{2}\right)
\left[ A \bm{1} + B \hat{R}_l \hat{R}_l \right],
\label{eq:dyn}
\end{eqnarray}
where $\bm{q}$ is the wavevector of phonons, $\bm{R}_l=(R_{lx}, R_{ly}, R_{lz})$ is the $l$th ion position, $\bm{1}$ is the $3\times3$ unit matrix, and $\hat{R}_l \hat{R}_l$ is the dyadic formed from the unit vectors $\hat{R}_l= \bm{R}_l/\vert \bm{R}_l \vert$. $A$ and $B$ are the force constants defined as
\begin{eqnarray}
A &=& \frac{2}{R_l}\frac{dV_{\rm tot} (R)}{dR}\Big\vert_{R=R_l},
\label{eq:A}
\\
B &=& 2\left[ \frac{d^2 V_{\rm tot} (R)}{dR^2}\Big\vert_{R=R_l}
- \frac{1}{R_l}\frac{dV_{\rm tot} (R)}{dR}\Big\vert_{R=R_l}
\right],
\label{eq:B}
\end{eqnarray}
where the derivatives of $V_{\rm tot}$ are evaluated at $R_l =\vert \bm{R}_l \vert$. For later use, we define $A_p$ and $B_p$ as the force constants of Eqs.~(\ref{eq:A}) and (\ref{eq:B}) evaluated at the $p$th NN distance. The phonon frequencies are given by $\omega = \sqrt{\lambda /M_{\rm ion}}$ with the ion mass $M_{\rm ion}$ and the three eigenvalues $\lambda$ of Eq.~(\ref{eq:dyn}).
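Equations (\ref{eq:dyn})--(\ref{eq:B}) are simple to implement. The sketch below (an illustration with placeholder force constants, not the authors' code) builds the fcc neighbor shells, assembles $\mathcal{D}(\bm{q})$, and returns the three phonon branches at any $\bm{q}$; unstable modes would show up as negative eigenvalues, clipped to zero here.

```python
import numpy as np
from itertools import product

def fcc_shells(alat, pmax=2, nmax=2):
    # group fcc lattice vectors R_l into shells of p-th nearest neighbors
    a1 = 0.5 * alat * np.array([0.0, 1.0, 1.0])
    a2 = 0.5 * alat * np.array([1.0, 0.0, 1.0])
    a3 = 0.5 * alat * np.array([1.0, 1.0, 0.0])
    vecs = [n1 * a1 + n2 * a2 + n3 * a3
            for n1, n2, n3 in product(range(-nmax, nmax + 1), repeat=3)
            if (n1, n2, n3) != (0, 0, 0)]
    dists = sorted({round(float(np.linalg.norm(v)), 8) for v in vecs})[:pmax]
    return [[v for v in vecs if round(float(np.linalg.norm(v)), 8) == d]
            for d in dists]

def dyn_matrix(q, shells, A, B):
    # Eq. (8): D(q) = sum_l sin^2(q.R_l/2) [A_p 1 + B_p Rhat Rhat]
    D = np.zeros((3, 3))
    for Ap, Bp, shell in zip(A, B, shells):
        for R in shell:
            Rhat = R / np.linalg.norm(R)
            D += np.sin(0.5 * np.dot(q, R)) ** 2 * (
                Ap * np.eye(3) + Bp * np.outer(Rhat, Rhat))
    return D

def phonon_freqs(q, shells, A, B, M_ion):
    # omega = sqrt(lambda / M_ion); negative (unstable) eigenvalues clipped to 0
    lam = np.linalg.eigvalsh(dyn_matrix(q, shells, A, B))
    return np.sqrt(np.maximum(lam, 0.0) / M_ion)
```

At the X point with first-shell constants only, the eigenvalues of $\mathcal{D}$ reduce to $8A_1+2B_1$ (doubly degenerate) and $8A_1+4B_1$, which is the analytic check used below.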
\section{Results and discussion}
\label{sec:3}
We study the phonon properties of Al ($Z=3$) and Na ($Z=1$), which adopt the fcc and bcc structures in the ground state, respectively. The lattice constant is $a_{\rm lat}=4.049$ \AA \ for Al and 4.225 \AA \ for Na. The Wigner-Seitz radius is $r_s=2.07$ for Al and 3.93 for Na (in units of the Bohr radius $a_{\rm B}$) \cite{mermin}. The Fermi energy is then calculated to be 11.65 eV for Al and 3.24 eV for Na. For the model potential, we use the Ashcroft pseudopotential
\begin{eqnarray}
v_{\rm ps} (q) = - \frac{Ze^2}{\varepsilon_0 q^2} \cos(q r_{\rm c}),
\label{eq:Ashcroft}
\end{eqnarray}
where $r_{\rm c}$ is the cutoff radius, which is set to be 0.5911 \AA \ for Al and 0.8784 \AA \ for Na \cite{ashcroft}. For the correction $G(q)$ for exchange and correlation energies, we use the Hubbard-type function
\begin{eqnarray}
G(q) = \frac{a q^2}{q^2 + b},
\label{eq:Hubbard}
\end{eqnarray}
where the parameters $a$ and $b$ are determined from an analytical formula given in Ref.~\cite{UI}. We have confirmed that the same conclusion (phonon hardening and softening with $T_{\rm e}$) holds when $G(q)$ is set to zero. We have also performed calculations for other bcc-structured crystals (Li, K, Rb, and Cs) and confirmed that the trend of their results is similar to that of Na shown below.
\subsection{fcc-structured Al}
Figure \ref{fig1}(a) shows the phonon dispersion relations of Al for $k_{\rm B}T_{\rm e}=$0.025, 2.0, and 4.0 eV. A significant increase in the phonon energies is observed at $k_{\rm B}T_{\rm e}=4.0$ eV, which is consistent with the DFT results in Refs.~\cite{yan,harbour}. To understand the phonon hardening driven by the electronic excitation, we show the $T_{\rm e}$-dependence of $A_p$ and $B_p$ ($p=1,2,3$, and $4$) in Figs.~\ref{fig1}(b) and \ref{fig1}(c), respectively. The magnitude of $B_1$ starts to increase from $T_{\rm e}\simeq$ 2 eV, while that of $B_p$ for $p\ge 2$ converges to zero, and $A_1$ becomes more negative. These changes are caused by the $T_{\rm e}$-dependence of $V_{\rm tot}(R)$ in Eq.~(\ref{eq:ion-ion}), shown in Fig.~\ref{fig1}(d). When $k_{\rm B}T_{\rm e}=$0.025 eV, $V_{\rm tot}(R)$ shows a Friedel oscillation that originates from the presence of the Fermi surface. As $T_{\rm e}$ increases, the oscillation amplitude weakens and thus the value of $V_{\rm tot}$ for $R/a_{\rm lat} >1$ becomes negligibly small, which leads to a significant decrease in $B_p$ for $p\ge 2$. In addition, $V_{\rm tot}$ becomes more repulsive at the $p=1$ sites since $v_{\rm d} \gg \vert v_{\rm ind}\vert$. This also leads to the increase in $\vert A_1 \vert$ and $B_1$ defined in Eqs.~(\ref{eq:A}) and (\ref{eq:B}), respectively.
As is clear from Figs.~\ref{fig1}(b) and \ref{fig1}(c), the lattice dynamics up to $k_{\rm B}T_{\rm e}=$ 4.0 eV is governed almost entirely by $B_1$, because $B_1 \gg \vert B_p\vert$ ($p=2,3$, and $4$) and $B_1\gg \vert A_p \vert$. A simple analysis, in which the contributions from $A_p$ and $B_p$ with $p\ge 2$ are ignored, enables us to understand the phonon hardening phenomenon observed above. For example, we focus on the X point, at which the phonon frequencies are given by
\begin{eqnarray}
\omega_{\rm 1,2} = \sqrt{\frac{8A_1+2B_1}{M_{\rm ion}}},
\ \
\omega_{\rm 3} = \sqrt{\frac{8A_1+4B_1}{M_{\rm ion}}},
\label{eq:omegaX}
\end{eqnarray}
where $\omega_1$ and $\omega_2$ are the doubly degenerate TA phonon frequencies and $\omega_3$ is the LA phonon frequency. From Eq.~(\ref{eq:omegaX}), it is obvious that the increase in the phonon energy in Fig.~\ref{fig1}(a) is directly related to the increase in $B_1$.
\subsection{bcc-structured Na}
We next investigate the phonon properties of Na. Figures \ref{fig2}(a), \ref{fig2}(b) and \ref{fig2}(c), and \ref{fig2}(d) show the phonon dispersion relations (for $k_{\rm B}T_{\rm e}=$0.025, 1.0, and 2.0 eV), $A_p$ and $B_p$ ($p=1,2$, and $3$), and $V_{\rm tot}(R)$, respectively. As shown in Fig.~\ref{fig2}(a), the phonon energy increases slightly in the higher-frequency region, while the lowest phonon frequency at the N point decreases with $T_{\rm e}$. Similar softening behavior and an imaginary frequency at the N point have been reported in Refs.~\cite{yan,harbour}. For lower $T_{\rm e}$ the lattice dynamics is again governed mainly by $B_1$. However, the changes in $A_1$, $A_2$, and $B_2$ in response to $T_{\rm e}$ cannot be neglected. This is because the profile of $V_{\rm tot}$ differs from that of Al (Fig.~\ref{fig1}(d)): the Friedel oscillation is not clearly observed, since the value of $k_{\rm F}$ of Na is smaller than that of Al.
To understand the phonon softening at the N point, we derive analytical expressions for the phonon frequency. From Eq.~(\ref{eq:dyn}), the frequencies at the N point are written as
\begin{eqnarray}
\omega_1&=& \sqrt{\frac{4(A_1+A_2) + 2B_2}{M_{\rm ion}}},
\label{eq:omegaN1}
\\
\omega_2&=& \sqrt{\frac{4(A_1+A_2) + \frac{4}{3}B_1}{M_{\rm ion}}},
\label{eq:omegaN2}
\\
\omega_3 &=& \sqrt{\frac{4(A_1+A_2) + \frac{8}{3}B_1 + 2B_2}{M_{\rm ion}}},
\label{eq:omegaN3}
\end{eqnarray}
where $A_p$ and $B_p$ up to $p=2$ are considered because the two parameters $A_1(<0)$ and $B_1$ alone are not enough to obtain a dynamically stable structure at $T_{\rm e}=0$ K. We emphasize that the expression for the lowest frequency $\omega_1$ in Eq.~(\ref{eq:omegaN1}) does not include the largest force constant $B_1$; the magnitude of $\omega_1$ is determined by a delicate balance between $A_1$, $A_2$, and $B_2$. Although $B_2$ increases with $T_{\rm e}$, this increase is more than compensated by the decrease in $A_1$ and $A_2$, which causes $\omega_{1}$ to decrease with $T_{\rm e}$.
The polarization vectors $\bm{e}_1$, $\bm{e}_2$, and $\bm{e}_3$ corresponding to $\omega_1$, $\omega_2$, and $\omega_3$ in Eqs.~(\ref{eq:omegaN1})-(\ref{eq:omegaN3}), respectively, are written as
\begin{eqnarray}
\bm{e}_1=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right),
\
\bm{e}_2=
\left(
\begin{array}{c}
1 \\
0 \\
0
\end{array}
\right),
\
\bm{e}_3=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
0 \\
1 \\
1
\end{array}
\right).
\nonumber\\
\end{eqnarray}
The vectors $\bm{e}_i$ ($i=1$ and $2$) and $\bm{e}_3$ are, respectively, perpendicular and parallel to the N-point wavevector $\bm{q}=(0,1/2,1/2)$ in units of $2\pi/a_{\rm lat}$, which should be helpful for identifying the soft mode experimentally.
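A quick numerical consistency check of these geometric statements (a minimal sketch, with $\bm{q}$ in units of $2\pi/a_{\rm lat}$):

```python
import numpy as np

q  = np.array([0.0, 0.5, 0.5])                  # N-point wavevector
e1 = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)    # soft (transverse) mode
e2 = np.array([1.0, 0.0, 0.0])                  # transverse
e3 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)     # longitudinal

# e1 and e2 are perpendicular to q; e3 is parallel to q.
assert np.isclose(np.dot(e1, q), 0.0)
assert np.isclose(np.dot(e2, q), 0.0)
assert np.allclose(np.cross(e3, q), 0.0)
```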
It should be noted that for simple-cubic lattices as well, $B_1$ does not enter the expressions for the phonon frequencies at the X and M points, implying that a phonon softening with $T_{\rm e}$ should also appear there. We thus speculate that the larger the NN coordination number $Z_{\rm C}$, the more robust the bond strength is against electronic excitation. In fact, it has been shown that an electronic excitation can lead to a phonon softening in Bi with $Z_{\rm C}=6$ \cite{yan,murray} and Si with $Z_{\rm C}=4$ \cite{recoules,yan}, while it leads to a phonon hardening in the hexagonal close-packed structure of Mg with $Z_{\rm C}=12$ \cite{yan}.
\section{Summary}
We have studied the effect of the electron temperature on the phonon dispersion relations for fcc- and bcc-structured metals within a model pseudopotential approach. The phonon hardening and softening in simple metals are discussed in terms of the force constants and the adiabatic potential as a function of the electron temperature. The phonon hardening originates from a significant increase in the force constant for the first NN sites, while the phonon softening at the N point in bcc-structured metals originates from a delicate balance between force constants for the first and second NN sites.
The formulation of the present work can be extended to metals with $d$-electrons, in the sense of a change of the valence $Z$ as discussed in the study of warm dense gold \cite{fourment}, although a parametrization of the relationship between $Z$ and $T_{\rm e}$ is then necessary.
\section{Introduction \label{sct1}}
\xa
\xb
\outl{Qm nonlocality notion widespread. Bell $\leq$s.; Local realism; steering}
\xa
The notion is widespread in popular articles, but also in many technical
papers, review articles, and books, that quantum mechanics is `nonlocal', in
some way that contrasts with the locality of classical physics. Here are a few
almost random selections from this vast literature:
\cite{AlGl09,HnSh18,QnVB14,Dng18,Brao14,Mdln11c,Nrsn17}.
In particular, quantum mechanics is, we are told, inconsistent with `local
realism' \cite{Hnao15ea,Wsmn15} because it predicts, and numerous experiments
confirm, a violation of Bell inequalities, and this means that if the
quantum-mechanical world is real there exist nonlocal influences which act
instantaneously over arbitrarily large distances. And if two distant systems
are in a suitable entangled quantum state, a measurement on one of them can
instantaneously influence the other through a process known as `steering'
\cite{Schr36,WsJD07,CvSk17,Tsao18}.
\xb
\outl{Maudlin-Werner debate. Black box approach}
\xa
To be sure, such claims have not gone unchallenged. Notable among more recent
discussions is an interchange between a proponent of nonlocality, Tim Maudlin
\cite{Mdln14,Mdln14b} and an advocate of quantum locality, Reinhard Werner
\cite{Wrnr14,Wrnr14b} that appeared in a special issue of the Journal of
Physics A published on the fiftieth anniversary of a famous paper \cite{Bll64b}
by John Bell; there was also a follow-up preprint \cite{Wrnr14c} by Werner. It
is of interest that neither protagonist in this debate actually applied quantum
theory to properties and processes taking place in a microscopic quantum
system. Instead, both used what might be called a `black box' approach: A
macroscopic preparation of the quantum system is followed later by a
measurement with a macroscopic output (`pointer position' in the antique but
picturesque terminology of quantum foundations), with the discussion based upon
quantum predictions of the relationship of input and output, without reference
to what might be going on at the microscopic quantum level at an intermediate
time. In Maudlin's case no reference to such goings on was needed for his
arguments, whereas Werner employed an operational approach to quantum theory in
which microscopic concepts are deliberately omitted. While a black box approach
can sometimes be useful in this as in other areas of science, the claim
of the present paper is that the locality issue is best addressed by opening
the black box and examining what happens inside it, using consistent quantum
principles. In particular it is important to understand how quantum
measurements can reveal microscopic quantum properties; something often
assumed by experimenters who design and build apparatus, but not properly
discussed in introductory (or advanced) quantum textbooks.
\xb
\outl{Wavefunction collapse: a tool to obtain conditional probabilities}
\xa
One source of the nonlocality idea is the widespread belief that measurements
`collapse' quantum wave functions. If one of two (or more) separated quantum
systems described by an entangled wavefunction is measured, then it is indeed
possible to discuss the post-measurement situation using a `collapsed' wave
function, and this no doubt contributes to the belief that there must be
nonlocal influences in the quantum world. However, in this situation the
wavefunction is merely a convenient tool for obtaining certain conditional
probabilities that can be calculated by other methods that do not suggest any
sort of nonlocal influence, as explained in Sec.~\ref{sct6}.
\xb
\outl{No-signaling $\Rightarrow $ experiments cannot detect nonlocal influences}
\xa
To be sure, those who claim that instantaneous nonlocal influences are present
in the quantum world will generally admit that they cannot be used to transmit
information; this is known as the `no-signaling' principle, widely assumed in
quantum information theory. This means that such influences (including
wavefunction collapse) cannot be directly detected in any experiment. The
simplest explanation for their lack of influence is that such influences do not
exist.
\xb
\outl{Correlations in Cl systems $\leftrightarrow $ provide an analogy for a Qm common
cause}
\xa
In classical physics two systems far apart can exhibit statistical correlations
that allow a measurement on one of them to reveal some property of the other.
No instantaneous nonlocal influences need be invoked if such correlations
result from a local common cause at some time in the past. As explained in
Sec.~\ref{sbct5.3}, an analogous kind of \emph{quantum} common cause can be
used to understand quantum correlations that violate Bell inequalities, thus
removing any need for nonlocal influences.
\xb
\outl{Overview of paper. Sec.~\ref{sct2}. CHSH violated where situation is purely
local. }
\xa
The arguments that support these conclusions are carried out in several steps.
First, in Sec.~\ref{sct2} the CHSH Bell inequality \cite{CHSH69} is shown to be
violated in a purely local situation where nonlocality plays no role. The key
point is that the CHSH inequality employs \emph{classical} hidden variables in a
situation where a proper \emph{quantum} description requires the use of
\emph{noncommuting} operators to represent physical quantities. That classical
physics fails in the quantum domain is not at all surprising. What is
surprising is that this fact \cite{Fne82b} has been overlooked or
ignored in much of the literature that claims quantum mechanics is nonlocal.
\xb
\outl{Sec.~\ref{sct3}: What projective Qm measurements measure}
\xa
\xb
\outl{Qm particles not simultaneously in two different locations}
\xa
\xb
\outl{Sec.~\ref{sct4}: Bell $\leq$ derivation factorization condition assumes
Cl HVs}
\xa
Next, Sec.~\ref{sct3} is devoted to an elementary discussion of projective
quantum measurements and what they reveal about properties of measured systems
\emph{before} a measurement takes place. This is an essential part of opening
the black box, and fills in a serious lacuna in textbooks. It justifies the
belief of many experimental physicists that the apparatus they have carefully
constructed and tested actually measures what it was designed to measure. A
proper understanding of measurements disposes of another type of supposed
quantum nonlocality: that quantum particles can simultaneously be in two
different locations.
The tools that allow quantum measurements to be understood in a rational manner
are then applied in Sec.~\ref{sct4} to a serious defect that enters many if not
all derivations of Bell inequalities: a key factorization assumption that is
supposed to represent the absence of nonlocal effects employs \emph{classical}
hidden variables that are inconsistent with Hilbert space quantum theory.
\xb
\outl{Sec.~\ref{sct5}: EPRB contains false counterfactual assumption }
\xa
\xb
\outl{EPRB correlations explained in terms of \emph{Qm} common causes}
\xa
\xb
\outl{Alice can infer properties of, but not influence or ``steer'' Bob's
particle}
\xa
The much-discussed Einstein-Podolsky-Rosen (EPR) argument is examined in
Sec.~\ref{sct5}, beginning in Sec.~\ref{sbct5.1} with Bohm's formulation in
terms of two spin-half particles. If one assumes, contrary to EPR, that Hilbert
space quantum mechanics is \emph{complete}, this undermines a key
counterfactual assumption about quantum measurements implicit in their work, an
assumption that has nothing to do with locality. Following this, in
Sec.~\ref{sbct5.3} it is shown that the experimentally observed correlations
which violate Bell inequalities can be understood as arising from local
\emph{quantum} common causes, and hence in no need of explanations based upon
instantaneous nonlocal influences. An analogy from the classical world helps
understand why Alice's measurement of one of a pair of spin-half particles has
not the slightest influence on the other particle, located far away in Bob's
possession, though she is able to infer something about its properties. In no
sense can she control or influence or `steer' Bob's particle.
\xb
\outl{Sec.~\ref{sct6} Wavefn collapse, Einstein locality}
\xa
In Sec.~\ref{sct6} it is argued that wavefunction collapse, while it can be
used to calculate correlations, is simply a mathematical tool, and should
\emph{not} be understood as a nonlocal physical process. Indeed, a quite
general \emph{Principle of Einstein Locality} states that noninteracting
systems cannot influence each other, whether or not they are in an entangled
state. So in this respect discussions, as in \cite{CvSk17,Tsao18}, of
Schr\"odinger steering are misleading.
\xb
\outl{Sec.~\ref{sct7}: Summary + suggestions for terminology}
\xa
A summary of the results of the paper is given in Sec.~\ref{sbct7.1}. This is
followed in Sec.~\ref{sbct7.2} with suggestions for changes in terminology
which might help clear up the confusion associated with long-standing, but
unsupportable, claims of quantum nonlocality, thus making quantum theory less
of an ordeal for students, and allowing more rapid progress in the study of
quantum foundations.
\xb
\outl{Earlier RBG work included, extended in this paper. CH opens black
box}
\xa
The present paper incorporates, but also extends, material from some of the
author's earlier publications
\cite{Grff11,Grff11b,Grff15,Grff17b}.
The aim is to present a unified and comprehensive critique of quantum
nonlocality claims, based in large part on a consistent analysis of quantum
measurements. In order to understand in physical terms what is going on in the
quantum world, measurements themselves must be described as physical processes
governed by general quantum principles that apply to all processes. In
particular, macroscopic measurement outcomes must be connected with the prior
\emph{quantum} properties, represented by Hilbert subspaces (as in Sec.~III.5
of \cite{vNmn32b}), the apparatus was designed to reveal. The consistent
histories (CH) approach%
\footnote{For an overview of consistent histories see \cite{Grff14b}; a
detailed treatment will be found in \cite{Grff02c}. The material in
\cite{Grff17b} is of particular relevance to the present article.} %
provides the precise rules needed to do this, and is the foundation of the
discussions in Sections~\ref{sct3}, \ref{sct4}, \ref{sct5}, and \ref{sct6}.
\xb
\subsection{Notation and Abbreviations \label{sbct1.1}}
\xa
A quantum \emph{physical property} is represented by a Hilbert subspace
or its projector, as distinct from a \emph{physical variable} represented by a
Hermitian operator; see Sec.~\ref{sbct3.1}.
For the most part standard Dirac notation is employed, with the addition
that $[\psi]=\dya{\psi}$ denotes the projector onto a normalized pure state
$\ket{\psi}$.
CH is an abbreviation for the \emph{Consistent (or Decoherent) Histories}
formulation or interpretation of quantum mechanics.
CHSH is an abbreviation for the Bell inequality of
Clauser, Horne, Shimony, and Holt \cite{CHSH69}.
EPR stands for Einstein, Podolsky, and Rosen and their paper [35].
PDI stands for \emph{projective decomposition of the identity}, see
\eqref{eqn9}.
\xb
\section{Bell Inequalities \label{sct2}}
\xa
\xb
\outl{3 steps to nonlocality using Bell inequality violations}
\xa
A common route to the belief that the world is
nonlocal comes from the following sort of reasoning:
\begin{description}
\item[B1.] Bell (and others) derived inequalities involving correlations of
separated quantum systems, inequalities which will always be satisfied if a
certain locality condition (local causality or local realism) is satisfied.
\item[B2.] Starting with the work of Freedman and Clauser \cite{FrCl72},
numerous experiments, among them \cite{AsGR81,Hnao15ea,Shao15ea,Gsao15ea},
have shown, with ever increasing precision and control for errors and
experimental loopholes, that experimentally measured correlations agree with
the predictions of quantum mechanics and violate Bell inequalities.
\item[B3.] Therefore quantum mechanics, and the world it describes,
must be nonlocal.
\end{description}
\subsection{The CHSH Inequality \label{sbct2.1}}
\xb
\outl{Formula $S=A_0B_0 +\cdots$}
\xa
To see what is wrong with this argument, consider the CHSH inequality
\cite{CHSH69}, one of the simplest Bell inequalities. It involves a
quantity
\begin{equation}
S = A_0 B_0 + A_0 B_1 + A_1 B_0 - A_1 B_1 = (A_0+A_1)B_0 +(A_0-A_1)B_1
\label{eqn1}
\end{equation}
where the $A_j$ and $B_k$ on the right hand side are either classical random
variables taking the values $+1$ and $-1$, or quantum observables (Hermitian
operators) whose eigenvalues are $+1$ and $-1$, subject to the condition that
each $A_j$ commutes with every $B_k$.
\xb
\outl{Cl case: $-2\leq S\leq 2$ $\Rightarrow $ $|\avg{S}|\leq 2$}
\xa
In the classical case it is easy to see that because either $A_0+A_1$ or
$A_0-A_1$ must be zero, $S$ will lie between the limits
\begin{equation}
-2 \leq S \leq 2,
\label{eqn2}
\end{equation}
so its average $\avg{S}$ must fall in the same interval, whence the CHSH
inequality:
\begin{equation}
|\avg{S}| \leq 2.
\label{eqn3}
\end{equation}
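The classical bound can be checked by brute force: the average of $S$ over any probability distribution is a convex combination of the values $S$ takes on the $2^4$ possible assignments of $\pm1$ to the four random variables, so it suffices to enumerate them. A minimal sketch:

```python
import itertools

# Evaluate S of Eq. (1) on every assignment of +/-1 to A_0, A_1, B_0, B_1.
S_values = [a0*b0 + a0*b1 + a1*b0 - a1*b1
            for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4)]

# Every assignment gives S = +2 or S = -2, since one of (A_0 + A_1) and
# (A_0 - A_1) vanishes; hence any average satisfies |<S>| <= 2.
assert set(S_values) == {-2, 2}
```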
\xb
\outl{Qm case: operators don't commute. $S$ can have eval as large as $2\sqrt{2}$}
\xa
By contrast, if the $A_j$ and $B_k$ are quantum Hermitian operators with
eigenvalues $\pm 1$ subject to the requirement that $[A_j,B_k]=0$ for every $j$
and $k$, it is easy to construct an example, see below, in which $S$ has
eigenvalues of $\pm2\sqrt{2},0,0$, and thus using the eigenstate for the largest
eigenvalue to compute the average of $S$ will yield $\avg{S}=2\sqrt{2}$, in obvious
violation of the inequality \eqref{eqn3}. A key feature of this example is that
$A_0$ does not commute with $A_1$, nor $B_0$ with $B_1$, and the four
summands in \eqref{eqn1} do not all commute with one another. There is no reason to
expect the eigenvalues of a sum of noncommuting operators to bear any simple
relationship with those of the summands, so the violation of \eqref{eqn3} in
the quantum case is not surprising. Nonlocality is irrelevant, as is
shown by the following example.
\subsection{Neon \label{sbct2.2}}
\xb
\outl{$^{21}$Ne spin 3/2. Let $\ket{00},\,\ket{01},\,\ket{10},\,\ket{11}$ be
any orbasis}
\xa
The $^{21}$Ne nucleus has a spin of $3/2$, which is also the spin of a neutral
neon atom of this isotope; it has a low but nonzero natural abundance. Thus
the ground state of a $^{21}$Ne atom is 4-fold degenerate, and its quantum
mechanical description uses a four-dimensional Hilbert space ${\mathcal H}$. Choose any
orthonormal basis for this space, and let the basis vectors carry binary
labels, thus $\ket{00},\,\ket{01},\,\ket{10},\,\ket{11}$. These could, for
example, be states in which the $z$ component of angular momentum $S_z$ for
some (arbitrary) direction $z$ takes on the values $+3/2,\,+1/2,\,-1/2,\,-3/2$
in units of $\hbar$, but any other choice would be equally good.
\xb
\outl{Ground state ${\mathcal H}={\mathcal H}_a\otimes {\mathcal H}_b$ with operators $A_j$, $B_k$,
$M_{jk}=A_jB_k$ }
\xa
Next, as a matter of convenience, write ${\mathcal H}$ as a tensor product
${\mathcal H}_a\otimes {\mathcal H}_b$ of two $2$-dimensional spaces with orthonormal bases
$\ket{0}_a$, $\ket{1}_a$, and $\ket{0}_b$, $\ket{1}_b$, respectively, related
to the previously chosen basis of ${\mathcal H}$ through
\begin{equation}
\ket{j,k} = \ket{j}_a\otimes \ket{k}_b.
\label{eqn4}
\end{equation}
Finally, using this tensor product structure, define four operators
\begin{equation}
A_0 = Z\otimes I,\quad A_1 = X\otimes I,\quad B_0= I\otimes X,\quad B_1 = I \otimes Z,
\label{eqn5}
\end{equation}
where $I$ is the $2\tm2$ identity matrix, while $X$ and $Z$ are the Pauli
$x$ and $z$ matrices. Define
the products $M_{jk} = A_j B_k$:
\begin{equation}
M_{00} =Z\otimes X,\quad M_{01} =Z\otimes Z,\quad
M_{10} =X\otimes X,\quad M_{11} =X\otimes Z,
\label{eqn6}
\end{equation}
(where the subscripts label different operators, not matrix elements),
and a quantum version of \eqref{eqn1} takes the form:
\begin{equation}
S = M_{00} + M_{01} + M_{10} - M_{11}.
\label{eqn7}
\end{equation}
Each $M_{jk}$ has eigenvalues $+1$ and $-1$, both doubly degenerate. Even
though each $A_j$ commutes with each $B_k$, the four $M_{jk}$ do not all
commute with one another: each fails to commute with two of the other three
($M_{00}$ commutes only with $M_{11}$, and $M_{01}$ only with $M_{10}$).
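These algebraic facts are easy to verify numerically. The sketch below builds the operators of Eqs.~(\ref{eqn5})--(\ref{eqn7}) as $4\times4$ matrices and checks the commutation relations and the eigenvalues of $S$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

A = [np.kron(Z, I2), np.kron(X, I2)]      # A_0, A_1 of Eq. (5)
B = [np.kron(I2, X), np.kron(I2, Z)]      # B_0, B_1

# Every A_j commutes with every B_k.
for Aj in A:
    for Bk in B:
        assert np.allclose(Aj @ Bk, Bk @ Aj)

M00, M01 = A[0] @ B[0], A[0] @ B[1]       # Eq. (6)
M10, M11 = A[1] @ B[0], A[1] @ B[1]

# The M_jk are not all mutually commuting: e.g. M00 anticommutes with M01.
assert np.allclose(M00 @ M01, -(M01 @ M00))

S = M00 + M01 + M10 - M11                 # Eq. (7)
evals = np.sort(np.linalg.eigvalsh(S))
# Eigenvalues of S are -2*sqrt(2), 0, 0, +2*sqrt(2).
assert np.allclose(evals, [-2*np.sqrt(2), 0., 0., 2*np.sqrt(2)])
```

Taking $\ket{\psi}$ to be the eigenvector belonging to the largest eigenvalue then gives $\avg{S}=2\sqrt{2}$, violating \eqref{eqn3}.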
\xb
\outl{Measure $M_{jk}$ in $^{21}$Ne beam in 4 experiments $\rightarrow $
$S=2\sqrt{2}$, violates CHSH}
\xa
Now imagine that a skilled experimenter is able to produce a beam consisting of
neon atoms of this isotope, each in the same (pure) hyperfine state
$\ket{\psi}$, and then, using a large number of runs, measures each $M_{jk}$
and finds its average value. Note that four separate experiments are needed,
one for each $M_{jk}$, since these operators do not all commute with one
another. Adding up the
averages provides the average of $S$. Since the eigenvalues of the operator in
averages provides the average of $S$. Since the eigenvalues of the operator in
\eqref{eqn7} are $\pm 2\sqrt{2},0,0$, if $\ket{\psi}$ is the eigenstate with the
largest eigenvalue, the average $\avg{S}$ of $S$ will be $2\sqrt{2}$, well outside
the range \eqref{eqn3}.
\xb
\outl{Experiment is local; violation of CHSH $\leftrightarrow $ noncommutation}
\xa
In the case of this hypothetical $^{21}$Ne experiment all the atoms belong to a
single beam, and while the four different measurements require different
apparatus settings, they can all be carried out in the same physical location
in the same laboratory. Thus the violation of the CHSH inequality in this case
has nothing to do with nonlocality. Instead it has everything to do with the
fact that in quantum mechanics, unlike classical mechanics, physical properties
and variables are represented by \emph{noncommuting operators}. To be sure,
doing the $A$ and $B$ measurements of photon polarizations at different
locations, as in the usual Bell tests, guarantees that the $A_j$ operators
commute with the $B_k$ operators, but this same requirement has simply been
built into the protocol of the neon experiment.
\xb
\outl{No reason to actually carry out experiment}
\xa
Performing such an experiment would be difficult and expensive, and there is no
reason to attempt it, since by now there is a vast amount of experimental
evidence that demonstrates, with high precision, the correctness of quantum
mechanics. This includes the widely publicized experiments on correlated photon
pairs, e.g. \cite{Hnao15ea,Shao15ea,Gsao15ea}, which have confirmed that
quantum theory violates Bell inequalities in the same way one would expect to
be the case were the neon experiment actually carried out.
\xb
\section{Quantum Measurements \label{sct3}}
\xa
\xb
\outl{Measurements designed to determine prior property of measured system}
\xa
\xb
\outl{Split $^{21}$Ne beam into separate beams, send these to separate
detectors}
\xa
Quantum measurements of the sort we are interested in involve an amplification
of a microscopic quantum property in such a way as to produce a macroscopic
result, the measurement \emph{outcome}, a pointer position in the archaic but
picturesque language of quantum foundations. Measurements of a beam of
$^{21}$Ne atoms could in principle be carried out in a similar way to the
famous Stern-Gerlach experiment: use magnetic (perhaps assisted with electric)
field gradients to separate the initial beam of particles into separate beams
having different properties, each identified by quantum numbers referring to
some observable. When far enough apart these beams would enter separate
detectors where individual atoms are ionized and the electron fed to an
electron multiplier resulting in a macroscopic current. Note that such a
measurement determines the property of each atom \emph{before} it is measured,
not afterwards when the measurement is over and the detector has destroyed the
atom.
\xb
\subsection{Observables and Properties \label{sbct3.1}}
\xa
\xb
\outl{Projective measurement of $F=\sum_j f_j P^j$, $\{P^j\}$ a PDI}
\xa
This simplest sort of \emph{projective} measurement can be discussed in quantum
mechanical terms as follows. Let $F=F^\dagger $ be the Hermitian operator
corresponding to the physical variable (quantum observable) to be measured,
and write it in the spectral form
\begin{equation}
F=\sum_j f_j P^j,
\label{eqn8}
\end{equation}
where the $f_j$ are eigenvalues---we assume that $f_j\neq f_k$ for $j\neq
k$---and the $P^j$ projectors onto the corresponding eigenspaces. Here the $j$
superscript of $P^j$ is a \emph{label}, not an exponent; this should cause no
confusion, because a projector is equal to its square. These projectors
satisfy the conditions
\begin{equation}
P^j = (P^j)^\dagger = (P^j)^2,\quad P^jP^k = \delta _{jk}P^j,\quad I = \sum_j P^j.
\label{eqn9}
\end{equation}
The first two equalities define a projector (orthogonal projection operator),
while the last two define the collection $\{P^j\}$ to be a \emph{projective
decomposition of the identity $I$} (PDI). A measurement of $F$ consists in
determining which $P^j$ represents the \emph{quantum property} of the particle
being measured at a time just before the measurement takes place. The term
``property'', following von Neumann, Sec.~III.5 of \cite{vNmn32b}, corresponds
to a (closed) subspace of the Hilbert space, or its corresponding projector,
and thus refers to something which can, at least potentially, be true or false.
One should distinguish $F$, an observable or physical variable, from the
property that $F$ takes on a particular value or a range of values.%
\footnote{In this usage ``the energy of a harmonic oscillator is no greater
than $(3/2)\hbar\omega $'' is a property corresponding to a projector on a
two-dimensional subspace, whereas ``energy'' by itself is a physical
variable, not a property.} %
Thus a projector is the quantum counterpart of a set of points in the classical
phase space, and a PDI is the quantum analog of a probabilistic \emph{sample
space}: a collection of mutually exclusive properties, one and only one of
which can occur in any given run of an experiment. (For more details about
measurement processes and their quantum description, see \cite{Grff17b} and
Chs.~17 and 18 of \cite{Grff02c}.)
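A minimal numerical illustration of these definitions, using a hypothetical observable on a three-dimensional Hilbert space with a doubly degenerate eigenvalue:

```python
import numpy as np

# Hypothetical observable with spectral form F = (+1) P1 + (-1) P2,
# Eq. (8), where the eigenvalue +1 is doubly degenerate.
F  = np.diag([1., 1., -1.])
P1 = np.diag([1., 1., 0.])    # projector onto the +1 eigenspace
P2 = np.diag([0., 0., 1.])    # projector onto the -1 eigenspace

# The conditions of Eq. (9): each P is a projector, the two are mutually
# orthogonal, and together they form a PDI (they sum to the identity).
for P in (P1, P2):
    assert np.allclose(P, P.conj().T)          # P = P^dagger
    assert np.allclose(P, P @ P)               # P = P^2
assert np.allclose(P1 @ P2, np.zeros((3, 3)))  # P^j P^k = delta_jk P^j
assert np.allclose(P1 + P2, np.eye(3))         # sum_j P^j = I
assert np.allclose(F, (+1)*P1 + (-1)*P2)       # spectral form recovered
```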
\xb
\outl{Measuring two (in)compatible observables $F$ and $G$}
\xa
\xb
\outl{$PQ \neq QP$ $\Rightarrow $ property ``$P$ \mbox{\small AND}\ $Q$'' does not exist}
\xa
Suppose another observable $G=G^\dagger $ has the spectral form
\begin{equation}
G = \sum_k g_k Q^k,
\label{eqn10}
\end{equation}
where the $g_k$ are its eigenvalues, and the properties $\{Q^k\}$ form a PDI.
If $F$ and $G$ commute, $FG=GF$, then every $Q^k$ commutes with every $P^j$ and
it is possible to measure $F$ and $G$ at the same time, using the PDI which is
the common refinement of $\{P^j\}$ and $\{Q^k\}$, the collection of nonzero
products $P^j Q^k$. However, if $F$ and $G$ are \emph{incompatible}, $FG\neq
GF$, then there will be some $j$ and $k$ such that $P^j Q^k\neq Q^k P^j$, and
there is no common refinement of the two PDIs, so these observables cannot be
measured in a single experiment; they must be determined in separate
experimental runs. Note that if $P^jQ^k=Q^kP^j$ the product is itself a
projector that represents the property ``$P^j$ \mbox{\small AND}\ $Q^k$'', whereas if $P^j
Q^k\neq Q^k P^j$ neither product is a projector, so the property $P^j$ \mbox{\small AND}\
$Q^k$ is not defined.\footnote{%
The quantum logic of Birkhoff and von Neumann \cite{BrvN36} does assign a
property ``$P^j$ \mbox{\small AND}\ $Q^k$'' when the projectors do not commute, but no one
has yet turned their quantum logic into a useful tool for reasoning in
physical terms about microscopic quantum properties. See Sec.~4.6 of
\cite{Grff02c} for a very simple example of one of the difficulties one runs
into.} %
Textbooks tell us that two incompatible observables or properties cannot be
measured simultaneously, and for this there is a simple explanation (not always
given in textbooks): the simultaneous property is not represented by a
projector, and thus does not exist. Even skilled experimenters cannot measure
what is not there.
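The failure of ``$P$ \mbox{\small AND}\ $Q$'' for noncommuting projectors can be made concrete with a single qubit. Taking, purely for illustration, $P=[0]$ and $Q$ the projector onto $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$, neither product $PQ$ nor $QP$ is a projector:

```python
import numpy as np

# P projects onto |0>, Q onto |+>; these properties are incompatible.
P = np.array([[1., 0.], [0., 0.]])
plus = np.array([1., 1.]) / np.sqrt(2)
Q = np.outer(plus, plus)

assert not np.allclose(P @ Q, Q @ P)      # the projectors do not commute

PQ = P @ Q
assert not np.allclose(PQ, PQ.conj().T)   # product is not Hermitian
assert not np.allclose(PQ, PQ @ PQ)       # product is not idempotent
```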
\xb
\outl{$^{21}$Ne: 4 $M_{jk}$ operators don't commute, need separate measurements}
\xa
\xb
\outl{ Calculated $\avg{S} =$ measured $M_{00} +\cdots$ a good test of SQM}
\xa
In the case of $^{21}$Ne, the four operators in \eqref{eqn6} do not all
commute with one another, and their averages are determined in four separate
experiments, one for each $M_{jk}$. For example, $M_{01}$ has eigenvalues $+1$
and $-1$, both doubly degenerate, so the PDI contains two projectors. To find
the average $\avg{M_{01}}=\mte{\psi}{M_{01}}$ the apparatus needs to separate
the particles into two beams corresponding to these two eigenvalues, and after
a large number of runs the experimental average will be $(N_+ - N_-)/(N_+ +
N_-)$ if $N_+$ particles arrive in the $+1$ beam and $N_-$ in the $-1$ beam. A
\emph{separate experiment}, which is to say a different arrangement for
separating the incoming beams into separate beams, must be carried out for
\emph{each} of the $M_{jk}$ in order to measure its average. And since $S$ in
\eqref{eqn7} does not commute with any of the $M_{jk}$, an experimental check
of this equality in the sense of equating the average $\avg{S}$ of S, as
computed using quantum principles, with the sum of the experimental averages of
the quantities on the right side would be a rather stringent test of the
correctness of standard Hilbert space quantum mechanics.
\xb
\subsection{Quantum Measurement Model \label{sbct3.2}}
\xa
\xb
\outl{Projective measurement of $F=\sum_j f_j P^j$, $P^j = [\phi^j]$}
\xa
\xb
\outl{Initial state $\ket{\Psi_0}=\ket{\psi_0}\otimes \ket{\Phi_0} =
\sum_j c_j \ket{\phi^j}\otimes \ket{\Phi_0} \in {\mathcal H}_s\otimes {\mathcal H}_m$}
\xa
What follows is a simple quantum mechanical model of a projective measurement
of an observable $F$, \eqref{eqn8}. Additional details will be found in Chs.~17
and 18 of \cite{Grff02c}, and in \cite{Grff15,Grff17b}. In what follows we
assume that $F$ refers to a system, hereafter referred to as a `particle', with
Hilbert space ${\mathcal H}_s$, while a much larger Hilbert space ${\mathcal H}_m$ represents the
measuring apparatus; together they constitute a closed system with Hilbert
space ${\mathcal H} = {\mathcal H}_s\otimes {\mathcal H}_m$. At an initial time $t_0$ the particle is in a
superposition of eigenstates of $F$---for simplicity assume the eigenvalues are
nondegenerate---
\begin{equation}
\ket{\psi_0} = \sum_j c_j \ket{\phi^j},\quad P^j = [\phi^j] = \dya{\phi^j},
\label{eqn11}
\end{equation}
and the apparatus is in the `ready-for-measurement' state
$\ket{\Phi_0}$, so that the combined system is in the state
\begin{equation}
\ket{\Psi_0} = \ket{\psi_0}\otimes \ket{\Phi_0} =
\sum_j c_j \ket{\phi^j}\otimes \ket{\Phi_0}.
\label{eqn12}
\end{equation}
\xb
\outl{Times $t_0 \approx t_1 < t_2$; during $[t_1,t_2]$ unitary transformation
$T$ }
\xa
\xb
\outl{$T (\ket{\phi^j} \otimes \ket{\Phi_0}) = \ket{\Psi_2^j}$;
$\{M^k\}$ = pointer PDI; $M^k \ket{\Psi_2^j} = \delta _{jk} \ket{\Psi_2^j}$}
\xa
Let $t_1$ be a time slightly later than $t_0$ during which there is negligible
change under unitary time evolution, so at $t_1$ $\ket{\Psi_1}$ is the same as
$\ket{\Psi_0}$. Next assume that during the time interval from $t_1$ to $t_2$
the particle and apparatus interact with each other in such a way that a
measurement process takes place, so that by $t_2$ the macroscopic quantity
representing the measurement outcome, the `pointer position', has reached its
final value. Let $T$ be the unitary
time development operator from $t_1$ to $t_2$ ($T=\exp[-i(t_2-t_1)H]$ in the
case of a time-independent Hamiltonian $H$), and let
\begin{equation}
\ket{\Psi_2^j} = T (\ket{\phi^j} \otimes \ket{\Phi_0}),\quad
\ket{\Psi_2} = \sum_j c_j \ket{\Psi_2^j} = T\ket{\Psi_1}.
\label{eqn13}
\end{equation}
Next assume there is a PDI $\{M^k\}$ on ${\mathcal H}$, whose
significance is that the property or projector $M^k$ corresponds to the
pointer (or whatever macroscopic variable indicates the measurement outcome)
being in position $k$, and that
\begin{equation}
M^k \ket{\Psi_2^j} = \delta _{jk} \ket{\Psi_2^j}.
\label{eqn14}
\end{equation}
Thus if the particle is initially in the state $\ket{\phi^j}$ at $t_1$, its
interaction with the apparatus will result in the pointer being in position
$j$, i.e., possessing the property $M^j$, at time $t_2$, as one might expect in
the case of a projective measurement. (Note that each $M^k$, since it
represents a macroscopic quantum property, will project onto a subspace of very
high dimension, compared to which 10 raised to the power $10^{10}$ is a
relatively small number.)
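The model of Eqs.~(\ref{eqn11})--(\ref{eqn14}) can be sketched in miniature by shrinking the apparatus to a single pointer qubit and taking $T$ to be a CNOT gate; this toy choice is purely illustrative (the $M^k$ of the text project onto macroscopic subspaces of enormous dimension):

```python
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
Phi0 = ket0                       # apparatus 'ready' state |Phi_0>
# Toy unitary T: a CNOT that copies the particle's basis state to the
# pointer, standing in for the measurement interaction from t_1 to t_2.
T = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Pointer projectors M^k = I (x) |k><k|, a PDI on the combined space.
M = [np.kron(np.eye(2), np.outer(k, k)) for k in (ket0, ket1)]

for j, phi in enumerate((ket0, ket1)):
    Psi2_j = T @ np.kron(phi, Phi0)                 # Eq. (13)
    for k in range(2):
        target = Psi2_j if j == k else np.zeros(4)
        assert np.allclose(M[k] @ Psi2_j, target)   # Eq. (14)
```

With the particle prepared in $\ket{\phi^j}$ the pointer ends up with the property $M^j$ with certainty, exactly as required of a projective measurement.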
\xb
\outl{Family of 3-time histories $Y^{jk} = E_0 \odot E_1^j \odot E_2^k$ =Qm sample
space}
\xa
To discuss the time dependence of the measuring process when the initial
$\ket{\psi_0}$ is in a superposition---at least two of the $c_j$ in
\eqref{eqn11} are nonzero---requires the use of \emph{quantum histories},
sequences of quantum properties, represented by projectors, at successive
times.%
\footnote{See Sec.~III of \cite{Grff17b} for more details, and Chs.~8 through
11 of \cite{Grff02c} for an extended discussion of histories and their
probabilities.} %
For our purposes it suffices to consider histories of the form
\begin{equation}
Y^{jk} = E_0 \odot E_1^j \odot E_2^k,
\label{eqn15}
\end{equation}
interpreted as meaning that the system at time $t_0$ has the property $E_0$, at
time $t_1$ the property $E_1^j$, and at time $t_2$ the property $E_2^k$. (The
symbol $\odot $ denotes a tensor product, a form of $\otimes $ used to separate
properties at successive times. There is no assumption that events at
successive times are related by a unitary time transformation.) Here $j$ and
$k$ are labels, and the $\{E_1^j\}$ and $\{E_2^k\}$ are PDIs. The collection
$\{Y^{jk}\}$ constitutes a \emph{family of histories} or \emph{framework}. Each
history begins with the same property $E_0$ at time $t_0$, and different
histories correspond to different events at later times. A family of histories
constitutes a quantum sample space (analogous to a collection of random walks
in classical physics) to which probabilities can be assigned using an extension
of the Born rule, provided certain consistency conditions are satisfied. In our
case the initial property is
\begin{equation}
E_0= [\Psi_0] = [\psi_0]\otimes [\Phi_0],
\label{eqn16}
\end{equation}
and we shall consider three different families or frameworks based on different
choices for the PDIs at $t_1$ and $t_2$.
\xb
\outl{Unitary framework ${\mathcal F}_u: Y=[\Psi_0] \odot [\Psi_0] \odot [\Psi_2]$ $\rightarrow $
superposition incompatible with pointer positions = infamous measurement
problem}
\xa
The \emph{unitary framework} ${\mathcal F}_u$ contains but a single history
\begin{equation}
{\mathcal F}_u: Y=[\Psi_0] \odot [\Psi_0] \odot [\Psi_2],
\label{eqn17}
\end{equation}
with the projectors at $t_1$ and $t_2$ corresponding to a unitary time
development of the initial state. (Strictly speaking we should introduce a PDI
$\{[\Psi_0], I-[\Psi_0]\}$, $I$ the identity operator, at $t_1$, but the
extended Born rule assigns zero probability to the second of these
possibilities, so it can be ignored; similarly a PDI $\{[\Psi_2], I-[\Psi_2]\}$
at time $t_2$.) The trouble with the family ${\mathcal F}_u$ is that when two or more of
the $c_j$ are nonzero, the state $\ket{\Psi_2}$ is a coherent superposition of
states that correspond to different pointer positions, and hence the
corresponding property $[\Psi_2]$ does not commute with projectors representing
different positions of the pointer; the two are incompatible, and trying to
combine them will give a meaningless result, as noted earlier in the case of
incompatible observables $F$ and $G$. We have arrived at the infamous
measurement problem of quantum foundations, or, in popular parlance,
Schr\"odinger's cat.
\xb
\outl{Family ${\mathcal F}_1$: $Y^k =[\Psi_0] \odot [\psi_0]\otimes [\Phi_0] \odot M^k$; $Y^k$ has
Born rule probability}
\xa
This difficulty can be avoided by using in place of ${\mathcal F}_u$ a family
\begin{equation}
{\mathcal F}_1: Y^k =[\Psi_0] \odot [\psi_0] \odot M^k,
\label{eqn18}
\end{equation}
where by physicists' convention $[\psi_0]$ at $t_1$ stands for $[\psi_0]\otimes
I_m$ on the full Hilbert space, and histories with $I-[\psi_0]$ at $t_1$ have
been omitted since they have zero probability. The use of $[\psi_0]$ rather
than $[\Psi_0]$ as in \eqref{eqn17} serves to focus attention on the particle
at time $t_1$. The $k$'th history $Y^k$ ends in the pointer position $M^k$ at
time $t_2$, and the extended Born rule assigns to this outcome a probability
\begin{equation}
\Pr(Y^k) = \mte{\Psi_2}{M^k} = |c_k|^2 = \mte{\psi_0}{P^k}
=|\inpd{\phi^k}{\psi_0}|^2.
\label{eqn19}
\end{equation}
The final expression on the right is the formula students learn in an
introductory course.
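The chain of equalities in \eqref{eqn19} is easy to check numerically. The following sketch uses a hypothetical three-level particle with made-up amplitudes $c_k$ (these numbers are illustrative assumptions, not taken from the text) and computes the Born-rule probabilities $|\inpd{\phi^k}{\psi_0}|^2$:

```python
import numpy as np

# Hypothetical 3-level particle: orthonormal basis {|phi^k>} and an
# initial superposition |psi_0> = sum_k c_k |phi^k> (amplitudes made up).
phi = np.eye(3)                        # columns are |phi^0>, |phi^1>, |phi^2>
c = np.array([0.6, 0.8j, 0.0])         # |c_0|^2 + |c_1|^2 = 1
psi0 = phi @ c

# Born rule: Pr(Y^k) = |<phi^k|psi_0>|^2 = |c_k|^2
probs = np.abs(phi.conj().T @ psi0) ** 2
print(probs)                           # [0.36, 0.64, 0.0]
```

The same numbers are obtained as $\mte{\psi_0}{P^k}$ with $P^k$ the projector $\dyad{\phi^k}{\phi^k}$, which is the point of the equalities.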
\xb
\outl{Open black box: ${\mathcal F}_2: Y^{jk}=[\Psi_0] \odot P^j \odot M^k$ $\rightarrow $
$\Pr(Y^{jk}) = \delta _{jk} |c_k|^2$\\ $\Pr([\phi^j]_1 \,\boldsymbol{|}\, [M^k]_2) = \delta _{jk}$}
\xa
One can go a step further in opening the black box by using the framework
\begin{equation}
{\mathcal F}_2: Y^{jk}=[\Psi_0] \odot [\phi^j] \odot M^k,
\label{eqn20}
\end{equation}
where $[\phi^j]$ (i.e., $[\phi^j] \otimes I_m$) at $t_1$ means the particle has the
property $[\phi^j]$, while nothing is said
about the state of the apparatus. It is easily shown that the consistency
conditions for this family are satisfied, and the extended Born rule assigns
probabilities
\begin{equation}
\Pr(Y^{jk}) = \delta _{jk} |c_k|^2=|\inpd{\phi^k}{\psi_0}|^2.
\label{eqn21}
\end{equation}
This agrees with \eqref{eqn19}, but provides additional information, namely the
conditional probabilities (where subscripts $1$ and $2$ identify the time):
\begin{equation}
\Pr([\phi^j]_1 \,\boldsymbol{|}\, [M^k]_2) = \delta _{jk} = \Pr( [M^j]_2 \,\boldsymbol{|}\, [\phi^k]_1),
\label{eqn22}
\end{equation}
assuming $c_k\neq 0$. The first says that if the measurement outcome (pointer
position) is $k$ at $t_2$, then at the earlier time $t_1$, \emph{before} the
measurement took place, the particle had the corresponding microscopic property
$[\phi^k]$. In other words, a projective measurement of this sort reveals a
prior property of the measured system when one uses an appropriate quantum
description that allows for this possibility. Herein lies the key difference
between ${\mathcal F}_2$, in which the different $[\phi^k]$ make sense at $t_1$, and
${\mathcal F}_1$, where they do not, since $[\psi_0]$, assuming at least two of the
$c_j$ in \eqref{eqn13} are nonzero, does not commute with the relevant
$[\phi^k]$.
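The probabilities $\Pr(Y^{jk})=\delta_{jk}|c_k|^2$ in \eqref{eqn21} can be illustrated with a toy model (an assumption made here for concreteness, not the paper's own apparatus): a two-level particle, a two-position pointer, trivial dynamics between $t_0$ and $t_1$, and a CNOT-like measurement interaction $\ket{j}\ket{\text{ready}} \to \ket{j}\ket{j}$ between $t_1$ and $t_2$:

```python
import numpy as np

c = np.array([0.6, 0.8])                  # |psi_0> = 0.6|phi^0> + 0.8|phi^1>
Psi0 = np.kron(c, [1.0, 0.0])             # apparatus starts in ready state |Phi_0>
U = np.array([[1, 0, 0, 0],               # CNOT-like interaction: |j>|0> -> |j>|j>
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def proj(v):
    """Projector |v><v| onto a normalized real vector v."""
    v = np.asarray(v, dtype=float)
    return np.outer(v, v)

I2 = np.eye(2)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Extended Born rule: Pr(Y^{jk}) = || M^k U ([phi^j] x I) |Psi_0> ||^2
Pr = np.zeros((2, 2))
for j in range(2):                        # particle property [phi^j] at t_1
    for k in range(2):                    # pointer position M^k at t_2
        chain = np.kron(I2, proj(basis[k])) @ U @ np.kron(proj(basis[j]), I2) @ Psi0
        Pr[j, k] = chain @ chain
print(Pr)        # [[0.36, 0.], [0., 0.64]]: Pr(Y^{jk}) = delta_{jk} |c_k|^2
```

The off-diagonal entries vanish, which is precisely what licenses the conditional probabilities \eqref{eqn22}: pointer at $k$ implies the earlier particle property $[\phi^k]$.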
\xb
\outl{$\ket{\psi_0}$ used at time $t_1$ in ${\mathcal F}_2$ is \emph{pre-probability},
not property}
\xa
In addition, since in ${\mathcal F}_2$ $[\phi^k]$ occurs at an \emph{earlier time} than
the measurement outcome $M^k$, the second equality in \eqref{eqn22} allows one
to identify the earlier $[\phi^k]$ as the \emph{cause} of the later $M^k$.
This is the way an experimenter will normally think about the operation of a
measurement apparatus; e.g., it is the arrival of a photon which caused the
photodetector to produce a click, not vice versa.
Note that the superposition state $\ket{\psi_0}$, although it does not appear at
time $t_1$ in ${\mathcal F}_2$, can nonetheless be used, as in \eqref{eqn21}, for
calculating the probabilities assigned to the different properties $[\phi^k]$
at this time. A wavefunction or ket used in this manner is referred to as a
`pre-probability' in Sec.~9.4 of \cite{Grff02c}. This role as a calculational
tool should be carefully distinguished from its use as a quantum property, as
in \eqref{eqn18}.
\xb
\outl{Apparatus calibration enables retrodiction: outcome $\Rightarrow $ earlier Qm
property }
\xa
A careful experimenter will want to check that the measurement apparatus built
to measure a particular observable is functioning properly. One check is
calibration: if the device has been built to measure $F$ in \eqref{eqn8}, then
for each $j$ send in a stream of particles known to have the property $P^j$ and
check that the pointer always ends up at position $j$. Once the device has been
calibrated, the experimenter will normally assume that if a particle whose
property is unknown arrives at the detector and the pointer points at $j$, then
the particle earlier had the property $P^j$. Thus the earlier property can be
inferred or retrodicted from the measurement outcome.
\xb
\outl{Retrodiction valid even if particle initially in superposition state}
\xa
But what if the particle was initially prepared in a superposition
$\ket{\psi_0}$ of states corresponding to different values of $j$? The use of
the framework ${\mathcal F}_2$ shows that such an inference remains valid. If the same
initial state is used in successive runs of the experiment, the outcomes will
be different, with probabilities given by the usual formula \eqref{eqn19}. It
is not meaningful to ask, ``Did the particle have the property $[\psi_0]$ or
the property $[\phi^k]$ prior to the measurement?'', because the projectors do
not commute. But if the question is: ``Which among
the $[\phi^j]$ was the property possessed by the particle just before it
reached the apparatus,'' then the answer is given by using the framework
${\mathcal F}_2$ leading to the formula \eqref{eqn22}. Inferences of this sort are made
all the time by experimenters, and it is to be regretted that this ``common
sense'' understanding of quantum measuring processes is not explained in
introductory textbooks.
\xb
\outl{No one of ${\mathcal F}_u$, ${\mathcal F}_1$, ${\mathcal F}_2$ is the \emph{right} framework; choice
$\leftrightarrow $ question being asked}
\xa
We have employed three distinct frameworks or families of histories, ${\mathcal F}_u$,
${\mathcal F}_1$, and ${\mathcal F}_2$ in order to describe what goes on in a projective
measurement. Which is the \emph{right} framework? That depends on the question
one wishes to address. If one is interested in relating the measurement outcome
to the quantity it was designed to measure, ${\mathcal F}_2$ is the right framework,
because it contains the corresponding microscopic events. These events are not
simply absent from ${\mathcal F}_u$ and ${\mathcal F}_1$; in those families they have no meaning,
because the $[\phi^j]$ are incompatible with the projectors used in ${\mathcal F}_u$ and
${\mathcal F}_1$ at time $t_1$. On the other hand, were one interested in whether the
particle was perturbed on its way from an initial preparation to the time $t_1$
just before the measurement took place, a PDI at $t_1$ that included the state
that evolved unitarily from the initial preparation would be appropriate. It is
always a mistake to try and answer a question about a quantum property using a
framework in which it is meaningless.
\xb
\outl{Different frameworks answer different questions}
\xa
Different incompatible frameworks are used in quantum mechanics for answering
different questions, and it is important to note that when a particular setup
allows for several alternative incompatible frameworks, the answer provided by
one of them to a question properly posed (in quantum terms) is not invalidated
by the existence of alternative frameworks. Instead, there is a general
consistency argument, see Ch.~16 of \cite{Grff02c}, that using alternative
frameworks will never lead to contradictory results, i.e., to a situation in
which some property $P$ is true (probability 1) in one framework and false
(probability 0) in another framework. Numerous quantum paradoxes represent
apparent violations of this, but when examined they always involve arguments
that combine results from incompatible frameworks. Thus a central principle of
CH is the \emph{single framework rule}: valid quantum reasoning requires that
different parts of an argument can all be embedded in, or expressed using, a
single overall framework. The choice of which framework to use will depend upon
which questions one wishes to answer. If one wants to assign probabilities to
measurement outcomes it is necessary to employ a quantum description or
framework in which the different macroscopic outcomes make sense: thus ${\mathcal F}_1$
or ${\mathcal F}_2$, rather than ${\mathcal F}_u$, for the example discussed above. If one wants
to relate the measurement outcome to the corresponding prior microscopic
property that was measured, the framework must be one in which those properties
make sense, ${\mathcal F}_2$ rather than ${\mathcal F}_u$ or ${\mathcal F}_1$.
\xb
\subsection{Quantum Particle In Different Locations? \label{sbct3.3}}
\xa
\xb
\outl{Projector $\hat R$ for particle in region $R$}
\xa
Can a quantum particle be in two different locations at the same time? To
address this we first need to say what it means for a quantum particle to have
the property that it is in some region of space $R$. That property is
represented by a projector $\hat R$ whose action on the position-space
wavefunction $\psi(\textbf{r})$ is given by
\begin{equation}
\hat R\psi(\textbf{r}) = \begin{cases}
\psi(\textbf{r}) \text{ if $\textbf{r} \in R$}\\
0 \text{ otherwise.}
\end{cases}
\label{eqn23}
\end{equation}
That is, it sets $\psi(\textbf{r})$ to zero when $\textbf{r}$ is not in $R$, but otherwise
leaves it unchanged. The projector for the particle to be simultaneously in two
regions $R_1$ and $R_2$ is $\hat R_1 \hat R_2 = \hat R_2 \hat R_1$. If
the regions $R_1$ and $R_2$ do not overlap, this product is zero, which means
the corresponding property cannot occur. Thus if `two places' is understood as
two regions in space that do not overlap, the particle cannot be in both of
them at the same time.
\xb
\outl{Particle location measurement $\rightarrow $only one place $\Rightarrow $
particle not in 2 places}
\xa
\xb
\outl{$\psi(\textbf{r})$ can be considered a pre-probability}
\xa
Once one understands that projective quantum measurements can be understood as
measuring prior properties, the same conclusion follows from the textbook
statement that even if a particle has a spread-out wavefunction, a measurement
of position will find it in only one place. Thus if the support of the particle
wavefunction is in the union $R = R_1 \cup R_2$
of two nonoverlapping regions $R_1$ and $R_2$, a
position measurement will reveal its presence in one but not in the other, and
its position just prior to measurement will be in the region indicated by the
measurement outcome.
Note that the \emph{property} $[\psi]$, the Hilbert space projector that
corresponds to the wavefunction $\psi(\textbf{r})$, will not commute with either of
the projectors $\hat R_1$ or $\hat R_2$ associated with these two regions,
assuming the support of $\psi(\textbf{r})$ is not confined to one or the other. Thus in
calculating the probabilities that the particle will be in (thus measured to be
in) $R_1$ or $R_2$ one must understand $\psi(\textbf{r})$ to be a
\emph{pre-probability}; assuming it is normalized, $\rho(\textbf{r}) = |\psi(\textbf{r})|^2$
is a probability density which can be integrated over $R_1$ or $R_2$ to find
the probability that the particle is in one of these regions, or that an
appropriate measurement will find it there. Thus implicit in our discussion is
a framework analogous to ${\mathcal F}_2$ in \eqref{eqn20}.
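The pre-probability role of $\psi(\textbf{r})$ can be sketched numerically. The two-bump wavefunction below is a made-up example whose support lies (essentially) in two nonoverlapping regions; integrating $\rho=|\psi|^2$ over each region gives the probability of finding the particle there:

```python
import numpy as np

# Normalized two-bump wavefunction with support essentially in R1 union R2.
x, dx = np.linspace(-5, 5, 2001, retstep=True)
psi = np.exp(-8 * (x + 2.5)**2) + np.exp(-8 * (x - 2.5)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

rho = np.abs(psi)**2                          # pre-probability -> density
p1 = np.sum(rho[(x > -4) & (x < -1)]) * dx    # probability particle is in R1
p2 = np.sum(rho[(x >  1) & (x <  4)]) * dx    # probability particle is in R2
print(p1, p2)                                 # each close to 0.5; sum close to 1
```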
\xb
\outl{Application: double-slit experiment, regions $R_j$, $j=1,2$ include slit
$j$}
\xa
As a particular application one can think of the case of a double-slit
experiment, and let $R_1$ and $R_2$ be nonoverlapping regions, where $R_1$
includes the first slit and its vicinity, but not the second slit, while $R_2$
is the vicinity of the second slit, but excludes the first. Suppose that at a
particular time the wave representing the particle is in the union of $R_1$ and
$R_2$. If detectors are placed immediately behind each slit, detection will
show that the particle was in one of these regions, not both. If, on the other
hand, the particle/wave emerging from the slits is undisturbed as it proceeds
towards the distant interference region where there are a large number of
detectors which detect its position at a later time, then it is correct to say
that at the earlier time the particle was in the region $R$, but introducing
the separate regions $R_1$ and $R_2$ into the quantum description at this time
will violate the consistency conditions required to assign probabilities,
making ``Which slit did it pass through?'' a meaningless question.%
\footnote{There is an alternative framework in which the particle passes
through a definite slit, but the detectors in the later interference region
end up in a macroscopic quantum superposition (Schr\"odinger cat) state. One
can understand why this framework has little appeal for understanding real
experiments!} %
For more details see Ch.~13 of \cite{Grff02c}.
\xb
\outl{Particle in $R=R_1\cup R_2$ $\not\Rightarrow $ particle in $R_1$ or $R_2$}
\xa
\xb
\outl{Analogy: Energy of harmonic oscillator two lowest levels}
\xa
But, the reader may ask, if the particle was in $R=R_1\cup R_2$,
does that not immediately imply that it was either in $R_1$ or else it was in
$R_2$? That would represent good classical reasoning, but it need not hold in
the quantum world. To see why it can fail, consider a different situation: a
quantum harmonic oscillator in which the possible energies are
$(n+1/2)\hbar\omega $ with corresponding (orthogonal) eigenstates $\ket{n}$,
$n=0,1,\cdots$. Consider the two-dimensional subspace spanned by $\ket{0}$ and
$\ket{1}$ whose projector is $P=[0]+[1]$. If the oscillator is in either of the
two energy eigenstates $\ket{0}$ or $\ket{1}$, it possesses the property $P$.
However, a superposition state $\ket{\chi} =(\ket{0} + \ket{1})/\sqrt{2}$ also lies
in this 2-dimensional subspace, but does \emph{not} possess either property
$[0]$ or $[1]$, as it does not have a well-defined energy. Similarly, a quantum
particle passing through a double-slit system cannot, in general, be said to
pass through a particular slit.
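The oscillator analogy can be verified directly with projectors on a small truncated basis (the truncation to three levels is an illustrative assumption; the conclusion does not depend on it):

```python
import numpy as np

# Truncated oscillator basis |0>, |1>, |2>; P = [0] + [1] projects onto
# the two-dimensional subspace spanned by the two lowest levels.
ket0 = np.array([1.0, 0.0, 0.0])
ket1 = np.array([0.0, 1.0, 0.0])
P = np.outer(ket0, ket0) + np.outer(ket1, ket1)

chi = (ket0 + ket1) / np.sqrt(2)

assert np.allclose(P @ chi, chi)          # chi possesses the property P
# But chi is an eigenstate of neither [0] nor [1]: it does not possess
# either individual-energy property.
assert not np.allclose(np.outer(ket0, ket0) @ chi, chi)
assert not np.allclose(np.outer(ket0, ket0) @ chi, 0.0 * chi)
```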
\xb
\section{Classical Hidden Variables \label{sct4}}
\xa
\xb
\outl{Claim of nonlocality for any present or future theory}
\xa
There are a large number of published derivations of Bell inequalities, and it
has even been claimed
\cite{dEsp06, Mdln11c, Nrsn17}
that \emph{any local} theory of the world, present or future, \emph{must} lead
to inequalities of this sort. That is, the experimental violations of Bell
inequalities not only imply that the quantum world is nonlocal, but any future
theory that gives results in agreement with these experiments will involve the
same nonlocality. It is therefore useful to say a few words about what is wrong
(from the perspective of Hilbert-space quantum mechanics) with the assumptions
made in typical derivations of Bell inequalities, and why the aforementioned
claim is false.
\xb
\outl{Bell derivations use \emph{factorization condition}
$\Pr(A,B|a,b) = \sum_\lambda \cdots$.}
\xa
\xb
\outl{Symbols defined. $\lambda =$ hidden variable, a common cause}
\xa
It will suffice to focus on the \emph{factorization condition}, which always
appears in some form or another in a derivation of the CHSH or other Bell
inequalities:
\begin{equation}
\Pr(A,B\mspace{1mu}|\mspace{1mu} a,b) = \sum_\lambda \Pr(A \mspace{1mu}|\mspace{1mu} a,\lambda ) \Pr(B \mspace{1mu}|\mspace{1mu} b,\lambda ) \Pr(\lambda ).
\label{eqn24}
\end{equation}
The symbols entering this expression have the following significance. Alice and
Bob, who are far away from each other, are measuring pairs of particles
produced at a common source. The outcome (pointer position) of Alice's
measurement is $A$ given the setting $a$ of her apparatus, which determines the
type of measurement being performed. Likewise, $B$ and $b$ refer to the outcome
and setting for Bob's measurement. On the right side of \eqref{eqn24} the
``hidden variable'' $\lambda $ determines, in a probabilistic sense, the dependence
of $A$ on $a$ and of $B$ on $b$. (One can replace the sum over $\lambda $ with an
integral; it makes no difference.) The equation \eqref{eqn24} expresses
locality in the sense that if Alice and Bob are far from each other, the choice
of $a$ and the resulting outcome $A$ should not influence $B$, nor the choice
of $b$ influence $A$, as long as $\lambda $, a ``common cause'', is held fixed.
\xb
\outl{Apply factorization to single term $M_{00} =A_0B_0$ in $S=\cdots$}
\xa
To better understand the connection of such hidden variables with Hilbert space
quantum mechanics, consider applying \eqref{eqn24} to just one of the terms on
the right side of \eqref{eqn7}, say $M_{00}=A_0 B_0$, whose average in the
state $\ket{\psi}$ we wish to evaluate using a proper quantum-mechanical
calculation. Let us set $a=0,\, b=0$, and since they are fixed,
drop them from both sides of \eqref{eqn24}, which becomes
\begin{equation}
\Pr(A_0,B_0) = \sum_\lambda \Pr(A_0 \mspace{1mu}|\mspace{1mu} \lambda ) \Pr(B_0 \mspace{1mu}|\mspace{1mu} \lambda ) \Pr(\lambda ).
\label{eqn25}
\end{equation}
Here $A_0$ and $B_0$ are defined in \eqref{eqn5}, with eigenvalues $\pm1$, so
can be written in the form, see \eqref{eqn8}:
\begin{equation}
A_0 = P_+ - P_-,\quad B_0 = Q_+ - Q_-,
\label{eqn26}
\end{equation}
using the two commuting PDIs $\{P_+,P_-\}$ and $\{Q_+,Q_-\}$. One can
think of the arguments of $\Pr(A_0,B_0)$ as the eigenvalues of these operators,
and thus \eqref{eqn25} as the set of four equations, one for each $p$ and $q$,
\begin{equation}
\Pr(P_p Q_q) = \sum_\lambda \Pr(P_p \mspace{1mu}|\mspace{1mu} \lambda ) \Pr(Q_q \mspace{1mu}|\mspace{1mu} \lambda ) \Pr(\lambda ),
\label{eqn27}
\end{equation}
which assigns probabilities to the projectors $P_p Q_q$ that together
constitute the quantum sample space that is the common refinement
of the PDIs used in \eqref{eqn26}.
Identifying \eqref{eqn24} with \eqref{eqn25} is not completely trivial, since
in the former $A$ and $B$ represent scalar quantities, measurement outcomes of
$+1$ and $-1$, whereas in \eqref{eqn25} $A_0$ and $B_0$ refer to the
eigenvalues $+1$ and $-1$ of quantum operators, and \eqref{eqn27} to the
corresponding eigenspaces. This identification is correct provided projective
measurements reveal pre-existing values, as explained in Sec.~\ref{sct3}.
\xb
\outl{Identify HV $\lambda $}
\xa
The two sides of \eqref{eqn27} will be equal if we let the hidden variable
$\lambda $ take on one of the four values $++$, $+-$, $-+$, $--$, given by the pair
$pq$, and use conditional probabilities
\begin{equation}
\Pr(P_p \mspace{1mu}|\mspace{1mu} p'q) = \delta _{pp'},\quad \Pr(Q_q \mspace{1mu}|\mspace{1mu} pq') = \delta _{qq'}
\label{eqn28}
\end{equation}
together with
\begin{equation}
\Pr(\lambda =pq) = \mte{\psi}{P_p Q_q}.
\label{eqn29}
\end{equation}
Inserting these in the right side of \eqref{eqn27} makes it equal to
$\mte{\psi}{P_p Q_q}$, the Born rule for $\Pr(P_p Q_q)$. Thus we have
a particular quantum application of \eqref{eqn24} in the case
$a=0,\,b=0$.
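As a concrete check of \eqref{eqn27}--\eqref{eqn29}, the sketch below picks illustrative PDIs ($\sigma_z$ eigenprojectors for Alice, $\sigma_x$ eigenprojectors for Bob) and a made-up entangled state, and verifies that the hidden variable $\lambda=pq$ with the delta-function conditionals reproduces the Born-rule probabilities for this one pair of settings:

```python
import numpy as np

dyad = lambda v: np.outer(v, v.conj())
# PDIs {P_+, P_-} and {Q_+, Q_-} (illustrative: sigma_z and sigma_x eigenprojectors)
P = {'+': dyad(np.array([1.0, 0.0])), '-': dyad(np.array([0.0, 1.0]))}
Q = {'+': dyad(np.array([1.0, 1.0]) / np.sqrt(2)),
     '-': dyad(np.array([1.0, -1.0]) / np.sqrt(2))}

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # a made-up entangled state

born = lambda p, q: (psi.conj() @ np.kron(P[p], Q[q]) @ psi).real
# eqn (29): Pr(lambda = pq) = <psi| P_p Q_q |psi>
Pr_lam = {(p, q): born(p, q) for p in '+-' for q in '+-'}

# eqn (27) with the delta-function conditionals of eqn (28):
for p in '+-':
    for q in '+-':
        rhs = sum(Pr_lam[(pp, qq)] * (p == pp) * (q == qq)
                  for pp in '+-' for qq in '+-')
        assert np.isclose(rhs, born(p, q))   # hidden-variable sum = Born rule
```

This confirms that for \emph{fixed} settings a hidden-variable model always exists; the difficulty, discussed next, arises only when one $\lambda$ must serve all four setting pairs.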
\xb
\outl{Need same HV for all $j,k$ as BI derivation assumes $\lambda $ independent of
$a,b$}
\xa
What works for $a=0,\,b=0$, the $M_{00}$ term in \eqref{eqn7}, will work
equally well for any of the other terms; one simply has to use appropriate
choices for the PDIs $\{P_p\}$ and $\{Q_q\}$. But we know that the quantum
average of the quantum $S$ in \eqref{eqn7} can exceed the classical CHSH bound
in \eqref{eqn3}. Why is this? The trouble arises because if, for example, we
consider $M_{10}$ in place of $M_{00}$ we will need a different choice for
$P_+$ and $P_-$ when $A_0$ in \eqref{eqn26} is replaced with $A_1$, with which
it does not commute, and this means changing the definition, or at least the
physical meaning, of $\lambda $. But derivations of Bell inequalities always assume
that $\lambda $, which is supposed to represent an earlier state of the measured
particles, does \emph{not} depend on the choices of $a$ and $b$ made by Alice
and Bob, so changing it is not allowed.
\xb
\outl{Bell inequality derivations do not accommodate noncommutation}
\xa
Might there be some different choice for $\lambda $ that evades this difficulty? Not
likely, given the large number of careful analyses that show that
\eqref{eqn24}, with $\lambda $ independent of $a$ and $b$, leads inexorably to the
CHSH inequality, which is \emph{not} satisfied by the correct quantum average
of $S$ in \eqref{eqn7}. What the foregoing analysis suggests is that the
fundamental problem with such derivations is that they do not take proper
account of the possible \emph{noncommutativity} of quantum projectors
representing the quantum properties of interest. And since this failure applies
to $^{21}$Ne, locality cannot be an issue.
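That the quantum average of $S$ really does exceed the classical bound of 2 is quickly verified. The sketch below uses the spin-singlet state and one standard choice of measurement angles (the sign conventions here are illustrative and may differ from those implicit in \eqref{eqn7}):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)
obs = lambda t: np.cos(t) * sz + np.sin(t) * sx   # spin component at angle t

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
E = lambda a, b: singlet @ np.kron(obs(a), obs(b)) @ singlet  # <A_a B_b> = -cos(a-b)

a0, a1 = 0.0, np.pi / 2
b0, b1 = 5 * np.pi / 4, 3 * np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))     # 2*sqrt(2) ~ 2.828 > 2: the CHSH bound is violated
```

Each setting pair needs its own PDI here: $\mathrm{obs}(a_0)$ and $\mathrm{obs}(a_1)$ do not commute, which is exactly the noncommutativity the classical derivations ignore.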
\xb
\outl{Basic problem: factorization assumes $\lambda $ is element of Cl
sample space}
\xa
To summarize, the fundamental difficulty with the factorization condition
\eqref{eqn24} is that it assumes a \emph{single} sample space of
mutually-exclusive possibilities, independent of $a$ and $b$, with elements
labeled by $\lambda $. This would be quite appropriate for a classical system where
there is a single phase space and the sample space employs non-overlapping
subsets of this phase space. But a quantum Hilbert space allows incompatible
sample spaces: different PDIs whose projectors do not commute and therefore
lack a common refinement. Thus the usual derivations of CHSH and
other Bell inequalities employ \emph{classical} physics to discuss
\emph{quantum} systems, so it is not surprising when these inequalities fail to
agree with quantum predictions, or the experiments that confirm these
predictions.
\xb
\section{Einstein Podolsky Rosen (EPR)\label{sct5}}
\xa
\xb
\subsection{Bohm Version of EPR \label{sbct5.1}}
\xa
\xb
\outl{Singlet state, correlated measurement outcomes}
\xa
While the mistake associated with the claim that the violation of Bell
inequalities implies nonlocality in the quantum world should be evident from
the neon example of Sec.~\ref{sbct2.2}, and from the use of classical hidden
variables for deriving these inequalities, Sec.~\ref{sct4}, there are
useful lessons to be learned from considering the original
Einstein-Podolsky-Rosen (EPR) argument \cite{EnPR35}, where locality was simply
assumed, using the simplified version introduced by Bohm, Ch.~22
of \cite{Bhm51}. Two spin-half particles, $a$ and $b$, are prepared in the
spin-singlet state
\begin{equation}
\ket{\psi_s} = \left( \ket{0}_a\otimes \ket{1}_b -
\ket{1}_a\otimes \ket{0}_b \right)/\sqrt{2},
\label{eqn30}
\end{equation}
with $\ket{0}$ and $\ket{1}$ the $+1/2$ and $-1/2$ (in units of $\hbar$)
eigenstates of $S_z$. Particle $a$ is sent to Alice and $b$ to Bob, who can then
carry out measurements of the same or different components of spin angular
momentum. If they measure the same component, say $S_w$, where $w$ could be $x$
or $z$ or any other direction in space, the results will be
opposite: if Alice observes $+1/2$ Bob will find $-1/2$, or $+1/2$ if
Alice observes $-1/2$.
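The perfect anticorrelation is easy to verify numerically for any direction $w$ (restricted here, purely for brevity of the sketch, to the $x$-$z$ plane):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # eqn (30)

for w in np.linspace(0, np.pi, 7):        # sample directions in the x-z plane
    Sw = np.cos(w) * sz + np.sin(w) * sx  # spin component along w (units of hbar/2)
    corr = singlet @ np.kron(Sw, Sw) @ singlet
    assert np.isclose(corr, -1.0)         # <S_w S_w> = -1: outcomes always opposite
```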
\xb
\outl{No Hilbert subspace corresponds to ``$S_x=+1/2$ \mbox{\small AND}\ $S_z = -1/2$''}
\xa
Recall that the Hilbert space of a spin-half particle is two-dimensional, and
thus the PDI associated with any spin component $S_w$ consists of two
projectors onto pure states. Neither projector associated with $S_x$
commutes with either of the projectors associated with $S_z$, and
consequently there is no subspace of the Hilbert space which can represent
simultaneous values of both $S_x$ and $S_z$. Hence expressions like
``$S_x=+1/2$ \mbox{\small AND}\ $S_z = -1/2$'' are meaningless,\footnote{%
In quantum logic such a conjunction is a property that is always
false (the zero-dimensional subspace).} %
and the same holds for any two distinct components of angular momentum.
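The absence of a joint property shows up algebraically in the fact that the product of the two projectors is not itself a projector:

```python
import numpy as np

# Spin-half projectors onto S_x = +1/2 and S_z = -1/2 eigenstates.
Px_plus = 0.5 * np.array([[1, 1], [1, 1]])        # |x+><x+|
Pz_minus = np.array([[0, 0], [0, 1]], dtype=float)  # |z-><z-|

C = Px_plus @ Pz_minus
assert not np.allclose(C, Pz_minus @ Px_plus)     # the projectors do not commute
assert not np.allclose(C @ C, C)                  # their product is not a projector:
# no subspace represents "S_x = +1/2 AND S_z = -1/2"
```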
\xb
\subsection{The Counterfactual Argument \label{sbct5.2}}
\xa
\xb
\outl{Apparatus measures either $S_z$ or $S_x$; choice made just before particle
arrives }
\xa
While, for reasons given above, $S_x$ and $S_z$
for a spin-half particle cannot be measured simultaneously, it is possible in
principle to design an apparatus to measure \emph{either} $S_x$ \emph{or} $S_z$,
with the choice between the two made just before the particle enters the
measuring device. (E.g., a small region with a uniform magnetic field in the $y$
direction placed just in front of the apparatus can cause $S_x=\pm 1/2$ to
precess into $S_z=\pm1/2$, turning an $S_z$ into an $S_x$ measurement; this
field can be switched on or off just before the arrival of the particle.)
\xb
\outl{Ctfl: $S_z$ measured, $S_x$ could have been measured: $\Rightarrow $ QM
incomplete?}
\xa
\xb
\outl{References for additional discussions of counterfactuals, locality}
\xa
Suppose that with the $S_z$ setting Alice finds $S_z=+1/2$ during a particular
run. One can imagine that Alice \emph{could} have chosen the $S_x$ setting, and
in that case \emph{would have} obtained either $S_x=+1/2$ or $-1/2$, we do not
know which. Does it not follow that the particle had \emph{both} a definite
$S_z$ value revealed by the later measurement \emph{and} a specific $S_x$
component, the one that Alice \emph{would} have learned \emph{had} she measured
$S_x$ rather than $S_z$, a choice which she \emph{could have made} at the very
last instant before the particle reached the apparatus? The italicized words
indicate that this is a \emph{counterfactual} argument: it combines what
actually happened with what \emph{would have} happened in a similar but
different situation. (For a discussion of consistent ways to discuss
counterfactuals within quantum mechanics, see \cite{Grff99} or Ch.~19 of
\cite{Grff02c}, and for an application to (non)locality issues, the interchange
in \cite{Stpp12,Grff12b}.) Doesn't this prove that the quantum Hilbert space
provides but an \emph{incomplete} description of physical reality? The reader
familiar with their original paper will notice the similarity with EPR's
argument, which also contains the (implicit) assumption that if one measured
one observable, one could very well have measured a different, incompatible
observable.
\xb
\outl{Reverse EPR: QM complete $\Rightarrow $ separate runs for incompatible observables;
locality has nothing to do with the matter}
\xa
However, one can just as well run the EPR argument in reverse. Given a
classical situation where all observables (by definition) commute, or a quantum
situation with two \emph{commuting} observables, $FG=GF$, it makes perfectly
good sense to ask: Suppose $F$, \eqref{eqn8}, was measured with the result
indicating, say, the property $P_2$, what \emph{would have happened} in this
instance \emph{if instead} $G$, \eqref{eqn10} had been measured, i.e., what is
the probability that the measurement would have revealed a property $Q_k$?
Given some initial state the joint probability distribution corresponding to
the common refinement, the PDI composed of the nonzero $P_jQ_k$, can be
computed, and from it a conditional probability $\Pr(Q_k\,\boldsymbol{|}\, P_2)$, which is
then a sensible (in general, probabilistic) answer to the counterfactual
question. But when the projectors do not commute this cannot be done, and then,
as noted earlier, $F$ and $G$ must be measured in separate experiments, and
there is no reason to suppose that the value of $F$ revealed in one experiment
has anything to do with the value of $G$ obtained in a different, independent
experiment. In other words, the \emph{completeness} of Hilbert space quantum
mechanics, which makes \emph{impossible} the simultaneous measurement of $S_x$
and $S_z$, as there is nothing in the Hilbert space that corresponds to a joint
property, undermines the counterfactual assumption that when Alice measured
$S_z$ she \emph{could have} measured $S_x$ \emph{in the same run}. Were $S_x$
to have been measured, it would have to have been in a different run, and there
is no reason why the value of $S_z$ measured in one run will somehow be related
to the value of $S_x$ measured in a different run.
\xb
\outl{Ctfl argument implicit in EPR is blocked if QM is complete}
\xa
Hence the counterfactual notion which enters, at least implicitly, the EPR
argument is blocked as soon as one assumes, contrary to EPR, that Hilbert space
quantum theory is complete, and there are no additional hidden variables. Given
that attempts to supplement the quantum Hilbert space with hidden variables
have thus far failed---as shown most clearly by experiments confirming the
(Hilbert space) quantum violations of Bell inequalities
\cite{AsGR81,Hnao15ea,Shao15ea,Gsao15ea}---it would seem that the original EPR
argument, that (Hilbert space) quantum mechanics is incomplete, fails.
Locality, or its absence, has nothing to do with the matter: the issue is what
measurements carried out on a \emph{single} particle in a \emph{single}
location can tell one about the properties of \emph{that} particle.
\xb
\subsection{Quantum Common Cause \label{sbct5.3}}
\xa
\xb
\outl{Nonlocality belief arises from correlations without common cause}
\xa
\xb
\outl{Will show there is a \emph{quantum} common cause}
\xa
\xb
\outl{Photon experiments use common source/cause to define coincidences}
\xa
\xb
\outl{Need argument for common source of polarizations}
\xa
As noted in Sec.~\ref{sct1}, one reason for the belief in instantaneous
nonlocal quantum influences is that quantum theory predicts, and experiment
confirms, the existence of \emph{correlations} which violate Bell inequalities,
and thus cannot be explained by a common cause based on classical hidden
variables. However, opening the black box and applying consistent quantum
principles provides an explanation for the correlations in terms of local
\emph{quantum} common causes.
Experiments that test Bell inequalities using entangled photon pairs already
assume a common cause in the sense that pairs of photons produced at the source
in the same, rather than a different, down conversion event are identified
using their arrival times. All that is needed in addition is an argument that
the polarizations measured later were also created in the same (local) event.
\xb
\outl{A,B (spin) measurements $\leftrightarrow $ prior properties traced back to common
preparation $\Rightarrow $ correlation via common cause, not evident in `collapse'
framework}
\xa
Here we employ the principle discussed in Sec.~\ref{sct3}, that measurements of
a suitable sort can be interpreted, by using an appropriate framework, as revealing
prior properties of the measured system. Reverting to spin-half language, if
Alice's apparatus is set to measure $S_z$ for particle $a$ and the outcome
corresponds to, say, $S_z=-1/2$, she can conclude that particle $a$ possessed
this property just before the measurement took place, and, assuming it was not
perturbed on its way to her apparatus, at all previous times following the
initial preparation. The same applies to Bob's measurement of $S_z$ for
particle $b$. Thus by applying the Born rule right after the two particles are
prepared in the singlet state \eqref{eqn30}, one sees that the probability that
particles $a$ and $b$ have the same $z$ component of spin is zero, and the two
possibilities for opposite $S_z$ values each has a probability of 1/2. A similar
argument using an appropriate framework applies to the case where Bob
measures $S_w$ for an arbitrary direction $w$. The probabilities for the
correlations predicted by using what might be called a `measurement' framework,
in which both measurement outcomes are traced all the way back to the source,
are exactly the same as those predicted by textbook quantum theory using
wavefunction collapse in a `collapse' framework, Sec.~\ref{sct6},
in which the entangled singlet state persists right up to the instant
before one of the measurements. There is no reason that the Born rule can only
be applied when a measurement takes place; this mistaken notion has been one of
the reasons for the lack of progress in quantum foundations in resolving its
infamous `measurement problem'.
\xb
\outl{Collapse framework does not invalidate inferences in
prior property framework}
\xa
\xb
\outl{$M_{00}$ measurement in Sec.~\ref{sct4}, illustrates `quantum cause'}
\xa
As noted in Sec.~\ref{sbct3.2}, inferences obtained in one framework are not
invalidated by the existence of alternative frameworks. The `collapse'
framework, which treats the entangled state as a property right up until the
measurement takes place, precludes any discussion during that time period of
spin states of the individual particles---see the comments in
Sec.~\ref{sct6}---thus concealing the fact made obvious in the `measurement
framework', in which measurements reveal prior properties, that the quantum
correlations between measurement outcomes have an explanation in terms of a
(quantum) common cause.
The reader may also find it helpful to consider the discussion of the
measurement of $M_{00}$ in Sec.~\ref{sct4}, where proper use was made of a
genuinely quantum `hidden variable' $\lambda $, as an example of a `quantum cause',
in the same sense as that employed here.
\xb
\outl{Alice's measurement of $a$ does not affect particle $b$, but her knowledge
of the initial $\ket{\psi_s}$ allows her to infer a property of $b$}
\xa
Alice's choice of measurement on particle $a$ has no influence at all on Bob's
particle $b$ and whatever measurements may be carried out on it. However, her
knowledge of the outcome of a measurement of a particular component of angular
momentum allows her to infer a property possessed by particle $a$ before the
measurement took place. Combined with what she knows about the preparation
protocol, in particular the initial state $\ket{\psi_s}$, this allows her to
infer something about particle $b$, from which she can also infer the
probability of the outcome of a measurement of particle $b$. Thus if particle
$a$ is measured to have $S_z=-1/2$, Alice can assign an $S_z=+1/2$ property to
particle $b$ and predict with certainty the outcome of Bob's measurement of
$S_z$, or assign a probability to the outcome if Bob instead measures some
other component of spin angular momentum.
\xb
\outl{Classical analogy: colored slips of paper sent to Alice, Bob}
\xa
The following classical analogy may help in understanding this.
Charlie inserts red and green slips of paper into two identical, opaque
envelopes; then chooses one at random and mails it to Alice in Atlanta, and the
other to Bob in Boston. From her knowledge of the preparation protocol Alice,
upon opening her envelope and seeing the color of the slip of paper it contains,
can immediately infer the color of the paper in Bob's envelope, whether or not
he has already opened it or will open it at a later time. No magic or
mysterious long-range influence is needed to understand how this works, and the
same is true of its quantum analog.
Granted, this classical analogy does not cover all possibilities present in the
quantum case; in particular the situation in which Alice measures one component
of spin angular momentum and Bob a different component. However, it is still
correct to say that from the $S_z$ outcome of her measurement and her knowledge
of the initial preparation, Alice can assign (conditional) probabilities to the
outcomes of a measurement by Bob in the $S_x$ or any other basis, and this
possibility has nothing to do with her measurement having some mysterious
effect upon Bob's particle.
\xb
\section{Wavefunction Collapse and Einstein Locality \label{sct6}}
\xa
\xb
\outl{EPR $\Pr(a^j,b^k) = \mte{\psi_s}{\,[a^j]\otimes [b^k]\,}$}
\xa
Spin-spin correlations in the Bohm version of EPR are usually calculated
by one of two closely-related methods. Let us suppose that Alice and Bob carry
our measurements in the orthonormal bases $\{\ket{a^0},\ket{a^1}\}$ and
$\{\ket{b^0},\ket{b^1}\}$, respectively. The joint probability distribution
for an initial state $\ket{\psi_s}$ can be computed using the standard formula
\begin{equation}
\Pr(a^j,b^k) = \mte{\psi_s}{\,[a^j]\otimes [b^k]\,}.
\label{eqn31}
\end{equation}
The discussion in Sec.~\ref{sbct3.2} justifies thinking of $\ket{\psi_s}$
as a pre-probability, and identifying $[a^j]$ and
$[b^k]$ as properties of the $a$ and $b$ particles prior to the measurement,
the point of view adopted in the common cause discussion in Sec.~\ref{sbct5.3}.
\xb
\outl{$\Pr(a^0,b^k)$ by using `collapse' by measurement of $a$ to give
$\Pr(b^k\,\boldsymbol{|}\, a^0)$ }
\xa
An alternative approach which yields the same joint probabilities employs
\emph{wavefunction collapse}. Assume that Alice's measurement is carried out
first, and the outcome corresponds to $[a^0]$. This is thought of as
``collapsing'' the wavefunction $\ket{\psi_s}$ to a new state
\begin{equation}
\ket{\psi_c^0} = [a^0]\, \ket{\psi_s}/\sqrt{\mte{\psi_s}{\,[a^0]\,}},
\label{eqn32}
\end{equation}
(where $[a^0]$ stands for $[a^0]\otimes I_b$). The (conditional)
probability that Bob's measurement outcome will correspond to $[b^k]$ is then
computed using the collapsed state:
\begin{equation}
\Pr(b^k\,\boldsymbol{|}\, a^0) = \mte{\psi_c^0}{\,[b^k]\,}.
\label{eqn33}
\end{equation}
When multiplied by $\Pr(a^0) = \mte{\psi_s}{\,[a^0]\,}$ this gives the result
in \eqref{eqn31}.
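This equivalence is easy to check numerically. The following sketch (an illustration added here, not code from the original text; it uses NumPy and an arbitrary pair of measurement bases in the $x$-$z$ plane) verifies that the collapse rule \eqref{eqn32}-\eqref{eqn33}, multiplied by $\Pr(a^j)$, reproduces the Born-rule joint probabilities \eqref{eqn31} for the singlet state:

```python
import numpy as np

# Singlet state |psi_s> = (|01> - |10>)/sqrt(2) on C^2 (x) C^2.
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def basis_projectors(theta):
    """Projectors [v^0], [v^1] for spin measured along angle theta in the x-z plane."""
    v0 = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    v1 = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return np.outer(v0, v0), np.outer(v1, v1)

th_a, th_b = 0.0, 1.1          # arbitrary measurement directions
A = basis_projectors(th_a)     # Alice's basis {|a^0>, |a^1>}
B = basis_projectors(th_b)     # Bob's basis   {|b^0>, |b^1>}
I2 = np.eye(2)

for j in range(2):
    # Eq. (31): joint probabilities from the pre-probability |psi_s>.
    joint = [psi @ np.kron(A[j], B[k]) @ psi for k in range(2)]
    # Eqs. (32)-(33): collapse to |psi_c^j>, then conditional probabilities.
    pa = psi @ np.kron(A[j], I2) @ psi               # Pr(a^j)
    psi_c = np.kron(A[j], I2) @ psi / np.sqrt(pa)    # collapsed state
    cond = [psi_c @ np.kron(I2, B[k]) @ psi_c for k in range(2)]
    assert np.allclose(joint, pa * np.array(cond))   # both methods agree

# Perfect anticorrelation for equal measurement directions:
P0 = basis_projectors(th_a)[0]
assert np.isclose(psi @ np.kron(P0, P0) @ psi, 0.0)
```

The final assertion is the correlation used in the common-cause discussion of Sec.~\ref{sbct5.3}: the probability that both particles have the same spin component along a common axis vanishes.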
\xb
\outl{The `collapse' from Alice's measurement of $a$ $\not\Rightarrow $ effect on
particle $b$}
\xa
\xb
\outl{Confusion arising from Schr\"odinger steering}
\xa
There is nothing wrong with this collapse procedure for obtaining the result
in \eqref{eqn31}. However, as noted earlier in Sec.~\ref{sbct5.3}, Alice's
measurement has no effect upon Bob's particle. Thus treating the collapse
process in which $\ket{\psi_s}$ is replaced by $\ket{\psi_c^0}$, as an actual
physical process in which Alice's measurement has somehow altered a property of
particle $b$, is incorrect, and this error has given rise to a great deal of
confusion, starting with EPR and extending up to more recent discussions of
\emph{steering}, e.g. \cite{WsJD07,CvSk17,Tsao18}, a term originating with
Schr\"odinger \cite{Schr36} and expressing the idea that if Alice and Bob share
an entangled state, Alice's measurement may be able to alter Bob's particle.
\xb
\outl{$\ket{\psi_s}$ = pre-probability; $[\psi_s]$ inconsistent with individual
$a$, $b$ properties}
\xa
\xb
\outl{Does Alice measurement affect $b$?---requires appropriate framework.
Answer: no effect. Special case of Principle of Einstein Locality}
\xa
\xb
\outl{Entangled state $\not\Rightarrow $ one system influences the other}
\xa
The mistake arises from a misunderstanding of the collapse framework. When
$\ket{\psi_s}$ is employed as a pre-probability, as in \eqref{eqn31}, it cannot
be identified with a physical property of either particle $a$ or $b$, since the
corresponding projector $[\psi_s]$ does not commute with any nontrivial
property of either particle. (The trivial properties are the identity projector
$I$, always true, and the zero projector, always false.) Therefore its
collapse to $\ket{\psi^0_c}$ in \eqref{eqn32} cannot by itself indicate a
change in some property of particle $b$. To discuss whether a measurement by
Alice has a physical effect upon Bob's particle requires the use of a framework
in which properties of the latter make sense, both before and after Alice's
measurement takes place. This matter was studied in Ch.~23 of \cite{Grff02c}
for the Bohm version of EPR, showing that there is no such nonlocal effect as
long as Alice's measurement apparatus does not directly interact with Bob's
particle. This is a particular instance of a quite general \emph{Principle of
Einstein Locality}:%
\begin{quote}
Objective properties of isolated individual systems do not change when
something is done to another non-interacting system.
\end{quote}
Its proof will be found in \cite{Grff11}. Here ``non-interacting'' means that
the two systems have independent dynamics: the unitary time-development
operator for the combined systems is the tensor product of the individual
time-development operators of the separate systems. Whether or not the systems
are initially in an entangled state is irrelevant; entanglement should never be
thought of as a mechanism by which one system can `influence' another. This
result is hardly surprising given the widespread acceptance of the no-signaling
principle, since if, contrary to Einstein locality, there were a change in some
objective property, that change could be used to convey information, or at
least this is how a physicist would tend to view the matter.%
\footnote{For an alternative perspective by a philosopher, including a very
clever construction of an influence that carries no information, see Ch.~4 of
\cite{Mdln11c}.} %
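The tensor-product condition on the dynamics can be probed numerically. The sketch below (my own illustrative check, not code from the paper) shows that, for the singlet state, the reduced density matrix of particle $b$, which determines the statistics of every measurement Bob can perform, is unchanged by an arbitrary unitary applied to particle $a$ alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Singlet state of particles a and b, as a density matrix on C^2 (x) C^2.
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)
rho = np.outer(psi, psi)

def reduced_b(rho):
    """Partial trace over particle a: the state Bob's measurements can probe."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

def random_unitary(rng):
    # Random 2x2 unitary via QR of a complex Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

rho_b = reduced_b(rho)
for _ in range(100):
    U = np.kron(random_unitary(rng), np.eye(2))   # acts on particle a only
    assert np.allclose(reduced_b(U @ rho @ U.conj().T), rho_b)
```

The same invariance holds for a measurement carried out on $a$ whose outcome is not communicated to Bob, which is the content of the no-signaling principle mentioned above.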
\xb
\section{\hspace*{.2cm} Conclusion \label{sct7}}
\xa
\xb
\subsection{\ Summary \label{sbct7.1}}
\xa
\xb
\outl{There are NO nonlocal influences. Cannot be detected. No signaling }
\xa
The central conclusion of this paper is the complete absence of nonlocal
influences between quantum systems which are spatially separated and not
interacting with each other: doing something to one system has no effect,
instantaneous or otherwise, upon the other system.
Experiments show no evidence of such effects, and the ``no-signaling''
principle, widely accepted in discussions of quantum information, assumes
their absence.
\ca%
Were such effects present it
should be possible to detect them by means of an experiment, and a confirmed
detection would be worth a Nobel Prize. The ``no-signaling'' principle, widely
accepted in discussions of quantum information, assumes the absence of such
influences.
\cb%
In brief, if physical reality is quantum mechanical, then quantum
nonlocality, in the sense of nonlocal influences, is a myth.
\xb
\outl{Source of nonlocality idea: Wavefn collapse misunderstood }
\xa
Why, then, the widespread assumption, which often seems taken for granted
without any need to defend it, that quantum mechanics is somehow ``nonlocal''
in a way in which classical physics is not? Wavefunction collapse, produced by
measurements when applied to a system in an entangled state with a distant
system, is one source of the nonlocality notion, and this reflects the
inadequate treatment of measurements in textbooks and much of the quantum
foundations literature. As shown in Sec.~\ref{sct6}, wavefunction collapse is
simply a method of computing a conditional probability, as in classical physics
when two particles are statistically correlated. While this method of
calculation might sometimes be useful in terms of intuitive insight, it does
not correspond to a physical process.
\xb
\outl{Main nonlocality source: Claim local world $\Rightarrow $ Bell inequalities hold}
\xa
\xb
\outl{Refuted by local $^{21}$Ne; factorization formula requires Cl HVs}
\xa
\xb
\outl{Bell $\leq$s assume micro world is \emph{classical}}
\xa
The principal source of the current widespread belief in quantum nonlocality is
undoubtedly the claim by Bell and his successors that in a local world certain
statistical correlations must satisfy some type of Bell inequality. The CHSH
inequality, which belongs to this category, was studied in Sec.~\ref{sct2}
where it was shown that it is violated by quantum correlations which have
nothing to do with spatial separation, but are already exhibited by states
associated with the spin $3/2$ ground state of a $^{21}$Ne atom. This was
followed in Sec.~\ref{sct4} with a discussion of the factorization formula
which is central to derivations of Bell inequalities, and makes reference to a
hidden variable or variables, typically denoted by $\lambda $. Such hidden variables
are always assumed to be classical; they lack the structure of noncommuting
projectors which are central to Hilbert space quantum mechanics. It is
regrettable that so much attention has been paid to the locality assumptions
involved in the derivation of Bell inequalities, and so little to the equally or
more important assumption that quantum probabilities can be discussed using a
classical sample space: in essence, assuming the microscopic world is not
quantum mechanical but classical.
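For readers who want to see the violation concretely, the sketch below (an illustration added here, not taken from the paper; it uses the familiar two-spin singlet for definiteness, whose four-dimensional Hilbert space is the same as that of the spin-3/2 example) evaluates the CHSH combination using only Hilbert-space quantum mechanics, with no classical hidden variables:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def E(ta, tb):
    """Correlator <psi|(n_a.sigma)(x)(n_b.sigma)|psi>, axes in the x-z plane."""
    Sa = np.cos(ta) * sz + np.sin(ta) * sx
    Sb = np.cos(tb) * sz + np.sin(tb) * sx
    return psi @ np.kron(Sa, Sb) @ psi   # equals -cos(ta - tb) for the singlet

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
assert abs(S) > 2                        # violates the CHSH bound |S| <= 2
assert np.isclose(abs(S), 2 * np.sqrt(2))
```

The violation is a property of the noncommuting operators alone; nothing in this calculation refers to spatial separation.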
\xb
\outl{Qm measurement not properly discussed in textbooks. }
\xa
\xb
\outl{(First) measurement problem solved by choice of pointer PDI}
\xa
\xb
\outl{Second problem solved using histories. Single framework rule is needed }
\xa
Much of the confusion surrounding discussions of nonlocality has to do with the
absence from standard quantum mechanics, understood as what is found in
textbooks, of a proper discussion of quantum \emph{measurements}, and for this
reason the essential principles have been summarized in Sec.~\ref{sct3}. The
key to resolving what is generally referred to as the \emph{measurement
problem}, the possible appearance of superpositions of macroscopic `pointer'
states (Schr\"odinger cats), is to use the consistent histories formulation of
quantum theory in which time development is represented by stochastic histories
rather than restricted to the unitary time development of a wavefunction. Using
a \emph{framework} (family of histories) with projectors for the pointer states
gets rid of this measurement problem. Using a framework in which these
macroscopic measurement outcomes are correlated with microscopic properties of
the measured system at a time just before the measurement took place, resolves
a second measurement problem: how the macroscopic outcomes can be used to infer
(retrodict) the prior microscopic property that resulted in (caused) a
particular outcome. Consistent reasoning using frameworks requires paying
attention to the (possible) noncommutativity of quantum projectors as embodied
in the \emph{single framework rule}.
\xb
\outl{Bell $\leq$s error: Factorization condition assumes single Cl sample
space}
\xa
The tools used to analyze measurements in a fully quantum mechanical fashion
made it possible to identify, in Sec.~\ref{sct4}, the fundamental error, from
the perspective of a consistent quantum theory, in derivations of the CHSH and
other Bell inequalities. It is the assumption that the factorization condition
\eqref{eqn24} for probabilities can use \emph{classical} hidden variables
(parametrized by the symbol $\lambda $) associated with a \emph{single} sample
space, rather than appropriate quantum sample spaces, projective decompositions
of the identity (PDIs).
\xb
\outl{EPR: did not understand measurements; false counterfactual assumption}
\xa
\xb
\outl{Einstein was right: there are no ghostly nonlocal influences}
\xa
Bell's work was motivated by the Einstein-Podolsky-Rosen (EPR) paper, in which
locality was simply assumed, and the claim was made that quantum mechanics is
incomplete. Their work was based on an inadequate understanding of quantum
measurements, which at that time were assumed to simply collapse wavefunctions.
In addition, their argument employs a counterfactual assumption which,
translated into the Bohm version of the EPR paradox, is that while Alice
actually measured (say) $S_z$, she could instead have measured and obtained a
value for $S_x$ during this particular run. But if one assumes, contrary to
EPR, that Hilbert space quantum mechanics is complete, such a counterfactual
assumption is misleading, since a spin-half particle cannot simultaneously
possess an $S_x$ and an $S_z$ property. That has nothing to do, at least in any
direct sense, with the EPR locality assumption. On the other hand, Einstein's
belief that there are no ghostly nonlocal influences (``spukhafte
Fernwirkungen'') is fully justified, as noted in Sec.~\ref{sct6}, by a
consistent analysis employing Hilbert subspaces resulting in a Principle of
Einstein Locality.
\xb
\outl{EPR correlations explained by \emph{quantum} common cause}
\xa
An additional argument, Sec.~\ref{sbct5.3}, undermines claims for quantum
nonlocality based on correlations that violate Bell inequalities by showing that
the relevant \emph{quantum} correlations can be understood as arising from a
\emph{local quantum common cause}, something which, in the case of the
polarization of down-converted photons, occurs at the source where they were
created. This understanding makes use of the analysis of quantum measurements
in Sec.~\ref{sct3}, in particular the fact that measurement outcomes
reflect earlier microscopic properties of the measured system when analyzed
using an appropriate framework.
\xb
\subsection{\ Terminology: Some Suggestions \label{sbct7.2}}
\xa
\xb
\outl{Use of misleading terminology. Example of `heat capacity'}
\xa
Even the reader who agrees with the arguments presented in this paper may
nonetheless, and with some justification, take the attitude that scientific
terminology often acquires a technical meaning that is different from the way in
which it was first used, and hence there is no difficulty if `local' and
`nonlocal' continue to be used in the same way as in much of the current
literature on quantum foundations and quantum information. After all, there
are other examples: the term `heat capacity' is in common use in
thermodynamics, and no one, except perhaps beginning students, is confused by
the fact that `heat' is no longer regarded as a fluid, and heat capacities are
typically measured by doing work on the system of interest, rather than
connecting it to a thermal reservoir.
\xb
\outl{`Heat capacity' sometimes OK; Qm `nonlocality' almost always wrong}
\xa
However, in the case of `heat capacity' there are at least some circumstances
in which heat can, indeed, be treated as a conserved fluid, whereas in quantum
mechanics `nonlocality' seems in almost every respect a misleading and
confusing term. Granted, students who are setting up apparatus in the
laboratory are, at least after a while, not likely to worry that an experiment
set up at some distant location might suddenly make a photon disappear while on
its way through an optical fiber to a detector, or perhaps suddenly appear out
of nowhere. Theoreticians are more likely to be confused by nonlocality claims,
and the appearance of such claims in textbooks and the popular literature can
only add to the confusion felt by students learning quantum theory for
the first time.
\xb
\outl{Suggest: BELL nonlocal; SCHRODINGER steering; ``Qm world is LOCAL'' }
\xa
Those who agree with the author that clear thinking is a key part of good
physics, and using appropriate terms is an aid to clear thinking, might at
least wish to consider some alterations and/or clarifications in the use of
various terms. Replacing `nonlocal' with `Bell nonlocal', a term already used
in some publications, would be a useful clarification, and certainly
appropriate, in that Bell himself believed (incorrectly) that violations of his
inequalities indicated nonlocality. Similarly, replacing `steering' with
`Schr\"odinger steering' would be a step in the right direction. However, in
both cases adding a comment that the quantum world is in reality local---there
are no instantaneous long-range influences---would help counter a widespread,
but mistaken, belief to the contrary.
\xb
\outl{Replace or supplement `local' with `classical', in `local
realism/causality'}
\xa
Replacing, or at least supplementing, `local' with `classical' in certain
phrases would also be an improvement. Thus claims, e.g. \cite{Hnao15ea,Wsmn15},
that recent experiments show that quantum mechanics is inconsistent with `local
realism' lead to the strange conclusion that if quantum mechanics is local (as
argued here) it must be unreal. But we have ample evidence that the real world
is best described by quantum, not classical, mechanics, and so it is
`\emph{classical} realism' that is ruled out by experiments. Similarly,
replacing `local causality' as used in \cite{Bll90d,Nrsn11} with
`\emph{classical} local causality' as a key ingredient in the derivation of
Bell inequalities would help clarify their true nature.
These are simply offered as suggestions; the author does not wish to lose
friends (assuming he still has some) through disputes over terminology. The
goal should be to use terms, including technical terms, which aid clear
thinking rather than creating confusion.
\xb
\section{Introduction \vspace{-0.3cm}}
The energy loss of a hard parton traveling in the quark-gluon plasma can be studied within a weakly-coupled kinetic approach.
Parton-plasma interactions are treated perturbatively, and the dynamics of well-defined quasiparticles (quarks and gluons) are described by transport equations \cite{arnold2003effective}. Both the energy gain and loss of the partons are included naturally.
Leading-order realizations of weakly-coupled effective kinetic theory, such as MARTINI~\cite{Schenke:2009gb}, generally divide parton-plasma interactions into elastic and inelastic processes.
In Ref.~\cite{ghiglieri2016jet}, it was shown that the parton energy loss can be equivalently reformulated in terms of hard large-angle interactions and soft small-angle collisions.
A major advantage of this reformulated approach is that it could be extended to next-to-leading order~\cite{caron2009g,ghiglieri2016jet}.
There are nevertheless important benefits at leading order as well. Soft and hard parton-plasma interactions can be systematically factorized. The large number of soft interactions can be efficiently described in a stochastic approach with transport coefficients; these drag and diffusion coefficients absorb plasma effects (e.g. Debye screening) that are particularly important for soft interactions. Large-angle interactions can be treated separately with emission rates, as in previous implementations. \par
In this work, we present the first numerical implementation of this reformulated energy loss model. We study the dependence of the parton energy loss on the cutoffs dividing the soft and hard interactions. We show that the reformulated energy loss remains valid at large values of the strong coupling constant $\alpha_s$.
\section{Description of the Reformulated Model \vspace{-0.3cm}}
The Boltzmann transport equation for a parton propagating through the quark-gluon plasma is \mbox{$p\cdot \partial f = - (p \cdot u) \mathcal{C}[f]$},
where $p$ is its four-momentum, $f$ is the distribution of partons, $u$ is the velocity of the medium and $\mathcal{C}[f]$ is the collision kernel.
In previous implementations, $\mathcal{C}[f]$ would be divided into an elastic and an inelastic term.
In the leading order reformulation~\cite{ghiglieri2016jet},
the collision kernel is divided into (i) hard elastic and inelastic interactions, (ii) diffusion, and (iii) conversion processes.
\paragraph{Hard interactions}
In the case of inelastic interactions, multiple soft interactions with the plasma induce the radiation of a parton of energy $\omega$.
These induced parton emissions can be divided into large-$\omega$ and small-$\omega$ interactions by a cutoff $\mu_\omega$ with $\mu_\omega \lesssim T$, where $T$ is the temperature of the plasma.
Small-$\omega$ inelastic interactions are absorbed into drag and diffusion coefficients. Large-$\omega$ inelastic interactions are described with emission rates, which are obtained from the AMY integral equation \cite{Arnold:2002ja}. \par
In the elastic case, a kinematic cutoff is imposed on the transverse momentum transfer $q_\perp$. The cutoff $\mu_{q_\perp}$ is chosen such that $g T \ll \mu_{q_\perp} \ll T$, with $g=\sqrt{4 \pi \alpha_s}$.
Small-$q_\perp$ interactions are again absorbed into transport coefficients.
Large-$q_\perp$ interactions are calculated perturbatively. Because plasma effects are significant only at low $q_\perp$, it is sufficient to use vacuum matrix elements to compute the large-$q_\perp$ rate.
Given that $p \gg T$ is an excellent approximation for phenomenological applications, we further simplify the evaluation of the large-$q_\perp$ rate by keeping only the zeroth-order term in $T/p$.
\par
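To see why the hierarchy $gT \ll \mu_{q_\perp} \ll T$ is delicate at phenomenological couplings, it helps to simply evaluate $g=\sqrt{4\pi\alpha_s}$ (a back-of-the-envelope check added here for illustration):

```python
import numpy as np

def g(alpha_s):
    """Strong coupling g = sqrt(4*pi*alpha_s), as defined in the text."""
    return np.sqrt(4 * np.pi * alpha_s)

print(g(0.3))    # ~1.94: gT is about 2T, so the window gT << mu << T is not open
print(g(0.01))   # ~0.35: at weak coupling the scales gT and T do separate
```

At $\alpha_s=0.3$ the two scales are not separated, so exact cutoff independence cannot be expected beyond the small-coupling limit.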
\paragraph{Drag and diffusion}
Number- and identity-preserving
soft interactions are described stochastically with drag and diffusion. Elastic as well as inelastic interactions are included: because soft radiated particles are absorbed by the plasma, the number-preserving assumption still holds in the inelastic case.
The diffusion processes can be described by a Fokker-Planck equation:
\begin{equation}
\label{FP}
\mathcal{C}^{diff}[f] = -\frac{\partial}{\partial p^i}\left[\eta_D(p)p^if(\vec{p})\right]
-\frac{1}{2}\frac{\partial^2}{\partial p^i\partial p^j}\left\{\left[\hat{p}^i\hat{p}^j \hat{q}_L(p)+\frac{1}{2}\left(\delta^{ij}-\hat{p}^i\hat{p}^j\right)\hat{q}(p)\right]f(\vec{p})\right\}
\end{equation}
where $\eta_D$ is the drag, and $\hat{q}_L(p)$ and $\hat{q}$ are the longitudinal and transverse momentum diffusion coefficients.
\par
The latter are calculated perturbatively while the drag coefficient $\eta_D$ is obtained by the Einstein relation.
The elastic contribution to both $\hat{q}$ and $\hat{q}_L$ can be found in Ref.~\cite{ghiglieri2016jet}.
Inelastic interactions also contribute to the longitudinal diffusion: $\hat{q}^{inel}_L = \frac{(2-\ln2)g^4C_RC_AT^2\mu_\omega}{4\pi^3}$. \par
\par
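As a concrete illustration (added here; the numbers are a direct evaluation of the formula above, with gluon color factors $C_R=C_A=3$), one can evaluate the inelastic contribution for a gluon in a $T=300$~MeV plasma with $\mu_\omega = T$:

```python
import numpy as np

def qhat_L_inel(alpha_s, T, mu_omega, C_R=3.0, C_A=3.0):
    """Inelastic contribution to longitudinal momentum diffusion (units: GeV^3)."""
    g = np.sqrt(4 * np.pi * alpha_s)
    return (2 - np.log(2)) * g**4 * C_R * C_A * T**2 * mu_omega / (4 * np.pi**3)

T = 0.3                               # GeV
q = qhat_L_inel(alpha_s=0.3, T=T, mu_omega=T)
print(q)                              # ~0.036 GeV^3
print(q / 0.1973)                     # ~0.18 GeV^2/fm, using hbar*c = 0.1973 GeV fm
# The coefficient grows linearly with mu_omega; this is the dependence that
# the large-omega emission rate must cancel.
assert np.isclose(qhat_L_inel(0.3, T, 2 * T), 2 * q)
```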
\paragraph{Conversion}
An additional type of interaction is the conversion process, in which the incoming parton changes identity.
The leading contribution to this process, in both the elastic and inelastic cases, is suppressed by $\mathcal{O}(T/p)$ compared to the large-angle and diffusion processes. Consequently, conversions are not included in this work.
\paragraph{Reformulation}
The reformulation can be written as:
\begin{equation}
\mathcal{C} = \mathcal{C}^{2\leftrightarrow 2} + \mathcal{C}^{1\leftrightarrow 2} = \mathcal{C}^{large-\omega}\left(\mu_\omega\right)+\mathcal{C}^{large-q_\perp}\left(\mu_{q_\perp}\right)+\mathcal{C}^{diff}\left(\mu_\omega, \mu_{q_\perp}\right)+\mathcal{C}^{conv}\left(\mu_{q_\perp}\right)
\label{reformulation}
\end{equation}
Every term has a cutoff dependence, which cancels out when added together. In what follows we test this cutoff independence numerically, for the first time.
\section{Numerical implementation \& results in a static medium \vspace{-0.3cm}}
To implement the reformulated energy loss model described in the previous section, we used the public version of the JETSCAPE framework~\cite{kauder2018jetscape}, building upon the existing implementation of MARTINI~\cite{Schenke:2009gb} already in the framework.
The Fokker-Planck equation describing the soft interactions (\ref{FP}) was added as a Langevin equation, solved with the pre-point Ito scheme.
\par
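A single update of this kind can be sketched as follows (a schematic illustration with constant, hand-picked coefficients; the actual implementation evaluates the perturbative, momentum-dependent coefficients described in Sec.~2):

```python
import numpy as np

def langevin_step(p, eta_D, qhat, qhat_L, dt, rng):
    """One pre-point (Ito) Langevin update matching the Fokker-Planck kernel (2):
    drift -eta_D*p*dt, longitudinal variance qhat_L*dt along p-hat, and transverse
    variance (qhat/2)*dt in each of the two perpendicular directions.
    All coefficients are evaluated at the pre-point p(t)."""
    phat = p / np.linalg.norm(p)
    t1 = np.cross(phat, [0.0, 0.0, 1.0])
    if np.linalg.norm(t1) < 1e-12:        # p parallel to z: pick x instead
        t1 = np.array([1.0, 0.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(phat, t1)
    xi_L, xi_1, xi_2 = rng.normal(size=3)
    kick = (np.sqrt(qhat_L * dt) * xi_L * phat
            + np.sqrt(0.5 * qhat * dt) * (xi_1 * t1 + xi_2 * t2))
    return p - eta_D * dt * p + kick

# Pure-diffusion check (eta_D = 0; hypothetical coefficients in GeV^2/fm, dt in fm/c):
rng = np.random.default_rng(1)
p0 = np.array([0.0, 0.0, 10.0])
dps = np.array([langevin_step(p0, 0.0, 1.0, 2.0, 0.01, rng) - p0
                for _ in range(20000)])
assert abs(dps[:, 2].var() - 2.0 * 0.01) < 2e-3        # longitudinal: qhat_L*dt
assert abs(dps[:, 0].var() - 0.5 * 1.0 * 0.01) < 1e-3  # transverse: (qhat/2)*dt
```

The sampled variances reproduce the longitudinal and transverse diffusion coefficients of Eq.~(\ref{FP}), which is the property the cutoff-cancellation tests below rely on.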
Strictly speaking, the reformulation of parton energy loss, Eq.~(\ref{reformulation}), is valid in the small-coupling limit. Independence of the cutoffs separating soft and hard interactions ($\mu_{q_\perp}$ and $\mu_\omega$) is not assured in phenomenological applications, where a large coupling is used. We discuss this cutoff dependence in what follows. Because the cancellation of the cutoff dependence is essentially independent in the elastic and inelastic cases, we look at the two cases separately.
\paragraph{Inelastic energy loss}
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{inel.pdf}
\caption{Energy distribution of a 20 GeV gluon losing energy through inelastic processes only in a 1 fm/c static quark-gluon plasma at T = 300 MeV, for different $\mu_\omega$. For (a) large-$\omega$ only, (b) small-$\omega$ only, and (c) combined energy loss. }
\label{fig_inel}
\end{figure}
We set $\alpha_s = 0.3$ and simulate the propagation of a 20 GeV gluon traveling for 1 fm/c in an infinite static medium. Only inelastic energy loss is included. The temperature of the plasma is 300 MeV. We vary the inelastic cutoff over the values $\mu_\omega = 0.25T, \, T, \, 2T$.
The leading-gluon energy distribution
is shown in Figure \ref{fig_inel}, first separately for the large-$\omega$ interactions and the drag and diffusion, and then combined.
In the large-$\omega$ case (Figure \ref{fig_inel}(a)), all soft radiations below the cutoff $\mu_\omega$ are forbidden; inevitably, the energy distribution around the initial parton energy is found to depend on $\mu_\omega$. In the drag and diffusion case (Figure \ref{fig_inel}(b)), because the transport coefficients increase linearly with $\mu_\omega$, the parton energy loss also increases with $\mu_\omega$.
Once combined, Figure \ref{fig_inel}(c), the $\mu_\omega$ dependence of the parton energy loss cancels out.
In fact, when $\mu_\omega$ is decreased, we reach the limit where the effect of small-$\omega$ radiation is negligible. This is the limit in which inelastic energy loss is typically implemented. The reformulated energy loss allows for this cutoff to be increased and varied.
We verified that results similar to Figure \ref{fig_inel} were obtained with (i) a smaller coupling constant, (ii) different initial parton energies, (iii) a quark propagating instead of a gluon, and (iv) a realistic hydrodynamic medium used instead of a brick.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\linewidth]{elas.pdf}
\caption{Energy distribution of a gluon with initial energy $E_0 = 200$ GeV losing energy through elastic processes in a static plasma with $T = 300$ MeV, for a propagation length $\tau=\alpha_s^{-2} 0.3^2$~fm/c at different elastic cutoff $\mu_{q_\perp}$. The results are shown for four different coupling constants $\alpha_s$.}
\label{fig_elas}
\end{figure}
\paragraph{Elastic energy loss}
Figure \ref{fig_elas}(a) shows the energy distribution of a 200 GeV gluon propagating for 1 fm/c in the same medium as used in the previous example: infinite length, $T = 300$~MeV. Only elastic energy loss is included. We show only the combined result with both large-angle scattering and drag and diffusion. At large coupling, the dependence on the elastic cutoff $\mu_{q_\perp}$ is modest, although larger than found in the inelastic case. This larger cutoff dependence, in the large coupling limit, is likely a consequence of our use of vacuum matrix elements in the large $q_\perp$ elastic energy loss calculation, instead of screened matrix elements. In the small coupling regime, screening effects are small for large-$q_\perp$ interactions, and the use of vacuum matrix elements is enough. As expected, we find that the cutoff independence is recovered when the coupling is reduced\footnote{We fix the propagation length $\tau = \alpha_s^{-2} 0.3^2$~fm/c to make the number of elastic collisions the same for tests with different coupling constants.}, as shown in Figure \ref{fig_elas}(b-c).
\section{Summary \& outlook \vspace{-0.3em}}
The reformulated parton energy loss from Ref.~\cite{ghiglieri2016jet} provides a systematic factorization of soft and hard interactions in the weakly-coupled limit. The numerical implementation presented in this work indicates that this factorization still holds well at large coupling and can be used in phenomenological studies.
Naturally one benefit of this reformulation is the possibility of extending it to next-to-leading order~\cite{ghiglieri2016jet}. A separate benefit is phenomenological: this methodical separation of the transport (soft) sector of the parton energy loss from hard interactions provides a framework for data-driven constraints on the drag and diffusion coefficient of partons. We expect both of these directions to have important applications for the study of jets in heavy ion collisions.
\vspace{-0.3em}
\paragraph{Acknowledgements}
We are grateful to Jacopo Ghiglieri, Weiyao Ke and Yingru Xu for their help with this project. We thank Bjoern Schenke and Heikki M\"antysaari for their collaboration in the early stage of this project, and the JETSCAPE Collaboration for their assistance with the JETSCAPE framework.
This work was supported by the National Science Foundation under Award Number ACI-1550225 (T.D.), the U.S. Department of Energy under Award Numbers DE-FG02-05ER41367 (S.A.B., T.D., J.-F.P.) and DE-FG02-88ER40388 (J.-F.P., D.T.).
\bibliographystyle{JHEP}
\section{Introduction}
More and more companies aim to improve production processes with data science and machine learning (ML) methods, for example, by using a ML model to better understand which factors contribute to higher quality products or greater production yield. While advanced ML models such as neural networks (NN) might, theoretically, in many cases provide the most accurate predictions, they have several drawbacks in practice. First of all, with many hyperparameters to set, these models can be difficult and time-consuming to fit, which is only aggravated by the current shortage of ML specialists in industry. Second, in many cases there is not enough data available in the first place to train a low bias/high variance model like a NN, for example, because comprehensive data collection pipelines are not yet fully implemented or because obtaining individual data points is expensive, e.g., when it takes several days to produce a single product. Last but not least, the insights generated by a ML analysis need to be communicated to others in the company, who want to use these results as a basis for important business decisions \cite{lime}. While great progress has been made to improve the interpretability of NNs, e.g., by using layer-wise relevance propagation (LRP) to reveal which of the input features contributed most to a neural net's prediction \cite{arras2017relevant,bach2015pixel,montavon2018methods}, this is in practice still not sufficient to convince those with only a limited understanding of statistics. Especially when dealing with data collected from physical systems, using a plausible model might even be more important than getting small prediction errors \cite{martius2016extrapolation}.
To avoid these shortcomings of NNs and other non-linear ML models, in practice we find it necessary to rely mostly on linear prediction models, which are intuitive to understand and can be trained easily and efficiently even on very small datasets.
But of course, employing linear models generally comes at the cost of a lower prediction accuracy, because in most datasets there is no linear relation between the original input features and the target variable.
The ability to learn expressive representations from the given input features is one of the main reasons for the popularity of neural networks and ``deep learning'' \cite{bengio2013deep,lecun2015deep}. Upon closer examination, a NN is nothing more than a linear model operating on better features: While a complex layer hierarchy maps the inputs to the last hidden layer, thereby transforming the original features into a more informative representation, the output layer, acting on the activations of this last hidden layer to generate the final prediction, corresponds to a simple linear model. Using pre-trained NNs as feature extractors, i.e., to transform the original inputs into more useful representations, often improves the performance of simpler models on tasks such as image classification \cite{sharif2014cnn}. To improve forecasts involving time series data, echo state networks use a randomly connected ``reservoir'' to create more informative feature vectors that are then used to train a ridge regression model \cite{lukovsevivcius2012practical}.
Similarly, kernel methods like SVM use the kernel trick to employ linear models but implicitly operate in a very high dimensional feature space where, e.g., classification problems become linearly separable \cite{muller2001introduction}. As these examples demonstrate, linear models are very capable of solving complex problems -- provided the right features are available. While NNs and kernel methods transform the original inputs into more useful feature representations internally, explicit feature engineering aims to create better features in a preprocessing step, i.e., before using the data to fit a (linear) prediction model.
Manually engineering new, more informative features is often quite tedious. Therefore, inspired by the SISSO algorithm \cite{ouyang2018sisso}, we propose a framework to automatically generate several tens of thousands of non-linear features from the original inputs and then carefully select the most informative of them as additional input features for a linear model. We have found that this approach leads to sufficiently accurate predictions on real-world data while providing a transparent model that has a high acceptance rate amongst non-statisticians in the company and can therefore positively contribute to important business decisions.
To make this framework more accessible to other data scientists, our implementation is publicly available on GitHub.\footnote{\url{https://github.com/cod3licious/autofeat}}
The rest of the paper is structured as follows: After introducing some related work in the area of automated feature engineering and selection, we describe our approach and the \texttt{autofeat} Python library in detail (Section~\ref{sec:autofeat}). We then report experimental results on several datasets (Section~\ref{sec:exp}) before concluding the paper with a brief discussion (Section~\ref{sec:discussion}).
\subsection{Related Work}
Feature construction frameworks generally include both a feature engineering, as well as a feature selection component \cite{markovitch2002feature}. One of the main differences between feature construction approaches is whether they first generate an exhaustive feature pool and then perform feature selection on the whole feature set (which is also the strategy \texttt{autofeat} follows), or if the set of features is expanded iteratively, by evaluating at each step whether the inclusion of the new features would improve the prediction accuracy. Both approaches have their drawbacks: The first approach is very memory intensive, especially when starting off with a large initial feature set from which the additional features are constructed via various transformations. With the second approach, important features might be missed if some variables are eliminated too early in the feature engineering process and can therefore not serve to construct more complex, possibly helpful features. Furthermore, depending on the strategy for including additional features, the whole process might either be very time intensive, if at each step a model is trained and evaluated on the feature subset, or can fail to include (only) the relevant features, if a simple heuristic is used for the feature evaluation and selection.
Most existing feature construction frameworks follow the second, iterative feature engineering approach: The FICUS algorithm \cite{markovitch2002feature} uses a beam search to expand the feature space based on a simple heuristic, while the FEADIS algorithm \cite{dor2012strengthening} and Cognito \cite{khurana2016cognito} use more complex selection strategies. A more recent trend is to use meta-learning, i.e., algorithms trained on other datasets, to decide whether to apply specific transformation to the features or not \cite{katz2016explorekit,khurana2018feature,nargesian2017learning}. While theoretically promising, we could not find an easy to use open source library for any of these approaches, which makes them essentially irrelevant for practical data science use cases.
The well-known \texttt{scikit-learn} Python library \cite{scikit-learn} provides a function to generate polynomial features (e.g.\ $x^2$), including feature interactions (e.g.\ $x_1\cdot x_2, x_1^2\cdot x_2^3$). Polynomial features are a subset of the features generated by \texttt{autofeat}, yet, while they might be helpful for many datasets, in our experience with \texttt{autofeat}, the ratios of two features or feature combinations often turn out to be informative additional features, which cannot be generated with the \texttt{scikit-learn} method.
The \texttt{scikit-learn} library also contains several options for feature selection, such as univariate feature scoring, recursive feature elimination, and other model-based feature selection approaches \cite{guyon2003introduction,kursa2010feature}. Univariate feature selection methods consider each feature individually, which can lead to the inclusion of many correlated features, like those contained in the feature pool generated by \texttt{autofeat}. The more sophisticated feature selection techniques rely on the use of an external prediction model that provides coefficients indicating the importance of each feature. However, algorithms such as linear regression get numerically unstable if the number of features is larger than the number of samples, which makes these approaches impractical for feature pools as large as those generated by \texttt{autofeat}.
One popular Python library for automated feature engineering is \texttt{featuretools}, which generates a large feature set using ``deep feature synthesis'' \cite{kanter2015deep}. This library is targeted towards relational data, where features can be created through aggregations (e.g.\ given some customers (data table 1) and their associated loans (in table 2), a new feature could be the sum of each customer's loans), or transformations (e.g.\ time since the last loan payment). A similar approach is also implemented by the ``one button machine'' \cite{lam2017one}. The strategy followed by \texttt{autofeat} is somewhat orthogonal to that of \texttt{featuretools}: It is not meant for relational data, found in many business application areas, but was rather built with scientific use cases in mind, where e.g.\ experimental measurements would instead be stored in a single table. For this reason, \texttt{autofeat} also makes it possible to specify the units of the input variables to prevent the creation of physically nonsensical features.
Another Python library worth mentioning is \texttt{tsfresh} \cite{christ2018time,christ2016distributed}, which provides feature engineering methods for time series, together with a univariate feature selection strategy. However, while \texttt{autofeat} can be applied to a variety of datasets, the features generated by \texttt{tsfresh} only make sense for time series data, as they are constructed, e.g., using rolling windows.
To the best of our knowledge, there does not exist a general purpose open source library for automated feature engineering and selection, which is why we felt compelled to share our work.
\section{Automated Feature Engineering and Selection with \texttt{autofeat}}\label{sec:autofeat}
The \texttt{autofeat} library provides the \texttt{AutoFeatRegressor} and \texttt{AutoFeatClassifier} models, which automatically generate and select additional non-linear input features given the original data and then train a linear prediction model with these features. The models provide a familiar \texttt{scikit-learn} \cite{scikit-learn} style interface, as demonstrated by a simple usage example, where \texttt{X} corresponds to a $n \times d$ feature matrix and \texttt{y} to an $n$-dimensional target vector (both NumPy arrays \cite{numpy} and Pandas DataFrames \cite{pandas} are supported as inputs): \vspace{-0.5cm}
\begin{lstlisting}
# instantiate the model
model = AutoFeatRegressor()
# fit the model and get a pandas DataFrame with the original,
# as well as the additional non-linear features
df = model.fit_transform(X, y)
# predict the target for new test data points
y_pred = model.predict(X_test)
# compute the additional features for new test data points
# (e.g. as input for a different model)
df_test = model.transform(X_test)
\end{lstlisting}
In the following, we describe the feature engineering and selection steps happening during a call to e.g.\ \texttt{AutoFeatRegressor.fit()} or \texttt{AutoFeatRegressor.fit\_transform()} in more detail. The \texttt{autofeat} library requires Python 3 and is pip-installable.
\subsection{Construction of Non-Linear Features}
Additional non-linear features are generated in an alternating multi-step process by applying user-selectable non-linear transformations to the features (e.g.\ $\log(x)$, $\sqrt{x}$, $1/x$, $x^2$, $x^3$, $|x|$, $\exp(x)$, $2^x$, $\sin(x)$, $\cos(x)$) and combining pairs of features with different operators ($+, -, \cdot$). This results in an exponentially growing feature space, e.g., with only three original features, the first feature engineering step (applying non-linear transformations) results in about 20 new features, the second step (combining features) results in about 750 new features, and after a third step (again applying transformations), the feature space has grown to include over 4000 features. As this may require a fair amount of RAM depending on the number of original input features, the data points can be subsampled before computing the new features. In practice, performing only two or three feature engineering steps is usually sufficient.
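The combinatorial growth of the feature space can be sketched with SymPy as follows (an illustrative count using a reduced transformation set, not the library's exact internals):

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", positive=True)
original = {x1, x2, x3}

def transform(feats):
    """Apply a set of non-linear transformations to every feature."""
    out = set(feats)
    for f in feats:
        out |= {sp.log(f), sp.sqrt(f), 1 / f, f**2, f**3, sp.exp(f)}
    return out

def combine(feats):
    """Combine every pair of distinct features with +, - and *."""
    out, fl = set(feats), sorted(feats, key=str)
    for i in range(len(fl)):
        for j in range(i + 1, len(fl)):
            out |= {fl[i] + fl[j], fl[i] - fl[j], fl[i] * fl[j]}
    return out

step1 = transform(original)  # 3 original features grow to 21
step2 = combine(step1)       # which grow to several hundred combinations
print(len(step1), len(step2))
```

Because the features are stored as a set of SymPy expressions, duplicates that arise from automatic simplification (e.g.\ $x_1\cdot\frac{1}{x_1}=1$) are collapsed immediately.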
The new features are computed using the SymPy Python library \cite{sympy}, which automatically simplifies the generated mathematical expressions and thereby makes it possible to exclude redundant features. If the original features are provided with physical units, only `legal' new features are retained, e.g., a feature representing a temperature would not be subtracted from a feature representing a volume of something. This is implemented using the Pint Python library,\footnote{\url{https://pint.readthedocs.io/en/latest/}} which is additionally used to compute several dimensionless quantities from the original features using the Buckingham $\pi$-theorem \cite{buckingham1914physically}. If categorical features are included in the original features, these are first transformed into one-hot encoded vectors using the corresponding \texttt{scikit-learn} model before using them in the main feature engineering procedure.
\subsection{Feature Selection}
After having generated several thousands of features (often more than data points in the original dataset), it is now indispensable to carefully select only those features that contribute meaningful information when used as input to a linear model. To this end, we first remove those engineered features that are highly correlated with the original or other simpler features and then employ a multi-step feature selection approach relying heavily on L1-regularized linear models. In addition to the \texttt{AutoFeatRegressor} and \texttt{AutoFeatClassifier} models, the library also provides only this feature selection part alone in the \texttt{FeatureSelector} class, which again provides a \texttt{scikit-learn} style interface.
Individual features can provide redundant information or they might seem uninformative by themselves yet prove useful in combination with others. Therefore, instead of ranking the features independently by some criterion, it is advantageous to use a wrapper method that considers multiple features at once to select a promising subset \cite{guyon2003introduction}. For this we use the Lasso LARS regression model \cite{baraniuk2007compressive,efron2004least,friedman2010regularization} and an L1-regularized logistic regression model \cite{cox1958regression,bishop} provided in the \texttt{scikit-learn} library, which yield sparse weights based on which the features can be chosen \cite{ng2004feature}. To select the features, we mainly rely on a noise filtering approach, where the model is trained on the original features, as well as several additional `noise' features (either created by shuffling the original data or randomly drawn from a normal distribution), and only those of the original features are kept that have a model coefficient larger than the largest coefficient associated with any of the noise features \cite{kursa2010feature}.
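The noise filtering idea can be sketched as follows (a simplified single-pass version on synthetic data, using \texttt{scikit-learn}'s \texttt{Lasso}; the actual procedure in the library is more elaborate):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=n)

# append column-wise shuffled copies of the data as 'noise' features
X_noise = rng.permuted(X, axis=0)
X_aug = np.hstack([X, X_noise])

coef = np.abs(Lasso(alpha=0.05).fit(X_aug, y).coef_)
threshold = coef[d:].max()          # largest coefficient of any noise feature
selected = np.flatnonzero(coef[:d] > threshold)
print(selected)                     # the informative features 0 and 3 survive
```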
Selecting relevant features with an L1-regularized model amongst a feature pool that contains more features than data samples works quite well when the features are independent \cite{ng2004feature,doquetagnostic}. However, when trained with a large set of interrelated features, such as those generated in our feature engineering process, the models often fail to identify all of the truly relevant features. Therefore, we first identify an initial set of promising features by training an L1-regularized linear model on all features and selecting those with the largest absolute coefficients. Then, the remaining features are split into equal chunks and combined with the initial set (such that each of the chunks contains less than $n/2$ features) and a model is then fit on each chunk to select additional features. The feature subsets are then combined and used to train another model based on which a final feature set is determined. To get a more robust set of features, this selection process is performed multiple times on subsamples of the data. The feature subsets of the independent feature selection runs are then combined and highly correlated features are filtered out (keeping those features that were selected in the most runs). The remaining features are then again used to fit a model to select the ultimate feature set.
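A minimal sketch of this chunked selection procedure on synthetic data (illustrative only; the chunk size, regularization strength, and selection thresholds are arbitrary choices here, and the library's implementation differs in its details):

```python
import numpy as np
from sklearn.linear_model import Lasso

def chunked_selection(X, y, n_keep=5, chunk=80, alpha=0.1):
    """Simplified chunked L1 feature selection (illustrative only)."""
    d = X.shape[1]
    coef = np.abs(Lasso(alpha=alpha).fit(X, y).coef_)
    initial = list(np.argsort(coef)[::-1][:n_keep])   # initial promising set
    rest = [i for i in range(d) if i not in initial]
    candidates = set(initial)
    for start in range(0, len(rest), chunk):          # refit each chunk together
        cols = initial + rest[start:start + chunk]    #   with the initial set
        c = np.abs(Lasso(alpha=alpha).fit(X[:, cols], y).coef_)
        candidates |= {cols[i] for i in np.flatnonzero(c > 1e-6)}
    cols = sorted(candidates)                         # final fit on the union
    c = np.abs(Lasso(alpha=alpha).fit(X[:, cols], y).coef_)
    return sorted(cols[i] for i in np.flatnonzero(c > 1e-6))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))                       # more features than samples
y = 3.0 * X[:, 0] + 2.0 * X[:, 150] - 2.0 * X[:, 399] + 0.1 * rng.normal(size=200)
print(chunked_selection(X, y))
```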
After this multi-step selection process, typically only a few dozen of the several thousand engineered features are retained and used to train the actual prediction model. For new test data points, the \texttt{AutoFeatRegressor} and \texttt{AutoFeatClassifier} models can then either generate predictions directly, or a DataFrame with the new features can be computed for all data points and used to train other models.
By examining the coefficients of the linear prediction model (possibly normalized by the standard deviation of the corresponding features, in case these are not of comparable magnitudes), the most prominent influencing factors related to higher or lower values of the target variable can be identified.
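A minimal sketch of this normalization on synthetic data (illustrative only): a small raw coefficient on a feature with a large spread can correspond to the larger overall effect.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(0.0, 1.0, 300),     # feature on a small scale
                     rng.normal(0.0, 100.0, 300)])  # feature on a large scale
y = 2.0 * X[:, 0] + 0.05 * X[:, 1]

model = LinearRegression().fit(X, y)
# raw coefficients suggest the first feature dominates (2.0 vs 0.05) ...
effect = np.abs(model.coef_) * X.std(axis=0)
# ... but scaled by the feature spread, the second feature has the larger effect
print(model.coef_, effect)
```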
\section{Experimental Results}\label{sec:exp}
To give an indication of the performance of the \texttt{AutoFeatRegressor} model in practice, compared to other non-linear ML algorithms, we test our approach on five regression datasets (Table~\ref{table:datasets}), provided in the \texttt{scikit-learn} package (\emph{diabetes} and \emph{boston}) or obtainable from the UCI Machine Learning Repository.\footnote{\url{http://archive.ics.uci.edu/ml/index.php}} For further details on the experiments, including the hyperparameter selection of the other models, please refer to the corresponding Jupyter notebook in the GitHub repository.
\begin{table}[!htb]
\centering\setlength{\tabcolsep}{5pt}
\caption{Overview of datasets, including the number of samples $n$ and number of original input features $d$.}
\begin{tabular}{lr r l}
\toprule
Dataset & $n$ & $d$ & Prediction task\\\midrule
\emph{diabetes} \cite{efron2004least} & 442 & 10 & disease progression one year after baseline\\
\emph{boston} \cite{harrison1978hedonic} & 506 & 13 & median housing values in suburbs of Boston\\
\emph{concrete} \cite{yeh1998modeling} & 1030 & 8 & compressive strengths of concrete mixtures\\
\emph{airfoil} \cite{brooks1989airfoil} & 1503 & 5 & sound pressure levels of airfoils in a wind tunnel\\
\emph{wine quality} \cite{cortez2009modeling} & 6497 & 12 & red \& white wine quality from physiochemical tests\\
\bottomrule
\end{tabular}
\label{table:datasets}
\end{table}
While on most datasets, the \texttt{AutoFeatRegressor} model does not quite reach the state-of-the-art performance of a random forest regression model (Table~\ref{table:results}), it clearly outperforms standard linear ridge regression, while retaining its interpretability. Across all datasets, with one feature engineering step, \texttt{autofeat} generated between 2 and 11 additional features, while with two and three steps, it produced on average 31 additional features (Table~\ref{table:nfeat}). Most of the selected features are ratios or products of (transformed) features (Table~\ref{table:feattypes}).
\begin{table}[!htb]
\centering\setlength{\tabcolsep}{3pt}
\caption{$R^2$ scores on the training and test folds of different datasets for ridge regression (RR), support vector regression (SVR), random forests (RF), and the \texttt{autofeat} regression model with one, two, or three feature engineering steps (AFR1-3). Best results per column are in boldface (existing methods) and underlined (AFR).}\small
\begin{tabular}{lcccccccccc}
\toprule
& \multicolumn{2}{c}{\textbf{diabetes}}& \multicolumn{2}{c}{\textbf{boston}}& \multicolumn{2}{c}{\textbf{concrete}}& \multicolumn{2}{c}{\textbf{airfoil}}& \multicolumn{2}{c}{\textbf{wine quality}}\\ \cmidrule(l){2-3}\cmidrule(l){4-5}\cmidrule(l){6-7}\cmidrule(l){8-9}\cmidrule(l){10-11}
& train & test & train & test & train & test & train & test & train & test\\\midrule
\emph{RR} & 0.541 & \textbf{0.383}& 0.736 & 0.748& 0.625 & 0.564& 0.517 & 0.508& 0.293 & 0.310\\
\emph{SVR} & 0.580 & 0.320& 0.959 & \textbf{0.882}& 0.933 & 0.881& 0.884 & 0.851& 0.572 & 0.411\\
\emph{RF} & \textbf{0.598} & 0.354& \textbf{0.983} & 0.870& \textbf{0.985} & \textbf{0.892}& \textbf{0.991} & \textbf{0.934}& \textbf{0.931} & \textbf{0.558}\\\addlinespace[0.5ex]
\hdashline \addlinespace[0.5ex]
\emph{AFR1} & 0.553 & \underline{0.400}& 0.825 & \underline{0.810}& 0.847 & 0.818& 0.569 & 0.560& 0.320 & 0.341\\
\emph{AFR2} & 0.591 & 0.353& 0.893 & 0.791& \underline{0.913} & \underline{0.868}& 0.863 & 0.842& \underline{0.397} & \underline{0.384}\\
\emph{AFR3} & \underline{0.638} & -12.4& \underline{0.932} & 0.048& 0.867 & 0.824& \underline{0.884} & \underline{0.883}& 0.350 & 0.342\\
\bottomrule
\end{tabular}
\label{table:results}
\end{table}
\begin{table}[!htb]
\centering\setlength{\tabcolsep}{5pt}
\caption{Number of engineered (eng) and selected (sel) additional features for each dataset from an \texttt{autofeat} regression model with one, two, or three feature engineering steps (AFR1-3).}
\begin{tabular}{lrrrrrrrrrr}
\toprule
& \multicolumn{2}{c}{\textbf{diabetes}}& \multicolumn{2}{c}{\textbf{boston}}& \multicolumn{2}{c}{\textbf{concrete}}& \multicolumn{2}{c}{\textbf{airfoil}}& \multicolumn{2}{c}{\textbf{wine quality}}\\ \cmidrule(l){2-3}\cmidrule(l){4-5}\cmidrule(l){6-7}\cmidrule(l){8-9}\cmidrule(l){10-11}
& eng & sel & eng & sel & eng & sel & eng & sel & eng & sel\\\midrule
\emph{AFR1} & 45 & 2 & 60 & 6 & 34 & 5 & 21 & 6 & 59 & 11\\
\emph{AFR2} & 5950 & 8 & 10528 & 15 & 3456 & 40 & 530 & 42 & 9959 & 80\\
\emph{AFR3} & 32161 & 16 & 54631 & 21 & 14485 & 16 & 2355 & 44 & 55648 & 26\\
\bottomrule
\end{tabular}
\label{table:nfeat}
\end{table}
\begin{table}[!htb]
\centering\setlength{\tabcolsep}{5pt}
\caption{Most frequently selected features across all datasets for one, two, or three feature engineering steps (AFR1-3). Only the non-linear transformations $\log(x)$, $\sqrt{x}$, $1/x$, $x^2$, $x^3$, $|x|$, and $\exp(x)$ were applied during the feature engineering steps.}
\begin{tabular}{ll}
\toprule
\textbf{AFR1} & $1/x,\; x^3,\; x^2,\; \exp(x)$ \\[8pt]
\textbf{AFR2} & $\sqrt{x_1}/x_2,\; 1/(x_1x_2),\; x_1/x_2,\; x_1^3/x_2,\; x_1^2/x_2,\; \exp(x_1)\exp(x_2),\; \exp(x_1)/x_2,$\\[5pt]
& $\sqrt{x_1}\sqrt{x_2},\; \sqrt{x_1}x_2^3,\; x_1\log(x_2),\; \log(x_1)/x_2,\; x_1^3x_2^3,\; x_1^3x_2,\; x_1^3\log(x_2), \; ...$\\[8pt]
\textbf{AFR3} & $x_1^3/x_2^3,\; \exp(\sqrt{x_1} - \sqrt{x_2}),\; 1/(x_1^3x_2^3),\; \sqrt{x_1x_2},\; 1/(x_1 + x_2),\; x_1/x_2^2,$\\[5pt]
& $1/(\sqrt{x_1} - \log(x_2)),\; |\sqrt{x_1} - \log(x_2)|, \; \exp(\log(x_1)/x_2),\; \log(x_1)^2/x_2^2,$\\[5pt]
& $ |\log(x_1) + \log(x_2)|,\; ...$\\
\bottomrule
\end{tabular}
\label{table:feattypes}
\end{table}
With only a single feature engineering step, the \texttt{AutoFeatRegressor} model often only performs slightly better than ridge regression on the original features. With three feature engineering steps, on the other hand, the model can overfit on the training data (as indicated by the discrepancy between the training and test $R^2$ scores), because the complex features do not only explain the signal, but also the noise contained in the data. However, the only datasets where this is a serious problem are the \emph{diabetes} and \emph{boston} datasets, where over 30k and 50k features were generated in the feature engineering process, while fewer than 500 data points were available for feature selection and model fitting, which means overfitting is somewhat to be expected.
\section{Conclusion}\label{sec:discussion}
In this paper, we have introduced the \texttt{autofeat} Python library, which includes an automated feature engineering and selection procedure to improve the prediction accuracy of a linear model by using additional non-linear features. The regression and classification models are based on \texttt{scikit-learn} models and provide a familiar interface. During the model fit, a vast number of non-linear features is generated from the original features and a few of these are selected in an elaborate iterative process to optimally explain the target variable. By combining a linear model with complex non-linear features, a high prediction accuracy can be achieved, while retaining a transparent model that yields traceable results as a basis for business decisions made by non-statisticians.
The \texttt{autofeat} library was developed with scientific use cases in mind and is especially useful for heterogeneous datasets, e.g., containing sensor measurements with different physical units. It should not be seen as a competitor for the existing feature engineering libraries \texttt{featuretools} or \texttt{tsfresh}, which would be the first choice when dealing with relational business data or time series respectively.
We have demonstrated on several datasets that the \texttt{AutoFeatRegressor} model significantly improves upon the performance of a linear regression model and sometimes even outperforms other non-linear ML models. While the model can be used for predictions directly, it might also be beneficial to use the generated features as input to train other ML models. By adapting the kinds of transformations applied in the feature engineering process, as well as the number of feature engineering steps, further insights can be gained with respect to which of the input features influence the target variable and how, as well as into the complexity of the system as a whole.
\acks{FH was a part-time employee at BASF when initially programming the \texttt{autofeat} library.}
\vskip 0.2in
\section{Introduction} \label{sec:intro}
Perhaps the most distinguishing characteristic of
granular materials is their internal heterogeneity,
particularly when viewed at the micro-scale of
individual particles or particle clusters.
Granular materials often consist of a wide range of particle
sizes and shapes, and these particles are usually arranged
in an irregular manner.
This geometric and topologic multiformity produces
nonuniform distributions of internal force and deformation,
which are often expressed in spatial and temporal patterning.
In the paper, we catalog the many forms in which heterogeneity
may be manifest, and we provide a classification
scheme for its measurement.
Examples of several forms of heterogeneity are
presented, and certain expressions of their evolution and
spatial patterning are described.
Although the proposed classification scheme applies to both
two- and three-dimensional (2D and 3D) granular materials, to
particles of arbitrary shape and composition, to both sparse and dense
packings, and to both dynamic and quasi-static deformations,
the paper illustrates the classification within a two-dimensional
framework and with a 2D example of the quasi-static deformation
of a dense disk assembly.
\par
In Section~\ref{sec:classification} we consider
a classification scheme
for heterogeneity and the various forms in which it can be expressed
and measured.
Section~\ref{sec:methods} describes the simulation methods that
are used to explore several forms of heterogeneity.
In the same section, we also consider a means of
measuring the heterogeneity of vector and tensor objects.
Section~\ref{sec:results} presents experimental results and characterizes
several types of heterogeneity and their evolution during
biaxial compression.
\section{Classifying heterogeneity} \label{sec:classification}
Table~\ref{table:class1} gives a classification of material characteristics
that can manifest heterogeneity in granular materials.
\begin{table}
\centering
\caption{Heterogeneity categories and references to experimental studies}
\label{table:class1}
\input{Table1.tex}
\end{table}
The table references sample experimental studies in which these
characteristics have been measured, although the short lists of references
are far from exhaustive.
The characteristics in the table are organized within a hierarchy of
heterogeneity categories: topologic,
geometric, kinematic, static, and constitutive.
These categories are described in a general manner in the next paragraph.
Table~\ref{table:class2} presents a short list of informational
forms that can be used for describing each characteristic.
\begin{table}
\caption{Analyses of heterogeneity}
\label{table:class2}
\centering
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Informational forms & Examples\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Central tendency & Mean, median, modes\\
Dispersion & Standard deviation, \\
& variance, \\
& coefficient of variation,\\
& histograms,\\
& probability and cumulative \\
& \quad distributions,\\
& quartile plots\\
Spatial correlation & n-point correlations,\\
& correlation lengths\\
Temporal correlation & Rates of change \\
Spatial and temporal & Spatial plots,\\
\quad patterning & time series analyses,\\
& spatial domain transforms\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The arrangement of the forms in Table~\ref{table:class2}
reflects their complexity and the
usual historical order in which measurements have been proposed
and collected.
The simplest form of information is some measure of central tendency:
a mean, median, or modal value.
Heterogeneity implies diversity and fluctuation, and this
dispersion in measured
values can be expressed as a variance or standard deviation,
with standard graphical means such as histograms, or by
fitting experimental results to an appropriate probability distribution.
Of greater complexity are measurements of temporal correlation
(e.g. rates of change) and spatial correlation.
The most complex data analyses can also disclose the spatial and
temporal patterning of heterogeneity.
The paper presents data on the six characteristics
that are accompanied by section numbers in Table~\ref{table:class1}, and
these characteristics are explored with a range of the
informational forms that are given in
Table~\ref{table:class2}.
\par
Table~\ref{table:class1} begins with topologic characteristics,
which concern the arrangement of the particles and their contacts,
but without reference to their position, size, or orientation.
This information can be expressed as a \emph{particle graph}
for both 2D and 3D assemblies, which
gives the topologic connectivity of the particles in a packing,
as explained in~\cite{Satake:1993b}.
The paper presents data on the variation in local topology and
its evolution during loading.
A discrete metric is also proposed as a means of tracking
inter-particle processes between distant particles.
Geometric information includes the additional descriptors of
length, shape, and angle, which relate to the positional arrangements,
orientations, and sizes of particles.
Together, topology and geometry describe the \emph{fabric} of
a granular assembly.
The paper characterizes the evolution of one form of heterogeneity in
this fabric.
Kinematic information (Table~\ref{table:class1}) concerns
the movements and rotations of particles,
relative inter-particle movements,
and the local deformations within small groups
of particles.
The paper gives examples of heterogeneous movements and
deformations, the spatial correlation of inter-particle movements, and
the patterning of local rotations and deformations.
Static (or statical) information
(Table~\ref{table:class1})
involves the transmission of
force and stress within a material, and the paper depicts the local
diversity of stress and its evolution during
loading.
Table~\ref{table:class1} also includes the category of \emph{constitutive}
heterogeneity (or, perhaps, mechanical heterogeneity), which would
involve the diversity in local material stiffness.
Except for simple two-particle models that rely on uniform
strain assumptions, there is, as yet, no consistent vocabulary
or set of experimental methods for measuring and characterizing
this form of heterogeneity.
The reader is referred to the recent work of
Gaspar and Koenders~\cite{Gaspar:2001b} and Gaspar~\cite{Gaspar:2002a},
which may
provide a needed framework for characterizing constitutive
heterogeneity.
\par
As a simple example of the classification scheme in Table~\ref{table:class2},
we could consider the diversity of grain size in a granular material.
Methods for measuring and describing particle size, such
as sieving methods, are standardized and widely applied, so
references to these methods are excluded from Table~\ref{table:class1}.
These methods can readily depict a representative (central)
grain size as well as the dispersion of sizes.
Certain processes, such as shearing and compression, can cause
particle breakage, which could be measured with temporal correlations of
the size distribution.
Processes that promote size segregation could be studied with
methods that reveal the spatial correlation of size.
Size segregation usually leads to a spatial patterning of the local
size distribution, and processes that produce a periodic recurrence in
such patterning would lead to both spatial and temporal patterning.
\section{Methods and notation} \label{sec:methods}
A conventional implementation of the Discrete Element Method (DEM)
was used to simulate the quasi-static behavior of a large 2D
granular assembly and to illustrate different manifestations
of internal heterogeneity and their evolution.
\subsection{Simulation methods}
The study employs a square assembly containing 10,816 circular
disks of multiple diameters.
The disk sizes are randomly distributed over
a fairly small range, between 0.56$\overline{D}$ and 1.7$\overline{D}$,
where $\overline{D}$ is the mean particle diameter.
The material was created by slowly and isotropically compacting
a sparse arrangement of particles, during which friction between particle
pairs was disallowed
(friction was later restored for biaxial compression tests).
This compaction technique produced a
material that was dense, random, and isotropic,
at least when viewed at a macro-scale.
The average initial void ratio was 0.1715 (solid fraction of $0.854$),
the average coordination number was 3.95, and the average overlap between neighboring particles was
about 9$\times$10$^{-4}$ of $\overline{D}$.
The assembly was surrounded by periodic boundaries, a choice that would
eliminate the topologic and geometric nonuniformity that
might otherwise occur in the vicinity of rigid platens or assembly
corners.
The initial height and width of the assembly were each about
$102\overline{D}$.
\par
All examples of heterogeneity were collected from a single loading test
of biaxial compression.
The height of the assembly was reduced at a constant rate of compressive
strain ($\dot{\varepsilon}_{22}<0$), while maintaining a constant average
horizontal stress ($\dot{\sigma}_{11}=0$).
About 200,000 time steps were required to reach
the final vertical strain, $\overline{\varepsilon}_{22}$, of $-0.01$,
and at this
rate of loading, the average imbalance of
force on a particle was less than 1$\times$10$^{-4}$
times the average contact force.
\par
During biaxial compression,
a simple force mechanism was employed between contacting particles.
Linear normal and tangential contact springs were assigned equal
stiffnesses ($k_{\mathrm{n}}=k_{\mathrm{t}}$),
and slipping between particles occurred whenever
the contact friction coefficient of 0.50 was attained.
\par
The average, macro-scale mechanical behavior is shown in
Fig.~\ref{fig:crs_q}, which gives the dimensionless compressive stress
\mbox{$\Delta\overline{\sigma}_{22}/\overline{p}_{\mathrm{o}}$},
where $\overline{p}_{\mathrm{o}}$ is the initial mean stress,
$\overline{p}_{\mathrm{o}}=(\overline{\sigma}_{11}+\overline{\sigma}_{22})/2$.
\begin{figure}
\centering
\includegraphics{crs_q.eps}
\caption{Evolution of the average compressive stress within the assembly
of 10,816 circular disks during biaxial compression.}
\label{fig:crs_q}
\end{figure}
This initial mean stress was about 5$\times$10$^{-4}$ times the normal
contact stiffness,~$k_{\mathrm{n}}$.
\par
The rates of several micro-quantities
(position, force, orientation, etc.)
were periodically measured during
the loading.
These rates were calculated by recording
the assembly's state at two instants that were separated by
100 time steps; the difference between these two states was then used
to compute the rates.
Because time is used in quasi-static DEM simulations
as simply a means of ordering or parameterizing events,
the rates of micro-quantities will usually be expressed
in a dimensionless form by dividing by an average,
macro-scale rate (average stress rate, average strain rate, etc.).
\subsection{Notation}\label{sec:notation}
Vectors and tensors are represented by bold Roman letters,
lower and upper case respectively.
Their inner products are computed as
\begin{equation} \label{eq:innerp}
\mathbf{a} \cdot \mathbf{b} = a_{p}b_{p}, \quad
\mathbf{A} \cdot \mathbf{B} = A_{pq}B_{pq},
\end{equation}
with the associated norms
\begin{equation} \label{eq:norm}
|\mathbf{a}| = (\mathbf{a} \cdot \mathbf{a})^{1/2}, \quad
|\mathbf{A}| = (\mathbf{A} \cdot \mathbf{A})^{1/2}.
\end{equation}
A juxtaposed tensor and vector will represent the
conventional product
\begin{equation}
\mathbf{A} \mathbf{b} = A_{pq}b_{q},
\end{equation}
and juxtaposed tensors represent the product
\begin{equation}
\mathbf{A} \mathbf{B} = A_{pr}B_{qr}.
\end{equation}
Various quantities are measured at both micro and macro scales
so that the variability of the micro-scale measurements
can be deduced.
A macro-level, assembly average is indicated with
an overline ($\overline{\mathbf{L}}$, $\overline{\sigma}_{22}$,
$\overline{p}_{\mathrm{o}}$, $\overline{q}$);
whereas local, micro-level quantities appear with superscripts
($\mathbf{L}^{i}$, $\boldsymbol{\sigma}^{k}$, $\widehat{\mathbf{v}}^{j}$,
$p^{k}$, Table~\ref{table:superscripts}).
\begin{table}
\caption{Superscript notation}
\label{table:superscripts}
\centering
\begin{tabular}{cl}
\hline\noalign{\smallskip}
Index & Usage \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$i$ & A polygonal void cell having $m^{i}$ edges and \\
& vertices. An $m$-tuple of particles or contacts,\\
& $i=(k_{1},k_{2},\ldots,k_{m^{i}})$ or
$i=(j_{1},j_{2},\ldots,j_{m^{i}})$\\
$j$ & A contacting pair of particles $(k_{1},k_{2})$\\
$k$ & A single particle\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The ``$k$'' superscript is used with quantities that can be measured
within a single particle or its immediate vicinity;
the ``$i$'' superscript is assigned to quantities that are
measured within a single void cell (the dual of particles);
and the ``$j$'' superscript is used for quantities associated
with a pair of particles or a pair of void cells
(e.g. contacts, contact forces, branch vectors,
and inter-particle velocities).
No contractive summation is implied with superscripts,
e.g. $a^{j}b^{j}$.
\par
The non-uniformity of scalar, vector, and tensor quantities is
considered in the paper.
A consistent notation is used to express the conformity (or diversity)
of a local quantity $\mathbf{a}^{\mathrm{local}}$
with respect to
the corresponding assembly average $\overline{\mathbf{a}}$.
The pair $\mathbf{a}^{\mathrm{local}}$ and $\overline{\mathbf{a}}$
may be scalars, vectors, or tensors.
Three dimensionless scalars measure the \emph{participation}
of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}}$)
in the assembly-average $\overline{\mathbf{a}}$;
the \emph{non-conformity} of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}}$);
and the \emph{alignment} of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$)
with respect to the assembly-average~$\overline{\mathbf{a}}$:
\begin{align}
\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}} &=
\frac{1}{|\overline{\mathbf{a}}|^{2}}
\left( \mathbf{a}^{\mathrm{local}} \cdot \overline{\mathbf{a}}\right)
\label{eq:parallel}\\
\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}} &=
\frac{1}{|\overline{\mathbf{a}}|}
\left| \mathbf{a}^{\mathrm{local}} -
(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})
\overline{\mathbf{a}}
\right|
\label{eq:perp}\\
\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}} &=
\frac{1}{|\mathbf{a}^{\mathrm{local}}|\,|\overline{\mathbf{a}}|}
\left( \mathbf{a}^{\mathrm{local}} \cdot \overline{\mathbf{a}}\right)
\label{eq:circ}
\end{align}
The participation and non-conformity in Eqs.~\ref{eq:parallel}
and~\ref{eq:perp} are the
dimensionless magnitudes of $\mathbf{a}^{\mathrm{local}}$
in directions parallel and perpendicular to $\overline{\mathbf{a}}$,
and relative to the length of $\overline{\mathbf{a}}$.
The alignment $\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$
is the cosine of the angle separating
$\mathbf{a}^{\mathrm{local}}$ and $\overline{\mathbf{a}}$.
These quantities are unambiguous when $\mathbf{a}$ is a
vector or tensor.
If $\mathbf{a}$ is a scalar,
then $\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}}$
is simply the quotient $a^{\mathrm{local}}/\,\overline{a}$;
$\mathbf{a}^{\mathrm{local}} \!\!\perp \overline{\mathbf{a}}$
is zero;
and $\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$
is $\mathrm{sgn}(a^{\mathrm{local}}\,\overline{a})$, the sign of their product.
By reducing vector and tensor objects to the scalars in
Eqs.~(\ref{eq:parallel}--\ref{eq:circ}),
we can compute conventional statistical measures such as the
mean, standard deviation, and coefficient of variation.
These measures will be represented with the notation
$\mathsf{Mean}(\cdot)$, $\mathsf{Std}(\cdot)$,
and $\mathsf{Cov}(\cdot)$,
where the coefficient of variation
$\mathsf{Cov}(\cdot) = \mathsf{Std}(\cdot) / \mathsf{Mean}(\cdot)$.
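For concreteness, the three measures in Eqs.~\ref{eq:parallel}--\ref{eq:circ} can be sketched in a few lines of Python with NumPy (an illustrative fragment; the function names are ours and do not appear in the simulation code). Because the inner product of Eq.~\ref{eq:innerp} is an element-wise sum in both the vector and the tensor case, the same code serves both:

```python
import numpy as np

def participation(a_local, a_bar):
    # Eq. (parallel): component of a_local along a_bar, relative to |a_bar|^2
    a_local, a_bar = np.asarray(a_local, float), np.asarray(a_bar, float)
    return np.sum(a_local * a_bar) / np.sum(a_bar * a_bar)

def nonconformity(a_local, a_bar):
    # Eq. (perp): magnitude of the residual orthogonal to a_bar, scaled by |a_bar|
    a_local, a_bar = np.asarray(a_local, float), np.asarray(a_bar, float)
    residual = a_local - participation(a_local, a_bar) * a_bar
    return np.linalg.norm(residual) / np.linalg.norm(a_bar)

def alignment(a_local, a_bar):
    # Eq. (circ): cosine of the angle between a_local and a_bar
    a_local, a_bar = np.asarray(a_local, float), np.asarray(a_bar, float)
    return np.sum(a_local * a_bar) / (np.linalg.norm(a_local) * np.linalg.norm(a_bar))
```

For 2-D arrays, `np.linalg.norm` returns the Frobenius norm, which matches the tensor norm of Eq.~\ref{eq:norm}.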
\par
As an example with vector quantities $\mathbf{a}$, we can consider
two different sets of two-dimensional vectors $\mathbf{a}^{\mathrm{local}}$,
and this example can serve as a reference case for comparing
the results given later in the paper.
In both sets, the vectors $\mathbf{a}^{\mathrm{local}}$ all have
unit length.
In the first set, the vectors
$\mathbf{a}^{\mathrm{local}}$ have a uniform direction that is aligned with
the reference vector $\overline{\mathbf{a}}$;
but in the second set, the vectors $\mathbf{a}^{\mathrm{local}}$
have uniformly random directions.
In the example, the reference vector $\overline{\mathbf{a}}$
is also assumed to have unit length.
The four statistical measures
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$,
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$,
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}})$,
and $\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \circ \overline{\mathbf{a}})$
are used in the paper as indicators of local non-conformity and
heterogeneity, and their values
for this simple example are
summarized in Table~\ref{table:values}.
\begin{table}
\caption{Statistics of uniform and random vector sets}
\label{table:values}
\centering
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
&\multicolumn{2}{c}{Vectors $\mathbf{a}^{\mathrm{local}}$}\\
& Uniform,& \\
Measure & aligned & Random \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})$ &
1 & 0 \\
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})$ &
0 & $1/2$ \\
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \perp \overline{\mathbf{a}})$ &
0 & $2/\pi$ \\
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \circ \overline{\mathbf{a}})$ &
1 & 0 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
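The mean values in the random column of Table~\ref{table:values} can be checked with a short Monte Carlo sketch (illustrative only; the seed, sample size, and use of NumPy are our choices, not part of the reported simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
a_bar = np.array([1.0, 0.0])                  # unit reference vector
theta = rng.uniform(0.0, 2.0 * np.pi, 200_000)
a_loc = np.column_stack([np.cos(theta), np.sin(theta)])  # unit, random directions

par = a_loc @ a_bar            # a_local || a_bar; equals cos(theta) since |a_bar| = 1
perp = np.abs(a_loc[:, 1])     # residual orthogonal to a_bar; equals |sin(theta)|
circ = par                     # for unit vectors the alignment coincides with par

print(par.mean())              # near 0
print(perp.mean())             # near 2/pi, about 0.637
print(circ.mean())             # near 0
```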
In the simulated biaxial loading of 10,816 circular disks, certain local
vector and tensor quantities are found to have measured
values of
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$
and $\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}})$
that greatly exceed those of the random set, as given in the
final column of Table~\ref{table:values}.
These large values are due to variations in the magnitudes of the
local quantities as well as in their directions.
\section{Heterogeneity measurements} \label{sec:results}
The experimental results are analyzed for indications of four
categories of heterogeneity:
topologic, geometric (fabric), kinematic, and static.
\subsection{Topologic heterogeneity} \label{sec:topology}
In a 2D setting, the topology of an assembly can be
described by the \emph{particle graph} of its particles
(the graph vertices) and their contacts (the graph edges)~\cite{Satake:1993b}.
The particle graph is associated with the Voronoi-Dirichlet
tessellation of a 2D region, except that the particle
graph admits only the real contacts as graph edges.
The faces of the planar graph are polygonal void cells, which are
enclosed by the circuits of contacting particles
(an example void cell is shaded in Fig.~\ref{fig:graph}).
\begin{figure}
\centering
\includegraphics{graph.eps}
\caption{Particle graph of a 2D granular assembly. A single
void cell is shaded. The void cells labeled~a, b, and~c have
valences of~6, 4, and~3 respectively.}
\label{fig:graph}
\end{figure}
For this topologic description of a 2D granular material,
the simplest local topologic measures are the local
coordination number $n^{k}$ and the local valence $m^{i}$, defined
as the number of contacts of a single particle $k$, and the number
of edges of a single void cell $i$
(see Fig.~\ref{fig:graph} for examples of valence).
Because gravity is absent in the current simulations,
some particles will be unattached and, hence, excluded from the
particle graph.
The effective average coordination number $\overline{n}_{\mathrm{eff}}$
of the attached particles will be somewhat larger than the coordination
number $\overline{n}$ that includes both attached and unattached
particles~\cite{Kuhn:1999a,Thornton:2000a}.
Dense assemblies have large coordination numbers and small valences,
but during biaxial compression, the average effective
coordination number is reduced, while the average valence
increases~\cite{Kuhn:1999a,Thornton:2000a}.
In the simulation of biaxial compression, $\overline{n}_{\mathrm{eff}}$
is reduced from 4.14 in the initial particle arrangement to a value
of 3.50 at the final compressive strain, $\overline{\varepsilon}_{22}=-0.01$.
The average valence $\overline{m}$ increases from 3.87 to 4.66.
\par
A simple measure of topologic nonuniformity is the dispersion in
the local values of $n^{k}$ and $m^{i}$.
Figure~\ref{fig:topology} shows the evolution of the coefficients
of variation of these two local topologic measures.
\begin{figure}
\centering
\includegraphics{topology.eps}
\caption{The evolution of two measures of topologic heterogeneity
during biaxial compression:
the coefficients of variation ($\mathsf{Cov}$)
of the local coordination number ($n^{k}$)
and local valence ($m^{i}$).}
\label{fig:topology}
\end{figure}
Together, the results indicate an increase in topologic heterogeneity during
loading.
The large increase in the dispersion of local valence,
as expressed by the coefficient of variation $\mathsf{Cov}(m^{i})$,
is consistent with the results of
Tsuchikura and Satake~\cite{Tsuchikura:2001a},
who have shown that the sizes of void cells become more diverse
during biaxial compression.
The increase in the coefficient of variation of the local coordination
number, $\mathsf{Cov}(n^{k}) = \mathsf{Std}(n^{k}) / \mathsf{Mean}(n^{k})$,
is due, in part, to a reduction in the mean coordination number.
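As an illustration of how such dispersion statistics can be gathered, the coefficient of variation of the local coordination number follows directly from the contact list (a minimal Python sketch; the function name and data layout are assumptions, not taken from the simulation code):

```python
import numpy as np
from collections import Counter

def coordination_cov(contacts):
    # Cov(n^k) = Std(n^k) / Mean(n^k) for the particles that appear in the
    # contact list, i.e. the attached particles of the particle graph.
    counts = Counter()
    for k1, k2 in contacts:
        counts[k1] += 1
        counts[k2] += 1
    n = np.array(list(counts.values()), dtype=float)
    return n.std() / n.mean()
```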
\subsection{Geometric heterogeneity}\label{sec:fabric}
Geometric characteristics of granular materials are listed
in Table~\ref{table:class1},
and numerous studies have shown how the assembly averages of these
characteristics evolve during loading.
Fewer studies indicate how the internal diversity
of these characteristics changes with loading.
Tsuchikura and Satake~\cite{Tsuchikura:2001a} have developed
methods for examining the diversity of local fabric
in a 2D granular material and found that void cells become more
elongated during loading, but that the variation in elongation
remains fairly uniform.
To study this form of fabric anisotropy, they
propose a method for computing the magnitude of the anisotropy of a general
second order symmetric tensor $\mathbf{T}$ by considering
its deviatoric part $\mathbf{T}'$.
The self-product of $\mathbf{T}'$ yields a scalar measure $\beta$
of anisotropy:
\begin{equation}\label{eq:beta}
\mathbf{T}' \mathbf{T}' = \beta^2 \,\mathbf{I}\;.
\end{equation}
In their experimental study, they used $\beta$ to measure
the local anisotropy (elongation magnitude) of the loop tensors
of individual void cells.
The current study applies the same methods to analyze heterogeneity in
the local fabric tensor.
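In two dimensions, the $\beta$ of Eq.~\ref{eq:beta} has a simple closed form: for a symmetric tensor it equals half the difference of the two principal values of $\mathbf{T}$. A minimal sketch (Python with NumPy; the function name is ours):

```python
import numpy as np

def anisotropy_beta(T):
    # beta of Eq. (beta): T' T'^T = beta^2 I for the deviatoric part T' of a
    # symmetric 2x2 tensor; equals half the difference of the principal values.
    T = np.asarray(T, float)
    T_dev = T - 0.5 * np.trace(T) * np.eye(2)
    return np.hypot(T_dev[0, 0], T_dev[0, 1])
```

The local fabric anisotropy $\alpha^{k}$ of the next subsection follows by applying the same function to $\mathbf{F}^{k}$.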
\par
Satake~\cite{Satake:1982a} proposed the fabric tensor as a measure
of particle arrangement in a granular material, and we
use a local form, $\mathbf{F}^{k}$, to analyze fabric heterogeneity:
\begin{equation}
F_{pq}^{k} = \frac{1}{n^{k}} \sum_{j=1}^{n^{k}} \eta_{p}^{\,j}\eta_{q}^{\,j}\;,
\end{equation}
where the tensor for a particle $k$ involves its $n^{k}$ contacts.
Superscript $j$ denotes the $j$th contact with particle $k$
(Table~\ref{table:superscripts}).
Vectors $\boldsymbol{\eta}^{j}$ are unit vectors in the directions
of the branch vectors that join the center of particle $k$ with the centers of
its contacting neighbors.
The assembly average $\overline{\mathbf{F}}$ is computed from the sum
of local values for all $N_{\mathrm{eff}}$ particles that
are included in (attached to) the particle graph,
\begin{equation} \label{eq:Fbar}
\overline{\mathbf{F}} = \frac{1}{2 N_{\mathrm{eff}}}
\sum_{k=1}^{N_{\mathrm{eff}}} n^{k} \mathbf{F}^{k} \;.
\end{equation}
Studies have shown that $\overline{\mathbf{F}}$ becomes increasingly
anisotropic during deviatoric loading, with the major
principal direction of $\overline{\mathbf{F}}$ becoming more aligned with the
direction of compressive loading~\cite{Oda:1982a,Thornton:2000a}.
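The computation of $\mathbf{F}^{k}$ and $\overline{\mathbf{F}}$ from particle centers and a contact list can be sketched as follows (illustrative Python; the function names and data layout are our assumptions, and periodic wrapping of branch vectors is ignored here):

```python
import numpy as np

def local_fabric(centers, contacts):
    # F^k: average dyad of the unit branch vectors eta^j at the n^k contacts
    # of particle k; `contacts` is a list of particle-index pairs (k1, k2).
    F = {k: np.zeros((2, 2)) for k in range(len(centers))}
    n = {k: 0 for k in range(len(centers))}
    for k1, k2 in contacts:
        eta = centers[k2] - centers[k1]
        eta = eta / np.linalg.norm(eta)
        dyad = np.outer(eta, eta)      # eta and -eta give the same dyad
        F[k1] += dyad; n[k1] += 1
        F[k2] += dyad; n[k2] += 1
    return {k: F[k] / n[k] for k in F if n[k] > 0}, n

def assembly_fabric(F_local, n):
    # Eq. (Fbar): contact-number-weighted average over the attached particles
    attached = list(F_local)
    return sum(n[k] * F_local[k] for k in attached) / (2 * len(attached))
```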
\par
The current study considers variability in the local anisotropy of
fabric.
We apply Eq.~\ref{eq:beta} to the local fabric tensor $\mathbf{F}^{k}$
to compute a local measure $\alpha^{k}$ of fabric anisotropy:
\mbox{$\mathbf{T} \rightarrow \mathbf{F}^{k}$},
\mbox{$\beta \rightarrow \alpha^{k}$}.
Fig.~\ref{fig:fabric} shows the results for the biaxial compression
tests.
\begin{figure}
\centering
\includegraphics{fabric.eps}
\caption{Changes in the average and local fabric anisotropies
during biaxial compression.}
\label{fig:fabric}
\end{figure}
The average fabric anisotropy of the entire assembly, $\overline{\alpha}$,
increases with loading (Eqs.~\ref{eq:beta} and~\ref{eq:Fbar}),
a result that is consistent with previous
experiments.
As would be expected, the mean local anisotropy, $\mathsf{Mean}(\alpha^{k})$,
is larger than the average assembly anisotropy $\overline{\alpha}$,
and the increase in local anisotropy parallels that of the entire assembly.
The results also show, however, that the standard deviation
of fabric anisotropy increases with strain.
The increase in $\mathsf{Std}(\alpha^{k})$
suggests that the geometric arrangement of particles becomes
more varied during loading.
\subsection{Inter-particle movements} \label{sec:move}
The change in stress within a dry granular material is
due to local changes in the inter-particle
forces that result from the relative shifting of particles during
assembly deformation.
The simplest models of this mechanism are based upon
the interactions of particle pairs that are constrained
to move in accord with a homogeneous deformation field.
Bathurst and Rothenburg~\cite{Bathurst:1988a} studied the
inter-particle movements at small strains in
the biaxial compression of a disk assembly.
Their results demonstrate that, on average, the inter-particle
movements at small strains are less than those that would be consistent with
uniform deformation (see also~\cite{Kruyt:2002a}).
The current study addresses the non-conformity of inter-particle movements
relative to the average deformation,
the diversity of this non-conformity, its evolution during loading,
and the spatial coherence of the non-conformity.
In this regard, we consider only those particles that are
included in the particle graph at a particular
stage of loading.
The relative velocity $\widehat{\mathbf{v}}^{j}$ of two particles
$k_{1}$ and~$k_{2}$ is the difference in their velocities
\begin{equation}
\widehat{\mathbf{v}}^{j} = \mathbf{v}^{k_{2}} - \mathbf{v}^{k_{1}}\;,
\end{equation}
where index $j$ represents the contacting pair \mbox{$(k_{1},k_{2})$}.
The relative movement that would be consistent with homogeneous deformation
is the product $\overline{\mathbf{L}}\,\mathbf{l}^{j}$,
where $\overline{\mathbf{L}}$ is the average velocity gradient of the assembly,
and $\mathbf{l}^{j}$ is the branch vector between the
centers of particles $k_{1}$ and $k_{2}$ (Table~\ref{table:superscripts}).
\par
The quantities in Eqs.~(\ref{eq:parallel}--\ref{eq:circ}) can
be applied to
describe the conformity (or non-conformity) and
diversity of the local, inter-particle
movements $\widehat{\mathbf{v}}^{j}$ with respect to the
mean-field displacement $\overline{\mathbf{L}}\,\mathbf{l}^{j}$.
We begin by considering only pairs of particles that are in
direct contact during biaxial compression
(the number of these pairs ranges from 17,600 to 21,300 for
the 10,816 particles),
although we will consider more distant pairs in a later paragraph.
\par
The evolution of measures~(\ref{eq:parallel}--\ref{eq:circ})
are shown in Fig.~\ref{fig:contactMove_strain}.
\begin{figure}
\centering
\includegraphics{contactMove_strain.eps}
\caption{Evolution of the non-conformity and heterogeneity
of inter-particle motions $\widehat{\mathbf{v}}^{j}$
during biaxial compression.
The motions are for particle pairs $j$ that are in direct contact
($\rho=1$). Over 18,000 pairs are represented in each point.}
\label{fig:contactMove_strain}
\end{figure}
The average inter-particle motions
$\widehat{\mathbf{v}}^{j}$ are consistently less than the
mean-field motions, as is shown by a mean conformity
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
less than 1.
This result is consistent with studies~\cite{Kruyt:2002a}
and~\cite{Bathurst:1988a},
which investigated the local behavior at small strains.
Figure~\ref{fig:contactMove_strain} shows that the mean
conformity,
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
is modestly reduced during loading,
from about 0.91 to about 0.82.
As we will see, however, the diversity of the fluctuations
can be quite large.
Both the non-conformity and heterogeneity of inter-particle
motions are indicated
by the additional measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$.
If the local motions were in uniform conformance with the
assembly deformation,
these two measures would have values of~0 and~1 respectively.
At large strains, the value of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
approaches~2, compared with a value of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\parallel\!
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
of about~0.82.
These results reveal that, on average and at
large strains, the components of inter-particle
movements that are \emph{orthogonal} to their mean-field directions
can be more than twice as large as the components that
are aligned with the mean-field directions
(Eqs.~\ref{eq:parallel} and~\ref{eq:perp}).
This lack of vector alignment is also indicated by the
cosine-type measure
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which is reduced to a value of about 0.15 (see Eq.~\ref{eq:circ}).
At the end of the test, fully 40\% of inter-particle motions were in
the ``wrong'' direction, with values
$\widehat{\mathbf{v}}^{j}\cdot(\overline{\mathbf{L}}\,\mathbf{l}^{j})<0$.
The fourth measure in Fig.~\ref{fig:contactMove_strain} is
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which displays a rather extreme degree of nonuniformity
in the components of inter-particle movements that are
parallel to the mean-field directions.
This nonuniformity is particularly sizable
at large strains.
A set of random vectors of uniform length would have a value
of
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \!\!\parallel\!
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
of only 0.5 (Table~\ref{table:values}),
a value several times smaller than those
in Fig.~\ref{fig:contactMove_strain}.
Such large values
indicate a substantial heterogeneity in both the magnitudes
and directions of the inter-particle
movements $\widehat{\mathbf{v}}^{j}$.
\par
We can also use the biaxial compression simulation to investigate
the spatial correlation of inter-particle movements and the length scale
at which the inter-particle movements approximate the mean deformation
field.
Kruyt and Rothenburg~\cite{Kruyt:2002a} measured the spatial
correlation of movements at small strains by using a 2-point
correlation technique.
In the current study, we do not consider all possible particle
pairs, but instead use only those pairs of particles that are included
in (attached to) the particle graph, as only these particles participate
directly in the deformation and load-bearing mechanisms.
This limitation suggests a \emph{discrete metric} $\rho$
for describing the distance between two particles $k_{1}$ and $k_{2}$.
The distance $\rho(k_{1},k_{2})$ is the least number
of contacts (graph edges) that must be traversed to connect
$k_{1}$ and $k_{2}$ (Fig.~\ref{fig:distance}).
\begin{figure}
\centering
\includegraphics{distance.eps}
\caption{Discrete distances $\rho$ from a reference particle 0.
The vertices represent particle centers;
edges represent particle contacts.}
\label{fig:distance}
\end{figure}
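The discrete distance $\rho(k_{1},k_{2})$ is simply a shortest-path length in the particle graph, and it can be found by breadth-first search (an illustrative Python sketch; the adjacency-list representation is our assumption):

```python
from collections import deque

def discrete_distance(adjacency, k1, k2):
    # rho(k1, k2): least number of contacts (graph edges) traversed between
    # two particles; `adjacency` maps a particle to its contacting neighbors.
    if k1 == k2:
        return 0
    seen = {k1}
    queue = deque([(k1, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor == k2:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # the particles lie in different components of the graph
```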
The results in Fig.~\ref{fig:contactMove_strain},
which have already been described, were
collected from the sets of all particle pairs at a
discrete distance of~1,
i.e. the sets $\{ (k_{1},k_{2}) : \rho(k_{1},k_{2})=1 \}$
at various stages of loading.
The discrete metric does not provide angle or size, so all subsequent
calculations with the objects $\widehat{\mathbf{v}}^{j}$,
$\overline{\mathbf{L}}$, and $\mathbf{l}^{j}$ were, of course,
performed in Euclidean space, but only on the selected particle pairs.
\par
Figure~\ref{fig:Contact_move_dist_005}
shows the non-conformity and heterogeneity of
inter-particle movements $\widehat{\mathbf{v}}^{j}$
for particle pairs $j$ at distances
$\rho$ of~1 to~10,
but at the single large strain $\overline{\varepsilon}_{22}=-0.005$
(see Fig.~\ref{fig:crs_q}).
\begin{figure}
\centering
\includegraphics{Contact_move_dist_005.eps}
\caption{The correlation of inter-particle motions with the
discrete distance $\rho$ between particle pairs at
a strain $\overline{\varepsilon}_{22}=-0.005$.
The superscript $j$ represents a pair of particles $(k_{1},k_{2})$
that are separated by distance $\rho$.
The results at $\rho=1$ involve 18,000 pairs; results
at $\rho=10$ involve over 250,000 pairs.}
\label{fig:Contact_move_dist_005}
\end{figure}
(The results for $\rho=10$ involve over one-quarter million particle pairs.)
As would be expected, the average conformity of
the observed inter-particle movements
with their corresponding mean-strain movements
improves with an increasing discrete
distance between the pairs.
This improved conformity is evidenced by increases
in the measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and in the reduction of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$.
However, at a distance of $\rho=10$ and at the strain
$\overline{\varepsilon}_{22}=-0.005$,
the values of these three measures are about the same as
those at distance $\rho=1$ with zero strain,
$\overline{\varepsilon}_{22}\approx 0$.
That is,
at the large strain of $-0.005$, the non-conformity of motion at a distance of
about~8--10 particle diameters is no smaller than the appreciable
non-conformity of neighboring particles at small
strains.
\par
The conformity between the actual and mean-field motions is particularly
poor at large strains if we consider only the \emph{normal} motions
between the particle pairs that are in direct contact (i.e. with $\rho=1$).
Figure~\ref{fig:Contact_move_orient_005} shows the assembly averages of the
normal and tangential motions of those particle pairs that are separated
by distances $\rho$ of~1 and~3, at the large strain
$\overline{\varepsilon}_{22}=-0.005$.
\begin{figure}
\centering
\includegraphics{Contact_move_orient_005.eps}
\caption{The average normal and tangential motions of particle
pairs as a function of the pair orientation $\theta^{j}$.
Mean-field motions $\overline{\mathbf{L}}\,\mathbf{l}^{j}$
are represented by heavy lines; whereas, the averaged
actual inter-particle motions are the lighter lines.
Values are given for pairs having discrete distances
$\rho$ of~1 and~3. The compressive strain $\overline{\varepsilon}_{22}$
is $-0.005$ (see Fig.~\ref{fig:crs_q}).}
\label{fig:Contact_move_orient_005}
\end{figure}
These motions are plotted against the orientation
angles $\theta^{j}$ of the pairs (Fig.~\ref{fig:theta}),
and advantage has been taken of the loading symmetry by
folding the angles
$\theta^{j}$ into the single quadrant 0$^{\circ}$ to 90$^{\circ}$.
\begin{figure}
\centering
\includegraphics{theta.eps}
\caption{Orientation angle $\theta^{j}$ for a particle pair.}
\label{fig:theta}
\end{figure}
The normal inter-particle motions are the inner products
\mbox{$\widehat{\mathbf{v}}^{j} \!\cdot\! \boldsymbol{\eta}^{\,j}$},
where $\boldsymbol{\eta}^{\,j}$ is the unit vector
aligned with the branch vector $\mathbf{l}^{j}$ that connects the centers
of a particle pair $j = (k_{1},k_{2})$.
Figure~\ref{fig:Contact_move_orient_005} compares the
averages of these values with the
corresponding averages of the mean-field motions
$\overline{\mathbf{L}}\,\mathbf{l}^{j}$
(the latter are represented with heavy lines).
The results have been normalized by dividing by the average
length $\ell^{\,\rho} = \langle |\mathbf{l}^{j,\rho}| \rangle$
for a particular separation $\rho$ and by the strain rate
$\overline{L}_{22}$.
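As a concrete illustration of this decomposition and normalization (a sketch with invented array shapes, not the simulation code used in the study), the normal and tangential pair motions can be computed as:

```python
import numpy as np

def pair_motions(v_hat, l, L22):
    """Decompose relative pair motions v_hat into normal/tangential parts.

    v_hat : (N, 2) relative velocities of particle pairs
    l     : (N, 2) branch vectors connecting pair centers
    L22   : scalar assembly strain rate used for normalization
    """
    lengths = np.linalg.norm(l, axis=1)
    eta = l / lengths[:, None]                  # unit branch vectors
    v_n = np.einsum('ij,ij->i', v_hat, eta)     # normal components
    t = np.stack([-eta[:, 1], eta[:, 0]], axis=1)  # 90-degree rotation of eta
    v_t = np.einsum('ij,ij->i', v_hat, t)       # tangential components
    ell = lengths.mean()                        # average branch length
    return v_n / (ell * abs(L22)), v_t / (ell * abs(L22))
```

Averaging the returned values within bins of the pair orientation $\theta^{j}$ would reproduce the kind of curves plotted in the figure.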
Figure~\ref{fig:Contact_move_orient_005} shows that, at large strains,
the movements of contacting particles ($\rho=1$) are predominantly tangential,
and that the mean normal motion is quite small.
That is, at $\rho = 1$ and at large strains,
the normal inter-particle movements are grossly overestimated
by the mean-field motion $\overline{\mathbf{L}}\,\mathbf{l}^{j}$.
At a distance $\rho=3$, the motions are, on average, in
much closer conformity with those predicted by a mean-field assumption.
The apparent conformity at $\rho=3$
in Fig.~\ref{fig:Contact_move_orient_005}
is, however, based upon an average of movements,
and the true diversity in their values is more appropriately reflected in
the measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
and
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which are reported in Figs.~\ref{fig:contactMove_strain}
and~\ref{fig:Contact_move_dist_005}.
\subsection{Deformation heterogeneity} \label{sec:deform}
Micro-scale deformations within a 2D granular material
can be computed by considering the small polygonal void cells
as being representative micro-regions among particle
clusters (Fig.~\ref{fig:graph})~\cite{Bagi:1996a,Kruyt:1996a,Kuhn:1999a}.
The region of 10,816 particles can be partitioned into over 7500
of these void cells.
The average velocity gradient
$\mathbf{L}^{i}$ within a single polygonal void cell $i$ is computed
from the motions of the particles at its vertices.
These local velocity gradients can then be compared with the
average assembly gradient $\overline{\mathbf{L}}$,
and the measures in Eqs.~(\ref{eq:parallel}--\ref{eq:circ})
can be used to investigate the non-conformity and
heterogeneity of local deformations.
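One plausible discretization of the void-cell velocity gradient (a sketch; the exact formulas of~\cite{Bagi:1996a,Kuhn:1999a} may differ in detail) applies the divergence theorem around the polygon, interpolating velocities linearly between the vertex particles:

```python
import numpy as np

def cell_velocity_gradient(xy, v):
    """Average velocity gradient L_pq = <dv_p/dx_q> over a polygonal void cell.

    Uses the divergence theorem, <dv_p/dx_q> = (1/A) * (boundary integral of
    v_p n_q ds), with the velocity interpolated linearly between the particle
    centers at the vertices.
    xy, v : (n, 2) vertex coordinates (counter-clockwise) and velocities.
    """
    x2 = np.roll(xy, -1, axis=0)
    v2 = np.roll(v, -1, axis=0)
    d = x2 - xy
    # outward normal times edge length for a counter-clockwise polygon
    n_ds = np.stack([d[:, 1], -d[:, 0]], axis=1)
    area = 0.5 * np.sum(xy[:, 0] * x2[:, 1] - x2[:, 0] * xy[:, 1])
    vmid = 0.5 * (v + v2)                       # edge midpoint velocities
    return np.einsum('ep,eq->pq', vmid, n_ds) / area
```

Because the boundary quadrature is exact for linearly varying velocity fields, a cell whose vertices move with a uniform gradient $\mathbf{G}$ returns exactly $\mathbf{L}^{i}=\mathbf{G}$.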
Figure~\ref{fig:def_var_strain} shows the evolution of
these measures in the course of a biaxial compression test.
\begin{figure}
\centering
\includegraphics{def_var_strain.eps}
\caption{The evolution of deformation non-conformity
and heterogeneity during biaxial compression.
Each point represents the deformations $\mathbf{L}^{i}$
in over 7500 void cells,
where the superscript $i$ denotes the $i$th void cell.}
\label{fig:def_var_strain}
\end{figure}
At small strains, the local deformations are modestly aligned
with the averaged deformation:
the average cosine of alignment,
$\mathsf{Mean}(\mathbf{L}^{i} \!\circ \overline{\mathbf{L}})$,
is 0.91, only slightly lower than~1,
and the average component of the local gradient
$\mathbf{L}^{i}$ that is perpendicular to the assembly
average $\overline{\mathbf{L}}$ is about 35\% of $|\overline{\mathbf{L}}|$.
At larger strains, the local deformations are, on average, far more deviant
and exhibit a much larger dispersion of values.
The standard deviation of the aligned deformations,
$\mathsf{Std}(\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}})$,
becomes more than twice the mean value of~1.
Deformations that are orthogonal to $\overline{\mathbf{L}}$
become, on average, much larger than those parallel to
$\overline{\mathbf{L}}$
(compare the
$\mathsf{Mean}(\mathbf{L}^{i} \!\perp \overline{\mathbf{L}})$
in Fig.~\ref{fig:def_var_strain}
with a
$\mathsf{Mean}(\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}})$
of 1).
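Since Eqs.~(\ref{eq:parallel})--(\ref{eq:circ}) are defined earlier in the paper and not reproduced here, the sketch below uses plausible stand-in definitions of the three comparison measures for a single tensor pair; the actual definitions may differ:

```python
import numpy as np

def alignment_measures(A, B):
    """Hypothetical versions of the three comparison measures for a local
    tensor A (e.g. L^i or sigma^k) against an assembly average B:
      par  : component of A along B, normalized so that A == B gives 1
      perp : magnitude of the part of A orthogonal to B, relative to |B|
      circ : cosine-type alignment based on the inner product A:B
    """
    inner = np.sum(A * B)
    par = inner / np.sum(B * B)
    perp = np.linalg.norm(A - par * B) / np.linalg.norm(B)
    circ = inner / (np.linalg.norm(A) * np.linalg.norm(B))
    return par, perp, circ
```

Taking $\mathsf{Mean}(\cdot)$ and $\mathsf{Std}(\cdot)$ of these values over all void cells (or particles) would yield the assembly statistics plotted in the figures.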
\par
This non-conformity and heterogeneity is also illustrated in
Fig.~\ref{fig:Group_align},
which shows the distributions of aligned deformations
at moderate and large compressive strains, $\overline{\varepsilon}_{22}$
of $-0.0005$ and $-0.005$.
\begin{figure}
\centering
\parbox{8.5cm}
{\centering%
\includegraphics{Group_align_0005.eps}\\[0ex]
\small{(a) $\overline{\varepsilon}_{22} = -0.0005$}\\[3.0ex]
\includegraphics{Group_align_005.eps}\\[0ex]
\small{(b) $\overline{\varepsilon}_{22} = -0.005$}
}
\caption{Distributions of the aligned deformation of void cells
at two strains. The void cells have been grouped according
to a ranking of their $\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}}$
values
(10,900 and 8300 void cells are included at the two strains).}
\label{fig:Group_align}
\end{figure}
In each figure, the void cells have been placed into~20 bins,
arranged according to a ranking of the
aligned deformations
$\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}}$
of each $i$th void cell.
At moderate strains, the most contributory 10\% of void cells
participate disproportionately in the average assembly
deformation, contributing about 6.5 times more than
the least contributory 10\% of void cells (Fig.~\ref{fig:Group_align}a).
At the larger strain of $-0.005$, about 22\% of the material makes
a \emph{negative} contribution to the overall assembly deformation,
and, in a sense, is deforming in the ``wrong'' direction
(Fig.~\ref{fig:Group_align}b).
As another measure of this heterogeneity at large strain, the 31\% of
most contributory void cells could account, by themselves, for
the entire assembly deformation.
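The bookkeeping behind these statements, ranking void cells by aligned deformation, binning them, and tallying the negative and dominant contributions, can be sketched as follows (an illustrative sketch only; the bin handling is an assumption):

```python
import numpy as np

def ranked_contributions(par, n_bins=20):
    """Group cells by their aligned deformation (most contributory first) and
    report the mean contribution per bin, the fraction of cells contributing
    negatively, and the smallest top fraction accounting for the whole sum."""
    s = np.sort(np.asarray(par, dtype=float))[::-1]
    bins = np.array_split(s, n_bins)
    bin_means = np.array([b.mean() for b in bins])
    frac_negative = np.mean(s < 0.0)
    cum = np.cumsum(s)
    total = cum[-1]
    # fewest top-ranked cells whose running sum already covers the total
    k = int(np.argmax(cum >= total)) + 1
    return bin_means, frac_negative, k / s.size
```

With negative entries present, the running sum peaks above the total, so a proper subset of the most contributory cells can account for the entire assembly deformation, as described in the text.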
This situation is akin to that of a material in which a shear band
has developed, where intense shearing within the band
is accompanied by unloading outside of the band.
No shear bands were observed in the current simulations,
although another type of localization, in the form of multiple
non-persistent \emph{micro-bands},
was present throughout the biaxial compression
test.
This type of deformation patterning, described in~\cite{Kuhn:1999a},
was subtly present at the start of deformation and became more
pronounced as deformation proceeded.
Microband localization accounts for much of the deformation
heterogeneity that is recorded in
Figs.~\ref{fig:def_var_strain} and~\ref{fig:Group_align}.
An example of micro-band patterning at small strain is shown in
Fig.~\ref{fig:microbands}, in which the local, void cell deformations
$\mathbf{L}^{i}$ have been filtered to highlight
a right-shear deformation mode
(see~\cite{Kuhn:1999a} for a discussion of the visualization
technique).
\begin{figure}
\centering
\includegraphics{d0005_cell_def.eps}
\caption{The presence of right-shear microbands at
strain $\overline{\varepsilon}_{22} = -0.0005$.
The local void cell deformations $\mathbf{L}^{i}$ have
been filtered as $\mathbf{L}^{i} \boldsymbol{\Phi}$,
where the filter $\boldsymbol{\Phi} = [0.49\;0.41 ; -0.58\; -0.49]$
captures a deformation mode that produces shearing that is
downward and to the right.
A complementary set of left-shear microbands would be present with
the use of an alternative filter.
The gray scale illustrates the magnitudes of
the local filtered deformations, but some of the white regions have
negative filtered values in this monochrome plot.}
\label{fig:microbands}
\end{figure}
\subsection{Particle rotation heterogeneity} \label{sec:rotate}
Particle rotations in granular materials are known to be large,
particularly in 2D assemblies of circular disks.
Dedecker et~al.~\cite{Dedecker:2000a} found that the standard
deviation of the particle rotation rates could be several times larger
than the average strain rate of an assembly.
Calvetti et~al.~\cite{Calvetti:1997a} reported that the variability
of particle rotations increased consistently with increasing
strain.
Figure~\ref{fig:rotations} shows that this variability
is expressed in a
spatial patterning of
particle rotations.
The figure is taken at the
moderate strain $\overline{\varepsilon}_{22}$ of $-0.0005$, but
\begin{figure}
\centering
\includegraphics{d0005_part_rotat.eps}
\caption{Particle spins in a biaxial compression test at
strain $\overline{\varepsilon}_{22} = -0.0005$. Only
counter-clockwise spinning particles are shown in the plot.}
\label{fig:rotations}
\end{figure}
\emph{only counter-clockwise} rotations are shown in this
monochrome plot, where the shading depends upon the
dimensionless rotation rate $\omega^{k}/ |\overline{\mathbf{L}}|$.
The most rapidly rotating particles are usually aligned in chain-like
patterns oblique to the principal stress directions.
These chains are closely associated with microbands,
as can be seen by comparing Figs.~\ref{fig:microbands}
and~\ref{fig:rotations}~\cite{Kuhn:1999a}.
\subsection{Stress heterogeneity} \label{sec:stress}
The transmission of force among particles occurs in a
non-uniform manner, with certain chains of particles bearing
a disproportionate share of the surface tractions.
These force chains have been widely observed, and several related
references are given in Table~\ref{table:class1}.
The current study concerns the distribution of \emph{stress}
among an assembly's particles.
In two previous studies,
the local variation of stress within stacks of rods
has been studied by withdrawing groups of rods and
measuring the removal force~\cite{Bacconnet:1992a,Auvinet:1992a}.
The DEM simulations of
the current study allow the direct computation
of stress $\boldsymbol{\sigma}^{k}$ within each $k$th disk:
\begin{equation}
\sigma_{pq}^{k} = \frac{r^{k}}{A^{k}}\sum_{j=1}^{n^{k}}
\eta_{p}^{\,j} f_{q}^{\,j} \;,
\end{equation}
where the summation is over the $n^{k}$ contacts $j$ of particle $k$,
$r^{k}$ is the disk radius, $\boldsymbol{\eta}^{\,j}$ is the
unit normal vector at contact $j$, and $\mathbf{f}^{j}$ is the contact force.
Satake~\cite{Satake:1992a} and
Kruyt and Rothenburg~\cite{Kruyt:2002a} have
described a dual of the particle graph that could be used
to compute a representative particle area $A^{k}$ that includes a portion
of the void space around a particle.
To compute a local stress that can be compared with the
average assembly stress, we instead use the (solid) disk area
$\pi (r^{k})^{2}$ and
simply divide it by the assembly-average solid fraction.
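A minimal sketch of the per-particle stress computation in the equation above (the `contacts` data structure and the sign convention are assumptions for illustration; with tension taken positive, compressive contacts yield negative normal stresses):

```python
import numpy as np

def particle_stress(r, contacts, solid_fraction):
    """Average stress in a single disk from its contact forces,
    sigma_pq = (r / A) * sum_j eta_p^j f_q^j, using the solid disk area
    pi r^2 divided by the assembly-average solid fraction, as in the text.

    contacts : list of (eta, f) pairs, with eta the outward unit contact
               normal and f the force acting on the disk at that contact.
    """
    A = np.pi * r**2 / solid_fraction
    sigma = np.zeros((2, 2))
    for eta, f in contacts:
        sigma += r * np.outer(eta, f)
    return sigma / A
```

For a disk squeezed along one axis by two diametrically opposed contacts, only the corresponding diagonal component is nonzero, as expected.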
\par
Figure~\ref{ref:stress_var_strain}
shows the evolution of non-conformity and heterogeneity
in the local stress $\boldsymbol{\sigma}^{k}$
(Eqs.~\ref{eq:parallel}--\ref{eq:circ}).
\begin{figure}
\centering
\includegraphics{stress_var_strain.eps}
\caption{The evolution of stress non-conformity and heterogeneity during
biaxial compression.}
\label{ref:stress_var_strain}
\end{figure}
The average, cosine-type alignment of the local stress,
$\mathsf{Mean}(\boldsymbol{\sigma}^{k} \!\circ
\overline{\boldsymbol{\sigma}})$,
is less than 0.6, but there is little
change in this average alignment during loading.
The spatial variation in local stress, as measured by
$\mathsf{Std}(\boldsymbol{\sigma}^{k} \!\parallel
\overline{\boldsymbol{\sigma}})$,
decreases at small strains, but then increases at larger strains.
At large strains,
all three measures in Fig.~\ref{ref:stress_var_strain}
depict a greater conformity
and homogeneity of stress than was found with inter-particle movements
and void cell deformations
(\textit{cf} Figs.~\ref{fig:contactMove_strain},
\ref{fig:def_var_strain} and~\ref{ref:stress_var_strain}).
This greater regularity is likely due to the stress being represented
by its current state, whereas movement and deformation were represented
by their rates.
At small strains, however, the three measures
in Fig.~\ref{ref:stress_var_strain} show less conformity and heterogeneity
in stress than in the inter-particle movements.
The diversity of stress at small strain is primarily
the inheritance of the initial particle packing, and this diversity
increases only modestly during loading.
\par
The variation in stress is greatest in its deviatoric component.
Figures~\ref{fig:stress_hist}a and~\ref{fig:stress_hist}b are histograms
of the local mean stress and deviator stress,
defined for particle $k$ as $p^{k}=(\sigma_{11}^{k}+\sigma_{22}^{k})/2$
and $q^{k}=(\sigma_{22}^{k}-\sigma_{11}^{k})/2$,
respectively.
\begin{figure}
\centering
\parbox{8.5cm}
{\centering%
\includegraphics{stress_hist_p_005.eps}\\[0ex]
\small{(a) Local mean stress}\\[3.0ex]
\includegraphics{stress_hist_q_005.eps}\\[0ex]
\small{(b) Local deviator stress}
}
\caption{Participation of the local stress in the average assembly
stress.
Figures~\ref{fig:stress_hist}a and~\ref{fig:stress_hist}b are
histograms of the local participation in the mean and deviator
stresses.
Both figures are compiled from the stresses in over 10,000
particles at the large strain $\overline{\varepsilon}_{22} = -0.005$.}
\label{fig:stress_hist}
\end{figure}
The figure gives these components at the large strain
$\overline{\varepsilon}_{22} = -0.005$.
Because only compressive force can be delivered between particles,
the local mean stress is uniformly positive, but the
standard deviation of the local mean stress $p^{k}$
is about 0.60 (Fig.~\ref{fig:stress_hist}a).
The standard deviation of the local deviator stress $q^{k}$ is 1.0
(Fig.~\ref{fig:stress_hist}b).
About~15\% of particles have a negative alignment of the
deviator stress, $q^{k} \!\parallel\! \overline{q}$,
and these particles provide a negative contribution toward bearing the
average assembly deviator stress.
\section{Conclusion}
In this paper, we have considered several categories
of heterogeneity in granular materials:
topologic, geometric, kinematic, and static.
In all respects, the heterogeneity can be described, at a minimum,
as being moderate.
Heterogeneity increases during biaxial compressive loading.
In the case of inter-particle movements, the non-uniformity
becomes extreme, and particle motions are only coarsely aligned
with the mean-field movement.
At large strains, significant fluctuations from the mean-field
motion extend to distances of at least eight particle diameters.
Non-uniform motion is expressed in the patterning of local
movements, which includes microband patterning and
rotation chain patterning.
The extent and magnitude of the heterogeneity and its patterning
proffer an imposing challenge to the continuum
representation of granular materials at micro and macro scales,
especially at large strains.
Before such efforts can be productive,
further statistical analyses should be undertaken to
further characterize heterogeneity,
to determine characteristic lengths at which heterogeneity
dominates the meso-scale behavior, to quantify the heterogeneity
in the local stress rates, and to establish the relationships among
topologic, geometric, kinematic, and static heterogeneities.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:Introduction}
Three-dimensional (3D) Dirac semimetals are a new class of quantum materials that host two sets of linear, doubly degenerate bands which cross at so-called Dirac points.
Breaking time reversal or inversion symmetry lifts the degeneracy of these bands, resulting in singly degenerate band crossings referred to as Weyl nodes~\cite{Yan2017_1}.
Few materials have been experimentally verified as hosts of these exotic band dispersions, which are analogous to 3D versions of graphene.
Both Cd$_3$As$_2$ and Na$_3$Bi have shown evidence of 3D symmetry-protected Dirac cones in the bulk~\cite{Liu2014_1,Liu2014_2}.
In both materials, the energy range in which the band dispersion remains linear is quite small, on the order of 0.01-0.1 eV, which can make them challenging to systematically study because impurities, defects, and pressure can alter the chemical potential away from the linear regime~\cite{Schoop_2016}.
ZrSiS hosts multiple Dirac cones and has been shown to exhibit an unusually robust linear dispersion (up to $\sim$ 2 eV in part of the Brillouin zone)~\cite{Singha_2017}.
ZrSiS is also chemically stable (as opposed to Na$_3$Bi) and non-toxic (as opposed to Cd$_3$As$_2$).
Together, these features make ZrSiS a promising system for studying the physics of 3D Dirac/Weyl fermions.
The crystal structure of ZrSiS can be described as layered, containing quintuple layers of S-Zr-Si-Zr-S, with a PbFCl structure and space group P4/nmm (No.\ 129), lattice parameters of $a = b = 3.5440$~\AA, $c = 8.0550$~\AA, and volume $V = 101.17$~\AA$^3$~\cite{Sankar_2017,Lv_2016}.
Polycrystalline samples were first synthesized via solid state reaction by Haneveld \textit{et al.}~\cite{Haneveld_1964}.
Millimeter-sized, high quality single crystals can be grown via vapor transport of polycrystalline source material, using iodine as a vapor transport agent~\cite{Sankar_2017,Lv_2016}.
Both angle-resolved photoemission spectroscopy (ARPES) experiments and electronic structure calculations have shown evidence of a Dirac line node phase, a diamond-shaped Fermi surface at the Brillouin zone center $\mathbf{(\Gamma)}$ point, an ellipsoidal-shaped Fermi surface at the \textbf{M} point, and small electron-like pockets at the \textbf{X} point~\cite{Sankar_2017,Schoop_2016,Wang_2016_1,Neupane_2016,Lv_2016,Zhou_2017}.
The ambient pressure Fermi surface has also been well characterized via quantum oscillation measurements~\cite{Sankar_2017,Wang_2016_1,Matusiak_2017,Hu_2017_2,Zhang_2017,Singha_2017,Ali_2016,Lv_2016}.
Most reports provide evidence for two distinct oscillation frequencies when the magnetic field is aligned along the crystallographic $c$-axis.
Ali \textit{et al.}~\cite{Ali_2016} reported that the phase of the high frequency oscillation goes through a sharp transition as a function of the angle of the applied magnetic field.
Matusiak \textit{et al.}~\cite{Matusiak_2017} found the thermoelectric response in ZrSiS to be a more sensitive Fermi surface probe than Shubnikov-de Haas (SdH) or de Haas-van Alphen (dHvA) quantum oscillations, observing a total of five distinct oscillation frequencies, with some oscillations still resolvable at 100 K.
One of the unique features of ZrSiS that has gained much attention is the large, anisotropic magneto-resistance, which can be as high as $1.4\times 10^5~\%$ at 2 K and 9 T.
The magneto-resistance is maximized when the magnetic field is aligned along the [011] axis~\cite{Singha_2017}.
Hall measurements suggest ZrSiS exhibits a nearly perfect electron-hole compensation ratio of $\sim$ 0.94~\cite{Zhang_2017,Lv_2016}.
Lv \textit{et al.}~\cite{Lv_2016} suggest the unusual magneto-resistive properties of ZrSiS can be attributed to the electron-hole compensation as well as the open orbital Fermi surface.
The relativistic phenomenon of massless Dirac fermions known as the Adler-Bell-Jackiw~\cite{Adler1969,Bell1969} chiral anomaly has also been observed in ZrSiS~\cite{Singha_2017}.
Recently, Singha \textit{et al.}~\cite{Singha2018_1} studied the effect of pressure on the lattice dynamics and electron-phonon coupling in ZrSiS, which revealed two pressure-induced structural phase transitions near 3.7 and 18.7 GPa.
ZrSiS was also found to exhibit tip-induced superconductivity coexisting with the preserved topological properties in a point-contact electrical study~\cite{Aggarwal2018_1}.
The authors suggest the tip-induced superconductivity arises due to an increase in the density of states near the Fermi level due to the presence of the Ag point contact.
The same work also reported an absence of pressure-induced superconductivity in ZrSiS to at least \SI{8}{GPa}. The base temperature for the high-pressure measurements was not stated.
How the unique electronic properties of ZrSiS might evolve under pressure is, at present, largely an open question.
In this work, we report measurements of Shubnikov-de Haas (SdH) oscillations in single crystals of ZrSiS to hydrostatic pressures of $\sim 2.5$~GPa.
We also report the results of electrical resistivity measurements to $\sim 27$~GPa.
\section{Experimental methods}
\label{sec:Methods}
Single crystals of ZrSiS were grown by solid state reaction followed by chemical vapor transport, using the prescription detailed in Ref.~\cite{Lv_2016}.
The crystal structure was characterized via powder x-ray diffraction, and Rietveld refinement of the data gave lattice parameters of $a = b = 3.55$~\AA, and $c = 8.06$~\AA, which are consistent with literature values~\cite{Sankar_2017,Lv_2016}.
Rocking curve measurements of un-cleaved single crystals present a single sharp peak indicating high crystal quality.
Small pieces of sample with dimensions of about $500\,\mathrm{\mu m} \times 500\,\mathrm{\mu m} \times 100\,\mathrm{\mu m}$ were cut from a larger crystal.
For the low-pressure SdH measurements, Pt wires were connected to the samples using EPO-TEK H20E conductive epoxy.
The samples were then mounted to the wire and fiber optic feed-throughs of a Teflon-capsule piston-cylinder type pressure cell constructed of MP35N alloy.
The pressure was calibrated at both room temperature and the lowest temperature reached using the fluorescence of the R1 peak of a small ruby chip~\cite{Chijioke2005}.
Daphne 7474 oil was used as the pressure-transmitting medium surrounding the sample~\cite{murata_2008_1}.
At room temperature, Daphne 7474 does not solidify until 3.7~GPa, which is beyond the range of the Teflon-capsule cell measurements.
Four-wire resistance measurements were performed in the crystalline $ab$-plane using either a Quantum Design PPMS resistance bridge or a Lakeshore 370 resistance bridge.
Magnetic fields were applied along the $c$-axis.
Samples 2, 3, and 4 were studied at ambient pressure only, while samples 1 and 8 were subjected to SdH measurements under pressure.
The higher-pressure resistance measurements were carried out on single crystals of ZrSiS (samples 5 and 6) in a gas membrane-driven diamond anvil cell.
The pressure was measured using the fluorescence of the $R_1$ peak of small ruby spheres placed next to the sample~\cite{Chijioke2005}.
One of the diamonds used was a designer diamond anvil~\cite{weir_2000_1}.
Resistance was measured in the crystalline $ab$-plane by a Lakeshore Model 370 AC resistance bridge using the four-probe van der Pauw method with currents of $\leq 1\,\mathrm{mA}$.
Quasihydrostic, soft, solid steatite was used as the pressure-transmitting medium.
Additional details of the high pressure methods are available in Ref.~\cite{VanGennep2017}.
\section{Results}
\label{sec:Results}
Hydrostatic pressure measurements of the electrical resistivity of ZrSiS are summarized in Fig.~\ref{fig:fig1}.
In all samples, we find that pressure tends to make the room-temperature resistivity increase.
Pressure also increases the magnitude of the high-field resistivity (see Fig.~\ref{fig:fig1}b).
The magnetic field in all of these measurements was applied parallel to the crystallographic $c$-axis, while the resistivity was measured in the crystalline $ab$-plane.
The two SdH frequencies observed in all samples correspond well to the frequencies previously reported at ambient pressure \cite{Singha_2017,Sankar_2017,Wang_2016_1,Matusiak_2017,Hu_2017_2,Zhang_2017,Ali_2016}.
The data agree well with both the oscillation frequency and the phase of each oscillation obtained from the LL fan diagram at ambient pressure.
Thermoelectric measurements indicate several oscillation frequencies which we did not observe in our measurements and were also not observed in other dHvA and SdH experiments~\cite{Matusiak_2017}.
\begin{figure}
\includegraphics[width=\columnwidth]{./fig1}
\caption{(a) Resistivity vs temperature, (b) resistivity vs magnetic field measured at 2 K, and (c) oscillatory part of the resistivity vs magnetic field at various pressures for sample 8 measured at 2 K. The data have been vertically offset for clarity. The magnetic field was applied parallel to the crystallographic \textit{c}-axis.
Fermi surface parameters derived from analyzing this and other data are presented in Fig.~\ref{fig:fig2}.}
\label{fig:fig1}
\end{figure}
\begin{figure*}
\includegraphics[width=0.95\columnwidth]{./fig2_1}
\hspace{\columnsep}
\includegraphics[width=0.95\columnwidth]{./fig2_2}
\caption{(left) Several Fermi surface parameters of ZrSiS at 2 K as a function of pressure for the smaller Fermi surface, with an oscillation frequency $F_1$. (a) Landau level fan diagram with selected pressures from sample 8. (b) Oscillation frequency $F_1$ for all samples as a function of pressure, (c) n-intercept, $n_0$, of the LL fan diagram of $F_1$. This Fermi surface is known to be 3D and fairly isotropic. The intercept $n_0$ discontinuously drops by $\sim$0.47 between 0.16-0.5 GPa, which could suggest that the Berry phase of this orbit changes by $\pi$. The dotted lines indicate the average values of $n_0$ below and above the transition pressure, which are 0.66 and 0.19, respectively. The nature of this change in phase is discussed in more detail in the text. (right) Several Fermi surface parameters of ZrSiS at 2 K as a function of pressure for the larger Fermi surface, with frequency $F_2$. (d) Landau level fan diagram with selected pressures from sample 8. (e) Oscillation frequency $F_2$ for all samples as a function of pressure, (f) n-intercept of the LL fan diagram of the larger orbit, which exhibits a Berry phase of $\pi$ and stays roughly constant.}
\label{fig:fig2}
\end{figure*}
Landau quantization of electronic states gives rise to SdH quantum oscillations, which can be described by the Lifshitz-Kosevich (LK) relation.
The oscillatory part of the LK expression is given by:
\begin{equation}
\Delta\sigma_{xx} \propto \cos[2\pi(F/B+\phi)],
\end{equation}
where $B$ is the magnitude of the magnetic field, $F$ is the frequency of the oscillation, and $\phi$ is the phase shift, which encodes information about the topology of the Fermi surface~\cite{Shoenberg1984}.
To identify the phase shift, the Landau indices where $F/B+\phi$ takes on integral values, $n$, need to be identified from the magneto-resistance.
A plot of $n$ vs $1/B$, referred to as a Landau level (LL) fan diagram, then extrapolates to the phase shift on the $n$-axis, which we call $n_0$.
Reference~\cite{Wang_2016_3} provides a useful description of the QO phases observed in 3D topological semimetals.
Analysis of the quantum oscillation data follows prescriptions described in Refs.~\cite{Shoenberg1984,Ando_2013_1}.
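The fan-diagram analysis reduces to a linear fit of integer Landau indices against inverse field. A minimal sketch (synthetic maxima and an assumed starting index; the maxima-versus-minima assignment discussed below determines where the integers are placed):

```python
import numpy as np

def fan_diagram_fit(B_maxima, n_start=1):
    """Fit a Landau-level fan diagram: integer indices n are assigned to the
    fields B_n of successive oscillation extrema (highest field first), and
    n = F*(1/B) + n0 is fit by least squares, returning F and n0."""
    inv_B = 1.0 / np.asarray(B_maxima, dtype=float)
    n = n_start + np.arange(inv_B.size)
    F, n0 = np.polyfit(inv_B, n, 1)   # slope = frequency, intercept = phase
    return F, n0
```

Fields generated from a known frequency and phase are recovered exactly, so in practice the accuracy is limited by how precisely the extrema positions can be read from $\Delta\rho_{xx}$.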
The results of this analysis are summarized in Fig.~\ref{fig:fig2}.
Two distinct frequencies of SdH quantum oscillations are visible in the raw data.
While $\rho_{xx}$ is roughly parabolic as a function of field, subtraction of the non-oscillatory part of the magneto-resistance was performed by fitting the raw data to a 5th-order Chebyshev polynomial.
If only the data above \SI{1}{T} are used, a simple quadratic background subtraction results in the same conclusions regarding LL assignments.
We refer to the small oscillation frequency ($\sim$ 16 T) as $F_1$ and the large frequency ($\sim$ 240 T) as $F_2$.
The cyclotron masses associated with the oscillations are indicated by $m_1^*$ and $m_2^*$, respectively.
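The background subtraction described above can be sketched with NumPy's Chebyshev module; the degree-5 fit follows the text, while the field grid and test signal in the check below are invented:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def oscillatory_part(B, rho, deg=5):
    """Remove the smooth magneto-resistance background by fitting a
    Chebyshev polynomial of degree `deg` to rho(B) and subtracting it,
    leaving the oscillatory part Delta-rho."""
    coef = C.chebfit(B, rho, deg)
    return rho - C.chebval(B, coef)
```

A smooth polynomial background is removed essentially to machine precision, while rapid SdH oscillations (many periods in $1/B$ across the field window) survive the subtraction.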
In order to determine the correct phases from SdH quantum oscillations, one should make sure that integral values of $n$ are being assigned to minima in $\Delta \sigma_{xx}$.
Since $\sigma_{xx}= \rho_{xx}/(\rho_{xx}^2+\rho_{xy}^2)$, this could correspond to either maxima or minima in $\Delta\rho_{xx}$, depending on the ratio: $\left|\rho_{xx}/\rho_{xy}\right|$.
If this ratio is much larger (smaller) than one, integral values of $n$ should occur when $\Delta\rho_{xx}$ is a maximum (minimum)~\cite{Ando_2013_1}.
We performed Hall measurements at ambient pressure and found that $\left|\rho_{xx}/\rho_{xy}\right| = 17.5$ at 2 K, 9 T for S8, indicating that integral values of $n$ should be assigned to maxima in $\Delta\rho_{xx}$.
Hall measurements were not performed under pressure, but we observe that $\rho_{xx}$ increased by a factor of $\sim$2 from 0-1.7 GPa, measured at 2 K, 9 T for S8.
This means that $\left|\rho_{xy}\right|$ would have to increase by a factor of $\sim$35 over this pressure range in order to modify our assignment of LLs.
This is highly improbable given that the oscillation frequencies only change by $\sim$15 \% and $\sim$2 \% over the entire pressure range for the small and large Fermi surfaces, respectively, indicating small changes in the carrier densities for these Fermi surfaces.
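The maxima-versus-minima rule follows directly from the tensor inversion $\sigma_{xx}= \rho_{xx}/(\rho_{xx}^2+\rho_{xy}^2)$; a small sketch (the sharp threshold at 1 is a simplification of the $\gg$/$\ll$ criterion in the text):

```python
def sigma_xx(rho_xx, rho_xy):
    """Longitudinal conductivity from the resistivity tensor components."""
    return rho_xx / (rho_xx**2 + rho_xy**2)

def assign_indices_to_maxima(rho_xx, rho_xy):
    """Rough heuristic: when |rho_xx/rho_xy| >> 1, sigma_xx ~ 1/rho_xx, so
    minima of sigma_xx (integer Landau indices) fall at maxima of rho_xx."""
    return abs(rho_xx / rho_xy) > 1.0
```

With the measured ratio $\left|\rho_{xx}/\rho_{xy}\right| = 17.5$ for sample 8, this heuristic places integer indices at maxima of $\Delta\rho_{xx}$, as done in the analysis.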
Figure~\ref{fig:fig2} contains a summary of the SdH oscillation data.
Fig.~\ref{fig:fig2}a,d show the LL fan diagrams for each set of oscillations at various pressures.
Linear fits to the fan diagrams allow us to extrapolate $F$ from the slope (Fig.~\ref{fig:fig2}b,e) and $n_0$ from the $n$-intercept (Fig.~\ref{fig:fig2}c,f).
From ambient pressure to \SI{2.3}{GPa}, $F_2$ increases by $\sim$ 2\%.
The lower frequency, $F_1$ appears to show an abrupt drop below \SI{0.5}{GPa} and then increases at higher pressures.
We considered the possibility that this drop in frequency was due to a small tilt in the sample from the application of pressure, but this could not be the case due to the highly 3D nature of this orbit~\cite{Zhang_2017}.
Temperature dependent data were collected under pressure for sample 1 taken at 7 different temperatures ranging from 2 - 30 K.
These data revealed a small monotonic decrease in $m_2^*$ from values of 0.17, 0.14, and 0.13 $m_0$ at 0, 1.0, and 2.3 GPa, respectively, which agree well with the literature values at ambient pressure~\cite{Matusiak_2017}.
It was not possible to reliably determine the cyclotron masses of the smaller orbit due to a small number of low frequency oscillations being resolvable for S1. Reference values for $m_1^*$ range between 0.1 - 0.14 m$_0$ at ambient pressure~\cite{Matusiak_2017}.
The data show that $n_0$ for the phase of the large orbit remains constant up to $\sim$ 2.3 GPa, while $n_0$ for the small orbit seems to exhibit an abrupt change between 0.16 - 0.5 GPa.
The nature of this change in phase and its possible significance are discussed in further detail below.
Figure~\ref{fig:fig3} shows the results of electrical resistivity measurements to pressures as high as \SI{27}{GPa}.
For both samples 5 and 6, data were first collected during pressure application at room temperature, where a slope change near $12-\SI{15}{GPa}$ is apparent.
This corresponds roughly to the same pressure at which previous measurements showed changes in the Raman spectrum but is somewhat lower than the pressure where a monoclinic phase first appears (\SI{19}{GPa})~\cite{Singha2018_1}.
For sample 5, after reaching \SI{20}{GPa} at room temperature, the cell was cooled to \SI{1.8}{K}.
At \SI{20}{GPa}, the sample showed metallic behavior ($d\rho/dT > 0$).
Pressure was then released at low temperature.
During low temperature unloading, the resistance remained roughly constant, which may be related to some hysteresis in the structural transition.
The low temperature unloading data indicate ZrSiS is not superconducting down to \SI{1.8}{K} at these pressures, where ZrSiS has been reported to adopt orthorhombic and monoclinic crystal structures~\cite{Singha2018_1}.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{./fig3}
\caption{Electrical resistivity of ZrSiS in the $ab$-plane as a function of applied pressure for samples 5 and 6. Clear changes in the slope of $\rho$ vs P occur between $11-14.5$ GPa. Pressure was released at $\sim$ 1.8 K for sample 5.}
\label{fig:fig3}
\end{figure}
\section{Discussion}
\label{sec:Discussion}
At ambient pressure, it is generally agreed upon that the phase of the low frequency oscillation ($\sim5/8$) corresponds to a topologically nontrivial orbit~\cite{Singha_2017,Hu_2017_2,Zhang_2017,Wang_2016_1}, although band structure calculations tend to lack evidence of this orbit~\cite{Wang_2016_1,Li2018}. For 3D Dirac/Weyl semimetals, phases of $\pm$5/8 and $\pm$1/8 can both be observed from topologically nontrivial orbits~\cite{Wang_2016_3}. Thus, there is a question of whether the observed pressure-driven change in phase in this work corresponds to a topologically nontrivial-nontrivial transition, or a nontrivial-trivial transition. A concrete answer to this question is difficult to determine without a better theoretical understanding of this orbit. Below, we argue that this transition is likely a nontrivial-trivial transition.
The pressure-induced change in the phase of the low frequency oscillation exhibited by ZrSiS (see Fig.~\ref{fig:fig2}c) closely resembles the behavior of Cd$_3$As$_2$ under pressure~\cite{Zhang2017_2}.
At $\sim$1.3 GPa, Cd$_3$As$_2$ shows a sudden change in the phase factor for one of the oscillations accompanied by an abrupt shrinkage of the Fermi surface cross-sectional area. These features are very similar to what we observe in ZrSiS between $\sim$0.16-0.5 GPa. In the case of Cd$_3$As$_2$, the change in phase factor was attributed to pressure-driven node-pair annihilation which results from shifting the Dirac nodes toward the center of the Brillouin zone and eventually introducing a nonzero gap in the energy spectrum~\cite{Zhang2017_2}.
This picture was supported by first principles calculations.
Crucially, x-ray diffraction measurements showed the changes in phase-factor are not due to a change in crystal structure, but are instead purely electronic in nature.
X-ray diffraction measurements under pressure~\cite{Singha2018_1} demonstrate that ZrSiS remains in the ambient pressure crystal structure to pressures above \SI{2.5}{GPa}.
This result confirms that the apparent change in phase observed in the present work is not a consequence of a structural transformation.
Other than the annihilation of Weyl nodes and opening of a band gap, there are several other scenarios in which $n_0$ can change in such systems.
Firstly, Hu \textit{et al.}~\cite{Hu_2017_2} found that the apparent phase of this oscillation is magnetic field-dependent below $\sim$4 T, and smoothly changes from a value of 0.3 below 4 T to a value of 0.6 above 4 T.
For the present analysis, all resolvable oscillations occur between 4-9 T.
It is possible that, when pressure is applied, the field at which this value of $n_0$ saturates is larger than 4 T, but it seems unlikely this would cause $n_0$ to abruptly change by a value of 0.5.
Recently, Wang \textit{et al.}~\cite{Wang_2016_3} showed that the phase factor in Weyl semimetals can be strongly dependent on the position of the chemical potential when the chemical potential is in the vicinity of the Lifshitz point.
They find that moving the chemical potential through the Lifshitz point could produce a change in the phase from 5/8 to 1/8, which could both be considered topologically nontrivial phases.
This should produce a nonmonotonic change in the phase as the chemical potential is moved past the Lifshitz point, as well as a monotonic change in the oscillation frequency.
Our data clearly show a monotonic change in the phase as well as a nonmonotonic change in the oscillation frequency.
Thus, our data are not consistent with the nontrivial-nontrivial transition described in~\cite{Wang_2016_3}. Lastly, we observe no evidence of direct Zeeman splitting in our low frequency oscillations, which would complicate the determination of the phase~\cite{Hu_2017_2}.
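The phase factor $n_0$ discussed throughout this section is conventionally extracted from a Landau fan diagram: integer Landau indices $n$ are assigned to oscillation extrema at fields $B_n$, and a linear fit of $n$ versus $1/B$ gives the oscillation frequency $F$ as the slope and $n_0$ as the intercept. A minimal sketch of this standard fit, using synthetic extrema positions rather than the measured data:

```python
import numpy as np

def landau_fan_phase(inv_B, n_index):
    """Fit n = F * (1/B) + n_0; return (F, n_0)."""
    F, n0 = np.polyfit(inv_B, n_index, 1)
    return F, n0

# Synthetic example: frequency F = 23 T, phase factor n_0 = 5/8
F_true, n0_true = 23.0, 0.625
n = np.arange(3, 10)                 # Landau indices assigned to extrema
inv_B = (n - n0_true) / F_true       # extrema positions in 1/B (tesla^-1)
F_fit, n0_fit = landau_fan_phase(inv_B, n)
print(F_fit, n0_fit)  # recovers ~23.0 and ~0.625
```

In practice the intercept is sensitive to which extrema (maxima vs.\ minima of $\rho$ or $\sigma$) are assigned integer indices, which is part of why the sign conventions for $n_0$ differ between reports.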
As for the higher frequency set of oscillations, various reports disagree on whether this orbit is topologically trivial or nontrivial, though they agree that the SdH phase of this orbit is zero.
References \cite{Singha_2017,Matusiak_2017,Hu_2017_2,Ali_2016} consider this to be nontrivial, while references \cite{Wang_2016_1,Zhang_2017} consider it to be trivial.
Recently, Li \textit{et al.} have shown that this Fermi pocket encloses a nodal line, establishing that this orbit is nontrivial with a Berry phase of $\pi$~\cite{Li2018}.
Multiple theoretical efforts concerning pressure-induced topological phase transitions have described what one can expect to observe during such a transition.
The compound LaSb has been predicted to undergo a transition from topologically trivial to nontrivial near 3-4 GPa without breaking any symmetry, which could be verified in transport experiments by observing a change in the Berry phase from 0 to $\pi$~\cite{Guo_2017_1}.
It has been predicted that a topological transition from normal insulator to topological insulator might occur in noncentrosymmetric BiTeI under moderate pressures~\cite{Bahramy_2012_1}.
Evidence of this transition has been observed in the quantum oscillation phase of one of the bulk Fermi surface oscillations~\cite{Park_2015_1}.
Liu \textit{et al.}~\cite{Liu_2014_3} showed that a Weyl semimetal phase might exist in BiTeI for a non-zero range of pressures, but this has not yet been experimentally verified, most likely due to the small range of pressure over which this phase exists.
ZrSiSe and ZrSiTe have also been shown to possess nodal Fermi arcs, which have been observed in the bulk of ZrSiS \cite{Hu2016_1,Fu2017}.
The topological phases in the ZrSiX family of materials show a transition from nodal-line to nodeless gapped phase by tuning the chalcogenide from S to Te~\cite{Hosen2017_1}. A study of these compounds under pressure might yield further insights into the nature of the transition.
Finally, an investigation of ZrSiO, which has been predicted to be a 3D weak topological insulator at ambient pressure, would be a natural next step in the investigation of this family of materials \cite{Xu2015,Onken1964}.
\section{Conclusions}
High-pressure electrical transport measurements were performed on single crystals of the topological nodal line semimetal ZrSiS.
Measurements of SdH oscillations up to $\sim$2.2 GPa and 9 T show two oscillation frequencies.
The effective mass of the larger Fermi surface decreases, and the phase remains topologically nontrivial and roughly constant as a function of pressure.
For the smaller orbit, we find a clear change in the phase of the quantum oscillations between 0.16-0.5 GPa, which is accompanied by an abrupt decrease in the oscillation frequency.
These changes are consistent with a pressure-driven topological quantum phase transition in which a bulk band gap is introduced~\cite{Hosen2017_1,Fang_2015}.
Higher pressure measurements to \SI{20}{GPa} show no evidence for pressure-induced superconductivity down to \SI{1.8}{K}.
The apparent topological transition in ZrSiS occurs under modest pressures below \SI{0.5}{GPa}.
This very low pressure makes it possible to study the transition using a wide variety of probes that are unavailable at higher pressures.
It would be particularly interesting to see if computational efforts can shed further light on the nature of the transition.
\section*{Acknowledgments}
This work was supported by National Science Foundation (NSF) CAREER award DMR-1453752. High pressure technique development was partially supported by a National High Magnetic Field Laboratory User Collaboration Grant. The National High Magnetic Field Laboratory is supported by the NSF via Cooperative agreement No.\ DMR-1157490, the State of Florida, and the U.S. Department of Energy. Designer diamond anvils were supported by DOE-NNSA Grant No.\ DE-NA0002928 and under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We thank Yuxuan Wang (UFL) for informative conversations.
\bibliographystyle{apsrev4-1}
\section{Introduction, Definitions and Notations}
Let $%
\mathbb{C}
$ be the complex plane and $\mathbb{U}=\left\{ z:z\in
\mathbb{C}
\text{ and }\left\vert z\right\vert <1\right\} $ be open unit disc in $%
\mathbb{C}
$. Further, let $\mathcal{A}$ represent the class of functions analytic in $%
\mathbb{U}$, satisfying the condition%
\begin{equation*}
f(0)=\ f^{\prime }(0)-1=0.
\end{equation*}%
Then each function $f$ in $\mathcal{A}$ has the following Taylor series
expansion%
\begin{equation}
f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots =z+\overset{\infty }{\underset{n=2}{\sum
}}a_{n}z^{n}. \label{eq1}
\end{equation}%
The class of this kind of functions is represented by $\mathcal{S}$.
With a view to reminding the rule of subordination for analytic functions,
let the functions $f,g$ be analytic in $\mathbb{U}$. A function $f$ is
\textit{subordinate} to $g,$ denoted by $f\prec g,$ if there exists a
Schwarz function
\begin{equation*}
\mathbf{\varpi }(z)=\overset{\infty }{\underset{n=1}{\sum }}\mathfrak{c}%
_{n}z^{n}\ \ \left( \mathbf{\varpi }\left( 0\right) =0,\text{\ }\left\vert
\mathbf{\varpi }\left( z\right) \right\vert <1\right) ,
\end{equation*}%
analytic in $\mathbb{U}$ such that
\begin{equation*}
f\left( z\right) =g\left( \mathbf{\varpi }\left( z\right) \right) \ \ \ \
\left( z\in \mathbb{U}\right) .
\end{equation*}%
For the Schwarz function $\mathbf{\varpi }\left( z\right) $ we know that $%
\left\vert \mathfrak{c}_{n}\right\vert \leq 1$ (see \cite{Duren 83}).
According to the \textit{Koebe One-Quarter Theorem}, every univalent
function $f\in \mathcal{A}$ has an inverse $f^{-1}$ satisfying $f^{-1}\left(
f\left( z\right) \right) =z~~\left( z\in \mathbb{U}\right) $ and $f\left(
f^{-1}\left( w\right) \right) =w~$ $\left( \left\vert w\right\vert
<r_{0}\left( f\right) ;~~r_{0}\left( f\right) \geq \frac{1}{4}\right) ,$
where%
\begin{equation}
\begin{array}{l}
g(w)=f^{-1}\left( w\right) =w~-a_{2}w^{2}+\left( 2a_{2}^{2}-a_{3}\right)
w^{3} \\
\\
\ \ \ \ \ \ \ \ \ \ \ -\left( 5a_{2}^{3}-5a_{2}a_{3}+a_{4}\right)
w^{4}+\cdots .%
\end{array}
\label{eq2}
\end{equation}%
A function $f\in \mathcal{A}$ is said to be bi-univalent in $\mathbb{U}$ if
both $f$ and $f^{-1}$ are univalent in $\mathbb{U}.~$Let $\Sigma $ denote
the class of bi-univalent functions in $\mathbb{U}$ given by (\ref{eq1}).
For a brief historical account and for several notable investigation of
functions in the class $\Sigma ,$ see the pioneering work on this subject by
Srivastava et al. \cite{Srivastava 2010} (see also \cite{Brannan and Clunie
80, Brannan and Taha 86, Lewin 67, Netanyahu 69}). The interest on estimates
for the first two coefficients $\left\vert a_{2}\right\vert $, $\left\vert
a_{3}\right\vert $ of the bi-univalent functions keep on by many researchers
(see, for example, \cite{AA, Hayami 2012, HO, Seker 2016, Srivastava 2013}).
However, in the literature, there are only a few works (by making use of the
Faber polynomial expansions) determining the general coefficient bounds $%
\left\vert a_{n}\right\vert $ for bi-univalent functions (\cite{AY, Hamidi
and Jahangiri 2014, Hamidi and Jahangiri 2016, S}). The coefficient estimate
problem for each of $\left\vert a_{n}\right\vert $ $\left( \ n\in
\mathbb{N}
\backslash \left\{ 1,2\right\} ;\ \
\mathbb{N}
=\left\{ 1,2,3,...\right\} \right) $ is still an open problem.
Now, we recall the notion of $q$-operators, which play a major role in
Geometric Function Theory. The application of the $q$-calculus in the
context of Geometric Function Theory was actually provided and the basic (or
$q$-) hypergeometric functions were first used in Geometric Function Theory
in a book chapter by Srivastava \cite{Srivastava1989}. For convenience,
we provide some basic notation details of $q$-calculus which are used in
this paper.
\begin{definition}
(See \cite{SO}) For a function $f$ (analytic in a simply-connected region of
$%
\mathbb{C}
$), the fractional derivative of order $\rho $ is stated by%
\begin{equation*}
D_{z}^{\rho }f(z)=\frac{1}{\Gamma (1-\rho )}\frac{d}{dz}\int\limits_{0}^{z}%
\frac{f(\xi )}{(z-\xi )^{\rho }}d\xi \ \ \ (0\leq \rho <1)
\end{equation*}%
and the fractional integral of order $\rho $ is stated by%
\begin{equation*}
I_{z}^{\rho }f(z)=\frac{1}{\Gamma (\rho )}\int\limits_{0}^{z}f(\xi )(z-\xi
)^{\rho -1}d\xi \ \ \ (\rho >0).
\end{equation*}
\end{definition}
\begin{definition}
(See \cite{S}) The Tremblay fractional derivative operator of the function $%
f $ is defined as%
\begin{equation}
I_{z}^{\mu ,\rho }f(z)=\frac{\Gamma (\rho )}{\Gamma (\mu )}z^{1-\rho
}D_{z}^{\mu -\rho }z^{\mu -1}f(z)\ \ \ (0<\mu \leq 1,0<\rho \leq 1,\mu \geq
\rho ,0<\mu -\rho <1). \label{eq3}
\end{equation}
\end{definition}
From (\ref{eq3}), we deduce that%
\begin{equation*}
I_{z}^{\mu ,\rho }f(z)=\frac{\mu }{\rho }z+\overset{\infty }{\underset{n=2}{%
\sum }}\frac{\Gamma (\rho )\Gamma (n+\mu )}{\Gamma (\mu )\Gamma (n+\rho )}%
a_{n}z^{n}.~
\end{equation*}
In this paper, we study the new class $\mathfrak{R}_{\Sigma ,\gamma }^{\mu
,\rho }\left( \widetilde{\mathfrak{p}}\right) $ of bi-univalent functions
established by using the Tremblay fractional derivative operator. Further,
we use the Faber polynomial expansions and Fibonacci numbers to derive
bounds for the general coefficient $\left\vert a_{n}\right\vert $ of the
bi-univalent function class.
\section{Preliminaries}
By utilizing the Faber polynomial expansions for functions $f$ $\in \mathcal{%
A}$ of the form (\ref{eq1}), the coefficients of its inverse map $g=f^{-1}$
may be stated by \cite{Airault and Bouali 2006, Airault and Ren 2002}:
\begin{equation*}
g\left( w\right) =f^{-1}\left( w\right) =w+\overset{\infty }{\underset{n=2}{%
\sum }}\frac{1}{n}K_{n-1}^{-n}\left( a_{2},a_{3},...\right) w^{n},
\end{equation*}%
where
\begin{eqnarray*}
K_{n-1}^{-n} &=&\frac{\left( -n\right) !}{\left( -2n+1\right) !\left(
n-1\right) !}a_{2}^{n-1}~+\frac{\left( -n\right) !}{\left[ 2\left(
-n+1\right) \right] !\left( n-3\right) !}a_{2}^{n-3}a_{3}~ \\
&&+~\frac{\left( -n\right) !}{\left( -2n+3\right) !\left( n-4\right) !}%
a_{2}^{n-4}a_{4}~ \\
&&+\frac{\left( -n\right) !}{\left[ 2\left( -n+2\right) \right] !\left(
n-5\right) !}a_{2}^{n-5}\left[ a_{5}+\left( -n+2\right) a_{3}^{2}\right] \\
&&+\frac{\left( -n\right) !}{\left( -2n+5\right) !\left( n-6\right) !}%
a_{2}^{n-6}\left[ a_{6}+\left( -2n+5\right) a_{3}a_{4}\right] \\
&&+\overset{}{\underset{j\geq 7}{\sum }}a_{2}^{n-j}V_{j},
\end{eqnarray*}%
such that $V_{j}$ $\left( 7\leq j\leq n\right) $ is a homogeneous polynomial
in the variables $a_{2},a_{3},...,a_{n}$. In the following, the first three
terms of $K_{n-1}^{-n}$ are stated by
\begin{eqnarray*}
\frac{1}{2}K_{1}^{-2} &=&-a_{2}, \\
\frac{1}{3}K_{2}^{-3} &=&2a_{2}^{2}-a_{3}, \\
\frac{1}{4}K_{3}^{-4} &=&-\left( 5a_{2}^{3}-5a_{2}a_{3}+a_{4}\right) .
\end{eqnarray*}%
In general, the expansion of $K_{n}^{p}$ $(p\in
\mathbb{Z}
=\left\{ 0,\pm 1,\pm 2,\ldots \right\} )$ is stated by
\begin{equation*}
K_{n}^{p}=pa_{n}+\frac{p\left( p-1\right) }{2}\mathcal{G}_{n}^{2}+\frac{p!}{%
\left( p-3\right) !3!}\mathcal{G}_{n}^{3}+...+\frac{p!}{\left( p-n\right) !n!%
}\mathcal{G}_{n}^{n},
\end{equation*}%
where $\mathcal{G}_{n}^{p}=$ $\mathcal{G}_{n}^{p}\left(
a_{1},a_{2},...\right) $ and by \cite{Airault 2007},
\begin{equation*}
\mathcal{G}_{n}^{m}\left( a_{1},a_{2},...,a_{n}\right) =\sum \frac{m!\left(
a_{1}\right) ^{\delta _{1}}...\left( a_{n}\right) ^{\delta _{n}}}{\delta
_{1}!...\delta _{n}!},
\end{equation*}%
where $a_{1}=1$ and the sum is taken over all nonnegative integers $\delta
_{1},...,\delta _{n}$ satisfying%
\begin{eqnarray*}
\delta _{1}+\delta _{2}+~...~+\delta _{n} &=&m, \\
\delta _{1}+2\delta _{2}+~...~+n\delta _{n} &=&n.
\end{eqnarray*}%
The first and the last polynomials are%
\begin{equation*}
\mathcal{G}_{n}^{1}=a_{n}\ \ \ \ \ \ \ \ \mathcal{G}_{n}^{n}=a_{1}^{n}.
\end{equation*}%
For two analytic functions $\mathfrak{u}\left( z\right) $, $\mathfrak{v}%
\left( w\right) $ $\left( \mathfrak{u}\left( 0\right) =\mathfrak{v}\left(
0\right) =0,\ \left\vert \mathfrak{u}\left( z\right) \right\vert <1,\
\left\vert \mathfrak{v}\left( w\right) \right\vert <1\right) ,\ $suppose that%
\begin{equation*}
\begin{array}{l}
\mathfrak{u}\left( z\right) =\sum_{n=1}^{\infty }t_{n}z^{n}\ \ \left(
\left\vert z\right\vert <1,\ z\in \mathbb{U}\right) \ \ \ , \\
\\
\mathfrak{v}\left( w\right) =\sum_{n=1}^{\infty }s_{n}w^{n}\ \ \left(
\left\vert w\right\vert <1,\ w\in \mathbb{U}\right) .%
\end{array}%
\end{equation*}%
It is well known that
\begin{equation}
\left\vert t_{1}\right\vert \leq 1,\ \ \left\vert t_{2}\right\vert \leq
1-\left\vert t_{1}\right\vert ^{2},\ \ \left\vert s_{1}\right\vert \leq 1,\
\ \left\vert s_{2}\right\vert \leq 1-\left\vert s_{1}\right\vert ^{2}.
\label{eq9}
\end{equation}
\begin{definition}
A function $f\in \Sigma $ is said to be in the class%
\begin{equation*}
\mathfrak{R}_{\Sigma ,\gamma }^{\mu ,\rho }\left( \widetilde{\mathfrak{p}}%
\right) \ \ \ (\gamma \in
\mathbb{C}
\backslash \{0\},\ 0<\mu \leq 1,\ 0<\rho \leq 1,\ z,w\in \mathbb{U})
\end{equation*}%
if the following subordination relationships are satisfied:%
\begin{equation*}
\left[ 1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho
}f(z)\right) ^{\prime }}{\mu }-1\right) \right] \prec \widetilde{\mathfrak{p}%
}\left( z\right) =\frac{1+\tau ^{2}z^{2}}{1-\tau z-\tau ^{2}z^{2}}
\end{equation*}%
and%
\begin{equation*}
\left[ 1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho
}g(w)\right) ^{\prime }}{\mu }-1\right) \right] \prec \widetilde{\mathfrak{p}%
}\left( w\right) =\frac{1+\tau ^{2}w^{2}}{1-\tau w-\tau ^{2}w^{2}},
\end{equation*}%
where the function $g$ is given by (\ref{eq2}) and $\tau =\frac{1-\sqrt{5}}{2%
}\approx -0.618.$
\end{definition}
\begin{remark}
The function $\widetilde{\mathfrak{p}}\left( z\right) $ is not univalent in $%
\mathbb{U}$, but it is univalent in the disc $\left\vert z\right\vert <\frac{%
3-\sqrt{5}}{2}\approx 0.38$. For example, $\widetilde{\mathfrak{p}}\left(
0\right) =\widetilde{\mathfrak{p}}\left( -\frac{1}{2\tau }\right) $ and $%
\widetilde{\mathfrak{p}}\left( e^{\pm i\arccos (1/4)}\right) =\frac{\sqrt{5}%
}{5}$. Also, it can be written as%
\begin{equation*}
\frac{1}{\left\vert \tau \right\vert }=\frac{\left\vert \tau \right\vert }{%
1-\left\vert \tau \right\vert }
\end{equation*}%
which indicates that the number $\left\vert \tau \right\vert $ divides $%
\left[ 0,1\right] $ such that it fulfills the golden section (see for
details Dziok et al. \cite{D}).
\end{remark}
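The facts quoted in the remark are easy to verify numerically. A small illustrative check of $\widetilde{\mathfrak{p}}(0)=\widetilde{\mathfrak{p}}(-\frac{1}{2\tau})$ and the boundary value $\widetilde{\mathfrak{p}}(e^{\pm i\arccos(1/4)})=\frac{\sqrt{5}}{5}$:

```python
import cmath
import math

tau = (1 - math.sqrt(5)) / 2  # ~ -0.618

def p_tilde(z):
    # The subordinating function p~(z) = (1 + tau^2 z^2)/(1 - tau z - tau^2 z^2)
    return (1 + tau**2 * z**2) / (1 - tau*z - tau**2 * z**2)

# Same value at two distinct points of U, so p~ is not univalent in U
assert abs(p_tilde(0) - p_tilde(-1 / (2*tau))) < 1e-12

# Boundary value quoted in the remark
z = cmath.exp(1j * math.acos(0.25))
print(p_tilde(z))  # ~ sqrt(5)/5 ~ 0.4472 (up to rounding)
```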
Additionally, Dziok et al. \cite{D} indicate a useful connection between the
function $\widetilde{\mathfrak{p}}\left( z\right) $ and the Fibonacci
numbers. Let $\left\{ \Lambda _{n}\right\} $ be the sequence of Fibonacci
numbers
\begin{equation*}
\Lambda _{0}=0,\ \Lambda _{1}=1,\ \Lambda _{n+2}=\Lambda _{n}+\Lambda
_{n+1}\ (n\in
\mathbb{N}
_{0}=\left\{ 0,1,2,\ldots \right\} ),
\end{equation*}%
then%
\begin{equation*}
\Lambda _{n}=\frac{(1-\tau )^{n}-\tau ^{n}}{\sqrt{5}},\ \ \tau =\frac{1-%
\sqrt{5}}{2}.
\end{equation*}%
If we set
\begin{eqnarray*}
\widetilde{\mathfrak{p}}\left( z\right) &=&1+\overset{\infty }{\underset{n=1}%
{\sum }}\widetilde{\mathfrak{p}}_{n}z^{n}=1+(\Lambda _{0}+\Lambda _{2})\tau
z+(\Lambda _{1}+\Lambda _{3})\tau ^{2}z^{2} \\
&& \\
&&+\overset{\infty }{\underset{n=3}{\sum }}(\Lambda _{n-3}+\Lambda
_{n-2}+\Lambda _{n-1}+\Lambda _{n})\tau ^{n}z^{n},
\end{eqnarray*}%
then the coefficients $\widetilde{\mathfrak{p}}_{n}$ satisfy%
\begin{equation}
\widetilde{\mathfrak{p}}_{n}=\left\{
\begin{array}{ll}
\tau & \left( n=1\right) \\
& \\
3\tau ^{2} & \left( n=2\right) \\
& \\
\tau \widetilde{\mathfrak{p}}_{n-1}+\tau ^{2}\widetilde{\mathfrak{p}}_{n-2}
& \left( n=3,4,\ldots \right)%
\end{array}%
\right. . \label{D}
\end{equation}
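The closed form $\Lambda_{n}=\frac{(1-\tau)^{n}-\tau^{n}}{\sqrt{5}}$ and the three-term recurrence (\ref{D}) are consistent; a numerical cross-check of the coefficient formulas:

```python
import math

tau = (1 - math.sqrt(5)) / 2

def fib(n):
    # Binet-type formula from the text: Lambda_n = ((1-tau)^n - tau^n)/sqrt(5)
    return ((1 - tau)**n - tau**n) / math.sqrt(5)

def p_coeff(n):
    # p~_n: tau, 3 tau^2, then (Lambda_{n-3}+Lambda_{n-2}+Lambda_{n-1}+Lambda_n) tau^n
    if n == 1:
        return tau
    if n == 2:
        return 3 * tau**2
    return (fib(n-3) + fib(n-2) + fib(n-1) + fib(n)) * tau**n

# Verify the recurrence p~_n = tau p~_{n-1} + tau^2 p~_{n-2} for n >= 3
for n in range(3, 12):
    assert abs(p_coeff(n) - (tau*p_coeff(n-1) + tau**2*p_coeff(n-2))) < 1e-12
```

For instance, $\widetilde{\mathfrak{p}}_{3}=(\Lambda_{0}+\Lambda_{1}+\Lambda_{2}+\Lambda_{3})\tau^{3}=4\tau^{3}$, which equals $\tau\cdot 3\tau^{2}+\tau^{2}\cdot\tau$.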
Specializing the parameters $\gamma ,\mu $ and $\rho $, we state the
following definitions.
\begin{definition}
For $\mu =\rho =1,$ a function $f\in \Sigma $ is said to be in the class $%
\mathfrak{R}_{\Sigma ,\gamma }\left( \widetilde{\mathfrak{p}}\right) \left(
\gamma \in
\mathbb{C}
\backslash \{0\}\right) $ if it satisfies the following conditions
respectively:%
\begin{equation*}
\left[ 1+\frac{1}{\gamma }\left( f^{\prime }(z)-1\right) \right] \prec
\widetilde{\mathfrak{p}}\left( z\right)
\end{equation*}%
and%
\begin{equation*}
\left[ 1+\frac{1}{\gamma }\left( g^{\prime }(w)-1\right) \right] \prec
\widetilde{\mathfrak{p}}\left( w\right) ,
\end{equation*}%
where $g=f^{-1}.$
\end{definition}
\begin{definition}
For $\gamma =\mu =\rho =1,$ a function $f\in \Sigma $ is said to be in the
class $\mathfrak{R}_{\Sigma }\left( \widetilde{\mathfrak{p}}\right) $ if it
satisfies the following conditions respectively:%
\begin{equation*}
f^{\prime }(z)\prec \widetilde{\mathfrak{p}}\left( z\right)
\end{equation*}%
and%
\begin{equation*}
g^{\prime }(w)\prec \widetilde{\mathfrak{p}}\left( w\right) ,
\end{equation*}%
where $g=f^{-1}.$
\end{definition}
\section{Main Result and its consequences}
\begin{theorem}
For $\gamma \in
\mathbb{C}
\backslash \{0\}$, let $f\in \mathfrak{R}_{\Sigma ,\gamma }^{\mu ,\rho
}\left( \widetilde{\mathfrak{p}}\right) $. If $a_{m}=0~\left( 2\leq m\leq
n-1\right) $, then
\begin{equation*}
\left\vert a_{n}\right\vert \leq \frac{\left\vert \gamma \right\vert
\left\vert \tau \right\vert \Gamma (\mu +1)\Gamma (n+\rho )}{n\Gamma (\rho
+1)\Gamma (n+\mu )}\ \ \ (n\geq 3).
\end{equation*}
\end{theorem}
\begin{proof}
Let $f$ be given by (\ref{eq1}). Then the definition of subordination yields%
\begin{equation}
\left[ 1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho
}f(z)\right) ^{\prime }}{\mu }-1\right) \right] =\widetilde{\mathfrak{p}}(%
\mathfrak{u}(z)) \label{eq16}
\end{equation}%
and%
\begin{equation}
\left[ 1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho
}g(w)\right) ^{\prime }}{\mu }-1\right) \right] =\widetilde{\mathfrak{p}}(%
\mathfrak{v}(w)). \label{eq17}
\end{equation}%
Now, an application of Faber polynomial expansion to the power series $%
\mathfrak{R}_{\Sigma ,\gamma }^{\mu ,\rho }\left( \widetilde{\mathfrak{p}}%
\right) $ (e.g. see \cite{Airault and Bouali 2006} or [\cite{Airault and Ren
2002}, equation (1.6)]) yields
\begin{equation*}
1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho }f(z)\right)
^{\prime }}{\mu }-1\right) =1+\frac{\Gamma (\rho +1)}{\gamma \Gamma (\mu +1)}%
\overset{\infty }{\underset{n=2}{\sum }}\mathcal{F}_{n-1}\left(
a_{2},a_{3},...,a_{n}\right) z^{n-1}
\end{equation*}%
where%
\begin{equation*}
\begin{array}{ll}
\mathcal{F}_{n-1}\left( a_{2},a_{3},...,a_{n}\right) & =n\frac{%
\Gamma (n+\mu )}{\Gamma (n+\rho )} \\
& \\
& \times \underset{i_{1}+2i_{2}+\cdots +(n-1)i_{n-1}=n-1}%
{\sum }\frac{\left( 1-\left( i_{1}+i_{2}+\cdots +i_{n-1}\right) \right) !%
\left[ \left( a_{2}\right) ^{i_{1}}\left( a_{3}\right) ^{i_{2}}...\left(
a_{n}\right) ^{i_{n-1}}\right] }{\left( i_{1}!\right) \left( i_{2}!\right)
...\left( i_{n-1}!\right) }%
\end{array}%
\end{equation*}%
In particular, the first two terms are $\mathcal{F}_{1}=\frac{2(\mu +1)}{%
\gamma (\rho +1)}a_{2}$ and $\mathcal{F}_{2}=\frac{3(\mu +1)(\mu +2)}{\gamma (\rho
+1)(\rho +2)}a_{3}.$
By the same token, for its inverse map $g=f^{-1}$, it is seen that
\begin{eqnarray*}
1+\frac{1}{\gamma }\left( \frac{\rho \left( I_{z}^{\mu ,\rho }g(w)\right)
^{\prime }}{\mu }-1\right) &=&1+\overset{\infty }{\underset{n=2}{\sum }}%
\frac{\Gamma (\rho +1)\Gamma (n+\mu )}{\Gamma (\mu +1)\Gamma (n+\rho )}\frac{%
n}{\gamma }\times \frac{1}{n}K_{n-1}^{-n}\left( a_{2},a_{3},...\right)
w^{n-1} \\
&& \\
&=&1+\frac{\Gamma (\rho +1)}{\gamma \Gamma (\mu +1)}\overset{\infty }{%
\underset{n=2}{\sum }}\mathcal{F}_{n-1}\left( b_{2},b_{3},...,b_{n}\right)
w^{n-1}.
\end{eqnarray*}%
Next, the equations (\ref{eq16}) and (\ref{eq17}) lead to%
\begin{eqnarray*}
\widetilde{\mathfrak{p}}\left( \mathfrak{u}\left( z\right) \right) &=&1+%
\widetilde{\mathfrak{p}}_{1}\mathfrak{u}(z)+\widetilde{\mathfrak{p}}_{2}(%
\mathfrak{u}(z))^{2}+\cdots \\
&& \\
&=&1+\widetilde{\mathfrak{p}}_{1}t_{1}z+\left( \widetilde{\mathfrak{p}}%
_{1}t_{2}+\widetilde{\mathfrak{p}}_{2}t_{1}^{2}\right) z^{2}+\cdots \\
&& \\
&=&1+\underset{}{\underset{n=1}{\overset{\infty }{\sum }}}\underset{k=1}{%
\overset{n}{\sum }}\widetilde{\mathfrak{p}}_{k}\mathcal{G}_{n}^{k}\left(
t_{1},t_{2},...,t_{n}\right) z^{n},
\end{eqnarray*}%
and
\begin{eqnarray*}
\widetilde{\mathfrak{p}}\left( \mathfrak{v}\left( w\right) \right) &=&1+%
\widetilde{\mathfrak{p}}_{1}\mathfrak{v}(w)+\widetilde{\mathfrak{p}}_{2}(%
\mathfrak{v}(w))^{2}+\cdots \\
&& \\
&=&1+\widetilde{\mathfrak{p}}_{1}s_{1}w+\left( \widetilde{\mathfrak{p}}%
_{1}s_{2}+\widetilde{\mathfrak{p}}_{2}s_{1}^{2}\right) w^{2}+\cdots \\
&& \\
&=&1+\underset{}{\underset{n=1}{\overset{\infty }{\sum }}}\underset{k=1}{%
\overset{n}{\sum }}\widetilde{\mathfrak{p}}_{k}\mathcal{G}_{n}^{k}\left(
s_{1},s_{2},...,s_{n}\right) w^{n}.
\end{eqnarray*}%
Comparing the corresponding coefficients of (\ref{eq16}) and (\ref{eq17})
yields%
\begin{equation*}
\frac{\Gamma (\rho +1)\Gamma (n+\mu )}{\Gamma (\mu +1)\Gamma (n+\rho )}\frac{%
n}{\gamma }a_{n}=\widetilde{\mathfrak{p}}_{1}t_{n-1},
\end{equation*}%
\begin{equation*}
\frac{\Gamma (\rho +1)\Gamma (n+\mu )}{\Gamma (\mu +1)\Gamma (n+\rho )}\frac{%
n}{\gamma }b_{n}=\widetilde{\mathfrak{p}}_{1}s_{n-1}.
\end{equation*}%
For $a_{m}=0\ \left( 2\leq m\leq n-1\right) ,$ we get $b_{n}=-a_{n}$ and so%
\begin{equation}
\frac{\Gamma (\rho +1)\Gamma (n+\mu )}{\Gamma (\mu +1)\Gamma (n+\rho )}\frac{%
n}{\gamma }a_{n}=\widetilde{\mathfrak{p}}_{1}t_{n-1} \label{eq18}
\end{equation}%
and%
\begin{equation}
-\frac{\Gamma (\rho +1)\Gamma (n+\mu )}{\Gamma (\mu +1)\Gamma (n+\rho )}%
\frac{n}{\gamma }a_{n}=\widetilde{\mathfrak{p}}_{1}s_{n-1}. \label{eq19}
\end{equation}%
Now taking the absolute values of either of the above two equations and from
(\ref{eq9}), we obtain%
\begin{equation*}
\left\vert a_{n}\right\vert \leq \frac{\left\vert \gamma \right\vert
\left\vert \tau \right\vert \Gamma (\mu +1)\Gamma (n+\rho )}{n\Gamma (\rho
+1)\Gamma (n+\mu )}.
\end{equation*}
\end{proof}
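The bound in the theorem is straightforward to evaluate with gamma functions. A small numerical illustration (the parameter values $\gamma=1$, $\mu=1$, $\rho=\frac{1}{2}$ below are arbitrary choices for demonstration, not values singled out by the paper):

```python
from math import gamma, sqrt

def coeff_bound(n, gamma_par=1.0, mu=1.0, rho=0.5):
    """|a_n| <= |gamma| |tau| G(mu+1) G(n+rho) / (n G(rho+1) G(n+mu)), n >= 3."""
    tau = (1 - sqrt(5)) / 2
    return (abs(gamma_par) * abs(tau) * gamma(mu + 1) * gamma(n + rho)
            / (n * gamma(rho + 1) * gamma(n + mu)))

# For mu = rho the gamma factors cancel and the bound reduces to |gamma||tau|/n
print(coeff_bound(3, 1.0, 1.0, 1.0))  # ~ 0.618/3 ~ 0.206
print(coeff_bound(3))                 # illustrative mu = 1, rho = 1/2 case
```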
\begin{corollary}
For $\gamma \in
\mathbb{C}
\backslash \{0\}$, suppose that $f\in \mathfrak{R}_{\Sigma ,\gamma }\left(
\widetilde{\mathfrak{p}}\right) $. If $a_{m}=0~\left( 2\leq m\leq n-1\right)
$, then
\begin{equation*}
\left\vert a_{n}\right\vert \leq \frac{\left\vert \gamma \right\vert
\left\vert \tau \right\vert }{n}\ \ \ (n\geq 3).
\end{equation*}
\end{corollary}
\begin{corollary}
Suppose that $f\in \mathfrak{R}_{\Sigma }\left( \widetilde{\mathfrak{p}}%
\right) $. If $a_{m}=0~\left( 2\leq m\leq n-1\right) $, then
\begin{equation*}
\left\vert a_{n}\right\vert \leq \frac{\left\vert \tau \right\vert }{n}\ \ \
(n\geq 3).
\end{equation*}
\end{corollary}
\begin{theorem}
Let $f\in \mathfrak{R}_{\Sigma ,\gamma }^{\mu ,\rho }\left( \widetilde{%
\mathfrak{p}}\right) \ (\gamma \in
\mathbb{C}
\backslash \{0\}).$ Then%
\begin{eqnarray*}
\left\vert a_{2}\right\vert &\leq &\min \left\{ \dfrac{\left\vert \gamma
\right\vert \left\vert \tau \right\vert }{\sqrt{\left\vert \tfrac{3\gamma
(\mu +1)(\mu +2)}{(\rho +1)(\rho +2)}-\tfrac{12(\mu +1)^{2}}{(\rho +1)^{2}}%
\right\vert \left\vert \tau \right\vert +\tfrac{4(\mu +1)^{2}}{(\rho +1)^{2}}%
}},\right. \\
&& \\
&&\left. \left\vert \tau \right\vert \sqrt{\frac{\left\vert \gamma
\right\vert (\rho +1)(\rho +2)}{(\mu +1)(\mu +2)}}\right\}
\end{eqnarray*}%
and%
\begin{eqnarray*}
\left\vert a_{3}\right\vert &\leq &\min \left\{ \frac{\left\vert \gamma
\right\vert \tau ^{2}(\rho +1)(\rho +2)}{(\mu +1)(\mu +2)},\right. \\
&& \\
&&\left. \dfrac{\left\vert \tau \right\vert }{\frac{3(\mu +1)(\mu +2)}{%
\left\vert \gamma \right\vert (\rho +1)(\rho +2)}}\left[ 1+\frac{\left[
\frac{3(\mu +1)(\mu +2)\left\vert \gamma \right\vert \left\vert \tau
\right\vert }{(\rho +1)(\rho +2)}-\frac{4(\mu +1)^{2}}{(\rho +1)^{2}}\right]
}{\left\vert \dfrac{3\gamma (\mu +1)(\mu +2)}{(\rho +1)(\rho +2)}-\dfrac{%
12(\mu +1)^{2}}{(\rho +1)^{2}}\right\vert \left\vert \tau \right\vert +%
\dfrac{4(\mu +1)^{2}}{(\rho +1)^{2}}}\right] \right\} .
\end{eqnarray*}
\end{theorem}
\begin{proof}
Substituting $n$ by $2$ and $3$ in (\ref{eq18}) and (\ref{eq19}),
respectively, we find that%
\begin{equation}
\frac{2(\mu +1)}{\gamma (\rho +1)}a_{2}=\widetilde{\mathfrak{p}}_{1}t_{1},
\label{eq20}
\end{equation}%
\begin{equation}
\frac{3(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}a_{3}=\widetilde{%
\mathfrak{p}}_{1}t_{2}+\widetilde{\mathfrak{p}}_{2}t_{1}^{2}, \label{eq21}
\end{equation}%
\begin{equation}
-\frac{2(\mu +1)}{\gamma (\rho +1)}a_{2}=\widetilde{\mathfrak{p}}_{1}s_{1},
\label{eq22}
\end{equation}%
\begin{equation}
\frac{3(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}(2a_{2}^{2}-a_{3})=%
\widetilde{\mathfrak{p}}_{1}s_{2}+\widetilde{\mathfrak{p}}_{2}s_{1}^{2}.
\label{eq23}
\end{equation}%
Obviously, we obtain%
\begin{equation}
t_{1}=-s_{1}. \label{eq24}
\end{equation}%
If we add the equation (\ref{eq23}) to (\ref{eq21}) and use (\ref{eq24}), we
get
\begin{equation}
\frac{6(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}a_{2}^{2}=\widetilde{%
\mathfrak{p}}_{1}\left( t_{2}+s_{2}\right) +2\widetilde{\mathfrak{p}}%
_{2}t_{1}^{2}. \label{eq25}
\end{equation}%
Using the value of $t_{1}^{2}$ from (\ref{eq20}), we get
\begin{equation}
\left[ \frac{6(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}\widetilde{%
\mathfrak{p}}_{1}^{2}-\frac{8(\mu +1)^{2}}{\gamma ^{2}(\rho +1)^{2}}%
\widetilde{\mathfrak{p}}_{2}\right] a_{2}^{2}=\widetilde{\mathfrak{p}}%
_{1}^{3}\left( t_{2}+s_{2}\right) . \label{eq26}
\end{equation}%
Combining (\ref{eq26}) and (\ref{eq9}), we obtain
\begin{eqnarray*}
2\left\vert \frac{3(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}\widetilde{%
\mathfrak{p}}_{1}^{2}-\frac{4(\mu +1)^{2}}{\gamma ^{2}(\rho +1)^{2}}%
\widetilde{\mathfrak{p}}_{2}\right\vert \left\vert a_{2}\right\vert ^{2}
&\leq &\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert ^{3}\left(
\left\vert t_{2}\right\vert +\left\vert s_{2}\right\vert \right) \\
&& \\
&\leq &2\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert ^{3}\left(
1-\left\vert t_{1}\right\vert ^{2}\right) \\
&& \\
&=&2\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert ^{3}-2\left\vert
\widetilde{\mathfrak{p}}_{1}\right\vert ^{3}\left\vert t_{1}\right\vert ^{2}.
\end{eqnarray*}%
It follows from (\ref{eq20}) that%
\begin{equation}
\left\vert a_{2}\right\vert \leq \dfrac{\left\vert \gamma \right\vert
\left\vert \tau \right\vert }{\sqrt{\left\vert \dfrac{3\gamma (\mu +1)(\mu
+2)}{(\rho +1)(\rho +2)}-\dfrac{12(\mu +1)^{2}}{(\rho +1)^{2}}\right\vert
\left\vert \tau \right\vert +\dfrac{4(\mu +1)^{2}}{(\rho +1)^{2}}}}.
\label{eq28}
\end{equation}%
Additionally, by (\ref{eq9}) and (\ref{eq25})
\begin{eqnarray*}
\frac{6(\mu +1)(\mu +2)}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}%
\left\vert a_{2}\right\vert ^{2} &\leq &\left\vert \widetilde{\mathfrak{p}}%
_{1}\right\vert \left( \left\vert t_{2}\right\vert +\left\vert
s_{2}\right\vert \right) +2\left\vert \widetilde{\mathfrak{p}}%
_{2}\right\vert \left\vert t_{1}\right\vert ^{2} \\
&& \\
&\leq &2\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert \left(
1-\left\vert t_{1}\right\vert ^{2}\right) +2\left\vert \widetilde{\mathfrak{p%
}}_{2}\right\vert \left\vert t_{1}\right\vert ^{2} \\
&& \\
&=&2\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert +2\left\vert
t_{1}\right\vert ^{2}(\left\vert \widetilde{\mathfrak{p}}_{2}\right\vert
-\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert ).
\end{eqnarray*}%
Since $\left\vert \widetilde{\mathfrak{p}}_{2}\right\vert >\left\vert
\widetilde{\mathfrak{p}}_{1}\right\vert $, we get%
\begin{equation*}
\left\vert a_{2}\right\vert \leq \left\vert \tau \right\vert \sqrt{\frac{%
\left\vert \gamma \right\vert (\rho +1)(\rho +2)}{(\mu +1)(\mu +2)}}.
\end{equation*}%
Next, in order to derive the bounds on $\left\vert a_{3}\right\vert ,$ by
subtracting (\ref{eq23}) from (\ref{eq21}), we may obtain%
\begin{equation}
\frac{6(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}a_{3}=\frac{6(\mu +1)(\mu
+2)}{\gamma (\rho +1)(\rho +2)}a_{2}^{2}+\widetilde{\mathfrak{p}}_{1}\left(
t_{2}-s_{2}\right) . \label{eq29}
\end{equation}%
Evidently, from (\ref{eq25}), we state that%
\begin{eqnarray*}
a_{3} &=&\frac{\widetilde{\mathfrak{p}}_{1}\left( t_{2}+s_{2}\right) +2%
\widetilde{\mathfrak{p}}_{2}t_{1}^{2}}{\frac{6(\mu +1)(\mu +2)}{\gamma (\rho
+1)(\rho +2)}}+\frac{\widetilde{\mathfrak{p}}_{1}\left( t_{2}-s_{2}\right) }{%
\frac{6(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}} \\
&& \\
&=&\frac{\widetilde{\mathfrak{p}}_{1}t_{2}+\widetilde{\mathfrak{p}}%
_{2}t_{1}^{2}}{\frac{3(\mu +1)(\mu +2)}{\gamma (\rho +1)(\rho +2)}}
\end{eqnarray*}%
and consequently%
\begin{eqnarray*}
\left\vert a_{3}\right\vert &\leq &\frac{\left\vert \widetilde{\mathfrak{p}}%
_{1}\right\vert \left\vert t_{2}\right\vert +\left\vert \widetilde{\mathfrak{%
p}}_{2}\right\vert \left\vert t_{1}\right\vert ^{2}}{\frac{3(\mu +1)(\mu +2)%
}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}} \\
&& \\
&\leq &\frac{\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert \left(
1-\left\vert t_{1}\right\vert ^{2}\right) +\left\vert \widetilde{\mathfrak{p}%
}_{2}\right\vert \left\vert t_{1}\right\vert ^{2}}{\frac{3(\mu +1)(\mu +2)}{%
\left\vert \gamma \right\vert (\rho +1)(\rho +2)}} \\
&& \\
&=&\frac{\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert +\left\vert
t_{1}\right\vert ^{2}(\left\vert \widetilde{\mathfrak{p}}_{2}\right\vert
-\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert )}{\frac{3(\mu +1)(\mu
+2)}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}}.
\end{eqnarray*}%
Since $\left\vert \widetilde{\mathfrak{p}}_{2}\right\vert >\left\vert
\widetilde{\mathfrak{p}}_{1}\right\vert $, we obtain%
\begin{equation*}
\left\vert a_{3}\right\vert \leq \frac{\left\vert \gamma \right\vert
\left\vert \tau \right\vert ^{2}(\rho +1)(\rho +2)}{(\mu +1)(\mu +2)}.
\end{equation*}%
On the other hand, by (\ref{eq9}) and (\ref{eq29}), we have
\begin{eqnarray*}
\frac{6(\mu +1)(\mu +2)}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}%
\left\vert a_{3}\right\vert &\leq &\frac{6(\mu +1)(\mu +2)}{\left\vert
\gamma \right\vert (\rho +1)(\rho +2)}\left\vert a_{2}\right\vert
^{2}+\left\vert \widetilde{\mathfrak{p}}_{1}\right\vert \left( \left\vert
t_{2}\right\vert +\left\vert s_{2}\right\vert \right) \\
&& \\
&\leq &\frac{6(\mu +1)(\mu +2)}{\left\vert \gamma \right\vert (\rho +1)(\rho
+2)}\left\vert a_{2}\right\vert ^{2}+2\left\vert \widetilde{\mathfrak{p}}%
_{1}\right\vert \left( 1-\left\vert t_{1}\right\vert ^{2}\right) .
\end{eqnarray*}%
Then, with the help of (\ref{eq20}), we have%
\begin{equation*}
\frac{3(\mu +1)(\mu +2)}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}%
\left\vert a_{3}\right\vert \leq \left[ \frac{3(\mu +1)(\mu +2)}{\left\vert
\gamma \right\vert (\rho +1)(\rho +2)}-\frac{4(\mu +1)^{2}}{\left\vert
\gamma \right\vert ^{2}(\rho +1)^{2}\left\vert \widetilde{\mathfrak{p}}%
_{1}\right\vert }\right] \left\vert a_{2}\right\vert ^{2}+\left\vert
\widetilde{\mathfrak{p}}_{1}\right\vert .
\end{equation*}%
By considering (\ref{eq28}), we deduce that%
\begin{equation*}
\left\vert a_{3}\right\vert \leq \dfrac{\left\vert \tau \right\vert }{\frac{%
3(\mu +1)(\mu +2)}{\left\vert \gamma \right\vert (\rho +1)(\rho +2)}}\left\{
1+\frac{\left[ \frac{3(\mu +1)(\mu +2)\left\vert \gamma \right\vert
\left\vert \tau \right\vert }{(\rho +1)(\rho +2)}-\frac{4(\mu +1)^{2}}{(\rho
+1)^{2}}\right] }{\left\vert \dfrac{3\gamma (\mu +1)(\mu +2)}{(\rho +1)(\rho
+2)}-\dfrac{12(\mu +1)^{2}}{(\rho +1)^{2}}\right\vert \left\vert \tau
\right\vert +\dfrac{4(\mu +1)^{2}}{(\rho +1)^{2}}}\right\} .
\end{equation*}
\end{proof}
\begin{corollary}
Let $f\in \mathfrak{R}_{\Sigma ,\gamma }\left( \widetilde{\mathfrak{p}}%
\right) \ (\gamma \in
\mathbb{C}
\backslash \{0\}).$ Then%
\begin{equation*}
\left\vert a_{2}\right\vert \leq \min \left\{ \dfrac{\left\vert \gamma
\right\vert \left\vert \tau \right\vert }{\sqrt{3\left\vert \gamma
-4\right\vert \left\vert \tau \right\vert +4}},\left\vert \tau \right\vert
\sqrt{\left\vert \gamma \right\vert }\right\}
\end{equation*}%
and%
\begin{equation*}
\left\vert a_{3}\right\vert \leq \min \left\{ \left\vert \gamma \right\vert
\left\vert \tau \right\vert ^{2},\dfrac{\left( \left\vert \gamma
-4\right\vert +\left\vert \gamma \right\vert \right) \left\vert \tau
\right\vert ^{2}\left\vert \gamma \right\vert }{3\left\vert \gamma
-4\right\vert \left\vert \tau \right\vert +4}\right\} .
\end{equation*}
\end{corollary}
\begin{corollary}
Let $f\in \mathfrak{R}_{\Sigma }\left( \widetilde{\mathfrak{p}}\right) .$ Then%
\begin{equation*}
\left\vert a_{2}\right\vert \leq \dfrac{\left\vert \tau \right\vert }{\sqrt{%
9\left\vert \tau \right\vert +4}}
\end{equation*}%
and%
\begin{equation*}
\left\vert a_{3}\right\vert \leq \frac{4\left\vert \tau \right\vert ^{2}}{%
9\left\vert \tau \right\vert +4}.
\end{equation*}
\end{corollary}
\section{Introduction}
\label{sec:intro}
In observational cosmology, morphological classification is the most basic information when creating galaxy catalogs. The first classification system, by \citet{Hubble1,Hubble2}, distinguishes galaxies with a dominant bulge component -- also known as Early-Type Galaxies (ETGs) -- from galaxies with a prominent disk component -- named Late-Type Galaxies (LTGs). LTGs are commonly referred to as spiral galaxies because of their prominent spiral arms, while ETGs are commonly referred to as elliptical (E) galaxies, as they have a simpler ellipsoidal structure with less structural differentiation (less information). More refined classifications fork spirals into two groups: barred (SB) and unbarred (S) galaxies. These two groups can be refined even further by the strength of their spiral arms. A number known as T-Type can be assigned to the morphological types: ETGs have T-Type $\le$ 0 and LTGs have T-Type $>$ 0 \citep{Vaucouleurs}. T-Type considers ellipticity and spiral-arm strength but does not reflect the presence or absence of the bar feature in spirals.
Morphology reveals structural, intrinsic and environmental properties of galaxies. In the local universe, ETGs are mostly situated in the center of galaxy clusters, have a larger mass, less gas, higher velocity dispersion, and older stellar populations than LTGs, which are rich star-forming systems \citep{roberts,blanton,pozzetti}. By mapping where the ETGs are, it is possible to map the large-scale structure of the universe. Therefore, galaxy morphology is of paramount importance for extragalactic research as it relates to stellar properties and key aspects of the evolution and structure of the universe.
Astronomy has become an extremely data-rich field of knowledge with the advance of new technologies in recent decades. Nowadays, it is impossible to rely on human classification given the huge flow of data attained by current research surveys. New telescopes and instruments on board satellites provide massive datasets. Therefore, in view of their voluminous size, much of the data are never explored. The potential extraction of knowledge from these collected data is only partially accomplished, even though many questions of contemporary science critically depend on the processing of such large amounts of data \citep{advML4astro,statisticsAstroML,statChallengesInAstro}.
Automatic classification can address this bottleneck of observational research.
One of the most used astronomical datasets is the Sloan Digital Sky Survey -- SDSS, which has been acquiring photometry from the northern sky since 1998. After its first two phases, SDSS Data Release 7 has publicly released photometry for 357 million unique sources, and it is expected to be around 15 terabytes of data when the survey is complete \citep{sdss}. This massive dataset is just one of hundreds of surveys that are currently underway.
One effort to overcome the challenge of classifying hundreds of thousands of galaxies depends on the laborious engagement of many people interested in the subject. Galaxy Zoo is a citizen science project which provided a visual morphological classification for nearly one million galaxies in its first phase (Galaxy Zoo 1), distinguishing elliptical from spiral galaxies. With the help of the general public, this project has obtained more than $4 \times 10^7$ individual classifications made by $\sim 10^5$ participants. In its second phase, Galaxy Zoo 2 extends the classification
into more detailed features such as bars, spiral arms, bulges, and many others, providing a catalog with nearly 300 thousand galaxies present in SDSS. Throughout this work, we use Galaxy Zoo \citep{GZ1a,GZ1b,GZ2} classification as supervision and validation (ground truth) to our classification models.
Several authors \citep{abraham,Asymmetry,conselice,lotz} studied and presented results about objective galaxy morphology measures with Concentration, Asymmetry, Smoothness, Gini, and M20 (CASGM system).
\citet{ferrari} introduced the entropy of information H (Shannon entropy) to quantify the distribution of pixel values in the image. \citet{gpa2018} introduced the Gradient Pattern Analysis (GPA) technique to separate elliptical from spiral galaxies by the second moment of the gradient of the images. This whole system used by \citet{gpa2018} -- called CyMorph -- is described in this paper (Section \ref{sec:cymorph}).
It is not trivial to determine the success of each non-parametric morphological parameter to perform this classification task. Considering the separation between elliptical and spiral galaxies, for example,
a morphological parameter is more reliable
if it maximizes the separation of the distributions of these two types.
\citet{gpa2018} described the evaluation technique proposed and adopted to measure the success of metrics to separate elliptical from spiral galaxies \citep[][see also Subsection \ref{sub:ghs}]{pyGHS}.
The main purpose of this investigation is to answer the question ``How to morphologically classify galaxies using Galaxy Zoo \citep{GZ1a,GZ1b,GZ2} classification through non-parametric features and Machine Learning methods?'' We also apply Deep Learning techniques directly to images to overcome the same challenge and compare results from both approaches. Deep Convolutional Neural Network (CNN) is a well-established methodology to classify images \citep{deepLearning}. Without the need of a feature extractor, the network itself adjusts its parameters in the learning process to extract the features. Figure \ref{fig:ml-dl} shows both flows for each approach used in this work:
Traditional Machine Learning (TML) and Deep Learning (DL).
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth]{./figures/ml-dl.png}
\caption{Illustrative sketch of traditional Machine Learning and Deep Learning flows.}
\label{fig:ml-dl}
\end{figure*}
The huge amount of photometric astrophysical data available and the highly increasing advancements on hardware and methods to perform automatic classifications has been leveraging related publications \citep{law,freeman,deepGal,blueNugget,jcis,dieleman,khan2018,huertas,deepGal2}.
We highlight \citet{deepGal2}, who use questions and answers
from Galaxy Zoo 2 to replicate the answers from the users, and provide a morphological classification by T-Type in their final catalog.
The approach used in this paper is different from the one used in \citet{deepGal2}. Instead of using questions and answers from Galaxy Zoo 2, we use the classifications and images themselves. Also, we revisit issues not touched upon in previous studies dealing with morphological parameters \citep{abraham,Asymmetry,conselice,lotz}; namely, threshold dependence in the use of the segmented image. We study the impact of that on the parameters that ultimately will be used in the TML approach.
Although it is already a well-established observation that for perception tasks (which galaxy morphology is) Deep Learning is likely to outperform machine learning models trained on hand-engineered features \citep{imagenet}, this subject is in its infancy in galaxy morphology, and a comparison of these two approaches has never been presented in the same work in the literature. Also, deep learning methods need huge amounts of data to learn from and huge computational resources to be effective. Deep learning models can be hard to tune and tame, and the prediction time can take much longer than for other models because of the complexity \citep{deepLearning}. The traditional machine learning approach is still relevant.
This document is organized as follows: Section \ref{sec:data} describes the sample and data used to measure morphology and build the classification models. Section \ref{sec:cymorph} describes the advances in non-parametric galaxy morphological system (CyMorph). Sections \ref{sec:ml} and \ref{sec:dl} describe the basics and methodology
of TML and DL employed, respectively. Section \ref{sec:results} presents the results and validation for all experiments conducted. We compare the final product of this work with state-of-the-art catalogs in Section \ref{sec:catalogs}, followed by a summary in Section \ref{sec:summary}. We present catalog details in \ref{sec:fincat}.
\section{Sample and Data}
\label{sec:data}
This work uses data acquired from the SDSS-DR7 \citep{sdss}
and Galaxy Zoo catalogs \citep{GZ1a,GZ1b,GZ2} for measuring morphology and training
the classification models.
The samples are composed of galaxies in r-band from SDSS-DR7 in the redshift range
$0.03 < z < 0.1$, Petrosian magnitude in r-band brighter than
17.78 (spectroscopic magnitude limit), and $|b| \ge 30^o$, where
$b$ is the galactic latitude.
For supervised learning purposes, we consider the defined classification from
Galaxy Zoo 1 \citep[][GZ1 hereafter]{GZ1a,GZ1b} between
E and S galaxies,
and the classification from Galaxy Zoo 2 \citep[][GZ2]{GZ2} with prefixes in one of
the following 11 classes: Er, Ei, Ec, Sa, Sb, Sc, Sd, SBa, SBb, SBc, SBd.
Three other scenarios are explored with GZ2 supervision:
classification considering 9 classes (same as
the 11 classes, except that all elliptical galaxies are united in a single class), 7 classes (same as the previous, but excluding the faintest galaxy types, Sd and SBd) and three classes:
E, S and SB.
We study the impact of different datasets on the training process, varying the number and size of objects in the samples.
We define a parameter $K$ as the area of the galaxy's Petrosian ellipse divided by
the area of the Full Width at Half Maximum (FWHM).
Equation \ref{eq:k} presents how to calculate $K$, where $R_P$ is the Petrosian radius \citep[see][for more details about $R_P$]{petrosian,sdss}.
By restricting the samples to a minimum $K$, we limit the number and size of objects in the dataset.
The number of galaxies for the three main samples we explore
($K \ge $ 5, $K \ge $ 10 and $K \ge $ 20) are presented in Table \ref{tab:sample}.
\begin{equation}
\label{eq:k}
K = \left( \frac{R_P}{\textnormal{FWHM}/ 2} \right)^2
\end{equation}
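For concreteness, Equation \ref{eq:k} translates directly into code; the sketch below uses illustrative values (the function name and sample numbers are ours):

```python
def k_parameter(r_petrosian, fwhm):
    """K = (R_P / (FWHM / 2))**2: area of the Petrosian ellipse
    over the area of the seeing disk, as in Eq. (k) of the text."""
    return (r_petrosian / (fwhm / 2.0)) ** 2

# SDSS-like values: R_P = 8 arcsec, PSF FWHM = 1.5 arcsec
print(k_parameter(8.0, 1.5))  # ~113.8, well above the K >= 20 cut
```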
\begin{table}
\centering
\caption{Number of galaxies for the main samples in this work from each database (SDSS, GZ1 and GZ2).}
\vspace*{1mm}
\begin{tabular}{c c c c}
\hline
\multirow{2}{*}{\textbf{Restriction}} & \multicolumn{3}{c}{\textbf{Number of galaxies in}}\\
\cline{2-4}
& \textbf{SDSS} & \textbf{GZ1} & \textbf{GZ2} \\ \hline
$K \ge 5$ & 239,833 & 104,787 & 138,430 \\
$K \ge 10$ & 175,167 & 89,829 & 110,163 \\
$K \ge 20$ & 96,787 & 58,030 & 67,637 \\ \hline
\end{tabular}
\label{tab:sample}
\end{table}
With smaller values of $K$ we have more but smaller objects,
while samples restricted to bigger values of $K$ have fewer but bigger objects.
To properly check the impact of the number and sizes of objects in the samples, we explore the Deep Learning approach for three classes problem in detail with other restrictions: $K \ge $ 7, $K \ge $ 9, $K \ge $ 11, $K \ge $ 14 and $K \ge $ 17.
For Machine and Deep Learning experiments, we split the datasets from GZ1 and GZ2 into training-validation-test subsets in the proportion 80-10-10. In all experiments, each of these subsets is constrained to the same restriction (the model trained and validated with a subset restricted to $K\ge20$ is also tested with the subset restricted to $K\ge20$).
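The 80-10-10 split can be sketched as a simple shuffle over object indices (a minimal version of our own; in practice each subset is additionally constrained to a common $K$ restriction, as stated above):

```python
import numpy as np

def split_80_10_10(n_objects, seed=0):
    """Shuffle object indices and split them 80-10-10 into
    training, validation and test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_objects)
    n_train = int(0.8 * n_objects)
    n_val = int(0.1 * n_objects)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# e.g. the GZ2 sample with K >= 20 (67,637 galaxies, Table "sample")
train, val, test = split_80_10_10(67637)
print(len(train), len(val), len(test))  # 54109 6763 6765
```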
We should
keep in mind that the data used in this work, from SDSS-DR7, have neither a high spatial resolution (0.396 $\arcsec$ pixel$^{-1}$) nor an adequate PSF FWHM ($\sim$1.5 $\arcsec$). For comparison, the Dark Energy Survey \citep[][DES]{DES} has a pixel size of 0.27 $\arcsec$ and a PSF FWHM of $\sim$0.9 $\arcsec$. This is why we study the quality of our classification as a function of $K$.
\section{Advances in Non-Parametric Galaxy Morphology - CyMorph}
\label{sec:cymorph}
Methodologies for computing non-parametric morphological metrics have been presented by several authors \citep{morganci,Kent1985ccd,abraham,takamiya,conselice,lotz,ferrari,gpa2018}.
In this section, we present CyMorph - a non-parametric galaxy morphology system which determines Concentration (C), Asymmetry (A), Smoothness (S), Entropy (H) and Gradient Pattern Analysis (GPA) metrics.
We perform image preprocessing techniques to ensure consistency and improve feature extraction.
CyMorph achieves this goal in three major steps: producing the galaxy stamp, removing secondary objects, and generating the segmented image.
To remove secondary objects inside the stamp, we replace their pixels with the median value of the isophotal level that crosses the object.
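As an illustration of the segmentation step, a minimal region-growing mask could look like the sketch below (our own simplification; the threshold and 4-connectivity choices are illustrative, not the exact algorithm used by CyMorph):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a segmentation mask from `seed`, accepting 4-connected
    pixels whose flux is at least `thresh` (simplified sketch)."""
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        if mask[i, j] or img[i, j] < thresh:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]:
                queue.append((ni, nj))
    return mask

# A 3x3 bright patch is segmented; a detached bright pixel is not
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
img[0, 4] = 1.0
mask = region_grow(img, (2, 2), 0.5)
print(mask.sum())  # 9
```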
Concentration is the only metric we calculate using the clean galaxy stamp
since we want the whole accumulated flux profile of the galaxy. For all other metrics, we use the segmented image as input
which we obtain by applying a mask upon the clean image. The mask is computed by a region growing algorithm \citep{pedrini}. We summarize CyMorph metrics as follows:
\begin{itemize}
\item Concentration is defined as $C = log_{10} (R_1 / R_2)$, where $R_1$ and $R_2$ are the outer and inner radii, respectively, enclosing some fraction of the total flux \citep{conselice,lotz,ferrari}. We use an optimization process for setting up the best configuration parameters for CyMorph (described in Subsection \ref{sub:optm}). The best configuration by this method is: $C = log_{10} \left( R_{75\%} / R_{35\%} \right)$.
\item Asymmetry is measured using the correlation between the original and rotated image: $A = 1 - s(I^0, I^\pi)$, where $I^0$ and $I^\pi$ are the original and the $\pi$-rotated images. The function $s()$ is the Spearman's rank correlation coefficient \citep{press}, which has been proved to be a stable and robust correlation coefficient \citep{rubens}.
\item Smoothness describes the flux dispersion of an image, namely how the gradient varies over the entire image. This can be measured as the correlation between the original image and its smoothed counterpart \citep{abraham,conselice,ferrari}. We apply the Butterworth filter for smoothing the images. This filter provides the advantage of a continuous adaptive control of the smoothing degree applied to the image \citep[see][for more details]{butter,pedrini,rubens}. We use Spearman's rank correlation coefficient to compute smoothness, following the same reasoning as for asymmetry. We define smoothness as $S = 1 - s(I^0, I^s)$, where $I^0$ is the flux intensity of the original image, and $I^s$ is the flux intensity of the smoothed image.
\item Gradient Pattern Analysis (GPA) is a well-established method to estimate the local gradient properties of a set of points, which is generally represented in a two-dimensional (2D) space \citep{gpa1999,gpa2000,gpa2003}. We use the improved version of GPA developed for galaxy morphology (see \citet{gpa2018} and references therein for more details).
\item In digital image processing, the entropy of information, H \citep[Shannon entropy,][]{bishop}, measures the distribution of pixel values in the image. In galaxy morphology, we expect high values of H for clumpy galaxies because of their heterogeneous pixel distribution, and low H for smooth galaxies \citep[see][for more details]{ferrari,bishop}.
\end{itemize}
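To make the definitions above concrete, simplified versions of two of the metrics (Asymmetry via the Spearman correlation with the $\pi$-rotated image, and the normalized Shannon entropy over $\beta$ bins) might look like the sketch below; the actual CyMorph implementation works on the segmented image with the optimized configurations described next:

```python
import numpy as np
from scipy.stats import spearmanr

def asymmetry(img):
    """A = 1 - s(I0, Ipi): one minus the Spearman rank correlation
    between the image and its 180-degree rotation (simplified)."""
    s, _ = spearmanr(img.ravel(), np.rot90(img, 2).ravel())
    return 1.0 - s

def entropy(img, beta=100):
    """Shannon entropy of the pixel-value distribution over `beta`
    bins, normalized by log(beta) so that H lies in [0, 1]."""
    counts, _ = np.histogram(img.ravel(), bins=beta)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(beta))

# A point-symmetric light profile is perfectly symmetric: A = 0
c = np.arange(64) - 31.5
x, y = np.meshgrid(c, c)
disk = np.exp(-(x**2 + y**2) / 200.0)
print(round(asymmetry(disk), 3))  # 0.0
```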
For more specific details about how to compute each of these metrics, see \citet{rubens} and references therein.
\subsection{Geometric Histogram Separation ($\delta_{\rm{GHS}}$)}
\label{sub:ghs}
For a given sample of galaxies, CyMorph measures C, A, S, H and GPA and these parameters depend on some quantities. Our main goal is to choose the best quantities possible for reaching a maximum performance in classifying galaxies. Using an independent morphological classification (from GZ1, e.g.), we have elliptical and spiral distributions for each parameter. All we need is a simple and reliable method to objectively assign a value for the separation between elliptical and spiral distributions.
Here, we measure the geometric distance between the distributions with the GHS (Geometric Histogram Separation) algorithm \citep[see][for more details]{pyGHS,gpa2018}.
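The intuition behind scoring the separation of the two distributions can be conveyed with a much cruder overlap-based proxy (our own illustration only; GHS itself measures a geometric distance between the histograms):

```python
import numpy as np

def histogram_separation(a, b, bins=50):
    """Crude proxy for a separation score: 1 minus the overlap of
    the two normalized histograms (0 = identical, 1 = disjoint)."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    ha, _ = np.histogram(a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    return 1.0 - np.minimum(ha / ha.sum(), hb / hb.sum()).sum()

rng = np.random.default_rng(1)
ellipticals = rng.normal(0.8, 0.05, 2000)  # toy "concentration" values
spirals = rng.normal(0.4, 0.05, 2000)
print(histogram_separation(ellipticals, spirals))  # close to 1
```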
\subsection{Optimizing Morphological Metrics Configuration}
\label{sub:optm}
\begin{table*}
\centering
\caption{Parameter ranges explored in the optimization process. Asymmetry is omitted since it depends only on $d_{\sigma}$. Concentration (*) does not depend on $d_{\sigma}$.}
\vspace*{1mm}
\begin{tabular}{c c c c c}
\hline
\textbf{Sextractor} & \textbf{C}* & \textbf{S} & \textbf{G$_2$} & \textbf{H} \\ \hline
\multirow{2}{*}{$0.1 \le d_{\sigma} \le 5.0$} & \multirow{2}{*}{\shortstack{$0.55 \le R_1 \le 0.80$\\$0.20 \le R_2 \le 0.45$}} & \multirow{2}{*}{$0.2 \le c \le 0.8$} & \multirow{2}{*}{\shortstack{$ 0.00 \le m_{tol} \le 0.20$\\$0.01 \le p_{tol} \le 0.04$}} & \multirow{2}{*}{$100 \le \beta \le 250$} \\
& & & & \\ \hline
\end{tabular}
\label{tab:optim}
\end{table*}
\begin{figure*}
\centering
\subfloat[Concentration]{%
\includegraphics[width=0.41\linewidth]{./figures/CN.png}%
\label{fig:CN}
}%
\subfloat[Asymmetry]{%
\includegraphics[width=0.41\linewidth]{./figures/sA3.png}%
\label{fig:sA3}
}%
\\
\subfloat[Smoothness]{%
\includegraphics[width=0.41\linewidth]{./figures/sS3.png}%
\label{fig:sS3}
}%
\subfloat[Entropy]{%
\includegraphics[width=0.41\linewidth]{./figures/sH.png}%
\label{fig:sH}
}%
\\
\subfloat[GPA -- modular tolerance]{%
\includegraphics[width=0.41\linewidth]{./figures/sGa1.png}%
\label{fig:GPA1}%
}
\subfloat[GPA -- fine tuning]{%
\includegraphics[width=0.41\linewidth]{./figures/sGa2.png}%
\label{fig:GPA2}%
}
\caption{Plots describing the whole optimization process for the morphological metrics
configuration. Solid red lines (not dashed) mark the best configuration for the
given parameter. See Subsection \ref{sub:optm} for the explanation of the experiments
and the best configurations obtained.}
\label{fig:optim}
\end{figure*}
CyMorph has configurable parameters that we have to fine-tune for a better distinction between different morphological types. One specific configuration is the threshold parameter used in Sextractor \citep{sextractor} to detect objects in an image: \textit{DETECT\_THRESH} (hereafter $d_{\sigma}$). Sextractor detects an object if a group of connected pixels is above the background by a given $d_{\sigma}$. Thus, we want to find the minimum $d_{\sigma}$ value, sufficiently above the background limit, for which we do not lose information when computing each metric. Most of the configurable parameters are related to each morphological metric. It is important to explore these possibilities to obtain the best performance out of CyMorph.
Asymmetry only depends on $d_{\sigma}$. For the other metrics, we exhaustively explore the combinations of configurable parameters: outer ($R_1$) and inner ($R_2$) radii for Concentration; control parameter $c$ of Butterworth Filter for Smoothness; modular ($m_{tol}$) and phase tolerance ($p_{tol}$) for G$_{2}$; and, number of bins $\beta$ for Entropy.
Table \ref{tab:optim} summarizes the parameters and ranges explored. The optimization process may be approached in different ways. One of them consists of optimizing all variables at once by maximizing a metric which is output from the application of a local two-sample test \citep{kim}. In this work, we focus on a variable-by-variable optimization, which not only enables us to select the best configuration and input metrics for the TML methods but also leads to the same accuracy in morphology as that obtained in GZ1 (see Section \ref{sec:results}). In the optimization experiments reported here, we randomly select a sample with 1,000 elliptical and 1,000 spiral galaxies.
Figure \ref{fig:optim} presents the results for all optimization experiments. In each plot, all lines are dashed except the red one which contains the best configuration for a given metric. The \textit{y}-axis has GHS separation values ($\delta_{\rm{GHS}}$) in every panel. In the following Subsection, we interpret the results displayed in Figure \ref{fig:optim}.
\subsection{Results on Morphology}
\label{sub:CyMorph-results}
\begin{figure}[!h]
\centering
\includegraphics[width=.33\textwidth]{./figures/classicCAS.png}
\caption{Results on galaxy morphology using Classic CAS \citep{conselice,lotz},
with elliptical galaxies in red and spiral galaxies in blue.}
\label{fig:CAS}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{./figures/hists-2x3.png}
\caption{Results on galaxy morphology using CyMorph (the proposed system),
with elliptical galaxies in red and spiral galaxies in blue.}
\label{fig:results}
\end{figure*}
In this subsection, we compare the results obtained by computing the classic CAS system \citep{conselice,lotz}, presented in Figure \ref{fig:CAS}, with the optimal results obtained by the CyMorph system, exhibited in Figure \ref{fig:results}. \citet{conselice,lotz} estimate Concentration and Asymmetry without significant differences between them. However, Smoothness is implemented in different ways. We present Smoothness as in \citet{lotz}, which gives the most consistent results. For each non-parametric morphological index, we display a histogram of the distributions for elliptical galaxies (in red) and spiral galaxies (in blue). In each
panel we also list the $\delta_{\rm{GHS}}$ value.
The classic CAS system has its best result with Concentration ($\delta_{\rm{GHS}} = 0.79$); however, this is still lower than the lowest performance obtained by the CyMorph metrics, which is Asymmetry with $\delta_{\rm{GHS}} = 0.83$ (see Figure \ref{fig:results}). With this improvement in the CAS metrics within CyMorph (Smoothness, for instance, gives the best result: $\delta_{\rm{GHS}} = 0.92$), and the adoption of Entropy ($\delta_{\rm{GHS}} = 0.87$) and Gradient Pattern Analysis ($G_2$: $\delta_{\rm{GHS}} = 0.90$), we have satisfactory non-parametric morphology metrics to serve as input features to the Traditional Machine Learning algorithms. $G_2$ and $H$, two of the best metrics by $\delta_{\rm{GHS}}$, have highly correlated results: the greater the Entropy value, the more asymmetric the gradient patterns; and, vice versa, lower entropy values correspond to more symmetric gradient patterns.
The reasons for the improvement upon classic metrics can be summarized as: (1) the three-step preprocessing, (2) Butterworth filter to smooth the image (concerning Smoothness metric), (3) usage of correlation coefficients for Asymmetry and Smoothness, and (4) optimization process to better configure each metric.
\section{Machine Learning Applied to Galaxy Morphology}
\label{sec:ml}
CyMorph presents a consistent non-parametric morphological system. By employing Machine Learning (ML) methods with CyMorph metrics as features, we can value the best morphological information and obtain reliable and consistent classification results in galaxy morphology. An alternative would be to test logistic regression and other regression methods, which is beyond the scope of this paper. The five input features for the learning process are the best morphological metrics (given by $\delta_{\rm{GHS}}$) computed by CyMorph: C, A, S, G$_2$ and H. We maintain the restriction related to the area of the galaxies to build up different classification models: (1) $K \ge 5$; (2) $K \ge 10$; and (3) $K \ge 20$, i.e., the area of the galaxy is at least five (model 1), ten (model 2) and twenty (model 3) times larger than the FWHM area for each corresponding object, respectively.
We build up Decision Tree (DT), Support Vector Machine (SVM) and Multilayer Perceptron (MLP) models to classify galaxies considering different numbers of classes. We use the \texttt{scikit-learn} \citep{scikit} \texttt{python} library to perform the experiments and procedures reported in this Section. We use Cross-Validation (CV) to split the dataset into training-validation-testing sets to address the trade-off between bias and variance \citep{mitchell,hands-on}. First, we split the dataset in a 90/10 proportion for the training and testing sets, respectively. CV is applied to the 90\% portion of the dataset.
Consistent performance validation metrics are crucial to guide the learning process and to objectively measure the performance of each model. No metric is designed to perform this task alone. We employ the Overall Accuracy (OA) as the figure of merit to compare all the different models.
Additionally, we employ other performance metrics: Precision (P) and Recall (R) -- see \citet{mitchell,hands-on,deepLearning} for more details about OA, P, and R. For a further analysis on the problem with two classes, we use the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC curve \citep[AUC,][]{roc1}.
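For reference, OA, P and R reduce to simple counts over the predicted and true labels; a minimal sketch (the toy label arrays are ours):

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """OA: fraction of correctly classified objects."""
    return float(np.mean(y_true == y_pred))

def precision_recall(y_true, y_pred, positive=1):
    """P = TP/(TP+FP), R = TP/(TP+FN) for the chosen positive class."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    return tp / (tp + fp), tp / (tp + fn)

y_true = np.array([1, 1, 1, 0, 0, 0])  # toy labels: 1 = spiral, 0 = elliptical
y_pred = np.array([1, 1, 0, 0, 0, 1])
print(overall_accuracy(y_true, y_pred))  # 0.666...
print(precision_recall(y_true, y_pred))  # (0.666..., 0.666...)
```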
One of the most used methods for classification and regression is the Decision Tree (DT).
Among the different versions and variations of DTs, we use the optimized version of Classification and Regression Tree (CART) algorithm. CART builds up binary trees using feature and threshold that yield the largest information gain at each node \citep{tree,hands-on}.
Another influential method for supervised classification is the Support Vector Machine (SVM) which finds the optimal hyperplane that divides the target classes. SVM performs this task by drawing infinite different hyperplanes for separating target classes aiming to get the minimum error rate \citep{svm,svm2}.
A standard Neural Network (NN) consists of many simple, connected neurons, each one being a computing unit which outputs a sequence of real-valued activations. Neurons are organized in layers: input, hidden (which may be one or many) and output. A Multilayer Perceptron (MLP) has at least three layers (one input, one hidden and one output layer).
Each neuron has $n$ inputs $i$, weights ($w$), bias ($b$), an activation function ($F(x)$) and output ($y$)
\citep[see][for more details about NN]{mitchell,hands-on,deepLearning}. We define the MLP architecture by empirically testing different configurations for the two-classes problem using the restricted sample defined by $K \ge 20$. We predefine three hidden layers, test different numbers of neurons, and alternate between logistic and ReLU as activation functions. Our final NN configuration consists of 44 neurons in the first, 88 in the second and 22 in the third hidden layer, with ReLU as the activation function.
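With \texttt{scikit-learn}, the architecture just described could be instantiated as below (a sketch on toy data; the solver defaults, iteration count and synthetic features standing in for C, A, S, G$_2$ and H are our assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hidden layers of 44, 88 and 22 neurons with ReLU, as in the text
mlp = MLPClassifier(hidden_layer_sizes=(44, 88, 22), activation="relu",
                    max_iter=500, random_state=0)

# Toy stand-in for the five CyMorph features (C, A, S, G2, H)
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 4] > 0).astype(int)  # separable toy labels
mlp.fit(X, y)
print(round(mlp.score(X, y), 2))  # training accuracy on the toy data
```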
\section{Deep Learning}
\label{sec:dl}
A Neural Network (NN) can be extremely complex when using an architecture with many layers. Deep Learning (DL) methods are built through a deep neural network with multiple layers of non-linear transformations.
Deep Convolutional Neural Networks (CNNs), or simply Convolutional Networks \citep{LeCun}, are a special kind of neural network for processing data with a grid-like topology.
Convolution preserves the spatial relationship between pixels by using submatrices from the input data to learn features.
It is not feasible to go through all the possibilities of architectures and configurations concerning CNNs.
In this work, we perform different experiments focusing on two notably robust CNN architectures, Residual Networks \citep[ResNet,][]{resNet} and GoogleNet \citep{GoogleNet}, judging overall accuracy performance and training time. We select the architecture which provides the best overall results: GoogleNet Inception, the winner of the ILSVRC 2014 Classification Challenge in visual databases \citep[see][for more details]{GoogleNet,hands-on}.
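As a minimal illustration of the convolution operation underlying these architectures (not the GoogleNet implementation itself), a single hand-written kernel already shows how spatial structure is preserved:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Bright left half against a dark background
img = np.zeros((5, 7))
img[:, :3] = 1.0
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # vertical-edge detector
response = conv2d(img, edge_kernel)
print(response[0])  # [0. 3. 3. 0. 0.] -- peaks at the light/dark boundary
```

In a trained CNN, many such kernels are learned from the data rather than hand-written, and stacked over multiple layers.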
\section{Results on Classification and Discussion}
\label{sec:results}
\subsection{Classifier's performance by Overall Accuracy (OA)}
\label{sub:oa}
\begin{table*}
\centering
\caption{Overall Accuracy (OA in percentage) for all approaches considering GZ1 classification
(elliptical and spiral galaxies separation).}
\vspace*{1mm}
\begin{tabular}{c|c|c|c|c||c|c|c|c||c|c|c|c}
\multirow{2}{*}{} & \multicolumn{4}{c||}{\textbf{$K \ge 5$}} & \multicolumn{4}{c||}{\textbf{$K \ge 10$}} &
\multicolumn{4}{c}{\textbf{$K \ge 20$}} \\
\cline{2-13}
& \textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} &
\textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} &
\textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} \\ \hline
\textbf{two classes} & 94.8 & 94.6 & 94.6 & 98.7 & 95.7 & 95.8 & 95.6 & 99.1 & 98.5 & 98.6 & 98.6 & 99.5 \\ \hline
\end{tabular}
\label{tab:resultsGZ1}
\end{table*}
\begin{table*}
\centering
\caption{Overall Accuracy (OA in percentage) for all approaches considering GZ2 classification. The darker the green colour of a cell, the better the OA obtained.}
\vspace*{1mm}
\begin{tabular}{c|c|c|c|c||c|c|c|c||c|c|c|c}
\multirow{2}{*}{} & \multicolumn{4}{c||}{\textbf{$K \ge 5$}} & \multicolumn{4}{c||}{\textbf{$K \ge 10$}} &
\multicolumn{4}{c}{\textbf{$K \ge 20$}} \\
\cline{2-13}
& \textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} &
\textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} &
\textbf{DT} & \textbf{SVM} & \textbf{MLP} & \textbf{CNN} \\ \hline
\textbf{11 classes} & \cellcolor{mycolor!10}49.3 & \cellcolor{mycolor!10}48.8 & \cellcolor{mycolor!10}49.4 &
\cellcolor{mycolor!50}63.0 & \cellcolor{mycolor!15}51.6 & \cellcolor{mycolor!15}51.6 &
\cellcolor{mycolor!15}51.7 & \cellcolor{mycolor!50}63.0 & \cellcolor{mycolor!25}57.7 &
\cellcolor{mycolor!25}57.4 & \cellcolor{mycolor!25}57.7 & \cellcolor{mycolor!50} 65.2 \\ \hline
\textbf{9 classes} & \cellcolor{mycolor!50}60.9 & \cellcolor{mycolor!50}63.2 & \cellcolor{mycolor!50}63.0 &
\cellcolor{mycolor!95}70.2 & \cellcolor{mycolor!50}60.5 & \cellcolor{mycolor!50}63.8 &
\cellcolor{mycolor!50}63.6 & \cellcolor{mycolor!95}75.7 & \cellcolor{mycolor!50}63.5 &
\cellcolor{mycolor!50}66.4 & \cellcolor{mycolor!50}66.2 & \cellcolor{mycolor!50}67.4 \\ \hline
\textbf{7 classes} & \cellcolor{mycolor!50}63.0 & \cellcolor{mycolor!50}62.5 & \cellcolor{mycolor!50}63.3 &
\cellcolor{mycolor!95}72.2 & \cellcolor{mycolor!50}62.9 & \cellcolor{mycolor!50}62.6 &
\cellcolor{mycolor!50}63.0 & \cellcolor{mycolor!95}77.6 & \cellcolor{mycolor!50}65.9 &
\cellcolor{mycolor!50}65.8 & \cellcolor{mycolor!50}66.0 & \cellcolor{mycolor!95}70.0 \\ \hline
\textbf{3 classes} & \cellcolor{mycolor!95}71.9 & \cellcolor{mycolor!95}71.2 & \cellcolor{mycolor!95}71.2 &
\cellcolor{mycolor!135}80.8 & \cellcolor{mycolor!95}71.9 & \cellcolor{mycolor!95}74.6 &
\cellcolor{mycolor!95}74.9 &\cellcolor{mycolor!145}81.8 & \cellcolor{mycolor!115}78.7 &
\cellcolor{mycolor!115}78.5 & \cellcolor{mycolor!115}78.8 & \cellcolor{mycolor!145}82.7 \\ \hline
\end{tabular}
\label{tab:resultsGZ2}
\end{table*}
As we have shown in previous sections, there are several parameters driving the final classification
and an appropriate figure of merit is needed to establish which setup/method works best. In Tables \ref{tab:resultsGZ1} and \ref{tab:resultsGZ2}, we present the Overall Accuracy (OA) achieved by all the experiments carried out in this work. The main goal here is to distinguish between an Early-Type Galaxy (ETG), elliptical (E), and a Late-Type Galaxy (LTG), spiral (S). In the case of TML, using the $K \ge 20$ sample, all methods reach over 98\% OA. In this training set, there are many more S galaxies ($\sim$87\%) than E galaxies ($\sim$13\%). This difference in the number of examples between classes is called class imbalance, which we discuss in Subsection \ref{sub:imbalance}. Despite the imbalance, we obtain at least 95\% precision and 96\% recall for E systems. Since most of the training set consists of S galaxies, it is not surprising that we reach $\sim$99\% precision and recall for them, establishing a model with $\sim$99\% OA for this dataset.
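The OA, precision and recall figures quoted here can be computed as in the following sketch (toy labels with 0 = E and 1 = S; scikit-learn is assumed):

```python
# Sketch of the performance metrics used above: overall accuracy (OA) plus
# per-class precision and recall, with scikit-learn. Toy labels only.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]   # imbalanced: few E, many S
y_pred = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]   # one elliptical misclassified

oa = accuracy_score(y_true, y_pred)
p_e = precision_score(y_true, y_pred, pos_label=0)  # precision for E
r_e = recall_score(y_true, y_pred, pos_label=0)     # recall for E
print(oa, p_e, round(r_e, 2))  # 0.9 1.0 0.67
```

With a heavily imbalanced test set, OA alone can look excellent even when the minority-class recall is poor, which is why the per-class figures matter here.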
Overall, CNN is the best approach to establish morphological classification of galaxies. We can safely assert that, starting from the classes E and S from Galaxy Zoo 1, we can reproduce the human eye classification with all methods and samples (OA $>$ 94.5\%). When trying to distinguish among 11 classes, the problem is much more complex, as it would be for the human eye, and the best result is an OA of 65.2\% using CNN with $K \ge 20$. However, if we use only three classes, namely elliptical (E), unbarred spiral (S) and barred spiral (SB) galaxies, we find an OA $>$ 80\% with CNN for all samples. We study the three classes problem considering imbalance and different samples in Subsection \ref{sub:imbalance}.
\subsection{Class Imbalance in Galaxy Morphology}
\label{sub:imbalance}
\begin{figure*}[!h]
\centering
\subfloat[Number of examples as a function of K, for different classes.]{%
\includegraphics[width=0.32\linewidth]{./figures/3classes_along_K.png}%
\label{fig:proportions}
}%
\subfloat[Imbalanced (original dataset).]{%
\includegraphics[width=0.32\linewidth]{./figures/OA-P-R_3-classes_unbalanced.png}%
\label{fig:unbalanced}
}%
\\
\subfloat[Balanced -- SMOTE.]{%
\includegraphics[width=0.32\linewidth]{./figures/OA-P-R_3-classes_smote.png}%
\label{fig:smote}
}
\subfloat[Balanced -- undersampling.]{%
\includegraphics[width=0.32\linewidth]{./figures/OA-P-R_3-classes_undersampling.png}%
\label{fig:undersampling}
}%
\subfloat[Balanced -- oversampling.]{%
\includegraphics[width=0.32\linewidth]{./figures/OA-P-R_3-classes_oversampling.png}%
\label{fig:oversampling}
}
\caption{The first plot shows the number of elliptical (E), unbarred spiral (S) and barred spiral (SB) galaxies from the GZ2 classification as K varies. The other four plots are related to the class imbalance problem considering three classes: E (in redder colours), S and SB (in bluer colours). The black lines indicate the Overall Accuracy (OA). For each of the three classes, continuous lines represent Precision (P) and dashed lines indicate Recall (R), considering the original imbalanced dataset (panel b), the dataset generated with SMOTE (panel c), the undersampled balanced dataset (panel d) and the oversampled balanced dataset (panel e).}
\label{fig:imbalance}
\end{figure*}
The class imbalance problem is one of the most prominent problems in data mining, data science, and pattern recognition \citep{10problems}. It arises when at least one of the classes has considerably fewer examples than the other(s).
This problem is inherent in galaxy morphology, as the number of examples among classes will never be equal. Applying the restriction of $K\ge20$ to the dataset from Galaxy Zoo 1 \citep{GZ1a,GZ1b}, for example, $\sim$87\% of the galaxies are classified as spiral and only $\sim$13\% as elliptical.
Balancing the dataset generally improves the performance for minority classes (since we increase the number of examples of such classes for training), and thus increases precision and recall for these classes \citep{deepLearning}.
Figure \ref{fig:proportions} shows the number of examples from three classes (E, S, SB) in Galaxy Zoo 2 - SDSS DR7 in different bins of K. The bin size is 0.5 and K varies from 5 to 20. SB is the minority class with the number of galaxies approximately constant -- the bar component is a feature identified in all resolutions explored. The number of S galaxies increases until $K = 10$, approximately where the numbers of S and E galaxies are equal.
We investigate the impact of the class imbalance problem on morphological classification by testing four different datasets: imbalanced, undersampled, oversampled and balanced with the Synthetic Minority Over-sampling Technique (SMOTE). The imbalanced dataset is the original query. In the undersampled dataset, all classes have the same number of examples as SB originally has. For oversampling, we sample the minority class set with replacement. Using SMOTE, we synthetically generate additional SB examples, setting the number of examples per class to the smaller of the number of E galaxies and twice the original number of SB galaxies \citep{scikit}.
Figures \ref{fig:unbalanced}, \ref{fig:smote}, \ref{fig:undersampling} and \ref{fig:oversampling} exhibit OA, P, and R for all experiments exploring the class imbalance problem considering the three classes described above. The minority class (SB) is the one most affected by imbalanced datasets, with low P (51\%) and R (69\%) on average. By employing balancing techniques, we improve these to P = 76\% and R = 79\% for the minority class, thus reducing the misclassification of the SB class. All balancing strategies have similar performances. In all strategies, there is a $\Delta \rm{OA} \sim$ 2\% when $K$ varies from 5 to 20. From panels (b) to (e) of Figure \ref{fig:imbalance}, we notice that OA increases weakly with $K$, a trend that would imply that restricting the sample to bigger objects reduces classification problems, but the impact is not very significant. Thus, our model built with the sample restricted by $K \ge$ 5 can safely be used to classify an unknown dataset, as it classifies smaller objects with an OA similar to that for bigger objects (such as $K \ge$ 20).
In the remainder of this paper we continue analysing three methods: Traditional Machine Learning (TML) and Deep Learning (DL) approaches using the imbalanced dataset for discriminating between two classes; and DL using the SMOTE dataset to classify into three classes. For the TML approach, we choose the Decision Tree (DT) algorithm as it is the simplest solution (compared to Support Vector Machine and Artificial Neural Network) and the results have $\Delta \rm{OA} \sim 0$ among them \citep[Occam's razor,][]{occam}. For three classes, we select the model trained with the SMOTE dataset because it is a middle ground between under- and oversampling techniques, and the results using the different balanced datasets are equivalent (Figures \ref{fig:smote}, \ref{fig:undersampling} and \ref{fig:oversampling}). These are the classifiers selected to build up the catalog -- see details about the final catalog in \ref{sec:fincat}.
\subsection{Classifier\textquotesingle s performance by ROC curve and AUC}
\label{sub:roc}
\begin{figure*}[!h]
\centering
\subfloat[ROC curves - TML - two classes.]{%
\includegraphics[width=0.33\linewidth]{./figures/ROC-DT.png}%
\label{fig:roc-TML-2c}
}%
\subfloat[ROC curves - DL - two classes.]{%
\includegraphics[width=0.33\linewidth]{./figures/ROC-DL.png}%
\label{fig:roc-DL-2c}
}%
\\
\subfloat[Truth Probability - TML - two classes.]{%
\includegraphics[width=0.33\linewidth]{./figures/truthProbHistDT.png}%
\label{fig:truthProbTML}
}
\subfloat[Truth Probability - DL - two classes.]{%
\includegraphics[width=0.33\linewidth]{./figures/truthProbHistDL2c.png}%
\label{fig:truthProbDL2c}
}%
\subfloat[Truth Probability - DL - three classes.]{%
\includegraphics[width=0.33\linewidth]{./figures/truthProbHistDL3c.png}%
\label{fig:truthProbDL3c}
}
\\
\subfloat[Truth Probability - DL - three classes - $K \ge 5$]{%
\includegraphics[width=0.33\linewidth]{./figures/DL3cByClassK5.png}%
\label{fig:DL-3c-K5}
}
\subfloat[Truth Probability - DL - three classes - $K \ge 10$]{%
\includegraphics[width=0.33\linewidth]{./figures/DL3cByClassK10.png}%
\label{fig:DL-3c-K10}
}%
\subfloat[Truth Probability - DL - three classes - $K \ge 20$]{%
\includegraphics[width=0.33\linewidth]{./figures/DL3cByClassK20.png}%
\label{fig:DL-3c-K20}
}
\caption{The first row presents ROC Curve and the Area Under the ROC Curve (AUC -- area) for each approach and different dataset restrictions considering the two classes problem (panels a and b). Such plots consider the ground truth and predicted labels. The dotted black line represents a random guess. The second row shows histograms with ground truth probabilities given by the models for each class (panels c, d, e). The third row presents histograms with ground truth probabilities given by the models for each class in regard to the three classes problem (panels f, g, h).}
\label{fig:roc-curves}
\end{figure*}
One of the most important issues in machine learning is performance measurement. A very popular method is the ROC (Receiver Operating Characteristic) curve and the Area Under the ROC curve \citep[AUC,][]{roc1}. In our particular case, ROC is the probability curve and AUC represents a measure of separability: it indicates how capable a model is of distinguishing between classes. The higher the AUC, the better the model is at predicting E's as E's and S's as S's. Based on the data presented in Tables \ref{tab:resultsGZ1} and \ref{tab:resultsGZ2}, Figures \ref{fig:roc-TML-2c} and \ref{fig:roc-DL-2c} display the ROC curves. Figures \ref{fig:truthProbTML}, \ref{fig:truthProbDL2c} and \ref{fig:truthProbDL3c} show the histograms of ground truth probabilities given by the models using different datasets, and, going deeper into the three classes problem, Figures \ref{fig:DL-3c-K5}, \ref{fig:DL-3c-K10} and \ref{fig:DL-3c-K20} exhibit histograms of ground truth probabilities given by the models for each class.
ROC curves are typically used in binary classification to study the output of a classifier \citep{roc1,hands-on}.
Figures \ref{fig:roc-TML-2c} and \ref{fig:roc-DL-2c} show ROC curves considering the ground truth and predicted labels (no probabilities). These ROC curves and area values confirm what Table \ref{tab:resultsGZ1} shows with OA: all models perform to a high standard on the two classes problem, with AUC $>$ 0.90; restricting to the Deep Learning (DL) approach improves this to AUC $>$ 0.97. By experimenting with different dataset restrictions and approaches we can draw some interesting conclusions. The dataset restriction has more impact on the TML approach than on DL: the ROC curves are closer to each other in Figure \ref{fig:roc-DL-2c} ($\Delta \rm{AUC} = 0.014$) than in Figure \ref{fig:roc-TML-2c} ($\Delta \rm{AUC} = 0.075$). One example is to compare TML using $K \ge 20$ and DL using $K \ge 10$ ($\Delta \rm{OA} \sim 0.5\%$ and $\Delta \rm{AUC} \sim 0$ between them): using smaller objects, DL can achieve a performance very similar to that of TML using bigger objects.
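For reference, a ROC curve and its AUC for a binary classifier can be obtained as in the following sketch (toy scores, scikit-learn assumed):

```python
# Sketch (scikit-learn assumed): computing a ROC curve and its AUC for the
# two-class problem from classifier scores (toy values, illustrative only).
from sklearn.metrics import auc, roc_curve

y_true = [0, 0, 1, 1]             # 0 = elliptical, 1 = spiral
y_score = [0.1, 0.4, 0.35, 0.8]   # scores for the "spiral" class

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(auc(fpr, tpr))  # 0.75
```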
The output probabilities given by these models with regard to the ground truth from Galaxy Zoo are explored in Figures \ref{fig:truthProbTML}, \ref{fig:truthProbDL2c}, \ref{fig:truthProbDL3c}, \ref{fig:DL-3c-K5}, \ref{fig:DL-3c-K10} and \ref{fig:DL-3c-K20}. These histograms do not distinguish between classes: we consider the output probability from each model for the ground truth class of each galaxy. Again, we confirm that: (1) both approaches have a very high performance considering two classes -- a very high concentration of frequency density at truth probability $>$ 0.9; and (2) DL (Figure \ref{fig:truthProbDL2c}) improves on TML (Figure \ref{fig:truthProbTML}) by reducing the frequency density at low truth probability values. The impact of the dataset restriction persists as well: the higher we set the threshold for $K$, the denser the frequency at truth probability $>$ 0.9.
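The construction of these histograms can be sketched as follows: for each galaxy we keep the probability the model assigns to its ground-truth class (toy \texttt{predict\_proba}-style matrix; the indexing below is an illustrative assumption about the implementation):

```python
# Sketch of how the "truth probability" histograms are built: for each galaxy,
# keep the probability the model assigns to its ground-truth class.
import numpy as np

proba = np.array([[0.7, 0.2, 0.1],    # per-class probabilities (E, S, SB)
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.4, 0.4, 0.2]])
truth = np.array([0, 1, 2, 1])        # ground-truth class index per galaxy

truth_prob = proba[np.arange(len(truth)), truth]
print(truth_prob)  # [0.7 0.8 0.5 0.4]
```

Histogramming \texttt{truth\_prob} over the test set yields the frequency-density plots discussed above.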
Although Figure \ref{fig:truthProbDL3c} also presents a high frequency density at truth probability $> 0.9$, it is natural to see a higher frequency density at lower truth probabilities compared to Figures \ref{fig:truthProbTML} and \ref{fig:truthProbDL2c}, since the problem is more complex with one more class to consider. Exploring further, Figures \ref{fig:DL-3c-K5}, \ref{fig:DL-3c-K10} and \ref{fig:DL-3c-K20} show different dataset restrictions employed to train models on the three classes problem. Once more, we can clearly see the impact of using bigger (but fewer) objects: the frequency density at lower truth probabilities decreases from Figure \ref{fig:DL-3c-K5} to Figure \ref{fig:DL-3c-K10} and gets even lower in Figure \ref{fig:DL-3c-K20}. Non-barred spiral galaxies do not have a very high concentration at truth probability $>$ 0.9; however, the other two classes do.
\subsection{Learning About Differences between TML and DL From Misclassifications}
\begin{figure*}[!h]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\centering
\subfloat[587732582054559872\newline GZ1: 0; TML: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_E_587732582054559872.png}%
}%
\subfloat[587741491440713856\newline GZ1: 0; TML: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_E_587741491440713856.png}%
}%
\subfloat[588023046942425216\newline GZ1: 0; TML: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_E_588023046942425216.png}%
}%
\subfloat[587733604255989888\newline GZ1: 1; TML: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_S_587733604255989888.png}%
}%
\subfloat[587735696448749696\newline GZ1: 1; TML: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_S_587735696448749696.png}%
}%
\subfloat[587742059994480768\newline GZ1: 1; TML: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/TML_S_587742059994480768.png}%
}%
\caption{Sample of misclassified galaxies comparing the classification of Galaxy Zoo 1 (GZ1) and our Traditional Machine Learning (TML) approach trained with the sample restricted by $K \ge 20$. Under each galaxy image, we present the object ID number from SDSS-DR7 and the classification given by GZ1 and TML (0: Elliptical; 1: Spiral).}
\label{fig:error_TML}
\end{figure*}
\begin{figure*}[!h]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\centering
\subfloat[587739305287811105\newline GZ1: 0; DL: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_E_587739305287811105.png}%
}%
\subfloat[587741489822302351\newline GZ1: 0; DL: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_E_587741489822302351.png}%
}%
\subfloat[588013382210093213\newline GZ1: 0; DL: 1]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_E_588013382210093213.png}%
}%
\subfloat[587739165702422588\newline GZ1: 1; DL: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_S_587739165702422588.png}%
}%
\subfloat[587742576444571849\newline GZ1: 1; DL: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_S_587742576444571849.png}%
}%
\subfloat[588017626688585921\newline GZ1: 1; DL: 0]{%
\includegraphics[width=0.16\linewidth]{figures/errorGalaxies/DL_S_588017626688585921.png}%
}%
\caption{Sample of misclassified galaxies comparing the classification of Galaxy Zoo 1 (GZ1) and our Deep Learning (DL) approach trained with the sample restricted by $K \ge 20$. Under each galaxy image, we present the object ID number from SDSS-DR7 and the classification given by GZ1 and DL (0: Elliptical; 1: Spiral).}
\label{fig:error_DL}
\end{figure*}
The two approaches used in this work, Traditional Machine Learning (TML) and Deep Learning (DL), achieve almost ideal performance in terms of Overall Accuracy (OA $\sim$ 99\%) for the two classes problem with the sample restricted by $K \ge 20$. However, it is still worthwhile to investigate what causes misclassification, even at a low percentage. We remind the reader that misclassification is always established using Galaxy Zoo 1 as the ground truth. Figure \ref{fig:error_TML} presents some examples of misclassification using TML. In the first and last images we see that the preprocessing phase was not able to properly clean the image or discard such examples, as bright objects remain close to the central galaxy. The other cases reflect the variance in the parameters used by the Decision Tree and the natural uncertainty of the process. In Figure \ref{fig:error_DL} we display some galaxies misclassified by DL. Here, the absence of preprocessing allows galaxies to be too close to the border (second and fourth images) and, as before, the other examples are simple misclassifications imposed by the method itself, namely galaxies that are easy to misclassify -- a bright central structure which gradually fades away towards the outer part of the galaxy and which, in a more detailed (visual) classification, could be considered an S0 galaxy. We should stress that this misclassification rate is very low. Using a sample of 6,763 galaxies selected from the grand total of 58,030 galaxies listed in Table 1 ($K \ge 20$, from GZ1), not used in the training process, TML misclassifies only 72 galaxies (1\%) while DL misclassifies 0.5\% of the galaxies. Also, we notice that none of the galaxies misclassified by TML are in the list of misclassifications by DL. These results seem innocuous; however, they remind us how important it is to treat objects close to the border, and those near a very bright source, as a specific set, since such objects will always be present in any sample.
This also reinforces how important it is to apply independent methodologies in the process of establishing a final morphology for an object. The examples presented here show that visual inspection is still an important source of learning about morphology (the problem is not the eye but the quality of the image placed in front of you), although it is inefficient for the large catalogs currently available and the ones coming up in the near future.
\subsection{Validating Classification with Spectroscopic Data}
The performance analysis presented in the previous section reflects our ability to establish a morphological classification using a given method among several that might in principle work properly, which is why finding a robust figure of merit is of paramount importance. However, an independent validation is even more essential when presenting a catalog with reliable morphology; namely, we have to show that our new classes recover well-known relations. Figure \ref{fig:spectro} presents histograms of Age, stellar mass (M$_{\rm stellar}$), Metallicity ([Z/H]) and central velocity dispersion ($\sigma$) \citep[for more details on how these parameters and their errors were obtained, see][]{deCarvalho17}. In every panel we show the distribution for ellipticals (in red) and spirals (in blue). We also display the parameter $\delta_{\rm GHS}$, which measures how distant these two distributions are (see Section 3.8). This validation procedure was done using only galaxies from GZ1 classified as ``Undefined''. The classification here is provided by DT (TML). We recall an important characteristic of the samples: $K \ge 5$ has more but smaller objects, while $K \ge 20$ has fewer but bigger objects. Although we have a bigger dataset with $K \ge 5$, the presence of smaller objects impairs our classification. The degradation of the quality of our classification as we go to smaller galaxies is evident from Figure \ref{fig:spectro}, where $\delta_{\rm GHS}$ decreases for smaller $K$ for all quantities except Age, where only a small fluctuation is noticed.
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{./figures/spectroscopy.png}
\caption{Spectroscopic validation for the ``Undefined'' galaxies from Galaxy Zoo 1 which here are classified by
our Machine Learning approach using Decision Tree. Elliptical galaxies are displayed in red and spirals in blue. In each panel we give the geometric histogram separation, $\delta_{\rm GHS}$.}
\label{fig:spectro}
\end{figure*}
The number of galaxies for each histogram from Figure \ref{fig:spectro} is as follows:
\begin{itemize}
\item $K \ge 5$: 13,373 ellipticals; 87,095 spirals; Total: 100,468.
\item $K \ge 10$: 9,030 ellipticals; 59,096 spirals; Total: 68,126.
\item $K \ge 20$: 6,390 ellipticals; 24,988 spirals; Total: 31,378.
\end{itemize}
Figure \ref{fig:spectro} shows how our classification recovers well-known properties of galaxies. In the first row we see that ellipticals have ages peaked around 9 Gyr, while spirals are younger and their distribution is more spread out, probably due to contamination by S0 galaxies. The second row exhibits the stellar mass distribution; again, ellipticals have larger M$_{\rm stellar}$ compared to spirals, with a difference of $\sim$0.9 dex, peak to peak. In the third row, the difference in metallicity ($\sim$0.4 dex) is also evident: ellipticals are more metal rich than spirals, especially for larger systems. Finally, the distributions of central velocity dispersion show larger values for ellipticals, and for spirals we even see a bimodality, which reflects the disk-to-bulge ratio in this morphological type. These distributions lend credibility to our final classification using DT (TML).
\subsection{Case Study: Star Formation Acceleration and Morphologies}
\label{sub:case_study}
In this section we describe an application of the method presented here to study the relation between morphology and galaxy evolution. More specifically, we use the method to classify a sample of galaxies into ellipticals and spirals and measure quenching timescales for each group separately. These results will be discussed in detail in S\'a-Freitas et al. (in prep).
It has been established that galaxies show a bimodal distribution in colours, with two distinct peaks corresponding to young (blue) and old (red) stellar populations \citep[e.g.,][]{Baldry2004,Wyder2007}, and a minimum in the distribution commonly known as the green valley. Although it is generally accepted that galaxies move from blue to red, the physical processes associated with this transition are not completely understood; i.e., we do not know which phenomena are responsible for accelerating the decline in star formation rates, whether a single one or a combination of effects.
Using galaxy colours and stellar population synthesis models, \citet{Schawinski2014} has shown that galaxy quenching can be divided into two distinct processes depending on morphology: elliptical galaxies quench faster, probably through merger activity, while spiral galaxies quench more slowly. \citet{Nogueira-Cavalcante2018} have reached a similar conclusion with more precise measurements from spectral indices (the 4000{\AA} break in the spectral continuum and the equivalent width of the $H_\delta$ absorption line). Nevertheless, both works rely on assuming specific exponentially declining star-formation histories.
To circumvent this limitation, \citet{Martin2017} developed a method using the same spectral indices but without assuming a parametric star formation history. The authors have shown, through comparisons with results from cosmological simulations, that one can infer the instantaneous time derivative of the star-formation rate, termed the {\it star-formation acceleration}. Formally, this is defined as
\begin{equation}
SFA \equiv \frac{d}{dt}({\rm NUV}-{\rm r}),
\label{eq_sfa}
\end{equation}
with higher values representing stronger quenching.
In S\'a-Freitas et al. (in prep) we apply this methodology to a sample of galaxies out to $z=0.120$, divided by morphology. We only consider galaxies brighter than $M_r=-20.4$ for the sake of sample completeness. When compared to previous works, we are able to measure SFA in galaxies according to morphology for {\it all} objects, regardless of colour and assumed quenching histories. In that sense, the learning techniques presented here are fundamental to our analysis: by classifying a much larger number of galaxies (almost 30,000 galaxies in total), we are able to bin our sample by colours and draw conclusions based on smaller subsamples of objects according to their morphologies.
In Figure \ref{camila_sfa} we show our results: as expected, the bluest galaxies are currently undergoing strong bursts, while red galaxies are typically quenching. More importantly, we detect a significant distinction between SFA values for spirals and ellipticals in the green valley. Elliptical galaxies are quenching more strongly, while spirals appear to be moving gradually towards redder colours. We perform Kolmogorov-Smirnov and Anderson-Darling tests for the null hypothesis that the distributions for spirals and ellipticals in each bin are drawn from the same parent sample, ruling this out ($p < 0.05$) only for $2\lesssim ({\rm NUV}-{\rm r})\lesssim 5$. We therefore conclude that this effect is distinguishable primarily within the green valley, which means that the star formation histories of spirals and ellipticals differ significantly only during their transition to the red sequence.
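The two-sample tests used here can be sketched with scipy; the sample sizes and the synthetic SFA distributions below are illustrative assumptions, not our measurements:

```python
# Sketch (scipy assumed): two-sample Kolmogorov-Smirnov and Anderson-Darling
# tests on synthetic SFA values for spirals and ellipticals within one bin.
import numpy as np
from scipy.stats import anderson_ksamp, ks_2samp

rng = np.random.default_rng(0)
sfa_spirals = rng.normal(loc=-0.1, scale=0.3, size=200)      # gradual reddening
sfa_ellipticals = rng.normal(loc=0.4, scale=0.3, size=200)   # stronger quenching

ks = ks_2samp(sfa_spirals, sfa_ellipticals)
ad = anderson_ksamp([sfa_spirals, sfa_ellipticals])          # statistic grows with dissimilarity

# Reject the common-parent-sample hypothesis at p < 0.05?
print(ks.pvalue < 0.05)  # True
```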
In the near future, we expect the large upcoming imaging and spectroscopic surveys such as Euclid and DESI to increase our samples significantly, and deep learning techniques will yield reliable morphological classification of millions of objects. This will in turn allow us to further divide our galaxy sample, correlating morphologies with other phenomena such as AGN activity and environment in order to narrow our studies to the specific impact of each on the star formation histories of spiral and elliptical galaxies.
\begin{figure*}[!h]
\begin{center}
\includegraphics[width=\textwidth]{./figures/csf_SFAvsNUVr_GZDL_transparent.png}
\caption{Star formation acceleration (SFA) as a function of NUV-r colours. Higher SFA values represent faster quenching, while more negative values indicate strong bursts of star formation, with the green line showing no current variation in SFR. Data are binned in colour, with blue triangles for spiral galaxies and red circles for ellipticals. Error bars show the standard deviation within each bin. Contours show the number of galaxies in the diagram as percentage of the total count for each morphological type. Red galaxies are statistically indistinguishable, while ellipticals in the green valley are quenching significantly faster than spirals. At the blue end, the difference is not large enough for this sample to draw any conclusions.}
\label{camila_sfa}
\end{center}
\end{figure*}
\section{Comparison to Other Available Catalogs}
\label{sec:catalogs}
\begin{figure*}[!h]
\centering
\subfloat[Traditional Machine Learning - two classes.]{%
\includegraphics[width=0.32\linewidth]{figures/NairAbraham-TML-2classes-TType.png}%
\label{fig:NairTML2classes}
}%
\subfloat[CNN - two classes.]{%
\includegraphics[width=0.32\linewidth]{figures/NairAbraham-CNN-2classes-TType.png}%
\label{fig:NairCNN2classes}
}%
\subfloat[CNN - three classes.]{%
\includegraphics[width=0.32\linewidth]{figures/NairAbraham-CNN-3classes-TType.png}%
\label{fig:NairCNN3classes}
}%
\caption{Histograms (normalized by area) presenting classifications for the \citet{Nair} sample by T-Type,
using the Traditional Machine Learning classification with two classes
(panel a) and the deep CNN classification with two (panel b) and three classes (panel c).}
\label{fig:NairClassification}
\end{figure*}
\begin{figure*}[!h]
\centering
\subfloat[Traditional Machine Learning - two classes.]{%
\includegraphics[width=0.32\linewidth]{figures/cat-670k-TML-2classes-TType.png}%
\label{fig:cat-670k-TML2classes}
}%
\subfloat[CNN - two classes.]{%
\includegraphics[width=0.32\linewidth]{figures/cat-670k-CNN-2classes-TType.png}%
\label{fig:cat-670k-CNN2classes}
}%
\subfloat[CNN - three classes.]{%
\includegraphics[width=0.32\linewidth]{figures/cat-670k-CNN-3classes-TType.png}%
\label{fig:cat-670k-CNN3classes}
}%
\caption{Histograms (normalized by area) presenting classifications for the \citet{deepGal2} sample by T-Type,
using the Traditional Machine Learning classification with two classes (panel a) and
the deep CNN classification with two (panel b) and three classes (panel c).}
\label{fig:cat-670k-classification}
\end{figure*}
\begin{figure*}[!h]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[587742903942840576]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/587742903942840576.png}%
}%
\subfloat[587745244699623552]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/587745244699623552.png}%
}%
\subfloat[588007004732719488]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/588007004732719488.png}%
}%
\subfloat[588007005231776000]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/588007005231776000.png}%
}%
\subfloat[588007005239050496]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/588007005239050496.png}%
}%
\subfloat[588009367478403200]{%
\includegraphics[width=0.16\linewidth]{figures/S_-225-TType--2/588009367478403200.png}%
}%
\caption{Sample classified as spiral galaxies by our classifier with -2.25 $\le$ T-Type $\le$ -2 by \citet{deepGal2}. The object ID number from SDSS-DR7 is presented under each galaxy image.}
\label{fig:S-TType-sample}
\end{figure*}
To attest to the reliability of the morphological classification we provide in this work (see \ref{sec:fincat} for details about our catalog), it is of paramount importance to perform external comparisons. There are currently two reliable catalogs that serve this purpose. First, \citet{Nair} provide T-Type information for 14,034 galaxies visually classified by an expert astronomer. Second, \citet{deepGal2} list
670,722 galaxies, also with T-Type available.
Figure \ref{fig:NairClassification} presents the histogram of the T-Types provided by \citet{Nair} for the elliptical and spiral classes from our work. In general, the distributions are as expected - ellipticals peak around T-Type = -5 and spirals around T-Type = 5. In all three cases we notice an extension of the histogram for ellipticals towards larger T-Types, with a secondary peak around T-Type = 1, which may be associated with S0 galaxies. In Figure \ref{fig:NairCNN2classes}, we note an improvement in using DL over TML, namely a decrease in the fraction of elliptical galaxies and a corresponding increase of spiral galaxies with T-Type $>$ 0. Such behavior is also present in Figure \ref{fig:NairCNN3classes}, considering three classes, with elliptical galaxies mostly having T-Type $\le$ 0 and spirals (S and SB) primarily T-Type $>$ 0. The overall comparison to the classification obtained by \citet{Nair} exhibits an 87\% agreement.
Figure \ref{fig:cat-670k-classification}, analogous to Figure \ref{fig:NairClassification}, shows how our classification
performs in comparison with that provided by \citet{deepGal2}. In all panels, we see a striking difference with respect to the comparison with \citet{Nair} - a considerable amount of spirals around T-Type $\sim$ -2. Along with T-Type, \citet{deepGal2} also provide the probability of each galaxy being S0: $P_{\rm{S0}}$. They define elliptical galaxies as those with T-Type $\le 0$ and $P_{\rm{S0}} < 0.5$; S0 galaxies have T-Type $\le 0$ and $P_{\rm{S0}} > 0.5$; and spiral galaxies have T-Type $> 0$. We plot the elliptical galaxies, following their definition, as a filled histogram in orange, which shows a higher peak at T-Type $\sim$ -2 with respect to the distribution of the ellipticals with no T-Type restriction. Therefore, restricting the definition yields much higher concordance, namely a higher fraction of systems that we classify as ellipticals, which translates into a higher peak around T-Type $\sim$ -2. Moreover, as we can see from panel (c), using the three-class morphologies, the fraction of ellipticals with no T-Type restriction gets lower and the barred spirals appear more prominently around T-Type $\sim$ 4. In the same way as when comparing to \citet{Nair}, here we find a 77\% agreement when comparing only elliptical and spiral galaxies.
A final note on the comparison with \citet{deepGal2} is related to the S0 class, in which we see a prominent bulge and a disk. They classify 230,217 galaxies as S0, and 27.96\% of these systems (64,380) have $K < 5$, i.e., $\sim$ 28\% of the galaxies classified as S0 are very small objects. Visually, it is easy to misclassify galaxies with a predominant, oval, bright structure, and the task becomes even more difficult when the objects are small. Figure \ref{fig:S-TType-sample} shows a sample of galaxies with -2.25 $\le$ T-Type $\le$ -2 according to \citet{deepGal2} that we classify as spiral galaxies. As our classifiers do not discriminate the S0 morphology, it is not surprising that we classify a galaxy as spiral if it has a prominent disk.
Finally, we note that since \citet{Nair} and \citet{deepGal2} present their classifications as T-Types, a direct comparison is difficult to make. However, the agreement displayed in Figures \ref{fig:NairClassification} and \ref{fig:cat-670k-classification}, together with the global concordance when
comparing elliptical and spiral galaxies, gives us confidence that the classifications obtained in this work are consistent and robust.
\section{Summary}
\label{sec:summary}
With new photometric surveys coming up, in several bands and with varying depths, it is of paramount importance to have the proper machinery for morphological classification, which is one of the first elements needed to create
a reliable galaxy catalog, from which we can select clusters of galaxies and study the large scale structure of
the universe. Here, we present models and methodologies to achieve these goals. We investigate the limits of applicability of TML and DL, in the supervised mode and compare their performances. We revisit the non-parametric
methodology using C, A, S, H and G$_{2}$ and study some details ignored in previous works. Also, we examine
how sensitive the different methods are to the size of the galaxies, here identified by the ratio between the object's area and the PSF area. Finally, we remind the reader of the importance of comparing TML with DL, since they are radically different approaches that in principle should result in similar classes. In the following, we summarize the main contributions of this paper:
\begin{itemize}
\item We investigated how the parameters involved in the TML (S, A, H, and G$_{2}$) depend on the threshold used to obtain the segmented image. Although this seems a minor detail, it has proven to be an important ingredient
in improving the TML performance, since the separation between ellipticals and spirals, $\delta_{\rm GHS}$,
is maximized according to the threshold. Comparison with the traditional CAS system shows a considerable
improvement in distinguishing ellipticals from spirals: CAS yields $\delta_{\rm GHS}$ = 0.79, 0.50, 0.21 for
C, A and S, respectively, while our modified CAS yields $\delta_{\rm GHS}$ = 0.84, 0.83, 0.92. Besides,
the new parameters H and G$_{2}$ have their $\delta_{\rm GHS}$ values very high (0.87 and 0.90, respectively)
attesting their usefulness in galaxy morphology analysis. We list all these parameters in our main catalog with 670,560 galaxies.
\item One way of testing the quality of our morphological classification (based on photometric data) is to compare
with independent classes established with different data (spectroscopy). We used only galaxies from Galaxy Zoo 1 classified as ``Undefined'' and applied our Decision Tree with the traditional machine learning approach. The result
is presented in Figure \ref{fig:spectro} where we see an excellent performance of our classes in distinguishing
is presented in Figure \ref{fig:spectro}, where we see an excellent performance of our classes in distinguishing the stellar population properties of ellipticals and spirals. Ellipticals have ages peaked around 9 Gyr, while spirals are younger with a more spread-out distribution. Ellipticals have larger M$_{stellar}$ compared to spirals, with a difference of $\sim$0.9 dex. The difference in metallicity ($\sim$0.4 dex) between ellipticals and spirals is noticeable, especially for larger systems. Also, ellipticals show larger central velocity dispersions than spirals, for which we even see a bimodality reflecting the disk-to-bulge ratio variation in this morphological type.
\item We present a preliminary result on SFA, a study that has always been hampered by the lack of reliable morphological classification for a sizeable sample. Our catalog provides the necessary input data for such analysis. We show that the bluest galaxies are currently experiencing strong bursts while red galaxies are quenching. Also, we present for the first time a significant distinction between SFA values for spirals and ellipticals in the green valley. We find that the star formation histories of spirals and ellipticals are only significantly different during their transition to the red sequence. A full analysis of this topic and its consequences for galaxy evolution is presented in S\'a-Freitas et al. (in prep).
\item We use a deep convolutional neural network (CNN) - GoogLeNet Inception - to obtain morphological classifications for all galaxies in the main catalog under study here. With the twenty-two-layer network and imbalanced datasets, the results obtained considering two classes are very consistent ($\rm OA\ge98.7\%$), and for the three-class problem they are still good considering the quality of the data ($\rm OA \sim 82\%$). Also, in comparison with TML, DL outperforms by $\Delta \rm OA \sim 4\%$ and $\Delta \rm AUC \sim 0.07$ for galaxies with K$\ge$ 5.
\item We make public a complete catalog for 670,560 galaxies, in the redshift range 0.03 $<$ z $<$ 0.1, with Petrosian magnitude in the r-band brighter than 17.78 and $|b| \ge 30^{\circ}$. The input data come from SDSS-DR7. We provide morphological classification using TML and DL, together with all parameters measured with our new non-parametric method (see \ref{sec:fincat} for catalog details). We append classifications (T-Type) from \citet{Nair} and \citet{deepGal2} whenever available.
\end{itemize}
\section*{Acknowledgements}
This work was financed in part by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior - Brasil (CAPES) - Finance Code 001. R.R.dC. and R.R.R. acknowledge financial support from FAPESP through grant \# 2014/11156-4. RRR and PHB thank the Santos Dumont Supercomputer-LNCC for providing 500KUAs for the partial development of this research. The authors thank MCTIC/FINEP (CT-INFRA grant 0112052700), the Embrace Space Weather Program for the computing facilities at INPE, and NVIDIA for providing GPUs. We thank Dr. Diego Stalder, Dr. Bjoern Penning, Alyssa Garcia, Luke Korley, Dr. Helena Dom\'{i}nguez Sanch\'{e}z, and Dr. Sandro B. Rembold for productive discussions and thoughtful comments on several topics related to the present work.
\section{Introduction}
Quantum Field Theory (QFT), the term that generically describes a number of different approaches to the theory of fundamental interactions in particle physics, is a spectacular enterprise where physical requirements meet mathematical tools in a mix that,
since the late twenties, has been a powerful ground for developing deep connections between the two disciplines.
The algebraic approach to QFT (AQFT) stands out for its neat conceptual clarity and mathematical rigour, at least as far as the general structural analysis is concerned.
However, over the last few years it has also proven useful in providing a framework for constructing specific models.
In the original formulation,
the main object of study of AQFT is a so-called {\it net of local observables}, namely a correspondence
$$O \to {\cal A}(O)$$
between certain regions of Minkowski spacetime and operator algebras \cite{Ta} acting on a fixed (vacuum) Hilbert space ${\cal H}$, satisfying a set of physically meaningful axioms.
This point of view is exposed in full detail in R. Haag's book \cite{Ha} (see also \cite{R}).
The representation theory of this net accounts for the physical superselection sectors, i.e., the rules specifying which physical processes may take place and which
are ruled out, e.g., by conservation laws. Thanks to a breakthrough result by Doplicher and Roberts~\cite{DR}, when carefully analyzed, this superselection structure uniquely identifies a global gauge group $G$ and a {\it field net} $O \to {\cal F}(O)$ on a Hilbert space $\tilde{\cal H}$, acted upon by $G$, such that ${\cal F}(O)^G \simeq {\cal A}(O)$ and, moreover, all the relevant representations of ${\cal A}$ appear in $\tilde{\cal H}$.
Another important ingredient for us is the concept of {\it scaling limit net} due to D. Buchholz and R. Verch \cite{BV}, which is the counterpart in the algebraic setting of the renormalization group analysis (as discussed also in~\cite{BDM}) and in particular allows one to attach an intrinsic meaning to degrees of freedom (like quarks and gluons) that are unobservable at the ambient scale, i.e. they are {\it confined}.
In short, this is a net
$$O \to \mathcal{A}_{0,\iota}(O)$$
(actually a family of such nets, selected through different limiting processes)
which is supposed to capture the short distance (or, equivalently, high energy) behaviour of the theory described by the net $O \to {\cal A}(O)$ and, in a sense, is a sort of tangent space
(\`a la Gromov) of the original net.
The concept of scaling limit applies to $({\cal F},G)$ as well, giving rise to a number of natural concepts like that of {\it preserved charge} \cite{DMV,DMV2}.
Moreover, the net $O \to \mathcal{A}_{0,\iota}(O)$ displays its own superselection structure, and the relationship between the superselection structures of ${\cal A}$ and $\mathcal{A}_{0,\iota}$ provides the above mentioned intrinsic definition of confined sectors of ${\cal A}$.
Since the sectors of ${\cal A}$ may be described by suitable endomorphisms,
following an earlier suggestion by S. Doplicher, we studied the possibility of describing the superselection sectors of $\mathcal{A}_{0,\iota}$ in terms of some sort of {\it asymptotic endomorphisms}
of ${\cal A}$. A general treatment of this topic appears in \cite{CM2}. The emerging mathematical concept resembles the so-called asymptotic morphisms of Connes-Higson in $E$-theory, a variant of Kasparov $KK$-theory, which is a cornerstone in the formulation of Noncommutative Geometry in the sense of A. Connes \cite{Co}.
Similar ideas, in which quantum features (here, superselection rules) are described in terms of noncommutative geometry, have appeared repeatedly ever since.
On the physical side, the interpretation of asymptotic morphisms can be understood by observing that, composing one of them with the vacuum, one obtains a family, indexed by the spatio-temporal scale $\lambda$, of neutral states of the original theory which for $\lambda \to 0$ approximate, in a suitable sense, a charged state of the scaling limit theory. This is of course reminiscent of the procedure of ``shifting a compensating charge behind the Moon'' by which one obtains charged states as limits of neutral ones at a fixed scale.
Such a picture is particularly interesting in connection with the theoretical problem, mentioned above, of the formulation of a physically meaningful notion of confined charge. As first pointed out in~\cite{Buc1}, the conventionally accepted approach to confinement relies on the comparison between the fundamental degrees of freedom used to define the theory (e.g., gauge and Fermi fields in the Lagrangian) and its scattering states. As such, it boils down essentially to attaching a physical interpretation to unobservable objects and therefore cannot have an intrinsic meaning. This is confirmed also by the fact that there are well known examples of theories whose observables can be obtained starting from very different sets of basic fields. On the contrary, as already recalled, a notion of confined charge which is entirely based on observables can instead be obtained in the algebraic framework of QFT by combining the scaling limit construction with the superselection sectors analysis. In this setting, asymptotic morphisms
can be used, at least in principle, to operationally decide if a given theory features confined charges. Indeed, the above mentioned states induced by asymptotic morphisms converge to eigenstates of the charge operator of the scaling limit theory. Therefore, it should be sufficient to test their values and dispersions at small scales on suitable observables converging in the scaling limit to the conserved current generating the appropriate charge.
In view of the above considerations, it seems desirable, both from the mathematical and the physical standpoint, to understand in some detail
how the abstract
concepts developed in \cite{CM2} fit with the analysis of some concrete models and to present explicit examples of asymptotic morphisms associated to confined sectors.
A popular toy model exhibiting the main features which are expected to characterize the confinement picture is the so-called Schwinger model, i.e., $d=2$ quantum electrodynamics with massless fermions. As it is well known, this model is exactly solvable, and its net of observables is isomorphic to the one generated by the free massive scalar field~\cite{LS}, i.e., the distributional solution of the Klein-Gordon equation $(\Box + m^2) \varphi = 0$. In view of the fact that the Coulomb energy of two opposite electric charges grows linearly with their mutual distance, the absence of charged sectors of the observable algebra has been interpreted as a manifestation of confinement (see, e.g., \cite{BJ}).
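As a minimal sanity check of the conventions used here (metric signature $(+,-)$ assumed, so that $\Box = \partial_t^2 - \partial_x^2$ in $d=2$), one can verify by finite differences that the plane wave $\varphi(t,x) = \cos(\omega t - px)$ with $\omega = \sqrt{p^2+m^2}$ solves the Klein-Gordon equation; the sample point, mass, and momentum below are arbitrary.

```python
import numpy as np

m, p = 1.0, 0.7
omega = np.sqrt(p**2 + m**2)           # relativistic dispersion relation
phi = lambda t, x: np.cos(omega * t - p * x)

h = 1e-3                                # finite-difference step
t, x = 0.3, -0.5                        # arbitrary sample point
d2t = (phi(t + h, x) - 2 * phi(t, x) + phi(t - h, x)) / h**2
d2x = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2
residual = d2t - d2x + m**2 * phi(t, x)  # (Box + m^2) phi, should be ~0
print(abs(residual))                     # O(h^2) discretization error
```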
For this specific model the scaling limit analysis has been carried through in~\cite{Buc1, BV2}. The final outcome is that the scaling limit net of the free massive scalar field in $d=2$ spacetime dimensions contains the corresponding massless net. It is worth mentioning here that the usual technical complications due to the infrared singularity of the massless free field in $d=2$ are overcome in this case by describing it in terms of Weyl operators. Then, as discussed already in~\cite{StWi,Cio}, the latter model exhibits non-trivial sectors (localizable in wedges), which are therefore confined sectors of the Schwinger model net in the language of~\cite{DMV2}.
(We also point out references~\cite{HL,DeMe} for a discussion of many other features of the $d=2$ massless free field.)
Hence, in the present work we discuss the Schwinger model from the point of view of our previous paper~\cite{CM2}, and in particular we provide an explicit construction of its asymptotic morphisms.
In conclusion, despite some technicalities (infrared problems, choice of subnets, several kinds of limits), probably the most important message that can be read off from this work is that
confined charges are, at least in principle, indeed accessible to observation, if only in some idealized sense.
\medskip
We summarize the content of the paper.
In Section \ref{sect:Free} we mainly fix the notation and the general background. Namely, we recall the construction of the scaling limit and of the asymptotic morphisms associated to its sectors. We also introduce the Weyl algebra and use it to define the local nets associated to free fields of different masses in $d=2$. The section ends with a description of a family of sectors of the scaling limit $\Aoi^{(m)}$ of the massive free field.
Section \ref{sect:asymptotic} contains the main results of this work.
It is devoted to the explicit construction of asymptotic morphisms for the net ${\cal C}^{(m)}$ generated by the derivatives of the time zero massive fields.
Such asymptotic morphisms correspond, in the sense of \cite{CM2}, to the sectors of the scaling limit of ${\cal C}^{(m)}$ that are obtained by restricting those of $\Aoi^{(m)}$.
As a preliminary step towards this construction, we first provide a description of the scaling limit of ${\cal C}^{(m)}$ by showing that it basically coincides with the corresponding massless version ${\cal C}^{(0)}$.
In Section \ref{sec:Schwinger} we examine the possibility of exhibiting asymptotic morphisms directly for the sectors of $\Aoi^{(m)}$ by taking a similar route as in the previous section. This program can be partially carried out; however, the properties of the emerging objects are weaker than those axiomatized in \cite{CM2}.
In Section \ref{sec:highdim} we consider the free charged massive scalar field in $d=4$ and show that in this case the construction of asymptotic morphisms for the sectors of the scaling limit net can be achieved
without any trouble, thus showing that the difficulties of Section \ref{sec:Schwinger} are due to the singular behaviour of the two-dimensional massless free field.
Finally, in Appendix \ref{app:quasiequiv}, we prove for the $d=2$ case the local quasiequivalence of the massive and massless vacuum states of the Weyl algebra in restriction to the
subalgebra generated by the derivatives of time zero fields, a statement which is needed in Section \ref{sect:asymptotic} and is interesting in its own right (cf. \cite{EF}).
A recent result also implying this fact has been independently obtained in \cite{BFR}.
\section{Free scalar fields and their scaling limits}\label{sect:Free}
For the convenience of the reader, we quickly recapitulate in this section the main results of~\cite{BV, BV2} about the abstract scaling limit construction and its application to the concrete model of the free scalar field, and of \cite{CM2} about the asymptotic morphisms associated to the sectors of the scaling limit theory.
\medskip
Let $O \mapsto {\cal A}(O)$ be a local net of von Neumann algebras indexed by open double cones $O \subset {\mathbb R}^d$ and acting on a vacuum Hilbert space ${\cal H}$, cf.~\cite{Ha, Ar2}. We assume that ${\cal A}$ is covariant with respect to a unitary, strongly continuous representation $U : {\cal G} \to {\cal U}({\cal H})$ satisfying the spectrum condition, where ${\cal G}$ is a subgroup of the connected component ${\cal P}_+^\uparrow$ of the Poincar\'e group containing the translations, and that there is a unique (up to a phase) translation invariant unit vector $\Omega$ (the vacuum).
As usual, we will write $\alpha_{(\Lambda,x)}={\rm Ad}(U(\Lambda,x))$, $(\Lambda,x) \in {\cal G}$.
We indicate with ${\cal A}_\text{loc}$ the union of the local algebras ${\cal A}(O)$, and, by a slight abuse of notation, we also use ${\cal A}$ for the quasi-local C*-algebra defined by the net, i.e.\ the norm closure of ${\cal A}_\text{loc}$. Moreover, for more general possibly unbounded open regions $S \subset {\mathbb R}^d$, ${\cal A}(S)$ will denote the C*-algebra generated by all the ${\cal A}(O)$ with $O \subset S$.
\medskip
The local scaling algebra $\underline{\mathfrak A}(O)$ is then defined as the C*-algebra of all the bounded functions $\lambda \in (0, +\infty) \mapsto \underline{A}_\lambda$ such that $\underline{A}_\lambda \in {\cal A}(\lambda O)$ for all $\lambda > 0$, and
\[
\| \underline{\alpha}_{(\Lambda,x)}(\underline{A}) - \underline{A} \| := \sup_{\lambda > 0} \| \alpha_{(\Lambda,\lambda x)}(\underline{A}_\lambda) - \underline{A}_\lambda \| \to 0 \qquad\text{as }(\Lambda,x) \to ({\bf 1}, 0) \text{ in ${\cal G}$}.
\]
Given a bounded function $\lambda \in (0, +\infty) \mapsto A_\lambda$ such that $A_\lambda \in {\cal A}(\lambda O)$ for all $\lambda > 0$ and $h \in C_c({\mathbb R}^2)$ (continuous functions with compact support), it is convenient to set
$$(\underline{\alpha}_h A)_\lambda = \int_{{\mathbb R}^2} dx \, h(x) \alpha_{\lambda x}(A) \ , \quad \lambda > 0 $$
(with the integral defined in the strong sense),
which defines an element in $\underline{\mathfrak A}(O+{\text{supp}\,} h)$.
Given then a locally normal state $\omega$ on ${\cal A}$ (e.g., $\omega = \langle \Omega, (\cdot)\Omega\rangle$), one can consider the states $(\underline{\omega}_\lambda)_{\lambda > 0}$ on $\underline{\mathfrak A}$ defined by $\underline{\omega}_\lambda(\underline{A}) := \omega(\underline{A}_\lambda)$, and the set of their weak* limit points as $\lambda \to 0$, which is actually independent of the original state $\omega$. For any such limit state $\underline{\omega}_{0,\iota}$, the corresponding scaling limit net is then defined by
\[
\mathcal{A}_{0,\iota}(O) := \pi_{0,\iota}(\underline{\mathfrak A}(O))'',
\]
with $(\pi_{0,\iota}, {\cal H}_{0,\iota}, \Omega_{0,\iota})$ the GNS representation determined by $\underline{\omega}_{0,\iota}$. This new net satisfies the same structural properties of the underlying net ${\cal A}$, possibly apart from uniqueness of the vacuum if $d = 2$.
In order to formulate the notion of asymptotic morphisms of ${\cal A}$, we also need to introduce the net of C*-algebras $O \mapsto \uAA^\bullet(O)$, defined as the C*-algebras of bounded functions $\lambda \in (0,+\infty) \mapsto A_\lambda \in {\cal A}(\lambda O)$ such that for all $\hat O \Supset O$, i.e., $\hat O \supset \bar O$, and for all $\varepsilon > 0$, there exist elements $\underline{A}, \underline{A}' \in \underline{\mathfrak A}(\hat O)$ for which
\[
\limsup_\kappa \| (A_{\lambda_\kappa} - \underline{A}_{\lambda_\kappa})\Omega\| + \|(A^*_{\lambda_\kappa}-\underline{A}'_{\lambda_\kappa})\Omega \| < \varepsilon,
\]
where $(\lambda_\kappa)_{\kappa \in K}$ ($K$ some index set) is a net, fixed once and for all, such that $\underline{\omega}_{0,\iota} = \lim_\kappa \underline{\omega}_{\lambda_\kappa}$. It is then clear that $\underline{\mathfrak A}(O) \subset \uAA^\bullet(O)$, and one finds that $\pi_{0,\iota}$ extends to a morphism $\poi^\bullet : \uAA^\bullet \to \mathcal{A}_{0,\iota}$. Moreover, under the mild assumption that ${\cal A}$ has a convergent scaling limit~\cite[Def.\ 4.4]{BDM2}, there also holds $\mathcal{A}_{0,\iota}(O) \subset \poi^\bullet(\uAA^\bullet(O))$.
We can now define a (tame) asymptotic morphism of ${\cal A}$ (relative to the scaling limit state $\underline{\omega}_{0,\iota} = \lim_\kappa \underline{\omega}_{\lambda_\kappa}$) as a family of maps $\rho_\lambda : {\cal A} \to {\cal A}$, $\lambda > 0$, such that, for all $A, B \in {\cal A}$, $\alpha \in {\mathbb C}$, there holds
\begin{align*}
&\lim_\kappa \| [\rho_{\lambda_\kappa}(A+\alpha B) -\rho_{\lambda_\kappa}(A)-\alpha \rho_{\lambda_\kappa}(B)]\Omega \| = 0,\\
&\lim_\kappa \| [\rho_{\lambda_\kappa}(AB) - \rho_{\lambda_\kappa}(A)\rho_{\lambda_\kappa}(B)]\Omega \| = 0,\\
&\lim_\kappa \| [\rho_{\lambda_\kappa}(A)^*-\rho_{\lambda_\kappa}(A^*)]\Omega \| = 0,
\end{align*}
and moreover such that for all $A \in {\cal A}$ the function $\lambda \mapsto \boldsymbol{\rho}^\bullet(A)_\lambda := \rho_\lambda(A)$ belongs to $\uAA^\bullet$, the resulting map $\boldsymbol{\rho}^\bullet : {\cal A} \to \uAA^\bullet$ is norm continuous, and
\[
\poi^\bullet\Big(\boldsymbol{\rho}^\bullet\Big(\bigcup_O {\cal A}(O)\Big)\Big) \subset \bigcup_O \mathcal{A}_{0,\iota}(O).
\]
In particular, an asymptotic isomorphism is an asymptotic morphism $(\phi_\lambda)$ such that the map $\boldsymbol{\phi}^\bullet : {\cal A} \to \uAA^\bullet$ is injective and there exists a continuous section $\bar s : \mathcal{A}_{0,\iota} \to \uAA^\bullet$ of $\poi^\bullet$ for which
\[
\boldsymbol{\phi}^\bullet\Big(\bigcup_O {\cal A}(O)\Big) = \bar s\Big(\bigcup_O \mathcal{A}_{0,\iota}(O)\Big).
\]
With these definitions, the main result of~\cite{CM2} states that if ${\cal A}$ has convergent scaling limit and its quasi-local C*-algebra is isomorphic to $\mathcal{A}_{0,\iota}$, there is a 1-1 correspondence between unitary equivalence classes of morphisms $\rho_0 : \mathcal{A}_{0,\iota} \to \mathcal{A}_{0,\iota}$ such that $\rho_0(\bigcup_O \mathcal{A}_{0,\iota}(O))\subset \bigcup_O \mathcal{A}_{0,\iota}(O)$ and naturally defined equivalence classes of pairs of an asymptotic morphism $(\rho_\lambda)$ and an asymptotic isomorphism $(\phi_\lambda)$, such correspondence being defined by the formula
\[
\rho_0 = \poi^\bullet \boldsymbol{\rho}^\bullet (\boldsymbol{\phi}^\bullet)^{-1}\bar s.
\]
We notice explicitly that the above definitions and results make sense in any number of spacetime dimensions $d$.
We now turn to the description, following~\cite{BV2}, of the scaling limit of the free scalar field, focusing on the $d=2$ case. We denote by ${\mathfrak W}$ the Weyl algebra, the C*-algebra generated by the unitary operators $W(f)$, $ f \in {\cal D}({\mathbb R})$ (complex valued functions), satisfying
\begin{gather*}
W(f)W(g) = e^{-\frac{i}{2}\sigma(f,g)}W(f+g), \\
\sigma(f,g) = \Im \int_{\mathbb R} d{\boldsymbol{x}}\,\overline{f({\boldsymbol{x}})}g({\boldsymbol{x}}).
\end{gather*}
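As a quick finite-dimensional sanity check of the phase convention (not part of the model itself), one can verify the one-mode analogue of these relations numerically: the displacement operators $D(\alpha) = e^{\alpha a^\dagger - \bar\alpha a}$ on a truncated Fock space satisfy $D(\alpha)D(\beta) = e^{i \Im(\alpha\bar\beta)}\, D(\alpha+\beta)$ up to truncation effects. In the sketch below the truncation dimension and the test amplitudes are arbitrary choices.

```python
import numpy as np

N = 60  # Fock-space truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator

def expm_antiherm(X):
    """exp(X) for anti-Hermitian X, via the spectral theorem for -iX."""
    w, V = np.linalg.eigh(-1j * X)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def D(alpha):
    """Truncated displacement operator exp(alpha a* - conj(alpha) a)."""
    return expm_antiherm(alpha * a.conj().T - np.conjugate(alpha) * a)

alpha, beta = 0.2 + 0.1j, -0.15 + 0.25j
lhs = D(alpha) @ D(beta)
rhs = np.exp(1j * np.imag(alpha * np.conjugate(beta))) * D(alpha + beta)
# truncation artifacts live near the cutoff; compare low-lying elements
err = np.linalg.norm((lhs - rhs)[:10, :10])
print(err)
```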
For each mass $m \geq 0$, there is an automorphic action of the Poincar\'e group ${\cal P} = O(1,1) \ltimes {\mathbb R}^2$ on ${\mathfrak W}$, denoted by $(\Lambda,x) \mapsto \alpha^{(m)}_{(\Lambda,x)}$ and induced by an action $\tau^{(m)}_{(\Lambda,x)}$ on ${\cal D}({\mathbb R})$, i.e., $\alpha^{(m)}_{(\Lambda,x)}(W(f))=W(\tau^{(m)}_{(\Lambda,x)}f)$. For reference's sake, we give the explicit expression of time translations:
\begin{equation}\label{eq:timeev}\begin{split}
(\tau^{(m)}_t f)\hat{}({\boldsymbol{p}}) &= \left[\cos(t\omega_m({\boldsymbol{p}})) + i\omega_m({\boldsymbol{p}})^{-1}\sin(t\omega_m({\boldsymbol{p}}))\right] (\Re f)\hat{}({\boldsymbol{p}}) \\
&\quad+ i\left[\cos(t\omega_m({\boldsymbol{p}}))+i\omega_m({\boldsymbol{p}}) \sin(t\omega_m({\boldsymbol{p}}))\right](\Im f)\hat{}({\boldsymbol{p}}), \qquad t \in {\mathbb R},
\end{split}\end{equation}
where $\omega_m({\boldsymbol{p}})=\sqrt{{\boldsymbol{p}}^2 + m^2}$.
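Since the multiplier functions in Eq.~(\ref{eq:timeev}) are real and even in ${\boldsymbol{p}}$, and $\cos^2 + \sin^2 = 1$, the action $\tau^{(m)}_t$ leaves the symplectic form $\sigma$ invariant, as it must in order to induce automorphisms of ${\mathfrak W}$. This can be checked on a discretized version of the formula; grid parameters and test functions in the NumPy sketch below are arbitrary choices.

```python
import numpy as np

m, t = 1.0, 0.8
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, dx)
omega = np.sqrt(p**2 + m**2)            # dispersion relation omega_m(p)
c, s = np.cos(t * omega), np.sin(t * omega)

def evolve(f):
    """Discretized tau_t^(m): act on (Re f)^ and (Im f)^ as in the text."""
    u, v = np.fft.fft(f.real), np.fft.fft(f.imag)
    u_t = c * u - omega * s * v          # (Re tau_t f)^
    v_t = (s / omega) * u + c * v        # (Im tau_t f)^
    return np.fft.ifft(u_t).real + 1j * np.fft.ifft(v_t).real

def sigma(f, g):
    """sigma(f, g) = Im int conj(f) g, discretized."""
    return float(np.imag(np.sum(np.conj(f) * g)) * dx)

f = np.exp(-x**2) * (1 + 0.5j * x)
g = np.exp(-((x - 1) ** 2)) * (0.3 - 1j)
print(sigma(f, g), sigma(evolve(f), evolve(g)))  # equal up to rounding
```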
There is also an automorphic action $\lambda \in {\mathbb R}_+ \mapsto \sigma_\lambda$ of dilations on ${\mathfrak W}$, induced by an action $\lambda \mapsto \delta_\lambda$ on ${\cal D}({\mathbb R})$, see~\cite[Eq.\ (2.7)]{BV2}. We also consider the vacuum states $\omega^{(m)}$, $m \geq 0$, on ${\mathfrak W}$. For $m > 0$ they are defined by $\omega^{(m)}(W(f)) = e^{-\frac 1 2 \|f\|_m^2}$, where
$$\|f\|_m^2 = \frac{1}{2}\int_{\mathbb R} d{\boldsymbol{p}}\left|\omega_m({\boldsymbol{p}})^{-1/2} (\Re f)\hat{}({\boldsymbol{p}}) + i\omega_m({\boldsymbol{p}})^{1/2} (\Im f)\hat{}({\boldsymbol{p}})\right|^2, \qquad m \geq 0,$$
while, for $m=0$,
\begin{equation*}
\omega^{(0)}(W(f)) = \begin{cases}e^{-\frac{1}{2}\| f\|_0^2} &\text{if }(\Re f)\hat{}(0) = 0,\\
0 &\text{otherwise.}\end{cases}
\end{equation*}
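The case distinction above can be illustrated numerically: for a real test function with $(\Re f)\hat{}(0) \neq 0$, the integrand $\omega_m({\boldsymbol{p}})^{-1}|(\Re f)\hat{}({\boldsymbol{p}})|^2$ develops a non-integrable $1/|{\boldsymbol{p}}|$ singularity as $m \to 0$, so $\|f\|_m^2$ grows logarithmically in $1/m$ and $e^{-\frac{1}{2}\|f\|_m^2} \to 0$, consistently with the definition of $\omega^{(0)}$. A small sketch (the Gaussian test function and the hyperbolic substitution are our own choices):

```python
import numpy as np

def norm_m_squared(m, p_max=50.0, n=20001):
    """||f||_m^2 = (1/2) int dp |f_hat(p)|^2 / omega_m(p) for the real
    test function f(x) = exp(-x^2), f_hat(p) = sqrt(pi) exp(-p^2/4).
    The substitution p = m*sinh(theta) turns dp / omega_m(p) into
    dtheta, keeping the integrand bounded uniformly in m."""
    theta_max = np.arcsinh(p_max / m)
    theta = np.linspace(-theta_max, theta_max, n)
    p = m * np.sinh(theta)
    f_hat_sq = np.pi * np.exp(-p**2 / 2)
    return 0.5 * np.sum(f_hat_sq) * (theta[1] - theta[0])

for m in (1.0, 1e-2, 1e-4, 1e-6):
    print(f"m = {m:8.0e}   ||f||_m^2 = {norm_m_squared(m):7.3f}")
```

The printed values grow by roughly $\pi \log 100 \approx 14.5$ for each factor $10^{-2}$ in $m$, reflecting the logarithmic infrared divergence.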
It turns out that $\|\cdot\|_m$ for $m>0$ (resp.\ $m=0$) is indeed a norm on ${\cal D}({\mathbb R})$ (resp.\ $\{f \in {\cal D}({\mathbb R}) \ : \ \int_{\mathbb R} \Re f = 0\}$) considered as a real vector space. We denote by $\pi^{(m)}$ the GNS representation induced by $\omega^{(m)}$, $m \geq 0$, acting on the Hilbert space ${\cal H}^{(m)}$ with cyclic vector $\Omega^{(m)}$, and by $O \mapsto {\cal A}^{(m)}(O)$ the corresponding net of von Neumann algebras
$$
{\cal A}^{(m)}(O) := \{\pi^{(m)} (\alpha^{(m)}_{(\Lambda,x)}(W(f)))\,: {\text{supp}\,} f \subset I\}'',
$$
where $I \subset {\mathbb R}$ is an open interval such that $O = \Lambda O_I+x$, with $O_I$ the double cone with base $I$ in the time-zero line.
(Note that here we depart from the notation of~\cite{BV2}; also, in order to simplify the notation, we will drop the indication of the representation $\pi^{(m)}$ when this does not cause confusion.)
It is clear that the net ${\cal A}^{(m)}$
satisfies
\begin{equation}\label{eq:innercont}
{\cal A}^{(m)}(O_I) = \bigvee_{J \Subset I} {\cal A}^{(m)}(O_J) \ ,
\end{equation}
since every $f$ supported in an open interval $I$ is actually supported in a $J \Subset I$.
If $m > 0$, the net ${\cal A}^{(m)}$ satisfies the split property~\cite{Dr} and its local algebras are type III$_1$ factors \cite{GlJa}.
In particular, ${\cal A}^{(m)}$ is a simple C*-algebra. Note also that ${\cal H}^{(0)}$ is non-separable, see \cite[Section 4]{AMS93}.\footnote{A representation of ${\mathfrak W}$ in which $\tau^{(0)}$ is unitarily implemented on a separable Hilbert space is constructed in~\cite{DeMe}.}
According to the general construction discussed above, we associate to ${\cal A}^{(m)}$, $m>0$, the scaling algebra $\uAA^{(m)}$ and the scaling limit nets $O \mapsto \Aoi^{(m)}(O) = \pi_{0,\iota}(\uAA^{(m)}(O))''$, on the scaling limit Hilbert spaces $\Hoi^{(m)}$, along with corresponding automorphic actions $\alpha^{(m;0,\iota)}$ of the Poincar\'e group. We will also make use of the net $\uAA^{(m)\bullet}$ of the elements asymptotically contained in $\uAA^{(m)}$, and of the corresponding extension $\poi^\bullet$ of the scaling limit representation. When there is no ambiguity, we will just write $\uAA^\bullet$ instead of $\uAA^{(m)\bullet}$. It is known that, for each $\iota$, the quasi-local algebra has a non-trivial center, and the local algebras $\Aoi^{(m)}(O)$ are not factors, as they contain non-trivial elements of the center~\cite[proof of Thm.\ 4.1]{BV2}. As a consequence, since ${\cal A}^{(m)}$ is irreducible (because the Fock vacuum is the only translation invariant vector), the quasi-local C*-algebras $\Aoi^{(m)}$ and ${\cal A}^{(m)}$ cannot be isomorphic. This is in sharp contrast with the situation considered in~\cite{CM2}.
One can also introduce rescaled Weyl operators
$${\underline{W}}(f)_\lambda := W(\delta_\lambda f), \qquad \lambda>0,\, f \in {\cal D}({\mathbb R}).$$
From~\cite[Eq. (4.13)]{BV2} one sees that
\begin{equation}\label{eq:weylreg}
\limsup_{\lambda \to 0} \| [{\underline{W}}(f)_\lambda - (\ua^{(m)}_h{\underline{W}}(f))_\lambda]\Omega\|
\end{equation}
can be made arbitrarily small by choosing $h$ sufficiently close to a $\delta$-function, which entails ${\underline{W}}(f) \in \uAA^\bullet(O)$ for every double cone $O$ based on the time zero line and whose base contains ${\text{supp}\,} f$. Using~\cite[Lemma 4.2]{BV2} and~\cite[Lemma 4.5]{CM2}, one also concludes that
\begin{equation}\label{eq:Woi}
W_{0,\iota}(f) := \poi^\bullet({\underline{W}}(f) ), \qquad f \in {\cal D}({\mathbb R}),
\end{equation}
satisfy the Weyl relations
$$W_{0,\iota}(f) W_{0,\iota}(g) = e^{-i\sigma(f,g)/2}W_{0,\iota}(f+g), \qquad f,g\in{\cal D}({\mathbb R}),$$
and $W_{0,\iota}(f) \in \Aoi^{(m)}(O_I)$ if ${\text{supp}\,} f \subset I$.
The arguments in~\cite{Buc1} suggest that the net ${\cal A}^{(0)}$ obtained in the GNS representation of the massless vacuum $\omega^{(0)}$ is isomorphic to the subnet of $\Aoi^{(m)}$ generated by the operators $W_{0,\iota}(f)$.\footnote{The results in~\cite{Buc1} also suggest that the scaling limit of the C*-subalgebra of the scaling algebra generated by (smoothed-out) functions $\lambda \mapsto W(\delta_\lambda f)$ and $\lambda \mapsto W(|\log \lambda|^{1/2}\delta_\lambda f)$ is isomorphic to ${\cal A}^{(0)} \otimes {\cal Z}$, with ${\cal Z}$ (a subalgebra of) the center of $\Aoi^{(m)}$.} In Sec.~\ref{sec:Schwinger} we will make this identification more explicit.
The net $\Aoi^{(m)}$ has non-trivial automorphisms $\rho_{q,\iota}$, $q \in {\mathbb R}$, defined as follows. Let $u_n^q = u_n \in {\cal D}_{\mathbb R}({\mathbb R})$ be such that, for some $a>0$,
$$u_n({\boldsymbol{x}}) = \begin{cases}0 \qquad &\text{if }{\boldsymbol{x}} \leq -a,\\
\text{independent of }n &\text{if }-a < {\boldsymbol{x}} < a,\\
q &\text{if }a \leq {\boldsymbol{x}} \leq na,\end{cases}
$$
and consider ${V_n^{(q)}} := W_{0,\iota}(iu_n)$. It follows that, for $k \geq n$, ${V_n^{(q)}} V_k^{(q)*} =W_{0,\iota}(i(u_n-u_k))$ is localized in a double cone
whose closure is contained
in the right spacelike complement of ${\boldsymbol{x}} = na$. Therefore, if $A \in \Aoi^{(m)}(O)$, ${V_n^{(q)}}^*A{V_n^{(q)}}$ is independent of $n$ for $n$ sufficiently large, and there exists the (norm) limit
\begin{equation}\label{eq:rqi}
\rho_{q,\iota}(A) = \lim_{n \to +\infty}{V_n^{(q)}}^*A{V_n^{(q)}}.
\end{equation}
It is then clear that $\rho_{q,\iota}$ extends, by norm continuity, to an endomorphism of the quasi-local algebra $\Aoi^{(m)}$, which is easily seen to be
localized in $W_+-a$
and
invertible. It is shown in~\cite{BV2} that such automorphisms induce non-trivial translation covariant sectors of $\Aoi^{(m)}$ (which are independent of the chosen $a > 0$ and of $u_n$ on $(-a,a) \cup (na,+\infty)$). More precisely, it is shown in \cite[Thm.\ 4.1]{BV2} that $\omega_{0,\iota}\circ\rho_{q,\iota} \upharpoonright \Aoi^{(m)}(W_a^\pm) = \omega_{0,\iota} \upharpoonright \Aoi^{(m)}(W_a^\pm)$, where $W_a^\pm$ is the right/left spacelike complement of the interval $[-a,a]$ of the time-zero axis. From this it follows, by cyclicity of $\Omega_{0,\iota}$ for wedge algebras and the uniqueness of the GNS representation, that $\rho_{q,\iota}$ is a BF-sector of $\Aoi^{(m)}$. Notice also that, due to the above mentioned localization properties of ${V_n^{(q)}}$, $\rho_{q,\iota}$ is a properly supported morphism of $\Aoi^{(m)}$ in the sense of~\cite[Sec.\ 5]{CM2}.
\section{Asymptotic morphisms for the derivatives of time zero fields} \label{sect:asymptotic}
Since the sectors $\rho_{q,\iota}$ can be interpreted as describing confined charges of the underlying theory ${\cal A}^{(m)}$ \cite{Buc1,DMV}, it seems interesting to
exhibit explicit examples of the associated asymptotic morphisms of ${\cal A}^{(m)}$.
Actually, due to the bad infrared behaviour of the massless scalar field in $d=2$ which is responsible for the appearance of the nontrivial center of $\Aoi^{(m)}$,
these sectors do not fall into the framework of \cite{CM2}. Therefore,
it will only be possible to construct the associated asymptotic morphisms at the price of passing from ${\cal A}^{(m)}$ to a suitable subnet ${\cal C}^{(m)}$ generated by the derivatives of the time zero fields (see Eq. \eqref{eq:Cm} below for a precise definition), which has the property that its scaling limit inherits the same sectors from $\Aoi^{(m)}$.
\medskip
To begin with,
we compute for reference the action of $\rho_{q,\iota}$ on the Weyl operators defined above:
\begin{equation*}\begin{split}
\rho_{q,\iota}(W_{0,\iota}(f)) &= \lim_{n \to +\infty} W_{0,\iota}(iu_n)^*W_{0,\iota}(f)W_{0,\iota}(iu_n) \\
&= \lim_{n \to +\infty} e^{i\sigma(iu_n,f)}W_{0,\iota}(f)\\
&=\lim_{n \to +\infty}e^{i \Im \int_{\mathbb R} d{\boldsymbol{x}}\,(-i)u_n({\boldsymbol{x}})f({\boldsymbol{x}})}W_{0,\iota}(f)\\
&=\lim_{n \to +\infty}e^{-i\int_{\mathbb R} d{\boldsymbol{x}}\,u_n({\boldsymbol{x}})\Re f({\boldsymbol{x}})}W_{0,\iota}(f)\\
&= e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty({\boldsymbol{x}})\Re f({\boldsymbol{x}})}W_{0,\iota}(f),
\end{split}\end{equation*}
where $u_\infty = \lim_{n\to +\infty} u_n$ is such that $u_\infty({\boldsymbol{x}}) = 0$ if ${\boldsymbol{x}} < -a$ and $u_\infty({\boldsymbol{x}}) = q$ if ${\boldsymbol{x}} > a$, cf.~\cite[Sec. 4]{Cio}.
We also observe that similar formulas hold for the rescaled Weyl operators at each fixed $\lambda > 0$.
Namely, by the same argument as above, we can define, for each $\lambda > 0$, a morphism $\rho(\lambda)$ of ${\cal A}^{(m)}$ by
\begin{equation*}
\rho(\lambda)(A) = \lim_{n \to +\infty}{\underline{W}}(iu_n)^*_\lambda A {\underline{W}}(iu_n)_\lambda, \qquad A \in {\cal A}^{(m)},
\end{equation*}
and there holds
\begin{equation*}\begin{split}
\rho(\lambda)(W(f)) &= \lim_{n \to +\infty} {\underline{W}}(iu_n)^*_\lambda W(f) {\underline{W}}(iu_n)_\lambda \\
&= \lim_{n \to +\infty}W(\delta_\lambda iu_n)^*W(f)W(\delta_\lambda iu_n) \\
&= \lim_{n \to +\infty}e^{-i\int_{\mathbb R} d{\boldsymbol{x}}\,\delta_\lambda u_n({\boldsymbol{x}})\Re f({\boldsymbol{x}})} W(f)\\
&= \lim_{n \to +\infty}e^{-i\int_{\mathbb R} d{\boldsymbol{x}}\,u_n(\lambda^{-1}{\boldsymbol{x}})\Re f({\boldsymbol{x}})} W(f)\\
&= e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty(\lambda^{-1}{\boldsymbol{x}})\Re f({\boldsymbol{x}})}W(f).
\end{split}\end{equation*}
We can also take the limit $\lambda \to 0$ in the last line, obtaining
$$
\lim_{\lambda \to 0}\rho(\lambda)(W(f)) =
\lim_{\lambda \to 0}\lim_{n \to +\infty} {\underline{W}}(iu_n)^*_\lambda W(f) {\underline{W}}(iu_n)_\lambda =e^{-i q\int_0^{+\infty} d{\boldsymbol{x}}\,\Re f({\boldsymbol{x}})}W(f).
$$
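The mechanism behind this limit can be illustrated by an informal numerical sanity check (not part of the argument: the interpolation chosen for $u_\infty$ on $(-a,a)$, the profile standing in for $\Re f$ and all numerical parameters below are hypothetical), confirming that $\int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty(\lambda^{-1}{\boldsymbol{x}})\Re f({\boldsymbol{x}})$ approaches $q\int_0^{+\infty} d{\boldsymbol{x}}\,\Re f({\boldsymbol{x}})$ as $\lambda \to 0$:

```python
import numpy as np

# Sanity check (illustrative only): the phase integral
#   int u_inf(x/lam) * Re f(x) dx
# tends to  q * int_0^inf Re f(x) dx  as lam -> 0, since
# u_inf(x/lam) -> q * theta(x) pointwise for x != 0.
a, q = 1.0, 2.0
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

def u_inf(y):
    # hypothetical interpolation: 0 for y <= -a, q for y >= a, linear in between
    return q * np.clip((y + a) / (2.0 * a), 0.0, 1.0)

f = np.exp(-(x - 0.5) ** 2)  # hypothetical stand-in for Re f

target = q * np.sum(np.where(x > 0.0, f, 0.0)) * dx
errs = {}
for lam in (1.0, 0.1, 0.01):
    errs[lam] = abs(np.sum(u_inf(x / lam) * f) * dx - target)
    print(f"lambda = {lam:5.2f}   |phase integral - limit| = {errs[lam]:.2e}")
```

The error shrinks because $u_\infty(\lambda^{-1}\cdot)$ differs from the step $q\theta$ only on the shrinking interval $(-\lambda a, \lambda a)$.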
Moreover, we can extend the automorphisms $\sigma_\lambda : {\mathfrak W} \to {\mathfrak W}$ to automorphisms $\phi_\lambda : {\cal A}^{(m)} \to {\cal A}^{(m)}$ such that
\begin{equation}\label{eq:phil}
\phi_\lambda(\pi^{(m)}(W(f))) = \pi^{(m)}\sigma_\lambda(W(f)) = \pi^{(m)}(W(\delta_\lambda f)) = \pi^{(m)}({\underline{W}}(f)_\lambda), \qquad \lambda > 0, f \in {\cal D}({\mathbb R}).
\end{equation}
This is a direct consequence of the local normality of the states $\omega^{(m)}$ and $\omega^{(m)}\circ \sigma_\lambda = \omega^{(\lambda m)}$~\cite{EF}.
Therefore, if we define the morphisms $\rho_\lambda := \rho(\lambda)\phi_\lambda : {\cal A}^{(m)} \to {\cal A}^{(m)}$, we obtain, from the above formulas,
\begin{equation}\label{eq:rhol}
\rho_\lambda(W(f)) = e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty(\lambda^{-1}{\boldsymbol{x}})\Re (\delta_\lambda f)({\boldsymbol{x}})}W(\delta_\lambda f) = e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty({\boldsymbol{x}})\Re f({\boldsymbol{x}})}{\underline{W}}(f)_\lambda.
\end{equation}
Identifying ${\mathfrak W}$ with the C$^*$-subalgebras of ${\cal A}^{(m)}$ and $\Aoi^{(m)}$ generated by the operators $W(f)$, $W_{0,\iota}(f)$ ($f \in {\cal D}({\mathbb R})$) respectively, we can also consider an isomorphism $\boldsymbol{\phi} : {\mathfrak W} \to {\mathfrak W}$ such that
$$\boldsymbol{\phi}(W(f)) = W_{0,\iota}(f) \ . $$
Therefore, defining $\boldsymbol{\rho} = \rho_{q,\iota} \boldsymbol{\phi}: {\mathfrak W} \to \Aoi^{(m)}$, we get
$$\poi^\bullet\big(\lambda \mapsto \rho_\lambda(W(f))\big) = \boldsymbol{\rho}(W(f)), \qquad f \in {\cal D}({\mathbb R}) $$
from which, using the fact that $\| \rho_\lambda\| \leq 1$ uniformly in $\lambda > 0$ (each $\rho_\lambda$ being a morphism of C*-algebras), together with the norm continuity of $\poi^\bullet$ and $\boldsymbol{\rho}$, we conclude that (cf.~\cite[Thm. 5.2]{CM2})
$$\poi^\bullet\big(\lambda \mapsto \rho_\lambda(W)\big) = \boldsymbol{\rho}(W), \qquad W \in {\mathfrak W}.$$
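On the Weyl generators, this identity can be verified step by step, using only \eqref{eq:rhol}, \eqref{eq:Woi} and the action of $\rho_{q,\iota}$ on the operators $W_{0,\iota}(f)$ computed at the beginning of this section (we record the chain of equalities for the reader's convenience):

```latex
\begin{align*}
\poi^\bullet\big(\lambda \mapsto \rho_\lambda(W(f))\big)
  &= e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty({\boldsymbol{x}})\Re f({\boldsymbol{x}})}\,
     \poi^\bullet\big(\lambda \mapsto {\underline{W}}(f)_\lambda\big)
     && \text{by \eqref{eq:rhol}}\\
  &= e^{-i \int_{\mathbb R} d{\boldsymbol{x}}\,u_\infty({\boldsymbol{x}})\Re f({\boldsymbol{x}})}\,W_{0,\iota}(f)
     && \text{by \eqref{eq:Woi}}\\
  &= \rho_{q,\iota}(W_{0,\iota}(f))
   = \rho_{q,\iota}(\boldsymbol{\phi}(W(f)))
   = \boldsymbol{\rho}(W(f)).
\end{align*}
```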
In passing, we also note that $(\rho_\lambda\upharpoonright {\mathfrak W})$ satisfies properties analogous to properties (i)-(iii) of~\cite[Def. 5.1]{CM2} (in particular, for property (iii), replacing $\bigcup_O {\cal A}^{(m)}(O)$ with $\bigcup_I {\mathfrak W}_I$, where ${\mathfrak W}_I$ is the Weyl algebra over ${\cal D}(I)$, $I \subset {\mathbb R}$ an interval).
Moreover, it is worth pointing out that, at variance with the case of preserved sectors considered in~\cite[Sec. 7]{CM2}, the morphisms $\rho(\lambda)$ do not induce non-trivial sectors of ${\cal A}^{(m)}$, because there are no such sectors~\cite[Thm.\ 3.1 and Sec.\ 7]{Mu}. The latter fact in particular justifies the interpretation of the sectors induced by $\rho_{q,\iota}$ as confined sectors of ${\cal A}^{(m)}$~\cite{Buc1, DMV, DMV2}.
\medskip
The family of maps $(\rho_\lambda)$ defined above enjoys some of the properties of asymptotic morphisms of ${\cal A}^{(m)}$, but it can be shown that $\lambda \mapsto \rho_\lambda(A)$ does not belong to $\uAA^{(m)\bullet}$ for all $A \in {\cal A}^{(m)}$. We defer a discussion of this and related aspects to Section \ref{sec:Schwinger}, where it will appear that the real obstruction lies in the zero mode of the field momentum. Therefore, for the time being,
in order to achieve our goal of constructing \emph{bona fide} asymptotic morphisms,
we restrict our attention to the von Neumann algebras
\begin{equation}\label{eq:Cm}
{\cal C}^{(m)}(O_I +x) := \{\pi^{(m)}(\alpha^{(m)}_{x}(W(f)))\,: {\text{supp}\,} f \subset I, \, {\textstyle \int_{\mathbb R} f = 0}\}'', \qquad m > 0.
\end{equation}
In this way, since the condition of having null integral is preserved by the translations $\tau^{(m)}_x$, $m > 0$, we obtain a
translation covariant subnet ${\cal C}^{(m)}$ of the restriction of ${\cal A}^{(m)}$ to the upright double cones (i.e., those of the form $O_I + x$).
However, the property $\int_{\mathbb R} f = 0$ is not stable under Lorentz transformations \cite[Section 7.2]{BDM2}, and therefore ${\cal C}^{(m)}$ cannot be extended to a Poincar\'e covariant isotonous subnet of ${\cal A}^{(m)}$.
This obstruction disappears for $m=0$, so that
\begin{equation}\label{eq:Cz}
{\cal C}^{(0)}(\Lambda O_I +x) := \{\pi^{(0)}(\alpha^{(0)}_{(\Lambda,x)}(W(f)))\,: {\text{supp}\,} f \subset I, \, {\textstyle \int_{\mathbb R} f = 0}\}''
\end{equation}
defines a Poincar\'e covariant subnet of ${\cal A}^{(0)}$ (indexed by all double cones). We denote by ${\cal K}^{(0)} = \overline{{\cal C}^{(0)} \Omega^{(0)}}$ the corresponding cyclic subspace of ${\cal H}^{(0)}$.
We note that the dual net local algebras ${\cal C}^{(m)d}(O) := {\cal C}^{(m)}(O')'$ can be defined for all double cones $O$ (not just the upright ones). With this in mind, we summarize in the next statement the main relations between ${\cal C}^{(m)}$ and ${\cal A}^{(m)}$.
\begin{Proposition}\label{prop:Cm}
Let $m > 0$. Then the following properties hold.
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item If $W_+$ denotes the right wedge, ${\cal C}^{(m)}(W_+)'' = {\cal A}^{(m)}(W_+)''$.
\item ${\cal C}^{(m)d} = {\cal A}^{(m)}$.
\item There is an action $\gamma : {\mathbb R}^2 \to \aut({\cal A}^{(m)})$ by net automorphisms, such that
\[
\gamma_{(\mu,\nu)}(\pi^{(m)}(W(f))) = e^{i(\mu\,\Re \int_{\mathbb R} f + \nu \,\Im \int_{\mathbb R} f)}\pi^{(m)}(W(f)),
\]
and ${\cal C}^{(m)}(O) \subset {\cal A}^{(m)}(O)^{{\mathbb R}^2}$ for all upright double cones $O$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i) Let $f \in {\cal D}({\mathbb R})$ have support contained in ${\mathbb R}_+$ and set $\alpha := \int_{\mathbb R} f$. Moreover, let $\chi \in {\cal D}({\mathbb R})$ be a real function with support in $(0,1)$ and $\int_{\mathbb R} \chi = 1$, and consider the function $f_\varepsilon({\boldsymbol{x}}) := f({\boldsymbol{x}}) - \alpha \varepsilon \chi(\varepsilon {\boldsymbol{x}})$, $\varepsilon > 0$. Then clearly ${\text{supp}\,} f_\varepsilon \subset {\mathbb R}_+$, $\int_{\mathbb R} f_\varepsilon = 0$ and
\[\begin{split}
\|f-f_\varepsilon \|^2_m &= \int_{\mathbb R} d{\boldsymbol{p}}\left|\frac {\Re \alpha} {\omega_m({\boldsymbol{p}})^{1/2}} + i \omega_m({\boldsymbol{p}})^{1/2} \Im \alpha\right|^2 |\hat\chi({\boldsymbol{p}}/\varepsilon)|^2 \\
&= \int_{\mathbb R} d{\boldsymbol{p}}\left[\frac {(\Re \alpha)^2} {\omega_{m/\varepsilon}({\boldsymbol{p}})} + \omega_{\varepsilon m}(\varepsilon^2 {\boldsymbol{p}}) (\Im \alpha)^2\right] |\hat\chi({\boldsymbol{p}})|^2 \to 0
\end{split}\]
as $\varepsilon \to 0$, by a straightforward application of the dominated convergence theorem. This implies that $W(f_\varepsilon) \in {\cal C}^{(m)}(W_+)$ converges strongly to $W(f) \in {\cal A}^{(m)}(W_+)$, and the statement follows.
(ii) Given a double cone $O = (W_++a)\cap (-W_++b)$ one has
\[\begin{split}
{\cal C}^{(m)d}(O) &= {\cal C}^{(m)}(O')' = {\cal C}^{(m)}(-W_++a)' \wedge {\cal C}^{(m)}(W_++b)'\\
&= {\cal A}^{(m)}(-W_++a)' \wedge {\cal A}^{(m)}(W_++b)' = {\cal A}^{(m)d}(O) = {\cal A}^{(m)}(O),
\end{split}\]
where (i) and duality for the net ${\cal A}^{(m)}$ have been used.
(iii) Let $O = O_I$, and let $f \in {\cal D}(I)$. Given $g \in {\cal D}({\mathbb R})$ such that $g|_I = -i$, one has, by the Weyl relations,
\begin{equation}\label{eq:weylauto}
W(\mu g) W(f) W(\mu g)^* = e^{i \mu \Re\int_{\mathbb R} f}W(f),
\end{equation}
which shows that $\gamma_{(\mu,0)}$ can be defined as an automorphism of the von Neumann algebra ${\cal A}^{(m)}(O_I)$. Then, it extends to an automorphism of the quasi-local algebra, as the latter is generated, as a C*-algebra, by local algebras of this form. By the same argument, using a function $h \in {\cal D}({\mathbb R})$ such that $h|_I = 1$, one gets $\gamma_{(0,\nu)}$. Moreover, it is clear, again by the Weyl relations, that the group-like commutator
\[
W(\nu g)^* W(\mu h)^* W(\nu g) W(\mu h)
\]
is a phase factor, thus yielding the required automorphic action of ${\mathbb R}^2$ (cf.~\cite[Sec. 5]{AMS93}). By~\eqref{eq:weylauto}, it is also clear that $\gamma_{(\mu,\nu)}({\cal A}^{(m)}(O)) = {\cal A}^{(m)}(O)$ for all upright
double cones $O$ (not necessarily based at time zero). Finally, the $\gamma$-invariance of elements of ${\cal C}^{(m)}(O_I)$ is obvious, and the general case readily follows from the fact that every upright double cone is included in one based on the time zero line.
\end{proof}
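The convergence $\|f-f_\varepsilon\|_m \to 0$ used in part (i) of the proof can likewise be checked numerically. The following sketch (with a hypothetical Gaussian profile standing in for $|\hat\chi|^2$ and arbitrary sample values of $m$ and $\alpha$) evaluates the first line of the momentum-space computation:

```python
import numpy as np

# Illustrative check: || f - f_eps ||_m^2, written in momentum space as
#   int [ (Re a)^2 / w_m(p) + w_m(p) * (Im a)^2 ] |chi_hat(p/eps)|^2 dp,
# tends to 0 as eps -> 0 (hypothetical Gaussian |chi_hat|^2, sample constants).
m, alpha = 1.0, 1.0 + 1.0j
p = np.linspace(-20.0, 20.0, 400001)
dp = p[1] - p[0]
w = np.sqrt(p**2 + m**2)  # w_m(p) = (p^2 + m^2)^(1/2)

norms = {}
for eps in (1.0, 0.1, 0.01):
    chi_hat_sq = np.exp(-((p / eps) ** 2))  # stand-in for |chi_hat(p/eps)|^2
    integrand = (alpha.real**2 / w + w * alpha.imag**2) * chi_hat_sq
    norms[eps] = np.sum(integrand) * dp
    print(f"eps = {eps:5.2f}   ||f - f_eps||_m^2 = {norms[eps]:.2e}")
```

As expected, the value is essentially proportional to the width $\varepsilon$ of $|\hat\chi(\cdot/\varepsilon)|^2$ for small $\varepsilon$.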
Property (iii) above entails in particular that ${\cal C}^{(m)}$ is a proper subnet of ${\cal A}^{(m)}$ (restricted to the upright double cones). We also remark that the automorphisms $\gamma_{(\mu,\nu)}$ commute with spatial translations.
We will also need to consider the Poincar\'e covariant subnet ${\cal C}$ of ${\cal A}^{(0)}$ defined by
\begin{equation}\label{eq:C}
{\cal C}(\Lambda O_I + x) := \Big\{ \pi^{(0)}(\alpha^{(0)}_{\Lambda,x}(W(f)))\,:\,{\text{supp}\,} f \subset I, {\textstyle\int_{\mathbb R} \Re f = 0}\Big\}''.
\end{equation}
Here the isotony is guaranteed by the fact that massless Poincar\'e transformations also preserve the property $\int_{\mathbb R} \Re f = 0$. This net is Haag-dual by~\cite[Appendix 3]{HL} when considered as a net acting on the cyclic Hilbert space defined by $\Omega^{(0)}$. Moreover, it enjoys the split property and the local algebras ${\cal C}(O)$ are hyperfinite type III$_1$ factors. This follows from the fact that, as shown for instance in~\cite{BLM}, the local algebras decompose into a tensor product of local algebras associated to the U(1) current on the light rays (for the latter see, e.g., \cite{BMT}). We will see in Sec.~\ref{sec:Schwinger} that ${\cal C}$ can be naturally identified with a Poincar\'e covariant subnet of $\Aoi^{(m)}$. Also, by an argument similar to the one in Prop.~\ref{prop:Cm}, we obtain that ${\cal C}^{(0)d} = {\cal C}$. In particular, $\overline{{\cal C}\Omega^{(0)}} = {\cal K}^{(0)}$.
According to~\cite[Thm.\ 3.1]{CM2}, there exists then a C*-algebra isomorphism $\phi : {\cal A}^{(m)} \to {\cal C}$ which identifies a countable increasing family of local von Neumann algebras. In view of~\cite{EF}, a natural question is then whether $\phi$ can be chosen in such a way that $\phi(\pi^{(m)}(W(f))) = \pi^{(0)}(W(f))$ whenever $\int_{\mathbb R} \Re f = 0$. That this cannot be the case follows at once from the fact, noted below in Remark \ref{notiso}, that the algebra~\eqref{eq:Ctildem} defined there is a proper subalgebra of ${\cal A}^{(m)}(O)$ for any $O$. Similarly, there is no isomorphism between ${\cal A}^{(m)}(O)$ and ${\cal A}^{(0)}(O)$ mapping $\pi^{(m)}(W(f))$ into $\pi^{(0)}(W(f))$ for all $f \in {\cal D}({\mathbb R})$: indeed, the map ${\mathbb R} \ni \alpha \mapsto \pi^{(m)}(W(\alpha f))$ is $\sigma$-strongly continuous for every $f$,
while $\alpha \mapsto \pi^{(0)}(W(\alpha f))$ is not, and any von Neumann algebra isomorphism is automatically continuous for the $\sigma$-strong topologies.
However, at this stage we cannot exclude the existence of an isomorphism between~\eqref{eq:Ctildem} and ${\cal C}(O)$, defined by the same formula.
On the positive side, it follows from results in \cite{BFR}
that for any interval $I \subset {\mathbb R}$ there exists a von Neumann algebra isomorphism
$\phi :{\cal C}^{(m)}(O_I) \to {\cal C}^{(0)}(O_I)$ such that $\phi(\pi^{(m)}(W(f))) = \pi^{(0)}(W(f))$ if ${\text{supp}\,} f \subset I$ and $\int_{\mathbb R} f = 0$.
To keep the paper self-contained, a direct proof of this result is provided in Appendix~\ref{app:quasiequiv}.
This is an extension to $d=2$ of the classical result of~\cite{EF}, compatible with the infrared singularity of the massless free field.
\smallskip
We will need a result about a certain phase space property for the massless scalar field in two dimensions that might be of independent interest.
\begin{Proposition}
The net ${\cal C}$ on ${\mathbb R}^2$ in its vacuum representation on ${\cal K}^{(0)}$ satisfies the Buchholz-Wichmann nuclearity condition, namely the map
\[
\Theta : {\cal C}(O) \to {\cal K}^{(0)},\qquad A \mapsto e^{-\beta H} A \Omega^{(0)},
\]
is nuclear for all $\beta > 0$ and $O$.
\end{Proposition}
\begin{proof}
The local algebras ${\cal C}(O)$ can be written as tensor products of two local algebras of the $U(1)$ current algebra on ${\mathbb R}$ relative to suitable intervals $I,J$. Now, it is well known that the $U(1)$ current algebra satisfies the trace-class condition and thus, thanks to \cite{BDL}, it satisfies the Buchholz-Wichmann nuclearity. Finally, it can be shown without too much trouble that the von Neumann tensor product of the Buchholz-Wichmann nuclear maps associated to the two intervals $I,J$ is again nuclear. To see this, observe that thanks to \cite[Lemma 2.2]{BDF} one can write $\Theta_I = \sum_n f_{n,I}(\cdot) \xi_{n,I}$ for some normal functionals $f_{n,I}$ on the local von Neumann algebra of the $U(1)$ current associated to the interval $I$ and some Hilbert space vectors $\xi_{n,I}$ such that $\sum_n \|f_{n,I}\| \, \|\xi_{n,I}\| < +\infty$. Now, by \cite[Sec. IV.5]{Ta}, $f_{n,I} \otimes f_{m,J}$ extends to a normal functional on ${\cal C}(O)$ whose norm is equal to $\|f_{n,I}\| \, \| f_{m,J} \|$. It is now clear that
$\Theta = \sum_{n,m} (f_{n,I} \otimes f_{m,J})(\cdot) \xi_{n,I} \otimes \xi_{m,J}$ on the algebraic tensor product of the U(1)-von Neumann algebras of the intervals $I$ and $J$. Since both sides of this equality are continuous w.r.t.\ the $\sigma$-weak topology on the domain and the weak topology on the target, then they coincide as maps on ${\cal C}(O)$ and furthermore $\sum_{n,m} \|f_{n,I} \otimes f_{m,J}\| \, \|\xi_{n,I} \otimes \xi_{m,J}\| < +\infty$.
\end{proof}
Hereafter, as in~\cite{BV2} it will be convenient to pass from the net ${\cal C}^{(m)}$ defined in~\eqref{eq:Cm}, to the one whose local algebras are
\[
\{\pi^{(0)}(\alpha^{(m)}_x(W( f)))\,: {\text{supp}\,} f \subset I, \, {\textstyle \int_{\mathbb R} f = 0}\}'',
\]
which is net-isomorphic to the former one thanks to Thm.\ \ref{thm:quasiequiv}. We will therefore assume that this has been done. Note that in particular, if $O_I$ is a double cone with basis on the time zero line, with this definition one has ${\cal C}^{(m)}(O_I) = {\cal C}^{(0)}(O_I)$.
\begin{Theorem}\label{thm:limitCm}
Let ${\cal C}^{(m)}_{0,\iota}$ be a scaling limit net of ${\cal C}^{(m)}$, $m > 0$, acting on the Hilbert space ${\cal K}_{0,\iota}^{(m)}$. Then there exists a unitary operator $V : {\cal K}_{0,\iota}^{(m)} \to {\cal K}^{(0)}$ satisfying the following properties:
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item \label{it:defadV}for all $\underline{A} \in \underline{{\cal C}}^{(m)}(O)$ there holds ${\rm Ad} V(\pi_{0,\iota}(\underline{A})) = \lim_\kappa \phi_{\lambda_\kappa}^{-1}(\underline{A}_{\lambda_\kappa})$ weakly;
\item \label{it:Vvacuum}$V\Omega_{0,\iota} = \Omega^{(0)}$;
\item ${\rm Ad} V\circ \alpha^{(m;0,\iota)}_{x} = \alpha^{(0)}_{x}$ for all translations $x \in {\mathbb R}^d$;
\item \label{it:adVCoim}for each pair of upright double cones $O \Subset \tilde O$,
\[
{\cal C}^{(0)}(O) \subset {\rm Ad} V({\cal C}^{(m)}_{0,\iota}(\tilde O)) \subset {\cal C}^{(0)}(\tilde O);
\]
\item \label{it:adVpoi} given a double cone $O_I$ with basis on the time zero line and $A \in {\cal C}^{(0)}(O_I) = {\cal C}^{(m)}(O_I)$, there holds
\[
{\rm Ad} V(\pi_{0,\iota}(\ua^{(m)}_h \boldsymbol{\phi}^\bullet(A) )) = \int_{{\mathbb R}^2} dx\,h(x) \alpha^{(0)}_x(A)
\]
for all $h \in C_c({\mathbb R}^2)$;
\item
for all $f \in {\cal D}({\mathbb R})$ such that $\int_{\mathbb R} f = 0$ the Weyl operator $W_{0,\iota}(f)$ leaves ${\cal K}_{0,\iota}^{(m)}$ invariant and
\[
{\rm Ad} V (W_{0,\iota}(f)) = \pi^{(0)} (W(f)) |_{{\cal K}^{(0)}} \ .
\]
\end{enumerate}
In particular, for the associated quasi-local algebras there holds ${\rm Ad} V({\cal C}^{(m)}_{0,\iota}) = {\cal C}^{(0)}$.
\end{Theorem}
\begin{proof}
The proof closely follows the one of~\cite[Thm.~3.1]{BV2}, so we limit ourselves to point out the main differences. The key ingredient of that proof, namely the local normality of the vacuum states of the massive and massless free scalar field, is here replaced by the isomorphism of Thm.~\ref{thm:quasiequiv}. Moreover,
one has, for $f \in {\cal D}({\mathbb R})$ such that $\int_{\mathbb R} f = 0$,
\begin{equation}\label{eq:normtranslation}\begin{split}
&\| \tau^{(\lambda m)}_x f - \tau_x^{(0)}f \|_0^2 = \|\tau_t^{(\lambda m)}f-\tau^{(0)}_tf\|^2_0\\
&= \frac12\int_{\mathbb R}\frac{d{\boldsymbol{p}}}{|{\boldsymbol{p}}|}\bigg| \big[ \cos(t\omega_{\lambda m}({\boldsymbol{p}}))-\cos(t|{\boldsymbol{p}}|)\big] \widehat{\Re f}({\boldsymbol{p}})
-\big[ \omega_{\lambda m}({\boldsymbol{p}})\sin(t\omega_{\lambda m}({\boldsymbol{p}}))-|{\boldsymbol{p}}|\sin(t|{\boldsymbol{p}}|)\big]\widehat{\Im f}({\boldsymbol{p}})\\
&\phantom{= \frac12\int_{\mathbb R}\frac{d{\boldsymbol{p}}}{|{\boldsymbol{p}}|}}
+i|{\boldsymbol{p}}|\big[ \cos(t\omega_{\lambda m}({\boldsymbol{p}}))-\cos(t|{\boldsymbol{p}}|)\big] \widehat{\Im f}({\boldsymbol{p}})
+i|{\boldsymbol{p}}|\bigg[\frac{\sin(t\omega_{\lambda m}({\boldsymbol{p}}))}{\omega_{\lambda m}({\boldsymbol{p}})}-\frac{\sin(t|{\boldsymbol{p}}|)}{|{\boldsymbol{p}}|}\bigg]\widehat{\Re f}({\boldsymbol{p}})\bigg|^2.
\end{split}\end{equation}
This integral
is seen to converge to zero, as $\lambda \to 0$, by an application of the dominated convergence theorem, since the integrand can be bounded, for fixed $t \in {\mathbb R}$ and all $\lambda \in [0,1]$, by the function
\begin{equation}\label{eq:estimate}
\frac{4}{|{\boldsymbol{p}}|}\left[(1+|t|\omega_m({\boldsymbol{p}}))\big|\widehat{\Re f}({\boldsymbol{p}})\big|+2\omega_m({\boldsymbol{p}})\big|\widehat{\Im f}({\boldsymbol{p}})\big|\right]^2,
\end{equation}
which is integrable thanks to the fact that $\hat f$ is a Schwartz function such that $\hat f(0)=0$. This shows that the key estimate in~\cite[Lemma 3.2(b)]{BV2} can be done also in our case.
The fact that ${\cal C}$ satisfies Buchholz-Wichmann nuclearity entails that this is true also for the subnet ${\cal C}^{(0)}$, and this allows us to repeat in our context the proof of~\cite[Lemma~3.3]{BV2}. The above arguments show the validity of (i)-(v). Finally, (vi) is obtained by observing that for all $\underline{A} \in \underline{{\cal C}}^{(m)}(O_I)$ and $f$ as in the statement one has $W_{0,\iota}(f) \poi^\bullet(\underline{A}) \Omega_{0,\iota} = \poi^\bullet({\underline{W}}(f)\underline{A})\Omega_{0,\iota} \in {\cal K}_{0,\iota}^{(m)}$ and then computing directly the l.h.s. of the equality using (v).
\end{proof}
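The vanishing of the integral \eqref{eq:normtranslation} as $\lambda \to 0$ can also be observed numerically. In the following informal check we take $\Im f = 0$ and a hypothetical even profile $g({\boldsymbol{p}}) = {\boldsymbol{p}}^2 e^{-{\boldsymbol{p}}^2}$ standing in for $\widehat{\Re f}$ (so that $g(0)=0$, as required); since the integrand is then even in ${\boldsymbol{p}}$, the prefactor $\frac12\int_{\mathbb R}$ equals the integral over the half-line:

```python
import numpy as np

# Illustrative check of the limit lambda -> 0 in eq:normtranslation,
# for Im f = 0 and a hypothetical profile g(p) = p^2 exp(-p^2) standing in
# for the Fourier transform of Re f (g(0) = 0, as required).
m, t = 1.0, 1.0
p = np.linspace(1e-4, 20.0, 200001)  # p > 0; (1/2) * int_R = int_0^inf here
dp = p[1] - p[0]
g = p**2 * np.exp(-(p**2))

vals = {}
for lam in (1.0, 0.5, 0.1):
    w = np.sqrt(p**2 + (lam * m) ** 2)  # w_{lam m}(p)
    term1 = (np.cos(t * w) - np.cos(t * p)) * g           # real part of the bracket
    term2 = p * (np.sin(t * w) / w - np.sin(t * p) / p) * g  # imaginary part
    vals[lam] = np.sum((term1**2 + term2**2) / p) * dp
    print(f"lambda = {lam:4.1f}   norm^2 = {vals[lam]:.3e}")
```

The decay is rapid, consistent with the fact that $\omega_{\lambda m}({\boldsymbol{p}}) - |{\boldsymbol{p}}| = O(\lambda^2)$ pointwise.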
Taking into account the outer regularity of ${\cal C}^{(0)}$, a consequence of the strong continuity of the action of the dilation group on ${\cal K}^{(0)}$, it readily follows that ${\rm Ad} V$ implements a net isomorphism between the outer regularized net of ${\cal C}^{(m)}_{0,\iota}$ and ${\cal C}^{(0)}$, similarly to the situation discussed in \cite{BV2} for the higher dimensional case. Of course, the quasi-local $C^*$-algebras of the net ${\cal C}^{(m)}_{0,\iota}$ and of its outer regularized net coincide.
\smallskip
We also see from this concrete situation that the dual of the scaling limit net is not necessarily equal to the scaling limit net of the dual.
\begin{Remark}\label{notiso}
The above results also show that the scaling limit of ${\cal C}^{(m)}$ is a proper subnet of the net ${\cal C}$ defined in~\eqref{eq:C}. In order to obtain the full net ${\cal C}$ in the scaling limit, the first guess would be to associate to the double cone $O = \Lambda O_I +x$ the von Neumann algebra
\begin{equation}\label{eq:Ctildem}
\{\alpha^{(m)}_{\Lambda,x}(\pi^{(m)}(W(f)))\,: {\text{supp}\,} f \subset I, \, {\textstyle \int_{\mathbb R} \Re f = 0}\}''.
\end{equation}
Being invariant under $\gamma_{(\lambda,0)}$, this is again a proper subalgebra of ${\cal A}^{(m)}(O)$. However, as pointed out to us by D. Buchholz, this has the serious drawback that the resulting family of von Neumann algebras does not satisfy isotony: if $J \Subset I$ and ${\text{supp}\,} f \subset J$ with $\int_{\mathbb R} \Re f = 0$, it is easy to see that for sufficiently small $t >0$ one has ${\text{supp}\,} \tau^{(m)}_t f \subset I$ but $\int_{\mathbb R} \Re \tau^{(m)}_t f \neq 0$, and therefore $\alpha^{(m)}_t(W(f))$ does not belong to the algebra associated to $O_I$. This also shows that the union of all such algebras associated to double cones based on the time zero line is not invariant under time translations.
\end{Remark}
\medskip
As shown in the following statement, the subnet ${\cal C}^{(m)}_{0,\iota}$ captures the relevant information about the above described sectors of $\Aoi^{(m)}$.
\begin{Proposition}\label{prop:restrictionz}
Let $\rho_{q,\iota}$ be a sector of the scaling limit theory $\Aoi^{(m)}$. Then its
restriction to the scaling limit net ${\cal C}^{(m)}_{0,\iota}$ is well defined, properly supported, and induces a non-trivial translation covariant sector of ${\cal C}^{(m)}_{0,\iota}$.
Moreover, the right cohomological extension of $\rho_{q,\iota} \upharpoonright {\cal C}^{(m)}_{0,\iota}$ coincides with $\rho_{q,\iota}$.
\end{Proposition}
\begin{proof}
According to Thm. \ref{thm:limitCm},
every element $A \in {\cal C}^{(m)}_{0,\iota}(O_I)$ is a strong limit of linear combinations of Weyl operators $W_{0,\iota}(f)$ with ${\text{supp}\,} f \subset I$ and $\int_{\mathbb R} f = 0$.
For any such Weyl operator,
by the Weyl relations $\rho_{q,\iota}(W_{0,\iota}(f)) = {V_n^{(q)}}^*W_{0,\iota}(f){V_n^{(q)}}$ differs from $W_{0,\iota}(f)$ by a phase factor, and therefore $\rho_{q,\iota}(A)$ still belongs to ${\cal C}^{(m)}_{0,\iota}(O_{\tilde{I}})$, where $\tilde{I} \Supset I$.
This shows that
the restriction of $\rho_{q,\iota}$ to ${\cal C}^{(m)}_{0,\iota}$ is well defined.
Moreover, such restriction is properly supported because $\rho_{q,\iota}$ is, and
it is not equivalent to the vacuum sector by an argument similar to the one in~\cite[Sec. 4]{BV2}, once one observes that the operators $Z_w^{(n)}(\pi/q)$ used there, which form an asymptotically central sequence, belong to ${\cal C}^{(m)}_{0,\iota}$ too, and $\rho_{q,\iota}(Z_w^{(n)}(\pi/q)) = e^{-i \pi}Z_w^{(n)}(\pi/q)$ for $n$ large enough. Finally, the statements about translation covariance and the cohomological extension are consequences of the following observations. For ${\boldsymbol{x}} \in {\mathbb R}$, the morphism
\[
\rho^{({\boldsymbol{x}})}_{q,\iota}(A) = \lim_{n\to +\infty} W_{0,\iota}(i\tau^{(0)}_{\boldsymbol{x}} u_n)^* A W_{0,\iota}(i\tau^{(0)}_{\boldsymbol{x}} u_n), \qquad A \in \Aoi^{(m)},
\]
is localized in $W_+ - a +{\boldsymbol{x}}$, restricts to a morphism of ${\cal C}^{(m)}_{0,\iota}$ by the same argument used for $\rho_{q,\iota}$, and is equivalent to the latter, a unitary intertwiner being
\[
W_{\boldsymbol{x}} = \lim_{n \to +\infty} W_{0,\iota}(iu_n) W_{0,\iota}(i\tau^{(0)}_{\boldsymbol{x}} u_n)^* ,
\]
where the limit exists in the strong operator topology as discussed in~\cite{BV2}. Moreover, for any wedge
$W \Supset (W_+-a) \cup (W_+ - a + {\boldsymbol{x}})$
we have $W_{0,\iota}(iu_n) W_{0,\iota}(i\tau^{(0)}_{\boldsymbol{x}} u_n)^* \in {\cal C}^{(m)}_{0,\iota}(W)$ for all $n \in {\mathbb N}$ because $\int_{\mathbb R} (u_n - \tau^{(0)}_{\boldsymbol{x}} u_n)=0$,
and thus $W_{\boldsymbol{x}} \in {\cal C}^{(m)}_{0,\iota}(W)''$.
Therefore, for each $A \in \Aoi^{(m)}(O)$, given ${\boldsymbol{x}} \in {\mathbb R}$ such that $W_+ - a+{\boldsymbol{x}} \subset O'$, one has
\[
\rho_{q,\iota}(A) = W_{\boldsymbol{x}} \rho_{q,\iota}^{({\boldsymbol{x}})}(A)W_{\boldsymbol{x}}^* = W_{\boldsymbol{x}} AW_{\boldsymbol{x}}^* \ ,
\]
as desired.
\end{proof}
As a matter of fact, the restriction of $\rho_{q,\iota}$ to ${\cal C}^{(m)}_{0,\iota}$ is localized in $O_{(-a,a)}$, cf. \cite[Prop. 4.5]{Cio}.
It is now possible to construct explicit examples of asymptotic morphisms of the net ${\cal C}^{(m)}_{0,\iota}$, satisfying the general properties discussed in \cite{CM2}.
\begin{Theorem}\label{thm:asympmorCm}
The family $(\phi_\lambda)_{\lambda > 0}$ defined in Eq.~\eqref{eq:phil} is an asymptotic isomorphism of ${\cal C}^{(m)}$ with respect to $\underline{\omega}_{0,\iota} = \lim_\kappa \underline{\omega}_{\lambda_\kappa}$.
Moreover, given a sector $\rho_{q,\iota}$ of the scaling limit theory $\Aoi^{(m)}$, the family $(\rho_\lambda)_{\lambda > 0}$, defined by the norm limit
\begin{equation}\label{eq:rl}
\rho_\lambda(A) = \lim_n {\underline{W}} (i u_n)_\lambda \phi_\lambda(A) {\underline{W}}(iu_n)^*_\lambda \ ,\quad A \in {\cal C}^{(m)} \ ,
\end{equation}
is a tame asymptotic morphism of ${\cal C}^{(m)}$
and it holds
\begin{equation}\label{eq:R2}
\poi^\bullet(\boldsymbol{\rho}^\bullet(A)) = \rho_{q,\iota}\big(\poi^\bullet\boldsymbol{\phi}^\bullet(A)\big)= \rho_{q,\iota}\big({\rm Ad} V^*(A)\big), \quad A \in {\cal C}^{(m)}.
\end{equation}
\end{Theorem}
\begin{proof}
We start by proving that $(\phi_\lambda)_\lambda$ is an asymptotic isomorphism.
The properties (5.1)-(5.3) of~\cite{CM2} are obvious since $\phi_\lambda$ is an automorphism for each $\lambda > 0$. In order to prove properties (i) and (ii) of~\cite[def.\ 5.1]{CM2}, consider an element $A \in {\cal C}^{(m)}(O)$
and, given $\varepsilon > 0$, choose $h \in C_c({\mathbb R}^2)$ such that
\begin{equation}
\Big\| \Big[A-\int_{{\mathbb R}^2} dx\,h(x)\alpha^{(0)}_x(A)\Big] \Omega^{(0)}\Big\| < \varepsilon.
\end{equation}
We have then the following equalities:
\begin{equation*}\begin{split}
\lim_\kappa \| [\phi_{\lambda_\kappa}(A) - \ua^{(m)}_{h}\boldsymbol{\phi}^\bullet(A)_{\lambda_\kappa}]\Omega^{(0)}\|^2 &= \lim_\kappa \| [A-\phi^{-1}_{\lambda_\kappa}(\ua^{(m)}_{h}\boldsymbol{\phi}^\bullet(A)_{\lambda_\kappa})]\Omega^{(0)}\|^2\\
&= \| [A-{\rm Ad} V(\pi_{0,\iota}(\ua^{(m)}_{h}\boldsymbol{\phi}^\bullet(A)))]\Omega^{(0)}\|^2 \\
&= \Big\| \Big[A-\int_{{\mathbb R}^2} dx\,h(x)\alpha^{(0)}_x(A)\Big] \Omega^{(0)}\Big\|^2 < \varepsilon^2.
\end{split}\end{equation*}
Here, the first equality follows from the fact that $\phi_\lambda$ is unitarily implemented and leaves the massless vacuum invariant, the second one follows from Thm.~\ref{thm:limitCm}\ref{it:defadV}, and the third one follows from Thm.~\ref{thm:limitCm}\ref{it:adVpoi} when one observes that $O$ is contained in some $O_I$ large enough. Since a similar argument holds for $\| [\phi_{\lambda_\kappa}(A) - \ua^{(m)}_{h}\boldsymbol{\phi}^\bullet(A)_{\lambda_\kappa}]^*\Omega^{(0)}\|$,
we conclude that $\boldsymbol{\phi}^\bullet(A):=(\lambda \mapsto \phi_\lambda(A))$ belongs to $\underline{{\cal C}}^{(m)\bullet}(O)$. Moreover, it is clear that the map $A \in {\cal C}^{(m)}_{\text{loc}} \mapsto \boldsymbol{\phi}^\bullet(A) \in {\underline{{\cal C}}^{(m)\bullet}_{\text{loc}}}$ is norm continuous, and therefore it extends to a norm continuous map from ${\cal C}^{(m)}$ to $\underline{{\cal C}}^{(m)\bullet}$. Since $\poi^\bullet ({\underline{{\cal C}}^{(m)\bullet}_{\text{loc}}}) \subset {\cal C}_{0,\iota,\text{loc}}^{(m)}$ by~\cite[thm.\ 4.6]{CM2}, this also shows that property (iii) of~\cite[def.\ 5.1]{CM2} is valid. It is also clear that, since $\phi_\lambda$ is an automorphism, $A \mapsto \boldsymbol{\phi}^\bullet(A)$ is injective, i.e., we get property (i) of~\cite[def.\ 5.4]{CM2}. Finally we show the validity of property (ii) of~\cite[def.\ 5.4]{CM2}. To this end, we claim that the map $\bar s : {\cal C}^{(m)}_{0,\iota} \to \underline{{\cal C}}^{(m)\bullet}$ defined by
\[
\bar s(A_0)_\lambda := \phi_\lambda {\rm Ad} V(A_0), \qquad A_0 \in {\cal C}^{(m)}_{0,\iota},
\]
is a continuous section of $\poi^\bullet : \underline{{\cal C}}^{(m)\bullet} \to {\cal C}^{(m)}_{0,\iota}$. The map is obviously continuous. Moreover, the fact that it is a section follows at once from the identity
\begin{equation}\label{eq:poiacpac}
\poi^\bullet(\boldsymbol{\phi}^\bullet(A)) = {\rm Ad} V^*(A), \qquad A \in {\cal C}^{(m)},
\end{equation}
applied to $A = {\rm Ad} V(A_0) \in {\cal C}^{(0)} = {\cal C}^{(m)}$. In turn, the latter equation is proven through the equalities
$$\poi^\bullet \boldsymbol{\phi}^\bullet (A) = \lim_{h \to \delta} \pi_{0,\iota}(\ua^{(m)}_h \boldsymbol{\phi}^\bullet(A)) = \lim_{h \to \delta} {\rm Ad} V^* \bigg( \int_{{\mathbb R}^2} dx\,h(x) \alpha^{(0)}_x(A) \bigg) = {\rm Ad} V^* (A) \ $$
(where the limits are taken in the strong operator topology),
which are consequences of \cite[Lemma 4.5]{CM2} and Thm.\ \ref{thm:limitCm}(v).
The proof of (ii) of~\cite[def.\ 5.4]{CM2} is then achieved by observing that by Thm.~\ref{thm:limitCm}\ref{it:adVCoim} ${\rm Ad} V(\bigcup_O {\cal C}^{(m)}_{0,\iota}(O)) = \bigcup_O {\cal C}^{(0)}(O)= \bigcup_O {\cal C}^{(m)}(O)$, and therefore $\bar s(\bigcup_O {\cal C}^{(m)}_{0,\iota}(O)) = \boldsymbol{\phi}^\bullet {\rm Ad} V(\bigcup_O {\cal C}^{(m)}_{0,\iota}(O)) = \boldsymbol{\phi}^\bullet(\bigcup_O {\cal C}^{(m)}(O))$.
In order to show that $\rho_\lambda$ is a tame asymptotic morphism, we recall that ${\underline{W}}(f) \in \bigcup_O \underline{{\cal C}}^{(m)\bullet}(O)$ for $f \in {\cal D}({\mathbb R})$ such that $\int_{\mathbb R} f =0$. This fact, together with what we have just shown, makes it clear that $\boldsymbol{\rho}^\bullet(A) \in \underline{{\cal C}}^{(m)\bullet}$ for all $A \in {\cal C}^{(m)}$ and that
$\poi^\bullet(\boldsymbol{\rho}^\bullet(\bigcup_O {\cal C}^{(m)}(O))) \subset \bigcup_O {\cal C}^{(m)}_{0,\iota}(O)$, as required. All the remaining properties are obviously satisfied.
Finally, the formula (\ref{eq:R2}) can be verified by a direct computation. Indeed, by~\eqref{eq:rl} and~\eqref{eq:rqi},
$$\poi^\bullet(\boldsymbol{\rho}^\bullet(A)) = \lim_n W_{0,\iota} (iu_n) \poi^\bullet \boldsymbol{\phi}^\bullet (A) W_{0,\iota}(iu_n)^* = \rho_{q,\iota}(\poi^\bullet \boldsymbol{\phi}^\bullet (A))$$
and then one uses~\eqref{eq:poiacpac}.
\end{proof}
It can also be shown that ${\cal C}^{(m)}$ has a convergent (and therefore unique) scaling limit, by repeating the argument in \cite[Thm.\ 7.5]{BDM2}, \emph{mutatis mutandis}.
\section{Asymptotic morphisms and smoothed out Weyl operators}\label{sec:Schwinger}
As remarked in Sec.~\ref{sect:Free}, ${\cal A}^{(m)}$ and $\Aoi^{(m)}$ are not isomorphic and therefore $(\phi_\lambda)$ cannot be an asymptotic isomorphism of ${\cal A}^{(m)}$. It is however natural to ask how far it goes in this direction. To this end, a relevant condition is that the function $\lambda \mapsto \phi_\lambda(A)$ belongs to $\uAA^{(m)\bullet}$ for all $A \in {\cal A}^{(m)}$. We will see shortly that this is not the case in general. However, a partial result in this direction can be formulated by introducing the local, Poincar\'e covariant net of C*-algebras $O \mapsto {\mathfrak W}^{(m)}_r(O)$ generated by smoothed out Weyl operators, i.e., by elements of ${\cal A}^{(m)}(O)$ of the form
\begin{equation}\label{eq:regweyl}
\int_{{\mathbb R}^2} dy \,g(y) \alpha^{(m)}_y(W(f)), \qquad g \in C_c({\mathbb R}^2).
\end{equation}
It is easy to verify that the action $x \mapsto \alpha^{(m)}_x(W)$ of translations is norm continuous for all $W \in {\mathfrak W}^{(m)}_r(O)$ and, using
\eqref{eq:innercont} and Lorentz covariance, that ${\mathfrak W}^{(m)}_r(O)$ is strongly dense in ${\cal A}^{(m)}(O)$.
We also define the C*-subalgebra ${\cal A}^{(m)}_\phi(O) \subset {\cal A}^{(m)}(O)$ of all elements $A \in {\cal A}^{(m)}(O)$ which are mapped into $\uAA^{(m)\bullet}(O)$ by the isometric morphism $\boldsymbol{\phi}^\bullet : {\cal A}^{(m)} \to \ell^\infty({\mathbb R}_+,B({\cal H}))$ (the C*-algebra of bounded functions from ${\mathbb R}_+$ to $B({\cal H})$), and we denote by ${\cal A}^{(m)}_\phi$ the inductive limit of the net $O \mapsto {\cal A}^{(m)}_\phi(O)$.
We already know, by~\eqref{eq:weylreg}, that $W(f) \in {\cal A}^{(m)}_\phi(O_I)$ for any test function $f \in {\cal D}(I)$. We now show that ${\mathfrak W}^{(m)}_r(O) \subset {\cal A}^{(m)}_\phi(O)$.
\begin{Proposition}\label{prop:philW}
For all $W \in {\mathfrak W}^{(m)}_r(O)$, $\lambda \mapsto \phi_\lambda(W)$ belongs to $\uAA^{(m)\bullet}(O)$.
\end{Proposition}
\begin{proof}
Thanks to the fact that $\boldsymbol{\phi}^\bullet$ is an isometric morphism, it is sufficient to prove the statement for elements $W \in {\mathfrak W}^{(m)}_r(O)$ of the form~\eqref{eq:regweyl}. Then, for such a $W$ one has
\[\begin{split}
\sup_{\lambda \in (0,1)} \big\|\big[\phi_\lambda(W)-\alpha^{(m)}_{\lambda x}&\phi_\lambda(W)\big]\Omega^{(m)}\big\|\\
&=\sup_{\lambda \in (0,1)}\left\| \int_{{\mathbb R}^2}dy\,g(y)\big[\phi_\lambda(W(\tau_y^{(m)}f))-\alpha^{(m)}_{\lambda x}\phi_\lambda(W(\tau^{(m)}_yf))\big]\Omega^{(m)}\right\| \\
&\leq \int_{{\mathbb R}^2}dy\,|g(y)|\sup_{\lambda \in (0,1)}\big\|\big[\phi_\lambda(W(\tau_y^{(m)}f))-\alpha^{(m)}_{\lambda x}\phi_\lambda(W(\tau^{(m)}_yf))\big]\Omega^{(m)}\big\|,
\end{split}\]
and therefore~\cite[Eq. (4.13)]{BV2}, together with the dominated convergence theorem, shows that
\[
\lim_{x \to 0}\sup_{\lambda \in (0,1)} \big\|\big[\phi_\lambda(W)-\alpha^{(m)}_{\lambda x}\phi_\lambda(W)\big]\Omega^{(m)}\big\| = 0.
\]
This, in turn, implies that $\limsup_{\lambda \to 0} \| [\phi_\lambda(W) - \underline{\alpha}_h (\boldsymbol{\phi}^\bullet(W))_\lambda]\Omega^{(m)}\|$ can be made arbitrarily small for $h \in C_c({\mathbb R}^2)$ sufficiently close to a delta function. In a similar way, $\limsup_{\lambda \to 0} \| [\phi_\lambda(W) - \underline{\alpha}_h (\boldsymbol{\phi}^\bullet(W))_\lambda]^*\Omega^{(m)}\|$ can be made small as well and therefore $\boldsymbol{\phi}^\bullet(W) \in \uAA^{(m)\bullet}(O)$, as desired.
\end{proof}
\begin{Proposition}\label{prop:notiso}
Let $W = \int_{{\mathbb R}^2} dy \,g(y) \alpha^{(m)}_y(W(f))$, where $\int_{\mathbb R} \Re f = 0$, $\int_{\mathbb R} \Im f \neq 0$, and $g \in C_c({\mathbb R}^2)$ is non-negative and not identically zero. Then $W \neq 0$ and $\poi^\bullet\boldsymbol{\phi}^\bullet(W) = 0$.
\end{Proposition}
\begin{proof}
One has that
\[
\langle \Omega^{(m)}, W\Omega^{(m)}\rangle = \int_{{\mathbb R}^2}dx\,g(x) e^{-\frac 1 2 \| \tau_x^{(m)} f\|_m^2}
\]
is strictly positive, and therefore $W \neq 0$. We can now choose a sequence $(\lambda_n)_{n \in {\mathbb N}} \subset {\mathbb R}_+$ such that
\begin{equation}\label{eq:normzero}\begin{split}
\|\poi^\bullet\boldsymbol{\phi}^\bullet(W)\Omega_{0,\iota}\|^2 &= \lim_{n \to +\infty} \| \phi_{\lambda_n}(W)\Omega^{(m)}\|^2\\
&= \lim_{n \to +\infty} \left\| \int_{{\mathbb R}^2} dx\, g(x) W(\delta_{\lambda_n}\tau_x^{(m)}f)\Omega^{(m)}\right\|^2 \\
&= \lim_{n \to +\infty} \int_{{\mathbb R}^4} dxdy\, g(x)g(y) e^{\frac i 2 \sigma(\tau_x^{(m)}f,\tau_y^{(m)}f)}e^{-\frac 1 2 \| (\tau_y^{(m)}-\tau_x^{(m)})f\|^2_{\lambda_n m}},
\end{split}\end{equation}
where in the last equality we used the dilation invariance of the symplectic form. We now observe that, using the notation $x = (t,{\boldsymbol{x}})$, $y = (s,{\boldsymbol{y}})$, we have
\[\begin{split}
\int_{{\mathbb R}} \Re(\tau_y^{(m)} - \tau_x^{(m)})f &= \int_{{\mathbb R}} \Re(\tau_{\boldsymbol{y}}\tau_s^{(m)} - \tau_{\boldsymbol{x}}\tau_t^{(m)})f = \int_{{\mathbb R}} \Re(\tau_s^{(m)} - \tau_t^{(m)})f\\
&= \big[\Re(\tau_s^{(m)} - \tau_t^{(m)})f\big]\widehat{\;}(0) = m[\sin(tm)-\sin(sm)]\widehat{\Im f}(0),
\end{split}\]
where we used the translation invariance of the integral in the second equality and~\eqref{eq:timeev} in the fourth one. The above quantity vanishes only if $s \in t +\frac{2\pi}{m}{\mathbb Z}$ or $s \in -t +\frac{\pi}{m}(2{\mathbb Z}+1)$, and therefore on a set of measure zero in ${\mathbb R}^2$. Recalling then that, for $h \in {\cal D}({\mathbb R})$ such that $\int_{\mathbb R} \Re h \neq 0$,
\[
\lim_{m \to 0} \|h\|_{m}^2 = \lim_{m\to 0} \int_{{\mathbb R}} d{\boldsymbol{p}}\,\left|\frac{\widehat{\Re h}({\boldsymbol{p}})}{\sqrt{\omega_m({\boldsymbol{p}})}}+i\sqrt{\omega_m({\boldsymbol{p}})}\widehat{\Im h}({\boldsymbol{p}})\right|^2 = +\infty
\]
we see that the limit, as $n \to +\infty$, of the integrand in the last line of~\eqref{eq:normzero} vanishes almost everywhere, and therefore, by dominated convergence,
\[
\|\poi^\bullet\boldsymbol{\phi}^\bullet(W)\Omega_{0,\iota}\|^2 = 0.
\]
The conclusion is then obtained by the separating property of $\Omega_{0,\iota}$ for local algebras.
\end{proof}
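It may be worth making the divergence $\lim_{m \to 0} \|h\|^2_m = +\infty$, used in the last part of the above proof, more explicit (a sketch, with the Fourier transform normalization implicit in the formulas above). Since the integrand in the expression of $\|h\|^2_m$ is non-negative and converges pointwise for ${\boldsymbol{p}} \neq 0$ as $m \to 0$, Fatou's lemma gives
\[
\liminf_{m \to 0} \|h\|_{m}^2 \geq \int_{{\mathbb R}} d{\boldsymbol{p}}\,\left|\frac{\widehat{\Re h}({\boldsymbol{p}})}{\sqrt{|{\boldsymbol{p}}|}}+i\sqrt{|{\boldsymbol{p}}|}\,\widehat{\Im h}({\boldsymbol{p}})\right|^2 = +\infty,
\]
where the last integral diverges because, $\widehat{\Re h}$ being continuous with $\widehat{\Re h}(0) \neq 0$, the integrand behaves as $|\widehat{\Re h}(0)|^2/|{\boldsymbol{p}}|$ for ${\boldsymbol{p}} \to 0$, which is not integrable in one dimension.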
We put on record a pair of immediate consequences of the above result.
\begin{Corollary}\label{cor:notsimple}
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
The following statements hold:
\begin{enumerate}
\item the quasi-local C*-algebras ${\mathfrak W}^{(m)}_r$, ${\cal A}^{(m)}_\phi$ are not simple;
\item ${\cal A}^{(m)}_\phi$ is a proper subalgebra of ${\cal A}^{(m)}$.
\end{enumerate}
\end{Corollary}
The second statement in the corollary makes it plain that $\boldsymbol{\phi}^\bullet$ does not map ${\cal A}^{(m)}$ into $\uAA^{(m)\bullet}$.
However, as shown in the next proposition, the map $\poi^\bullet\boldsymbol{\phi}^\bullet : {\cal A}^{(m)}_\phi \to \Aoi^{(m)}$ acts, on suitable elements, in a way that closely resembles the isomorphism between the free scalar field net in $d \geq 3$ and its scaling limit built in~\cite{BV2}.
\begin{Proposition}\label{prop:Wupzero}
Let $f \in {\cal D}({\mathbb R})$, $h \in C_c({\mathbb R}^2)$.
There holds:
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item ${\displaystyle \pi_{0,\iota}(\ua^{(m)}_h{\underline{W}}(f)) = \int_{{\mathbb R}^2} dx\,h(x) \alpha^{(m;0,\iota)}_x(W_{0,\iota}(f))};$
\item there exists
\begin{equation}\label{eq:Wupzero}
\lim_{\lambda \to 0} \phi^{-1}_\lambda(\ua^{(m)}_h{\underline{W}}(f)_\lambda) = \int_{{\mathbb R}^2}dx\,h(x) W(\tau^{(0)}_xf) =:W^{(0)}_{h,f}\in {\cal A}^{(m)}
\end{equation}
in the strong operator topology, where the integral in the r.h.s.\ is defined in the strong sense;
\item $\boldsymbol{\phi}^\bullet(W^{(0)}_{h,f}) \in \uAA^{(m)\bullet}$ and $\poi^\bullet(\boldsymbol{\phi}^\bullet(W^{(0)}_{h,f})) = \pi_{0,\iota}(\ua^{(m)}_h{\underline{W}}(f))$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i) Let $g \in C_c({\mathbb R}^2)$. One has:
\[\begin{split}
\int_{{\mathbb R}^2} dy\,g(y)\alpha^{(m;0,\iota)}_y\big(\pi_{0,\iota}(\ua^{(m)}_h{\underline{W}}(f))\big) &= \int_{{\mathbb R}^2}dy\,g(y)\pi_{0,\iota}\big(\ua^{(m)}_y\ua^{(m)}_h{\underline{W}}(f)\big)\\
&=\pi_{0,\iota}\bigg(\int_{{\mathbb R}^2}dy\,g(y) \ua^{(m)}_y\ua^{(m)}_h{\underline{W}}(f)\bigg)\\
&= \pi_{0,\iota}\big(\ua^{(m)}_{g*h}{\underline{W}}(f)\big)\\
&= \int_{{\mathbb R}^2}dx\,h(x)\alpha^{(m;0,\iota)}_x\big(\pi_{0,\iota}(\ua^{(m)}_g{\underline{W}}(f))\big),
\end{split}\]
where we used in the second equality the fact that the C*-algebra-valued function
$$y \mapsto g(y) \ua^{(m)}_y\ua^{(m)}_h{\underline{W}}(f),$$
being continuous and compactly supported, is Bochner-integrable, and in the fourth one the commutativity of the convolution product. The statement is then obtained by taking, in the strong operator topology, the limit $g \to \delta$ on both sides, and by recalling that in such a limit $\pi_{0,\iota}(\ua^{(m)}_g{\underline{W}}(f)) \to W_{0,\iota}(f)$.
(ii) We postpone for a moment the proof that the integral in the r.h.s.\ is well defined.
Using the commutation relations between dilations and translations on ${\cal D}({\mathbb R})$, $\delta_\lambda \tau^{(\lambda m)}_x = \tau^{(m)}_{\lambda x}\delta_\lambda$, there holds:
\[
\bigg\|\bigg[\phi^{-1}_\lambda(\ua^{(m)}_h{\underline{W}}(f)_\lambda)-\int_{{\mathbb R}^2}dx\,h(x) W(\tau^{(0)}_xf)\bigg]\Omega^{(m)}\bigg\| \leq \int_{{\mathbb R}^2}dx\,|h(x)| \big\|\big[W(\tau^{(\lambda m)}_xf)-W(\tau^{(0)}_xf)\big]\Omega^{(m)}\big\|.
\]
Moreover, if $x = (t, {\boldsymbol{x}})$, since $\| \tau_{\boldsymbol{x}} g \|_m = \|g\|_m$ for all $m \geq 0$,
\begin{equation*}
\| \tau^{(\lambda m)}_x f - \tau_x^{(0)}f \|_m^2 = \|\tau_t^{(\lambda m)}f-\tau^{(0)}_tf\|^2_m
\end{equation*}
can be expressed by an integral as in Eq.~\eqref{eq:normtranslation}, with the change $|{\boldsymbol{p}}| \to \omega_m({\boldsymbol{p}})$. Therefore the same argument used there, with the same change in~\eqref{eq:estimate}, guarantees that the dominated convergence theorem is applicable, thus yielding $\lim_{\lambda \to 0} \| \tau^{(\lambda m)}_x f - \tau_x^{(0)}f \|_m^2 =0$. This in turn implies
\[
\lim_{\lambda \to 0} \big\|\big[W(\tau^{(\lambda m)}_xf)-W(\tau^{(0)}_xf)\big]\Omega^{(m)}\big\| = 0,
\]
and a further application of the dominated convergence theorem, together with the fact that the vacuum is separating for the local algebras, gives the statement. Finally, we notice that in a similar way one can show that $\|\tau^{(0)}_{t'} f-\tau^{(0)}_{t}f\|_m \to 0$ as $t' \to t$, which, together with the fact that space translations are mass independent, implies that the function $x\in {\mathbb R}^2 \mapsto W(\tau^{(0)}_xf)$ is strongly continuous, and therefore the integral on the right hand side of~\eqref{eq:Wupzero} is well defined in the strong topology.
(iii) Similarly to point (ii) above, one has, for all $\lambda > 0$,
\[
\big\|\big[\phi_\lambda(W^{(0)}_{h,f})-(\ua^{(m)}_h{\underline{W}}(f))_\lambda\big]\Omega^{(m)}\big\| \leq \int_{{\mathbb R}^2}dx\,|h(x)|\big\|\big[W(\delta_\lambda\tau^{(0)}_xf)-W(\delta_\lambda\tau_{x}^{(\lambda m)}f)\big]\Omega^{(m)}\big\|,
\]
and $\lim_{\lambda \to 0}\| \delta_\lambda\tau^{(0)}_xf-\delta_\lambda\tau_{x}^{(\lambda m)}f\|_m^2 = \lim_{\lambda \to 0} \| \tau^{(0)}_t f - \tau^{(\lambda m)}_t f\|_{\lambda m}^2 = 0$. This last statement follows by observing that $\| \tau^{(0)}_t f - \tau^{(\lambda m)}_t f\|_{\lambda m}^2$ is expressed again by an integral obtained from the one in~\eqref{eq:normtranslation} by the replacement $|{\boldsymbol{p}}| \to \omega_{\lambda m}({\boldsymbol{p}})$. Moreover, this integral can be split as the sum of an integral over the region $|{\boldsymbol{p}}| \leq 1$ and one over the region $|{\boldsymbol{p}}| > 1$. The latter integral is seen to converge to zero, as $\lambda \to 0$,
since the bound \eqref{eq:estimate}
is integrable for $|{\boldsymbol{p}}| > 1$. To treat the possible divergence, for $\lambda \to 0$, of the integral over $|{\boldsymbol{p}}| \leq 1$, we observe that, for $\lambda \in (0,1)$, there hold the elementary bounds
\begin{align*}
\frac{|\cos(t\omega_{\lambda m}({\boldsymbol{p}}))-\cos(t|{\boldsymbol{p}}|)|}{\omega_{\lambda m}({\boldsymbol{p}})^{1/2}} &\leq |t| \frac{\omega_{\lambda m}({\boldsymbol{p}})-|{\boldsymbol{p}}|}{\omega_{\lambda m}({\boldsymbol{p}})^{1/2}} \leq |t| \, \omega_m({\boldsymbol{p}})^{1/2},\\
\frac{| \omega_{\lambda m}({\boldsymbol{p}})\sin(t\omega_{\lambda m}({\boldsymbol{p}}))-|{\boldsymbol{p}}|\sin(t|{\boldsymbol{p}}|)|}{\omega_{\lambda m}({\boldsymbol{p}})^{1/2}} &\leq \frac{\omega_{\lambda m}({\boldsymbol{p}})-|{\boldsymbol{p}}|}{\omega_{\lambda m}({\boldsymbol{p}})^{1/2}} |\sin(t\omega_{\lambda m}({\boldsymbol{p}}))|\\
&+|{\boldsymbol{p}}|\frac{ |\sin(t\omega_{\lambda m}({\boldsymbol{p}}))-\sin(t|{\boldsymbol{p}}|)|}{\omega_{\lambda m}({\boldsymbol{p}})^{1/2}} \leq 3 \, \omega_m({\boldsymbol{p}})^{1/2},
\end{align*}
which show that the integrand is bounded by an integrable function of ${\boldsymbol{p}}$ uniformly for $\lambda \in (0,1)$, and therefore one can apply the dominated convergence theorem once more.
Therefore, one concludes that
\[
\lim_{\lambda \to 0}\big\|\big[\phi_\lambda(W^{(0)}_{h,f})-(\ua^{(m)}_h{\underline{W}}(f))_\lambda\big]\Omega^{(m)}\big\| = 0,
\]
which, together with $\phi_\lambda(W^{(0)}_{h,f})^* = \phi_\lambda(W^{(0)}_{\bar h,-f})$, implies that $\boldsymbol{\phi}^\bullet(W^{(0)}_{h,f}) \in \uAA^{(m)\bullet}$ and,
using~\cite[Lemma~4.4]{CM2} and the fact that $\Omega_{0,\iota}$ is separating for the local algebras $\Aoi^{(m)}(O)$,
also that $\poi^\bullet(\boldsymbol{\phi}^\bullet(W^{(0)}_{h,f})) = \pi_{0,\iota}(\ua^{(m)}_h{\underline{W}}(f))$.
\end{proof}
In particular, one can deduce from the proof of point (iii) above that $\alpha^{(m;0,\iota)}_x(W_{0,\iota}(f)) =W_{0,\iota}(\tau^{(0)}_x f)$ for all $x \in {\mathbb R}^2$ and $f \in {\cal D}({\mathbb R})$. Moreover, by similar arguments, using the expressions for Lorentz transformation in~\cite[Eqs.\ (7.12)-(7.13)]{BDM2}, it is also possible to show that
$\lim_{\lambda \to 0} \| \tau^{(0)}_\Lambda f - \tau^{(\lambda m)}_\Lambda f\|_{\lambda m}^2 = 0$, which entails
$\alpha^{(m;0,\iota)}_\Lambda(W_{0,\iota}(f)) = W_{0,\iota}(\tau^{(0)}_\Lambda f)$ for all $\Lambda$ in the Lorentz group. Thanks to this observation, we see that the net ${\cal A}^{(0)}$ is isomorphic to the subnet of $\Aoi^{(m)}$ generated by the Weyl operators $W_{0,\iota}(f)$, $f \in {\cal D}({\mathbb R})$, with an isomorphism mapping $\pi^{(0)}(W(f))$ to $W_{0,\iota}(f)$ which intertwines the respective Poincar\'e group actions. In particular, we can also think of ${\cal C}$ as a covariant subnet of $\Aoi^{(m)}$. Upon this identification, it follows from statements (i) and (iii) of Prop.~\ref{prop:Wupzero} that
\begin{equation}\label{eq:WhfinC}
\int_{\mathbb R} \Re f = 0 \quad \Rightarrow \quad \poi^\bullet(\boldsymbol{\phi}^\bullet(W^{(0)}_{h,f})) = \int_{{\mathbb R}^2} dx\,h(x) \alpha^{(m;0,\iota)}_x(W_{0,\iota}(f)) \in {\cal C}.
\end{equation}
According to Thm.~\ref{thm:asympmorCm}
and the results in this section, the relations among the various subnets of ${\cal A}^{(m)}$ introduced so far can be summarized as
$$
\begin{array}{ccccc}
{\mathfrak W}^{(m)}_r \cup {\mathfrak W}^{(0)}_r & \subset & {\cal A}^{(m)}_\phi & \subsetneq & {\cal A}^{(m)} \\
&& \rotatebox{90}{$\subsetneq$} & & \\
&& {\cal C}^{(m)}&&
\end{array}
$$
where ${\mathfrak W}^{(0)}_r$ is the net of $C^*$-algebras generated by the operators in \eqref{eq:Wupzero}.
For reference's sake, we note that if $W \in {\mathfrak W}^{(m)}_r$ is an element of the form~\eqref{eq:regweyl}, then
\[
\phi_\lambda(W) = \int_{{\mathbb R}^2}dy\,g(y) \alpha^{(\lambda^{-1}m)}_{\lambda y}(W(\delta_\lambda f)).
\]
In the sequel, we focus on the net of $C^*$-algebras ${\cal A}^{(m)}_\phi$. By Prop.~\ref{prop:philW}, ${\cal A}^{(m)}_\phi$ is strongly locally dense in ${\cal A}^{(m)}$.
As above, we consider
\[\rho_\lambda(B) = \rho(\lambda) \phi_\lambda (B) = \lim_n {\underline{W}}(iu_n)^*_\lambda \phi_\lambda(B) {\underline{W}}(iu_n)_\lambda, \qquad \lambda >0,\, B \in {\cal A}^{(m)}_\phi,
\]
where the limit exists in the norm topology, and $\rho_\lambda$ is a morphism from ${\cal A}^{(m)}_\phi$ into ${\cal A}^{(m)}$.
\begin{Proposition}
The following statements hold:
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item for $B \in {\cal A}^{(m)}_\phi$, one has $\boldsymbol{\rho}^\bullet(B) \in \uAA^\bullet$ and
\begin{equation}\label{eq:rhoqaphi}
\poi^\bullet(\boldsymbol{\rho}^\bullet(B)) = \rho_{q,\iota}(\poi^\bullet\boldsymbol{\phi}^\bullet(B)) \in \Aoi^{(m)};
\end{equation}
\item the map $B \in {\cal A}^{(m)}_\phi \mapsto \boldsymbol{\rho}^\bullet(B) \in \uAA^\bullet$ is norm continuous;
\item $\poi^\bullet\left(\boldsymbol{\rho}^\bullet\left(\bigcup_O {\cal A}^{(m)}_\phi(O)\right)\right) \subset \bigcup_O \Aoi^{(m)}(O)$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i) For a local element $B \in {\cal A}^{(m)}_\phi(O)$, both assertions follow from the fact that, for $n$ large enough (namely, $n$ such that $\bar O$ is in the left spacelike complement of the point $(0,na)$),
\[
\rho_\lambda(B) = {\underline{W}}(iu_n)^*_\lambda \phi_\lambda(B) {\underline{W}}(iu_n)_\lambda, \qquad \lambda > 0,
\]
and, since $\poi^\bullet\boldsymbol{\phi}^\bullet(B) \in \Aoir^{(m)}(O)$ by~\cite[Thm.\ 4.6]{CM2}, also
\[
\rho_{q,\iota}(\poi^\bullet\boldsymbol{\phi}^\bullet(B)) = W_{0,\iota}(iu_n)^*\poi^\bullet\boldsymbol{\phi}^\bullet(B)W_{0,\iota}(iu_n).
\]
The extension to quasi-local elements in ${\cal A}^{(m)}_\phi$ is then simply a consequence of the fact that $\boldsymbol{\rho}^\bullet$, $\poi^\bullet$, $\boldsymbol{\phi}^\bullet$ and $\rho_{q,\iota}$ are C*-algebra morphisms.
(ii) This is an immediate consequence of the obvious fact that $\boldsymbol{\rho}^\bullet : {\cal A}^{(m)}_\phi \to \uAA^\bullet$ is a C*-algebra morphism.
(iii) As seen in the proof of (i), $\poi^\bullet(\boldsymbol{\rho}^\bullet(B)) = \rho_{q,\iota}(\poi^\bullet\boldsymbol{\phi}^\bullet(B))$ belongs to $\bigcup_O \Aoi^{(m)}(O)$ for $B \in \bigcup_O {\cal A}^{(m)}_\phi(O)$.
\end{proof}
By letting $h$ converge to a $\delta$ function in Eq.~\eqref{eq:WhfinC}, we observe
that $\poi^\bullet\boldsymbol{\phi}^\bullet({\cal A}^{(m)}_\phi(O))^-$ contains ${\cal C}(O)$.
We deduce that $\rho_{q,\iota}$, being strongly continuous in restriction to local algebras, is uniquely determined on ${\cal C}$ by the knowledge of $(\rho_\lambda)$ through Eq.~\eqref{eq:rhoqaphi}, and then on the whole $\Aoi^{(m)}$ by a cohomological extension procedure as in Prop.~\ref{prop:restrictionz}.
In particular, setting $B = W_{h,f}^{(0)}$ in~\eqref{eq:rhoqaphi} we obtain, using Prop.~\ref{prop:Wupzero}(i) and (iii), and Eq.~\eqref{eq:rhol},
\[
\rho_{q,\iota}\bigg(\int_{{\mathbb R}^2} dx\, h(x) \alpha^{(m;0,\iota)}_x(W_{0,\iota}( f))\bigg) = \poi^\bullet\bigg(\lambda \mapsto \int_{{\mathbb R}^2} dx\,h(x) e^{-i\int_{\mathbb R} u_\infty \Re( \tau_x^{(0)}f)} W(\tau^{(0)}_x\delta_\lambda f)\bigg).
\]
By Cor.~\ref{cor:notsimple}, ${\cal A}^{(m)}_\phi$ is a proper subalgebra of ${\cal A}^{(m)}$. This implies in particular that we cannot make sense of Eq.~\eqref{eq:rhoqaphi} for an arbitrary $B \in {\cal A}^{(m)}$.
Moreover, $(\phi_\lambda)$ is not an asymptotic isomorphism of ${\cal A}^{(m)}$ in the sense of \cite[Def. 5.4]{CM2} and
therefore we cannot establish the analogue of Eq.~(5.5) of~\cite{CM2}. To obtain such a result, one could be tempted to enlarge the algebra $\uAA^\bullet$ so as to encompass all functions of the form $\lambda \mapsto \phi_\lambda(A)$, $A \in {\cal A}^{(m)}$, but Prop.~\ref{prop:notiso} implies that this cannot be done in such a way that $\poi^\bullet$ is multiplicative on the enlarged algebra.
In a sense, one might think that the situation at hand hints at a not yet existing notion of {\it unbounded asymptotic morphism}.
The origin of these complications has to be ascribed to the bad infrared behaviour; indeed,
we will show in Sec.~\ref{sec:highdim} that the above approach works well for the free charged field in $d+1$ dimensions, where $d=2,3$.
\medskip
In the last part of this section, we discuss one more aspect of our setting that may be of independent interest.
The following result is a version, adapted to the Cauchy data formulation of the free field we are adopting here, of the calculations in~\cite[Sec.~4]{Buc1}.
For the sake of completeness we include a proof based on similar computations as those appearing in the proof of Prop.~\ref{prop:Wupzero}.
\begin{Lemma}\label{lem:limitweyl}
Given functions $f_1,\dots, f_n \in {\cal D}({\mathbb R})$ such that $\int_{\mathbb R} \Re f_j = 0$, $j=1,\dots,n$, and functions $h_1,\dots,h_n \in C_c({\mathbb R}^2)$, $n \in {\mathbb N}$, there exists
$$
\lim_{\lambda \to 0}\omega^{(m)}\big(\ua^{(m)}_{h_1}{\underline{W}}(f_1)_\lambda\dots\ua^{(m)}_{h_n}{\underline{W}}(f_n)_\lambda\big) \ .
$$
\end{Lemma}
\begin{proof}
Exploiting the commutation relations between translations and dilations, the Weyl relations, and the definition of the vacuum state, one has
\begin{multline}\label{eq:omega}
\omega^{(m)}\big(\ua^{(m)}_{h_1}{\underline{W}}(f_1)_\lambda\dots\ua^{(m)}_{h_n}{\underline{W}}(f_n)_\lambda\big) \\= \int_{{\mathbb R}^{2n}}dx_1\dots dx_n h_1(x_1)\dots h_n(x_n)\eta_\lambda(x_1,\dots,x_n)\exp\Big\{-\frac{1}{2}\| \tau_{x_1}^{(\lambda m)}f_1 + \dots + \tau_{x_n}^{(\lambda m)}f_n\|_{\lambda m}^2\Big\},
\end{multline}
where
$$\eta_\lambda(x_1,\dots,x_n) = \exp\Big\{-\frac{i}{2}\sum_{1\leq i<j\leq n}\sigma(\tau_{x_i}^{(\lambda m)}f_i,\tau_{x_j}^{(\lambda m)}f_j)\Big\}. $$
Moreover, there holds $\tau^{(m)}_{(t,{\boldsymbol{x}})} = \tau^{(m)}_t\tau_{\boldsymbol{x}}$, and therefore one has, by the definition of the action of time translations, and setting $g_j := \tau_{{\boldsymbol{x}}_j}f_j$,
\begin{multline*}
\| \tau_{x_1}^{(\lambda m)}f_1 + \dots + \tau_{x_n}^{(\lambda m)}f_n\|_{\lambda m}^2 \\
= \frac{1}{2}\int_{\mathbb R} d{\boldsymbol{p}}\bigg|\sum_{j=1}^ne^{it_j\omega_{\lambda m}({\boldsymbol{p}})}\Big[\frac{\widehat{\Re g_j}({\boldsymbol{p}})}{\sqrt{\omega_{\lambda m}({\boldsymbol{p}})}} + i\sqrt{\omega_{\lambda m}({\boldsymbol{p}})}\widehat{\Im g_j}({\boldsymbol{p}})\Big]\bigg|^2 \ .
\end{multline*}
The integrand in the last expression converges pointwise, as $\lambda \to 0$, to the corresponding value for $\lambda = 0$, and the following bounds hold uniformly for $\lambda\in[0,1]$:
$$ \frac{|\widehat{\Re g_j}({\boldsymbol{p}})|}{\sqrt{\omega_{\lambda m}({\boldsymbol{p}})}} \leq \frac{|\widehat{\Re g_j}({\boldsymbol{p}})|}{|{\boldsymbol{p}}|}, \qquad \sqrt{\omega_{\lambda m}({\boldsymbol{p}})}|\widehat{\Im g_j}({\boldsymbol{p}})| \leq \sqrt{\omega_{m}({\boldsymbol{p}})}|\widehat{\Im g_j}({\boldsymbol{p}})|.$$
Since $\widehat{\Re g_j}$, $\widehat{\Im g_j}$ are Schwartz functions, and $\widehat{\Re g_j}(0) = 0$, the two functions on the right hand sides of the above inequalities are square-integrable, and therefore, by the dominated convergence theorem,
$$\lim_{\lambda \to 0} \| \tau_{x_1}^{(\lambda m)}f_1 + \dots + \tau_{x_n}^{(\lambda m)}f_n\|_{\lambda m}^2 = \| \tau_{x_1}^{(0)}f_1 + \dots + \tau_{x_n}^{(0)}f_n\|_{0}^2.$$
We now consider the limit of $\eta_\lambda$. There holds, with the same notations as above,
\begin{multline*}
\sigma(\tau_{x_i}^{(\lambda m)}f_i,\tau_{x_j}^{(\lambda m)}f_j) \\
= \int_{\mathbb R} d{\boldsymbol{p}}\bigg[\sin\big[(t_j-t_i)\omega_{\lambda m}({\boldsymbol{p}})\big]\bigg(\frac{\widehat{\Re g_i}({\boldsymbol{p}})\widehat{\Re g_j}({\boldsymbol{p}})}{\omega_{\lambda m}({\boldsymbol{p}})}+\omega_{\lambda m}({\boldsymbol{p}})\widehat{\Im g_i}({\boldsymbol{p}})\widehat{\Im g_j}({\boldsymbol{p}})\bigg)\\
+ \cos\big[(t_j-t_i)\omega_{\lambda m}({\boldsymbol{p}})\big]\Big(\widehat{\Re g_i}({\boldsymbol{p}})\widehat{\Im g_j}({\boldsymbol{p}})-\widehat{\Im g_i}({\boldsymbol{p}})\widehat{\Re g_j}({\boldsymbol{p}})\Big)\bigg],
\end{multline*}
and, by a similar argument as before employing the dominated convergence theorem, one sees that
$$\lim_{\lambda \to 0} \eta_\lambda(x_1,\dots,x_n) = \eta_0(x_1,\dots, x_n).$$
We have thus shown that the integrand in~\eqref{eq:omega} converges to the corresponding value for $\lambda = 0$; since $h_j \in C_c({\mathbb R}^2)$ and the other factors are bounded uniformly in $\lambda$, the statement follows by appealing again to the dominated convergence theorem.
\end{proof}
\begin{Proposition}\label{prop:Csubnet}
For all double cones $O \subset {\mathbb R}^2$,
\begin{equation}
{\cal C}(O) \subset \poi^\bullet(\uAA^{(m)\bullet}(O)).
\end{equation}
\end{Proposition}
\begin{proof}
By Poincar\'e covariance, it is sufficient to prove the statement for $O = O_I$, a double cone based on the time zero line. Moreover, following the argument in~\cite[Thm. 4.8]{CM2}, it is sufficient to show that there is a C*-subalgebra $\underline{\mathfrak B}(O) \subset \uAA^{(m)}(O)$ with the properties that for all $\underline{B} \in \underline{\mathfrak B}(O)$ there exists $\lim_{\lambda \to 0} \omega^{(m)}(\underline{B}_\lambda) = \om_{0,\iota}(\underline{B})$, and that $\pi_{0,\iota}(\underline{\mathfrak B}(O))$ is dense in ${\cal C}(O)$ in the strong* operator topology. We take as $\underline{\mathfrak B}(O)$ the C*-subalgebra of $\uAA^{(m)}(O)$ generated by the operators $\ua^{(m)}_h {\underline{W}}(f) \in \uAA^{(m)}(O)$ with $\int_{\mathbb R} \Re f = 0$. Then, from Lemma~\ref{lem:limitweyl} it follows at once, by an $\varepsilon/3$-argument, that actually $\lim_{\lambda \to 0} \omega^{(m)}(\underline{B}_\lambda) = \om_{0,\iota}(\underline{B})$ for all $\underline{B} \in \underline{\mathfrak B}(O)$. The fact that $\pi_{0,\iota}(\underline{\mathfrak B}(O))$ is strongly* dense in ${\cal C}(O)$ follows from the strong* limit $\pi_{0,\iota}(\underline{\alpha}_h{\underline{W}}(f)) \to W_{0,\iota}(f)$ for $h \to \delta$, see Eq.~\eqref{eq:Woi}.\footnote{Note that if ${\text{supp}\,} f\subset O$, since $O$ is open it is always possible to choose ${\text{supp}\,} h$ so small that $\underline{\alpha}_h{\underline{W}}(f) \in \uAA^{(m)}(O)$.}
\end{proof}
This property should be compared to \cite[Thm.\ 4.8]{CM2} and could replace it in the formulation of a notion of asymptotic morphism relative to a suitable subnet of the scaling limit net for theories without convergent scaling limit. We plan to address this issue elsewhere.
\section{Asymptotic morphisms for the free charged scalar field} \label{sec:highdim}
Let $\varphi$ be the mass $m \geq 0$ free charged scalar field in $d = 4$ spacetime dimensions, and let $O \mapsto {\cal F}^{(m)}(O)$, resp.\ $O \mapsto {\cal A}^{(m)}(O) = {\cal F}^{(m)}(O)^{U(1)}$, be the corresponding field, resp.\ observable, net of von Neumann algebras, in the locally Fock representation induced by the massless vacuum state, as in~\cite{BV2}. We recall that, in such representation, ${\cal F}^{(m)}(O) = {\cal F}^{(0)}(O)$, ${\cal A}^{(m)}(O) = {\cal A}^{(0)}(O)$ for all double cones $O$ with base in the time-zero hyperplane. This entails, in particular, that the corresponding quasi-local field and observable C*-algebras coincide for all possible values of $m \geq 0$. Moreover, we recall that ${\cal F}^{(0)}$ is covariant under an automorphic action of the dilation group, denoted by $\lambda \in {\mathbb R}_+ \mapsto \phi_\lambda \in \aut({\cal F}^{(0)})$.
The superselection structure of ${\cal A}^{(m)}$ is well known: the sectors are in 1-1 correspondence with ${\mathbb Z}$, they are all simple, and the representative automorphism $\gamma^{(m)}_n$, $n \in {\mathbb Z}$, localized in $O$, can be expressed by~\cite{F}, \cite[Sec.\ 8.4.B]{BLOT}
\begin{equation}\label{eq:gamman}
\gamma^{(m)}_n(A) = \psi_f^n A (\psi_f^n)^*, \qquad A \in {\cal A}^{(m)},
\end{equation}
where $\psi_f \in {\cal F}^{(m)}(O)$ is the unitary phase of the polar decomposition of $\varphi(f)$, with $f \in {\cal D}_{\mathbb R}(O)$ such that its Fourier transform is not identically zero on the mass $m$ hyperboloid.
On the other hand, according to~\cite{DM, CM}, the outer regularized scaling limit nets ${\cal F}_{0,\iota,r}^{(m)}, \Aoir^{(m)}$ can be identified with the corresponding nets ${\cal F}^{(0)}, {\cal A}^{(0)}$ through the net isomorphism $\phi : {\cal F}_{0,\iota,r}^{(m)} \to {\cal F}^{(0)}$ defined by
\begin{equation}\label{eq:isoscalinglimit}
\phi(\pi_{0,\iota}(\underline{F})) = \operatorname{w-lim}\displaylimits_{\kappa} \phi_{\lambda_\kappa}^{-1}(\underline{F}_{\lambda_\kappa}), \qquad \underline{F} \in \underline{\mathfrak F}^{(m)},
\end{equation}
which is unitarily implemented. This also implies that $\Aoir^{(m)}$ is a Haag-dual net. Moreover, since $\Aoi^{(m)}$ satisfies essential duality by~\cite[Prop.\ 6.3]{BV}, one has that the outer regularized net $\Aoir^{(m)}$ coincides with the dual net ${\cal A}_{0,\iota}^{(m)d}$. Therefore, in particular, combining~\eqref{eq:gamman} and~\eqref{eq:isoscalinglimit}, the superselection structure of the scaling limit net $\Aoi^{(m)}$ coincides with that of ${\cal A}^{(0)}$ described above. We also recall that, as quasi-local C*-algebras, $\Aoir^{(m)} = \Aoi^{(m)}$.
\begin{Proposition}\label{prop:R4}
Let $f \in {\cal D}_{\mathbb R}(O)$, with $O$ a double cone with base in the time-zero hyperplane, be such that its Fourier transform does not vanish identically on the mass $m$ hyperboloid. With the notation $f_\lambda(x) := \lambda^{-3} f(\lambda^{-1}x)$, $x \in {\mathbb R}^4$, the formula
\[
\rho_{n,\lambda}(A) := \psi_{f_\lambda}^n \phi_\lambda(A) (\psi_{f_\lambda}^n)^*, \qquad \lambda>0, \ A \in {\cal A}^{(m)},
\]
defines a tame asymptotic morphism of ${\cal A}^{(m)}$ in the sense of~\cite[Def.\ 5.1]{CM2} for every $n \in {\mathbb Z}$, such that $\poi^\bullet\boldsymbol{\rho}^\bullet_n = \phi^{-1} \gamma^{(0)}_n$, as an (iso)morphism from the quasi-local C*-algebra ${\cal A}^{(m)} = {\cal A}^{(0)}$ to $\Aoi^{(m)}$.
\end{Proposition}
\begin{proof}
The chosen representation of the nets generated by the free charged scalar fields of different masses is such that, for $f$ as in the statement, the field operators $\varphi(f)$ all coincide. This implies that the dilations of the mass $m = 0$ theory act on $\varphi(f)$ according to
\[
\phi_\lambda(\varphi(f)) = \varphi(f_\lambda), \qquad \lambda > 0,
\]
which entails, thanks to the uniqueness of the polar decomposition, $\phi_\lambda(\psi_f) = \psi_{f_\lambda}$. In turn, this has as a consequence that, according to~\cite[Thm.\ 4.4]{DM}, the sectors of ${\cal A}^{(m)}$ are all preserved, and that all the functions $\lambda \mapsto \psi_{f_\lambda}^n$, $n \in {\mathbb Z}$, belong to ${\underline{\mathfrak F}^{(m)}}^\bullet(O)$. Moreover, ${\rm Ad} \,\psi_{f_\lambda}^n$ is precisely the morphism $\rho(\lambda)$ of ${\cal A}^{(m)}$ appearing in~\cite[Prop.\ 7.1]{CM2}.
Furthermore, we claim that $(\phi_\lambda)$ is an asymptotic isomorphism of ${\cal A}^{(m)}$ in the sense of~\cite[Def.\ 5.4]{CM2} and that
\begin{equation}\label{eq:poiacbd}
\poi^\bullet(\boldsymbol{\phi}^\bullet(A)) = \phi^{-1}(A), \qquad A \in {\cal A}^{(m)}.
\end{equation}
The validity of such statements is proven by arguments similar to those in the proof of Thm.~\ref{thm:asympmorCm}, using the properties of the isomorphism $\phi : {\cal F}_{0,\iota,r}^{(m)} \to {\cal F}^{(0)}$ obtained in~\cite{BV2}.
We conclude that, by~\cite[Thm.\ 7.2]{CM2}, $(\rho_{n,\lambda})$ as in the statement is a tame asymptotic morphism. Moreover, again by~\cite[Thm.\ 7.2]{CM2}, and by the definition of $\Psi$ in~\cite[Eq.\ (5.5)]{CM2}, we have
\[\begin{split}
\poi^\bullet \boldsymbol{\rho}^\bullet_n &= \Psi\big((\rho_{n,\lambda}), (\phi_\lambda)\big) \phi^{-1} = {\rm Ad}\, \big[\poi^\bullet(\boldsymbol{\phi}^\bullet(\psi_f^n))\big]\phi^{-1} \\
&= {\rm Ad}\,\big[\phi^{-1}(\psi_f^n)\big]\phi^{-1} = \phi^{-1}{\rm Ad}\,\psi_f^n = \phi^{-1}\gamma^{(0)}_n,
\end{split}\]
where in the third equality~\eqref{eq:poiacbd} was used. This completes the proof.
\end{proof}
Notice that a similar argument applies to the charged free scalar field in $d=3$.
\medskip
Let us now consider the tensor product theory ${\cal A}^{(m_1)}\otimes {\cal A}^{(m_2)}$. There is a (tensor product) net isomorphism, which we still denote by $\phi$, between the corresponding outer regularized scaling limit theory and the tensor product ${\cal A}^{(0)} \otimes {\cal A}^{(0)}$~\cite[Thm.~3.8]{DM}, whose superselection sectors are represented by the tensor product automorphisms $\gamma^{(0)}_h \otimes \gamma^{(0)}_k$, $(h,k) \in {\mathbb Z}^2$. Considering then $\rho_{h,k,\lambda} := \rho_{h,\lambda}\otimes \rho_{k,\lambda}$, one has that $(\rho_{h,k,\lambda})$ is a tame asymptotic morphism of ${\cal A}^{(m_1)}\otimes {\cal A}^{(m_2)}$, and $\poi^\bullet \boldsymbol{\rho}^\bullet_{h,k} = \phi^{-1}\gamma^{(0)}_h\otimes \gamma^{(0)}_k$. Notice that the existence of $\rho_{h,k,\lambda}$ relies on the fact that $\rho_{h,\lambda}$ and $\rho_{k,\lambda}$ are \emph{bona fide} morphisms for all $\lambda > 0$. In more general situations, since asymptotic morphisms are not necessarily linear maps, it is not obvious how to define a kind of tensor product.
\section{Introduction}
The spectrum of the eight-nucleon system ${}^8\mathrm{B}$ was established as a
result of many experimental and theoretical studies (an extensive list of
publications related to ${}^8\mathrm{B}$ can be found
in Ref.~\cite{Tilley_8}). There are excited states
that appear in all of them more or less at the same energies. These well
established levels (taken from Ref.~\cite{Tilley_8}) are schematically depicted
in Fig.~\ref{fig.establishedlevels}.
\begin{figure}
\centerline{\epsfig{file=01_establishedlevels.eps}}
\caption{\sf
Low-lying excited states of the nuclear system ${}^8\mathrm{B}$ and the
two-body thresholds for the decays
${}^8\mathrm{B}\to{}^7\mathrm{Be}(\frac32^-)+p$ and
${}^8\mathrm{B}\to{}^7\mathrm{Be}^*(\frac12^-)+p$. The data are taken from
Refs.~\cite{Tilley_8,Tilley_5_6_7}.
}
\label{fig.establishedlevels}
\end{figure}
However, there are
also a few unconfirmed levels that follow from some of the studies but not from
the others. In the present work, we attempt to clarify the existence and
parameters of the resonance level with the quantum numbers $J^\pi=0^+$.
Using available experimental data, we construct the two-channel Jost matrices
and then analytically continue them onto the Riemann surface of the energy. The
spectral points are sought as the zeros of the Jost-matrix determinant, which
correspond to the poles of the $S$-matrix.
As the experimental data, we use the partial cross sections with $J^\pi=0^+$
for all four possible
transitions between the states $p\,{}^7\mathrm{Be}(\frac32^-)$ and
$p\,{}^7\mathrm{Be}^*(\frac12^-)$, where the second channel involves the first
excited state of the Beryllium isotope. In order to obtain these
cross sections, we use the $R$-matrix
given in Ref.~\cite{Rogachev2013}. That $R$-matrix was constructed by
parametrizing the measured excitation functions for the elastic and inelastic
$p\,{}^7\mathrm{Be}$ scattering.
We fit these data using
the two-channel Jost matrix. Since the cross sections are extracted from
the available $R$-matrix parametrization of a large collection of experimental
data, we indirectly fit the original data. After the fitting at real energies,
the Jost matrix is considered at complex $E$, where the zeros of its determinant
correspond to the poles of the $S$-matrix.
The multi-channel Jost matrix is taken in a special representation suggested in
Refs.~\cite{our.MultiCh, our_Coulomb}, where it is given as a sum
of two terms. Each of these terms is factorized into a product of two matrices, one
of which is an unknown analytic single-valued function of the energy
and the other is given explicitly as a function of the channel momenta. The
explicitly given factors are responsible for the branching of the Riemann
surface. The unknown single-valued matrices are parametrized and the parameters
are found via fitting the available experimental data.
With the semi-analytic representation of the Jost matrix, where the
factors responsible for the topology of the Riemann surface are given
explicitly, it is easy to explore the behaviour of the Jost matrix on all the
sheets of the Riemann surface. In this way we are able to accurately locate
the resonance poles and to examine the possibility that so-called
``shadow'' poles exist on the other sheets of the surface.
\section{Jost matrices}
The Jost matrices are only defined for binary reactions, where the colliding
particles may either change their internal states ($a+b\to a^*+b^*$) or transit
to another pair of particles ($a+b\to c+d$). In general, the masses of
the particles may change. This means that the channels $ab$, $a^*b^*$, $cd$,
etc. have different thresholds.
For a given two-body system of particles, $a$ and $b$, there is an infinite
number of possible
combinations of their orbital angular momentum $\ell$ and the two-body
spin $\vec{s}=\vec{s}_a+\vec{s}_b$.
However, not all combinations of $\ell$ and $s$ are coupled to each other. Since
the total angular momentum $J$ and the parity $\pi$ are conserved, only a few
transitions of the type $(\ell,s)\leftrightarrow(\ell',s')$ are possible.
In the present paper, we consider the low-energy ($E<3.4$\,MeV) collision of a
proton with the
nucleus ${}^7\mathrm{Be}$. In its ground state, this nucleus has $J^\pi=3/2^-$.
As a result of such a collision, the target nucleus may be excited to the state
with $J^\pi=1/2^-$ at the energy $0.4291$\,MeV
(see Ref.~\cite{Tilley_5_6_7}). The other excited states of
${}^7\mathrm{Be}$ are too high as compared to the maximal collision energy
and thus can be safely ignored. Therefore we deal with the following (elastic
and inelastic) coupled processes:
\begin{equation}
\label{coupled_gammas}
\begin{array}{rlcl}
\text{channel 1:\hspace{0.5cm}} &
p+{}^7\mathrm{Be}(\frac32^-) & &
p+{}^7\mathrm{Be}(\frac32^-)\\
&& \displaystyle\genfrac{}{}{0pt}{}{\searrow}{\nearrow}
\bigcirc
\displaystyle\genfrac{}{}{0pt}{}{\nearrow}{\searrow}
& \\
\text{channel 2:\hspace{0.5cm}} &
p+{}^7\mathrm{Be}^*(\frac12^-) & &
p+{}^7\mathrm{Be}^*(\frac12^-)\\
\end{array}\ ,
\end{equation}
where the circle in the middle is either the intermediate scattering state
of the direct reaction or the compound resonance state of ${}^8\mathrm{B}$.
It is easy to see that the state $0^+$ of our eight-nucleon system can only
be formed if $\ell=1$ and $s=1$ in both channels. This means that we deal with
a two-channel problem.
The $N$-channel Jost matrices $\bm{f}^{\mathrm{(in)}}$ and
$\bm{f}^{\mathrm{(out)}}$ are defined as the energy-dependent
$(N\times N)$-``amplitudes'' of the incoming and
outgoing multi-channel (diagonal-matrix) spherical waves, $\bm{H}^{(-)}$ and
$\bm{H}^{(+)}$, in the asymptotic behaviour of the
regular solution, $\bm{\phi}(E,r)$, of the radial Schr\"odinger equation,
\begin{equation}
\label{Jost_definition}
\bm{\phi}(E,r)
\mathop{\longrightarrow}\limits_{r\to\infty}
\bm{H}^{(-)}(E,r)\bm{f}^{\mathrm{(in)}}(E)+
\bm{H}^{(+)}(E,r)\bm{f}^{\mathrm{(out)}}(E)\ .
\end{equation}
A more detailed description of their meaning and properties can be found in
Refs.~\cite{two_channel,our.MultiCh,our_Coulomb,our_He5}. It is worthwhile to
write Eq.~(\ref{Jost_definition}) in the explicit form for the case of two
coupled channels ($N=2$):
\begin{eqnarray}
\nonumber
\bm{\phi}(E,r)
&\mathop{\longrightarrow}\limits_{r\to\infty}&
\begin{bmatrix}
H_{\ell_1}^{(-)}(\eta_1,k_1r)e^{i\sigma_{\ell_1}} & 0\\[3mm]
0 & H_{\ell_2}^{(-)}(\eta_2,k_2r)e^{i\sigma_{\ell_2}}
\end{bmatrix}
\begin{bmatrix}
f_{11}^{\mathrm{(in)}}(E) & f_{12}^{\mathrm{(in)}}(E)\\[3mm]
f_{21}^{\mathrm{(in)}}(E) & f_{22}^{\mathrm{(in)}}(E)
\end{bmatrix}
+\\[3mm]
\label{regular_ass}
&+&
\begin{bmatrix}
H_{\ell_1}^{(+)}(\eta_1,k_1r)e^{-i\sigma_{\ell_1}} & 0\\[3mm]
0 & H_{\ell_2}^{(+)}(\eta_2,k_2r)e^{-i\sigma_{\ell_2}}
\end{bmatrix}
\begin{bmatrix}
f_{11}^{\mathrm{(out)}}(E) & f_{12}^{\mathrm{(out)}}(E)\\[3mm]
f_{21}^{\mathrm{(out)}}(E) & f_{22}^{\mathrm{(out)}}(E)
\end{bmatrix}\ ,
\end{eqnarray}
where
\begin{equation}
\label{Riccati_Coulomb}
H_\ell^{(\pm)}(\eta,kr)=F_\ell(\eta,kr)\mp iG_\ell(\eta,kr)
\ \mathop{\longrightarrow}\limits_{r\to\infty}
\ \mp i\exp\left\{\pm i\left[kr-\eta\ln (2kr)
-\frac{\ell\pi}{2}+\sigma_\ell\right]\right\}\ .
\end{equation}
In these equations, $k_n$, $\ell_n$, $\eta_n$, and $\sigma_{\ell_n}$ are the
momentum, angular momentum, Sommerfeld parameter, and
the pure Coulomb phase-shift in the channel $n$; the functions $F_\ell$ and
$G_\ell$ are the standard regular and irregular Coulomb solutions of the
Schr\"odinger equation (see, for example, Ref.~\cite{abramowitz}).
\subsection{Observables}
The $N$ columns of the matrix $\bm{\phi}(E,r)$ constitute a regular basis.
Therefore a physical wave function, i.e. a column $\bm{u}(E,r)$, is their
linear combination:
\begin{equation}
\label{physical_wf}
\bm{u}(E,r) =\bm{\phi}(E,r)\bm{c}\ ,
\end{equation}
where $\bm{c}$ is a column matrix of the combination coefficients.
These coefficients are to be chosen to satisfy certain physical
boundary conditions at infinity. For a spectral point (either a bound or a
resonant state) the physical wave function should only have the outgoing waves
in its asymptotic behaviour,
\begin{equation}
\label{spectral_bc}
\bm{u}(E,r)
\mathop{\longrightarrow}\limits_{r\to\infty}
\bm{H}^{(-)}(E,r)
\bm{f}^{\mathrm{(in)}}(E)\bm{c}
+
\bm{H}^{(+)}(E,r)
\bm{f}^{\mathrm{(out)}}(E)\bm{c}\ .
\end{equation}
This can only be achieved if the first term in this equation is zero, i.e. if
the unknown combination coefficients $c_n$ obey the homogeneous system of linear
equations,
\begin{equation}
\label{fczero}
\bm{f}^{\mathrm{(in)}}(E)\bm{c}=
\begin{bmatrix}
f_{11}^{\mathrm{(in)}}(E) & f_{12}^{\mathrm{(in)}}(E)\\[3mm]
f_{21}^{\mathrm{(in)}}(E) & f_{22}^{\mathrm{(in)}}(E)
\end{bmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}=0\ ,
\end{equation}
which has a non-zero solution if and only if
\begin{equation}
\label{detfzero}
\det
\begin{bmatrix}
f_{11}^{\mathrm{(in)}}(E) & f_{12}^{\mathrm{(in)}}(E)\\[3mm]
f_{21}^{\mathrm{(in)}}(E) & f_{22}^{\mathrm{(in)}}(E)
\end{bmatrix}
=0\ .
\end{equation}
The roots $E=\mathcal{E}_n$ of this equation are the spectral points. At real
negative energies ($\mathcal{E}_n<0$) they
correspond to the bound states, and at the complex energies
($\mathcal{E}_n=E_r-i\Gamma/2$) they give us the resonances.
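In practice, the roots of Eq.~(\ref{detfzero}) are found numerically, e.g.\ by Newton iteration in the complex $E$-plane. The sketch below is a toy illustration (not the actual Jost-matrix determinant of this work): `det_fin` is a hypothetical analytic stand-in with a built-in resonance zero, and the iteration recovers it starting from a guess near the real axis.

```python
def newton_complex(d, E0, tol=1e-12, max_iter=60):
    """Newton iteration for a complex zero of the analytic function d(E),
    using a symmetric numerical derivative."""
    E, h = E0, 1e-6
    for _ in range(max_iter):
        dprime = (d(E + h) - d(E - h))/(2.0*h)
        step = d(E)/dprime
        E -= step
        if abs(step) < tol:
            break
    return E

# hypothetical stand-in for det f^(in): a zero at E_r - i*Gamma/2 plus a
# distant background zero (the real Jost determinant is not this simple)
E_res = 1.8 - 0.35j
det_fin = lambda E: (E - E_res)*(E + 3.0 + 2.1j)

E_found = newton_complex(det_fin, 1.5 - 0.1j)   # start near the real axis
print(E_found)                                   # converges to E_res
```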
It is not difficult to show (see, for example,
Refs.~\cite{our.MultiCh,two_channel}) that the scattering is determined by the
``ratio'' of the amplitudes of the out-going and in-coming waves, i.e. by the
$S$-matrix,
\begin{equation}
\label{Smatrix}
\bm{S}(E)=\bm{f}^{\mathrm{(out)}}(E)
\left[\bm{f}^{\mathrm{(in)}}(E)\right]^{-1}\ ,
\end{equation}
whose poles correspond to the roots of Eq.~(\ref{detfzero}).
The partial cross section that describes the transition between any two
particular channels, can be obtained via the corresponding elements of
the $S$-matrix (see, for example, Ref.~\cite{Frobrich}),
\begin{equation}
\label{particular_sigma}
\sigma^J(n'\gets n)=\pi
\frac{\mu_n k_{n'}}{\mu_{n'}k_n}\cdot
\frac{2J+1}{2s+1}\left|
\frac{S^J_{n'n}-\delta_{n'n}}{k_n}
\right|^2\ ,
\end{equation}
where $\mu_n$ is the reduced mass in the channel $n$.
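Equation~(\ref{particular_sigma}) translates directly into code. Here is a minimal sketch, with an assumed (toy) unitary $2\times2$ $S$-matrix in a Stapp-like parametrization and hypothetical channel momenta; all numerical values are for illustration only:

```python
import numpy as np

def partial_xsec(S, J, s, k, mu, n_out, n_in):
    """Partial cross section sigma^J(n_out <- n_in), Eq. (particular_sigma)."""
    delta = 1.0 if n_out == n_in else 0.0
    prefactor = np.pi*(mu[n_in]*k[n_out])/(mu[n_out]*k[n_in])
    stat = (2*J + 1)/(2*s + 1)
    return prefactor*stat*abs((S[n_out, n_in] - delta)/k[n_in])**2

# toy unitary 2x2 S-matrix (hypothetical mixing angle and eigenphases)
th, d1, d2 = 0.3, 0.8, 0.2
S = np.array([[np.cos(th)*np.exp(2j*d1),           1j*np.sin(th)*np.exp(1j*(d1 + d2))],
              [1j*np.sin(th)*np.exp(1j*(d1 + d2)), np.cos(th)*np.exp(2j*d2)]])
k  = np.array([0.25, 0.15])    # channel momenta, fm^-1 (toy values)
mu = np.array([0.88, 0.88])    # equal reduced masses, as for the p 7Be channels
sig_inel = partial_xsec(S, J=0, s=1, k=k, mu=mu, n_out=1, n_in=0)
print(sig_inel)
```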
The partial widths of a resonance can be found using the method
developed in Ref.~\cite{my.partial}:
\begin{equation}
\label{Gpartial}
\Gamma_n=
\displaystyle\frac{\mathrm{Re}(k_n)|\mathcal{A}_n|^2\Gamma}
{\displaystyle\sum_{n'=1}^N\frac{\mu_n}{\mu_{n'}}
\mathrm{Re}(k_{n'})|\mathcal{A}_{n'}|^2}\ ,
\end{equation}
where $\mathcal{A}_1$ and $\mathcal{A}_2$ are the asymptotic amplitudes
(see Ref.~\cite{my.partial}) of the channels, given by
\begin{equation}
\label{A_1A_2_Final}
\mathcal{A}_1=f^{(\mathrm{out})}_{11}-\displaystyle
\frac{f^{(\mathrm{in})}_{11}f^{(\mathrm{out})}_{12}}{f^{(\mathrm{in})}_{12}}
\ ,\qquad
\mathcal{A}_2=f^{(\mathrm{out})}_{21}-\displaystyle
\frac{f^{(\mathrm{in})}_{11}f^{(\mathrm{out})}_{22}}{f^{(\mathrm{in})}_{12}}
\ .
\end{equation}
In these equations the Jost matrices are taken at the complex resonant energy.
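A quick consistency check of Eq.~(\ref{Gpartial}): when all reduced masses are equal, as for the two $p\,{}^7\mathrm{Be}$ channels, the partial widths add up to $\Gamma$ exactly. The following sketch uses hypothetical pole-position momenta and amplitudes:

```python
def partial_widths(k, A, mu, Gamma):
    """Partial widths Gamma_n from Eq. (Gpartial); k and A are the channel
    momenta and asymptotic amplitudes evaluated at the resonance pole."""
    N = len(k)
    widths = []
    for n in range(N):
        denom = sum((mu[n]/mu[m])*k[m].real*abs(A[m])**2 for m in range(N))
        widths.append(k[n].real*abs(A[n])**2*Gamma/denom)
    return widths

# hypothetical resonance parameters; equal reduced masses in both channels
k_res = [0.22 - 0.03j, 0.12 - 0.05j]
A_res = [0.9, 0.4]
mu    = [0.88, 0.88]
Gamma = 0.5
G1, G2 = partial_widths(k_res, A_res, mu, Gamma)
print(G1, G2, G1 + G2)   # the partial widths sum to Gamma here
```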
\subsection{Analytic properties}
The Jost matrices (and thus the $S$-matrix) are multi-valued complex functions
of the energy-variable $E$. They can be treated as single-valued, if considered
on a multi-layered Riemann surface. Each threshold is a branch point of such a
surface. The multi-valuedness and thus the branching stem from the fact that
the Jost matrices depend on the energy via the channel momenta,
\begin{equation}
\label{ch_momenta}
k_n=\pm\sqrt{\frac{2\mu_n}{\hbar^2}(E-E_n)}\ ,
\qquad n=1,2,\dots,N\ ,
\end{equation}
where $E_n$ are the threshold energies. There are $2^N$ possible
combinations of the signs in front of the $N$ square roots (\ref{ch_momenta}),
and thus for a single value of $E$ there are $2^N$ different values of the Jost
matrices. If the interacting particles are charged, there is an additional
ambiguity in calculating the Jost matrices for a given $E$. This is because
the Coulomb spherical waves (\ref{Riccati_Coulomb}) and thus their amplitudes,
$\bm{f}^{\mathrm{(in/out)}}(E)$, in the asymptotic behaviour
(\ref{Jost_definition}) depend on the logarithms, $\ln k_n$, of the channel
momenta. The complex function $\ln k_n$ has infinitely many different values,
\begin{eqnarray}
\label{Logarithm_ch_momenta}
\ln k_n=\ln\left\{|k_n|e^{i[\arg(k_n)+2\pi m_n]}\right\} &=&
\ln|k_n|+i\arg(k_n)+i2\pi m_n\ ,\\[3mm]
\nonumber
&& m_n = 0,\pm1,\pm2,\dots\ ,
\end{eqnarray}
corresponding to different choices of $m_n$. This implies that the Jost
matrices are defined on a ``spiral'' Riemann surface with infinitely many layers
(for more details see Ref.~\cite{our_He5}). At each threshold, this surface is
branching due to both the square-root and the logarithm multi-valuedness. The
layers are identified by the signs of $\mathrm{Im}\,k_n$ and the logarithmic
indices $m_n$. For the two-channel problem, the layers can be denoted by the
symbols of the type $(\pm\pm)_{m_1m_2}$. The layers with $m_n\neq0$ are far
away from the real axis, to which the physical scattering energies belong. This
means that such layers may be safely ignored, and we should only consider the
``principal'' layers corresponding to $m_1=m_2=0$.
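Selecting a layer amounts to choosing the sign of $\mathrm{Im}\,k_n$ in Eq.~(\ref{ch_momenta}) and the index $m_n$ in Eq.~(\ref{Logarithm_ch_momenta}). A minimal Python sketch of this bookkeeping (the unit constant `hbar2` $=\hbar^2/(1\,\mathrm{amu})$ in MeV\,fm$^2$ and all numbers are assumed toy values):

```python
import cmath

def channel_momentum(E, E_thr, mu, sign, m, hbar2=41.47):
    """Channel momentum k_n and ln(k_n) on the Riemann sheet labelled by the
    sign of Im(k_n) and the logarithmic index m_n."""
    k = cmath.sqrt(2.0*mu/hbar2*(E - E_thr))   # principal square root
    if (k.imag >= 0) != (sign > 0):            # enforce the requested sign of Im k
        k = -k
    ln_k = cmath.log(k) + 2j*cmath.pi*m        # Eq. (Logarithm_ch_momenta)
    return k, ln_k

E = 1.2 - 0.4j                                  # a resonance-like complex energy
k_plus,  _ = channel_momentum(E, 0.0, 0.88, +1, 0)
k_minus, _ = channel_momentum(E, 0.0, 0.88, -1, 0)
print(k_plus, k_minus)                          # the two values differ by a sign
```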
For our two-body problem, the Riemann surface is schematically depicted in
Fig.~\ref{fig.sheets_2ch_Coulomb}. Each sheet of this surface is cut along
its own real axis and the interconnections among the cuts are done in such a
way that one full circle around the threshold $E_n$ changes the sign of
$\mathrm{Im}\,k_n$, while two full circles around $E_n$ change the logarithmic index
$m_n$. If we go around both thresholds, then both momenta and both logarithmic
indices do change. This is illustrated in Fig.~\ref{fig.spiral.elliptic}.
\begin{figure}
\centerline{\epsfig{file=02_sheets_2ch_Coulomb.eps}}
\caption{\sf
Interconnections of the Riemann sheets for the two-channel problem, where the
Coulomb potential is present in both channels:
(a) the interconnections between the thresholds $E_1$ and $E_2$; (b) the
interconnections above the highest threshold. The symbols
$(\pm\pm)_{m_1m_2}$ label the sheets where $\mathrm{Im}(k_1)$ and
$\mathrm{Im}(k_2)$
are either positive or
negative, and the subscripts $m_1m_2$ are the numbers of $i2\pi$ in
Eq.~(\protect{\ref{Logarithm_ch_momenta}}) for the channels.
}
\label{fig.sheets_2ch_Coulomb}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=03_spiral.elliptic.eps}}
\caption{\sf
Two-circle path around both thresholds ($E_1$ and $E_2$) on the Riemann surface
of a two-channel problem with the Coulomb forces in both channels. If the
starting point is on the sheet $(+-)_{00}$, then the final point is on the
sheet $(+-)_{11}$.
}
\label{fig.spiral.elliptic}
\end{figure}
\subsection{Analytic structure}
The multi-channel Riemann surface has a very complicated topology.
The intricate interconnections of its layers should be kept in mind not only in
pure theoretical considerations, but also in the analysis of experimental data
when one tries to extract information on the resonances. The reason is that
the resonance poles of the $S$-matrix lie on one of the Riemann sheets. In
order to reach them, one has to do the analytic continuation of the $S$-matrix,
starting from the real axis. In doing this, one should be careful, especially
when such a continuation is done near a branch point.
All the complications caused by the branching of the Riemann surface can be
circumvented by using the semi-analytic representations of the Jost matrices
suggested in Refs.~\cite{our.MultiCh, our_Coulomb}. In these representations,
the factors responsible for the branching of the Riemann surface are given
explicitly. For the charged-particle case, it was shown in Ref.~\cite{our_Coulomb} that
the Jost matrices have the following structure
\begin{equation}
\label{multi.Finout_structure}
\bm{f}^{(\mathrm{in/out})}=
\bm{Q}^{(\pm)}\left[
\bm{D}^{-1}\bm{A}\bm{D}-(\bm{M}\pm i)
\bm{K}^{-1}\bm{D}\bm{B}\bm{D}\right]\ ,
\end{equation}
where the unknown matrices $\bm{A}(E)$ and $\bm{B}(E)$ are single-valued
functions of the energy and are defined on the simple energy-plane without any
branch points. All the troubles with the branching stem from the explicitly
given factors (diagonal matrices):
\begin{equation}
\label{multi.defQ}
\bm{Q}^{(\pm)} =
\operatorname{diag}\left\{
\frac{e^{\pi\eta_1/2}\ell_1!}{\Gamma(\ell_1+1\pm i\eta_1)},
\frac{e^{\pi\eta_2/2}\ell_2!}{\Gamma(\ell_2+1\pm i\eta_2)},\dots,
\frac{e^{\pi\eta_N/2}\ell_N!}{\Gamma(\ell_N+1\pm i\eta_N)}\right\}\ ,
\end{equation}
\begin{equation}
\label{multi.defD}
\bm{D} =
\operatorname{diag}\left\{
C_{\ell_1}(\eta_1)k_1^{\ell_1+1},
C_{\ell_2}(\eta_2)k_2^{\ell_2+1},\dots,
C_{\ell_N}(\eta_N)k_N^{\ell_N+1}\right\}\ ,
\end{equation}
\begin{equation}
\bm{M} =
\operatorname{diag}\left\{
\frac{2\eta_1h(\eta_1)}{C_0^2(\eta_1)},
\frac{2\eta_2h(\eta_2)}{C_0^2(\eta_2)},
\dots,
\frac{2\eta_Nh(\eta_N)}{C_0^2(\eta_N)}\right\}\ ,
\end{equation}
\begin{equation}
\label{multi.chmom}
\bm{K}=\operatorname{diag}\left\{k_1,k_2,\dots,k_N\right\}\ .
\end{equation}
They involve the Coulomb barrier
factor $C_\ell$ and the function $h(\eta)$ that is responsible for the
logarithmic branching:
\begin{equation}
\label{CL}
C_\ell(\eta)=
\frac{2^\ell e^{-\pi\eta/2}}{(2\ell)!!}
\exp\left\{\frac12\left[\ln\Gamma(\ell+1+i\eta)+
\ln\Gamma(\ell+1-i\eta)\right]\right\}
\ \mathop{\longrightarrow}\limits_{\eta\to0}\ 1\ ,
\end{equation}
\begin{equation}
\label{h_function}
h(\eta)=\frac12\left[\psi(i\eta)+
\psi(-i\eta)\right]-\ln{\eta}\ ,
\qquad
\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}\ ,
\qquad
{\eta}=\frac{e^2Z_1Z_2\mu}{\hbar^2k}\ .
\end{equation}
In the explicit form for the matrix elements,
Eq.~(\ref{multi.Finout_structure}) can be written as
\begin{eqnarray}
\label{multi.matrixelements}
f^{(\mathrm{in/out})}_{mn}(E) &=&
\frac{e^{\pi\eta_m/2}\ell_m!}{\Gamma(\ell_m+1\pm i\eta_m)}
\left\{
\frac{{C}_{\ell_n}(\eta_n)k_n^{\ell_n+1}}
{{C}_{\ell_m}(\eta_m)k_m^{\ell_m+1}}{A}_{mn}(E)\ -\right.\\[3mm]
\nonumber
&-&
\left.\left[
\frac{2\eta_mh(\eta_m)}{C_0^2(\eta_m)}\pm i\right]
{C}_{\ell_m}(\eta_m){C}_{\ell_n}(\eta_n)
k_m^{\ell_m}k_n^{\ell_n+1}{B}_{mn}(E)\right\}\ .
\end{eqnarray}
The matrices $\bm{A}(E)$ and $\bm{B}(E)$ are
the same for both $\bm{f}^{\rm(in)}$ and $\bm{f}^{\rm(out)}$, and they are real
for real energies.
Clearly, the analytic structure of the $S$-matrix (\ref{Smatrix}) is even
more complicated than that of the Jost matrices. This means that none of the
simplified phenomenological formulae for the multi-channel $S$-matrix (that
very often are used to fit experimental data) can guarantee the correct
topology of the Riemann surface. The consequences of such a simplification for
the analytic continuation of the $S$-matrix are unclear and unpredictable.
\subsection{Approximation and analytic continuation}
\label{sect.ApprCont}
In the exact expressions (\ref{multi.matrixelements}), the only unknowns are the
matrices $\bm{A}(E)$ and $\bm{B}(E)$, which are single-valued and analytic.
They can be expanded in Taylor series around an arbitrary complex
energy $E_0$,
\begin{equation}
\label{A.Taylor}
\bm{A}(E)=\bm{a}^{(0)}+\bm{a}^{(1)}(E-E_0)+
\bm{a}^{(2)}(E-E_0)^2+\cdots\ ,
\end{equation}
\begin{equation}
\label{B.Taylor}
\bm{B}(E)=\bm{b}^{(0)}+\bm{b}^{(1)}(E-E_0)+
\bm{b}^{(2)}(E-E_0)^2+\cdots\ .
\end{equation}
Here $\bm{a}^{(m)}(E_0)$ and $\bm{b}^{(m)}(E_0)$ are the $(N\times
N)$-matrices (for a two-channel case, $N=2$) depending on the choice of the
center $E_0$ of the expansion. The matrix elements of $\bm{a}^{(m)}(E_0)$ and
$\bm{b}^{(m)}(E_0)$ are the
unknown parameters. We can take the first several terms of these
expansions and find the unknown parameters by
fitting some available experimental data. As a result, we obtain
approximate analytic expressions (\ref{multi.matrixelements}) for the Jost
matrices.
It is convenient to choose the central point $E_0$ on the real axis. Such a
choice makes the matrices $\bm{a}^{(m)}(E_0)$ and $\bm{b}^{(m)}(E_0)$ real.
After adjusting the parameters (via fitting the data) we can consider the same
Jost matrices (\ref{multi.matrixelements}) at complex energies and thus can
locate the resonances, as is schematically illustrated in
Fig.~\ref{fig.fit.around.E0}.
\begin{figure}
\centerline{\epsfig{file=04_fit.around.E0.eps}}
\caption{\sf
The data points fitted within an interval centered at $E_0$ on the real axis,
give us the Jost matrices valid within a circle of the adjacent complex domain,
where the resonance spectral points can be found.
}
\label{fig.fit.around.E0}
\end{figure}
When looking at complex $E$, we can choose the
appropriate sheet of the Riemann surface. The single-valued functions
$\bm{A}(E)$ and $\bm{B}(E)$ are the same on all the sheets. The differencies
only stem from the explicit factors depending on $k_n$ and $\ln k_n$ in
Eq.~(\ref{multi.matrixelements}).
For a given energy $E$, we calculate the square roots (\ref{ch_momenta}) and
$\ln k_n$ for all the channel
momenta. Choosing the appropriate signs in front of the square roots and adding
an appropriate multiple of $2\pi$ in Eq.~(\ref{Logarithm_ch_momenta}), we can place
the point on any Riemann sheet that we need. In other words, the analytic
continuation of the Jost matrices from the real axis (where the fitting is done)
to a chosen Riemann sheet is always done correctly despite the approximations
(\ref{A.Taylor},\ref{B.Taylor}).
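The sheet-selection procedure can be sketched numerically. The snippet below is a minimal illustration, not the code of this work: the kinematics ($k_n=\sqrt{2\mu_n(E-E_n)}$ with $\hbar=1$) is a placeholder assumption, and the actual expression (\ref{ch_momenta}) may differ in detail.

```python
import numpy as np

def channel_momentum(E, E_thr, mu, sign):
    """Channel momentum k_n for a complex energy E (hbar = 1 units,
    placeholder kinematics): k_n = sqrt(2*mu*(E - E_thr)).  The factor
    `sign` = +1 or -1 selects the branch of the square root, i.e. one
    half of a sheet label such as (+-)_{00}."""
    return sign * np.sqrt(2.0 * mu * (complex(E) - E_thr))

def log_momentum(k, winding):
    """ln k_n with an explicit branch choice of the logarithm: adding
    2*pi*i*winding moves the point across the logarithmic branching
    of the Riemann surface (the subscript of the sheet label)."""
    return np.log(complex(k)) + 2j * np.pi * winding
```

Flipping `sign` or `winding` places the same energy $E$ on a different sheet while the single-valued matrices $\bm{A}(E)$ and $\bm{B}(E)$ stay unchanged.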
\section{The data and fitting procedure}
\label{sec.fitting}
The Jost matrices describe the two-body states with definite quantum numbers,
namely, $(J^\pi,\ell,s)$. If we were trying to fit ``raw'' experimental data,
we would need to sum up several states with different $J$ and many partial
waves. This would result in too many free parameters and the task would
become unmanageable. To avoid such a difficulty, we consider partial cross
sections (for a given $J^\pi$) separately.
In the present work, we deal with the state $0^+$ of the system
$p\,{}^7\mathrm{Be}$. This state involves only one partial wave,
namely, $(\ell,s)=(1,1)$ in both channels. In order to obtain the partial cross
sections, one has to perform a partial-wave analysis of the ``raw'' data.
This is a very complicated task in itself. We therefore rely on the existing
$R$-matrix analysis of the system
$p\,{}^7\mathrm{Be}$, published in Ref.~\cite{Rogachev2013}, where the
experimental data on the
excitation functions for the elastic and inelastic $p\,{}^7\mathrm{Be}$
scattering were parametrized. As a result of this analysis, the authors of
Ref.~\cite{Rogachev2013} reported three new low-energy resonances with the
quantum numbers $0^+$, $1^+$ and $2^+$.
Using the parameters given in Ref.~\cite{Rogachev2013}, we construct the
$R$-matrix and then the corresponding $S$-matrix,
from which any partial cross section can be calculated. Since the $R$-matrix
of Ref.~\cite{Rogachev2013} was obtained by fitting the ``raw'' data, the
partial cross sections we obtain from this $R$-matrix can be considered as
experimental. In a sense, such an approach is similar to treating the
scattering phase-shifts as experimental data despite the fact that nobody
measures them directly and they are obtained from a complicated partial-wave
analysis of the ``raw'' data.
\begin{figure}
\centerline{\epsfig{file=05_plot_sigma11.eps}}
\caption{\sf
Partial cross section for the transition $1\to1$ of the processes
(\ref{coupled_gammas}) in the state with $J^\pi=0^+$. The dots are the
``experimental'' points obtained from the $R$-matrix taken from
Ref.~\cite{Rogachev2013}. The curve is our fit with the Jost matrix parameters
given in Table~\ref{table.parameters}. The collision energy is counted from the
$p\,{}^7\mathrm{Be}$ threshold.
}
\label{fig.fit.11}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=06_plot_sigma12.eps}}
\caption{\sf
Partial cross section for the transition $1\to2$ of the processes
(\ref{coupled_gammas}) in the state with $J^\pi=0^+$. The dots are the
``experimental'' points obtained from the $R$-matrix taken from
Ref.~\cite{Rogachev2013}. The curve is our fit with the Jost matrix parameters
given in Table~\ref{table.parameters}. The collision energy is counted from the
$p\,{}^7\mathrm{Be}$ threshold.
}
\label{fig.fit.12}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=07_plot_sigma21.eps}}
\caption{\sf
Partial cross section for the transition $2\to1$ of the processes
(\ref{coupled_gammas}) in the state with $J^\pi=0^+$. The dots are the
``experimental'' points obtained from the $R$-matrix taken from
Ref.~\cite{Rogachev2013}. The curve is our fit with the Jost matrix parameters
given in Table~\ref{table.parameters}. The collision energy is counted from the
$p\,{}^7\mathrm{Be}$ threshold.
}
\label{fig.fit.21}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=08_plot_sigma22.eps}}
\caption{\sf
Partial cross section for the transition $2\to2$ of the processes
(\ref{coupled_gammas}) in the state with $J^\pi=0^+$. The dots are the
``experimental'' points obtained from the $R$-matrix taken from
Ref.~\cite{Rogachev2013}. The curve is our fit with the Jost matrix parameters
given in Table~\ref{table.parameters}. The collision energy is counted from the
$p\,{}^7\mathrm{Be}$ threshold.
}
\label{fig.fit.22}
\end{figure}
The partial cross sections thus obtained for the four processes
(\ref{coupled_gammas}) are given in Figs.~\ref{fig.fit.11}-\ref{fig.fit.22},
where they are shown by the dots. We
consider these dots as the (indirectly obtained) experimental points, which we
fit by varying the Jost matrices.
As the basis for parametrizing the Jost matrices, we use the semi-analytic
expressions (\ref{multi.matrixelements}), where the unknown matrices
$\bm{A}(E)$ and $\bm{B}(E)$ are analytic and single-valued functions of
the energy. We can therefore approximate them by the first $(M+1)$ Taylor terms,
\begin{eqnarray}
\label{Aapprox}
A_{n'n}(E) &\approx& \sum_{m=0}^M a^{(m)}_{n'n}(E-E_0)^m\ ,\\[3mm]
\label{Bapprox}
B_{n'n}(E) &\approx& \sum_{m=0}^M b^{(m)}_{n'n}(E-E_0)^m\ ,
\qquad n',n=1,2,\dots,N\ ,
\end{eqnarray}
with $E_0$ taken somewhere in the middle of the interval covered by the
experimental points. The unknown expansion coefficients $a^{(m)}_{n'n}$ and
$b^{(m)}_{n'n}$ serve as the fitting parameters and $N=2$
is the number of the coupled channels. These matrices $\bm{A}$ and $\bm{B}$,
when substituted in the semi-analytic expressions
(\ref{multi.matrixelements}), give us the approximate Jost matrices and the
corresponding $S$-matrix (\ref{Smatrix}), which is used to calculate the
approximate partial cross sections (\ref{particular_sigma}),
$\tilde{\sigma}_{n'\gets n}$, depending on the fitting parameters.
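Evaluating the truncated expansions (\ref{Aapprox},\ref{Bapprox}) is a simple polynomial sum with matrix coefficients; the sketch below uses arbitrary placeholder coefficients (in the actual calculation they are the fitted parameters of Table~\ref{table.parameters}).

```python
import numpy as np

def taylor_matrix(coeffs, E, E0):
    """Evaluate the truncated expansion sum_m a^(m) (E - E0)^m, where
    `coeffs` lists the (N x N) coefficient matrices a^(0), ..., a^(M).
    Because the expression is polynomial in E, the same routine serves
    both for fitting at real E and for the continuation to complex E."""
    result = np.zeros_like(np.asarray(coeffs[0], dtype=complex))
    for m, a in enumerate(coeffs):
        result += np.asarray(a, dtype=complex) * (E - E0) ** m
    return result
```

The same coefficients fitted on the real axis are then reused at complex $E$ when searching for resonances.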
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{$E_0$}
& $1.6\,\mathrm{MeV}$ & $1.8\,\mathrm{MeV}$ & $2.0\,\mathrm{MeV}$\\
\hline
$m$ & $n'$ & $n$
&
\parbox{2cm}{\begin{center}
$a_{n'n}^{(m)}$,\ $b_{n'n}^{(m)}$\\[1mm]
$[\mathrm{MeV}^{-m}]$\end{center}}
&
\parbox{2cm}{\begin{center}
$a_{n'n}^{(m)}$,\ $b_{n'n}^{(m)}$\\[1mm]
$[\mathrm{MeV}^{-m}]$\end{center}}
&
\parbox{2cm}{\begin{center}
$a_{n'n}^{(m)}$,\ $b_{n'n}^{(m)}$\\[1mm]
$[\mathrm{MeV}^{-m}]$\end{center}}\\
\hline
\multirow{4}{*}{0} & 1 & 1 &
$ -3.8980$,\ $ 260.02 $ &
$ -2.5158$,\ $ 132.16 $ &
$ -1.6501$,\ $ 51.674 $ \\
& 1 & 2 &
$ -0.076997$,\ $ 19.287 $ &
$ -0.85222$,\ $ 61.581 $ &
$ -1.3865$,\ $ 66.164 $ \\
& 2 & 1 &
$ 0.0071846$,\ $ 142.56 $ &
$ 0.28927$,\ $ 113.93 $ &
$ 0.46568$,\ $ 71.761 $ \\
& 2 & 2 &
$ -0.19860$,\ $ -19.181 $ &
$ -0.16232$,\ $ 13.827 $ &
$ -0.10506$,\ $ 29.145 $ \\
\hline
\multirow{4}{*}{1} & 1 & 1 &
$ 3.5822$,\ $ -537.13 $ &
$ -2.4338$,\ $ -67.781 $ &
$ -5.7215$,\ $ 37.455 $ \\
& 1 & 2 &
$ 0.18923$,\ $ -15.205 $ &
$ -0.74242$,\ $ -1.6230$ &
$ -4.5979$,\ $ 69.555 $ \\
& 2 & 1 &
$ 1.2508$,\ $ 41.701 $ &
$ 2.0021$,\ $ 149.42 $ &
$ 2.0154$,\ $ 131.71 $\\
& 2 & 2 &
$ -0.036673$,\ $ -13.918 $ &
$ -0.028947$,\ $ 22.033 $ &
$ -0.37030$,\ $ 42.200 $ \\
\hline
\multirow{4}{*}{2} & 1 & 1 &
$ -7.7575$,\ $ 1313.9 $ &
$ -19.442 $,\ $ 581.08 $ &
$ -16.051 $,\ $ 118.57 $ \\
& 1 & 2 &
$ -0.46898$,\ $ 111.02 $ &
$ -6.3748$,\ $ 250.95 $ &
$ -12.680 $,\ $ 135.59 $ \\
& 2 & 1 &
$ 2.8715$,\ $ 353.10 $ &
$ 3.7550$,\ $ 330.22 $ &
$ 3.4106$,\ $ 192.31 $ \\
& 2 & 2 &
$ -0.77670$,\ $ 10.873 $ &
$ -1.3155$,\ $ 33.713 $ &
$ -1.8707$,\ $ 25.314 $ \\
\hline
\multirow{4}{*}{3} & 1 & 1 &
$ -29.889 $,\ $ -698.57 $ &
$ -27.627 $,\ $ -462.76 $ &
$ -14.980 $,\ $ -195.76 $ \\
& 1 & 2 &
$ 0.032721$,\ $ -78.511 $ &
$ -8.0587$,\ $ -198.45 $ &
$ -10.889 $,\ $ -190.90 $ \\
& 2 & 1 &
$ 2.8100$,\ $ 220.84 $ &
$ 3.0660$,\ $ 109.69 $ &
$ 2.6138$,\ $ 57.036 $ \\
& 2 & 2 &
$ -2.4411$,\ $ -151.14 $ &
$ -2.5425$,\ $ -92.968 $ &
$ -2.2098$,\ $ -51.518 $ \\
\hline
\multirow{4}{*}{4} & 1 & 1 &
$ -13.780 $,\ $ -18.493 $ &
$ -9.4436$,\ $ -19.098 $ &
$ -4.1096$,\ $ -12.805 $ \\
& 1 & 2 &
$ 0.86303$,\ $ 27.495 $ &
$ -2.1777$,\ $ 19.390 $ &
$ -2.5535$,\ $ 14.865 $ \\
& 2 & 1 &
$ 1.2247$,\ $ -79.035 $ &
$ 0.98110$,\ $ -50.869 $ &
$ 0.74022$,\ $ -30.866 $ \\
& 2 & 2 &
$ -1.4628$,\ $ 26.744 $ &
$ -1.1201$,\ $ 10.597 $ &
$ -0.72228$,\ $ -1.1996$ \\
\hline
\end{tabular}
\end{center}
\caption{\sf
Parameters of the expansions (\ref{Aapprox},\ref{Bapprox}) with three choices
of the central point $E_0$. These
parameters for $E_0=1.8$\,MeV were used to generate the curves shown in Figs.
\ref{fig.fit.11}, \ref{fig.fit.12}, \ref{fig.fit.21}, and \ref{fig.fit.22}.
}
\label{table.parameters}
\end{table}
The optimal values of the fitting parameters are found by minimizing the
following function:
\begin{eqnarray}
\label{chisquare}
\chi^2 &=&
W_{11} \sum_{i=1}^K\left|
\tilde{\sigma}_{1\gets 1}(E_i)-\sigma_{1\gets 1}(E_i)\right|^2+\\[3mm]
\nonumber
&+&
W_{21} \sum_{i=1}^K\left|
\tilde{\sigma}_{2\gets 1}(E_i)-\sigma_{2\gets 1}(E_i)\right|^2+\\[3mm]
\nonumber
&+&
W_{12} \sum_{i=1}^K\left|
\tilde{\sigma}_{1\gets 2}(E_i)-\sigma_{1\gets 2}(E_i)\right|^2+\\[3mm]
\nonumber
&+&
W_{22} \sum_{i=1}^K\left|
\tilde{\sigma}_{2\gets 2}(E_i)-\sigma_{2\gets 2}(E_i)\right|^2\ ,
\end{eqnarray}
where $K$ is the number of experimental points, and $\sigma_{n'\gets n}(E_i)$ is
the experimental cross section at the energy $E_i$. The experimental
errors are not defined because the data are taken from the $R$-matrix analysis.
We therefore set all of them to unity in the $\chi^2$-function
(\ref{chisquare}). Since the experimental errors are absent, each point is
equally important in this function. However, the magnitudes of the cross
sections in different channels are significantly different (compare, for
example, Figs. \ref{fig.fit.11} and \ref{fig.fit.22}). As a result of such a
difference, the minimization tends to give preference to the curves with larger
values of $\sigma_{n'\gets n}$, while the quality of the fitting of the smaller
cross sections remains poor. To avoid this tendency, we introduce the weight
factors $W_{n'n}$ in the $\chi^2$-function (\ref{chisquare}). These factors are
chosen in such a way that the contributions from the four terms are more or
less the same.
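As a sketch, the weighted sum (\ref{chisquare}) can be coded as follows (the dictionary keys labeling the channel pairs are a hypothetical convention, not taken from the paper):

```python
import numpy as np

def chi_square(sigma_model, sigma_data, weights):
    """Weighted chi^2 of Eq. (chisquare).  `sigma_model` and
    `sigma_data` map a channel pair (n_out, n_in) to arrays of cross
    sections at the K energies; `weights` holds W_{n'n}.  All
    experimental errors are set to unity, so the weights balance the
    four channels of very different magnitude."""
    chi2 = 0.0
    for key, W in weights.items():
        diff = np.asarray(sigma_model[key]) - np.asarray(sigma_data[key])
        chi2 += W * np.sum(np.abs(diff) ** 2)
    return chi2
```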
For the minimization, we use the MINUIT program developed at
CERN~\cite{MINUIT}. The function (\ref{chisquare}) has many local minima.
The search for the best of them can be based on the following strategy. First
of all, the minimization procedure should be repeated many times (we did it
$\sim1000$ times) with randomly chosen starting values of the parameters. Then,
after a good minimum is found, it can be refined by choosing random starting
points around the best point found in the parameter space. After each
improvement, the new starting parameters are chosen by random variations
of the parameters around the new best point.
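The restart strategy can be sketched as follows; the local minimizer here is a crude coordinate-descent stand-in for MINUIT, and the counts and step sizes are illustrative only.

```python
import numpy as np

def multi_start_minimize(f, ndim, rng, n_starts=100, n_refine=50, scale=5.0):
    """Many local minimizations from random starting points, then
    repeated refinement by random perturbations around the best point
    found so far -- the strategy described in the text."""
    def local_step(x):
        # coordinate descent with decreasing steps (MINUIT stand-in)
        for h in (1.0, 0.1, 0.01):
            improved = True
            while improved:
                improved = False
                for i in range(ndim):
                    for s in (+h, -h):
                        trial = x.copy()
                        trial[i] += s
                        if f(trial) < f(x):
                            x, improved = trial, True
        return x

    best = local_step(rng.uniform(-scale, scale, ndim))
    for _ in range(n_starts - 1):
        x = local_step(rng.uniform(-scale, scale, ndim))
        if f(x) < f(best):
            best = x
    for _ in range(n_refine):  # refine around the current best point
        x = local_step(best + rng.normal(0.0, 0.5, ndim))
        if f(x) < f(best):
            best = x
    return best
```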
The cross sections (as well as any other observables) are expressed via the
elements of the $S$-matrix (\ref{Smatrix}), i.e. via the ratio of the Jost
matrices. In such a ratio, any common factor in $\bm{f}^{(\mathrm{in})}$
and $\bm{f}^{(\mathrm{out})}$ cancels out. This means that the set
of the parameters $a^{(m)}_{n'n}$ and $b^{(m)}_{n'n}$ can be scaled by any
convenient factor. Such a scaling does not affect any results.
\section{Results}
\label{sec.results}
The experimental data (obtained from the $R$-matrix given in
Ref.~\cite{Rogachev2013}) for the four processes (\ref{coupled_gammas}), were
fitted as described in Sec.~\ref{sec.fitting}, with $M=4$,
$W_{11}=3$, $W_{12}=15$, $W_{21}=0.04$, and $W_{22}=0.002$. We repeated the fit
with five different values of the central energy $E_0$, namely, with
$E_0=1.6$\,MeV, $E_0=1.7$\,MeV, $E_0=1.8$\,MeV, $E_0=1.9$\,MeV,
and $E_0=2.0$\,MeV (the energy is counted from the
$p\,{}^7\mathrm{Be}$-threshold). Formally, the results should not depend on
the choice of $E_0$. However, the Taylor expansions
(\ref{Aapprox}, \ref{Bapprox}) are truncated and the minimization procedure
is always an approximate one. The calculations with several different $E_0$
allow us to see how stable the results are, to find the average values of the
resonance parameters and their standard deviations, and to exclude any possible
spurious poles of the $S$-matrix (which should be unstable).
The results of the fit with $E_0=1.8$\,MeV are
graphically shown in Figs. \ref{fig.fit.11}, \ref{fig.fit.12}, \ref{fig.fit.21},
and \ref{fig.fit.22}. For the other choices of $E_0$, the quality of the fit is
the same and it would be impossible to distinguish the corresponding curves.
The optimal parameters for the three (out of five) choices of $E_0$ are given
in Table~\ref{table.parameters}. The units for the parameters are chosen in
such a way that the Jost matrices are dimensionless.
The resonances were sought as zeros of $\det\bm{f}^{\mathrm{(in)}}(E)$ on the
principal sheet $(--)_{00}$ of the Riemann surface. This was done using
Newton's method~\cite{Press}. In this way, we found {\it two resonances that are
close to each other}. For each of the five choices of $E_0$, their parameters
are given in Tables~\ref{table.resonance1} and \ref{table.resonance2}. It is
seen that our procedure gives at least three stable digits.
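The complex-plane Newton search can be sketched as follows. Here it is applied to a toy function with a known zero at $E=i$; in the actual calculation $f$ would be $\det\bm{f}^{(\mathrm{in})}(E)$ evaluated on the chosen Riemann sheet.

```python
def complex_newton(f, z0, tol=1e-12, max_iter=100, h=1e-6):
    """Newton iteration in the complex plane with a numerical
    (central-difference) derivative; returns the located zero."""
    z = complex(z0)
    for _ in range(max_iter):
        df = (f(z + h) - f(z - h)) / (2.0 * h)
        step = f(z) / df
        z = z - step
        if abs(step) < tol:
            break
    return z
```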
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$E_0$\,(MeV) & $E_r$\,(MeV) & $\Gamma$\,(MeV) & $\Gamma_{1}$\,(MeV) &
$\Gamma_{2}$\,(MeV)\\
\hline
$1.6$ &
$ 1.65255$ &
$ 0.44772$ &
$ 0.13420$ &
$ 0.31352$ \\
\hline
$1.7$ &
$1.65198$ &
$0.44653$ &
$0.13357$ &
$0.31295$ \\
\hline
$1.8$ &
$1.65283$ &
$0.44908$ &
$0.13486$ &
$0.31422$ \\
\hline
$1.9$ &
$1.65230$ &
$0.44713$ &
$0.13404$ &
$0.31309$ \\
\hline
$2.0$ &
$1.65223$ &
$0.44745$ &
$0.13412$ &
$0.31333$ \\
\hline
\end{tabular}
\end{center}
\caption{\sf
Parameters of the first $0^+$ resonance found with five different choices of
the expansion parameter $E_0$. The energy $E_r$ is counted from the
$p\,{}^7\mathrm{Be}$ threshold. The partial widths $\Gamma_1$ and $\Gamma_2$
correspond to the elastic and inelastic channels, respectively.
}
\label{table.resonance1}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$E_0$\,(MeV) & $E_r$\,(MeV) & $\Gamma$\,(MeV) & $\Gamma_{1}$\,(MeV) &
$\Gamma_{2}$\,(MeV)\\
\hline
$1.6$ &
$1.81879$ &
$0.83658$ &
$0.54461$ &
$0.29196$ \\
\hline
$1.7$ &
$1.82066$ &
$0.83932$ &
$0.53713$ &
$0.30219$ \\
\hline
$1.8$ &
$1.81891$ &
$0.84133$ &
$0.54164$ &
$0.29968$ \\
\hline
$1.9$ &
$1.81947$ &
$0.86394$ &
$0.51899$ &
$0.34495$ \\
\hline
$2.0$ &
$1.82101$ &
$0.83255$ &
$0.55292$ &
$0.27963$ \\
\hline
\end{tabular}
\end{center}
\caption{\sf
Parameters of the second $0^+$ resonance found with five different choices of
the expansion parameter $E_0$. The energy $E_r$ is counted from the
$p\,{}^7\mathrm{Be}$ threshold. The partial widths $\Gamma_1$ and $\Gamma_2$
correspond to the elastic and inelastic channels, respectively.
}
\label{table.resonance2}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$E_{\mathrm{ex}}$\,(MeV) & $\Gamma$\,(MeV) & $\Gamma_{1}$\,(MeV) &
$\Gamma_{2}$\,(MeV) & \sffamily{Ref.}\\
\hline
$1.7899\pm0.0003$ &
$0.4476\pm0.0009$ &
$0.1342\pm0.0005$ &
$0.3134\pm0.0005$ &
\sffamily{this work}\\
\hline
$1.9573\pm0.0010$ &
$0.8427\pm0.0123$ &
$0.5391\pm0.0126$ &
$0.3037\pm0.0247$ &
\sffamily{this work}\\
\hline
$1.9\pm0.1$ &
$0.53\begin{array}{cl}+&0.60\\[-2mm] -&0.10\end{array}$ &
$0.06\begin{array}{cl}+&0.30\\[-2mm] -&0.02\end{array}$ &
$0.47\begin{array}{cl}+&0.40\\[-2mm] -&0.10\end{array}$ &
\cite{Rogachev2013}\\
\hline
\end{tabular}
\end{center}
\caption{\sf
Statistically averaged parameters of the two $0^+$ resonances (the first two
lines) and the single $0^+$ resonance reported in Ref.~\cite{Rogachev2013}. The
energy $E_{\mathrm{ex}}$ is counted from the ground state of ${}^8\mathrm{B}$
nucleus.
}
\label{table.resonances.average}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\sffamily $S$-matrix poles (MeV)}\\
\hline
$(++)_{00}$ & $(-+)_{00}$ & $(+-)_{00}$ & $(--)_{00}$ \\
\hline
$1.8042 -i0.3971$ & $1.8013 -i0.2575$ &
$1.6693 -i0.2178$ & $1.6528 -i0.2245$ \\
$1.8068 -i0.2664$ & $1.8325 -i0.4072$ &
$1.7904 -i0.4035$ & $1.8189 -i0.4207$ \\
\hline
$1.8042 +i0.3971$ & $1.7648 +i0.5187$ &
$1.7697 +i0.3874$ & $1.7178 +i0.4407$ \\
$1.8068 +i0.2664$ & $1.8251 +i0.2468$ &
$1.8415 +i0.5646$ & $1.7793 +i0.6420$ \\
\hline
\end{tabular}
\end{center}
\caption{\sf
Poles of the two-channel $S$-matrix on all the principal sheets of the Riemann
surface within a distance of $\sim1$\,MeV from the central point,
$E_0=1.8$\,MeV, of the expansions (\ref{Aapprox}, \ref{Bapprox}).
The energy is counted from the $p\,{}^7\mathrm{Be}$ threshold.
}
\label{table.allpoles}
\end{table}
The resonance energies obtained with different $E_0$ are statistically
independent. We assume that they have a normal distribution and calculate the
corresponding average values as well as the standard deviations. The results of
these calculations (statistical averaging) are given in
Table~\ref{table.resonances.average}, where, for comparison, we also give
the parameters of the $0^+$ resonance obtained in
Ref.~\cite{Rogachev2013}.
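For instance, averaging the total-width column of Table~\ref{table.resonance1} reproduces the first line of Table~\ref{table.resonances.average}; we assume the sample standard deviation (ddof = 1), since the paper does not state the convention.

```python
import numpy as np

# Total widths of the first 0^+ resonance from the five fits
# (Table "table.resonance1"), in MeV:
gamma = np.array([0.44772, 0.44653, 0.44908, 0.44713, 0.44745])

mean = gamma.mean()
std = gamma.std(ddof=1)   # sample standard deviation
print(f"Gamma = {mean:.4f} +/- {std:.4f} MeV")   # -> Gamma = 0.4476 +/- 0.0009 MeV
```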
By scanning all four principal sheets of the Riemann surface within a distance
of $\sim1$\,MeV around the central energy $E_0$, we found several $S$-matrix
poles on each of the sheets. These calculations were done with
$E_0=1.8$\,MeV. The poles thus found are listed in Table~\ref{table.allpoles}.
Among all the poles, only those that are adjacent to the physical scattering
energies may influence the physical observables. They are the ones given in the
left bottom and right top blocks of Table~\ref{table.allpoles}. There are four
of them: two resonances on the sheet $(--)_{00}$ and two poles on the physical
sheet $(++)_{00}$. They are depicted in Fig.~\ref{fig.4poles}.
The sheets
$(--)_{00}$ and $(++)_{00}$ are cut along the real axis. At the energies
above the second threshold, the upper rim of the $(--)_{00}$-cut is connected
to the lower rim of the $(++)_{00}$-cut. The connecting line is the real axis
of the physical scattering energies. Thanks to the connection, it is possible
to continuously move from the sheet $(--)_{00}$ to $(++)_{00}$ and back, for
example, along the rectangular contour shown in Fig.~\ref{fig.4poles}.
In contrast to the resonances, the solutions of the Schr\"odinger equation,
corresponding to the complex poles on the sheet $(++)_{00}$, have an unphysical
behaviour (in particular, they have unphysical time dependence). The physical
system cannot be in such a state, but mathematically the poles exist anyway and
may influence the behaviour of the $S$-matrix on the real axis. Such poles are
sometimes called shadow poles. The influence of these poles on the scattering
cross section is explored in the next section.
\begin{figure}
\centerline{\epsfig{file=09_fig4poles.eps}}
\caption{\sf
The $S$-matrix poles adjacent to the real axis of the scattering energies
where the physical and non-physical sheets of the Riemann surface are connected.
The corresponding energies and the $S$-matrix
residues are given in Tables \ref{table.allpoles} and \ref{table.Residues}. The
numerical labels of
the poles are used as the corresponding references in Fig.~\ref{fig.MLexcl}.
The rectangular contour is used for the integration in the Mittag-Leffler sum
(\ref{MittagLeffler}).
}
\label{fig.4poles}
\end{figure}
\subsection{Contributions from individual poles}
\label{sec.Mittag-Leffler}
The $S$-matrix has many poles on the Riemann surface. Even
just on the principal sheets and only around the resonance
energy ($E\sim1.8\,\mathrm{MeV}$), it has eight poles given in
Table~\ref{table.allpoles}. Of course, their influence on the scattering cross
sections is different. Apparently, the poles that are far away from the axis
of the real scattering energies contribute very little, if anything. This axis
passes through the connection of the sheets $(++)_{00}$ and $(--)_{00}$
(see Fig.~\ref{fig.sheets_2ch_Coulomb}). Therefore the only noticeable
influence can be expected from the four poles shown in Fig.~\ref{fig.4poles},
which are near that axis.
It is always useful to know how important each
individual pole is. A reasonable answer to such a question can be obtained by
decomposing the $S$-matrix in a sum of the pole terms and the background
integral. Such a decomposition is possible thanks to the Mittag-Leffler theorem
of complex analysis. In fact, for our purpose, it is sufficient to
apply the simpler residue theorem, which leads to the Mittag-Leffler
decomposition (see Refs.~\cite{two_channel} and \cite{our_He5}).
Consider the rectangular contour shown in Fig.~\ref{fig.4poles},
which encloses the four chosen poles. If $E$ is a point inside this contour
(for calculating the cross section, we choose it on the real axis), then
according to the residue theorem, we have
\begin{equation}
\label{Res_theorem}
\oint\frac{\bm{S}(\zeta)}{\zeta-E}d\zeta=
2\pi i\bm{S}(E)+2\pi i\sum_{j=1}^L
\frac{\mathrm{Res}[\bm{S},E_j]}{E_j-E}\ ,
\end{equation}
where $E_j$ are the poles ($L=4$). This gives
\begin{equation}
\label{MittagLeffler}
\bm{S}(E)=\sum_{j=1}^L
\frac{\mathrm{Res}[\bm{S},E_j]}{E-E_j}+
\frac{1}{2\pi i}\oint\frac{\bm{S}(\zeta)}{\zeta-E}d\zeta\ ,
\end{equation}
which is a particular form of the Mittag-Leffler decomposition, where
the matrix $\bm{S}(E)$ is written as a sum of the individual pole contributions
and a background integral.
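The decomposition (\ref{MittagLeffler}) can be verified numerically on a toy $S$-function with a single known pole; the rectangle corners and the pole position below are arbitrary placeholders, not values from this work.

```python
import numpy as np

def ctrapz(y, x):
    """Trapezoidal rule along a complex path."""
    d = np.diff(x)
    return np.sum(d * (y[1:] + y[:-1]) / 2.0)

def mittag_leffler(S, E, poles, residues, z_lo, z_hi, n=2000):
    """Right-hand side of Eq. (MittagLeffler): pole terms plus the
    background integral over a counter-clockwise rectangle with
    corners z_lo (lower left) and z_hi (upper right)."""
    corners = [z_lo, complex(z_hi.real, z_lo.imag),
               z_hi, complex(z_lo.real, z_hi.imag), z_lo]
    background = 0.0 + 0.0j
    for a, b in zip(corners[:-1], corners[1:]):
        path = np.linspace(a, b, n)
        background += ctrapz(S(path) / (path - E), path)
    pole_sum = sum(r / (E - p) for p, r in zip(poles, residues))
    return pole_sum + background / (2j * np.pi)

# toy S with one simple pole (residue 1) plus an entire part
p = 1.8 - 0.4j
S = lambda z: 1.0 / (z - p) + z
E = 1.9 + 0.0j
exact = S(E)
approx = mittag_leffler(S, E, [p], [1.0], 1.0 - 1.0j, 3.0 + 1.0j)
```

Both the pole and the evaluation point must lie inside the contour, exactly as in the rectangular contour of Fig.~\ref{fig.4poles}.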
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\sffamily sheet & \sffamily pole: $E$\,(MeV) &
\sffamily $\mathrm{Res}[S_{nn'},E]$\,(MeV) & $n,n'$ \\
\hline
\hline
\multirow{8}{*}{$(--)_{00}$} &
\multirow{4}{*}{$1.6528-i0.2245$} &
$-0.0138-i0.0072$ & 1,1 \\
&& $\phantom{+}0.0433+i0.0261$ & 1,2 \\
&& $\phantom{+}0.0328+i0.0261$ & 2,1 \\
&& $-0.1014-i0.0918$ & 2,2 \\
\cline{2-4}
&
\multirow{4}{*}{$1.8189-i0.4207$} &
$\phantom{+}0.0291-i0.0147$ & 1,1 \\
& & $-0.0028-i0.0207$ & 1,2 \\
& & $\phantom{+}0.0134-i0.0156$ & 2,1 \\
& & $-0.0066-i0.0114$ & 2,2 \\
\hline
\hline
\multirow{8}{*}{$(++)_{00}$} &
\multirow{4}{*}{$1.8042+i0.3971$} &
$-0.0271-i0.0131$ & 1,1 \\
& & $\phantom{+}0.0148-i0.0210$ & 1,2 \\
& & $-0.0102-i0.0249$ & 2,1 \\
& & $\phantom{+}0.0224-i0.0052$ & 2,2 \\
\cline{2-4}
&
\multirow{4}{*}{$1.8068+i0.2664$} &
$\phantom{+}0.0060+i0.0088$ & 1,1 \\
& & $-0.0221-i0.0441$ & 1,2 \\
& & $-0.0096-i0.0288$ & 2,1 \\
& & $\phantom{+}0.0266+i0.1381$ & 2,2 \\
\hline
\end{tabular}
\end{center}
\caption{\sf
Poles of the two-channel $S$-matrix and the corresponding residues of its
elements in the domains of the Riemann sheets $(--)_{00}$ and $(++)_{00}$
adjacent to the axis of the real scattering energies (see
Fig.~\ref{fig.4poles}). The energy is counted from the
$p\,{}^7\mathrm{Be}$-threshold.
}
\label{table.Residues}
\end{table}
For any given scattering energy $E$, the background integral can be found by
numerical integration of the $S$-matrix, which we obtained after fitting the
experimental data. We assume that all the poles of the $S$-matrix
(\ref{Smatrix}) are simple, i.e.
\begin{equation}
\label{detFsimpleZero}
\det\bm{f}^{(\rm in)}(E)
\ \mathop{\longrightarrow}\limits_{E\to E_j}
\ \mathrm{const}\cdot(E-E_j)\ .
\end{equation}
Therefore the residues of the $S$-matrix at the poles can be found by
numerical differentiation of the determinant of
the Jost matrix,
\begin{equation}
\label{ResidueExplicit}
{\rm Res}\,[\bm{S},E]=\bm{f}^{(\rm out)}(E)\left(
\begin{array}{cc}
f^{(\rm in)}_{22}(E) & -f^{(\rm in)}_{12}(E) \\[3mm]
-f^{(\rm in)}_{21}(E) & f^{(\rm in)}_{11}(E)
\end{array}\right)
\left[\frac{d}{dE}\det \bm{f}^{(\rm in)}(E)\right]^{-1}\ .
\end{equation}
The residues thus calculated for the four poles are
given in Table~\ref{table.Residues}. Using these residues and the
numerically calculated background integral, we recover (as expected)
exactly the cross sections shown in Figs.
\ref{fig.fit.11}, \ref{fig.fit.12}, \ref{fig.fit.21}, and \ref{fig.fit.22}.
This serves as a cross-check of our calculations.
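Equation (\ref{ResidueExplicit}) can be sketched for the two-channel case as follows, assuming the convention $\bm{S}=\bm{f}^{(\mathrm{out})}\left[\bm{f}^{(\mathrm{in})}\right]^{-1}$. The toy check uses $\bm{f}^{(\mathrm{out})}=\bm{1}$ and a Jost matrix with a known simple zero, for which $\mathrm{Res}[\bm{S},p]=\mathrm{diag}(1,0)$.

```python
import numpy as np

def s_matrix_residue(f_in, f_out, E_pole, h=1e-6):
    """Residue of the 2x2 S-matrix at a simple zero E_pole of
    det f^(in), Eq. (ResidueExplicit): f_out times the adjugate of
    f_in, divided by the numerical derivative of det f^(in)."""
    fin = f_in(E_pole)
    adj = np.array([[fin[1, 1], -fin[0, 1]],
                    [-fin[1, 0], fin[0, 0]]])          # adjugate matrix
    ddet = (np.linalg.det(f_in(E_pole + h)) -
            np.linalg.det(f_in(E_pole - h))) / (2.0 * h)
    return f_out(E_pole) @ adj / ddet

# toy check: f_in = diag(E - 2, 1), f_out = 1  =>  S = diag(1/(E-2), 1)
res = s_matrix_residue(lambda E: np.array([[E - 2.0, 0.0], [0.0, 1.0]]),
                       lambda E: np.eye(2), 2.0)
```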
\begin{figure}
\centerline{\epsfig{file=10_plotMLexcl.eps}}
\caption{\sf
The dots represent the experimental (i.e. the $R$-matrix)
cross sections for the inter-channel transitions $n\to n'$, where the
channels are labeled as in Eq.~(\ref{coupled_gammas}).
The curves show the corresponding cross sections obtained when
a single pole is excluded from the Mittag-Leffler sum (\ref{MittagLeffler}).
}
\label{fig.MLexcl}
\end{figure}
Now, in order to get an idea of the role of each pole, we can omit them one by
one from the sum (\ref{MittagLeffler}) and see how this affects the partial
cross sections. The results of such pole exclusions are shown in
Fig.~\ref{fig.MLexcl}. The curves show the cross sections when one pole is
excluded. The dots are the experimental data (i.e. the $R$-matrix cross
sections).
\begin{figure}
\centerline{\epsfig{file=11_plotNopoles.eps}}
\caption{\sf
The background-scattering contributions (curves) to the partial cross sections
for the inter-channel transitions $n\to n'$, when all
four poles shown in Fig.~\protect{\ref{fig.4poles}} are excluded from the
Mittag-Leffler sum (\ref{MittagLeffler}). The dots are the corresponding
experimental (i.e. the $R$-matrix) cross sections.
}
\label{fig.Nopoles}
\end{figure}
It is seen that the second resonance (pole number 2) contributes very little
and mainly to the elastic cross section in the first channel. The influences of
the other three poles are noticeable in various channels.
It is also interesting to know what happens if we keep only the
background integral and exclude all the pole terms
from the Mittag-Leffler expansion (\ref{MittagLeffler}). The result of such an
exclusion can be seen in Fig.~\ref{fig.Nopoles}. The background term
describes the general behaviour of the cross sections and gives a
reasonable approximation for them to the left and to the right of the resonance
energies. However, inside the resonance energy interval, without the resonant
and shadow poles, all the cross sections are far from the experimental
points.
\section{Summary and conclusion}
\label{sec.conclusion}
As was stated in the Introduction, the main task of the present work was to
confirm the existence and to accurately determine the
parameters of the lowest resonance level with the quantum numbers $J^\pi=0^+$
in the spectrum of the eight-nucleon system ${}^8\mathrm{B}$.
For this purpose, we constructed the two-channel Jost-matrices
that have proper analytic structure and are defined on the Riemann surface with
the proper topology (with both the square-root and logarithmic branching). The
free parameters of these Jost matrices were fixed using an available $R$-matrix
fit~\cite{Rogachev2013} of experimental data on $p\,{}^7\mathrm{Be}$ scattering.
Exploring the behaviour of these Jost matrices on the principal sheets of the
Riemann surface, we located 16 poles of the $S$-matrix (see
Table~\ref{table.allpoles}). Among them, only four poles (two resonances and two
shadow poles) are located close enough to the axis of the real scattering
energies and therefore can influence the observable cross sections (see
Fig.~\ref{fig.4poles}).
Therefore, we found that instead of a single $0^+$ resonance, there are two
overlapping resonances with almost the same parameters as were reported in
Ref.~\cite{Rogachev2013} (see Table~\ref{table.resonances.average}). In
addition to them, there are also two overlapping shadow poles on the opposite
side of the real axis.
In order to isolate the individual contributions to the $S$-matrix from the
resonances and the shadow poles, we used the Mittag-Leffler decomposition.
In this way it was established that the second resonance has a rather weak
influence on the energy dependencies of the partial cross sections. The roles
of the other three poles are noticeable.
As is seen from Fig.~\ref{fig.MLexcl}, the first resonance and the second
shadow pole significantly change the inelastic cross sections and the
elastic scattering in the second channel. In principle, such changes could be
detected experimentally, if the ${}^7\mathrm{Be}$ target is exposed to
$\gamma$-rays of the energy $\sim0.5\,\mathrm{MeV}$, when the cross section
of $p\,{}^7\mathrm{Be}$ collision is being measured. In such a case, the
electromagnetic radiation could cause part of the target nuclei to transit from
the ground to the first excited state,
${}^7\mathrm{Be}(\frac32^-)+\gamma\to{}^7\mathrm{Be}^*(\frac12^-)$.
\section{Introduction}
Secular effects in de Sitter (dS) space quantum field theories have attracted a great deal of attention (see \cite{Starobinsky:1982ee}--\cite{Akhmedov:2013vka} for an incomplete list of references). Let us consider for instance the two--point Wightman function
\bqa\label{W12}
W(1,2) = \left\langle \phi\left(t_1, \, \vec{x}_1\right) \, \phi\left(t_2, \, \vec{x}_2\right)\right\rangle
\eqa
of a real scalar quantum field on the $D$--dimensional dS manifold (we consider here a fixed background, i.e. backreaction effects are not taken into account)\footnote{In this paper we restrict our attention to real scalar fields, but similar secular effects appear also in other theories.}.
There are at least the following three types of secular effects:
\begin{itemize}
\item The {\it secular growth of the first kind} appears in the case of the massless minimally coupled scalars and also for tachyonic fields \cite{Starobinsky:1982ee}--\cite{Tsamis:2005hd} and \cite{bemtach,bemtach2}. It was first seen in the tree--level correlation function (\ref{W12}) when $t_1 = t_2 = t$ and $\vec{x}_1 = \vec{x}_2$ and then observed in the loops in the expanding Poincar\'{e} patch (EPP) $$
ds^2 = dt^2 - e^{2t} \, d\vec{x}^2
$$ of the $D$--dimensional dS space of unit radius. In fact, as $t\to +\infty$ one finds that \cite{Starobinsky:1982ee}--\cite{Tsamis:2005hd}:
\bqa
W_{\rm tree+loops}(1,1) \equiv \left\langle \phi^2\left(t,\vec{x}\right)\right\rangle \approx t \, A_0 + \lambda \, t^3 \, A_1 + \dots
\eqa
$A_0$ is the tree--level contribution, $A_1$ is the first loop contribution which contains integrals of products of mode functions and $\lambda$ is the self-coupling constant of the scalar field theory. The dependence on $\vec{x}$ disappears due to the spatial homogeneity of the EPP and the chosen initial state. The effect can also be observed when $\vec{x}_1 \neq \vec{x}_2$ \cite{Gautier:2015pca}--\cite{Moreau:2018lmz}, where the mass term can be treated perturbatively.
This secular growth is specific for massless\footnote{Or to the case when the mass term can be treated perturbatively.} minimally coupled scalars in the EPP and violates dS isometry even at tree--level \cite{Starobinsky:1982ee}--\cite{Tsamis:2005hd}. In fact, in such a case at each order of perturbation theory the Wightman propagator is not a function of the scalar invariant --- the so called hyperbolic distance.
Methods to deal with the secular growth of the first kind are developed in \cite{Starobinsky}, \cite{Tsamis:2005hd}, \cite{Gautier:2015pca}--\cite{Moreau:2018lmz}. But they work only in the EPP, for small enough perturbations over the Bunch--Davies (BD) state
\cite{Chernikov:1968zm}, \cite{Bunch:1978yq}, i.e. only when the same sort of effects in higher point functions can be neglected.
We are not going to discuss effects of this type in detail in the present paper. We have mentioned them just to stress the difference with respect to the other secular effects on which we are going to focus on in this paper.
\item The {\it secular growth of the second kind} appears for scalar fields of arbitrary mass; it is seen in the loops when $\left|t_1 -t_2\right| \to \infty$. Namely, in the $\lambda \phi^3$ (resp. $\lambda \phi^4$) theory the one (resp. two) loop corrections to the Wightman propagator contain contributions of the following form:
\bqa
W_{\rm loop}(t_1, \, t_2|p) \approx \lambda^2 \left(t_1 - t_2\right) \, B,
\eqa
where
\bqa
W(t_1, \, t_2|p) = \int d^{D-1}\vec{x} \, e^{i \, \vec{p} \, \vec{x}} \, \left\langle \phi\left(t_1, \, \vec{x}\right) \, \phi\left(t_2, \, 0\right)\right\rangle
\eqa
is the spatial Fourier transform of the two--point Wightman function (note that in this paper we always discuss spatially homogeneous states). $W_{\rm loop}$ denotes the one (resp. two) loop contribution, and $B$ is a constant containing integrals of products of mode functions whose explicit form will be shown below. The implications of this effect for dS physics have been discussed in the literature (see e.g. \cite{Boyanovsky:1993xf}--\cite{Leblond}). Usually this effect leads to a mass renormalization or to a contribution to the imaginary part of the self--energy. This effect cannot be definitely attributed to the infrared (IR) contributions: it can also appear from the ultraviolet (UV) region of internal loop momenta, and even in a stationary situation.
\item The {\it secular growth of the third kind} also appears for scalar fields of arbitrary mass. It shows up in the loops when $t_1+t_2 \to \infty$ while $t_1-t_2 ={\rm const}$. Namely, the one (resp. two) loop corrections to the Wightman function in the $\lambda \phi^3$ (resp. $\lambda \phi^4$) theory in the EPP contain contributions of the form:
\bqa\label{555}
W_{\rm loop}(t_1, \, t_2|p) \approx \lambda^2 \left(t_1 + t_2\right) \, C,
\eqa
where $C$ is a constant containing integrals of products of mode functions. The calculation of the one--loop correction to the Wightman function in the $\lambda \phi^3$ theory with mass $m>(D-1)/2$, in units of the dS curvature, has been done in \cite{PolyakovKrotov}, both in the EPP and in global dS. The extension of this calculation to the $\lambda\phi^4$ theory and to higher loops has been done in \cite{AkhmedovKEQ}, \cite{AkhmedovBurda}, \cite{AkhmedovGlobal}, \cite{AkhmedovPopovSlepukhin}, \cite{Akhmedov:2013vka}. The extension to light fields with mass $m<(D-1)/2$, in units of the dS curvature, was done at one loop in \cite{AkhmedovPopovSlepukhin} and at higher loops in \cite{Akhmedov:2017ooy}.
\item Finally, in the contracting Poincar\'{e} patch (CPP) and in the global dS manifold there is a {\it secular divergence} in place of the secular growth of the third kind \cite{PolyakovKrotov}, \cite{AkhmedovGlobal}, \cite{Akhmedov:2013vka}:
\bqa
W_{\rm loop}(t_1, \, t_2|p) \approx \lambda^2 \left(t - t_0\right) \, F,
\eqa
where $t=\frac{t_1 + t_2}{2}$, $t_0$ is the initial time (Cauchy surface) at which the self--interactions are adiabatically turned on, and $F$ is a constant containing integrals of products of mode functions. We will see that the secular growth of the third kind in the EPP (\ref{555}) and the above secular divergences in the CPP and in global dS have the same physical origin. But, as we explain in the present paper, the resummation of the leading contributions from all loops in the case of secular growth (of any kind) and in the case of the secular divergence are physically distinct problems.
The appearance of $t_0$ in the expressions for the correlation functions signals a divergence rather than just a growth with time. In fact, if one takes $t_0 \to - \infty$, the loop corrections to the correlation functions are infinite (divergent) even if one cuts off the ultraviolet divergences. This means that in such a case the initial Cauchy surface cannot be taken to past infinity (see \cite{Akhmedov:2013vka} for a generic discussion). This, in its own right, means that in such a situation the correlators are not functions of the scalar invariant, i.e. the dS isometry is violated in the loops by such secular divergences, even if it is respected at tree--level.
\end{itemize}
Secular effects of the second and third kind, as well as the secular divergence, are generic and appear in practically any non--stationary situation. For example, the secular divergence appears even in flat space quantum field theory in a non--stationary situation --- for a non--Planckian initial distribution, as discussed e.g. in \cite{LL}.
Moreover, it appears in the presence of a constant electric field \cite{Akhmedov:2014hfa,Akhmedov:2014doa,Akhmedov:2009vh}, and is similar to the divergence in global dS and in the CPP.
For an electric pulse in QED there is a secular growth instead of a divergence, similar to the one in the EPP.
The secular growth of the third kind in the case of black hole collapse is discussed in \cite{Akhmedov:2015xwa} (see also \cite{Leahy:1983vb}). At the same time, the secular growth of the second kind in the case of black hole collapse was discussed in \cite{Burgess:2018sou}. Finally, the secular growth of the third kind in the presence of moving mirrors is discussed in \cite{Akhmedov:2017hbj} (see also \cite{Astrahantsev:2018lho}).
The secular growth of the third kind and the corresponding divergence in the loops are infrared effects. These two effects are sensitive to the boundary and initial conditions. As a result, it is no wonder that they reveal themselves in different ways in various patches of the same dS space. The presence of a secular growth implies a violation of the applicability of perturbation theory. In fact, even if $\lambda$ is very small, $\lambda^2 \, (t_2 - t_1)$, $\lambda^2 \, (t_1 + t_2)$ or $\lambda^2\,\left[(t_1 + t_2)/2 - t_0\right]$ can become of order unity as the two arguments $t_{1,2}$ of the correlation function are taken to future infinity.
Hence, to understand the physics even of massive fields in dS space one has to perform a resummation at least of the leading contributions from all loops. Usually in dS space quantum field theory this is done only for very specific initial states, when the mass term can be treated perturbatively. Meanwhile the result of the resummation strongly depends on the patch and the initial state. The goal of the present paper is to clarify some of these points.
In the first place, for the resummation of the large infrared effects one has to solve the system of Dyson--Schwinger equations in some approximation. This system contains equations for the two--point functions and the vertices. Each of the unknowns of the system can possess independent secular contributions. In fact, in $D$--dimensional dS space, when $m\leq \frac{\sqrt{3}}{4}\, (D-1)$ in units of the dS curvature, higher point correlation functions also show a secular growth as one takes all of their arguments to future infinity \cite{Akhmedov:2017ooy} (see also \cite{Bros}--\cite{Bros:2006gs} for a discussion of the origin of this effect). This phenomenon is also specific to dS space quantum field theory.
In this paper we discuss situations in which the secular growth in the higher point functions is not present. Namely, we mostly discuss scalars from the principal series, $m>(D-1)/2$. We specifically designate which of the loop contributions provide the leading corrections at each loop level. It turns out that in the case of secular growth and in the case of secular divergence different types of diagrams contribute the leading corrections in dS quantum field theory. Hence, e.g. the problems of resummation of the leading loop corrections in the EPP and in global dS are physically different.
The paper is organized as follows. In section II we establish the setup and notation. In section III we explain the origin of the secular growth of the third kind in the EPP for the BD state.
Section IV deals with the difference between the secular growth in the EPP for the BD state and the secular divergences emerging for alpha--vacua in the EPP and for any state in the CPP and in global dS.
In section V we investigate the relation between the secular growth of the second and the third kind in the dS invariant situation. In particular, we find the relation between these two types of secular effects for the BD initial state in the EPP in the $x$--space representation.
In section VI we discuss the problem of the resummation of the leading secular contributions from all loops. We explicitly show which types of diagrams provide the leading contributions in the case of secular effects of the second and third kind in the EPP for the initial BD state. Then we show that in the CPP and in global dS different types of diagrams provide the leading contributions in the case of the secular divergence.
Section VII contains some conclusions.
\section{Setup}
We consider the scalar field theory:
\bqa\label{freeac}
S = \int d^D x \sqrt{|g|}\, \left[\frac12 \, g^{\alpha\beta} \, \pr_\alpha \phi \, \pr_\beta \phi - \frac12 \, m^2 \, \phi^2 - \frac{\lambda}{3!} \, \phi^3\right],
\eqa
where $\phi$ is real. We restrict our attention to the $\phi^3$ potential just to simplify all expressions. The effects that we discuss here have nothing to do with the runaway instability of the $\phi^3$ potential and can also be seen in $\phi^4$ theory \cite{AkhmedovPopovSlepukhin}.
The background geometry in (\ref{freeac}) is given by the expanding Poincar\'{e} patch (EPP):
\bqa\label{PP}
ds^2 = \frac{1}{\eta^2}\,\left[d\eta^2 - d\vec{x}^2\right], \quad \eta \equiv e^{-t}.
\eqa
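In terms of the proper time $t$ the metric (\ref{PP}) takes the standard cosmological form
\bqa
ds^2 = dt^2 - e^{2t}\, d\vec{x}^2, \quad t = -\log\eta,
\eqa
so the limit $\eta \to 0$ corresponds to $t \to +\infty$; it is this proper time that enters the secular factors $t_1 \pm t_2$ discussed above.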
Below we also consider the contracting Poincar\'{e} patch (CPP), the global de Sitter (dS) metric and the so called sandwich metric --- an EPP--like expansion interpolating between two flat regions. We set the Hubble constant to one. Only the scalar field $\phi$ is dynamical; gravity is a fixed background.
The expansion of the field operator over the Bunch--Davies (BD) modes \cite{Bunch:1978yq} is defined as:
\bqa\label{gph}
\phi\left(\eta, \vec{x}\right) = \int \frac{d^{D-1}\vec{p}}{(2\pi)^{D-1}} \, \left[a_{\vec{p}} \, f_{p}(\eta, \vec{x}) + a^+_{\vec{p}} \, f^*_{p}(\eta, \vec{x})\right],
\eqa
where the modes satisfy the following equation:
\bqa
\left[\eta^2 \partial_\eta^2 + (2-D)\eta \partial_\eta + m^2 - \eta^2\Delta \right] f_p\left(\eta, \vec{x}\right) = 0. \label{eoharm}
\eqa
Its solutions are
\bqa
f_p\left(\eta, \vec{x}\right) = \eta^{(D-1)/2} \, h(p\eta) \, e^{- i \, \vec{p}\, \vec{x}}, \quad {\rm and} \quad h(p\eta) = \frac{\sqrt{\pi}}{2} e^{-\frac12 \pi\mu} \, H_{i \, \mu}^{(1)}(p\eta).
\eqa
In the last expression:
\bqa
\mu \equiv \sqrt{m^2 - \left(\frac{D-1}{2}\right)^2}, \quad {\rm and} \quad p \equiv \left|\vec{p}\right|.
\eqa
The normalization factor of the modes is fixed by the commutation relations of the field $\phi$ with its conjugate momentum, and of the creation, $a^+_{\vec{p}}$, with the annihilation, $a_{\vec{p}}$, operators.
In this paper we consider the case $m>0$. Below we discuss the loop IR contributions in the limit $p\eta_{1,2} \ll \left|\mu\right|$. In this limit the leading behaviour of the modes is as follows:
\bqa\label{hp1}
h(p\eta) \approx A_+ \, \left(p\eta\right)^{i\mu} + A_- \, \left(p\eta\right)^{-i\mu},
\eqa
if $m>(D-1)/2$. Here $A_+ =\frac{2^{-i\mu} e^{-\frac{\pi \mu}{2}} \sqrt{\pi}(1+\coth{\pi\mu})}{2\Gamma(1+i\mu)}$ and $A_- = -\frac{i \, 2^{i\mu} e^{-\frac{\pi \mu}{2}} \Gamma(i \mu)}{2\sqrt{\pi}}$. Most of the equations below are written for the case $m> (D-1)/2$ (for the precise expressions see \cite{Akhmedov:2013vka} for the $m>(D-1)/2$ case and \cite{Akhmedov:2017ooy} for the $m<(D-1)/2$ case). To simplify the discussion below, we show just the leading expressions in the appropriate limits, so as to reveal the physical meaning of the phenomena that are discussed in the paper. Otherwise the expressions become very cumbersome.
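The asymptotic form (\ref{hp1}) and the coefficients $A_\pm$ follow from the standard small--argument behaviour of the Hankel function. Namely, using the connection formula
\bqa
H^{(1)}_{i\mu}(x) = \frac{J_{-i\mu}(x) - e^{\pi\mu}\, J_{i\mu}(x)}{-\sinh\left(\pi\mu\right)}, \quad {\rm and} \quad J_{\pm i\mu}(x) \approx \frac{\left(x/2\right)^{\pm i\mu}}{\Gamma\left(1 \pm i\mu\right)}, \quad {\rm as} \quad x \to 0,
\eqa
in $h(p\eta) = \frac{\sqrt{\pi}}{2}\, e^{-\frac12 \pi\mu}\, H^{(1)}_{i\mu}(p\eta)$, together with the reflection formula $\Gamma(i\mu)\,\Gamma(1-i\mu) = \pi/\sin(i\pi\mu)$, one reproduces the coefficients $A_\pm$ quoted above.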
In non--stationary quantum field theory every field is described by three propagators (see \cite{LL}, \cite{Kamenev} for an overview). The retarded and advanced propagators are proportional to the commutator, whose spatial Fourier representation at tree--level is equal to:
\bqa\label{C0}
C_0\left(p\left|\eta_1, \, \eta_2\right.\right) \equiv - i \, \int d^{D-1}\vec{x} \, e^{- i \vec{p}\, \vec{x}} \, \left[\phi\left(\eta_1, \vec{x}\right), \, \phi\left(\eta_2, 0\right)\right] = \nn \\ = 2 \, \left(\eta_1 \, \eta_2\right)^{(D-1)/2} \, {\rm Im} \left[h\left(p\eta_1\right) \, h^*\left(p\eta_2\right)\right].
\eqa
The third relevant two--point correlation function is the Keldysh propagator. Its tree--level spatially Fourier representation is:
\bqa\label{DK0}
D^K_0\left(p\left|\eta_1, \, \eta_2\right.\right) \equiv \frac{1}{2} \, \int d^{D-1}\vec{x} \, e^{- i \vec{p}\, \vec{x}}\left\langle \left\{\phi\left(\eta_1, \vec{x}\right), \, \phi\left(\eta_2, 0\right)\right\}\right\rangle = \nn \\ = \left(\eta_1 \, \eta_2\right)^{(D-1)/2} \, {\rm Re} \left[h\left(p\eta_1\right) \, h^*\left(p\eta_2\right)\right].
\eqa
Note that while $C_0$ does not depend on the state, the Keldysh propagator $D^K_0$ does. Eq. (\ref{DK0}) is the expression of the Keldysh propagator for the BD state.
The index ``$0$'' of $C$ and $D^K$ means that these are the tree--level expressions of the corresponding two--point functions.
Essentially, with the Schwinger--Keldysh technique one calculates correlation functions rather than amplitudes, as in the Feynman technique. The result of the calculation solves a Cauchy problem, in which the ground state plays the role of the initial state. Unlike the Feynman technique, the Schwinger--Keldysh approach is completely causal: within the Schwinger--Keldysh technique the result of any loop contribution depends only on the causal past of the arguments of the correlation functions.
\section{Secular growth of the third kind in the EPP}
The secular effect of the third kind and the corresponding divergence are, in our opinion, potentially the most relevant ones as regards backreaction on the background dS geometry \cite{Akhmedov:2013vka}, \cite{Akhmedov:2017ooy}. One of the goals of this paper is to demonstrate exactly this point.
\begin{figure}[t]
\begin{center} \includegraphics[scale=1]{1.pdf}
\caption{In the Schwinger--Keldysh technique there are several diagrams, of the type that is shown here, which contribute to the one loop correction to the two--point functions. The Schwinger--Keldysh diagrammatic technique in the context of cosmology is reviewed in \cite{vanderMeulen:2007ah} (see also \cite{Akhmedov:2013vka}).}\label{1}
\end{center}
\end{figure}
At the leading order the sum of the tree--level and one loop (see fig. \ref{1}) contributions for the Keldysh propagator can be expressed as \cite{Akhmedov:2013vka}:
\bqa\label{EPP}
D_{0+1}^{K}\left(p\, |\eta_1,\eta_2\right) \approx \eta^{D-1}\, \Bigl\{\Bigl[1 + 2 \, n_1(p\eta)\Bigr] \, {\rm Re} \left[h(p\eta_1)\, h^*(p\eta_2)\right] + \nonumber \\ + h(p\eta_1)\, h(p\eta_2) \,\kappa_1(p\eta) + h^*(p\eta_1)\, h^*(p\eta_2) \,\kappa_1^*(p\eta) \Bigr\},
\eqa
where $\eta = \sqrt{\eta_1\, \eta_2}$ and the modes $h(p\eta_{1,2})$ should be approximated by (\ref{hp1}). In $\phi^3$ theory
\bqa\label{nkappa}
n_1(p\eta) \propto \lambda^2 \, \log\left(\frac{\mu}{p \, \eta}\right)\, \iint_0^\infty \frac{dv}{v} \, dl \, l^{D-2}\, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right]\, \left[h\left(l/\sqrt{v}\right)\, h^*\left(l\sqrt{v}\right)\right]^2, \nn \\
\kappa_1(p\eta) \propto \lambda^2 \, \log\left(\frac{\mu}{p \, \eta}\right)\, A_+ \, A_- \, \int_1^\infty \frac{dv}{v} \,\int^\infty_{0} dl \, l^{D-2}\, \left[v^{i\mu} + v^{-i\mu}\right]\, \left[h\left(l/\sqrt{v}\right)\, h^*\left(l\sqrt{v}\right)\right]^2.
\eqa
Here $A_\pm$ are defined in (\ref{hp1}) and $\kappa_1^*(p\eta)$ is just the complex conjugate of $\kappa_1(p\eta)$. These expressions are the leading contributions in the limit when $p\sqrt{\eta_1 \, \eta_2} \to 0$ and $\eta_1/\eta_2 = {\rm const}$. The coefficients of proportionality in (\ref{nkappa})
can be found in \cite{Akhmedov:2013vka}. Their exact form is not necessary for the further discussion in this paper. The index 1 in the notation of $n_1(p\eta)$ and $\kappa_1(p\eta)$ means that these expressions are just the one loop contributions\footnote{In $\lambda \phi^4$ theory the corresponding expressions are similar \cite{AkhmedovPopovSlepukhin}, but the secular growth appears in the two loop contributions which follow from the sunset diagrams.}.
We discuss the physical origin of such contributions in the next section. The loop IR corrections to $C_0$ are discussed below in section V. We will see that in the limit considered in this section $C_0$ does not receive any secularly growing contributions.
From the form of (\ref{EPP}) it is not hard to recognize in $n(p\eta) \, \left|h(p\eta)\right|^2$ the level population, where $n(p\eta) \, \delta\left(\vec{p} - \vec{q}\right) = \left\langle a^+_{\vec{p}} a_{\vec{q}} (\eta) \right\rangle$, evaluated at second order in $\lambda$ in the interaction picture. It is important to note that this level population is attributed to the comoving volume. (The volume factor $\eta^{D-1} \equiv \left(\eta_1 \, \eta_2\right)^{\frac{D-1}{2}}$ is the coefficient of proportionality in (\ref{EPP}).)
Similarly, $\kappa(p\eta)\, h^2(p\eta)$ is nothing but the anomalous quantum average, $\kappa(p\eta) \,\delta\left(\vec{p} + \vec{q}\right) = \left\langle a_{\vec{p}} a_{\vec{q}} (\eta) \right\rangle$, also evaluated in the interaction picture to second order in $\lambda$ and attributed to the comoving volume. Finally, $\kappa^*(p\eta)\, \delta\left(\vec{p} + \vec{q}\right) = \left\langle a^+_{\vec{p}} a^+_{\vec{q}} (\eta) \right\rangle$.
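Indeed, expanding the field over the modes (\ref{gph}) and using the spatially homogeneous averages $\left\langle a^+_{\vec{p}} a_{\vec{q}} \right\rangle = n_p\, \delta\left(\vec{p} - \vec{q}\right)$ and $\left\langle a_{\vec{p}} a_{\vec{q}} \right\rangle = \kappa_p\, \delta\left(\vec{p} + \vec{q}\right)$, one finds for the Keldysh propagator
\bqa
D^{K}\left(p|\eta_1, \eta_2\right) = \left(\eta_1\,\eta_2\right)^{\frac{D-1}{2}} \left\{\left[1 + 2\, n_p\right] {\rm Re}\left[h(p\eta_1)\, h^*(p\eta_2)\right] + \kappa_p\, h(p\eta_1)\, h(p\eta_2) + \kappa^*_p\, h^*(p\eta_1)\, h^*(p\eta_2)\right\},
\eqa
which is exactly the structure of (\ref{EPP}) with the one loop values $n_1$ and $\kappa_1$.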
Please remember that in this paper we consider spatially homogeneous quantizations only, and that we are working in the interaction picture. Hence, when $\lambda = 0$ all the above quantities, $n$, $\kappa$ and $\kappa^*$, are time independent.
They start to evolve when one turns on the self--interactions.
As $\sqrt{\eta_1 \, \eta_2} \to 0$, i.e. when $(t_1 + t_2)/2 \to + \infty$, we encounter the secular growth of the third kind. Usually its physical meaning is that, due to the $\lambda \, \phi^3$ self--interaction, the level populations of the low lying exact modes in the theory change in time. Also the ground state changes, due to the secular growth of the anomalous averages $\kappa_p$ and $\kappa^*_p$ \cite{Akhmedov:2013vka}. That is the usual picture, but in dS space there are peculiarities due to its symmetry. We discuss them in a moment.
In any case, because $\lambda^2 \, \left(t_1 + t_2\right) \gtrsim 1$ for a long enough time of evolution, we encounter here a breakdown of perturbation theory, which is a usual phenomenon in non--stationary situations, and even in finite temperature stationary state quantum field theory. This just means that to understand the physics in dS space one has to perform at least a resummation of the leading secular effects from all loops. The result of the resummation will provide the correct time dependence of $n$, $\kappa$ and $\kappa^*$, rather than just the approximate linear growth.
Consequently, the goal should be to understand which type of contributions are the leading corrections to these quantities at each loop order.
At this point it is important to stress that to perform the resummation of such contributions one usually has to apply the kinetic approach \cite{Kamenev}. However, in dS space there are important peculiarities, which are mainly discussed in section V. These peculiarities appear because dS space
has a large isometry group, which plays the same role as the Poincar\'{e} group in Minkowski space. However, it happens that loop corrections respect the isometry {\it only} for the exact BD initial state in the exact EPP \cite{Polyakov:2012uc} (see also \cite{Akhmedov:2013vka}).
Meanwhile in the CPP as well as in global dS the isometry group is broken in the loops by IR divergences for any initial state \cite{PolyakovKrotov}, \cite{AkhmedovKEQ}, \cite{AkhmedovGlobal} and \cite{Akhmedov:2013vka}. For alpha--vacua the dS isometry is also broken in the loops even in EPP \cite{Polyakov:2012uc}, \cite{Akhmedov:2013vka}. In the next section we explain the reason for these symmetry violations.
\section{Secular growth vs. secular IR divergence}
Before discussing the implications of the dS isometry let us consider the origin of (\ref{EPP}) and (\ref{nkappa}) and the situation in the CPP as well as in global dS. For a generic spatially homogeneous background the one loop correction is similar to (\ref{EPP}), but instead of $n(p\eta)$ and $\kappa(p\eta)$ it contains the following expressions:
\bqa\label{basic1}
n^1_p(\eta) \propto \lambda^2 \, \int d^{D-1}q_1 \int d^{D-1}q_2 \, \int_{\eta_0}^\eta d\eta_3 \,\sqrt{g(\eta_3)} \, \int_{\eta_0}^\eta d\eta_4 \, \sqrt{g(\eta_4)} \, \delta\left(\vec{p} + \vec{q}_1 + \vec{q}_2\right) \times \nonumber \\ \times f_p^*\left(\eta_3\right) \, f_p\left(\eta_4\right) \, f_{q_1}^*(\eta_3)\, f_{q_1}(\eta_4) \, f_{q_2}^*(\eta_3)\, f_{q_2}(\eta_4), \nonumber \\
\kappa^1_p(\eta) \propto \lambda^2 \, \int d^{D-1}q_1 \int d^{D-1}q_2 \, \int_{\eta_0}^\eta d\eta_3 \,\sqrt{g(\eta_3)} \, \int_{\eta_0}^{\eta_3} d\eta_4 \, \sqrt{g(\eta_4)} \, \delta\left(\vec{p} + \vec{q}_1 + \vec{q}_2\right) \times \nonumber \\ \times f_p^*\left(\eta_3\right) \, f^*_p\left(\eta_4\right) \, f_{q_1}^*(\eta_3)\, f_{q_1}(\eta_4) \, f_{q_2}^*(\eta_3)\, f_{q_2}(\eta_4),
\eqa
and the complex conjugate expression for $\kappa^{1*}_p(\eta)$ \cite{Akhmedov:2013vka}; the upper index 1 indicates that we are discussing one loop corrections. Here $f_p(\eta)$ is the time dependent part of the mode functions, which in the case of the EPP is $f_p(\eta) = \eta^{(D-1)/2}\, h(p\eta)$, and $\eta_0$ is the time after which the self--interaction $\lambda$ is adiabatically turned on. In dS space $n_p(\eta) = n(p\eta)$ and $\kappa_p(\eta) = \kappa(p\eta)$ due to the dS isometry invariance, which both in the EPP and in the CPP contains the simultaneous rescalings of $\eta$ and $\vec{x}$.
When the expressions in Eq. (\ref{basic1}) are not zero, they represent the leading contributions in the limit $\left|\eta-\eta_0\right| \to \infty$, if $\eta$ is the proper time, or in the limit $\eta/\eta_0 \to \infty$, if $\eta$ is the conformal time.
In the flat space case we have $\sqrt{g(\eta)} = 1$ and $f_p(\eta) = e^{- i \, \omega_p \, \eta}/\sqrt{2\omega_p}$ with $\omega_p = \sqrt{p^2 + m^2}$, where $\eta$ is the proper (Minkowskian) time. As a result, in such a case in the limit $\left|\eta-\eta_0\right| \to \infty$ one obtains, e.g. for $n_p$, the following expression:
\bqa\label{basic}
n^1_p(\eta) \propto \lambda^2 \, \left(\eta - \eta_0\right) \, \int d^{D-1}q_1 \int d^{D-1}q_2 \, \delta\left(\vec{p} + \vec{q}_1 + \vec{q}_2\right) \, \delta\left(\omega_{p} + \omega_{q_1} + \omega_{q_2}\right).
\eqa
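The secular factor in (\ref{basic}) arises in the standard Fermi golden rule manner: for the plane wave modes the two time integrals in (\ref{basic1}) combine into
\bqa
\left|\int_{\eta_0}^{\eta} d\eta'\, e^{i\left(\omega_p + \omega_{q_1} + \omega_{q_2}\right)\eta'}\right|^2 \longrightarrow 2\pi\, \left(\eta - \eta_0\right)\, \delta\left(\omega_p + \omega_{q_1} + \omega_{q_2}\right), \quad {\rm as} \quad \left|\eta - \eta_0\right| \to \infty,
\eqa
and the argument of this delta--function is bounded from below by $3m > 0$, so it never vanishes.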
Hence, in the situation under consideration the density does not change and remains zero, $n_p(\eta) = 0$. Thus, there is no secular IR divergence of the form $\lambda^2 \, \left(\eta - \eta_0\right)$, due to energy--momentum conservation: the creation of particles from the ground state is impossible. Similarly, $\kappa_p(\eta) = 0$ and the ground state does not change. These are the core facts, which are deeply related to the adiabatic theorem (see e.g. \cite{Trunin:2018egi}--\cite{Leonidov:2018zdi} for the situation in non--stationary systems).
\subsection{Secular growth in the EPP}
Now let us return to the case of the EPP.
For BD modes we have that $f_p(\eta) \sim h(p\eta) \sim e^{i \, p \, \eta}$ when $p\eta \gg \mu$ ($\eta$ is the conformal time). Such a behaviour of the modes in the EPP is the consequence of the strong blue shift of every mode towards past infinity: modes with large physical momenta do not feel the curvature of the space--time and behave as if they were in flat space. Because of that in the expression (\ref{basic1}) the secular growth (\ref{nkappa}) arises only from integrating over momenta and conformal times for which $p\,\eta_{3,4} \ll \mu$ (see \cite{Akhmedov:2013vka} for more details)\footnote{It is important for the remaining part of the paper to understand that to obtain (\ref{nkappa}) from (\ref{basic1}) one has to perform the integration over $q_2$ in (\ref{basic1}) and then make the following change of variables $(q_1, \eta_3, \eta_4) \to (u=\sqrt{\eta_3\, \eta_4}, \, q_1 \, \sqrt{\eta_3 \, \eta_4} = l, \eta_3/\eta_4 = v)$. After that the logarithmic behavior appears as $\int^{\mu/p}_\eta du/u = \log\left(\mu/p\eta\right)$ from the region of integration where $q_1 \gg p$. Note that while expression $(\eta - \eta_0)$ appears in (\ref{basic}) as the consequence of the time translational invariance in Minkowski space, the expression $\log(\mu/p\eta)$ appears in (\ref{nkappa}) due to the conformal time scale invariance of the EPP metric (\ref{PP}). \label{foot1}}.
From these observations one can draw two conclusions. First, the limits $\eta_{1,2} \to 0$ and $H\to 0$ do not commute. Here $H$ is the Hubble constant, which is set to one in this paper. All the secular growth is gained from the region where a mode's physical wavelength exceeds the Compton one, $p\eta < \mu$. That happens due to the space--time expansion when $H\neq 0$, as the mode functions start to behave as shown in eq. (\ref{hp1}). Summarizing, for the BD state in the EPP we basically have the following situation:
\begin{gather}
n_1(p\eta) \sim \left\{\begin{matrix}
0, \quad p\eta \gg \mu,\\
\lambda^2 \log \frac{\mu}{p\eta},\quad p\eta \ll \mu
\end{matrix}\right.\label{EPP20}
\end{gather}
and similarly for $\kappa_1(p\eta)$ and its complex conjugate.
Second, in the case of the exact BD state in the exact EPP one can take $\eta_0$ to past infinity. In fact, for $p\eta_0 > \mu$ the modes behave as in flat space and one returns to the situation discussed in the previous subsection. As a result, in (\ref{EPP}) and (\ref{nkappa}) we obtain the secular growth $\log\left(\mu/p\eta\right) \sim (t_1+t_2)/2 - \log\left(p/\mu\right)$ rather than the IR divergence $\log\left(\eta_0/\eta\right) \sim (t_1+t_2)/2 - t_0$.
This fact is crucial for the absence of the secular IR divergence in the case of the exact initial BD state in the exact EPP, which, in its own right, is important for the dS isometry invariance of the loop integrals in such a situation.
\subsection{Secular IR divergences in various situations in dS space} \label{IVB}
In the case of generic alpha--vacua in the EPP the modes behave as $f_p(\eta) \propto C_+ e^{ip\eta} + C_- e^{-ip\eta}$ when $p\eta \gg \mu$. Here $C_\pm \neq 0$ are complex constants whose values depend on the choice of the alpha--vacuum, and one has to plug this expression into Eq. (\ref{basic1}). It follows that the coefficients of proportionality of $\lambda^2 \, \left(\eta - \eta_0\right)$ are not zero, because the arguments of the delta--functions in the corresponding integrals analogous to (\ref{basic}) can now vanish. Thus there is an IR divergence, which is to be ascribed to the anomalous UV behaviour of the alpha--modes (for the BD state $C_-=0$, which corresponds to the normal UV behaviour, i.e. the same as in flat space).
It is probably worth stressing here that $\kappa^1_p(\eta)$ and its complex conjugate also possess the same secular IR divergence. This means that the system flows to a proper ground state, which is the BD state for $p\eta > \mu$, as one may guess from the proper UV behaviour of the corresponding modes.
In the CPP the situation is as follows.
Future infinity there corresponds to $\eta \equiv \sqrt{\eta_1\, \eta_2} \to +\infty$ and the BD modes behave as (\ref{hp1}) at past infinity of the CPP\footnote{Here we restrict our attention to the spatially homogeneous states, which are unstable under inhomogeneous perturbations in the CPP, unlike the case of EPP.}. Then, at the leading order, when $p\eta_0 \ll \mu$, the one loop correction to the Keldysh propagator has the same form as (\ref{EPP}), but in the expressions for $n_1(p\eta)$ and $\kappa_1(p\eta)$ in (\ref{nkappa}) one has $\log\left(\mu/p\eta_0\right)$ instead of $\log\left(\mu/p\eta\right)$, if $p\eta > \mu$. At the same time in the case when $p\eta < \mu$ and $\eta/\eta_0 \to \infty$ one obtains $\log\left(\eta/\eta_0\right)$ instead of $\log\left(\mu/p\eta\right)$.
Summarizing, for the BD initial state in the CPP, when $p\eta_0 \ll \mu$ and $\eta/\eta_0 \to \infty$ we obtain that:
\begin{gather}
n_1(p\eta) \sim \left\{\begin{matrix}
\lambda^2 \log \frac{\eta}{\eta_0}, \quad p\eta \ll \mu,\\
\lambda^2 \log \frac{\mu}{p\eta_0},\quad p\eta \gg \mu,
\end{matrix}\right.\label{CPP21}
\end{gather}
and similarly for $\kappa_1(p\eta)$ and its complex conjugate. Please note the essential difference between this situation and the one in the EPP (\ref{EPP20}). Namely, while in the EPP the evolution of $n(p\eta)$ and $\kappa(p\eta)$ starts after $\eta\sim \mu/p$, their evolution in the CPP starts right after the initial Cauchy surface $\eta_0$. That is due to the difference between the geometries of the EPP and the CPP.
The coefficients of proportionality in (\ref{CPP21}) are the same as in (\ref{nkappa}), if one considers the BD initial state at the initial Cauchy surface $\eta_0$. Similar secular IR divergences are also present for other alpha--states, but with different coefficients. They are expressed by similar integrals to those in (\ref{nkappa}), but with the corresponding mode functions\footnote{Note that for other alpha--states there also will be secular effects coming from the region $p\eta > \mu$. They are of the same origin as those effects mentioned in the first paragraph of this subsection.}.
In the presence of the IR divergence it is impossible to take $\eta_0$ to past infinity (e.g. $\eta_0 \to 0$ in the CPP), because otherwise even after a UV regularization the loop corrections remain infinite. But keeping $\eta_0$ finite violates the dS isometry, because there are generators of the group that move $\eta_0$. In particular, as a result of that, the propagators in the $x$--space representation are not functions of the scalar invariant.
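The violation of the isometry by a finite $\eta_0$ can be seen directly on the dilatation isometry of the metric (\ref{PP}),
\bqa
\eta \to \lambda\, \eta, \quad \vec{x} \to \lambda\, \vec{x},
\eqa
which leaves $ds^2$ invariant but does not act on the fixed initial surface $\eta_0$. Hence a contribution of the form $\log\left(\eta/\eta_0\right)$ shifts by $\log\lambda$ under this transformation and cannot be a function of the scalar invariant.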
In global dS space the situation is similar to the CPP, because global dS contains the EPP and the CPP simultaneously. To see the appearance of the secular IR divergence in global dS space one can represent its metric as follows:
\bqa
ds^2 = \frac{1}{\eta^2} \, \left[d\eta^2 - d\vec{x}^2\right], \quad {\rm where} \quad \eta \in \left(-\infty, \, +\infty\right).
\eqa
In such a case the Cauchy surfaces are not compact and the mode functions will be piecewise defined separately for CPP $\eta = e^{t_-} \in [0,+\infty)$ and for EPP $\eta = - e^{-t_+} \in (-\infty, 0]$. Such a situation was considered in \cite{AkhmedovGlobal}, \cite{Akhmedov:2013vka}. Then the IR divergence appears from the CPP part of the loop expressions.
In fact, the situation in global dS can be understood from the following perspective\footnote{We would like to thank A.~Polyakov for communicating this argument to us.}. As we have already recalled, the result of a loop calculation within the Schwinger--Keldysh technique depends only on the causal past of the arguments of the correlation function. As can be seen from Fig. \ref{11111}, this essentially means that the result of a loop calculation in global dS should be the same as in the CPP. If one chooses a contracting patch different from the one shown in this figure, one has to perform a dS isometry transformation, which shifts the patch.
\begin{figure}[ht!]
\centering
\includegraphics[scale=1]{PolArg.pdf}
\caption{The Penrose diagram of 2D dS space, showing that the loop calculation in global dS is similar to the one in the CPP.}\label{fig:PolArg}\label{11111}
\end{figure}
Another option is to consider compact spatial slicing of the global dS space--time:
\bqa
ds^2 = d\tau^2 - \cosh^2(\tau) \, d\Omega^2,
\eqa
where $d\Omega^2$ is the metric on the unit $(D-1)$--dimensional sphere.
To keep the discussion as simple as possible let us explore the 2D global dS space; here the calculations are quite easy to perform and are similar to those in the EPP and CPP. The mode expansion in this case is as follows:
\bqa
\phi\left(\tau, \varphi\right) = \sum_{k=-\infty}^{+\infty} \left[a_k \, f_k(\tau) \, e^{i k \varphi} + a^+_k \, f^*_k(\tau) \, e^{-i k \varphi}\right],
\eqa
where $\varphi$ is the angular coordinate on the spatial circle of the 2D global dS space.
The time dependent part of the modes satisfies the following equation:
\bqa
\left[\partial_{\tau}^2 + \tanh(\tau)\, \partial_{\tau} + \frac{k^2}{\cosh^2(\tau)} + m^2\right] \, f_k(\tau) = 0,
\eqa
which at past and future infinity, as $\tau \to \pm \infty$, becomes similar to the one in the EPP. Indeed, if one makes the change of variables $\tau = -\log \eta$, one approximately recovers (at the future and past infinity) Eq. \eqref{eoharm}. As a result, in this limit the modes behave as:
\bqa
f_k(\tau) \approx \tilde{f}\left(k\, e^{-\tau}\right) \approx \sqrt{k\, e^{-\tau}} \,\left[A_+ \, \left(k\, e^{-\tau}\right)^{-i\mu} + A_- \, \left(k\, e^{-\tau}\right)^{i\mu}\right].
\eqa
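The asymptotic behaviour above can be checked by integrating the mode equation numerically. The following is a minimal sketch, in which the parameter values ($m = 2$, $k = 1$, the integration window) are illustrative choices, and the integration is handled by scipy:

```python
# Numerical check of the late-time behaviour of the 2D global-dS mode equation
#   f'' + tanh(tau) f' + (k^2 / cosh^2(tau) + m^2) f = 0.
# At tau -> +infinity it reduces to f'' + f' + m^2 f = 0, whose solutions decay
# as e^{-tau/2} e^{+- i mu tau} with mu = sqrt(m^2 - 1/4), in agreement with
# the asymptotic form of the modes above. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 2.0, 1.0  # principal-series mass and comoving momentum (assumed values)

def rhs(tau, y):
    f, df = y
    return [df, -np.tanh(tau) * df - (k**2 / np.cosh(tau)**2 + m**2) * f]

# A real solution suffices to check the envelope, since the ODE has real
# coefficients; integrate from tau = 0 to late times.
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

def envelope(t0, t1):
    """Maximum of |f(tau)| e^{tau/2} on [t0, t1]; should be tau-independent."""
    taus = np.linspace(t0, t1, 2000)
    return np.max(np.abs(sol.sol(taus)[0]) * np.exp(taus / 2))

print(envelope(10, 15), envelope(15, 20))  # nearly equal at late times
```

The rescaled solution $f(\tau)\, e^{\tau/2}$ indeed oscillates with a constant amplitude at late times, matching the $\sqrt{k\, e^{-\tau}}$ envelope of the asymptotic modes.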
We are interested in the so called Euclidean modes, which obey the condition $f_k(-\tau) = f^*_k(\tau)$ and have the normal UV behaviour. These conditions restrict $A_\pm$, but we do not need the corresponding explicit form. These modes correspond to the BD waves in the EPP. Namely, tree--level two--point correlation functions for these two types of modes coincide with each other.
The leading one loop contribution to the Keldysh propagator in the limit when the initial time is taken to past infinity, $\tau_0 \to -\infty$, and the two arguments of the correlation function are taken to future infinity as $(\tau_1 + \tau_2)/2 = \tau\to +\infty$, is very similar to (\ref{EPP}) with:
\bqa
n_k^1(\tau) \propto \lambda^2 \, \sum_{q=-\infty}^{+\infty} \, \int_{\tau_0}^{\tau} d\tau_3 \, \cosh(\tau_3) \, \int_{\tau_0}^{\tau} d\tau_4 \, \cosh(\tau_4) \, f_k^*(\tau_3) \, f_k(\tau_4) \, f_q^*(\tau_3) \, f_q(\tau_4) \, f_{k+q}^*(\tau_3) \, f_{k+q}(\tau_4).
\eqa
Similar expressions hold for $\kappa^1(\tau)$ and its complex conjugate.
The leading contributions to the last expression come from the regions of integration where $\tau_{3,4}\to \tau$, $\tau_{3,4}\to \tau_0$ and $q\gg k$. By changing the variables as
\bqa
u = e^{-(\tau_3 + \tau_4)/2}, \quad l = q \, e^{-(\tau_3 + \tau_4)/2} \quad {\rm and} \quad v = e^{-(\tau_3 - \tau_4)},
\eqa
and replacing the summation over $q$ with an integral, we get the following expression:
\bqa
n^1_k(\tau) \propto \lambda^2 \, \left(\tau - \tau_0 \right) \, \iint \limits_0^{+\infty} \frac{dv}{v} \, dl \, \left[\left|A_+\right|^2 v^{i\mu} + \left|A_-\right|^2 v^{-i\mu}\right] \times \nonumber \\ \times \left[A_+ \, \left(\frac{l}{\sqrt{v}}\right)^{i\mu} + A_- \, \left(\frac{l}{\sqrt{v}}\right)^{-i\mu}\right]^2 \, \left[A_+ \, \left(l\sqrt{v}\right)^{i\mu} + A_- \, \left(l \sqrt{v}\right)^{-i\mu}\right]^2.
\eqa
Here, while the contribution proportional to $\tau$ comes from future infinity, i.e. from that part of global dS which is similar to the EPP, the contribution proportional to $\tau_0$ comes from past infinity, i.e. from that part of global dS which is similar to the CPP.
\section{dS isometry and the relation between the secular growth of the third and the second kind}
As we have mentioned above, in the massive scalar quantum field theory in the EPP there is a dS invariant state, for which the isometry is respected not only at tree--level, but also in all loops \cite{Polyakov:2012uc} (see also \cite{Akhmedov:2013vka}). Such a state is the analogue of the Wightman vacuum in Minkowski space \cite{Bros,bem}. In fact, in the invariant situation nothing depends on the choice of a point in dS space--time. Even in such an invariant situation there is secular growth of the second and third kind, although there is no secular divergence. Here we discuss the properties of the secular growth in the $x$--space representation of the correlation functions.
The position space representation of the tree--level BD Wightman function is as follows:
\bqa\label{W0}
W_0\left(Z_{12}\right) = \left\langle \phi\left(\eta_1, \vec{x}_1\right) \, \phi\left(\eta_2, \vec{x}_2 \right)\right\rangle = \left(\eta_1 \, \eta_2\right)^{\frac{D-1}{2}} \, \int d^{D-1}\vec{p} \, e^{i \, \vec{p} \, \left(\vec{x}_1 - \vec{x}_2\right)} \, h(p\eta_1) \, h^*(p\eta_2) \propto \nonumber \\ \propto \phantom{|}_2F_1\left(\frac{D-1}{2} + i \mu, \frac{D-1}{2} - i \mu; \frac{D}{2}; \frac{1+Z_{12}}{2}\right),
\eqa
where $\phantom{|}_2F_1\left[a, b; c; x\right]$ is the hypergeometric function and $Z_{12} = 1 + \frac{\left(\eta_1 - \eta_2\right)^2 - \left|\vec{x}_1 - \vec{x}_2\right|^2}{2 \, \eta_1 \, \eta_2}$ is the scalar invariant, also called the hyperbolic distance between the points 1 and 2. The fact that the correlation functions depend on the scalar invariant reflects the dS invariance of the state under consideration.
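The analytic structure described below (the light-cone singularity at $Z_{12}=1$ and the large-$Z$ power-law fall-off of eq. (\ref{2121})) can be checked numerically. The sketch below evaluates the hypergeometric function with mpmath for the illustrative values $D = 3$, $\mu = 1$, ignoring the overall normalisation of $W_0$:

```python
# Sketch: verify two properties of W0(Z) ~ 2F1((D-1)/2 + i mu, (D-1)/2 - i mu;
# D/2; (1+Z)/2), up to overall normalisation, for illustrative D = 3, mu = 1:
#   (i)  it is singular on the light cone Z -> 1;
#   (ii) as Z -> -infinity it falls off as |Z|^{-(D-1)/2} (times oscillations
#        in log|Z| coming from the Z^{+-i mu} factors).
import mpmath as mp

D, mu = 3, 1.0
a, b, c = (D - 1) / 2 + 1j * mu, (D - 1) / 2 - 1j * mu, D / 2

def W0(Z):
    return mp.hyp2f1(a, b, c, (1 + Z) / 2)

# (i) growth near the light cone: here 2F1 ~ (1 - x)^{c-a-b} = (1 - x)^{-1/2}
print(abs(W0(0.0)), abs(W0(0.9999)))

# (ii) power-law tail: |W0(Z)| * |Z|^{(D-1)/2} stays bounded as Z -> -infinity
for Z in [-1e2, -1e4, -1e6]:
    print(Z, abs(W0(Z)) * abs(Z) ** ((D - 1) / 2))
```

Because of the oscillating $Z^{\pm i\mu}$ factors, individual sample points of the rescaled tail can dip, but the values remain bounded, consistent with eq. (\ref{2121}).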
The hypergeometric function in (\ref{W0}) is singular on the light--cone, i.e. when $Z_{12}=1$, and is analytic in the complex $Z_{12}$--plane with the cut going from $Z_{12} = 1$ to infinity along the positive real axis. These values of $Z_{12}$ correspond to time--like separated pairs of points. To define the correlation function in the vicinity of the cut one has to take the proper boundary value; this is usually encoded in an $\epsilon$ prescription as follows: $W_0\left(Z_{12}\right) \to W_0\left[Z_{12} + i \, \epsilon \, {\rm sign} \left(\eta_2 - \eta_1\right)\right]$.
Given the Wightman function one can construct the Keldysh propagator $D^K\left(Z_{12}\right)$ by taking its real part, $D^K\left(Z_{12}\right) = {\rm Re}\, W\left[Z_{12} + i \, \epsilon \, {\rm sign} \left(\eta_2 - \eta_1\right)\right]$, and the commutator by taking its imaginary part, $C\left(Z_{12}\right) = {\rm Im}\, W\left[Z_{12} + i \, \epsilon \, {\rm sign} \left(\eta_2 - \eta_1\right)\right]$. These relations are true even beyond tree--level. That is why we drop the index $0$ in the notation for the Keldysh propagator $D^K$ and the commutator $C$.
Two comments are in order here. First, the reason why the dS isometry is respected in the loops for the BD state in the EPP lies in the above analytic properties of the propagator (\ref{W0}) as a function of $Z_{12}$ and in the specific behaviour of the EPP geometry at past infinity \cite{Polyakov:2012uc} (see also \cite{Akhmedov:2013vka}). These facts are deeply related to the absence of the IR divergences in the loops for the BD ground/initial state in the EPP.
Second, frequently one defines the theory in dS space--time via analytical continuation from the sphere in the complex $Z$--plane (see e.g. \cite{MarolfMorrison}--\cite{Hollands:2010pr}). But such an approach does not allow one to address non--vacuum and non--stationary situations in dS cosmology, because in the latter case propagators are not functions of $Z$ anymore. In particular, such an approach does not allow one to address the issue of the IR divergences in the CPP and global dS, which are discussed in the previous section.
The limit of interest for us in this note, $p\sqrt{\eta_1\, \eta_2} \to 0$ and $\eta_1/\eta_2 ={\rm const}$, corresponds to the case $Z_{12} \to - \infty$, which is that of the large spatial separation between the points 1 and 2. In such a limit:
\bqa\label{2121}
W_0\left(Z_{12}\right) \approx B_+ Z_{12}^{- \frac{D-1}{2} + i \mu} + B_- Z_{12}^{- \frac{D-1}{2} - i \mu},
\eqa
where $B_{\pm}$ are some complex constants. They can be related, via the inverse Fourier transform of (\ref{C0}), (\ref{DK0}), to the behaviour of the modes in eq. (\ref{hp1}). Otherwise one can obtain eq. (\ref{2121}) from the asymptotics of the hypergeometric function for large values of its argument.
The one loop correction to the Wightman function in $\phi^3$ theory in the $x$--space representation was calculated in \cite{PolyakovKrotov}. The sum of the tree--level and one loop contributions is as follows:
\bqa\label{W01}
W_{0+1}\left(Z_{12}\right) \approx \left[1 + \lambda^2 \, K \, \log\left(-Z_{12}\right)\right]\, W_0\left(Z_{12}\right), \quad {\rm as} \quad Z_{12}\to - \infty,
\eqa
where $K$ is a constant related to the factors multiplying $\lambda^2 \, \log\left(\mu/p\eta\right)$ in (\ref{nkappa}). Consequently $\log\left(-Z_{12}\right)$ in (\ref{W01}) follows from $\log\left(\mu/p\sqrt{\eta_1 \, \eta_2}\right)$ after the inverse Fourier transformation along the spatial directions. In fact, making the $\epsilon$ shift, $W_{0+1}\left(Z_{12}\right) \to W_{0+1}\left[Z_{12} + i \, \epsilon \, {\rm sign} \left(\eta_2 - \eta_1\right)\right]$, and then taking the real part of the obtained expression one gets the Fourier transformation of (\ref{EPP}) and (\ref{nkappa}) with $h(p\eta_{1,2})$ approximated by (\ref{hp1}) \cite{PolyakovKrotov}. Of course these relations are valid approximately in the limit $p \, \sqrt{\eta_1 \, \eta_2} \to 0$ and $\eta_1/\eta_2 = {\rm const}$.
Taking the imaginary part of $W_{0+1}\left[Z_{12} + i \, \epsilon \,{\rm sign} \left(\eta_2 - \eta_1\right)\right]$ gives that
\bqa\label{C01}
C_{0+1}\left(Z_{12}\right) \approx \left[1 + \lambda^2 \, K \, \log\left|Z_{12}\right|\right]\, C_0\left(Z_{12}\right) + 2 \, \pi \, \lambda^2 \, K \, \theta\left(Z_{12} - 1\right) \, D^K_0 \left(Z_{12}\right).
\eqa
Here $\theta$ is the Heaviside step function. It appears from the imaginary part of the logarithm and should be present here due to the causality properties of the commutator $C$ (see e.g. \cite{Kamenev}). To write it in this form we recall that $\left|Z_{12}\right| \to \infty$.
Eq. (\ref{C01}) exhibits an interesting phenomenon. For space--like separated pairs ($Z_{12} < 1$) the commutator vanishes $C_0\left(Z_{12}\right) = 0$, because for these values of $Z_{12}$ the Wightman function $W_0(Z_{12})$ is real. As a result, there is no secular growth in the retarded and advanced propagators in the limit $p \, \sqrt{\eta_1\eta_2} \to 0$ and $\eta_1/\eta_2 = {\rm const}$, i.e. when $Z_{12} \to - \infty$. Thus, only the Keldysh propagator receives secular IR contributions in the limit that we have considered above, which is in agreement with the observations of \cite{Akhmedov:2013vka} and, more generally, of \cite{Kamenev}.
However, for large time--like separations, $Z_{12} \to + \infty$, we have that $C_0\left(Z_{12}\right) \neq 0$ and there is a secular growth in $C_{0+1}\left(Z_{12}\right)$, as follows from (\ref{C01}). This is in agreement with the calculation of the secular loop corrections of \cite{Leblond}. In the latter paper it was found that in $\phi^3$ theory in the limit $\eta_{1,2}\to 0$ and $\eta_1/\eta_2 \to \infty$ all three propagators (Keldysh, $D^K$, retarded, $D^R$, and advanced, $D^A$) receive the following one--loop contributions:
\bqa\label{D1}
D_1^{K,R,A}\left(p\, |\eta_1,\eta_2\right) \propto \lambda^2 \, \log\left(\frac{\eta_1}{\eta_2}\right) \, D_0^{K,R,A}\left(p\, |\eta_1,\eta_2\right) \, \left[\left|A_+\right|^2 - \left|A_-\right|^2\right] \, \int^{\infty}_1 \frac{dv}{v} \,\int_0^{+\infty} dl \, l^{D-2} \times \nonumber \\ \times \left[v^{i\mu} - v^{-i\mu}\right] \, \left[h\left(l/\sqrt{v}\right) \, h^*\left(l\sqrt{v}\right)\right]^2.
\eqa
This is the secular effect of the second kind, because $\log(\eta_1/\eta_2) = t_2 - t_1$.
Note that $\lambda^2 \, \log\left(\eta_1/\eta_2\right) \approx \lambda^2 \log Z_{12}$ for the time--like $Z_{12}$, when $\eta_1/\eta_2 \to \infty$ and $\vec{x}_1 \approx \vec{x}_2$. Thus, in the dS invariant situation both secular effects of the second and third kinds are related to each other via the isometry group and the analytical continuation in $Z_{12}$.
Below we show that in global dS space and in the CPP the situation with the two secular effects under discussion becomes different. In particular, the problems of resummations of the secular effects and of the secular divergence become physically distinct.
\begin{figure}[t]
\begin{center} \includegraphics[scale=0.8]{3.pdf}
\caption{In the Schwinger--Keldysh technique there are several diagrams, of the type that is shown here, which contribute to the two loop correction (with bubble inside bubble) to any two--point function.}\label{2}
\end{center}
\end{figure}
\section{Leading vs. subleading higher loop secular corrections}
To perform the resummation of the leading loop secular effects one has to solve the system of Dyson--Schwinger equations. This system is imposed on the two--point functions and on the vertices. As we have mentioned in the Introduction, for low enough masses higher--point functions, i.e. vertices, start to grow when all their arguments are taken to future infinity \cite{Akhmedov:2017ooy}. In such a situation it is not yet clear how to perform the resummation. Hence, below we restrict ourselves to high enough masses. Our equations are valid for $m>(D-1)/2$. In such a case, if one takes into account only the leading corrections in powers of $\lambda$ and logarithms, one can set the vertices to their tree--level values inside the system of Dyson--Schwinger equations.
In this section we show that for secular effects of the second and third kind only the bubble diagrams of the type shown in Fig. \ref{3} provide the leading contributions in powers of $\lambda^2 \log Z$. At the same time secular effects receive subleading corrections from the diagrams depicted on the Fig. \ref{2}. The latter are suppressed by higher powers of $\lambda$.
On the other hand, in the case of the secular divergence both the diagrams in Fig. \ref{2} and Fig. \ref{3} provide corrections of the same order. As a result, while the resummation of the secular effects is always a linear problem in powers of the exact Keldysh propagator, the resummation of the secular divergences is necessarily non--linear. The latter problem has a much richer zoo of solutions \cite{Akhmedov:2017ooy}.
\subsection{Exact BD state in the EPP}
Let us start with the corrections of the type shown in Fig. \ref{2}. In such a case, to obtain the leading IR corrections, one of the propagators in the loops should be represented as in (\ref{EPP}) and (\ref{nkappa}). The other propagators should have the tree--level form. This means that instead of $h(q\eta_3) \, h^*(q\eta_4)$ and $h(q\eta_3) \, h(q\eta_4)$, as in (\ref{C0}) and (\ref{DK0}), the one loop corrected propagator will contain such contributions as $\lambda^2 \log\left(p\sqrt{\eta_3\eta_4}/\mu\right) \, h(q\eta_3) \, h^*(q\eta_4)$
and $\lambda^2 \log\left(p\sqrt{\eta_3\eta_4}/\mu\right) \, h(q\eta_3) \, h(q\eta_4)$ in the case of third type of secular growth or $\lambda^2 \log\left(\eta_3/\eta_4\right) \, h(q\eta_3) \, h^*(q\eta_4)$ and $\lambda^2 \log\left(\eta_3/\eta_4\right) \, h(q\eta_3) \, h(q\eta_4)$, in the case of the second type of secular growth.
Consider, first, the limit $p\sqrt{\eta_1\, \eta_2} \to 0$ and $\eta_1/\eta_2 = {\rm const}$, i.e. the third type of secular effect. In this case the second loop of Fig. \ref{2} will contribute to, e.g., $n(p\eta)$ corrections of the following form:
\bqa\label{n2}
n_2(p\eta) \propto \lambda^4 \, \int^{\mu/p}_\eta \frac{du}{u} \, \iint_0^\infty \frac{dv}{v} \, dl \, l^{D-2} \, \log\left(\frac{\mu}{l}\right) \, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right]\, \left[h\left(l/\sqrt{v}\right)\, h^*\left(l\sqrt{v}\right)\right]^2 \times \nn \\ \times \iint_0^\infty \frac{dv'}{v'} \, dl' \, \left(l'\right)^{D-2} \, \left[\left|A_+\right|^2 \, \left(v'\right)^{i\mu} + \left|A_-\right|^2 \, \left(v'\right)^{-i\mu}\right]\, \left[ h\left(l'/\sqrt{v'}\right)\, h^*\left(l'\sqrt{v'}\right)\right]^2 + \dots.
\eqa
The index 2 here designates that we are discussing second loop corrections; $u$, $l$, $v$ and their primed versions are defined in footnote (\ref{foot1}) above. The variables $v', l'$ correspond to the internal loop, while $v, l$ correspond to the big loop in Fig. \ref{2}, and $\log\left(\frac{\mu}{l}\right)$ under the $l$ integral appears from the one loop corrections (\ref{nkappa}). This $dl$ integral is convergent, which is essential for the further discussion.
Ellipses in (\ref{n2}) stand for similar contributions to $n_2(p\eta)$ coming from $\kappa_1(p\eta)$ and its complex conjugate. Their expressions are similar to (\ref{n2}). Moreover, expressions similar to (\ref{n2}) will also appear for $\kappa_2(p\eta)$ and its complex conjugate.
We do not need to know the exact expression for $n_2(p\eta)$ to make the following important conclusion.
The form of Eq. (\ref{n2}) shows that such diagrams as shown in the Fig. \ref{2} (containing loops inside internal propagators) provide contributions of the form $\lambda^4 \log\left(p\, \sqrt{\eta_1 \, \eta_2}\right)$ in the limit under consideration.
Let us now continue with the consideration of the growth in the limit $\eta_1/\eta_2 \to \infty$, i.e. the second type of secular effect. In such a case one of the internal propagators should have the form (\ref{D1}). As we have discussed at the beginning of this section the leading correction coming from the diagram of the Fig. \ref{2} to all three propagators (R,A and K) will contain contributions as follows:
\bqa\label{D2}
D_2^{K,R,A}\left(p\, |\eta_1,\eta_2\right) \propto \lambda^4 \, \left[D_0^{K,R,A}\left(p\, |\eta_1,\eta_2\right)\right]^2 \, \left[\left|A_+\right|^2 - \left|A_-\right|^2\right]^2 \, \int_{\eta_2}^{\eta_1} \frac{d\eta}{\eta} \times \nn \\ \times \int\limits^{+\infty}_1 \frac{dv}{v} \, \int \limits_0^{+\infty} dl \, l^{D-2} \, \log\left(v\right) \, \left[v^{i\mu} - v^{-i\mu}\right] \,\left[h\left(l/\sqrt{v}\right)\, h^*\left(l\sqrt{v}\right)\right]^2 \times \nn \\ \times \, \int\limits^{+\infty}_1 \frac{dv'}{v'} \, \int\limits_0^{+\infty} dl' \, \left(l'\right)^{D-2} \, \left[\left(v'\right)^{i\mu} - \left(v'\right)^{-i\mu}\right] \,\left[h\left(l'/\sqrt{v'}\right)\, h^*\left(l'\sqrt{v'}\right)\right]^2 + \dots.
\eqa
The $\log(v)$ under the $v$ integral here appears from the first loop correction (\ref{D1}).
Thus, in the limit $\eta_1/\eta_2 \to \infty$ such diagrams as in Fig. \ref{2} contribute $\lambda^4 \log\left(\eta_1/\eta_2\right)$ corrections.
\begin{figure}[t]
\begin{center} \includegraphics[scale=1]{2.pdf}
\caption{In the Schwinger--Keldysh technique there are several diagrams, of the type that is shown here, which contribute multiple bubble type two loop correction to any two--point function.}\label{3}
\end{center}
\end{figure}
At the same time it is a quite straightforward exercise to check that the diagrams from Fig. \ref{3} lead to contributions of the form $\left[\lambda^2 \log\left(p\, \sqrt{\eta_1 \, \eta_2}\right)\right]^2$ and $\left[\lambda^2 \log\left(\eta_1/\eta_2\right)\right]^2$ in the case of the third and the second kind of secular effects, correspondingly. Thus, if one considers the exact BD state in the exact EPP, the diagrams from Fig. \ref{2} contribute subleading corrections in comparison with those from Fig. \ref{3}, if $\lambda$ is very small and $\eta_{1,2} \to 0$. This is a very important observation for the resummation procedure.
\subsection{Higher loops in the case of the secular divergence in the CPP and in global dS}
Let us continue now with the discussion of secular effects in the CPP. The secular growth of the second kind in the CPP has the same properties as in the EPP. The calculations are practically the same as in the EPP with the same conclusions that diagrams from Fig. \ref{2} provide subleading corrections in comparison with diagrams from Fig. \ref{3}.
In the case of the secular divergence, which is present instead of the secular growth of the third kind, the situation is now quite different. Because the contribution from the internal loop of Fig. \ref{2} comes from the past of the external time arguments, $\eta_{1,2}$, the integration over times in this loop is bounded as $\eta_3,\eta_4<u$. As a result, the contribution in question has the following form:
\bqa
n_2(p\eta) \propto \lambda^4 \, \int \limits^{\min\left(\eta,\frac{\mu}{p}\right)}_{\eta_0} \frac{du}{u} \, \log\left(\frac{u}{\eta_0}\right) \times \nn \\ \times \left\{\iint_0^\infty \frac{dv}{v} \, dl \, l^{D-2} \, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right]\, \left[h\left(l/\sqrt{v}\right)\, h^*\left(l\sqrt{v}\right)\right]^2\right\}^2 + \dots.
\eqa
Similar expressions will also appear for $\kappa_2(p\eta)$ and its complex conjugate.
Thus, the second loop from the Fig. \ref{2} contributes as follows:
\begin{gather}
n_2(p\eta) \sim \left\{\begin{matrix}
\left[\lambda^2 \log \frac{\eta}{\eta_0}\right]^2, \quad p\eta \ll \mu,\\
\left[\lambda^2 \log \frac{\mu}{p\eta_0}\right]^2, \quad p\eta \gg \mu.
\end{matrix}\right.
\end{gather}
This means that for the case of the secular divergence the diagrams from Fig. \ref{2} contribute at the same order as those from Fig. \ref{3}. This has crucial consequences for the resummation. In particular, now the problem of the resummation of the leading IR secular divergences becomes of the kinetic type: one has to derive a dS space analog of Boltzmann's kinetic equation to resum the leading IR divergences \cite{Akhmedov:2013vka}. Meanwhile in the CPP the situation with the resummation of the secular growth of the second kind remains of the same type as in the EPP.
Finally, let us stress that the situation in global dS is again similar to the one in the CPP for the same reason as was explained in the section \ref{IVB}.
\subsection{Perturbations over the BD state in the EPP}
Let us see now what happens if one perturbs the BD state by a non--invariant initial density. In such a case the initial form of the Keldysh propagator, instead of being as in eq. (\ref{DK0}), will be represented by (\ref{EPP}) with $\kappa = 0$ (to have the proper Hadamard behaviour) and some initial distribution $n^0_p$. The retarded and advanced propagators do not depend on the state at tree--level.
Please recall at this point that $n^0_p$ is the comoving density. Hence, one cannot just put $n^0_p$ at past infinity of the EPP, because then the initial physical density will be infinite. To overcome this problem one has to consider an initial Cauchy surface $+\infty > \eta_0 > 0$ and impose $n^0_p$ there. Let us stress that if one keeps $n_p^0$ finite, then $\eta_0$ cannot be taken to past infinity. In this sense the situation in the EPP now becomes very similar to the one in the CPP and global dS \cite{Akhmedov:2013vka}. Furthermore, the $x$--space representation of the tree--level Keldysh propagator will not be a function of $Z_{12}$ anymore. Hence, the dS isometry will be broken by the initial condition.
However, despite the presence of the IR cutoff $\eta_0$, the situation for the secular effect of the second kind does not change substantially.
Namely, from the diagram of Fig. \ref{2} it still has the form (\ref{D1}) and (\ref{D2}), with different coefficients multiplying $\lambda^2 \, \log\left(\eta_1/\eta_2\right)$ and $\lambda^4 \, \log\left(\eta_1/\eta_2\right)$. That is, for the secular effect of the second kind the diagrams from Fig. \ref{2} still provide subleading corrections in comparison with those shown in Fig. \ref{3}. The situation in this case is similar to the one in the CPP.
Furthermore, the calculation of the one loop secular contribution of the third kind to the propagators (in the limit $p\sqrt{\eta_1 \, \eta_2} \to 0$ and $\eta_1/\eta_2 = {\rm const}$), which follows from the diagrams of the form shown in Fig. \ref{1}, is also not much different from the dS invariant case. Namely, the retarded and advanced propagators again do not receive growing corrections in such a limit. At the same time the Keldysh propagator receives a correction of the form (\ref{EPP}) with (see \cite{Akhmedov:2013vka} for the details):
\begin{gather}\label{npn0}
n^1_p(\eta) \propto \lambda^2 \, \int \limits_\eta^{{\rm min}(\eta_0, \mu/p)} \frac{du}{u} \, \int_0^\infty \frac{dv}{v} \, \int dl \, l^{D-2}\, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right] \,h^2\left(l/\sqrt{v}\right)\, \left[h^*\left(l\sqrt{v}\right)\right]^2 + \dots.
\end{gather}
This expression is obtained under the assumption that $n^0_p \gg n^0_q$ for $q \gg p$, and we extend the limits of integration over $l$ and $v$ because these integrals are rapidly converging. The ellipses in (\ref{npn0}) stand for other terms that also describe the change of the level population and vanish when $n_p^0 = 0$. Essentially the RHS of this expression is an analog of the collision integral in Boltzmann's kinetic equation \cite{Akhmedov:2013vka}. In the following it is sufficient to realise that $n_p^1(\eta) \propto \lambda^2 \log\left(\eta_0/\eta\right) \, I\left(n^0_p\right)$, where $I\left(n^0_p\right)$ is some kind of collision integral evaluated for the initial density $n_p^0$.
A similar contribution is obtained for $\kappa_p^1$ and its complex conjugate.
Thus, we obtain that
\begin{gather}\label{np1}
n^1_p(\eta) \propto \left\{\begin{matrix}
\lambda^2 \log \frac{\eta}{\eta_0}, \quad p \ll \frac{\mu}{\eta_0},\\
\lambda^2 \log \frac{p\eta}{\mu},\quad p \gg \frac{\mu}{\eta_0}
\end{matrix}\right.
\end{gather}
In the second loop from the diagram of the Fig. \ref{2} instead of (\ref{n2}) we obtain:
\bqa\label{n22}
n^2_p(\eta) \propto \lambda^4 \, \int \limits^{{\rm min}(\eta_0, \mu/p)}_\eta \frac{du}{u}\, \int_0^\infty \frac{dv}{v} \, \int_0^{\infty} dl \, l^{D-2} \, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right]\, h^2\left(l/\sqrt{v}\right)\, \left[h^*\left(l\sqrt{v}\right)\right]^2 \times \nn \\
\times\int\limits^{\min\left(\eta_0,\frac{\mu u}{l}\right)}_u \frac{du'}{u'}\int_0^\infty \frac{dv'}{v'} \, \int dl' \, \left(l'\right)^{D-2} \, \left[\left|A_+\right|^2 \, \left(v'\right)^{i\mu} + \left|A_-\right|^2 \, \left(v'\right)^{-i\mu}\right]\, h^2\left(l'/\sqrt{v'}\right)\, \left[h^*\left(l'\sqrt{v'}\right)\right]^2 + \dots.
\eqa
The $dl$ integral here can be separated into two regions: $l<\frac{\mu u}{\eta_0}$ and $l>\frac{\mu u}{\eta_0}$. The second region contributes an expression behaving similarly to eq. \eqref{n2} in the limit $u\to 0$. As was shown above, it does not give an additional power of the logarithm.
Then we have to estimate only the contribution coming from the region $l< \frac{\mu u}{\eta_0}$:
\bqa\label{n23}
n^2_p(\eta) \propto \lambda^4 \, \int \limits^{{\rm min}(\eta_0, \mu/p)}_\eta \frac{du}{u} \log\left(\frac{\eta_0}{u}\right)\, \int_0^\infty \frac{dv}{v} \, \int_0^{\frac{\mu u}{\eta_0}} dl \, l^{D-2} \, \left[\left|A_+\right|^2 \, v^{i\mu} + \left|A_-\right|^2 \, v^{-i\mu}\right]\, \left[h\left(l/\sqrt{v}\right) h^*\left(l\sqrt{v}\right)\right]^2 \nn
\eqa
where the upper limit of integration of the $l$ integral, $\mu u/ \eta_0$, appears because the contribution of the order $\log\frac{\eta}{\eta_0}$ follows only from this region of momenta, as can be seen from eq. (\ref{np1}). When $u \to 0$, the integral over $l$ in the last expression goes to zero. This indicates that the integral has a polynomial behaviour and does not provide a higher power of the logarithm.
Thus, it is worthwhile to remark that even if one perturbs the initial BD state in the EPP the resummation of the secular effects remains essentially the same linear problem as in the case of the exact BD in the exact EPP.
\subsection{Secular effects in the sandwich space}
To check whether resummation of the secular effect (or divergence) of the third kind is always a linear problem when there is only expansion (but there is no contraction) we continue with the consideration of the so called sandwich space--time proposed in e.g. \cite{Akhmedov:2017dih}:
\begin{equation}
\label{sandwi}
ds^2 =
\begin{cases}
\left(1+\frac{T^2}{\eta^2+\epsilon^2} \right)\left[d\eta^2-d\vec{x}^2 \right], & \text{} \eta \in (-\infty,0] \\
\frac{T^2}{\epsilon^2}\left[d\eta^2-d\vec{x}^2 \right], & \text{} \eta \in [0,+\infty),
\end{cases}
\quad {\rm where} \quad T^2 \gg \epsilon^2
\end{equation}
This metric describes an expansion between two flat Minkowski spaces at $\eta \ll - T$ and $\eta > - \epsilon$. The expansion stage is very similar to the EPP.
As is discussed in \cite{Akhmedov:2017dih} free modes in this space can be approximately represented as:
\begin{equation}\label{eq:17}
f_p(\eta) \approx
\begin{cases}
\frac{1}{\sqrt{\omega_{in}}}e^{i\omega_{in}\eta}, & \text{} \eta \ll -T \\
|\eta|^{(D-1)/2}\Bigl[A_p \, H^{(1)}_{i\mu}(p|\eta|) + B_p \, H^{(2)}_{i\mu}(p|\eta|) \Bigr], & \text{} -T \ll \eta \ll -\epsilon \\
\frac{1}{\sqrt{\omega_{out}}}\left(C_p \, e^{i\omega_{out}\eta} + D_p \, e^{-i\omega_{out}\eta} \right), & \text{} \eta \gg -\epsilon
\end{cases}
\end{equation}
where $\omega_{in}(p)=\sqrt{\vec{p}^2+m^2}$ and $\omega_{out}(p)=\sqrt{\vec{p}^2+m^2\frac{T^2}{\epsilon^2}}$. The complex coefficients $A_p$, $B_p$, $C_p$ and $D_p$ can be fixed from the gluing conditions at $\eta \sim T$ and $\eta \sim \epsilon$.
These modes can be separated into three classes:
\begin{itemize}
\item High energy quanta, for which $p|\eta| \gg \mu$ for all the expanding region $\eta \in [-T, -\epsilon]$. These modes do not feel any expansion and do not contribute to the secular growth of interest.
\item Intermediate energy quanta, for which $p\epsilon \ll \mu \ll pT$.
\item Low energy quanta, for which $p|\eta| \ll \mu$ for all the expanding region $\eta \in [-T, -\epsilon]$. These are the modes of the main interest for us.
\end{itemize}
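The three regimes above can be summarised by a toy classifier. In the following sketch the function name and the values of $T$, $\epsilon$ and $\mu$ are illustrative assumptions, not taken from \cite{Akhmedov:2017dih}:

```python
# Toy classification of the sandwich-space modes by comoving momentum p.
# "High": p|eta| >> mu throughout the expansion region eta in [-T, -eps];
# "low":  p|eta| << mu throughout that region;
# "intermediate": p*eps << mu << p*T. Defaults are illustrative.
def classify_mode(p, T=100.0, eps=0.1, mu=1.0):
    if p * eps >= mu:
        return "high"
    if p * T <= mu:
        return "low"
    return "intermediate"

for p in [1e-3, 1.0, 100.0]:
    print(p, classify_mode(p))
```

For the chosen values this prints "low", "intermediate" and "high" for the three momenta, respectively; the low-energy class is the one relevant for the secular effects discussed below.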
As shown in \cite{Akhmedov:2017dih} during the expansion stage the Keldysh propagator for the low energy and intermediate modes receives secular corrections in the limit $\eta_{1,2} > - \epsilon$ and $T/\epsilon \to \infty$. The corrections are as follows:
\begin{gather}
n^1_p \propto \left\{\begin{matrix}
\lambda^2 \log \frac{\epsilon}{T}, \quad {\rm low} \,\, {\rm energy} \,\, {\rm modes},\\
\lambda^2 \log \frac{p\epsilon}{\mu},\quad {\rm intermediate} \,\, {\rm modes},
\end{matrix}\right.
\end{gather}
and $n_p^1$ is of order zero for high energy modes. A similar situation appears for $\kappa_p^1$ and its complex conjugate. In \cite{Akhmedov:2017dih} the $\lambda \phi^4$ theory was considered in 2D, but a similar situation appears in the $\lambda \phi^3$ theory at one loop and for any $D$.
Hence, the situation in the sandwich space for the low energy modes is similar to the one in the CPP, and it is not hard to see that the diagrams from Fig. \ref{2} and \ref{3} contribute at the same order.
Interestingly enough, if one excludes either one of the flat space regions of the entire sandwich space--time and keeps the other, the situation with the IR loop corrections becomes similar to the EPP case. Namely, if one considers a space that describes eternal expansion that started from flat space, or nucleation of flat space from zero volume (an eternal EPP towards the past, with the expansion stopping at some moment in the future), then the diagrams from Fig. \ref{2} contribute subleading corrections in comparison with those from Fig. \ref{3}.
\section{Conclusions and acknowledgements}
In conclusion, one can respect the dS isometry at each loop level only for massive fields in the EPP with initial BD state. In such a case there are secular effects of the second and third kind and they are related to each other via isometry transformations and analytical continuation in the complex plane of the scalar invariant --- the hyperbolic distance.
Moreover, in the dS invariant situation the problem of the resummation of the leading secular contributions from all loops reduces to a linear integro--differential Dyson--Schwinger equation, because the diagrams in Fig. \ref{2} provide subleading contributions as compared to those in Fig. \ref{3}.
At the same time the dS isometry is necessarily broken by loop IR divergences for any initial state in the CPP and global dS. In such a case the resummation of the second type secular contributions still remains a linear integro--differential Dyson--Schwinger equation. However, the resummation of the leading IR divergences from all loops now amounts to a nonlinear integro--differential Dyson--Schwinger equation, because the diagrams in Fig. \ref{2} contribute at the same order as those in Fig. \ref{3}.
All the above results have been shown here for the case when $m > (D-1)/2$ in units of the dS radius. With some modifications they are also going to work when $\frac{\sqrt{3}}{4}(D-1) < m < (D-1)/2$, because in such a case there is no secular growth in the higher point correlation functions \cite{Akhmedov:2013vka}. In this case one can set the vertices to their tree--level values in the system of Dyson--Schwinger equations for the propagators and vertices. (Hence, one has to deal only with the equations for the two--point functions.) However, when $m \leq \frac{\sqrt{3}}{4}(D-1)$ one has to solve the combined system of Dyson--Schwinger equations for two--point and higher--point functions together \cite{Akhmedov:2013vka}. That question remains unsolved at the moment.
We would like to acknowledge valuable discussions with A.Polyakov, V.Gorbenko and J.Serreau. AET and UM would like to thank Hermann Nicolai and Stefan Theisen for the hospitality at the Albert Einstein Institute, Golm, where the work on this project was completed.
The work of ETA was supported by the state grant Goszadanie 3.9904.2017/BCh and by the grant from the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS” and by RFBR grant 18-01-00460 А.
\section{Introduction}
Networks of interacting elements, or graphs, describe a variety of systems, such as communication systems, biological and ecological systems, and social media systems \cite{holme2012temporal}. Examples include people's communications via email, the spread of infectious diseases, and communication between neurons in nervous systems. A great majority of these networks can be simplified to undirected graphs of binary units, where nodes take either the active or inactive state, and connections between two nodes do not have directionality. For example, the spiking activity of neural populations has been investigated using models of binary patterns \cite{schneidman2006weak}. In addition, sequences of correlated binary patterns can represent many artificial time-series data, such as text, movies, and music \cite{graves2013generating}.
Networks of pairwise connected elements are generally described as a Markov random field, or an undirected graphical model, with nodes and edges representing random variables and their relations, respectively. This model can be written using the exponential family distribution. When the random variables are binary, their probability distribution follows the celebrated Ising model, or the Boltzmann machine, a stochastic extension of the Hopfield memory network. However, it is unlikely that observed binary patterns are sampled from such a stationary distribution except under controlled conditions. Hence, the model was extended so that it can capture time-varying structures in the data \cite{Shimazaki2009,song2009keller,kolar2010estimating,long2011statistical,kass2011assessment,Shimazaki2012,shimazaki2013single,osogami2015seven,donner2017approximate}.
Of these time-dependent Ising models, methods based on the state-space framework offer the sequential Bayes algorithm that provides estimates of the time-varying network with Bayesian credible intervals, and the expectation-maximization (EM) algorithm that optimizes parameters in the model \cite{Shimazaki2009,Shimazaki2012,shimazaki2013single,donner2017approximate}. However, while estimating a time-dependent full Ising network is an important approach to elucidating binary activity dictated by all the pairwise interactions, a few dominant patterns may suffice to capture these time series because of their innate correlations. Capturing essential structures in time-varying data is an important step toward compression, prediction, and control of these patterns. In this study, we provide an online method to decompose the underlying structure, assuming different time-scales for hierarchical structures in the binary data generation.
Here we construct a low-dimensional representation of the Ising model that summarizes networks as a weighted combination of undirected graphs, assuming that the weights follow independent processes and that the underlying structure of the graphs changes slowly compared to the speed of weight changes. We provide a sequential Bayes algorithm to trace the dynamics of the weights, which we call network states, while updating the underlying graph structures online. The process of learning the underlying graphs is much slower than the network dynamics caused by weight changes, which allows separation of the two dynamics. The method is thus expected to extract temporal hierarchies in natural and artificial systems that generate binary patterns. This approach contrasts with previous offline unsupervised methods based on orthogonal or low-rank representations of the network dynamics that reduce the dimensionality of time-dependent Ising models \cite{hayashi2009dynamic,hirayama2016sparse}.
This paper is constructed as follows. In Methods, we explain (A) the stationary Ising model of binary patterns and construct (B) a low-dimensional model composed of multiple weighted graphs. We then introduce (C) the state-space model in which the weights of the graphs change. We derive (D) the sequential Bayes algorithm for estimating the weights as well as (E) an online update algorithm of the graphs. In Results, (A) we corroborate the algorithm using simulated data, and demonstrate that the method extracts time-varying features better than a traditional orthogonal decomposition method. Next, (B) the model selection is performed to select the number of features in the data. We confirm the selected model exhibits improved goodness-of-fit over the original time-dependent Ising model. Finally, (C) we show the applicability of the method in analyzing neural data. In Discussion, we conclude the paper with possible extensions of the method.
\section{Methods}
\subsection{The Ising model}
The time-series model of binary data we propose in this study is based on the Ising/spin-glass model in statistical physics. In order to specify notations, here we describe the Ising model. This model will be extended to the multi-graph and time-dependent model in the next sections. For $N$ binary variables, the Ising model is given as
\begin{equation}
p(\mathbf{x}\vert \mathbf{j})
= \exp\left[ \sum_{i=1}^N h_i x_i + \sum_{i<j} j_{ij}x_ix_j - \psi (\mathbf{j}) \right].
\label{eq:ising_original}
\end{equation}
Here the pattern $\mathbf{x}=(x_1,\ldots,x_N)^{\prime}$ is a $N$-tuple binary vector, where each entry indicates the active $x_i=1$ or inactive $x_i=0$ state of the $i^{th}$ node. Each node of the pattern can represent a pixel in image recognition or the activity of a neuron in neuroscience spiking data. $\psi(\mathbf{j})$ is a log normalization term. It ensures that the probabilities of all patterns sum up to 1. $h_i$ is the bias parameter for the $i^{th}$ node whereas $j_{ij}$ represents the interaction between the $i^{th}$ and the $j^{th}$ nodes. We express the parameters by a single column vector:
\begin{equation}
\mathbf{j} =(h_1,\ldots,h_N,j_{1,2},\ldots,j_{N-1,N})^\prime.
\end{equation}
We call this vector the graph of the Ising model. By adjusting the graph, we can embed correlated patterns such as binary images as the ones that frequently appear in the model. The Ising model is written in the canonical form as
\begin{equation}
p(\mathbf{x}\vert \mathbf{j})=\exp\left[\mathbf{j}^{\prime}\tilde{\mathbf{F}}(\mathbf{x})-\psi(\mathbf{j}) \right],
\end{equation}
where $\tilde{\mathbf{F}}(\mathbf{x})$ is the feature vector computed from the binary variables as
\begin{equation}
\tilde{\mathbf{F}}(\mathbf{x})=(x_1,\ldots,x_N,x_1x_2,\ldots,x_{N-1}x_{N})^\prime.
\end{equation}
The log normalization is then computed as
\begin{equation} \label{eq:freeenergy}
\psi(\mathbf{j}) = \log\sum_\mathbf{x}\exp[\mathbf{j}^{\prime}{\tilde{\mathbf{F}}(\mathbf{x})}].
\end{equation}
The computation of $\psi(\mathbf{j})$ requires considering all possible binary patterns. The first derivative of $\psi(\mathbf{j})$ with respect to $\mathbf{j}$ provides the expectation of the feature vector:
\begin{equation}
\tilde{\boldsymbol\eta} \equiv \frac{\partial\psi(\mathbf{j})}{\partial\mathbf{j}}=E_{\mathbf{X}\vert\mathbf{j}} \tilde{\mathbf{F}}(\mathbf{X}),
\label{eq:eta_tilde}
\end{equation}
where $\mathbf{X}$ is a sample of binary data, and $E_{\mathbf{X}\vert\mathbf{j}}$ represents the expectation of $\mathbf{X}$, using $p(\mathbf{x}|\mathbf{j})$. Furthermore, the second derivative yields the Fisher information matrix:
\begin{equation}
\tilde{\mathbf{G}} \equiv \frac{\partial^2\psi(\mathbf{j})}{\partial\mathbf{j}\partial\mathbf{j}^{\prime}} = E_{\mathbf{X}\vert\mathbf{j}}\tilde{\mathbf{F}}(\mathbf{X})\tilde{\mathbf{F}}(\mathbf{X})^{\prime} - E_{\mathbf{X}\vert\mathbf{j}}\tilde{\mathbf{F}}(\mathbf{X})E_{\mathbf{X}\vert\mathbf{j}}\tilde{\mathbf{F}}(\mathbf{X})^{\prime} .
\label{eq:G_tilde}
\end{equation}
We will utilize these relations frequently in the next sections.
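As a concrete numerical illustration of Eqs.~\ref{eq:freeenergy}--\ref{eq:G_tilde}, the log normalization $\psi(\mathbf{j})$, the expectation parameters $\tilde{\boldsymbol\eta}$, and the Fisher information $\tilde{\mathbf{G}}$ can be evaluated by brute-force enumeration of all $2^N$ patterns when $N$ is small. The following sketch is ours (function names are illustrative, not from the paper):

```python
import itertools
import numpy as np

def feature_tilde(x):
    """F~(x) = (x_1, ..., x_N, x_1 x_2, ..., x_{N-1} x_N)."""
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.concatenate([x, pairs])

def ising_moments(j, N):
    """Brute-force psi(j), eta~, and Fisher matrix G~ for the Ising model."""
    # Rows of F are F~(x) for every binary pattern x in {0,1}^N.
    F = np.array([feature_tilde(x) for x in itertools.product([0, 1], repeat=N)])
    logits = F @ j
    psi = np.log(np.exp(logits).sum())               # log normalization
    p = np.exp(logits - psi)                         # p(x | j)
    eta = p @ F                                      # E[F~(X)]
    G = (F * p[:, None]).T @ F - np.outer(eta, eta)  # second cumulant of F~(X)
    return psi, eta, G
```

For instance, with $N=2$ and $\mathbf{j}=\mathbf{0}$ all four patterns are equiprobable, so $\psi=\log 4$ and $\tilde{\boldsymbol\eta}=(0.5, 0.5, 0.25)^\prime$. The exponential cost in $N$ is why sampling-based or mean-field approximations are needed for larger systems.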
\subsection{The multi-graph Ising model}
Next, we construct an Ising model composed of multiple graphs, assuming that the data is sampled from a network that is a weighted combination of these graphs. Namely, we consider the following model:
\begin{equation}
p(\mathbf{x}\vert \boldsymbol\theta,\mathbf{J}) = \exp\left[ \sum_{k=1}^D \theta^k \left[ \sum_{i=1}^N h_i^k x_i + \sum_{j>i} j_{ij}^k x_i x_j\right] - \psi(\boldsymbol\theta,\mathbf{J})\right],
\end{equation}
where $D$ is the number of undirected graphs. We call this model a multi-graph Ising model. $h_i^k$ and $j_{ij}^k$ correspond to the graph parameters of the $k^{th}$ graph. $\theta^k$ is the weight applied to the $k^{th}$ graph, and we define $\boldsymbol\theta = (\theta^1,\theta^2,\hdots,\theta^D)^{\prime}$. If $h_i^k$ and $j_{ij}^k$ are known, the weights $\theta^k$ constitute the canonical parameters of the model. Given the graph parameters and $D < N + N(N - 1)/2$, this model can represent the data with a smaller dimensionality.
The multi-graph Ising model can be written in the canonical form as
\begin{equation}
p(\mathbf{x}\vert \boldsymbol\theta,\mathbf{J}) = \exp\left[\boldsymbol\theta^\prime \mathbf{F}(\mathbf{x}, \mathbf{J})-\psi(\boldsymbol\theta,\mathbf{J})\right].
\label{eq:low-diemnsional_ising_model}
\end{equation}
Here, the feature vector, $\mathbf{F}(\mathbf{x}, \mathbf{J})$, is given by:
\begin{equation}
\mathbf{F}(\mathbf{x}, \mathbf{J})=
\begin{bmatrix}
\sum\limits_{i=1}^N h^1_i x_i+\sum\limits_{j>i} j^1_{ij} x_i x_j \\
\sum\limits_{i=1}^N h^2_i x_i+\sum\limits_{j>i} j^2_{ij} x_i x_j \\
\vdots \\
\sum\limits_{i=1}^N h^D_i x_i+\sum\limits_{j>i} j^D_{ij} x_i x_j
\end{bmatrix},
\end{equation}
and $\mathbf{J}$ is the matrix containing all graphs:
\begin{align}
\mathbf{J}
&=
\begin{bmatrix}
\mathbf{j}^1 & \mathbf{j}^2 & \hdots & \mathbf{j}^D
\end{bmatrix}.
\end{align}
Using $\mathbf{J}$, the feature vector can be further expressed as
\begin{equation}
\mathbf{F}(\mathbf{x}, \mathbf{J}) = \mathbf{J}^{\prime}\tilde{\mathbf{F}}(\mathbf{x}).
\end{equation}
Note that $\tilde{\mathbf{F}}(\mathbf{x})$ is the feature vector of the original Ising model of Eq.~\ref{eq:ising_original}. The log normalization term, $\psi(\boldsymbol\theta,\mathbf{J})$, is then calculated in the same way as in the original Ising model: $\psi(\boldsymbol\theta,\mathbf{J}) = \log\sum_\mathbf{x}\exp[\boldsymbol\theta^\prime \mathbf{F}(\mathbf{x},\mathbf{J})]$.
Like in the original Ising model, the first and second-order derivatives of the log-normalization term respectively provide the expectation and the second-order cumulant of the new feature vector $\mathbf{F}(\mathbf{x},\mathbf{J})$. These can be expressed using the expectation parameters and the Fisher information of the original Ising model (Eqs.~\ref{eq:eta_tilde},\ref{eq:G_tilde}). The expectation of the feature vector is given as
\begin{align}
\boldsymbol\eta
&=E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\mathbf{F}(\mathbf{x},\mathbf{J})
=\mathbf{J}^\prime E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}} \tilde{\mathbf{F}}(\mathbf{x})
=\mathbf{J}^\prime \tilde{\boldsymbol\eta} \nonumber \\
&=
\begin{bmatrix}
\mathbf{j}_{1}^{\prime}\tilde{\boldsymbol\eta} &
\mathbf{j}_{2}^{\prime}\tilde{\boldsymbol\eta} &
\hdots &
\mathbf{j}_{D}^{\prime}\tilde{\boldsymbol\eta}
\end{bmatrix}^{\prime},
\label{eq:expectation_eta}
\end{align}
where $E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}$ represents the expectation of $\mathbf{X}$, using $p(\mathbf{x}\vert\boldsymbol\theta,\mathbf{J})$. Similarly, the second-order cumulant of the feature vector, i.e., the Fisher information matrix $\mathbf{G}$, is given as
\begin{align}
\mathbf{G}
&=E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\left[\mathbf{F}(\mathbf{x},\mathbf{J})\mathbf{F}(\mathbf{x},\mathbf{J})^{\prime} \right] - E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\mathbf{F}(\mathbf{x},\mathbf{J})
E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\mathbf{F}(\mathbf{x},\mathbf{J})^{\prime} \nonumber \\
&= E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\left[
\mathbf{J}^\prime \tilde{\mathbf{F}}(\mathbf{x})
\tilde{\mathbf{F}}(\mathbf{x})^\prime \mathbf{J} \right]
- E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\mathbf{J}^\prime \tilde{\mathbf{F}}(\mathbf{x})
E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\tilde{\mathbf{F}}(\mathbf{x})^\prime \mathbf{J} \nonumber \\
&=\mathbf{J}^\prime
\left\{
E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}[\tilde{\mathbf{F}}(\mathbf{x})
\tilde{\mathbf{F}}(\mathbf{x})^\prime ]
- E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\tilde{\mathbf{F}}(\mathbf{x})
E_{\mathbf{X}\vert\boldsymbol\theta,\mathbf{J}}\tilde{\mathbf{F}}(\mathbf{x})^\prime
\right\}
\mathbf{J} \nonumber \\
&= \mathbf{J}^\prime \tilde{\mathbf{G}} \mathbf{J}.
\label{eq:FisherInfoMatrix}
\end{align}
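The projected feature vector $\mathbf{F}(\mathbf{x},\mathbf{J})=\mathbf{J}^\prime\tilde{\mathbf{F}}(\mathbf{x})$ and the log normalization $\psi(\boldsymbol\theta,\mathbf{J})$ of the multi-graph model can be sketched directly from these definitions (an illustration for small $N$ with our own function names):

```python
import itertools
import numpy as np

def feature_tilde(x):
    """F~(x): feature vector of the original Ising model."""
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.concatenate([x, pairs])

def feature(x, J):
    """F(x, J) = J' F~(x): the D-dimensional multi-graph feature."""
    return J.T @ feature_tilde(x)

def log_normalizer(theta, J, N):
    """psi(theta, J) = log sum_x exp[theta' F(x, J)], by enumeration."""
    logits = [theta @ feature(x, J) for x in itertools.product([0, 1], repeat=N)]
    return np.log(np.sum(np.exp(logits)))
```

Here `J` is the $(N + N(N-1)/2) \times D$ matrix of graphs; with $D \ll N + N(N-1)/2$ the canonical parameter `theta` is low-dimensional even though the enumeration over patterns remains exponential in $N$.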
\subsection{The state-space multi-graph Ising model}
The model developed so far can only produce stationary data. In order to analyze the dynamics of binary patterns, it is essential to consider variations of model parameters in time. Figure \ref{fig:state model} shows the state-space modeling framework used for this purpose. We assume the following state equation to link the states of two consecutive time bins:
\begin{align}\label{eq:statemodel}
\boldsymbol\theta_t = \boldsymbol\theta_{t-1} + \boldsymbol{\xi}_t(\lambda).
\end{align}
At each time step, we add a zero mean Gaussian noise, $\boldsymbol\xi_t$, that has a covariance given by $\lambda^{-1}\mathbf{I}$, where $\mathbf{I}$ is a $D \times D$ identity matrix. $\lambda$ is time-independent and will be fixed. This state equation assumes that the dynamics of the network states are independent Gaussian processes.
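The random-walk state equation (Eq.~\ref{eq:statemodel}) can be simulated directly; a minimal sketch (function and argument names are ours):

```python
import numpy as np

def simulate_states(theta1, lam, T, rng=None):
    """Sample theta_t = theta_{t-1} + xi_t, with xi_t ~ N(0, lam^{-1} I)."""
    rng = np.random.default_rng(rng)
    D = len(theta1)
    theta = np.empty((T, D))
    theta[0] = theta1
    for t in range(1, T):
        # Each network state performs an independent Gaussian random walk.
        theta[t] = theta[t - 1] + rng.normal(0.0, lam ** -0.5, size=D)
    return theta
```

Larger `lam` (smaller noise variance $\lambda^{-1}$) yields smoother weight trajectories.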
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth,height=30mm]{ssj}
\caption{A schematic of the state-space multi-graph Ising model. The binary pattern $\mathbf{X}_t$ is generated by multiple graphs $\mathbf{J}_t$ with their weights given by $\boldsymbol{\theta}_t$ (network state). These parameters are time-dependent.
}
\label{fig:state model}
\end{figure}
To use this state equation, it is necessary to assume the state of the model at the first time bin. We assume its probabilistic distribution to be Gaussian, with mean $\boldsymbol\mu$ and covariance $\boldsymbol\Sigma$: $\boldsymbol\theta_1 \sim \mathcal{N}(\boldsymbol\mu,\boldsymbol\Sigma)$. We will fix these hyper-parameters ($\boldsymbol\mu$ and $\boldsymbol\Sigma$). However, unlike the algorithm developed in \cite{Shimazaki2009,Shimazaki2012,shimazaki2013single,donner2017approximate}, here we assume that one of the hyper-parameters, namely $\mathbf{J}$, is also time-dependent, and denote it as $\mathbf{J}_t$ for $t=1,\ldots,T$.
At each time step, we assume that the data is sampled from the current multi-graph Ising model with network state $\boldsymbol\theta_t$ and graphs $\mathbf{J}_t$:
\begin{align}\label{eq:observationmodel}
&p(\mathbf{x}\vert \boldsymbol\theta_t,\mathbf{J}_t) \nonumber \\
& = \exp\left[ \sum_{k=1}^D \theta_{t}^{k} \left[ \sum_{i=1}^N h_i^{k,t} x_i + \sum_{i<j} j_{ij}^{k,t} x_i x_j\right] - \psi(\boldsymbol\theta_t,\mathbf{J}_t)\right] \nonumber \\
& =\exp\left[\boldsymbol\theta_t^\prime \mathbf{F}(\mathbf{x},\mathbf{J}_t)-\psi(\boldsymbol\theta_t,\mathbf{J}_t)\right].
\end{align}
This observation model is the main subject of this study. Eqs.~\ref{eq:statemodel} and \ref{eq:observationmodel} constitute the state-space model of the dynamic pattern sequences. With this augmentation, related parameters of the model ($\boldsymbol\theta_t$, $\mathbf{J}_t$, $\boldsymbol\eta_t$ and $\mathbf{G}_t$) now become time-dependent. The log normalization $\psi(\boldsymbol\theta_t,\mathbf{J}_t)$ is also time-dependent as it is a function of $\boldsymbol\theta_t$ and $\mathbf{J}_t$. We assume that the graph parameter $\mathbf{J}_t$ follows much slower dynamics than the changes of weights $\boldsymbol\theta_t$, which makes their respective effect on the binary patterns distinguishable. In the next sections, we aim to estimate $\boldsymbol\theta_t$ and $\mathbf{J}_t$ from time-varying data. The algorithm that will be developed in these sections is summarized in Table~\ref{pseudo code}.
\begin{table}
\newcounter{cellnum}
\newcommand{\stepcounter{cellnum}\thecellnum.}{\stepcounter{cellnum}\thecellnum.}
\centering
\caption{Algorithm for the online estimation of the graphs}
\label{pseudo code}
\begin{tabular}{|l|}
\hline
\rowcolor[HTML]{EFEFEF}
\stepcounter{cellnum}\thecellnum.\phantom{=} Set $t=1$ and initialize $\mathbf{J}_1$. \\
\stepcounter{cellnum}\thecellnum.\phantom{=} Obtain the filter density of $\boldsymbol\theta_t$, using Eq.~\ref{eq:NR}.
\\
\rowcolor[HTML]{EFEFEF}
\stepcounter{cellnum}\thecellnum.\phantom{=} Compute the gradient of the Q-function
\\
\rowcolor[HTML]{EFEFEF}
\phantom{===}evaluated at $\mathbf{J}_t$, using Eq.~\ref{update_J}.
\\
\stepcounter{cellnum}\thecellnum.\phantom{=} Obtain $\mathbf{J}_{t+1}$, using Eq.~\ref{eq:J_update_qfunction}. \\
\rowcolor[HTML]{EFEFEF}
\stepcounter{cellnum}\thecellnum.\phantom{=} Advance in time: $t\rightarrow t+1$. \\
\stepcounter{cellnum}\thecellnum.\phantom{=} Repeat from step 2.
\\ \hline
\end{tabular}
\end{table}
\subsection{Sequential Bayes estimation}
In this subsection, we provide the sequential Bayes algorithm to estimate the time-dependent weights $\boldsymbol\theta_t$ online. Here, we denote the graphs up to time $t$ as $\mathbf{J}_{1:t}=\left\{\mathbf{J}_1,\mathbf{J}_2,\hdots,\mathbf{J}_t\right\}$. We use the same notation for the data up to time $t$ ($\mathbf{X}_{1:t}$).
The online estimation of the parameters is done by the recurrent construction of the filter density. The filter density using Bayes' theorem is given as
\begin{equation}\label{eq:posterior}
p(\boldsymbol{\theta}_t|\mathbf{X}_{1:t},\mathbf{J}_{1:t})
= \frac{p(\mathbf{X}_{t} | \boldsymbol{\theta}_t,\mathbf{J}_t) p(\boldsymbol{\theta}_t | \mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1}) } {p(\mathbf{X}_{t} | \mathbf{X}_{1:t-1}, \mathbf{J}_{1:t})}.
\end{equation}
To obtain the second term of the numerator, we used:
\begin{equation}
p(\boldsymbol{\theta}_t | \mathbf{X}_{1:t-1},\mathbf{J}_{1:t})=p(\boldsymbol{\theta}_t | \mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1}).
\label{eq:state-space_assumption}
\end{equation}
This holds because $\mathbf{J}_{t}$ is independent of $\boldsymbol{\theta}_t$ given that $\mathbf{X}_{t}$, which connects $\boldsymbol{\theta}_t$ and $\mathbf{J}_{t}$, is marginalized.
\begin{comment}
More concretely we have
\begin{align}
&p(\mathbf{x}_t, \boldsymbol{\theta}_t, \mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}) \nonumber\\
&= p(\mathbf{x}_t | \boldsymbol{\theta}_t, \mathbf{J}_t)
p(\boldsymbol{\theta}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1})
p(\mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}).
\label{eq:generative_model}
\end{align}
By marginalizing $\mathbf{x}_t$, we find
\begin{align}
&p(\boldsymbol{\theta}_t, \mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}) \nonumber\\
&\phantom{=}= p(\boldsymbol{\theta}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1})
p(\mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}).
\label{eq:parameter_independence}
\end{align}
Hence, we obtain
\begin{align}
p(\boldsymbol{\theta}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}, \mathbf{J}_t)
&= \frac{p(\boldsymbol{\theta}_t, \mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1})}{p(\mathbf{J}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1})} \nonumber\\
&=p(\boldsymbol{\theta}_t | \mathbf{x}_{1:t-1},\mathbf{J}_{1:t-1}).
\label{eq:one_step_J_drop}
\end{align}
\end{comment}
The estimate of the state at time $t$ ($\boldsymbol\theta_{t}$) given the data up to time $t-1$ is called one-step prediction. The one-step prediction density is derived from the Chapman-Kolmogorov equation
\cite{smith2003estimating}:
\begin{align}\label{eq:chapmankolmogorov}
&p(\boldsymbol{\theta}_t|\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1}) \nonumber\\
&= \int p(\boldsymbol{\theta}_t| \boldsymbol{\theta}_{t-1}) p(\boldsymbol{\theta}_{t-1}|\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1}) d\boldsymbol{\theta}_{t-1}.
\end{align}
Because the transition of the state is a Gaussian process, given that the filter density at $t-1$ is a Gaussian density with mean $\boldsymbol\theta_{t-1\vert t-1}$ and covariance $\mathbf{W}_{t-1\vert t-1}$, the one-step prediction density is also Gaussian with mean and covariance given by
\begin{align}
\boldsymbol\theta_{t\vert t-1} &= \boldsymbol\theta_{t-1\vert t-1} \\
\mathbf{W}_{t\vert t-1} &=\mathbf{W}_{t-1\vert t-1}+\lambda^{-1}\mathbf{I}.
\end{align}
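The one-step prediction is thus a trivial update under the random-walk state model; as a sketch (our own function name):

```python
import numpy as np

def predict_step(theta_filt, W_filt, lam):
    """One-step prediction: the mean is unchanged and the covariance
    inflates by the state-noise covariance lam^{-1} I."""
    D = len(theta_filt)
    return theta_filt.copy(), W_filt + np.eye(D) / lam
```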
The filter density is a combination of the multi-graph Ising model and the one-step prediction density. Here, we approximate the posterior by a Gaussian using the Laplace method. As such, the mean of the approximated Gaussian density is identified as the mode of the posterior. We denote this mode as $\boldsymbol\theta_{t\vert t}$. It maximizes the log posterior given by
\begin{align}\label{eq:log_posterior}
f(\boldsymbol\theta_t)
&\equiv
\log p(\boldsymbol{\theta}_t|\mathbf{X}_{1:t},\mathbf{J}_{1:t}) \nonumber\\
&= \boldsymbol{\theta}_t^\prime \mathbf{F}(\mathbf{X}_t,\mathbf{J}_t) - \psi(\boldsymbol{\theta}_t,\mathbf{J}_t) \nonumber \\
& - \frac{1}{2} (\boldsymbol{\theta}_t - \boldsymbol{\theta}_{t|t-1})' \mathbf{W}^{-1}_{t|t-1} (\boldsymbol{\theta}_t - \boldsymbol{\theta}_{t|t-1}) + \mathrm{const.}
\end{align}
Because of the log-concavity of both the observation and the state models, it is guaranteed that $f(\boldsymbol\theta_t)$ contains a unique mode that can be found by convex optimization techniques. Here we use the Newton-Raphson method:
\begin{equation}
\boldsymbol\theta^{\rm{new}}=\boldsymbol\theta^{\rm{old}}-[\boldsymbol\nabla \boldsymbol\nabla f(\boldsymbol\theta^{\rm{old}})]^{-1}\boldsymbol\nabla f(\boldsymbol\theta^{\rm{old}}).
\label{eq:NR}
\end{equation}
In this equation, the first and second derivatives are given by
\begin{align}
&\boldsymbol\nabla f(\boldsymbol\theta) =\mathbf{F}(\mathbf{X}_t,\mathbf{J}_t)-\boldsymbol\eta(\boldsymbol\theta)-\mathbf{W}_{t\vert t-1}^{-1}(\boldsymbol\theta-\boldsymbol\theta_{t\vert t-1}), \\
&\boldsymbol \nabla \boldsymbol \nabla f(\boldsymbol\theta) =-\mathbf{G}(\boldsymbol\theta)-\mathbf{W}_{t\vert t-1}^{-1},
\end{align}
where $\boldsymbol\eta$ is the expectation of the feature by $p(\mathbf{x} \vert\boldsymbol\theta,\mathbf{J}_t)$ computed as in Eq.~\ref{eq:expectation_eta}. $\mathbf{G}$ is the Fisher information matrix calculated by Eq.~\ref{eq:FisherInfoMatrix}, where the expectation is performed by $p(\mathbf{x} \vert \boldsymbol\theta,\mathbf{J}_t)$. Given the mode found by the above algorithm, the filtered covariance at each time step is approximated using the Hessian of the log posterior evaluated at $\boldsymbol\theta_{t\vert t}$:
\begin{equation}
\mathbf{W}_{t\vert t}=-[\left. \boldsymbol \nabla \boldsymbol \nabla f(\boldsymbol\theta) \right\vert_{\boldsymbol\theta_{t|t}} ]^{-1}=[\mathbf{G}(\boldsymbol\theta_{t|t})+\mathbf{W}_{t\vert t-1}^{-1}]^{-1}.
\end{equation}
This approximate Gaussian filter is then used to make a prediction density at $t+1$, which completes the recursion.
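A minimal sketch of this Laplace filter step, with $\boldsymbol\eta(\boldsymbol\theta)$ and $\mathbf{G}(\boldsymbol\theta)$ obtained by brute-force enumeration for small $N$ and a single observed pattern per time bin (function names and the fixed iteration count are our own choices):

```python
import itertools
import numpy as np

def feature_tilde(x):
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.concatenate([x, pairs])

def moments(theta, J, N):
    """eta(theta) = J' eta~ and G(theta) = J' G~ J, by enumeration."""
    Ft = np.array([feature_tilde(x) for x in itertools.product([0, 1], repeat=N)])
    F = Ft @ J                                  # rows are F(x, J)'
    logits = F @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()
    eta = p @ F
    G = (F * p[:, None]).T @ F - np.outer(eta, eta)
    return eta, G

def filter_step(x_t, J, theta_pred, W_pred, N, n_iter=20):
    """Laplace filter: Newton-Raphson ascent on the log posterior f(theta_t)."""
    Winv = np.linalg.inv(W_pred)
    F_obs = feature_tilde(x_t) @ J              # F(X_t, J_t)
    theta = theta_pred.copy()
    for _ in range(n_iter):
        eta, G = moments(theta, J, N)
        grad = F_obs - eta - Winv @ (theta - theta_pred)
        hess = -G - Winv
        theta = theta - np.linalg.solve(hess, grad)  # Newton-Raphson update
    _, G = moments(theta, J, N)
    W_filt = np.linalg.inv(G + Winv)            # filtered covariance
    return theta, W_filt
```

Because the log posterior is strictly concave, the Newton iterations converge to the unique mode $\boldsymbol\theta_{t\vert t}$; in practice one would stop on a gradient tolerance rather than a fixed iteration count.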
\begin{comment}
Smoothing
Once the approximate filter density is constructed for $t=1,\ldots,T$ , the backward smoothing algorithm is applied to obtain the smoothed posterior density of the state variable at time $t$\cite{kitagawa1987non,brown1998statistical},
\begin{equation}
p(\boldsymbol{\theta}_{t}|\boldsymbol{x}_{1:T},\mathbf{w})=p(\boldsymbol{\theta}_{t}|\boldsymbol{x}_{1:t},\mathbf{w})\int\frac{p(\boldsymbol{\theta}_{t+1}|\boldsymbol{x}_{1:T},\mathbf{w}) p(\boldsymbol{\theta}_{t+1}|\boldsymbol{\theta}_{t},\mathbf{w})}{p(\boldsymbol{\theta}_{t+1}|\boldsymbol{x}_{1:t},\mathbf{w})}d\boldsymbol{\theta}_{t+1}.
\end{equation}
for $t=T,\ldots,1$. In practice, the following fixed interval smoothing algorithm \cite{brown1998statistical} provides the smoothed MAP estimate $\boldsymbol\theta_{t\vert T}$ and smoothed covariance $\mathbf{W}_{t|T}$ of the posterior distribution
\begin{align}
\boldsymbol{\theta}_{t|T}
&= \boldsymbol{\theta}_{t|t} + \mathbf{A}_t(\boldsymbol{\theta}_{t+1|T} - \boldsymbol{\theta}_{t+1|t}),
\\
\mathbf{W}_{t|T} &= \mathbf{W}_{t|t} + \mathbf{A}_t(\mathbf{W}_{t+1|T} - \mathbf{W}_{t+1|t})\mathbf{A}_t^\prime,
\end{align}
where $\mathbf{A}_t= \mathbf{W}_{t|t}\mathbf{F}^\prime\mathbf{W}_{t+1|t}^{-1}$.
The lag-one covariance smoother, $\mathbf{W}_{t-1,t\vert T}$, which appears in the M-step update equations, can be computed in the following way (REFERENCE):
\begin{align}
& \mathbf{W}_{t-1,t\vert T}=E\left[(\boldsymbol\theta_{t-1}-\boldsymbol\theta_{t-1\vert T})(\boldsymbol\theta_{t}-\boldsymbol\theta_{t\vert T})^{\prime}\right]\vert_{\mathbf{y}_{1:T}} \nonumber \\
& =\mathbf{A}_{t-1}\mathbf{W}_{t\vert T}
\end{align}
for $t=T-1,T-2\ldots,1$.
\bigbreak
The obtaining of the smoothed estimators ends the E step. We now have an estimate of the network state at each time bin. We will now use them to optimize the hyper parameters during the M step.
\end{comment}
\subsection{Identification of the underlying graphs}
We combine the sequential Bayes estimation of the time-dependent weight parameters with the online estimation of the graphs $\mathbf{J}_t$. Our approach is based on the maximum likelihood estimation of $\mathbf{J}_t$ using the stochastic gradient method. We optimize $\mathbf{J}_{t}$ under the principle of maximizing the marginal likelihood at time step $t$ given the past observations up to $t-1$:
\begin{equation}
l(\mathbf{J}_{t}) \equiv \log p(\mathbf{X}_{t} \vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}_{t}).
\end{equation}
This optimization will be performed by the stochastic gradient method:
\begin{equation}
\mathbf{J}_{t+1}=\mathbf{J}_t + \epsilon \frac{\partial l (\mathbf{J}_{t})}{\partial\mathbf{J}_t},
\label{eq:J_update_logmarlikelihood}
\end{equation}
starting with a nominal initial estimate for $\mathbf{J}_1$. With this approach, we slightly update $\mathbf{J}_t$ in the direction of the optimal parameters at time $t$ to obtain the graphs at time $t+1$, $\mathbf{J}_{t+1}$. Adding the contribution of the data toward the optimal graph parameters at every time step allows us to estimate them in an online fashion.
To employ the stochastic gradient method, we use the fact that the derivative of the marginal log-likelihood function at time step $t$ can be evaluated via the derivative of an alternative lower bound, as shown below. For this goal, we introduce the lower bound of the marginal log-likelihood obtained by the following Jensen's inequality, similarly to the EM algorithm \cite{smith2003estimating}. Here, in order to search for an optimal graph at time $t$, we introduce a variable $\mathbf{J}^{\ast}_t$. Then,
\begin{align}
l(\mathbf{J}^{\ast}_t)
&= \log \int p(\mathbf{X}_{t},\boldsymbol{\theta}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t) d\boldsymbol{\theta}_{t} \nonumber\\
&=
\log \left\langle
\frac{p(\mathbf{X}_{t},\boldsymbol{\theta}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t)}
{p(\boldsymbol\theta_{t}|\mathbf{X}_{1:t},\mathbf{J}_{1:t})}
\right\rangle_{\boldsymbol\theta_{t}|\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber\\
& \geq
\left\langle
\log \frac{p(\mathbf{X}_{t},\boldsymbol{\theta}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t)}
{p(\boldsymbol\theta_{t}|\mathbf{X}_{1:t},\mathbf{J}_{1:t})}
\right\rangle_{\boldsymbol\theta_{t}|\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber\\
&=
\left\langle
\log p(\mathbf{X}_{t},\boldsymbol{\theta}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t)
\right\rangle_{\boldsymbol\theta_{t}|\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber
\\
&\phantom{=====}-\left\langle
\log p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})
\right\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}},
\end{align}
where $\left\langle \cdot\right\rangle_{\boldsymbol\theta_t\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}}$ is the expectation by the filter density at time $t$. The second term of the last equality is the entropy of the filter density, which is not a function of $\mathbf{J}^{\ast}_t$. The first term is the expected complete data log-likelihood, a.k.a. Q-function:
\begin{align}
q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t)&\equiv\left\langle
\log p(\mathbf{X}_{t},\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t)\right\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber \\
&=\left\langle\log p(\mathbf{X}_t\vert\boldsymbol\theta_t,\mathbf{J}^{\ast}_t) \right\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber \\
&\phantom{=}+\left\langle\log p(\boldsymbol\theta_t\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1})\right\rangle_{\boldsymbol\theta_t\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}}.
\end{align}
In the second term of the last equality, we dropped $\mathbf{J}^{\ast}_{t}$, following the same argument made for Eq.~\ref{eq:state-space_assumption}. Hence, in the lower bound of the marginal log-likelihood, only the first term of the Q-function is a function of $\mathbf{J}^{\ast}_{t}$ :
\begin{equation}
q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t) =
\langle
\boldsymbol{\theta}_t^\prime \mathbf{F}(\mathbf{X}_t, \mathbf{J}^{\ast}_t) - \psi(\boldsymbol{\theta}_t,\mathbf{J}^{\ast}_t)
\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}} + c,
\end{equation}
where $c$ is a function independent of $\mathbf{J}^{\ast}_t$.
\begin{comment}
\begin{align} \label{eq:Q-function}
q & (\mathbf{J}^{\ast}|\mathbf{J}) = \sum_{t=1}^{T}
\langle
\boldsymbol{\theta}_t^\prime \mathbf{F}(\mathbf{X}_t, \mathbf{J}^{\ast}) - \psi(\boldsymbol{\theta}_t)
\rangle_{\boldsymbol\theta_{1:T}|\mathbf{X}_{1:T},\mathbf{J}} \nonumber\\
& -\frac{1}{2}\log{|2\pi \boldsymbol{\Sigma}|} - \frac{1}{2}\left\langle\left(\boldsymbol{\theta}_{1}
-\boldsymbol{\mu}\right)^{\prime}\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{\theta}_{1}-\boldsymbol{\mu}\right)\right\rangle_{\boldsymbol\theta_{1:T}|\mathbf{X}_{1:T},\mathbf{J}} \nonumber\\
& -\frac{T-1}{2}\log{|2\pi (\lambda^{-1}\mathbf{I})|} \nonumber \\
& - \frac{1}{2}\sum\limits _{t=2}^{T}\left\langle\left(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t-1}\right)^{\prime}(\lambda^{-1}\mathbf{I})^{-1}\left(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t-1}\right)\right\rangle_{\boldsymbol\theta_{1:T}|\mathbf{X}_{1:T},\mathbf{J}}
\end{align}
\end{comment}
\begin{comment}
To get the update equation for $\lambda$, we equal the derivative of the Q-function with respect to $\lambda^{\ast}$ to 0. We can then isolate $\lambda^{\ast}$ and obtain:
\begin{align}\label{eq:lambdaupdate}
\lambda^{\ast -1} =
& \frac{1}{(T-1)d}{\rm tr}\left[ \sum_{t=2}^T \mathbf{W}_{t\vert T}-\mathbf{W}_{t-1,t\vert T}\mathbf{F}^{\prime} - \mathbf{F}\mathbf{W}^{\prime}_{t-1,t \vert T}+\mathbf{F}\mathbf{W}_{t-1\vert T}\mathbf{F}^{\prime}\right] \nonumber \\
& +\frac{1}{(T-1)d}\sum_{t=2}^T (\boldsymbol\theta_{t\vert T}-\mathbf{F}\boldsymbol\theta_{t-1\vert T})(\boldsymbol\theta_{t\vert T}-\mathbf{F}\boldsymbol\theta_{t-1\vert T})^{\prime}
\end{align}
The same procedure can be followed to obtain the update equation for $\mathbf{F}$:
\begin{equation}\label{eq:Fupdate}
\mathbf{F}^{\ast}=\left[ \sum_{t=2}^T (\mathbf{W}_{t-1,t\vert T}+\boldsymbol\theta_{t\vert T}\boldsymbol\theta^{\prime}_{t-1 \vert T})\right] \left[\sum_{t=2}^T (\mathbf{W}_{t-1 \vert T}+\boldsymbol\theta_{t-1 \vert T}\boldsymbol\theta^{\prime}_{t-1 \vert T})\right]^{-1}
\end{equation}
\end{comment}
\begin{comment}
After getting new guesses for $\lambda$ and $\mathbf{F}$, we need to consider a stop criterion in order to determine if an other iteration of the EM algorithm is necessary or not. In practice, we do this by computing this
\end{comment}
We note that the derivative of the marginal log-likelihood in Eq.~\ref{eq:J_update_logmarlikelihood} can be replaced with the derivative of the Q-function since we expect
\begin{equation}
\frac{\partial\log p(\mathbf{X}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}_t)}{\partial\mathbf{J}_t}
\approx
\frac{\partial q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t)}{\partial\mathbf{J}^{\ast}_t}\Bigg\vert_{\mathbf{J}^{\ast}_t=\mathbf{J}_t}
\label{eq:equality}
\end{equation}
if the approximate posterior is close to the exact posterior. The derivation of Eq.~\ref{eq:equality} for the exact posterior is given at the end of this section.
This equation means that, at $\mathbf{J}_t$, the optimal direction to update the graphs for maximizing the Q-function is the same as the direction that optimally maximizes the marginal log-likelihood. According to this, the online update of the underlying graphs at each time step with a gradient ascent scheme is given as:
\begin{equation}
\mathbf{J}_{t+1}=\mathbf{J}_t + \epsilon \frac{\partial q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t)}{\partial\mathbf{J}^{\ast}_t} \Bigg\vert_{\mathbf{J}^{\ast}_t=\mathbf{J}_t}.
\label{eq:J_update_qfunction}
\end{equation}
Here we obtain the derivative of the Q-function, using
\begin{equation}
\frac{\partial\boldsymbol\theta_t^{\prime}\mathbf{F}(\mathbf{X}_t, \mathbf{J}^{\ast}_t)}{\partial\mathbf{J}^{\ast}_t}= \frac{\partial\boldsymbol\theta_t^{
\prime}\mathbf{J}^{\ast\prime}_t\tilde{\mathbf{F}}(\mathbf{X}_t)}{\partial\mathbf{J}^{\ast}_t}
=\tilde{\mathbf{F}}(\mathbf{X}_t)\boldsymbol\theta_t^{\prime},
\end{equation}
\begin{align}
\frac{\partial\psi(\boldsymbol\theta_t,\mathbf{J}^{\ast}_t)}{\partial\mathbf{J}^{\ast}_t} &= \frac{\partial}{\partial\mathbf{J}^{\ast}_t}\log\sum_{\mathbf{x}}\exp\left[\boldsymbol\theta_t^{\prime}\mathbf{F}(\mathbf{x}, \mathbf{J}^{\ast}_t)\right] \nonumber \\
&=\frac{\partial}{\partial\mathbf{J}^{\ast}_t}\log\sum_{\mathbf{x}}\exp\left[\boldsymbol\theta_t^{\prime}\mathbf{J}^{\ast\prime}_t\tilde{\mathbf{F}}(\mathbf{x})\right] \nonumber \\
&=\frac{\sum_{\mathbf{x}}\exp\left[\boldsymbol\theta_t^{\prime}\mathbf{F}(\mathbf{x}, \mathbf{J}^{\ast}_t)\right]\tilde{\mathbf{F}}(\mathbf{x})\boldsymbol\theta_t^{\prime}}{\sum_{\mathbf{x}}\exp\left[\boldsymbol\theta_t^{\prime}\mathbf{F}(\mathbf{x}, \mathbf{J}^{\ast}_t)\right]} \nonumber \\
&=\sum_{\mathbf{x}}\exp\left[\boldsymbol\theta_t^{\prime}\mathbf{F}(\mathbf{x}, \mathbf{J}^{\ast}_t)-\psi(\boldsymbol\theta_t,\mathbf{J}^{\ast}_t)\right]\tilde{\mathbf{F}}(\mathbf{x})\boldsymbol\theta_t^{\prime} \nonumber \\
&=E_{\mathbf{X}\vert\boldsymbol\theta_t,\mathbf{J}^{\ast}_t}\tilde{\mathbf{F}}(\mathbf{X})\boldsymbol\theta_t^{\prime}
=\tilde{\boldsymbol\eta}_t\boldsymbol\theta^{\prime}_t,
\end{align}
where $E_{\mathbf{X}\vert\boldsymbol\theta_t,\mathbf{J}^{\ast}_t}$ represents the expectation by $p(\mathbf{x}\vert\boldsymbol\theta_t,\mathbf{J}^{\ast}_t)$. Using the above results, the derivative of the Q-function with respect to $\mathbf{J}^{\ast}_t$ is expressed as
\begin{align}\label{update_J}
\frac{\partial q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t)}{\partial\mathbf{J}^{\ast}_t}
=\langle(\tilde{\mathbf{F}}(\mathbf{X}_t)-\tilde{\boldsymbol\eta}_t)\boldsymbol\theta_t^{\prime}\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}}.
\end{align}
Updating $\mathbf{J}_t$ requires evaluating Eq.~\ref{update_J} at $\mathbf{J}^{\ast}_t=\mathbf{J}_t$, which involves an expectation with respect to the filter density at time $t$. This expectation can be computed by sampling from the approximate Gaussian posterior distribution, $\boldsymbol\theta_{t}\sim\mathcal{N}(\boldsymbol\theta_{t\vert t},\mathbf{W}_{t\vert t})$. Finally, we sketch the derivation of Eq.~\ref{eq:equality} for an exact posterior density:
\begin{align}
&\frac{\partial q(\mathbf{J}^{\ast}_t \vert \mathbf{J}_t) }{\partial \mathbf{J}^{\ast}_t}\Bigg\vert_{\mathbf{J}^{\ast}_t=\mathbf{J}_t} \nonumber \\
&=\frac{\partial \left\langle\log p(\mathbf{X}_{t},\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}^{\ast}_t)\right\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}}}{\partial\mathbf{J}^{\ast}_t}\Bigg\vert_{\mathbf{J}^{\ast}_t=\mathbf{J}_t} \nonumber \\
&=\left\langle\left[\frac{\partial\log p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t-1},\mathbf{X}_t,\mathbf{J}_{1:t-1},\mathbf{J}_{t})}{\partial\mathbf{J}_t}\right.\right. \nonumber \\
&\phantom{======}\left.\left.+ \frac{\partial\log p(\mathbf{X}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t-1},\mathbf{J}_t)}{\partial\mathbf{J}_t}\right]\right\rangle_{\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t}} \nonumber \\
&=\int p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})\left[\frac{1}{p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})} \frac{\partial p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})}{\partial\mathbf{J}_t} \right. \nonumber \\
&\phantom{======}\left.
+\frac{\partial\log p(\mathbf{X}_{t}\vert\mathbf{X}_{1:t-1}, \mathbf{J}_{1:t})}{\partial\mathbf{J}_t}\right]d\boldsymbol\theta_{t} \nonumber \\
&=\int\left[\frac{\partial p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})}{\partial\mathbf{J}_t}\right. \nonumber \\
&\phantom{======}\left.+p(\boldsymbol\theta_{t}\vert\mathbf{X}_{1:t},\mathbf{J}_{1:t})\frac{\partial\log p(\mathbf{X}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t})}{\partial\mathbf{J}_t}\right]d\boldsymbol\theta_{t}\nonumber \\
&=0+1\cdot\frac{\partial\log p(\mathbf{X}_{t}\vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t})}{\partial\mathbf{J}_t}.
\label{eq:equality_long}
\end{align}
This completes the stochastic gradient search of the underlying graphs under the maximum likelihood principle.
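For concreteness, the gradient step of Eq.~\ref{eq:J_update_qfunction}, with the expectation in Eq.~\ref{update_J} approximated by samples from the Gaussian filter posterior, might be implemented as sketched below. This is not the authors' code: the toy sizes $N=3$, $D=2$, the function names, and the exact enumeration of all $2^N$ states to obtain $\tilde{\boldsymbol\eta}_t$ are our own assumptions for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, D = 3, 2                                     # toy sizes, for illustration only

def f_tilde(x):
    """Sufficient statistics: [x_i] followed by [x_i * x_j for i < j]."""
    pairs = [x[i] * x[j] for i in range(N) for j in range(i + 1, N)]
    return np.array(list(x) + pairs, dtype=float)

# all 2^N binary states, enumerated once (feasible only for small N)
STATES = np.array([f_tilde(x) for x in itertools.product([0, 1], repeat=N)])

def eta_tilde(theta, J):
    """Exact E[F~(X) | theta, J] by summing over all states."""
    logits = STATES @ (J @ theta)               # theta' J' F~(x) for each x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p @ STATES

def grad_q(x_t, theta_mean, theta_cov, J, n_samples=200):
    """Monte Carlo estimate of <(F~(X_t) - eta~_t) theta_t'> under the
    Gaussian filter posterior N(theta_mean, theta_cov)."""
    thetas = rng.multivariate_normal(theta_mean, theta_cov, size=n_samples)
    g = np.zeros_like(J)
    for th in thetas:
        g += np.outer(f_tilde(x_t) - eta_tilde(th, J), th)
    return g / n_samples

J = rng.normal(size=(N + N * (N - 1) // 2, D))  # random initial graphs
g = grad_q((1, 0, 1), np.zeros(D), 0.1 * np.eye(D), J)
J_next = J + 1e-3 * g                           # one online gradient-ascent step
```

For larger $N$, the exact enumeration in `eta_tilde` would have to be replaced by an approximation method.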
\begin{comment}
\begin{equation}
\frac{\partial\log p(\mathbf{X}_{1:T}\vert\mathbf{J})}{\partial\mathbf{J}}\Bigg\vert_{\mathbf{J}=\mathbf{J}^{\ast}}=\frac{\partial q(\mathbf{J}^{\ast}\vert\mathbf{J})}{\partial\mathbf{J}^{\ast}}\Bigg\vert_{\mathbf{J}=\mathbf{J}^{\ast}}
\end{equation}
\begin{align}
&\frac{\partial q(\mathbf{J}^{\ast}\vert\mathbf{J})}{\partial\mathbf{J}^{\ast}}\Bigg\vert_{\mathbf{J}=\mathbf{J}^{\ast}}
=\frac{\partial E_{\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}}\log p(\mathbf{X}_{1:T},\boldsymbol\theta_{1:T}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}\Bigg\vert_{\mathbf{J}=\mathbf{J}^{\ast}} \nonumber \\
&=E_{\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast}}\left[\frac{\partial\log p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}+\frac{\partial\log p(\mathbf{X}_{1:T}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}\right]\nonumber \\
&=\int p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})\left[\frac{1}{p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})}\frac{\partial p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}} \right. \nonumber \\
&\phantom{===================}\left.+\frac{\partial\log p(\mathbf{X}_{1:T}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}\right]d\boldsymbol\theta_{1:T} \nonumber \\
&=\int\left[\frac{\partial p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}\right. \nonumber \\
&\phantom{===========}\left.+p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}^{\ast})\frac{\partial\log p(\mathbf{X}_{1:T}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}\right]d\boldsymbol\theta_{1:T}\nonumber \\
&=0+1\frac{\partial\log p(\mathbf{X}_{1:T}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}} \nonumber \\
\end{align}
\end{comment}
\section{Results}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth,height=60mm]{fig_gen_data}
\caption{Online estimation of graphs present in artificially generated data. The state-space multi-graph Ising model was fitted to repeatedly sampled data (200 epochs, each epoch is composed of $T_{\rm ep}=\num{1500}$ time bins). \textbf{A} Most likely binary patterns generated from two constant underlying graphs ($\mathbf{J}$) with $N=9$ nodes. \textbf{B} Top: Images sampled from the time-dependent underlying model at time steps $\left\{ \num{0}, \frac{T_{\rm ep}}{4}, \frac{T_{\rm ep}}{2}, \frac{3T_{\rm ep}}{4}, T_{\rm ep} \right\}$. Middle: Time-dependent weight parameters ($\boldsymbol\theta_t$) of the generative model (dashed line) for a single epoch. Solid lines show the maximum a posteriori estimates of the weights at the last epoch of the data with credible intervals. Bottom: Images sampled from the fitted model at the aforementioned time steps at the last epoch of the data. \textbf{C} Top: Average of the marginal log-likelihood of the data given the fitted model at every time step over each epoch of the data. Bottom: Correlation coefficient between the graphs of the underlying $\mathbf{J}$ and the fitted $\mathbf{J}_t$ matrices at each epoch of the data. During fitting, the variance of the columns of the estimated $\mathbf{J}_t$ matrix was scaled to unity at every time bin. $\lambda$ was fixed to 1000 and the learning rate $\epsilon$ was $10^{-3}$.}
\label{fig:gen_data}
\end{figure*}
\subsection{Application to artificially generated data}
\textbf{Generation of artificial data} To demonstrate the applicability of the method, we fitted the model to synthesized binary data representing dynamically changing images (Fig.~\ref{fig:gen_data}), in which each node ($x_i$) corresponds to a pixel that can either be black ($x_i=1$) or white ($x_i=0$). At each step, a sample was generated by Eq.~\ref{eq:observationmodel}, a mixture of multiple graphs $\mathbf{J}$ with time-dependent weights $\boldsymbol\theta_t$. Here we explain how we constructed $\mathbf{J}$ and $\boldsymbol\theta_t$ to generate data.
We generated the data from a mixture of two fixed graphs, each containing 9 nodes; that is, we used a constant $\mathbf{J}$ to synthesize the data. If the graphs are used separately, the model most frequently generates an image similar to a `+' or a `T', respectively (Fig.~\ref{fig:gen_data}A). Note that these images have overlapping components. For the time-dependent weights, we chose sinusoidal waves with a period of $T_{\rm ep}$=1500 time steps. The middle panel of Fig.~\ref{fig:gen_data}B shows the dynamics of the weights of the underlying model (dashed lines) over one epoch of the data. We set their baseline and amplitude to 0.5 with a phase shift of $\pi$ so that each graph in turn has a dominant contribution while the other has a small contribution. We repeatedly generated the time-series according to the above procedure, and obtained 200 epochs of the binary data ($T$=\num{300000} time steps).
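The sinusoidal weight schedule described above can be written compactly; the sketch below uses the period, baseline, amplitude, and phase shift stated in the text (the variable names are our own):

```python
import numpy as np

T_ep = 1500                                    # time steps per epoch
t = np.arange(T_ep)
# baseline 0.5, amplitude 0.5, phase shift of pi between the two graphs
theta_1 = 0.5 + 0.5 * np.sin(2 * np.pi * t / T_ep)
theta_2 = 0.5 + 0.5 * np.sin(2 * np.pi * t / T_ep + np.pi)
theta = np.stack([theta_1, theta_2], axis=1)   # (T_ep, 2) weight trajectory
```

At $t = T_{\rm ep}/4$ the first weight peaks at 1 while the second is 0, so each graph dominates in turn.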
On the top panel of Fig.~\ref{fig:gen_data}B, we show images sampled from the underlying model at different time steps in a single epoch. In particular, because of the chosen dynamics of the weights, we expect to obtain with a high probability an image corresponding to a '+' and a 'T' at $t=0.25 T_{\rm ep}$ and $t=0.75 T_{\rm ep}$ respectively. The other images are sampled from a mixture of the two graphs and do not necessarily resemble either one of the main images.
\textbf{Estimating the state-space multi-graph Ising model} On the middle panel of Fig.~\ref{fig:gen_data}B, we also show the dynamics of the weights of the fitted model (solid lines) at the last epoch of the data. These are obtained by the sequential Bayes algorithm. We show their credible interval computed from the covariance matrix of the posterior density of the weights as
$
\boldsymbol\theta_{t\vert t}^k \pm 2\sqrt{\left[\mathbf{W}_{t\vert t}\right]_{k,k}}
$,
where $\left[\mathbf{W}_{t\vert t}\right]_{k,k}$ corresponds to the $k^{th}$ element on the diagonal of the covariance matrix. The dynamics of the fitted model agree well with those of the underlying model, as the underlying weights generally fall within the credible intervals of the estimates. On the bottom panel of Fig.~\ref{fig:gen_data}B, we show images sampled from the fitted model for the last epoch of the data. The sampled images from the fitted model became closer to those generated artificially as the online estimation of $\mathbf{J}$ progressed.
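The credible interval above follows directly from the filter output; a minimal sketch (the function name and interface are our own):

```python
import numpy as np

def credible_band(theta_map, W):
    """Band theta_k +/- 2*sqrt([W]_kk) from the filter covariance diagonal."""
    half_width = 2.0 * np.sqrt(np.diag(W))
    return theta_map - half_width, theta_map + half_width

lo, hi = credible_band(np.array([0.5, 1.0]), np.diag([0.04, 0.09]))
```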
\begin{comment}The top panel of Fig.~\ref{fig:gen_data}C shows the marginal log-likelihood at every epoch of the data.
\end{comment}
At a given time $t$, the marginal log-likelihood of the data can be approximated by (see \cite{Shimazaki2012}):
\begin{align}
l(\mathbf{J}_{t}) &\equiv \log p(\mathbf{X}_{t} \vert\mathbf{X}_{1:t-1},\mathbf{J}_{1:t}) \nonumber \\
&\approx \left[\boldsymbol\theta^{\prime}_{t\vert t}\mathbf{F}(\mathbf{X}_t,\mathbf{J}_t)-\psi(\boldsymbol\theta_{t\vert t},\mathbf{J}_t)\right]\nonumber \\
&-\frac{1}{2}(\boldsymbol\theta_{t\vert t}-\boldsymbol\theta_{t\vert t-1})^{\prime}\mathbf{W}^{-1}_{t\vert t-1}(\boldsymbol\theta_{t\vert t}-\boldsymbol\theta_{t\vert t-1})\nonumber \\
&+\frac{1}{2}(\log\det\mathbf{W}_{t\vert t}-\log\det\mathbf{W}_{t\vert t-1})
\end{align}
The top panel of Fig.~\ref{fig:gen_data}C shows the average of the marginal log-likelihoods obtained for each epoch. Namely, for epoch $r$, we computed:
\begin{equation}
l_{\rm avg}(r) = \frac{1}{T_{\rm ep}}\sum_{\tau=(r-1) T_{\rm ep}+1}^{r T_{\rm ep}} l(\mathbf{J}_{\tau})
\end{equation}
for $r=1,...,200$.
The marginal log-likelihood of our fitted model increased with more epochs of the data.
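The epoch average $l_{\rm avg}(r)$ can be computed by reshaping the per-step values; a minimal sketch (assuming $T$ is an exact multiple of $T_{\rm ep}$):

```python
import numpy as np

def epoch_average(l_per_step, T_ep):
    """l_avg(r): mean of the per-step marginal log-likelihoods in epoch r."""
    l = np.asarray(l_per_step, dtype=float)
    return l.reshape(-1, T_ep).mean(axis=1)    # one value per epoch

l_avg = epoch_average([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], T_ep=3)
```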
For this simple example, we expected that the matrix $\mathbf{J}$ that generated the data would be reconstructed by the online algorithm. In order to verify the goodness of the estimated $\mathbf{J}$, we computed the correlation coefficient between the columns of the fitted and underlying $\mathbf{J}$ matrices:
\begin{equation}
{\rm corr.\ coef.} (\mathbf{j}^k_{\rm gen},\mathbf{j}^k_{\rm fit})=\frac{\mathbf{j}^k_{\rm gen}\cdot\mathbf{j}^k_{\rm fit}}{||\mathbf{j}^k_{\rm gen}||\,||\mathbf{j}^k_{\rm fit}||},
\label{eq:corr_coef}
\end{equation}
where $\mathbf{j}^k$ corresponds to the $k^{th}$ column of the $\mathbf{J}$ matrix and the dot denotes the inner product of two vectors.
Both correlation coefficients were close to 0 at the beginning (the first guess of $\mathbf{J}$ was randomized as a zero-mean, unit-variance Gaussian) and progressed toward 1 (Fig.~\ref{fig:gen_data}C, bottom panel).
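Interpreting $||\cdot||$ as the Euclidean norm, Eq.~\ref{eq:corr_coef} is the cosine similarity between matching columns; a minimal sketch (names are ours):

```python
import numpy as np

def column_corr(J_gen, J_fit):
    """Cosine similarity between matching columns of two matrices."""
    num = np.sum(J_gen * J_fit, axis=0)
    den = np.linalg.norm(J_gen, axis=0) * np.linalg.norm(J_fit, axis=0)
    return num / den

c = column_corr(np.array([[1.0, 0.0], [0.0, 2.0]]),
                np.array([[2.0, 0.0], [0.0, 1.0]]))
```

Identical (up to scale) columns give 1, orthogonal columns give 0.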
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth,height=60mm]{fig_gen_AIC}
\caption{Determination of the number of graphs present in data generated from $D$ = 2 random zero-mean, unit-variance Gaussian graphs for $N$=9 nodes. The data was generated 200 times. \textbf{A} Dynamics of the weights applied to the 2 columns of the underlying $\mathbf{J}$ matrix. \textbf{B} Dynamics of the fitted weights at the last epoch of the data for models with number of graphs ($D_{\rm{fit}}$) equal to 1, 2, 3 and 4. The Akaike Information Criterion (AIC) of each model is indicated in the plot titles. For every fitted model, the variance of the columns of the $\mathbf{J}_t$ matrix was scaled to unity at every time bin. $\lambda$ was fixed to 1000 and the learning rate $\epsilon$ was $10^{-3}$ for all fitted models.}
\label{fig:nb_graph}
\end{figure*}
Finally, in order to show that the proposed method captured the features better than a traditional orthogonal decomposition method, we performed a principal component analysis (PCA) on the MAP estimate of the time-dependent full Ising model, which includes only a single graph as in Eq.~\ref{eq:ising_original} but whose parameters all vary in time \cite{Shimazaki2009,Shimazaki2012}. We used the same binary data to fit the model. The data matrix used for the PCA consisted of the MAP estimates at every time step. Features extracted by PCA were less correlated with the generative graphs: the absolute values of the correlation coefficients between the columns of the generative $\mathbf{J}$ and PC1 and PC2 were below 0.6, which is lower than the result obtained by our method ($\sim$ 0.7).
\begin{comment}
Note that since the gradient used in the update equation of $\mathbf{J}$ depends on $\boldsymbol\theta$ (Eq.~\ref{eq:J_update_qfunction},\ref{update_J}), columns of the $\mathbf{J}$ matrix with a corresponding weight close to 0 practically don't get updated, resulting in a correlation measure constant in time. We observe this in this study as both fitted weights alternatively have values close to 0 because of the sinusoidal dynamic imposed in the generative model. This explains why the update of the second graph is limited in the first half of the procedure while it is significant in the second half. The opposite behavior is observed for the first graph.
\end{comment}
\subsection{Model selection}
In practical applications of the method, the exact number of graphs from which the data is sampled ($D$) is unknown. Hence, it is necessary to specify how many graphs to include in the model. To obtain the most predictive model while avoiding overfitting to the data, we compare the models by their respective Akaike Information Criterion (AIC), given by
\begin{equation}
{\rm AIC} = -2l(\mathbf{J}_{T-T_{\rm ep}:T}) + 2m
\end{equation}
as a model-selection criterion for the online method. Here, $l(\mathbf{J}_{T-T_{\rm ep}:T})=\sum_{t=T-T_{\rm ep}}^T l(\mathbf{J}_t)$ and $m$ is the number of free parameters, which in our case is the number of elements in the $\mathbf{J}$ matrix. We compute the marginal log-likelihood of the models at the last epoch of the data; the model with the smallest AIC should be selected.
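Model comparison by AIC then reduces to picking the smallest value; a minimal sketch (inputs are the summed marginal log-likelihood over the last epoch and the parameter count $m$ for each candidate; the numbers are hypothetical):

```python
import numpy as np

def select_by_aic(logliks, n_params):
    """Return AIC values and the index of the model with the smallest AIC."""
    aic = np.array([-2.0 * l + 2 * m for l, m in zip(logliks, n_params)])
    return aic, int(np.argmin(aic))

# hypothetical candidates with D_fit = 1, 2, 3 graphs (m grows with D_fit)
aic, best = select_by_aic([-900.0, -400.0, -395.0], [45, 90, 135])
```

In this made-up example the 2-graph model wins: its extra fit outweighs the parameter penalty, while the 3-graph model's does not.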
In Fig.~\ref{fig:nb_graph}, we show that the AIC properly identified the number of graphs in artificially generated data. We generated 200 epochs of data by constructing a model containing two 9-node, zero-mean, unit-variance random Gaussian graphs. Fig.~\ref{fig:nb_graph}A shows the dynamics of the weights used to generate the data over one epoch. We then fitted the model to this data using up to 4 graphs. The same values were used for all other hyper-parameters ($\boldsymbol\mu$, $\boldsymbol\Sigma$, $\lambda$). Fig.~\ref{fig:nb_graph}B shows the fitted weight dynamics. The AIC of each model is shown in the titles of the plots. The model containing 2 graphs has the lowest AIC, which means that using the AIC as a discriminating criterion can avoid over-fitted models.
Last, we compared the goodness-of-fit of our model to that of a time-dependent full Ising model \cite{Shimazaki2009,Shimazaki2012}. For the full model, the mean of the prior of $\mathbf{j}$ was the only free parameter, so $m$ is equal to the number of elements in $\mathbf{j}$. The AIC obtained (\num{27372}) is higher than the AIC of the multi-graph models. This confirms that reducing the dimensionality can lead to a better fit to data sampled from the multi-graph Ising model.
\subsection{Application to neural data}
Next, we applied our model to neural data. We analyzed spontaneous activity of neurons recorded from rat hippocampal dissociated cultures using multi-electrode arrays \cite{10.3389/fphys.2016.00425,timme2018spontaneous}. In these experiments, the authors recorded the cultures for about 5 weeks while neurons grew connections, and reported that the neural activity approached a critical state over time \cite{10.3389/fphys.2016.00425}. The recording length is approximately 1 hour per day.
We analyzed one culture on day 34 that contained 85 neurons. We chose the 12 neurons that showed the highest firing rates. We used 10 ms bins to construct binary data (Fig.~\ref{fig:neural}A), and fitted the multi-graph Ising models with the number of graphs ($D_{\rm{fit}}$) equal to 1, 2, and 3. Fig.~\ref{fig:neural}B shows the estimated weights $\boldsymbol \theta_{t}$ for each model. Correlation coefficients between the learned $\mathbf J$ at the last step and the graphs obtained at each step confirm that the learning of $\mathbf J$ was completed by the first half of the observation period (Fig.~\ref{fig:neural}C). The learning for models with more than 3 graphs was not completed in this period, hence these models were excluded from the analysis.
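The 10 ms binarization might be implemented as follows. This is a sketch under our own assumptions (spike times given in seconds, one list per neuron; names are ours), not the authors' pipeline:

```python
import numpy as np

def bin_spikes(spike_times, t_end, bin_s=0.010):
    """Binary matrix X[i, t] = 1 iff neuron i fired at least once in bin t."""
    n_bins = int(round(t_end / bin_s))
    edges = np.linspace(0.0, n_bins * bin_s, n_bins + 1)
    X = np.zeros((len(spike_times), n_bins), dtype=int)
    for i, ts in enumerate(spike_times):
        counts, _ = np.histogram(ts, bins=edges)
        X[i] = (counts > 0).astype(int)
    return X

# two hypothetical neurons over a 30 ms window
X = bin_spikes([[0.003, 0.004, 0.025], [0.012]], t_end=0.03)
```

Note that multiple spikes within a bin are collapsed to a single 1, matching the binary observation model.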
Notably, in the latter half, during which $\mathbf J$ is stable, the weight of the single-graph model is stationary whereas the weights of the multi-graph models vary in time. Since the learned graphs within the multi-graph models ($D_{\rm{fit}}=2, 3$) are significantly correlated, the coordinated weight changes mostly capture the stationary activity of the population (not shown). Nevertheless, the differential dynamics of the multiple graphs may capture non-stationary activity of the population. To evaluate the predictive power of the models, we computed their AICs as a proxy, using the data in the latter half of the observation period. The model with 3 graphs was selected. This result suggests that the spontaneous activity of cultured neurons is not stationary, as reported in \cite{sasaki2007metastability}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig_neural}
\caption{Application of the state-space multi-graph Ising model to neural data. \textbf{A} Binary spike patterns of neurons from rat hippocampal dissociated cultures with 10 ms bins (total $T=\num{369999}$ bins). The data comes from the 12 neurons exhibiting the highest spike rates from one neural culture (Culture 28) at day 34. \textbf{B} Dynamics of the estimated weights for models with number of graphs ($D_{\rm{fit}}$) equal to 1, 2 and 3. During the fitting procedure, the variance of the columns of $\mathbf{J}_t$ was scaled to unity at every time bin. $\lambda$ was fixed to \num{10000} and the learning rate $\epsilon$ was $10^{-2}$ for all fitted models. \textbf{C} Correlation coefficients between the columns of the estimated $\mathbf{J}$ matrix at the last time step and the columns of the estimated $\mathbf{J}$ matrix at each time step. \textbf{D} AIC of each model fitted to the last half of the data. \textbf{E} Estimated graphs at $t=T$ for the model with 3 graphs. The color of the nodes represents the values of the first-order parameters $h_i$ and the color of the edges represents the values of the second-order parameters $j_{ij}$.}
\label{fig:neural}
\end{figure*}
\begin{comment}
\subsection{Application to musical data}
Next, we applied our model to musical data. This procedure should extract features of musical pieces that appear and disappear as independent processes. Here, the musical features are combinations of pitches most likely to be played at the same time by different instruments or as a part of a chord.
We applied our model to a chorale by Johann Sebastian Bach (BWV 57.8). This is a 4-part chorale, where different instruments play different notes.
Using Music21 \cite{cuthbert2010music21}, we extracted the list of note pitches, their starting offset and their duration. We then built a binary time-series where every node is a pitch and the time-axis corresponds to the offset relative to the start of the piece. We considered 1 time bin to be equal to a quarter of a quarter length. The 9 most played pitches were selected for the analysis (Fig.~\ref{fig:music}A).
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth,height=57mm]{fig_music}
\caption{Application of the state-space multi-graph Ising model to musical data. The analyzed piece is a 4-part chorale by Johann Sebastian Bach (BWV 57.8). \textbf{A} Binary time-series of the pitches played in all parts for each measure of the piece. Only the $N=9$ most used pitches are considered. \textbf{B} AIC of fitted models with different number of graphs ($D_{fit}$) computed at the end of each epoch of the data. The data was repeated until convergence of the marginal log-likelihood. \textbf{C} Patterns most likely sampled by considering the individual learned graphs for the model containing 2 graphs (optimal number of graphs according to the AIC). \textbf{D} Top: Maximum a posteriori estimates of the fitted network state parameters. Bottom: Similarity between the learned patterns and the patterns in the data. No scaling was applied to the columns of the $\mathbf{J}_t$ matrix. $\lambda$ was fixed to 1000 and the learning rate $\epsilon$ was $10^{-3}$ for all fitted models.}
\label{fig:music}
\end{figure*}
We fitted our models to the same time-series using up to 5 graphs. We repeated the data until the marginal log-likelihood over one epoch converged. We determined the best model to be the one with 2 graphs using the AIC (Fig.~\ref{fig:music}B). The result of the fitting of the model with 2 graphs is shown on Fig.~\ref{fig:music}C-D.
In order to have a practical understanding of the results, we computed the most typical combinations of pitches expected from each of the learned graphs (Fig.~\ref{fig:music}C). To obtain them, we constructed 2 original Ising models (Eq.~\ref{eq:ising_original}) with graph $\mathbf{j}^{k,T}$ for $k=1,2$ where $\mathbf{j}^{k,T}$ is the $k^{th}$ column of the fitted $\mathbf{J}_T$ matrix. The binary pattern with the highest probability for each original Ising model is shown. Both patterns contained 4 active nodes (4 pitches played), which may be expected because the 4 parts of the piece contain at most 1 note at a time. The estimated weight dynamics of the learned graphs ($\boldsymbol\theta_t$) are shown in Fig.~\ref{fig:music}D Top. The dynamics show increases and decreases of the weights of the graphs that should capture key dynamics of the piece. The estimated weight of a combination of pitches should increase when this combination is present in the data and decrease when it is absent. To verify this, we computed the fraction of nodes that have the same state in the data and in the learned patterns at every time step. We call this fraction the similarity to the data and we show it in Fig~\ref{fig:music}D Bottom. It should be 1 if the data exhibits a pattern identical to a learned pattern and 0 if all nodes exhibit a different response. We confirm that the shapes of the similarity curves resemble the shapes of the weight curves, which means the multi-graph model properly caught the dynamics of the 2 main binary patterns present in the data.
\end{comment}
\begin{comment}
Visual inspection of the original data confirms that it contains these combinations of pitches when the corresponding estimated weights increase. Also, the absence of one of these patterns in the data causes the corresponding weight to decrease.
\end{comment}
\section{Discussion}
We proposed an algorithm that can identify graphs underlying binary time-series data. Our model is composed of multiple time-dependent Ising graphs with weight parameters. The model is fitted by sequential Bayes estimation for the weights and stochastic gradient based on maximizing the likelihood for the graphs. The method was corroborated by artificially generated and neural time-series data.
It should be noted that our method extracts multiple graphs whose weights follow the assumption specified by the prior density (Eq.~\ref{eq:statemodel}). In the current study, we assumed independent Markov processes for the weight dynamics. Given the observation model without the prior, the choice of $\mathbf{J}$ is undetermined because the following observation model with $D$ graphs is identical for any choice of invertible $D$ by $D$ matrix $\mathbf{Z}$:
\begin{equation}
p(\mathbf{x}\vert\boldsymbol\theta_t, \mathbf{J}_t)=\exp\left[\boldsymbol\theta_t^{\prime}(\mathbf{Z}^{-1}\mathbf{Z})^{\prime} \mathbf{J}_t^{\prime}\tilde{\mathbf{F}}(\mathbf{x})-\psi(\mathbf{Z} \boldsymbol\theta_t,\mathbf{J}_t \mathbf{Z}^{-1})\right].
\end{equation}
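This indeterminacy can be checked numerically. The sketch below (a toy example with $N=3$, $D=2$; not from the paper) verifies that the log-probability of every state is unchanged under the transformation $(\boldsymbol\theta_t,\mathbf{J}_t)\mapsto(\mathbf{Z}\boldsymbol\theta_t,\mathbf{J}_t\mathbf{Z}^{-1})$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, D = 3, 2                                    # toy sizes for illustration

def f_tilde(x):
    """Sufficient statistics [x_i] + [x_i * x_j, i < j]."""
    pairs = [x[i] * x[j] for i in range(N) for j in range(i + 1, N)]
    return np.array(list(x) + pairs, dtype=float)

STATES = np.array([f_tilde(x) for x in itertools.product([0, 1], repeat=N)])

def log_p(theta, J):
    """log p(x | theta, J) for all 2^N states, exactly normalized."""
    logits = STATES @ (J @ theta)              # theta' J' F~(x)
    m = logits.max()
    return logits - (m + np.log(np.exp(logits - m).sum()))

theta = rng.normal(size=D)
J = rng.normal(size=(N + N * (N - 1) // 2, D))
Z = np.array([[1.0, 0.5], [0.2, 1.5]])         # some invertible 2x2 matrix

same = np.allclose(log_p(theta, J), log_p(Z @ theta, J @ np.linalg.inv(Z)))
```

The check succeeds because $(\mathbf{J}\mathbf{Z}^{-1})(\mathbf{Z}\boldsymbol\theta)=\mathbf{J}\boldsymbol\theta$, so the likelihood alone cannot distinguish the two parameterizations.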
Because of the chosen prior distribution, the current method tends to extract independent weight processes with the same time-scale. Despite this assumption about the processes, the determination of the graphs still suffers from ambiguity.
For example, a single graph with a constant weight is captured by multiple graphs with significantly correlated weight dynamics if a linear combination of the graphs can reconstruct the original graph, as seen in the analysis of neural data.
In future work, we need to impose different time-scales on the weight processes and/or regularize $\mathbf{J}$ to disambiguate the graph estimation.
\begin{comment}
For musical data, the fitted graphs with 5 different initial values yielded an average correlation coefficient of 0.56, which is not significantly high. In future work, proper regularization may be considered in estimating $\mathbf{J}$.
\end{comment}
The current model can be extended in several ways toward predicting complex time-series data. First, the number of binary units that can be treated by the model needs to be increased. The exact computations of $\psi$, $\tilde{\boldsymbol\eta}$ and $\tilde{\mathbf{G}}$ using Eqs.~\ref{eq:freeenergy}, \ref{eq:eta_tilde} and \ref{eq:G_tilde} are only practically realizable with a limited number of nodes (up to approximately 20). For this goal, we can use the pseudolikelihood and TAP/Bethe approximation methods for the state-space Ising model introduced in \cite{donner2017approximate}. Second, we can use the particle filter to model a non-Gaussian posterior as long as the dimension of the state in this model is small. Finally, an important extension is to model non-linear transitions of the state and capture long-term relationships between time bins, similarly to the modern recurrent neural network models \cite{graves2013generating}.
\section{Conclusion}
In summary, the proposed model can estimate undirected graphs and their underlying dynamics in binary data in an online manner. This model is a dynamic probabilistic network that aims to predict sequences of binary patterns, assuming that they are sampled from a mixture of Ising graphs. While further development is necessary for practical use, the model potentially provides a probabilistic approach for pattern prediction that can be an alternative to classical recurrent networks.
\begin{comment}
memo:
In practice, the update of $\mathbf{J}$ is rather unstable. This may be caused by the apparition of a scaling factor, $\mathbf{K}$, that appears during the EM algorithm. We then get:
\begin{equation}
p(\mathbf{x}\vert\boldsymbol\theta_t)=\exp\left[ \sum_{k=1}^D \mathbf{K}\theta_{t}^k\frac{1}{\mathbf{K}} \left[ \sum_{i=1}^N h_i^k x_i + \sum_{j>i} j_{ij}^k x_i x_j\right] - \psi(\boldsymbol\theta_t)\right]
\end{equation}
Of course, both instances of the scaling factor cancel each other to give the right probability distribution, but the sequence of states obtained is multiplied by the scaling factor and the $\mathbf{J}$ matrix returned is divided by it. The sequence of states and the hyper parameters obtained are consequently wrong.
We circumvent this issue by introducing a Bayesian regularization. We can write the complete data log-likelihood in this way by marginalizing:
\begin{equation}
\log p(\mathbf{X})=\log\int p(\mathbf{X}\vert\mathbf{J})p(\mathbf{J})d\mathbf{J}.
\end{equation}
As a first order approximation, we take the integral to be equal to the value of the probabilities at the optimal point in $\mathbf{J}$:
\begin{equation}
\log p(\mathbf{X})\approx \log p(\mathbf{X}\vert\mathbf{J}^{\ast})+\log p(\mathbf{J}^{\ast}).
\end{equation}
By assuming a Gaussian distribution with null mean and covariance $\Lambda^{-1}\mathbf{I}$ for $p(\mathbf{J})$, we obtain:
\begin{equation}
\log p(\mathbf{X})\approx \log p(\mathbf{X}\vert\mathbf{J}^{\ast})-\frac{\Lambda^{-1}}{2}\mathbf{J}^{\ast \prime}\mathbf{J}^{\ast}
\end{equation}
By the definition of the Q-function and the negative entropy term, we have:
\begin{equation}
\log p(\mathbf{X})\geq q(\mathbf{J}^{\ast}\vert \mathbf{J})+\mathcal{H}\left[p(\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T},\mathbf{J}) \right] - \frac{\Lambda^{-1}}{2}\mathbf{J}^{\ast \prime}\mathbf{J}^{\ast}
\end{equation}
We now aim to optimize this new lower bound of the complete log-likelihood. To do so, we differentiate with respect to $\mathbf{J}^{\ast}$.
\begin{align}
\frac{\partial\log p(\mathbf{X})}{\partial\mathbf{J}^{\ast}}& =
\frac{\partial\log p (\mathbf{X}\vert\mathbf{J}^{\ast})}{\partial\mathbf{J}^{\ast}}-\Lambda\mathbf{J}^{\ast} \nonumber \\
& = \frac{\partial q(\mathbf{J}^{\ast}\vert\mathbf{J})}{\partial\mathbf{J}^{\ast}}-\Lambda\mathbf{J}^{\ast} \nonumber \\
& = R\sum_{t=1}^T \mathbf{E}\left[(\mathbf{y}_{t}-\tilde{\boldsymbol\eta})\boldsymbol\theta^{\prime}_t \right]_{\boldsymbol\theta_{1:T}\vert\mathbf{X}_{1:T}}-\Lambda\mathbf{J}^{\ast}
\end{align}
What we obtain will be used in the gradient ascent scheme to optimize the $\mathbf{J}$ matrix. This regularization has no impact on the optimization equations of $\lambda$ and $\mathbf{F}$.
Approximation of the marginal log-likelihood of the data:
\begin{equation}\label{eq:marglikelihood}
l(\mathbf{X}_{1:T} | \mathbf{J}_{1:T})= p(\mathbf{X}_{1}|\boldsymbol{\mu}, \boldsymbol{\Sigma},\mathbf{J}_{1})
\prod_{t=2}^T p(\mathbf{X}_{t}|\mathbf{X}_{1:t-1}, \lambda, \mathbf{J}_{1:t}).
\end{equation}
\begin{align}\label{marginal_likelihood}
l(\mathbf{J})\approx &
\sum_{t=1}^T (\boldsymbol\theta_{t\vert t}^{\prime}\mathbf{F}(\mathbf{X}_{t}, \mathbf{J})-\psi(\boldsymbol\theta_{t\vert t})) \nonumber \\
& -\frac{1}{2}{\rm tr}\left[ \sum_{t=1}^T \mathbf{W}^{-1}_{t\vert t-1}(\boldsymbol\theta_{t\vert t}-\boldsymbol\theta_{t\vert t-1})(\boldsymbol\theta_{t\vert t}-\boldsymbol\theta_{t\vert t-1})^{\prime}\right] \nonumber \\
& +\frac{1}{2}\sum_{t=1}^T (\log{\det{\mathbf{W}_{t\vert t}}}-\log{\det{\mathbf{W}_{t\vert t-1}}})
\end{align}
\end{comment}
\section*{Acknowledgment}
The authors thank Prof. Shinsuke Koyama for helpful discussions.
\bibliographystyle{IEEEtran}
\section{Introduction}
Small-Angle Scattering (SAS) investigates structures in samples that generally range from approximately 0.5\,nm to a few 100\,nm. This can be done both for isotropic samples such as blends and liquids and for anisotropic samples such as quasi-crystals. In order to obtain data about that size regime, the scattered intensity, mostly of x-rays or neutrons, is recorded at angles from close to zero, still in the region of the primary beam, up to about 10$^{\circ}$, depending on the wavelength of the incoming radiation.
The two primary sources for SAS experiments are x-ray (small-angle x-ray scattering, \emph{SAXS}) sources and neutron (small-angle neutron scattering, \emph{SANS}) sources, which shall be the two cases discussed here. Scattering with electrons or other particle waves is also possible, but not the main use case for the purpose of this manuscript.
For most small-angle scattering instruments, both SAXS and SANS, the science case covers the investigation of self-assembled polymeric and biological systems, multi-scale systems with a large size distribution of the contained particles, solutions of (nano-)particles and soft-matter systems, protein solutions, and materials science investigations. In the case of SANS this is augmented by the possibility of probing spin states and hence investigating the magnetic structure of the sample.
In the following sections the general setup of both SAXS and SANS instruments shall be discussed, as well as data acquisition and evaluation and the preparation of the sample and the experiment in general. This should provide sufficient information for planning and performing a SAS experiment and for evaluating the gathered data.
\subsection{General concept}
All SAS experiments, irrespective of the setup used in any specific case, rely on the concept of pinhole cameras to work. Fig.\ref{fig:pinhole} illustrates the geometric concept of the interplay between pinhole cameras and SAS.
\begin{figure}
\centering
\begin{minipage}[l]{0.75\textwidth}
\Large{a)}
\includegraphics[width=\textwidth]{Graphics/pinhole1.png}
\end{minipage}
\begin{minipage}[l]{0.75\textwidth}
\Large{b)}
\includegraphics[width=\textwidth]{Graphics/pinhole2.png}
\end{minipage}
\caption{a) Sketch of a pinhole camera and b) a simplified SAS instrument. The encoding of the real space information is in one case done inside the pinhole, in the other case the direction (and wavelength) encoded information is directly displayed on the screen (shaded area with waves). Positioning of the screen farther away improves the angular resolution and therefore the encoded information.}
\label{fig:pinhole}
\end{figure}
In the usual case, pinhole cameras map every point of the sample (object) to a discrete point on the screen (film or detector). The smaller the hole, the better the point-to-point mapping works, since in the ideal case only a single path between object and image is available. However, this comes with a penalty in intensity, since a smaller hole lets less light pass through. Due to the geometry, an image taken with a pinhole camera is always upside down. While the mathematical implications shall be discussed later on in this manuscript, at this point we only want to grasp the underlying concept. The information about the object is initially stored in real space: colors (wavelength) and locations are given as points on the surface of the object. When all beams have converged to the single point that is ideally the pinhole, the information is encoded in the direction of the path (or light beam) and the wavelength of the light. This is the change between direct and indirect space, locations and directions. When the light falls onto the screen the information is reversed again, to location and color of a spot on the screen, into direct space.
This concept is exploited by SAS. Since we are looking at very small objects (molecules and atoms), whose locations cannot be determined with the naked eye or even a microscope, the encoding of direction is achieved by increasing the distances and adjusting the size of the pinhole. However, instead of using the information after it has been transferred back to real space, this time the object is placed close to the pinhole. This way, the information about the location of atoms and molecules in the sample is encoded into direction, i.e. indirect space. Since there should be no directional information in the light before the pinhole, the beam needs to be collimated down to a small, point-like source with no angular divergence.
\section{SAXS instruments}
\index{SAXS}
In general there are two classes of SAXS instruments. One is the laboratory-type setup that can be installed in a single laboratory with a conventional x-ray tube, or more generally any metal-anode source, while the other is a large-scale facility setup at a synchrotron that can provide higher intensities. Since the setups of both instrument types differ, and the use cases are not fully identical, we shall discuss both separately. One thing that should be kept in mind is that the fundamental principle is identical, i.e. any experiment that can be performed at a synchrotron can in principle also be performed at a laboratory SAXS setup and is only limited in intensity. This is important for the preparation of beamtimes at a synchrotron, which in general should be thoroughly prepared in order to fully exploit all capabilities offered there.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/SAXS-Setup.png}
\caption{Laboratory SAXS setup. The left box is a sketch of an x-ray tube; all of its components are in vacuum. The flight path is also usually evacuated. L1 and L2 are the collimation distance and sample--detector distance (SDD), respectively. In a laboratory setup these usually range from about 20\,cm up to 1--2\,m in modern instruments. The collimation blocks for L1 and L2 are usually set up in both x and y direction to constrict the flight path; widely used openings are around 1\,mm$\times$1\,mm or below. In some setups, a slit collimation instead of a point collimation is realized to increase the intensity.}
\label{fig:labSAXS}
\end{figure}
\subsection{Laboratory SAXS setup}
Over the years a wide range of specialized SAXS instruments has become commercially available. The oldest concepts date back to the early 20th century, right after the discovery of x-rays.\cite{guinier1964x} Most of them offer specific advantages in certain use cases, such as the measurement of isotropic samples in a Kratky Camera\cite{KratkyKamera}, or highly adaptable sample environments. Here we shall only concentrate on the basic principle of operation. A general sketch of a SAXS instrument is shown in Fig.\ref{fig:labSAXS}. The x-rays are produced in an x-ray tube and then collimated by a set of slits. Here the collimation as such is already sufficient to obtain a coherent beam, since most of the intensity of standard x-ray tubes (and essentially all metal target x-ray sources) is concentrated into the characteristic spectral lines of the target material (see Fig.\ref{fig:characteristicSpectrum}). Common materials for the target anode are copper and molybdenum, delivering wavelengths of the most intensive K-$\alpha$ lines of 1.54 \AA\, and 0.71 \AA\, respectively. Under the assumption of a usual characteristic spectrum for the anode material the x-ray tubes can be considered monochromatic sources.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/characteristicSpectrum.png}
\caption{Characteristic x-ray spectrum from a metal-anode x-ray tube. The high-energy cut-off wavelength is given for the case that a single electron, fully accelerated by the voltage in the x-ray tube, deposits all its kinetic energy in a single photon. In an optimal setup this distribution is very narrow. Then the $K_\alpha$ line fully dominates the spectrum and provides a clean wavelength for operating a SAXS instrument.}
\label{fig:characteristicSpectrum}
\end{figure}
In order to achieve spatial as well as wavelength coherence, most x-ray tubes work with a focused beam spot that is as small as technically feasible. This allows very narrow collimation slits, since narrowing a slit below the initial beam spot or the pixel size of the detector, whichever is smaller, does not further improve the coherence and therefore the signal-to-noise ratio. This however leads to a very high energy density, which is why some x-ray tube designs forgo a solid anode altogether and either opt for a rotating anode, where the energy of the beam spot is distributed over a larger surface, or a metal-jet anode, where the anode material is continuously refluxed and therefore cannot heat up beyond the point of deformation, which would defocus the beam.
Some performance figures of current laboratory SAXS setups are given in Tab.\ref{tab:labSAXSparams}. It is worth noting that with the latest generation of metal-jet anode setups, even laboratory instruments can achieve intensities comparable to what was achievable one or two decades ago at a world-class synchrotron. While this allows for faster measurements and smaller beams, it also means that beam damage to the sample has to be taken into account.
\begin{table}
\centering
\begin{tabular}{l|l}
Parameter & value \\
\hline
\hline
SDD & 0.8-4 m \\
\hline
Pixel resolution & 172$\times$172\,$\mu$m$^2$ \\
\hline
Flux & $10^7$ photons s$^{-1}$ \\
\hline
wavelength $\lambda$ & 1.35 \AA \\
\hline
Q-range & $4\cdot 10^{-3}$-$8\cdot 10^{-1}$ \AA$^{-1}$ \\
\end{tabular}
\caption{Performance parameters for state-of-the-art laboratory SAXS setups, in this case with a liquid-metal-jet anode at the GALAXI instrument.\cite{kentzinger2016galaxi}}
\label{tab:labSAXSparams}
\end{table}
\subsection{Synchrotron SAXS setups}
While the setup in general is similar to that of a laboratory instrument, there are some key differences between a synchrotron and a laboratory SAXS setup. Most of the differences are based on radio protection needs and are therefore immaterial to this description in terms of the SAXS measurement itself. The other main difference lies in the production of the x-rays. Current setups at synchrotrons use undulators to periodically accelerate charged particles (usually electrons/positrons) perpendicular to the direction of propagation of the particle beam. This creates a very brilliant, nearly perfectly monochromatic x-ray beam along the direction of the electron beam. The monochromaticity can be improved further by a monochromator crystal. Fig.\ref{fig:synchrotronSAXS} shows an example of a synchrotron SAXS setup. After that, the collimation is very similar to that of a laboratory SAXS setup, only the materials are chosen to be thicker in most cases to improve the absorption characteristics. Due to the monochromaticity, the brilliance, coherence and signal-to-noise ratio are significantly better than those of a laboratory SAXS setup, since there is no bremsstrahlung spectrum to contribute to the background. In terms of achievable wavelength there is no limitation to a specific K-$\alpha$ line of any particular material. Often, common wavelengths are chosen to better correspond to laboratory measurements on identical samples. One option that is also available at some synchrotrons is the tunability of the wavelength, in order to measure resonance effects in the atomic structure of the sample (anomalous SAXS, \emph{ASAXS})\cite{haubold1999characterization} or to better choose the accessible Q-space. Tab.\ref{tab:synchrotronSAXS} summarizes some of the performance figures of current synchrotron SAXS setups. For most synchrotron SAXS beamlines beam damage, especially for organic samples, is an issue and has to be taken into account when planning an experiment.
\begin{table}
\centering
\begin{tabular}{l|l}
Parameter & value \\
\hline
\hline
SDD & 0.8-4 m \\
\hline
Pixel resolution & 172$\times$172\,$\mu$m$^2$ \\
\hline
Flux & $10^{18}$ photons s$^{-1}$ \\
\hline
wavelength $\lambda$ & 0.54 - 1.38 \AA \\
\end{tabular}
\caption{Performance parameters for a state-of-the-art synchrotron SAXS beamline, here P03 at DESY.\cite{roth2011situ}}
\label{tab:synchrotronSAXS}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/SynchrotronSAXS-Setup.png}
\caption{Synchrotron SAXS setup. Here the radiation is produced in the storage ring of a synchrotron. In earlier designs, the x-rays were produced at the bending magnets in the ring (kinks in the ring here). This however led to a wide spread of the produced wavelengths and a high angular divergence of the radiation. An undulator made from a magnet array as depicted here produces a narrow distribution of wavelength and angular divergence. The rest of the setup is comparable to the laboratory setup, albeit the intensity of the radiation is orders of magnitude higher, which allows for finer collimation slits and longer collimation distances and SDDs.}
\label{fig:synchrotronSAXS}
\end{figure}
\section{SANS setups}
\index{SANS}
In contrast to x-rays, sufficient numbers of free neutrons can only be obtained by nuclear processes, such as fission, fusion and spallation. As large-scale facilities are needed to drive these processes at a rate suitable for scattering experiments, neutron scattering today can only be performed at fission reactor sources and spallation sources. This also leads to larger efforts in terms of biological shielding.
It is an inherent feature of those reactions that the reaction products show a wide distribution of energies, with peak energies of up to 3\,MeV kinetic energy per neutron. This corresponds to de Broglie wavelengths in the femtometer region, which is unsuitable for SANS experiments. Thus, in order to obtain a coherent beam it is not only necessary to collimate the neutrons but also to moderate and monochromatize them. Both processes result in losses in usable flux, since the phase space of neutrons cannot be compressed by lenses, as is the case for photons.
The moderation is performed by collision processes in a moderator medium, a material held at temperatures around 25\,K or below; the resulting neutron spectrum is a Maxwell-Boltzmann spectrum of the corresponding temperature. This results in peak wavelengths around 4\,\AA\, for the neutron beam. Neutron scattering instruments can be run both in time-of-flight mode and in monochromatic mode.
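The effect of moderation on the neutron wavelength can be sketched with a short numerical estimate. Treating the moderated neutrons as having kinetic energy $E = k_B T$ is a simplification (the real spectrum is a Maxwell-Boltzmann distribution), so the 25\,K value is only indicative of the order of magnitude:

```python
# Order-of-magnitude check of neutron de Broglie wavelengths before and
# after moderation. E = k_B * T for the moderated case is an assumption;
# the real spectrum is Maxwell-Boltzmann.
import math

h = 6.62607015e-34       # Planck constant, J s
m_n = 1.67492749804e-27  # neutron mass, kg
k_B = 1.380649e-23       # Boltzmann constant, J/K
eV = 1.602176634e-19     # J per eV

def de_broglie(E_joule):
    """Wavelength (m) of a neutron with kinetic energy E (non-relativistic)."""
    return h / math.sqrt(2.0 * m_n * E_joule)

lam_fission = de_broglie(3e6 * eV)   # 3 MeV fission neutron
lam_cold = de_broglie(k_B * 25.0)    # neutron thermalized at ~25 K

print(f"3 MeV neutron: {lam_fission*1e15:.1f} fm")   # femtometer regime
print(f"25 K neutron:  {lam_cold*1e10:.1f} Angstrom")
```

The fission neutron comes out in the femtometer regime, while the cold neutron reaches the Angstrom regime used for SANS.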
A schematic of a SANS instrument is shown in Fig.\ref{fig:SANSsetup}. Both a monochromator setup and a chopper setup for time-of-flight operation are presented. At a continuous source the neutron flux has to be interrupted to provide the timing for time-of-flight mode, while at pulsed sources the interruption of the neutron flux is inherent.
\begin{figure}
\centering
\begin{minipage}[l]{0.75\textwidth}
\Large{a)}
\includegraphics[width=\textwidth]{Graphics/ContinuousSANS-Setup.png}
\end{minipage}
\begin{minipage}[l]{0.75\textwidth}
\Large{b)}
\includegraphics[width=\textwidth]{Graphics/TOFSANS-Setup.png}
\end{minipage}
\caption{a) Continuous source SANS setup and b) pulsed source SANS setup. In both cases the neutron source (red) creates hot neutrons of short wavelength. A cold source vessel (blue, usually filled with cold H$_2$ or D$_2$) moderates the neutrons down to slower speeds, i.e. longer wavelengths. In both cases the collimation distance and SDD are widely adjustable for most instruments, with lengths between 1\,m and 30\,m. In a SANS instrument at a continuous source a monochromator (a turbine with slightly inclined channels) selects a certain wavelength (usually between 3 and 15\,\AA) and afterwards the setup is very much like the one shown for SAXS setups, except that the whole instrument is larger. In case of a pulsed source, choppers (rotating discs with openings transparent to neutrons) define a start and an end time for each pulse. Since neutrons, in contrast to x-rays, are massive particles, their wavelength determines their speed. Thus, the wavelength is determined by measuring the time of arrival at the detector for each neutron. For an optimized neutron transport all components are usually evacuated.}
\label{fig:SANSsetup}
\end{figure}
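The time-of-flight wavelength determination described in the caption can be sketched in a few lines. The de Broglie relation converts the speed obtained from path length and flight time into a wavelength; the 20\,m flight path and 20\,ms arrival time used here are made-up illustrative numbers, not parameters of any specific instrument:

```python
# Sketch of time-of-flight wavelength determination at a pulsed SANS
# instrument: flight time over a known path gives the speed, and the
# de Broglie relation converts speed to wavelength.
h = 6.62607015e-34       # Planck constant, J s
m_n = 1.67492749804e-27  # neutron mass, kg

def tof_wavelength(L_m, t_s):
    """Neutron wavelength (m) from flight path L and flight time t."""
    v = L_m / t_s         # neutron speed, m/s
    return h / (m_n * v)  # de Broglie wavelength

lam = tof_wavelength(L_m=20.0, t_s=0.020)   # 20 m travelled in 20 ms
print(f"wavelength: {lam*1e10:.2f} Angstrom")
```

A neutron arriving later over the same path is slower and therefore has a longer wavelength, which is how each detected event is assigned its wavelength.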
As a consequence of this moderation and collimation process, neutron beams always show a small but finite distribution of wavelengths and therefore a lower signal-to-noise level than x-ray sources. Spin and isotopic incoherence add to that. Beam damage, however, is nigh on impossible with the weakly interacting neutrons.
\newpage
\section{Indirect space and Small-Angle Scattering}
The need for the resolution of small angles can be directly derived from Bragg's equation
\begin{equation}
n \lambda = 2d\cdot\sin\theta
\label{eq:bragg}
\end{equation}
with $n$ being the order of the diffraction, $d$ the distance between two scatterers, $\theta$ the scattering half-angle and $\lambda$ the wavelength of the incoming beam. In order to get interference, the incoming beam has to have a wavelength that corresponds to the investigated size regime, which for both x-rays and neutrons is on the order of a few Angstrom. Using Bragg's equation with $n=1$, $d=50$\,\AA\, and $\lambda = 1$\,\AA\, we arrive at $0.01 = \sin\theta \approx \theta$. Thus, the largest structures to be resolved are determined by the smallest achievable angle.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/Q-construction.png}
\caption{Construction of $\vec{Q}$. The incoming and final wavevectors $\vec{k}_i$ and $\vec{k}_f$ define both the scattering vector $\vec{Q}$ as well as the path length difference $\delta=\Delta s_1 -\Delta s_2$. Here it is important to note that the selection of the center of origin is arbitrary and thus can be chosen to be at the center of the construction. The calculation of the length of $\vec{Q}$ is then given by Eq.\ref{eq:Q}.}
\label{fig:Qconstruction}
\end{figure}
In order to allow for a setup- and wavelength-independent data evaluation, the data is recorded in terms of Q, i.e. in indirect space. The construction of that Q-space from two scattering points is shown in Fig.\ref{fig:Qconstruction}. From that, the magnitude of $\vec{Q}$, which here for simplicity is written $|\vec{Q} |= Q$, can be derived as
\begin{equation}
Q=\frac{4\pi}{\lambda}\sin\theta.
\label{eq:Q}
\end{equation}
Even though $\vec{Q}$ is strictly speaking a vector, for most small-angle problems only the absolute value $Q$ is of interest, hence this simplification is reasonable. This is due to the isotropic scattering pattern of the majority of small-angle scattering data. Another simplification that is often used is the small-angle approximation $\sin\theta \approx \theta$. Combining Eqs.\ref{eq:bragg} and \ref{eq:Q} also delivers a useful expression for the approximation of inter-particle distances or correlation lengths
\begin{equation}
d = \frac{2\pi}{Q}.
\label{eq:distances}
\end{equation}
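The worked example above can be cross-checked numerically. This minimal sketch uses the same illustrative values ($\lambda = 1$\,\AA, $d = 50$\,\AA) and recovers the spacing again via $d = 2\pi/Q$:

```python
# Numerical cross-check of the Bragg / Q-space relations with the values
# from the text: lambda = 1 A, d = 50 A.
import math

lam = 1.0   # wavelength in Angstrom
d = 50.0    # spacing in Angstrom

theta = math.asin(lam / (2.0 * d))        # Bragg angle for n = 1
Q = 4.0 * math.pi / lam * math.sin(theta) # Q = (4*pi/lambda) sin(theta)
d_back = 2.0 * math.pi / Q                # recover the spacing, d = 2*pi/Q

print(f"theta  = {theta:.5f} rad (~{math.degrees(theta):.2f} deg)")
print(f"Q      = {Q:.5f} 1/Angstrom")
print(f"d_back = {d_back:.1f} Angstrom")
```

As expected, the angle comes out at about 0.01\,rad, confirming that resolving 50\,\AA\ structures with a 1\,\AA\ beam requires sub-degree angular resolution.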
\section{Resolution limits}
SAS is working based on the interference of coherent radiation. That in itself imposes some limitations on the samples and properties that can be investigated.
In terms of size, the object under observation has to be of the same order of magnitude as the wavelength of the incoming radiation, analogous to light interference at a double slit. Concerning the analysis in indirect space, the limited size of the detector and the coherence volume of the sample also have to be taken into account.
The second limitation that should always be considered is that only elastic scattering renders useful results, i.e. any change in speed or wavelength of the radiation during the scattering process will render the results unusable.
Finally, multiple scattering is usually not considered in the evaluation of SAS data. This means that mostly thin samples, or those with a high transmission (usually 90\% or higher), can be investigated.
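The transmission guideline translates directly into a thickness limit via Beer-Lambert attenuation, $T = \exp(-\mu t)$. As a rough sketch, assume a linear attenuation coefficient of $\mu \approx 10$\,cm$^{-1}$, an order-of-magnitude value for water at Cu K-$\alpha$ energies (this $\mu$ is an assumption for illustration, not a value from the text):

```python
# Rough estimate of the sample thickness compatible with the ~90%
# transmission guideline via Beer-Lambert attenuation T = exp(-mu * t).
# mu ~ 10 1/cm (water at ~8 keV) is an assumed order-of-magnitude value.
import math

mu = 10.0        # linear attenuation coefficient, 1/cm (assumed)
T_target = 0.90  # desired transmission

t_max = -math.log(T_target) / mu   # maximum sample thickness, cm
print(f"max thickness for T = 90%: {t_max*10:.3f} mm")
```

Under these assumptions the sample should stay around a tenth of a millimeter thick, which is why laboratory SAXS samples of aqueous solutions are typically measured in thin capillaries.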
\section{Fourier Transform and Phase Problem}
The consideration of only two scattering centers in the last section needs to be extended to an arrangement of many scattering centers for the evaluation of macroscopic samples, where each atom or molecule can contribute to the scattered intensity. Since the incoming wave at location $\vec{x}$ can be considered a plane wave, it can be described by
\begin{equation}
A(\vec{x},t) = A_0 \exp \left(i2\pi(\nu t-\frac{\vec{x}}{\lambda})\right)
\label{eq:amp}
\end{equation}
with $A$ being the amplitude as a function of position $\vec{x}$ and time $t$, $A_0$ the modulus of the amplitude, $\nu$ the frequency and $\lambda$ the wavelength.
In order to calculate the correct phase shift $\Delta\phi$ after scattering from two centers as in Fig.\ref{fig:Qconstruction}, we need to know the path length difference $\delta$ between the two waves. This then yields
\begin{equation}
\Delta\phi = \frac{2\pi\delta}{\lambda} = \vec{Q}\vec{r},
\end{equation}
which is equivalent to the expression $2\pi x/\lambda$ in Eq.\ref{eq:amp}. Here also the relation $\vec{Q}=\vec{k}_f-\vec{k}_i$ was used. This then leaves us with the spherical wave scattered by the first scattering center
\begin{equation}
A_1(\vec{x},t)=A_0 b \exp(i2\pi(\nu t -\vec{x}/\lambda))
\end{equation}
and the corresponding scattered wave from the second scattering center
\begin{eqnarray}
A_2(\vec{x},t) &=& A_1(\vec{x},t)\exp\left( i \Delta\phi\right)\\
&=& A_0 b \exp(i2\pi(\nu t -\vec{x}/\lambda)) \exp\left(i\vec{Q}\vec{r}\right)
\end{eqnarray}
This can then be combined into the full description of the amplitude with both contributions to
\begin{eqnarray}
A(\vec{x},t) &=&A_1(\vec{x},t)+A_2(\vec{x},t)\\
&=& A_0 b \exp(i2\pi(\nu t -\vec{x}/\lambda))\left(1+\exp\left(i\vec{Q}\vec{r}\right)\right)
\label{eq:fullAmp}
\end{eqnarray}
Here an arbitrary scattering efficiency $b$ for each scattering center has been introduced, which will later be discussed for both x-rays and neutrons.
Since only intensity can be observed at the detector, we need to consider the square, calculated with the complex conjugate of the expression itself
\begin{eqnarray}
I(\vec{Q}) &=& A(\vec{x},t)A^*(\vec{x},t)\\
&=&A_0^2 b^2(1+\exp\left( i\vec{Q}\vec{r}\right))(1+\exp\left( -i\vec{Q}\vec{r}\right)).
\end{eqnarray}
Here the time and absolute location dependencies in Eq.\ref{eq:fullAmp} have cancelled each other out, so we can neglect them and are left with a function that solely depends on the scattering vector $\vec{Q}$ and the location of the particles $\vec{r}$. Neglecting those dependencies allows us to generalize Eq. \ref{eq:fullAmp} to the case of $N$ identical scattering centers with
\begin{equation}
A(\vec{Q}) = A_0 b \sum_{i=1} ^{N} \exp(i\vec{Q}\vec{r}_i).
\label{eq:ampSLD}
\end{equation}
The $\vec{r}_i$ here signify the locations of all scattering centers in the sample, relative either to the first scattering center or to any arbitrarily chosen origin; all choices are mathematically equivalent. Replacing the sum by a weighted integral also allows the calculation for the case of a (quasi)continuous sample with number density $\rho(\vec{r})$:
\begin{equation}
A(\vec{Q}) = A_0 b \int _V \rho(\vec{r}) \exp\left(i\vec{Q}\vec{r}\right) d\vec{r}
\end{equation}
This is the Fourier transform of the number density of scattering centers with scattering efficiency $b$; it can also be generalized to spatially varying scattering efficiencies.
However, since the phase information is lost when the intensity is obtained as the absolute square of the amplitudes, there is no direct analytic way of performing an inverse Fourier transform. This is why this is called the phase problem.\index{Phase problem} Also, as described above, in a wide range of cases it is sufficient to investigate the modulus of $\vec{Q}$, neglecting its vector nature.
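Both the two-center intensity and the phase problem can be illustrated numerically. The sketch below (with $A_0 b$ set to 1 and arbitrary illustrative positions) verifies that $|1+\exp(iQr)|^2 = 2(1+\cos Qr)$ and that a mirrored arrangement of scattering centers, whose amplitude is the complex conjugate of the original, yields exactly the same intensity:

```python
# Numerical illustration of the phase problem: the two-centre intensity
# matches 2(1 + cos(Qr)), and a mirrored arrangement gives the same
# intensity as the original, since only the phase of A(Q) differs.
import cmath
import math

def amplitude(Q, positions):
    """A(Q) = sum_i exp(i Q r_i), with the prefactor A0*b set to 1."""
    return sum(cmath.exp(1j * Q * r) for r in positions)

def intensity(Q, positions):
    return abs(amplitude(Q, positions)) ** 2

# two-centre check against the closed form 2(1 + cos(Qr))
r = 1.7
for Q in (0.3, 1.0, 2.5):
    assert abs(intensity(Q, [0.0, r]) - 2.0 * (1.0 + math.cos(Q * r))) < 1e-12

# phase problem: original and mirrored arrangements are indistinguishable
positions = [0.0, 1.3, 2.1, 5.7]   # arbitrary 1D positions
mirrored = [-x for x in positions]
for Q in (0.5, 1.1, 2.7):
    print(f"Q={Q}: I_orig={intensity(Q, positions):.6f}, "
          f"I_mirror={intensity(Q, mirrored):.6f}")
```

Since the intensity cannot distinguish the two arrangements, inverting measured data always requires model assumptions rather than a direct inverse Fourier transform.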
\section{Scattering Efficiency}
Since the physical scattering event is very dissimilar for x-rays and neutrons, they shall be discussed separately here. However, it should be noted that the nature of the scattering process does not affect the general method of data evaluation. Only in very specific cases, such as contrast matching or polarized scattering, is there any discernible difference.
\subsection{Scattering with x-rays}
X-rays, as photons, interact with the sample via the electromagnetic interaction. For the purpose of this manuscript it is sufficient to note that they practically only interact with the electron shell around the atoms and thus effectively map the electron density within the sample. Interactions with the nucleus would only occur at very high energies, which are not usually used in elastic scattering. In a rough approximation the strength of the electromagnetic interaction scales with $Z^2$, meaning that heavy elements, such as a wide range of common metals, scatter considerably more strongly than light ones, like hydrocarbon compounds. For element analyses there is also the possibility of resonance scattering, where the chosen x-ray energies are close to absorption edges of specific elements (anomalous SAXS, \emph{ASAXS}).\cite{haubold1999characterization}
Based on Thomson scattering the scattered intensity at angle $2\theta$ is
\begin{eqnarray}
I(2\theta)&=&I_0 \left(\frac{e^2}{mc^2}\right)^2\frac{1+\cos^2 2\theta}{2}\\
\frac{I}{I_0}&=&\left(\frac{d\sigma}{d\Omega}\right)_e = r^2 _e \frac{1+\cos^2 2\theta}{2}
\label{eq:Thomson}
\end{eqnarray}
Here we also introduced the differential scattering cross section $\frac{d\sigma}{d\Omega}$ for a single electron and $r_e$ being the radius of an electron. This means that the total probability for a scattering event to occur into a solid angle $d\Omega$ is exactly that value for a single, isolated electron. This probability is in units of an area. Thus, the scattering length for a single electron $b_e$ is defined as the square root of that:
\begin{equation}
b_e=r_e\sqrt{\frac{1+\cos ^2 2\theta}{2}}
\label{eq:elektronSLD}
\end{equation}
With those previous equations it is again important to note that small-angle scattering is mainly concerned with small angles, so that $\cos 2\theta \approx 1$ is a very good approximation. This is also, together with backscattering, the location of the highest intensity and of negligible polarization effects. The numeric values for the constants used here are $r_e=2.818\times 10^{-15}$\,m and, after integration of Eq.\ref{eq:Thomson} over the full solid angle, a total scattering cross section for a single electron of $\sigma_e=\frac{8\pi}{3}r_e^2=6.65\times10^{-29} \mbox{ m}^2 = 0.665$\,barn. For an isotropic scatterer, i.e. neglecting the angular factor, cross section and scattering length are related by $\sigma = 4\pi b^2$.
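The Thomson numbers quoted above can be verified by a brute-force solid-angle integration of the angular factor $(1+\cos^2 2\theta)/2$, using a simple midpoint rule:

```python
# Numerical check of the Thomson cross section: integrating
# r_e^2 * (1 + cos^2(x))/2 over the full solid angle (x being the
# scattering angle 2*theta) yields sigma = (8*pi/3) r_e^2 ~ 0.665 barn.
import math

r_e = 2.818e-15  # classical electron radius, m

n = 100000       # midpoint-rule steps over the polar angle
sigma = 0.0
for k in range(n):
    x = math.pi * (k + 0.5) / n                      # polar angle sample
    dOmega = 2.0 * math.pi * math.sin(x) * (math.pi / n)  # solid-angle element
    sigma += r_e**2 * (1.0 + math.cos(x) ** 2) / 2.0 * dOmega

print(f"sigma = {sigma:.4e} m^2 = {sigma/1e-28:.3f} barn")
```

The numeric result reproduces the quoted $0.665$\,barn, since the angular integral evaluates to $8\pi/3$.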
Since usually the goal is to find the distribution of scattering centers in a volume, the density of scattering length per unit volume is of interest. This is the scattering length density (SLD)
\begin{equation}
\rho (\vec{r}) = \frac{b_e (\vec{r})}{V}.
\label{eq:SLD}
\end{equation}
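As a worked example of this definition, the x-ray SLD of water can be estimated from its electron count and molecular volume. The molecular volume of about 29.9\,\AA$^3$ (from the bulk density of water) is an input assumption here, and $b_e$ is approximated by $r_e$, which is valid at small angles:

```python
# Sketch of the SLD definition for x-rays: SLD of water estimated from
# the electron count of H2O and its molecular volume (~29.9 A^3, an
# assumed value from the bulk density). b_e is approximated by r_e.
r_e = 2.818e-15          # classical electron radius ~ b_e at small angles, m
electrons_per_h2o = 10   # 2 x 1 (H) + 8 (O)
V_h2o = 29.9e-30         # molecular volume of water, m^3 (assumed)

sld = electrons_per_h2o * r_e / V_h2o   # scattering length density, 1/m^2
sld_A = sld * 1e-20                     # converted to 1/Angstrom^2
print(f"SLD(H2O, x-rays) ~ {sld:.3e} m^-2 = {sld_A*1e6:.2f}e-6 A^-2")
```

Under these assumptions the result lands near $9.4\times10^{-6}$\,\AA$^{-2}$, the order of magnitude typically used for aqueous solvents in SAXS modelling.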
A very common way of expressing scattering efficiency is the use of electron units. As can be seen in Eq.\ref{eq:ampSLD}, apart from the Fourier transform of the local density the scattering amplitude is only determined by the scattering length of a single electron. This means the scattering intensity in electron units can be expressed as
\begin{equation}
I_{eu}(Q)=\frac{I(Q)}{I_0b_e^2}
\end{equation}
This means, with appropriate calibration, that an intensity of $I_{eu} = 200$ at a certain Q indicates that the size scale corresponding to that Q vector contains 200 electrons per unit volume.
Since photons interact mainly with the electron shell, there is also an angle dependency accounting for the time-averaged location probability of the electrons in the shell, which may or may not be spherical, depending on the electronic configuration of the specific atom. This leads to a scattering length of the form $b_e(Q)=b_e f_s(Q)$, with $f_s$ being the atomic scattering factor of the specific element. This is important to take note of when there is a structure or form factor on the same size scale as a single atomic distance, $Q=\frac{2\pi}{1.54\mbox{ \scriptsize{\AA}}} = 4.08\mbox{ \AA}^{-1}$. This is usually not in the regime of interest for small-angle scattering and will mostly vanish in the incoherent background.
Another incoherent background effect is Compton scattering, where inelastic processes change the wavelength during the scattering process. This is, however, again strongly suppressed at small angles. The wavelength shift caused by Compton scattering follows the expression
\begin{equation}
\Delta\lambda = \frac{h}{mc}2 \sin^2\theta
\end{equation}
The prefactor is $\frac{h}{mc} = 0.02426$\,\AA. It is also obvious that at large angles, $2\theta = 180^{\circ}$, the energy transfer is maximal. Since we are always investigating angles close to $\theta = 0$, the wavelength shift, and hence this incoherent background, is negligible compared to other experimental factors, such as scattering from slits and windows.
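The suppression at small angles is easy to quantify with the expression above, using the prefactor quoted in the text:

```python
# Compton shift Delta-lambda = (h/mc) * 2 sin^2(theta), evaluated at
# backscattering and at a typical small scattering angle.
import math

prefactor = 0.02426  # h/(m c) in Angstrom, as given in the text

def compton_shift(two_theta_deg):
    """Wavelength shift (Angstrom) for a given full scattering angle 2theta."""
    theta = math.radians(two_theta_deg) / 2.0
    return prefactor * 2.0 * math.sin(theta) ** 2

print(f"2theta = 180 deg: {compton_shift(180.0):.5f} A")
print(f"2theta =   1 deg: {compton_shift(1.0):.2e} A")
```

At backscattering the shift is about 0.049\,\AA, while at a 1$^{\circ}$ scattering angle it is below $10^{-5}$\,\AA, i.e. entirely negligible against a 1.54\,\AA\ Cu K-$\alpha$ wavelength.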
\subsection{Scattering with neutrons}
Neutrons interact directly with the nuclei, which results in the atomic form factor always being spherically symmetric (billiard balls) and makes neutrons sensitive to different isotopes and to spin-spin coupling. In contrast to x-rays, there is no simple expression for the scattering strength as a function of isotope or atomic number. Directly neighboring elements and isotopes may have vastly different cross sections.
Based on that we have to rely on tabulated values for the cross sections and scattering lengths of different elements and isotopes (see Tab.\ref{tab:neutronSLD}) and can only write the cross section and scattering length relation as
\begin{table}
\centering
\begin{tabular}{c|c}
Element & scattering length $b_{coh}$ / $10^{-14}$\,m \\
\hline
\hline
$^1$H & -0.374 \\
\hline
$^2$H (D) & 0.667 \\
\hline
C & 0.665 \\
\hline
N & 0.936 \\
\hline
O & 0.580 \\
\hline
Si & 0.415 \\
\hline
Br & 0.680\\
\end{tabular}
\caption{Coherent scattering length of several elements and isotopes.}
\label{tab:neutronSLD}
\end{table}
\begin{equation}
\frac{d\sigma}{d\Omega}=b^2
\end{equation}
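The tabulated scattering lengths above already allow a useful calculation: the neutron SLDs of light and heavy water. The molecular volume of about 29.9\,\AA$^3$, assumed identical for both for simplicity, is an input assumption here:

```python
# Neutron SLDs of H2O and D2O from the tabulated coherent scattering
# lengths, assuming a molecular volume of ~29.9 A^3 for both. The sign
# change between the two is the basis of contrast variation by H/D
# substitution.
b_H = -0.374e-14   # coherent scattering length of 1H, m
b_D = 0.667e-14    # coherent scattering length of D, m
b_O = 0.580e-14    # coherent scattering length of O, m
V = 29.9e-30       # molecular volume of water, m^3 (assumed)

sld_h2o = (2 * b_H + b_O) / V
sld_d2o = (2 * b_D + b_O) / V
print(f"SLD(H2O) = {sld_h2o:.2e} m^-2")   # negative
print(f"SLD(D2O) = {sld_d2o:.2e} m^-2")   # positive
```

Since the H$_2$O value is negative and the D$_2$O value positive, mixing the two solvents allows the SLD to be tuned continuously, so that a chosen component of the sample can be matched and made invisible.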
Note that only coherent scattering can form interference patterns, i.e. the nature of the radiation must not change during the scattering process. However, the neutron can change its spin orientation through spin-spin coupling during the scattering process, depending on the spin orientation of the sample nuclei. These are completely statistical processes.
As neutrons are fermions with spin $1/2$, the possible combined spin states after a scattering process with a nucleus of spin $i$ are $i+1/2$ and $i-1/2$, and the associated numbers of spin states are
\begin{eqnarray}
\mbox{number of states } i+1/2&:&2(i+1/2)+1=2i+2\\
\mbox{number of states } i-1/2&:&2(i-1/2)+1=2i\\
\mbox{total number of states }&:&4i+2.
\label{eq:spinStates}
\end{eqnarray}
This immediately shows that only in the case $i=0$ are there exactly two states. Since it is impossible to know the spin state of non-zero-spin nuclei under ambient conditions, the differential cross section becomes a sum over pairs of scatterers of the form:
\begin{equation}
\frac{d\sigma}{d\Omega} = \sum _{i,j} \left\langle b_i b_j\right\rangle \exp\left(-i\vec{Q}\cdot(\vec{r}_i-\vec{r}_j)\right)
\end{equation}
Here $\left\langle b_i b_j\right\rangle$ is the expectation value of the scattering lengths for each possible $b_ib_j$ combination, given isotope and spin variability. There is only one coherent outcome, namely $i = j$, which results in
\begin{equation}
\left\langle b_i b_i\right\rangle = \left\langle b_i ^2\right\rangle = \left\langle b ^2\right\rangle.
\end{equation}
All other cases, $i \neq j$, involve uncorrelated scattering lengths and therefore
\begin{equation}
\left\langle b_i b_j\right\rangle_{i\neq j} = \left\langle b_i \right\rangle \left\langle b_j\right\rangle = \left\langle b\right\rangle^2.
\end{equation}
This then results in
\begin{equation}
\frac{d\sigma}{d\Omega} = \langle b\rangle^2 \cdot \sum_{i,j} \exp\left(-i\vec{Q}\cdot(\vec{r}_i-\vec{r}_j)\right) +N(\langle b^2\rangle-\langle b\rangle^2).
\end{equation}
Here $\langle b\rangle = b_{coh}$ signifies the coherent scattering length, since the associated term contains the information about the structure of the sample via $\vec{r}_i-\vec{r}_j$, and $\sqrt{\langle b^2\rangle-\langle b\rangle^2} = b_{inc}$ is the incoherent scattering length, which does not carry any information about the sample structure. The incoherent contribution cannot be suppressed instrumentally; therefore isotopes with a low incoherent scattering length are often chosen in neutron scattering to suppress the incoherent background. Both coherent and incoherent scattering lengths can separately be used together with Eq.\ref{eq:SLD} to obtain the corresponding scattering length densities.
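The spin averaging can be made concrete for $^1$H ($i=1/2$). The sketch below uses the commonly tabulated spin-dependent scattering lengths $b_+ \approx 10.85$\,fm and $b_- \approx -47.50$\,fm (values quoted here for illustration) together with the statistical weights from the state counting above, i.e. $(2i+2)$ states for $i+1/2$ and $2i$ states for $i-1/2$ out of $4i+2$:

```python
import math

# Spin-dependent scattering lengths for 1H (nuclear spin i = 1/2), in fm.
# These b+/b- values are tabulated quantities, quoted here for illustration.
b_plus, b_minus = 10.85, -47.50
i = 0.5

# Statistical weights from the state counting: (2i+2) and 2i out of 4i+2.
w_plus = (2 * i + 2) / (4 * i + 2)    # 3/4 for hydrogen
w_minus = (2 * i) / (4 * i + 2)       # 1/4 for hydrogen

b_mean = w_plus * b_plus + w_minus * b_minus            # <b>   -> b_coh
b_sq_mean = w_plus * b_plus**2 + w_minus * b_minus**2   # <b^2>
b_inc = math.sqrt(b_sq_mean - b_mean**2)

print(f"b_coh = {b_mean:.2f} fm")   # ~ -3.74 fm, matching the table above
print(f"b_inc = {b_inc:.2f} fm")    # ~ 25.3 fm: a large incoherent background
```

The resulting $b_{coh} \approx -3.74$\,fm agrees with the value in Tab.\ref{tab:neutronSLD}, and the large $b_{inc}$ is the reason why hydrogen-rich samples are often deuterated in neutron experiments.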
\subsection{Scattering Cross Section and Contrast Matching}
As described above, the cross section of atoms scales as $Z^2$ in the case of x-rays, whereas the cross-section values for neutrons have to be tabulated, since no simple algebraic expression exists. The resulting differences in cross section are illustrated in Fig.\ref{fig:CrossSection}. Because different isotopes have very different cross sections for neutron scattering, it is in some cases possible to replace certain isotopes in order to arrive at the desired contrast conditions.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/coherentCrossSections.png}
\caption{Coherent cross-sections for selected elements for x-rays (top) and neutrons (bottom). The coherent scattering cross section scales linearly with the diameter of the circles. It is apparent, that the $Z^2$ dependency strongly emphasizes heavy elements in x-ray scattering, whereas for neutrons even single isotopes can be distinguished. However, for neutrons there is no simple analytic expression for the scattering cross-sections.}
\label{fig:CrossSection}
\end{figure}
One of the most important examples of this technique, called contrast matching, is replacing hydrogen by deuterium. This leaves the chemical composition of the sample essentially unchanged, and hydrogen is extremely abundant in most organic compounds. The concept can in some cases be extended in the spirit of the Babinet principle in order to suppress background scattering, since it is far preferable to have a weakly scattering solvent and a strongly scattering solute rather than vice versa. A sketch of the concept is shown in Fig.\ref{fig:ContrastMatchConcept}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/ContrastMatchingDrip.png}
\caption{Illustration of the concept of contrast matching. In step \ding{172} there are micelles with a corona (pink) dissolved in a solution (blue). The scattering length density of the corona is between the SLD of the solvent and its deuterated counterpart (red). In step \ding{173} the deuterated solvent is added to the solution, which changes the contrast conditions. Finally, in step \ding{174} a sufficient amount of deuterated solvent has been added, so the contrast between the corona and the solvent has vanished. Now the micellar cores can be measured directly.}
\label{fig:ContrastMatchConcept}
\end{figure}
This method allows highlighting otherwise hidden features of the sample or suppressing dominant scattering in order to better determine a structure with a lower volume fraction and therefore less scattering contribution. Examples for that application are highlighting the shell of a sphere, by matching the core or vice versa. Also for protein samples certain structures can be matched, so that only distinct features are visible.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/contrastMatchingSLD.png}
\caption{Semi-analytic way to determine the necessary solvent deuteration for contrast matching. The concentration at the matching point, where the solvent has the same SLD as the polymer particles, is determined by the crossing of the mixed D$_2$O/H$_2$O SLD line and the SLD line of the respective polymer. For the calculation the scattering length density of water is taken as -0.6$\cdot10^{-6}$\,\AA$^{-2}$ and the SLD of heavy water as 6.3$\cdot10^{-6}$\,\AA$^{-2}$.}
\label{fig:ContrastMatchSLD}
\end{figure}
To apply contrast matching, mostly the solvent is changed; in some rare cases the polymer or other sample is instead synthesized with a different isotope composition. Here we show how to find the correct H/D fraction of the solvent. Fig.\ref{fig:ContrastMatchSLD} gives an example of how to find this fraction in a semi-analytic way. The underlying principle is expressed by
\begin{eqnarray}
(H+D)\times SLD_{sample} &=& SLD_{H_2O}\times H + SLD_{D_2O}\times D\\
H&\equiv & 1\\
D&=&\frac{SLD_{sample}-SLD_{H_2O}}{SLD_{D_2O}-SLD_{sample}}.
\end{eqnarray}
In this way the volume of heavy water for each unit volume of protonated (light) water can be calculated. It is also apparent from this calculation that only samples with a scattering length density between that of water and that of heavy water can be matched, and that the equations above only cover the non-trivial cases, where neither pure water nor pure heavy water is suitable. The actual volume fractions are then $V_{\mbox{water}}=\frac{H}{H+D}$ and $V_{\mbox{heavy water}}=\frac{D}{H+D}$.
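As a numerical sketch, the matching D$_2$O volume fraction $\phi$ can be obtained directly from the volume-weighted mixture condition $\phi\, SLD_{D_2O}+(1-\phi)\, SLD_{H_2O}=SLD_{sample}$, using the water SLDs quoted in Fig.\ref{fig:ContrastMatchSLD}; the polymer SLD used below is a made-up illustrative value:

```python
def d2o_fraction(sld_sample, sld_h2o=-0.6, sld_d2o=6.3):
    """D2O volume fraction phi that matches a given sample SLD.
    All SLDs in units of 1e-6 A^-2; water values as quoted in the text."""
    if not (sld_h2o < sld_sample < sld_d2o):
        raise ValueError("only SLDs between H2O and D2O can be matched")
    return (sld_sample - sld_h2o) / (sld_d2o - sld_h2o)

# Hypothetical polymer with SLD 1.4e-6 A^-2 (illustrative value only):
phi = d2o_fraction(1.4)
print(phi)   # ~ 0.29, i.e. roughly 29 vol% D2O
```

The `ValueError` branch encodes the restriction stated above: only samples with an SLD between water and heavy water can be matched by mixing the two.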
A prominent example for contrast matching is the matching out of the shell or core of a micelle. The contrast behavior and the resulting scattering curves are shown in Fig.\ref{fig:ContrastMatchCurves}. Essentially contrast matching can improve the fitting procedure, if well known parts of the structure are matched out or emphasized by the contrast matching. This then delivers two or more different data sets that all should return comparable results. Another option is the reconstruction of embedded particles in a larger structure. Also here, the overall fitting procedure can profit from two fits with mutually corroborating results.
One concept that shall also be mentioned here is magnetic (spin-) contrast. In this context Fig.\ref{fig:ContrastMatchConcept} can be understood to be particles with a magnetic shell. As long as the spins are not aligned there is no contrast between the shell and the solvent (step \ding{174}). When an external magnetic field aligns the spins in the shell, a contrast between the shell and the solvent emerges (\ding{172}). Several other possibilities with and without polarization analysis are possible, however that is beyond the scope of this manuscript.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/contrastMatchingCurves.png}
\caption{Scattering curves for micelles with unmatched, partially matched and completely matched corona. The curves correspond to the scenarios \ding{172}, \ding{173} and \ding{174} in Fig.\ref{fig:ContrastMatchConcept}. Here two effects can be observed. The corona is only 50\% of the radius of the core, hence it influences the scattered intensity at higher angles than the core itself, the scattering feature at $Q$=0.15\AA$^{-1}$ corresponding to the micellar core is therefore quite stable, while the intensity at higher $Q$ changes drastically. Considering the forward scattering the dependence of the scattering contrast between solvent and core is directly visible. The matched out corona shows the least contrast, and therefore the lowest forward scattering intensity, while the unmatched corona has the highest contrast and the highest intensity. This approach is also used, when an analytic approach to find the matching D$_2$O/H$_2$O concentration cannot be found. Several concentrations are tested and where a minimum in the scattered intensity is found, the contrast can be assumed to be matched.}
\label{fig:ContrastMatchCurves}
\end{figure}
\section{Form factors}
\label{sec:formFactors}
\index{Form factor}
As described above, the phase problem usually prevents an analytic reconstruction of the structure from the scattered intensity by an inverse Fourier transform. There are approaches attempting the direct reconstruction of direct-space information \cite{weyerich1999small} or reconstruction by bead-model annealing / Monte Carlo simulation \cite{koutsioubas2016denfert,grant2018ab}. All these approaches have in common that they do not yield a direct analytic expression for the scattering, which therefore cannot serve as a starting point of the analysis. In the past, model-based analysis has been the most widely applied approach for the analysis of small-angle scattering data. Here, predetermined structures undergo a Fourier transform, whose result is then used to calculate a scattering pattern. In most cases this results in analytic expressions that can be directly fitted to the data and are often used in a catalog-like manner in order to determine the structure of the sample. As most geometric forms can be approximated either as a sphere, a disk or a rod (see Fig.\ref{fig:formFactors}), these are the forms discussed here. More elaborate structures are available and can in principle be calculated for any structure whose form can be described by an analytic expression. A short, and by no means complete, list of programs for the evaluation of SAS data is SasView (https://www.sasview.org), SasFit (https://kur.web.psi.ch/sans1/SANSSoft/sasfit.html) and Scatter (http://www.esrf.eu/UsersAndScience/Experiments/\newline
CRG/BM26/SaxsWaxs/DataAnalysis/Scatter\#).
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/formfactors.png}
\caption{Form factors for several scattering geometries. The slopes at the onset of the form factor after the plateau are shown, which is mostly determined by the fractal dimension of the scattering object. Here it also becomes apparent that solely relying on that slope may lead to misinterpretation between similarly scaling objects, here Gaussian coils and discs.}
\label{fig:formFactors}
\end{figure}
\subsection{Sphere}
\index{Form factor!Sphere}
The analytic expression for the scattering created by a sphere of radius $R$ is
\begin{equation}
I(Q) = N \left[3V \rho_0 \cdot\frac{\sin(QR)-QR\cos(QR)}{(QR)^3}\right]^2
\label{eq:SphereForm}
\end{equation}
with $N$ being the number of the scattering particles, $V$ being the volume of a single sphere and $\rho_0$ being the SLD contrast between the sphere and the solvent.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Graphics/SphereSLD.png}
\caption{Depiction of the SLD distribution along the radius of a sphere. $\rho _0$ is the SLD contrast, i.e. the SLD difference between the scattering particle and the solvent. $R$ is the radius of the sphere.}
\label{fig:SLDSphere}
\end{figure}
This expression can be derived by describing the SLD as a step function, as depicted in Fig.\ref{fig:SLDSphere}. Since a sphere is spherically symmetric, this can be inserted directly into the Fourier transform
\begin{eqnarray}
A(\vec{Q}) & = & \mathcal{F}(\rho(\vec{r})) \cdot \left(2\pi\right)^3\\
& = & \int _V \rho (\vec{r})\exp\left(-i\vec{Q}\cdot\vec{r} \right) dV\\
\label{eq:GeneralFT}
& = & \int _{\phi=0} ^{2\pi} \int _{\theta=0} ^\pi \int _{r=0} ^R \rho (\vec{r})\exp\left(-i\vec{Q}\cdot\vec{r} \right) r^2 \sin \theta \, dr \, d\theta \, d\phi \\
& = & \int _{\phi=0} ^{2\pi} \int _{\theta=0} ^{\pi} \int _{r=0} ^R \rho (r) \exp\left(-iQr\cos\theta \right) r^2 \sin \theta \, dr \, d\theta \, d\phi \label{eq:scalarProduct}\\
& = & \int _{\phi=0} ^{2\pi} \int _{u=-1} ^{1} \int _{r=0} ^R \rho (r) \exp\left(-iQr u \right) r^2 \, dr \, du \, d\phi \label{eq:substitute}\\
& = & 2\pi \int _{r=0} ^R \rho (r) \left( \frac{\exp (iQr)-\exp(-iQr)}{iQr} \right) r^2 dr\\
& = & 4\pi \int _{r=0} ^R \rho (r) \left( \frac{\sin Qr}{Qr} \right) r^2 dr\\
& = & 4\pi\rho_0 \int _{r=0} ^R \frac{\sin Qr}{Qr} r^2 dr\\
& = & 4\pi\rho_0 \frac{\sin QR-QR\cos QR}{Q^3} \\
& = & V\rho_0 \frac{3\left(\sin QR-QR\cos QR\right)}{\left(QR\right)^3}
\end{eqnarray}
Here Eq.\ref{eq:scalarProduct} used the identity $\vec{Q}\cdot\vec{r}=Qr\cos\theta$, with $\theta$ being the enclosed angle, and in Eq.\,\ref{eq:substitute} $\cos \theta$ was replaced by $u$. In addition, spherical symmetry was exploited for the integration over the solid angle. The factor $\left(2\pi\right)^3$ corrects for the scaling convention of the Fourier transform.
The squared term in Eq.\ref{eq:SphereForm} is nothing else than the square of the amplitude calculated here. As this is only the scattering of a single, isolated sphere, the number density needs to be included to reflect the absolute scattered intensity. Most neutron instruments are calibrated to absolute intensities; x-ray instruments often are not, and therefore need an arbitrary scaling factor. Similar approaches can be used for other analytic representations of form factors.
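The sphere form factor is easily implemented. The following sketch (with an illustrative radius of 50\,\AA{} and unit contrast) checks the forward-scattering limit $I(Q\rightarrow 0) = N(V\rho_0)^2$ and locates the first minimum, which lies at $QR\approx 4.49$:

```python
import numpy as np

def sphere_intensity(q, R, rho0=1.0, N=1):
    """Sphere form-factor intensity I(Q); rho0 is the SLD contrast."""
    V = 4.0 / 3.0 * np.pi * R**3
    x = q * R
    amp = 3.0 * V * rho0 * (np.sin(x) - x * np.cos(x)) / x**3
    return N * amp**2

R = 50.0                                # Angstrom, illustrative
q = np.linspace(1e-3, 0.3, 50001)
I = sphere_intensity(q, R)

# Locate the first minimum in a window around the expected position.
mask = (q > 0.05) & (q < 0.12)
q_min = q[mask][np.argmin(I[mask])]
print(q_min * R)   # ~ 4.49
```

The position of the first minimum is what later allows the quick size estimate $R\approx 4.5/Q_{min}$ directly from a measured curve.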
\subsection{Thin Rod}
\index{Form factor!Rod}
The scattered intensity by a dilute solution of thin rods of length $L$ is given by
\begin{eqnarray}
I(Q)& = & \rho_0^2 v^2\left(\frac{2}{QL\cos\theta}\right)^2\sin^2\left(\frac{QL}{2}\cos\theta\right)\\
& \rightarrow & \rho_0^2v^2\frac{2}{QL}\left(\mathrm{Si}(QL) - \frac{1-\cos QL}{QL}\right).
\label{eq:RodForm}
\end{eqnarray}
Here $v$ is the volume of the particle, and the average over all orientations has been performed in the second step. The definition $\mathrm{Si}(QL)=\int_0 ^{QL} \frac{\sin u}{u}\, du$ was used.
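The orientational average can be verified numerically: integrating the fixed-angle intensity (with its squared prefactor, and $\rho_0 v = 1$) over an isotropic orientation distribution reproduces the $\mathrm{Si}$ form. The $Q$ and $L$ values are illustrative:

```python
import numpy as np

def trap(y, x):
    """Explicit trapezoidal rule (kept simple for portability)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def rod_fixed_angle(q, L, theta):
    # Squared amplitude of a thin rod at angle theta between rod axis and Q:
    # [sin(x)/x]^2 with x = Q L cos(theta) / 2.
    x = 0.5 * q * L * np.cos(theta)
    return (np.sin(x) / x) ** 2

def Si(x, n=200001):
    u = np.linspace(1e-9, x, n)
    return trap(np.sin(u) / u, u)

def rod_averaged(q, L):
    qL = q * L
    return (2.0 / qL) * (Si(qL) - (1.0 - np.cos(qL)) / qL)

q, L = 0.1, 120.0                                  # illustrative values
theta = np.linspace(1e-6, np.pi / 2, 200001)
numeric = trap(rod_fixed_angle(q, L, theta) * np.sin(theta), theta)
print(numeric, rod_averaged(q, L))                 # the two agree closely
```

The $\sin\theta$ weight in the average is the solid-angle measure for isotropically oriented rods.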
\subsection{Circular Disc}
\index{Form factor!Disc}
An infinitely thin circular disk of radius $R$ scatters the incoming intensity as follows:
\begin{equation}
I(Q)=\rho_0^2v^2\frac{2}{Q^2R^2}\left(1-\frac{J_1(2QR)}{QR}\right)
\label{eq:DiscForm}
\end{equation}
Here $J_1$ is the first-order Bessel function of the first kind.
\subsection{Non-particulate scattering from a flexible chain}
\index{Form factor!Flexible Chain}
A flexible chain in solution cannot be described by a simple analytic form, since one needs to integrate over all possible conformations of the chain. Nevertheless, an analytic expression, the Debye scattering, can be found:
\begin{equation}
I(Q)=\rho_0^2 v^2 \frac{2\left(\exp(-Q^2R_g^2)+Q^2R_g^2-1\right)}{\left(Q^2R_g^2\right)^2}
\label{eq:DebyeLaw}
\index{Debye Law}
\end{equation}
Here $R_g^2 = \frac{1}{V}\int_V \vec{r}^{\,2}\, dV$ defines the radius of gyration \index{Radius of gyration} (in this case for constant SLD). A very important aspect of this scattering curve is that at high $Q$ it essentially scales with $Q^{-2}$.
For comparison, the radius of gyration of a solid sphere of radius $R$ is $R_g = \sqrt{\frac{3}{5}}R$, that of a thin rod of length $L$ is $R_g=\frac{1}{\sqrt{12}}L$, and that of a very thin circular disc of radius $R$ is $R_g=\frac{1}{\sqrt{2}}R$.
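Two limits of the Debye expression can be checked numerically (a small sketch with $\rho_0 v = 1$ and an illustrative $R_g$): at low $Q$ it reduces to the Guinier-like behavior $\exp(-Q^2R_g^2/3)$, and at high $Q$ it scales as $Q^{-2}$, so doubling $Q$ quarters the intensity:

```python
import numpy as np

def debye(q, rg):
    """Debye scattering function with rho0 * v = 1."""
    x = (q * rg) ** 2
    return 2.0 * (np.exp(-x) + x - 1.0) / x**2

def guinier_like(q, rg):
    return np.exp(-(q * rg) ** 2 / 3.0)

rg = 30.0                                  # Angstrom, illustrative
q_low = np.array([1e-3, 5e-3, 1e-2])
ratio = debye(q_low, rg) / guinier_like(q_low, rg)
print(ratio)                               # all ~1 at low Q

# High-Q scaling ~ Q^-2: doubling Q quarters the intensity.
print(debye(2.0, rg) / debye(1.0, rg))     # ~ 0.25
```

The low-$Q$ agreement is the reason the radius of gyration of a flexible chain can be read off with the Guinier analysis described later.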
\subsection{Polydispersity}
All analytic form factors determine the scattered intensity for particles of one exact size. In real systems, however, there is usually a distribution of sizes, which leads to a superposition of the scattering from the different particle sizes. Since most particle-size distributions are approximately Gaussian, folding in the size distribution analytically with a Gaussian is often a good choice. For extremely long or very polydisperse particles the Schulz-Zimm distribution is used, which looks very similar to the Gaussian distribution but has a cut-off at zero to prevent negative particle sizes. For specialized problems other distributions, such as Laplace, multi-modal or other size-distribution functions, can be used.
The general idea is that the scattered intensity $I(Q,r)$ is folded with the size distribution function $f(r)$
\begin{equation}
I_{\mbox{\small{real}}}(Q,r) = I_{\mbox{\small{ideal}}} (Q,r) * f(r).
\end{equation}
Here the subscripts real and ideal identify the real measured intensity or the ideal intensity for any calculated particulate size and form.
The effects of the convolution can be seen in Fig.\ref{fig:PDI}. Most notably, the minima are smeared out, and in some cases vanish completely, so they can only be estimated. Another important effect is that the slopes can no longer be reproduced exactly, which is especially important when distinguishing scattering from different contributions. The magnitude of the polydispersity is described by the polydispersity index $PDI = \sigma(f(r))/\mu(f(r))$, where $\sigma(f(r))$ is the standard deviation and $\mu(f(r))$ the mean of the size distribution function. Fits with $PDI \geq 0.3$ are usually discarded, as the results become unreliable for such a polydisperse sample.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/PDI.png}
\caption{Effect of polydispersity. While the positions of the minima can still be found at higher polydispersity, the higher order undulations of the form factor vanish.}
\label{fig:PDI}
\end{figure}
In addition, the usual polydispersity (approximated by a Gaussian distribution) is by its very nature similar to a resolution smearing of the instrument itself. Therefore, the polydispersity can easily be overestimated. If the resolution function of the instrument is known, it should be used for deconvolution before performing the fits.
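The smearing of the form-factor minima can be reproduced with a short numerical average of the sphere intensity over a Gaussian size distribution (radius and PDI below are illustrative choices):

```python
import numpy as np

def sphere_I(q, R):
    # I(Q) for an array of radii R, unit contrast; shape (len(q), len(R))
    x = np.outer(q, R)
    V = 4.0 / 3.0 * np.pi * R**3
    amp = 3.0 * V * (np.sin(x) - x * np.cos(x)) / x**3
    return amp**2

def gauss_average(q, R0, pdi, n=801):
    """Average the sphere intensity over a Gaussian size distribution
    with PDI = sigma/mean, as defined in the text."""
    sigma = pdi * R0
    R = np.linspace(max(R0 - 4 * sigma, 1e-3), R0 + 4 * sigma, n)
    w = np.exp(-(R - R0) ** 2 / (2 * sigma**2))
    w /= np.sum(w)
    return sphere_I(q, R) @ w

q = np.linspace(1e-3, 0.3, 3001)
mono = sphere_I(q, np.array([50.0]))[:, 0]
poly = gauss_average(q, 50.0, 0.15)

# The deep first minimum near Q = 4.49/R is filled in by polydispersity:
mask = (q > 0.08) & (q < 0.10)
print(mono[mask].min() / mono[0], poly[mask].min() / poly[0])
```

Comparing the two ratios shows how the sharp minimum of the monodisperse curve is partially filled in, exactly the effect visible in Fig.\ref{fig:PDI}.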
\section{Structure Factors}
Structure factors describe the contribution to the scattered intensity due to the arrangement of the single particles. Such arrangements arise when the solution becomes too dense, so that the particles arrange with nearest-neighbor correlations, or when the particles attract each other and form aggregates. More generally, a structure factor $S(Q)$ is a measure of the interaction between the single particles in solution and is connected to the correlation function $c$ (in real space, the probability of finding a particle at a certain distance) by the relation
\begin{equation}
S(Q) = \frac{1}{1-nc(Q)}.
\label{eq:StructureFactor}
\end{equation}
Since the structure factor and the form factor are convolved in real space, this converts to a multiplication in reciprocal space, following the convolution theorem. Therefore the scattered intensity is described by the form factor $F(Q)$ and the structure factor $S(Q)$ as
\begin{equation}
I(Q)= F(Q)\cdot S(Q).
\label{eq:StructureFormFactor}
\end{equation}
From this equation it also follows that for a system of uncorrelated, identical particles the structure factor must be $S(Q)=1$. Since the correlation between particles usually leads to either aggregation or repulsion over long length scales, the contribution of the structure factor is most prominent at low $Q$-values. Conversely, at large $Q$ the structure factor has to level out to unity, preserving the fact that there only the inner structure of the particle is visible, not its arrangement in space. A few instructive examples of structure factors are shown in Fig.\ref{fig:structureFactors}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/structFactors.png}
\caption{Examples for structure factors. The intensity of the peaks roughly scales with the volume fraction $\eta$ of the particles. Also the position of the peaks is slightly dependent on that volume fraction, which makes a direct calculation of $R=\frac{2\pi}{Q_{max}}$ invalid (The hard sphere radius used here was 60 \AA). A distinct difference can be noted at low $Q$. Here, in general, attractive interaction (sticky hard spheres) leads to an increase in scattering, while repulsive interaction leads to a decrease in intensity.}
\label{fig:structureFactors}
\end{figure}
\subsection{Hard Sphere Structure Factor}
\index{Structure Factor}
The hard sphere structure factor assumes an infinitely high potential below a radius $R$ and a zero potential at higher radii. This can be described by
\begin{equation}
V(r) = \begin{cases}
\infty & \text{for } r \leq R \\
0 & \text{for } r>R.
\end{cases}
\label{eq:HSPotential}
\end{equation}
Using Eq.\ref{eq:StructureFactor} together with the Percus-Yevick closure, this can be rewritten as
\begin{equation}
S(Q)=\frac{1}{1+24\eta_{HS}G(2QR)/2QR}.
\end{equation}
Here $G(x)$ is defined as
\begin{eqnarray}
G(x) & = & \alpha\frac{\sin(x)-x\cos(x)}{x^2}\nonumber\\
& & +\,\beta\frac{2x\sin(x)+(2-x^2)\cos(x)-2}{x^3}\nonumber\\
& & +\,\gamma\frac{-x^4\cos(x)+4\left[(3x^2-6)\cos(x)+(x^3-6x)\sin(x)+6\right]}{x^5}
\end{eqnarray}
with these definitions for $\alpha, \beta$ and $\gamma$:
\begin{equation}
\alpha =\frac{(1+2\eta_{HS})^2}{(1-\eta_{HS})^4}\hspace{.75cm};\hspace{.75cm}\beta=-\frac{6\eta_{HS}(1+\eta_{HS}/2)^2}{(1-\eta_{HS})^4}\hspace{.75cm};\hspace{.75cm}\gamma=\frac{\eta_{HS}(1+2\eta_{HS})^2}{2(1-\eta_{HS})^4}.
\end{equation}
In all equations the volume fraction that is occupied by hard spheres of radius $R$ is designated $\eta_{HS}$.
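A direct implementation is straightforward. Note that sign conventions for $\beta$ differ in the literature; the sketch below uses $\beta=-6\eta_{HS}(1+\eta_{HS}/2)^2/(1-\eta_{HS})^4$ (the Kinning-Thomas form), with which the known Percus-Yevick compressibility limit $S(0)=(1-\eta)^4/(1+2\eta)^2$ is reproduced. Radius and volume fraction are illustrative:

```python
import numpy as np

def py_hard_sphere_S(q, R, eta):
    """Percus-Yevick hard-sphere structure factor.
    beta follows the Kinning-Thomas sign convention; other conventions exist."""
    a = 2.0 * q * R
    alpha = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    beta = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
    gamma = eta * alpha / 2
    G = (alpha * (np.sin(a) - a * np.cos(a)) / a**2
         + beta * (2 * a * np.sin(a) + (2 - a**2) * np.cos(a) - 2) / a**3
         + gamma * (-a**4 * np.cos(a)
                    + 4 * ((3 * a**2 - 6) * np.cos(a)
                           + (a**3 - 6 * a) * np.sin(a) + 6)) / a**5)
    return 1.0 / (1.0 + 24 * eta * G / a)

R, eta = 60.0, 0.2                      # illustrative values
q = np.array([1e-4, 0.5])
S = py_hard_sphere_S(q, R, eta)
print(S[0], (1 - eta) ** 4 / (1 + 2 * eta) ** 2)   # low-Q limit
print(S[1])                                        # ~1 at high Q
```

The suppressed $S(0)<1$ is the repulsive low-$Q$ signature discussed below in the section on reading a curve.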
\section{Reading a curve}
In an experimental environment it can be useful to determine the fundamental features of a curve in a preliminary fashion, without computer-aided data evaluation (fitting). In addition, this helps to determine good starting parameters for fits. To do so, we are going to look at the curves shown in Fig.\ref{fig:curveReading}. There we can distinguish different regions of the scattered intensity (forward scattering, Guinier regime, Debye regime and Porod regime) and determine several properties of the sample from them. When applying the techniques described for directly reading a curve, it has to be kept in mind that most of them are either restricted in their validity to a certain $Q$-range or are very general and rough descriptions of the sample.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/CurveReading.png}
\caption{Diverse scattering curves from identical spherical form factor and different structure factors.}
\label{fig:curveReading}
\end{figure}
\subsection{Forward scattering}
As pointed out in the discussion of the structure factor, large aggregates mostly show their presence through an increased scattering intensity at low $Q$; this also becomes apparent when taking Eq.\ref{eq:distances} into account. In general, increased scattering at low $Q$ is therefore indicative of large aggregates in the sample, correlating with an attractive potential between the single particles.
Another possibility is strongly suppressed scattering at low $Q$. This can be the case for strongly repulsive interaction potentials between the particles, close to what is described for the hard sphere factor above.
A leveling out of the intensity at low $Q$ is indicative of either a dilute solution or a very weak potential between the particles. Then there is no structure-factor influence at low $Q$, and only the form factor of the single particles is visible.
\subsection{Guinier regime}
\label{sec:GuinierRegime}
\index{Guinier Law}
The Guinier regime is usually the crossover region, where the forward scattering is no longer dominant and the slope of the scattering curve changes to that of the form factor. In this regime the overall size of the particle can be examined. This is similar to seeing something from far away: one may be able to discern the size of the particle, but the distinct form remains hidden. Imagine a football and a pumpkin seen from 100\,m away. They are close in size; you can properly judge either to be approximately 20\,cm in diameter, but the exact form (ridges, stem of the pumpkin) remains hidden. A description valid in this regime, which takes into account only the scattering density of the particle as a whole, is the Guinier Law:
\begin{equation}
I(Q)=\rho _0^2v^2\exp\left(-\frac{Q^2R_g^2}{3}\right)
\label{eq:Guinier}
\end{equation}
For details of the derivation, which includes a Taylor series expansion of the scattered amplitude (Eq.\ref{eq:GeneralFT}) around zero and an averaging over all directions, please refer to the literature \cite{guinier,roe}. Another option is a series expansion of the Debye Law (Eq.\ref{eq:DebyeLaw}) at low $Q$.
In order to evaluate data using the Guinier Law, they are plotted as shown in Fig.\ref{fig:GuinierPlot}. Plotting the logarithmic intensity versus $Q^2$ allows one to directly read off the slope $m$; the radius of gyration then follows as $R_g=\sqrt{-3m}$.
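The procedure can be sketched numerically: generate a sphere form factor, fit a straight line to $\ln I$ versus $Q^2$ in the Guinier range ($QR_g\lesssim 1$), and recover $R_g=\sqrt{-3m}$. The radius below is chosen so that $R_g=\sqrt{3/5}\,R\approx 25.8$\,\AA, roughly matching the figure:

```python
import numpy as np

R = 33.3                                # sphere radius in A -> R_g ~ 25.8 A
q = np.linspace(1e-3, 0.025, 200)       # keep Q R_g below ~1
x = q * R
I = (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

# Straight line in the Guinier plot: slope m of ln I versus Q^2,
# then R_g = sqrt(-3 m).
m, intercept = np.polyfit(q**2, np.log(I), 1)
rg_est = np.sqrt(-3.0 * m)
print(rg_est)   # close to 25.8
```

The small remaining deviation comes from the higher-order terms of the form factor, which the Guinier approximation neglects.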
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Graphics/GuinierReading-1.png}
\caption{Sphere form factor and Guinier approximation from Eq.\ref{eq:Guinier} in a Guinier plot. The radius of gyration is 25.8 \AA. The estimated slope by eye was $m=-185$. With $R_g=\sqrt{-3\cdot m} = 23.5$\AA\, the error is within 10\%, which is suitable for a naked eye approximation.}
\label{fig:GuinierPlot}
\end{figure}
\subsection{Debye regime}
In contrast to the Guinier regime, where the data can be evaluated by the Guinier law, the Debye regime signifies the region where the particulate form manifests in the scattering; in general it cannot be fitted by the Debye law, which is only valid for the scattering from Gaussian chains. As seen in the form-factor section \ref{sec:formFactors}, there is a direct correlation between the dimensionality of the scattering particle (sphere, disc, rod) and the slope in a log-log plot, since the scattering scales as $I(Q)\sim Q^{-D}$, where $D$ is the dimensionality of the scattering object (sphere: $D=3$; disc: $D=2$; rod: $D=1$). Scattering from fractal objects is also possible, which results in non-integer slopes. It should be noted that this approximation is only valid for $1/\mbox{particle radius} \ll Q \ll 1/\mbox{fundamental building block}$, where the fundamental building block can be, for example, atoms or the single monomers of a chain.
\subsection{Porod regime}
\index{Porod Law}
The Porod regime is the regime where the interface between the particle and the solvent dominates the scattered intensity. It is valid at large $Q$ (before the intensity levels out into the incoherent background); a good approach is therefore to extrapolate the sphere form factor to large $Q$. The decisive property of the scattered intensity is the scaling $I(Q)\sim Q^{-4}$. This behavior can be derived by extrapolating the sphere form factor (Eq.\ref{eq:SphereForm}) to very large $Q$:
\begin{eqnarray}
I(Q) &\propto & \left( \frac{4}{3}\pi R^3\right)^2\frac{9 (\sin QR -QR\cos QR)^2}{Q^6 R^6}\\
& = & 8\pi ^2\left( \frac{R^2(1+\cos 2QR)}{Q^4}-\frac{2R\sin 2QR}{Q^5}+\frac{1-\cos 2QR}{Q^6}\right)
\label{eq:SpherePorod}
\end{eqnarray}
The higher-order terms vanish at large $Q$, delivering the characteristic $Q^{-4}$ behavior of the scattered intensity. Here only proportionality is claimed, which is strictly true in this case. If the scattered intensity is recorded in absolute units, information about the surface of the particles can also be obtained. This follows the form
\begin{equation}
\lim _{Q\rightarrow \infty}I(Q)=\frac{2\pi\Delta\rho^2 S}{Q^4}.
\label{eq:Porod}
\end{equation}
$\Delta\rho$ is the SLD difference between the particle and the surrounding medium, and $S$ is the total interfacial area between particles and medium in the sample. The absolute intensity in the Porod regime therefore allows one to determine the total amount of surface in the sample.
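For a single sphere with unit contrast this can be checked numerically: at large $Q$, the quantity $Q^4 I(Q)$ oscillates around $2\pi S$ with $S = 4\pi R^2$ (equivalently $8\pi^2 R^2$), so its mean over many oscillations approaches that plateau. The radius and $Q$-window are illustrative:

```python
import numpy as np

# Porod check for a single sphere with contrast rho0 = 1.
R = 40.0                                 # Angstrom, illustrative
q = np.linspace(1.0, 3.0, 200001)        # Q R >> 1
x = q * R
V = 4.0 / 3.0 * np.pi * R**3
I = (3.0 * V * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

# Averaging Q^4 I(Q) over many oscillation periods of cos(2QR):
plateau = np.mean(q**4 * I)
print(plateau, 2 * np.pi * 4 * np.pi * R**2)   # should agree within a few %
```

The small residual deviation comes from averaging over a finite, non-integer number of oscillation periods.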
\subsection{Estimation of particle and feature Size}
As described previously, at low $Q$ it is in most cases a good approximation to assume spherical symmetry for all particles in the sample (Section \ref{sec:GuinierRegime}). The minima of the spherical form factor lie at $\tan (QR)=QR$, which is fulfilled for $QR\approx 4.49,\, 7.73,\, 10.90, \dots$. In many cases only the first minimum of the form factor will be visible anyway. This allows a fast approximation of the radius as $R\approx 4.5/Q_{min}$. Note that this describes the rotational average of the particle, neglecting any internal structure.
Another approach of determining the size or correlation of features is using Eq.\ref{eq:distances}:
\begin{equation*}
d = \frac{2\pi}{Q}.
\end{equation*}
Although this is strictly true only for lamellar systems and the corresponding correlations, it is still a good approximation for a quick examination of the data during the experiment. With that restriction in mind it can be used for virtually any feature in the scattering curve to estimate the size of the corresponding feature in the sample.
\section{Further Reading}
Most of the concepts shown in this manuscript are based on previous publications. The following selection of textbooks gives the reader a good overview of the principles of SAS.
\subsection{A. Guinier: X-ray diffraction in crystals, imperfect crystals, and amorphous bodies}
This early textbook concentrates on SAXS, as neutron scattering at the time of writing was still in its infancy. While some of the terminology may have changed slightly over time, in many aspects this book still gives a good fundamental overview of what can be done with small-angle scattering, and how to perform a solid data analysis. In addition, this is literally the book on the Guinier Law, and where some of the basic ideas of reading scattering curves were first collected.
\subsection{R.J. Roe: Methods of x-ray and neutron scattering in polymer science}
Here the author nicely manages to emphasize the commonalities and differences between x-ray and neutron scattering. An overview of the methods and technologies is given, as well as a helpful mathematical appendix, reiterating some of the concepts used in the book.
\subsection{G. Strobl: The physics of polymers}
For soft-matter researchers this book, even though not being focused on scattering as such, gives a good overview of applicable concepts for scattering with soft-matter samples. A wide range of helpful examples highlight in which particular area any evaluation concept of the data is applicable and useful.
\newpage
\addcontentsline{toc}{section}{References}
\bibliographystyle{ieeetr}
\section{Conclusions}
We proposed a simple, fast and robust algorithm to compute
shortest paths on Riemannian manifolds learned from data. Here, standard solvers often fail due to ill-conditioned Jacobians
of the associated BVP.
Instead, our method applies a Jacobian-free fixed-point iterative scheme. The assumption is that the true path can be approximated smoothly by the predictive GP posterior. This solver makes Riemannian methods more feasible, since complex statistical models \citep{arvanitidis:nips:2016} can be fitted robustly and in reasonable time,
and distances can be computed in challenging deep metric scenarios \citep{arvanitidis:iclr:2018}.
This has been achieved by analyzing and extending
an existing probabilistic numerical solver \citep{HennigAISTATS2014},
turning it from a proof-of-concept into a
principled algorithm.
The presented method thus contributes both to
Riemannian methods and to probabilistic numerics.
Further improvements might be achieved with
more complex fixed-point iterations \citep{ishikawa1974fixed},
advanced line searches \citep{nips2015_5753},
adaptive mesh selection \citep{mazzia2004hybrid},
and improved model selection \citep{vaart2011information}.
\section{Experiments}
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex1_fig1-eps-converted-to.pdf}
\caption{Data}
\label{fig:ex1_fig1}
\end{subfigure}
~
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex1_fig2-eps-converted-to.pdf}
\caption{Curve Lengths}
\label{fig:ex1_fig2}
\end{subfigure}
~
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex1_fig3-eps-converted-to.pdf}
\caption{Runtimes }
\label{fig:ex1_fig3}
\end{subfigure}
\caption{\textit{Left}: Generated data on a semi-circle, together with some challenging geodesics computed by our method, and a point $\mathbf{y}$. \textit{Middle}: The curve lengths of the geodesics between the given data and the point $\mathbf{y}$, on the horizontal axis we have the point index, and the vertical lines represent the failures of the \texttt{bvp5c}{} to converge. \textit{Right}: The runtime for the corresponding geodesic problems.}\label{fig:ex1}
\end{figure*}
In this section we demonstrate the advantages of our method compared to \textsc{Matlab}{}'s \texttt{bvp5c}{}, and the algorithm in \citet{HennigAISTATS2014}, denoted in the experiments as H{\&}H. Since all the methods depend on a set of parameters, we specify a \emph{default} setting.
For the proposed method, we use for the mesh $\Delta$ a uniform grid of $N=10$ points including the boundaries. The corresponding noise term of the points $\ddot{\b{z}}$ is kept fixed to $\b{\Sigma} = 10^{-7}\mathbb{I}_D$. For the GP we use the Squared Exponential kernel $k(t,t') = \exp(-(2\lambda^2)^{-1}\abs{t-t'}^2)$. We fix the amplitude $\b{V}$ in a Bayesian fashion as in \citet{HennigAISTATS2014}, and set the length-scale to $\lambda^2 \approx 2^{-1} \abs{t_{n+1} - t_n}$, which provides enough degrees of freedom while covering the entire interval at the same time.
The prior mean is set to the straight line $\b{m}(t) = \b{c}(0) + t\cdot(\b{c}(1) - \b{c}(0))$, and the derivative accordingly.
For the method of \citet{HennigAISTATS2014}, we use the same
parameters as for our method.
We set the bounds on the Jacobian to $U = \dot{U} = 10$ and
run the method with one refinement iteration.
We set the maximum mesh size for \texttt{bvp5c}{} to $1000$, and use as the starting mesh $10$ uniformly spaced points on the straight line connecting the boundary points. For all the methods we consider the resulting curve as correct if $\norm{\ddot{\b{c}}(t_n) - f(\b{c}(t_n), \dot{\b{c}}(t_n))}_2^2 \leq 0.1, ~\forall n$.
\subsection{Experiments with a Non-parametric Riemannian Metric}
\label{sec:non_parametric_experiments}
We first consider the case of Riemannian metric learning as proposed by \cite{arvanitidis:nips:2016}. This can be seen as a way to capture the local density of the data, and thus to uncover the underlying geometric structure. The metric at a given point is computed in three steps: 1) use a kernel to assign weights to the given data, 2) compute the local diagonal covariance matrix, and 3) use its inverse as the metric tensor. More formally, the metric is
\begin{align}
\b{M}_{dd}(\b{x}) &= \left(\sum_{n=1}^N w_n(\b{x})(x_{nd} - x_d)^2 + \rho \right)^{-1}, \\ &\text{where} \quad w_n(\b{x}) = \exp\left( -\frac{\norm{\b{x}_n - \b{x}}^2_2}{2\sigma_{\mathcal{M}}^2}\right). \nonumber
\end{align}
The parameter $\sigma_{\mathcal{M}} \in \mathbb{R}$ controls the curvature of the manifold, since it regulates how fast the metric changes: when $\sigma_{\mathcal{M}}$ is small the metric changes quickly, so the curvature increases. The parameter $\rho\in \mathbb{R}_{>0}$ is a small constant that prevents the diagonal elements of the local covariance from vanishing far from the data.
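This construction is straightforward to implement. The sketch below is purely illustrative (not the code used in the experiments); it evaluates the diagonal of $\b{M}(\b{x})$ for a given dataset, with the parameter values of the text as placeholders:

```python
import numpy as np

def local_diag_metric(x, data, sigma=0.15, rho=1e-2):
    """Diagonal of the non-parametric metric M(x) described above.

    data: N x D array of observations; returns the D diagonal entries.
    """
    # step 1: kernel weights w_n(x)
    w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    # step 2: weighted diagonal covariance, regularized by rho
    var = w @ (data - x) ** 2 + rho
    # step 3: the metric is the inverse of the local covariance
    return 1.0 / var

# far from the data all weights vanish, so M(x) -> (1/rho) I is large:
# curves crossing empty regions are expensive and geodesics follow the data
data = np.random.RandomState(0).randn(100, 2)
far = local_diag_metric(np.array([100.0, 100.0]), data)
assert np.allclose(far, 1.0 / 1e-2)
```

In this form it is easy to see why small $\sigma_{\mathcal{M}}$ yields high curvature: the weights, and hence the metric, then vary rapidly with $\b{x}$.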
We generated 200 data points along the upper semi-circle and added Gaussian noise $\mathcal{N}(0,0.01)$, as shown in Fig. \ref{fig:ex1_fig1}. The value $\rho=0.01$ is kept fixed in all the experiments. We set the parameter $\sigma_{\mathcal{M}}=0.15$, and after fixing the point $\b{c}(0)=\b{y}$ we compute the geodesics to the given data. From the results (Fig. \ref{fig:ex1_fig2}) we see that all three methods perform well when the distance from the starting point is small. However, as the distances increase, only our method manages to find the correct curve, while \texttt{bvp5c}{} is not able to solve the problem, and H{\&}H finds overly long curves. Also, the runtime of our method (Fig. \ref{fig:ex1_fig3}) increases only slightly for the difficult problems, while \texttt{bvp5c}{} is always slower and, especially for the difficult problems, increases the mesh size to the maximum and fails to converge. The runtime of H{\&}H remains almost constant, since it is essentially determined by the convergence of the model's parameters and not by the difficulty of the problem.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{imgs/ex2_fig2_v3-eps-converted-to.pdf}
\end{center}
\vspace{-10pt}
\caption{Failed geodesics.}
\label{fig:ex2}
\end{figure}
Next, we generated three challenging datasets consisting of 400 points each. For the first, we generate a circle and flip the lower half along the $y$ axis; we refer to it as Curly in the results. For the second, we shift the lower half of the circle to obtain the two moons. The third is a 2-dimensional sphere in $\mathbb{R}^3$. Finally, we add Gaussian noise $\mathcal{N}(0,0.01)$, and standardize each dimension to zero mean and unit variance. We keep the same parameters for the metric. Note that the resulting manifolds imply high curvature, so we increased the flexibility of the methods: for the proposed model and H{\&}H we used a grid of 50 points, and the maximum mesh size of \texttt{bvp5c}{} was set to 5000. We randomly pick 40 points from each dataset and compute the pairwise distances. In Fig. \ref{fig:ex2} we see that our method manages to solve almost all of the shortest path problems, while \texttt{bvp5c}{} fails in almost half of them. H{\&}H is expected to fail in many cases, since it is not designed to converge to the correct solution.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{imgs/ex3_fig1-eps-converted-to.pdf}
\begin{tabular}{c | c c c c }
D & 2 & 5 & 10 & 20\\\hline
Ours & 0\% & 0\% & 0\% & 9\%\\
\texttt{bvp5c}{} & 9\% & 24\% & 42\% & 50\%\\
H{\&}H & 69\% & 95\% & 100\% & 100\%
\end{tabular}
\vspace{-5pt}
\caption{Scalability in higher dimensions and failures.}
\label{fig:ex3}
\end{figure}
Furthermore, we tested the scalability of the methods with respect to dimensionality. We generate 1000 points on a semi-circle in 2 dimensions, and standardize them to zero mean and unit variance. We fix a point $\b{y}$ and a subset $\mathcal{S}$ of 100 points. Then, for every $D = [2, 5, 10, 20]$ we construct an orthogonal basis to project the data into $\mathbb{R}^D$, where we standardize the data again and add Gaussian noise $\mathcal{N}(0,0.01)$. Then, for each dimension $D$ we compute the geodesics between $\b{y}$ and the subset $\mathcal{S}$. Keep in mind that the parameter $\sigma_{\mathcal{M}}=0.25$ is kept fixed, so as the dimensionality increases the sparsity of the data increases, and so does the curvature of the manifold. In Fig.~\ref{fig:ex3} we show the average runtime for every dimensionality and the percentage of failures for each method. Our method remains fast in higher dimensions, even though the curvature of the manifold increases. On the other hand, we observe that \texttt{bvp5c}{} fails more often, which causes the overhead in the runtime. H{\&}H remains fast, but the resulting curves cannot be trusted as the criterion for a correct solution is almost never met.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{imgs/ex8_fig1-eps-converted-to.pdf}
\end{center}
\vspace{-10pt}
\caption{LAND experiment.}
\label{fig:ex8}
\end{figure}
Also, we fitted a mixture of LANDs \citep{arvanitidis:nips:2016} using the three solvers, on the two moons dataset generated by flipping and translating the data in Fig.~\ref{fig:ex1_fig1}. Note that we fix $\sigma_{\mathcal{M}}=0.1$ since we want our metric to capture the underlying structure of the data precisely, which implies that the curvature is increased. From the results in Fig.~\ref{fig:ex8} we see that the proposed solver achieves a higher log-likelihood faster. Additionally, in the same time interval, it manages to run more iterations (dots in the figure).
\subsection{Experiments with a Riemannian Metric of Deep Generative Models}
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex7_fig3-eps-converted-to.pdf}
\caption{Latent space}
\label{fig:ex7_fig1}
\end{subfigure}
~
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex7_fig1-eps-converted-to.pdf}
\caption{Curve Lengths}
\label{fig:ex7_fig2}
\end{subfigure}
~
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{imgs/ex7_fig2-eps-converted-to.pdf}
\caption{Runtimes}
\label{fig:ex7_fig3}
\end{subfigure}
\caption{\emph{Left}: The latent space together with the computed geodesics between two points. \emph{Middle}: The curve lengths, on the $x$ axis we show the curve length of the proposed model. The results are sorted with respect to our model. \emph{Right}: The corresponding runtimes.}\label{fig:ex7}
\end{figure*}
The Variational Auto-Encoder (VAE) \citep{kingma:iclr:2014, rezende:icml:2014} provides a systematic way to learn a low-dimensional latent representation of the data, together with a generator that learns to interpolate the data manifold in the input space. Usually, deep neural networks are used to model the generator. These flexible models are able to compensate for any reparametrization of the latent space, which renders the latent space unidentifiable. Recently, \citet{arvanitidis:iclr:2018} defined a Riemannian metric in the latent space, which is induced by the generator and is invariant to the parametrization of the latent space. This resolves the identifiability issue and makes computations in the latent space parametrization invariant. More specifically, the VAE utilizes a stochastic generator that maps
a point $\b{x}$ from the latent space $\mathcal{X}$ to a point $\b{y}$ in the input space $\mathcal{Y}$, and it consists of two parts, the mean and the standard deviation function, combined as $\b{y}(\b{x}) = \bs{\mu}(\b{x}) + \bs{\sigma}(\b{x}) \odot \epsilon$, where $\epsilon\sim\mathcal{N}(0,\mathbb{I}_{\text{dim}(\mathcal{Y})})$ and $\odot$ is the pointwise multiplication. This stochastic mapping introduces a random Riemannian metric in the latent space. However, as shown by \citet{Tosi:UAI:2014}, we are able to use the expectation of the metric, which has the appealing form
\begin{align}
\b{M}(\b{x}) = \b{J}_{\bs{\mu}}(\b{x})^{\Trans} \b{J}_{\bs{\mu}}(\b{x}) + \b{J}_{\bs{\sigma}}(\b{x})^{\Trans} \b{J}_{\bs{\sigma}}(\b{x}),
\end{align}
where $\b{J}$ stands for the Jacobian of the corresponding functions. The interpretation of the metric is relatively simple. It represents the distortions due to the mean function and the uncertainty of the generator.
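For intuition, this expected metric can be evaluated for any generator pair $(\bs{\mu}, \bs{\sigma})$. The following sketch is illustrative only: it approximates the Jacobians with central finite differences (in practice one would use automatic differentiation through the network):

```python
import numpy as np

def pullback_metric(mu, sigma, x, eps=1e-6):
    """Expected metric M(x) = J_mu' J_mu + J_sigma' J_sigma.

    mu, sigma: callables R^d -> R^D (mean and std of the generator);
    Jacobians are approximated with central finite differences.
    """
    d = x.shape[0]
    def jac(g):
        cols = []
        for i in range(d):
            e = np.zeros(d); e[i] = eps
            cols.append((g(x + e) - g(x - e)) / (2.0 * eps))
        return np.stack(cols, axis=1)          # D x d Jacobian
    J_mu, J_sig = jac(mu), jac(sigma)
    return J_mu.T @ J_mu + J_sig.T @ J_sig     # d x d metric

# toy generator: mu(x) = A x with constant sigma -> metric is A^T A
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
M = pullback_metric(lambda x: A @ x, lambda x: np.ones(3), np.zeros(2))
assert np.allclose(M, A.T @ A, atol=1e-6)
```

The second term shows the role of the uncertainty: where $\bs{\sigma}$ changes rapidly, the metric grows, making uncertain regions of the latent space expensive to cross.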
In this context, for the data in the input space we generated the upper half of a 2-dimensional sphere in $\mathbb{R}^3$, added Gaussian noise $\mathcal{N}(0,0.01)$ and scaled the data to the interval $[-1,1]^3$. Then, we trained a VAE, using for the generator a simple deep network consisting of two hidden layers with 16 units per layer, the $\texttt{softplus}$ activation function, and $\texttt{tanh}$ for the output layer. We used a 2-dimensional latent space, and the encoded data can be seen in Fig. \ref{fig:ex7_fig1}, where we also show the computed geodesics for the three methods. Interestingly, the three resulting curves differ, and the estimated curve lengths are:
proposed (2.52), \texttt{bvp5c}{} (3.65) and H{\&}H (3.30). Our model manages to find the shortest path, which is a particularly curved path whose second derivative nevertheless remains relatively smooth, while \texttt{bvp5c}{} finds a simpler curve with larger length. This is not surprising, since \texttt{bvp5c}{} prefers solutions where the curve is smoother, while our method prefers curves where the second derivative is smoother. In order to further analyse this behavior, we randomly pick 50 points and compute all pairwise distances. The results in Fig. \ref{fig:ex7_fig2} show that the proposed method always manages to find the shortest path, while the other methods provide suboptimal solutions as the distances increase. Comparing the runtimes (Fig. \ref{fig:ex7_fig3}) we see that our method is faster on the simple problems and has only a small overhead on the difficult problems; however, it always manages to find the shortest path.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{imgs/ex6_fig1-eps-converted-to.pdf}
\end{center}
\vspace{-10pt}
\caption{Runtime comparison}
\label{fig:ex6}
\end{figure}
As a last experiment, we generated a 2-dimensional sphere in $\mathbb{R}^3$, moved the upper half by 1, and again added Gaussian noise and scaled the data to $[-1,1]^3$. Instead of $\texttt{softplus}$ in the hidden layers, we used the $\texttt{tanh}$ activation function, which increases the curvature. Here the latent space is again 2-dimensional. We fix a random point in the latent space and compute the geodesics to 100 randomly chosen points. As we observe from the results in Fig. \ref{fig:ex6}, as long as we compute the distance between points on the same semi-sphere, the runtimes of our method and \texttt{bvp5c}{} are comparable. However, when the points lie on different semi-spheres the runtimes increase significantly. The reason is that the curvature increases dramatically when we cross parts of the latent space where the generator is uncertain. That is also the reason why many problems are not solved (dots in the figure), but even in this challenging setting our model is more robust.
\section{A Brief Recap of Riemannian Geometry}
We start by defining \emph{Riemannian manifolds} \citep{docarmo:1992}. These are
well-studied metric spaces, where the inner product is only locally defined and changes
smoothly throughout space.
\begin{definition}
A Riemannian manifold is a smooth manifold $\mathcal{M}$ where each tangent space
$T_{\b{x}}\mathcal{M}$ is equipped with an inner product (Riemannian metric)
$\langle \b{a}, \b{b} \rangle_\b{x} = \b{a}^{\Trans} \b{M}(\b{x}) \b{b}$
that changes smoothly across the manifold.
\end{definition}
This inner product structure is sufficient for defining the length of a smooth curve
$\b{c}: [0, 1] \rightarrow \mathcal{M}$ in the usual way as
\begin{align}
\mathrm{Length}(\b{c})
&= \int_0^1 \sqrt{\dot{\b{c}}(t)^{\Trans} \b{M}(\b{c}(t)) \dot{\b{c}}(t) } \mathrm{d}t,
\end{align}
where $\dot{\b{c}} = \partial_t \b{c}$ denotes the curve \emph{velocity}.
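As a concrete illustration, the length functional can be approximated with a simple quadrature rule. The sketch below is not the evaluation used in the experiments; it uses (clamped) finite-difference velocities and the trapezoidal rule:

```python
import numpy as np

def curve_length(c, metric, N=200):
    """Trapezoidal approximation of Length(c) = ∫ sqrt(ċᵀ M(c) ċ) dt.

    c: callable t -> point in R^D; metric: callable x -> D x D matrix.
    """
    t = np.linspace(0.0, 1.0, N)
    h = 1e-6
    speeds = np.empty(N)
    for n, tn in enumerate(t):
        lo, hi = max(tn - h, 0.0), min(tn + h, 1.0)
        v = (c(hi) - c(lo)) / (hi - lo)        # finite-difference velocity
        speeds[n] = np.sqrt(v @ metric(c(tn)) @ v)
    # trapezoidal rule over the speeds
    return float(np.sum((speeds[1:] + speeds[:-1]) * np.diff(t)) / 2.0)

# Euclidean sanity check: the straight line from 0 to (3, 4) has length 5
line = lambda t: t * np.array([3.0, 4.0])
assert abs(curve_length(line, lambda x: np.eye(2)) - 5.0) < 1e-6
```

With a non-constant metric the same routine measures how expensive a curve is under the learned geometry, which is exactly the quantity the shortest-path solvers below minimize.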
The length of the shortest path connecting two points then constitutes the natural
distance measure on $\mathcal{M}$. The shortest curve is known as the geodesic, and can be found through
the Euler-Lagrange equations to satisfy a system of 2\textsuperscript{nd} order ODEs \citep{arvanitidis:iclr:2018},
\begin{align}
\label{eq:ode}
\ddot{\b{c}}(t )
= \frac{-{\b{M}({\b{c}(t)})}^{-1}}{2}\bigg[2 (\mathbb{I}_D \otimes \dot{\b{c}}(t)^{\Trans})& \parder{\vectorize{\b{M}({\b{c}(t)})}}{\b{c}(t)}\dot{\b{c}}(t) \nonumber \\
- \parder{\vectorize{\b{M}({\b{c}(t)})}}{\b{c}(t)}^{\Trans} &(\dot{\b{c}}(t) \otimes \dot{\b{c}}(t))\bigg],
\end{align}
where $\vectorize{\cdot}$ stacks the columns of a matrix into a vector and
$\otimes$ is the Kronecker product.
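To make the vectorized expression concrete, the right-hand side of Eq.~\eqref{eq:ode} can be evaluated directly, approximating $\parder{\vectorize{\b{M}}}{\b{c}}$ with finite differences. This is only an illustrative sketch (automatic differentiation of the metric would be preferable in practice), verified here on a one-dimensional metric whose geodesic is known in closed form:

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_acceleration(M, c, dc, eps=1e-6):
    """Right-hand side of the geodesic ODE for a metric field M(x).

    The derivative of vec(M) w.r.t. c is approximated with central
    finite differences; the expression mirrors the vectorized form
    of the geodesic equation in the text.
    """
    D = c.shape[0]
    dvecM = np.zeros((D * D, D))
    for j in range(D):
        e = np.zeros(D); e[j] = eps
        dvecM[:, j] = (M(c + e) - M(c - e)).reshape(-1, order="F") / (2.0 * eps)
    term1 = 2.0 * np.kron(np.eye(D), dc[None, :]) @ (dvecM @ dc)
    term2 = dvecM.T @ np.kron(dc, dc)
    return -0.5 * np.linalg.solve(M(c), term1 - term2)

# 1D check: M(c) = e^{2c} yields the geodesic c(t) = log(1 + t)
M = lambda x: np.array([[np.exp(2.0 * x[0])]])
ode = lambda t, s: np.concatenate([s[1:], geodesic_acceleration(M, s[:1], s[1:])])
sol = solve_ivp(ode, (0.0, 0.5), np.array([0.0, 1.0]), rtol=1e-9, atol=1e-9)
assert abs(sol.y[0, -1] - np.log(1.5)) < 1e-4
```

For a constant metric both bracketed terms vanish and the geodesics reduce to straight lines, as expected.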
Most numerical calculations on Riemannian manifolds are performed in local
tangent spaces as these are Euclidean. Key operations are therefore mappings
back and forth between the manifold and its tangent spaces. A point $\b{y} \in \mathcal{M}$
can be mapped to the tangent space at $\b{x}\in\mathcal{M}$ by computing the shortest connecting
curve and evaluating its velocity $\b{v}$ at $\b{x}$. This is a tangent vector at $\b{x}$
with the property that its length equals the length of the shortest path.
By the Picard-Lindel{\"o}f theorem \citep{picard1890}, this process can be reversed
by solving Eq.~\ref{eq:ode} with initial conditions $\b{c}(0) = \b{x}$ and
$\dot{\b{c}}(0) = \b{v}$. Mapping from the manifold to a tangent space, thus,
requires solving a boundary value problem, while the inverse mapping is an
initial value problem. Practically, the BVPs dominate the computational
budget of numerical calculations on manifolds.
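The IVP direction of this correspondence, the exponential map, is indeed straightforward with an off-the-shelf integrator. A minimal sketch (illustrative only; `geodesic_rhs` stands for the right-hand side of the geodesic ODE for the metric at hand):

```python
import numpy as np
from scipy.integrate import solve_ivp

def exp_map(x, v, geodesic_rhs, T=1.0):
    """Exponential map sketch: integrate the geodesic ODE as an IVP.

    x: start point; v: initial velocity (tangent vector at x);
    geodesic_rhs(c, dc) returns the acceleration c̈.
    """
    D = x.shape[0]
    def ode(t, state):
        c, dc = state[:D], state[D:]
        return np.concatenate([dc, geodesic_rhs(c, dc)])
    sol = solve_ivp(ode, (0.0, T), np.concatenate([x, v]),
                    rtol=1e-8, atol=1e-8)
    return sol.y[:D, -1]

# Euclidean metric: zero acceleration, so Exp_x(v) = x + v
y = exp_map(np.array([0.0, 0.0]), np.array([1.0, 2.0]),
            lambda c, dc: np.zeros_like(c))
assert np.allclose(y, [1.0, 2.0], atol=1e-6)
```

The inverse map (the logarithm) has no such step-by-step recipe: it must search for the initial velocity whose geodesic hits the target point, which is precisely the BVP that dominates the computational budget.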
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{imgs/teaser_img_v3-eps-converted-to.pdf}
\end{center}
\vspace{-10pt}
\caption{A data manifold with a geodesic and its tangent vector.}
\label{fig:teaser}
\end{figure}
In the context of this paper we focus on Riemannian manifolds which are \emph{learned} from data and capture its underlying geometric structure. Thus, we take the smooth manifold to be the Euclidean space $\mathcal{M} = \mathbb{R}^D$, and learn a Riemannian metric $\b{M}:\mathbb{R}^D \rightarrow \mathbb{R}^{D\times D}$. To be clear, this simply changes the way we measure distances, while respecting the structure of the data. An illustrative example can be seen in Fig.~\ref{fig:teaser}.
\section{Introduction}
A long-standing goal in machine learning is to build models that are invariant
to irrelevant transformations of the data, as this can remove factors that
are otherwise arbitrarily determined. For instance, in nonlinear latent variable
models, the latent variables
are generally unidentifiable as the latent space is by design not invariant
to reparametrizations. Enforcing a Riemannian metric in the latent space that is invariant
to reparametrizations alleviates this identifiability issue, which significantly
boosts model performance and interpretability \citep{arvanitidis:iclr:2018, Tosi:UAI:2014}.
Irrelevant transformations of the data can alternatively be factored out by only modeling
local behavior of the data; geometrically this can be viewed as having a locally
adaptive inner product structure, which can be learned from data \citep{hauberg:nips:2012}.
In both examples, the data is studied under a Riemannian metric, so it can be seen as living on a Riemannian manifold.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{imgs/jac_ill_3-eps-converted-to.pdf}
\end{center}
\vspace{-10pt}
\caption{A data manifold and the $\log$-condition number of the Jacobian with fixed velocity (background).
High condition numbers can cause failures to converge for off-the-shelf solvers.}
\label{fig:jacobian_ill_conditioned}
\end{figure}
While this geometric view comes with strong mathematical support, it is
not commonly adopted. The primary concern is that the computational overhead
of Riemannian models often outweighs the induced benefits.
The main bottleneck herein are \emph{distance computations},
which are often at the core of machine learning algorithms.
In Euclidean space, distance evaluations require the computation of a norm.
In kernel methods, distances are evaluated via the kernel trick, but
many interesting geometries cannot be captured with
a positive definite kernel \citep{feragen2015geodesic}.
In both cases, distances can be computed with negligible effort.
The distance
between two points $\b{x}, \b{y}$ on a Riemannian manifold
is defined as the length of the \emph{shortest path}
between $\b{x}$ and $\b{y}$ known as the \emph{geodesic}.
Computing the geodesic requires the solution of a
\emph{boundary value problem (BVP)}.
These types of
\emph{ordinary differential equations (ODEs)}
require specialized numerical methods for their solution,
as---unlike \emph{initial value problems (IVPs)}---they cannot be solved
via step-by-step algorithms like Runge--Kutta methods, and thus
their solution is computationally
\emph{taxing} \citep{ascher1994numerical}.
Moreover, Riemannian manifolds learned from data usually imply high curvature and an unstable Riemannian metric.
The reason is that the metric is estimated only from finite data,
so it can change fast and irregularly.
As a consequence, the Jacobians of the ODE, which are required in many
off-the-shelf solvers, are often ill-conditioned which causes additional
problems and effort \citep[\textsection 8.1.2]{ascher1994numerical}.
The example in Fig.~\ref{fig:jacobian_ill_conditioned}
shows that the Jacobian associated with a data-driven manifold is highly oscillatory. This implies that standard ODE solvers easily break down.
Thus, specialized BVP solvers for shortest path computations
are required to make distance evaluations on manifolds learned from data
as fast and robust as in other models.
In this paper, we propose a novel method for
computing shortest paths on Riemannian manifolds learned from data. Combining the theory of smoothing splines/Gaussian process
regression \citep{wahba1990spline,rasmussenwilliams} with
fixed-point iterations \citep{mann1953mean,johnson1972fixed},
we arrive at an algorithm which is \emph{simple}, \emph{fast} and
\emph{more reliable} on challenging geometries.
This is achieved by not utilizing the Jacobian
which is computationally costly and ill-behaved.
Our work is a significant improvement of an earlier algorithm
by \citet{HennigAISTATS2014}.
Their algorithm is an early proof of concept for
probabilistic numerics \citep{hennig15probabilistic}.
We show below that in the original form, it
\emph{provably} does not converge to the true solution.
However, their algorithmic structure \emph{is capable} of
converging to the true solution.
We demonstrate \emph{how} their algorithm needs to be
adapted to improve \emph{solution quality} and
\emph{convergence speed}, so that the fixed-point
algorithm can efficiently compute the shortest path on Riemannian manifolds.
\subsubsection*{\bibname}}
\usepackage{tikz}
\usetikzlibrary{positioning}
\usetikzlibrary{tikzmark}
\usetikzlibrary{arrows}
\usetikzlibrary{calc}
\tikzstyle{every picture}+=[remember picture]
\usetikzlibrary{shapes}
\usepackage{enumitem}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{csquotes}
\usepackage{hyperref}
\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tensor}
\usepackage{dsfont}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{appendix}
\usepackage{multibib}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{wrapfig}
\usepackage{smartdiagram}
\hypersetup{
colorlinks,
linkcolor={red!50!black},
citecolor={blue!50!black},
urlcolor={blue!80!black}
}
\hypersetup{final}
\newcommand{\gear}[1]{\textbf{[Georgios says: \textit{#1}]}}
\newcommand{\hauberg}[1]{\textbf{[S{\o}ren says: \textit{#1}]}}
\newcommand{\michael}[1]{\textbf{[Michael says: \textit{#1}]}}
\newcommand{\philipp}[1]{\textbf{[Philipp says: \textit{#1}]}}
\newcommand{\todosection}[1]{\subsection{ToDo} \textbf{This ToDo section will be removed before publication.} #1}
\newtheorem{definition}{Definition}
\newtheorem{proposition}{Proposition}
\newenvironment{proof}{\paragraph{\emph{proof}:}}{\hfill$\square$}
\newcommand*{\textsc{Matlab}}{\textsc{Matlab}}
\newcommand*{\texttt{bvp5c}}{\texttt{bvp5c}}
\input{notation}
\begin{document}
\title{Fast and Robust Shortest Paths on Manifolds Learned from Data}
\author[$\dagger$]{Georgios Arvanitidis}
\author[$\dagger$]{S{\o}ren Hauberg}
\author[$\ddagger$]{Philipp Hennig}
\author[$\star$]{Michael Schober}
\affil[$\dagger$]{Technical University of Denmark, Lyngby, Denmark}
\affil[$\ddagger$]{University of T\"ubingen, T\"ubingen, Germany}
\affil[$\ddagger$]{Max Planck Institute for Intelligent Systems, T\"ubingen, Germany}
\affil[$\star$]{Bosch Center for Artificial Intelligence, Renningen, Germany}
\maketitle
\begin{abstract}
We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data. This amounts to solving a system of ordinary differential equations (ODEs) subject to boundary conditions. Here standard solvers perform poorly because they require well-behaved Jacobians of the ODE, and manifolds learned from data usually imply unstable and ill-conditioned Jacobians. Instead, we propose a fixed-point iteration scheme for solving the ODE that avoids Jacobians. This enhances the stability of the solver, while reducing the computational cost. In experiments involving both Riemannian metric learning and deep generative models we demonstrate significant improvements in speed and stability over both general-purpose state-of-the-art solvers as well as over specialized solvers.
\end{abstract}
\input{intro}
\input{geometry}
\input{solver}
\input{experiments}
\input{conclusions}
\clearpage
\subsubsection*{Acknowledgments}
GA is supported by the Danish Center for Big Data Analytics Driven Innovation. SH was supported by a research grant (15334) from VILLUM FONDEN. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement n\textsuperscript{o} 757360). We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the used Titan Xp GPU. PH gratefully acknowledges financial support by the ERC StC Action 757275 PANAMA and grant BMBF 01 IS 18 052-B of the German Federal Ministry for Education and Research.
\small
\bibliographystyle{abbrvnat}
\section{A Fast Fixed-Point Method for Shortest Paths}
In order to apply Riemannian models to interesting data sets,
we require a fast and robust method to solve the boundary value problem
\begin{equation}
\begin{aligned}
\ddot{\b{c}}(t) &= f(\b{c}(t),\dot{\b{c}}(t)),& \b{c}(0) = \b{x},&\;\b{c}(1) = \b{y}
\end{aligned}
\end{equation}
where $f(\b{c}(t),\dot{\b{c}}(t))$ is the right-hand side of Eq.~\eqref{eq:ode}
and $\b{x},\,\b{y} \in \mathcal{M}$.
Numerical BVP solvers typically replace the analytic
solution $\b{c}(t)$ by
an approximant $\post{\b{c}}(t)$ which is required to
fulfill the ODE
$\post{\ddot{\b{c}}}(t_n) = f(\post{\b{c}}(t_n),\post{\dot{\b{c}}}(t_n))$
on a discrete \emph{mesh}
$\Delta = \{t_0 = 0, t_1, \dotsc, t_{N-1} = 1\} \subset [0, 1]$
of evaluation \emph{knots} $t_n$.
Together with the boundary conditions (BC)
$\post{\b{c}}(0) = \b{x},\;\post{\b{c}}(1) = \b{y}$,
this results in a $D(N+2)$-dimensional nonlinear
equation which can be solved for rich enough
$D(N+2)$-dimensional parametric models.
If the approximant is represented by the posterior
mean $\bs{\mu}(t)$ of a GP regressor
$\mathcal{GP}(\tilde{\b{c}}(t); \bs{\mu}(t), \b{k}(t,s))$, then
it can be proven that a solution with small
approximation error exists \citep[\textsection 11]{wendland2004scattered}
under suitable conditions on the kernel
\citep{micchelli2006universal,rasmussenwilliams}.
This means we have to find an (artificial) data
set $\mathcal{D}$ with $(N+2)$ $D$-dimensional data points
such that the posterior mean fulfills the BVP on
the discretization mesh $\Delta$ and the BC on
the curve \emph{boundary}
$\mathcal{B} = \{0, 1\}$.
The data set $\mathcal{D}$ should be thought of as a
\emph{parametrization} for the solution curve.
One way to generate a solution is to define the
auxiliary function $F(\b{c}) = \ddot{\b{c}} - f(\b{c},\dot{\b{c}})$
and apply a variant of the Newton-Raphson method
\citep{deuflhard2011newton} to find a root of $F$.
For example, this type of algorithm is also at the core of
\textsc{Matlab}{}'s \texttt{bvp5c}, a state-of-the-art BVP solver.
The convergence of these algorithms depends on
the Jacobians $\grad{\b{c}} f, \grad{\dot{\b{c}}} f$ of $f$
at the evaluation knots $t_n$.
In particular, Jacobians with a large condition number
may cause Newton's method to fail to converge if
no precautions are taken \citep[\textsection 8.1.2]{ascher1994numerical}.
On manifolds learned from data, this is a common problem.
Furthermore, in practice these Jacobians are computed with
a computationally taxing finite difference scheme.
Thus, a method \emph{not} based on Newton's
method should be more suitable for the computation
of shortest paths.
\subsection{Method Description}
\label{sec:method_description}
As mentioned above, we model the approximate
solution $\post{\b{c}}(t)$ with the posterior mean
$\bs{\mu}(t)$ of a (multi-output) \emph{Gaussian process}
\begin{equation}
\mathcal{GP}(\tilde{\b{c}}(t);\,\bs{\mu}(t), \b{V} \otimes k(t,s))
\end{equation}
with (spatial) kernel $k$ and inter-dimensional covariance
matrix $\b{V} \in \mathbb{R}^{D\times D}$.
If the kernel $k$ is sufficiently partially differentiable,
this implies a covariance between derivatives of $\tilde{\b{c}}$
as well \citep[\textsection~4.1.1]{rasmussenwilliams}, in particular
\begin{equation}
\operatorname{cov}\left(\frac{\mathrm{d}^m}{\mathrm{d}t^m}\tilde{{c}}_i(t),
\frac{\mathrm{d}^n}{\mathrm{d}s^n}\tilde{{c}}_j(s)\right) = \b{V}_{ij} \frac{\partial^{m+n}}{\partial t^m \partial s^n} k(t,s)
\end{equation}
for the covariance between output dimensions $i$ and $j$, derivatives
$m$ and $n$ and spatial inputs $t$ and $s$.
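For the squared-exponential kernel used later in the experiments, these derivative covariances are available in closed form. A small sketch (the length-scale value is a placeholder) verifying $\partial^2 k/\partial s^2$ against a finite difference:

```python
import numpy as np

lam = 0.5  # length-scale; an assumed placeholder value

def k(t, s):
    """Squared-exponential kernel k(t, s)."""
    return np.exp(-(t - s) ** 2 / (2.0 * lam ** 2))

def d2k_ds2(t, s):
    """Closed-form ∂²k/∂s², i.e. cov(c_i(t), c̈_j(s)) up to V_ij."""
    r = t - s
    return k(t, s) * (r ** 2 / lam ** 4 - 1.0 / lam ** 2)

# verify against a central finite difference in s
t0, s0, h = 0.3, 0.7, 1e-4
fd = (k(t0, s0 + h) - 2.0 * k(t0, s0) + k(t0, s0 - h)) / h ** 2
assert abs(fd - d2k_ds2(t0, s0)) < 1e-5
```

Covariances such as $\partial^4 k/\partial t^2 \partial s^2$, needed for the Gram matrix below, follow from the same pattern of differentiating the kernel.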
This \emph{(prior) model} class is the same as in \citet{HennigAISTATS2014}.
The boundary equations fix two parameters $(0,\b{x}), (1, \b{y})$
of the parametrization. The remaining $N$ parameters
$(t_n, \ddot{\b{z}}_n)$ approximate the
\emph{accelerations} $\ddot{\b{c}}(t_n)$ of the true
solution $\b{c}(t_n)$ at knot $t_n$,
i.e., $\ddot{\b{z}}_n \approx \ddot{\b{c}}(t_n)$.
The $\ddot{\b{z}}_n$ are updated iteratively and
we denote values at the $i$-th iteration with the
superscript $(i)$, e.g., $\post{\b{c}}^{(i)}(t)$ for
the $i$-th approximation, $\ddot{\b{z}}_n^{(i)}$ for the
$i$-th value of the parameter $\ddot{\b{z}}_n$ and so forth.
At iteration $i$,
the approximation $\post{\b{c}}^{(i)}(t)$
is the predictive posterior $\mathcal{GP}$
\begin{equation}
\begin{split}
&P(\tilde{\b{c}}^{(i)}(t)) = \mathcal{GP}(\tilde{\b{c}}^{(i)}(t);\;\bs{\mu}^{(i)}(t), \b{k}^{(i)}(t,s))\\
&\b{G} = \b{V} \otimes \bigg(\begin{bmatrix}k(\mathcal{B},\mathcal{B}) & \frac{\partial^2}{\partial s^2}k(\mathcal{B}, \Delta) \\ \frac{\partial^2}{\partial t^2}k(\Delta,\mathcal{B}) & \frac{\partial^4}{\partial t^2\partial s^2}k(\Delta,\Delta)\end{bmatrix} \\
&\phantom{\b{G} = \b{V} \otimes \bigg(}
+ \operatorname{diag}(0,0,\b{\Sigma},\dotsc,\b{\Sigma})\bigg)\\
&\bs{\omega}\Trans = \left(\b{V} \otimes
\begin{bmatrix}k(t,\mathcal{B}) & \frac{\partial^2}{\partial s^2}k(t,\Delta)\end{bmatrix}
\right) \b{G}^{-1}\\
&\bs{\mu}^{(i)}(t) = \b{m}(t)
+ \bs{\omega}\Trans \text{vec}\left(
\begin{bmatrix}\b{x} - \b{m}(0)\\ \b{y} - \b{m}(1)\\\ddot{\b{z}}^{(i)}_\Delta - \b{\ddot{m}}(\Delta)\end{bmatrix}\Trans \right)\\
&\b{k}^{(i)}(t,s) = \b{V}\otimes k(t,s) - \bs{\omega}\Trans
\left(\b{V} \otimes \begin{bmatrix}
k(\mathcal{B}, s)\\ \frac{\partial^2}{\partial t^2}k(\Delta,s)
\end{bmatrix} \right),
\end{split}
\label{eq:posterior}
\raisetag{100pt}
\end{equation}
and similarly for the velocity $\post{\dot{\b{c}}}(t)$ by forming the $\b{G}$ and $\bs{\omega}$ accordingly.
In Eq.~\eqref{eq:posterior}, $\ddot{\b{z}}_\Delta \in \mathbb{R}^{N \times D}$ represents the accelerations, and $k(\mathcal{B},\Delta)\in\mathbb{R}^{2\times N}$ is the
matrix of kernel evaluations at boundary points and evaluation knots and
similar for $k(\mathcal{B},\mathcal{B}), k(\Delta,\Delta)$ and $k(\Delta,\mathcal{B})$.
Finally, $\bs{\Sigma} = \varepsilon \Id_D$ is the identity matrix times a small
regularization parameter $\varepsilon \approx 10^{-10}$, so ${\ddot{\bs{\mu}}}^{(i)}(t_n)\rightarrow \ddot{\b{z}}^{(i)}_n$.
We postpone the rationale for its inclusion to the end of
Sect.~\ref{sec:comp-to-aistats}, where it becomes more apparent in contrast
to the model of \citet{HennigAISTATS2014}. Details on the components $k(\cdot,\cdot),~\b{m},~\Delta$ are given in Appendix~\ref{sec:app:approximate_sp}. We now proceed with the description of the algorithm.
\input{solver_figure}
Just like a root of the function
$F(\b{c}) = \ddot{\b{c}} - f(\b{c},\dot{\b{c}})$
solves the ODE, so does a fixed point of the mapping
$\ddot{\bs{\mu}}^{(i+1)}(t) = f(\bs{\mu}^{(i)}(t),\dot{\bs{\mu}}^{(i)}(t))$.
In particular, we can evaluate this mapping on the
discretization mesh $\Delta$ to map $\ddot{\b{z}}_\Delta^{(i)}$
to $\ddot{\b{z}}_\Delta^{(i+1)}$.
The big advantage of this combination of parametrization and update scheme
is the simplicity of obtaining closed-form
iteration updates
\citep[\textsection~9.4]{rasmussenwilliams}.
The vector field $f$ is evaluated
using the current iteration
$(\bs{\mu}^{(i)},\dot{\bs{\mu}}^{(i)})$ to yield
$\ddot{\b{z}}_n^{(i+1)} = f(\bs{\mu}^{(i)}(t_n),\dot{\bs{\mu}}^{(i)}(t_n))$, and $\ddot{\b{z}}_n^{(i+1)} \approx \ddot{\bs{\mu}}^{(i+1)}(t_n)$ because $\varepsilon$ is small.
Forming $\bs{\mu}^{(i+1)}, \dot{\bs{\mu}}^{(i+1)}$ from $\ddot{\b{z}}_n^{(i+1)}$
only requires two matrix-vector products (see Eq.~\eqref{eq:posterior}).
The process is depicted in Fig.~\ref{fig:algorithm}.
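As a minimal, self-contained illustration of this fixed-point scheme (a sketch under simplifying assumptions, not the GP implementation), the snippet below replaces the posterior mean by trapezoidal double integration of the mesh accelerations, with a linear term enforcing the boundary values; the test problem $\ddot{c} = -c$ is a hypothetical stand-in for the geodesic ODE:

```python
import numpy as np

def solve_bvp_fixed_point(f, x, y, N=101, iters=50):
    """Fixed-point iteration for c'' = f(c, c'), c(0) = x, c(1) = y.

    The curve is parametrized by its accelerations z on a uniform mesh;
    given z, the curve is recovered by trapezoidal double integration
    plus a linear term fixing the boundary values -- a crude stand-in
    for the GP posterior mean.
    """
    t = np.linspace(0.0, 1.0, N)
    h = t[1] - t[0]
    z = np.zeros(N)                       # initial guess: zero acceleration
    for _ in range(iters):
        # velocity and position by cumulative trapezoidal integration
        v = np.concatenate(([0.0], np.cumsum(0.5 * h * (z[1:] + z[:-1]))))
        c = np.concatenate(([0.0], np.cumsum(0.5 * h * (v[1:] + v[:-1]))))
        a = (y - x) - c[-1]               # slope of the linear correction
        c = x + a * t + c                 # enforces c(0) = x, c(1) = y
        v = a + v
        z = f(c, v)                       # fixed-point update on the mesh
    return t, c

# linear test problem c'' = -c, whose exact solution is sin(t)/sin(1)
t, c = solve_bvp_fixed_point(lambda c, v: -c, 0.0, 1.0)
```

For this mildly contractive problem the plain update already converges; damping becomes relevant for stiffer vector fields.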
\input{solver_algorithm}
Variants of this scheme have been repeatedly applied
for the creation of probabilistic
differential equation solvers
\citep{HennigAISTATS2014,chkrebtii2016bayesian,schober2014nips,cockayne2016probabilistic,KerstingHennigUAI2016,teymur2016probabilistic,schober2017probabilistic,kersting2018arXiv}.
Of these papers, only \citet{HennigAISTATS2014} points out that the
parametrization can be updated multiple times, but even there the connection
between a fixed point of the mapping and an approximate solution is not stated.
Interpreting the iteration as a fixed point search is the
key insight of this paper.
We suggest applying a \emph{Mann iteration}
\citep{mann1953mean,johnson1972fixed} process
for the solution of \eqref{eq:ode}, given by
\begin{equation}
\begin{aligned}
\label{eq:fixed-point-update}
\ddot{\b{z}}_n^* &= f(\bs{\mu}^{(i)}(t_n),\dot{\bs{\mu}}^{(i)}(t_n))\\
\ddot{\b{z}}_n^{(i+1)} &= \alpha_i \ddot{\b{z}}_n^* + (1 - \alpha_i) \ddot{\b{z}}_n^{(i)}
\end{aligned}
\end{equation}
with \enquote{step sizes} $\alpha_i \in [0, 1]$.
The convergence results of \citet{mann1953mean,johnson1972fixed}
only apply if $\alpha_i = (i+1)^{-1}$; however, we found
a backtracking scheme to be effective in practice.
Algorithm \ref{alg:bvp-solver} presents our method
in pseudo-code, where $\ddot{\b{z}}_{\Delta}^{(*,j)}$ denotes the tentative parametrization. Note how the backtracking line search for $\alpha_i$
(Lines \ref{line:backtrack-start}--\ref{line:backtrack-end})
accounts for half of the description.
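As an illustration of the damped update in Eq.~\eqref{eq:fixed-point-update}, here is a scalar sketch of the Mann iteration with a backtracking line search on the fixed-point residual; the shrink factor and acceptance rule are illustrative choices, not the exact rule of Algorithm~\ref{alg:bvp-solver}:

```python
import numpy as np

def mann_solve(g, x0, alpha0=1.0, shrink=0.5, tol=1e-10, max_iter=200):
    """Mann iteration x <- alpha*g(x) + (1 - alpha)*x with a backtracking
    line search on the fixed-point residual |g(x) - x|."""
    x = x0
    for _ in range(max_iter):
        r = abs(g(x) - x)
        if r < tol:
            break
        alpha = alpha0
        while alpha > 1e-8:
            x_new = alpha * g(x) + (1.0 - alpha) * x
            if abs(g(x_new) - x_new) < r:   # accept only if the residual shrinks
                x = x_new
                break
            alpha *= shrink                 # otherwise backtrack
    return x

# g(x) = cos(x) has a unique fixed point (the Dottie number, ~0.739085)
x_star = mann_solve(np.cos, 0.0)
```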
Our method is similar to a recently proposed
method by \citet{BELLO2017} that is based
on the \emph{variational iteration method}\footnote{From the \emph{calculus of variations}, not \emph{variational inference}.} by \citet{HE2000115}.
\citet{BELLO2017} proposed to use this
scheme \emph{symbolically} requiring a computer-algebra system
for its execution, which makes it inapplicable to practical tasks.
More details on these related works can be found in
\citet{JAFARI20141,KHURI201428}.
\subsection{Comparison with \citet{HennigAISTATS2014}}
\label{sec:comp-to-aistats}
The proposed method is inspired by the previous work
of \citet{HennigAISTATS2014} and we make a direct comparison here.
The algorithm of \citet{HennigAISTATS2014} is a proof-of-concept
probabilistic numerical method \citep{hennig15probabilistic}
for solving boundary value problems.
It is structurally similar to other early probabilistic IVP
solvers of \citet{chkrebtii2016bayesian}
and \citet{skilling1991bayesian}.
Since the publication of these early works,
the field has matured significantly,
providing algorithms with novel functionality
\citep{hauberg2015random,nips2015_5753,oates2017bayesian,briol2018bqmultiple}
and rigorous analysis
\citep{briol2015frank,chkrebtii2016bayesian,schober2017probabilistic,kersting2018arXiv}.
The main idea of \citet{HennigAISTATS2014} is to treat the vector-field
evaluations $\ddot{\b{z}}_n$
as \emph{noisy observations} of the true, but unknown,
second derivative $\ddot{\b{c}}(t_n)$.
For a concrete suggestion, they propose a Gaussian
likelihood
$P(\ddot{\b{z}}_n) =
\N(\ddot{\b{z}}_n; \ddot{\b{c}}(t_n), \b{\Lambda}_n)$.
Together with a GP prior on $\b{c}(t_n)$,
they arrive at an inference algorithm.
Heuristically, they propose to add mesh observations
sequentially, to refine them iteratively for a fixed number
of steps, and to repeat the overall process until they
find a set of GP hyper-parameters that maximizes the
data likelihood of the final approximation.
However, the algorithm of \citet{HennigAISTATS2014}
cannot converge to the true solution in general.
A convergent method is required to satisfy
$f(\bs{\mu}^{(i)}(t_n),\dot{\bs{\mu}}^{(i)}(t_n)) \to \ddot{\bs{\mu}}^{(i)}(t_n)$
as $i \to \infty$.
However, as $\b{\Sigma}_n \neq \b{0}$ in
\citet{HennigAISTATS2014}, $\ddot{\bs{\mu}}^{(i)}(t_n)$ is
\emph{not} an interpolant of $\ddot{\b{z}}_n^{(i)}$
(\citeauthor{kimeldorf1970correspondence}, \citeyear{kimeldorf1970correspondence}, Thm.~3.2;
\citeauthor{kanagawa2018gps}, \citeyear{kanagawa2018gps}, Prop.~3.6)
implying that the \emph{true accelerations}
$\ddot{\b{c}}(t_n)$ cannot be a parameterization
in the model of \citet{HennigAISTATS2014}
contradicting the fixed point requirement.
The same criticism could be applied to our model,
as we propose $\b{\Sigma}_n = \b{\Sigma} = \varepsilon \Id_D \neq \b{0}$.
We have experimented with annealing schemes for
this hyper-parameter $\b{\Sigma}^{(i)} = i^{-1} \b{\Sigma}$,
but the benefit of $\varepsilon > 0$ for the stability of the
Gram matrix $\b{G}$ outweighs the induced numerical imprecision,
in particular when the tolerance $\tau$ is considerably larger
than $\varepsilon$.
Although the algorithm of \citet{HennigAISTATS2014} cannot converge,
three insights and resulting modifications lead to our proposed method:
\begin{enumerate}
\item \citet{HennigAISTATS2014} did not propose a principled scheme to determine the number of refinements $S$, but treated
it as a hyper-parameter that must be provided by the user.
However, it can be easily checked whether the posterior
mean $\bs{\mu}^{(i)}(t)$ fulfills the differential equation
at any point $t$. This not only removes a hyper-parameter, but
also gives more confidence in the returned solution.
In principle, the error
$e_n^{(i)} = \norm{\ddot{\bs{\mu}}^{(i)}(t_n)
- f(\bs{\mu}^{(i)}(t_n),\dot{\bs{\mu}}^{(i)}(t_n))}$
could even be used to construct adaptive meshes $\Delta^{(i)}$
\citep{mazzia2004hybrid}.
\item Using a universal kernel \citep{micchelli2006universal}
and a fine enough mesh $\Delta^*$, it is known that any curve
can be fitted.
While sub-optimal kernel parameters $\theta$ might require
an exponentially bigger mesh
\citep[Thm.~10]{vaart2011information}, the property
of universality holds \emph{regardless} of the
hyper-parameter $\theta$ used to find the approximation.
As a consequence, tuning hyper-parameters is \emph{purely optional} and certainly \emph{does not require restarts}.
This also improves runtime significantly, as the Gram matrix
needs to be inverted only \emph{once}.
In practice, we have observed negligible solution improvements
after type-II maximum likelihood optimization after the
end of the algorithm.
\item Currently, there is no analysis of when or whether
coordinate-wise updates offer improvements over
simultaneous updates of all parameters $\ddot{\b{z}}^{(i)}_\Delta$.
There is, however, a strong argument \emph{for} updating
simultaneously: runtime. All predictive posterior parameters
can be pre-computed and kept fixed throughout the runtime
of the algorithm, if the mesh is not adapted throughout.
In particular, the regression weights $\bs{\omega}\Trans$
can be kept fixed, so each update requires
only two vector-vector products.
\end{enumerate}
Finally, while our derivation does not make use of its
probabilistic interpretation, further steps in this
direction could potentially unlock novel functionality
which has repeatedly been the case with other
probabilistic numerical methods
\citep{briol2018bqmultiple,oates2017bayesian,hauberg2015random}.
\section{Approximate Shortest Paths}
\label{sec:app:approximate_sp}
The proposed approximation to the shortest path is the posterior mean of a Gaussian process, parametrized by a set of second derivatives $\ddot{\b{z}}_n$ on a discrete mesh $\Delta = \{t_0=0, t_1, \dots, t_{N-1}=1\} \subset [0,1]$ of evaluation knots $t_n$. The approximate shortest path is therefore
\begin{align}
&\bs{\mu}(t) = \b{m}(t)
+ \bs{\omega}\Trans \text{vec}\left(
\begin{bmatrix}\b{x} - \b{m}(0)\\ \b{y} - \b{m}(1)\\\ddot{\b{z}}_\Delta - \b{\ddot{m}}(\Delta)\end{bmatrix}\Trans \right)\nonumber\\
&\b{G} = \b{V} \otimes \bigg(\begin{bmatrix}k(\mathcal{B},\mathcal{B}) & \frac{\partial^2}{\partial s^2}k(\mathcal{B}, \Delta)\\ \frac{\partial^2}{\partial t^2}k(\Delta,\mathcal{B}) & \frac{\partial^4}{\partial t^2\partial s^2}k(\Delta,\Delta)\end{bmatrix}\\
&\phantom{\b{G} = \b{V} \otimes \bigg(}
+ \operatorname{diag}(0,0,\b{\Sigma},\dotsc,\b{\Sigma})\bigg)\nonumber\\
&\bs{\omega}\Trans = \left(\b{V} \otimes
\begin{bmatrix}k(t,\mathcal{B}) & \frac{\partial^2}{\partial s^2}k(t,\Delta)\end{bmatrix}
\right) \b{G}^{-1}\nonumber.
\end{align}
A fixed-point scheme to learn the parameters $\ddot{\b{z}}_\Delta$ that satisfy the ODE of the geodesic curve is presented in Sec.~\ref{sec:method_description}. Next we show how the components of the GP can be chosen.
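For concreteness, the Gram matrix $\b{G}$ above can be assembled directly once the kernel and its derivatives are available in closed form. The sketch below does this for the SE kernel $k(t,s)=\exp(-(t-s)^2/2\lambda^2)$; the length scale, mesh size, and $\b{V}=\Id$ are illustrative choices:

```python
import numpy as np

lam = 0.2  # assumed SE length scale (illustrative hyper-parameter)

def k(t, s):
    """Squared-exponential kernel matrix k(t, s)."""
    r = t[:, None] - s[None, :]
    return np.exp(-r**2 / (2.0 * lam**2))

def d2k(t, s):
    """d^2/ds^2 k(t, s); equals d^2/dt^2 k(t, s) for the SE kernel."""
    r = t[:, None] - s[None, :]
    return (r**2 / lam**4 - 1.0 / lam**2) * np.exp(-r**2 / (2.0 * lam**2))

def d4k(t, s):
    """d^4/(dt^2 ds^2) k(t, s) for the SE kernel."""
    r = t[:, None] - s[None, :]
    return (3.0 / lam**4 - 6.0 * r**2 / lam**6 + r**4 / lam**8) \
        * np.exp(-r**2 / (2.0 * lam**2))

B = np.array([0.0, 1.0])              # boundary points
Delta = np.linspace(0.0, 1.0, 10)     # mesh of evaluation knots
V = np.eye(2)                         # output covariance (D = 2, illustrative)
eps = 1e-10                           # regularization Sigma = eps * I

# block kernel matrix over (boundary values, mesh accelerations)
K = np.block([[k(B, B),       d2k(B, Delta)],
              [d2k(Delta, B), d4k(Delta, Delta)]])
reg = np.diag(np.r_[0.0, 0.0, np.full(len(Delta), eps)])
G = np.kron(V, K + reg)               # Gram matrix G = V kron (K + reg)
```

Since $\b{G}$ is the covariance of a valid GP over boundary values and second derivatives, it is symmetric positive semi-definite by construction.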
\textbf{Mean function}
The most natural choice for the mean function of the prior is the straight line that connects the two boundary points, $\b{m}(t) = \b{c}(0)\cdot (1-t) + \b{c}(1) \cdot t$. This is the shortest path when the Riemannian manifold is flat. When the curvature of the manifold is low, the shortest path will also be relatively close to the straight line, and likewise when two points are very close on the manifold. Note that the mean function of the prior is the initial guess of the BVP solution.
For instance, if we choose the SE kernel, then the implicit prior assumption is that shortest paths are
smooth curves varying on a length scale $\lambda$ along $t$. The amplitude is set to $\b{V} = [(\b{a} - \b{b})\Trans \b{S}_\b{x} (\b{a} - \b{b})]\cdot \b{S}_\b{x} \in \mathbb{R}^{D\times D}$, where $\b{S}_\b{x}$ is the
sample covariance of the dataset $\b{x}_{1:N}$, as in \cite{HennigAISTATS2014}.
\textbf{Kernel}
The kernel type determines the smoothness of the approximated
curve. Since shortest paths are expected to be relatively smooth, twice-differentiable
parametric curves, a reasonable choice is a
smooth kernel, e.g., the squared exponential (SE) or the Mat\'ern family.
Moreover, it is important
to use \emph{stationary} kernels, since they treat the two boundary points equally.
For example, the non-stationary Wiener kernel is a common choice for IVPs.
However, in a BVP such a kernel is a poor fit, because if the time
interval is inverted, the resulting curve will be different.
\textbf{Mesh}
The \emph{Reproducing Kernel Hilbert Space (RKHS)} \citep{rasmussenwilliams} of the Gaussian process is spanned by the basis functions $\{k(t_n, t)\}_{n=0}^{N-1}$. The predictive posterior $\bs{\mu}(t)$ lies in the RKHS as a linear combination of the basis functions $k(t_n, t)$. Therefore, for our approximation to work, we need the true shortest path to be approximated sufficiently well by the RKHS. This means that $\bs{\mu}(t)$ has to be a smooth approximation of the true path.
In our case, the mesh $\Delta$ specifies the locations as well as the number of the basis functions. Consequently, by increasing the size of the mesh we essentially enlarge the RKHS, so that it can approximate more complicated true shortest paths. However, knowing a priori the correct number and placement of the knots is unrealistic. For that reason, a reasonable solution is to use a uniform grid over the interval $[0,1]$. Moreover, $\Delta$ can be seen as a common hyper-parameter for every choice of kernel.
\textbf{Hyper-parameters}
The hyper-parameters of each kernel are kept fixed, because learning the hyper-parameters in parallel with the artificial dataset $\ddot{\b{z}}_\Delta$ may lead to degenerate solutions.
\begin{table*}[!h]
\begin{adjustbox}{max width=\textwidth}
\centering
\begin{tabular}{c | c c c c c c}
$N$ & 5 & 10 & 15 & 25 & 50 & 100\\\hline
\#1 & 2.52($\pm$ 0.4693
) & 2.51($\pm$ 0.3296) & 2.51($\pm$ 0.1562) & 2.49($\pm$ 0.1476) & 2.47($\pm$ 0.0043) & 2.47($\pm$ 0.0004) \\
\#2 & 2.36($\pm$ 0.4541
) & 2.33($\pm$ 0.1800) & 2.34($\pm$ 0.2426) & 2.32($\pm$ 0.1162) & 2.32($\pm$ 0.0011) & 2.32($\pm$ 0.0004) \\
\#3 & 2.20($\pm$ 0.5315
) & 2.19($\pm$ 0.1653) & 2.18($\pm$ 0.0742) & 2.17($\pm$ 0.0559) & 2.17($\pm$ 0.0017) & 2.17($\pm$ 0.0004) \\
\#4 & 2.17($\pm$ 0.5496
) & 2.16($\pm$ 0.1972) & 2.15($\pm$ 0.1028) & 2.15($\pm$ 0.0417) & 2.14($\pm$ 0.0020) & 2.14($\pm$ 0.0003) \\
\end{tabular}
\end{adjustbox}
\caption{Experiment for constant speed curves and different mesh sizes.}\label{tab:constant_speed}
\end{table*}
\section{Scaling of the algorithm with respect to mesh and dimensions}
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\textwidth]{imgs/geod_example-eps-converted-to.pdf}
\caption{Example of a shortest path.}
\label{fig:geod_example}
\end{figure}
The curvature of the Riemannian manifold $\M$, i.e., the behavior of the learned metric, determines the complexity of the shortest paths. The number of iterations that the algorithm needs in order to find the parameters that solve the ODE is related to the ability of the RKHS to approximate the true shortest path. In other words, when the true shortest path can be approximated easily by the RKHS, only a few fixed-point iterations are needed.
For instance, in Fig.~\ref{fig:geod_example} we show a challenging shortest path for a non-parametric metric with $\sigma_\M=0.1$, which means that the curvature is high. When $N=10$, the RKHS is not large enough to approximate the true path easily, so $300$ iterations are needed for the algorithm to converge. When we increase $N$ to $50$, the true path can be approximated more easily by the enlarged RKHS, so that only $80$ fixed-point iterations are needed. When we increase $\sigma_\M$ to $0.15$, the curvature of $\M$ decreases, so now $85$ and $32$ iterations are needed, respectively.
\begin{figure}[!h]
\centering
\includegraphics[width=0.55\textwidth]{imgs/scale_N_D-eps-converted-to.pdf}
\caption{Scaling of the algorithm.}
\label{fig:scaling_N_D}
\end{figure}
For completeness, we also test how the method scales to higher dimensions. In this experiment, we fix a random point as the base point, together with a subset of 100 points. We fix $\sigma_\M=0.25$ and choose the dimensions $[3, 5, 10, 25, 50]$ and the mesh sizes $[5, 10, 25, 50, 100]$. Then, we map the 2-dimensional dataset into each dimension using an orthogonal map, add noise $\N(0,0.01)$, and compute all the shortest paths between the subset and the base point for different mesh sizes. As we see in Fig.~\ref{fig:scaling_N_D}, the scaling is sublinear with respect to the mesh size. Of course, as the dimension increases the problem becomes more complex, so more iterations are needed. Note that $\M$ does not have high curvature, which means that the true shortest path can be approximated relatively easily by each RKHS.
\section{Constant Speed Curves}
A \emph{geodesic} is, by definition, a locally length-minimizing curve with constant speed. This means that a geodesic might not be the global shortest path, but any segment of the geodesic curve minimizes length locally. Importantly, a geodesic has constant speed, and by definition a curve that satisfies the ODE has constant speed.
Here we test how the mesh size $N$ affects the speed of the resulting curve. In Table~\ref{tab:constant_speed} we show the mean and the standard deviation of the speed for 4 curves in the data manifold of Fig.~\ref{fig:geod_example}. The results show that as the mesh grows, the speed becomes more constant, since the standard deviation decreases. In contrast, for small $N$ the curve satisfies the ODE only at the knots $t_n$ and does not capture the exact dynamics of the true curve. In other words, with only $N$ points we are not always capable of approximating the true curve exactly. In this regime our solution is a smooth approximation of the true curve, but it does not have constant speed. As we increase $N$, the RKHS can approximate the true curve exactly; the true curve satisfies the ODE for every $t$ and thus has constant speed.
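The speed statistics in Table~\ref{tab:constant_speed} amount to evaluating the Riemannian speed at the knots and taking its mean and standard deviation. A minimal sketch, with a hypothetical metric function and a Euclidean toy example:

```python
import numpy as np

def speed_stats(c, metric):
    """Mean and standard deviation of the Riemannian speed
    sqrt(c'(t)^T M(c(t)) c'(t)) of a curve discretized as an
    (N, D) array on a uniform mesh over [0, 1]."""
    h = 1.0 / (len(c) - 1)
    dc = np.gradient(c, h, axis=0)    # finite-difference velocity
    speeds = np.array([np.sqrt(v @ metric(p) @ v) for p, v in zip(c, dc)])
    return speeds.mean(), speeds.std()

# Euclidean toy metric: a constant-speed line vs. a re-parametrized one
t = np.linspace(0.0, 1.0, 100)[:, None]
straight = np.hstack([t, t])          # constant speed sqrt(2)
warped = np.hstack([t**2, t**2])      # same image, non-constant speed
metric = lambda p: np.eye(2)
```

The straight parametrization has (numerically) zero speed deviation, while the warped one, tracing the same image, does not.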
\section{Robustness of the Solver}
We conducted an experiment to test the robustness of our solver. In particular, we computed a challenging shortest path in the latent space of the deterministic generator $f(x,y) = [x,y,x^2 + y^2]$. In Fig.~\ref{fig:robustness} we show the paths found by \texttt{bvp5c}, by our method when initialized with the straight line, and by our method when initialized with \texttt{bvp5c}'s solution. Clearly, \texttt{bvp5c}~converges to a suboptimal solution, while our method finds the true shortest path when initialized with the straight line. Interestingly, due to its robustness, our solver finds a geodesic even when initialized with the suboptimal solution of \texttt{bvp5c}. Of course, this is not the shortest path, but it is a geodesic, because it has constant speed (it satisfies the ODE $\forall t$) and is locally length minimizing.
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\textwidth]{imgs/compare_robustness_of_solvers.png}
\caption{Example of robustness.}
\label{fig:robustness}
\end{figure}
\section{Downstream Tasks}
We also compared the performance of our solver on downstream tasks.
In the LAND experiment (see Sec.~\ref{sec:non_parametric_experiments}) we clustered the data using the trained mixture models and a linear model, obtaining the errors: 0\% (ours), 15\% (bvp5c), 21\% (linear). We also numerically measure the KL divergence between the learned distributions and the generating distribution, and observe that the proposed solver improves the fit: 0.35 (ours), 0.65 (bvp5c), 0.53 (linear).
Additionally, we performed $k$-means clustering on a 2-dimensional latent space of a VAE trained on MNIST digits 0, 1, 2, with the following results: ~92($\pm$5)\% (ours, 1.6($\pm$0.7) hours), ~91($\pm$5)\% (bvp5c, 8($\pm$4.5) hours), ~83($\pm$4)\% (linear). The proposed model is, thus, both faster and more accurate on downstream tasks.
\subsection{Reflection measurement}
The cross-Kerr interaction manifests as photon-number splitting~\cite{Schuster2007} in the measured microwave reflection $S_{11}$ (Fig.~1D).
We label the eigenstates of the system $|j,n\rangle$, with $j=g,e,f, ...$ ($n=0,1,2, ...$) corresponding to excitations of the HF (LF) mode.
Distinct peaks correspond to the first transition frequency of the HF mode $|g,n\rangle\leftrightarrow|e,n\rangle$, at frequencies $\omega_\text{H}-n\chi/\hbar$, where $\chi/h =21$~MHz.
The amplitude of peak $n$ is proportional to
\begin{equation}
P_n\kappa_\text{ext}/\kappa_n\ ,
\label{eq:peak_amplitude}
\end{equation}
where $P_n$ is the occupation of photon-number level $|n\rangle$ in the LF mode and $\kappa_\text{ext}/\kappa_n$ is the ratio of external coupling $\kappa_\text{ext}/2\pi=1.6\cdot 10^6s^{-1}$ to the total line-width $\kappa_n$ of peak $n$.
From the Bose-Einstein distribution of peak heights $P_n$, we extract the average photon occupation $n_\text{th} = 1.6$ corresponding to a mode temperature of $17$~mK.
The resolution of individual photon peaks is due to the condition $\kappa_n\ll\chi/\hbar$.
The peak line-widths increase with $n$ following $\kappa_n = \kappa(1+4 n_\text{th}^{(H)})+2\gamma(n+(1+2n)n_\text{th})$, where $\kappa/2\pi=3.7\cdot 10^6s^{-1}$ is the dissipation rate of the HF mode, $n_\text{th}^{(H)}\simeq0.09$ its thermal occupation (see Fig.~S10), and $\gamma/2\pi=23\cdot 10^3s^{-1}$ is the dissipation rate of the LF mode (obtained through time-domain measurement Fig.~4).
The condition $\kappa_n\ll A_\text{H}/\hbar$ makes the HF mode an inductively-shunted transmon qubit~\cite{Koch2007}, making it possible to selectively drive the $|g,n\rangle\leftrightarrow|e,n\rangle$ and $|e,n\rangle\leftrightarrow|f,n\rangle$ transitions.
Despite its low dissipation rate $\gamma$, the LF mode has a line-width of a few~MHz (measured with two-tone spectroscopy, Fig.~S15) which originates in thermal processes such as $|g,n\rangle\rightarrow|e,n\rangle$ occurring at rates $\sim\kappa n_\text{th}^{(H)}$ larger than $\gamma$~\cite{SI}.
The LF mode line-width is then an order of magnitude larger than $A_\text{L}$, making it essentially a harmonic oscillator that we will refer to as the resonator.
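The extraction of $n_\text{th}$ from the Bose-Einstein distribution of peak heights, and its conversion to a mode temperature, can be sketched as follows; the LF mode frequency of $173$~MHz used in the temperature conversion is an assumption chosen to be consistent with the quoted $17$~mK, not a value stated in this excerpt:

```python
import numpy as np

h, kB = 6.62607015e-34, 1.380649e-23  # Planck and Boltzmann constants

def nth_from_peaks(P):
    """Thermal occupation from the Bose-Einstein ratio
    P_{n+1}/P_n = n_th / (1 + n_th) of neighbouring peak occupations."""
    r = np.mean(P[1:] / P[:-1])
    return r / (1.0 - r)

def mode_temperature(n_th, f_mode):
    """Temperature from n_th = 1 / (exp(h f / kB T) - 1)."""
    return h * f_mode / (kB * np.log(1.0 / n_th + 1.0))

# synthetic Bose-Einstein populations P_n = n_th^n / (1 + n_th)^(n+1)
n = np.arange(6)
P = 1.6**n / 2.6**(n + 1)
```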
The junction non-linearity enables transfer of population between states by coherently pumping the circuit at a frequency $\omega_\text{p}$.
The cosine potential of the junction imposes four-wave mixing selection rules, only allowing interactions that involve 4 photons.
One such interaction is
\begin{equation}
\begin{split}
\hat H_\text{int}=&-\hbar g\sqrt{n+1}|f,n\rangle\langle g,n+1|+h.c.\ ,\\
\end{split}
\label{eq:cooling_int}
\end{equation}
activated when driving at the energy difference between the two coupled states $\omega_\text{p}=2\omega_\text{H}-\omega_\text{L}-\left(2n\chi+A_\text{H}\right)/\hbar$.
This process, enabled by a pump photon, annihilates a photon in the resonator and creates two in the transmon.
The number of photons involved in the interaction is four, making it an allowed four-wave mixing process.
The induced coupling rate is $g=A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}\xi_\text{p}$, where $|\xi_\text{p}|^2$ is the amplitude of the coherent pump tone measured in number of photons~\cite{SI}.
We use this pump tone in combination with the large difference in mode relaxation rates to cool the megahertz resonator to its ground-state (Fig.~2A).
The pump drives transitions between $|g,1\rangle$ and $|f,0\rangle$ at a rate $g$.
The population of $|g,1\rangle$, transferred to $|f,0\rangle$, subsequently decays at a rate $2\kappa$ to the ground-state $|g,0\rangle$.
Cooling occurs when the thermalization rate of the resonator $n_\text{th}\gamma$ is slower than the rate $C\gamma$ at which excitations are transferred from $|g,1\rangle$ to $|g,0\rangle$, where $C = 2g^2/\kappa\gamma$ is the cooperativity (proportional to cooling-pump power~\cite{SI}).
For different cooling pump strengths, we measure $S_{11}$ (Fig.~2B).
The pump frequency is adapted at each power since the AC-stark effect increasingly shifts the qubit frequency as a function of power (see Fig.~S9).
The data is fitted to a sum of complex Lorentzians, with amplitudes given by Eq.~(\ref{eq:peak_amplitude}) and line-widths $\kappa_n$, from which $P_n$ is extracted.
Thermal effects lead to the ratio $P_{n+1}/P_n = n_\text{th}/(1+n_\text{th})$ between neighboring photon-number states for $n\ge 1$, and the cooling pump changes the ratio of occupation of the first two states
\begin{equation}
\frac{P_1}{P_0} \simeq \frac{n_\text{th}}{1+n_\text{th}+C}\ .
\label{eq:main_P1_over_P0}
\end{equation}
The ground-state occupation hence increases with cooperativity and we attain a maximum $P_0=0.82$.
At higher cooperativity, $P_0$ diminishes due to the off-resonant driving of other four-wave mixing processes such as $|f,n+1\rangle\langle g,n|+h.c.$ which tend to raise the photon number of the resonator.
This effect is simulated using an adaptive rotating-wave approximation~\cite{baker2018adaptive} (Fig.~2C and S6).
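The steady-state populations implied by Eq.~\eqref{eq:main_P1_over_P0}, together with the thermal ratio $P_{n+1}/P_n = n_\text{th}/(1+n_\text{th})$ for $n\ge 1$, give the ground-state occupation as a function of cooperativity; a minimal sketch (ignoring the off-resonant processes that limit $P_0$ at high power):

```python
import numpy as np

def ground_state_occupation(n_th, C, n_max=50):
    """Steady-state P_0 implied by P_1/P_0 = n_th / (1 + n_th + C)
    and the thermal ratio P_{n+1}/P_n = n_th / (1 + n_th) for n >= 1,
    truncating the photon-number ladder at n_max."""
    P = np.empty(n_max + 1)
    P[0] = 1.0
    P[1] = n_th / (1.0 + n_th + C)
    q = n_th / (1.0 + n_th)
    for n in range(1, n_max):
        P[n + 1] = q * P[n]
    P /= P.sum()
    return P[0]
```

Without the pump ($C=0$) this reduces to the thermal value $P_0 = 1/(1+n_\text{th})$, and $P_0$ grows monotonically with $C$.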
Neighbouring four-wave mixing processes are measured by sweeping the pump frequency whilst monitoring the spectrum (Fig.~3A).
When cooling with a single pump they eventually limit performance, but can be resonantly driven to our advantage.
By driving multiple cooling interactions $|g,n\rangle\leftrightarrow|f,n-1\rangle$, less total pump power is required to reach a given ground-state occupation, hence minimizing off-resonant driving.
By maximizing the ground-state peak amplitude as a function of the power and frequency of four cooling tones, we achieve $P_0 = 0.90$ (Fig.~3B).
By combining cooling $|g,n\rangle\leftrightarrow|f,n-1\rangle$ and raising $|g,n\rangle\leftrightarrow|f,n+1\rangle$ tones (inset of Fig.~3C), we demonstrate stabilization of higher Fock states, non-Gaussian states commonly regarded as non-classical~\cite{rips2012steady}.
The optimum frequencies for the raising and cooling tones adjacent to the stabilized state were detuned by a few~MHz from the transition frequency (see dashed lines in the inset of Fig.~3C), otherwise one pump tone would populate the $|f\rangle$ level, diminishing the effectiveness of the other.
Finally we investigate dynamics in a photon resolved manner (Fig.~4).
Whilst probing $S_{11}$ at a given frequency, we switch the cooling or single photon stabilization pumps on and off for intervals of $50\ \upmu$s.
We perform this for a sequence of probe frequencies, resulting in $S_{11}$ as a function of both frequency and time (see full spectrum in~\cite{SI}).
The spectrum is fitted at each time to extract $P_n$ as a function of time.
After reaching the steady state, the pumps are turned off and we observe the thermalization process which follows the semi-classical master equation
\begin{equation}
\begin{split}
\dot P_n &= -n \gamma (n_\text{th}+1) P_n + n\gamma n_\text{th}P_{n-1}\\
& -(n+1)P_n\gamma n_\text{th} + (n+1)P_{n+1}\gamma (n_\text{th}+1)\ .
\end{split}
\label{eq:main_rate_equation}
\end{equation}
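Eq.~\eqref{eq:main_rate_equation} can be integrated numerically to reproduce the observed thermalization; a simple forward-Euler sketch, truncating the photon-number ladder:

```python
import numpy as np

def thermalization(P_init, n_th, gamma, t_grid):
    """Forward-Euler integration of the semi-classical rate equation
    for the photon-number populations P_n (ladder truncated at len(P))."""
    P = np.array(P_init, dtype=float)
    n = np.arange(len(P))
    traj = [P.copy()]
    dt = t_grid[1] - t_grid[0]
    for _ in t_grid[1:]:
        up = gamma * n_th * (n + 1) * P        # |n> -> |n+1> at (n+1) gamma n_th
        down = gamma * (n_th + 1) * n * P      # |n> -> |n-1> at n gamma (n_th+1)
        dP = -up - down
        dP[1:] += up[:-1]                      # gain from |n-1>
        dP[:-1] += down[1:]                    # gain from |n+1>
        P = P + dt * dP
        traj.append(P.copy())
    return np.array(traj)

# single photon relaxing into a zero-temperature bath: P_1(t) decays as exp(-gamma t)
t_grid = np.linspace(0.0, 1.0, 1001)
traj = thermalization([0.0, 1.0, 0.0, 0.0, 0.0], n_th=0.0, gamma=1.0,
                      t_grid=t_grid)
```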
Our cQED architecture enables the readout and manipulation of a radio-frequency resonator at the quantum level.
Utilizing the fast readout methods of cQED, single-shot readout or the tracking of quantum trajectories could enable even finer resolution of thermodynamic effects at the quantum scale.
Coupling many of these devices together could enable the exploration of many-body effects in Bose-Hubbard systems with dynamically tunable temperatures~\cite{rigol2008thermalization,sorg2014relaxation}.
This circuit architecture could also be used to interface circuit quantum electrodynamics with different physical systems in the MHz frequency range, such as spin systems~\cite{ares2016sensitive} or macroscopic mechanical oscillators\cite{Teufel2011}.
Finally, this circuit could enable sensing of radio frequency radiation with quantum resolution, a critical frequency range for a number of applications, from nuclear magnetic resonance imaging to radio astronomy.
\textbf{Acknowledgments: }
We acknowledge Ya. M. Blanter, S. M. Girvin, J. D. P. Machado for useful discussions.
\textbf{Funding: }
This work was supported by the European Research Council under the European Union’s H2020 program under grant agreements 681476 - QOM3D and 785219 - GrapheneCore2, by the Dutch Foundation for Scientific Research (NWO) through the Casimir Research School, and by the Army Research Office through Grant No.\ W911NF-15-1-0421.
\textbf{Author contributions: }
MFG and RV developed the theoretical description of the experiment.
MFG designed the device.
MFG fabricated the device with help from JD and MK.
MFG, MK, CD, JD and MJ participated in the measurements.
MFG and CD analyzed the data.
BB provided the software and input for the adaptive rotating-wave-approximation simulation.
MFG wrote the manuscript with input from MK, CD and GAS.
All co-authors reviewed the manuscript and provided feedback.
GAS supervised the project.
\textbf{Competing interests: }
The authors declare that they have no competing interests.
\textbf{Data and materials availability: }
Raw data as well as all measurement, data-analysis and simulation code used in the generation of main and supplementary figures is available in Zenodo with the identifier 10.5281/zenodo.2551258
\section*{MATERIALS AND METHODS}
\section{Fabrication}
The circuit is fabricated on a high resistivity silicon substrate using aluminum as a superconductor.
Using shadow evaporation, we first pattern Al/AlOx/Al Josephson junctions, the bottom plate of the capacitor and an underpass (a line connecting the center of the spiral inductor to the capacitor).
We then deposit $260$ nm of hydrogenated amorphous silicon (a-Si:H) as a dielectric layer, motivated by its expected low dielectric loss~\cite{o2008microwave}, using PECVD (plasma-enhanced chemical vapor deposition).
The a-Si:H is patterned to form a dielectric layer for the parallel plate capacitor, a bridge over the spirals underpass, and a protection layer above the junctions.
Finally we sputter-deposit and pattern aluminum to form the rest of the circuitry, after an argon-milling step to ensure a galvanic connection to the first aluminum layer.
The resulting circuit is shown in detail in Fig.~\ref{fig:S_setup}.
\section{Experimental setup}
\begin{figure*}[ht!]
\includegraphics[width=1\columnwidth]{figure_SI/setup.pdf}
\caption[Experimental setup and device]{\textbf{Experimental setup and device. A: }
room temperature setup for spectroscopy experiments, \textbf{B: } room temperature setup for time-domain experiments.
These setups are connected to the fixed setup shown in \textbf{C}.
\textbf{C: }Cryogenic setup.
\textbf{D: }Optical image of the chip, wire-bonded to a surrounding printed circuit board (PCB).
The PCB is mounted in a copper box which is cooled below $7$~mK (\textit{i.e.} under the range of our fridge thermometry) in our dilution refrigerator.
\textbf{E: }Optical image of the two circuits connected to the measured feedline.
Due to a small cross-Kerr to line-width ratio, photon-number splitting was not achieved in the top device, where the low (high) mode was designed to resonate at $\sim50$~MHz ($\sim7.2$ GHz).
\textbf{F: }Optical image of the SQUID, under a protective a-Si:H layer to avoid damage from Ar milling in the last step.
\textbf{G,H: }Optical and SEM image of the 23-turn spiral inductor which has a $1.5\ \upmu$m pitch and a $500\ $nm wire width.
}
\label{fig:S_setup}
\end{figure*}
\section{Data filtering}
In Figs~1A, 3A, \ref{fig:S_caveats}B, \ref{fig:S_temperature}B,C, \ref{fig:S_all_transitions}, \ref{fig:S_time_domain}A,B,C we applied a Gaussian filter with a standard deviation of one increment in the x-axis (and y-axis when applicable).
The filtering was used in the construction of the figure for clarity.
No filtering was applied before fitting the data.
\section{Theory}
\twocolumngrid
\setlength{\parskip}{1em}
\begin{figure*}[ht!]
\includegraphics[width=0.7\textwidth]{figure_SI/circuit.pdf}
\caption[Detailed circuit diagram]{The circuit studied in this work (A) and approximate circuits for the low-frequency (B) and high-frequency (C) regimes.
}
\label{fig:S_circuit}
\end{figure*}
\subsection{Circuit Hamiltonian}
\label{sec:black_box}
In this section, we derive the Hamiltonian for the circuit shown in Fig.~\ref{fig:S_circuit}A using the black-box quantization method~\cite{nigg_black-box_2012}.
This method allows the systematic derivation of the resonance frequency $\bar\omega_m$ and anharmonicity $A_m$ of the different modes $m$ of a circuit from the admittance $Y(\omega) = 1/Z(\omega)$ across the Josephson junction if we replace the latter by a linear inductor $L_\text{J} = \hbar^2/4 e^2 E_\text{J}$.
The resonance frequencies $\bar\omega_m$ are the zeros of the admittance $Y(\bar\omega_m) = 0$, and the anharmonicities are given by
\begin{equation}
A_m = -\frac{2e^2}{L_\text{J}\bar\omega_m^2(\text{Im}Y'(\bar\omega_m))^2}\ .
\label{eq:anharmonicity}
\end{equation}
The idea is to quantify through $A_m$ the amount of current traversing the Josephson junction for an excitation in mode $m$.
The Hamiltonian of the circuit is then
\begin{equation}
\begin{split}
\hat{H} &= \sum_m\hbar\bar\omega_m\hat{a}_m^\dagger\hat{a}_m + \underbrace{E_\text{J}[1-\cos{\hat{\varphi}}]-E_\text{J}\frac{\hat{\varphi}^2}{2}}_\text{junction non-linearity}\ ,\\
\text{where }\hat{\varphi} &= \sum_m\left(2 A_m/E_\text{J}\right)^{1/4}(\hat{a}_m^\dagger+\hat{a}_m)\ .
\label{eq:Hamiltonian_8th_order}
\end{split}
\end{equation}
In the circuit of Fig.~\ref{fig:S_circuit}A, there are two modes: a high-frequency one and a low-frequency one.
By comparing to a black-box quantization of the full circuit, we find that taking the approximation $C_H\simeq0$, $C_c\simeq0$ for the low-frequency mode and $C_L\simeq\infty$ for the high-frequency mode results in corrections of only $0.2\%$, $1.2\%$, $0.3\%$, and $2.1\%$ in the values of $\omega_L$, $\omega_H$, $A_L$, and $A_H$, respectively.
It is therefore a good approximation, which has the additional advantage of producing simple analytical equations for the frequencies and anharmonicities of the circuit.
Starting with the low-frequency mode shown in Fig.~\ref{fig:S_circuit}B, we find the (imaginary part of the) admittance across the linearized junction to be
\begin{equation}
\text{Im}Y(\omega) = \frac{1}{\omega L_J}\frac{\left(\frac{\omega}{\omega_L}\right)^2-1}{1-\left(\omega\sqrt{LC_L}\right)^2}\ ,
\end{equation}
yielding the resonance frequency
\begin{equation}
\omega_L = \frac{1}{\sqrt{(L+L_J)C_L}}\ .
\label{eq:wl}
\end{equation}
Taking the derivative of the imaginary part of the admittance at $\omega = \omega_L$ yields:
\begin{equation}
\text{Im}\frac{\partial Y}{\partial \omega}(\omega_L) = 2C_L\left(\frac{L+L_J}{L_J}\right)^2
\end{equation}
Substituting this into Eq.~(\ref{eq:anharmonicity}) yields
\begin{equation}
A_L = -\frac{e^2}{2C_L}\left(\frac{L_J}{L+L_J}\right)^3\ .
\label{eq:Al}
\end{equation}
Turning to the high-frequency mode shown in Fig.~\ref{fig:S_circuit}C, we find the (imaginary part of the) admittance across the linearized junction to be
\begin{equation}
\text{Im}Y(\omega) = C_H\omega\left(1-\frac{\omega_H^2}{\omega^2}\right)\ ,
\end{equation}
yielding the resonance frequency
\begin{equation}
\omega_H = \sqrt{\frac{L+L_J}{LL_JC_H}}\ .
\label{eq:wh}
\end{equation}
Taking the derivative of the imaginary part of the admittance at $\omega = \omega_H$ yields:
\begin{equation}
\text{Im}\frac{\partial Y}{\partial \omega}(\omega_H) = 2C_H
\end{equation}
Substituting this into Eq.~(\ref{eq:anharmonicity}) yields
\begin{equation}
A_H = -\frac{e^2}{2C_H}\left(\frac{L}{L+L_J}\right)\ .
\label{eq:Ah}
\end{equation}
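The closed-form expressions above can be cross-checked numerically. The following sketch, with purely illustrative component values (not the device parameters), computes $\omega_L$, $A_L$, $\omega_H$ and $A_H$ from Eqs.~(\ref{eq:wl})--(\ref{eq:Ah}), then independently verifies that $\omega_L$ is the zero of $\text{Im}\,Y$ for the low-frequency circuit of Fig.~\ref{fig:S_circuit}B, and that the slope of $\text{Im}\,Y$ at resonance matches $2C_L\left((L+L_J)/L_J\right)^2$:

```python
import numpy as np

# Illustrative (hypothetical) component values, not the device parameters
L, LJ, CL, CH = 30e-9, 3e-9, 300e-12, 190e-15   # henries and farads
e = 1.602176634e-19                              # elementary charge (C)

# Closed-form mode frequencies and anharmonicities, Eqs. (wl), (Al), (wh), (Ah)
wL = 1.0 / np.sqrt((L + LJ) * CL)
AL = e**2 / (2 * CL) * (LJ / (L + LJ))**3        # magnitude of A_L
wH = np.sqrt((L + LJ) / (L * LJ * CH))
AH = e**2 / (2 * CH) * (L / (L + LJ))            # magnitude of A_H

# Independent check: Im Y across the linearized junction of circuit B is the
# inductor LJ in parallel with the series (L, C_L) branch
def ImY_low(w):
    return -1.0/(w*LJ) - 1.0/(w*L - 1.0/(w*CL))

# The mode frequency is the zero of Im Y; bisect below the (L, C_L) pole
lo, hi = 0.9 * wL, 1.02 * wL
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if ImY_low(lo) * ImY_low(mid) <= 0:
        hi = mid
    else:
        lo = mid
assert abs(0.5*(lo + hi) - wL) / wL < 1e-9

# The slope of Im Y at resonance reproduces 2 C_L ((L+L_J)/L_J)^2
slope = (ImY_low(wL*(1+1e-6)) - ImY_low(wL*(1-1e-6))) / (2e-6 * wL)
ref = 2*CL*((L + LJ)/LJ)**2
assert abs(slope - ref) / ref < 1e-4
```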
A Taylor expansion of the junction's cosine potential is justified if the anharmonicities are weak and only a few photons populate the circuit.
Whilst the numerical calculations in this work consider the 8th-order expansion, much understanding can be gleaned by stopping the expansion at fourth order
\begin{equation}
\begin{split}
\hat{H}_{4,\text{diag}} =& \hbar\omega_\text{H}\hat{a}^\dagger\hat{a} -\frac{A_\text{H}}{2}\hat{a}^\dagger\hat{a}^\dagger\hat{a}\hat{a}\\
&+ \hbar\omega_\text{L}\hat{b}^\dagger\hat{b} -\frac{A_\text{L}}{2}\hat{b}^\dagger\hat{b}^\dagger\hat{b}\hat{b}\\
&-\chi\hat{a}^\dagger\hat{a}\hat{b}^\dagger\hat{b}\ ,\\
\label{eq:H4_diag}
\end{split}
\end{equation}
where $\chi$ is the cross-Kerr coupling: the amount by which the high-mode transition shifts as a result of adding an excitation in the low mode and vice versa.
We define the first transition frequencies of both modes
\begin{equation}
\begin{split}
\hbar\omega_\text{H}=\hbar\bar\omega_\text{H}-A_\text{H}-\frac{\chi}{2}\ ,&\\
\hbar\omega_\text{L}=\hbar\bar\omega_\text{L}-A_\text{L}-\frac{\chi}{2}\ .&\\
\end{split}
\end{equation}
In Eq.~(\ref{eq:H4_diag}), we have neglected terms in the expansion which are off-diagonal in the Fock basis and do not modify the eigenenergies in leading-order perturbation theory.
The eigenfrequencies of the system are summarized in the energy diagram of Fig.~\ref{fig:energy_diagram}.
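As a concrete check of this diagonal structure, the sketch below (hypothetical parameter values, $\hbar=1$) builds $\hat H_{4,\text{diag}}$ of Eq.~(\ref{eq:H4_diag}) from Kronecker products and reads off the transition frequencies of the energy diagram:

```python
import numpy as np

# Hypothetical parameters in angular-frequency units (hbar = 1)
wH, wL = 2*np.pi*7.0e9, 2*np.pi*150e6
AH, AL, chi = 2*np.pi*200e6, 2*np.pi*0.5e6, 2*np.pi*20e6

N = 4                                         # truncation of both modes
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
I = np.eye(N)
A = np.kron(a, I)                             # high mode (a)
B = np.kron(I, a)                             # low mode (b)
Ad, Bd = A.conj().T, B.conj().T

# H_{4,diag} of Eq. (H4_diag) with hbar = 1
H = (wH*Ad@A - 0.5*AH*Ad@Ad@A@A
     + wL*Bd@B - 0.5*AL*Bd@Bd@B@B
     - chi*(Ad@A)@(Bd@B))

def E(j, n):
    """Energy of |j, n>, with j = 0, 1, 2 <-> g, e, f."""
    return H[j*N + n, j*N + n].real

# Transition frequencies of the energy diagram
assert np.isclose(E(0, 1) - E(0, 0), wL)             # |g,0> -> |g,1>
assert np.isclose(E(1, 0) - E(0, 0), wH)             # |g,0> -> |e,0>
assert np.isclose(E(1, 1) - E(0, 1), wH - chi)       # cross-Kerr shift
assert np.isclose(E(2, 1) - E(1, 1), wH - AH - chi)  # |e,1> -> |f,1>
assert np.isclose(E(0, 2) - E(0, 1), wL - AL)        # low-mode anharmonicity
```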
\subsection{Comparison to the typical circuit QED architecture}
We now compare our circuit architecture with a more conventional circuit QED coupling scheme~\cite{Koch2007}, where the transmon qubit with a frequency $\omega_\text{H}$ couples capacitively at a rate $G$ to an $LC$-oscillator ($\omega_\text{L}$).
In this architecture, the cross-Kerr coupling would be $4A_\text{H} (G\omega_\text{L}/\omega_\text{H}^2)^2$, to first order in $G$ and $A_\text{H}$~\cite{gely2017nature}.
If we wanted the cross-Kerr coupling to exceed $\kappa$ for the large frequency difference $\omega_\text{L}\ll\omega_\text{H}$ demonstrated in this work, the coupling $G$ would have to be very large.
As is well known from ultra-strong coupling circuit QED architectures, this translates to both high-impedance resonators~\cite{bosman2017approaching} and large coupling capacitors~\cite{bosman_multi-mode_2017}.
These elements are all present in this circuit if we consider $C_L$ as a coupling capacitor between the high-impedance $L_\text{H}C_\text{H}$-oscillator and the qubit formed by the junction and the junction's capacitance (which we neglected in the previous Hamiltonian derivation).
However, the basis of normal modes presented in the previous section provides a much more convenient framework for understanding the system.
\begin{figure*}[ht!]
\includegraphics[width=0.9\textwidth]{figure_SI/energy_diagram.pdf}
\caption[Detailed energy diagram]{\textbf{Detailed energy diagram of the system}.
We depict the first three levels of both high and low mode along with their dissipation and thermalization rates.
Transition energies are written with $\hbar=1$.
}
\label{fig:energy_diagram}
\end{figure*}
\subsection{Translating the measured $S_{11}$ to a quantum operator}
We now introduce a driving term in the Hamiltonian and consider losses to both the environment and the measurement port.
Following input-output theory~\cite{vool2017introduction,clerk2010introduction}, the quantum Langevin equation for $\hat{a}(t)$ is
\begin{equation}
\frac{\text{d}}{\text{d}t}\hat{a}(t) = \frac{i}{\hbar}[\hat{H}_\text{undr},\hat{a}(t)] - \frac{\kappa}{2}\hat{a}(t)+\sqrt{\kappa_\text{ext}}\tilde{a}_\text{in}(t)\ .
\end{equation}
Here, the undriven Hamiltonian $\hat{H}_\text{undr}$ corresponds to that of Eq.~(\ref{eq:Hamiltonian_8th_order}), with the degree of expansion of the non-linearity as yet unspecified.
The microwave reflection measured in spectroscopy (here in the time-domain) is given by
\begin{equation}
S_\text{11}(t) = \frac{\tilde{a}_\text{out}(t)}{\tilde{a}_\text{in}(t)} = 1-\sqrt{\kappa_\text{ext}}\frac{\hat{a}(t)}{\tilde{a}_\text{in}(t)}\ ,
\label{eq:in_out}
\end{equation}
where $\tilde{a}_\text{in}(t)$ ($\tilde{a}_\text{out}(t)$) is the incoming (outgoing) field amplitude, $\kappa_\text{ext}$ ($\kappa$) is the external (total) coupling rate of the high-frequency mode.
The coupling of the low mode to the feedline, $\gamma_\text{ext}/2\pi=2~\text{s}^{-1}$, is much smaller than that of the high mode, $\kappa_\text{ext}/2\pi=1.63\cdot10^6~\text{s}^{-1}$; we therefore assume that a drive tone only affects the high-frequency mode.
For a coherent drive, characterized by a drive frequency $\omega_\text{d}$ and an incoming power $P_\text{in}$ (equal to the average power $\langle P(t) \rangle$ of the oscillating input signal), the wave amplitude is
\begin{equation}
\tilde{a}_\text{in}(t)=\sqrt{\frac{P_\text{in}}{\hbar\omega_\text{d}}} e^{-i\omega_\text{d} t}\ ,
\end{equation}
and the drive term can be incorporated in the Hamiltonian of the system
\begin{align}
\begin{split}
\frac{\text{d}}{\text{d}t}\hat{a}(t) &= \frac{i}{\hbar}[\hat{H}_\text{undr}+\hat{H}_\text{drive},\hat{a}(t)] - \frac{\kappa_\text{ext}}{2}\hat{a}(t)\ ,\\
\text{where }\hat{H}_\text{drive} &= i\hbar\epsilon_\text{d}\left(e^{-i\omega_\text{d} t}\hat{a}^\dagger(t) - e^{i\omega_\text{d} t}\hat{a}(t)\right)\ ,\\
\epsilon_\text{d} &= \sqrt{\frac{\kappa_\text{ext}P_{in}}{\hbar\omega_\text{d}}}\ .
\end{split}
\end{align}
Additionally, we remove the time-dependence of the drive Hamiltonian by moving to a frame rotating at $\omega_\text{d}$ with the unitary transformation $U_\text{probe} = e^{i\omega_\text{d} t \hat{a}^\dagger\hat{a}}$,
\begin{equation}
\frac{\text{d}}{\text{d}t}\hat{a} = \frac{i}{\hbar}[U_\text{probe}^\dagger\hat{H}_\text{undr}U_\text{probe}+\tilde{H}_\text{drive},\hat{a}] - \frac{\kappa_\text{ext}}{2}\hat{a}\ ,
\label{eq:langevin}
\end{equation}
where $\hat a e^{i\omega_d t} = \hat a(t)$ and
\begin{equation}
\tilde{H}_\text{drive} = -\hbar\omega_\text{d} \hat{a}^\dagger \hat{a}+i\hbar\epsilon_\text{d}\left(\hat{a}^\dagger -\hat{a}\right)\ .
\label{eq:drive_hamiltonian}
\end{equation}
In this rotating frame, the reflection coefficient becomes
\begin{equation}
\hat{S}_\text{11}(\omega_\text{d}) = 1-\frac{\kappa_\text{ext}}{\epsilon_\text{d}}\hat{a}\ ,
\label{eq:in_out_freq}
\end{equation}
of which we measure the expectation value when probing the system.
From now on, and in the main text we use the shorthand $S_{11}(\omega_\text{d}) = \langle\hat{S}_\text{11}(\omega_\text{d})\rangle$.
Note that by casting the quantum Langevin Eq.~(\ref{eq:langevin}) in the form
\begin{equation}
\begin{split}
\frac{\text{d}}{\text{d}t}\hat{a} =& \frac{i}{\hbar}[U_\text{probe}^\dagger\hat{H}_\text{undr}U_\text{probe}+\hat{H}_\text{drive},\hat{a}]\\
+&\left(L^{\dagger}\hat{a}L - \frac{1}{2}\left(\hat{a}L^{\dagger}L + L^{\dagger}L\hat{a}\right)\right)\ ,\\
\text{where }L=&\sqrt{\kappa_\text{ext}}\,\hat{a}\ ,
\end{split}
\end{equation}
it can be readily transformed to a Lindblad equation
\begin{equation}
\begin{split}
\frac{\text{d}}{\text{d}t}\rho =& -\frac{i}{\hbar}[U_\text{probe}^\dagger\hat{H}_\text{undr}U_\text{probe}+\hat{H}_\text{drive},\rho] \\
+& \left(L \rho L^{\dagger}-\frac{1}{2}\left(\rho L^{\dagger}L + L^{\dagger}L\rho\right)\right)\ \\
=& -\frac{i}{\hbar}[U_\text{probe}^\dagger\hat{H}_\text{undr}U_\text{probe}+\hat{H}_\text{drive},\rho] + \kappa_\text{ext}\mathcal{L}[\hat{a}]\ ,
\label{eq:probed_lindbald_early}
\end{split}
\end{equation}
better suited to numerical calculations using QuTiP~\cite{johansson2012qutip}.
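As a minimal illustration of this Lindblad form (not the full QuTiP simulation of this work), the sketch below builds the vectorized Liouvillian for a weakly driven, harmonic approximation of the high mode and checks the steady state against the input-output result $\langle\hat a\rangle = \epsilon_\text{d}/(i\Delta+\kappa/2)$, i.e.\ $S_{11} = 1-\kappa_\text{ext}/(i\Delta+\kappa/2)$. All parameter values are hypothetical:

```python
import numpy as np

# Hypothetical rates (rad/s); harmonic approximation of the high mode, hbar = 1
kappa_ext = 2*np.pi*1.63e6
kappa = kappa_ext                    # no internal loss for this check
Delta = 2*np.pi*0.5e6                # probe detuning omega_H - omega_d
eps_d = 0.01*kappa                   # weak coherent drive

N = 12                               # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
I = np.eye(N)
H = Delta*ad@a + 1j*eps_d*(ad - a)   # rotating-frame Hamiltonian

# Vectorized Liouvillian; column stacking: vec(X rho Y) = (Y^T kron X) vec(rho)
n_op = ad @ a
Liou = (-1j*(np.kron(I, H) - np.kron(H.T, I))
        + kappa*(np.kron(a.conj(), a)
                 - 0.5*np.kron(I, n_op)
                 - 0.5*np.kron(n_op.T, I)))

# Steady state = null vector of the Liouvillian, normalized to unit trace
w, v = np.linalg.eig(Liou)
rho = v[:, np.argmin(np.abs(w))].reshape(N, N, order='F')
rho /= np.trace(rho)

S11 = 1 - kappa_ext*np.trace(a @ rho)/eps_d
S11_analytic = 1 - kappa_ext/(1j*Delta + kappa/2)
assert abs(S11 - S11_analytic) < 1e-3
```

For the full anharmonic Hamiltonian, the same construction is delegated to QuTiP's `steadystate` solver, as stated in the text.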
\begin{figure*}[ht!]
\includegraphics[width=0.53\textwidth]{figure_SI/S_fitting_caveats.pdf}
\caption[Possible caveats in fitting a sum of Lorentzians to $S_{11}$]{\textbf{Possible caveats in fitting a sum of Lorentzians to $S_{11}$}.
\\\hspace{\textwidth}
\textbf{A: driven states are broadened then hybridize.}
As we increase the coupling $g$ induced by a cooling pump resonant with $\ket{g,1}\leftrightarrow\ket{f,0}$, the low mode is cooled, as shown in the left panel.
In the right panel, we zoom in to the normalized $n=1$ peak.
As a consequence of the coupling between levels $\ket{g,1}$ and $\ket{f,0}$, this peak first broadens, then splits into two distinct peaks.
The slight asymmetry arises from the tail of the $n=0$ peak.
We used the device parameters with an increased $\chi= h\times130$~MHz and $A_\text{H}= h\times600$~MHz in order to minimize the visibility of the tail of the other peaks.
\\\hspace{\textwidth}
\textbf{B: the dispersive shift increases with $n$.}
The Hamiltonian presented in Eq.~(2), which only considers the diagonal contributions of the quartic term of the JJ non-linearity, results in a high-mode frequency $\omega_H-n\chi/\hbar$, i.e.~a constant shift $\chi/\hbar$ per photon added to the low mode.
As shown in black, overlaid on the blue dots of the same data as in panel A, this results in a slight misalignment of the peaks.
By diagonalizing the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}), with the JJ non-linearity Taylor expanded to the 8-th order, we achieve a more realistic prediction of the system frequencies, and find that the shift increases with the number of photons in the low mode, as shown with red lines.
\\\hspace{\textwidth}
\textbf{C: $\gamma$ and $n_\text{th}$ modify the high-mode line-width $\kappa_{j,n}$.}
As shown in Eq.~(\ref{eq:sum_of_lorentzians}), the high-mode line-width not only depends on high-mode dissipation rate $\kappa$, but also on the dissipation $\gamma$ and thermal occupation $n_\text{th}$ of the low mode.
As $\gamma\ll\kappa$, this effect is subtle for low thermal occupations, but if neglected, can lead to an underestimation of the low mode occupation at higher temperatures.
}
\label{fig:S_caveats}
\end{figure*}
\subsection{Derivation of $S_{11}[\omega]$: probing the system}
In this section we derive the spectrum of the high mode for arbitrary states of the low mode.
We extend the Lindblad equation of Eq.~(\ref{eq:probed_lindbald_early}) to take into account additional interactions of the system with the environment.
Internal dissipation of the high mode, $\kappa_\text{int}$, is added to the external dissipation rate to constitute its total dissipation rate $\kappa = \kappa_\text{int}+\kappa_\text{ext}$.
The low mode is attributed a dissipation rate $\gamma$.
The average thermal occupations of the two modes are denoted by $n_\text{th}^{(H)}$ and $n_\text{th}$ for the high and low mode respectively.
We can estimate the response function $S_{11}(\omega_\text{d})$ analytically using the Hamiltonian of Eq.~(\ref{eq:H4_diag}).
The unitary $U_\text{probe}$ leaves this Hamiltonian unchanged and the complete Lindblad equation is then
\begin{equation}
\begin{split}
\frac{\text{d}}{\text{d}t} \rho = &-\frac{i}{\hbar}[H_\text{4,diag}+\hat{H}_\text{drive},\rho] \\
&+\kappa (n_\text{th}^{(H)}+1) \mathcal{L}[\hat{a}]
+\kappa n_\text{th}^{(H)} \mathcal{L}[\hat{a}^\dagger]\\
&+\gamma (n_\text{th}+1) \mathcal{L}[\hat{b}]
+\gamma n_\text{th} \mathcal{L}[\hat{b}^\dagger]\ .
\label{eq:probed_lindbald}
\end{split}
\end{equation}
In the undriven case ($\epsilon_\text{d}=0$), we assume the steady-state solution to be a diagonal density matrix $\rho_\text{ss}$ as a consequence of thermal effects
\begin{equation}
\rho_\text{ss} =
\begin{bmatrix}
P_g & 0&0 & \\
0 & P_e & 0 & \\
0 & 0 & P_f & \\
& & & \ddots \\
\end{bmatrix}_H
\otimes
\begin{bmatrix}
P_0 & 0&0 & \\
0 & P_1 & 0 & \\
0 & 0 & P_2 & \\
& & & \ddots \\
\end{bmatrix}_L\ ,
\label{eq:thermal_steady_state}
\end{equation}
where $P_g,P_e,P_f$ ($P_0,P_1,P_2$) correspond to the occupations of the high (low) mode levels.
Note that when we pump the system, effectively coupling levels of the high and low mode, this approximation breaks down; that particular limit is discussed below.
We now look for a perturbative correction to this matrix at a small driving rate $\epsilon_\text{d}$
\begin{equation}
\rho = \rho_\text{ss} + \epsilon_\text{d} \rho_\text{pert} \ ,
\end{equation}
where $\rho_\text{pert}$ has units of time.
The objective is to determine the expectation value of the reflection coefficient
\begin{equation}
S_\text{11}(\omega_\text{d}) = \text{Tr}\left[\rho \left(1-\frac{\kappa_\text{ext}\hat{a}}{\epsilon_\text{d}}\right)\right] \ .
\end{equation}
We substitute the perturbative expansion of $\rho$ into Eq.~(\ref{eq:probed_lindbald}) and keep only terms to first order in $\epsilon_\text{d}$.
This equation is solved analytically in reduced Hilbert-space sizes using the software Wolfram Mathematica.
The largest Hilbert-space sizes for which Mathematica could provide an analytical solution in a reasonable amount of time were: (4,0), (3,2), (2,5) where the first (second) number designates the number of levels included in the high (low) mode.
We extrapolate the obtained results to construct the reflection coefficient
\begin{equation}
\begin{split}
S_\text{11}(\omega_\text{d}) &= 1-(P_g-P_e)\sum_n P_n\frac{\kappa_\text{ext}}{i\Delta_{g,n}+\kappa_{g,n}}\\
&\ \ \ \ \ -(P_e-P_f)\sum_n P_n\frac{2\kappa_\text{ext}}{i\Delta_{e,n}+\kappa_{e,n}}\ ,\\
\text{where }\kappa_{g,n} &= \kappa(1+4 n_\text{th}^{(H)})+2\gamma(n+(1+2n)n_\text{th})\ ,\\
\kappa_{e,n} &= \kappa(3+8 n_\text{th}^{(H)})+2\gamma(n+(1+2n)n_\text{th})\ .\\
\end{split}
\label{eq:sum_of_lorentzians}
\end{equation}
This corresponds to a sum of Lorentzian functions, each associated with a high-mode level $i$ and a low-mode level $n$, with line-width $\kappa_{i,n}$, centered around $\Delta_{i,n} = 0$, where
\begin{equation}
\begin{split}
&\Delta_{g,n} = \omega_\text{H}-n\chi/\hbar-\omega_\text{d}\ ,\\
&\Delta_{e,n} = \omega_\text{H}-(n\chi-A_\text{H})/\hbar-\omega_\text{d}\ .
\end{split}
\label{eq:deltas}
\end{equation}
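A direct implementation of Eqs.~(\ref{eq:sum_of_lorentzians}) and (\ref{eq:deltas}) is straightforward. The sketch below (hypothetical parameters, $\hbar=1$, so $\chi$ and $A_\text{H}$ are entered as angular frequencies) evaluates the reflection at the $n$-th photon-number peak and recovers the expected dip depth $\approx 1-(P_g-P_e)P_n\,\kappa_\text{ext}/\kappa_{g,n}$ when the peaks are well separated ($\chi\gg\kappa$):

```python
import numpy as np

def S11(wd, wH, chi, AH, kappa, kappa_ext, gamma, nthH, nth, PH, Pn):
    """Reflection of Eq. (sum_of_lorentzians) with Deltas of Eq. (deltas)."""
    Pg, Pe, Pf = PH
    n = np.arange(len(Pn))
    k_g = kappa*(1 + 4*nthH) + 2*gamma*(n + (1 + 2*n)*nth)
    k_e = kappa*(3 + 8*nthH) + 2*gamma*(n + (1 + 2*n)*nth)
    d_g = wH - n*chi - wd
    d_e = wH - (n*chi - AH) - wd
    return (1
            - (Pg - Pe)*np.sum(Pn*kappa_ext/(1j*d_g + k_g))
            - (Pe - Pf)*np.sum(Pn*2*kappa_ext/(1j*d_e + k_e)))

# Hypothetical parameters (rad/s) and populations
wH, chi, AH = 2*np.pi*7e9, 2*np.pi*200e6, 2*np.pi*2e9
kappa = kappa_ext = 2*np.pi*1.6e6
gamma, nthH, nth = 2*np.pi*20.0, 0.0, 0.05
Pn = np.array([0.9, 0.08, 0.02])        # low-mode occupations P_n
PH = (1.0, 0.0, 0.0)                    # high mode in |g>

# Probing at wd = wH - n*chi addresses the n-th photon-number peak
dip0 = S11(wH,       wH, chi, AH, kappa, kappa_ext, gamma, nthH, nth, PH, Pn)
dip1 = S11(wH - chi, wH, chi, AH, kappa, kappa_ext, gamma, nthH, nth, PH, Pn)
assert abs(dip0 - (1 - Pn[0])) < 0.02   # dip depth ~ P_0 for kappa_ext = kappa
assert abs(dip1 - (1 - Pn[1])) < 0.02
```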
Note that in the main text we use the notation $\kappa_{g,n}=\kappa_n$.
By numerically computing $S_{11}$ as described in Sec.~\ref{sec:numerics_S11}, we find that the expression for the line-widths $\kappa_{i,n}$ remains accurate, whilst the centers of the Lorentzians $\Delta_{i,n}$ shift slightly from Eq.~(\ref{eq:deltas}), as shown in Fig.~\ref{fig:S_caveats}B.
When fitting data, we hence use Eq.~(\ref{eq:sum_of_lorentzians}) whilst fixing $\Delta_{i,n}$ through a diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}), Taylor expanded to 8th order.
In Fig.~\ref{fig:S_temperature}(C), we show that Eq.~(\ref{eq:sum_of_lorentzians}) is in excellent agreement with both data and numerics.
\subsubsection{The impact of a pump tone on $S_{11}(\omega)$}
Pump tones can invalidate Eq.~(\ref{eq:sum_of_lorentzians}) in different ways.
As an example let us take the cooling scheme where a pump tone couples the levels $\ket{g,1}$ and $\ket{f,0}$ at a rate $g$.
This is simulated by numerically finding the steady state of the Hamiltonian
\begin{equation}
\begin{split}
\hat{H} =& \hbar\Delta\hat{a}^\dagger\hat{a} -\frac{A_\text{H}}{2}\hat{a}^\dagger\hat{a}^\dagger\hat{a}\hat{a}\\
&+ (2\hbar\Delta-A_\text{H}-A_\text{L})\hat{b}^\dagger\hat{b} -\frac{A_\text{L}}{2}\hat{b}^\dagger\hat{b}^\dagger\hat{b}\hat{b}\\
&-\chi\hat{a}^\dagger\hat{a}\hat{b}^\dagger\hat{b}\\
&+g(\ket{g,1}\bra{f,0}+\ket{f,0}\bra{g,1})\\
&+i\hbar\epsilon_\text{d}(\hat{a}^\dagger-\hat a) ,\\
\end{split}
\end{equation}
written in a frame rotating at the probe frequency, with $\Delta = \omega_H-\omega_d$, and where the levels $\ket{g,1}$ and $\ket{f,0}$ are made resonant.
As shown in Fig.~\ref{fig:S_caveats}A, a peak corresponding to a transition to or from a level which is being pumped will be broadened in line-width and eventually will split into two peaks with increasing $g$.
This is not an issue in the cooling scheme since we do not use the driven $n=1$ peak to extract Fock-state fidelity, only the $n=0$ peak.
We do, however, off-resonantly pump $\ket{g,0}\leftrightarrow\ket{f,1}$, for example, along with many other transitions involving either state $\ket{g,0}$ or $\ket{e,0}$.
Off-resonant pumping should also lead to line-width broadening, this time of a peak used in extracting a Fock state fidelity.
To mitigate this issue we extract $P_n$ -- when stabilizing the $n$-th Fock state -- by using a fixed line-width $\kappa_n$ defined in Eq.~(\ref{eq:sum_of_lorentzians}).
This means that we always give a lower bound to $P_n$.
By comparing the pumped and un-pumped line-widths of the $n=0$ peak (see Fig.~\ref{fig:S_cooling}(B)), we notice no change in line-width with increasing pump power, indicating that our underestimation is not drastic.
Finally, pump tones could drive the steady-state away from our assumption of a purely diagonal density matrix Eq.~(\ref{eq:thermal_steady_state}).
However, in the cooling experiment of Fig.~2, the adaptive rotating-wave simulation suggests that at maximum $P_0$, all off-diagonal terms of the density matrix are below $2.3\times10^{-3}$.
This issue can safely be disregarded.
\subsection{Four wave mixing}
\subsubsection{Analytical derivation of the pump-induced coupling rates}
In this section, we consider the probe tone to be very weak and hence negligible.
Following Ref.~\cite{Leghtas2015}, we add a pump tone driving the high mode with frequency $\omega_\text{p}$ and strength $\epsilon_\text{p}$ to the system Hamiltonian
\begin{equation}
\begin{split}
\hat{H}_\text{4,dr} &= \hbar\bar\omega_\text{H}\hat{a}^\dagger\hat{a}+\hbar\bar\omega_\text{L}\hat{b}^\dagger\hat{b} +E_\text{J}[1-\cos{\hat{\varphi}}]-\frac{E_\text{J}}{2}\hat{\varphi}^2 \\
&+ \hbar \left(\epsilon_\text{p}e^{-i\omega_\text{p} t}+\epsilon_\text{p}^*e^{i\omega_\text{p} t}\right)\left(\hat{a}^\dagger +\hat{a}\right)\ ,\\
\text{where }\hat{\varphi} &= \left(2 A_\text{H}/E_\text{J}\right)^{1/4}(\hat{a}^\dagger+\hat{a})+\left(2 A_\text{L}/E_\text{J}\right)^{1/4}(\hat{b}^\dagger+\hat{b})\ .
\end{split}
\end{equation}
We move to the displaced frame of the pump through the unitary transformation
\begin{equation}
U_\text{pump} = e^{-\tilde{\xi}_\text{p}\hat{a}^\dagger+\tilde{\xi}_\text{p}^*\hat{a}}\ ,
\end{equation}
where $\tilde{\xi}_\text{p}$ is defined by the differential equation
\begin{equation}
\frac{d\tilde\xi_\text{p}}{dt}=-i\bar\omega_\text{H}\tilde\xi_\text{p}-i\left(\epsilon_\text{p}e^{-i\omega_\text{p} t}+\epsilon_\text{p}^*e^{i\omega_\text{p} t}\right)-\frac{\kappa}{2}\tilde\xi_\text{p}\ .
\end{equation}
For $t\gg1/\kappa$, and for far detuned drives $|\omega_\text{H}-\omega_\text{p}|\gg\kappa$, this equation is solved by
\begin{equation}
\begin{split}
\tilde\xi_\text{p}&\simeq{\epsilon_\text{p}e^{-i\omega_\text{p} t}}\left(\frac{1}{\omega_\text{p}-\bar\omega_\text{H}} +\frac{1}{\omega_\text{p}+\bar\omega_\text{H}} \right)\ .
\end{split}
\end{equation}
In this frame, the Hamiltonian becomes
\begin{equation}
\begin{split}
\hat{H}_\text{4,dr} =& \hbar\bar\omega_\text{H}\hat{a}^\dagger\hat{a}+\hbar\bar\omega_\text{L}\hat{b}^\dagger\hat{b} +E_\text{J}[1-\cos{\tilde{\varphi}}]-\frac{E_\text{J}}{2}\tilde{\varphi}^2 \\
\text{where }\tilde{\varphi} =& \left(2 A_\text{H}/E_\text{J}\right)^{1/4}(\hat{a}^\dagger+\hat{a})+\left(2 A_\text{L}/E_\text{J}\right)^{1/4}(\hat{b}^\dagger+\hat{b})\\
& +\left(2 A_\text{H}/E_\text{J}\right)^{1/4}(\tilde\xi_\text{p}^*+\tilde\xi_\text{p})
\end{split}
\end{equation}
We now Taylor expand the cosine non-linearity to fourth order, neglecting terms which are off-diagonal in the Fock basis except when they depend on $\tilde\xi_\text{p}$.
The latter terms can be made relevant through our choice of $\omega_\text{p}$.
\begin{equation}
\begin{split}
\hat{H}_{4,\text{pumped}} &= \hat{H}_{4,\text{diag}} + \hat{H}_\text{p}\ ,\\
\end{split}
\label{eq:hamiltonian_pumped}
\end{equation}
where $\hat{H}_{4,\text{diag}}$ was given in Eq.~(\ref{eq:H4_diag}).
The terms dependent on the pump power and frequency are assembled in the term $\hat{H}_\text{p}$ and written in Table \ref{tab:pump_terms}, along with the approximate pumping frequency $\omega_\text{p}$ necessary to eliminate their time-dependence.
As shown in the next paragraph, this occurs when the pump frequency matches the transition frequency between the two states coupled by the interaction term.
\begin{table}[t]
\centering
\caption[Four-wave mixing terms]{\textbf{Four-wave mixing terms.}
Only half of the terms are shown; the other half are obtained by taking the Hermitian conjugate of these terms.
Terms become approximately time-independent around the pump frequency $\omega_\text{p}$ given in the left column.
}
\label{tab:pump_terms}
\begin{tabular}{C{2.5cm} C{2.5cm} C{2.5cm}}
\\
$\omega_\text{p} \simeq $ & prefactor & interaction \\ \hline \\
\multicolumn{3}{c}{ Stark shift}\\\\
&$-2A_\text{H} |\xi_\text{p}|^2$& $\hat a^\dagger\hat a$\\
&$-\chi |\xi_\text{p}|^2$& $\hat b^\dagger\hat b$\\
\\\multicolumn{3}{c}{Heating interactions}\\\\
$(\omega_\text{H}+\omega_\text{L})/2$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}} (\tilde\xi_\text{p}^*)^2$& $\hat{a}\hat{b}$\\
$\omega_\text{H}+2\omega_\text{L}$ &$-\chi\tilde\xi_\text{p}^*/2$& $\hat{a}\hat b^2 $\\
$2\omega_\text{H}+\omega_\text{L}$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}\tilde\xi_\text{p}^*$& $\hat a^2\hat{b}$\\
\\\multicolumn{3}{c}{Cooling interactions}\\\\
$(\omega_\text{H}-\omega_\text{L})/2$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}} (\tilde\xi_\text{p}^*)^2$& $\hat{a}\hat{b}^\dagger$\\
$\omega_\text{H}-2\omega_\text{L}$ &$-\chi\tilde\xi_\text{p}^*/2$& $\hat{a}(\hat b^\dagger)^2$\\
$2\omega_\text{H}-\omega_\text{L}$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}\tilde\xi_\text{p}^*$& $\hat a^2 \hat{b}^\dagger$\\
\\\multicolumn{3}{c}{Unused interactions}\\\\
$3\omega_\text{H}$ &$- A_\text{H}\tilde\xi_\text{p}/3$& $\hat a^3$\\
$\omega_\text{H}/3$ &$-A_\text{H} (\tilde\xi_\text{p}^*)^3/3$& $\hat a$\\
$\omega_\text{H}$ &$-A_\text{H} (\tilde\xi_\text{p}^*)^2/2$& $\hat a^2 $\\
$\omega_\text{H}$ &$-\chi\tilde\xi_\text{p}$& $\hat{a}\hat b^\dagger\hat b $\\
$\omega_\text{H}$ &$-A_\text{H}\tilde\xi_\text{p}$& $\hat a^\dagger\hat a^2$\\
$\omega_\text{H}$ &$- A_\text{H}(\tilde\xi_\text{p}^*)^3$& $\hat{a}$\\
$\omega_\text{H}$ &$-A_\text{H}\tilde\xi_\text{p}$& $\hat a$\\
$\omega_\text{H}$ &$-\chi\tilde\xi_\text{p}/2$& $\hat{a}$\\
\\
$3\omega_\text{L}$ &$-A_\text{H}^{\frac{1}{4}}A_\text{L}^{\frac{3}{4}}\tilde\xi_\text{p}/3$& $\hat b^3$\\
$\omega_\text{L}/3$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}(\tilde\xi_\text{p}^*)^3/3$& $\hat{b}$\\
$\omega_\text{L}$ &$-\chi \tilde\xi_\text{p}^2/4$& $\hat b^2 $\\
$\omega_\text{L}$ &$-2A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}\tilde\xi_\text{p}$& $\hat a^\dagger\hat a\hat{b}$\\
$\omega_\text{L}$ &$-A_\text{H}^{\frac{1}{4}}A_\text{L}^{\frac{3}{4}}\tilde\xi_\text{p}$& $\hat b^\dagger\hat b^2$\\
$\omega_\text{L}$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}(\tilde\xi_\text{p}^*)^3$& $\hat{b}$\\
$\omega_\text{L}$ &$-A_\text{H}^{\frac{1}{4}}A_\text{L}^{\frac{3}{4}}\tilde\xi_\text{p}$& $\hat b$\\
$\omega_\text{L}$ &$-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}\tilde\xi_\text{p}$& $\hat{b}$\\
\end{tabular}
\end{table}
We now move to the interaction picture through the unitary transformation
\begin{equation}
\begin{split}
U_\text{int}&=e^{i \hat{H}_{4,\text{diag}}t/\hbar}\ ,
\end{split}
\end{equation}
The Hamiltonian $\hat{H}_{4,\text{diag}}\equiv\hat{H}_0$ is diagonal in the Fock-state basis $\left\{\ket{j,n}\right\}_{\substack{n=0,1,2,.. \\ j=g,e,f,..}}$
\begin{equation}
\begin{split}
\hat{H}_0&=\sum_{\substack{n=0,1,2,.. \\ j=g,e,f,..}}\hbar\epsilon_{j,n}\ket{j,n}\bra{j,n}\ ,\\
\text{where }\epsilon_{j,n}&= n\omega_\text{L} - \frac{A_\text{L}}{2\hbar}\left(n^2-n\right)\\
&+j\omega_\text{H} - \frac{A_\text{H}}{2\hbar}\left(j^2-j\right)\\
&-nj\chi/\hbar\ .
\end{split}
\end{equation}
To determine $\hat H_\text{p}$ in this frame, it suffices to know the expression of annihilation operators in this frame.
We will take as an example the term we use for cooling, which reads in the interaction picture
\begin{equation}
\begin{split}
&U_\text{int}\left(-A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}} (\tilde\xi_\text{p}^*)^2\hat{a}^2\hat b^\dagger\right) U_\text{int}^\dagger +h.c. \\
= -A_\text{H}^{\frac{3}{4}}&A_\text{L}^{\frac{1}{4}} (\tilde\xi_\text{p}^*)^2(U\hat{a}U^\dagger)^2 (U\hat bU^\dagger)^\dagger +h.c.\ .
\end{split}
\label{eq:cooling_out_of_interaction_picture}
\end{equation}
Since $\hat H_0$ is diagonal, exponentiating it only requires exponentiating each of the diagonal elements, and the annihilation operators in the interaction picture are
\begin{equation}
\begin{split}
U_\text{int}\hat{a}U_\text{int}^\dagger &=\sum_{\substack{n=0,1,.. \\ j=g,e,..}} \sqrt{j+1}e^{-i(\epsilon_{j+1,n} - \epsilon_{j,n})t}\ket{j,n}\bra{j+1,n}\ ,\\
U_\text{int}\hat{b}U_\text{int}^\dagger &=\sum_{\substack{n=0,1,.. \\ j=g,e,..}} \sqrt{n+1}e^{-i(\epsilon_{j,n+1} - \epsilon_{j,n})t}\ket{j,n}\bra{j,n+1}\ .
\end{split}
\label{eq:ab_interaction_picture}
\end{equation}
Note that if the system were harmonic, these expressions would simplify to $e^{-i\omega_\text{H} t}\hat a$ and $e^{-i\omega_\text{L} t}\hat b$.
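These expressions can be verified numerically: since $\hat H_0$ is diagonal, $U_\text{int}$ is obtained by exponentiating its diagonal, and each matrix element of $U_\text{int}\hat a U_\text{int}^\dagger$ carries the phase $e^{-i(\epsilon_{j+1,n}-\epsilon_{j,n})t}$. A sketch with hypothetical parameters ($\hbar=1$):

```python
import numpy as np

# Hypothetical parameters (rad/s), hbar = 1
wH, wL = 2*np.pi*7e9, 2*np.pi*150e6
AH, AL, chi = 2*np.pi*200e6, 2*np.pi*0.5e6, 2*np.pi*20e6
N = 3                                    # levels kept per mode

def eps(j, n):
    """Diagonal angular frequencies eps_{j,n} of H_0 (j = 0,1,2 <-> g,e,f)."""
    return (n*wL - 0.5*AL*(n*n - n)
            + j*wH - 0.5*AH*(j*j - j)
            - n*j*chi)

a1 = np.diag(np.sqrt(np.arange(1, N)), 1)
A = np.kron(a1, np.eye(N))               # high-mode annihilation operator
E = np.array([eps(j, n) for j in range(N) for n in range(N)])

t = 1e-10                                # an arbitrary time
U = np.diag(np.exp(1j*E*t))              # U_int = exp(i H_0 t), H_0 diagonal
A_int = U @ A @ U.conj().T

# <j,n| U a U^dag |j+1,n> = sqrt(j+1) exp(-i (eps_{j+1,n} - eps_{j,n}) t)
for j in range(N - 1):
    for n in range(N):
        lhs = A_int[j*N + n, (j+1)*N + n]
        rhs = np.sqrt(j + 1)*np.exp(-1j*(eps(j+1, n) - eps(j, n))*t)
        assert abs(lhs - rhs) < 1e-12
```

Setting $A_\text{H}=A_\text{L}=\chi=0$ in this sketch recovers the harmonic-limit phase $e^{-i\omega_\text{H}t}$ noted above.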
If we substitute Eqs.~(\ref{eq:ab_interaction_picture}) into Eq.~(\ref{eq:cooling_out_of_interaction_picture}), one of the terms we obtain is
\begin{equation}
\begin{split}
-\hbar ge^{i\left(\omega_\text{p}-\left(2\omega_\text{H}-A_\text{H}/\hbar-\omega_\text{L}\right)\right)t}&\ket{g,1}\bra{f,0} +h.c.\ ,
\end{split}
\end{equation}
where we defined the interaction strength
\begin{equation}
g=\sqrt{2}A_\text{H}^{\frac{3}{4}}A_\text{L}^{\frac{1}{4}}|\xi_\text{p}|/\hbar\ .
\end{equation}
By choosing the pump frequency $\omega_\text{p} = 2\omega_\text{H} - A_\text{H}/\hbar - \omega_\text{L}$, the term becomes time-independent, making it more relevant than the other terms of $\hat H_\text{p}$, as we will derive next.
More generally, we can engineer the cooling interactions
\begin{equation}
\begin{split}
&-\hbar g\sqrt{n+1}\ket{f,n}\bra{g,n+1} +h.c.\ ,
\end{split}
\label{eq:cooling_interaction_n}
\end{equation}
by choosing the pump frequencies
\begin{equation}
\omega_\text{p} = 2\omega_\text{H}-2n\chi/\hbar - A_\text{H}/\hbar - \omega_\text{L}\ .
\end{equation}
This is the interaction used in all experiments presented in the last three figures of the main text.
Cooling by driving the $|g,1\rangle\leftrightarrow|e,0\rangle$ transition may seem like a more natural choice, but it is a two pump-photon process (due to four-wave mixing selection rules), and hence requires higher pumping power.
Additionally, due to its higher energy, the $|f,0\rangle$ state has a lower thermal occupation than $|e,0\rangle$.
As discussed below, high pump powers and thermal occupation of the qubit place strong limitations on the cooling efficiency.
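For concreteness, the following sketch (hypothetical numbers, $\hbar=1$) evaluates the cooling-pump frequencies for the first few Fock states and the corresponding coupling rate $g=\sqrt{2}A_\text{H}^{3/4}A_\text{L}^{1/4}|\tilde\xi_\text{p}|/\hbar$, with $\tilde\xi_\text{p}$ taken from the far-detuned expression derived above:

```python
import numpy as np

# Hypothetical parameters (rad/s), hbar = 1
wH, wL = 2*np.pi*7e9, 2*np.pi*150e6
AH, AL, chi = 2*np.pi*200e6, 2*np.pi*0.5e6, 2*np.pi*20e6
eps_p = 2*np.pi*50e6                     # pump strength

def wp_cooling(n):
    """Pump frequency selecting the |f,n><g,n+1| cooling interaction."""
    return 2*wH - 2*n*chi - AH - wL

def xi_p(wp):
    """Displaced-frame pump amplitude (far-detuned, t >> 1/kappa limit)."""
    return eps_p*(1.0/(wp - wH) + 1.0/(wp + wH))

def g_rate(wp):
    """Four-wave-mixing coupling rate g, hbar = 1."""
    return np.sqrt(2)*AH**0.75*AL**0.25*abs(xi_p(wp))

rates = {n: g_rate(wp_cooling(n)) for n in range(3)}
# Successive Fock states need pump frequencies stepped down by 2*chi
assert np.isclose(wp_cooling(0) - wp_cooling(1), 2*chi)
assert all(r > 0 for r in rates.values())
```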
Rather than being lowered, the number of excitations in the low mode can also be raised using interactions of the form
\begin{equation}
\begin{split}
&-\hbar g\sqrt{n+1}\ket{f,n+1}\bra{g,n} +h.c.\ ,
\label{eq:raising_interaction}
\end{split}
\end{equation}
which are realized by choosing the pump frequencies
\begin{equation}
\omega_\text{p} = 2\omega_\text{H}-2(n+1)\chi/\hbar - A_\text{H}/\hbar + \omega_\text{L}\ .
\end{equation}
\subsubsection{Derivation of cooling rate}
\label{sec:cooling_rate}
In this section we focus on the cooling interaction of Eq.~(\ref{eq:cooling_interaction_n}), however the methodology described is generalizable to all interaction terms.
The objective of this section is to translate the interaction term derived previously into a cooling rate for the low mode.
We assume that this interaction is sufficiently weak to enable us to perform first-order perturbation theory, considering the high mode as a fluctuating quantum noise source $\hat{F}_H$ perturbing the low mode following App.~B.1 of Ref.~\cite{clerk2010introduction}.
An initial state of the low mode $|n\rangle$ will evolve following
\begin{equation}
\begin{split}
|\psi(t)\rangle=|n\rangle
+i\sqrt{n}g\left(\int_0^t d\tau e^{i\Delta \tau}\hat{F}_H (\tau) \ket{n-1}\bra{n}\right)|n\rangle\
\end{split}
\end{equation}
where $\hat{F}_H (\tau) = \left(\ket{f}\bra{g}\right) (\tau)$ is treated as an independent noise source acting on the Hilbert space of the high mode.
We consider the transition to be off-resonantly driven, such that the time-dependence in the interaction picture is not completely eliminated and the interaction term rotates at
\begin{equation}
\Delta = \omega_\text{p} - \left(2\omega_\text{H}-2n\chi/\hbar - A_\text{H}/\hbar - \omega_\text{L}\right)\ .
\end{equation}
The probability amplitude of finding the low mode in $|n-1\rangle$ is
\begin{equation}
\langle n-1|\psi(t)\rangle =i\sqrt{n}g\int_0^t d\tau e^{i\Delta \tau}\left(\ket{f}\bra{g}\right) (\tau) \ ,
\end{equation}
leading to a probability
\begin{equation}
\begin{split}
&|\langle n-1|\psi(t)\rangle|^2 =\langle n-1|\psi(t)\rangle^\dagger \langle n-1|\psi(t)\rangle\\
&=ng^2\int_0^t\int_0^t d\tau_1 d\tau_2 e^{i\Delta (\tau_2-\tau_1)} \left(\ket{f}\bra{g}\right)^\dagger (\tau_1)\left(\ket{f}\bra{g}\right) (\tau_2)\ .
\label{proba}
\end{split}
\end{equation}
Note that $|\langle n-1|\psi(t)\rangle|^2$ is still a quantum operator acting on the high-mode Hilbert space.
To obtain a classical probability, we now calculate its expectation value $\langle . \rangle_H$, assuming that the high mode evolves in its steady state under thermal effects and dissipation
\begin{equation}
\begin{split}
&p_{n\rightarrow n-1}(t)=\langle|\langle n-1|\psi(t)\rangle|^2\rangle_H\\
&=ng^2\int_0^t\int_0^t d\tau_1 d\tau_2 e^{i\Delta (\tau_2-\tau_1)}\langle\left(\ket{f}\bra{g}\right)^\dagger (\tau_1)\left(\ket{f}\bra{g}\right) (\tau_2)\rangle_H\ .
\end{split}
\end{equation}
As in Appendix A.2 of \cite{clerk2010introduction}, we transform the double integral $S$ to
\begin{equation}
\begin{split}
S &= \int_0^t d\tau_1 \int_0^t d\tau_2 e^{i\Delta (\tau_2-\tau_1)}\langle\left(\ket{g}\bra{f}\right) (\tau_1)\left(\ket{f}\bra{g}\right) (\tau_2)\rangle_H\\
&=\int_0^t dT \int_{-B(T)}^{B(T)} d\tau e^{-i\Delta \tau}\langle\left(\ket{g}\bra{f}\right) (T + \tau/2)\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\left(\ket{f}\bra{g}\right) (T - \tau/2)\rangle_H\ ,\\
&\text{where }B(T) = \begin{cases} 2T & \text{if } T< t/2\ ,\\ 2(t-T) & \text{if } T> t/2\ . \end{cases}
\end{split}
\end{equation}
For time separations much longer than the decay time of the high mode, $\tau\gg 1/\kappa$, the two time-dependent high-mode operators are uncorrelated and the integrand vanishes (see Appendix A.2 of \cite{clerk2010introduction}).
We can therefore extend the range of the inner integral to $\pm \infty$ when estimating the probability at a time $t\gg1/\kappa$:
\begin{equation}
\begin{split}
S =\int_0^t dT \int_{-\infty}^{+\infty} d\tau e^{-i\Delta \tau}\langle&\left(\ket{g}\bra{f}\right) (T + \tau/2)\\
\times&\left(\ket{f}\bra{g}\right) (T - \tau/2)\rangle_H\ .\\
\end{split}
\end{equation}
Using time-translation invariance, we can remove the dependence on $T$
\begin{equation}
\begin{split}
S &=\int_0^t dT \int_{-\infty}^{+\infty} d\tau e^{-i\Delta \tau} \langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle_H\\
&=t\int_{-\infty}^{+\infty} d\tau e^{-i\Delta \tau}\langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle_H\ ,
\end{split}
\end{equation}
such that the rate becomes time-independent
\begin{equation}
\begin{split}
\Gamma_{n\rightarrow n-1} &= p_{n\rightarrow n-1}(t)/t\\
&=ng^2\int_{-\infty}^{+\infty} d\tau e^{-i\Delta \tau} \langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle_H\ .\\
\end{split}
\end{equation}
Using time-translation invariance, we find that for negative values of $\tau$,
\begin{equation}
\begin{split}
\langle\left(\ket{g}\bra{f}\right)& (-|\tau|)\left(\ket{f}\bra{g}\right) (0)\rangle_H\\
& = \langle\left(\ket{g}\bra{f}\right) (0)\left(\ket{f}\bra{g}\right) (|\tau|)\rangle_H \\
& = \langle\left(\ket{g}\bra{f}\right) (|\tau|)\left(\ket{f}\bra{g}\right) (0)\rangle^*_H\ ,
\end{split}
\end{equation}
leading to
\begin{equation}
\begin{split}
\Gamma_{n\rightarrow n-1} &=ng^2\int_{0}^{\infty} d\tau e^{-i\Delta \tau}\langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle_H\\
&+ng^2\int_{0}^{\infty} d\tau e^{+i\Delta \tau}\langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle^*_H\\
=2ng^2 &\text{Re}\left(\int_{0}^{\infty} d\tau e^{-i\Delta \tau} \langle\left(\ket{g}\bra{f}\right) (\tau)\left(\ket{f}\bra{g}\right) (0)\rangle_H\right)\ .\\
\end{split}
\end{equation}
In the steady state of the system, the quantum regression theorem reduces the expression to
\begin{equation}
\Gamma_{n\rightarrow n-1} = 2ng^2\text{Re}\left(\int_{0}^{\infty} d\tau e^{-i\Delta \tau}~\text{Tr}\left[\ket{g}\bra{f}e^{\mathcal{L}\tau}\ket{f}\bra{g}\hat\rho\right]\right)\ ,
\end{equation}
where $\hat\rho$ is the steady-state density matrix of the high mode and $e^{\mathcal{L}t}$ its propagator, a function which takes a density matrix as an input and evolves it up to a time $t$ following the Lindblad equation.
Truncating the high mode to a three-level system and including dissipation and thermal effects, we can calculate this trace analytically using the QuantumUtils Mathematica library:
\begin{equation}
\text{Tr}\left[\ket{g}\bra{f}e^{\mathcal{L}t}\ket{f}\bra{g}\hat\rho\right]=P_ge^{-\kappa t(1+\frac{3}{2}n_\text{th}^{(H)})}\ .
\end{equation}
By only considering dissipation and thermalization, we have assumed that an excitation cannot be driven back from $\ket{f,n-1}$ to $\ket{g,n}$ by the pump, \textit{i.e.,} we assume $2\kappa\gg \sqrt{n}g$, so that we are far from the strong-coupling regime.
After integration, we obtain
\begin{equation}
\Gamma_{n\rightarrow n-1} = \frac{2ng^2P_g}{\kappa(1+\frac{3}{2}n_\text{th}^{(H)})}\frac{1}{1+\left(\frac{\Delta}{\kappa}\right)^2}\ .
\label{eq:rate_down}
\end{equation}
Following the same method for the Hermitian conjugate of this interaction term, we also obtain
\begin{equation}
\Gamma_{n-1\rightarrow n} = \frac{2ng^2P_f}{\kappa(1+\frac{3}{2}n_\text{th}^{(H)})}\frac{1}{1+\left(\frac{\Delta}{\kappa}\right)^2}\ ,
\label{eq:rate_up}
\end{equation}
showing that, if the $\ket{f}$ level is populated, there is a probability for the pump to raise the number of excitations in the low mode rather than lower it.
Here $P_g$ and $P_f$ denote the steady-state populations of the ground and second-excited states of the high mode, respectively.
The same calculation can be performed for the raising interaction, which yields identical rates but with $P_g$ and $P_f$ interchanged.
A good figure of merit of the cooling efficiency is then to compare this rate with $\gamma$, yielding the cooperativity
\begin{equation}
C = \frac{\Gamma_{1\rightarrow 0}}{\gamma} =\frac{2g^2}{\kappa\gamma(1+\frac{3}{2}n_\text{th}^{(H)})}\frac{1}{1+\left(\frac{\Delta}{\kappa}\right)^2}\ .
\label{eq:full_cooperativity}
\end{equation}
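As a quick numerical check, the rate and cooperativity above can be transcribed directly; the sketch below is plain Python with illustrative parameter values (not our device's), encoding Eqs.~(\ref{eq:rate_down}) and (\ref{eq:full_cooperativity}).

```python
def gamma_down(n, g, kappa, P_g, n_th_H=0.0, delta=0.0):
    # Cooling rate Gamma_{n -> n-1} of Eq. (rate_down): a Lorentzian of
    # width kappa in the detuning delta, scaled by the high-mode
    # ground-state population P_g.
    return (2 * n * g**2 * P_g) / (kappa * (1 + 1.5 * n_th_H)) \
        / (1 + (delta / kappa)**2)

def cooperativity(g, kappa, gamma_L, n_th_H=0.0, delta=0.0):
    # Eq. (full_cooperativity), i.e. Gamma_{1->0}/gamma_L with P_g ~ 1.
    return gamma_down(1, g, kappa, 1.0, n_th_H, delta) / gamma_L
```

At $\Delta=\kappa$ the rate drops to half its resonant value, as expected for a Lorentzian of width $\kappa$.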
\subsubsection{Semi-classical description of the cooling process}
With the cooling rate above, we can construct a semi-classical set of rate equations describing the competition between thermalization and cooling.
They correspond to the diagonal part of a Lindblad equation, and equate the population leaving and arriving at a given state of the low mode.
We restrict ourselves to the driving of $\ket{f,0}\leftrightarrow\ket{g,1}$ as in the experiment of Fig.~2, where these equations can be written as
\begin{equation}
\dot P_0 = P_1\left(\gamma CP_g + \gamma (n_\text{th}+1)\right) - P_0\left(\gamma CP_f + \gamma n_\text{th}\right),
\end{equation}
\begin{equation}
\begin{split}
\dot P_1 &= -P_1\left(\gamma CP_g + \gamma (n_\text{th}+1)\right) + P_0\left(\gamma CP_f + \gamma n_\text{th}\right)\\
& -2P_1\gamma n_\text{th} + 2P_2\gamma (n_\text{th}+1)\ ,
\end{split}
\end{equation}
and, for $n\ge2$
\begin{equation}
\begin{split}
\dot P_n &= -n \gamma (n_\text{th}+1) P_n + n\gamma n_\text{th}P_{n-1}\\
& -(n+1)P_n\gamma n_\text{th} + (n+1)P_{n+1}\gamma (n_\text{th}+1)\ .
\end{split}
\end{equation}
In steady state ($\dot P_n=0$ for all $n$), the solution is a function of $P_0$
\begin{equation}
\begin{split}
\frac{P_0}{P_1}&=\frac{CP_g + n_\text{th}+1}{CP_f + n_\text{th}} =A\ ,\\
\frac{P_n}{P_{n+1}}&=\frac{n_\text{th}+1}{n_\text{th}} = B\ \text{for}\ n\ge1\ .
\end{split}
\label{eq:cooling_formula}
\end{equation}
We reach a unique solution by imposing $\sum_nP_n=1$, which yields an expression for $P_0$
\begin{equation}
P_0 = \frac{
A
\left(A-1\right)
\left(B-1\right)
}{
B(A^2-1)
+ A(1-A)
}\ .
\label{eq:P0_with_thermal}
\end{equation}
This expression is used in Fig.~\ref{fig:S_version_fig_2} to show the temperature limited evolution of $P_0$ as a function of cooperativity.
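As a consistency check of Eq.~(\ref{eq:P0_with_thermal}), the sketch below (plain Python, with arbitrary illustrative parameters) builds the populations directly from the detailed-balance ratios of Eq.~(\ref{eq:cooling_formula}), normalizes them on a truncated ladder, and compares with the closed-form $P_0$.

```python
def p0_analytic(A, B):
    # Closed-form P0 of Eq. (P0_with_thermal)
    return A * (A - 1) * (B - 1) / (B * (A**2 - 1) + A * (1 - A))

def p0_numeric(C, Pg, Pf, n_th, N=2000):
    # Build populations from the detailed-balance ratios of
    # Eq. (cooling_formula): P0/P1 = A and Pn/P(n+1) = B for n >= 1,
    # then normalize on a ladder truncated at N levels.
    A = (C * Pg + n_th + 1) / (C * Pf + n_th)
    B = (n_th + 1) / n_th
    P = [1.0, 1.0 / A]            # unnormalized P0, P1
    for _ in range(N - 2):
        P.append(P[-1] / B)       # P(n+1) = Pn / B
    return P[0] / sum(P), A, B
```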
A more accurate description of the cooling process at high cooperativities comes from a numerical simulation taking the strong coupling limit and off-resonant driving of other four-wave mixing processes into account.
\subsection{Limiting factors to cooling}
\begin{figure}[ht!]
\includegraphics[width=0.5\textwidth]{figure_SI/S_version_fig_2.pdf}
\caption[Limiting factors to cooling.]{\textbf{Limiting factors to cooling. }
$P_0$ as a function of $C$, with dots showing the data points of Fig.~2.
The decrease of $P_0$ at large $C$ is not captured by the cooling limitation due to thermal population of the $\ket{f}$ state (dashed line) or the limit imposed by strong coupling (dotted line), where the pump hybridizes the $\ket{g,1}$ and $\ket{f,0}$ states.
The solid curve shows a prediction considering the off-resonant driving of other sideband transitions by the pump: as the cooling process starts to saturate due to the strong coupling limit, the driving rate of transitions that increase the photon number overpowers the cooling effect.
}
\label{fig:S_version_fig_2}
\end{figure}
Here, we discuss three limiting factors to the cooling experiment (Fig.~2), ending with some notes on how to improve the device's cooling performance.
The first limiting factor is the thermal occupation of the high mode.
The pump tone drives the population from $|g,1\rangle$ to $|f,0\rangle$, but the reverse process also occurs since the $f$ level has a small thermal population $P_f\simeq0.006$ (see Sec.~\ref{sec:temperature}).
This leads to the limit $P_1/P_0 > P_f/P_g$ (dashed line in Fig.~\ref{fig:S_version_fig_2}), for which we have derived an exact analytical expression (Eq.~(\ref{eq:P0_with_thermal})).
The second limiting factor is strong coupling (similar to that encountered in optomechanical cooling~\cite{teufel2011circuit}), where the pump hybridizes the $|g,1\rangle$ and $|f,0\rangle$ states.
If $g$ exceeds the decay rate $2\kappa$, the population of state $|g,1\rangle$ will be driven to $|f,0\rangle$ and then transferred back to $|g,1\rangle$ without having the time to decay to $|e,0\rangle$.
To simulate this effect, we compute the steady state of the system by solving a Lindblad equation numerically (see Sec.~\ref{sec:cooling_simulation}).
The result is shown as a dotted line in Fig.~\ref{fig:S_version_fig_2}, which additionally takes into account the population of the high mode.
As with the thermal effect, the strong coupling limit only imposes an upper bound on $P_0$, rather than predicting its decrease at high $C$.
The third limiting factor is the off-resonant driving of other transitions.
When the cooling tone is detuned by $\Delta$ from its transition frequency, the cooperativity acquires a factor $1/\left(1+\Delta^2/\kappa^2\right)$ (Eq.~(\ref{eq:rate_up})).
A similar formula applies to all other four-wave mixing processes, including raising interactions (Eq.~(\ref{eq:raising_interaction})).
If the latter are far-detuned, their off-resonant driving will have little impact on the system.
However, as the cooling process starts to saturate due to the previously discussed limiting factors, the driving of other transitions is still far from saturation and can overpower the cooling effect.
What ensues is a competition between off-resonantly driven transitions that cool the low mode and those that raise its photon occupation.
We simulate this by following the bootstrap step of the adaptive rotating-wave approximation method of Ref.~\cite{baker2018adaptive}, which offers a way to include the most relevant off-resonantly driven transitions to the system Hamiltonian (see Sec.~\ref{sec:cooling_simulation}).
The result is shown as the solid curve of Fig.~\ref{fig:S_version_fig_2}, which predicts both the maximum $P_0$ and the high-cooperativity behavior.
We emphasize that, except for a small shift on the calibrated cooperativity-axis, the theoretical curves do not correspond to a fit to the data, but rather constitute a prediction based on the independently determined dissipation rates, thermal occupations and circuit parameters.
From this simulation we extract that, at maximum $P_0=0.82$, the average photon number in the cooled resonator is $\bar n =0.65$.
Note that
\begin{equation}
\begin{split}
\bar n &= 0\times(P_{g0}+P_{e0}+P_{f0}+...)\\
&+1\times(P_{g1}+P_{e1}+P_{f1}+...)\\
&+2\times(P_{g2}+P_{e2}+P_{f2}+...)\\
&+...
\end{split}
\end{equation}
The first 10 most populated levels are: $P_{g0} = 0.736$, $P_{e0} = 0.067$, $P_{g1} = 0.036$, $P_{g2} = 0.028$, $P_{g3} = 0.028$, $P_{g4} = 0.022$, $P_{g5} = 0.017$, $P_{g6} = 0.014$, $P_{g7} = 0.011$, $P_{f0} = 0.009$, where $P_{j,n}$ refers to the occupation of state $\ket{j,n}$.
Taking only the contribution of these states into account in the above formula already leads to $\bar n = 0.51$, and including the occupation of all 50 simulated levels leads to $\bar n =0.65$.
Determining the ideal system parameters to improve cooling (and Fock-state stabilization fidelity) is not straightforward.
One path to improvement could lie in determining values of $A_H$ and $\chi$ which minimize the effect of off-resonant driving by moving the most problematic transitions away from the cooling frequency.
Another is to reach a higher ground-state occupation before being limited by strong coupling, which can only be achieved by reducing the resonator's dissipation $\gamma$.
Decreasing the high mode dissipation $\kappa$ is not necessarily beneficial: it diminishes off-resonant driving, but strong coupling would occur at smaller pump powers.
For our system, decreasing $\kappa$ in the simulation of Fig.~2C results in a lower ground-state occupation.
\clearpage
\onecolumngrid
\section{Numerical procedures}
\twocolumngrid
\subsection{Spectrum}
The eigenfrequencies of the system are determined by diagonalizing the system Hamiltonian.
Unless specified otherwise, we diagonalize the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}) with the junction non-linearity Taylor expanded to 8th order.
We consider 10 excitations in the high mode and 20 in the low mode, and have verified that extending the Hilbert space further only leads to negligible changes in the obtained spectrum.
This diagonalization also provides the dressed eigenstates $\ket{j,n}$, which are to be distinguished from the bare eigenstates $\widetilde{\ket{j,n}}_{\substack{n=0,1,2,.. \\ j=g,e,f,..}}$.
\subsection{Microwave reflection}
\label{sec:numerics_S11}
In order to compute the microwave reflection of the device, we solve a Lindblad equation using Qutip~\cite{johansson2012qutip}.
The Hamiltonian is written in the dressed basis defined above; it is hence diagonal, with entries corresponding to the eigenfrequencies obtained in the diagonalization.
We consider 5 high-mode excitations and 10 low-mode excitations.
We add the drive term $i\hbar\epsilon_d(\hat a^\dagger -\hat a)$ defined in the dressed basis, and move to the frame rotating at the drive frequency $\omega_d$ by adding $-\hbar\omega_d\hat a ^\dagger \hat a$.
We add jump operators defined in the dressed basis by
\begin{equation}
\begin{split}
(n_\text{th}^{(H)}\kappa)^{\frac{1}{2}}\hat a^\dagger\ ,\ ((n_\text{th}^{(H)}+1)\kappa)^{\frac{1}{2}}\hat a\ ,\\
(n_\text{th}\gamma)^{\frac{1}{2}}\hat b^\dagger\ , \ ((n_\text{th}+1)\gamma)^{\frac{1}{2}}\hat b\ ,
\label{eq:collapse_ops}
\end{split}
\end{equation}
to describe dissipation and thermal effects.
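In our implementation these jump operators are QuTiP objects in the dressed basis; a minimal NumPy stand-in (truncation sizes and rates below are placeholders, not our device parameters) illustrates their construction on the joint Hilbert space.

```python
import numpy as np

def destroy(N):
    # Truncated annihilation operator on N levels
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

def collapse_ops(N_H, N_L, kappa, gamma, n_th_H, n_th_L):
    # Jump operators of Eq. (collapse_ops) on the joint Hilbert space:
    # thermal excitation (a^dag, b^dag) and decay (a, b) of each mode.
    a = np.kron(destroy(N_H), np.eye(N_L))   # high mode
    b = np.kron(np.eye(N_H), destroy(N_L))   # low mode
    return [np.sqrt(n_th_H * kappa) * a.conj().T,
            np.sqrt((n_th_H + 1) * kappa) * a,
            np.sqrt(n_th_L * gamma) * b.conj().T,
            np.sqrt((n_th_L + 1) * gamma) * b]
```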
Finally, we compute the expectation value of $\hat S_{11}=1-\frac{\kappa_\text{ext}}{\epsilon_\text{d}}\hat{a}$ for different drive frequencies.
As shown in Fig.~\ref{fig:S_temperature}, this computation is in excellent agreement with the sum of Lorentzian formula of Eq.~(\ref{eq:sum_of_lorentzians}).
\subsection{Cooling simulation}
\label{sec:cooling_simulation}
We use a similar method for the adaptive rotating-wave approximation (aRWA) simulation of Fig.~2.
We start with the same diagonal Hamiltonian.
We denote by $\omega_{j,n}$ the eigenfrequency of the dressed eigenstates $\ket{j,n}$.
As a result of the collapse operators of Eq.~(\ref{eq:collapse_ops}), a dressed state of the system $\ket{j,n}$ will have a total decay rate to other states of the system
\begin{equation}
\begin{split}
\Gamma_{j,n} = (j+1)(n_\text{th}^{(H)}\kappa) +j((n_\text{th}^{(H)}+1)\kappa)\\ +(n+1)(n_\text{th}\gamma) +n((n_\text{th}+1)\gamma)\ .
\end{split}
\end{equation}
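In code, this total decay rate is a one-liner; the sketch below (plain Python) simply transcribes the expression above, with the test values chosen for illustration.

```python
def total_decay_rate(j, n, kappa, gamma, n_th_H, n_th_L):
    # Rate at which dressed state |j, n> is left under the collapse
    # operators: thermal excitation and decay of the high mode (kappa
    # terms) and of the low mode (gamma terms).
    return ((j + 1) * n_th_H * kappa + j * (n_th_H + 1) * kappa
            + (n + 1) * n_th_L * gamma + n * (n_th_L + 1) * gamma)
```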
Following Ref.~\cite{baker2018adaptive}, we can then estimate the impact of a pump tone at a frequency $\omega_p$ and driving rate $\epsilon_p$ on the steady state of the system.
Two states $\ket{k}=\ket{j,n}$ and $\ket{k'}=\ket{j',n'}$ will be coupled by this pump and, to first order in $\epsilon_p$, the only change in the steady-state density matrix will be in its off-diagonal element
\begin{equation}
\rho_{kk'} = \frac{V_{kk'}(P_{k'}-P_k)}{(\omega_{k'}-\omega_k)-\omega_p+i(\Gamma_k+\Gamma_{k'})/2}\ ,
\label{eq:ranking_parameter}
\end{equation}
where $P_{k}$ is the occupation of state $\ket{k}$ under the collapse operators of Eq.~(\ref{eq:collapse_ops}).
The dipole moment $V_{kk'} = \bra{k}\epsilon_p(\tilde a + \tilde a^\dagger)\ket{k'}$ is computed using annihilation $\tilde a$ and creation operators $\tilde a^\dagger$ defined in the bare basis.
The transitions between all the states are then ranked with decreasing $|\rho_{kk'}|$ (\textit{i.e.} decreasing relevance).
The most relevant terms are added in the form $\hbar V_{kk'}\ket{k}\bra{k'}$ to the Hamiltonian which is moved to the rotating frame in which states $\ket{k}$ and $\ket{k'}$ are resonant.
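The ranking step can be sketched as follows (plain Python with NumPy; the state data passed in are illustrative toy values, not our device's):

```python
import numpy as np

def rank_transitions(omega, P, Gamma, V, omega_p):
    # Rank pump-driven transitions by |rho_kk'| (Eq. (ranking_parameter)).
    # omega, P, Gamma: dressed-state eigenfrequencies, occupations and
    # decay rates; V[k, kp]: pump dipole moments; omega_p: pump frequency.
    scored = []
    for k in range(len(omega)):
        for kp in range(k + 1, len(omega)):
            if V[k, kp] == 0:
                continue                      # no dipole moment, skip
            rho = V[k, kp] * (P[kp] - P[k]) / (
                (omega[kp] - omega[k]) - omega_p
                + 1j * (Gamma[k] + Gamma[kp]) / 2)
            scored.append((abs(rho), (k, kp)))
    scored.sort(reverse=True)                 # most relevant first
    return [pair for _, pair in scored]
```

A transition resonant with the pump and connecting states of very different occupation ranks first, as expected.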
In Fig.~\ref{fig:S_version_fig_2}, we perform this calculation for $\omega_p = \omega_{f,0}-\omega_{g,1}$.
We show both the result of including a maximum number of transitions (465) and a single transition.
Only 465 of the 650 transitions with a non-zero dipole moment could be included, due to limitations in the construction of the rotating frame; for more details see Ref.~\cite{baker2018adaptive}.
\begin{figure*}[ht!]
\includegraphics[width=0.75\textwidth]{figure_SI/tree.pdf}
\caption[Simulation of off-resonantly driven transitions in cooling experiment.]{\textbf{Simulation of off-resonantly driven transitions in cooling experiment. }
\textbf{A: }Ground-state occupation $P_0$ as a function of cooperativity in the cooling experiment of Fig.~2.
We show data (blue dots) and a simulation including the driving of 1, 2, 3, 16 (dashed curves) and 465 transitions (solid curve) using the adaptive rotating-wave approximation~\cite{baker2018adaptive} (aRWA).
The red symbols in panels A and B correspond to identical simulation points.
\textbf{B: }
Evolution of $P_0$ at a fixed cooperativity $C=46$.
For each point, we consider an additional transition.
\textbf{C: }
The transitions leading to the largest change in $P_0$ are displayed in the system energy diagram.
The transitions are colored red (blue) if adding them causes an increase (decrease) of $P_0$.
This distinction should be interpreted with care since the change in $P_0$ may be the result of multiple transitions interacting.
The thickness of the lines is logarithmically related to the change in $P_0$ that comes from adding the transition.
\textbf{D: }
Experimental spectrum (blue dots) and numerical predictions (sum of Lorentzian formula Eq.~(\ref{eq:sum_of_lorentzians})) at very high cooling powers.
We use the aRWA simulation to estimate the amplitudes of each Lorentzian peak.
Up to a cooperativity of $C=300$, the data is consistent with aRWA predictions.
However, there is a clear deviation between data and simulation at the highest powers (inset).
}
\label{fig:S_tree}
\end{figure*}
In Fig.~\ref{fig:S_tree}, we study how each transition affects the steady-state of the system.
Ranking using $|\rho_{kk'}|$ does not take into account that multiple transitions may interact.
To rank the relevance of the transitions in a more realistic way, we further rank the transitions following their impact on $P_0$.
We add transitions one by one in the simulation, recording for each transition the change $\Delta P_0$ that ensues.
We then rank the transitions with decreasing $|\Delta P_0|$, and repeat: we add the transitions in the new order one by one, rank them and start again until reaching convergence.
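The re-ranking loop can be sketched generically; here `p0_of` is a hypothetical stand-in for the steady-state solve (in practice the aRWA simulation), supplied by the caller.

```python
def rerank_by_impact(transitions, p0_of, max_iter=10):
    # Greedy re-ranking: add transitions one by one, record the change
    # Delta P0 each causes, re-order by decreasing |Delta P0|, and
    # repeat until the ordering is stable. p0_of(subset) stands in for
    # the steady-state solve with only that subset of transitions.
    order = list(transitions)
    for _ in range(max_iter):
        deltas, p_prev = [], p0_of([])
        for i in range(1, len(order) + 1):
            p = p0_of(order[:i])              # P0 with i transitions included
            deltas.append((abs(p - p_prev), order[i - 1]))
            p_prev = p
        new_order = [t for _, t in sorted(deltas, reverse=True)]
        if new_order == order:                # converged
            return order
        order = new_order
    return order
```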
This simulation is in good agreement with the data except at the very highest powers (see Fig.~\ref{fig:S_tree}D).
There are four possible limitations in our aRWA simulation that could be the cause of this discrepancy.
First, our implementation of aRWA does not take into account the AC-Stark shift of each level.
These AC-Stark shifts, relevant only at high powers, could bring certain transitions in or out of resonance with each other, modifying the final steady state of the system.
Secondly, we work with a Hilbert space of only 10 excitations in the low mode.
At the highest power, the simulation indicates an average low-mode occupation of $\sim5$, and a larger Hilbert space may be needed to reach more accurate results.
Thirdly, only first order transitions were considered in the ranking of the transitions, so no higher order processes, such as those shown in Fig.~\ref{fig:S_all_transitions}C, are taken into account.
Fourthly, we rank transitions with Eq.~(\ref{eq:ranking_parameter}) using $P_{k}$ the occupation of states $\ket{k}$ under the collapse operators of Eq.~(\ref{eq:collapse_ops}).
However $P_{k}$ may change under the effect of the driving, modifying the relevance of a given transition.
This can be taken into account as described in Ref.~\cite{baker2018adaptive}, but is too computationally expensive with the Hilbert-space size used here.
\clearpage
\onecolumngrid
\section{Background subtraction}
\twocolumngrid
\subsection{Network analysis}
Most of our data analysis relies on fitting a sum of complex Lorentzians (see Eq.~(\ref{eq:sum_of_lorentzians})) to the measured microwave reflection $S_{11}$ in both phase and amplitude.
The signal we acquire is affected by the imperfections of the microwave equipment used to carry the signals to and from the device.
These can be modeled by a two-port network with $s$-parameters: $s_{11}$ and $s_{22}$, corresponding to the reflections at the VNA ports and at the device respectively, and $s_{21}$ and $s_{12}$, corresponding to the attenuation chain from the VNA to the device and the amplification chain from the device back to the VNA.
\begin{figure}[ht!]
\includegraphics{figure_SI/background_substraction.pdf}
\caption[Effective microwave network]{\textbf{Effective microwave network. }
We do not directly have access to the reflection at our device $S_{11}$.
We measure an effective reflection coefficient $S_{11}^{\text{eff}}$, affected by the imperfect microwave equipment between the network analyzer and device described by an $s-$matrix.
}
\label{fig:background_substraction}
\end{figure}
We hence measure with our VNA the effective microwave reflection
\begin{equation}
S_{11}^{\text{eff}} = s_{11}+\frac{s_{12}s_{21}}{1-s_{22}S_{11}}S_{11}\ .
\end{equation}
Note that these $s$ parameters are generally frequency dependent.
We make the approximation $s_{11},s_{22}\ll s_{12},s_{21}$, meaning we attribute most of the measured microwave background to the frequency dependent transmission of the attenuation and amplification chain.
The signal we want to measure is then simply scaled by a so-called ``microwave background'' $s_{12}s_{21}$
\begin{equation}
S_{11}^{\text{eff}} \simeq s_{12}s_{21}S_{11}\ ,
\end{equation}
which we have to measure experimentally.
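In practice the de-embedding is a pointwise complex division of the measured trace by the calibration trace; a synthetic example (NumPy, with a made-up Lorentzian device response and cable background) illustrates it.

```python
import numpy as np

def deembed(S11_eff, background):
    # Remove the microwave background s12*s21 from the measured
    # reflection, valid under the approximation s11, s22 << s12, s21.
    # The background trace is the measurement at high probe power,
    # where the device response S11 = 1.
    return S11_eff / background

# Synthetic check: a Lorentzian device dip times a smooth background
f = np.linspace(-5, 5, 101)       # probe detuning (arbitrary units)
S11 = 1 - 0.8 / (1 + 2j * f)      # hypothetical device response
bg = 0.5 * np.exp(1j * 0.3 * f)   # hypothetical cable transmission
recovered = deembed(bg * S11, bg)
```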
\begin{figure}[ht!]
\includegraphics[width=0.45\textwidth]{figure_SI/power_dependence.pdf}
\caption[Probe power dependence]{
\textbf{High-probe-power behavior. A: }
$|S_{11}|$ as a function of probe frequency and probe power.
%
\textbf{B: }
At higher powers, the system starts to resonate at a different frequency, corresponding to the junction being replaced by an open circuit.
%
\textbf{C: }
Depth of the $n=0$ peak extracted from data (blue dots) and numerical steady-state calculation (see Sec.~\ref{sec:numerics_S11}).
%
As the probe driving rate exceeds $\kappa$, the peaks vanish.
%
We use the disappearance of peaks at the high power indicated by an arrow to acquire a microwave background that is subtracted (divided) in phase (amplitude) from all datasets.
%
\textbf{D: }
Population in the high mode as a function of probe power as extracted from simulation.
%
We used this information to choose the probing power indicated by an arrow for all other experiments.
%
It is as high as possible to increase signal to noise ratio, but low enough to not populate the high mode.
}
\label{fig:S_power}
\end{figure}
\subsection{Measuring the microwave background}
As shown in Fig.~\ref{fig:S_power}, when probing the system at high power the device response is $S_{11}=1$, allowing us to extract the microwave background $s_{12}s_{21}$.
This phenomenon is a consequence of super-splitting as explained in \cite{Bishop2009a}, which we will briefly summarize here.
To understand super-splitting, we truncate the high mode to a two-level system constituted of its first two levels $\ket{g}$ and $\ket{e}$.
On the Bloch sphere, the probe tone causes rotations around the y-axis, and $1-S_{11}$ corresponds to the projection of the state vector on the x-axis.
For driving rates faster than $\kappa$, the state vector rapidly rotates around the y-axis, yielding a zero projection on the x-axis, hence $S_{11}=1$ and no peak.
For driving rates slower than $\kappa$, the state vector will most likely decay before it can rotate appreciably around the y-axis, yielding a non-zero projection on the x-axis and a dip in the microwave reflection.
A signature of this effect is the splitting of the absorption peak into two at large probe powers.
Whilst our signal-to-noise ratio does not allow us to resolve this feature, it is present in the fitted simulation, supporting this explanation.
At even higher power, the system starts to resonate at a different frequency, corresponding to the junction being replaced by an open circuit when the current traversing the junction exceeds the critical current.
This effect is shown in the inset, Fig.~\ref{fig:S_power}B.
We use the disappearance of peaks at the high power indicated by the arrow ``calibrating power" to acquire a microwave background that is subtracted (divided) in phase (amplitude) from all datasets:
\begin{equation}
\frac{S_{11}^{\text{eff}}}{s_{12}s_{21}} \simeq S_{11}\ .
\end{equation}
\onecolumngrid
\section{Fitting}
\twocolumngrid
Here, we summarize our fitting routine.
We start by extracting $\gamma$ from the time-domain data, which will be used in the formula for the linewidth $\kappa_n$ in all subsequent fits.
By fitting the microwave reflection $S_{11}$ to a sum of Lorentzians (see Eq.~(\ref{eq:sum_of_lorentzians})), we get access to the peaks' linewidths and amplitudes, which allows us to determine $\kappa$, $\kappa_\text{ext}$ and $n_\text{th}^{(H)}$.
By fitting $S_{11}$ to the eigenfrequencies obtained from a diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}), we determine the values of the circuit elements.
The occupation of the low mode $n_\text{th}$ is determined separately for each individual experiment.
Each step is detailed in the subsections below.
\subsection{Low-frequency mode dissipation}
\begin{figure*}[ht!]
\includegraphics[width=1\textwidth]{figure_SI/cooling.pdf}
\caption[AC-Stark shift and cooperativity measurement]{\textbf{AC-Stark shift and cooperativity measurement. A: }
%
Normalized $|S_{11}|$ as a function of probe frequency and cooperativity in a single-pump cooling experiment.
%
The AC-Stark shift of the $n=0$ peak follows the fitted white dashed line.
%
%
\textbf{B: }
Extracted line-widths of peaks $n=0,1$.
%
Error bars correspond to the standard errors estimated from the least-squares fit.
%
%
Fluctuations in $\kappa_n$ result from fluctuations in $n_\text{th}$.
%
The onset of the strong-coupling regime, indicated by a red arrow, is seen through the increase in line-width of the $n=1$ peak.
%
Below this value, we can accurately extract $P_1$ from the amplitude of the $n=1$ peak.
%
\textbf{C: }
Ratio of extracted probabilities $P_2/P_1$ and $P_1/P_0$.
%
The former is constant $P_2/P_1 = n_\text{th}/(1+n_\text{th})$, whilst the latter is fitted to $P_1/P_0 = (n_\text{th}+CP_f)/(1+n_\text{th}+CP_g)$, allowing us to convert pump power (top x-axis) to cooperativity (bottom x-axis).
%
}
\label{fig:S_cooling}
\end{figure*}
We start by fitting the thermalization from the ground-state measured in time-domain (Fig.~\ref{fig:S_time_domain}A) to determine $\gamma$.
Since the line-width of the $S_{11}(t)$ peaks is a function of $\gamma$ and $n_\text{th}$, we start by postulating these two values to extract a first estimate of the time evolution of $P_n$.
By fitting the evolution of $P_n$ to the rate equation of Eq.~(6), we extract a new value for $\gamma$ and $n_\text{th}$.
We then repeat this process many times, each time using the new values $\gamma$ and $n_\text{th}$ to fit $S_{11}(t)$, until we converge to $\gamma/2\pi=23.5\cdot 10^3s^{-1}$.
The low-frequency mode dissipation can also be measured without recourse to time-domain experiments.
Knowledge of the power-dependent AC-Stark shift and of the cooperativity, both measured in a single-tone cooling experiment, is sufficient to extract $\gamma$.
We use this method to confirm our time-domain results, as well as to verify the theory developed in Sec.~\ref{sec:cooling_rate}.
First, we measure the AC-Stark shift of the $n=0$ peak, from which we extract the proportionality factor $\xi^2/P$ between pumping rate $\xi$ and pump power $P$ (Fig.~\ref{fig:S_cooling}A).
Secondly, we determine the power at which the strong-coupling regime arises (Fig.~\ref{fig:S_cooling}B).
Above this power, the line-width of the $n=1$ peak will rise as the state $\ket{g,1}$ hybridizes with $\ket{f,0}$ under the effect of the cooling pump.
Below this power, the line-width of the $n=1$ peak is approximately constant, and its height provides an accurate measure of $P_1$.
Thirdly, in this regime, we extract the probability ratios $P_2/P_1$ and $P_1/P_0$.
Following Eqs.~(\ref{eq:cooling_formula}), the former should remain constant $P_2/P_1 = n_\text{th}/(1+n_\text{th})$.
The latter, however, decreases with power, $P_1/P_0 = (n_\text{th}+CP_f)/(1+n_\text{th}+CP_g)$, and fitting this curve provides the conversion factor between cooperativity $C$ and power.
If we also know the anharmonicity $A_\text{H}$, cross-Kerr $\chi$, the high-mode occupation $n_\text{th}^{(H)}$ and dissipation rate $\kappa$, we can estimate the low-mode dissipation $\gamma = 2(\xi^2/P)/(C/P)A_\text{H}\chi/\kappa/(1+3n_\text{th}^{(H)}/2) \simeq 2\pi\cdot16\cdot 10^3s^{-1}$, close to the value obtained in time-domain.
The discrepancy is due to the inaccuracy of the relation $P_1/P_0 = (n_\text{th}+CP_f)/(1+n_\text{th}+CP_g)$, arising from the off-resonant driving of other four-wave mixing transitions.
\subsection{High-frequency mode dissipation and device temperature}
\label{sec:temperature}
Using $\gamma/2\pi=23\cdot 10^3s^{-1}$, we fit the spectra shown in Fig.~\ref{fig:S_temperature} to fix $\kappa$, $\kappa_\text{ext}$ and $n_\text{th}^{(H)}$.
Here, the fridge temperature is varied, and from a fit of Eq.~(\ref{eq:sum_of_lorentzians}) we extract $\kappa,\kappa_\text{ext}$, $n_\text{th}$ and $n_\text{th}^{(H)}$ at each temperature.
We took care to let the system thermalize for $\sim 10$ minutes at each temperature before starting measurements.
The linear scaling of low-mode temperature with fridge temperature, shown in Fig.~\ref{fig:S_temperature}B, confirms that we can extract a realistic mode temperature from the Bose-Einstein distribution.
A large difference in temperature is measured between low and high mode, which could be explained by the difference in external coupling to the feedline of the two modes.
We fix the values of $\kappa$, $\kappa_\text{ext}$ and $n_\text{th}^{(H)}$ to the lowest fridge temperature fit (Fig.~\ref{fig:S_temperature}C).
We leave $n_\text{th}$ as a free parameter in the other experiments, as it was found to vary by 10 to 20 percent on a time-scale of hours.
In the main text we quote the value of $n_\text{th}$ of the lowest point in the temperature sweep ($n_\text{th}=1.62$), but in the Fock state stabilization measurement, we measured $n_\text{th}=1.40$, in the cooling experiment $n_\text{th}=1.81$ and in the time-domain $n_\text{th}=1.37$.
These fluctuations are much smaller than the uncertainty in fitting $n_\text{th}$.
In both the cooling and Fock state stabilization experiments, $n_\text{th}$ was extracted from an initial measurement of $S_{11}$ in absence of pump tones.
\begin{figure*}[ht!]
\includegraphics{figure_SI/temperature.pdf}
\caption[Temperature dependence]{
\textbf{Temperature dependence. A: }
Normalized $|S_{11}|$ as a function of probe frequency and fridge temperature.
%
\textbf{B: }
Temperature of both modes, fit using Eq.~(\ref{eq:sum_of_lorentzians}), as a function of fridge temperature.
%
\textbf{C: }
Lowest-temperature data (blue) and two fits, one using Eq.~(\ref{eq:sum_of_lorentzians}) (black), and another fitting a numerical model as described in Sec.~\ref{sec:numerics_S11} (dashed red line).
%
Excellent agreement between both fits validates our method of fitting the spectrum with a sum of Lorentzian functions.
%
\textbf{D: }
Higher-temperature data (red) and fit (black) using Eq.~(\ref{eq:sum_of_lorentzians}).
}
\label{fig:S_temperature}
\end{figure*}
\subsection{Circuit parameters}
The frequencies of the system transitions (and hence the circuit parameters) are determined by fitting a numerical steady-state calculation of $S_{11}$ to the lowest-temperature data (Fig.~\ref{fig:S_cooling}C).
This simulation, described in \ref{sec:numerics_S11}, starts with a diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}).
In this fit we additionally impose that the transition frequency $\ket{g,0}\leftrightarrow\ket{g,1}$ match the value measured in two-tone spectroscopy (Fig.~\ref{fig:S_two_tone}A).
We further verify the values of $C_\text{H}$, $C_\text{L}$, $L_\text{J}$ and $L$, as well as the black-box circuit analysis of Sec.~\ref{sec:black_box}, by extracting $A_\text{H}$, $\chi$ and $\omega_\text{H}$ for a varying $L_\text{J}$.
The junction, or rather SQUID, inductance is modified by sweeping the flux threading it.
This is done by current-biasing a coil situated beneath our sample.
We show in Figs.~\ref{fig:S_flux}B,C,D the result of fitting a sum of Lorentzians to the flux-dependent spectrum (Fig.~\ref{fig:S_flux}A).
For each extracted parameter, we plot the theoretical evolution with flux obtained through a numerical diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}) (Taylor expanded to the 8-th order), as well as the analytical expressions obtained from black-box quantization (Eqs.~(\ref{eq:wl},\ref{eq:Al},\ref{eq:wh},\ref{eq:Ah})).
The only discrepancy is between the numerical and analytical estimation of $A_\text{L}$ and $\chi$.
It arises due to a term obtained from the quartic non-linearity of the junction proportional to: $(\hat a ^\dagger \hat a +1) (\hat a ^\dagger + \hat a )(\hat b ^\dagger + \hat b ) + h.c.$.
This term resembles a beam-splitter interaction, which typically makes an oscillator more anharmonic when it is coupled to an oscillator more non-linear than itself.
The asymmetry of the SQUID dictating the dependence of $L_\text{J}$ on flux was a fit parameter in the construction of this figure and was found to be $20\%$.
This experiment also suffered from a number of flux jumps, where the transition frequency of the circuit suddenly jumped to a different value.
The flux was then swept until we recovered the same frequency before continuing the scan.
This data-set is thus assembled from 6 different measurements.
Therefore, an entire flux period was not measured, making the conversion between the current fed into the coil beneath the sample and the flux threading the SQUID a free parameter.
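The way $A_\text{H}$, $A_\text{L}$ and $\chi$ are read off an eigenvalue spectrum can be illustrated with a truncated two-mode Kerr Hamiltonian — a deliberately simplified, diagonal stand-in for Eq.~(\ref{eq:Hamiltonian_8th_order}), with parameter values from Table~\ref{tab:params} and sign conventions assumed:

```python
import numpy as np

N = 5                                    # Fock truncation per mode (assumption)
n = np.diag(np.arange(N, dtype=float))
I1 = np.eye(N)
nA = np.kron(n, I1)                      # high-mode number operator
nB = np.kron(I1, n)                      # low-mode number operator
Ifull = np.eye(N * N)

# Diagonal Kerr model in units of h x MHz (values from the table, signs assumed)
wH, wL, AH, AL, chi = 5911.0, 173.0, 192.0, 0.495, 21.29
H = wH * nA + wL * nB - AH * nA @ (nA - Ifull) - AL * nB @ (nB - Ifull) - chi * nA @ nB

E = np.diag(H)                           # already diagonal in the Fock basis
idx = lambda na, nb: na * N + nb

f_ge = E[idx(1, 0)] - E[idx(0, 0)]                                     # |g,0> -> |e,0>
anh_H = f_ge - (E[idx(2, 0)] - E[idx(1, 0)])                           # = 2*AH
xkerr = (E[idx(0, 1)] - E[idx(0, 0)]) - (E[idx(1, 1)] - E[idx(1, 0)])  # = chi
```

In the actual analysis the Hamiltonian is not diagonal (the 8th-order Taylor expansion retains off-diagonal junction terms), but the extraction of these three quantities from eigenvalue differences proceeds in the same way.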
\begin{figure*}[ht!]
\includegraphics[width=1\textwidth]{figure_SI/flux_scan.pdf}
\caption[Flux dependence of the system parameters]{
\textbf{Flux dependence of the system parameters. A: }
$|S_{11}|$ as a function of probe frequency and flux through the SQUID.
%
Dashed black lines correspond to flux jumps.
\textbf{B,C,D: }
Eigenfrequencies, anharmonicities and cross-Kerr coupling of the system as a function of flux.
%
Dots are extracted through a sum-of-Lorentzians fit of the dataset in A.
%
Full curves correspond to a numerical diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}), Taylor expanded to the 8-th order.
%
Dashed lines correspond to analytical formulas obtained from black-box quantization.
%
The single data point corresponding to the low-mode frequency is extracted from the sideband transition frequencies (Fig.~\ref{fig:S_two_tone}).
}
\label{fig:S_flux}
\end{figure*}
\onecolumngrid
\clearpage
{\renewcommand{\arraystretch}{1.2}
\begin{table}[]
\centering
\caption{Fitted system parameters}
\begin{tabular}{L{6cm}L{1.5cm}L{3cm}L{3cm}}
\\
Quantity & Symbol & Value & Equation \\ \hline
\\\multicolumn{4}{c}{Hamiltonian parameters}\\\\
Dressed high-mode frequency ($\ket{g,0}\rightarrow\ket{e,0}$) & $\omega_H$ & $2\pi\times$ 5.911~GHz & $\bar\omega_H+A_H/\hbar+\chi/2\hbar$ \\
Dressed low-mode frequency ($\ket{g,0}\rightarrow\ket{g,1}$) & $\omega_L$ & $2\pi\times$ 173~MHz & $\bar\omega_L+A_L/\hbar+\chi/2\hbar$\\
\\
Bare high-mode frequency & $\bar{\omega}_H$ & $2\pi\times$ 6.113~GHz & $\sqrt{\frac{L+L_J}{LL_JC_H}}$ \\
Bare low-mode frequency & $\bar{\omega}_L$ & $2\pi\times$ 182~MHz & $\frac{1}{\sqrt{(L+L_J)C_L}}$ \\
\\
High-mode anharmonicity & $A_H$ & $h\times$ 192~MHz & $\frac{e^2}{2C_H}\left(\frac{L}{L+L_J}\right)$, \\
Low-mode anharmonicity & $A_L$ & $h\times$ 495 kHz & $\frac{e^2}{2C_L}\left(\frac{L_J}{L+L_J}\right)^3$, \\
Cross-Kerr & $\chi$ & $h\times$ 21.29~MHz & $2\sqrt{A_LA_H}$ \\
\\\multicolumn{4}{c}{Dissipation rates}\\\\
High-mode dissipation rate & $\kappa$ & $2\pi\times$ 3.70~MHz & \\
External coupling rate & $\kappa_\text{ext}$ & $2\pi\times$ 1.63~MHz & \\
Low-mode dissipation rate & $\gamma$ & $2\pi\times$ 23.50 kHz & \\
Low-mode external coupling rate & $\gamma_\text{ext}$ & $2\pi\times$ 1.99 Hz & \\
\\
High-mode quality factor & $Q_H$ & 1599 & $\omega_H/\kappa$ \\
High-mode external quality factor & $Q_H^{(\text{ext})}$ & 3617 & $\omega_H/\kappa_\text{ext}$ \\
Low-mode quality factor & $Q_L$ & 7348 & $\omega_L/\gamma$ \\
Low-mode external quality factor & $Q_L^{(\text{ext})}$ & 87 $\times 10^6$ & $Z_0\sqrt{\frac{C_L}{L+L_J}}\left(\frac{C_c}{C_L}\right)^2$ \\
\\\multicolumn{4}{c}{Thermal parameters}\\\\
High-mode temperature & $T_H$ & 112~mK & \\
Low-mode temperature & $T_L$ & 17~mK & \\
\\
High-mode occupation number & $n_\text{th}^{(H)}$ & 0.09 & $\frac{1}{e^{\frac{\hbar\omega_H}{k_BT_H}}-1}$ \\
Low-mode occupation number & $n_\text{th}$ & 1.62 & $\frac{1}{e^{\frac{\hbar\omega_L}{k_BT_L}}-1}$
\\\multicolumn{4}{c}{Circuit parameters}\\\\
Josephson energy & $E_J$ & $h\times$ 4.01~GHz & $\frac{\hbar^2\bar{\omega}_H^2\bar{\omega}_L^2}{8\left(\bar{\omega}_H\sqrt{A_L}+\bar{\omega}_L\sqrt{A_H}\right)^2}$ \\
Josephson inductance & $L_J$ & 41 nH & $\frac{\hbar^2}{4e^2E_J}$ \\
Low-mode capacitance & $C_L$ & 11.1 pF & $\frac{e^2 \sqrt{A_L} \bar{\omega}_H^3}{2 \left(\bar{\omega}_L\sqrt{A_H}+\bar{\omega}_H\sqrt{A_L}\right)^3}$ \\
High-mode capacitance & $C_H$ & 40.7~fF & $\frac{e^2 \bar{\omega}_L}{2 \left(\sqrt{A_HA_L} \bar{\omega}_H+A_H\bar{\omega}_L\right)}$ \\
High-mode inductance & $L$ & 28.2 nH & $\frac{2 \sqrt{A_H} \left(\bar{\omega}_L\sqrt{A_H}+\bar{\omega}_H\sqrt{A_L}\right)^2}{e^2\sqrt{A_L} \bar{\omega}_H^3 \bar{\omega}_L}$ \\
Coupling capacitor & $C_c$ & 0.95~fF & $C_H\sqrt{\frac{\kappa_\text{ext}LL_J}{Z_0(L+L_J)}} $ \\
Feedline impedance & $Z_0$ & 50~$\Omega$ & \\
\end{tabular}
\label{tab:params}
\end{table}
}
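As a consistency check, the closed-form expressions in the last block of the table can be evaluated numerically with the fitted circuit parameters (note that the bare high-mode frequency is set by $C_\text{H}$); doing so reproduces $\bar\omega_\text{H}/2\pi\simeq6.1\,$GHz, $\bar\omega_\text{L}/2\pi\simeq182\,$MHz and $L_\text{J}\simeq41\,$nH:

```python
import numpy as np

hbar = 1.054571817e-34      # J s
h = 6.62607015e-34          # J s
e = 1.602176634e-19         # C

# Fitted circuit parameters from the table
L, LJ, CL, CH = 28.2e-9, 41e-9, 11.1e-12, 40.7e-15
EJ = h * 4.01e9

wH_bar = np.sqrt((L + LJ) / (L * LJ * CH))   # bare high mode: C_H enters here
wL_bar = 1.0 / np.sqrt((L + LJ) * CL)        # bare low mode
LJ_check = hbar**2 / (4 * e**2 * EJ)         # Josephson inductance from E_J
```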
\pagebreak
\FloatBarrier
\twocolumngrid
\onecolumngrid
\section{Supplementary experimental data}
\subsection{Flux dependence of thermal and dissipation parameters}
The flux sweep shown in Fig.~\ref{fig:S_flux} also gives access to the temperature and dissipation rates of the modes as a function of flux, which are shown in Fig.~\ref{fig:S_flux_n_th_kappa}.
These are extracted from a fit of the sum of Lorentzians (Eq.~(\ref{eq:sum_of_lorentzians})), \textit{i.e.} from the line-widths and amplitudes of the measured peaks.
This relies on our estimation of $\gamma$, which is assumed to be a constant, and $n_\text{th}^{(H)}$, which is difficult to extract due to the low signal-to-noise ratio as well as the $\ket{e,0}\leftrightarrow\ket{f,0}$ peak crossing the $\ket{g,n}\leftrightarrow\ket{e,n}$ peaks.
To investigate the accuracy of these fits, we plot in Fig.~\ref{fig:S_flux_n_th_kappa}C the external quality factor of the high mode as a function of its frequency.
We find a clear mismatch with the behavior expected from our circuit analysis.
This indicates that we cannot confidently state that the temperature of the low mode and dissipation of the high mode fluctuate with flux as shown here, or even provide meaningful error bars.
Further analysis could take the form of time-domain measurements at each flux point to determine $\gamma$, or higher signal-to-noise measurements of the $\ket{e,0}\leftrightarrow\ket{f,0}$ transition to fix $n_\text{th}^{(H)}$.
\begin{figure*}[h!]
\includegraphics[width=0.8\textwidth]{figure_SI/flux_scan_decay_thermal.pdf}
\caption[Flux dependence of thermal and dissipation parameters]{
\textbf{Flux dependence of thermal and dissipation parameters. A: }
%
Low mode temperature, as a function of the low-mode resonance frequency.
%
\textbf{B,C: }Total and external quality factor of the high mode as a function of its frequency.
%
The dashed line in panel C corresponds to the expected behavior from our circuit analysis.
%
These parameters are extracted from a fit of Eq.~(\ref{eq:sum_of_lorentzians}) to the flux-dependent spectrum shown in Fig.~\ref{fig:S_flux}.
%
}
\label{fig:S_flux_n_th_kappa}
\end{figure*}
\subsection{Full time-dependent spectrum}
\begin{figure*}[ht!]
\includegraphics[width=0.8\textwidth]{figure_SI/S_TD.pdf}
\caption[Full time-dependent spectrum]{\textbf{Full time-dependent spectrum. }
%
Time and probe frequency dependence of $|S_{11}|$ for both ground-state cooling (\textbf{A}) and the one-photon-state stabilization (\textbf{B}).
%
By fitting these datasets using Eq.~(\ref{eq:sum_of_lorentzians}), in both frequency and time, we construct the plots shown in Fig.~4.
%
\textbf{C: }a line cut of dataset B (indicated by arrows in B) is shown as blue dots; the black line corresponds to a fit.
%
The relatively low signal-to-noise ratio is responsible for the large noise in probability of Fig.~4.
}
\label{fig:S_time_domain}
\end{figure*}
\FloatBarrier
\subsection{Four-wave mixing spectrum}
By measuring the spectrum whilst sweeping the frequency of a pump tone, we show in Fig.~\ref{fig:S_all_transitions} the multitude of four-wave mixing processes possible in this system.
Panel A, also shown in Fig.~3A, is particularly relevant to the cooling experiment, as one can see the relevant transitions lying next to the cooling transition $\ket{g,0}\leftrightarrow\ket{f,1}$.
We tested different combinations of raising and cooling four-wave mixing processes (panels B and C) for cooling and Fock-state stabilization, but these alternatives consistently produced lower state occupations than the results shown in the main text.
Two transitions in panel A are unexpected from a simple four-wave mixing approach to the system: $\ket{g,n}\leftrightarrow\ket{f,n+3}$ and $\ket{e,0}\leftrightarrow\ket{h,n+3}$.
These are six-wave mixing processes, which one could expect to be very weak.
However, in this system the cross-Kerr $\chi$ is a considerable fraction of $\omega_\text{L}$.
The usually neglected term of the quartic non-linearity of the junction, proportional to $\chi(2\hat a ^\dagger \hat a +1)(\hat b\hat b+\hat b^\dagger\hat b^\dagger)$, then leads to the dressed low-mode Fock state $\ket{n}$ having a significant overlap with the bare states $\ket{n\pm 2k}$, where $k$ is a positive integer.
The transition $\ket{g,0}\leftrightarrow\ket{f,3}$ is thus visible since $\ket{f,3}$ has a large overlap with $\ket{f,1}$ and $\ket{g,0}\leftrightarrow\ket{f,1}$ is an easily drivable four-wave mixing transition.
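This admixture can be checked numerically for the low mode alone: with the high mode in $\ket{g}$, the prefactor $2\hat a^\dagger\hat a+1$ reduces to 1, and the squeezing-like term couples bare Fock states $\ket{n}$ and $\ket{n\pm2}$. The sketch below (values in MHz from Table~\ref{tab:params}; truncation and the omission of $A_\text{L}$ are our simplifications) shows that the dressed $\ket{1}$ acquires an overlap of order $0.1$ with bare $\ket{3}$:

```python
import numpy as np

N = 30                                        # Fock truncation (assumption)
b = np.diag(np.sqrt(np.arange(1, N)), 1)      # low-mode annihilation operator
n_op = b.conj().T @ b
wL, chi = 173.0, 21.29                        # MHz

# Usually neglected squeezing-like term, high mode in |g> (prefactor chi)
H = wL * n_op + chi * (b @ b + b.conj().T @ b.conj().T)
vals, vecs = np.linalg.eigh(H)

dressed_1 = vecs[:, np.argmax(np.abs(vecs[1, :]))]   # eigenstate dominated by bare |1>
overlap_13 = abs(dressed_1[3])                        # admixture of bare |3>
```

First-order perturbation theory gives the same scale, $\chi\sqrt{6}/2\omega_\text{L}\approx0.15$; for higher-lying high-mode states the prefactor grows, making the overlap even larger.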
\begin{figure*}[ht!]
\includegraphics[width = 0.7\columnwidth]{figure_SI/all_transitions.pdf}
\caption[Measurement of four-wave mixing processes]{
\textbf{Measurement of four-wave mixing processes.}
$|S_{11}|$ as a function of probe frequency and of the frequency of a stronger pump tone.
%
Features in the data are indicated by arrows with position generated from the eigenvalues of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian_8th_order}), Taylor expanded to the 8-th order.
%
%
The AC-Stark shift is not considered in the computation of transition energies, and a constant high-mode frequency is assumed, as indicated by a black arrow on the x-axis, leading to slight mismatches between the transitions and the placed arrows.
%
In panel \textbf{B} dashed lines indicate the large avoided crossings observed when the pump tone is directly resonant with the transition frequencies of the high mode.
%
In panel \textbf{C}, the dashed line indicates $\omega_p=\omega_d/2$; the features arising there are due to the second harmonic of the pump (issuing from our microwave generator) driving the high mode.
}
\label{fig:S_all_transitions}
\end{figure*}
\subsection{Low-frequency spectrum}
We monitor the height of the $\ket{g,0}\leftrightarrow\ket{e,0}$ peak whilst sweeping the frequency of a secondary pump tone.
As shown in Fig.~\ref{fig:S_two_tone}A,B, this allows us to easily measure the anharmonicity of the high mode and the frequency of the low mode.
The line-width of the low-mode peak is considerably larger than the previously determined low-mode dissipation rate $\gamma/2\pi=23\times 10^3\,\mathrm{s}^{-1}$.
If the line-width were equal to $\gamma$, we would expect to see photon-number splitting: distinct peaks separated by the low-mode anharmonicity $A_\text{L}$, corresponding to the transitions $\ket{g,n}\leftrightarrow\ket{g,n+1}$.
To understand why this is not the case, we fit a steady-state numerical computation of a pumped and probed Hamiltonian
\begin{equation}
\begin{split}
\hat H&=-A_\text{H}(\hat a^\dagger)^2\hat a^2 +\hbar(\omega_\text{L}-\omega_\text{p})\hat b^\dagger\hat b -A_\text{L}(\hat b^\dagger)^2\hat b^2\\
&+ i\hbar\epsilon_\text{d}(\hat a^\dagger-\hat a)+ i\hbar\epsilon_\text{p}(\hat b^\dagger-\hat b)\ ,
\end{split}
\end{equation}
with the collapse operators of Eq.~(\ref{eq:probed_lindbald}).
The only free parameter is the pumping strength $\epsilon_\text{p}\sim16\times\gamma$; the probe strength was taken to be negligibly small with respect to all other rates in the model.
By varying simulation parameters, we can then explore the origin of this broad line-width.
These results are summarized in Fig.~\ref{fig:S_two_tone}C.
Reducing the pumping strength $\epsilon_\text{p}$ will suppress what is usually referred to as `power broadening', at the expense of the signal-to-noise ratio, but does not reveal photon-number splitting.
By reducing $\gamma$ to a negligibly small rate, photon number splitting can only be glimpsed behind a line-width broadening induced by the process $\ket{g,n}\rightarrow\ket{e,n}$ which occurs at a rate $\kappa n_\text{th}^{(H)}$.
This becomes clear if we instead keep $\gamma/2\pi=23\cdot 10^3s^{-1}$ and take the limit $n_\text{th}^{(H)}=0$, making the first two peaks apparent.
As derived in Eq.~(\ref{eq:sum_of_lorentzians}), the line-width of a thermally populated anharmonic oscillator broadens significantly with its thermal occupation, which is responsible in this case for the disappearance (broadening) of peaks $n\ge2$.
By reducing both $\gamma$ and $n_\text{th}^{(H)}$, photon-number resolution would become visible.
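The role of each rate in this broadening can be reproduced even with a bare-bones solver. The sketch below is a reduced, single-mode version of the model (not the full computation of Sec.~\ref{sec:numerics_S11}; the pump strength is illustrative): it builds the Liouvillian explicitly and solves for the steady state, recovering the thermal occupation without a pump and the pump-induced population on resonance:

```python
import numpy as np

def steady_state(H, c_ops):
    """Steady state of a Lindblad equation: build the Liouvillian in the
    column-stacking convention and solve L vec(rho) = 0 with Tr(rho) = 1."""
    d = H.shape[0]
    I = np.eye(d)
    Lv = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for C in c_ops:
        CdC = C.conj().T @ C
        Lv += np.kron(C.conj(), C) - 0.5 * (np.kron(I, CdC) + np.kron(CdC.T, I))
    A = np.vstack([Lv, np.eye(d).flatten()[None, :]])   # append trace condition
    rhs = np.zeros(d * d + 1, dtype=complex)
    rhs[-1] = 1.0
    v = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return v.reshape((d, d), order="F")

# Single low mode in the frame rotating at the pump frequency; rates in units
# of gamma, parameter values assumed from the text
N = 25
b_op = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
n_op = b_op.conj().T @ b_op
gamma, n_th, A_L = 1.0, 1.62, 21.5            # A_L/gamma ~ 495 kHz / 23 kHz
c_ops = [np.sqrt(gamma * (1 + n_th)) * b_op,
         np.sqrt(gamma * n_th) * b_op.conj().T]

rho_th = steady_state(0.0 * n_op, c_ops)      # no pump: thermal state
eps_p = 4.0                                   # pump strength (illustrative)
H_pump = -A_L * n_op @ (n_op - np.eye(N)) + 1j * eps_p * (b_op.conj().T - b_op)
rho_p = steady_state(H_pump, c_ops)
n_thermal = np.trace(n_op @ rho_th).real      # ~ n_th
n_pumped = np.trace(n_op @ rho_p).real        # pump adds population
```

Sweeping the pump detuning and strength in this reduced model reproduces the qualitative trends of Fig.~\ref{fig:S_two_tone}C: power broadening with increasing $\epsilon_\text{p}$, and line-widths set by $\gamma(1+2n_\text{th})$ rather than $\gamma$ alone.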
\begin{figure*}[ht!]
\includegraphics[width = 0.8\columnwidth]{figure_SI/two_tone.pdf}
\caption[Two-tone measurement of the anharmonicity and lower mode spectrum]{\textbf{Two-tone measurement of the anharmonicity and lower mode spectrum. }
\textbf{A: }we sweep a pump tone around the low-mode frequency (x-axis) whilst monitoring the depth of the $n=0$ peak $|S_{11}(\omega=\omega_\text{H})|$ (y-axis).
We observe two peaks, separated by $\chi$, corresponding to the transitions $\ket{g,n}\leftrightarrow\ket{g,n+1}$ at $\omega_\text{p}=\omega_\text{L}$ and $\ket{e,n}\leftrightarrow\ket{e,n+1}$ at $\omega_\text{p}=\omega_\text{L}-\chi/\hbar$.
A steady-state numerical computation is shown as a black line and data as blue points.
\textbf{B: }by performing the same measurement around the high-mode frequency, we measure a peak corresponding to $\ket{e,0}\leftrightarrow\ket{f,0}$ at $\omega_\text{p}=\omega_\text{H}-A_\text{H}/\hbar$.
Compared to Fig.~\ref{fig:S_all_transitions}, these two datasets constitute a more direct measurement of $\omega_\text{L}$ and $A_\text{H}$.
\textbf{C: }the line-width of the low mode was found to be significantly broader than $\gamma$, with no accessible photon-number resolution.
By varying simulation parameters as detailed in the legend, we explore the origin of this broad line-width.
}
\label{fig:S_two_tone}
\end{figure*}
\section{Introduction}
\label{sec_intro}
Stellar flares are very powerful magnetic phenomena occurring after a rearrangement of the magnetic field topology and the resulting reconnection of magnetic field lines in stellar coronae \citep[e.g.,][]{FletcherDHJK2011SSRv}. During flares, a large amount of energy previously stored in the magnetic field is released, accelerating a flow of high-speed electrons up to mega-electronvolt energies. This energy can be deposited both locally at the reconnection site and onto the underlying dense chromosphere at the loop footpoints by the electron beams moving downward along the field lines, heating the surrounding plasma to $\geq$10 million degrees. Very quickly, the pressure in the heated plasma exceeds the ambient chromospheric pressure, making the heated plasma expand and fill the overlying magnetic tubes. The plasma then gradually cools down as soon as heating decreases and stops. \par
This general scenario is supported by extensive observations of solar flares across all accessible wavelengths. Flares are in fact intrinsic multi-wavelength phenomena, releasing energy in several bands of the electromagnetic spectrum \citep{Benz2008LRSP.5.1B}: accelerated electrons emit gyrosynchrotron radiation and nonthermal hard X-rays upon Coulomb collisions with particles in the chromospheric plasma \citep[e.g.,][]{HoyngDMR1981ApJ.246L.155H}; the heated region of the chromosphere and upper photosphere surrounding the loop footpoints radiates in optical and UV bands \citep[e.g.,][]{Neidig1989SoPh.121.261N}; and the evaporated plasma confined in the magnetic loop emits soft X-rays.
Observed correlations between the energy released in different bands can probe the complex interplay between the phenomena occurring during flares. For instance, the correlation observed between the energy released in optical and hard X-rays in solar flares was taken as proof of the connection between the energy deposited in the chromosphere and the heating of the hot spots in the deeper layers \citep{MatthewsVHN2003AA.409.1107M,Metcalf2003ApJ.595.483M,HudsonWM2006SoPh.234.79H}. However, an important unanswered question remains as to whether this and other results obtained from solar flares can be directly extended to stellar flares. This cannot be taken for granted given that the energy released by flares occurring in the Sun and in stars across the Hertzsprung-Russel diagram can differ by orders of magnitude. Flares in young and active stars can be particularly energetic. For instance, in pre-main sequence (PMS) stars, flares typically release $\sim$10$^{34}\,$ergs of energy in soft X-rays \citep[e.g.,][]{FavataFRM2005ApJs}, with the brightest events reaching $\sim$10$^{36}\,$ergs, while the total irradiance measurements for some of the largest solar flares indicate radiated energies of between 10$^{31}$ and 10$^{32}\,$ergs. Although technically challenging, multi-wavelength observations of flares occurring in stars with different properties are therefore required in order to fully understand the physics of flares and to determine whether or not the ensemble of phenomena in play change during stellar evolution and the gradual transition to lower magnetic activity levels as a result of spin-down. \par
In this paper we analyze one of the rare simultaneous optical and X-ray observations of stellar flares, made possible by the XMM-Newton observations of the Pleiades taken during the \textit{Kepler}/K2 Campaign 4. The aim is to calculate and compare flare properties in both bands. In Sect. \ref{sec_pleiades} we describe previous X-ray and optical observations in the time domain of the Pleiades, and in Sects. \ref{k2_data_sec} and \ref{xray_data_sec} we present the \textit{Kepler} and X-ray data, respectively. A sample of 12 stars hosting flares analyzed in detail in this paper is described in Sect. \ref{sample_sec}. We then first derive and analyze flare properties averaged over flare duration (Sect. \ref{glob_flare}) and subsequently analyze the time evolution of the emitting plasma (Sect. \ref{time_res}). Finally, in Sect. \ref{thats_all_folks} we compare our results to those of existing studies of flares. \par
\section{Existing studies on optical and X-ray variability of the Pleiades}
\label{sec_pleiades}
The Pleiades open cluster (also named M45, Melotte~22, and NGC~1432; the name ``the Seven Sisters'' is used to indicate its brightest members), located in the constellation of Taurus, is the brightest stellar cluster in the sky. It is therefore not surprising that it is one of the most studied astronomical targets, with dedicated literature including more than one thousand published papers, and that it continues to be considered a cornerstone for understanding stellar physics. With an age of 125$\,$Myr \citep{StaufferSK1998ApJ} and a mean distance of 134$\pm$6$\,$pc (from the Gaia Data Release 2, indicating a mean parallax of 7.5$\pm$0.3$\,$mas, \citealp{Gaia2016AA595A2G}), the Pleiades is a rich cluster, with 2107 stars identified by \citet{BouyBSB2015AA577A148B} as having a high probability of being members from an analysis of accurate multi-epoch proper motion measurements and multi-wavelength photometry. Other recent compilations of candidate members were obtained by \citet{StaufferHFA2007ApJS172663S}, \citet{LodieuDH2012MNRAS.422.1495L}, \citet{BouyBMC2013AA554A.101B}, and \citet{SarroBBB2014AA563A45S}. A $2^\circ \times2^\circ$ DSS2-red image of the Pleiades is shown in Fig. \ref{xmm_fields}, with the XMM fields analyzed in this work indicated. \par
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{xmm_fields.ps}
\caption{DSS2-red $2^\circ \times2^\circ$ image centered on the Pleiades. The detector footprints of the XMM pointings analyzed here and their associated Obs.IDs. are shown.}
\label{xmm_fields}
\end{figure}
A number of authors have analyzed the optical variability in the Pleiades, classifying several types of variable stars. In recent years, the revolution in optical time-domain astronomy due to the unprecedented photometric precision of the \textit{Kepler} telescope has led to several discoveries in this open cluster. Using K2 \citep{Howell2014PASP.126.398H} data, \citet{RebullSBC2016AJa} determined the rotation periods of cluster members; \citet{RebullSBC2016AJb} analyzed the light curves of those members with multiple periodic signals; \citet{StaufferRBH2016AJ} focused on the angular momentum evolution and magnetic dynamo of young low-mass stars using Pleiades members as a template; and \citet{White2017MNRAS.471.2882W} analyzed the variability of the Seven Sisters, classifying six of them as slowly pulsating B-stars, while the variability observed in the remaining star (Maia) was attributed to rotational modulation of a large spot. Spot coverage and its connection with stellar properties and rotational periods was studied by \citet{Savanov2017ARep.61.996S} on a sample of 759 confirmed members. \par
The Pleiades cluster is also a prime target to study stellar X-ray variability across a wide range of stellar spectral types. Evidence for solar-like cyclic activity in the low-mass Pleiades stars was found by \citet{Schmitt1993AA.277.114S}, comparing ROSAT All Sky Survey data with Einstein/IPC data in order to cover a time baseline of about 10 years. \citet{GagneCS1995ApJ} observed X-ray variability with an amplitude larger than a factor of two over $\sim$1$\,$yr in approximately 25\% of Pleiades members. Large-amplitude variability on long timescales, as well as X-ray flares, was also observed by \citet{Micela1996ApJS.102.75M} using ROSAT observations. ROSAT observations have also been analyzed by \citet{Marino2003AA.406.629M} to verify that X-ray variability is a common feature in late-type Pleiades members over both short (hours) and medium (months) timescales. \par
X-ray flares occurring in 20 Pleiades stars observed with ROSAT were studied by \citet{Stelzer2000AA.356.949S}, finding decay times ranging between 0.3 and 12.6$\,$ksec, peak luminosity in the range 29.8$\leq$log(L$_{\rm X}$[erg/s])$\leq$31.2, and total emitted energies in the range 32.6$\leq$log(E$_{\rm X}$[erg])$\leq$34.4. X-ray flares in the Pleiades stars HII~1032, HII~1100, HII~1516 and HII~1280 have been observed with XMM by \citet{Briggs2003MNRAS.345.714B}. In these four flares, the observed peak luminosity ranged between 29.6$\leq$log(L$_{\rm X}$[erg/s])$\leq$30.8 in the 0.3-4.5$\,$keV energy band and they were characterized by a decay time of 2--2.9 ks. The two strongest flares were observed in HII~1032 and HII~1100, with peak plasma temperatures and loop emission measures (EM) of T$_{\rm peak}$=39$^{+7}_{-7}\,$MK and EM=3.4$\times10^{53}\,$cm$^{-3}$ in HII~1032, and T$_{\rm peak}$=24$^{+3}_{-2}\,$MK and EM=1.4$\times10^{53}\,$cm$^{-3}$ in HII~1100. The light curve of HII~1100 also shows a peculiar dip of 600$\,$s occurring just before the onset of the X-ray flare. \par
\section{Data and light curves}
\subsection{Kepler/K2 observations}
\label{k2_data_sec}
The original \textit{Kepler} mission \citep{Borucki2010Sci.327.977B} was designed with the primary purpose of detecting planets using the method of transits. Thanks to its superb photometric precision in the 420--900$\,$nm band pass, the mission has not only allowed us to identify hundreds of confirmed and thousands of candidate exoplanets, but has also provided an inestimable collection of light curves of stars across most of the Hertzsprung-Russell diagram. It is therefore not surprising that \textit{Kepler} data has a huge impact not only on exoplanetary science, but also on several other fields of astronomy, such as asteroseismology \citep{Chaplin2010ApJ.713L.169C}, variability of low mass stars \citep[e.g.,][]{McQuillan2014ApJS.211.24M}, eclipsing binaries \citep{Prsa2011AJ.141.83P}, flares and superflares \citep{Maehara2012Natur.485.478M,Davenport2016ApJ}, and even variability in active galactic nuclei \citep{Pei2014ApJ.795.38P}. \par
The \textit{Kepler} primary mission ended with the loss of two reaction wheels, which prevented the telescope from maintaining stable pointing. The mission was extended by aligning the telescope along its orbital plane, minimizing the torque from solar radiation pressure. The extended \textit{Kepler}/K2 mission has been designed to cover a series of fields along the ecliptic, each for $\sim$80 days \citep{Howell2014PASP.126.398H}. The K2 mission has provided new opportunities to study stellar variability. For instance, the original \textit{Kepler} field did not contain stellar clusters younger than 1$\,$Gyr, while K2 fields include PMS stars and young clusters (e.g., Upper Sco, the Taurus Molecular Cloud, the Pleiades, M35, the Hyades, Praesepe, and others). \par
The Pleiades have been included in the K2 Campaign 4 and observed from 2015 February 8 to 2015 April 20, providing light curves for a total of 1020 high-probability Pleiades members, spread over most of the cluster field. In this paper we analyze the pre-search data conditioning light curves generated by the K2 project in the long-cadence mode (time resolution of $\sim$30$\,$min), and obtained from the Mikulski Archive for Space Telescopes. We included only data points not flagged as affected by thruster firings or any other bad-data flags.
\subsection{XMM-Newton observations}
\label{xray_data_sec}
The six X-ray observations of the Pleiades analyzed in this paper (P.I. Jeremy Drake) were obtained on 10, 11, 13, 17, 18, and 26 February 2015 with XMM-Newton using the European Photon Imaging Camera (EPIC) during the fourth \textit{Kepler}/K2 campaign. EPIC observations were performed
simultaneously with three detectors (PN, MOS1, and MOS2), which are approximately co-pointed but have different shapes, FoVs, and sensitivities (with the PN detector being the most sensitive). The observations shown in Fig. \ref{xmm_fields} were designed in order to avoid the brightest stars in the field and to optimize the number of K2 targets potentially observable. All these observations were performed with a roll angle of 256$^\circ$, using the medium filter, but with different exposure times, as shown in Table \ref{obs_table}. \par
\begin{table}
\caption{XMM observations of the Pleiades}
\label{obs_table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{Obs.ID$^1$} &
\multicolumn{1}{|c|}{RA} &
\multicolumn{1}{|c|}{DEC} &
\multicolumn{1}{|c|}{Texp} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{J2000} &
\multicolumn{1}{|c|}{J2000} &
\multicolumn{1}{|c|}{ksec} \\
\hline
101 & 03:47:43.87 & +23:47:55.9 & 53 \\
201 & 03:46:37.62 & +24:46:38.6 & 58 \\
301 & 03:47:00.08 & +24:21:30.7 & 60 \\
401 & 03:47:59.81 & +23:22:26.3 & 74.9\\
501 & 03:44:43.13 & +24:40:57.4 & 89.9\\
601 & 03:44:13.00 & +24:39:21.0 & 83.6\\
\hline
\multicolumn{4}{l}{$^1$ Only the last three digits after 0761920- are shown.} \\
\end{tabular}
\end{table}
The MOS1, MOS2, and PN images of the six fields are shown in Fig.~\ref{combined_fields}, where we marked the positions of the sources hosting the bright flares studied in this paper and those of all the K2 targets. \par
\begin{figure*}[!h]
\centering
\includegraphics[width=7cm]{xmm_101.ps}
\includegraphics[width=7cm]{xmm_102.ps}
\includegraphics[width=7cm]{xmm_103.ps}
\includegraphics[width=7cm]{xmm_104.ps}
\includegraphics[width=7cm]{xmm_105.ps}
\includegraphics[width=7cm]{xmm_106.ps}
\caption{Combined MOS1, MOS2, and PN background-filtered images of the six XMM fields. Large green circles mark the sources with bright flares discussed in this paper, while the small circles denote all the K2 targets falling in these fields. Images were smoothed using a Gaussian smoothing function with a kernel radius of 2 pixels.}
\label{combined_fields}
\end{figure*}
Images were processed using SAS v.15 adopting the standard pipeline. Events were filtered in three steps. First, we removed events that triggered more than four adjacent pixels ({\it pattern}$\geq$12) and with flags different from 0\footnote{Flag values encode various event conditions, such as near hot pixels or events outside the field of view. The condition FLAG=0 is the most conservative event quality screening condition, and particularly necessary for PN images.}. In the second step, we selected the ``bad time intervals'' as those with intense background photon noise by inspecting the background light curve. We then filtered out events detected during these time intervals. The fraction of the total integration time with high background noise level varies in these observations from a minimum of 10-20\% (MOS and PN, respectively) in observation 0761920301, to a maximum of 30-46\% in observation 0761920601. In the last step, images were produced by filtering the events in three energy bands: soft (0.3-0.8$\,$keV), broad (0.3-7.9$\,$ keV), and hard (2-7.9$\,$keV). \par
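The three-step event screening described above can be sketched on a synthetic event list; the column names mirror the EPIC conventions, but the array contents and interval boundaries here are hypothetical, not taken from the actual observations:

```python
import numpy as np

# Synthetic EPIC-like event list (hypothetical values)
rng = np.random.default_rng(1)
n = 10000
events = {
    "TIME": rng.uniform(0, 60e3, n),            # seconds from exposure start
    "PATTERN": rng.integers(0, 32, n),          # pixel pattern code
    "FLAG": rng.choice([0, 1], n, p=[0.9, 0.1]),
    "PI": rng.uniform(100, 12000, n),           # calibrated energy (eV)
}

# Step 1: pattern and flag screening (retain PATTERN <= 12 and FLAG == 0)
keep = (events["PATTERN"] <= 12) & (events["FLAG"] == 0)
# Step 2: drop events inside bad (high-background) time intervals
bad_intervals = [(10e3, 15e3), (40e3, 42e3)]    # example intervals
for t0, t1 in bad_intervals:
    keep &= ~((events["TIME"] >= t0) & (events["TIME"] < t1))
# Step 3: energy-band selection, e.g. the broad band 0.3-7.9 keV
keep_broad = keep & (events["PI"] >= 300) & (events["PI"] < 7900)
```

In practice these cuts are applied with the SAS task \texttt{evselect} through its selection-expression syntax; the boolean masks above reproduce the logic of the three filtering steps.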
Source detection was performed with the wavelet convolution code PWXDetect, initially developed for ROSAT and $Chandra$ images and aimed at finding local maxima in the wavelet-convolved image at different wavelet scales \citep{Damiani1997ApJ.483.350D,Damiani1997ApJ.483.370D}. For the scope of this paper, an accurate process of source detection aimed at improving completeness of faint sources was unnecessary. We therefore adopted a detection threshold of 4.6$\sigma$ for all images, at which one spurious source due to background fluctuations is expected to contaminate the total detected source sample. An improved source detection and validation of detected sources will be performed in a forthcoming paper. \par
\subsection{Sample selection and light curves}
\label{sample_sec}
\begin{table}
\caption{Properties of selected stars.}
\label{sample_table}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{Star} &
\multicolumn{1}{|c|}{SpT} &
\multicolumn{1}{|c|}{T$_{\rm eff}$} &
\multicolumn{1}{|c|}{R$^{(1)}_{*}$} &
\multicolumn{1}{|c|}{P$^{(2)}_{\rm rot}$} &
\multicolumn{1}{|c|}{Distance} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{K} &
\multicolumn{1}{|c|}{R$_{\odot}$} &
\multicolumn{1}{|c|}{days} &
\multicolumn{1}{|c|}{pc} \\
\hline
HII~212 & M0 & 3909$\pm$125& $0.50_{0.43}^{0.55}$ & 4.49 & $133\pm1$ \\%0.58 -1.166
HHJ~273 & (M5) & 3178$\pm$50 & $0.23_{0.21}^{0.25}$ & 1.15 & $142\pm2$ \\%0.3 -1.961
HHJ~336$^b$ & (M4) & 3180$\pm$50 & $0.23_{0.21}^{0.25}$ & 0.37 & $157\pm7$ \\%0.3 -1.961
HCG~244$^b$& (M4) & 3280$\pm$50 & $0.27_{0.25}^{0.29}$ & 0.66 & $137\pm2$ \\%0.39 -1.730
HII~345 & G8 & 5150$\pm$125& $0.77_{0.74}^{0.79}$ & 0.84 & $135\pm1$ \\%0.85 -0.453
HII~1516& (K8) & 3921$\pm$125& $0.50_{0.47}^{0.55}$ & 0.31 & $135\pm1$ \\%0.58 -1.166
HCG~146 & (M4) & 3167$\pm$50 & $0.23_{0.21}^{0.25}$ & 0.68 & $135\pm2$ \\%0.3 -1.961
HCG~273 & (M2) & 3508$\pm$50 & $0.35_{0.33}^{0.37}$ & 2.76 & $136\pm1$ \\%0.45 -1.578
HCG~150 & (M4) & 3287$\pm$50 & $0.28_{0.26}^{0.29}$ & 1.08 & $127\pm1$ \\%0.39 -1.730
HCG~295 & (M2) & 3412$\pm$50 & $0.31_{0.30}^{0.33}$ & 1.42 & $137\pm2$ \\%1.15 0.173
HII~405 & F9 & 6219$\pm$125& $1.07_{1.02}^{1.13}$ & 1.91 & $135\pm1$ \\%1.15 0.173
HCG~148$^b$ & (M4) & 3135$\pm$50 & $0.22_{0.21}^{0.24}$ & 0.22 & $143\pm3$ \\%
\hline
\multicolumn{6}{l}{(1): Stellar radii; (2): Rotation periods \citep{RebullSBC2016AJa}.} \\
\multicolumn{6}{l}{$^b$: binary star} \\
\end{tabular}
\end{table}
X-ray light curves of all detected sources were extracted adopting circular extraction regions with a radius of 400 pixels (20$^{\prime\prime}$) centered on each source position. Where source crowding required it, extraction regions were shrunk to the largest radius at which there was no overlap with adjacent source regions. We also visually inspected the defined extraction regions in order to avoid contamination from the PSF wings of nearby bright sources. Additionally, following the prescription of the SAS guide, background regions were chosen with the same size as the extraction regions, close to the source positions, avoiding the PSF wings of nearby sources, and, in the case of PN images, at the same distance from the readout node as the associated source, in order to minimize the impact of the varying response along the chip. \par
We selected X-ray flares in the combined MOS+PN light curves using the Maximum Likelihood (ML) method \citep{Scargle1982ApJ,CaramazzaFMR2007AA,AlbaceteColomboFMS2007}. The method consists of dividing the whole light curve into time blocks during which the count rate can be considered constant within a given confidence level. We then visually inspected the K2 light curves of the stars where X-ray flares occurred, selecting 12 objects with bright flares observed both in optical and X-rays. \par
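As a rough illustration of the block-segmentation idea (a greedy Poisson likelihood-ratio sketch, not the actual ML implementation of the cited codes; the acceptance threshold is an assumption, corresponding roughly to a 3$\sigma$ criterion):

```python
import numpy as np

def _loglike(counts):
    # Poisson log-likelihood of a constant-rate block, dropping the
    # terms that cancel in the likelihood ratio below
    n = counts.sum()
    return 0.0 if n == 0 else n * np.log(n / counts.size)

def split_blocks(counts, threshold=9.0, start=0):
    """Recursively split a binned light curve into blocks of constant
    count rate; returns a list of (first_bin, last_bin) index pairs."""
    counts = np.asarray(counts, dtype=float)
    best_gain, best_i = 0.0, None
    for i in range(1, counts.size):
        gain = _loglike(counts[:i]) + _loglike(counts[i:]) - _loglike(counts)
        if gain > best_gain:
            best_gain, best_i = gain, i
    # accept a split only if 2*Delta(log L) exceeds the assumed threshold
    if best_i is None or 2.0 * best_gain < threshold:
        return [(start, start + counts.size - 1)]
    return (split_blocks(counts[:best_i], threshold, start)
            + split_blocks(counts[best_i:], threshold, start + best_i))

# quiescence at ~2 counts/bin with a ten-bin flare at ~20 counts/bin
rng = np.random.default_rng(1)
lc = np.concatenate([rng.poisson(2, 40), rng.poisson(20, 10), rng.poisson(2, 40)])
blocks = split_blocks(lc)
```

The flare then stands out as the block (or blocks) with a count rate well above that of the surrounding quiescent blocks.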
Table \ref{sample_table} lists the main properties of the stars that hosted the selected flares. All these stars are bona fide cluster members, with membership probability approaching unity as obtained by \citet{BouyBMC2013AA554A.101B}. Effective temperatures are taken from \citet{StaufferRBH2016AJ}, adopting the conversion tables for PMS stars of \citet{PecautMamajek2013ApJS}. Stellar radii are calculated from the 125-Myr MIST isochrone \citep{Choi2016ApJ} using the properties of these stars. Rotation periods are calculated from K2 light curves by \citet{RebullSBC2016AJa}. Two of these stars, namely HHJ~336 and HCG~244, are multi-periodic with a detected second period of 0.35 and 0.24 days, respectively \citep{RebullSBC2016AJb}. Spectral types derived from spectroscopy are available for three stars (values without brackets in Table \ref{sample_table}) and are obtained from \citet{Trumpler1921LicOB,ProsserSK1991AJ,MartinRZ1996ApJ} and \citet{StaufferSK1998ApJ}. Spectral types within brackets in Table \ref{sample_table} are derived from available photometry and effective temperature ($\rm T_{eff}$) values using the conversions into spectral types derived by \citet{PecautMamajek2013ApJS}. Individual distances and their uncertainties are obtained from the parallaxes and their errors in the Gaia DR2 catalog \citep{GaiaCollaboration2018AA.616A.1G}. Most of the stars in this sample are single: only HCG~244 and HCG~148 are listed as photometric binary stars by \citet{PinfieldDJS2003MNRAS}, while the multi-periodic K2 light curve of HHJ~336 is explained in terms of a binary system. \par
\begin{figure*}[]
\centering
\includegraphics[width=11.5cm]{lightcurve_src69_HCG273.ps}
\caption{Light curves of HCG~273. Top Panel: The K2 light curve. The error bars of K2 data points are smaller than the plot symbols. The red dotted line marks the polynomial used to fit the quiescent light curve and is limited to the portion of the K2 light curve for which the fit is performed; the blue squares mark the points corresponding to the flare. Central Panel: Residuals of the K2 light curve from the best-fit polynomial function limited in the portion of the light curve where the fit is performed (limited by the vertical dotted lines). The green squares mark the K2 points just before and after the flare. Bottom panel: The combined background-subtracted MOS+PN X-ray light curve. Shaded regions are ``bad time intervals'' in at least one EPIC detector.}
\label{HHJ336_lc}
\end{figure*}
Figures \ref{HHJ336_lc}--\ref{HCG148} show the light curves of the selected stars. The top panel of each figure shows the K2 light curve limited to the time window of the XMM observations. To improve visualization, each K2 light curve is normalized to the minimum value observed in this interval. In order to define the quiescent level, we fitted the portion of the K2 light curve in the time window of the XMM observations with a polynomial of the fifth order. In each figure we also show the portion of the light curve used to perform the fit, which was restricted ad hoc in those cases where it was not possible to obtain a good fit over the entire light curve. The fit was performed iteratively: points discrepant by more than $2\sigma$ were removed, the fit was recalculated, and the process was repeated until no further discrepant points were found. The final sets of discrepant points typically define the optical flares and are marked with blue boxes in these figures. The residuals of the fit are shown in the central panels. The bottom panel of each figure shows the merged background-subtracted MOS+PN light curve, with the bin size chosen according to source X-ray brightness\footnote{600 s for sources with a total number of counts $\leq$100; 400 s for 100<counts$\leq$2000; 300 s for 2000<counts$\leq$4000; 200 s for 4000<counts$\leq$5000; 100 s for sources with more than 5000 total detected counts.}. The bad time intervals on both MOS and PN detectors are also marked. We removed events detected during these intervals for source detection but not for the selection and analysis of these bright flares. \par
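The iterative $2\sigma$-clipped polynomial fit can be sketched as follows (a minimal NumPy version; the fifth-order polynomial matches the text, while the synthetic light curve, its noise level, and the injected flare are illustrative):

```python
import numpy as np

def fit_quiescence(t, flux, order=5, nsigma=2.0, max_iter=20):
    """Iteratively fit a polynomial to the quiescent light curve,
    removing points discrepant by more than `nsigma` standard
    deviations; returns the coefficients and the boolean mask of
    clipped (flare-candidate) points."""
    keep = np.ones(t.size, dtype=bool)
    for _ in range(max_iter):
        coef = np.polyfit(t[keep], flux[keep], order)
        resid = flux - np.polyval(coef, t)
        new_keep = np.abs(resid) < nsigma * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break                      # no further discrepant points
        keep = new_keep
    return coef, ~keep

# synthetic curve: slow spot modulation, small noise, injected flare
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 300)
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / 6.0) + rng.normal(0.0, 0.002, t.size)
flux[150:156] += np.linspace(0.08, 0.01, 6)   # the flare
coef, flare_mask = fit_quiescence(t, flux)
```

The clipped points returned in `flare_mask` play the role of the blue boxes in the figures, and `np.polyval(coef, t)` gives the quiescent level used to compute the residuals.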
The K2 flares in our sample did not occur at particular rotation phases. Setting the rotational phase $\phi=0.5$ at the minimum of the K2 light curves (i.e., when the main starspots are close to the center of the stellar disc), five flares occurred at 0.25$\leq \phi \leq$0.5, while four flares occurred at 0.85$\leq \phi \leq$0.95. Furthermore, none of the observed physical properties of the flares depend on the rotational phase at which they occurred. This may be a consequence of the small number of flares in our sample, although our results agree with those obtained from larger samples of stellar flares observed in K2 light curves \citep[e.g.,][]{Doyle2018MNRAS.480.2153D}. \par
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src42_HII1516.ps}
\caption{Light curves of HII~1516. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HII1516}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src37_HCG244.ps}
\caption{Light curves of HCG~244. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG244}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src30_HHJ336.ps}
\caption{Light curves of HHJ~336. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG273}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src22_HHJ273.ps}
\caption{Light curves of HHJ~273. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HHJ273}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src39_HII345.ps}
\caption{Light curves of HII~345. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HII345}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src2_HII212.ps}
\caption{Light curves of HII~212. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HII212}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src50_HCG146.ps}
\caption{Light curves of HCG~146. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG146}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src99_HCG150.ps}
\caption{Light curves of HCG~150. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG150}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src146_HII405.ps}
\caption{Light curves of HII~405. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HII405}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src117_HCG295.ps}
\caption{Light curves of HCG~295. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG295}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{lightcurve_src160_HCG148.ps}
\caption{Light curves of HCG~148. Panel layout and content as in Fig.~\ref{HHJ336_lc}}
\label{HCG148}
\end{figure}
Among the remaining 57 K2 targets falling in the XMM fields, 13 stars show probable optical flares (``probable'' because in all cases but one the flare consists of only one or two K2 data points) that occurred during the XMM observations. Two of them are not detected in X-rays (BPL~108 and BPL~118); six stars (NPL~24, HII~1114, HII~1234, HHJ~203, HCG~302, HCG~123) are detected in X-rays but no X-ray flare is observed; in four stars (BPL~85, HCG~194, HHJ~232, HII~314) an X-ray flare is observed but is too weak to be analyzed; in one case (HII~314) both the optical and X-ray variability are very complicated, with no evident single-loop flares. The full list of K2 targets falling in the XMM fields and not included in Table \ref{sample_table} is shown in Appendix \ref{allk2_sec}.
\subsection{Flare duration}
\label{duration_sect}
Flare decay time can provide information about the flaring structures \citep[e.g.,][]{RealeBPS1997AA}. The decay time can be derived directly by fitting the decay phase of the flare light curve with an exponential function. In our sample this was possible only in three cases: the X-ray flares in HCG~273 (Fig. \ref{HHJ336_lc}), HHJ~336 (Fig. \ref{HCG273}), and HHJ~273 (Fig. \ref{HHJ273}). The resulting decay times of these flares are 2.6$\pm$0.6$\,$ks, 2.9$\pm$0.6$\,$ks, and 2.3$\pm$1.8$\,$ks, respectively. \par
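A minimal version of such an exponential-decay fit, assuming the decaying component sits on a constant quiescent level `q` (the synthetic count rates, sampling, and uncertainties below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_time(t, rate, rate_err):
    """Fit rate(t) = A*exp(-t/tau) + q to the post-peak light curve;
    returns tau and its 1-sigma uncertainty."""
    model = lambda tt, A, tau, q: A * np.exp(-tt / tau) + q
    # crude initial guesses from the first/last points of the decay
    p0 = (rate[0] - rate[-1], (t[-1] - t[0]) / 3.0, rate[-1])
    popt, pcov = curve_fit(model, t, rate, p0=p0, sigma=rate_err)
    return popt[1], np.sqrt(pcov[1, 1])

# synthetic decay with tau = 2.6 ks sampled every 300 s
rng = np.random.default_rng(2)
t = np.arange(0.0, 12.0, 0.3)                              # ks
rate = 0.5 * np.exp(-t / 2.6) + 0.05 + rng.normal(0.0, 0.01, t.size)
tau, tau_err = decay_time(t, rate, np.full(t.size, 0.01))
```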
An alternative estimate of flare decay times can be obtained under the assumption of a constant release of energy during flare decay, as the ratio between the total emitted energy and the peak luminosity \citep{Flaccomio2018AA.620A.55F}. However, the peak luminosity is averaged over the peak time bin, underestimating the real peak value. Thus, this ratio is typically larger than the real flare decay time. \par
We therefore decided to characterize these flares by their total duration. In the K2 light curves, this is calculated as the difference between the times of the K2 data points just before and after the flare, that is, those marked with green squares in Figs. \ref{HHJ336_lc}--\ref{HCG148}, with an adopted uncertainty of 21.2 (15$\times$$\sqrt{2}$) minutes. In the XMM light curves, the brightness and variability of the background prevented us from adopting the same approach. In fact, despite the brightness of the flares studied in this paper, it was not always possible to distinguish the tail of the flare decay from background fluctuations, even by dividing the light curve into time blocks. We therefore estimated the duration of the X-ray flares directly from the arrival sequence of the detected photons, as shown in Fig. \ref{phot_arrival_img}. In this figure, each panel shows the detection sequence of all the X-ray photons in the broad band for each star in our sample. In order to isolate the flares from the quiescence and thus derive the start and end times of each flare, we defined in these sequences the time intervals characterized by an almost constant count rate, adopting the procedure described below. \par
\begin{figure*}[]
\centering
\includegraphics[width=6cm]{time_flares_HII212.ps}
\includegraphics[width=6cm]{time_flares_HHJ273.ps}
\includegraphics[width=6cm]{time_flares_HHJ336.ps}
\includegraphics[width=6cm]{time_flares_HCG244.ps}
\includegraphics[width=6cm]{time_flares_HII345.ps}
\includegraphics[width=6cm]{time_flares_HII1516.ps}
\includegraphics[width=6cm]{time_flares_HCG146.ps}
\includegraphics[width=6cm]{time_flares_HCG273.ps}
\includegraphics[width=6cm]{time_flares_HCG150.ps}
\includegraphics[width=6cm]{time_flares_HII405.ps}
\includegraphics[width=6cm]{time_flares_HCG295.ps}
\includegraphics[width=6cm]{time_flares_HCG148.ps}
\caption{Sequence of the arrival time of the detected X-ray photons in the broad band in the 12 flares analyzed in this paper. The green lines mark the start and end times of the optical flare, while red lines delimit the X-ray flare.}
\label{phot_arrival_img}
\end{figure*}
We first considered the entire sequence of photon detection times and performed a linear fit of the counts versus time curve. In order to test whether the detection sequence was compatible with a constant count rate, we calculated the two-sided Kolmogorov-Smirnov statistic between the observed sequence and that derived from the linear fit. If the resulting K-S statistic exceeded an adopted threshold (meaning that the observed sequence of detections is not compatible with a constant count rate), we removed the last data point and repeated both the fit and the K-S test. The procedure was repeated until the K-S statistic indicated that the sequence of photon detections was compatible with a constant count rate. When this happened, all photons were collected into a time block. The test was then repeated starting from the first photon detected after the time block. In this way, the sequence of detections of X-ray photons was split into a sequence of broken lines, as shown in Fig. \ref{phot_arrival_img}. The intervals corresponding to the X-ray flares are easily identified as they are steeper than the intervals collecting quiescence photons. In order to quantify the uncertainties on this estimate of flare start and end times, we repeated the procedure adopting different values of the threshold for the K-S statistic, varying it by about 10\%. Table \ref{flaretau_table} lists the resulting mean flare duration, the associated uncertainties, and the delay of the X-ray flare with respect to the optical flare (i.e., t$\rm_{delay}$=t$\rm_{xray\_start}$-t$\rm_{kep\_start}$). In most (eight) cases the optical flare onset precedes that of the X-ray flare, in three cases t$\rm_{delay}$ is compatible with zero within errors, while only in one case (HCG~150) does the X-ray flare precede the optical flare. However, we note that in this star, residual background fluctuations may have affected the estimate of the initial rise time (see Fig. \ref{HCG150}).
This result is compatible with the general model of stellar flares \citep{Benz2008LRSP.5.1B}, in which the optical event, occurring in the chromosphere/upper photosphere, typically precedes the X-ray flare in the magnetic loop, as observed both in solar flares \citep[e.g.,][]{MartinezOliveros2012ApJ.753L.26M} and in stellar flares \citep[e.g.,][]{Gudel2002ApJ.580L.73G,Flaccomio2018AA.620A.55F}. \par
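The segmentation procedure described above can be sketched as follows. This version tests the uniformity of the arrival times directly with a two-sided K-S test (equivalent to testing the counts-versus-time curve against a straight line) and shrinks the window from the end until compatibility is reached; the p-value threshold is an assumption:

```python
import numpy as np
from scipy.stats import kstest

def constant_rate_blocks(times, p_thresh=0.01):
    """Split a photon arrival-time sequence into maximal blocks whose
    arrivals are compatible with a constant count rate; returns a list
    of (i_start, i_end) photon indices."""
    blocks, start, n = [], 0, len(times)
    while start < n:
        end = n
        while end - start > 2:
            seg = times[start:end]
            u = (seg - seg[0]) / (seg[-1] - seg[0])    # rescale to [0, 1]
            if kstest(u, "uniform").pvalue > p_thresh:
                break                                   # constant rate: accept
            end -= 1                                    # drop last photon, retry
        blocks.append((start, end - 1))
        start = end
    return blocks

# quiescence at 0.05 cnt/s with a ~0.5 cnt/s flare of 100 photons injected
rng = np.random.default_rng(3)
quiet1 = np.cumsum(rng.exponential(20.0, 100))
flare = quiet1[-1] + np.cumsum(rng.exponential(2.0, 100))
quiet2 = flare[-1] + np.cumsum(rng.exponential(20.0, 100))
times = np.concatenate([quiet1, flare, quiet2])
blocks = constant_rate_blocks(times)
```

The flare interval is then the block with the steepest counts-versus-time slope, i.e., the highest count rate.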
\begin{table}
\caption{Flare duration}
\label{flaretau_table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{t$_{\rm kep}$} &
\multicolumn{1}{|c|}{t$_{\rm xray}$} &
\multicolumn{1}{|c|}{t$_{\rm delay}$} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{ksec} &
\multicolumn{1}{|c|}{ksec} &
\multicolumn{1}{|c|}{minutes}\\
\hline
HII~212 & 12.4$\pm$1.3 & 14$\pm$3 & 64.0$\pm$15.3\\
HHJ~273 & 14.1$\pm$1.3 & 20.1$\pm$0.4 & 76.1$\pm$15.1\\
HHJ~336 & 3.6$\pm$1.3 & 4.0$\pm$0.2 & 7.2$\pm$15.1\\
HCG~244 & 8.8$\pm$1.3 & 3.6$\pm$0.5 & 29.3$\pm$15.1\\
HII~345 & 5.3$\pm$1.3 & 1.0$\pm$0.1 & 33.1$\pm$15.1\\
HII~1516 & 3.5$\pm$1.3 & 1.0$\pm$0.1 & 22.6$\pm$15.1\\
HCG~146 & 7.1$\pm$1.3 & 6.5$\pm$0.7 & -6.6$\pm$15.6\\
HCG~273 & 19.4$\pm$1.3 & 11.5$\pm$0.4 & 46.8$\pm$15.1\\
HCG~150 & 5.3$\pm$1.3 & 8.2$\pm$0.9 & -46.1$\pm$15.1\\
HCG~295 & 10.6$\pm$1.3 & 7.0$\pm$1 & 12.8$\pm$20.8\\
HII~405 & 3.5$\pm$1.3 & 4$\pm$2 & 26.7$\pm$15.1\\
HCG~148 & 5.3$\pm$1.3 & 1.6$\pm$0.3 & 38.0$\pm$15.1\\
\hline
\multicolumn{4}{l}{t$\rm_{delay}$=t$\rm_{xray\_start}$-t$\rm_{kep\_start}$} \\
\end{tabular}
\end{table}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{duration_vx_duration.ps}
\caption{Flare duration in optical vs. X-ray light. For each data point, we show the name of the star. The lines mark six linear fits performed with the \emph{IDL} routine \emph{SIXLIN}. The resulting slopes are indicated in the bottom-right corner (from top to bottom, the fitting methods are: Ordinary Least Squares Y vs. X, Ordinary Least Squares X vs. Y, Ordinary Least Squares Bisector, Orthogonal Reduced Major Axis, Reduced Major Axis, and Mean Ordinary Least Squares). In the left corner we show the results from a Spearman correlation. We observe a significant correlation between the flare duration in the two bands, and can see that the flares occurring in the KGF stars were shorter than those in M stars.}
\label{duration_plot}
\end{figure}
Figure \ref{duration_plot} compares the duration of the flares in optical and X-ray light. Given the small number of data points, we used the \emph{IDL} routine \emph{SIXLIN} to perform a linear fit of the points in Fig. \ref{duration_plot} adopting six different statistical methods for the linear regression. The resulting slopes, shown in the figure, are typically slightly smaller than one. Most of the flares in our sample therefore have similar duration in the two bands, even if typically slightly longer in X-rays, as observed in most solar flares. In Fig. \ref{duration_plot} we used different symbols to mark single M stars, single KGF stars, and binary stars (all three binary stars in our sample are M stars). Flare duration seems to differ among these three groups, with single M stars showing the longest flares, while binary stars and single KGF stars host the shortest flares. This difference is more pronounced in X-rays than in the optical, with all X-ray flares shorter than 5$\,$ks occurring in binary stars or single KGF stars. Given the small number of stars populating the ``KGF'' sample, our data do not allow us to state whether this is a real flare property or a selection effect, since the determination of the flare end time may be hindered by a high quiescent level, and the three KGF stars are the only ones whose quiescent X-ray level is brighter than log(L$\rm_X\,[erg/s]$)=29.4. \par
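For reference, three of the six SIXLIN slope estimators (OLS Y vs. X, OLS X vs. Y, and the OLS bisector, following Isobe et al. 1990) can be written compactly as below; the synthetic data with a true slope of 0.9 are illustrative:

```python
import numpy as np

def regression_slopes(x, y):
    """OLS Y versus X, OLS X versus Y, and the OLS bisector slope
    (Isobe et al. 1990)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    b1 = np.sum(dx * dy) / np.sum(dx * dx)        # OLS Y vs X
    b2 = np.sum(dy * dy) / np.sum(dx * dy)        # OLS X vs Y
    b3 = ((b1 * b2 - 1.0 + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2)))
          / (b1 + b2))                            # OLS bisector
    return b1, b2, b3

# illustrative correlated data with a true slope of 0.9
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 50)
y = 0.9 * x + rng.normal(0.0, 0.2, 50)
b1, b2, b3 = regression_slopes(x, y)
```

For positively correlated data the bisector slope always lies between the two OLS slopes, which is why it is often quoted as the representative value.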
Figure \ref{tauvsper_plot} shows the flare duration in optical and X-rays as a function of stellar rotation period. In both bands we observe a correlation between the duration of the flares and stellar rotation period, which apparently is not induced by a difficulty in detecting weak flares in the slowly rotating stars. For instance, HCG~273 and HII~212, the two stars with the largest rotation period (2.8 and 4.5 days, respectively), are not the brightest stars in our sample. One might expect it to be more difficult to select weak flares for these stars. However, we note that in the right panel, the bottom-left part of the diagram (rapid rotation and short flares) is populated by binary stars and KGF stars, suggesting that binarity, spectral type, or selection effects due to different quiescent level may affect the observed correlation. If confirmed, this correlation may indicate that rapidly rotating stars predominantly host short flares. Since short flares are typically associated with small flaring structures, and probably short magnetic loops, this might be in general agreement with the scenario where in rapidly rotating stars the intense centrifugal stripping inhibits the formation of long loops and reduces the volume available for stellar coronae \citep[e.g.,][]{Argiroffi2016AA.589A.113A}. However, a detailed analysis of the connection between loop geometry, energy release, and flare rate as a function of the stellar rotation period requires a larger and more complete sample of flares to confirm the existence of a correlation between flare decay time and stellar rotation. \par
It was therefore necessary to verify how our estimate of the X-ray flare duration may be affected by the intensity of the quiescent emission. To this aim, we repeated the calculation of the flare duration on simulated sequences of photon detection times. We considered five quiescent count rates ranging from 0.05$\,$cnt/s to 0.5$\,$cnt/s, and simulated 100 sequences of photon detection times for each of these assumed quiescent count rates. We also injected into each of these simulated quiescent levels a flare with a count rate of about 0.17$\,$cnt/s (600 photons in 3.63$\,$ks). These values were chosen in order to obtain a photon detection sequence with a quiescent level of 0.05$\,$cnt/s similar to that of HCG~244 (Fig. \ref{phot_arrival_img}). For each of the simulated sequences of photon detections we calculated the start and end time of the flare and thus its duration. The average values corresponding to a given quiescent count rate range from about 4.3$\,$ks (quiescent count rate of 0.05$\,$cnt/s) to 3.0$\,$ks (0.5$\,$cnt/s). This test confirms that our estimate of the duration of the X-ray flares may depend on the intensity of the quiescence, likely because the fainter the quiescent level, the easier it is to discern between the quiescence and the end of a flare. However, the $\sim$1$\,$ks difference is not large enough to affect the results obtained from Figs. \ref{duration_plot} and \ref{tauvsper_plot}. In Appendix \ref{AppC_sec} we show five simulated sequences of photon detection times. \par
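The simulation described above can be sketched as follows (homogeneous Poisson quiescence with a constant-rate flare injected on top; the window length and flare start time are illustrative):

```python
import numpy as np

def simulate_sequence(rate_q, duration, rate_f, t_start, t_flare, rng):
    """Simulate photon arrival times: Poisson quiescence at rate_q
    (cnt/s) over `duration` s, plus a constant-rate flare of duration
    `t_flare` starting at `t_start`."""
    t_q = rng.uniform(0.0, duration, rng.poisson(rate_q * duration))
    t_f = t_start + rng.uniform(0.0, t_flare, rng.poisson(rate_f * t_flare))
    return np.sort(np.concatenate([t_q, t_f]))

rng = np.random.default_rng(5)
# ~600 flare photons in 3.63 ks on top of a 0.05 cnt/s quiescence
times = simulate_sequence(0.05, 30e3, 600.0 / 3.63e3, 10e3, 3.63e3, rng)
```

Running the duration-estimation procedure on many such sequences, with the quiescent rate as the only varying parameter, isolates the effect of the quiescent level on the recovered flare duration.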
\begin{figure*}[]
\centering
\includegraphics[width=9cm]{flare_taukep_vs_per.ps}
\includegraphics[width=9cm]{flare_tauX_vs_per.ps}
\caption{Flare duration in optical (left panel) and X-ray light (right panel) vs. stellar rotation period. For each data point, we show the name of the star. The lines mark six linear fits performed with the \emph{IDL} routine \emph{SIXLIN}. The resulting slopes are indicated in the bottom-right corner. In the top-left corner we also show the Spearman's rank correlation coefficient and the significance of its deviation from zero. The observed correlation between flare duration and stellar rotation period suggests that rapidly rotating stars preferentially host smaller coronal loops.}
\label{tauvsper_plot}
\end{figure*}
\section{Time-averaged flare properties}
\label{glob_flare}
In order to calculate the energy released in the Kepler band, we first computed the equivalent duration (ED) of each flare, which is the integral under the normalized K2 light curve during the flare \citep{Davenport2016ApJ}. The ED is in units of seconds and corresponds to the time it would take the star to emit in quiescence the same amount of energy released by the flare \citep{Gershberg1972ApSS.19.75G,Hunt-Walker2012PASP.124.545H}. Thus, the ED allows us to estimate the total energy released by the flare in the Kepler band by multiplying it by the quiescent stellar luminosity in the Kepler band (E$_{\rm kep,flare}=$ED$\times$L$_{\rm kep,quies}$). We calculated the stellar Kepler luminosity in quiescence (L$_{\rm kep,quies}$) from the magnitude in the Kepler band and individual E$_{\rm B-V}$ value listed in the \emph{K2 Ecliptic Plane Input Catalog}\footnote{http://archive.stsci.edu/k2/epic/search.php} \citep{HuberBHB2016ApJS}, adopting the individual distances listed in Table \ref{sample_table}, a zero point flux in the Kepler band of $3.257\times10^{-9}\,$erg/cm$^2$/s/\AA$\,$ in the AB photometric system\footnote{http://svo2.cab.inta-csic.es/theory/fps/index.php?id=Kepler/Kepler.K \\ \&\&mode=browse\&gname=Kepler\&gname2=Kepler\#filter}, the extinction coefficient in the Kepler band equal to 0.859$^{\rm m}$ from the \citet{Donnell1994ApJ} extinction law, and ignoring any dependency of the extinction law and zero-point flux on stellar spectral type. This approach does not account for the variability of the star and the different spectral shape of the quiescent and flare emission. In principle we could assume a blackbody spectrum for both star and flare emission, the former at the stellar effective temperature and the latter at 10000$\,$K \citep{ShibayamaMNN2013ApJS}. 
However, this approach is also approximate, since it does not account for the time evolution of the flare spectral energy distribution (SED) or for the contribution of nonthermal emission to white-light flares \citep{KowalskiHWO2013ApJS}, and it assumes knowledge of the size of the emitting region and of the stellar radius. \par
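A minimal sketch of the ED and flare-energy computation (the toy light curve and the assumed quiescent Kepler luminosity are illustrative):

```python
import numpy as np

def equivalent_duration(t, flux, quies):
    """ED in seconds: trapezoidal integral of the flare excess
    relative to the quiescent level, ED = int (F/F_q - 1) dt."""
    excess = flux / quies - 1.0
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t)))

# toy flare: exponential decay, 50% peak excess, 60 s sampling
t = np.linspace(0.0, 7200.0, 121)                        # s
flux = 1.0 + 0.5 * np.exp(-(t - 1800.0) / 900.0) * (t >= 1800.0)
ed = equivalent_duration(t, flux, 1.0)
E_kep_flare = ed * 10 ** 31.9   # erg, for an assumed L_kep,quies
```

The analytic value of the integral here is $0.5\times900\,$s$\,\approx450\,$s, which the trapezoidal sum reproduces; multiplying by the quiescent luminosity then gives the flare energy in the Kepler band.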
In order to calculate the energy released by the X-ray flare and the average flare plasma properties (plasma temperature kT and emission measure EM), we used XSPEC v.~12.8.1 \citep{Arnaud1996} to analyze the X-ray spectrum of each source extracted during the flare. X-ray spectra were fitted with a single-temperature (1T) APEC ionization-equilibrium isothermal plasma model \citep{SmithBLR2001ApJL} after subtracting the spectrum extracted during quiescence. In this way, we isolated the contribution of the flaring plasma to the X-ray spectrum. We adopted the element abundances defined by \citet{MaggioFFM2007}. We also accounted for interstellar absorption using the TBABS model \citep{WilmsAM2000}, fixing the absorption column N$_{\rm H}$ to the value obtained by multiplying the individual known optical extinctions by 1.8$\times$10$^{21}\,$cm$^{-2}$mag$^{-1}$. We verified that leaving N$_{\rm H}$ as a free parameter did not significantly improve the quality of the fit. Best-fit models were chosen by minimizing the C-statistic, and the quality of the fit was tested using the XSPEC tool \emph{goodness}. The limit for acceptable fits adopted in this paper, set to a null-hypothesis probability of a good fit (P$_{\%}$) equal to 5\%, was always met using 1T APEC models, given that the number of detected photons did not allow us to resolve the thermal structure of the flaring plasma. The significance of the parameters was verified by analyzing the confidence contours in the C-stat space with the XSPEC tool \emph{steppar}. The total energy released in the broad X-ray band (E$_{\rm xray,flare}$) was calculated using the XSPEC model CFLUX to obtain the flux, which was converted into luminosity adopting the individual stellar distance and then multiplied by the flare duration. The peak luminosity L$_{\rm xray,peak}$ was calculated with the time-resolved X-ray spectral fit explained in Sect.~\ref{time_res}, taking the largest luminosity obtained in the time blocks defined to sample the flare. In order to estimate the plasma properties and X-ray luminosity during quiescence, we fitted the quiescent X-ray spectra using the same set of models adopted for the flaring plasma. For the quiescence, however, background spectra were extracted in suitable background regions selected following the prescription of the SAS guide (see Sect. \ref{sample_sec}). Results are summarized in Table \ref{flareprop_table}, which shows the total energy and peak luminosity of each flare, together with the average plasma properties observed during quiescence and during the flares. We do not show the average flare temperature obtained for the flare that occurred in HII~212, since it is not well determined: the spectral fit returned a temperature lower than the quiescent value.
\par
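The flux-to-energy conversion reduces to $L = 4\pi d^2 F$ and $E = L\,\Delta t$; a sketch with illustrative numbers (the flux value below is not taken from Table \ref{flareprop_table}):

```python
import numpy as np

PC_CM = 3.0857e18      # cm per parsec

def flare_luminosity_energy(flux_cgs, distance_pc, duration_s):
    """L = 4*pi*d^2*F (erg/s) and E = L*duration (erg) for an
    absorption-corrected flux in erg/s/cm^2."""
    lum = 4.0 * np.pi * (distance_pc * PC_CM) ** 2 * flux_cgs
    return lum, lum * duration_s

# illustrative: 2e-13 erg/s/cm^2 at 136 pc, lasting 11.5 ks
L, E = flare_luminosity_energy(2e-13, 136.0, 11.5e3)
```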
\begin{table*}
\caption{Absorption, plasma temperature, luminosity, and total energy in optical and X-ray light during quiescence and flares.}
\label{flareprop_table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{N$_{\rm H}$} &
\multicolumn{1}{|c|}{kT$_{\rm quies}$} &
\multicolumn{1}{|c|}{kT$_{\rm flare}$} &
\multicolumn{1}{|c|}{log(L$_{\rm xray,quies}$)} &
\multicolumn{1}{|c|}{log(L$_{\rm xray,peak}$)} &
\multicolumn{1}{|c|}{log(E$_{\rm xray,flare}$)} &
\multicolumn{1}{|c|}{log(L$_{\rm kep,quies}$)} &
\multicolumn{1}{|c|}{log(E$_{\rm kep,flare}$)} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{$10^{21}\,$cm$^{-2}$} &
\multicolumn{1}{|c|}{keV} &
\multicolumn{1}{|c|}{keV} &
\multicolumn{1}{|c|}{[erg/sec]} &
\multicolumn{1}{|c|}{[erg/sec]} &
\multicolumn{1}{|c|}{[erg]} &
\multicolumn{1}{|c|}{[erg/sec]} &
\multicolumn{1}{|c|}{[erg]}\\
\hline
HII~212 & 0.6 & $0.77^{0.85}_{0.69}$& & $29.05^{29.10}_{29.00}$& $30.30^{30.40}_{30.22}$& $33.52^{33.56}_{33.39}$& 32.02$\pm$0.20 & 34.16$\pm$0.42\\
HHJ~273 & 1.2 & $0.69^{0.77}_{0.62}$& $1.61^{1.70}_{1.52}$& $28.73^{28.78}_{28.69}$& $30.34^{30.37}_{30.30}$& $34.04^{34.06}_{34.02}$& 31.53$\pm$0.23 & 34.12$\pm$0.42\\%37
HHJ~336 & 1.1 & $0.35^{0.42}_{0.29}$& $1.56^{1.65}_{1.45}$& $28.86^{28.94}_{28.79}$& $30.43^{30.49}_{30.37}$& $33.65^{33.68}_{33.62}$& 31.88$\pm$0.23 & 33.59$\pm$0.74\\%30
HCG~244 & 0.9 & $0.58^{0.61}_{0.55}$& $1.66^{1.98}_{1.43}$& $29.11^{29.12}_{29.09}$& $29.75^{29.86}_{29.67}$& $32.90^{32.95}_{32.85}$& 32.01$\pm$0.24 & 34.04$\pm$0.41\\%69
HII~345 & 1.4 & $0.59^{0.60}_{0.59}$& $2.45^{2.69}_{2.23}$& $30.18^{30.19}_{30.18}$& $30.85^{30.90}_{30.80}$& $33.68^{33.71}_{33.66}$& 33.27$\pm$0.20 & 34.66$\pm$0.50\\%22
HII~1516 & 0.2 & $0.91^{0.93}_{0.89}$& $2.56^{3.21}_{2.30}$& $29.42^{29.43}_{29.41}$& $30.51^{30.58}_{30.45}$& $33.54^{33.58}_{33.51}$& 32.17$\pm$0.20 & 33.71$\pm$0.56\\%39
HCG~146 & 1.6 & $0.74^{0.84}_{0.61}$& $0.78^{0.85}_{0.70}$& $28.54^{28.59}_{28.47}$& $29.48^{29.53}_{29.43}$& $32.99^{33.04}_{32.95}$& 31.53$\pm$0.23 & 33.52$\pm$0.47\\%2
HCG~273 & 1.1 & $0.60^{0.64}_{0.56}$& $1.63^{1.69}_{1.57}$& $29.07^{29.09}_{29.06}$& $30.33^{30.36}_{30.29}$& $33.86^{33.87}_{33.84}$& 31.92$\pm$0.23 & 34.34$\pm$0.38\\%50
HCG~150 & 0.7 & $0.61^{0.65}_{0.58}$& $2.16^{2.59}_{1.79}$& $28.90^{28.92}_{28.89}$& $29.62^{29.76}_{29.52}$& $33.02^{33.07}_{32.97}$& 31.63$\pm$0.23 & 32.86$\pm$0.53\\%99
HCG~295 & 0.7 & $0.69^{0.76}_{0.62}$& $1.96^{3.22}_{1.44}$& $28.96^{29.00}_{28.92}$& & $33.36^{33.47}_{33.26}$& 31.44$\pm$0.21 & 33.49$\pm$0.41\\%146
HII~405 & 1.0 & $0.47^{0.49}_{0.46}$& $1.29^{1.36}_{1.21}$& $29.66^{29.68}_{29.65}$& $29.87^{29.98}_{29.77}$& $33.50^{33.53}_{33.46}$& 33.75$\pm$0.21 & 33.90$\pm$0.58\\%99
HCG~148 & 0.8 & $0.32^{0.36}_{0.29}$& $1.00^{1.15}_{0.89}$& $28.71^{28.77}_{28.66}$& & $33.07^{33.13}_{33.01}$& 31.59$\pm$0.22 & 34.07$\pm$0.50\\%146
\hline
\multicolumn{8}{l}{} \\
\end{tabular}
\end{table*}
\begin{figure*}[]
\centering
\includegraphics[width=8cm]{flare_EvsE.ps}
\includegraphics[width=8cm]{flare_LpeakvsLpeak.ps}
\caption{Comparison between the total emitted energy (left panel) and the peak luminosity (right panel) in optical and X-ray light. The name of each star is also indicated. The lines mark six linear fits between the log values performed with the \emph{IDL} routine \emph{SIXLIN}. The resulting slopes are also indicated. We also show the Spearman's rank correlation coefficient and the significance of its deviation from zero. These flares typically released more energy and had a larger peak luminosity in the optical than in X-rays.}
\label{EkvsLk_plot}
\end{figure*}
Figure \ref{EkvsLk_plot} compares the total energy released in optical and in X-ray light. As typically observed in solar flares \citep[e.g.,][]{Kretzschmar2010NatPh.6.690K,Kretzschmar2011AA.530A.84K}, most of the events analyzed in this paper released more energy in optical light than in X-rays. With the exception of the flares that occurred in HHJ~336 and HCG~150, where the E$_{\rm kep,flare}$/E$_{\rm xray,flare}$ ratio is 0.9 and 0.7, respectively, the ratio in the other flares ranges from 1.2 (HHJ~273) to 13.7 (HCG~244). As indicated by a two-sided Spearman test, the observed correlation between the energy released in the two bands is weak. However, the lack of correlation in the left panel is mainly driven by the stars HCG~244 and HCG~148, the only two spectroscopically confirmed binary stars. Indeed, removing these stars, the correlation test yields r=0.75 and P($\rho$)=0.01. A stronger correlation is instead observed between the peak luminosities in optical and X-rays, with, again, larger values observed in the optical band. \par
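As a concrete illustration, the following Python sketch shows how a two-sided Spearman test reacts when two discordant points are removed from an otherwise monotonic sample. The log-energy values are hypothetical stand-ins, not the measured flare energies; the last two points mimic outliers such as the two binaries.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical log-energies; the last two points play the role of the binaries.
log_e_opt = np.array([33.0, 33.2, 33.5, 33.7, 34.0, 34.2, 34.5, 34.7, 32.9, 34.6])
log_e_x   = np.array([32.8, 33.0, 33.2, 33.5, 33.7, 33.9, 34.1, 34.3, 34.0, 32.9])

rho_all, p_all = spearmanr(log_e_opt, log_e_x)            # full sample
rho_cut, p_cut = spearmanr(log_e_opt[:-2], log_e_x[:-2])  # drop the two "binaries"
print(rho_all, rho_cut)  # the trimmed sample correlates much more strongly
```

Dropping the two discordant points turns a weak rank correlation into a perfect one, mirroring the behavior described in the text.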
We searched for correlations between the flare properties (such as energy and duration), both in optical and X-rays, and the estimated starspot coverage (A$_{\rm spot}$) of the stellar surface. Correlations between starspot coverage and flare properties (energy and flare frequency) have been found in stars observed with \textit{Kepler} \citep{McQuillan2014ApJS.211.24M,Maehara2017PASJ.69.41M} and interpreted as resulting from the relation between A$_{\rm spot}$ and the stellar magnetic activity level \citep[e.g.,][]{McQuillan2014ApJS.211.24M}. To calculate the starspot coverage, we used Eqs. 2 and 3 of \citet{Maehara2017PASJ.69.41M}, which are valid under the hypothesis that starspots are organized mainly in large spots whose lifetime is longer than the stellar rotation period. Both hypotheses seem reasonable for our sample of active and rapidly rotating stars. These equations require the stellar radii and effective temperatures, listed in Table \ref{sample_table}, and the normalized amplitude of the optical light curves. Optical amplitude variability is in fact related to the apparent area of starspots, as suggested by \citet{McQuillan2014ApJS.211.24M} and \citet{Maehara2017PASJ.69.41M}. Given the short duration of the XMM observations compared with the length of the K2 light curves, we measured the normalized amplitude of the K2 light curve during the XMM observations in order to characterize the starspot coverage at the time of the X-ray observations. The starspot coverage we obtained ranges between 0.01 and 0.1, the only exceptions being HCG~295, with A$_{\rm spot}$=0.007, and HHJ~273 and HHJ~336, with A$_{\rm spot}$>0.5. However, the flares analyzed here do not show any correlation with the starspot coverage. We interpret this result as a consequence of the rapid rotation of these stars. 
In fact, correlations between flare properties and starspot coverage are typically observed for stars with rotation periods longer than 3 days, while in more rapidly rotating stars a sort of ``saturation'' of flare properties is found \citep[e.g.,][]{McQuillan2014ApJS.211.24M}, and, as shown in Table \ref{sample_table}, all but one of our stars have rotation periods shorter than 3 days.
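The starspot-coverage estimate can be sketched as follows. This is a simplified stand-in for Eqs. 2 and 3 of Maehara et al. (2017): the fixed spot-to-photosphere temperature ratio (0.8) is an illustrative assumption, not the paper's exact calibration, which uses a temperature-dependent spot-temperature relation.

```python
def spot_coverage(amp, t_spot_ratio=0.8):
    """Fractional starspot coverage from the normalized light-curve amplitude.

    amp          : normalized rotational-modulation amplitude DeltaF/F
    t_spot_ratio : assumed T_spot/T_star (a stand-in for Eq. 3 of
                   Maehara et al. 2017, which derives T_spot from T_star)
    """
    # A spot of relative temperature x blocks a fraction (1 - x^4) of the flux
    # it covers, so the observed amplitude is diluted by that contrast factor.
    return amp / (1.0 - t_spot_ratio ** 4)

print(round(spot_coverage(0.02), 3))  # → 0.034
```

A 2% modulation amplitude thus maps to a few-percent spot coverage, in line with the 0.01-0.1 range quoted above.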
\section{Time-resolved X-ray analysis}
\label{time_res}
We studied the time evolution of the emitting plasma during the X-ray flares by sampling the merged MOS+PN light curves with suitable time intervals. Despite being useful for flare detection, the intervals defined in Sect. \ref{sample_sec} did not always sample the flares properly. For this reason, we adopted two additional sets of time blocks: one defined by collecting 100 net photons in the broad band in each block, starting from the beginning of the flare, and one with blocks defined ad hoc after visual inspection of the light curve. Among the three sets of intervals defined for each flare, we used the one resulting in the best time sampling, taking into account also the X-ray counts detected in each interval. We then calculated the time-resolved X-ray properties (plasma temperature and emission measure) in each block, repeating the spectral analysis described in Sect. \ref{glob_flare}. The flares in HCG~295 and HCG~148 were too faint for this analysis. \par
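The first of the two additional block definitions, accumulating a fixed number of photons per block, can be sketched as below. This is illustrative only; the actual analysis also accounts for background subtraction and bad time intervals.

```python
import numpy as np

def photon_blocks(event_times, t_start, n_per_block=100):
    """Define time-block edges so that each block contains n_per_block photons.

    event_times : sorted photon arrival times [s]
    t_start     : start time of the flare [s]
    Returns the list of block edges, starting at t_start.
    """
    t = np.asarray(event_times)
    t = t[t >= t_start]                      # keep only events after flare start
    edges = [t_start]
    for i in range(n_per_block, len(t), n_per_block):
        edges.append(t[i])                   # edge at every n_per_block-th photon
    edges.append(t[-1])                      # close the last (possibly short) block
    return edges

# 1000 evenly spaced "photons" split into blocks of 100
edges = photon_blocks(np.linspace(0.0, 999.0, 1000), t_start=0.0)
print(len(edges) - 1)  # → 10
```

Because the edges follow the photon arrival rate, blocks are short near the flare peak and longer in the decay, which is what makes this binning useful for time-resolved spectroscopy.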
The main objective of this analysis was to track the time evolution of the X-ray-emitting plasma in the kT-versus-log(EM) plane, where it is possible to distinguish the typical path of the four phases of the evolution of a single loop: the heat pulse with a sudden increase of plasma temperature; the evaporation phase during which the heated plasma fills the magnetic loop until the plasma density reaches its peak; the conductive cooling phase with a decline of temperature while density is still increasing; and the radiative cooling phase after the density reaches its peak, with both temperature and density decreasing \citep{Reale2007AA}. This produces a characteristic path in the temperature-versus-density (or emission measure) plane. This path, and in particular the slope of the cooling phase, can be used as a diagnostic for flare cooling and heating processes. \par
Figures \ref{HII405_flare} and \ref{HII1516_flare} show the evolution of the flares observed in HII~405 and HII~1516, respectively. These are the two flares for which the heating phase could be distinguished with the defined blocks. In each of these figures, the left panel shows the X-ray flare in the broad, soft, and hard energy bands, superimposed on the optical flare, together with the defined set of time intervals. Count rates in the soft and hard light curves during the bad time intervals are not shown in these figures, given that their amplitude is typically comparable with background fluctuations. In the flare in HII~1516, the count rate increased by a factor of 40, while a smaller amplitude is observed in HII~405 (a factor of $\sim$15). The two central panels show the time evolution of the plasma temperature and EM, which results in the flare evolution pattern in the temperature-versus-EM plane shown in the right panel. Together with the flare observed in HII~345, the flare in HII~1516 is the one for which we measured the highest plasma temperature averaged over the flare duration (Table \ref{flareprop_table}). \par
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_146_601_HII405.ps}
\caption{Evolution of the flare observed in HII~405. The left panel shows the merged MOS+PN light curve in broad, soft, and hard energy bands (dots) superimposed on the K2 light curve (green asterisks). The horizontal green lines mark the defined blocks and the vertical dashed areas correspond to the bad time intervals (Sect. \ref{xray_data_sec}). The central panels show the time evolution of plasma temperature and emission measure, while the right panel shows the evolution of the flare in the log(T) vs. 0.5$\times$log(EM) plane. The red dashed line marks the slope of the cooling phase.}
\label{HII405_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_42_301_HII1516.ps}
\caption{Evolution of the flare observed in HII~1516. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HII1516_flare}
\end{figure*}
In six flares, namely those in HHJ~273 (Fig. \ref{HHJ273_flare}), HII~212 (Fig. \ref{HII212_flare}), HCG~273 (Fig. \ref{HCG273_flare}), HHJ~336 (Fig. \ref{HHJ336_flare}), HCG~244 (Fig. \ref{HCG244_flare}), and the G8 star HII~345 (Fig. \ref{HII345_flare}), the time blocks do not resolve the heating phase, while the cooling phase is sampled with at least three blocks. In all these flares, the temperature and density peak during the same interval, the one exception being HII~345, where the peak plasma temperature is observed during interval \#1 while the plasma density peaks during interval \#2. The largest flare amplitude is observed in HHJ~273, with the count rate increasing by a factor of 115, followed by HCG~273 (32.0), HII~212 (7.4), HHJ~336 (6.9), HCG~244 (6.3), and HII~345 (6.2). The flare occurring in HHJ~273 is also the brightest in X-rays in the whole sample (log(E$_{\rm xray,flare}$[erg])=34.04$^{34.06}_{34.02}$, see Table \ref{flareprop_table}). In these flares, we measured similar values of the peak plasma temperature, ranging from 42$^{48}_{37}\,$MK in HHJ~273 to 50$^{96}_{38}\,$MK in HCG~244 (see Table \ref{flareprop_table}). The exception is the flare in the G8 star HII~345, which reached a peak temperature of 122$^{189}_{85}\,$MK (against a quiescent plasma temperature of 6.96$^{7.04}_{6.88}\,$MK).
It is interesting to compare the variability observed in the broad and hard X-ray bands in HHJ~273 (Fig. \ref{HHJ273_flare}) and HCG~273 (Fig. \ref{HCG273_flare}). In HHJ~273, we observed an evident and narrow peak of hard X-ray emission, which precedes the peak in the broad- and soft-band light curves by $\sim$800$\,$s. In HCG~273, the hard X-ray light curve rises for $\sim$600$\,$s before the observed peak in the broad and soft bands. In this case, however, instead of a single peak, we observe two peaks separated by about 800$\,$s, which could be due to the loop oscillations that we observed in this flare (see Sect. \ref{oscill_sec}). The delayed onset of the soft X-ray flare with respect to the hard band can naturally be explained by the time evolution of flares \citep{Reale2007AA}, with the peak of the plasma temperature preceding the density peak. A similar variability in the soft and hard X-ray bands was observed in solar flares \citep{Sato2001ApJ.558L.137S,Reale2001ApJ.557.906R} and in a few other young stars \citep{Preibisch2002AJ.123.1613P,Preibisch2003AA.401.543P}. \par
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_22_501_HHJ273.ps}
\caption{Evolution of the flare observed in HHJ~273. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HHJ273_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_2_601_HII212.ps}
\caption{Evolution of the flare observed in HII~212. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HII212_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_69_301_HCG273.ps}
\caption{Evolution of the flare observed in HCG~273. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HCG273_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_30_101_HHJ336.ps}
\caption{Evolution of the flare observed in HHJ~336. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HHJ336_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_37_301_HCG244.ps}
\caption{Evolution of the flare observed in HCG~244. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HCG244_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_39_501_HII345.ps}
\caption{Evolution of the flare observed in HII~345. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HII345_flare}
\end{figure*}
In the remaining flares, we were not able to isolate the heating phase, while the cooling phase is sampled with only two intervals. During these flares, the count rate increased by a factor of 9.7 in HCG~150 (Fig. \ref{HCG150_flare}) and of 14 in HCG~146 (Fig. \ref{HCG146_flare}). The flare in HCG~150 also shows a prolonged rise phase lasting about 25 minutes. \par
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_99_601_HCG150.ps}
\caption{Evolution of the flare observed in HCG~150. Panel layout and content as in Fig.~\ref{HII405_flare}.}
\label{HCG150_flare}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=18cm]{flare_atlas_50_601_HCG146.ps}
\caption{Evolution of the flare observed in HCG~146. Panel layout and content as in Fig.~\ref{HII405_flare}}
\label{HCG146_flare}
\end{figure*}
We calculated the slope ($\rm \zeta$) of the cooling pattern followed by the flares in the log(kT)-versus-0.5$\times$log(EM) plane by interpolating the points of the cooling phase. As shown by \citet{Jakimiec1992AA.253.269J}, \citet{Sylwester1993AA.267.586S}, and \citet{RealeBPS1997AA}, this value provides hints about the presence of heating during the cooling phase: $\zeta$ is expected to be $\sim$2 when negligible heating occurs during the cooling phase, and $<$2 otherwise. As shown in Table \ref{slopecool_table}, the slope values suffer from large uncertainties. From the analysis of the cooling path of these flares in the log(T)-versus-log(EM) plane, we thus obtained a marginal indication that these flares are associated with single loops without substantial heating during the cooling phase, with the exceptions of HHJ~273 ($\rm \zeta$=0.4$\pm$1.1, see Fig. \ref{HHJ273_flare}) and HII~405 ($\zeta$=0.7$\pm$0.8, see Fig. \ref{HII405_flare}). However, given the large uncertainties on $\rm \zeta$, we cannot exclude that some of these flares may be associated with more complicated structures, such as flare-loop arcades overlying the active regions, as discussed by \citet{HeinzelShibata2018ApJ.859.143H}. \par
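The slope measurement amounts to a linear fit in the log(kT) vs. 0.5$\times$log(EM) plane. The sketch below verifies the procedure on a synthetic cooling track built with $\zeta$ = 2, the value expected for pure cooling without sustained heating.

```python
import numpy as np

def cooling_slope(kT, em):
    """Slope zeta of the cooling path in the log(kT) vs 0.5*log(EM) plane.

    kT : plasma temperatures during the cooling phase [keV]
    em : emission measures during the cooling phase [cm^-3]
    """
    x = 0.5 * np.log10(np.asarray(em))
    y = np.log10(np.asarray(kT))
    zeta, _ = np.polyfit(x, y, 1)  # first-degree fit; slope is zeta
    return zeta

# Synthetic cooling track built with zeta = 2 (pure cooling, no extra heating)
em = np.array([1e54, 5e53, 2e53, 1e53])
kT = 3.0 * (em / em[0]) ** (2 * 0.5)  # log(kT) = 2 * (0.5*log EM) + const
print(round(cooling_slope(kT, em), 2))  # → 2.0
```

In practice, the uncertainties on the individual kT and EM points should be propagated into the fit (e.g., via a weighted fit or Monte Carlo resampling), which is what produces the large error bars in Table \ref{slopecool_table}.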
\begin{table}
\caption{Slope of the cooling pattern}
\label{slopecool_table}
\centering
\begin{tabular}{|c|c|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{$\zeta$} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{} \\
\hline
HII 212 & $3.2\pm2.8$\\
HHJ 273 & $0.4\pm1.1$\\
HHJ 336 & $1.1\pm1.2$\\
HCG 244 & $1.9\pm2.4$\\
HII 345 & $2.1\pm0.9$\\
HII 1516& $1.7\pm1.4$\\
HCG 146 & $2.4\pm1.2$\\
HCG 273 & $1.6\pm0.5$\\
HCG 150 & $5.5\pm30$\\
HII 405 & $0.7\pm0.8$\\
\hline
\multicolumn{2}{l}{} \\
\end{tabular}
\end{table}
\subsection{Loop lengths}
\label{loops_sec}
Most of the slopes listed in Table \ref{slopecool_table} are consistent with pure cooling in a single flaring loop. We can estimate the size of the loops by applying relations derived by \citet{RealeBPS1997AA} and \citet{Reale2007AA} from hydrodynamic loop models. In these models, the plasma is described as a compressible fluid confined in the coronal loops and transporting energy along the magnetic field lines. The only role of the magnetic field is therefore to confine the plasma \citep{Rosner1978ApJ.220.643R}. The flare is triggered by a heat pulse injected inside the loop, and the plasma cools down by radiation and thermal conduction along the field lines. Compared with other existing models, they have the advantage of being easily connected with observable quantities such as the plasma temperature and density in the loop. Other models, such as that developed by \citet{ShibataYokoyama2002ApJ.577.422S}, present a relation between the loop length and physical quantities that are difficult to estimate from optical and X-ray light curves, such as the coronal pre-flare electron density. \par
We estimated the loop lengths using two equations. The former is based on the time of the density peak (t$_{\rm maxden}$), calculated in this paper as the time elapsed between the flare start and the midpoint of the block with the largest emission measure:
\begin{equation}
\rm L_{\rm loop}=6\times 10^{2.5}\times t_{\rm maxden}\times \Psi^2 \times T^{0.5}_{\rm max}
\label{loop_eq_2}
,\end{equation}
while the latter is based on the duration of the rise of the light curve (i.e., from the flare start to the peak of emission) $\rm t_{rise}$, measured in kiloseconds:
\begin{equation}
\rm L_{\rm loop}=0.6 \times \Psi^2 \times T^{0.5}_{\rm max} \times t_{\rm rise}
\label{loop_eq_3}
.\end{equation}
In these equations, T$_{\rm max}$ is the loop maximum temperature in Kelvin, calculated from the maximum best-fit temperature of the time-resolved spectral analysis as T$_{\rm max}$=0.13$\times$T$_{\rm max,fit}^{1.16}$ \citep[specific for EPIC instruments,][]{Reale2007AA}, and $\Psi$ is the ratio between the peak temperature and the temperature at the density peak, which ranges between 1.2 and 2 \citep{Reale2007AA}. The resulting loop lengths are shown in Table \ref{length_table}. The lengths estimated with the two equations are generally in good agreement.
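A minimal implementation of Eqs. (\ref{loop_eq_2}) and (\ref{loop_eq_3}) is sketched below. The unit conventions are taken as printed above and should be checked against \citet{Reale2007AA} before quantitative use; $\Psi$=1.6 is an assumed midpoint of the quoted 1.2-2 range. Only the scalings (linear in the timescale, square root in T$_{\rm max}$) are exercised here.

```python
import math

def t_loop_max(t_fit_K):
    """Loop maximum temperature from the EPIC best-fit temperature (both in K),
    using the calibration T_max = 0.13 * T_fit^1.16 quoted in the text."""
    return 0.13 * t_fit_K ** 1.16

def loop_length_density_peak(t_maxden, t_max_K, psi=1.6):
    """Eq. (2) of the text: L = 6x10^2.5 * t_maxden * psi^2 * T_max^0.5.
    psi = 1.6 is an assumed midpoint of the 1.2-2 range."""
    return 6.0 * 10 ** 2.5 * t_maxden * psi ** 2 * math.sqrt(t_max_K)

def loop_length_rise(t_rise_ks, t_max_K, psi=1.6):
    """Eq. (3) of the text: L = 0.6 * psi^2 * T_max^0.5 * t_rise, t_rise in ks."""
    return 0.6 * psi ** 2 * math.sqrt(t_max_K) * t_rise_ks

# Both estimates scale linearly with the timescale and as sqrt(T_max)
T = t_loop_max(5e7)
print(loop_length_density_peak(600.0, T) / loop_length_density_peak(300.0, T))  # → 2.0
```

Because both relations share the $\Psi^2\,T_{\rm max}^{1/2}$ factor, any disagreement between the two length estimates for a given flare traces back to the two measured timescales rather than to the temperature calibration.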
\begin{table}
\caption{Comparison of the loop lengths obtained with Eqs. (\ref{loop_eq_2}) and (\ref{loop_eq_3})}
\label{length_table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{$\rm L_{\rm eq1}$} &
\multicolumn{1}{|c|}{$\rm L_{\rm eq2}$} \\
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{$\times10^{10}\,$cm} &
\multicolumn{1}{|c|}{$\times10^{10}\,$cm} \\
\hline
HII~212 & 2.1$^{4.4}_{1.0}$ & 1.3$^{4.1}_{0.3}$ \\
HHJ~273 & 2.5$^{4.2}_{1.3}$ & 3.7$^{7.3}_{1.6}$ \\
HHJ~336 & 1.1$^{2.0}_{0.5}$ & 0.9$^{2.6}_{0.2}$ \\
HCG~244 & 1.3$^{2.8}_{0.6}$ & 1.7$^{5.5}_{0.4}$ \\
HII~345 & 1.3$^{2.7}_{0.6}$ & 1.0$^{3.1}_{0.2}$ \\
HII~1516 & 2.7$^{5.8}_{1.1}$ & 1.4$^{4.5}_{0.3}$ \\
HCG~146 & 1.6$^{2.7}_{0.8}$ & 0.9$^{2.3}_{0.2}$ \\
HCG~273 & 1.4$^{2.5}_{0.7}$ & 2.7$^{5.2}_{1.2}$ \\
HCG~150 & 2.1$^{6.6}_{0.9}$ & 6.5$^{22.6}_{2.5}$\\
HII~405 & 5.5$^{9.0}_{2.9}$ & 2.8$^{5.8}_{1.2}$ \\
\hline
\multicolumn{3}{l}{} \\
\end{tabular}
\end{table}
\subsection{Oscillations}
\label{oscill_sec}
In solar flares, oscillatory patterns in soft X-ray light curves have been observed several times \citep{McLaughlin2018SSRv.214.45M}. They are typically interpreted as MHD oscillations in the magnetic loops \citep[][and references therein]{StepanovZN2012book,VanDoorsselaere2016SoPh.291.3143V}, or as global waves traveling across the corona \citep{LiuOfman2014SoPh}. Oscillatory patterns have also been observed in stellar flares, and interpreted as magneto-acoustic waves produced by oscillations of the loop during the evaporation of the chromospheric plasma \citep{ZaitsevStepanov1989SvAL}; fast MHD waves \citep{MathioudakisBJD2006AA}; and fast kink modes in stellar loops \citep{PandeySrivastava2009ApJ}, which however are not expected to produce density oscillations \citep{StepanovZN2012book}. In the alternative model developed by \citet{Reale2016ApJ}, already applied to flares in young stars \citep{Reale2018ApJ.856.51R}, the oscillations are due to density waves, triggered by a heat pulse shorter than the sound crossing time inside the loop at the temperature peak. We followed the method developed by \citet{LopezSantiago2018RSPTA.37670253L} to search for oscillations in the X-ray flares in our sample. The adopted method is a modified version of Fourier analysis designed to detect quasi-periodic signals, in which the light curve is transformed from the time to the frequency domain adopting a Morlet function as the mother function \citep{Torrence98apractical}. Figure \ref{wav69_figure} shows the results of this calculation for the flare in HCG~273, the only one in our sample where significant oscillations were detected\footnote{We are not able to state whether oscillations in the other flares are physically absent or just not detectable.}.
\begin{figure}[]
\centering
\includegraphics[width=9cm]{wavelet_source69.ps}
\caption{Top panel: Observed light curve during the flare in HCG~273 overplotted with the flare global shape obtained by fitting a third degree polynomial to the binned (10 s) light curve. Bottom panel: Wavelet power spectrum obtained during the flare, with dotted, dashed, and solid contours marking peaks with 68\%, 95\%, and 99\% confidence level, respectively. The hatched area is the ``cone of influence'' (see Sect. \ref{oscill_sec}).}
\label{wav69_figure}
\end{figure}
The power spectrum (i.e., the square of the wavelet transform) is two-dimensional (one dimension for time and one for frequency), and oscillation patterns (together with other disturbances in the time series) result in extended peaks with high confidence level. Figure \ref{wav69_figure} shows the wavelet power spectrum obtained from the flare in HCG~273. The cone of influence marked in the diagram is the region of the power spectrum where the edge effects are important \citep{Torrence98apractical}. A significant oscillation with a period of 500$\pm$100$\,$s is detected. The detection of oscillations in the flare observed in HCG~273 supports the hypothesis of a single loop dominating this flare, ignited by a single rapid heat pulse, which is shorter than the sound crossing time across the loop at the maximum plasma temperature. \par
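A numpy-only sketch of this machinery, an analytic Morlet mother wavelet applied in the Fourier domain following \citet{Torrence98apractical}, is shown below. Mirroring Fig. \ref{wav69_figure}, the global flare shape is removed with a third-degree polynomial before transforming. The light curve and its 500 s oscillation are synthetic and purely illustrative.

```python
import numpy as np

def morlet_power(x, dt, periods, w0=6.0):
    """Morlet wavelet power |W(s,t)|^2 computed in the Fourier domain
    (Torrence & Compo 1998)."""
    n = len(x)
    xf = np.fft.fft(x - np.mean(x))
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    # scale corresponding to a given Fourier period for the Morlet wavelet
    scales = np.asarray(periods, float) * (w0 + np.sqrt(2.0 + w0 ** 2)) / (4.0 * np.pi)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # analytic Morlet daughter wavelet in the frequency domain
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        power[i] = np.abs(np.fft.ifft(xf * np.sqrt(2.0 * np.pi * s / dt) * psi_hat)) ** 2
    return power

# Synthetic flare decay with a 500 s oscillation, sampled every 10 s
t = np.arange(0.0, 3000.0, 10.0)
lc = np.exp(-t / 2000.0) + 0.3 * np.sin(2.0 * np.pi * t / 500.0)
trend = np.polyval(np.polyfit(t, lc, 3), t)  # global flare shape, as in the figure
periods = np.arange(200.0, 1001.0, 20.0)
pw = morlet_power(lc - trend, 10.0, periods)
best = periods[np.argmax(pw.mean(axis=1))]
print(best)  # time-averaged power peaks at the injected 500 s period
```

The two-dimensional `pw` array corresponds to the power spectrum plotted in the bottom panel of Fig. \ref{wav69_figure}; significance contours and the cone of influence would be computed on top of it.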
This result also allows us to obtain an independent estimate of the length of this loop. The observed peak temperature of $49.3^{58.3}_{43.2}\,$MK corresponds to a sound speed v$_{\rm s}$=8.1$^{8.9}_{7.6}\times$10$^7\,$cm/s. As found by \citet{Reale2016ApJ}, during one oscillation period the plasma travels twice along the loop. The resulting loop length can be estimated as L$_{\rm loop}$=v$_{\rm s}\times$P/2=2.0$^{2.2}_{1.9}\times10^{10}\,$cm (where P is the oscillation period). This estimate is compatible within errors with those in Table \ref{length_table}. \par
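Using the numbers quoted above, the oscillation-based loop-length estimate reduces to a one-line calculation:

```python
# Loop length from the oscillation period: the plasma crosses the loop
# twice per period (Reale 2016), so L_loop = v_s * P / 2.
v_sound = 8.1e7   # cm/s, sound speed at the 49.3 MK peak temperature (from the text)
period = 500.0    # s, detected oscillation period
L_loop = v_sound * period / 2.0
print(f"{L_loop:.1e} cm")  # → 2.0e+10 cm
```

Propagating the quoted uncertainty on v$_{\rm s}$ and the $\pm$100 s uncertainty on P through this product reproduces the confidence range given in the text.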
\section{Discussion and conclusions}
\label{thats_all_folks}
We have analyzed 12 bright flares that occurred in Pleiades stars observed simultaneously with XMM-Newton and \textit{Kepler}/K2, with the aim of calculating and comparing the energy released by the flares in the two bands and characterizing the flare evolution and geometry. With a total energy released in the optical band in the 32.9<log(E$_{\rm kep,flare}\,$[erg])<34.7 range (median value of log(E$_{\rm kep,flare}\,$[erg])=34.0), all but one of the flares in our sample (the one in HCG~150, Fig. \ref{HCG150_flare}) meet the criteria for ``superflares'' defined by \citet{ShibayamaMNN2013ApJS} and \citet{NotsuSMN2013ApJ} (i.e., $10^{33}\,$erg$\leq$E$_{\rm optical}\leq 10^{38}\,$erg). This is not surprising given that our sample is limited to bright flares occurring in young and rapidly rotating stars. In fact, \citet{Maehara2012Natur.485.478M}, in their monitoring of a sample of 83000 stars for 120 days, observed that the frequency of superflares is about 20 times greater in stars with rotation periods shorter than 10 days than in slower rotators, and \citet{WichmannFWN2014AA} found a larger occurrence of superflares in stars showing the Li~$\rm \lambda6708\,$\AA$\,$absorption line in their spectra, which is typical of young stars. Moreover, \citet{Honda2015PASJ.67.85H} measured Li abundances in 34 stars hosting superflares detected with \textit{Kepler}, showing that half of them are younger than the age of the Hyades cluster (6.25$\times$10$^8\,$yr, \citealp{Perryman1998AA.331.81P}) and that most of them are likely younger than the Sun based on their projected rotational velocity. Extending the comparison to a wider range of stellar properties, in their study of the optical flares detected in all available \textit{Kepler} light curves, \citet{Davenport2016ApJ} reported 851168 flares occurring in 4041 stars, with a median flare energy of log(E$_{\rm kep,flare}$[erg])=34.6, which is slightly larger than the median energy released in the optical band by the flares in our sample. 
This difference is likely due to the different distributions of spectral types of the stars included in \citet{Davenport2016ApJ}, which are mainly G and K dwarfs, and in our paper, where nine of the twelve stars are M dwarfs. \par
In X-rays, the flares in our sample can be compared with the 130 flares analyzed by \citet{Pye2015AA.581A.28P}, occurring in 70 stars included in the XMM-Newton serendipitous source catalog. These stars are mainly within 1$\,$kpc of the Sun, with a few being well-known variable stars and binary systems. In these flares, the peak luminosity in the broad band ranges from $\sim$10$^{29}\,$erg/s to $\sim$10$^{32}\,$erg/s, and the X-ray energy output ranges from $\sim$10$^{32}\,$erg to $\sim$10$^{35}\,$erg. This catalog thus includes X-ray flares brighter than those in our sample, in which the peak luminosity in the broad band is between 3.2$\times$10$^{29}\,$erg/s and 7.9$\times$10$^{30}\,$erg/s and the total energy released in the X-ray broad band is between 7.8$\times$10$^{32}\,$erg and 9.8$\times$10$^{33}\,$erg.\par
Most (10 out of 12) of the flares analyzed in this work released more energy in optical than in X-rays. This is typically observed in solar flares, and is naturally explained by the fact that the optical flare traces the heating of plasma in the chromosphere/photosphere, with part of the released energy irradiated by the evaporating plasma confined in the magnetic loop. The two flares releasing more energy in X-rays than in optical (HHJ~336, where E$_{\rm kep,flare}$/E$_{\rm xray,flare} \sim$0.9, and HCG~150, where E$_{\rm kep,flare}$/E$_{\rm xray,flare} \sim$0.7) could be due to more complex events than a single loop or to the uncertainties related to the estimate of the energy of a flare. In the remaining stars, this ratio ranges between 1.2 and 13.8. This is smaller, for instance, than the ratio observed in bright solar flares by \citet{Woods2006JGRA.11110S14W}, which, converting the GOES band (0.1-0.8$\,$nm) into the Chandra/ACIS-I broad band, ranges between 25 and 40. The possibility that a larger fraction of energy is converted into X-ray emission as flares become more energetic is compatible with the relation between E$_{\rm bol,flare}$ and E$_{\rm xray,flare}$ found by \citet{Flaccomio2018AA.620A.55F} in the flares observed in NGC~2264 by \textit{CoRoT} and \textit{Chandra}. \par
Several studies have found a correlation between the energy released by flares and their duration. For instance, \citet{Veronig2002AA.382.1070V} analyzed an extensive set of 49409 soft-X-ray solar flares observed during the period 1976-2000 by GOES, thus covering about three solar cycles. This sample contains mainly C-type flares (66\%), with a median duration of 12$\pm$0.2 minutes. In this sample, flare duration, rise time, and decay time are well correlated with both the emitted flux integrated over the whole flare duration and the peak luminosity. Similar results were obtained by \citet{Toriumi2017ApJ.834.56T}. \citet{Namekata2017ApJ.851.91N} extended the existing correlation between the energy released by white-light flares and their duration to the superflare regime. They found that t$\rm_{WF}\propto$E$\rm_{WF}^{A}$, with t$\rm_{WF}$ being the white-light flare duration, E$\rm_{WF}$ the released energy, and A=0.38 for solar flares and A=0.39 for superflares. The main difference between the two regimes is in the duration of the flares, which is about one order of magnitude shorter for superflares than for solar flares. The relation between flare duration and released energy was also confirmed by \citet{Hawley2014ApJ.797.121H}, who analyzed more than 1000 optical flares occurring in the M4 star GJ~1243 in about 2 months of \textit{Kepler} observations.
\begin{figure}[]
\centering
\includegraphics[width=8cm]{duration_vs_E_opt.ps}
\includegraphics[width=8cm]{duration_vs_E_xray.ps}
\caption{Flare duration vs. total released energy in optical (upper panel) and X-rays (bottom panel). The different symbols mark M stars, KFG stars, and binary stars. The results of the two-sided Spearman's rank correlation test considering only M stars are shown. In both cases a weak correlation is observed only for M stars.}
\label{dur_vs_E_fig}
\end{figure}
In Fig. \ref{dur_vs_E_fig} we show the observed correlation between duration and released energy in optical and X-rays for the flares in our sample. In both cases, a weak correlation is found when considering only the M stars. In the Kepler band we find t$\rm_{kep}\propto$E$\rm_{kep,flare}^{0.30}$, which is significantly different from the relation found by \citet{Namekata2017ApJ.851.91N}. In X-rays we find a steeper relation (t$\rm_{xray}\propto$E$\rm_{xray,flare}^{0.52}$) than the one found for solar flares (t$\rm_{xray}\propto$E$\rm_{xray,flare}^{0.2-0.33}$; \citealp{Veronig2002AA.382.1070V}). These differences are likely due to the small statistical sample of superflares analyzed in this paper and the limited range of emitted energy (32.9$\leq$log(E${\rm_{K2,flare}}$)$\leq$34.7 in our sample, against 33$\leq$log(E${\rm_{K2,flare}}$)$\leq$36 in \citealp{Namekata2017ApJ.851.91N}). We nevertheless extend the results obtained by \citet{Hawley2014ApJ.797.121H} for the flares occurring in GJ~1243, whose energy in the Kepler band reached 10$^{33}\,$erg, smaller than that of most of the flares analyzed in this paper.
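The power-law index of such a duration-energy relation is obtained from a linear fit in log-log space. The sketch below recovers an injected index of 0.30 from hypothetical points spanning our energy range; the normalization is arbitrary.

```python
import numpy as np

# Fit t_flare ∝ E^A in log-log space; synthetic points built with A = 0.30
rng = np.random.default_rng(0)
log_e = rng.uniform(32.9, 34.7, 12)   # 12 flares across our energy range
log_t = 0.30 * log_e - 6.5            # hypothetical normalization, no scatter
slope, intercept = np.polyfit(log_e, log_t, 1)
print(round(slope, 2))  # → 0.3
```

With real data, the scatter and the narrow energy baseline are what inflate the uncertainty on the recovered index, which is why the comparison with \citet{Namekata2017ApJ.851.91N} should be taken with caution.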
\begin{figure}[]
\centering
\includegraphics[width=8cm]{cluster_peak_lx.ps}
\includegraphics[width=8cm]{cluster_Ex.ps}
\caption{Range of peak luminosity (top panel) and emitted energy (bottom panel) in the X-ray broad band observed in the Pleiades and in samples of stars with different ages. For each data set, the horizontal lines mark the minimum and maximum values, the upper and lower quartiles and, in red, the median value.}
\label{Lxclusters_figure}
\end{figure}
In Fig. \ref{Lxclusters_figure} we attempt a comparison of the energy budgets of the X-ray flares observed in stars of different ages. We include data in this figure from the flares detected in NGC~2264 stars as part of the ``Coordinated Synoptic Investigation of NGC~2264'' \citep{CodySBM2014AJ} analyzed by \citet{Flaccomio2018AA.620A.55F}, the Chandra Orion Ultradeep Project (COUP) on the Orion Nebula Cluster \citep[ONC,][]{GetmanFBM2008ApJ}, the Taurus Molecular Cloud \citep[TMC,][]{FranciosiniPSM2007AA,StelzerFBM2007AA}, NGC~2547 \citep{JeffriesEPB2006MNRAS}, Cyg~OB2 \citep{Flaccomio2018arXiv181106769F}, $\eta\,$Chamaeleontis \citep[EtaCha,][]{LopezSantiago2010AA.524A.97L}, NGC~869 \citep{Bhatt2014JApA.35.39B}, and the single stars SVS~16 in NGC~1333 \citep{Preibisch2003AA.401.543P}, Trumpler~14~Y442 \citep{Hamaguchi2015ATel.7983.1H}, NGC~6231~\#304 \citep{Sana2007MNRAS.377.945S}, and Upper~Sco~\#100 \citep{Argiroffi2006AA.459.199A}.
For each data set, we mark each available measurement with a small circle, the median value with a red line, and the quartiles, together with the minimum and maximum values, with black lines. These data sets are not directly comparable with each other without taking into account the different stellar mass spectra, distances of the target stars, durations of the X-ray observations, and numbers of detected stars, which is beyond the scope of this paper. For instance, the differences between Cygnus~OB2, Orion, and NGC~2264 are likely due to the different sizes of the stellar samples, and driven mainly by a few stars. In fact, in Cyg~OB2 the number of observed flares is 545 in 501 stars among 7924 detected X-ray sources (\citealp{WrightDGA2014} and \citealp{Flaccomio2018arXiv181106769F}), compared with $\sim$400 flares occurring in about 1000 X-ray sources detected in NGC~2264 and 216 flares observed in 161 (out of a total of 1408) stars in Orion. This results in a larger chance of observing powerful flares in Cygnus~OB2 compared with Orion and NGC~2264. The larger energy released by the flares in the four stars of NGC~869 compared with that observed in stellar samples of similar age can be explained by the fact that this sample is limited to early-type stars (two A-type and two late-B stars). The stars in Taurus share a similar mass spectrum with the stars studied in this paper.
\begin{figure}[]
\centering
\includegraphics[width=8cm]{EM_vs_T.ps}
\caption{Distribution of the peak EM vs. peak temperature for the flares in our sample compared with those observed in Orion by COUP \citep{GetmanFBM2008ApJ} and those included in the catalog of stellar flares compiled by \citet{Gudel2004AARv}.}
\label{EMvsT_figure}
\end{figure}
Keeping this in mind, Fig. \ref{Lxclusters_figure} is compatible with the scenario in which the energy budget of the most energetic flares observed in clusters declines markedly with stellar age. The values shown in Fig. \ref{Lxclusters_figure} can also be compared with those for solar flares, which typically have L$_{\rm xray,peak}\leq$10$^{28}\,$erg/s \citep{Gudel2004AARv} and total energy released in X-rays ranging from 10$^{29}\,$erg to 10$^{32}\,$erg \citep{ShibataYokoyama2002ApJ.577.422S}, or with the flares observed with XMM in Proxima Centauri (L$_{\rm xray,peak}$=4$\times$10$^{28}\,$erg/s, log(E$_{\rm xray,flare}\,$[ergs])$\sim$32.0--32.5, \citealp{RealeGPA2004AA}). If confirmed, such a decline could also be the consequence of the disappearance of the hottest plasma components in stellar coronae. To test this possibility, in Fig. \ref{EMvsT_figure}, we compared the peak emission measure and plasma temperature observed in the flares of our sample with those observed in Orion stars by COUP \citep{GetmanFBM2008ApJ} and those included in the list of stellar flares compiled by \citet{Gudel2004AARv}. The flares analyzed in this paper, despite being the most powerful flares observed in the Pleiades, are characterized by lower emission measure and plasma temperature than the flares observed in Orion, and than many of the flares analyzed by \citet{Gudel2004AARv}, some of which occurred on main sequence stars. \par
Finally, we did not observe loops with lengths as large as the loop lengths observed in several COUP stars with disks (e.g., longer than $10^{12}\,$cm). This is compatible with the requirement of a protoplanetary disk to produce and sustain such very long loops \citep{FavataFRM2005ApJs,Reale2018ApJ.856.51R}. \par
\begin{acknowledgements}
We thank the anonymous referee for his/her thoughtful reading and comments, which helped us to improve our paper. For this study we used data from the NASA satellite \textit{Kepler} and the X-ray observatory XMM/Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. Funding for the \textit{Kepler} mission is provided by the NASA Science Mission directorate. Funding for the K2 mission is provided by the NASA Science Mission directorate. M.G.G., G.M., S.S., C.A., E.F., F.R., and I.P. acknowledge modest financial support from the agreement ASI-INAF n.2017-14-H.0. J.J.D. was supported by NASA contract NAS8-03060 to the {\it Chandra X-ray Center}. J.D.A.G. was supported by Chandra grants AR4-15000X and GO5-16021X.
\end{acknowledgements}
\addcontentsline{toc}{section}{\bf Bibliography}
\bibliographystyle{aa}
\section{Introduction}
Consider the ring of Gaussian integers $\mathbb{Z}[i]$, which is the ring of integers of the imaginary quadratic field $\mathbb{Q}(i)$. Let $\mathfrak{a} = \langle \alpha \rangle$ be an ideal in $\mathbb{Z}[i]$ generated by the Gaussian integer $\alpha \in \mathbb{Z}[i]$. The \textit{norm} of the ideal $\mathfrak{a}$ is defined as $N(\mathfrak{a}) := \alpha \cdot \overline{\alpha}$, where $\alpha \mapsto \overline{\alpha}$ denotes complex conjugation. Let $\theta_{\alpha}$ denote the argument of $\alpha$. Since $\mathbb{Z}[i]$ is a principal ideal domain, and the generators of $\mathfrak{a}$ differ by multiplication by a unit in $\mathbb{Z}[i]^{\times} = \{\pm 1, \pm i\}$, we find that $\theta_{\mathfrak{a}} := \theta_{\alpha}$ is well-defined modulo $\pi/2$. We may thus fix $\theta_{\mathfrak{a}}$ to lie in $[0,\pi/2)$, which corresponds to choosing a generator $\alpha$ that lies within the first quadrant of the complex plane.\\
\\
We are interested in studying the angular distribution of $\{\theta_{\mathfrak{p}}\} \in [0,\pi/2)$, where $\mathfrak{p}\subsetneq \mathbb{Z}[i]$ ranges over the collection of prime ideals with norm $N(\mathfrak{p})\leq X$. To optimize the accuracy of our methods, we employ several standard analytic techniques. In particular, we count the number of angles lying in a short segment of length $1/K$ in $[0,\pi/2)$ using a smooth window function, denoted by $F_{K}(\theta)$, and we count the number of ideals $\mathfrak{a}$ with norm $N(\mathfrak{a})\leq X$ using a smooth function, denoted by $\Phi$. We moreover count prime ideals using the weight provided by the von Mangoldt function, defined as $\Lambda(\mathfrak{a}) = \log N(\mathfrak{p})$ if $\mathfrak{a} = \mathfrak{p}^{r}$ is a power of a prime ideal $\mathfrak{p}$, and $\Lambda(\mathfrak{a}) = 0$ otherwise.
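As a concrete illustration (ours, not part of the original analysis), the angles $\theta_{\mathfrak{p}}$ can be tabulated directly: a rational prime $p \equiv 1 \ (4)$ splits in $\mathbb{Z}[i]$, a representation $p = a^2+b^2$ yields a generator $a+bi$ of a prime ideal above $p$, and its argument reduced modulo $\pi/2$ is the angle (the conjugate ideal gives the complementary angle $\pi/2-\theta$). A minimal Python sketch, with function names of our own choosing:

```python
import math

def theta_p(p):
    """Angle (mod pi/2) of a prime ideal above a split prime p = 1 (mod 4).

    Finds a representation p = a^2 + b^2 and returns arg(a + bi) mod pi/2;
    the conjugate ideal carries the complementary angle pi/2 - theta.
    """
    for a in range(1, math.isqrt(p) + 1):
        b2 = p - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            return math.atan2(b, a) % (math.pi / 2)
    raise ValueError(f"{p} is not a sum of two squares")

def split_prime_angles(X):
    """theta_p for every prime p <= X with p = 1 (mod 4)."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))
    return {p: theta_p(p) for p in range(5, X + 1, 4) if is_prime(p)}
```

For instance, $p = 5 = 1^2 + 2^2$ gives the angle $\arctan 2$ for the ideal $\langle 1+2i \rangle$, and $\pi/2 - \arctan 2 = \arctan(1/2)$ for its conjugate.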
\\
\\
Let $f \in C_{c}^{\infty}(\mathbb{R})$ be an even, real-valued window function. For $K \gg 1$, define
\begin{equation}\label{Fk}
F_{K}(\theta)\ := \ \sum_{k \in \mathbb{Z}}f\left(\frac{K}{\pi/2}\left(\theta-\frac {\pi}{ 2}\cdot k\right)\right),
\end{equation}
which is a $\pi/2$-periodic function whose support in $[0,\pi/2)$ is on a scale of $1/K$. The Fourier expansion of $F_{K}$ is given by
\begin{equation}\label{fourapprox}
F_{K}(\theta) = \sum_{k \in \mathbb{Z}}\widehat{F}_{K}(k)e^{i4k\theta},\hspace{10mm} \widehat{F}_{K}(k)=\frac{1}{K}\widehat{f}\left(\frac{k}{K}\right),
\end{equation}
where the normalization is defined to be $\widehat{f}\left(y \right):=\int_{\mathbb{R}}f(x)e^{-2\pi i y x}dx.$\\
\\
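The identity (\ref{fourapprox}) is Poisson summation for the periodized window, and can be checked numerically. The sketch below (ours) uses the Gaussian $f(x) = e^{-\pi x^2}$, which equals its own Fourier transform under the normalization above; it is not compactly supported, so this is an illustration rather than an admissible choice of $f$.

```python
import cmath
import math

def F_direct(theta, K, f, terms=50):
    """F_K(theta): the (pi/2)-periodization of the dilated window f."""
    return sum(f((K / (math.pi / 2)) * (theta - (math.pi / 2) * k))
               for k in range(-terms, terms + 1))

def F_fourier(theta, K, f_hat, terms=200):
    """Fourier expansion, with coefficients hat F_K(k) = (1/K) f_hat(k/K)."""
    s = sum((1 / K) * f_hat(k / K) * cmath.exp(4j * k * theta)
            for k in range(-terms, terms + 1))
    return s.real

# self-dual under fhat(y) = int f(x) exp(-2 pi i x y) dx
gaussian = lambda x: math.exp(-math.pi * x * x)
```

Both evaluations agree to machine precision, and the direct sum is visibly $\pi/2$-periodic.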
Let $\Phi \in C_{c}^{\infty}(0,\infty)$ and denote the \textit{Mellin transform} of $\Phi$ by
\begin{equation}
\tilde{\Phi}(s)\ := \ \int_{0}^{\infty}\Phi(x)x^{s-1}dx.
\end{equation}
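For a quick numerical illustration of the Mellin transform (with the non-compactly-supported choice $\Phi(x) = e^{-x}$, for which $\tilde{\Phi}(s) = \Gamma(s)$), one may approximate the integral by a midpoint rule:

```python
import math

def mellin(phi, s, upper=60.0, n=60000):
    """Crude midpoint-rule approximation of the Mellin transform of phi,
    truncating the integral at x = upper."""
    h = upper / n
    return sum(phi((i + 0.5) * h) * ((i + 0.5) * h) ** (s - 1) * h
               for i in range(n))
```

With $\Phi(x) = e^{-x}$ this recovers $\Gamma(2) = 1$ and $\Gamma(3) = 2$ to several digits.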
Define
\begin{equation}\label{pisum}
\psi_{K,X}(\theta)
:=\sum_{\mathfrak{a}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) F_K(\theta_{\mathfrak{a}} -\theta),
\end{equation}
where $\mathfrak{a}$ runs over all nonzero ideals in $\mathbb{Z}[i]$. We may then think of $\psi_{K,X}(\theta)$ as a smooth count of the prime power ideals with norm at most $X$ lying in a window of scale $1/K$ about $\theta$. As in Lemma 3.1 of \cite{RudWax}, the mean value of $\psi_{K,X}(\theta)$ is given by
\begin{align} \label{expvalue}
\begin{split}
\langle\psi_{K,X}\rangle &:= \frac {1}{\pi/2}\int_{0}^{\frac \pi 2}\sum_{\mathfrak{a}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) F_K(\theta_{\mathfrak{a}} -\theta)d\theta \sim \frac{X}{K}\cdot C_{\Phi}\cdot c_{f},
\end{split}
\end{align}
where
\begin{equation}\label{mean value constants}
c_{f} := \frac{1}{4\pi^2}\int_{\mathbb{R}}f(x)dx, \hspace{5mm} \textnormal{ and } \hspace{5mm} C_{\Phi}:= 4\pi^{2}\int_{0}^{\infty}\Phi(u)du.
\end{equation}
For fixed $K>0$, Hecke \cite{Hecke} proved that in the limit as $X \rightarrow \infty$,
\begin{equation}\label{shrink}
\psi_{K,X}(\theta) \sim \frac{X}{K}\int_{\mathbb{R}}f(x)dx\int_{0}^{\infty}\Phi(x)dx.
\end{equation}
Alternatively, one may study the behavior of $\psi_{K,X}(\theta)$ upon taking both $K, X \rightarrow \infty$. As demonstrated by Kubilius \cite{Kubilius 1955}, under the assumption of the Grand Riemann Hypothesis (GRH), (\ref{shrink}) continues to hold for $K \ll X^{1/2-o(1)}$.\\
\\
In this paper, we wish to study
\begin{align}
\textnormal{Var}(\psi_{K,X}) &:= \frac{1}{\pi/2}\int_{0}^{\frac \pi 2}\bigg|\psi_{K,X}(\theta) - \langle\psi_{K,X}\rangle\bigg|^{2}d\theta.
\end{align}
Such a quantity was investigated by Rudnick and Waxman \cite{RudWax}, who, assuming GRH, obtained an upper bound for $\textnormal{Var}(\psi_{K,X})$.\footnote{See also \cite{SarnakUri}.} They then used this upper bound to prove that almost all arcs of length $1/K$ contain at least one angle $\theta_{\mathfrak p}$ attached to a prime ideal with $N(\mathfrak p)\leq K(\log K)^{2+o(1)}$.\\
\\
Montgomery \cite{Montgomery} showed that the pair correlation of zeros of $\zeta(s)$ behaves similarly to that of an ensemble of random matrices, linking the zero distribution of the zeta function to eigenvalues of random matrices. The Katz-Sarnak density conjecture \cite{KatzSarn1,KatzSarn2} extended this connection by relating the distribution of zeros across families of $L$-functions to eigenvalues of random matrices. Random matrix theory (RMT) has since served as an important aid in modeling the statistics of various quantities associated to $L$-functions, such as the spacing of zeros \cite{Hedjhal, Odlyzko, Sound}, and moments of $L$-functions \cite{ConreyIII, ConreyIV}. Motivated by a suitable RMT model for the zeros of a family of Hecke $L$-functions, as well as a function field analogue, Rudnick and Waxman conjectured that
\begin{equation}\label{RMT Conjecture}
\textnormal{Var}(\psi_{K,X}) \sim \int_{\mathbb{R}}f(y)^{2}dy \int_{0}^{\infty}\Phi(x)^{2}dx\cdot \textnormal{min} (\log X, 2\log K).
\end{equation}
Inspired by calculations for the characteristic polynomials of matrices averaged over the compact classical groups, Conrey, Farmer, and Zirnbauer \cite{Conrey, ConreyII} further exploited the relationship between $L$-functions and random matrices to conjecture a recipe for calculating the ratio of a product of shifted $L$-functions averaged over a family. The \textit{$L$-functions Ratios Conjecture} has since been employed in a variety of applications, such as computing $n$-level densities across a family of $L$-functions, mollified moments of $L$-functions, and discrete averages over zeros of the Riemann zeta function \cite{ConreySnaith}. The Ratios Conjecture has also been extended to the function field setting \cite{Andrade}. While constructing a model using the Ratios Conjecture may pose additional technical challenges, the reward is often a more accurate model; RMT heuristics can model asymptotic behavior, but the Ratios Conjecture is expected to hold down to lower order terms. This has been demonstrated, for example, in the context of one-level density computations, by Fiorilli, Parks and S\"odergren \cite{FiorParkS}.
\\
\\
This paper studies $\textnormal{Var}(\psi_{K,X})$ down to lower-order terms. Define a new parameter $\lambda$ such that $X^{\lambda} = K$. We prove the following theorem:
\begin{theorem}\label{trivial regime theorem}
Fix $\lambda > 1$. Then
\begin{equation}
\frac {\textnormal{Var}(\psi_{K,X})}{C_{f}X^{1-\lambda}} = C_{\Phi} \log X+C'_{\Phi}+\pi^2\tilde{\Phi}\left(\frac 1 2\right)^2+o\left(1\right),
\end{equation}
where
\begin{align}
\begin{split}\label{C constant 1}
C_{f} &:=\frac{1}{ 4\pi^2}\int_{\mathbb{R}}f(y)^{2}dy \hspace{15mm} C'_{\Phi} := 4\pi^2 \cdot \int_{0}^{\infty}\log x \cdot \Phi(x)^2 ~dx,
\end{split}
\end{align}
and $C_{\Phi}$ is as in $\textnormal{(\ref{mean value constants})}$. Under GRH, the error term can be improved to $O_{\Phi}\left(X^{-\epsilon}\right)$ for some $\epsilon>0$ $($depending on $\lambda)$.
\end{theorem}
The proof of Theorem \ref{trivial regime theorem} is given in Section 2, and is obtained by classical methods. For $\lambda < 1$ the computation is more difficult, and we use the Ratios Conjecture to suggest the following.
\begin{conjecture}\label{conj} Fix $0 < \lambda <1 $. We have
\begin{equation}
\frac {\textnormal{Var}(\psi_{K,X})}{C_{f}X^{1-\lambda}} =
\left\{
\begin{array}{l l}
C_{\Phi} \log X+\Delta_{\Phi}+O_{\Phi}\left(X^{-\epsilon}\right) & \textnormal{ if } \frac 1 2 <\lambda < 1\\
C_{\Phi}\left(2 \lambda \log X\right) -K_{\Phi}+ O_{\Phi}\left(X^{-\epsilon}\right) & \textnormal{ if } \lambda < \frac 1 2,
\end{array} \right.
\end{equation}
where
\begin{equation}
\Delta_{\Phi}:=C'_{\Phi}- \pi^2\tilde{\Phi}\left(\frac 1 2\right)^{2},
\end{equation}
and
\begin{equation}
K_{\Phi}:= C_{\Phi,\zeta}-C_{\Phi,L}-A_{\Phi}'+2\pi^2\tilde{\Phi}\left(\frac 1 2\right)^{2}+C_{\Phi}\left(\log \left(\frac {\pi^2}{4}\right)+2\right),
\end{equation}
for some constant $\epsilon >0$ $($depending on $\lambda)$. Here $C_{\Phi,\zeta}$, $C_{\Phi,L}$, and $A_{\Phi}'$, are as in $(\ref{zeta constant})$, $(\ref{L constant})$, and $(\ref{A constant})$, respectively.
\end{conjecture}
\begin{figure}[h]
\centering
\includegraphics[width=8cm, height=6cm]{abillionprimes}
\caption{ A plot of the ratio $\textnormal{Var}(\psi_{K,X})/(\langle \psi_{K,X} \rangle \log X)$ versus $\lambda = \log K / \log X$, for $X \approx 10^9$ with test functions $\Phi = 1_{(0,1]}$ and $f = 1_{[-\frac{1}{2},\frac{1}{2}]}$. The red line is the prediction given by Conjecture \ref{conj}, while the blue line is the RMT Conjecture of (\ref{RMT Conjecture}).}
\label{abillionprimes}
\end{figure}
Conjecture \ref{conj} provides a refined conjecture for $\textnormal{Var}(\psi_{K,X})$ with a power-saving error term (away from the bifurcation points). It moreover recovers the asymptotic prediction given by (\ref{RMT Conjecture}), which was initially obtained by completely different methods. Numerical data for $\textnormal{Var}(\psi_{K,X})$ is provided in Figure~1.\\
\\ A saturation effect similar to the one above was previously observed by Bui, Keating, and Smith \cite{BuiKeatingSmith}, when computing the variance of sums in short intervals of coefficients of a fixed $L$-function of high degree. There, too, the contribution from lower order terms must be taken into account in order to obtain good agreement with the numerical data.\\
\\
A proof of Theorem \ref{trivial regime theorem} is provided in Section 2 below. When $\lambda >1$ the main contribution to the variance is given by the diagonal terms, which we directly compute by separately considering the weighted contribution of split primes (Lemma \ref{Split Primes}) and inert primes (Lemma \ref{Inert Primes}). When $0 < \lambda < 1$ we may no longer trivially bound the off-diagonal contribution, and so we instead shift focus to the study of a relevant family of Hecke $L$-functions. In Section 3 we compute the ratios recipe for this family of $L$-functions, and in Section 4 we apply several necessary simplifications. Section 5 then relates the output of this recipe to $\textnormal{Var}(\psi_{K,X})$, resulting in Conjecture \ref{full conjecture}, which expresses $\textnormal{Var}(\psi_{K,X})$ in terms of four double contour integrals. Section 6 is dedicated to preliminary technical lemmas, and the double integrals are then computed in Sections 7--9. One finds that the main contributions to $\textnormal{Var}(\psi_{K,X})$ come from second-order poles, while first-order poles contribute a correction term smaller than the main term by a factor of $\log X$.\\
\\
The Ratios Conjecture moreover suggests an enlightening way to group terms. The first integral, which arises from taking the first piece of each approximate functional equation in the ratios recipe, corresponds to the contribution of the diagonal terms, computed in Theorem \ref{trivial regime theorem}. In particular, we note that its contribution to $\textnormal{Var}(\psi_{K,X})$ is independent of the value of $\lambda$ (Lemma \ref{1st total}). In contrast, the contribution emerging from the second and third integrals depends on the value of $\lambda$ (Lemma \ref{second total}). This accounts for the emergence of two bifurcation points in the lower order terms: one at $\lambda = 1/2$ and another at $\lambda = 1$. The fourth integral, corresponding to taking the second piece of each approximate functional equation in the ratios recipe, only makes a significant contribution to $\textnormal{Var}(\psi_{K,X})$ when $\lambda < 1/2$ (Lemma \ref{fourth total}). This accounts for the bifurcation point in the main term, previously detected by the RMT model, as well as for the contribution of a complicated lower-order term, which appears to nicely fit the numerical data. \\
\\
\textbf{Acknowledgments:} This work emerged from a summer project developed and guided by E. Waxman, as part of the 2017 SMALL Undergraduate Research Project at Williams College. We thank Zeev Rudnick for advice, and for suggesting this problem, as well as Bingrong Huang and J. P. Keating for helpful discussions. The summer research was supported by NSF Grant DMS1659037. Chen was moreover supported by Princeton University, and Miller was supported by NSF Grant DMS1561945. Waxman was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no.\ 320755, as well as by the Czech Science Foundation GA\v CR, grant 17-04703Y.
\section{Proof of Theorem \ref{trivial regime theorem}}
To compute $\textnormal{Var}(\psi_{K,X})$ in the regime $\lambda > 1$, it suffices to calculate the second moment, defined as
\begin{align}
\begin{split}
\Omega_{K,X} &:= \frac{1}{\pi/2}\int_{0}^{\frac \pi 2}\bigg|\psi_{K,X}(\theta)\bigg|^{2}d\theta\\
&= \frac{2}{\pi} \sum_{\substack{\mathfrak{a}, \mathfrak{b} \subset \mathbb{Z}[i]\\ }} \Phi \left(\frac{N(\mathfrak{a})}{X}\right)\Phi \left(\frac{N(\mathfrak{b})}{X}\right) \Lambda(\mathfrak{a})\Lambda(\mathfrak{b}) \int_{0}^{\frac \pi 2} F_K(\theta_{\mathfrak{a}} -\theta)F_K(\theta_{\mathfrak{b}} -\theta)d\theta.\\
\end{split}
\end{align}
Indeed, note that as in Lemma 3.1 of \cite{RudWax},
\begin{equation}
\langle\psi_{K,X}\rangle \sim \frac{X}{K}\int_{\mathbb{R}}f(x)dx\int_{0}^{\infty}\Phi(u)du\\
= O\left(\frac{X}{K}\right),
\end{equation}
so that for $\lambda > 1$,
\begin{align} \label{var vs. second moment}
\begin{split}
\textnormal{Var}(\psi_{K,X}) &= \Omega_{K,X} - \langle\psi_{K,X}\rangle^{2}\\
&= \Omega_{K,X} + O\left(X^{1-\epsilon}\right),
\end{split}
\end{align}
where $\epsilon = 2\lambda-1$.\\
\\
Suppose $\mathfrak{a} \neq \mathfrak{b}$, and that at least one of $\theta_{\mathfrak{a}}, \theta_{\mathfrak{b}} \neq 0$. Then by Lemma 2.1 in \cite{RudWax},
\begin{equation}
|\theta_{\mathfrak{a}}-\theta_{\mathfrak{b}}| \geq \frac{1}{X} \gg \frac{1}{K}.
\end{equation}
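This spacing bound can be verified by brute force for small $X$: if $\alpha = a+bi$ and $\beta = c+di$ are first-quadrant generators with distinct angles and norms at most $X$, then $|\sin(\theta_{\alpha}-\theta_{\beta})| = |bc-ad|/(|\alpha||\beta|) \geq 1/X$, since the numerator is a nonzero integer. A short check (our own code):

```python
import math

def min_angle_gap(X):
    """Smallest gap (with wrap-around mod pi/2) between distinct angles
    theta_a of nonzero ideals with N(a) <= X, enumerated via their unique
    first-quadrant generators a + bi with a > 0, b >= 0."""
    angles = sorted({math.atan2(b, a)
                     for a in range(1, math.isqrt(X) + 1)
                     for b in range(0, math.isqrt(X - a * a) + 1)})
    gaps = [u - t for t, u in zip(angles, angles[1:])]
    gaps.append(angles[0] + math.pi / 2 - angles[-1])  # wrap-around gap
    return min(gaps)
```

For $X = 100$ the minimal gap is about $0.013$, comfortably above $1/X = 0.01$.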
Moreover, in order for the integral
\begin{equation}
\int_{0}^{\pi/2} F_K(\theta_{\mathfrak{a}} -\theta)F_K(\theta_{\mathfrak{b}} -\theta)d\theta
\end{equation}
to be nonzero, we require that $|\theta_{\mathfrak{a}} - \theta_{\mathfrak{b}}|< \frac{\pi}{2 K}$. Since $X = o(K)$, such off-diagonal terms contribute nothing, and the contribution thus only comes from terms for which $\theta_{\mathfrak{a}} = \theta_{\mathfrak{b}}$. We therefore may write
\begin{align}
\begin{split}
\Omega_{K,X} &= \frac{2}{\pi}\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}}\neq 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right)^2 \Lambda^2(\mathfrak{a})\int_{0}^{\frac \pi 2} F_K(\theta)^2d\theta\\
&\phantom{=}+\frac{2}{\pi}\bigg |\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a})\bigg |^2 \int_{0}^{\frac \pi 2} F_K(\theta)^2d\theta.
\end{split}
\end{align}
By Parseval's theorem we have that for sufficiently large $K$,
\begin{align}\label{fourier simplification}
\begin{split}
\frac{2}{\pi}\int_{0}^{\frac \pi 2} \vert F_K(\theta)\vert^2d\theta &= \sum_{k \in \mathbb{Z}}\vert \widehat{F}_K(k)\vert^2= \frac{1}{K^{2}}\sum_{k \in \mathbb{Z}}\widehat{f}\left(\frac k K \right)^2 = 4\pi ^2 \frac {C_{f}}{K},
\end{split}
\end{align}
and therefore
\begin{align} \label{second moment}
\begin{split}
\Omega_{K,X} &= 4\pi ^2 \frac {C_{f}}{K}\left(\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}}\neq 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right)^2 \Lambda^2(\mathfrak{a})+\bigg |\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a})\bigg |^2\right).
\end{split}
\end{align}
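The Parseval evaluation (\ref{fourier simplification}) can also be checked numerically. The sketch below (ours) uses the self-dual Gaussian window $f(x) = e^{-\pi x^2}$, for which $4\pi^2 C_f = \int_{\mathbb{R}} f(y)^2 dy = 1/\sqrt{2}$; it is illustrative only, since this $f$ is not compactly supported.

```python
import math

def F_K(theta, K, terms=60):
    """(pi/2)-periodization of the dilated Gaussian window f(x)=exp(-pi x^2)."""
    f = lambda x: math.exp(-math.pi * x * x)
    return sum(f((K / (math.pi / 2)) * (theta - (math.pi / 2) * k))
               for k in range(-terms, terms + 1))

def mean_square(K, n=4000):
    """(2/pi) * integral of F_K^2 over one period, by the midpoint rule."""
    h = (math.pi / 2) / n
    return (2 / math.pi) * sum(F_K((i + 0.5) * h, K) ** 2 for i in range(n)) * h

def parseval_side(K, terms=400):
    """(1/K^2) * sum over k of fhat(k/K)^2, for the self-dual Gaussian."""
    fhat = lambda y: math.exp(-math.pi * y * y)
    return sum(fhat(k / K) ** 2 for k in range(-terms, terms + 1)) / K ** 2
```

For moderate $K$ the two sides agree to machine precision, and both are close to $(1/\sqrt{2})/K$, i.e.\ to $4\pi^2 C_f/K$ for this window.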
Theorem \ref{trivial regime theorem} then follows from (\ref{var vs. second moment}), (\ref{second moment}),
and the following two lemmas.
\begin{lemma}\label{Split Primes}
We have
\begin{equation}\label{error no grh}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}}\neq 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right)^2 \Lambda^2(\mathfrak{a}) = \frac{1}{4\pi^2}\bigg(C_{\Phi}X\cdot \log X+X C'_{\Phi}\bigg)+O_{\Phi}\left(Xe^{-c \cdot \sqrt{\log X}}\right),
\end{equation}
while under GRH, the error term has a power saving, say, to $O_{\Phi}\left(X^{2/3}\right)$.
\end{lemma}
\begin{lemma}\label{Inert Primes}
Unconditionally we have that
\begin{equation}
\bigg|\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Lambda(\mathfrak{a}) \Phi\left(\frac{N(\mathfrak{a})}{X}\right)\bigg|^{2} = \frac{X}{4} \left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2}+O_{\Phi}\left(Xe^{-c \cdot \sqrt{\log X}}\right),
\end{equation}
while, again, under GRH, the error term has a power saving.
\end{lemma}
\textbf{Proof of Lemma \ref{Split Primes}:}
\begin{proof}
Consider the quantity
\begin{align}
\begin{split}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}}\neq 0}}& \Phi \left(\frac{N(\mathfrak{a})}{X}\right)^2 \Lambda^2(\mathfrak{a}) = \sum_{\mathfrak{p}|p\equiv 1(4)}\sum_{n=1}^{\infty} \Phi \left(\frac{N(\mathfrak{p}^{n})}{X}\right)^2 \Lambda^2(\mathfrak{p})+\sum_{m=0}^{\infty}\Phi \left(\frac{2^{2m+1}}{X}\right)^2 (\log 2)^2\\
&=\sum_{p \equiv 1(4)}2\cdot \Phi \left(\frac{p}{X}\right)^2 (\log p)^2+\sum_{\mathfrak{p}|p\equiv 1(4)}\sum_{n=2}^{\infty} \Phi \left(\frac{N(\mathfrak{p}^{n})}{X}\right)^2 \Lambda^2(\mathfrak{p})+O_{\Phi}\left(\log X\right),
\end{split}
\end{align}
where we note that since $\Phi$ is compactly supported, the sum on the far right has at most $O_{\Phi}\left(\log X\right)$ terms. Moreover,
\begin{align}
\sum_{\mathfrak{p}|p\equiv 1(4)}\Phi \left(\frac{N(\mathfrak{p}^{n})}{X}\right)^2 \Lambda^2(\mathfrak{p})&\ll X^{\frac 1 n+\epsilon}
\end{align}
since the sum has at most $O_{\Phi}(X^{1/n})$ terms. It follows that
\begin{equation}
\sum_{\mathfrak{p}|p\equiv 1(4)}\sum_{n=2}^{\infty} \Phi \left(\frac{N(\mathfrak{p}^{n})}{X}\right)^2 \Lambda^2(\mathfrak{p}) \ll \sum_{n=2}^{\log X} X^{\frac{1}{n}+\epsilon}= O_{\Phi}\left(X^{\frac{2}{3}}\right),
\end{equation}
and therefore
\begin{equation}\label{split simplification}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}}\neq 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right)^2 \Lambda^2(\mathfrak{a}) = \sum_{p \equiv 1(4)}2\cdot \Phi \left(\frac{p}{X}\right)^2 (\log p)^2+O_{\Phi}\left(X^{\frac{2}{3}}\right).
\end{equation}
Upon setting
\begin{equation}
f(t) := \log t \cdot \Phi\left(\frac{t}{X}\right)^2
\end{equation}
and
\begin{equation}
a_{p} := \left\{
\begin{array}{l l}
2\cdot \log p & \text{ if } p \equiv 1(4) \\
0 & \text{ otherwise,}
\end{array} \right.
\end{equation}
it follows from Abel's Summation Formula and the Prime Number Theorem that
\begin{align}\label{abel split}
\begin{split}
\sum_{p \equiv 1(4)}2\cdot \Phi \left(\frac{p}{X}\right)^2 (\log p)^2
&= \int_{1}^{\infty}\log t \cdot \Phi\left(\frac{t}{X}\right)^2 dt+O\left(\int_{1}^{\infty}t^{\frac 1 2 +\epsilon}\cdot f'(t)dt\right),
\end{split}
\end{align}
where the error term assumes RH. Applying the change of variables $u :=t/X$, we then obtain that for sufficiently large $X$,
\begin{align}\label{abel main}
\begin{split}
\int_{1}^{\infty}\log t \cdot \Phi\left(\frac{t}{X}\right)^2 dt &= X\cdot \log X \int_{0}^{\infty} \Phi\left(u\right)^2 du+X\cdot \int_{0}^{\infty}\log u\cdot \Phi\left(u\right)^2 du\\
&= \frac{1}{4\pi^2}\bigg(X\cdot \log X C_{\Phi}+X C'_{\Phi}\bigg).
\end{split}
\end{align}
Under RH, the error term is then given as
\begin{align}\label{abel error}
\begin{split}
\int_{1}^{\infty}t^{\frac 1 2+\epsilon}\cdot f'(t)dt & \ll \int_{1}^{\infty}t^{-\frac 1 3} \cdot \Phi\left(\frac{t}{X}\right)^2 dt\ll_{\Phi} X^{\frac 2 3},
\end{split}
\end{align}
while unconditionally it is as in (\ref{error no grh}). Combining the results of (\ref{split simplification}), (\ref{abel split}), (\ref{abel main}), and (\ref{abel error}), we then obtain Lemma \ref{Split Primes}.
\end{proof}
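As a rough numerical sanity check of Lemma \ref{Split Primes} (our own, and only heuristic: we take $\Phi(x) = e^{-x}$, which is not compactly supported, so the lemma does not strictly apply), note that $\int_{0}^{\infty}\Phi(u)^2du = 1/2$ and $\int_{0}^{\infty}\log u \cdot \Phi(u)^2 du = -(\gamma_{0}+\log 2)/2$, where $\gamma_{0}$ is the Euler--Mascheroni constant:

```python
import math

def split_sum(X, cutoff=20):
    """Sum over primes p = 1 (mod 4) of 2*Phi(p/X)^2*(log p)^2, Phi(x)=exp(-x).

    Primes beyond cutoff*X are dropped, since exp(-2p/X) is then negligible.
    """
    N = cutoff * X
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, math.isqrt(N) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, N + 1, i)))
    return sum(2 * math.exp(-2 * p / X) * math.log(p) ** 2
               for p in range(5, N + 1, 4) if sieve[p])

def split_main_term(X):
    """Main term for Phi(x)=exp(-x):
    X log X * int Phi^2 + X * int log(u) Phi(u)^2 du
    = (X log X)/2 - X*(gamma + log 2)/2."""
    gamma = 0.5772156649015329
    return X * math.log(X) / 2 - X * (gamma + math.log(2)) / 2
```

Already at $X = 10^5$ the weighted prime sum and the main term agree to within a few percent.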
\textbf{Proof of Lemma \ref{Inert Primes}:}
\begin{proof}
Next, we consider the quantity
\begin{align}
\begin{split}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) &= 2\sum_{p \equiv 3(4)}\sum_{j=1}^{\infty} \Phi \left(\frac{p^{2j}}{X}\right) \log p+\sum_{m=1}^{\infty} \Phi \left(\frac{2^{2m}}{X}\right) \log 2\\
&= 2\sum_{p \equiv 3(4)}\sum_{j=1}^{\infty} \Phi \left(\frac{p^{2j}}{X}\right) \log p+O_{\Phi}\left(\log X\right).
\end{split}
\end{align}
Since
\begin{equation}
\sum_{p \equiv 3(4)}\Phi \left(\frac{p^{2j}}{X}\right) \log p \ll_{\Phi} X^{\frac{1}{2j}+\epsilon},
\end{equation}
we have that
\begin{equation}
\sum_{p \equiv 3(4)}\sum_{j=2}^{\infty} \Phi \left(\frac{p^{2j}}{X}\right) \log p \ll_{\Phi} (\log X)\cdot X^{\frac{1}{4}+\epsilon}=O_{\Phi}\left( X^{\frac 1 3}\right),
\end{equation}
and therefore
\begin{equation}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) = 2 \sum_{p \equiv 3(4)}\Phi \left(\frac{p^2}{X}\right) \Lambda (p)+O_{\Phi}\left(X^{\frac{1}{3}}\right).
\end{equation}
Moreover, since
\begin{align}
\begin{split}
\sum_{n \equiv 3(4)}\Phi \left(\frac{n^2}{X}\right) \Lambda (n) &=\sum_{p \equiv 3(4)}\Phi \left(\frac{p^2}{X}\right) \Lambda (p)+ \sum_{p \equiv 3(4)}\sum_{\substack{j=3 \\ \textnormal{odd}}}^{\infty}\Phi \left(\frac{p^{2j}}{X}\right) \Lambda (p)\\
&=\sum_{p \equiv 3(4)}\Phi \left(\frac{p^2}{X}\right) \Lambda (p)+ O_{\Phi}\left(X^{\frac{1}{3}}\right),
\end{split}
\end{align}
we obtain
\begin{equation}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) = 2\sum_{n \equiv 3(4)}\Phi \left(\frac{n^2}{X}\right) \Lambda (n)+O_{\Phi}\left(X^{\frac{1}{3}}\right).
\end{equation}
By the Mellin inversion theorem, we find that
\begin{align}
\begin{split}
\sum_{n \equiv 3(4)}\Phi \left(\frac{n^2}{X}\right) \Lambda (n) & =\sum_{n \equiv 3(4)} \Lambda (n) \frac{1}{2\pi i}\int_{\text{Re}(s)= 2}\tilde{\Phi}(s)\left(\frac{n^2}{X}\right)^{-s}ds \\
& = \frac{1}{2\pi i}\int_{\text{Re}(s)= 2}\tilde{\Phi}(s)\sum_{n \equiv 3(4)}\frac{\Lambda (n)}{n^{2s}}X^{s}ds.
\end{split}
\end{align}
Let $\chi_{0}$ denote the principal Dirichlet character modulo $4$, and $\chi_{1}$ the non-principal character modulo $4$, with corresponding $L$-functions given by $L(s, \chi_{0})$ and $L(s, \chi_{1})$, respectively. Upon noting that
\begin{equation}
\chi_{0}(n)-\chi_{1}(n)=
\left\{
\begin{array}{l l}
2 & \text{ if } n = 3 \text{ mod }4 \\
0 & \text{otherwise}, \\
\end{array} \right.
\end{equation}
we obtain
\begin{align}
\begin{split}
\frac{L'}{L}(2s,\chi_1)-\frac{L'}{L}(2s,\chi_0) &=\sum_{n=1}^{\infty}\frac{\Lambda(n)(\chi_{0}(n)-\chi_{1}(n))}{n^{2s}}\\
&=2\sum_{n\equiv 3(4)}^{\infty}\frac{\Lambda(n)}{n^{2s}}.
\end{split}
\end{align}
It follows that
\begin{align}\label{integral2s}
\begin{split}
2\sum_{n \equiv 3(4)}\Phi \left(\frac{n^2}{X}\right) \Lambda (n) &= \frac{1}{2\pi i}\int_{\text{Re}(s)= 2}\left(\frac{L'}{L}(2s,\chi_{1})-\frac{L'}{L}(2s,\chi_{0})\right)\tilde{\Phi}(s)X^s ds\\
&= \frac{1}{4\pi i}\int_{\text{Re}(s)= 4}\left(\frac{L'}{L}(s,\chi_{1})-\frac{L'}{L}(s,\chi_0)\right)\tilde{\Phi}\left(\frac s 2\right)X^\frac{s}{2} ds.
\end{split}
\end{align}
Moreover, we compute
\begin{equation}
\frac{L'}{L}(s,\chi_{0}) = -\frac{1}{s-1}+\gamma_{0}+\log 2 +O(s-1),
\end{equation}
where $\gamma_{0}$ is the Euler-Mascheroni constant, while $L'/L(s,\chi_1)$ is holomorphic about $s=1$. Shifting the contour of integration, we pick up the pole at $s = 1$ and find that
\begin{equation}
\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Lambda(\mathfrak{a}) \Phi\left(\frac{N(\mathfrak{a})}{X}\right) = \frac{1}{2} X^{\frac{1}{2}}\tilde{\Phi}\left(\frac 1 2\right)+O_{\Phi}\left(\sqrt{X}e^{-c \cdot \sqrt{\log X}}\right)
\end{equation}
for some $c > 0$. Squaring this then yields
\begin{equation}
\left|\sum_{\substack{\mathfrak{a} \subset \mathbb{Z}[i] \\ \theta_{\mathfrak{a}} = 0}} \Lambda(\mathfrak{a}) \Phi\left(\frac{N(\mathfrak{a})}{X}\right)\right|^{2} = \frac{X}{4} \left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2}+O_{\Phi}\left(Xe^{-c \cdot \sqrt{\log X}}\right).
\end{equation}
As above, we note that under the assumption of GRH the error term can be improved to have a power-saving.
\end{proof}
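A rough numerical check of Lemma \ref{Inert Primes} (ours, again with the non-compactly-supported choice $\Phi(x) = e^{-x}$ used loosely, for which $\tilde{\Phi}(1/2) = \Gamma(1/2) = \sqrt{\pi}$):

```python
import math

def inert_sum(X, cutoff=25):
    """Sum of Lambda(a)*Phi(N(a)/X) over ideals with theta_a = 0, Phi(x)=exp(-x).

    These ideals are <p>^j for inert primes p = 3 (mod 4), with norm p^(2j)
    and Lambda = 2 log p, plus the even powers of <1+i>, with norms 4^m and
    Lambda = log 2.  Norms beyond cutoff*X are dropped as negligible.
    """
    limit = cutoff * X
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))
    total = 0.0
    for p in range(3, math.isqrt(limit) + 1, 4):  # p = 3 (mod 4)
        if is_prime(p):
            q = p * p
            while q <= limit:
                total += 2 * math.exp(-q / X) * math.log(p)
                q *= p * p
    m = 4
    while m <= limit:
        total += math.exp(-m / X) * math.log(2)
        m *= 4
    return total
```

The lemma predicts that this sum is close to $\frac{1}{2}\sqrt{\pi X}$, which is borne out numerically.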
\section{Implementing the Ratios Conjecture}
Throughout this section, and the remainder of the paper, we will assume GRH.
\subsection{The Recipe}
The $L$-Functions Ratios Conjecture described in \cite{Conrey} provides a procedure for computing an average of $L$-function ratios over a designated family. Let $\mathcal{L}(s,f)$ be an $L$-function, and $\mathcal{F} =\{f\}$ a family of characters with conductors $c(f)$, as defined in Section 3 of \cite{ConreyII}. $\mathcal{L}(s,f)$ has an approximate functional equation given by
\begin{equation}\label{approx equation}
\mathcal{L}(s,f) = \sum_{n<x}\frac{A_{n}(f)}{n^s}+\epsilon(f,s)\sum_{m <y} \frac{\overline{A_{m}(f)}}{m^{1-s}}+\textnormal{remainder}.
\end{equation}
Moreover, one may write
\begin{equation} \label{denominator}
\frac{1}{\mathcal{L}(s,f)} = \sum_{n=1}^{\infty}\frac{\mu_{f}(n)}{n^s},
\end{equation}
where the series converges absolutely for Re$(s)>1$.
To conjecture an asymptotic formula for the average
\begin{equation}
\sum_{f \in \mathcal{F}}\frac{\mathcal{L}(\frac{1}{2} +\alpha,f)\mathcal{L}(\frac{1}{2} + \beta,f)}{\mathcal{L}(\frac{1}{2}+ \gamma,f)\mathcal{L}(\frac{1}{2} + \delta,f)},
\end{equation}
the \textit{Ratios Conjecture} suggests the following recipe.\\
\\
\textbf{Step One}: Start with
\begin{equation}
\frac{\mathcal{L}(\frac{1}{2}+ \alpha,f)\mathcal{L}(\frac{1}{2} + \beta,f)}{\mathcal{L}(\frac{1}{2}+ \gamma,f)\mathcal{L}(\frac{1}{2} + \delta,f)}.
\end{equation}
Replace each $L$-function in the numerator with the two terms from its approximate functional equation, ignore the remainder terms and allow each of the four resulting sums to extend to infinity. Replace each $L$-function in the denominator by its series (\ref{denominator}). Multiply out the resulting expression to obtain 4 terms. Write these terms as
\begin{equation}(\text{product of }\epsilon(f,s) \text{ factors})\sum_{n_{1},\dots, n_{4}}(\text{summand}).
\end{equation}
\textbf{Step Two}: Replace each product of $\epsilon(f,s)$ factors by its expected value when averaged over the family.\\
\\
\textbf{Step Three}: Replace each summand by its expected value when averaged over the family.\\
\\
\textbf{Step Four}: Call the total $M_{f}:=M_{f}(\alpha,\beta,\gamma,\delta)$, and let $F = |\mathcal{F}| $. Then for
\begin{equation}\label{ratios domain}
-\frac{1}{4}<\textnormal{Re}(\alpha),\textnormal{Re}(\beta)< \frac{1}{4}, \hspace{10mm}
\frac{1}{\log F}\ll \textnormal{Re}(\gamma), \textnormal{Re}(\delta)<\frac{1}{4},
\end{equation}
and
\begin{equation}\label{ratios domain 2}
\textnormal{Im}(\alpha),\textnormal{Im}(\beta),\textnormal{Im}(\gamma),\textnormal{Im}(\delta) \ll_{\epsilon}F^{1-\epsilon},
\end{equation}
the conjecture is that
\begin{equation}
\sum_{f \in \mathcal{F}}\frac{\mathcal{L}(\frac{1}{2} + \alpha,f)\mathcal{L}(\frac{1}{2} + \beta,f)}{\mathcal{L}(\frac{1}{2} + \gamma,f)\mathcal{L}(\frac{1}{2} + \delta,f)}g(c(f))=\sum_{f \in \mathcal{F}}M_{f}\left(1+O\left(e^{(-\frac{1}{2}+\epsilon)c(f)}\right)\right)g(c(f))
\end{equation}
for all $\epsilon > 0$, where $g$ is a suitable weight function.
\subsection{Hecke $L$-functions}
We are interested in applying the ratios recipe to the following family of $L$-functions. Consider the Hecke character
\begin{equation}
\Xi_{k}(\mathfrak{a}):= \left(\alpha/ \overline{\alpha}\right)^{2k} = e^{i 4k \theta_{\mathfrak{a}}}, \hspace{5mm} k \in \mathbb{Z},
\end{equation}
which provides a well-defined function on the ideals of $\mathbb{Z}[i]$. To each such character we may associate an $L$-function
\begin{align}
L_{k}(s) &:= \ \sum_{\substack{\mathfrak{a} \subseteq \mathbb{Z}[i]\\ \mathfrak{a} \neq 0}}\frac{\Xi_{k}(\mathfrak{a})}{N(\mathfrak{a})^{s}}=\prod_{\mathfrak{p} \textnormal{ prime}}\left(1-\frac{\Xi_{k}(\mathfrak{p})}{N(\mathfrak{p})^s}\right)^{-1}, \hspace{5mm} \textnormal{Re}(s)>1.
\end{align}
Note that $L_{k}(s) = L_{-k}(s)$, and that
\begin{equation}\label{cmplxconj}
\overline{\frac{L'_{k}}{L_{k}}(s)}\ = \ -\sum_{\mathfrak{a} \neq 0} \overline{\frac{\Lambda(\mathfrak{a}) \Xi_{k}(\mathfrak{a})}{N(\mathfrak{a})^{s}}}\ = \ - \sum_{\mathfrak{a} \neq 0} \frac{\Lambda(\mathfrak{a})\overline{\Xi_{k}(\mathfrak{a})}}{N(\mathfrak{a})^{\overline{s}}}\ = \ \frac{L'_{-k}}{L_{-k}}(\overline{s})= \frac{L'_{k}}{L_{k}}(\overline{s}).
\end{equation}
Moreover, when $k\neq 0$, then $L_{k}(s)$ has an analytic continuation to the entire complex plane, and satisfies the \textit{functional equation}
\begin{equation}
\xi_{k}(s):=\pi^{-(s+|2k|)}\cdot \Gamma(s+|2k|)\cdot L_{k}(s)=\xi_{k}(1-s).
\end{equation}
\subsection{Step One: Approximate Functional Equation}
We seek to apply the above procedure to compute the average
\begin{equation}
\sum_{k \neq 0}\bigg | \widehat{f}\left( \frac{k}{K}\right) \bigg |^{2}\frac{L_{k}(\frac 1 2+\alpha)L_{k}(\frac 1 2+\beta)}{L_{k}(\frac 1 2+\gamma)L_{k}(\frac 1 2+\delta)}
\end{equation}
for specified values of $\alpha, \beta,\gamma, \delta$. For this particular family of $L$-functions, we have
\begin{equation}
\epsilon(f,s) := \frac{L_{k}(s)}{L_{k}(1-s)}=\pi^{2s-1}\cdot \frac{\Gamma(1-s+|2k|)}{\Gamma(s+|2k|)},
\end{equation}
and
\begin{align}
A_{k}(n) &= \sum_{\substack{N(\mathfrak{a}) = n}}\Xi_{k}(\mathfrak{a}),
\end{align}
which is a multiplicative function defined explicitly on prime powers by
\begin{equation}\label{coefficients}
A_{k}(p^{l}) = \left\{
\begin{array}{l l}
\sum_{j=-l/2}^{l/2}e^{2j4ki\theta_{p}} & \text{ if } p \equiv 1(4), l \text{ even}\\
\sum_{j=-(l+1)/2}^{(l-1)/2}e^{(2j+1)4ki\theta_{p}} & \text{ if } p \equiv 1(4), l \text{ odd}\\
0 & \text{ if } p \equiv 3(4), l \textnormal{ odd }\\
1 & \text{ if } p \equiv 3(4), l \textnormal{ even } \\
(-1)^{lk} & \text{ if } p = 2,
\end{array} \right.
\end{equation}
where, for prime $p \equiv 1 (4)$, we define $\theta_{p} := \theta_{\mathfrak{p}}$, where $\mathfrak{p} \subset \mathbb{Z}[i]$ is a prime ideal lying above $p$. Note, moreover, that the above formula is independent of our specific choice of $\mathfrak{p}$.\\
\\
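The prime-power formula (\ref{coefficients}) can be checked against a direct lattice enumeration; the sketch below (helper names ours) compares the two for small prime powers.

```python
import cmath
import math

def A_lattice(k, n):
    """A_k(n) by direct enumeration of Gaussian integers of norm n
    (divided by 4 for the unit multiples of each generator)."""
    s = 0j
    r = math.isqrt(n)
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            if a * a + b * b == n:
                s += cmath.exp(4j * k * math.atan2(b, a))
    return s / 4

def A_closed(k, p, l, theta_p=0.0):
    """The prime-power formula from the text; theta_p is only used when
    p = 1 mod 4."""
    if p % 4 == 1:
        if l % 2 == 0:
            js = range(-l // 2, l // 2 + 1)
            return sum(cmath.exp(1j * 2 * j * 4 * k * theta_p) for j in js)
        js = range(-(l + 1) // 2, (l - 1) // 2 + 1)
        return sum(cmath.exp(1j * (2 * j + 1) * 4 * k * theta_p) for j in js)
    if p % 4 == 3:
        return 0 if l % 2 else 1
    return (-1) ** (l * k)  # p = 2
```

For $p=5$ one may take $\theta_p=\arctan(1/2)$ (the angle of $2+i$), and for $p=13$ the angle of $2+3i$; by the independence of the choice of $\mathfrak{p}$, either prime above $p$ gives the same value.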
As per the recipe, we ignore the remainder term and allow both terms in the approximate functional equation to be summed to infinity. This allows us to write
\begin{equation}
L_{k}(s) \approx \sum_{n}\frac{A_{k}(n)}{n^s}+\pi^{2s-1}\cdot \frac{\Gamma(1-s+|2k|)}{\Gamma(s+|2k|)}\sum_{m} \frac{A_{k}(m)}{m^{1-s}},
\end{equation}
upon noting that $\overline{A_{k}(n)}=A_{k}(n)$ for all $n$.\\
\\
To compute the inverse coefficients, write
\begin{align}
\begin{split}
\frac{1}{L_k(s)}&=\prod_{\mathfrak{p}} \left(1-\frac{e^{4ki \theta_{\mathfrak{p}}}}{N(\mathfrak{p})^s}\right)\\
&=\left(1-\frac{(-1)^k}{2^{s}}\right)\prod_{p \equiv 1(4)} \left(1-\frac{(e^{4ki\theta_p}+e^{-4ki\theta_p})}{p^{s}}+\frac{1} {p^{2s}}\right)\prod_{p\equiv 3(4)} \left(1-\frac 1 {p^{2s}}\right)
\\
&=\left(1-\frac{A_{k}(2)}{2^{s}}\right)\prod_{p \equiv 1(4)} \left(1-\frac{A_{k}(p)}{p^{s}}+\frac{1}{p^{2s}}\right)\prod_{p\equiv 3(4)} \left(1-\frac{A_{k}(p)}{p^{s}}-\frac{A_{k}(p^2)}{p^{2s}}\right).
\end{split}
\end{align}
We then obtain
\begin{equation}\label{inverse series}
\frac{1}{L_k(s)}=\sum\limits_{h} \frac{\mu_{k}(h)}{h^s},
\end{equation}
where
\begin{align}\label{inverse coefficients}
\mu_k(p^h):=
\begin{cases}
1 & h=0\\
-A_{k}(p) & h=1
\\
-1 & h=2,p\equiv 3(4)
\\
1 & h=2, p\equiv 1(4)
\\
0 & \textnormal{otherwise}.
\end{cases}
\end{align}
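Equation (\ref{inverse series}) can be sanity-checked numerically: building $\mu_k$ multiplicatively from the table above, the Dirichlet convolution of $\mu_k$ with $A_k$ should vanish except at $n=1$. A sketch, with our own helper names:

```python
import cmath
import math

def A_lat(k, n):
    """A_k(n) via lattice enumeration (divided by 4 for unit multiples)."""
    s = 0j
    r = math.isqrt(n)
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            if a * a + b * b == n:
                s += cmath.exp(4j * k * math.atan2(b, a))
    return s / 4

def mu_pp(k, p, e):
    """mu_k on the prime power p^e, per (inverse coefficients)."""
    if e == 0:
        return 1
    if e == 1:
        return -A_lat(k, p)
    if e == 2 and p % 4 == 3:
        return -1
    if e == 2 and p % 4 == 1:
        return 1
    return 0

def mu(k, n):
    """mu_k(n), built multiplicatively from its prime-power values."""
    val = 1 + 0j
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            val *= mu_pp(k, p, e)
        p += 1
    if n > 1:
        val *= mu_pp(k, n, 1)
    return val
```

Summing $\sum_{d\mid n}\mu_k(d)A_k(n/d)$ over a range of $n$ then exhibits the expected cancellation.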
Multiplying out the resulting expression gives
\begin{align}
\begin{split}
&\left(\sum\limits_{h=1}^{\infty} \frac{\mu_{k}(h)}{h^{\frac 1 2+\gamma}}\right)\left(\sum\limits_{l=1}^{\infty} \frac{\mu_{k}(l)}{l^{\frac 1 2+\delta}}\right)\times \left(\sum_{n=1}^{\infty}\frac{A_{k}(n)}{n^{\frac 1 2+\alpha}}+\pi^{2\alpha}\cdot \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\sum_{n =1}^{\infty} \frac{A_{k}(n)}{n^{\frac 1 2-\alpha}}\right) \\
&\phantom{=}\times \left(\sum_{m=1}^{\infty}\frac{A_{k}(m)}{m^{\frac 1 2+\beta}}+\pi^{2\beta}\cdot \frac{\Gamma(\frac 1 2-\beta+|2k|)}{\Gamma(\frac 1 2+\beta+|2k|)}\sum_{m =1}^{\infty} \frac{A_{k}(m)}{m^{\frac 1 2-\beta}}\right)
\end{split}
\end{align}
\begin{align}
\begin{split}
&=\prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right)\\
&\phantom{=}+\pi^{2\alpha}\cdot \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2-\alpha)+m(\frac 1 2+\beta)}}\right)\\
&\phantom{=}+\pi^{2\beta}\cdot \frac{\Gamma(\frac 1 2-\beta+|2k|)}{\Gamma(\frac 1 2+\beta+|2k|)}\prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2-\beta)}}\right) \\
&\phantom{=}+\pi^{2(\alpha+\beta)}\cdot \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\frac{\Gamma(\frac 1 2-\beta+|2k|)}{\Gamma(\frac 1 2+\beta+|2k|)}\\
&\phantom{=}\times\prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2-\alpha)+m(\frac 1 2-\beta)}}\right),
\end{split}
\end{align}
where the above follows upon noting that
\begin{align}
\begin{split}
&\left(\sum_{h=1}^{\infty}\frac{\mu_{k}(h)}{h^{\frac 1 2+\gamma}}\right)\left(\sum_{l=1}^{\infty}\frac{\mu_{k}(l)}{l^{(\frac 1 2+\delta)}}\right)\left(\sum_{n=1}^{\infty}\frac{A_{k}(n)}{n^{\frac 1 2+\alpha}}\right)\left(\sum_{m=1}^{\infty}\frac{A_{k}(m)}{m^{\frac 1 2+\beta}}\right)\\
&=\prod_{p}\left(\sum_{h}\frac{\mu_{k}(p^h)}{p^{h(\frac 1 2+\gamma)}}\right)\left(\sum_{l}\frac{\mu_{k}(p^{l})}{p^{l(\frac 1 2+\delta)}}\right)\left(\sum_{n}\frac{A_{k}(p^{n})}{p^{n(\frac 1 2+\alpha)}}\right)\left(\sum_{m}\frac{A_{k}(p^{m})}{p^{m(\frac 1 2+\beta)}}\right)\\
& = \prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right).
\end{split}
\end{align}
The algorithm now dictates that we compute the $\Gamma$-average
\begin{equation}
\bigg \langle \pi^{2(\alpha+\beta)}\cdot \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\frac{\Gamma(\frac 1 2-\beta+|2k|)}{\Gamma(\frac 1 2+\beta+|2k|)}\bigg \rangle_{K},
\end{equation}
as well as an average for the quantity coming from the first piece of each functional equation, namely
\begin{equation}
\bigg \langle \prod_{\substack{ p }}\left(\sum_{m,n,h,l}\frac{\mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right)\bigg \rangle_{K}.
\end{equation}
Here we write $\langle \cdot \rangle_{K}$ to denote the average over all $0< |k| \leq K$. The average of the remaining three pieces will then follow similarly upon applying the appropriate change of variables.
\subsection{Step Two: Averaging the Gamma Factors}
The gamma factor averages over the family of Hecke $L$-functions are provided by the following lemma.
\begin{lemma}
Fix $0 < \alpha, \beta < \frac 1 2$ with $\alpha+\beta < \frac 1 2$. We find that
\begin{align}\label{single gamma}
\bigg\langle \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\bigg\rangle_{K} = \frac{\left(2K\right)^{-2\alpha}}{1-2\alpha} + O\left(K^{-1}\right),
\end{align}
and similarly
\begin{align}\label{double gamma}
\begin{split}
\bigg\langle \frac{\Gamma(\frac 1 2-\alpha+|2k|)}{\Gamma(\frac 1 2+\alpha+|2k|)}\frac{\Gamma(\frac 1 2-\beta+|2k|)}{\Gamma(\frac 1 2+\beta+|2k|)}\bigg\rangle_{K}&=\frac{(2K)^{-2(\alpha+\beta)}}{1-2(\alpha+\beta)}+ O\left(K^{-1}\right).
\end{split}
\end{align}
\end{lemma}
\begin{proof}
A proof of (\ref{single gamma}) is given in \cite{Waxman}, and (\ref{double gamma}) follows similarly upon noting that
\begin{equation}
\frac{\Gamma\left(\frac 1 2+|2k|-\alpha\right)}{\Gamma\left(\frac 1 2+|2k|+\alpha\right)}\frac{\Gamma\left(\frac 1 2+|2k|-\beta\right)}{\Gamma\left(\frac 1 2+|2k|+\beta\right)}= \left(\frac 1 2+|2k|\right)^{-2(\alpha+\beta)}\left(1+O\left(\frac{1}{k}\right)\right).
\end{equation}
Averaging over $0<|k| \leq K$ as in \cite{Waxman} then yields (\ref{double gamma}).
\end{proof}
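A quick numerical illustration of the lemma, using log-gamma to avoid overflow (the choices of $K$, $\alpha$, $\beta$ below are ours):

```python
import math

def ratio(a, m):
    """Gamma(1/2 - a + m) / Gamma(1/2 + a + m), via lgamma for stability."""
    return math.exp(math.lgamma(0.5 - a + m) - math.lgamma(0.5 + a + m))

K, alpha, beta = 4000, 0.2, 0.1

# The ratios depend only on |k|, so the average over 0 < |k| <= K is an
# average over 1 <= k <= K.
avg1 = sum(ratio(alpha, 2 * k) for k in range(1, K + 1)) / K
pred1 = (2 * K) ** (-2 * alpha) / (1 - 2 * alpha)

avg2 = sum(ratio(alpha, 2 * k) * ratio(beta, 2 * k)
           for k in range(1, K + 1)) / K
pred2 = (2 * K) ** (-2 * (alpha + beta)) / (1 - 2 * (alpha + beta))
```

The relative discrepancies are of size $O(K^{2(\alpha+\beta)-1})$, consistent with the $O(K^{-1})$ absolute error in the lemma.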
\subsection{Step Three: Coefficient Average}
In this section, we seek to compute the coefficient average
\begin{equation}\label{coefficient average}
\bigg \langle \mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})\bigg \rangle_{K}.
\end{equation}
To do so, we must consider several cases depending on the value of $p$ mod 4. Define
\begin{align}
\delta_p(m,n,h,l)\ :=\ \lim_{K\to \infty}\bigg \langle \mu_{k}(p^h)\mu_{k}(p^l)A_{k}(p^{n})A_{k}(p^{m})\bigg \rangle_{K}
\end{align}
and write
\begin{align}
\delta_p(m,n,h,l)\ =\ \begin{cases}
\delta_{3(4)}(m,n,h,l) & \text{ when }p\equiv 3(4)\\
\delta_{1(4)}(m,n,h,l) & \text{ when }p\equiv 1(4)\\
\delta_{2}(m,n,h,l) & \text{ when }p=2.
\end{cases}
\end{align}
\subsubsection{$p\equiv 1(4)$:}
By (\ref{inverse coefficients}), we may restrict to the case in which $h,l
\in \{0,1,2\}$. If $h,l \in \{0,2\}$, then $\delta_{1(4)}(m,n,h,l)$ reduces to $\left<A_{k}(p^m)A_{k}(p^n)\right>_K$, where
\begin{equation}
A_{k}(p^{m}) = \left\{
\begin{array}{l l}
\sum_{j=-\frac{m}{2}}^{\frac{m}{2}}e^{2j4ki\theta_p} & m \text{ even}\\
\sum_{j=-\frac{\left(m+1\right)}{2}}^{\frac{\left(m-1\right)}{2}}e^{\left(2j+1\right)4ki\theta_p} & m \textnormal{ odd}.\\
\end{array} \right.
\end{equation}
Expanding the product $A_{k}(p^m)A_{k}(p^n)$ yields a double sum of points on the unit circle, and averaging over $0 < |k| \leq K$ then eliminates, in the limit, any such terms which are not identically equal to 1. Collecting the significant terms, we find that
\begin{align}
\delta_{1(4)}(m,n,h,l) =
\begin{cases}
\min{\left\{m,n\right\}}+1& m+n ~ \mathrm{even}\\
0& m+n ~ \mathrm{odd}.
\end{cases}
\end{align}
If either $h=1$ and $l \in \{0,2\}$, or $l=1$ and $h \in \{0,2\}$, then the product
$\mu_{k}(p^h)\mu_{k}(p^l)=-A_{k}(p)=-(e^{4ki\theta_p}+e^{-4ki\theta_p})$, so that (\ref{coefficient average}) reduces to
\begin{equation}
\left<-\left(e^{4ki\theta_p}+e^{-4ki\theta_p}\right)A_{k}(p^m)A_{k}(p^n)\right>_K.
\end{equation} Expanding out this product again yields a sum of points on the unit circle, and averaging over $0 < |k| \leq K$ eliminates, in the limit, any terms not identically equal to 1. We then obtain
\begin{align}
\delta_{1(4)}(m,n,h,l)=
\begin{cases}
0 & m+n ~ \mathrm{even}\\
-2\left(\min{\left\{m,n\right\}}+1\right) & m+n ~ \mathrm{odd}.
\end{cases}
\end{align}
Finally, suppose $h = l = 1$. In this case, the product $\mu_{k}(p^h)\mu_{k}(p^l)=A_{k}(p)^2=e^{2\cdot4ki\theta_p}+2+e^{-2\cdot4ki\theta_p}$, so that (\ref{coefficient average}) reduces to
\begin{equation}
\left<\left(e^{2\cdot 4ki\theta_p}+2+e^{-2\cdot 4ki\theta_p}\right)A_{k}(p^m)A_{k}(p^n)\right>_K.
\end{equation} Collecting significant contributions as before, we conclude that
\begin{align}
\delta_{1(4)}(m,n,h,l)=
\begin{cases}
4n+2 & m=n\\
4\left(\min{\left\{m,n\right\}}+1\right)& m\neq n, m+n ~ \mathrm{even}\\
0 & m+n ~ \mathrm{odd}.
\end{cases}
\end{align}
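These limits can be observed numerically. In the sketch below (helper names and the value of $K$ are ours) we take $\theta_p=\arctan(1/2)$, the angle for $p=5$, and use that the average over $0<|k|\le K$ equals the average of the real part over $1\le k\le K$, since replacing $k$ by $-k$ conjugates each product.

```python
import cmath
import math

theta = math.atan2(1, 2)  # theta_p for p = 5; any angle with theta/pi
                          # irrational behaves the same in the limit

def A(k, m):
    """A_k(p^m) for a split prime, from the prime-power formula."""
    if m % 2 == 0:
        return sum(cmath.exp(1j * 2 * j * 4 * k * theta)
                   for j in range(-m // 2, m // 2 + 1))
    return sum(cmath.exp(1j * (2 * j + 1) * 4 * k * theta)
               for j in range(-(m + 1) // 2, (m - 1) // 2 + 1))

def expected(m, n, hl):
    """The three displayed limits for delta_{1(4)}."""
    if hl == (0, 0):
        return min(m, n) + 1 if (m + n) % 2 == 0 else 0
    if hl == (1, 0):
        return -2 * (min(m, n) + 1) if (m + n) % 2 == 1 else 0
    if m == n:                        # hl == (1, 1)
        return 4 * n + 2
    return 4 * (min(m, n) + 1) if (m + n) % 2 == 0 else 0

K = 20000
sums = {(m, n, hl): 0j for m in range(4) for n in range(4)
        for hl in ((0, 0), (1, 0), (1, 1))}
for k in range(1, K + 1):
    a = [A(k, m) for m in range(4)]
    w = {(0, 0): 1, (1, 0): -a[1], (1, 1): a[1] * a[1]}
    for (m, n, hl) in sums:
        sums[(m, n, hl)] += w[hl] * a[m] * a[n]
```

The finite-$K$ averages approach the stated integer limits at rate $O(1/K)$.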
\subsubsection{$p\equiv 3(4)$:}
Since $A_{k}(p)=0$ for $p \equiv 3(4)$, we have $\mu_{k}(p)=0$, and so we may restrict to the case in which $h,l \in \{0,2\}$. If $h = l
\in \{0,2\}$, then $\mu_{k}(p^h)\mu_{k}(p^l)=1$, and therefore
\begin{align}
\delta_{3(4)}(m,n,h,l)&=
\begin{cases}
1 & m, n \text{ are even}
\\
0 & \text{otherwise}.
\end{cases}
\end{align}
Likewise, if $(h,l) = (0,2)$ or $(h,l)=(2,0)$ then $\mu_{k}(p^h)\mu_{k}(p^l)=-1$ and
\begin{align}
\delta_{3(4)}(m,n,h,l) &=
\begin{cases}
-1 & m, n \text{ are even}
\\
0 & \text{otherwise}.
\end{cases}
\end{align}
\subsubsection{$p=2$:}
If $p=2$, then we may restrict to the case in which $h,l \in \{0,1\}$. If, moreover, $h=l$, then
\begin{align}
\delta_{2}(m,n,h,l)&=\big \langle(-1)^{(m+n)k}\big \rangle_K =
\begin{cases}
1 & m+n \text{ is even}
\\
0 & \text{otherwise,}
\end{cases}
\end{align}
while if $h \neq l$,
\begin{align}
\delta_{2}(m,n,h,l)&= -\big \langle(-1)^{(m+n+1)k}\big \rangle_K =
\begin{cases}
-1 & m+n \text{ is odd}
\\
0 & \text{otherwise.}
\end{cases}
\end{align}
\subsubsection{Summary}
Summarizing the above results, we then conclude that
\begin{align}\label{coefficient cases}
\begin{split}
&\delta_{1(4)}(m,n,h,l)\ =\
\begin{cases}
\min{\{m,n\}}+1 & m+n\text{ even, } h,l \in \{0,2\}\\
-2(\min{\{m,n\}}+1) & m+n\text{ odd, }(h,l)=(0,1),(1,0),(1,2)\text{ or }(2,1)\\
4n+2 & m=n,\ (h,l) = (1,1)\\
4\left(\min{\left\{m,n\right\}}+1\right) & m\neq n, \hspace{2mm} m+n ~ \mathrm{even,} \hspace{2mm} (h,l) = (1,1)\\
0 & \text{otherwise},
\end{cases}
\\
&\delta_{3(4)}(m,n,h,l)\ =\
\begin{cases}
1 & m,n\text{ even, }(h,l)=(0,0)\text{ or }(2,2)\\
-1 & m,n\text{ even, }(h,l)=(0,2)\text{ or }(2,0)\\
0 &\text{otherwise},
\end{cases}\\
&\hspace{4mm}\delta_2(m,n,h,l)\ =\
\begin{cases}
1 & m+n\text{ even, }(h,l)=(0,0)\text{ or }(1,1)\\
-1 & m+n\text{ odd, }(h,l)=(0,1)\text{ or }(1,0)\\
0 & \text{otherwise}.
\end{cases}
\end{split}
\end{align}
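For $p\equiv 3(4)$ the coefficients are independent of $k$, so the first table reduces to a finite check, while for $p=2$ the average is exact once $K$ is even. A short numerical confirmation (helper names ours):

```python
def A3(m):
    """A_k(p^m) for an inert prime p = 3 mod 4 (independent of k)."""
    return 1 if m % 2 == 0 else 0

def mu3(h):
    """mu_k(p^h) for an inert prime (independent of k)."""
    return {0: 1, 2: -1}.get(h, 0)

def d3(m, n, h, l):
    return mu3(h) * mu3(l) * A3(m) * A3(n)

def d3_expected(m, n, h, l):
    if m % 2 == 0 and n % 2 == 0 and (h, l) in ((0, 0), (2, 2)):
        return 1
    if m % 2 == 0 and n % 2 == 0 and (h, l) in ((0, 2), (2, 0)):
        return -1
    return 0

def d2(m, n, h, l, K=100):
    """Average over 0 < |k| <= K at p = 2, using mu_k(2^h) = (-1)^h (-1)^{hk}
    for h in {0, 1} and A_k(2^m) = (-1)^{mk}; exact for even K."""
    tot = sum((-1) ** (h + l) * (-1) ** ((h + l + m + n) * k)
              for k in range(-K, K + 1) if k != 0)
    return tot / (2 * K)

def d2_expected(m, n, h, l):
    if (h, l) in ((0, 0), (1, 1)):
        return 1 if (m + n) % 2 == 0 else 0
    return -1 if (m + n) % 2 == 1 else 0  # (h, l) = (0, 1) or (1, 0)
```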
\subsection{Step Four: Conjecture}
Upon applying the averages, the Ratios Conjecture recipe claims that for $\alpha, \beta,\gamma,\delta$ satisfying the conditions specified in (\ref{ratios domain}), we have
\begin{align}\label{the conjecture M}
\sum_{k \neq 0}\bigg | \widehat{f}\left( \frac{k}{K}\right) \bigg |^{2}\frac{L_{k}(\frac 1 2+\alpha)L_{k}(\frac 1 2+\beta)}{L_{k}(\frac 1 2+\gamma)L_{k}(\frac 1 2+\delta)}=\sum_{k \neq 0}\bigg | \widehat{f} \left( \frac{k}{K}\right) \bigg |^{2} M_{K}(\alpha,\beta,\gamma,\delta)+O\left(K^{\frac{1}{2}+\epsilon}\right),
\end{align}
where
\begin{align}
\begin{split}
&M_{K}(\alpha,\beta,\gamma,\delta):=\prod_{\substack{ p }}G_{p}(\alpha,\beta,\gamma,\delta)+\frac{\left(\pi/2K\right)^{2\alpha}}{1-2\alpha}\prod_{\substack{ p }}G_{p}(-\alpha,\beta,\gamma,\delta)\\
&\phantom{=}+\frac{\left(\pi/2K\right)^{2\beta}}{1-2\beta} \prod_{\substack{ p }}G_{p}(\alpha,-\beta,\gamma,\delta)+\frac{\left(\pi/2K\right)^{2(\alpha+\beta)}}{1-2(\alpha+\beta)}\prod_{\substack{ p }}G_{p}(-\alpha,-\beta,\gamma,\delta),
\end{split}
\end{align}
and
\begin{equation}
G_{p}(\alpha,\beta,\gamma,\delta):=\sum_{m,n,h,l}\frac{\delta_{p}(m,n,h,l)}{p^{h(\frac 1 2+\gamma)+l(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}.
\end{equation}
\section{Simplifying the Ratios Conjecture Prediction}
In this section we seek a simplified form of $M_{K}(\alpha,\beta,\gamma,\delta)$. First, we again consider several separate cases, depending on the value of $p$ mod 4.
\subsection{Pulling out Main Terms}
Suppose $p \equiv 3(4)$. By (\ref{coefficient cases}), we expand each local factor as
\begin{align}
\begin{split}
G_{p}(\alpha,\beta,\gamma,\delta)&=\sum_{\substack{m,n \\ \text{even}}}\frac{\delta_{3(4)}(m,n,0,0)}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\delta_{3(4)}(m,n,2,2)}{p^{2(\frac 1 2+\gamma)+2(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&\phantom{=}+ \frac{\delta_{3(4)}(m,n,0,2)}{p^{2(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\delta_{3(4)}(m,n,2,0)}{p^{2(\frac 1 2+\gamma)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&=\sum_{\substack{m,n \\ \text{even}}}\frac{1}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{1}{p^{(1+2\gamma)+(1+2\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&\phantom{=}- \frac{1}{p^{(1+2\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}-\frac{1}{p^{(1+2\gamma)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&=\left(1+\frac{1}{p^{2+2\gamma+2\delta}}-\frac{1}{p^{1+2\delta}}-\frac{1}{p^{1+2\gamma}}\right)\sum_{m,n}\frac{1}{p^{n(1+2\alpha)+m(1+2\beta)}}.
\end{split}
\end{align}
Assuming small positive fixed values of $\text{Re}(\alpha),\text{Re}(\beta),\text{Re}(\gamma),\text{Re}(\delta)$, we factor out all terms which, for fixed $p$, decay substantially more slowly than $1/p^{2}$. In other words, we write
\begin{align}
\begin{split}
G_{p}(\alpha,\beta,\gamma,\delta)&=\left(1-\frac{1}{p^{1+2\delta}}-\frac{1}{p^{1+2\gamma}}+O\left(\frac{1}{p^{2}}\right)\right)\left(1+\frac{1}{p^{1+2\alpha}}+\frac{1}{p^{1+2\beta}}+O\left(\frac{1}{p^{2}}\right)\right)\\
&=1-\frac{1}{p^{1+2\delta}}-\frac{1}{p^{1+2\gamma}}+\frac{1}{p^{1+2\alpha}}+\frac{1}{p^{1+2\beta}}+O\left(\frac{1}{p^{2}}\right)\\
&=Y_{p}(\alpha,\beta,\gamma,\delta)\times A_{p}(\alpha,\beta,\gamma,\delta),
\end{split}
\end{align}
where
\begin{equation}
Y_{p}(\alpha,\beta,\gamma,\delta):=1-\frac{1}{p^{1+2\delta}}-\frac{1}{p^{1+2\gamma}}+\frac{1}{p^{1+2\alpha}}+\frac{1}{p^{1+2\beta}},
\end{equation}
and $A_{p}(\alpha,\beta,\gamma,\delta)$ is a local function that converges like $1/p^{2}$ for sufficiently small $\text{Re}(\alpha),\text{Re}(\beta),\text{Re}(\gamma),$ and $\text{Re}(\delta)$.\\
\\
Next, suppose $p \equiv 1(4)$. Factoring out terms with slow convergence as above, we expand $G_{p}(\alpha,\beta,\gamma,\delta)$ as
\begin{align}
\begin{split}
&G_{p}(\alpha,\beta,\gamma,\delta)=\sum_{\substack{m+n \\ \text{even}}}\bigg ( \frac{\min{\{m,n\}}+1}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\min{\{m,n\}}+1}{p^{(1+2\gamma)+(1+2\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&\phantom{=}+ \frac{\min{\{m,n\}}+1}{p^{(1+2\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\min{\{m,n\}}+1}{p^{(1+2\gamma)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\bigg )\\
&\phantom{=}+
\sum_{\substack{m+n \\ \text{odd}}}\bigg ( \frac{-2(\min{\{m,n\}}+1)}{p^{(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{-2(\min{\{m,n\}}+1)}{p^{(\frac 1 2+\gamma)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&\phantom{=}+ \frac{-2(\min{\{m,n\}}+1)}{p^{(\frac 1 2+\gamma)+2(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{-2(\min{\{m,n\}}+1)}{p^{2(\frac 1 2+\gamma)+(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\bigg )\\
&\phantom{=}+
\sum_{n}\bigg ( \frac{4n+2}{p^{(1+\gamma+\delta)+n(1+\alpha+\beta)}}\bigg )+\sum_{\substack{m+n \\ \text{even}\\ m\neq n}}\frac{4\left(\min{\left\{m,n\right\}}+1\right)}{p^{(1+\gamma+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\\
&=\left ( \sum_{\substack{m+n \\ \text{ even}}} \frac{\min\{m,n\} +1}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} \right )\left( 1+ \frac{1}{p^{1+2\gamma}}+ \frac{1}{p^{1+2\delta}} + \frac{1}{p^{2+2\gamma + 2\delta}} \right )\\
&\phantom{=}+ \left ( \sum_{\substack{m+n \\ \text{ odd}}} \frac{-2(\min\{m,n\}+1)}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} \right )\times\left ( \frac{1}{p^{\frac 1 2+\gamma}} +\frac{1}{p^{\frac 1 2+\delta}} + \frac{1}{p^{\frac 3 2+2\gamma +\delta}} + \frac{1}{p^{\frac 3 2+\gamma+2\delta}} \right )\\
&\phantom{=}+\left ( \sum_{\substack{m+n \\ \text{ even} \\ m \neq n}} \frac{4 \min\{m,n\}+4}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} + \sum_{n} \frac{4 n+2}{p^{n(1+\alpha+\beta)}} \right ) \left ( \frac{1}{p^{1+\gamma+\delta}} \right ).
\end{split}
\end{align}
Since
\begin{equation}
\sum_{\substack{m+n\\ \text{ even}}} \frac{\min\{m,n\} +1}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} = \left(1+\frac{1}{p^{1+2\alpha}}+\frac{1}{p^{1+2\beta}}+\frac{2}{p^{1+\alpha+\beta}}+O\left(\frac{1}{p^2}\right)\right), \\
\end{equation}
\begin{equation}
\sum_{\substack{m+n\\ \text{ odd}}} \frac{-2(\min\{m,n\}+1)}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} = \left (\frac{-2}{p^{\frac 1 2+\alpha}} +\frac{-2}{p^{\frac 1 2+\beta}}+O\left ( \frac{1}{p^{\frac 3 2}} \right )
\right ), \\
\end{equation}
and
\begin{align}
\left ( \sum_{\substack{m+n \\ \text{ even} \\ m \neq n}} \frac{4 \min\{m,n\}+4}{p^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}} + \sum_{n} \frac{4n+2}{p^{n(1+\alpha+\beta)}} \right ) \left ( \frac{1}{p^{1+\gamma+\delta}} \right ) &= \frac{2}{p^{1+\gamma+\delta}} + O \left ( \frac{1}{p^2} \right ),
\end{align}
we conclude that for $p \equiv 1(4)$,
\begin{equation}
G_{p}(\alpha,\beta,\gamma,\delta) = Y_{p}(\alpha,\beta,\gamma,\delta)\times A_{p}(\alpha,\beta,\gamma,\delta),
\end{equation}
where
\begin{align}
\begin{split}
Y_{p}(\alpha,\beta,\gamma,\delta)&:=1+\frac{1}{p^{1+2\alpha}}+\frac{1}{p^{1+2\beta}}+\frac{1}{p^{1+2\gamma}}+ \frac{1}{p^{1+2\delta}} +\frac{2}{p^{1+\alpha+\beta}} \\
&\phantom{:=}+ \frac{-2}{p^{1+\alpha +\gamma}}+\frac{-2}{p^{1+\alpha +\delta}} +\frac{-2}{p^{1+\beta +\gamma}}+\frac{-2}{p^{1+\beta +\delta}} +\frac{2}{p^{1+\gamma+\delta}},
\end{split}
\end{align}
and $A_{p}(\alpha,\beta,\gamma,\delta)$ is a function that converges sufficiently rapidly.\\
\\
Finally, note that
\begin{align}
\begin{split}
G_{2}(\alpha,\beta,\gamma,\delta)&=\sum_{\substack{m+n \\ \text{even}}}\left(\frac{\delta_{2}(m,n,0,0)}{2^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\delta_{2}(m,n,1,1)}{2^{(\frac 1 2+\gamma)+(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right)\\
&\phantom{=}+\sum_{\substack{m+n \\ \text{odd}}}\left(\frac{\delta_{2}(m,n,1,0)}{2^{(\frac 1 2+\gamma)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}+\frac{\delta_{2}(m,n,0,1)}{2^{(\frac 1 2+\delta)+n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right)\\
&=\left(1+\frac 1 {2^{1+\gamma+\delta}}\right)\sum_{\substack{m+n \\ \text{even}}}\left(\frac{1}{2^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right)\\
&\phantom{=}-\left(\frac 1 {2^{\frac 1 2+\gamma}}+\frac 1 {2^{\frac 1 2+\delta}}\right)\sum_{\substack{m+n \\ \text{odd}}}\left(\frac{1}{2^{n(\frac 1 2+\alpha)+m(\frac 1 2+\beta)}}\right).
\end{split}
\end{align}
It follows that
\begin{equation}
G_{2}(\alpha,\beta,\gamma,\delta) = Y_{2}(\alpha,\beta,\gamma,\delta)\times A_{2}(\alpha,\beta,\gamma,\delta),
\end{equation}
where
\begin{align}
\begin{split}
Y_{2}(\alpha,\beta,\gamma,\delta)&:=\left(1+\frac 1 {2^{1+\gamma+\delta}}\right)\left(1+\frac{1}{2^{1+2\alpha}}+\frac{1}{2^{1+2\beta}}+\frac{1}{2^{1+\alpha+\beta}}\right)\\
&\phantom{:=}-\left(\frac 1 {2^{\frac 1 2+\gamma}}+\frac 1 {2^{\frac 1 2+\delta}}\right)\left(\frac{1}{2^{\frac 1 2+\alpha}}+\frac{1}{2^{\frac 1 2+\beta}}\right)\\
&=\bigg(1+\frac 1 {2^{1+\gamma+\delta}}+\frac{1}{2^{1+\alpha+\beta}}+\frac{1}{2^{1+2\alpha}}+\frac{1}{2^{1+2\beta}}\\
&\phantom{:=}-\frac 1 {2^{1+\alpha+\gamma}}-\frac 1 {2^{1+\alpha+\delta}}-\frac 1 {2^{1+\beta+\gamma}}-\frac 1 {2^{1+\beta+\delta}}\\
&\phantom{:=}+\frac{1}{2^{2+2\alpha+\gamma+\delta}}+\frac{1}{2^{2+2\beta+\gamma+\delta}}+\frac{1}{2^{2+\alpha+\beta+\gamma+\delta}}\bigg),
\end{split}
\end{align}
and $A_{2}(\alpha,\beta,\gamma,\delta):= G_{2}(\alpha,\beta,\gamma,\delta)/Y_{2}(\alpha,\beta,\gamma,\delta)$.
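Expanding the product and subtracting the cross term yields the terms of size $2^{-1-(\cdot)}$ together with the three second-order terms $2^{-(2+2\alpha+\gamma+\delta)}$, $2^{-(2+2\beta+\gamma+\delta)}$ and $2^{-(2+\alpha+\beta+\gamma+\delta)}$. The following sketch checks this expansion at random complex arguments (helper names ours):

```python
import cmath
import random

def two(e):
    """2^(-e) for a complex exponent e."""
    return cmath.exp(-e * cmath.log(2))

def Y2_product(a, b, g, d):
    """Y_2 in its product-minus-cross form."""
    return ((1 + two(1 + g + d))
            * (1 + two(1 + 2 * a) + two(1 + 2 * b) + two(1 + a + b))
            - (two(0.5 + g) + two(0.5 + d)) * (two(0.5 + a) + two(0.5 + b)))

def Y2_expanded(a, b, g, d):
    """Nine first-order terms plus three second-order terms."""
    return (1 + two(1 + g + d) + two(1 + a + b)
            + two(1 + 2 * a) + two(1 + 2 * b)
            - two(1 + a + g) - two(1 + a + d)
            - two(1 + b + g) - two(1 + b + d)
            + two(2 + 2 * a + g + d) + two(2 + 2 * b + g + d)
            + two(2 + a + b + g + d))

random.seed(1)
max_err = max(
    abs(Y2_product(*v) - Y2_expanded(*v))
    for v in ([complex(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(4)] for _ in range(20)))
```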
\subsection{Expanding the Euler Product}
Recall that for Re$(x)> 0$,
\begin{align}
\zeta(1+x) &= \prod_p \left(1-\frac{1}{p^{1+x}}\right)^{-1} = \prod_p \left(1 + \frac{1}{p^{1+x}} + O \left (\frac{1}{p^2} \right )\right), \label{zeta+}
\end{align}
and
\begin{align}
\begin{split}
L(1+x) &= \prod_{p \equiv 1(4)} \left(1- \frac{1}{p^{1+x}} \right)^{-1}\prod_{p \equiv 3(4)} \left(1+ \frac{1}{p^{1+x}}\right)^{-1}\\
&=\prod_{p \equiv 1(4)} \left(1+ \frac{1}{p^{1+x}}+ O \left ( \frac{1}{p^2} \right ) \right)\prod_{p \equiv 3(4)} \left(1-\frac{1}{p^{1+x}} + O \left ( \frac{1}{p^2} \right )\right),
\end{split}
\end{align}
where $L(s):=L(s,\chi_{1})$. Incorporating the above simplifications, and again collecting only terms which decay substantially more slowly than $p^{-3/2}$, we arrive at the following conjecture.
\begin{conjecture} With constraints on $\alpha, \beta, \gamma, \delta$ as described in (\ref{ratios domain}) and (\ref{ratios domain 2}), we have
\begin{align}
\begin{split}
&\sum_{k\neq 0}\bigg | \widehat{f}\left( \frac{k}{K}\right) \bigg |^{2}\frac{L_{k}(\frac 1 2+\alpha)L_{k}(\frac 1 2+\beta)}{L_{k}(\frac 1 2+\gamma)L_{k}(\frac 1 2+\delta)}=\sum_{k \neq 0}\bigg | \widehat{f} \left( \frac{k}{K}\right) \bigg |^{2}\bigg (G(\alpha,\beta,\gamma,\delta)\\
&\phantom{=}+\frac{1}{1-2\alpha}\left(\frac{\pi}{2K}\right)^{2\alpha}G(-\alpha,\beta,\gamma,\delta)+\frac{1}{1-2\beta}\left(\frac{\pi}{2K}\right)^{2\beta}G(\alpha,-\beta,\gamma,\delta)\\
&\phantom{=}+\left(\frac{1}{1-2(\alpha+\beta)}\right)\left(\frac{\pi}{2K}\right)^{2(\alpha+\beta)}G(-\alpha,-\beta,\gamma,\delta)\bigg )+O\left(K^{\frac{1}{2}+\epsilon}\right),
\end{split}
\end{align}
where
\begin{align}\label{G factor}
G(\alpha,\beta,\gamma,\delta)&:=\prod_{p}G_{p}(\alpha,\beta,\gamma,\delta)\\
&\hspace{1mm}= Y(\alpha,\beta,\gamma,\delta)\times A(\alpha,\beta,\gamma,\delta),
\end{align}
\begin{align}
\begin{split}
Y(\alpha,\beta,\gamma,\delta) &:=
\frac{\zeta(1+2\alpha)\zeta(1+2\beta)\zeta(1+\gamma+\delta)\zeta(1+\alpha+\beta)}{\zeta(1+\alpha+\gamma)\zeta(1+\beta+\gamma)\zeta(1+\beta+\delta)\zeta(1+\alpha+\delta)}\\
&\hspace{5mm}\times \frac{L(1+2\gamma)L(1+2\delta)L(1+\gamma+\delta)L(1+\alpha+\beta)}{L(1+\alpha+\gamma)L(1+\beta+\gamma)L(1+\beta+\delta)L(1+\alpha+\delta)},
\end{split}
\end{align}
and $ A(\alpha,\beta,\gamma,\delta):= G(\alpha,\beta,\gamma,\delta)/Y(\alpha,\beta,\gamma,\delta) $ is an Euler product that converges for sufficiently small fixed values of $\textnormal{Re}(\alpha),\textnormal{Re}(\beta),\textnormal{Re}(\gamma),\textnormal{Re}(\delta)$.
\end{conjecture}
In further calculations, it will be helpful to define
\begin{equation}
\mathcal{Y}(\alpha,\beta,\gamma,\delta):=\frac{\zeta(1+2\alpha)\zeta(1+2\beta)\zeta(1+\gamma+\delta)\zeta(1+\alpha+\beta)}{\zeta(1+\alpha+\gamma)\zeta(1+\beta+\gamma)\zeta(1+\beta+\delta)\zeta(1+\alpha+\delta)},
\end{equation}
as well as
\begin{equation}
\mathcal{A}(\alpha,\beta,\gamma,\delta):=\frac{G(\alpha,\beta,\gamma,\delta)}{\mathcal{Y}(\alpha,\beta,\gamma,\delta)}.
\end{equation}
It will also be necessary to make use of the following lemma.
\begin{lemma}\label{A is 1}
We have that
\begin{align}
A(\alpha,\beta,\alpha,\beta)=\mathcal{A}(\alpha,\beta,\alpha,\beta)=1.
\end{align}
\end{lemma}
\begin{proof}
Since $Y(\alpha,\beta,\alpha,\beta)=\mathcal{Y}(\alpha,\beta,\alpha,\beta)=1$, it suffices to show that
$G(\alpha,\beta,\alpha,\beta)=1$. Note that $G_{2}(\alpha,\beta,\alpha,\beta )=1$, and upon writing
\begin{equation}
\sum_{m, n} \frac{1}{p^{n(1+2\alpha)+m(1+2\beta)}}=\left(1-\frac{1}{p^{1+2\beta}}\right)^{-1}
\left(1-\frac{1}{p^{1+2\alpha}}\right)^{-1},
\end{equation}
we similarly obtain that $G_{p}(\alpha,\beta,\alpha,\beta )=1$ whenever $p \equiv 3(4)$. Moreover, we rewrite
\begin{align}
\sum_{\substack{m+n\\ \text{even}}} \frac{\text{min}(m,n) +1}{p^{m(\frac{1}{2}+\alpha)+n(\frac{1}{2}+\beta)}}= \frac{p^{2(1+\alpha +\beta)}(1+p^{1+\alpha+\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)},
\end{align}
and
\begin{equation}
\sum_{\substack{m+n \\ \text{odd}}} \frac{-2(\text{min}(m,n) +1)}{p^{m(\frac{1}{2}+\alpha)+n(\frac{1}{2}+\beta)}}=\frac{-2p^{\frac{5}{2}+2\alpha+2\beta}(p^{\alpha}+p^{\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)},
\end{equation}
as well as
\begin{align}
\begin{split}
\sum_{\substack{m \neq n \\ m+n \text{ even}}} \frac{4\cdot \text{min}(m, n)+4}{p^{m(\frac{1}{2}+\alpha)+n(\frac{1}{2}+\beta)}} &= \frac{4p^{2(1+\alpha+\beta)}(1+p^{1+\alpha+\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)}\\
&\phantom{=}-\frac{4p^{2(1+\alpha+\beta)}}{(p^{1+\alpha+\beta}-1)^{2}},
\end{split}
\end{align}
and
\begin{equation}
\sum_{n=0}^\infty \frac{4n+2}{p^{n(1+\alpha+\beta)}}=\frac{2p^{1+\alpha+\beta}(1+p^{1+\alpha+\beta})}{(p^{1+\alpha+\beta}-1)^2},
\end{equation}
so that for $p \equiv 1(4)$,
\begin{align}
\begin{split}
&G_{p}(\alpha,\beta,\gamma,\delta)=\left ( \frac{p^{2(1+\alpha +\beta)}(1+p^{1+\alpha+\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)} \right )\\
& \times\bigg(1+ \frac{1}{p^{1+2\gamma}}+ \frac{1}{p^{1+2\delta}} + \frac{1}{p^{2+2\gamma + 2\delta}} \bigg)- \left (\frac{2p^{\frac{5}{2}+2\alpha+2\beta}(p^{\alpha}+p^{\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)} \right )\\
&\times \left ( \frac{1}{p^{\frac 1 2+\gamma}} +\frac{1}{p^{\frac 1 2+\delta}} + \frac{1}{p^{\frac 3 2+2\gamma +\delta}} + \frac{1}{p^{\frac 3 2+\gamma+2\delta}} \right )+\bigg(\frac{2p^{1+\alpha+\beta}(1+p^{1+\alpha+\beta})}{(p^{1+\alpha+\beta}-1)^2}\\
&+ \frac{4p^{2+2\alpha+2\beta}(1+p^{1+\alpha+\beta})}{(p^{1+2\alpha}-1)(p^{1+\alpha+\beta}-1)(p^{1+2\beta}-1)} -\frac{4p^{2+2\alpha+2\beta}}{(p^{1+\alpha+\beta}-1)^{2}}\bigg) \left ( \frac{1}{p^{1+\gamma+\delta}} \right )\nonumber.
\end{split}
\end{align}
Upon setting $\alpha = \gamma$ and $\beta = \delta$, we then have $G_{p}(\alpha,\beta,\alpha,\beta)=1$. The lemma then follows from (\ref{G factor}).
\end{proof}
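The closed forms used in the proof, and the resulting cancellation $G_{p}(\alpha,\beta,\alpha,\beta)=1$, can be sanity-checked numerically by truncating the sums; a sketch at sample values (all numerical choices ours):

```python
p, a, b = 5.0, 0.3, 0.45   # a split prime p = 1 mod 4 and sample shifts
N = 120                    # truncation length; the terms decay geometrically

x, y = p ** -(0.5 + a), p ** -(0.5 + b)
u = p ** (1 + a + b)
D = (p ** (1 + 2 * a) - 1) * (u - 1) * (p ** (1 + 2 * b) - 1)

# Truncations of the four sums in the proof (x^m carries alpha, as above).
S_even = sum((min(m, n) + 1) * x ** m * y ** n
             for m in range(N) for n in range(N) if (m + n) % 2 == 0)
S_odd = sum(-2 * (min(m, n) + 1) * x ** m * y ** n
            for m in range(N) for n in range(N) if (m + n) % 2 == 1)
S_neq = sum((4 * min(m, n) + 4) * x ** m * y ** n
            for m in range(N) for n in range(N)
            if m != n and (m + n) % 2 == 0)
S_diag = sum((4 * n + 2) * u ** -n for n in range(N))

# The corresponding closed forms.
C_even = p ** (2 * (1 + a + b)) * (1 + u) / D
C_odd = -2 * p ** (2.5 + 2 * a + 2 * b) * (p ** a + p ** b) / D
C_neq = 4 * C_even - 4 * p ** (2 * (1 + a + b)) / (u - 1) ** 2
C_diag = 2 * u * (1 + u) / (u - 1) ** 2

def G_split(g, d):
    """The displayed local factor at p = 1 mod 4, assembled from the
    closed forms; it should equal 1 at (g, d) = (a, b)."""
    return (C_even * (1 + p ** -(1 + 2 * g) + p ** -(1 + 2 * d)
                      + p ** -(2 + 2 * g + 2 * d))
            + C_odd * (p ** -(0.5 + g) + p ** -(0.5 + d)
                       + p ** -(1.5 + 2 * g + d) + p ** -(1.5 + g + 2 * d))
            + (C_diag + C_neq) * p ** -(1 + g + d))

def G_inert(q, g, d):
    """The p = 3 mod 4 local factor in its factored form, at shifts (a, b)."""
    num = 1 + q ** -(2 + 2 * g + 2 * d) - q ** -(1 + 2 * d) - q ** -(1 + 2 * g)
    return num / ((1 - q ** -(1 + 2 * a)) * (1 - q ** -(1 + 2 * b)))
```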
\begin{lemma}\label{A integral}
Define $A_{\beta}(\alpha):=A(-\alpha,-\beta,\alpha,\beta).$ Then
\begin{align}
\frac{d}{d \alpha}A_{\beta}(\alpha)\bigg \vert_{\alpha = -\beta} &= -2\sum_{p\equiv 3(4)}\frac{\left(p^{2+8\beta}+p^2-2 p^{4\beta}\right) \log p}{p^{2+8\beta}+p^2-p^{4\beta}-p^{4+4\beta}}.
\end{align}
\end{lemma}
\begin{proof}
Write
\begin{align}
A_{\beta}(\alpha) = \prod_{p} p_{\beta}(\alpha),
\end{align}
where
\begin{equation}
p_{\beta}(\alpha):= \frac{G_{p}(-\alpha,-\beta,\alpha,\beta)}{Y_{p}(-\alpha,-\beta,\alpha,\beta)}
\end{equation}
are the local factors of $A_{\beta}(\alpha)$, and note that $p_{\beta}(-\beta)=1$ at each prime $p$. By the product rule,
\begin{equation}
\frac{d}{d \alpha}A_{\beta} = \sum_{q}\frac{d}{d \alpha}q_{\beta}\prod_{p \neq q}p_{\beta},
\end{equation}
and the result follows upon noting that
\begin{equation}
\frac{d}{d \alpha}q_{\beta}(\alpha)\bigg \vert_{\alpha = -\beta} = \left\{
\begin{array}{l l l}
0 & \text{ if } q=2 \\
0 & \text{ if } q\equiv 1(4)\\
-2\frac{\left(q^{2+8 \beta}+q^2-2 q^{4 \beta}\right) \log q}{q^{2+8 \beta}+q^2-q^{4 \beta }-q^{4+4 \beta }} & \text{ if } q\equiv 3(4).
\end{array} \right.
\end{equation}
\end{proof}
\section{The Ratios Conjecture Prediction for $\text{Var}(\psi_{K,X})$}
Let $F_{K}(\theta)$ be as in (\ref{Fk}). By the Fourier expansion of $F_{K}$, we may write
\begin{align}
\begin{split}
\psi_{K,X}(\theta) &=\sum_{\mathfrak{a} } \Phi \left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a}) F_K(\theta_{\mathfrak{a}} -\theta)\\
&= \sum_{\mathfrak{a} }\Phi\left(\frac{N(\mathfrak{a})}{X}\right)\Lambda(\mathfrak{a})\sum_{k \in \mathbb{Z}}\frac{1}{K}\widehat{f}\left(\frac{k}{K}\right) e^{4ki(\theta_{\mathfrak{a}}-\theta)}.
\end{split}
\end{align}
Since the mean value is given by the zero mode $k=0$, the variance may be computed as
\begin{align}
\begin{split}
\text{Var}(\psi_{K,X})&=\frac{2}{\pi}\int_{0}^{\frac \pi 2}\bigg|\psi_{K,X}(\theta) - \langle \psi_{K,X}\rangle\bigg|^{2}d\theta \\
&=\frac{2}{\pi}\int_{0}^{\frac \pi 2}\bigg|\sum_{k\neq 0} e^{-i4k\theta} \frac 1{ K}\widehat{f}\left(\frac {k}{ K}\right) \sum_{\mathfrak{a}} \Phi\left(\frac{N(\mathfrak{a})}{X}\right) \Lambda(\mathfrak{a})\Xi_{k}(\mathfrak{a})\bigg|^{2}d\theta.\label{nonk}
\end{split}
\end{align}
By applying the \textit{Mellin Inversion Formula}
\begin{equation}
\Phi(x) = \frac{1}{2\pi i}\int_{\text{Re}(s)=2}\tilde{\Phi}(s)x^{-s}ds,
\end{equation}
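As an illustration (not needed for the argument), Mellin inversion can be verified numerically for $\Phi(x)=e^{-x}$, whose Mellin transform is $\tilde\Phi(s)=\Gamma(s)$. The complex $\Gamma$ below uses the standard Lanczos approximation with $g=7$.

```python
import cmath
import math

# Standard Lanczos coefficients for g = 7.
_c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma(z) for complex z with Re(z) > 0 (Lanczos approximation)."""
    z -= 1
    s = _c[0] + sum(c / (z + i) for i, c in enumerate(_c[1:], start=1))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * s

def mellin_invert(x, T=40.0, steps=8000):
    """(1/(2*pi*i)) * int_{Re(s)=2} Gamma(s) x^(-s) ds, computed by the
    trapezoid rule on s = 2 + it; this should recover exp(-x)."""
    dt = 2 * T / steps
    total = 0j
    for i in range(steps + 1):
        t = -T + i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * cgamma(2 + 1j * t) * x ** (-2 - 1j * t)
    return (total * dt / (2 * math.pi)).real
```

Since $|\Gamma(2+it)|$ decays like $e^{-\pi|t|/2}$, truncating the contour at $|t|=40$ costs a negligible error.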
we obtain
\begin{align}
\begin{split}
\sum_{\mathfrak a} \Lambda(\mathfrak a) \Xi_{k}(\mathfrak a)\Phi\left(\frac{N(\mathfrak a)}{X}\right) &= \frac 1{2\pi i}\int_{(2)} \sum_{\mathfrak a} \Lambda(\mathfrak a) \Xi_{k}(\mathfrak{a}) \frac{X^s}{N(\mathfrak{a})^s} \tilde \Phi(s) ds\\
&= \frac 1{2\pi i}\int_{(2)} -\frac{L_{k}'}{L_{k}}(s) \tilde \Phi(s) X^s ds.
\end{split}
\end{align}
Inserting this into (\ref{nonk}), we find that
\begin{align}
\begin{split}
\text{Var}(\psi_{K,X}) &= \frac{2}{\pi}\int_{0}^{\frac \pi 2}\bigg|\sum_{k\neq 0} e^{-i4k\theta} \frac 1{ K}\widehat{f}\left(\frac {k}{ K}\right) \sum_{\mathfrak{a}} \Lambda(\mathfrak{a})\Xi_{k}(\mathfrak{a})\Phi\left(\frac{N(\mathfrak{a})}{X}\right)\bigg|^{2}d\theta\\
&= \frac{2}{\pi}\int_{0}^{\frac \pi 2}\bigg|\sum_{k\neq 0} e^{-i4k\theta} \frac 1{ K}\widehat{f}\left(\frac {k}{ K}\right) \frac {i}{2\pi}\int_{(2)}\frac{L_{k}'}{L_{k}}(s) \tilde \Phi(s) X^s ds\bigg|^{2}d\theta.
\end{split}
\end{align}
Upon recalling that
\begin{align}
\int_{0}^{\frac \pi 2}e^{4i(k'-k)\theta}d\theta = \left\{
\begin{array}{l l}
0 & \text{ if } k \neq k' \\
\frac \pi 2 & \text{ if } k = k',\\
\end{array} \right.
\end{align}
$\text{Var}(\psi_{K,X})$ can be restricted to the diagonal terms $k = k'$, i.e.,
\begin{align}
\begin{split}
\text{Var}(\psi_{K,X}) &=\frac 1{4\pi^2K^{2}}\int_{(2)}\int_{(2)}\sum_{k\neq 0}\bigg |\widehat{f}\left(\frac {k}{ K}\right)\bigg |^{2}\frac{L_{k}'}{L_{k}}(s)\frac{L_{k}'}{L_{k}}(\overline{s'}) \tilde \Phi(s)\tilde \Phi(\overline{s'})X^{s} X^{\overline{s'}}ds\overline{ds'}
\end{split}
\end{align}
by Fubini's theorem. Moreover, under GRH, $\frac {L_{k}'}{L_{k}}(s)$ is holomorphic in the half-plane Re$(s)>\frac {1}{2}$, and thus we may shift the vertical integrals to Re$(s)=\frac{1}{2}+\epsilon$, and Re$(s')=\frac{1}{2}+\epsilon'$, for any $\epsilon, \epsilon'>0$. Upon making the change of variables $\alpha:= s-\frac{1}{2}$ and $\beta:= s'-\frac 1 2$ we find that
\begin{equation}\label{varcontour}
\begin{split}
\text{Var}(\psi_{K,X})&=- \frac {X^{1-2\lambda}}{4\pi^2}\int_{(\epsilon')}\int_{(\epsilon)}\sum_{k\neq 0} \bigg |\widehat{f}\left(\frac {k}{ K}\right)\bigg |^{2}\frac{L_{k}'}{L_{k}}\left(\frac{1}{2}+\alpha\right)\frac{L_{k}'}{L_{k}}\left(\frac{1}{2}+\beta\right)\\
&\phantom{=}\times \tilde \Phi\left(\frac{1}{2}+\alpha\right)\tilde \Phi\left(\frac{1}{2}+\beta\right) X^{\beta}X^{\alpha}d\alpha d\beta.
\end{split}
\end{equation}
Note by (\ref{ratios domain 2}) that the substitution of the ratios conjecture is only valid when Im$(\alpha)$, Im$(\beta)\ll_{c} K^{1-c}$, for small $c>0$. If either Im$(\alpha)> K^{1-c}$ or Im$(\beta)> K^{1-c}$, we use the rapid decay of $\tilde{\Phi}$, as well as upper bounds on the growth of $\frac{L'_{k}}{L_{k}}$ within the critical strip, to show that the contribution to the double integral coming from these tails is bounded by $O_{c}\left(K^{-1+c}\right)$. For Im$(\alpha)$, Im$(\beta) < K^{1-c}$, we take the derivative of (\ref{the conjecture M}) to obtain
\begin{align}\label{M prime}
\sum_{\substack{k \neq 0}} \bigg |\widehat{f}\left(\frac {k}{ K}\right)\bigg |^{2}\frac{L_{k}'}{L_{k}}\left(\frac{1}{2}+\alpha\right)\frac{L_{k}'}{L_{k}}\left(\frac{1}{2}+\beta\right)= \sum_{\substack{k \neq 0}} \bigg |\widehat{f}\left(\frac {k}{ K}\right)\bigg |^{2}M'_{K}(\alpha,\beta)+O\left(K^{\frac{1}{2}+\epsilon}\right),
\end{align}
where\footnote{Here, and elsewhere, we allow for a slight abuse of notation: $\alpha$ and $\beta$ denote coordinates of $M_{K}$, as well as coordinates of the point at which the derivative is then evaluated.}
\begin{align}
\begin{split}
&M_{K}'(\alpha,\beta) := \frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha}M_{K}(\alpha,\beta,\gamma,\delta)\Bigg \vert_{(\alpha,\beta,\alpha,\beta)}.
\end{split}
\end{align}
Plugging (\ref{M prime}) into (\ref{varcontour}) for Im$(\alpha)$, Im$(\beta) < K^{1-c}$, and bounding the tails as above, we arrive at the following conjecture:
\begin{conjecture}\label{full conjecture} We have that
\begin{align}
&\text{Var}(\psi_{K,X}) = -C_{f}\frac {X}{K}\bigg(I_{1} + I_{2}+I_{3}+I_{4}\bigg)+O\left(X^{-\frac{\lambda}{2}+\epsilon}\right),
\end{align}
where
\begin{equation}
\begin{split}
I_{1}:&= \int_{(\epsilon')} \int_{(\epsilon)}\frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha} G(\alpha,\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\\
&\phantom{=}\times \tilde \Phi\left(\frac{1}{2}+\alpha\right)\tilde \Phi\left(\frac{1}{2}+\beta\right)X^{\alpha+\beta}d\alpha d\beta,
\end{split}
\end{equation}
\begin{align}
\begin{split}
I_{2}:&=\int_{(\epsilon')}\int_{(\epsilon)}\frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha}\left(\frac{\pi}{2}\right)^{2\beta}\frac{1}{1-2\beta} G(\alpha,-\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\\
&\phantom{=}\times \tilde \Phi\left(\frac{1}{2}+\alpha\right)\tilde \Phi\left(\frac{1}{2}+\beta\right) X^{\alpha}X^{\beta(1-2\lambda)}d\alpha d\beta
\end{split}
\end{align}
\begin{align}
\begin{split}
I_{3}:&= \int_{(\epsilon')}\int_{(\epsilon)}\frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha}\left(\frac{\pi}{2}\right)^{2\alpha}\frac{1}{1-2\alpha} G(-\alpha,\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\\
&\phantom{=} \times \tilde \Phi\left(\frac{1}{2}+\alpha\right)\tilde \Phi\left(\frac{1}{2}+\beta\right) X^{\alpha(1-2\lambda)}X^{\beta}d\alpha d\beta
\end{split}
\end{align}
and
\begin{align}
\begin{split}
I_{4}&:=\int_{(\epsilon')}\int_{(\epsilon)}\frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha} \left(\frac{1}{1-2(\alpha+\beta)}\right)\left(\frac{\pi}{2}\right)^{2(\alpha+\beta)} G(-\alpha,-\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\\
&\hspace{5mm}\times\tilde \Phi\left(\frac{1}{2}+\alpha\right) \tilde \Phi\left(\frac{1}{2}+\beta \right) X^{\alpha(1-2\lambda)}X^{\beta(1-2\lambda)}d\alpha d\beta.
\end{split}
\end{align}
\end{conjecture}
Conjecture \ref{conj} will then follow from the three lemmas below:
\begin{lemma}\label{1st total} We have
\begin{equation}
I_{1} = -(\log X) C_{\Phi}-C'_{\Phi}-\pi^2\tilde{\Phi}\left(\frac{1}{2}\right)^2+O_{\Phi}\left(X^{-\frac 1 5}\right).
\end{equation}
\end{lemma}
\begin{lemma} \label{second total}
We have
\begin{equation}
I_{2}+I_{3} = \left\{
\begin{array}{l l l}
O_{\Phi}\left(X^{-\epsilon}\right) & \text{ if } \lambda > 1 \\
2 \pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2} +O_{\Phi}\left(X^{-\epsilon}\right) & \text{ if } \frac 1 2 < \lambda < 1 \\
4 \pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2} + O_{\Phi}\left(X^{-\epsilon}\right)& \text{ if } \lambda < \frac 1 2,
\end{array}\right.
\end{equation}
where $\epsilon > 0$ is a constant $($depending on $\lambda)$.
\end{lemma}
\begin{lemma}\label{fourth total}
We have
\begin{equation}
I_{4} = \left\{\begin{array}{l l}
C_{\Phi}(1 - 2\lambda)\log X+\kappa+ O_{\Phi}\left(X^{-\epsilon}\right)& \text{ if } \frac 1 2 < \lambda\\
O_{\Phi}\left(X^{-\epsilon}\right) & \text{ if } \frac 1 2 > \lambda,
\end{array}\right.
\end{equation}
where
\begin{equation}
\kappa := C_{\Phi}\left(\log \left(\frac {\pi^2}{4}\right)+2\right)+C_{\Phi,\zeta}-C_{\Phi,L} +C'_\Phi-\pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2}-A_{\Phi}'.
\end{equation}
Here $\epsilon > 0$ is a constant $($depending on $\lambda)$, and $C_{\Phi,\zeta}$, $C_{\Phi,L}$, and $A_{\Phi}'$, are as in $(\ref{zeta constant})$, $(\ref{L constant})$, and $(\ref{A constant})$, respectively.
\end{lemma}
Conjecture \ref{conj} follows upon inserting the results from Lemma \ref{1st total}, Lemma \ref{second total}, and Lemma \ref{fourth total}, into Conjecture \ref{full conjecture}. Note that when $\lambda >1$, Conjecture \ref{full conjecture} moreover agrees with Theorem \ref{trivial regime theorem}.
\section{Auxiliary Lemmas}
Before proceeding to the proofs of Lemmas \ref{1st total}, \ref{second total}, and \ref{fourth total}, we will prove a few auxiliary lemmas that will be used frequently in the rest of the paper.
\begin{lemma}\label{countour bound}
Let $h(\alpha)$ be holomorphic in $\Omega := \left\{-\frac{1}{4}< \textnormal{Re}(\alpha)< \epsilon\right\}$ for some $\epsilon>0$, except possibly at a finite set of poles. Moreover, suppose that $h(\alpha)$ does not grow too rapidly in $\Omega$, i.e., there exists a fixed $d>0$ such that $h(\alpha) \ll |\alpha|^{d}$ away from the poles in $\Omega$. Set
\begin{equation}
f(\alpha) := h(\alpha)\tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha},
\end{equation}
where $\alpha$ and $\tilde{\Phi}$ are as above. Then
\begin{equation}
\int_{(\epsilon)} f(\alpha)d\alpha =2\pi i\cdot \sum_{k}\textnormal{Res}(f,a_{k})+O\left( X^{-\frac{1}{5}}\right),
\end{equation}
where $\textnormal{Res}(f,a_{k})$ denotes the residue of $f$ at each pole $a_{k} \in \Omega$.
\end{lemma}
\begin{proof}
Consider the contour integral drawn counter-clockwise along the closed box
\begin{equation}
C_{T} := V_{1} \cup H_{1} \cup V_{2} \cup H_{2},
\end{equation}
where
\begin{align}
\left\{
\begin{array}{l l}
V_{1} &:= [\epsilon-iT,\epsilon+iT]\\
H_{1} &:= [\epsilon+iT,-\frac 1 4+\epsilon+iT]\\
V_{2} &:= [-\frac 1 4+\epsilon+iT,-\frac 1 4+\epsilon-iT]\\
H_{2} &:= [-\frac 1 4+\epsilon-iT,\epsilon-iT].
\end{array} \right.
\end{align}
By Cauchy's residue theorem,
\begin{align}
\begin{split}
\int_{(\epsilon)}f(\alpha) ~d\alpha &= 2\pi i \cdot \sum_{k}\textnormal{Res}(f,a_{k})- \lim_{T \rightarrow \infty}\bigg ( \int_{H_{1} \cup V_{2} \cup H_{2}} f(\alpha)d\alpha\bigg ).
\end{split}
\end{align}
Set $\alpha = \sigma+iT$. By the properties of the Mellin transform, we find that for any fixed $A >0$, uniformly for $\sigma$ in a fixed compact interval,
\begin{equation}
\tilde{\Phi}\left(\frac 1 2 +\sigma+it \right) \ll_{A} \textnormal{min}(1,|t|^{-A}).
\end{equation}
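Indeed, assuming (as is standard for such test functions) that $\Phi$ is smooth and compactly supported in $(0,\infty)$, this decay follows from repeated integration by parts: for any integer $A \geq 1$,
\begin{equation*}
\tilde{\Phi}(s) = \int_{0}^{\infty}\Phi(x)x^{s-1}dx = \frac{(-1)^{A}}{s(s+1)\cdots(s+A-1)}\int_{0}^{\infty}\Phi^{(A)}(x)x^{s+A-1}dx \ll_{A} |s|^{-A},
\end{equation*}
uniformly for $\textnormal{Re}(s)$ in a fixed compact interval.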
Since moreover $h(\alpha)$ does not grow too rapidly, we bound
\begin{align}
\begin{split}
\int_{H_{1}} f(\alpha)d\alpha &= \int_{\epsilon}^{-1/4+\epsilon}h(\sigma+iT)\tilde{\Phi}\left(\frac 1 2+\sigma+iT\right)X^{\sigma+iT}d\sigma \ll \frac{X^{\epsilon}}{T^{A}},
\end{split}
\end{align}
so that
\begin{equation}
\lim_{T \rightarrow \infty}\int_{H_{1}} f(\alpha)d\alpha = 0,
\end{equation}
and similarly
\begin{equation}
\lim_{T \rightarrow \infty}\int_{H_{2}} f(\alpha)d\alpha = 0.
\end{equation}
Finally, we bound
\begin{align}
\begin{split}
\lim_{T \rightarrow \infty} \int_{V_{2}} f(\alpha)d\alpha &= -i \int_{\mathbb{R}}h\left(-\frac{1}{4}+\epsilon+it\right)\tilde{\Phi}\left(\frac{1}{4}+\epsilon+it\right)X^{(-\frac{1}{4}+\epsilon+it)}dt\\
&\ll \int_{\mathbb{R}} \textnormal{min}(1,|t|^{-A})X^{-\frac{1}{4}+\epsilon}dt \ll X^{-\frac{1}{5}},
\end{split}
\end{align}
from which the lemma follows.
\end{proof}
\begin{lemma}\label{holomorphic integrals}
Let $\alpha, \beta, \tilde{\Phi}$ be as above. Suppose $h(\alpha,\beta)$ is holomorphic\footnote{A function $f:\Omega \subset \mathbb{C}^{2} \rightarrow \mathbb{C}$ is said to be \textit{holomorphic} if it is holomorphic in each variable separately.} in the region
\begin{equation}
\Omega \times \Omega := \left\{(\alpha,\beta): -\frac{1}{4}< \textnormal{Re}(\alpha),\textnormal{Re}(\beta)< \epsilon\right\}
\end{equation}
for some $\epsilon >0$, and moreover that $h(\alpha,\beta)$ does not grow too rapidly in $\Omega \times \Omega$, i.e., does not grow too rapidly in each variable, separately. Then
\begin{equation}
\int_{(\epsilon')}\tilde{\Phi}\left (\frac 1 2+\beta \right)\int_{(\epsilon)}h(\alpha,\beta)\tilde{\Phi}\left ( \frac 1 2+\alpha \right)X^{\alpha+\beta} ~d\alpha d\beta \ll X^{-\frac{2}{5}}.
\end{equation}
\end{lemma}
\begin{proof}
Set
\begin{equation}
f_{\beta}(\alpha) := h_{\beta}(\alpha)\tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha},
\end{equation}
where $h_{\beta}(\alpha):=h(\alpha,\beta)$. Since $f_{\beta}$ is holomorphic, by an application of Lemma \ref{countour bound} we write
\begin{equation}
\int_{(\epsilon)} f_{\beta}(\alpha)d\alpha =O_{\beta}\left(X^{-\frac{1}{5}}\right)=O\left(g(\beta)\cdot X^{-\frac{1}{5}}\right),
\end{equation}
where $g$ does not grow too rapidly as a function of $\beta$. By another application of Lemma \ref{countour bound}, it then follows that
\begin{align}
\begin{split}
\int_{(\epsilon')}\tilde{\Phi}\left ( \frac 1 2+\beta \right)X^{\beta} \bigg(\int_{(\epsilon)}f_{\beta}(\alpha)~d\alpha\bigg) d\beta &\ll \int_{(\epsilon')}g(\beta) \tilde{\Phi}\left (\frac 1 2+\beta \right)X^{-\frac{1}{5}+\beta} ~d\beta\\
&\ll X^{-\frac{2}{5}}.
\end{split}
\end{align}
\end{proof}
\begin{lemma}\label{holomorphic bound 2}
Let $\alpha$, $\beta$, $\tilde{\Phi}$, and $f_{\beta}$ be as above. Suppose $f_{\beta}(\alpha)$ has finitely many poles $a_{k}(\beta)$, with residues $\textnormal{Res}(f_{\beta},a_{k}(\beta))$. Moreover, suppose that for each $k$, $\textnormal{Res}(f_{\beta},a_{k}(\beta))$ is holomorphic as a function of $\beta$ in $\Omega := \left\{-\frac{1}{4}< \textnormal{Re}(\beta)< \epsilon\right\}$ for some $\epsilon > 0$, and that $\textnormal{Res}(f_{\beta},a_{k}(\beta))$ does not grow too rapidly in $\Omega$. Then
\begin{equation}
\int_{(\epsilon')}\tilde{\Phi}\left ( \frac 1 2+\beta \right)X^{\beta}\int_{(\epsilon)}f_{\beta}(\alpha) ~d\alpha d\beta \ll X^{-\frac{1}{5}}.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{countour bound}, we write
\begin{equation}\label{inner error}
\int_{(\epsilon)} f_{\beta}(\alpha)d\alpha =2\pi i\cdot \sum_{k}\textnormal{Res}(f_{\beta},a_{k}(\beta))+O\left(g(\beta)\cdot X^{-\frac{1}{5}}\right),
\end{equation}
where, as in the proof of Lemma \ref{holomorphic integrals}, we explicitly note the dependence of the error term on $\beta$. Applying Lemma \ref{holomorphic integrals} to the error term in (\ref{inner error}), we obtain
\begin{align}
\begin{split}
\int_{(\epsilon')}\tilde{\Phi}\left ( \frac 1 2+\beta \right)X^{\beta} &\int_{(\epsilon)}f_{\beta}(\alpha)~d\alpha d\beta \\
&= 2\pi i \cdot \int_{(\epsilon')}\tilde{\Phi}\left ( \frac 1 2+\beta \right)\sum_{k}\textnormal{Res}(f_{\beta},a_{k}(\beta))X^{\beta} d\beta+O\left(X^{-\frac{2}{5}}\right),
\end{split}
\end{align}
and finally by another application of Lemma \ref{countour bound},
\begin{align}
\int_{(\epsilon')}\tilde{\Phi}\left ( \frac 1 2+\beta \right)\sum_{k}\textnormal{Res}(f_{\beta},a_{k}(\beta))X^{\beta} d\beta &\ll X^{-\frac{1}{5}}.
\end{align}
\end{proof}
\begin{lemma} Let $C_{\Phi}$ and $C'_{\Phi}$ be as in $(\ref{mean value constants})$ and $(\ref{C constant 1})$, respectively. Then
\begin{equation}\label{C constant 2}
C_{\Phi} = - 2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)~d\beta
\end{equation}
and
\begin{equation}\label{C' constant 2}
C'_{\Phi} = - 2 \pi i\int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}'\left(\frac 1 2-\beta \right)~d\beta.
\end{equation}
\end{lemma}
\begin{proof}
Set $\phi(y) = \Phi(e^{y})e^{y/2}$ so that
\begin{align}
\tilde{\Phi}\left(\frac 1 2 +it \right) = \int_{0}^{\infty}\Phi(x)x^{-\frac 1 2 +it}dx= \int_{\mathbb{R}} \phi(y)e^{iy t}dy = \widehat{\phi}\left(-\frac{t}{2\pi}\right),
\end{align}
and similarly $\tilde{\Phi}\left(\frac 1 2 -it \right) = \widehat{\phi}\left(\frac{t}{2\pi}\right).$ By shifting the integral to Re$(\beta) = 0$ we obtain
\begin{align}
2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta\right)\widetilde{\Phi}\left(\frac{1}{2}-\beta\right)~d\beta= 2\pi i \int_{\mathbb{R}}\widehat{\phi}\left(-\frac{t}{2\pi}\right)\widehat{\phi}\left(\frac{t}{2\pi}\right)i ~dt.
\end{align}
Since $\overline{\widehat{\phi}\left(-\frac{t}{2\pi}\right)}=\widehat{\phi} \left(\frac{t}{2\pi}\right)$, we moreover have that
\begin{align}
\begin{split}
\int_{\mathbb{R}}\widehat{\phi}\left(-\frac{t}{2\pi}\right)\widehat{\phi}\left(\frac{t}{2\pi}\right)i dt&= \int_{\mathbb{R}}\left|\widehat{\phi}\left(-\frac{t}{2\pi}\right)\right|^2 i dt= 2\pi i\cdot \int_{\mathbb{R}}\left|\widehat{\phi}\left(x\right)\right|^2 dx\\
&= 2\pi i\cdot \int_{0}^{\infty}\Phi(x)^2 dx,
\end{split}
\end{align}
i.e.,
\begin{equation}
C_{\Phi} = 4 \pi^2 \int_{0}^{\infty}\Phi(x)^2 dx = -2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta\right)\widetilde{\Phi}\left(\frac{1}{2}-\beta\right)~d\beta.
\end{equation}
Next, note that
\begin{align}
\begin{split}
\tilde{\Phi}'\left(\frac 1 2-\beta \right)=-\frac{d}{d\beta}\tilde{\Phi}\left(\frac 1 2-\beta\right)&=-\frac{d}{d\beta}\int_{0}^{\infty}\Phi(x)x^{\frac{1}{2}-\beta-1}dx\\
&=\int_{0}^{\infty}\Phi(x)(\log x) x^{-\beta -\frac{1}{2}}dx.
\end{split}
\end{align}
Upon setting $g(y) = y \cdot \Phi(e^{y})e^{y/2}$, we write
\begin{align}
\int_{0}^{\infty}\Phi(x)(\log x) x^{ -\frac{1}{2}-it}dx= \int_{\mathbb{R}}g(y)e^{-iyt}dy =\widehat{g}\left(\frac {t}{2\pi}\right),
\end{align}
so that by shifting to the line Re$(\beta) = 0$, it follows that
\begin{align}
\begin{split}
2 \pi i\int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}'\left(\frac 1 2-\beta \right)~d\beta &= 2\pi i \int_{\mathbb{R}}\widehat{g}\left(\frac {t}{2\pi}\right)\widehat{\phi}\left(-\frac {t}{2\pi}\right)i ~dt\\
&= (2\pi i)^{2} \cdot \int_{\mathbb{R}}\widehat{g}(x)\overline{\widehat{\phi}(x)} ~dx\\
&= -4\pi ^2 \cdot \int_{\mathbb{R}}g(x)\overline{\phi(x)} ~dx\\
&= -4\pi ^2 \cdot \int_{0}^{\infty}\log x \cdot \Phi(x)^2 ~dx.
\end{split}
\end{align}
from which $(\ref{C' constant 2})$ follows.
\end{proof}
\section{Proof of Lemma \ref{1st total}}
In this section we seek to compute
\begin{equation}\label{first integral}
I_{1}= \int_{(\epsilon')}\int_{(\epsilon)}\frac{\partial }{\partial \beta}\frac{\partial}{\partial \alpha} G(\alpha,\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\tilde \Phi\left(\frac{1}{2}+\alpha\right)\tilde \Phi\left(\frac{1}{2}+\beta\right)X^{\alpha+\beta}d\alpha d\beta.
\end{equation}
Note that
\begin{align}\label{dadb Y}
\begin{split}
\frac{\partial}{\partial \alpha}\frac{\partial}{\partial\beta}&G(\alpha,\beta,\gamma,\delta)\bigg \vert_{(\alpha,\beta,\alpha,\beta)} = \frac{\partial}{\partial \alpha}\frac{\partial}{\partial\beta}\bigg (\mathcal{Y}(\alpha,\beta,\gamma,\delta)\cdot \mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg )\bigg \vert_{(\alpha,\beta,\alpha,\beta)}\\
&=\frac{\zeta''}{\zeta}(1+\alpha+\beta)-\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2+\frac{\zeta'}{\zeta}(1+2\alpha)\frac{\zeta'}{\zeta}(1+2\beta)\\
&\phantom{=}+\frac{\zeta'}{\zeta}(1+2\alpha)\cdot\frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)}\\
& \phantom{=}+\frac{\zeta'}{\zeta}(1+2\beta)\cdot \frac{\partial}{\partial \alpha}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)}+\frac{\partial}{\partial \alpha}\frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)},
\end{split}
\end{align}
where we recall that $\mathcal{A}(\alpha,\beta,\alpha,\beta)=1$. Since
\begin{equation}
h(\alpha,\beta) := \frac{\partial}{\partial \alpha}\frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)}
\end{equation}
is holomorphic in $\Omega \times \Omega$, by Lemma \ref{holomorphic integrals} we find that the integral corresponding to this term is bounded by $O\left(X^{-2/5}\right)$. Moreover, by an application of Lemma \ref{holomorphic bound 2}, the integrals corresponding to
\begin{equation}
\frac{\zeta'}{\zeta}(1+2\alpha)\cdot \frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)}\hspace{5mm} \textnormal{and} \hspace{5mm} \frac{\zeta'}{\zeta}(1+2\beta)\cdot \frac{\partial}{\partial \alpha}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,\beta,\alpha,\beta)}
\end{equation}
are each bounded by $O\left(X^{-1/5}\right)$. The main contributions to (\ref{first integral}) thus come from
\begin{equation}
\frac{\zeta''}{\zeta}(1+\alpha+\beta), \hspace{5mm} -\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2, \hspace{5mm} \textnormal{and } \hspace{5mm} \frac{\zeta'}{\zeta}(1+2\alpha)\cdot \frac{\zeta'}{\zeta}(1+2\beta),
\end{equation}
and we now proceed to separately compute each of the three corresponding integrals.
\subsection{Computing $\frac{\zeta''}{\zeta}(1+\alpha+\beta):$}\label{1st}
The first double integral we would like to compute is
\begin{align}\label{firstintegral}
\begin{split}
I_{(\ref{1st})}&:=\int_{(\epsilon')}\int_{(\epsilon)}\frac{\zeta''}{\zeta}(1+\alpha+\beta)\tilde{\Phi}\left(\frac 1 2+\alpha\right)\tilde{\Phi}\left(\frac 1 2+\beta\right)X^{(\alpha+\beta)} ~d\alpha~d\beta\\
&=\int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta\right)X^{\beta} \int_{(\epsilon)}f_{(\ref{1st})}(\alpha) ~d\alpha ~d\beta,
\end{split}
\end{align}
where
\begin{equation}
f_{(\ref{1st})}(\alpha) := \frac{\zeta''}{\zeta}(1+\alpha+\beta)\tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha}.
\end{equation}
Since $f_{(\ref{1st})}$ has a double pole at $\alpha=-\beta$, it follows from Lemma \ref{countour bound} that
\begin{align}
\int_{(\epsilon)}f_{(\ref{1st})}(\alpha) ~d\alpha &= 2\pi i \cdot \text{Res}(f_{(\ref{1st})},-\beta)+O\left(X^{-\frac{1}{5}}\right).
\end{align}
To compute $\text{Res}(f_{(\ref{1st})},-\beta)$, we split $f_{(\ref{1st})}(\alpha)$ into two parts.\\
\\
i) First, we expand $\frac{\zeta''}{\zeta}(1+\alpha+\beta)$ about the point $\alpha = -\beta$, yielding
\begin{equation}
\frac{\zeta''}{\zeta}(1+\alpha+\beta) = \frac{2}{(\alpha+\beta)^{2}}-\frac{2\gamma_{0}}{(\alpha+\beta)}+2(\gamma_0^2+\gamma_1)+h.o.t.,
\end{equation}
where $\gamma_i$ are Stieltjes constants, not to be confused with the variable $\gamma$ used previously.\\
ii) Next, we expand $g(\alpha) = \tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha}$ about the point $\alpha = -\beta$. Since
\begin{equation}
g'(\alpha) = \tilde{\Phi}\left(\frac 1 2+\alpha \right)(\log X) X^{\alpha}+\frac{d}{d\alpha}\tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha},
\end{equation}
it follows that
\begin{align}\label{g expansion}
\begin{split}
g(\alpha)&= \tilde{\Phi}\left(\frac 1 2-\beta \right)X^{-\beta}+\bigg(\tilde{\Phi}\left(\frac 1 2-\beta\right)(\log X) X^{-\beta}\\
&\phantom{=}+\tilde{\Phi}'\left(\frac 1 2-\beta\right)X^{-\beta}\bigg)(\alpha+\beta) +h.o.t.
\end{split}
\end{align}
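For completeness, we note that the expansion in i) follows from the classical Laurent series of $\zeta$: writing $s = \alpha+\beta$, we have
\begin{equation*}
\zeta(1+s) = \frac{1}{s}\left(1+\gamma_{0}s-\gamma_{1}s^{2}+O(s^{3})\right) \hspace{5mm} \textnormal{and} \hspace{5mm} \zeta''(1+s) = \frac{2}{s^{3}}+O(1),
\end{equation*}
so that
\begin{equation*}
\frac{\zeta''}{\zeta}(1+s) = \frac{2}{s^{2}}\left(1+O(s^{3})\right)\left(1-\gamma_{0}s+(\gamma_{0}^{2}+\gamma_{1})s^{2}+O(s^{3})\right) = \frac{2}{s^{2}}-\frac{2\gamma_{0}}{s}+2(\gamma_{0}^{2}+\gamma_{1})+O(s).
\end{equation*}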
Multiplying the two Taylor expansions above, we find that
\begin{align}
\begin{split}
\textnormal{Res}(f_{(\ref{1st})},-\beta)&= 2\bigg(\tilde{\Phi}\left(\frac 1 2-\beta\right)(\log X) +\tilde{\Phi}'\left(\frac 1 2-\beta\right)-\gamma_{0}\tilde{\Phi}\left(\frac 1 2-\beta \right)\bigg)X^{-\beta},
\end{split}
\end{align}
and therefore
\begin{align*}
\begin{split}
\int_{(\epsilon)}f_{(\ref{1st})}(\alpha) ~d\alpha&=4\pi i \bigg(\tilde{\Phi}\left(\frac 1 2-\beta \right)(\log X) +\tilde{\Phi}'\left(\frac 1 2-\beta \right)-\gamma_{0}\tilde{\Phi}\left(\frac 1 2-\beta \right)\bigg)X^{-\beta}\\
&\phantom{=}+O\left(X^{-\frac{1}{5}}\right).
\end{split}
\end{align*}
By an application of Lemma \ref{countour bound}, it follows that
\begin{align}
\begin{split}
&I_{(\ref{1st})} =4\pi i \bigg(\log X \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right) \tilde{\Phi}\left(\frac 1 2-\beta \right)~d\beta\\
&+\int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right) \tilde{\Phi}'\left(\frac 1 2-\beta \right)~d\beta-\gamma_{0}\int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)~d\beta\bigg)\\
&+O\left(X^{-\frac{2}{5}}\right),
\end{split}
\end{align}
i.e.,
\begin{equation}\label{1st; 1}
\boxed{I_{(\ref{1st})} = - 2 (\log X) C_{\Phi}-2 C'_{\Phi}+2 \gamma_{0} C_{\Phi}+O\left(X^{-\frac{2}{5}}\right).}
\end{equation}
\subsection{Computing $-\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2$}\label{2nd}
Next, we are interested in the integral
\begin{align}\label{integral3}
\begin{split}
I_{(\ref{2nd})}&:=-\int_{(\epsilon')}\int_{(\epsilon)}\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2\cdot \tilde{\Phi}\left(\frac 1 2+\alpha \right)\tilde{\Phi}\left(\frac 1 2+\beta \right)X^{\alpha+\beta} ~d\alpha~d\beta\\
&\hspace{1mm}=-\int_{(\epsilon')}X^{\beta}\tilde{\Phi}\left(\frac 1 2+\beta \right)\int_{(\epsilon)}f_{(\ref{2nd})}(\alpha) ~d\alpha~d\beta,
\end{split}
\end{align}
where
\begin{equation}
f_{(\ref{2nd})}(\alpha) := \frac{\zeta'}{\zeta}(1+\alpha+\beta)^2\cdot \tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha}.
\end{equation}
Since $f_{(\ref{2nd})}(\alpha)$ has a double pole at $\alpha = -\beta$, it follows from Lemma \ref{countour bound} that
\begin{equation}
\int_{(\epsilon)}f_{(\ref{2nd})}(\alpha) ~d\alpha = 2\pi i \cdot \textnormal{Res}(f_{(\ref{2nd})},-\beta) +O\left(X^{-\frac 1 5}\right).
\end{equation}
To determine the residue of $f_{(\ref{2nd})}$ at the point $\alpha = -\beta$, we expand $\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2$ and $g(\alpha) := \tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha}$ about the point $\alpha = -\beta$, yielding
\begin{align}
\begin{split}
\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2 &= \frac{1}{(\alpha+\beta)^2}-\frac{2\gamma_{0}}{(\alpha+\beta)}+h.o.t.,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
g(\alpha)&= \tilde{\Phi}\left(\frac 1 2-\beta \right)X^{-\beta}+\bigg(\tilde{\Phi}\left(\frac 1 2-\beta \right)(\log X) X^{-\beta}\\
&\phantom{=}+\tilde{\Phi}'\left(\frac 1 2-\beta \right)X^{-\beta}\bigg)(\alpha+\beta)+h.o.t.,
\end{split}
\end{align}
so that
\begin{align}
\begin{split}
\textnormal{Res}(f_{(\ref{2nd})},-\beta) &= \bigg(\tilde{\Phi}\left(\frac 1 2-\beta\right)(\log X) +\tilde{\Phi}'\left(\frac 1 2-\beta\right)-2\gamma_{0}\tilde{\Phi}\left(\frac 1 2-\beta \right)\bigg)X^{-\beta}.
\end{split}
\end{align}
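As a check, the expansion of $\frac{\zeta'}{\zeta}(1+\alpha+\beta)^2$ used here follows from squaring the standard Laurent series: with $s=\alpha+\beta$,
\begin{equation*}
\frac{\zeta'}{\zeta}(1+s) = -\frac{1}{s}+\gamma_{0}+O(s), \hspace{5mm} \textnormal{so that} \hspace{5mm} \frac{\zeta'}{\zeta}(1+s)^{2} = \frac{1}{s^{2}}-\frac{2\gamma_{0}}{s}+O(1).
\end{equation*}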
It follows that
\begin{align*}
\begin{split}
\int_{(\epsilon)}f_{(\ref{2nd})}(\alpha) ~d\alpha &= 2\pi i \bigg(\tilde{\Phi}\left(\frac 1 2-\beta \right)(\log X) +\tilde{\Phi}'\left(\frac 1 2-\beta \right)\\
&\phantom{=}-2\gamma_{0}\tilde{\Phi}\left(\frac 1 2-\beta \right)\bigg)X^{-\beta}+O\left(X^{-\frac{1}{5}}\right),
\end{split}
\end{align*}
from which we obtain
\begin{equation}\label{1st; 2}
\boxed{ I_{(\ref{2nd})}=(\log X) C_{\Phi}+C'_{\Phi}-2\gamma_{0} C_{\Phi}+O\left(X^{-\frac{2}{5}}\right).}
\end{equation}
\subsection{Computing $\left(\frac{\zeta'}{\zeta}(1+2\alpha)\right)\left(\frac{\zeta'}{\zeta}(1+2\beta)\right)$}\label{3rd}
Next we are interested in the integral
\begin{align}\label{thirdintegral}
\begin{split}
I_{(\ref{3rd})}&:=\int_{(\epsilon')}\int_{(\epsilon)}\frac{\zeta'}{\zeta}(1+2\alpha)\cdot \frac{\zeta'}{\zeta}(1+2\beta)\tilde{\Phi}\left(\frac{1}{2}+\alpha\right)\tilde{\Phi}\left(\frac{1}{2}+\beta\right)X^{\alpha+\beta} ~d\alpha~d\beta\\
&\phantom{:}=\int_{(\epsilon')}f_{(\ref{3rd})}(\beta)~d\beta\cdot \int_{(\epsilon)}f_{(\ref{3rd})}(\alpha)d\alpha,
\end{split}
\end{align}
where
\begin{equation}
f_{(\ref{3rd})}(\alpha) := \frac{\zeta'}{\zeta}(1+2\alpha)\tilde{\Phi}\left(\frac{1}{2}+\alpha\right)X^{\alpha}.
\end{equation}
Since
\begin{equation}
\frac{\zeta'}{\zeta}(1+2\alpha) = -\frac{1}{2\alpha} + \gamma_{0} + h.o.t.,
\end{equation}
$f_{(\ref{3rd})}$ has a simple pole at $\alpha = 0$ with residue
\begin{align}
\text{Res}(f_{(\ref{3rd})},0) &=\lim_{\alpha \rightarrow 0}\alpha \cdot f_{(\ref{3rd})}(\alpha)=-\frac{1}{2}\tilde{\Phi}\left(\frac{1}{2}\right).
\end{align}
It thus follows from Lemma \ref{countour bound} that
\begin{equation}
\int_{(\epsilon)}f_{(\ref{3rd})}(\alpha)d\alpha = -\pi i \tilde{\Phi}\left(\frac 1 2\right)+O\left(X^{-\frac 1 5}\right),
\end{equation}
and similarly
\begin{equation}
\int_{(\epsilon')}f_{(\ref{3rd})}(\beta)d\beta = -\pi i \tilde{\Phi}\left(\frac 1 2\right)+O\left(X^{-\frac 1 5}\right),
\end{equation}
from which we conclude that
\begin{equation}\label{1st; 3}
\boxed{I_{(\ref{3rd})} = -\pi^2\left(\tilde{\Phi}\left(\frac{1}{2}\right)\right)^2+O\left(X^{-\frac 1 5}\right).}
\end{equation}
Lemma \ref{1st total} then follows upon combining the results of (\ref{1st; 1}), (\ref{1st; 2}), and (\ref{1st; 3}).
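Explicitly, summing the three boxed contributions, the $\gamma_{0}$ terms cancel:
\begin{equation*}
I_{1} = (-2+1)(\log X) C_{\Phi}+(-2+1)C'_{\Phi}+(2\gamma_{0}-2\gamma_{0}) C_{\Phi}-\pi^{2}\left(\tilde{\Phi}\left(\frac{1}{2}\right)\right)^{2}+O_{\Phi}\left(X^{-\frac{1}{5}}\right),
\end{equation*}
which is precisely the statement of Lemma \ref{1st total}.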
\section{Proof of Lemma \ref{second total}}
Next, we consider the quantity
\begin{align}\label{2nd ratios piece}
\begin{split}
&\frac{\partial}{\partial \alpha}\frac{\partial}{\partial\beta}\Bigg(\frac{1}{1-2\alpha} \left(\frac{\pi}{2}\right)^{2\alpha} G(-\alpha,\beta,\gamma,\delta)\Bigg)\Bigg|_{(\alpha,\beta,\alpha,\beta)}=\frac{\zeta(1-2\alpha)}{(1-2\alpha)}\left(\frac \pi 2\right)^{2\alpha}\Bigg(\mathcal{A}(-\alpha,\beta,\alpha,\beta)\\
&\bigg(-\frac{\zeta'}{\zeta}(1+2\beta)-\frac{\zeta'}{\zeta}(1-\alpha+\beta)+\frac{\zeta'}{\zeta}(1+\alpha+\beta)\bigg)-\frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(-\alpha,\beta,\alpha,\beta)}\Bigg)
\end{split}
\end{align}
coming from the integral $I_{3}$, as well as the symmetric quantity
\begin{align}\label{3rd ratios piece}
\begin{split} &\frac{\partial}{\partial \alpha}\frac{\partial}{\partial\beta}\bigg(\frac{1}{1-2\beta} \left(\frac{\pi}{2}\right)^{2\beta} G(\alpha,-\beta,\gamma,\delta)\bigg)\bigg|_{(\alpha,\beta,\alpha,\beta)}=\frac{\zeta(1-2\beta)}{(1-2\beta)}\left(\frac{\pi}{2}\right)^{2\beta}\Bigg(\mathcal{A}(\alpha,-\beta,\alpha,\beta)\\
&\bigg(-\frac{\zeta'}{\zeta}(1+2\alpha)-\frac{\zeta'}{\zeta}(1+\alpha-\beta)+\frac{\zeta'}{\zeta}(1+\alpha+\beta)\bigg)-\frac{\partial}{\partial \alpha}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,-\beta,\alpha,\beta)}\Bigg)
\end{split}
\end{align}
coming from the integral $I_{2}$. As before, we approach this term by term, and note that by an application of Lemma \ref{holomorphic bound 2}, the integrals over
\begin{equation}
\frac{\partial}{\partial \beta}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(-\alpha,\beta,\alpha,\beta)} \hspace{5mm} \textnormal{and} \hspace{5mm} \frac{\partial}{\partial \alpha}\mathcal{A}(\alpha,\beta,\gamma,\delta)\bigg|_{(\alpha,-\beta,\alpha,\beta)}
\end{equation}
may be bounded by $O\left(X^{-\frac 1 5}\right)$. Significant contributions then come from integration against the following integrands:\\
\\
$i) -\frac{\zeta'}{\zeta}(1+2\beta)$ and $ -\frac{ \zeta'}{\zeta}(1+2\alpha)$,\\
\\
$ii)-\frac{ \zeta'}{ \zeta}(1-\alpha+\beta)$ and $-\frac{ \zeta'}{ \zeta}(1+\alpha-\beta)$,\\
\\
$iii) \hspace{2mm} 2 \cdot \frac{\zeta'}{ \zeta}(1+\alpha+\beta)$.
\subsection{Computing $ -\frac{ \zeta'}{\zeta}(1+2\beta)$ and $ -\frac{ \zeta'}{\zeta}(1+2\alpha)$:} \label{4th}
Combining the discussion above with Conjecture \ref{full conjecture}, we seek to compute the following integral:
\begin{align}
I_{(\ref{4th})}&:=-\int_{(\epsilon)}\frac{\zeta(1-2 \alpha)}{(1-2\alpha)} \cdot \left(\frac{\pi}{2}\right)^{2\alpha}\tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha(1-2\lambda)}\bigg (\int_{(\epsilon')}f_{(\ref{4th})}(\beta) d\beta\bigg )~d\alpha,
\end{align}
where
\begin{equation}
f_{(\ref{4th})}(\beta) := \mathcal{A}(-\alpha,\beta,\alpha,\beta)\frac{\zeta'}{\zeta}(1+2 \beta)\tilde{\Phi}\left(\frac 1 2+\beta\right)X^{\beta}.
\end{equation}
Note that since
\begin{equation}
\frac{\zeta'}{\zeta}(1+2\beta) = -\frac{1}{2\beta} + \gamma_{0} + h.o.t.,
\end{equation}
$f_{(\ref{4th})}$ has a simple pole at $\beta = 0$ with residue
\begin{align}
\textnormal{Res}(f_{(\ref{4th})},0) &= - \mathcal{A}(-\alpha,0,\alpha,0)\frac{1}{2}\tilde{\Phi}\left(\frac 1 2\right),
\end{align}
so that by Lemma \ref{countour bound},
\begin{equation}
\int_{(\epsilon')}f_{(\ref{4th})}(\beta) d\beta = -\pi i \mathcal{A}(-\alpha,0,\alpha,0)\tilde{\Phi}\left(\frac 1 2\right)+O\left(X^{-\frac 1 5}\right).
\end{equation}
Inserting this back into the outer integral, we find that
\begin{align}
I_{(\ref{4th})}&=\pi i \tilde{\Phi}\left(\frac 1 2\right)\int_{(\epsilon)}f'_{(\ref{4th})}(\alpha)~d\alpha+O\left(X^{-\frac 1 5}\right),
\end{align}
where
\begin{equation}
f'_{(\ref{4th})}(\alpha) := \mathcal{A}(-\alpha,0,\alpha,0)\left(\frac \pi 2\right)^{2\alpha} X^{\alpha(1-2\lambda)}\frac{\zeta{(1-2\alpha)}}{(1-2\alpha)}\tilde{\Phi}\left(\frac 1 2+\alpha\right).
\end{equation}
If $\lambda > \frac 1 2$, we shift to the vertical line Re$(\alpha) = 1/5$, so that
\begin{align}
\begin{split}
\int_{(\epsilon)}f'_{(\ref{4th})}(\alpha)~d\alpha &=i \int_{\mathbb{R}}\mathcal{A}\left(-\frac 1 5-it,0,\frac 1 5+it,0\right)\left(\frac{\pi^2 X^{1-2\lambda}}{4}\right)^{\frac 1 5 +it}\\
&\phantom{=}\times \frac{\zeta{(\frac 3 5-2it)}}{(\frac 3 5-2it)}\tilde{\Phi}\left(\frac {7}{10}+it\right)~dt\\
&=i \left(\frac{\pi^2 X^{1-2\lambda}}{4}\right)^{\frac 1 5}\int_{\mathbb{R}}\mathcal{A}\left(-\frac 1 5-it,0,\frac 1 5+it,0\right)\left(\frac{\pi^2 X^{1-2\lambda}}{4}\right)^{it}\\
&\phantom{=}\times\frac{\zeta{\left(\frac 3 5-2it\right)}}{\left(\frac 3 5-2it\right)}\tilde{\Phi}\left(\frac {7}{10}+it\right)~dt.
\end{split}
\end{align}
Since the integrand decays rapidly as a function of $t$, the integral is bounded absolutely by a constant that is independent of $\lambda$. It follows that for any fixed $\lambda > \frac 1 2$,
\begin{equation}
I_{(\ref{4th})} \ll X^{\left(\frac{1}{5}\right)\left(1-2\lambda\right)}.
\end{equation}
If $\lambda < \frac 1 2 $ we shift to the vertical line Re$(\alpha) = -1/5$, pick up a residue at $\alpha = 0$, and bound the remaining contour by $O\left(X^{\left(-\frac 1 5\right)\left(1-2 \lambda \right)}\right)$. Since
\begin{equation}
\zeta(1-2\alpha) = -\frac{1}{2\alpha} + \gamma_{0}+h.o.t.,
\end{equation}
the residue is given by
\begin{align}
\textnormal{Res}(f'_{(\ref{4th})},0) &=-\frac{1}{2}\tilde{\Phi}\left(\frac 1 2\right),
\end{align}
where we make use of Lemma \ref{A is 1}. Since
\begin{equation}
2 \pi i \cdot \left(-\frac{1}{2}\tilde{\Phi}\left(\frac 1 2\right)\right)\cdot \pi i \tilde{\Phi}\left(\frac 1 2\right)=\pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2},
\end{equation}
it follows that
\begin{equation}
I_{(\ref{4th})} = \left\{
\begin{array}{l l}
\pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2} +O\left(X^{\left(-\frac 1 5 \right)\left(1-2 \lambda \right)}\right)& \text{ if } \lambda < \frac 1 2 \\
O\left(X^{\left(-\frac{1}{5}\right)\left(2\lambda-1 \right)}\right) & \text{ if } \lambda > \frac 1 2.
\end{array} \right.
\end{equation}
Upon including the analogous contribution from the integral over $-\frac{\zeta'}{\zeta}(1+2\alpha)$ in $(\ref{3rd ratios piece})$, we conclude that the combined contribution from these two symmetric pieces is equal to
\begin{equation}\label{3rd; 1}
\boxed{2 \cdot I_{(\ref{4th})} = \left\{
\begin{array}{l l}
2\pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2} +O\left(X^{\left(-\frac 1 5 \right)\left(1-2 \lambda \right)}\right)& \text{ if } \lambda < \frac 1 2 \\
O\left(X^{\left(-\frac{1}{5}\right)\left(2 \lambda-1 \right)}\right) & \text{ if } \lambda > \frac 1 2.
\end{array}\right.}
\end{equation}
\subsection{Computing $-\frac{\zeta'}{ \zeta}(1-\alpha+\beta)$ and $-\frac{\zeta'}{ \zeta}(1+\alpha-\beta)$}\label{5th}
In this subsection we assume that $0<\textnormal{Re}(\alpha) < \textnormal{Re}(\beta) = \epsilon'$. The integral that we are interested in computing is
\begin{align}
I_{(\ref{5th})}&:=-\int_{(\epsilon)}\frac{\zeta(1-2 \alpha)}{(1-2\alpha) }\tilde{\Phi}\left(\frac 1 2+\alpha\right)\left(\frac \pi 2\right)^{2\alpha}X^{\alpha(1-2\lambda)}\bigg (\int_{(\epsilon')}f_{(\ref{5th})}(\beta)~d\beta\bigg ) ~d\alpha,
\end{align}
where
\begin{equation}
f_{(\ref{5th})}(\beta) = \mathcal{A}(-\alpha,\beta,\alpha,\beta)\frac{\zeta'}{ \zeta}(1-\alpha+\beta)\tilde{\Phi}\left(\frac 1 2+\beta\right)X^{\beta}.
\end{equation}
Recalling that
\begin{equation}
\frac{\zeta'}{\zeta}(1-\alpha+\beta) = \frac{1}{\alpha-\beta} + \gamma_{0} + h.o.t.,
\end{equation}
we find that $f_{(\ref{5th})}$ has a simple pole at $\beta = \alpha$. Under the assumption that $0<\textnormal{Re}(\alpha) < \textnormal{Re}(\beta) = \epsilon'$, this pole is picked up upon shifting the contour to the line Re$(\beta)=-1/5$, and the residue is
\begin{align}
\textnormal{Res}(f_{(\ref{5th})},\alpha) &=-\mathcal{A}(-\alpha,\alpha,\alpha,\alpha)\tilde{\Phi}\left(\frac 1 2+\alpha \right)X^{\alpha}.
\end{align}
It follows that
\begin{align}
\begin{split}
I_{(\ref{5th})}&=-\int_{(\epsilon)}\frac{\zeta(1-2 \alpha)}{(1-2\alpha)}\tilde{\Phi}\left(\frac 1 2+\alpha\right)\left(\frac{\pi}{2}\right)^{2\alpha}X^{\alpha(1-2\lambda)}\bigg (-2 \pi i\cdot \textnormal{Res}(f_{(\ref{5th})},\alpha)+ O\left(X^{-\frac 1 5}\right)\bigg ) ~d\alpha\\
&= 2 \pi i \int_{(\epsilon)}f'_{(\ref{5th})}(\alpha) ~d\alpha+ O\left(X^{-\frac 1 5}\right),
\end{split}
\end{align}
where
\begin{equation}
f'_{(\ref{5th})}(\alpha) = \mathcal{A}(-\alpha,\alpha,\alpha,\alpha)\frac{ \zeta(1-2 \alpha)}{(1-2\alpha) }\left(\frac{\pi}{2}\right)^{2\alpha}X^{2\alpha(1-\lambda)}\tilde{\Phi}\left(\frac 1 2+\alpha\right)\tilde{\Phi}\left(\frac 1 2+\alpha \right).
\end{equation}
If $\lambda >1$, we shift to the vertical line Re$(\alpha) = 1/5$, and bound
\begin{align}
\int_{(\epsilon)}f'_{(\ref{5th})}(\alpha)~d\alpha &= O\left(X^{\left(\frac{1}{5}\right)\left(2-2\lambda\right)}\right),
\end{align}
while if $\lambda < 1$, we shift to the vertical line Re$(\alpha) = -\frac{1}{5}$, pick up a pole at $\alpha = 0$, and bound the remaining contour by $O\left(X^{\left(-1/5 \right)\left(2-2 \lambda \right)}\right)$. Since
\begin{align}
\textnormal{Res}(f'_{(\ref{5th})},0) &= - \frac{1}{2}\tilde{\Phi}\left(\frac 1 2\right)\tilde{\Phi}\left(\frac 1 2 \right),
\end{align}
we conclude that
\begin{equation}\label{3rd; 2}
\boxed{I_{(\ref{5th})} = \left\{
\begin{array}{l l}
2\pi^2\left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2} +O\left(X^{\left(-\frac 2 5\right)\left(1- \lambda \right)}\right)& \text{ if } \lambda < 1 \\
O\left(X^{\left(-\frac{2}{5}\right)\left(\lambda-1 \right)}\right) & \text{ if } \lambda > 1.
\end{array} \right.}
\end{equation}
Lastly, we consider the integral
\begin{align}
I_{(\ref{5th},\textnormal{sym})}&:=-\int_{(\epsilon')}\frac{\zeta(1-2 \beta)}{(1-2\beta) }\tilde{\Phi}\left(\frac 1 2+\beta\right)\left(\frac \pi 2\right)^{2\beta}X^{\beta(1-2\lambda)}\bigg (\int_{(\epsilon)}f_{(\ref{5th},\textnormal{sym})}(\alpha)~d\alpha\bigg ) ~d\beta,
\end{align}
where
\begin{equation}
f_{(\ref{5th},\textnormal{sym})}(\alpha) = \mathcal{A}(\alpha,-\beta,\alpha,\beta)\frac{\zeta'}{ \zeta}(1+\alpha-\beta)\tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha},
\end{equation}
which is the quantity symmetric to $I_{(\ref{5th})}$ coming from (\ref{3rd ratios piece}) above. Under the assumption that $0<\textnormal{Re}(\alpha) < \textnormal{Re}(\beta)$, the inner integral is holomorphic in the region $-\frac{1}{5}<\textnormal{Re}(\alpha)<\epsilon$, from which it follows that
\begin{equation}\label{3rd; 2sym}
\boxed{I_{(\ref{5th},\textnormal{sym})} = O\left(X^{-\frac{1}{5}}\right).}
\end{equation}
Note that had we instead assumed $0<\textnormal{Re}(\beta) <\textnormal{Re}(\alpha)<1/5$, we would obtain a significant contribution from $I_{(\ref{5th},\textnormal{sym})}$ and a negligible contribution from $I_{(\ref{5th})}$. In this way, the symmetry between $\alpha$ and $\beta$ is preserved.
\subsection{Computing $\frac{\zeta'}{ \zeta}(1+\alpha+\beta)$}\label{6th}
Next, we compute
\begin{align}
\begin{split}
I_{(\ref{6th})}&:=\int_{(\epsilon)}\left(\frac{\pi}{2}\right)^{2\alpha}\frac{ \zeta(1-2 \alpha)}{(1-2\alpha)}\tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha(1-2\lambda)}\left(\int_{(\epsilon')}f_{(\ref{6th})}(\beta)d\beta\right)~d\alpha,
\end{split}
\end{align}
where
\begin{equation}
f_{(\ref{6th})}(\beta) = \mathcal{A}(-\alpha,\beta,\alpha,\beta)\frac{\zeta'}{ \zeta}(1+\alpha+\beta)\tilde{\Phi}\left(\frac 1 2+\beta\right)X^{\beta}.
\end{equation}
Since
\begin{equation}
\frac{\zeta'}{\zeta}(1+\alpha+\beta) = -\frac{1}{\alpha+\beta} + \gamma_{0} + h.o.t.,
\end{equation}
the residue at $\beta = -\alpha$ is
\begin{align}
\textnormal{Res}(f_{(\ref{6th})},-\alpha)&= - \mathcal{A}(-\alpha,-\alpha,\alpha,-\alpha)\tilde{\Phi}\left(\frac 1 2-\alpha\right)X^{-\alpha}.
\end{align}
It follows that
\begin{equation}
\int_{(\epsilon')}f_{(\ref{6th})}(\beta)d\beta = - 2\pi i \mathcal{A}(-\alpha,-\alpha,\alpha,-\alpha)\tilde{\Phi}\left(\frac 1 2-\alpha\right)X^{-\alpha}+O\left(X^{-\frac 1 5}\right),
\end{equation}
and thus upon shifting the line of integration to Re$(\alpha) = 1/5$, we conclude that
\begin{align}\label{3rd; 3rd}
\begin{split}
I_{(\ref{6th})}&=\int_{(\epsilon)}\left(\frac{\pi}{2}\right)^{2\alpha}\frac{\zeta(1-2 \alpha)}{(1-2\alpha)}\tilde{\Phi}\left(\frac 1 2+\alpha\right)X^{\alpha(1-2\lambda)}\left(\textnormal{Res}(f_{(\ref{6th})},-\alpha)+O\left(X^{-\frac 1 5}\right)\right)~d\alpha\\
&=O\left(X^{-\frac {2}{5}\lambda}\right).
\end{split}
\end{align}
Lemma \ref{second total} then follows upon combining the computations in (\ref{3rd; 1}), (\ref{3rd; 2}), (\ref{3rd; 2sym}), and (\ref{3rd; 3rd}).
\section{Proof of Lemma \ref{fourth total}}\label{7th}
Since
\begin{align}\label{4th integral}
&\frac{\partial}{\partial \alpha} \frac{\partial}{\partial\beta}\bigg( \frac{1}{1-2(\alpha+\beta)} \left(\frac{\pi}{2}\right)^{2(\alpha+\beta)} G(-\alpha,-\beta,\gamma,\delta)\bigg )\bigg \vert_{(\alpha,\beta,\alpha,\beta)}= \\
&\frac{\zeta(1-2\alpha)\zeta(1-2\beta)}{(1-2(\alpha+\beta))}\left(\frac \pi 2\right)^{2(\alpha+\beta)}\Bigg(\frac{\zeta(1-\alpha-\beta)\zeta(1+\alpha+\beta)}{\zeta(1+\alpha-\beta)\zeta(1-\alpha+\beta)}\Bigg)\mathcal{A}(-\alpha,-\beta,\alpha,\beta),\nonumber
\end{align}
we write
\begin{align}\label{outer 4th}
I_{4}&=\int_{(\epsilon')}\zeta(1-2\beta)\tilde{\Phi}\left(\frac 1 2+\beta \right)\left(\frac{\pi}{2}\right)^{2\beta}X^{\beta(1 - 2\lambda)} \bigg(\int_{(\epsilon)}f_{4}(\alpha) ~d\alpha\bigg)~d\beta,
\end{align}
where
\begin{align}
\begin{split}
f_{4}(\alpha) &= \mathcal{A}(-\alpha,-\beta,\alpha,\beta)\left(\frac{\pi}{2}\right)^{2\alpha}X^{\alpha(1 - 2\lambda)}\frac{ \zeta(1-2 \alpha)}{(1-2(\alpha+\beta))}\\
&\phantom{=}\times \Bigg(\frac{\zeta(1-\alpha-\beta)\zeta(1+\alpha+\beta)}{\zeta(1+\alpha-\beta)\zeta(1-\alpha+\beta)}\Bigg)\tilde{\Phi}\left(\frac 1 2+\alpha\right).
\end{split}
\end{align}
Suppose $\lambda > 1/2$. We then shift to the vertical line Re$(\alpha) = 1/5$, so that
\begin{align}
\begin{split}
\int_{(\epsilon)}f_{4}(\alpha) ~d\alpha &=i \left(\frac{\pi^2 X^{1-2\lambda}}{4}\right)^{\frac 1 5}\int_{\mathbb{R}}\mathcal{A}\left(-\frac 1 5-it,-\beta,\frac 1 5+it,\beta\right)\left(\frac{\pi^2 X^{1-2\lambda}}{4}\right)^{it}\\
&\phantom{=}\times \frac{\zeta{(\frac 3 5-2it)}}{(\frac 3 5-2it-2\beta)}\Bigg(\frac{\zeta(\frac 4 5-it-\beta)\zeta\left(\frac 6 5+it+\beta\right)}{\zeta\left(\frac 6 5+it-\beta\right)\zeta\left(\frac 4 5-it+\beta\right)}\Bigg)\tilde{\Phi}\left(\frac {7}{10}+it\right)~dt.
\end{split}
\end{align}
By the decay properties of $\Phi$, the integral is bounded by a constant (depending on $\beta$) that is independent of $\lambda$. It follows that
\begin{equation}
\int_{(\epsilon)}f_{4}(\alpha) ~d\alpha = O_{\beta}\left(X^{\frac{1}{5}\left(1-2\lambda\right)}\right) = O\left(g(\beta)\cdot X^{\frac{1}{5}\left(1-2\lambda\right)}\right),
\end{equation}
where $g$ does not grow too rapidly as a function of $\beta$.
Inserting this back into the outer integral, and shifting the line of integration to Re$(\beta) = 1/5$, we obtain
\begin{align}
I_{4} &\ll X^{\frac{1}{5}\left(1-2\lambda\right)} \cdot \int_{(\epsilon')}g(\beta)\cdot\zeta(1-2\beta)\tilde{\Phi}\left(\frac 1 2+\beta \right)\left(\frac{\pi}{2}\right)^{2\beta}X^{\beta(1 - 2\lambda)} d\beta\ll X^{\frac{2}{5}\left(1-2\lambda\right)}.
\end{align}
Next, suppose $\lambda < 1/2$. We shift the line of integration to $\textnormal{Re}(\alpha) = - 1/5$, and pick up a simple pole at $\alpha=0$ and a double pole at $\alpha = - \beta$. By an application of Lemma \ref{countour bound}, we then find
\begin{equation}
\int_{(\epsilon)} f_{4}(\alpha)d\alpha =2\pi i\cdot \bigg(\textnormal{Res}(f_{4},0)+\textnormal{Res}(f_{4},-\beta)\bigg)+O\left( X^{-\frac{1}{5}(1-2\lambda)}\right).
\end{equation}
It remains to compute these two residue contributions.
\subsection{Simple Pole at $\alpha = 0$:}\label{8th}
Note that $f_{4}$ has a simple pole at $\alpha = 0$ with residue
\begin{align}
\textnormal{Res}(f_{4},0) &= -\frac{1}{2}\mathcal{A}(0,-\beta,0,\beta) \frac{1}{(1-2\beta)}\tilde{\Phi}\left(\frac 1 2\right),
\end{align}
which contributes when $\lambda < 1/2$. Inserting this into the outer integral, we find that
\begin{align}\label{outer simple 4th}
\begin{split}
&\int_{(\epsilon')}\zeta(1-2\beta)\tilde{\Phi}\left(\frac 1 2+\beta \right)\left(\frac{\pi}{2}\right)^{2\beta}X^{\beta(1 - 2\lambda)}\left(-\pi i \mathcal{A}(0,-\beta,0,\beta)\frac{1}{(1-2\beta)}\tilde{\Phi}\left(\frac 1 2\right)\right)d\beta\\
&=-\pi i\tilde{\Phi}\left(\frac 1 2\right)\int_{(\epsilon')}f_{(\ref{8th})}(\beta)d\beta,
\end{split}
\end{align}
where
\begin{align}
f_{(\ref{8th})}(\beta) &= \frac{\zeta(1-2\beta)}{(1-2\beta)}\tilde{\Phi}\left(\frac 1 2+\beta \right)\left(\frac{\pi}{2}\right)^{2\beta}X^{\beta(1 - 2\lambda)}\mathcal{A}(0,-\beta,0,\beta).
\end{align}
The integrand in (\ref{outer simple 4th}) has a simple pole at $\beta = 0$ with residue
\begin{align}
\textnormal{Res}(f_{(\ref{8th})},0) &=-\frac{1}{2}\tilde{\Phi}\left(\frac 1 2\right),
\end{align}
so that the total contribution from this pole is
\begin{equation}\label{simple pole contribution}
\boxed{-\pi^2 \left(\tilde{\Phi}\left(\frac 1 2\right)\right)^{2}+ O\left(X^{-\frac 1 5(1-2\lambda)}\right).}
\end{equation}
\subsection{Double Pole at $\alpha = -\beta$:}
To compute the residue of $f_{4}$ at the point $\alpha = - \beta$, we split $f_{4}(\alpha)$ into three components.\\
\\
i) First, define
\begin{equation}
h(\alpha):= \mathcal{A}(-\alpha,-\beta,\alpha,\beta)\frac{\zeta(1-2 \alpha)}{\zeta(1+\alpha-\beta)\zeta(1-\alpha+\beta)}\frac{\tilde{\Phi}\left(\frac 1 2+\alpha\right)}{(1-2(\alpha+\beta))}.
\end{equation}
Since $h(\alpha)$ is holomorphic at $\alpha=-\beta$, we may expand it as a power series of the form
\begin{align}
h(\alpha) &=h(-\beta)+ h^{(1)}(-\beta)(\alpha+\beta)+h.o.t.
\end{align}
ii) Next, we expand
\begin{equation}
\left(\frac \pi 2 \right)^{2\alpha}\left( X^{1 - 2\lambda}\right)^{\alpha} = e^{\alpha (\log (\frac {\pi^2}{4}) +(1 - 2\lambda)\log X)}=e^{\alpha\cdot C}
\end{equation}
about the point $\alpha = -\beta$, where
\begin{equation}
C := \log \left(\frac {\pi^2}{4}\right) +(1 - 2\lambda)\log X.
\end{equation}
The expansion is given as
\begin{equation}
e^{\alpha\cdot C}=e^{-\beta\cdot C}+C\cdot e^{-\beta\cdot C}(\alpha+\beta) +h.o.t.
\end{equation}
iii) Finally, we note that
\begin{equation}
\zeta(1-\alpha-\beta)\zeta(1+\alpha+\beta) = \left(-\frac{1}{\alpha+\beta} + \gamma_{0}+h.o.t.\right)\left(\frac{1}{\alpha+\beta} + \gamma_{0}+h.o.t.\right).
\end{equation}
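For clarity, set $u:=\alpha+\beta$ and note that the pieces above multiply out as
\begin{align*}
\zeta(1-u)\,\zeta(1+u) &= \left(-\frac{1}{u}+\gamma_{0}+h.o.t.\right)\left(\frac{1}{u}+\gamma_{0}+h.o.t.\right) = -\frac{1}{u^{2}}+O(1),\\
h(\alpha)\,e^{\alpha C} &= e^{-\beta C}h(-\beta)+e^{-\beta C}\left(h^{(1)}(-\beta)+C\,h(-\beta)\right)u+h.o.t.,
\end{align*}
so the only contribution to the coefficient of $u^{-1}$ in $f_{4}(\alpha)$ comes from the double pole $-u^{-2}$ multiplying the linear term of $h(\alpha)e^{\alpha C}$.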
The total residue is then found to be the full coefficient of $(\alpha+\beta)^{-1}$, i.e.,
\begin{align}
\begin{split}
\textnormal{Res}(f_{4},-\beta) &=-C\cdot e^{-\beta\cdot C}h(-\beta) -e^{-\beta\cdot C}h^{(1)}(-\beta).
\end{split}
\end{align}
We now compute these two contributions separately.
\subsubsection{First Piece}
The total contribution from the first piece is
\begin{align}
&-2\pi i \cdot \left(\log \left(\frac {\pi^2}{4}\right) +(1 - 2\lambda)\log X\right)\left(\frac{\pi}{2}\right)^{-2\beta} X^{-\beta(1 - 2\lambda)}\cdot \frac{\tilde{\Phi}\left(\frac 1 2-\beta\right)}{\zeta(1-2\beta)},
\end{align}
where we note that $\mathcal{A}(\beta,-\beta,-\beta,\beta)= 1$. Inserting this into the outer integral of (\ref{outer 4th}), we find that the main contribution of this piece is
\begin{align}
-2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)\cdot \left(\log \left(\frac {\pi^2}{4}\right) +(1 - 2\lambda)\log X\right)~d\beta,
\end{align}
i.e., the total contribution is given by
\begin{equation}\label{first piece}
\boxed{C_{\Phi}\left(\log \left(\frac {\pi^2}{4}\right) +(1 - 2\lambda)\log X\right)+ O\left(X^{-\frac 1 5(1-2\lambda)}\right).}
\end{equation}
\subsubsection{Second Piece}
One directly computes
\begin{align}
\begin{split}
h^{(1)}(-\beta)&=\frac{1}{\zeta(1-2\beta)}\Bigg(\tilde{\Phi}'\left(\frac 1 2-\beta\right)+\tilde{\Phi}\left(\frac 1 2-\beta\right)\bigg(2-\frac{\zeta'}{\zeta}(1-2\beta) -\frac{\zeta'}{\zeta}(1+2\beta)\\
&\hspace{5mm}+A_{\beta}'(-\beta)+\frac{L'}{L}(1-2
\beta)+\frac{L'}{L}(1+2\beta)\bigg)\Bigg),
\end{split}
\end{align}
upon noting that $A_{\beta}(-\beta)=A(\beta,-\beta,-\beta,\beta)=1.$ Inserting this expression back into the outer integral of (\ref{outer 4th}), we find that the total contribution from this piece is
\begin{equation}\label{second piece}
\boxed{2 C_{\Phi}+C_{\Phi,\zeta}-C_{\Phi,L} +C'_\Phi -A_{\Phi}'+O\left(X^{-\frac 1 5(1-2\lambda)}\right),}
\end{equation}
where
\begin{equation}\label{zeta constant}
C_{\Phi,\zeta} :=2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)\left(\frac{\zeta'}{\zeta}(1-2\beta)+\frac{\zeta'}{\zeta}(1+2\beta)\right)~d\beta,
\end{equation}
\begin{equation}\label{L constant}
C_{\Phi,L} := 2\pi i \int_{(\epsilon')}\tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)\left(\frac{L'}{L}(1+2\beta)+\frac{L'}{L}(1-2\beta)\right)~d\beta,
\end{equation}
and
\begin{equation}\label{A constant}
A_{\Phi}' := - 4\pi i \int_{(\epsilon')} \tilde{\Phi}\left(\frac 1 2+\beta \right)\tilde{\Phi}\left(\frac 1 2-\beta \right)\Bigg(\sum_{\substack{p\equiv 3\,(4)\\ p \textnormal{ prime}}}\frac{\left(p^{2+8\beta}+p^2-2 p^{4\beta}\right) \log p}{p^{2+8\beta}+p^2-p^{4\beta}-p^{4+4\beta}}\Bigg)d\beta,
\end{equation}
where we have made use of Lemma \ref{A integral}. Lemma \ref{fourth total} then follows upon combining the results of (\ref{simple pole contribution}), (\ref{first piece}), and (\ref{second piece}).
\section{Introduction}\label{sec:introduction01}
As society progresses and technology develops, the power system becomes more and more complicated, and human requirements and expectations of the system have been raised step by step. The goals of power system improvement have evolved from successful energy delivery, to safe delivery, to expansion for larger coverage and capacity, to advancements in stability and resilience, to boosting energy efficiency, and to enhancing social welfare. Furthermore, the installation and development of renewable energy resources at most universities and in society at large is speeding up, because the falling prices of solar panels and wind turbines have put renewable energy within reach of most potential buyers. Meanwhile, integrating renewable energy requires the power grid to add modularity and improve adaptability, which may lower the system's robustness and increase the uncertainty in the balance between demand and generation. The integration of renewable energy will also increase the fluctuation of real-time pricing (RTP) in the future smart grid, which makes the prediction and planning of the distribution power system more complex. For end users who consume bulk power, such as universities, price is one of the crucial factors when it comes to cost saving and load reduction \cite{SGresidential}.
With the introduction of smart meters and nodal prices, the utility price varies considerably across buses within the same distribution power grid. Therefore, building managers set their own control objectives according to the corresponding distribution locational marginal prices \cite{hao2016locational}. However, the nodal price at a bus does not depend on that bus alone; it is influenced by almost every bus in the same distribution power grid. Each building manager therefore has to take the influence of the other buses into consideration before making a decision. Within the distribution power grid, the building managers can thus play a game to find the optimal strategy for controlling their HVAC systems. However, the outcome of such a game depends solely on the utility cost. For large commercial and academic buildings, this kind of control and planning strategy needs to be improved to ensure that the control strategy does not affect the working efficiency of indoor occupants.
The first drawback of the aforementioned methodology is that it cannot keep indoor working productivity within a reasonable range. The second shortcoming is that the algorithm is not sensitive to constantly changing environmental factors such as temperature, power system conditions, and DLMP, so the methodology cannot keep the model up to date. To cope with the first problem, we introduce the social cost. The formulation of the social cost comprises two major parts: the utility cost, which is calculated from the end-use energy and the corresponding DLMPs, and the cost of work productivity, which is determined by the cost of performance reduction and the number of working personnel \cite{hao2016distribution,jiang2016spat123ial,lu1}. As the number of buses in the distribution power system and the number of indoor temperature control strategies grow, the computational complexity of a single game escalates dramatically. Therefore, in this manuscript, we implement Markov decision process based multi-agent reinforcement learning to address this problem.
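As a concrete sketch of this two-part decomposition, the social cost of one building could be computed as below. The linear productivity-loss model, the wage figure and all parameter names are hypothetical placeholders for illustration, not the calibrated models used later in the paper.

```python
# Hypothetical sketch of the two-part social cost described above:
# social cost = utility cost (energy x DLMP) + work-productivity cost.

def utility_cost(energy_kwh, dlmp_per_kwh):
    """End-use energy priced at the nodal (DLMP) rate."""
    return energy_kwh * dlmp_per_kwh

def productivity_cost(indoor_temp_c, n_occupants, hourly_wage=30.0,
                      comfort_temp_c=22.0, loss_per_degree=0.01):
    """Assumed linear model: each degree away from a comfort setpoint
    costs a small fraction of each occupant's hourly wage."""
    loss_fraction = min(1.0, loss_per_degree * abs(indoor_temp_c - comfort_temp_c))
    return loss_fraction * n_occupants * hourly_wage

def social_cost(energy_kwh, dlmp_per_kwh, indoor_temp_c, n_occupants):
    return (utility_cost(energy_kwh, dlmp_per_kwh)
            + productivity_cost(indoor_temp_c, n_occupants))

# At the comfort setpoint only the utility term remains.
print(social_cost(500.0, 0.12, 22.0, 40))  # -> 60.0
```

Moving the setpoint away from comfort trades a smaller utility term against a larger productivity term; balancing the two is exactly the scheduling problem addressed in this paper.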
The following sections briefly review the related research fields. Sec.~\ref{sec:introduction021} defines the meaning of ``social energy'' and introduces the computation paradigm used in this paper. Sec.~\ref{sec:introduction02} reviews the past and future of distribution locational marginal pricing. Sec.~\ref{sec:introduction03} focuses on the literature on game theory and reinforcement learning in power systems. More detailed discussion and reviews can be found in the later chapters.
\section{Social Energy}\label{sec:introduction021}
The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define ``social energy'' as a complex socio-technical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense inter-system and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning, sensing and communication in the Internet of Things, and massive high performance computing and extreme-scale data analytics, enables substantial advances in socio-technical system optimization, scheduling, control and management. We provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology for establishing and investigating social energy is proposed, based on the ACP approach, i.e., ``artificial systems'' (A), ``computational experiments'' (C) and ``parallel execution'' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept.
\section{Current Research on Distribution Locational Marginal Pricing}\label{sec:introduction02}
The Smart Grid improves the existing system by accommodating bi-directional flow of both electrical power and real-time communication between consumers and utility operators. Changes to the generation, transmission and delivery infrastructure are supervised, controlled and coordinated by grid operators. Currently, energy efficiency concerns and the emergence of new power loads have further increased the need for new demand side management (DSM) methods. Since the 1980s, DSM has been used as a load shifting tool \cite{gellings1985concept,mohsenian2010optimal,samadi2010optimal}, and real-time pricing (RTP) is considered one of the most popular methods for motivating customers to manage their energy consumption wisely and more efficiently.
In 2010, the University of Denver (DU) facilities spent \$$3.7\,M$ on campus electricity measured from 78 building meters, and 7 buildings incurred additional demand-rate $kW$ ratchet charges. Later in 2011, DU facilities planned to deploy additional methods beyond existing efforts to further lower peak demand, including distributed generation, demand response, proactive heating and cooling, managed load shedding and lighting controls. Driven by economic goals and regulated by federal laws, the DU campus is trying to utilize DSM to reduce peak loads and decrease utility scale in order to control demand charges and cut down $CO_2$ emissions. Based on these features, we implement and study locational marginal pricing (LMP) in the campus power system to generate nodal prices that help the facilities reduce peak loads, balance power supply and demand, and save on electricity bills. The distribution locational marginal price (DLMP) is adapted from LMP to estimate the real-time cost of delivering power to every node in the distribution system and to provide compensation for renewable energy generation.
Due to the characteristics of LMP, DLMP-based RTP can help the electricity market evolve into a more efficient one with less volatility, and the enhanced market efficiency will lead to social welfare gains for both energy providers and consumers. The smart grid and DLMP will realize DSM and improve elasticity on the demand side, offering customers lower energy costs and providing the market with increased social welfare \cite{borenstein2009time,schweppe2013spot,lu3}. DLMP has the advantage of allowing the utility to charge the true cost of electric service to the individual customer rather than mass cross-subsidization \cite{kiefer2011deregulated}. Optimal power flow (OPF) based methods, price area congestion control methods and transaction-based methods \cite{christie2000transmission,lu4} are three techniques for solving congestion management problems. The OPF-based congestion management method is the most precise and efficient, and it is the foundation of centralized optimization. In \cite{wang2007computational,hao2014Asilomar}, several approaches are demonstrated for handling congestion changes in OPF-based calculations.
DLMP is adopted in this paper for generating real-time prices using the power distribution system model of the DU campus, which is based on the real-world DU utility map. Real world real-time load data is used in the simulation.
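To make the congestion mechanism behind nodal prices concrete, consider a deliberately tiny two-bus sketch. All capacities and marginal costs below are invented for illustration: a cheap remote generator serves bus 2 over a limited line, and once the line is full an expensive local unit becomes marginal, which separates the nodal price at bus 2 from the remote price.

```python
# Two-bus toy illustrating how congestion separates nodal prices (the LMP
# idea underlying DLMP). All numbers are made-up illustrative values.

def dispatch_cost(load_bus2, line_cap=50.0, c_remote=20.0, c_local=50.0):
    """Least-cost dispatch ($/h) for a given bus-2 load (MW)."""
    remote = min(load_bus2, line_cap)   # cheap import, capped by the line
    local = load_bus2 - remote          # residual served by the local unit
    return remote * c_remote + local * c_local

def lmp_bus2(load_bus2, eps=1.0):
    """Nodal price = marginal cost of serving one more MW at bus 2."""
    return (dispatch_cost(load_bus2 + eps) - dispatch_cost(load_bus2)) / eps

print(lmp_bus2(40.0))  # uncongested: the cheap remote unit sets the price -> 20.0
print(lmp_bus2(60.0))  # congested: the expensive local unit is marginal -> 50.0
```

The same marginal-cost logic, applied bus by bus through an OPF, is what produces the DLMP signals used in this paper.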
\section{Current Research on Demand Side Management, Game Theory and Reinforcement Learning in Power System}\label{sec:introduction03}
Game theory has been implemented in power systems for a relatively long time, e.g., the double auction price mechanism \cite{C4MATPOWER}, and it has been proven that game theory can be applied in abundant scenarios to solve energy-related problems. In \cite{SGEVcharging}, the authors present a novel method for energy exchange between electric vehicles (EVs) and the power grid based on a Stackelberg game. However, for academic and commercial buildings, the impact of EVs is negligible in terms of the amount of load they consume. \cite{SGpricingmech} concentrates on the design and implementation of game theory in RTP and load stability for the future power grid, and introduces social welfare for maximizing supplier/consumer profits; still, the influence of RTP itself was not studied. Some researchers have conducted experiments on the relationship between RTP and users' activities \cite{SGjhgPrc1,SGjhgPrc2,jiang2016spat123ial}, but the price mechanisms in those articles, such as a common flat rate or a quadratic cost function, will not fit the future smart grid. Energy scheduling and demand management will benefit the smart grid in many aspects, such as large-scale load shifting, congestion mitigation, and reduction of power system transient stability issues.
Buildings account for $30\%-45\%$ of global energy consumption \cite{uk2011carbon,jiang2016short}, and academic and commercial buildings are the types of buildings that consume the most energy within this sector \cite{govregulation1,jiang2016sh123ort,govregulation2,jiang2017sh234ort}. The continuously growing energy market and $CO_2$ emissions have made the reduction of greenhouse gases and the improvement of energy efficiency major concerns of energy policies in most countries \cite{whyHVAC1,jiang2015pmu,gu2018multi123}. Therefore, demand side management has become a very popular and important topic as people pursue higher benefits, e.g., economic profits and social benefits \cite{mohagheghi2010demand, albadi2008summary, vardakas2015survey}. Heating, ventilation and air-conditioning (HVAC) systems are universally deployed in large buildings such as academic buildings, shopping centers and commercial buildings \cite{whyHVAC2,zhu2018hierarchical,zhu2017Graphical}. The majority of research works focus on energy conservation, profit optimization, or pollution elimination, and few works take individual human effects into consideration \cite{brooks2010demand, yusta2007optimal, dong2012distributed, lu2013design, hermanns2013demand, mohsenian2010}. HVAC systems constitute the largest energy end use and impact the cost bill dramatically \cite{whyHVAC1}, and ineffective operation and settings of HVAC systems can lead to remarkable waste of energy, poor indoor air quality and environmental degradation \cite{indoorair,li2017consensus,li2017naps}. Since the ultimate goal of power system development and improvement is to facilitate human life, the effects of human behavior and the occupants' experience of the services should not be neglected in demand side management and HVAC system scheduling.
The biggest obstacle to considering individual human effects in demand side management and scheduling has been the extremely high uncertainty and variability of human behavior, which made it almost impossible to establish a model for computing. Nowadays, big data and large-scale data-driven modeling techniques can help to find a feasible solution.
In this paper, social cost, which includes the electricity consumption costs and human working efficiency costs, is introduced as an advanced concept to address the importance of both energy consumption and human experience in power system management. The optimization of social cost is designated as the objective of the HVAC system scheduling and management in this paper. Inspired by the methodology in \cite{gamemethod,dai2015werultra,dai2013wefimage,qiu2015computational,qiu2018random}, we propose a game-theoretic solution by casting the aforementioned problem into a game-theoretic formulation. Our proposed approach solves a finite n-person non-cooperative game in which the players are the building managers in the same local smart grid, the strategies are the HVAC indoor temperature settings, and the payoffs are the social costs determined by the personnel inside those buildings and their indoor working productivity. It should be noted that we introduce distribution locational marginal pricing (DLMP) to strengthen and reflect the relationship between a player's payoff, the other players' actions and the power system state. To illustrate the proposed methodology and mechanism, we embed the approach into an interactive real-artificial parallel computing technique. For implementing our methodology and the artificial-real management/computing system, human efficiency modeling, numerical-experiment-based smart building modeling, distribution-level real-time pricing and the social response to pricing signals are studied. The details of the aforementioned techniques are depicted in Chapter~\ref{Chap5}.
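The structure of this game can be sketched with a minimal best-response iteration, reduced to two buildings for brevity. The cost coefficients and the DLMP-style price coupling below are invented illustrative numbers, not the paper's calibrated model.

```python
# Best-response sketch of a finite non-cooperative game between building
# managers: strategies are discrete HVAC setpoints, payoffs are social costs.
# All coefficients (price coupling, comfort weight) are hypothetical.

SETPOINTS = range(20, 27)  # candidate indoor temperatures in deg C

def player_cost(own_temp, other_temp):
    # DLMP proxy: the nodal price rises with the other building's cooling load.
    price = 1.0 + 0.1 * (26 - other_temp)
    energy = (26 - own_temp) * 10        # own cooling load (hypothetical kWh)
    comfort = 5 * (own_temp - 22) ** 2   # productivity-loss proxy
    return energy * price + comfort

def best_response(other_temp):
    return min(SETPOINTS, key=lambda t: player_cost(t, other_temp))

def best_response_dynamics(start=(26, 26), max_rounds=50):
    a, b = start
    for _ in range(max_rounds):
        na = best_response(b)
        nb = best_response(na)
        if (na, nb) == (a, b):
            break
        a, b = na, nb
    return a, b

print(best_response_dynamics())  # -> (23, 23)
```

At the fixed point neither manager can lower its own social cost unilaterally, i.e., the setpoint pair is a pure-strategy Nash equilibrium of this toy game.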
Although game theory can be a good solution for most of these problems, it can still take a long time to solve realistic cases, and the time grows exponentially for a large distribution power grid with dozens of control strategies. For our specific optimization objective, we therefore need a more advanced algorithm that can solve the proposed objective function faster. To realize this goal, an algorithm needs to be capable of distributed calculation, self-learning, and solving discontinuous problems. Hence, multi-agent reinforcement learning comes into our scope and fits our optimization and control needs. In this paper, a Markov decision process (MDP) based multi-agent reinforcement learning methodology is implemented to optimize the campus social cost.
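The MDP-based learning step can be illustrated with a minimal tabular Q-learning sketch. The four-state chain environment below is a stand-in invented for illustration (think of stepping a setpoint toward a low-social-cost configuration); it shows only the update rule, not the paper's multi-agent campus model.

```python
# Minimal tabular Q-learning on a toy 4-state chain MDP: the agent earns a
# reward for reaching the terminal "low-cost" state. Illustrative stand-in
# for the MDP-based learning step, not the paper's multi-agent model.
import random

N_STATES, ACTIONS = 4, (0, 1)      # action 0: step down, action 1: step up
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.2

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:                       # explore
                action = rng.choice(ACTIONS)
            else:                                            # exploit
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # One-step Q-learning (Bellman) update.
            q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
greedy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
print(greedy[:3])  # -> [1, 1, 1]: the learned policy heads toward the reward
```

In the multi-agent setting used later, each building manager maintains such a value estimate while the other managers' actions enter through the environment (here, they would shift the reward via the DLMP coupling).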
\section{Architecture of the Paper}
In Chapter 1, the literature of current research is reviewed, which provides the motivation, rationale and background for the paper. In Chapter 2, the computational paradigm of the research is demonstrated, and social energy is introduced. To reflect the cost of energy, Chapter 3 mainly introduces distribution locational marginal pricing and its implementation, demonstrated on the University of Denver campus power system. Chapter 4 illustrates the preliminary methodology used to investigate our objective function. Chapter 5 focuses on the advanced methodology that we implement to improve computational efficiency. The related numerical experiments are conducted on the University of Denver power system, a 57-bus distribution power system. Chapter 6 concludes the report.
\section{Introduction and Motivation for Parallel Computing and Social Energy}
Energy has always been a key element in the development and operation of a society. It is the backbone which supports the prosperity of a modern society. It is the hand that pushes the society to move forward, meanwhile, it is also one of the major limitations and barrier which hinders the pace of social development \cite{hao2017SocialEn}.
\subsection{Grand Social Challenges in Energy and Power System}
The trend of electrification forced by the Second Industrial Revolution eventually triggered an explosion in the demand for energy supply, and set global social development on the basis of fossil fuels. Based on statistics provided by The World Bank, fossil fuels, including coal, petroleum, and natural gas, have accounted for over 80 percent of the world's energy consumption ever since 1990. Although the development of the energy industry constantly propels the development of the entire society, problems, as many can see, have emerged \cite{Brown2015,NY09,Werner2015}. The first is the depletion of fossil fuel resources, which was foreseen even at the early stage of their large-scale utilization. Another obvious problem, which has caught global attention, is the environmental impact and climate change. Pollutants released from fuel combustion, including carbon monoxide, oxides of nitrogen, sulfur oxides, hydrocarbons and particulate matter, are contaminating the air we breathe every day and encroaching on human health. Uncontrolled oil spills, coal mining washing, and acid rain are polluting the water and damaging aquatic ecosystems, while strip mining and closed power plants have left large areas of land fallow and wasted. If environmental pollution affects mainly the areas where fossil fuels are exploited, transported and processed, climate change, mainly global warming, affects the life of all human beings. Ever since the First Industrial Revolution, increased dependence on fossil fuels has resulted in a huge amount of greenhouse gas emissions and has dramatically accelerated the global temperature rise. Melting glaciers, rising sea levels, swallowed coastlines, reduced food production, species extinction, and ecosystem collapse can all become reality if the temperature rise continues to follow the current pattern.
Other problems, such as political or security problems also arose among nations due to the uneven distribution of the fossil fuel sources.
Being aware of the benefits as well as the side effects brought by conventional energy resources, societies worldwide have already started to take action, i.e., reducing their dependence on conventional energy and transitioning to alternative energy resources. Nuclear energy was once thought to be the substitute \cite{IAEA2014}. Although it produced $11$\% of the world's electricity in 2014 \cite{Mycle2014}, its contribution is declining due to the climbing cost and, most importantly, its potential destructive danger, which was exposed to the world through the Chernobyl and Fukushima accidents. Greater expectations now rest on clean renewable energy \cite{Brown2015}, including hydro, solar and wind power, which are inexhaustible and accessible almost anywhere in the world. Many large-scale solar and wind power plants have been established worldwide. Meanwhile, solar and wind energy have been adopted at the commercial and personal consumer level, e.g., small-scale renewable energy production for commercial buildings and residential housing. The above progress, together with the emergence of a new energy utilization form, the electric vehicle, has changed the role of passive end consumers to a more active one as prosumers (producer-consumers) \cite{prosumer}.
The diversity of energy sources and the involvement of social entities are changing the structure of the power system by incorporating distributed generation and storage capabilities into the conventional centralized operation mode, making it much more complicated and much more difficult to operate \cite{Bian2014, Wigan2014}. Efficiency and security are among the most severe concerns. How can an energy system incorporate different energy forms? How can it take advantage of different energy forms to enhance efficiency while eliminating waste and side effects? And how should an energy system be operated to maintain stable performance and ensure secure generation and transmission in case of any type of disturbance from inside or outside the system? The quest for an intelligent system is now more urgent than at any other time in history. Such a system should be able to mine, collect, process, digest and utilize the tremendous information flowing in and between every part and procedure, in order to monitor, manage or even advise itself.
\subsubsection{Physicality of Energy}
All the processes of energy production and consumption take place in the physical space. The major energy resources encompass fossil fuels, e.g., coal, petroleum, and natural gas; alternative and renewable resources, e.g., hydro, solar, wind, biomass and geothermal; and nuclear energy. They are used to provide cooling, heating, illumination, mechanical power and electricity. The devices and systems involved in energy transformation and utilization include boilers, steam engines, steam turbines, electric generators, electric motors and many advanced devices and systems composed of various electricity-consuming equipment. Furthermore, the rise of distributed energy sources, energy storage, combined cooling, heat and power (CCHP), and electric vehicles has further enhanced the diversity and complexity of the devices and systems working in the energy production and consumption process.
\subsubsection{Sociality of Energy}
Ultimately, energy is produced to serve human society; thus, a label of sociality is inevitably attached to it. The sociality of energy manifests itself in the following three aspects \cite{hao2017osti}.
\paragraph{Direct Involvement of Human Beings during Energy Production}
Human beings directly participate in every procedure of energy production and consumption, i.e., planning, designing, constructing, operating and maintaining energy systems. The different knowledge backgrounds, proficiency levels, subjective consciousness and even emotional states of the participants in these procedures may affect the system in different ways. For example, different operators may achieve different efficiency levels even when operating the same boiler under the same conditions for thermal power generation; per statistics, the fluctuation in efficiency can reach at least 0.5\%. When the boiler is working at an unrated state, the effect is even more prominent. Therefore, the energy production process reflects one aspect of the sociality of energy.
\paragraph{Sociality Reflected in Load Characteristics}
Influenced by the many demand side management programs in place, consumers are likely to reschedule their daily activities according to real-time utility prices to pursue lower costs, and this leads to load changes that follow human activities, habits and mental states. Besides, as an increasing portion of the population chooses electric vehicles as a means of transportation, load begins to shift not only in the time domain but also in the spatial domain, in response to people's travel needs and decisions, electricity prices and traffic conditions. Moreover, since the quality and capacity of energy storage devices have been greatly enhanced, people can now store energy at off-peak times, shift it to complement peak-time usage, and even trade with the grid. This new possibility can help enhance power system security, improve efficiency and save costs. Hence, the load characteristics reflect another aspect of the sociality of energy. Designing novel computational algorithms that account for the competition between prosumers and time-varying load profiles is therefore an urgent need for the future energy and power system.
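The price-responsive load shifting described above can be sketched in a few lines. The following is a minimal illustration with hypothetical time-of-use prices and a single shiftable appliance; the tariff values and appliance parameters are invented for the example, not taken from any real program.

```python
# Minimal sketch of price-responsive load shifting (all data hypothetical):
# a flexible appliance needs a block of consecutive hours, and the consumer
# picks the start hour that minimizes cost under a time-of-use tariff.

prices = [0.08, 0.07, 0.07, 0.09, 0.12, 0.20, 0.22, 0.18]  # $/kWh per hour
load_kw = 2.0       # appliance draw in kW
duration = 3        # hours of consecutive operation required

def cheapest_start(prices, duration, load_kw):
    """Return (start_hour, cost) of the cheapest feasible window."""
    best = min(range(len(prices) - duration + 1),
               key=lambda s: sum(prices[s:s + duration]))
    cost = load_kw * sum(prices[best:best + duration])
    return best, cost

start, cost = cheapest_start(prices, duration, load_kw)
print(start, round(cost, 2))
```

In this toy tariff the cheapest window is the early off-peak block; aggregated over many such consumers, exactly this kind of rescheduling produces the temporal load shift discussed above.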
\paragraph{Sociality Reflected in Policies and Regulations}
The planning of an energy system is constrained by all kinds of considerations, i.e., the energy sources, environmental endurance, economic conditions and the population. As more issues have emerged from the utilization of conventional energy, optimized control and management of multi-source energy production becomes even more urgent. Many countries have established governmental mechanisms to support and encourage the development of environmental protection programs. As a result, the penetration rate of renewable energy in energy production is rising rapidly, which creates new challenges for the current power system in maintaining stability and security, and requires all energy generation units to work under varying operating conditions. How should thermal generating units be operated to ensure efficiency while reducing pollution? How can the requirements and expectations from the government and society, such as limitations on emissions and required rates of renewable installations, be incorporated? These questions are vital for the future power system to answer in order to achieve flexible energy system operation. Thus, energy system operation, which is restricted by policies and regulations, reflects the last aspect of the sociality of energy.
\subsubsection{Informatization of Energy}
Energy flow itself carries a huge amount of information. Although this information is easy to acquire, few systematic analyses have been conducted to better utilize the value embedded in it. Information is power, as long as it is employed reasonably and efficiently. To assist the current energy system, the construction of an information system is necessary. The information system could offer an overview of the current situation of energy use by analyzing historical data, reconstruct the frame of each process, and provide optimized plans for future use aimed at enhancing efficiency and slowing down energy depletion. In any energy system process, an information flow is coupled with the energy flow. The information generated in sensing, computing, monitoring, scheduling, dispatching, and control and management directly affects and dominates the energy flow, and leaves a mark of informatization on the energy system. Besides, the dominating status of information in the energy system also enables it to receive information from society and to be heavily influenced by human thoughts and judgements. Thus, the informatization of the energy system also reflects the property of sociality in energy.
To respond to the grand challenges and based on the inherent properties of energy, we propose the definition of ``social energy", which is enabled by recent advancements in intelligent technology, including artificial intelligence and machine learning, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies. In the following, we first provide a proposal on the general methodology of establishing and investigating social energy, which is based on the ACP approach, i.e., ``artificial systems" (A), ``computational experiments" (C) and ``parallel execution" (P), and the parallel system methodology, and then propose the social energy system for electrical power. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and natural sciences, to social energy, and our vision of its future.
\section{General Methodology in Establishing Social Energy}\label{sec:approach}
\subsection{The ACP Approach}
\begin{figure}[!h]
\centering
\includegraphics[width=5.7in]{figure/ParaEx.eps}
\caption{The ACP approach.}
\label{fig:ParaEx}
\end{figure}
The ACP approach consists of ``artificial systems" (A), ``computational experiments" (C) and ``parallel execution" (P) \cite{5wang2007toward,wang2013intelligent}.
The ``artificial systems" serve to build complex models based on the data and information collected from the real physical world, employing data-driven approaches and semantic modeling methods. They apply Merton's laws to construct a feedback exchange mechanism between information and behavior. The ``computational experiments" aim at accurate data analytics. Since numerical representations of human society activities can hardly be extracted for quantitative analysis, social computing methods must be applied. Social computing methods \cite{5wang2004social} such as deep computing, group breadth computing and historical experience computing can help obtain results from different modes of the virtual artificial systems. These methods must be rooted in human society, utilizing artificial intelligence instead of traditional computation methods to model the society. Finally, the ``parallel execution" targets innovative decision-making. The artificial energy system and the physical energy system form a pair of parallel energy systems, constructing a new feedback control mechanism based on virtual-physical interactions.
The recent development of new IT technologies, such as deep learning, the Internet of Things, and high-performance and cloud computing, lays the technical foundation for realizing the ACP approach. The core philosophy of ACP is to interpret and transfer what is virtual in the complex CPSS into quantifiable, computable and processable processes, turning the virtual artificial space into another space for solving complexity problems; the virtual artificial space and the real physical space together form the ``complex space". In essence, the ACP approach aims to establish a ``virtual vs. real" parallel system of a complex system, and to solve complex system problems through quantifiable and implementable real-time computing and interaction between the two. Generally speaking, the ACP approach consists of three major steps: 1) modeling complex systems using artificial systems; 2) evaluating complex systems using computational experiments; and 3) realizing effective control and management over the complex system through interaction between the real physical system and its virtual artificial system.
In a sense, the ACP approach resolves the ``paradox of scientific methods" in finding ``scientific solutions" for complex systems. In most complex systems, due to high complexity, conducting experiments is infeasible; in most cases, one has to rely on the results, i.e., the output of the complex system, to evaluate a solution. However, ``scientific solutions" need to satisfy two conditions: they must be triable and repeatable. In a complex system that involves humans and societies, not being triable is a major issue, for multiple reasons: prohibitive costs, legal restrictions, ethics, and, most importantly, the impossibility of maintaining unchanged experiment conditions. This inevitably results in the ``paradox of scientific methods". Thus, the ACP approach pursues the sub-optimal route of ``computational experiments", which substitute for physical experiments when the latter are not feasible. In this way, the process of solving complex system issues becomes controllable, observable and repeatable, and the solution also becomes triable and repeatable, which meets the basic requirements of scientific methods.
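The three ACP steps can be illustrated with a toy closed loop. Everything in the following sketch is hypothetical (a scalar process, a least-squares model fit, a grid of candidate controls); it is meant only to show the step structure, not to represent any system from the ACP literature.

```python
# Toy illustration of the three ACP steps (all dynamics hypothetical):
# a real process x_{t+1} = a*x_t + u_t is mirrored by an artificial model
# whose parameter a_hat is re-fit from observations; candidate controls are
# evaluated on the model, and the best one is fed back to the real process.

A_TRUE = 0.9          # unknown to the controller
TARGET = 0.0          # drive the state toward zero

def real_step(x, u):
    """The real physical process being studied."""
    return A_TRUE * x + u

def fit_model(history):
    """Step 1: artificial system -- estimate a from (x, u, x_next) data."""
    num = sum((xn - u) * x for x, u, xn in history)
    den = sum(x * x for x, u, xn in history)
    return num / den if den else 1.0

def computational_experiment(a_hat, x, candidates):
    """Step 2: evaluate candidate controls on the artificial model."""
    return min(candidates, key=lambda u: abs(a_hat * x + u - TARGET))

def parallel_execution(x0, steps=20):
    """Step 3: closed loop between real and artificial systems."""
    history = [(x0, 0.0, real_step(x0, 0.0))]
    x = history[-1][2]
    for _ in range(steps):
        a_hat = fit_model(history)
        u = computational_experiment(a_hat, x, [c / 10 for c in range(-50, 51)])
        xn = real_step(x, u)
        history.append((x, u, xn))
        x = xn
    return x

print(parallel_execution(4.0))
```

The loop is ``triable and repeatable" in exactly the sense discussed above: every experiment runs on the artificial model, while the real process only receives the control that the computational experiments have already vetted.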
\subsection{Parallel Control System}
Based on the ACP approach, parallel intelligence can be defined as a form of intelligence generated from the interactions and executions between real and virtual systems \cite{wang2013parallel}. Parallel intelligence is characterized by data-driven, artificial-system-based modeling and computational-experiment-based system behavior analytics and evaluation. Its core philosophy is, for a complex system, to construct a parallel system consisting of real physical systems and artificial systems. The final goal of parallel intelligence is to make decisions that drive the real system toward the virtual system. In this way, complex system problems are simplified by means of the virtual artificial system, and the management and control of the complex system are achieved. Fig.~\ref{fig:ParaEx} demonstrates the framework of a parallel system. In this framework, parallel intelligence can be used in three operation modes: 1) Learning and training: the parallel intelligence is used to establish the virtual artificial system. In this mode, the artificial system might be very different from the real physical system, and not much interaction is required; 2) Experiment and evaluation: the parallel intelligence is used to generate and conduct computational experiments for testing and evaluating various system scenarios and solutions. In this mode, the artificial system and the real system interact to determine the performance of a proposed policy; 3) Control and management: parallel execution plays the major role in this operation mode. The virtual artificial system and the real physical system interact with each other in parallel and in real time, and thus achieve control and management of the complex system. We would like to point out that one single physical system can interact with multiple virtual artificial systems.
For example, corresponding to different demands from different applications, one physical system can interact simultaneously or in a time-sharing manner with the data visualization artificial system, ideal artificial system, experimental artificial system, contingency artificial system, optimization artificial system, evaluation artificial system, training artificial system, learning artificial system, etc.
\begin{figure}[htp]
\centering
\includegraphics[width=5.7in]{figure/ParaFrame.eps}
\caption{The parallel system approach for real-virtual interactive system control and management.}
\label{fig:ParaFrame}
\end{figure}
The ACP approach and parallel system methodology have been applied to several major social challenges, including intelligent transportation systems, social elderly healthcare systems, mass production system management, and social computing and management for security, and have achieved remarkable success. The example of Intelligent Transportation Systems (ITS) in urban areas is demonstrated as follows. The major challenges in ITS are twofold: \textit{1)} transportation system data are very difficult to obtain, either from the related government departments or from field experiments, making large-scale and close-to-reality study of transportation systems infeasible; \textit{2)} current intelligent transportation systems rely on data from past events to make decisions for future events, lacking a mechanism for investigating the actual causes of ITS events. The ACP approach is employed to study this socio-technical complex system, utilizing the parallel ``Artificial Transportation Systems" (ATS) for modeling, control and management, which is able to bypass the two difficulties stated above \cite{Wan13,Wan08,Wan07}. With the ACP approach, social factors such as population distribution, individual human behaviors, climate and public emergencies are taken into consideration in the ITS. Together with cloud-based computing and IoT technology, traffic conditions can be forecast. Meanwhile, the ACP approach can also provide suggestions on how to improve traffic conditions through control and management before actual traffic congestion takes place.
The success of the ACP approach and parallel system methodology provides a feasible and promising technical pathway for studying the proposed social energy system, which is illustrated in the next section.
\section{Definition and Unique Characteristics of Social Energy}
\subsection{The Definition of Social Energy}
Utilizing the concepts and methodology introduced in the previous sections, we provide the definition of the proposed social energy as follows.
A social energy system is a complex of physical energy systems, physical social systems, and the artificial virtual systems derived from the physical systems. The artificial virtual systems are derived with certain purposes that concern the joint operation of the socio-technical systems. Utilizing the multifaceted data collected from the socio-technical systems, through sufficient interacting and massive computing, knowledge automation of the systems is gained, and intelligence in system control and management is generated. The knowledge and intelligence in turn are applied in the social energy system, achieving a truly automated and intelligent joint socio-technical system design and management.
We propose to use the general methodology of ACP approach and parallel system in establishing the proposed social energy system, and its unique characteristics are explained as the following.
\subsection{Unique Characteristics of Social Energy}
\subsubsection{Artificial Systems}
In the era of Industry and Energy 3.0, the power system and information system were designed separately, bringing about a lack of interaction between the two and, to some degree, resulting in underutilization of the informatization nature of energy. Coming into the era of Energy 4.0, the concept and frame of a ``cyber-physical power system'' have been proposed, focusing on the amalgamation and cooperation of the power and information systems, especially the influence of the information system upon the power system. However, the specific theories and techniques needed to build such a system are not yet mature.
Most studies on physical energy systems focus mainly on the energy flow in the processes of energy transformation, combined utilization of multiple energy sources, cascaded utilization of energy, environmentally friendly substitutes for conventional energy sources, etc.
The design and planning of conventional power system control and management schemes usually consider only the physicality of the energy flow, and ignore, most of the time, the sociality and informatization properties of energy. With the rise of intelligent energy transformation, transmission and consumption, the energy system is required to incorporate intermittent renewable sources, to catch up with rapid demand changes, and to actively meet the requirements of saving energy and lowering emissions, which demands in-depth incorporation of social system information, i.e., individual, organizational and social behavior information that contains complexity and uncertainty. To explore how these social elements affect the energy system, and further to help improve the socio-technical system's design, operation and maintenance, one should understand and combine several disciplines, such as sociology, management, economics, anthropology, and praxeology.
The practice of developing the aforementioned interaction and cooperation mechanisms among the physical, social and informative properties has not been launched yet, and such mechanisms cannot be realized in conventional power system simulation models, which usually take only minimal social information, such as power demand, into consideration. In addition, the cooperation strategy among the three components of the cyber-physical-social system should be adaptively updated in accordance with real-time working conditions, in order to provide intelligent decisions. Unlike in traditional deterministic or probability-based simulation models, most of the changing conditions cannot be forecast, which means the structure of the system model should be time-varying.
Conventional power system simulation models consider and utilize only the load patterns, while the social elements, e.g., incentive mechanisms, people's consumption habits or decision-making patterns, involve knowledge from multiple disciplines, which fundamentally denies the feasibility of conventional simulation methods for building the social models. Besides, the power system is a non-linear system, and most simulation models are built around and serve a certain node in the grid, so that they can hardly reflect the overall working condition of the entire grid, resulting in a relatively conservative operation style. What's more, due to the fast development of the ``Internet of energy'', multi-source energy combination and demand side management become inevitable, which further elevates the degree of system non-linearity. Therefore, the complex working conditions of the power system cannot be reliably and effectively captured by existing simulation techniques.
Revolutionary theories and techniques are in urgent demand to analyze, manage and control such a complex system. In response to this call, we deem the concept of artificial energy systems, based on the idea of parallel systems, a suitable methodology for social energy. The uncertainties existing in the cyber-physical-social system and the complexity of human society intrinsically determine that this system will be extremely complicated. Hence, to transition from Newton's laws to Merton's laws, from control to guidance, the establishment of artificial systems is urgently needed.
The virtual artificial energy system will be established in the ``cyber space" based on semantic \cite{5wang2015}, data-driven or other modeling techniques. It should work in parallel with the physical energy system, such that the two systems interact and exchange feedback with each other. The artificial system should be able to reflect the real-time working conditions of the physical system; on the other hand, it needs to conduct parallel computations to provide optimized solutions and advice for better performance of the physical system. It is believed that introducing the virtual artificial system into the complex energy control system will start a revolutionary new era in the energy industry.
\subsubsection{Knowledge Automation Through Interacting and Computing}
A social energy system usually contains highly complex physical and informative processes. The complexity of the social energy socio-technical system exceeds by far the complexity that industrial automation systems can handle, which means conventional industrial automation approaches, including manual control, single loop, multiple loop, distributed control systems (DCS), manufacturing execution systems (MES), enterprise resource planning (ERP), etc., can no longer satisfy the requirements of social energy.
The artificial system derived from the social and energy systems will play a central role in enabling the processing of the tremendous data and information from the energy and social systems. The physical system data and information feature uncertainty, redundancy, and inconsistency, which means that human intelligence alone cannot process and analyze them. Therefore, it is necessary to initiate the system of knowledge automation \cite{5wang2015software}, employing data-driven methods, multi-agent systems and other artificial intelligence techniques to liberate human intelligence and to achieve the desired outcome of the artificial system. The approach for realizing knowledge automation is introduced in Section~\ref{sec:approach} as the ACP approach and parallel system methodology.
\subsubsection{Socio-Technical Feedback Mechanism and Gamification of Reality}
The direct outcome of the intensive interaction among social, technical and artificial systems is a feedback mechanism that directly or indirectly leads to the desired system goals. We note that the physical socio-technical system can interact with multiple virtual artificial systems, e.g., a consumer behavior artificial system, an organizational behavior artificial system, a contingency artificial system, a scheduling and optimization artificial system, a real-time management artificial system, etc. Each of the artificial systems corresponds to a different application. The knowledge automated through interacting and computing generates feedback signals which are used for modeling and shaping the real physical social and energy systems. The feedback may range from direct control of the technical system, indirect manipulation of distributed socio-technical agents, prescriptive influence on social and economic entities, and publicity for maximum information dissemination, to psychological hints on individual behaviors. In \cite{Wer12}, effective and efficient feedback is characterized as the core feature of gaming from a philosophical perspective. In the social energy socio-technical system, a strong, effective and efficient feedback mechanism provides the possibility of the gamification of reality, i.e., turning reality into a ``game", where the physical systems are shaped and pushed toward the desired ones much more rapidly. With such a strong feedback mechanism, the socio-technical system is expected to possess advanced closed-loop operation capability with stability, rapid system convergence and agility in system adaptation.
The era of Energy 5.0 will be an era of big data. Sensing and surveillance data from physical systems, pertinent data from information control, and social activities and policies will all be data sources. There will be no concrete models or references for conventional simulation techniques to imitate; thus, conventional simulation and control modes can hardly handle the challenges of the big data era. A data-driven parallel control mechanism utilizing the theories, methods and techniques of knowledge automation will be the core in the establishment of the artificial system. The focuses and key issues of the CPSS are illustrated as follows.
\paragraph{Scientific problems} While traditional computations and physical models work independently, the CPSS requires a unified modeling theory to realize dynamic interactions among computational, physical and social processes, achieve time-space consistency, cope with uncertainty problems, and eventually to accomplish the parallel operation of the physical and artificial systems.
\paragraph{Technical problems} Corresponding to the above scientific problems, the technical focus should be put on the development of new scheduling, design, analysis and experiment tools which can utilize social computing, parallel execution or other strategies in order to reflect the activities of interactions and evolutions.
\paragraph{Engineering problems} The engineering problems mainly encompass system construction, design, integration, maneuverability, \textit{etc.} Issues like reasonable time management of the physical system and concurrency of the physical and virtual artificial systems are also of importance.
\begin{figure*}
\centering
\includegraphics[width=5.7in]{figure/Trans.eps}
\caption{The socio-technical power system architecture at the bulk generation and transmission level.}
\label{fig:Trans}
\end{figure*}
\section{Architecture of Social Energy for Electrical Power System}
In this section, we provide an initial investigation on the socio-technical system architectures for electrical power systems at different levels for demonstrating the interactions among electrical energy systems, social systems and their resulting artificial systems.
\subsection{Bulk Generation and Transmission Level Architecture}
As the first chain in the electricity delivery, bulk electricity power generation and transmission networks aim to generate and transmit electrical power that balances power demand across large geographical areas. Therefore, the most important issue faced by the transmission network is how to efficiently dispatch electricity from generation to load, which includes the efficient operation of existing transmission facilities and the expansion of the network to meet increased demand.
At the stages of transmission grid construction and expansion, regulatory forces carry the responsibility to decide the network investment, business models and access policies to set the basic tone of grid development \cite{Regul, Elforsk, Grid2030, christie2000}. In the operation of transmission grid, ``economic dispatch" means that the least expensive energy is transmitted to meet power demand while ensuring the security and stability of the power system. In an ideal scenario, simple employment of generations with least expensive operation costs would be the best solution to achieve the goal. However, network effects, which encompass network losses, grid-imposed constraints, and quality of service, are the main factors that influence the operation of the transmission grid. Regulatory, economic and technical efforts are designed to cope with the impacts brought by the network effects.
In terms of quality of service, although most quality issues occurring on the demand side are due to distribution-level failures, the less common issues caused by transmission failures generally bring severe consequences to the entire power system \cite{Dobson2007}. This is determined by the characteristics of electricity flow, which obeys Kirchhoff's laws: a failure in one working transmission line immediately triggers a domino effect, and the failure spreads across the grid to induce a blackout. To prevent this kind of catastrophe, power quality related policies are established, and in response a series of technical measures are taken to preserve the stability of the grid. Techniques such as Volt/VAR control and load tap changing capacitor banks are developed to maintain reasonable active and reactive power levels in the grid to ensure proper power flow and voltage levels. Power system integrity protection, oscillation damping and other related techniques intend to minimize the impacts and protect the grid from disturbances.
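The Kirchhoff-governed flow redistribution behind such cascades can be shown with a minimal DC power-flow sketch. The three-bus network, susceptances and thermal limits below are invented for illustration; tripping overloaded lines and re-solving mimics only the first, purely electrical stage of a cascade.

```python
# Minimal DC power-flow sketch (hypothetical 3-bus network) showing how a
# single line outage redistributes flows per Kirchhoff's laws and loads up
# the surviving lines -- the seed of a cascading failure.

import numpy as np

def dc_flows(lines, injections, n_bus):
    """Solve B*theta = P (bus 0 as slack) and return per-line MW flows."""
    B = np.zeros((n_bus, n_bus))
    for i, j, b, _ in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(n_bus)
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
    return [b * (theta[i] - theta[j]) for i, j, b, _ in lines]

def cascade(lines, injections, n_bus):
    """Trip every overloaded line and re-solve until flows settle."""
    lines = list(lines)
    while True:
        flows = dc_flows(lines, injections, n_bus)
        survivors = [l for l, f in zip(lines, flows) if abs(f) <= l[3]]
        if len(survivors) == len(lines):
            return lines, flows
        lines = survivors

# triangle network: (from, to, susceptance, thermal limit in MW)
lines = [(0, 1, 1.0, 30.0), (1, 2, 1.0, 80.0), (0, 2, 1.0, 150.0)]
inj = np.array([120.0, 0.0, -120.0])   # 120 MW from bus 0 to bus 2
remaining, flows = cascade(lines, inj, 3)
print(len(remaining), [round(f, 1) for f in flows])
```

In the initial solution line 0--2 carries 80 MW; once the weak 0--1 line trips, Kirchhoff's laws force the full 120 MW onto line 0--2, illustrating how one outage concentrates stress on the survivors.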
Network losses \cite{glover2011power, wood2012power}, which consist mostly of ohmic losses and losses due to corona effects, add the influence of location to the ideal scenario of energy dispatch, so that prices in the transmission grid differ from node to node. Grid-imposed constraints, which include both the constraints resulting from the physical limitations of transmission lines and facilities, e.g., current flow limits, and the constraints established by regulators out of concern for the security of grid operation, e.g., capacity and voltage level, further emphasize these locational effects and complicate the search for the most economical dispatch.
Transmission level nodal prices (equivalent to locational marginal prices) \cite{schweppe1988, borenstein2005time, crew1995theory} are the marginal prices at each node, accounting for the impacts of losses and constraints, and they provide locational pricing signals. Nodal prices play a truly important role in the transmission market. They enable operators to profit from price differences between nodes. They provide the information that guides transmission planning, optimal power flow control, virtual power plant dispatch and demand response dispatch, and are therefore the means to achieve ``efficient energy dispatch". For regulators and policy makers, the locational signals also provide useful advice on siting policies.
Since the nodal pricing mechanism alone cannot cover all of the transmission investors' costs, additional transmission charges must be issued to the beneficiaries of the grid. An effective and efficient cost allocation mechanism should allocate the costs of initiation, operation, maintenance and expansion to all beneficiaries properly, so that everyone's profits are ensured and the economic viability of the system is protected.
\subsection{Distributed Generation and Distribution Level Architecture}\label{sec:distri}
\begin{figure*}
\centering
\includegraphics[width=5.7in]{figure/Distr.eps}
\caption{The socio-technical power system architecture at the distributed generation and distribution level.}
\label{fig:Distr}
\end{figure*}
Compared with bulk power generation and the electricity transmission grid, the distribution grid is a much more intricate system \cite{mei2011power}. It connects to the transmission grid, steps the transmission voltage down to meet consumers' requirements and deals directly with the various end users, fulfilling the last step in the delivery of energy from generation to consumption. The rise of distributed generation (DG) today \cite{borbely2001, ackermann2001, barker2000} allows smaller scale generators to be connected to the distribution grid. The functions of and responsibilities carried by distribution grids determine the complexity and massive quantity of their components and operation mechanisms, which implies that more regulatory, technical and economic efforts are required to maintain their viability and functionality.
The most basic duty of the distribution grid is to deliver electricity to the end users within its service area, supporting resident living and societal development. The idea sounds simple; however, it requires the fulfillment of a series of complicated tasks. In the initial stage, the distributors/operators of a distribution grid must plan, design and build the capacity and structure of their network by analyzing and evaluating current demand and possible future increases in demand. During the lifetime of the service, distributors need to operate the grid properly and develop techniques to ensure the continuous provision of high quality electricity. Non-discriminatory acceptance of newly arriving consumers should be ensured, and their connection to the grid is carried out by the distributors. For long-term steady and secure service, it is also the distributors' responsibility to conduct regular and effective maintenance of the system.
All the tasks mentioned above require huge investment. Although it is the distributors' duty to maintain a complex distribution network that can deliver high-quality electrical power to the end users, the distributors' ultimate goal is always to profit as much as possible. Following this logic, the distributors may want to reduce their costs, and as a result conflicts between the consumers' and the distributors' interests emerge. Distributors may intend to deny new consumers access to the existing distribution grid, to avoid the costs of the extra capacity planned in the initial grid construction and of new connection nodes. In terms of quality of service, for the distributors higher quality implies higher costs, while consumers always prefer high-quality power. Rural and remote area electrification is another example: such projects are costly and low-profit for distributors, but they are mandated by government regulation and beneficial for the residents of those areas.
The most severe conflicts today will probably arise around the proliferation of distributed generation (DG), distributed storage and electric vehicles (EV) \cite{clement2010,pepermans2005,quezada2006,cheng2014electrified,cheng2015d2d,zhang16flex,zhang16energy,Cheng16con,globalcom}. Considering the environmentally friendly nature of renewable generation, the increased energy efficiency of co-generation of combined heat and power (CHP), and governmental promotion and incentive policies supporting distributed generation, more consumers may be willing to deploy DG. Despite the advantages DG can bring, it impacts the existing distribution network and the distributors' interests in many ways. Since current distribution grids are designed for unidirectional energy flow (from transmission to distribution to end users), the connection of new DG facilities requires investment in new technologies and construction. Moreover, the reverse and extra power flow from users back to the grid can impact the security, stability and quality of energy delivery, and requires the distributors to restructure their operation modes.
Policy makers and regulators influence the entire system through direct forces, i.e., direct regulations, and indirect forces, i.e., indirect prescriptions through the retail market. Regulators grant qualifications and establish obligations, such as service duration, area of service, regular maintenance requirements, etc., through distribution licensing. By means of incentive-based regulation, distributors are encouraged and rewarded for making greater efforts to achieve certain reference-level requirements, and are penalized for failing to accomplish required work. Techniques such as distribution integrated Volt/VAR control (IVVC), voltage regulation and stabilization, power system fault localization and power restoration are the responses to those regulations, improving service quality, maintaining system security and stability, and reducing system energy losses. Reasonable prices are regulated by evaluating the market and the cost reports submitted by the distributors, in order to protect both consumers' and distributors' interests. Feedback from the retail market and the distributors in turn helps the regulators renew and adjust prices and reference levels to adapt to changes and keep up with the trend of system development.
Although the future of DG seems promising, the scale of DG installation today has not yet reached the level at which it severely impacts the distribution grids. Nevertheless, since DG deployment is an inevitable trend, corresponding regulatory complements, market pricing mechanisms and technological developments should be initiated to cope with the challenges DG will bring. Specific and more detailed regulations will be needed to formalize the connection of DGs and EVs to the distribution grid and to regulate the generation and operation of DGs. Prices will need to be set on a much smaller time scale \cite{Itistime,li2014distribution}, e.g., day-ahead, intra-day or real-time, to incorporate the characteristics of DGs, e.g., the intermittence of renewable generation. The charges for consumer-side DG connection and maintenance should also be set properly. For the distributors, approaches such as generation scheduling with DG, automatic DG control, DG stabilization, optimal power flow with DG and EV charge optimization are of great value and importance for optimizing grid performance by incorporating DGs.
\subsection{Consumer Level Architecture}\label{sec:consumer}
\begin{figure*}
\centering
\includegraphics[width=5.7in]{figure/Consum.eps}
\caption{The socio-technical power system architecture at the consumer level.}
\label{fig:Consum}
\end{figure*}
In the traditional power system, consumers have been treated as purely passive end users. Nowadays, the rapidly increasing number of distributed generation installations at the consumer level and the growing interest in electric vehicles enable consumers to become active players in the power system. Furthermore, research from various fields, such as sociology, anthropology, economics and engineering, has shown that consumer behavior greatly influences the working status of the system, and that if consumers' energy consumption activities can be correctly guided, both the system and the consumers themselves can obtain pronounced benefits. Therefore, it is necessary and urgent to establish an interactive framework, as at all the other levels of the power system, consisting of the corresponding regulatory and market mechanisms as well as technical supports and operational tactics, to better direct and manage the energy related activities of consumers.
Unlike the transmission and distribution levels, where machines perform most of the work under the exact and unified instructions of the operator, at the consumer level almost all operations are determined and carried out by different individuals with different characteristics. Their decisions and activities possess high uncertainty. This uncertainty, together with the uncertainty of distributed energy sources, makes system control and management at this level very difficult. Therefore, ``intelligent" techniques and automation play an extremely important role at this level in helping to simplify and unify the control and management processes \cite{farhangi2010path,gungor2011smart,di2012event}.
We define a new concept of ``Consumer-Energy Interfaces" (CEI) at the consumer level, which will be the intermediary empowering the interactions between consumers and the network. Interactive smart meters allow both sides to exchange information about each other's working conditions; the consumer-level market access interface enables bidirectional trades of energy; and the automatic consumer energy control interface and automatic consumer-level DG control interface can ensure safe and efficient energy flow. As the entrance for consumers to participate in the grid, the significance of the interface design is evident. The interface should be simple enough for consumers to use, but at the same time it should provide all the important information its users need to make proper decisions. However, what information is considered important to a particular user? How simple should the interface be? Many more such questions need to be well studied and answered by the designers. According to the answers, regulators should establish pertinent standards to regulate the use of the interfaces, so that consumers' security and privacy are sufficiently protected.
Given the capability to interact and exchange information with the market and the energy network, the efforts that can be made to maximize the benefits and minimize the adverse impacts of consumers' involvement (acting as prosumers) \cite{prosumer} are discussed in the following. First, demand side management (DSM) strategies \cite{fahrioglu2001using,DSM2010autonomous,DSM2011demand} should be improved and widely applied to relieve the burden of peak-time generation and transmission and, from a long-term point of view, to save investment in all the facilities needed to meet peak demand. Market signals such as real-time prices and consumer-level locational nodal prices are indispensable elements of DSM; they reflect the real-time working status of participating entities, e.g., demand changes on the consumer side and available grid capacity. Based on these pricing signals and the parameters exchanged through smart meters, and supported by smart appliance and load control interfaces, demand management programs such as market-signal-responsive load and voltage/frequency-signal-responsive load can be realized. While these measures are based on real-time control and management, smart building modeling can provide advice on scheduling \emph{ex ante}. When consumers play the role of energy producers utilizing local DGs, as mentioned in the previous subsection (Sec.~\ref{sec:distri}), they bring not only benefits but also impacts to the grid. Since the relevant efforts have been narrated in Sec.~\ref{sec:distri}, although not limited to those discussed there, they are not repeated here.
To achieve ``healthy" energy flow involving prosumers, one can predict that huge information flows will also be transmitted within the grid. This raises the unique and non-negligible issue of security and privacy, which does not arise at any other level. Regulatory institutions should carry part of the responsibility for this issue, and technical solutions, e.g., information security, should be advanced to better protect consumers and the grid exposed to the information networks.
\section{Distribution Locational Marginal Real Time Pricing}\label{sec:WAMs}
The first step in building the artificial system and the computing paradigm introduced in Chapter~\ref{Chap2} is to establish a real time pricing mechanism. Our research, presented in this manuscript, mainly focuses on the distribution power grid. Hence, to achieve the ACP approach and parallel computing scheme of Chapter~\ref{Chap2}, and to demonstrate the architectures in Sec.~\ref{sec:distri} and Sec.~\ref{sec:consumer}, this chapter introduces a locational marginal real time pricing mechanism at the distribution level that can reflect the nodal prices dynamically.
\subsection{Extraction of Network Topology Based on DU Campus Utility Map}
Fig.~\ref{fig:DULoopnetwork} demonstrates the network topology of the DU campus grid. In this network, the nodes stand for campus buildings and the lines represent the transmission lines between campus buildings. There are a number of switches on the campus, which are assumed to be closed in the numerical simulation. As shown in Fig.~\ref{fig:DULoopnetwork}, the total number of buses is 60. There are 57 buildings on campus, and the other 3 buses are set to represent the transmission lines. The power system is connected to the main utility grid at bus $1$, bus $38$ and bus $51$. The distributed renewable generators are assumed to be connected at bus $25$, bus $36$ and bus $42$. According to the DU facility's Driscoll solar project, the University of Denver plans to spend \$$300\,k$ to assemble a $100\,kW$ photovoltaic installation. Correspondingly, it is assumed that a $40\,kW$ renewable generator is connected at each of these three buses. The rest of the buses are all load buses. The extracted topology makes it feasible to study and simulate DLMP on the DU campus grid system.
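For reference, the bus roles described above can be captured in a minimal configuration sketch; the bus indices and capacities are those stated in the text, while the variable names are our own:

```python
# Sketch of the DU campus grid configuration described above.
N_BUS = 60
utility_buses = {1, 38, 51}      # connections to the main utility grid
renewable_buses = {25, 36, 42}   # assumed PV locations, 40 kW each
pv_capacity_kw = 40.0

# All remaining buses carry loads (campus buildings).
load_buses = set(range(1, N_BUS + 1)) - utility_buses - renewable_buses
assert len(load_buses) == 54
```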
\begin{figure}[!t]
\begin{center}
\subfigure[]{ \label{fig:DULoopnetwork}
\resizebox{3.8in}{!}{\includegraphics{figure/Chapter3topology.eps}}}\\
%
\subfigure[]{ \label{fig:simulink}
\resizebox{2.8in}{!}{\includegraphics{figure/Chapter3simulink.eps}}}
\caption{(a) The network topology of DU campus grid, (b) the corresponding test bed of DU campus grid.}
\end{center}
\end{figure}
Fig.~\ref{fig:simulink} shows the corresponding simulation test bed we created in the Matlab Simulink environment. The campus power grid is divided into three sections according to the three different transmission lines in Fig.~\ref{fig:DULoopnetwork}. The DU electricity power profile is used to set the amount of the loads.
\section{The Locational Marginal Pricing at the Power Distribution Level}\label{chapter:4}
\subsection{Distribution Locational Marginal Pricing}
This chapter implements an advanced approach to nodal pricing, which is an extension of LMP from transmission systems to distribution systems \cite{gu2017cha32432nce,gu2016knowledge}. Compared with the transmission level, DLMP possesses its own distributional characteristics. In distribution power systems, the voltage level is uniform across all buildings and the load may vary dramatically from time to time. Also, in many distribution cases, the line congestion and flow limits differ from line to line. However, similarly to LMP at the transmission level, DLMP can be formulated with the following parts: marginal congestion cost (MCC), marginal loss cost (MLC) and marginal energy cost (MEC) \cite{jiang2017b123ig,jiang2012fa123ult,ding2016automa123tic}. This is achieved in a distribution multi-source scheme, which will be explained in detail later.
The traditional scheme of distribution power systems is inflexible and will become outdated. The conventional optimal power flow model minimizes the total cost of power production subject to power grid constraints. Nevertheless, for the forthcoming distribution power grids, with vast penetration of renewable energy and energy storage facilities, the electricity generation expenditure is not yet clearly characterized and it is difficult to find a well-recognized universal formulation. The DLMP therefore uses the social surplus as a substitute objective function in the OPF solver. In a competitive power market, utility companies and consumers submit confidential offers and bids stating the prices and quantities at which they are willing to sell and buy electricity. The DLMP is enabled by such communication capability, and its flow chart is shown in Fig.~\ref{fig:calculation paradigm}. The social surplus is defined as the overall benefit of the University of Denver minus the total costs of the utility companies:
\begin{equation}\label{equ:maineq}
s=\sum\limits_{j=1}^{N}(c_j - p_j) \times q_{c_j} - \sum\limits_{i=1}^{M}(p_i - u_i) \times q_{u_i}
\end{equation}
where $s$ is the system social surplus gained from our DLMP calculation; $N$ is the total number of campus buildings and $j$ is the index of buildings; $M$ is the total number of electricity suppliers, including renewable energy, and $i$ is the index of those generators; $c_j$ stands for the bid price at building $j$ and $u_i$ represents the offer price of generator $i$; $p_j$ is the distribution locational marginal price at building $j$, and $p_i$ stands for the distribution locational marginal price at supply bus $i$; $q_{c_j}$ is the power demand at building $j$; $q_{u_i}$ is the power supply from bus $i$.
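As a minimal numerical illustration of Eq.~(\ref{equ:maineq}), the following sketch evaluates the social surplus from hypothetical bids and offers, following the sign convention of the equation literally; all prices and quantities below are made-up examples, not DU data:

```python
def social_surplus(c, p_load, q_c, u, p_gen, q_u):
    """Social surplus as written above: the consumer-side term
    sum_j (c_j - p_j) q_cj minus the supplier-side term
    sum_i (p_i - u_i) q_ui."""
    consumer = sum((cj - pj) * qj for cj, pj, qj in zip(c, p_load, q_c))
    supplier = sum((pi - ui) * qi for pi, ui, qi in zip(p_gen, u, q_u))
    return consumer - supplier

# Hypothetical bids/offers in $/MWh and demands/supplies in MW.
s = social_surplus(c=[70.0, 65.0], p_load=[60.0, 60.0], q_c=[1.0, 2.0],
                   u=[50.0], p_gen=[60.0], q_u=[3.0])
```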
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=0.22]{figure/DLMPheng.eps}
\caption{General block diagram of DLMP mechanism.}\label{fig:calculation paradigm}
\end{center}
\end{figure}
\subsection{DCOPF Model for DLMP Calculation}\label{sec:DCmodel}
\subsubsection{Methodology}\label{sec:DC Methodology}
For a DC power system, reactive power is not considered and the voltage magnitudes are set to a universal value. In consideration of all the distribution power system characteristics, in the DLMP calculation the Generation Shift Factor (GSF) is used to reflect the time-varying line congestion corresponding to different line flow constraints \cite{meng2011distribution}. The implementation of the real-time pricing mechanism is evaluated on the real DU campus power grid topology. It is assumed that in the future there will be a number of renewable generators in the system. For the DU campus power system, there are three legacy buses that can supply sufficient electricity from the main utility grid to the campus. Optimal Power Flow (OPF) is the solver for our problem, and constraints are added to the traditional OPF problem to solve the DLMP problem. In this case, DCOPF is utilized to calculate the DLMP. The constraints mainly consist of two parts: the balance between customer demand and supply, and the line constraints that include $g_{k-i}$ to secure the power transmission lines. As a result, the optimization problem can be written as:
\begin{eqnarray}\label{equ:DCDLPM}
\underset{p_j, p_i}{\text{arg max}}&s=\sum\limits_{j=1}^{N}(c_j - p_j) \times q_{c_j} \nonumber\\
&- \sum\limits_{i=1}^{M}(p_i - u_i) \times q_{u_i}\label{equ:DCfunction}\\
\text{s.t.}&\sum\limits_{i=1}^{M}q_{u_i} - \sum\limits_{j=1}^{N}q_{c_j} = 0\label{equ:DCconstrain1}\\
&\sum\limits_{i=1}^{M} g_{k-i} \times q_{u_i} - \sum\limits_{j=1}^{N} g_{k-j} \times q_{c_j} \leqslant f_k^{Max}\label{equ:DCconstrain2}\\
&q_{u_i}^{Min} \le q_{u_i} \le q_{u_i}^{Max}\label{equ:DCconstrain3}
\end{eqnarray}
where $g_{k-i}$ is the generation shift factor from bus $i$ to line $k$, and $f_k^{Max}$ stands for the power flow limit at $k$th line.
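To illustrate how the generation shift factors enter the line flow constraint, the sketch below computes each line flow as the GSF-weighted sum of net bus injections and checks it against the limit; the 3-bus GSF matrix and all numbers are hypothetical, not the DU campus values:

```python
def line_flows(gsf, net_injection):
    """Line flows from generation shift factors:
    f_k = sum over buses n of g_{k-n} * P_n, where P_n is the
    net injection (supply minus demand) at bus n."""
    return [sum(g * p for g, p in zip(row, net_injection)) for row in gsf]

# Hypothetical 2-line, 3-bus example (GSF values are made up).
gsf = [[0.5, -0.25, 0.0],
       [0.5,  0.25, -1.0]]
net_injection = [2.0, -1.0, -1.0]   # MW: one supplier bus, two load buses
flows = line_flows(gsf, net_injection)
limits = [1.5, 2.5]                  # f_k^{Max} for each line
assert all(f <= fmax for f, fmax in zip(flows, limits))
```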
The DC-DLMP problem can be divided into the following three components \cite{li2007dcopf}:
\begin{eqnarray}
&p=MEC + MLC + MCC\\
&MEC = \lambda \\
&MLC = 0 \\
&MCC = \sum\limits_{k}g_{k-i} \times \mu_k
\end{eqnarray}
where $p$ is the distribution locational marginal price for each building; $MEC$ is the marginal energy cost, and $\lambda$ represents the Lagrangian multiplier of the power balance constraint (\ref{equ:DCconstrain1}); $MLC$ is the marginal loss cost, which equals $0$ in the DCOPF model; $MCC$ stands for the marginal congestion cost; $g_{k-i}$ is the generation shift factor, and $\mu_k$ is the Lagrangian multiplier of the line flow constraint (\ref{equ:DCconstrain2}).
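The decomposition above can be sketched numerically: given the energy balance multiplier $\lambda$ and the line flow multipliers $\mu_k$, the DC-DLMP at every bus is $\lambda$ plus the GSF-weighted congestion term. All numbers below are hypothetical placeholders, not outputs of our MATPOWER runs:

```python
def dc_dlmp(lam, gsf, mu):
    """DC-DLMP at each bus n: p_n = MEC + MLC + MCC, where
    MEC = lam (balance multiplier), MLC = 0 in the DC model, and
    MCC_n = sum over lines k of g_{k-n} * mu_k."""
    n_bus = len(gsf[0])
    mcc = [sum(gsf[k][n] * mu[k] for k in range(len(gsf)))
           for n in range(n_bus)]
    return [lam + 0.0 + m for m in mcc]

lam = 60.0                     # $/MWh energy component, assumed
gsf = [[0.5, -0.25, 0.0],
       [0.5,  0.25, -1.0]]     # hypothetical 2-line, 3-bus GSF
mu = [0.0, 20.0]               # only line 2 binds (is congested)
prices = dc_dlmp(lam, gsf, mu) # [70.0, 65.0, 40.0]
```

Only buses whose shift factors load the congested line deviate from the uniform energy price, which mirrors the DC simulation results discussed below.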
\subsubsection{Numerical Results on the DCOPF Based DLMP}\label{sec:test bed}
The DLMP based on the DCOPF algorithm is simulated and evaluated on the DU 60-bus distribution system shown in Fig.~\ref{fig:DULoopnetwork}. The load configurations for each building are generated from real-world DU building data to create reasonable testing scenarios. The DU power system is assumed to consume all the available renewable energy to supply demand, in order to reduce $CO_2$ emissions and utility bills. Although the campus power grid utilizes all the renewable energy, it is not enough to balance all the demand, and the campus still needs supply from the legacy power grid, which means the campus will draw energy from buses $1$, $38$ and $51$.
In the numerical simulation, the overall load of the DU campus power grid is $1879.98\,kW$ of active and $606.66\,kVar$ of reactive power. The DLMP calculation is developed in the MATLAB environment based on the optimal power flow solver of the MATPOWER 5.1 simulation package \cite{C4MATPOWER}. The simulation results are shown in Fig.~\ref{fig:DCDLPM}. The total power loss in the grid depends on the load configuration. All generation is dispatched based on DCOPF in order to maximize the social surplus on the DU campus.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[scale=0.29]{figure/topologyDCprices1.eps}
\caption{DC-DLMP calculation results.}\label{fig:DCDLPM}
\end{center}
\end{figure}
As shown in Fig.~\ref{fig:DCDLPM}, the numbers in red show the prices for all the buses. The results show that all the buildings in the DU campus power grid operate at the same price, $\$60.000$/MWh: in the DCOPF model the marginal loss cost ($MLC$) is zero by assumption, and since the overall power loss in the DU campus distribution system is trivial and no line is congested, the marginal congestion cost ($MCC$) is zero as well. Congestion can, however, influence the DLMP in the DC model. In order to simulate a congestion condition, the ratings of the lines connected to bus 32 and bus 33 are decreased and the load at bus 33 is increased to 5 times the normal load. The results corresponding to this modification are shown as black numbers in Fig.~\ref{fig:DCDLPM}. The prices for bus 32 and bus 33 rise to $\$70.000$/MWh and $\$75.000$/MWh, respectively.
\subsection{ACOPF Based DLMP}
\subsubsection{Methodology}\label{sec:AC Methodology}
Compared with the DCOPF model, the ACOPF simulation adds solvers for losses and reactive power, which makes the model capture more distribution power system characteristics. In this chapter, the DU campus grid is a medium-scale power system with real-world electricity profiles, which means the ACOPF model does not require a high computational load. The ACOPF based DLMP model is formulated as follows.
\begin{eqnarray}\label{equ:ACDLPM}
\underset{p_j, p_i}{\text{arg max}}&s=\sum\limits_{j=1}^{N}(c_j - p_j) \times q_{c_j} \nonumber\\
&- \sum\limits_{i=1}^{M}(p_i - u_i) \times q_{u_i}\\
\text{s.t.} &\sum\limits_{i=1}^{M}q_{u_i} - \sum\limits_{j=1}^{N}q_{c_j} - L_{P}(V,\theta) = 0\label{equ:ACconstrain1}\\
&\sum\limits_{i=1}^{M}Q_{u_i} - \sum\limits_{j=1}^{N}Q_{c_j} - L_{Q}(V,\theta) = 0\label{equ:ACconstrain2}\\
&f_j(V,\theta) \leqslant f_j^{Max}\label{equ:ACconstrain3}\\
&q_{u_i}^{MIN} \le q_{u_i} \le q_{u_i}^{MAX}\label{equ:ACconstrain4}\\
&Q_{u_i}^{MIN} \le Q_{u_i} \le Q_{u_i}^{MAX}\label{equ:ACconstrain5}\\
&V_i^{MIN} \le V_i \le V_i^{MAX}\label{equ:ACconstrain6}
\end{eqnarray}
where $V$ and $\theta$ are the voltage magnitude and angle, respectively; $f_j$ stands for the power flow limit on line $j$; $q_{u_i}$ is the active power output of each power source, while $Q_{u_i}$ is the reactive power output of the corresponding generator; $V_i$ stands for the voltage magnitude of the $i$th bus with power injection; and $L_{P}(V,\theta)$ and $L_{Q}(V,\theta)$ are the total active and reactive power losses in the DU campus power grid, respectively.
\subsubsection{Numerical Results on the ACOPF Based DLMP}
In the ACOPF based DLMP model, the overall load configuration of the DU campus power grid is set similarly to the DC model. The simulation results are depicted in Fig.~\ref{fig:ACDLPM}. The total active power loss is $78.856$ kW and the reactive power loss is $20.06$ kVar. All generation is dispatched based on the ACOPF results in order to maximize the social surplus on the DU campus.
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.29]{figure/Chapter4topologyACprices.eps}
\caption{AC-DLMP calculation results.}\label{fig:ACDLPM}
\end{center}
\end{figure}
The result shown in Fig.~\ref{fig:ACDLPM} indicates that in the ACOPF model all the buses have different prices. Building $33$ has the most expensive distribution locational marginal price, which is $\$121.612$/MWh. Building $52$ has the least expensive price, which is $\$46.020$/MWh. The average price of all the buildings is $\$59.379$/MWh.
Due to power losses, the generators need to provide extra energy to balance the load; therefore additional marginal loss costs (MLC) apply to each building. As a result, the DLMPs are more expensive than the corresponding values in the DCOPF model. Also, the overall social surplus calculated by the ACOPF model is less than the total social surplus obtained with the DCOPF model.
\section{The Energy Consumption Model}
One of the major difficulties that power engineers and researchers face nowadays is that access to real-time power profiles can hardly be gained. Building managers and utility companies are very cautious when it comes to sharing data with the public. Their trepidation is understandable, because sharing would be challenging for them once security factors are taken into consideration. Though it is very hard to get a specific power consumption profile, there are several good power simulation software packages, so we can utilize them and simulate the power profiles according to our research and simulation purposes.
\subsection{The Building Consumption Analysis Tool}
To fit our research purposes, the building consumption analysis software should fulfill the following prerequisites: 1) based on the various kinds of buildings on campus, the tool must be able to simulate different building types, such as academic buildings, dormitories, arena \& fitness centers and office buildings; 2) the software must provide power profiles at up to hourly resolution, because our research aims to optimize hourly energy usage and power system operation and planning; 3) the tool must be capable of specifying different architectures, numbers of stories and building shapes.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\textwidth]{figure/eBuilding.eps}
\caption{The building architecture design interface.}
\end{center}
\end{figure}
Among all the building consumption analysis tools, we choose eQUEST as our power profile provider. eQUEST is designed to provide whole-building performance analysis to building professionals, i.e., owners, designers, operators, utility \& regulatory personnel, and educators \cite{2010equest}. eQUEST is a comprehensive and powerful tool for modeling building energy consumption; the analysis treats building consumption as a system of systems, yet the user interface and design process are designed to shorten and facilitate the preparation of building models for simulation and research analysis. eQUEST is in fact a Windows-based interface to the DOE-2 simulation engine. The DOE-2.2 simulation engine is the most widely recognized, used, and trusted building simulation tool available today. At the beginning of model creation, the user has to specify the type of the building and the corresponding structure. If needed, researchers can also design the shape, material and number of walls, windows and doors.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\textwidth]{figure/eBuildingHVAC.eps}
\caption{The HVAC system design interface.}\label{eBuilding}
\end{center}
\end{figure}
It should be noted that eQUEST can also estimate the size and model of the HVAC system for the building the user constructs in the software. Aside from that, Fig.~\ref{eBuilding} shows that the software supports complex system design and allows researchers to design several independent HVAC systems within one building. eQUEST mainly aims to predict energy consumption from the HVAC system perspective, but it also provides the functionality to estimate the energy consumed by lighting and plug-in loads. This functionality makes the energy consumption predicted by eQUEST more applicable to real-world scenarios. Although it is impossible to reproduce a real-world power profile exactly, we find that the output of eQUEST is good enough for our research and simulation. The output file contains detailed information on various types of data, such as indoor temperature, outdoor temperature, date, lighting consumption and total end-use energy.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Partial Output Data}
\label{tab:edata}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
Lighting energy & Vent fan energy & Heating energy & Hot water energy & Total end-use energy \\ \hline
23.6173 & 0 & 0 & 159479 & 159479 \\ \hline
23.6173 & 0 & 0 & 160202 & 160202 \\ \hline
23.6173 & 0 & 0 & 160732 & 160732 \\ \hline
29.6397 & 0 & 0 & 161285 & 161285 \\ \hline
104.204 & 397.022 & 1.74E+07 & 407303 & 1.78E+07 \\ \hline
338.38 & 397.022 & 1.57E+07 & 1.82E+06 & 1.75E+07 \\ \hline
413.313 & 397.022 & 1.50E+07 & 2.84E+06 & 1.78E+07 \\ \hline
424.129 & 397.022 & 1.50E+07 & 2.29E+06 & 1.73E+07 \\ \hline
425.112 & 397.022 & 1.48E+07 & 1.57E+06 & 1.64E+07 \\ \hline
425.112 & 397.022 & 1.47E+07 & 1.35E+06 & 1.60E+07 \\ \hline
425.112 & 397.022 & 1.41E+07 & 1.97E+06 & 1.61E+07 \\ \hline
425.112 & 397.022 & 1.39E+07 & 3.19E+06 & 1.71E+07 \\ \hline
425.112 & 397.022 & 1.37E+07 & 3.65E+06 & 1.73E+07 \\ \hline
425.112 & 397.022 & 1.35E+07 & 2.37E+06 & 1.58E+07 \\ \hline
425.112 & 397.022 & 1.33E+07 & 2.24E+06 & 1.55E+07 \\ \hline
425.112 & 397.022 & 1.33E+07 & 3.55E+06 & 1.69E+07 \\ \hline
425.112 & 397.022 & 1.35E+07 & 7.57E+06 & 2.11E+07 \\ \hline
425.112 & 397.022 & 1.34E+07 & 1.41E+07 & 2.75E+07 \\ \hline
425.112 & 397.022 & 1.33E+07 & 1.70E+07 & 3.03E+07 \\ \hline
425.112 & 397.022 & 1.34E+07 & 1.54E+07 & 2.88E+07 \\ \hline
414.786 & 397.022 & 1.37E+07 & 1.06E+07 & 2.43E+07 \\ \hline
\end{tabular}
\end{table}
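The hourly report excerpted in Table~\ref{tab:edata} is plain tabular data, so it can be loaded directly for post-processing. The sketch below is hypothetical (column names are ours, and it assumes the report has been exported to CSV); it shows a minimal pandas pass over a few of the rows above:

```python
import io
import pandas as pd

# Hypothetical CSV export of an eQUEST hourly report; column names follow
# Table "Partial Output Data" (energy values in BTU). The rows are copied
# from the table above.
raw = io.StringIO("""lighting,vent_fan,heating,hot_water,total_end_use
23.6173,0,0,159479,159479
104.204,397.022,1.74E+07,407303,1.78E+07
338.38,397.022,1.57E+07,1.82E+06,1.75E+07
""")
df = pd.read_csv(raw)

# Scientific-notation entries such as 1.78E+07 are parsed as floats,
# so aggregate queries work directly.
peak_total = df["total_end_use"].max()
```

pandas parses the mixed fixed-point and scientific-notation entries without extra configuration, which is why no custom converters are needed here.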
\section{Introduction of Preliminary Study}
In this chapter, we present a preliminary design to demonstrate the concepts of social energy and the parallel computing paradigm. The energy prediction model implemented in this chapter is a linear regression model, and the objective function is solved by a brute-force search. The goal of this part of the work is to analyze the feasibility of the parallel computing paradigm introduced in Chapter~\ref{Chap2} and to lay a concrete foundation for further study.
The case study includes the elements of power system operation, smart building modeling, real-time pricing mechanisms, and human behavior modeling. The technical scheme of the case study is demonstrated in Fig.~\ref{fig:figx}, using the concept of parallel intelligence and control. The detailed technical description of the case study is introduced in this section. Previous literature offers many studies establishing the relationship between comfort and work performance. Indoor temperature is a crucial factor of the indoor environment, which can affect human behavior in many ways, such as perceived air quality, working performance, and thermal comfort \cite{2006effect}. The U.S. federal government regulates CO$_2$ emissions for universities; as a result, facility managers are required to meet greenhouse gas reduction regulations by targeting energy cutbacks in order to avoid financial penalties. However, some of these strategies are inefficient and may reduce comfort. Residents who feel uncomfortable are less productive, and students need more time to accomplish their tasks, which may lead to more energy consumption and environmental degradation \cite{2008managing}. The studies in \cite{2004control,2004study} show that keeping the indoor temperature within the comfortable range saves about \$2 per employee, and that working efficiency drops by roughly 2\% per \degree C when the temperature rises above 25\degree C.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.34]{figure/Figx.eps}
\caption{Blueprint of the case study.}
\label{fig:figx}
\end{figure}
\section{Preliminary Methodology}
This section proposes a smart building power consumption strategy that jointly considers the interactions between the campus power grid and the community artificial system. Work productivity is considered one of the essential factors in the methodology, because work performance varies considerably under different indoor temperatures. Power consumption is related to the indoor temperature settings and the outside temperature. Based on \cite{2010equest}, a regression model is trained to simplify the calculation of building power consumption. It should be noted that distribution locational marginal real-time pricing is implemented to reflect changes in energy usage and utility expenses. The DLMP calculation is developed in the MATLAB environment based on the MATPOWER simulation toolbox \cite{2011matpower}.
\subsection{Human Work Performance Model}
Indoor temperature is one of the fundamental characteristics of the indoor environment. It can be controlled with a degree of accuracy depending on the building and its HVAC system. The indoor temperature affects thermal comfort, perceived air quality, sick building syndrome symptoms and performance at work \cite{2004control}. In this study, the work productivity $P$ refers to the effect of temperature on performance at office work \cite{2006effect}. According to \cite{2002school}, the ideal temperature range for schools is between 68\degree F and 74\degree F. Our target building is the fitness center of the University of Denver; therefore, the temperature bracket is designed from 64\degree F to 79\degree F.
\begin{eqnarray}\label{equ:eff}
P &=& 0.1647524 \times ((T_{1}-32) \times 5/9) \nonumber\\
&& -\ 0.0058274 \times ((T_{1}-32) \times 5/9)^2 \nonumber\\
&& +\ 0.0000623 \times ((T_{1}-32) \times 5/9)^3 \nonumber\\
&& -\ 0.4685328\\
&& \text{s.t.}\quad 64 \leqslant T_1 \leqslant 79 \nonumber
\end{eqnarray}
where $P$ is the work productivity and $T_{1}$ is the indoor temperature setting in \degree F. Equation~(\ref{equ:eff}) will be used later to calculate the total building cost. Fig.~\ref{fig:eff} shows the relationship between the indoor temperature and the corresponding work efficiency.
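As a sanity check, the cubic productivity model above can be evaluated directly. The sketch below (plain Python; the function name is ours) recovers the productivity peak at 71\degree F with value 0.9991, consistent with Case 1 later in the chapter:

```python
def work_productivity(t_f):
    """Work productivity P as a cubic in indoor temperature.
    t_f: indoor temperature setting in degrees Fahrenheit, 64..79."""
    if not (64 <= t_f <= 79):
        raise ValueError("temperature setting outside the designed bracket")
    t_c = (t_f - 32) * 5.0 / 9.0          # convert to Celsius first
    return (0.1647524 * t_c
            - 0.0058274 * t_c**2
            + 0.0000623 * t_c**3
            - 0.4685328)

# Locate the peak of the cubic on the admissible integer grid.
best = max(range(64, 80), key=work_productivity)
```

Evaluating over the whole bracket confirms that productivity falls off toward both 64\degree F and 79\degree F, which is what drives the cost trade-off studied below.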
\begin{figure}[!h]
\centering
\includegraphics[scale=1]{figure/efficiency.eps}
\caption{Work performance w.r.t. indoor temperature.}
\label{fig:eff}
\end{figure}
\subsection{The Regression Model for Predicting Building Power Consumption}
We utilize eQUEST \cite{2010equest} as the building simulation tool, which provides comprehensive and detailed calculations for HVAC systems and simplistic assumptions for lighting and plug loads. The advanced hourly report provides sufficient information for training the regression model that predicts power consumption. The Daniel L. Ritchie Center is the fitness center of the University of Denver (DU). More than 345,000 people visit this 41,000 m$^2$ building every academic year. In order to simulate its power consumption profile, the model built in eQUEST is 41,000 m$^2$ as well, with the building type set to fitness center. Table~\ref{tab:eQUEST1} shows part of the simulation results. The simulation generates much more information; however, due to space limitations, only the following is included here. The fourth column shows the change of the outside temperature.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Part of the simulation results}
\label{tab:eQUEST1}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Month&Day&Hour&Temp (F)&Energy (BTU)\\
\hline
7 & 26 & 15 & 95 & $6.59 \times 10^6$\\
7 & 26 & 16 & 94 & $8.66 \times 10^6$\\
7 & 26 & 17 & 94 & $1.26 \times 10^7$\\
7 & 26 & 18 & 93 & $1.62 \times 10^7$\\
7 & 26 & 19 & 88 & $1.76 \times 10^7$\\
7 & 26 & 20 & 85 & $1.62 \times 10^7$\\
\hline
\end{tabular}
}
\end{table}
It is difficult to train a single model that fits the energy usage of the entire year. Therefore, to ensure accuracy, the regression model in this chapter focuses on cooling days. The designed indoor temperature varies from 64\degree F to 79\degree F, the same range as in the previous section. Time, inside temperature, and outside temperature are chosen from the simulation results to train the model, which is shown in (\ref{equ:regress}).
\begin{eqnarray}\label{equ:regress}
E_n &=& 2.0443 \times t_1 + 1.8823 \times T_2 \nonumber\\
&& -\ 1.6305 \times T_1 + 2.1181 \times 10^6\\
&& \text{s.t.}\quad 10 \leqslant t_1 \leqslant 21 \nonumber\\
&& \phantom{\text{s.t.}\quad} 64 \leqslant T_1 \leqslant 79 \nonumber\\
&& \phantom{\text{s.t.}\quad} 50 < T_2 < 100 \nonumber
\end{eqnarray}
where $E_n$ is the hourly consumed energy in British thermal units (BTU), predicted by the pre-trained regression model; $t_1$ stands for the hour of the day; $T_2$ represents the outside temperature from the weather data; and $T_1$ is the indoor temperature.
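The regression above can be transcribed directly, with its validity range enforced as runtime checks (a sketch; the helper name is ours):

```python
def predict_energy_btu(t1_hour, T1_indoor_f, T2_outdoor_f):
    """Hourly end-use energy (BTU) from the trained linear regression."""
    # Validity range of the fitted model: cooling-day hours only.
    assert 10 <= t1_hour <= 21, "model fitted for 10:00-21:00 only"
    assert 64 <= T1_indoor_f <= 79, "indoor setting outside design bracket"
    assert 50 < T2_outdoor_f < 100, "outdoor temperature out of range"
    return (2.0443 * t1_hour
            + 1.8823 * T2_outdoor_f
            - 1.6305 * T1_indoor_f
            + 2.1181e6)
```

Note that the negative coefficient on $T_1$ means a warmer indoor setting lowers predicted cooling energy, which is the direction expected on cooling days.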
\subsection{Distribution Locational Marginal Pricing for DU}
To indicate the influence of load changes on both the financial aspect and the campus power system, distribution locational marginal pricing is introduced to associate the physical system with the artificial system. The detailed campus power system configuration is depicted in \cite{2016DU}. Fig.~\ref{fig:topology} shows the topology of the DU campus power system.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\textwidth]{figure/topology01.eps}
\caption{The network topology of DU campus grid.}
\label{fig:topology}
\end{figure}
Using a simulator developed for the DU campus power system, the AC-OPF-based DLMP is introduced to implement the real-time pricing mechanism.
\begin{eqnarray}\label{equ:ACDLPM}
\underset{p_j, p_i}{\text{arg max}}&s=\sum\limits_{j=1}^{N}(c_j - p_j) \times q_{c_j}\\\nonumber
&- \sum\limits_{i=1}^{M}(p_i - u_i) \times q_{u_i}\\\label{equ:ACconstrain1}
\text{s.t.} &\sum\limits_{i=1}^{M}q_{u_i} - \sum\limits_{j=1}^{N}q_{c_j} - L_{P}(V,\theta) = 0\\\label{equ:ACconstrain2}
&\sum\limits_{i=1}^{M}Q_{u_i} - \sum\limits_{j=1}^{N}Q_{c_j} - L_{Q}(V,\theta) = 0\\\label{equ:ACconstrain3}
&f_j(V,\theta) \leqslant f_j^{Max}\\\label{equ:ACconstrain4}
&q_{u_i}^{MIN} \leqslant q_{u_i} \leqslant q_{u_i}^{MAX}\\\label{equ:ACconstrain5}
&Q_{u_i}^{MIN} \leqslant Q_{u_i} \leqslant Q_{u_i}^{MAX}\\\label{equ:ACconstrain6}
&V_i^{MIN} \leqslant V_i \leqslant V_i^{MAX}\\\nonumber
\end{eqnarray}
where $s$ is the system social surplus gained from our DLMP calculation; $N$ is the total number of campus buildings and $j$ is the index of buildings; $M$ is the total number of electricity suppliers, including renewable energy, and $i$ is the index of those generators; $c_j$ stands for the building bid price for each power generation and $u_i$ represents the offer price from each power generation; $p_j$ is the distribution locational marginal price at each building $j$, and $p_i$ stands for the distribution locational marginal price at supply bus $i$; $q_{c_j}$ is the power demand at building $j$; $q_{u_i}$ is the power supply from bus $i$; $V$ and $\theta$ are the voltage magnitude and angle, respectively; $f_j$ stands for the power flow limit at the $j$th line; $q_{u_i}$ is the active power output from each power source, while $Q_{u_i}$ is the reactive power output from the corresponding generator; $V_i$ stands for the voltage magnitude of the $i$th bus with power injection; and $L_{P}(V,\theta)$ and $L_{Q}(V,\theta)$ are the total active power loss and reactive power loss in the DU campus power grid, respectively.
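As a minimal illustration of the DLMP objective alone (the power-flow and capacity constraints are handled by MATPOWER in the actual study), the social surplus $s$ for given prices and quantities can be computed as:

```python
def social_surplus(c, p_b, q_c, u, p_g, q_u):
    """s = sum_j (c_j - p_j) * q_cj  -  sum_i (p_i - u_i) * q_ui.
    All arguments are sequences aligned by building/supplier index;
    the function name and argument order are ours."""
    consumer_side = sum((cj - pj) * qj for cj, pj, qj in zip(c, p_b, q_c))
    supplier_side = sum((pi - ui) * qi for pi, ui, qi in zip(p_g, u, q_u))
    return consumer_side - supplier_side
```

For example, one building bidding 100 with DLMP 80 for 2 units of demand, against one supplier offering 40 with DLMP 50 for 1 unit, yields a surplus of 30.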
All the other buildings use time-varying synthetic loads to simulate real-world campus power consumption conditions.
\subsection{Overall Social Cost}
To address the overall social cost \cite{hao2016distribution,hao2017SMC}, a novel method is demonstrated in this section. Based on the aforementioned system configuration and social energy methodology, the formulation of the overall social cost comprises two major parts: the utility cost, calculated from the end-use energy and the corresponding DLMPs, and the cost of work productivity, determined by the cost of performance reduction and the number of working personnel. Equation~(\ref{equ:final}) shows the formulation; its goal is to search for the most economical combination of HVAC costs and work productivity.
\begin{eqnarray}\label{equ:final}
\underset{P_r, P, E_f}{\text{arg min}} && C = P_r \times E_n + E_f \times (1-P) \times O\\
\text{s.t.} && 0.96 < P \leqslant 1 \nonumber\\
&& 20 < P_r \leqslant 100 \nonumber
\end{eqnarray}
where $C$ is the overall cost; $P_r$ represents the DLMP at the Ritchie Center; $E_n$ is the end-use energy; $E_f$ is the annual saving per person when the working productivity is 1; $P$ stands for the actual working performance, determined by the indoor temperature setting; and $O$ is the number of occupants in the building. Detailed results are discussed as follows.
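The brute-force search used in this preliminary study can be sketched end-to-end by combining the productivity cubic, the energy regression, and the cost formulation in (\ref{equ:final}). The price and $E_f$ values below are hypothetical placeholders, not figures from the study:

```python
def productivity(t_f):
    # Cubic work-productivity model; t_f is the indoor setting in deg F.
    c = (t_f - 32) * 5.0 / 9.0
    return 0.1647524*c - 0.0058274*c**2 + 0.0000623*c**3 - 0.4685328

def energy_btu(hour, t_in, t_out):
    # Linear regression model for hourly end-use energy in BTU.
    return 2.0443*hour + 1.8823*t_out - 1.6305*t_in + 2.1181e6

def overall_cost(t_in, hour, t_out, price_per_btu, e_f_hourly, occupants):
    # C = P_r * E_n + E_f * (1 - P) * O
    return (price_per_btu * energy_btu(hour, t_in, t_out)
            + e_f_hourly * (1.0 - productivity(t_in)) * occupants)

# Exhaustive search over the admissible integer settings. The price
# (5e-5 $/BTU) and hourly productivity value (2.0 $/person) are made up
# for illustration only.
best_t = min(range(64, 80),
             key=lambda t: overall_cost(t, hour=10, t_out=90,
                                        price_per_btu=5e-5,
                                        e_f_hourly=2.0, occupants=500))
```

With these placeholder parameters the productivity term dominates, so the search lands on the productivity peak; with the study's actual DLMP figures, the balance shifts toward warmer settings, as Case 1 reports.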
\subsection{Results and Discussion}
\subsubsection{Case 1: the Influence of Temperature Settings}
In this case, different temperature settings are tested while the time and the number of people are held constant. According to the 2005--2006 academic year data, 345,000 people visited the Ritchie Center, i.e., more than 900 people per day on average. It is assumed that the occupancy equals 500 in this case, and 10 am is selected as the target time. The variable is the indoor temperature. As shown in Fig.~\ref{fig:eff}, the best work performance occurs at 71\degree F, where it equals 0.9991. However, Fig.~\ref{fig:case1} suggests that the inside temperature should be adjusted to 76\degree F. It should be pointed out that the resulting prices take into account both the utility cost and the work performance cost. The hourly costs are \$114.86 and \$116.52, respectively; the hourly difference between the two settings is \$1.66, which saves more than \$6,000 per year. The most expensive temperature setting occurs at 64\degree F, because the HVAC system needs to consume considerable energy to cool down the indoor temperature while the working efficiency is relatively low. Compared with the most expensive setting, the suggested one can save up to \$12,337 per year, which is a significant saving. When the indoor temperature is set to 76\degree F, there is a cost drop, because the needed cooling energy decreases while the working performance remains relatively high. As shown, when the temperature is higher than 76\degree F, the social cost increases due to the reduction of working productivity.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{figure/case1.eps}
\caption{The influence of temperature settings.}
\label{fig:case1}
\end{figure}
\subsubsection{Case 2: the Influence of the Number of People}
During social events held in the Ritchie Center, such as sports games and commencements, the number of people can increase dramatically. In this case, the number of people can become the major influence on the overall cost. As demonstrated in (\ref{equ:final}), the total reduction of work efficiency is scaled by the number of people. In this scenario, it is assumed that 2,500 people are inside the building. The costs corresponding to different times and temperatures are demonstrated in Fig.~\ref{fig:case2}. Compared with Fig.~\ref{fig:case1}, the most economical temperature is now 71\degree F, the temperature of best work productivity, and early morning is much cheaper for DU to hold such events. In reality, lower temperature settings lead to more power consumption, but they can increase productivity.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{figure/case2.eps}
\caption{The influence of amount of people.}
\label{fig:case2}
\end{figure}
\subsubsection{Case 3: the Comparison between Temperature and Number of People}
Comprehensive comparisons between different scenarios are conducted in this case study. Fig.~\ref{fig:case3} shows the total cost under various situations; the comparison mainly focuses on the most expensive time period of the day. Human beings generate heat even when they are sleeping; therefore, the number of people in the building is related to the energy that must be consumed to cool down the rooms. However, Fig.~\ref{fig:case3} shows that the real cost of serving 2,500 people at 71\degree F is lower than that of providing a 64\degree F building for 100 people, and the overall cost of serving 300 people can be lower than that of serving 250 people. The trade-off between work efficiency and temperature is measured and shown in this section; detailed cost information is given in Tab.~\ref{tab:cost}. There is an old management adage: ``What gets measured gets done.'' The corollary is equally true: ``What doesn't get measured gets ignored.'' Many facility managers only measure energy use. The presented methodology measures not only energy use but also work performance, which provides valuable levers for facility managers to help them make decisions.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{figure/case3.eps}
\caption{The comparison between temperature and amount of people.}
\label{fig:case3}
\end{figure}
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Costs comparison}
\label{tab:cost}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Temp (\degree F) & People & Time & Cost (\$)\\
\hline
64 & 100 & 16 & 164.64 \\
79 & 300 & 16 & 162.96 \\
71 & 250 & 16 & 164.31 \\
71 & 2500 & 16 & 164.53 \\
\hline
\end{tabular}
}
\end{table}
\subsubsection{Case 4: One-Day Simulation}
As mentioned above, the Ritchie Center is a multi-functional building; DU holds many kinds of games and commencements throughout the year. In this section, it is assumed that an event is held in the Ritchie Center from noon to 2 pm with up to 2,500 attendees. From the results of the previous case studies, the total social cost is rather high under certain temperature circumstances. 71\degree F is not only the best temperature setting for working productivity but also a practical setting in real life; therefore, the hourly cost at 71\degree F is set as the reference in this section. As shown in Fig.~\ref{fig:case4}, except for the time period from noon to 2 pm, the minimum cost (the blue line) is lower than the reference curve (the red line). Compared with the reference, the proposed methodology saves \$18.40 per day, or \$6,716 per year. The detailed hourly information is demonstrated in Tab.~\ref{tab:cost1}: the second column is the number of people inside the building, the third column is the minimum cost calculated by the proposed method, and the fourth column is the reference cost.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{figure/case4.eps}
\caption{The one day hourly cost comparison.}
\label{fig:case4}
\end{figure}
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{One-day hourly cost comparison}
\label{tab:cost1}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Time & Occupancy & Minimum (\$) & 71\degree F (\$)\\
\hline
\hline
10 & 100 & 114.26 & 116.48 \\
\hline
11 & 400 & 128.59 & 130.40 \\
\hline
12 & 2,500 & 147.21 & 147.21 \\
\hline
13 & 2,500 & 152.91 & 152.91 \\
\hline
14 & 2,500 & 158.60 & 158.60 \\
\hline
15 & 400 & 159.56 & 161.36 \\
\hline
16 & 300 & 162.37 & 164.32 \\
\hline
17 & 250 & 157.10 & 159.10 \\
\hline
18 & 200 & 154.51 & 156.59 \\
\hline
19 & 200 & 149.28 & 151.36 \\
\hline
20 & 100 & 146.64 & 148.86 \\
\hline
21 & 100 & 149.60 & 151.82 \\
\hline
\end{tabular}
}
\end{table}
\section{Section Conclusion}
In this section, a preliminary methodology to calculate the cost of social energy is demonstrated. This innovative formulation is based on the paradigm of parallel computing. The methodology can serve as the cornerstone for subsequent research that studies the combined cost of electricity and work efficiency.
\section{The Problem and Motivation}
The methodology introduced in Chapter~\ref{Chap4} mainly aims to demonstrate the novel idea proposed in this manuscript. During the later research process and the development of modern artificial intelligence and neural network technologies, we realized that the methodology in Chapter~\ref{Chap4} possesses several shortcomings, listed below:
\begin{itemize}
\item Limitation of the energy consumption model: the energy prediction model in Chapter~\ref{Chap4} is trained by linear regression, which limits the capability of the algorithm. The limitation of the linear regression model is that it can only predict a certain time period of the day, for example, 10 am to 9 pm in summer. There is an urgent demand for an advanced method to train a sophisticated model that can predict the power profile for the entire year.
\item Brute-force search method: the preliminary results shown in Chapter~\ref{Chap4} are calculated by exhaustive search. The restriction of this search methodology is that it cannot scale to large power systems: when the number of control strategies and buses increases, the calculation complexity and time escalate exponentially.
\item The methodology needs to be tested and examined in more complicated and comprehensive cases and scenarios.
\end{itemize}
At the same time, from the literature reviews in Chapter~\ref{Chap2}, Chapter~\ref{Chap3} and Chapter~\ref{Chap4}, it is clear that designing an algorithm that fulfills the following characteristics is vital.
\begin{itemize}
\item The algorithm should be able to assist power system planning and operation throughout the year and adapt to the continuously changing environment and customer energy usage.
\item The method should be capable of incentivizing building managers to participate, because the future smart grid will integrate many features such as renewable energy, real-time pricing, and distributed generators. Even within the same campus, each building may have different energy usage demands according to its own control goals.
\item The approach should offer each building manager the best control strategy corresponding to the building's type and schedule.
\item Unlike the brute-force search method, the algorithm should be capable of handling the calculation for large-scale power systems; a distributed or multi-agent methodology may be able to cope with this problem.
\item The new methodology needs to be proved and examined in more complicated and comprehensive cases and scenarios.
\end{itemize}
As society progresses and technology develops, the power system becomes more and more complicated, and humans' requirements and expectations of the system have been raised step by step. The phases of power system improvement start with the successful delivery of energy, then safe energy delivery, expansion for larger coverage and capacity, advancements in stability and resilience, improvement of energy efficiency, and finally enhancement of social welfare. The development of renewable energy resources at most universities and communities is relatively slow, because solar panels and wind turbines are expensive and most buyers cannot afford renewable energies. Meanwhile, the integration of renewable energy requires the power grid to have more modularity and adaptability, which may reduce system robustness given the uncertainty in demand and generation. On the other hand, the integration of renewable energy further fluctuates the real-time pricing (RTP) in the future smart grid. Therefore, for energy end users like universities, price is one of the crucial factors for cost saving and load reduction \cite{SGresidential}. In \cite{SGEVcharging}, the authors present a novel method for energy exchange between electric vehicles (EVs) and the power grid based on the Stackelberg game; however, for academic and commercial buildings, the impact of EVs is negligible in terms of the amount of load they consume. \cite{SGpricingmech} concentrates on the design and implementation of game theory in RTP and load stability for the future power grid, and introduces social welfare to maximize supplier and consumer profits; yet the influence of RTP itself was not studied. Some researchers conducted experiments on the relationship between RTP and user activities \cite{SGjhgPrc1,SGjhgPrc2}, but the price mechanisms used in those articles, for example a common flat rate or a quadratic cost function, will not be suitable for the future smart grid.
Energy scheduling and demand management will benefit the smart grid in many aspects, including large-scale load shifting, congestion mitigation, and reduction of power system transient stability issues.
Social cost, which includes the electricity consumption cost and the human working efficiency cost, is introduced as an advanced concept to address the importance of both energy consumption and human experience in power system management. The optimization of social cost is designated as the objective function to arrange and manage the HVAC system scheduling. Inspired by the methodology in \cite{gamemethod}, we transform the aforementioned problem into a game-theoretic one. The proposed approach solves a finite $n$-person noncooperative game in which the players are the building managers in the same local smart grid, the strategies are the HVAC indoor temperature settings, and the payoffs are the social costs according to the occupancy of those buildings and the indoor working productivity. It should be noted that distribution locational marginal pricing (DLMP) is introduced to strengthen and reflect the relationship among a player's payoff, the other players' actions, and the power system situation. To illustrate the proposed methodology, we embed the approach into an interactive real-artificial parallel computing technique. For implementing our methodology and the artificial-real management/computing system, human efficiency modeling, numerical-experiment-based smart building modeling, distribution-level real-time pricing, and the social response to pricing signals are studied. The details of the aforementioned techniques are depicted in the following sections.
The rest of the chapter is organized as follows: Sec.~\ref{sec:models} describes the computing and communication systems and the corresponding modules; Sec.~\ref{sec:game} formulates the social game; Sec.~\ref{sec:results} shows the simulation results and the related analysis; and Sec.~\ref{sec:conclusion} concludes the chapter.
\section{Establishment of Computing System}
\label{sec:models}
In this section, we provide analytical descriptions of the proposed computing and communication mechanism, the working productivity model, the energy consumption model, and the power system DLMP model. We assume that the buildings are equipped with smart meters capable of two-way communication, and that the smart management system can forecast the number of people inside each building according to its schedule.
\subsection{Real-artificial Computing and Management Paradigm}
Our proposed methodology minimizes building energy consumption and suggests to the building manager the best HVAC settings that keep the indoor environment within a comfort zone. To fulfill the functionality of this methodology, we introduce the real-artificial computing and management paradigm as the computational and communicational framework, based on the concepts in \cite{parallelsystem}. Fig.~\ref{fig:TechFrame} shows the real-artificial computing and management paradigm.
\begin{figure}[!h]
\centering
\includegraphics[width=1.00\linewidth]{figure/TechFrame01.eps}
\caption{Technical scheme of the DU social energy system case study.}
\label{fig:TechFrame}
\end{figure}
Our optimization methodology requires information interaction between physical systems, such as the HVAC system, and digital systems, such as the smart meters. The real-artificial computing and management paradigm consists of two major parts: the real physical system and the artificial virtual system. The physical system collects data such as the number of people inside a building, the building schedule for the next time period, and the current indoor temperature setting. Using the information collected by the physical system, the virtual system calculates the payoff matrix and plays the game with the other players in the same local smart grid. Through optimization, the virtual system provides feedback to the building manager to help set the optimal HVAC temperature for the next time period.
\subsection{Human Work Performance Model}
\label{sec:human}
Indoor temperature is one of the fundamental characteristics of the indoor environment. It can be controlled with a degree of accuracy depending on the building and its HVAC system. The indoor temperature affects thermal comfort, perceived air quality, sick building syndrome symptoms and performance at work \cite{2004control}. The work productivity $\xi$ refers to the effect of temperature on performance at office work \cite{2006effect}, and the human work performance model can be expressed as
\begin{eqnarray}\label{equ:eff}
\xi &=& g(T_{in})\\
&=& 0.1647524 \cdot (\frac{5}{9} \cdot (T_{in}-32))-\nonumber\\
&& 0.0058274 \cdot (\frac{5}{9} \cdot (T_{in}-32))^2+ \nonumber\\
&& 0.0000623 \cdot (\frac{5}{9} \cdot (T_{in}-32))^3-0.4685328 \nonumber
\end{eqnarray}
where $\xi$ is the work productivity and $T_{in}$ is the indoor temperature setting, which satisfies $T_{l} \leqslant T_{in} \leqslant T_{u}$, with $T_{l}$ and $T_{u}$ the temperature lower and upper limits, respectively. Fig.~\ref{fig:eff} shows the relationship between the indoor temperature and the work efficiency. It should be pointed out that, although according to \cite{2002school} the ideal temperature range for university buildings is between 68\degree F and 74\degree F, the temperature bracket in our study is designed between 64\degree F and 79\degree F.
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{figure/efficiency.eps}
\caption{Work efficiency as a function of indoor temperature.}
\label{fig:eff}
\end{figure}
Let $\xi_{k,t}$ and $x_{k,t}$ denote the work productivity and the indoor temperature setting of building $k$ at time $t$, respectively; then (\ref{equ:eff}) can be rewritten as
\begin{eqnarray}\label{equ:efffunc}
\xi_{k,t} = g(x_{k,t})
\end{eqnarray}
In addition, we denote $\boldsymbol{\xi}_t=[\xi_{1,t} \; \xi_{2,t} \dots \xi_{n,t}]^\intercal$ and $\mathbf{x}_t=[x_{1,t} \; x_{2,t} \dots x_{n,t}]^\intercal$, where $\mathbf{x}_t$ is the control variable in this case study.
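In this vector notation, the productivity map $g(\cdot)$ applies element-wise across buildings. A minimal NumPy sketch (the three settings are chosen arbitrarily for an $n=3$ example):

```python
import numpy as np

def g(x):
    """Vectorized work-productivity curve; x holds indoor settings in deg F."""
    c = (np.asarray(x, dtype=float) - 32.0) * 5.0 / 9.0
    return 0.1647524*c - 0.0058274*c**2 + 0.0000623*c**3 - 0.4685328

x_t = np.array([71.0, 68.0, 76.0])   # hypothetical settings for 3 buildings
xi_t = g(x_t)                        # productivity vector for time t
```

Because the map is element-wise, evaluating all buildings at once is a single array operation, which matters once the control vector covers the whole campus.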
\subsection{The Neural Network Based Energy Consumption Profile Models}
\label{sec:consumption}
We utilize eQUEST \cite{2010equest} as the building simulation tool, which provides comprehensive and detailed calculations for HVAC systems and simplistic assumptions for lighting and plug-in loads according to the size and type of the building. The hourly report from the simulation software provides sufficient information for training the neural network models to predict power consumption. Compared with the previous chapters, the capability and accuracy of the building energy consumption model are substantially improved. Table~\ref{tab:eQUEST} shows partial simulation results of the hourly energy consumption on July 1st at the Ritchie Center.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.1}
\caption{Partial simulation results of the Ritchie Center on July 1st from 15:00 to 20:00}
\label{tab:eQUEST}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Hour&T$_{in}$ (\degree F)&T$_{out}$ (\degree F)&Energy (BTU)\\
\hline
15 &64& 94 & $1.08\times10^7$\\
16 &64& 92 & $1.31\times10^7$\\
17 &64& 92 & $1.70\times10^7$\\
18 &64& 93 & $2.03\times10^7$\\
19 &64& 89 & $2.22\times10^7$\\
20 &64& 85 & $2.19\times10^7$\\
\hline
\end{tabular}
\end{table}
One year of simulated data under different indoor temperature settings is generated for each building in the same smart grid.
For each building, a two-layer feed-forward network with sigmoid hidden neurons and linear output neurons is trained using the Levenberg-Marquardt backpropagation algorithm. Hour, indoor temperature setting, and outdoor temperature are the inputs, and the energy consumption is the output of the neural network model. In our study, 70\%, 15\%, and 15\% of the sample data are randomly selected as the training, validation, and testing data, respectively.
The average R-value is 0.92, which demonstrates that the neural network models for predicting energy consumption are acceptable. Thus, the energy consumption of building $k$ at time $t$, denoted as $e_{k,t}$, can be expressed as a function of the indoor temperature setting $x_{k,t}$, the time $t$ and the outdoor temperature $T_{out,t}$:
\begin{eqnarray}\label{equ:consump}
&e_{k,t} = h_k (x_{k,t}, t, T_{out,t})\\
&\mathbf{e}_t = H (\mathbf{x}_t, t, T_{out,t}) \nonumber
\end{eqnarray}
where $\mathbf{e}_t=[e_{1,t} \; e_{2,t} \cdots e_{57,t}]^\intercal$.
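The training pipeline can be sketched as follows. Since Levenberg-Marquardt training is a MATLAB toolbox feature, the scikit-learn sketch below substitutes the L-BFGS solver, uses synthetic stand-in data instead of actual eQUEST output, and keeps the same ingredients: inputs (hour, indoor setting, outdoor temperature), one sigmoid hidden layer with a linear output, and a three-way data split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
hour  = rng.integers(0, 24, n).astype(float)
t_in  = rng.uniform(64, 79, n)
t_out = rng.uniform(50, 100, n)
# Synthetic stand-in for eQUEST hourly energy (in MBTU): cooling load grows
# with the outdoor/indoor gap, plus a daily cycle and noise.
y = 2.0 + 0.019*(t_out - t_in) + 0.05*np.sin(2*np.pi*hour/24) \
    + rng.normal(0, 0.01, n)

X = np.column_stack([hour, t_in, t_out])
# 70/15/15 train/validation/test split.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.30,
                                              random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.50,
                                            random_state=0)

scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

y_hat = model.predict(scaler.transform(X_te))
r = np.corrcoef(y_hat, y_te)[0, 1]   # R-value on held-out data
```

Standardizing the inputs is essential here: with raw hours and Fahrenheit temperatures on different scales, the sigmoid hidden layer saturates and the fit degrades.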
\subsection{Distribution Locational Marginal Pricing for DU Campus Grid}
\label{sec:price}
To indicate the influence of load variation on both the financial aspect and the campus power system, distribution locational marginal pricing \cite{DLMPmethod} is introduced to associate the physical system with the artificial system.
\begin{eqnarray}\label{equ:ACDLPM}
\underset{p^b_j, p^g_i}{\text{arg max}}&s=\sum\limits_{j=1}^{N}(c_j - p^b_j) \cdot q_{c_j}\\\nonumber
&- \sum\limits_{i=1}^{M}(p^g_i - u_i) \cdot q_{u_i}\\\label{equ:ACconstrain1}
\text{s.t.} &\sum\limits_{i=1}^{M}q_{u_i} - \sum\limits_{j=1}^{N}q_{c_j} - L_{P}(V,\theta) = 0\\\label{equ:ACconstrain2}
&\sum\limits_{i=1}^{M}Q_{u_i} - \sum\limits_{j=1}^{N}Q_{c_j} - L_{Q}(V,\theta) = 0\\\label{equ:ACconstrain3}
&f_j(V,\theta) \leqslant f_j^{Max}\\\label{equ:ACconstrain4}
&q_{u_i}^{MIN} \leqslant q_{u_i} \leqslant q_{u_i}^{MAX}\\\label{equ:ACconstrain5}
&Q_{u_i}^{MIN} \leqslant Q_{u_i} \leqslant Q_{u_i}^{MAX}\\\label{equ:ACconstrain6}
&V_i^{MIN} \leqslant V_i \leqslant V_i^{MAX}
\end{eqnarray}
where $s$ denotes the system social surplus obtained from our DLMP calculation; $N$ is the number of buildings in the smart grid and $j$ is the building index; $M$ is the total number of electricity suppliers and $i$ is the generator index; $c_j$ stands for the bid price of building $j$ and $u_i$ represents the offer price of generator $i$; $p^b_j$ is the distribution locational marginal price at building $j$, and $p^g_i$ is the distribution locational marginal price at supply bus $i$; $q_{c_j}$ is the power demand at building $j$; $q_{u_i}$ is the power supply from bus $i$; $V$ and $\theta$ are the voltage magnitude and angle, respectively; $f_j$ stands for the power flow on the $j$th line, which is limited by $f_j^{Max}$ A; $q_{u_i}$ is the active power output of each power source, limited by the maximum capacity $q_{u_i}^{MAX}$ MW, while $Q_{u_i}$ is the reactive power output of the corresponding generator, limited by the maximum capacity $Q_{u_i}^{MAX}$ MVar; $V_i$ stands for the voltage magnitude of the $i$th bus with power injection, bounded in this case study by $V_i^{MIN}$ pu and $V_i^{MAX}$ pu; and $L_{P}(V,\theta)$ and $L_{Q}(V,\theta)$ are the total active and reactive power losses in the smart grid, respectively.
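To illustrate the pricing principle behind (\ref{equ:ACDLPM}) on a drastically simplified case, the sketch below solves a lossless single-bus dispatch with SciPy and reads the marginal price off the dual of the power-balance constraint. The two suppliers, their offer prices, and the demand level are invented for illustration; the full AC-OPF with losses, line limits, and voltage constraints is not modeled here.

```python
import numpy as np
from scipy.optimize import linprog

# Lossless single-bus toy market: two suppliers with offer prices u_i ($/MWh)
# and capacities q_u^MAX, serving a fixed 8 MW demand.  The shadow price of the
# power-balance constraint is the single-bus analogue of a DLMP.
u = np.array([20.0, 30.0])       # offer prices (invented)
cap = [(0.0, 5.0), (0.0, 10.0)]  # (q^MIN, q^MAX) per supplier
demand = 8.0

res = linprog(c=u,                         # minimize total supply cost
              A_eq=[[1.0, 1.0]], b_eq=[demand],
              bounds=cap, method="highs")

dispatch = res.x                           # cheap unit at capacity, dear unit marginal
price = abs(res.eqlin.marginals[0])        # dual of the balance constraint
print(dispatch, price)
```

In this toy case the cheaper unit runs at its 5 MW cap, the 30 $/MWh unit covers the remaining 3 MW, and the balance-constraint dual recovers the 30 $/MWh marginal price.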
For the convenience of calculations, we express the DLMP of the target buildings $\mathbf{p}_t = [p_{1,t} \; p_{2,t} \dots p_{n,t}]^\intercal$ as a highly nonlinear and complex function $\Gamma(\cdot)$ of energy consumption $\mathbf{e}_t$, and thus as a function of the control variable $\mathbf{x}_t$ as
\begin{eqnarray}\label{equ:dlmpfunc}
\mathbf{p}_t = \Gamma (\mathbf{e}_t) = \Gamma({H(\mathbf{x}_t, t, T_{out,t})}).
\end{eqnarray}
\subsection{Overall Social Cost}
To address the overall social cost, this section presents a novel method. Based on the aforementioned real-artificial computing and management paradigm and the system configuration, the overall social cost comprises two major parts: the utility cost, calculated from the end-use energy and the corresponding DLMPs; and the cost of work productivity, determined by the cost of performance reduction and the number of occupants. Equation (\ref{equ:scost}) defines the overall social cost $\psi_t$ at time $t$ for the target buildings in the smart grid:
\begin{eqnarray}\label{equ:scost}
\psi_t & = & \sum_{k = 1}^{n} [p_{k,t} \cdot e_{k,t} + w \cdot \alpha (1-\xi_{k,t}) \cdot o_{k,t}]\\ \nonumber
& = & \mathbf{p}_t \cdot \mathbf{e}_t + w \cdot \alpha (\mathbf{1}-\boldsymbol{\xi}_t) \cdot \mathbf{o}_t\\ \nonumber
& = & \Gamma [H(\mathbf{x}_t,t,T_{out,t})] \cdot H(\mathbf{x}_t,t,T_{out,t}) \\ \nonumber
& & + w \cdot \alpha[1-g (\mathbf{x}_t)] \cdot \mathbf{o}_t\\
& = & \Psi (\mathbf{x}_t, t, T_{out,t}, \mathbf{o}_t)
\end{eqnarray}
where $\psi_t$ is the overall cost at time $t$, $w$ is the weight of the efficiency component, set to $0.1$, $\alpha$ is the hourly monetary value of each occupant's work at full productivity ($\xi = 1$), and $\mathbf{o}_t = [o_{1,t} \; o_{2,t} \cdots o_{n,t}]^\intercal$, where $o_{k,t}$ is the number of occupants in building $k$ at time $t$.
It should be noted that the outdoor temperature $T_{out,t}$ and the number of occupants $\mathbf{o}_t$ are obtained from historical data and are directly related to the time $t$ in a day. Hence, at a given time $t$, $T_{out,t}$ and $\mathbf{o}_t$ are known, and the overall social cost is a function of the indoor temperature settings $\mathbf{x}_t$ alone. The goal is to find the best indoor temperature settings $\hat{\mathbf{x}}_t$ at time $t$ that yield the most economical combination of HVAC cost and work productivity. The problem can therefore be formulated as
\begin{eqnarray}\label{equ:mincost}
\hat{\mathbf{x}}_t &=& \underset{\mathbf{x}_t}{\text{arg min}} \; \Psi (\mathbf{x}_t) \\ \nonumber
& \text{s.t.} & T_{l} \leqslant x_{k,t} \leqslant T_{u}
\end{eqnarray}
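Since the strategy space is a finite set of temperature settings, (\ref{equ:mincost}) can in principle be solved by enumerating all setting combinations. The sketch below does this for two hypothetical buildings; the consumption model $h$, the price map $\Gamma$, and the productivity curve $\xi$ are toy stand-ins for the trained models, not the ones fitted in this study.

```python
import itertools
import numpy as np

# Toy stand-ins (all invented) for the trained pieces of Psi:
def h(x, t, t_out):                 # h_k: energy rises as set-point drops below T_out
    return 1.0 + 0.2 * max(t_out - x, 0.0)

def gamma(e_total):                 # Gamma: price increases with total demand
    return 0.05 + 0.01 * e_total

def xi(x):                          # productivity peaks near 72 F
    return 1.0 - 0.002 * (x - 72.0) ** 2

def social_cost(x_vec, t, t_out, occupants, w=0.1, alpha=20.0):
    e = np.array([h(x, t, t_out) for x in x_vec])
    p = gamma(e.sum())              # one shared price for simplicity
    return float(p * e.sum()
                 + sum(w * alpha * (1 - xi(x)) * o
                       for x, o in zip(x_vec, occupants)))

settings = range(67, 76)            # allowed set-points T_l..T_u in deg F
occupants = [120, 40]
best = min(itertools.product(settings, repeat=2),
           key=lambda xv: social_cost(xv, t=15, t_out=94.0, occupants=occupants))
print(best)
```

This brute-force enumeration grows exponentially with the number of buildings, which is precisely the scaling problem the later sections address.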
\section{Social Cost Game}
\label{sec:game}
Based on the models and the control and management mechanism depicted in Sec.\ref{sec:models}, the optimization of HVAC system settings can be formulated. Buildings consume up to $45\%$ of global energy, and within a building the HVAC system is the largest energy user; optimizing building HVAC energy usage is therefore crucial not only for the environment but also for the occupants. Our proposed methodology solves a finite $N$-person game: the game is designed to minimize the social cost within a smart grid, and the players are the building managers, each of whom aims to minimize the energy cost while keeping the indoor temperature in a reasonable range. Because of the temperature control bracket and the accuracy of the HVAC system, the number of strategies for each player is finite. The payoffs are affected by the utility price presented in Sec.\ref{sec:price}, the HVAC energy consumption modeled in Sec.\ref{sec:consumption}, and the working efficiency of building occupants, which is related to the indoor temperature and analyzed in Sec.\ref{sec:human}.
We cast the social cost optimization problem as an $N$-person, finite, non-cooperative game, expressed as follows:
\begin{equation}\label{eq:game1}
\psi(x_{i,t}) = (\mathbf{N},\{\mathbf{S}^{i,t}\}_{i\in{N}},\{\mathbf{\delta}^{i,t}\}_{i\in{N}})
\end{equation}
where $\mathbf{N}$ is the player set, $\mathbf{S}^{i,t}$ is the $i$th player's finite pure-strategy set at time $t$, and $\mathbf{\delta}^{i,t}$ is the payoff set corresponding to the pure strategies at time $t$. Every finite game of this form has at least one Nash equilibrium (NE); at an NE, no player can obtain a better payoff by unilaterally deviating from its NE strategy while the other players keep theirs:
\begin{equation}
\delta^{i,t}(\alpha^*) \geqslant \delta^{i,t}(\alpha^{*-i},\alpha^i), \forall i \in N, \forall \alpha^i \in \Sigma^i
\end{equation}
where $*$ denotes the strategy at the NE, $-i$ denotes the players other than player $i$, and $\Sigma^i$ is the mixed-strategy space of player $i$.
However, the DLMP mechanism described in Sec.\ref{sec:price} is a double-auction pricing paradigm whose nodal prices are calculated by alternating current optimal power flow (AC-OPF). If players had to communicate with energy sellers throughout the game, the whole process would be prohibitively long, rendering the algorithm infeasible in practice. Therefore, we adopt the price estimation tool from \cite{SGprc} so that the managers can estimate their nodal prices and payoffs. An effective prediction refers to the prices at the same hour yesterday, the day before yesterday, and the same day last week.
Therefore, the predicted price for an upcoming hour $t$ on day $d$ is obtained as follows:
\begin{equation}\label{eq:conspric}
\hat{p}^t[d] = k_1p^t[d-1]+k_2p^t[d-2]+k_7p^t[d-7].
\end{equation}
Here, $p^t[d-1]$, $p^t[d-2]$, and $p^t[d-7]$ denote the prices at hour $t$ yesterday, the day before yesterday, and the same day last week, respectively. We choose the All-Days-Same coefficients, meaning that the parameters $k_1$, $k_2$, and $k_7$ remain the same for every day \cite{SGprc}, and we set $k_1=0.8765$, $k_2=0$, $k_7=0.1025$. With these coefficients, the price prediction error of (\ref{eq:conspric}) relative to the DLMPs is $16\%$ on average. After the managers reach the NE, their energy usage schedules are sent to the DLMP module in the artificial system; the virtual system then calculates the actual DLMPs and feeds the nodal prices back to the managers through the physical system. The DLMP results are substituted for $[d-1]$, $[d-2]$, and $[d-7]$ sequentially until the difference between the managers' estimates and the DLMPs falls within a predetermined price threshold $\epsilon$. While the prices are updating, each manager also updates its strategy and plays the game with the others.
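The predictor in (\ref{eq:conspric}) is a one-line weighted sum over three historical prices. A minimal sketch with the stated coefficients follows; the price history values are invented purely for illustration.

```python
def predict_price(history, d, t, k1=0.8765, k2=0.0, k7=0.1025):
    """Estimate the hour-t price on day d from the same hour on days
    d-1, d-2, and d-7, using the All-Days-Same coefficients."""
    return (k1 * history[d - 1][t]
            + k2 * history[d - 2][t]
            + k7 * history[d - 7][t])

# Invented 8-day x 24-hour price history ($/kWh) purely for illustration.
history = [[0.05 + 0.01 * (day % 3) for _ in range(24)] for day in range(8)]
p_hat = predict_price(history, d=7, t=15)
print(p_hat)
```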
\begin{algorithm}
\label{alg:alg1}
\begin{algorithmic}[1]
\State Random initialization
\State Based on (\ref{eq:conspric}), calculate the social cost and the payoff matrix.
\State \textbf{Repeat}
\State Each player takes turns and updates the payoff matrix.
\State Calculate the best response (BR) strategies.
\State According to the BR, run the DLMP calculation to obtain prices.
\State Each player broadcasts the updated price and payoff matrix.
\State \textbf{End}
\State Update schedule for next time interval.
\end{algorithmic}
\end{algorithm}
Assuming $\psi^{i,t}$ is the optimal HVAC-setting problem for player $i$, we obtain the following nonlinear optimization problem for each player in the smart grid:
\begin{eqnarray}\nonumber
&(\psi^{i,t})\\
&\text{min}&\gamma^{i,t}-\delta^{i,t}(\alpha)\\
&\text{s.t.}&\delta^{i,t}(\alpha^{-i,t},s^{i,t}_j)-\gamma^{i,t} \leqslant 0, \quad \forall j = 1,\dots,m^i\\
&&\sum_{j=1}^{m^i}\alpha^{i,t}_j = 1\\
&&\alpha^{i,t}_j \geqslant 0 \quad \forall j = 1,\dots,m^i
\end{eqnarray}
where $\gamma^{i,t}$ is the optimal social cost corresponding to the best indoor temperature setting, and $(\alpha^{-i,t},s^{i,t}_j)$ denotes the strategy profile in which player $i$ plays pure strategy $s^{i,t}_j$ while the other players' strategies at time $t$ are $\alpha^{-i,t}$. According to \cite{gamemethod}, after applying the KKT conditions, finding a Nash equilibrium of game (\ref{eq:game1}) can be transformed into solving a system of equalities and inequalities.
\begin{lemma}\label{lemma}
A necessary and sufficient condition for game $\psi$ to have a Nash equilibrium strategy set $\alpha$ is
\begin{eqnarray}\label{eq:lemma1}
&\gamma^{i,t}-\delta^{i,t}(\alpha) = 0 \quad \forall i \in N\\\label{eq:lemma2}
&\delta^{i,t}(\alpha^{-i,t},s^{i,t}_j)-\gamma^{i,t} \leqslant 0, \\\nonumber
&\forall j = 1,\dots,m^i,\forall i \in N\\\label{eq:lemma3}
&\sum_{j=1}^{m^i}\alpha^{i,t}_j = 1,\forall i \in N\\\label{eq:lemma4}
&\alpha^{i,t}_j \geqslant 0 \quad \forall j = 1,\dots,m^i, \forall i \in N
\end{eqnarray}
\end{lemma}
From (\ref{eq:lemma1}), we see that every player's best-response strategy in the smart grid is attained at the Nash equilibrium. Constraints (\ref{eq:lemma2})--(\ref{eq:lemma4}) are the equality and inequality constraints of the optimization, and (\ref{eq:lemma2}) means that no mixed-strategy combination yields a better social cost than the best response. Therefore, the optimal solution of the nonlinear HVAC scheduling problem is the Nash equilibrium strategy $\alpha$.
\begin{theorem}
A necessary and sufficient condition for $\alpha^*$ to be a Nash equilibrium of game $\Psi$ is that it is an optimal solution of the following minimization problem:
\begin{eqnarray}\nonumber
&(\Psi)\\\nonumber
&\text{min}&\sum_{i \in N}\gamma^{i,t}-\delta^{i,t}(\alpha)\\\nonumber
&\text{s.t.}&\delta^{i,t}(\alpha^{-i,t},s^{i,t}_j)-\gamma^{i,t} \leqslant 0,\\\nonumber
&& \forall j = 1,\dots,m^i, \forall i \in N\\\nonumber
&&\sum_{j=1}^{m^i}\alpha^{i,t}_j = 1, \quad \forall i \in N \\\nonumber
&&\alpha^{i,t}_j \geqslant 0 \quad \forall j = 1,\dots,m^i, \forall i \in N
\end{eqnarray}
\end{theorem}
The optimal value of the proposed social cost game is 0, and the value of $\gamma^{i,t}$ at the optimal point gives the expected payoff of player $i$ at time $t$.
\subsection{Simulation Results}\label{sec:results}
\subsubsection{Simulation Testbed}
As in the previous chapters, the University of Denver campus power grid is used as the simulation testbed. There are $60$ buses on campus: $57$ are load buses and the other $3$ are power sources. We assume that all campus buildings are equipped with smart meters with two-way communication over a campus local network and that the proposed social game algorithm has been deployed. The campus power consumption varies between $2$ MW and $11$ MW. For the three power sources, the maximum active and reactive power supplies are set to $12.7$ MW and $11$ MVar, respectively. The bus voltages are limited by $V_i^{MIN} = 0.90$ pu and $V_i^{MAX}=1.10$ pu, $i = 1,\dots,60$. A detailed campus power system configuration is described in \cite{2016DU}. Fig.~\ref{fig:topology} shows the topology of the DU campus power system.
\begin{figure}[!h] \hspace{-20pt}
\centering
\includegraphics[width=0.9\textwidth]{figure/topology01.eps}
\caption{The network topology of DU campus grid.}
\label{fig:topology}
\end{figure}
\subsubsection{Results}\label{sec:gamer}
In this chapter, bus $2$, bus $59$, bus $41$, bus $24$, and bus $13$ are selected as the social cost game players and are highlighted in Fig.\ref{fig:topology}. These buildings, which include a multi-functional fitness center with a large arena and several academic buildings, consume the majority of the campus energy and represent common building types on a university campus. The number of people inside each building is influenced by several factors: the maximum building capacity, the building type, and the events held inside the building. According to the events and schedules, the number of people in the five buildings is shown in Fig.~\ref{fig:amoutpeople}.
\begin{figure}[h]
\centering
\includegraphics*[width=0.7\linewidth]{figure/figure1.eps}
\caption{The number of people inside each building}
\label{fig:amoutpeople}
\end{figure}
Bus $2$ hosts large events such as graduation commencements and athletic games, so the number of people can vary widely and the rate of change of the population can fluctuate dramatically. Bus $59$ is the law school, a large building that hosts relatively small conferences. Bus $41$ is a four-story academic building. Bus $24$ is a building housing education and office activities. Bus $13$ contains many chemistry labs. These cases demonstrate that the proposed methodology applies to different building types and can handle dynamic optimization problems in which the load and the number of people are constantly changing. In this section, a typical summer day is selected; the outdoor temperature is shown in Fig.~\ref{fig:amoutpeople}.
\begin{figure}[h]
\centering
\includegraphics*[width=0.7\linewidth]{figure/figure2.eps}
\caption{The results comparison}
\label{fig:result}
\end{figure}
Fig.\ref{fig:result} shows the simulation results for each of the five selected buses and the overall social cost. For bus $2$, the largest building on campus, keeping the daily indoor temperature at $71\degree F$ under a constant control strategy incurs $\$3,622.94$ in social cost. The best-response strategy incurs $\$2,911.39$, so the proposed methodology saves the manager up to $\$711.55$ on the test day. Bus $13$ has the smallest floor area and capacity of all; the constant-temperature strategy incurs $\$89.60$ when the HVAC system is set at $75\degree F$. Compared with the optimal strategy, which incurs $\$53.50$, the constant control method makes the manager pay $67\%$ more. For all five selected buses, the total optimal daily cost is $\$3,655.6$ at the Nash equilibrium point, while the costs for constant settings of $67\degree F$, $71\degree F$, and $75\degree F$ are $\$5,686.0$, $\$4,744.0$, and $\$4,215.2$, respectively. The simulation results indicate that the Nash equilibrium point calculated by the proposed methodology is the global optimum of the social cost minimization problem. These outcomes support using the methodology in buildings with central HVAC systems, by adaptively computing the optimal HVAC control settings based on DLMP and occupancy.
\section{Multi-Agent Reinforcement Learning for Social Cost Game}\label{sec:RL}
The social cost game described in Sec.~\ref{sec:game} is limited by its algorithmic complexity: when the number of indoor temperature control strategies or the number of distribution nodes increases, the computation time grows exponentially. This shortcoming makes the social energy cost game time-consuming, even with modern CPU and GPU capabilities. Hence, an advanced algorithm is needed to simplify the game strategy pool and adapt to the constantly changing environment, so that the computation time increases linearly rather than exponentially as the number of control strategies or distribution nodes grows. Therefore, we implement Markov decision process based multi-agent reinforcement learning to address this problem.
\subsection{Markov Decision Process for Social Energy Cost Game}\label{sec:MDP}
The MDP framework has the following components: (1) the states of the system, (2) the actions in each state, (3) the transition probabilities, (4) the transition rewards for each action from the former state to the next state, and (5) a policy. As demonstrated in the previous chapters and sections, the intelligent artificial model we build can represent the system, so we can observe the Markov chain by observing the system model. These ideas are explained below.
\begin{itemize}
\item State: For each player in the distribution power system who plays the social cost game, the ``state'' of the system is a set of measurable factors that describe the player's cost. In our case, the state is described in terms of the social energy cost, which consists of the energy cost (the DLMP multiplied by the energy usage) and the cost of working efficiency reduction (the number of indoor occupants multiplied by the reduction in working efficiency). The system is dynamic, so any subtle change can induce a change of state.
\item Action: The actions of the MDP are the players' strategies; the strategies allowed in the social cost game are the actions of the Markov chain. A change of state can induce a change of action, and vice versa.
\item Transition Probability: Assume that in state $i$ action $a$ is selected as the optimal game strategy and that state $j$ is the next state. Let $p(i,a,j)$ denote the probability of transferring from state $i$ to state $j$ under action $a$.
\item Immediate Rewards: The player receives an instant reward (which could be either positive or negative) when it transitions from one state to another, which is represented by $r(i,a,j)$.
\item Policy: The policy defines the actions (strategy) to be chosen in every state visited by the system.
\item Time of Transition: Some studies also introduce the time of transition. In our case, the transition time is assumed to be unity, so it is not a factor in the MDP analysis in this manuscript.
\end{itemize}
In our study, the discounted reward is introduced to define the reward that the corresponding action (strategy) yields for the player. Let $x_s$ denote the state of the system before the $s$th transition. The objective of the discounted-reward MDP is to find the strategy that maximizes the discounted reward starting from every state. The discounted reward from state $i$ is defined as:
\begin{equation}\label{equ:MDP1}
\zeta_i = \lim_{k \rightarrow \infty} E \left[\sum_{s=1}^{k} \tau^{s-1} r(x_s,\pi(x_s),x_{s+1}) \mid x_1 = i\right]
\end{equation}
where $\tau$ denotes the discount factor, with $0 \leq \tau \leq 1$, and $\pi$ is the policy. An alternative expression of (\ref{equ:MDP1}) is:
\begin{equation}\label{equ:MDP2}
\zeta_i = E[r(x_1,\pi(x_1),x_2) + \tau r(x_2,\pi(x_2),x_3) + \tau^2 r(x_3,\pi(x_3),x_4) + \cdots]
\end{equation}
$\tau$ is used to discount the monetary value of the cost of social energy, and it should be noted that:
\begin{equation}\label{equ:dcount}
\tau = \left(\frac{1}{1+\mu}\right)^1
\end{equation}
where $\mu$ is the rate of interest. When $\mu>0$, we ensure that $0 \leq \tau \leq 1$. In our study, the discounting rate is assumed to be fixed, so the exponent of $\frac{1}{1+\mu}$ is kept at $1$.
\subsection{The Implementation of Multi-Agent Reinforcement Learning for Social Cost Game}
Given the definition of the Markov decision process (MDP) in Sec.~\ref{sec:MDP}, we can implement a multi-agent RL algorithm to reduce the computational complexity of the game proposed in Sec.~\ref{sec:game}. Note that the learning process of an RL algorithm updates the rewards in its database every time the system transitions into a new state. As in other RL research, we call these constantly updated quantities $Q$-factors, so $Q(i,a)$ denotes the reward quantity for state $i$ and action $a$.
The reward calculated during a transition is called the feedback, which is used to update the $Q$-factors that evaluate the actions (strategies) in the former state. Generally speaking, if the feedback value is good, the $Q$-factor of that action is increased; otherwise, it is decreased. In this way the system is analyzed and controlled in real time. In each visited state, an action is selected and the system proceeds to the next state. Note that a ``state'' in our context is the power system condition at the specific time the state is visited. Since the number of indoor occupants is fixed at a given time point, the only factor influencing the choice of HVAC setting is the DLMP at each bus. Each time the action is selected or changed, the DLMP is affected, and the system enters a new state.
\subsubsection{Discounted Reward Multi-Agent Reinforcement Learning for Social Cost Game}
In our study, we choose discounted-reward multi-agent RL for our social energy cost game to achieve our goal of reducing computational complexity \cite{Rlearning}. The general steps of discounted-reward multi-agent RL are as follows:
\begin{itemize}
\item Step 1 (Input and Initiation): Set the $Q$-factors to 0:
\begin{equation}
Q(i,a) \leftarrow 0, \quad \forall i, \forall a
\end{equation}
$A(i)$ denotes the set of actions in state $i$; in our case, the number of actions equals the number of strategies in the social energy game. $\alpha^k$ defines the learning rate in the $k$th iteration; set $k = 1$ upon transition to a new state. $Itmax$ denotes the maximum number of iterations and should be set to a large number.
\item Step 2 ($Q$-factor Update): Let $|A(i)|$ denote the number of actions in set $A(i)$; hence, the probability that action $a$ is selected in state $i$ is $\frac{1}{|A(i)|}$. $r(i,a,j)$ denotes the transition reward. The update rule for $Q(i,a)$ is:
\begin{equation}
Q(i,a) \leftarrow (1 - \alpha^k)Q(i,a) + \alpha^k[r(i,a,j) + \tau \max\limits_{b \in A(j)}Q(j,b)],
\end{equation}
The computation of $\alpha^k$ is discussed below; $\tau$ denotes the discount factor of the MDP.
\item Step 3 (Termination Check): Increase $k$ by $1$ and set $i \leftarrow j$. When $k < Itmax$, return to Step 2; otherwise, proceed to Step 4.
\item Step 4 (Outputs): For each state $i$, select the action $a^*$ for which $Q(i,a^*)$ attains its optimal value.
\end{itemize}
The learning rate $\alpha^k$ should be a positive value satisfying $\alpha^k<1$. The learning rate for discounted-reward reinforcement learning is a function of $k$ and has to meet the conditions in \cite{RLcondition}. In our research, the learning-rate step size is expressed as:
\begin{equation}
\alpha^k = \frac{C}{D+k}
\end{equation}
where $C = 90$ and $D=100$ in our tests.
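With the stated constants, this schedule decays harmonically, keeping early updates large and late updates small; a quick numerical check:

```python
def learning_rate(k, C=90.0, D=100.0):
    """Step size alpha_k = C / (D + k) for the k-th iteration."""
    return C / (D + k)

rates = [learning_rate(k) for k in (1, 10, 100, 1000)]
print([round(a, 4) for a in rates])
```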
Following these general steps, we formulate our discounted-reward multi-agent RL social energy game as follows:
\begin{itemize}
\item Step 1 (Input and Initiation): Set the $Q$-factors to 0:
\begin{equation}
Q(i,s) \leftarrow 0, \quad \forall i, \forall s
\end{equation}
$S(i)$ denotes the set of strategies of game $\Psi$ available in state $i$; in our case, the number of actions equals the number of strategies in the social energy game. $\alpha^k$ defines the learning rate in the $k$th iteration; set $k = 1$ upon transition to a new state. $Itmax$ denotes the maximum number of iterations and should be set to a large number; in our research, $Itmax = 10000$.
\item Step 2 ($Q$-factor Update): Let $|S(i)|$ denote the number of strategies in set $S(i)$; hence, the probability that strategy $s$ is selected in state $i$ is $\frac{1}{|S(i)|}$. $\delta(i,s,j)$ denotes the transition reward of the corresponding strategy. The update rule for $Q(i,s)$ is:
\begin{equation}\label{eq:rl1}
Q(i,s) \leftarrow (1 - \alpha^k)Q(i,s) + \alpha^k[\delta(i,s,j) + \tau \max\limits_{b \in S(j)}Q(j,b)],
\end{equation}
It should be noted that $\max\limits_{b \in S(j)}Q(j,b)$ equals the optimal social cost $\gamma^{i,t}$ in game $\Psi$. Therefore, we can transform (\ref{eq:rl1}) into:
\begin{equation}
Q(i,s) \leftarrow (1 - \alpha^k)Q(i,s) + \alpha^k[\delta(i,s,j) + \tau \gamma(i)],
\end{equation}
where $\gamma(i)$ denotes the optimal social energy game payoff.
\item Step 3 (Termination Check): Increase $k$ by $1$ and set $i \leftarrow j$. When $k < Itmax$, return to Step 2; otherwise, proceed to Step 4.
\item Step 4 (Outputs): For each state $i$, select the strategy $s^*$ for which $Q(i,s^*)$ attains its optimal value.
\item Select the best two strategies according to the optimal $Q(i,s^*)$ to play the social energy game.
\end{itemize}
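The steps above can be sketched as tabular discounted-reward Q-learning. In the snippet below, the reward and transition functions are toy stand-ins for the game payoff $\delta(i,s,j)$ and the DLMP-driven state dynamics; everything about the toy environment is an assumption for illustration.

```python
import random

def q_learning(states, actions, reward, transition, tau=0.95,
               C=90.0, D=100.0, it_max=10000, seed=0):
    """Tabular discounted-reward Q-learning following Steps 1-4 above."""
    rng = random.Random(seed)
    Q = {(i, a): 0.0 for i in states for a in actions}   # Step 1: Q-factors to 0
    i = rng.choice(states)
    for k in range(1, it_max + 1):
        a = rng.choice(actions)            # uniform exploration, prob 1/|A(i)|
        j = transition(i, a, rng)
        alpha = C / (D + k)                # learning-rate schedule alpha_k = C/(D+k)
        target = reward(i, a, j) + tau * max(Q[(j, b)] for b in actions)
        Q[(i, a)] = (1 - alpha) * Q[(i, a)] + alpha * target   # Step 2
        i = j                              # Step 3
    # Step 4: greedy action per state
    return {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}

# Toy two-state environment (invented): action 1 always pays more than action 0.
states, actions = [0, 1], [0, 1]
policy = q_learning(states, actions,
                    reward=lambda i, a, j: float(a),
                    transition=lambda i, a, rng: rng.choice(states))
print(policy)
```

Because the tables grow with the number of states and actions rather than with the joint strategy space, this per-player learner is what keeps the computation roughly linear in the number of players.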
\subsection{Results}
In this section, bus $2$, bus $59$, bus $41$, bus $24$, and bus $13$, the same five buildings as in Sec.~\ref{sec:gamer}, are selected as the social cost game players and are highlighted in Fig.\ref{fig:topology}. Since the building information has already been introduced in Sec.~\ref{sec:gamer}, we do not repeat it here. The number of people in the five buildings is shown in Fig.~\ref{fig:amoutpeople}.
\begin{figure}[!h]
\centering
\includegraphics*[width=1.0\textwidth]{figure/game2.eps}
\caption{The summer day results comparison}
\label{fig:resultRL1}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics*[width=1.0\textwidth]{figure/game9.eps}
\caption{The winter day results comparison}
\label{fig:resultRL2}
\end{figure}
The results in Fig.~\ref{fig:resultRL1} and Fig.~\ref{fig:resultRL2} show that the proposed algorithm achieves the same optimal outcomes as the algorithm demonstrated in Sec.~\ref{sec:game}.
Fig.~\ref{fig:time} compares the computational complexity of the two proposed methodologies. As the number of players in a game increases, the time complexity of the reinforcement-learning-based game can be approximated by a linear function, while that of the purely game-theoretic method can be approximated by an exponential function. With only two or three players, the methodology of Sec.~\ref{sec:game} is faster than that of Sec.~\ref{sec:RL}; as the number of players grows, the reinforcement-learning-based algorithm shows a decisive advantage over the algorithm based on game theory alone.
\begin{figure}[!h]
\centering
\includegraphics*[width=1.0\textwidth]{figure/time.eps}
\caption{The computational time comparison}
\label{fig:time}
\end{figure}
\section{Chapter Conclusion}\label{sec:conclusion}
In this chapter, we proposed an autonomous and optimal HVAC energy consumption scheduling algorithm for minimizing the social cost, the metric we proposed to quantify both human working productivity and the energy bill in monetary terms. First, we implemented a realistic real-time pricing mechanism in our parallel system instead of simple pricing assumptions such as flat-rate pricing. Unlike most existing game strategies, which concentrate on reducing energy consumption and monetary cost, our methodology seeks to balance the energy cost and the indoor air quality perceived by occupants. The proposed algorithm is based on the interactions among players and between the energy end-users and the power suppliers, with each player aiming to maximize its own profit in a game-theoretic way. Simulation results show that the proposed HVAC management strategy can reduce energy cost while maintaining indoor working efficiency in a comfort zone. Second, to address the drawback of the game-theoretic methodology, we implemented reinforcement learning to reduce the computational complexity; this achieves the same optimal results in a much shorter time window.
\section{Proof of theorem 1}
In light of Lemma 1, the feasible set of $(\Psi)$ is nonempty, since every finite non-cooperative game has a Nash equilibrium. Further, let $S$ be the feasible region of $(\Psi)$; then,
\begin{equation}
\text{min}_{(\alpha,\gamma^1,\dots,\gamma^n)\in S}\sum_{i \in N}(\gamma^i-\delta^i(\alpha)) \geqslant 0.
\end{equation}
Thus, if $\alpha^*$ is a Nash equilibrium it is feasible for $(\Psi)$, and from (1),
\begin{equation}
\sum_{i \in N}(\gamma^{i*}-\delta^i(\alpha^*))=0
\end{equation}
yielding that $\alpha^*$ is an optimal solution of $(\Psi)$.
Conversely, suppose $(\alpha^*,\gamma^{1*},\dots,\gamma^{n*})$ is an optimal solution of $(\Psi)$; then it satisfies (2) to (4).
By virtue of (2), $\gamma^{i*} \geqslant \delta^i(\alpha^{-i*},s^i_j)$ for all $j$, and hence $\sum_{i \in N}(\gamma^{i*}-\delta^i(\alpha^*)) \geqslant 0$.
However, by the existence theorem of Nash equilibrium, there must exist at least one $(\alpha,\gamma^1,\dots,\gamma^n)$ feasible for $(\Psi)$ with $\sum_{i \in N}(\gamma^{i}-\delta^i(\alpha))=0$. So for $(\alpha^*,\gamma^{1*},\dots,\gamma^{n*})$ to be a global minimum of $(\Psi)$,
\begin{equation}
\sum_{i \in N}(\delta^i(\alpha^*)-\gamma^{i*})=0
\end{equation}
Consequently $\alpha^*$ is a Nash equilibrium of game $\psi$, and on account of Lemma 1 the payoff $\delta^{i*}$ is the optimal expected payoff to player $i$.
We see that the problem of computing a Nash equilibrium of $\psi$ reduces to that of solving the optimization problem $(\Psi)$ with optimal value zero.
\section{Conclusion}
Since the invention of the power system, power engineers have aimed to serve human beings with better-quality and more stable energy. In this manuscript, we investigate the mutual influence between building energy usage and the productivity of indoor occupants. Preliminary results reveal a strong relationship between these two factors, and our research seeks the optimal balance between them. To address this distribution power system problem, we implement a parallel computing scheme and define ``social energy'' as a novel concept that combines the monetary and societal aspects of the power system. The parallel computing scheme lets us build an interactive mechanism between the physical system and the artificial system. The study then proceeds from shallow to deep: it starts with techniques such as a quadratic regression model and a brute-force search algorithm, and distribution locational marginal pricing is introduced to calculate the economic cost of energy. To accommodate future smart grid features, game theory is then implemented, but its limitation is obvious: as the number of buses or strategies increases, the computation time grows quickly. Therefore, reinforcement learning is embedded into the game-theoretic methodology to reduce the computational complexity. The results show that the algorithm meets the time-reduction goal, simplifies the gaming process, and still optimizes the indoor temperature control strategy. Our research contributes to better demand-side management that accounts for indoor occupants, and to better power delivery quality.
\chapter{Chapter 1}\label{Chap1}
\input{Chapter_1}
\chapter{Chapter 2}\label{Chap2}
\input{Chapter_2}
\chapter{Chapter 3}\label{Chap3}
\input{Chapter_3}
\chapter{Chapter 4}\label{Chap4}
\input{Chapter_4}
\chapter{Chapter 5}\label{Chap5}
\input{Chapter_5}
\chapter{Chapter 6}\label{Chap6}
\input{Chapter_6}
\newpage
\addcontentsline{toc}{chapter}{References}
\renewcommand{\bibname}{References}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{submission}
Unsupervised neural networks assume unlabeled data to be generated from a neural network structure, and have been
applied extensively to pattern analysis and recognition. The most basic one is the Restricted Boltzmann Machine (RBM)
\cite{smolensky1986information, freund1992unsupervised, hinton2002training}, an energy-based
model with a layer of hidden neurons and a layer of visible neurons. With such a basic structure, multiple
layers of RBMs can be stacked to create an unsupervised deep neural network structure, such as the Deep Belief Network (DBN)\cite{hinton2006fast} or
the Deep Boltzmann Machine (DBM) \cite{salakhutdinov2009deep}. These models
can be calibrated with a combination of the Stochastic Gradient Descent (SGD) and the Contrastive Divergence (CD) algorithm
\cite{hinton2002training} or
the Persistent Contrastive Divergence (PCD) algorithm \cite{tieleman2008training}. Once the parameters of a model are learned,
the values of the hidden units can be inferred from the visible units, thus applying unsupervised neural networks to
feature selection. Alternatively, we may apply
the parameters obtained from an unsupervised deep neural network to initialize a deep feedforward neural network, thus improving supervised learning.
Fully-connected deep neural networks suffer from millions or even billions of parameters and are hard to train. Sparsity is one of the
key properties that deep neural networks desire. Apart from being the main building blocks for deep architectures, RBMs
are also powerful models in their own right. They have been successfully applied to collaborative filtering
\cite{salakhutdinov2007restricted}, information and image retrieval \cite{gehler2006rate},
multi-class classification \cite{larochelle2008classification}, and time series modeling \cite{taylor2007modeling}.
Therefore, the performance of RBMs is crucial not only in their own right but also for the deep architectures built upon them. Reducing the parameters of RBMs
while keeping their generative performance helps accelerate inference and reduce the memory footprint, both for RBMs themselves
and for deep models based on them. Moreover, it provides a reasonable way to choose sparse architectures for
deep neural networks.
The main contribution of this paper is that we empirically
show that redundancy exists in RBMs' fully-connected architectures: RBMs
have many useless connections that can be discarded. Using an iterative pruning method, the
number of parameters needed to model the data distribution can be
dramatically reduced. Further, by applying the pruning method
in each layer of the unsupervised pretraining procedure while keeping its generative performance,
we can build sparse architectures for deep neural
networks. Experimental results show that there is virtually no
loss in discriminative performance.
The remainder of this paper is organized as follows. In the next section, we briefly present background knowledge on RBMs and DBNs.
After that, we describe the method we use to reduce the parameters of RBMs and propose an unsupervised sparse deep architecture selection
algorithm. We then present our experimental results
on several datasets. Finally, we conclude the paper.
\section{Background knowledge}
\subsection{Restricted Boltzmann Machines}
A restricted Boltzmann machine is a probabilistic model consisting of a set of \emph{visible units} \textbf{v} and
a set of \emph{hidden units} \textbf{h} which form a bipartite graph; there are no connections between pairs of
visible units or pairs of hidden units, but every visible unit is connected to every hidden unit. Here, for simplicity,
we assume that both the visible and hidden units are binary.
RBMs are energy-based models, and the basic RBM has the following energy function:
\begin{equation}
E(v,h)=-\sum_{ij} v_{i}w_{ij}h_{j} - \sum_i v_ia_i - \sum_j h_jb_j
\end{equation}
where $v_i$ and $h_j$ denote the states of the $i$th visible and $j$th
hidden unit, while $w_{ij}$ represents the weight between the $i$th visible
and $j$th hidden unit. In addition, $a_i$ and $b_j$ represent the biases on the $i$th
visible unit and $j$th hidden unit, respectively.
The joint distribution of $(v,h)$ can then be defined as:
\begin{equation}
p(v,h)=\frac{e^{-E(v,h)}}{Z}
\end{equation}
\begin{equation}
Z=\sum_{v,h}{e^{-E(v,h)}}
\end{equation}
where $Z$ is the partition function. Given the observed data, the states
of the hidden units are conditionally independent. This means that each hidden unit
learns to represent the data independently. Their activation probabilities are
\begin{equation}
p(h_j|v)=\frac{1}{1+e^{-W_{.j}^Tv-b_j}}
\end{equation}
where $W_{.j}$ denotes the $j$th column of $W$, the weight matrix
between the visible and hidden layers. The set of weights that connects
a single hidden unit to all visible units is called a filter.
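As a concrete illustration, the activation probability above is just a logistic sigmoid applied to each hidden unit's total input. The following NumPy sketch is ours, not part of the original paper's code, and the weights are randomly generated purely for illustration:

```python
import numpy as np

def hidden_activation_probs(v, W, b):
    """p(h_j = 1 | v) = sigmoid(W[:, j]^T v + b_j) for every hidden unit j.

    v: (n_visible,) binary vector; W: (n_visible, n_hidden); b: (n_hidden,).
    """
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 4))        # toy weight matrix (illustrative only)
b = np.zeros(4)                               # hidden biases
v = np.array([1, 0, 1, 1, 0, 1], dtype=float)
p_h = hidden_activation_probs(v, W, b)        # one probability per hidden unit
```

Because the hidden units are conditionally independent given $v$, the whole hidden layer is computed by this single matrix-vector product.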
Maximum likelihood learning in an RBM is intractable because of its partition function.
However, Hinton \cite{hinton2002training} showed that we can get very good
approximations to the gradient by running the Gibbs sampler
for only $k$ steps, initialized from the training data. This training algorithm, called \textbf{Contrastive
Divergence}, has become the standard way of training RBMs. The Contrastive
Divergence gradient is
\begin{equation}
-\sum_{h}{p(h|v)\frac{\partial{E(v,h)}}{\partial{\theta}}}+\sum_{h}{p(h|v^{-})\frac{\partial{E(v^{-},h)}}{\partial{\theta}}}
\end{equation}
where $v$ is the training data and $v^{-}$ is the sample yielded by running a Gibbs chain initialized from $v$
for $k$ steps. Note that for Contrastive Divergence, if the sample $v^{-}$ is identical to the training data $v$, the gradient
vanishes and training stops.
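The CD gradient with respect to the weights can be sketched for $k=1$: compare the statistics $\langle v_i h_j \rangle$ under $p(h|v)$ on the data with the same statistics on the one-step reconstruction $v^{-}$. A minimal NumPy sketch of our own (binary units, a single training vector):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_weight_gradient(v, W, a, b, rng):
    """CD-1 estimate of the log-likelihood gradient w.r.t. W:
    <v_i h_j>_data - <v_i h_j>_reconstruction."""
    ph = sigmoid(v @ W + b)                     # p(h|v) on the data
    h = (rng.random(ph.shape) < ph) * 1.0       # sampled hidden states
    pv = sigmoid(h @ W.T + a)                   # p(v|h)
    v_neg = (rng.random(pv.shape) < pv) * 1.0   # negative sample v^-
    ph_neg = sigmoid(v_neg @ W + b)             # p(h|v^-)
    return np.outer(v, ph) - np.outer(v_neg, ph_neg)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(6, 4))
a, b = np.zeros(6), np.zeros(4)
v = np.array([1, 0, 1, 1, 0, 1], dtype=float)
grad = cd1_weight_gradient(v, W, a, b, rng)
W += 0.1 * grad   # one stochastic gradient ascent step
```

If the reconstruction $v^{-}$ happens to equal $v$ (and so $p(h|v^{-}) = p(h|v)$), the two outer products cancel and the gradient is zero, exactly as described above.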
\subsection{Deep Belief Networks}
RBMs can be stacked to form Deep Belief Networks (DBNs). DBNs are graphical models which learn to
extract a deep hierarchical representation of the training data. The joint distribution of a DBN is
as follows:
\begin{equation}
P(v,h^1,...,h^n)=\left(\prod_{k=0}^{n-2}P(h^k|h^{k+1})\right)P(h^{n-1},h^{n})
\end{equation}
where $v=h^0$ is the training data, $P(h^{k}|h^{k+1})$ is the conditional distribution of the $k$th
hidden layer conditioned on the layer above it, and $P(h^{n-1},h^{n})$ is the joint distribution of the top two layers.
Algorithm~\ref{alg:up} shows the greedy layer-wise unsupervised training procedure \cite{hinton2006reducing, bengio2007greedy} for DBNs.
The trained DBN can be used directly as a generative model; we can also extract weights from the DBN to define a Multilayer Perceptron (MLP)
for supervised tasks.
\begin{algorithm}[hb]
\caption{Recursive Greedy Learning Procedure for the DBN}
\label{alg:up}
\begin{algorithmic}[1]
\STATE Fit parameters $W_1$ of a 1-layer RBM to data.
\STATE Freeze the parameter vector $W_1$ and use samples from $p(h^1|v,W_1)$ as the data for
training the next layer of binary features with an RBM.
\STATE Proceed recursively for as many layers as desired.
\end{algorithmic}
\end{algorithm}
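Algorithm~\ref{alg:up} amounts to a short loop: fit one RBM, freeze it, and propagate hidden samples upward as training data for the next layer. In this sketch of ours, \texttt{train\_rbm} is a placeholder for any RBM trainer (e.g. CD-1), not a real library call:

```python
import numpy as np

def sample_hidden(X, W, b, rng):
    """Draw binary samples from p(h | v, W) for a batch X of visible vectors."""
    p = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (rng.random(p.shape) < p) * 1.0

def greedy_pretrain(data, layer_sizes, train_rbm, rng):
    """Greedy layer-wise pretraining: each RBM is trained on samples
    produced by the frozen layer below it."""
    params, X = [], data
    for n_hidden in layer_sizes:
        W, a, b = train_rbm(X, n_hidden)   # fit one RBM to the current data
        params.append((W, a, b))           # freeze this layer's parameters
        X = sample_hidden(X, W, b, rng)    # data for the next layer
    return params
```

The returned parameter list can then either define a generative DBN or initialize the layers of an MLP for supervised fine-tuning.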
\section{Reducing the parameters in RBMs}
There are typically many weights of little importance in a neural network. In network pruning,
such unimportant weights are discarded, and the network is retrained. This process can be conducted iteratively
\cite{reed1993pruning}. Recently, Han \textit{et al}. \cite{han2015learning} described an iterative pruning framework for
deep convolutional neural networks (the main process is illustrated in Figure~\ref{iterative-pruning}). However, such pruning work has only been conducted on deep convolutional neural networks,
and they prune the network in a greedy way. The differences and similarities between our method and that of Han \textit{et al}. are discussed below.
\begin{figure}[!ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[scale=0.6]{figures/Pruning.png}}
\caption{The iterative pruning framework.}
\label{iterative-pruning}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Iterative Pruning}
Our pruning method shares the same framework as Han \textit{et al}. The framework consists of three phases: training, pruning, and retraining.
In the training phase, because RBMs have only one hidden layer, we can easily train them to their final
configurations and learn which connections are important.
In the pruning phase, we prune all the parameters below a threshold $T$. We have tried two different ways to prune:
one is probabilistic, and the other is the method used by Han \textit{et al}. \cite{han2015learning}.
In the probabilistic way, we prune a fixed percentage $p$ of the remaining weights $\hat{W}$, which is equivalent to
applying the mask
\begin{equation}
m_{ij} = 1_{\abs{\hat{w}_{ij}} \ge T},T=Q_{100p\%}{\abs{\hat{W}}}
\end{equation}
where $T$ is the threshold and $Q$ is the $100p\%$-th left percentile of
the remaining weights $\hat{W}$ based on their absolute values. When showing the pruning results in the experiments section, we denote the models pruned by
this method as \emph{ours}.
In the second way, the threshold is defined as a quality parameter multiplied by the standard deviation of the
weights $\hat{W}$, which is equivalent to applying the mask
\begin{equation}
m_{ij} = 1_{\abs{\hat{w}_{ij}} \ge T},T=\lambda \cdot \sigma
\end{equation}
where $\lambda$ is the quality parameter and $\sigma$ is the standard deviation of the weights $\hat{W}$.
When showing the pruning results, we denote the models pruned by
this method as \emph{Han et al}.'s.
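Both masks are straightforward to compute. The NumPy sketch below is our illustration, not the paper's code; `percentile_mask` implements the probabilistic rule and `std_mask` the Han et al.-style rule, with a randomly generated weight matrix standing in for $\hat{W}$:

```python
import numpy as np

def percentile_mask(W_hat, p):
    """Keep weights with |w| >= the 100p%-th percentile of |W_hat|,
    i.e. prune exactly a fraction p of the remaining weights."""
    T = np.quantile(np.abs(W_hat), p)
    return (np.abs(W_hat) >= T).astype(float)

def std_mask(W_hat, lam):
    """Keep weights with |w| >= lambda * std(W_hat)."""
    T = lam * W_hat.std()
    return (np.abs(W_hat) >= T).astype(float)

rng = np.random.default_rng(3)
W_hat = rng.normal(size=(100, 50))   # stand-in for the remaining weights
m1 = percentile_mask(W_hat, 0.5)     # prunes ~50% of the weights exactly
m2 = std_mask(W_hat, 1.0)            # pruned fraction depends on the weight spread
```

During retraining, the mask is reapplied after every gradient update (`W *= mask`) so that pruned connections stay at zero; this is the mechanical difference between the two rules: the first fixes the sparsity level directly, the second fixes only the threshold.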
Without retraining, pruning results in a large loss of generative performance, so the pruning phase is followed by a retraining phase.
This pruning-and-retraining process is conducted iteratively.
The main difference between our method and Han \textit{et al}.'s is that we prune the parameters in a probabilistic way,
whereas Han \textit{et al}.'s controls the number of parameters removed via the variance of the weight distribution. The probabilistic way can achieve
exactly the desired sparsity percentage, but it requires sorting the weights first; Han \textit{et al}.'s avoids the
sorting process but needs several pruning runs to determine a suitable hyper-parameter $\lambda$. As described in their
paper \cite{han2015learning}, their pruning process is conducted as a greedy search, while we gradually reduce the parameters of
the model and test its performance. Most importantly, such pruning work had previously been conducted only on deep convolutional neural networks;
to the best of our knowledge, there is no prior pruning work on unsupervised neural networks.
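The alternation of pruning and retraining can be sketched as a loop. In this illustration of ours, `retrain` is a placeholder for any masked retraining routine (gradients must be masked so pruned weights stay at zero), and the quantile rule from the probabilistic method supplies the per-round threshold:

```python
import numpy as np

def iterative_prune(W, retrain, prune_fraction=0.1, rounds=5):
    """Iteratively prune a small fraction of the surviving weights,
    retraining under the accumulated mask after each round."""
    mask = np.ones_like(W)
    for _ in range(rounds):
        survivors = np.abs(W[mask == 1.0])
        T = np.quantile(survivors, prune_fraction)  # threshold over survivors only
        mask *= (np.abs(W) >= T)                    # extend the accumulated mask
        W = retrain(W * mask, mask)                 # retrain with pruned weights fixed at zero
    return W * mask, mask
```

With `prune_fraction=0.1` and five rounds, roughly $0.9^5 \approx 59\%$ of the weights survive; pruning gently per round and retraining in between is what avoids the large performance loss seen with a single aggressive cut.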
\subsection{Regularization}
Choosing the correct regularization impacts the performance of pruning and retraining.
In the training phase, we tried different regularization methods: Dropout, DropConnect, and sparse RBMs.
Dropout, which is usually used on the fully-connected layers of CNNs, has also been applied to RBMs \cite{srivastava2014dropout};
it drops each hidden unit independently with probability $p$.
DropConnect \cite{wan2013regularization} was likewise proposed to regularize deep CNNs; it drops each connection independently
with probability $p$.
The sparse RBM \cite{lee2008sparse} was introduced to force hidden units to activate on only a small proportion of the training set,
so that it learns discriminative features that help supervised tasks.
However, we find that all of them result in poor generative models.
We then experimented with L1 and L2 regularization. L1 regularization penalizes non-zero parameters, resulting
in more parameters near zero. This gives better generative performance after pruning without retraining, and
we also observed that L1 regularization is better after a single aggressive pruning with retraining. However, L2 regularization
slightly outperforms L1 regularization under iterative pruning on the MNIST dataset. Thus, in the following experiments, we use L2 regularization
to train RBMs. This is further discussed in the experiments section.
\subsection{Unsupervised Sparse Deep Architecture Selection Algorithm}
By reducing the connections of an RBM without significantly affecting its performance, we obtain a sparse
architecture that models, represents, or reconstructs the data almost as well as the fully-connected one.
Applying the RBM as an unsupervised feature extractor combined with the pruning method, we can then build sparse deep architectures. We propose an unsupervised sparse deep architecture selection algorithm
in Algorithm~\ref{alg:sup}. This algorithm unsupervisedly forms sparse deep architectures as a prior on architectures for visual recognition tasks.
Its effect is illustrated in Figure~\ref{architecture}.
The method \textbf{m} in Algorithm~\ref{alg:sup} can be replaced by any other method that reduces the parameters
of an RBM without significantly affecting its generative performance.
In this paper, we instantiate \textbf{m} with the pruning method described above to test the generative and discriminative
performance of the resulting deep neural networks.
\begin{algorithm}[hb]
\caption{Unsupervised Sparse Deep Architecture Selection Algorithm}
\label{alg:sup}
\begin{algorithmic}[1]
\STATE Fit parameters $W_1$ of a 1-layer RBM to data.
\STATE Use the method \textbf{m} to reduce the connections of the 1-layer RBM while keeping its generative performance,
obtaining sparse weight parameters $W_1^{'}$.
\STATE Freeze the parameter vector $W_1^{'}$ and use samples from $p(h^1|v,W_1^{'})$ as the data for
training the next layer of binary features with an RBM.
\STATE Proceed recursively for as many layers as desired.
\end{algorithmic}
\end{algorithm}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{figures/architecture_selection.png}}
\caption{Obtaining a sparse deep architecture with our algorithm.}
\label{architecture}
\end{center}
\vskip -0.2in
\end{figure}
\section{Experiments}
We have conducted experiments on three different datasets to test the pruning methods. Since maximum likelihood
learning in RBMs raises the probability the model assigns to the training data, the generative
performance of RBMs can be evaluated by the average log-probability on the test set. To do so,
we must confront the partition function of the RBM, which is intractable in general; if the number of hidden units is small enough, however,
we can compute the partition function exactly.
We therefore conducted two sets of experiments on each dataset: one uses 20 hidden units, where the partition
function can be calculated by brute force, and the other uses a realistic number of hidden units to
model the data distribution, where we use Annealed Importance Sampling (AIS) \cite{neal2001annealed, salakhutdinov2008quantitative} to estimate the models'
partition functions.
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth*2]{figures/tradeoff.pdf}}
\caption{Trade-off curve between parameter reduction and generative performance loss on the MNIST dataset. The left panel shows the
results of pruning without retraining; the right panel shows the results of pruning with retraining.}
\label{tradeoff-mnist}
\end{center}
\vskip -0.2in
\end{figure*}
After the experiments on RBMs, we build sparse deep architectures based on the pruned RBMs on two datasets. We evaluate their generative and discriminative
performance and show that there is virtually no loss in either.
\subsection{MNIST dataset}
The MNIST digits dataset is widely used to assess the performance of novel machine learning algorithms.
The dataset contains 60000 training images and 10000 test images, each image being a variant of a digit
represented by a 28$\times$28 binary matrix.
The results are shown in Table~\ref{mnist-rbm}. The models denoted RBM in the table are all trained with weight decay.
The sparsity in Table~\ref{mnist-rbm} refers to the percentage of pruned weights relative to the RBM, and the two numbers after
each model denote the \textbf{CD} steps used for learning and the number of hidden units, respectively. A
\textbf{CD} step count of 25 means that the model is trained with the CD steps gradually increased from 1 to 25.
Note that the average test log-probabilities for the RBM and pruned RBM with 20 hidden units are exact values, while those with
500 hidden units are estimates averaged over 100 AIS runs. The same settings apply to the tables below.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccc}
\hline
Models & \begin{tabular}[c]{@{}c@{}}No. of weights\\ (sparsity\%) \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average Test\\ log-probabilities \end{tabular}\\
\hline
RBM(1-20) & 15680(N/A) & -157.70 \\
Han \textit{et al}.(1-20) & 5283(66.31\%) & -159.36 \\
\emph{Ours}(1-20) & 5379(66.70\%) & -159.51 \\
\hline
RBM(1-500) & 392000(N/A) & -111.54 \\
Han \textit{et al}.(1-500) & \textbf{41457(89.42\%)} & -111.09 \\
\emph{Ours}(1-500) & 46120(88.24\%) & \textbf{-106.57} \\
\hline
RBM(25-500) & 392000(N/A) & -86.34 \\
Han \textit{et al}.(25-500) & 30888(91.49\%) & -86.93 \\
\emph{Ours}(25-500) & \textbf{32284(91.76\%)} & \textbf{-86.12} \\
\hline
\end{tabular}
\end{center}
\caption{The parameter reduction results on the MNIST dataset. The result for the RBM with CD 25 (variable steps)
is from \cite{salakhutdinov2008quantitative}.}
\label{mnist-rbm}
\end{table}
We can see from the results that even without enough hidden units, where the RBM is clearly underfitting,
there is still redundancy, and nearly \textbf{66\%} of the connections can be pruned with little loss of performance.
When the number of hidden units increases to a realistic 500, the percentage of weights that can be removed increases
to nearly \textbf{90\%}. To the best of our knowledge, the result in \cite{salakhutdinov2008quantitative} is the best published
result for an RBM on MNIST,
so we apply the pruning methods to it and obtain the highest compression rate on MNIST without loss in generative performance.
The trade-off between parameter reduction and generative performance loss is depicted in Figure~\ref{tradeoff-mnist}; the baseline
in Figure~\ref{tradeoff-mnist} is the result reported in \cite{salakhutdinov2008quantitative}.
The left panel of Figure~\ref{tradeoff-mnist} shows that without retraining, the RBM's generative performance drops dramatically. It is also interesting
to see that an RBM trained with L1 regularization can enjoy the ``free lunch'' of nearly 70\% parameter reduction without
generative performance loss.
In the right panel of Figure~\ref{tradeoff-mnist}, all the pruning curves are produced by the probabilistic method described above except Han \textit{et al}.'s.
One advantage of pruning in the probabilistic way is that it can reach any desired parameter reduction point (say, if we want to prune
exactly some percentage of the weights). Interestingly,
when the parameter reduction rate is below about \textbf{90\%}, the pruned models even perform better than the original fully-connected
one. This empirically shows that pruned models can sometimes reduce overfitting and achieve better performance.
We also present samples from the original model \cite{salakhutdinov2008quantitative} and our pruned model
in Figure~\ref{samples}. The samples are obtained by running Gibbs chains initialized from the same test images,
and each row shows images after running the chains for 1000 steps.
The first 100 receptive fields of the original model and our pruned model are depicted in
Figure~\ref{receptive-field}. We can see that the pruned
model has stroke-like features while the original one has much more noise.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth/2]{figures/sequential_img.pdf}%
\includegraphics[width=\columnwidth/2]{figures/sequential_pruned_img.pdf}
\caption{Samples from the original model (left) and our pruned model (right).}
\label{samples}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth/2]{figures/original_sala.pdf}%
\includegraphics[width=\columnwidth/2]{figures/prob_pruned_sala.pdf}
\caption{Receptive fields of the original model (left) and our pruned model (right).}
\label{receptive-field}
\end{center}
\end{figure}
The histogram of the weight distribution before and after pruning is shown in Figure~\ref{weight-distribution}.
The weight distribution changes from a unimodal distribution peaked around zero, with tails dropping off quickly, to a bimodal distribution.
The range of weight values spreads from $[-1,1]$ to $[-3,3]$ after pruning, meaning that the remaining important weights are strengthened
by retraining.
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth*2]{figures/weight_distribution.pdf}}
\caption{The weight distribution before and after pruning.}
\label{weight-distribution}
\end{center}
\vskip -0.2in
\end{figure*}
\subsection{OCR letters dataset}
The OCR letters dataset contains 52152 16$\times$8 images. Following the code provided by Larochelle\footnote{\url{http://www.dmi.usherb.ca/~larocheh/mlpython/_modules/datasets/ocr_letters.html}}, we split
the dataset into a training set of 42152 images and a test set of 10000 images. Results on this dataset are shown in Table~\ref{ocrletters-rbm}.
As our goal is to investigate
the redundancy in RBMs, the RBM models in Table~\ref{ocrletters-rbm} may not be fine-tuned to reach their best generative performance.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth/2]{figures/tradeoff_ocr.pdf}}
\caption{Trade-off curve between parameter reduction and generative performance loss on the OCR letters dataset.}
\label{tradeoff-ocrletters}
\end{center}
\vskip -0.2in
\end{figure}
Trained with 1000 hidden units and gradually increased \textbf{CD} steps from 1 to 25, the model's parameters can be reduced by a factor of \textbf{8}$\times$
with no loss in generative performance.
The trade-off between parameter reduction and generative performance loss is also depicted in Figure~\ref{tradeoff-ocrletters}.
It can again be observed that when the parameter reduction rate is below about \textbf{90\%}, the pruned models sometimes even perform
better than the original fully-connected one. The baseline in Figure~\ref{tradeoff-ocrletters} is the RBM of Table~\ref{ocrletters-rbm} trained with CD steps gradually increased
from 1 to 25.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccc}
\hline
Models & \begin{tabular}[c]{@{}c@{}}No. of weights\\ (sparsity\%) \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average Test\\ log-probabilities \end{tabular}\\
\hline
RBM(1-20) & 2560(N/A) & -46.86\\
Han \textit{et al}.(1-20) & 912(65.86\%) & -47.74\\
\emph{Ours}(1-20) & 879(65.70\%) & -47.93\\
\hline
RBM(1-1000) & 128000(N/A) & -31.65\\
Han \textit{et al}.(1-1000) & 35717(72.10\%) & \textbf{-34.16}\\
\emph{Ours}(1-1000) & \textbf{30733(75.99\%)} & -34.84 \\
\hline
RBM(25-1000) & 128000(N/A) & -27.63 \\
Han \textit{et al}.(25-1000) & 15200(88.13\%) & \textbf{-27.13} \\
\emph{Ours}(25-1000) & \textbf{15060(88.23\%)} & -27.42 \\
\hline
\end{tabular}
\end{center}
\caption{The parameter reduction results on OCR letters dataset.}
\label{ocrletters-rbm}
\end{table}
\subsection{CalTech 101 Silhouettes dataset}
The CalTech 101 Silhouettes dataset contains silhouettes of objects extracted from the
CalTech 101 image dataset. In total it has 101 classes and comes in two versions: one with binary images of 28$\times$28 pixels, split
into a training set of 4100 samples and a test set of 2307 samples, and one with binary images of
16$\times$16 pixels, split into a training set of 4082 samples and a test set of 2302 samples.
We report results on these two versions in Table~\ref{caltech28-rbm} and Table~\ref{caltech16-rbm}, respectively. The RBM baselines in the tables
may not be fine-tuned to reach their best generative performance, but this does not affect the results, as our goal is to show
that redundancy exists in RBMs. The parameters of the RBM trained with 1 CD step and 500 hidden units can be reduced by a factor of
\textbf{8}$\times$ with no loss in generative performance on the 28$\times$28 dataset, and the parameters of the RBM trained with the same setting can be reduced by a factor of
\textbf{11}$\times$ with no loss in generative performance on the 16$\times$16 dataset.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccc}
\hline
Models & \begin{tabular}[c]{@{}c@{}}No. of weights \\ (sparsity) \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average Test\\ log-probabilities \end{tabular}\\
\hline
RBM(1-20) & 15680(N/A) & -374.69\\
Han \textit{et al}.(1-20) & 5513(64.84\%) & -375.62 \\
\emph{Ours}(1-20) & 5379(65.70\%) & -374.73\\
\hline
RBM(1-500) & 392000(N/A) & -162.82\\
Han \textit{et al}.(1-500) & \textbf{45043(88.51\%)} & -161.59 \\
\emph{Ours}(1-500) & 46119(88.23\%) & \textbf{-161.27}\\
\hline
\end{tabular}
\end{center}
\caption{The parameter reduction results on CalTech 101 Silhouettes 28$\times$28 datasets.}
\label{caltech28-rbm}
\begin{center}
\begin{tabular}{ccc}
\hline
Models & \begin{tabular}[c]{@{}c@{}}No. of weights \\ (sparsity) \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average Test\\ log-probabilities \end{tabular}\\
\hline
RBM(1-20) & 5120(N/A) & -100.52\\
Han \textit{et al}.(1-20) & 544(89.38\%) & -98.70\\
\emph{Ours}(1-20) & 422(91.76\%) & -100.84\\
\hline
RBM(1-500) & 128000(N/A) & -56.66\\
Han \textit{et al}.(1-500) & \textbf{11264(91.20\%)} & -56.66 \\
\emph{Ours}(1-500) & 15066(88.23\%) & \textbf{-56.40} \\
\hline
\end{tabular}
\end{center}
\caption{The parameter reduction results on CalTech 101 Silhouettes 16$\times$16 datasets.}
\label{caltech16-rbm}
\end{table}
\subsection{Deep Belief Networks' performance on the MNIST and OCR letters datasets}
One important application of RBMs is as building blocks to pretrain deep networks \cite{hinton2006reducing}.
We use Algorithm~\ref{alg:sup} to obtain sparse deep architectures by applying our pruning method to each layer while
keeping its generative performance. This algorithm helps to form sparse deep neural networks as the feature detector part for visual recognition tasks. We show that, compared to the fully-connected architectures, there is virtually no loss in their capabilities.
We conducted experiments on the MNIST dataset and the OCR letters dataset, setting the CD step to 1 in all experiments.
On the MNIST dataset, to make the comparison with the previous result \cite{hinton2006reducing} fair, we pretrain a 784-500-500-2000 network with
the same architecture as the deep network described in \cite{hinton2006reducing}. After pretraining, the multilayer network
is fine-tuned using \textbf{SGD}\footnote{In \cite{hinton2006reducing}, Conjugate Gradient was used and achieved an error rate of 1.2\%.}
and achieves an error rate of \textbf{1.23\%}. All reported error rates in this section are averaged over ten trials. We then use our pruning method in every layer of the pretraining while keeping its generative
performance. As a result, we remove \textbf{91.76\%} of the connections in each layer and obtain a sparse deep neural network architecture.
The sparse architecture is then fine-tuned with \textbf{SGD} and achieves an error rate of \textbf{1.25\%}. We perform the same experiment on the OCR letters dataset
with a 128-1000-1000 network architecture. The fully-connected network achieves an error rate of \textbf{9.68\%}, while the sparse architecture obtained
by Algorithm~\ref{alg:sup} achieves an error rate of \textbf{9.26\%}, outperforming the fully-connected one with \textbf{76\%} of each layer's connections removed. These experiments show that the unsupervised
sparse deep architecture selection algorithm proposed in Algorithm~\ref{alg:sup} works well and finds good sparse deep architectures for feature extraction.
\section{Conclusion}
In this paper, we describe a pruning method that reduces the parameters of the RBM, the basic building block of many deep architectures,
with virtually no loss in generative performance. We empirically show that pruning can sometimes even improve
a model's generative performance. Based on that, we propose an unsupervised sparse deep architecture selection algorithm to form sparse deep
neural networks which, as verified by our experiments, suffer virtually no loss (and sometimes even gain) in discriminative performance. Further research directions
include a detailed analysis of these sparse deep architectures and different visible unit types, such as the Gaussian RBM.
\section{Introduction}
The class of NSOP$_{1}$ theories may be viewed as the class of theories that are simple at a generic scale. This picture emerged piecemeal, starting with the results of Chernikov and the second-named author \cite{ArtemNick}, which established a Kim-Pillay-style criterion for NSOP$_{1}$ and characterized the NSOP$_{1}$ theories in terms of a weak variant of the independence theorem. Simplicity-like behavior had been observed in certain algebraic structures\textemdash for example, the generic vector space with a bilinear form studied by Granger, and $\omega$-free PAC fields investigated by Chatzidakis\textemdash and these new results established these structures are NSOP$_{1}$ and suggested that this simplicity-like behavior might be characteristic of the class. The analogy with simplicity theory was deepened in \cite{kaplan2017kim} and \cite{24-Kaplan2017} with the introduction of Kim-independence. There it was shown that, in an NSOP$_{1}$ theory, Kim-independence satisfies appropriate versions of Kim's lemma, symmetry, the independence theorem, and local character and that, moreover, these properties individually characterize NSOP$_{1}$ theories. This notion of independence has proved useful in proving preservation of NSOP$_{1}$ under various model-theoretic constructions and has been shown to coincide with natural algebraic notions of independence in new concrete examples. In this way, the structure theory for NSOP$_{1}$ theories has developed along parallel lines to simplicity theory, with Kim-independence replacing the core notion of non-forking independence.
The key difference between these settings stems from the fact that the notion of Kim-independence only speaks about the behavior of dividing at the \emph{generic} scale. To say that $a$ is Kim-independent over $M$ with $b$ is to say that any $M$-indiscernible sequence $I$ beginning with $b$, if sufficiently generic over $M$, is conjugate over $Mb$ to one that is indiscernible over $Ma$. In the initial definition of Kim-independence, genericity is understood to mean that the sequence is a Morley sequence in a global $M$-invariant type, but, after the fact, it turns out that broader notions of generic sequence give rise to equivalent definitions in the context of NSOP$_{1}$ theories \cite[Theorem 7.7]{kaplan2017kim}. In any case, this additional genericity requirement in the definition of independence produces a curious phenomenon: roughly speaking, asserting indiscernibility over a larger base is making a stronger statement, asserting genericity over a bigger base is making a weaker one. This tension is what introduces subtleties in the generalization of facts from non-forking independence in simple theories to the broader setting of Kim-independence in NSOP$_{1}$ theories, as base monotonicity no longer holds. In fact, an NSOP$_{1}$ theory in which Kim-independence satisfies base monotonicity is necessarily simple \cite[Proposition 8.8]{kaplan2017kim}.
This paper is devoted to studying the ways that genericity over one base may be transferred to genericity over another base. Base monotonicity trivializes all such questions in the context of non-forking independence in simple theories, so the issues we deal with here are new and unique to the NSOP$_{1}$ world. The first work along these lines was in \cite{kruckman2018generic}, where Kruckman and the second-named author proved ``algebraically reasonable'' versions of extension, the independence theorem, and the chain condition, which allow one to arrange for tuples to be Kim-independent over a given base and \emph{algebraically} independent over a larger one. We build on this work, showing that in many cases one can arrange for Kim-independence over both bases and extend this to the construction of Morley sequences. This leads to our main theorem:
\begin{thm*}
Suppose $T$ is a complete theory. The following are equivalent:
\begin{enumerate}
\item $T$ is NSOP$_{1}$
\item Transitivity of Kim-independence over models: if $M \prec N \models T$, $a \ind^{K}_{M} N$ and $a \ind^{K}_{N} b$, then $a \ind^{K}_{M} Nb$.\footnote{In the literature, \emph{transitivity} for a relation $\ind$ is sometimes taken to mean $\left(a \ind_{A} b \wedge a \ind_{Ab} c\right) \iff a \ind_{A} bc$, which implies base monotonicity. Since, in general, $\ind^{K}$ does not satisfy base monotonicity in an NSOP$_{1}$ theory, we use transitivity to denote only the $\implies$ direction. This is reasonable since this may be paraphrased by saying that a non-Kim-forking extension of a non-Kim-forking extension is a non-Kim-forking extension (all extensions over models). Kim has suggested using the term ``transitivity lifting'' for this notion, but we opt for the simpler ``transitivity.''}
\item $\ind^{K}$-Morley sequences over models are witnesses: if $M \models T$ and $\varphi(x;b_{0})$ Kim-divides over $M$ and $\langle b_{i} : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$, then $\{\varphi(x;b_{i}) : i < \omega\}$ is inconsistent.
\end{enumerate}
\end{thm*}
The direction $(3) \implies (1)$ was known by \cite[Theorem 3.16]{kaplan2017kim}, but all other directions are new. We prove $(1)\implies (2)$ in Theorem \ref{transitivity theorem}, $(2)\implies(1)$ in Proposition \ref{transitivity implies nsop1}, and finally $(1)\implies (3)$ in Theorem \ref{witnessing theorem} below.
The theorem clarifies the extent to which concepts from simplicity theory can be carried over to the NSOP$_{1}$ context. The Kim-Pillay theorem for simple theories catalogues the basic properties of non-forking independence in a simple theory. We had shown all of these properties for Kim-independence except base monotonicity, transitivity, and local character in \cite{kaplan2017kim}, and observed that base monotonicity had to go for non-simple NSOP$_{1}$ theories. Local character was later established in joint work with Shelah in \cite{24-Kaplan2017}, which left only transitivity. An alternative formulation of transitivity, which is a consequence of the standard one and base monotonicity, was considered in \cite[Section 9.2]{kaplan2017kim}, where it was shown to fail in NSOP$_{1}$ theories in general. The present theorem establishes transitivity in its usual form and, moreover, goes further, showing that transitivity of Kim-independence is characteristic of NSOP$_{1}$ theories.
This theorem also represents a significant technical development in the study of Kim-independence, allowing us to answer several questions. The (1)$\implies$(2) direction and its proof settle two questions from our prior work \cite[Question 3.14, Question 3.16]{24-Kaplan2017}. The (1)$\implies$(3) direction collapses two kinds of generic sequence studied in \cite{kaplan2017kim}: it has as a corollary that tree Morley sequences coincide with total $\ind^{K}$-Morley sequences, answering \cite[Question 7.12]{kaplan2017kim} and, additionally, gives a characterization of witnesses for Kim-dividing in NSOP$_{1}$ theories.
We give three applications in Section \ref{applications}. First, we prove two `lifting lemmas' that show that, in an NSOP$_{1}$ theory, if $M$ is an elementary substructure of $N$, then whenever $a \ind^{K}_{M} N$, all $\ind^{K}$-Morley sequences and tree Morley sequences over $M$ beginning with $a$ are conjugate over $Ma$ to sequences that are respectively $\ind^{K}$-Morley or tree Morley over $N$. This gives an analogue of a known result for non-forking Morley sequences in simple theories and clarifies the relationship between witnesses to Kim-dividing between two bases, one contained in the other. Second, we prove a local version of preservation of Kim-independence under unions of chains, which was previously only known for complete types. In an NSOP$_{1}$ theory, a formula $k$-Kim-divides over an increasing union of models if and only if it $k$-Kim-divides over a cofinal collection of models in the chain (for an appropriate definition of $k$-Kim-dividing), which answers \cite[Question 3.17]{24-Kaplan2017}. Finally, we reformulate the Kim-Pillay-style characterization of $\ind^{K}$ from \cite[Theorem 9.1]{24-Kaplan2017}, instead characterizing $\ind^{K}$ intrinsically in terms of properties of an abstract independence relation, without reference to finite satisfiability. We expect that these results will have further applications in the study of this class of theories.
\section{Preliminaries}
Throughout the paper, $T$ will denote a complete theory in the language $L$ with infinite monster model $\mathbb{M} \models T$. We will not notationally distinguish between elements and tuples. We will write $x,y,$ and $z$ to denote tuples of variables, and use the letters $M,N$ to denote models of $T$.
\subsection{NSOP$_{1}$ theories, invariant types, and Morley sequences}
\begin{defn}
\cite[Definition 2.2]{dvzamonja2004maximality} A formula $\varphi\left(x;y\right)$
has the $1$-\emph{strong order property} \emph{(SOP}$_{1})$ if there
is a tree of tuples $(a_{\eta})_{\eta\in2^{<\omega}}$ so
that
\begin{itemize}
\item For all $\eta\in2^{\omega}$, the partial type $\{\varphi\left(x;a_{\eta\restriction n}\right): n<\omega\}$
is consistent.
\item For all $\nu,\eta\in2^{<\omega}$, if $\nu\frown\langle0\rangle\unlhd\eta$
then $\left\{ \varphi\left(x;a_{\eta}\right),\varphi\left(x;a_{\nu\frown\langle1\rangle}\right)\right\} $
is inconsistent.
\end{itemize}
A theory $T$ is \emph{NSOP}$_{1}$ if no formula has SOP$_{1}$ modulo
$T$.
\end{defn}
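Although nothing in the paper depends on it, the combinatorial pattern in this definition can be prototyped concretely. The sketch below is illustrative code of our own (the names \texttt{NODES} and \texttt{forced\_inconsistent} are not from the literature): it works in the finite binary tree $2^{\leq 3}$ and computes exactly which pairs of instances the SOP$_{1}$ pattern forces to be inconsistent, while instances along a single branch are never forced apart.

```python
from itertools import product

# Nodes of the finite binary tree 2^{<=3}, written as 0/1 tuples; () is the root.
NODES = [t for n in range(4) for t in product((0, 1), repeat=n)]

def is_initial_segment(nu, eta):
    """nu ⊴ eta in 2^{<ω}: nu is an initial segment of eta."""
    return eta[:len(nu)] == nu

def forced_inconsistent(eta, xi):
    """The SOP_1 pattern demands {φ(x; a_eta), φ(x; a_xi)} be inconsistent
    exactly when xi = nu⌢⟨1⟩ for some nu with nu⌢⟨0⟩ ⊴ eta."""
    if not xi or xi[-1] != 1:
        return False
    nu = xi[:-1]
    return is_initial_segment(nu + (0,), eta)

# Along any single branch the pattern forces no inconsistency, matching the
# first bullet (branches carry consistent partial types):
branch = [(), (0,), (0, 0), (0, 0, 0)]
assert not any(forced_inconsistent(e, x) for e in branch for x in branch)

# The second bullet in action: (0,)⌢⟨0⟩ = (0,0) ⊴ (0,0,1), so a_{(0,0,1)}
# must be inconsistent with a_{(0,)⌢⟨1⟩} = a_{(0,1)}:
assert forced_inconsistent((0, 0, 1), (0, 1))
```

Of course, the definition itself concerns arbitrary formulas and an infinite tree; the finite fragment above only records the incidence pattern.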
The following equivalent formulation is more useful in practice:
\begin{fact}
\label{karyversion} \cite[Lemma 5.1]{ArtemNick} \cite[Proposition 2.4]{kaplan2017kim} A theory $T$ has
SOP$_{1}$ if and only if there is a formula $\varphi\left(x;y\right)$,
$k<\omega$, and an infinite sequence $\langle \overline{c}_{i}: i\in I\rangle$ with
$\overline{c}_{i}=\left(c_{i,0},c_{i,1}\right)$ satisfying:
\begin{enumerate}
\item For all $i\in I$, $c_{i,0}\equiv_{\overline{c}_{<i}}c_{i,1}$.
\item $\{\varphi\left(x;c_{i,0}\right) : i\in I\}$ is consistent.
\item $\{\varphi\left(x;c_{i,1}\right): i\in I\}$ is $k$-inconsistent.
\end{enumerate}
Moreover, if $T$ has SOP$_{1}$, there is such a $\varphi$ with $k=2$.
\end{fact}
Given an ultrafilter $\mathcal{D}$ on a set of tuples $A$, we may define a complete type $\text{Av}(\mathcal{D},B)$ over $B$ by
$$
\text{Av}(\mathcal{D}, B) = \{\varphi(x;b) : \{a \in A : \mathbb{M} \models \varphi(a,b)\} \in \mathcal{D}\}.
$$
We write $a\ind_{M}^{u}B$ to mean $\text{tp}\left(a/MB\right)$ is finitely satisfiable in $M$; in other words, $\text{tp}(a/MB)$ is a \emph{coheir} of its restriction to $M$. This is additionally equivalent to asserting that there is an ultrafilter $\mathcal{D}$ on tuples from $M$ such that $a \models \text{Av}(\mathcal{D},MB)$.
A global type $q\in S\left(\mathbb{M}\right)$ is called $A$\emph{-invariant} if $b\equiv_{A}b'$ implies that, for all $\varphi(x;y)$, we have $\varphi\left(x;b\right)\in q$ if and
only if $\varphi\left(x;b'\right)\in q$. A global type $q$ is \emph{invariant}
if there is some small set $A$ such that $q$ is $A$-invariant.
If $M$ is a model, then any type $p \in S(M)$ is finitely satisfiable in $M$ and hence $p =\text{Av}(\mathcal{D},M)$ for some ultrafilter $\mathcal{D}$ on tuples from $M$. Then $\text{Av}(\mathcal{D},\mathbb{M})$ is a global $M$-finitely satisfiable (and hence $M$-invariant) extension of $p$ (see, e.g., \cite[Lemma VII.4.1]{shelah1990classification}).
\begin{defn}
Suppose $q$ is an $A$-invariant global type and $I$ is a linearly
ordered set. By a \emph{Morley sequence in }$q$ \emph{over} $A$
\emph{of order type} $I$, we mean a sequence $\langle b_{\alpha}: \alpha\in I\rangle$
such that for each $\alpha\in I$, $b_{\alpha}\models q|_{Ab_{<\alpha}}$
where $b_{<\alpha}=\langle b_{\beta} : \beta<\alpha \rangle$. Given a linear
order $I$, we will write $q^{\otimes I}$ for the unique global $A$-invariant
type in variables $\langle x_{\alpha}: \alpha \in I \rangle$ such that for any
$B\supseteq A$, if $\overline{b}\models q^{\otimes I}|_{B}$ then
$b_{\alpha}\models q|_{Bb_{<\alpha}}$ for all $\alpha\in I$. If
$q$ is, moreover, finitely satisfiable in $A$, in which case $b_{\alpha} \ind^{u}_{A} b_{<\alpha}$ for all $\alpha \in I$, then we refer to
a Morley sequence in $q$ over $A$ as a \emph{coheir sequence} over
$A$.
\end{defn}
We will also make use of the dual notions of heir and an heir sequence:
\begin{defn}
If $B \supseteq M$, we say that $p \in S(B)$ is an \emph{heir} of its restriction to $M$ if $B \ind^{u}_{M} a$ for some, equivalently all, $a \models p$ and we write $a \ind^{h}_{M} b$ if and only if $\text{tp}(a/Mb)$ is an heir of $\text{tp}(a/M)$ if and only if $b \ind^{u}_{M} a$. We say that $\langle b_{i} : i \in I \rangle$ is an \emph{indiscernible heir sequence} over $M$ if $\langle b_{i} : i \in I \rangle$ is $M$-indiscernible and $b_{i} \ind^{h}_{M} b_{<i}$ for all $i \in I$.
\end{defn}
\begin{defn}
Suppose $M$ is a model.
\begin{enumerate}
\item We say that $\varphi\left(x;b\right)$ \emph{Kim-divides over }$M$ if there is a global $M$-invariant $q\supseteq\text{tp}\left(b/M\right)$ and Morley sequence $\langle b_{i} : i < \omega \rangle$ over $M$ in $q$ with $\{\varphi(x;b_{i}): i < \omega\}$ inconsistent.
\item We say that $\varphi\left(x;b\right)$ Kim-forks over $M$ if it implies
a finite disjunction of formulas, each Kim-dividing over $M$.
\item A type $p$ Kim-forks over $M$ if there is $\varphi(x;b)$ such that $p \vdash \varphi(x;b)$ and $\varphi(x;b)$ Kim-forks over $M$.
\item We write $a\ind_{M}^{K}B$ for $\text{tp}\left(a/MB\right)$ does not Kim-fork over $M$. We may also paraphrase $a \ind^{K}_{M} B$ as $a$ and $B$ are \emph{Kim-independent} over $M$.
\item We say that an infinite sequence $\langle a_{i} : i \in I \rangle$ is an $\ind^{K}$\emph{-Morley sequence} over $M$ if $\langle a_i : i \in I \rangle$ is $M$-indiscernible and $a_{i} \ind^{K}_{M} a_{<i}$ for all $i \in I$.
\end{enumerate}
\end{defn}
Note that if $a\ind_{M}^{u}B$ then $a\ind_{M}^{f}B$ (i.e. $\text{tp}\left(a/BM\right)$
does not fork over $M$), which in turn implies $a\ind_{M}^{K}B$.
Kim-independence may be used to give several equivalents of NSOP$_{1}$. In order to state the appropriate form of local character for this notion, we will need to introduce the generalized club filter.
\begin{defn}
\label{clubdef} Let $\kappa$ be a cardinal and $X$ a set with $\left|X\right|\geq\kappa$.
We write $\left[X\right]^{\kappa}$ to denote $\{Y\subseteq X : \left|Y\right|=\kappa\}$ and likewise $[X]^{<\kappa}$ for $\bigcup_{\lambda < \kappa} [X]^{\lambda}$. A set $C\subseteq\left[X\right]^{\kappa}$ is \emph{club} if, for every $Y\in\left[X\right]^{\kappa}$, there is some $Z\in C$
with $Y\subseteq Z$, and whenever $\langle Y_{i} : i<\alpha\leq\kappa \rangle$ is an increasing chain in $C$ (i.e. each
$Y_{i}\in C$ and $i<j<\alpha$ implies $Y_{i}\subseteq Y_{j}$), we have
$\bigcup_{i<\alpha}Y_{i}\in C$.
\end{defn}
\begin{fact} \label{basic kimindep facts}
\cite[Theorem 8.1]{kaplan2017kim} \cite[Theorem 1.1]{24-Kaplan2017} \label{kimslemma} The following
are equivalent for the complete theory $T$:
\begin{enumerate}
\item $T$ is NSOP$_{1}$.
\item Kim's lemma for Kim-dividing: Given any model $M\models T$ and formula
$\varphi\left(x;b\right)$, $\varphi\left(x;b\right)$ Kim-divides over $M$ if and only if for any $\langle b_{i} : i < \omega \rangle$ Morley over $M$ in some global $M$-invariant type, $\{\varphi\left(x;b_{i}\right) : i < \omega\}$ is inconsistent.
\item Symmetry of Kim independence over models: $a\ind_{M}^{K}b$ iff $b\ind_{M}^{K}a$
for any $M\models T$.
\item Local character on a club: given any model $M \models T$ and type $p \in S(M)$, the set $\{N \prec M : |N| = |T| \text{ and } p \text{ does not Kim-divide over }N\}$ is a club subset of $[M]^{|T|}$.
\item Independence theorem over models: if $A\ind^{K}_{M}B$, $c\ind^{K}_{M}A$,
$c'\ind^{K}_{M}B$ and $c\equiv_{M}c'$ then there is some $c''\ind^{K}_{M}AB$
such that $c''\equiv_{MA}c$ and $c''\equiv_{MB} c'$.
\end{enumerate}
\end{fact}
\begin{rem} \label{local character for bigger cardinals}
Because NSOP$_{1}$ is preserved by naming constants, we also see that if $\kappa \geq |T|$ and we are given any model $M \models T$ with $|M| \geq \kappa$ and type $p \in S(M)$, the set $\{N \prec M : |N| = \kappa \text{ and } p \text{ does not Kim-divide over }N\}$ is a club subset of $[M]^{\kappa}$. This follows by choosing an arbitrary $M_{0} \prec M$ of size $\kappa$ and applying Fact \ref{basic kimindep facts}(4) to the theory $T(M_{0})$ obtained from $T$ by adding constants for $M_{0}$.
\end{rem}
We will make extensive use of the following additional properties of Kim-independence in NSOP$_{1}$ theories:
\begin{fact} \label{fact:Kim Morley is consistent}
Suppose that $T$ is NSOP$_{1}$ and $M \models T$.
\begin{enumerate}
\item Extension: if $a \ind^{K}_{M} b$, then given any $c$, there is $a' \equiv_{Mb} a$ such that $a' \ind^{K}_{M} bc$ \cite[Proposition 3.20]{kaplan2017kim}.
\item Consistency along $\ind^{K}$-Morley sequences: suppose $\langle a_{i} : i<\omega \rangle$ is
an $\ind^{K}$-Morley sequence over $M$. If $\varphi\left(x,a_{0}\right)$
does not Kim-divide over $M$, then $\{\varphi\left(x,a_{i}\right) : i<\omega\}$
does not Kim-divide over $M$, and in particular it is consistent \cite[Lemma 7.6]{kaplan2017kim}.
\item Strengthened independence theorem: Suppose $c_{0} \equiv_{M} c_{1}$, $c_{0} \ind^{K}_{M} a$, $c_{1} \ind^{K}_{M} b$ and $a \ind^{K}_{M} b$. Then there is $c \models \text{tp}(c_{0}/Ma) \cup \text{tp}(c_{1}/Mb)$ such that $a \ind^{K}_{M} bc$, $b \ind^{K}_{M} ac$, and $c \ind^{K}_{M} ab$ \cite[Theorem 2.13]{kruckman2018generic}.
\end{enumerate}
\end{fact}
We will need the following chain condition for $\ind^{K}$-Morley sequences, which is a slight elaboration of the proof of Fact \ref{fact:Kim Morley is consistent}(2).
\begin{lem} \label{chain condition for indk}
Suppose $T$ is NSOP$_{1}$ and $M \models T$. If $a \ind^{K}_{M} b_{0}$ and $I = \langle b_{i} : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$, then there is $a' \equiv_{Mb_{0}} a$ such that $I$ is $Ma'$-indiscernible and $a' \ind^{K}_{M} I$.
\end{lem}
\begin{proof}
Let $p(x;b_{0}) = \text{tp}(a/Mb_{0})$. By induction, we will choose $a_{n}$ such that $a_{n} \models \bigcup_{i \leq n} p(x;b_{i})$ and $a_{n} \ind^{K}_{M} b_{\leq n}$. For $n = 0$, we put $a_{0} = a$. Given $a_{n}$, pick $a'$ such that $a'b_{n+1} \equiv_{M} ab_{0}$. Then, by invariance, we have $a' \ind^{K}_{M} b_{n+1}$ and, additionally, $b_{n+1} \ind^{K}_{M} b_{\leq n}$, and $a_{n} \ind^{K}_{M} b_{\leq n}$. As $a' \equiv_{M} a \equiv_{M} a_{n}$, we may apply the independence theorem to find $a_{n+1}$ such that $a_{n+1} \equiv_{Mb_{\leq n}} a_{n}$, $a_{n+1} \equiv_{Mb_{n+1}} a'$, and $a_{n+1} \ind^{K}_{M} b_{\leq n+1}$. In particular, $a_{n+1} \models \bigcup_{i \leq n+1} p(x;b_{i})$, completing the induction.
By compactness and finite character, we can find $a_{*} \models \bigcup_{i < \omega} p(x;b_{i})$ such that $a_{*} \ind^{K}_{M} I$. By compactness, Ramsey, and an automorphism, we may assume $I$ is $Ma_{*}$-indiscernible, completing the proof.
\end{proof}
\subsection{Generalized indiscernibles and a class of trees}
The construction of tree Morley sequences goes by way of an inductive construction of approximations to Morley trees indexed by a certain class of trees. Although the initial set-up is somewhat cumbersome, the definitions allow us to give simple and streamlined constructions. It will be convenient to use the notation and basic definitions that accompany the trees $\mathcal{T}_{\alpha}$ from \cite[Section 5.1]{kaplan2017kim}. The subsection below consists entirely of this notation and these definitions, which are reproduced for the reader's convenience.
For an ordinal $\alpha$, let the language $L_{s,\alpha}$ be $\langle \unlhd, \wedge, <_{lex}, (P_{\beta})_{\beta \leq \alpha} \rangle$. We may view a tree with $\alpha$ levels as an $L_{s,\alpha}$-structure by interpreting $\unlhd$ as the tree partial order, $\wedge$ as the binary meet function, $<_{lex}$ as the lexicographic order, and $P_{\beta}$ as defining level $\beta$. Our trees will be understood to be $L_{s,\alpha}$-structures for some appropriate $\alpha$. We recall the definition of a class of trees $\mathcal{T}_{\alpha}$ below:
\begin{defn}
Suppose $\alpha$ is an ordinal. We define $\mathcal{T}_{\alpha}$ to be the set of functions $f$ such that
\begin{itemize}
\item $\text{dom}(f)$ is an end-segment of $\alpha$ of the form $[\beta,\alpha)$ for $\beta$ equal to $0$ or a successor ordinal. If $\alpha$ is a successor, we allow $\beta = \alpha$, i.e. $\text{dom}(f) = \emptyset$.
\item $\text{ran}(f) \subseteq \omega$.
\item finite support: the set $\{\gamma \in \text{dom}(f) : f(\gamma) \neq 0\}$ is finite.
\end{itemize}
We interpret $\mathcal{T}_{\alpha}$ as an $L_{s,\alpha}$-structure by defining
\begin{itemize}
\item $f \unlhd g$ if and only if $f \subseteq g$. Write $f \perp g$ if $\neg(f \unlhd g)$ and $\neg(g \unlhd f)$.
\item $f \wedge g = f|_{[\beta, \alpha)} = g|_{[\beta, \alpha)}$ where $\beta = \text{min}\{ \gamma : f|_{[\gamma, \alpha)} =g|_{[\gamma, \alpha)}\}$, if non-empty (note that $\beta$ will not be a limit, by finite support). Define $f \wedge g$ to be the empty function if this set is empty (note that this cannot occur if $\alpha$ is a limit).
\item $f <_{lex} g$ if and only if $f \vartriangleleft g$ or $f \perp g$ with $\text{dom}(f \wedge g) = [\gamma +1,\alpha)$ and $f(\gamma) < g(\gamma)$.
\item For all $\beta \leq \alpha$, $P_{\beta} = \{ f \in \mathcal{T}_{\alpha} : \text{dom}(f) = [\beta, \alpha)\}$.
\end{itemize}
\end{defn}
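When $\alpha$ is finite, the finite-support functions above form a concrete data structure, and the relations $\unlhd$, $\wedge$, and $<_{lex}$ can be implemented exactly as defined. The sketch below is a purely illustrative prototype of our own (all names, and the choice $\alpha = 4$, are ours): a node is a dict from the levels in an end-segment of $\{0,\dots,3\}$ to $\omega$.

```python
ALPHA = 4  # prototype T_4: levels are 0,1,2,3; domains are end-segments of {0,...,3}

def node(beta, values):
    """The function with domain [beta, ALPHA) sending beta + k to values[k]."""
    assert 0 <= beta <= ALPHA and len(values) == ALPHA - beta
    return {beta + k: v for k, v in enumerate(values)}

def tree_leq(f, g):
    """f ⊴ g iff f ⊆ g as functions."""
    return all(lvl in g and g[lvl] == v for lvl, v in f.items())

def meet(f, g):
    """f ∧ g: the restriction to [beta, ALPHA) for the least beta on which
    f and g agree from beta upward (the empty function if they never agree)."""
    for beta in range(ALPHA + 1):
        fa = {l: v for l, v in f.items() if l >= beta}
        ga = {l: v for l, v in g.items() if l >= beta}
        if fa == ga:
            return fa

def lex_less(f, g):
    """f <_lex g iff f ◁ g, or f ⊥ g and f is smaller at the splitting level."""
    if f != g and tree_leq(f, g):
        return True
    if tree_leq(f, g) or tree_leq(g, f):
        return False
    m = meet(f, g)
    gamma = (min(m) - 1) if m else ALPHA - 1  # dom(f ∧ g) = [gamma + 1, ALPHA)
    return f[gamma] < g[gamma]

root = node(ALPHA, ())                       # the empty function (ALPHA is a successor)
left, right = node(3, (0,)), node(3, (1,))   # the nodes ⟨0⟩ and ⟨1⟩
assert tree_leq(root, left) and meet(left, right) == root
assert lex_less(left, right) and lex_less(root, left)
```

Note that representing levels as integers quietly builds in the finite-support condition; for infinite $\alpha$ one would instead store only the finitely many nonzero values together with $\min(\text{dom}(f))$.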
\begin{rem}
The first condition in the definition of $\mathcal{T}_{\alpha}$ was stated incorrectly in the first arXiv version of \cite{kaplan2017kim} via the weaker requirement that $\text{dom}(f)$ is an end-segment, non-empty if $\alpha$ is a limit. There, and below, the inductive constructions assume that $\mathcal{T}_{\alpha+1}$ consists of the empty function (the root) and countably many copies of $\mathcal{T}_{\alpha}$ given by $\{\langle i \rangle \frown \eta : i< \omega,\eta \in \mathcal{T}_{\alpha}\}$ (where this concatenation is defined below in Definition \ref{concatenation}). But if $\alpha$ is a limit, this becomes false if we allow functions with domain $\{\alpha\}$ since the empty function is not an element of $\mathcal{T}_{\alpha}$ and therefore the function $\alpha \mapsto i$ is not of the form $\langle i \rangle \frown \eta$ for some $\eta \in \mathcal{T}_{\alpha}$. This is rectified by omitting functions whose domain is an end-segment of the form $[\beta,\alpha)$ for $\beta$ limit. \end{rem}
\begin{defn} \label{concatenation}
Suppose $\alpha$ is an ordinal.
\begin{enumerate}
\item (Restriction) If $w \subseteq \alpha \setminus \text{lim}(\alpha)$, the \emph{restriction of} $\mathcal{T}_{\alpha}$ \emph{to the set of levels }$w$ is given by
$$
\mathcal{T}_{\alpha} \upharpoonright w = \{\eta \in \mathcal{T}_{\alpha} : \min (\text{dom}(\eta)) \in w \text{ and }\beta \in \text{dom}(\eta) \setminus w \implies \eta(\beta) = 0\}.
$$
\item (Concatenation) If $\eta \in \mathcal{T}_{\alpha}$, $\text{dom}(\eta) = [\beta+1,\alpha)$ for some $\beta \in \alpha \setminus \text{lim}(\alpha)$, and $i < \omega$, let $\eta \frown \langle i \rangle$ denote the function $\eta \cup \{(\beta,i)\}$. We define $\langle i \rangle \frown \eta \in \mathcal{T}_{\alpha+1}$ to be $\eta \cup \{(\alpha,i)\}$. When we write $\langle i \rangle \in \mathcal{T}_{\alpha+1}$ by itself, we use this to denote the function $\{(\alpha,i)\}$.
\item (Canonical inclusions) If $\alpha < \beta$, we define the map $\iota_{\alpha \beta} : \mathcal{T}_{\alpha} \to \mathcal{T}_{\beta}$ by $\iota_{\alpha \beta}(f) = f \cup \{(\gamma, 0) : \gamma \in \beta \setminus \alpha\}$.
\item (The all $0$'s path) If $\beta < \alpha$, then $\zeta_{\beta}$ denotes the function with $\text{dom}(\zeta_{\beta}) = [\beta, \alpha)$ and $\zeta_{\beta}(\gamma) = 0$ for all $\gamma \in [\beta,\alpha)$. This defines an element of $\mathcal{T}_{\alpha}$ if and only if $\beta \in \alpha \setminus \text{lim}(\alpha)$.
\end{enumerate}
\end{defn}
We will most often be interested in collections of tuples indexed by $\mathcal{T}_{\alpha}$ and, if $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is such a collection and $\eta \in \mathcal{T}_{\alpha}$, we will write $a_{\unrhd \eta}$ and $a_{\vartriangleright \eta}$ for tuples enumerating the elements indexed by elements of $\mathcal{T}_{\alpha}$ above or strictly above $\eta$ in the tree partial order, respectively. Note that if $\beta < \alpha$ is a limit ordinal and $\eta \in \mathcal{T}_{\alpha}$ has $\text{dom}(\eta) = [\beta+1,\alpha)$, then $\eta \frown \langle i \rangle$ is a function whose domain is $[\beta,\alpha)$ and is therefore not in $\mathcal{T}_{\alpha}$. If $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is a collection of tuples indexed by $\mathcal{T}_{\alpha}$, we will abuse notation and write $a_{\unrhd \eta \frown \langle i \rangle}$ for the tuple that enumerates $\{a_{\nu} : \nu \in \mathcal{T}_{\alpha}, \eta \frown \langle i \rangle \subseteq \nu\}$ and likewise for $a_{\unrhd \zeta_{\beta}}$.
We additionally remark that the concatenation notation is only unambiguous once we have specified in which tree the element lives\textemdash for example, $\langle i \rangle \frown \langle j \rangle$ can denote an element of $\mathcal{T}_{\alpha+2}$ when $\langle j \rangle \in \mathcal{T}_{\alpha+1}$, or an element of $\mathcal{T}_{\alpha+1}$ when $\langle i \rangle \in \mathcal{T}_{\alpha+1}$. In the arguments below, the intended meaning of each concatenation is clear from context, so no confusion will arise.
The function $\iota_{\alpha \beta}$ includes $\mathcal{T}_{\alpha}$ into $\mathcal{T}_{\beta}$ by adding zeros to the bottom of every node in $\mathcal{T}_{\alpha}$. Clearly if $\alpha < \beta < \gamma$, then $\iota_{\alpha \gamma} = \iota_{\beta \gamma} \circ \iota_{\alpha \beta}$. If $\beta$ is a limit, then $\mathcal{T}_{\beta}$ is the direct limit of the $\mathcal{T}_{\alpha}$ for $\alpha < \beta$ along these maps.
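Encoding a node of $\mathcal{T}_{\alpha}$ as a dict from levels to values (a purely illustrative finite sketch; the function names \texttt{concat\_right}, \texttt{concat\_left}, \texttt{iota}, and \texttt{zeta} are our own), the operations of Definition \ref{concatenation} and the composition identity $\iota_{\alpha \gamma} = \iota_{\beta \gamma} \circ \iota_{\alpha \beta}$ can be checked directly:

```python
# A node of T_alpha is represented as a dict {level: value} whose key set is
# an end-segment of {0, ..., alpha - 1}.

def concat_right(eta, i, beta):
    """eta ⌢ ⟨i⟩ for eta with dom(eta) = [beta + 1, alpha): adjoin (beta, i)."""
    return {**eta, beta: i}

def concat_left(i, eta, alpha):
    """⟨i⟩ ⌢ eta, an element of T_{alpha+1}: eta ∪ {(alpha, i)}."""
    return {**eta, alpha: i}

def iota(alpha, beta, f):
    """Canonical inclusion T_alpha -> T_beta: f ∪ {(gamma, 0) : gamma in beta \\ alpha}."""
    assert alpha <= beta
    return {**f, **{gamma: 0 for gamma in range(alpha, beta)}}

def zeta(beta, alpha):
    """The all-0's path with domain [beta, alpha)."""
    return {lvl: 0 for lvl in range(beta, alpha)}

f = {1: 2, 2: 0}                                     # a node of T_3 with dom [1, 3)
assert iota(3, 5, f) == iota(4, 5, iota(3, 4, f))    # iota composes as stated
assert concat_right(f, 7, 0) == {0: 7, 1: 2, 2: 0}   # extends f downward in T_3
assert concat_left(1, f, 3) == {1: 2, 2: 0, 3: 1}    # lands in T_4
assert zeta(0, 3) == {0: 0, 1: 0, 2: 0}
```

The asserts spell out, on one example, that $\iota$ pads with zeros on the levels of $\beta \setminus \alpha$ and that $\eta \frown \langle i \rangle$ and $\langle i \rangle \frown \eta$ extend a node at opposite ends of its domain.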
\begin{defn} Suppose $I$ is an $L'$-structure, where $L'$ is some language.
\begin{enumerate}
\item We say that $(a_{i} : i \in I)$ is a set of $I$\emph{-indexed indiscernibles over} $A$ if whenever
$(s_{0}, \ldots, s_{n-1})$, $(t_{0}, \ldots, t_{n-1})$ are tuples from $I$ with
$$
\text{qftp}_{L'}(s_{0}, \ldots, s_{n-1}) = \text{qftp}_{L'}(t_{0}, \ldots, t_{n-1}),
$$
then we have
$$
\text{tp}(a_{s_{0}},\ldots, a_{s_{n-1}}/A) = \text{tp}(a_{t_{0}},\ldots, a_{t_{n-1}}/A).
$$
\item In the case that $L' = L_{s,\alpha}$ for some $\alpha$, we say that an $I$-indexed indiscernible is \emph{s-indiscernible}. As the only $L_{s,\alpha}$-structures we will consider will be trees, we will often refer to $I$-indexed indiscernibles in this case as \emph{s-indiscernible trees}.
\item We say that $I$-indexed indiscernibles have the \emph{modeling property} if, given any $(a_{i} : i \in I)$ from $\mathbb{M}$ and set of parameters $A$, there is an \(I\)-indexed indiscernible \((b_{i} : i \in I)\) in $\mathbb{M}$ \emph{locally based} on \((a_{i} : i \in I)$ over $A$ -- i.e., given any finite set of formulas \(\Delta\) from \(L(A)\) and a finite tuple \((t_{0}, \ldots, t_{n-1})\) from \(I\), there is a tuple \((s_{0}, \ldots, s_{n-1})\) from \(I\) such that
\[
\text{qftp}_{L'} (t_{0}, \ldots, t_{n-1}) =\text{qftp}_{L'}(s_{0}, \ldots , s_{n-1})
\]
and also
\[
\text{tp}_{\Delta}(b_{t_{0}}, \ldots, b_{t_{n-1}}) = \text{tp}_{\Delta}(a_{s_{0}}, \ldots, a_{s_{n-1}}).
\]
\end{enumerate}
\end{defn}
Recall that, given a set $X$, we write $[X]^{<\omega}$ to denote the set of finite subsets of $X$.
\begin{defn} \label{spread out def}
Suppose $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is a tree of tuples, and $C$ is a set of parameters.
\begin{enumerate}
\item We say that $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is \emph{spread out over} $C$ if for all $\eta \in \mathcal{T}_{\alpha}$ with $\text{dom}(\eta) =[\beta+1,\alpha)$ for some $\beta \in \alpha$, there is a global $C$-invariant type $q_{\eta} \supseteq \text{tp}(a_{\unrhd \eta \frown \langle 0 \rangle}/C)$ such that $(a_{\unrhd \eta \frown \langle i \rangle})_{i < \omega}$ is a Morley sequence over $C$ in $q_{\eta}$.
\item Suppose $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is a tree which is spread out and $s$-indiscernible over $C$ and for all $w,v \in [\alpha \setminus \text{lim}(\alpha)]^{<\omega}$ with $|w| = |v|$,
$$
(a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright w} \equiv_{C} (a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright v}
$$
then we say that $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is a \emph{Morley tree} over $C$.
\item A \emph{tree Morley sequence} over $C$ is a $C$-indiscernible sequence of the form $(a_{\zeta_{\beta}})_{\beta \in \alpha \setminus \text{lim}(\alpha)}$ for some Morley tree $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ over $C$.
\end{enumerate}
\end{defn}
\begin{rem}
Note that in Definition \ref{spread out def}(1), it is possible that $\mathrm{dom}(\eta) = [\beta + 1,\alpha)$ for a limit ordinal $\beta \in \alpha$, in which case $\eta \frown \langle i \rangle$, defined to be the function $\eta \cup \{(\beta, i)\}$, is not an element of $\mathcal{T}_{\alpha}$. Nonetheless, the tuple $a_{\unrhd \eta \frown \langle i \rangle}$ still makes sense as the tuple whose elements are indexed by functions in the tree $\mathcal{T}_{\alpha}$ containing $\eta \frown \langle i \rangle$. See the remarks after Definition \ref{concatenation}.
\end{rem}
\begin{fact} \label{modeling}
\text{ }
\begin{enumerate}
\item For any $\alpha$, $\mathcal{T}_{\alpha}$-indexed indiscernibles have the modeling property \cite[Theorem 4.3]{KimKimScow} \cite[Corollary 5.6]{kaplan2017kim}.
\item Given a model $M \models T$, there is a cardinal $\kappa$ such that if $(a_{\eta})_{\eta \in \mathcal{T}_{\kappa}}$ is a tree of tuples, spread out and $s$-indiscernible over $M$, then there is a Morley tree $(b_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ such that for all $w \in [\omega]^{<\omega}$,
$$
(a_{\eta})_{\eta \in \mathcal{T}_{\kappa} \upharpoonright v} \equiv_{M} (b_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright w}
$$
for some $v \in [\kappa \setminus \text{lim}(\kappa)]^{<\omega}$ \cite[Lemma 5.10]{kaplan2017kim}.
\end{enumerate}
\end{fact}
The interest in tree Morley sequences is that the genericity condition is sufficiently weak that they exist under broader hypotheses than invariant Morley sequences, yet sufficiently strong that they witness Kim-dividing. This is made precise below:
\begin{defn}
Suppose $M$ is a model and $(a_{i})_{i < \omega}$ is an $M$-indiscernible sequence.
\begin{enumerate}
\item Say $(a_{i})_{i < \omega}$ is a \emph{witness} for Kim-dividing over $M$ if, for all formulas $\varphi(x;a_{0})$ that Kim-divide over $M$, $\{\varphi(x;a_{i}) : i <\omega\}$ is inconsistent.
\item Say $(a_{i})_{i < \omega}$ is a \emph{strong witness} to Kim-dividing over $M$ if, for all $n$, the sequence $\langle (a_{n \cdot i}, a_{n \cdot i + 1}, \ldots, a_{n \cdot i + n-1}) : i < \omega \rangle$ is a witness to Kim-dividing over $M$.
\end{enumerate}
\end{defn}
\begin{fact} \label{witnessfacts} \cite[Proposition 7.9]{kaplan2017kim}
Suppose $T$ is NSOP$_{1}$ and $M \models T$.
\begin{enumerate}
\item (Kim's Lemma for tree Morley sequences) $\varphi(x;a)$ Kim-divides over $M$ if and only if $\{\varphi(x;a_{i}) : i < \omega\}$ is inconsistent for some tree Morley sequence $(a_{i})_{i < \omega}$ over $M$ with $a_{0} = a$ if and only if $\{\varphi(x;a_{i}) : i < \omega\}$ is inconsistent for all tree Morley sequences $(a_{i})_{i < \omega}$ over $M$ with $a_{0} = a$. \cite[Corollary 5.14]{kaplan2017kim}
\item The sequence $(a_{i})_{i < \omega}$ is a strong witness for Kim-dividing over $M$ if and only if $(a_{i})_{i < \omega}$ is a tree Morley sequence over $M$. \cite[Proposition 7.9]{kaplan2017kim}
\end{enumerate}
\end{fact}
\begin{rem} \label{coheirs are strong witnesses}
The argument for \cite[Corollary 7.10]{kaplan2017kim} contains a proof that $\ind^{f}$-Morley sequences over models are strong witnesses to Kim-dividing. Note that it follows, then, that coheir sequences over models are also strong witnesses to Kim-dividing, as $a \ind^{u}_{M} b$ implies $a \ind^{f}_{M} b$ for all $M \models T$.
\end{rem}
Finally, we define one more kind of $\ind^{K}$-Morley sequence:
\begin{defn}
Suppose $M \models T$. A \emph{total} $\ind^{K}$\emph{-Morley sequence over} $M$ is an $M$-indiscernible sequence $\langle a_{i} : i < \omega \rangle$ such that $a_{>i} \ind^{K}_{M} a_{\leq i}$ for all $i < \omega$.
\end{defn}
\begin{fact} \label{sequence implications}
Suppose $T$ is NSOP$_{1}$ and $M \models T$.
\begin{enumerate}
\item If $I$ is a tree Morley sequence over $M$, then $I$ is a total $\ind^{K}$-Morley sequence over $M$.
\item If $I$ is a total $\ind^{K}$-Morley sequence over $M$, then $I$ is $\ind^{K}$-Morley over $M$.
\end{enumerate}
\end{fact}
\begin{proof}
(2) is obvious so we prove (1). Suppose $I = \langle a_{i} : i < \omega \rangle$ is a tree Morley sequence over $M$. Let $(a_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ be a Morley tree over $M$ with $a_{i} = a_{\zeta_{i}}$ for all $i < \omega$. Then for all $i < \omega$, we have that $\langle a_{\unrhd \zeta_{i+1} \frown \langle j \rangle} : j < \omega \rangle$ is a Morley sequence over $M$ in a global $M$-invariant type which is $Ma_{\zeta_{\geq i+1}}$-indiscernible. Therefore $a_{\zeta_{\geq i+1}} \ind^{K}_{M} a_{\unrhd \zeta_{i+1} \frown \langle 0 \rangle}$, which, in particular, implies $a_{> i} \ind^{K}_{M} a_{\leq i}$ for all $i < \omega$.
\end{proof}
\section{Transitivity holds in NSOP$_{1}$ theories}
In this section, we prove the transitivity of Kim-independence in NSOP$_{1}$ theories. The argument proceeds via an analysis of situations under which one can obtain sequences that are generic over more than one base simultaneously. The heart of the argument is Proposition \ref{goodseq}, which proves the existence of a sequence that is a tree Morley sequence over a model and $\ind^{K}$-Morley over an elementary extension. This, combined with symmetry, gives transitivity as an immediate consequence.
Producing a sequence which is $\ind^{K}$-Morley over a model and a tree Morley sequence over an elementary extension is less involved. The following lemma was implicit in \cite[Lemma 3.6]{24-Kaplan2017}:
\begin{lem} \label{kms over M tms over N}
Suppose $T$ is NSOP$_{1}$, $M \prec N \models T$, and $a \ind^{K}_{M} N$. Then there is a tree Morley sequence $(a_{i})_{i < \omega}$ over $N$ with $a_{0} = a$ such that $a_{i} \ind^{K}_{M} Na_{<i}$ for all $i < \omega$. In particular, $\langle a_{i} : i < \omega \rangle$ is simultaneously an $\ind^{K}$-Morley sequence over $M$ and a tree Morley sequence over $N$.
\end{lem}
\begin{proof}
Let $\langle a_{i} : i \in \mathbb{Z}\rangle$ be a coheir sequence over $N$ with $a_{0} = a$. Coheir sequences are strong witnesses to Kim-dividing by Remark \ref{coheirs are strong witnesses}, reversing the order of a sequence does not change whether it is a strong witness, and strong witnesses are tree Morley by Fact \ref{witnessfacts}(2). It follows that, setting $b_{i} = a_{-i}$, the sequence $\langle b_{i} : i < \omega \rangle$ is a tree Morley sequence over $N$ with $b_{0} = a$.
We claim this sequence also satisfies $b_{i} \ind^{K}_{M} Nb_{<i}$: if not, then by symmetry, there is some $i$ such that $Nb_{<i} \nind^{K}_{M} b_{i}$ and this is witnessed by some $\varphi(x,n;b_{i}) \in \text{tp}(b_{<i}N/Mb_{i})$. Because $\langle a_{i} : i \in \mathbb{Z} \rangle$ is a coheir sequence over $N$, we have, in particular, that $a_{>i} \ind^{u}_{N} a_{i}$. Hence $b_{<i} \ind^{u}_{N} b_{i}$ so there must be some $n' \in N$ with $\models \varphi(n',n;b_{i})$. But then $N \nind^{K}_{M} b_{i}$. By symmetry and invariance, this contradicts $a \ind^{K}_{M} N$, since $b_{i} \equiv_{N} a$.
\end{proof}
\begin{lem} \label{weirdextension}
Suppose $T$ is NSOP$_{1}$ and $M \prec N \models T$. If $b \ind^{K}_{M} N$ and $c \ind^{K}_{M} N$, then there is $c' \equiv_{N} c$ such that $bc' \ind^{K}_{M} N$ and $b \ind^{K}_{N} c'$.
\end{lem}
\begin{proof}
Define a partial type $\Gamma(x;N,b)$ by
$$
\Gamma(x;N,b) = \text{tp}(c/N) \cup \{\neg \varphi(x,b;n) : \varphi(x,y;n) \text{ Kim-divides over }M\}.
$$
By Lemma \ref{kms over M tms over N}, we may construct an $N$-indiscernible sequence $\langle b_{i} : i < \omega \rangle$ such that $b_{0} = b$, $b_{i+1} \ind^{K}_{M} Nb_{\leq i}$, and $\langle b_{i} : i < \omega \rangle$ is a tree Morley sequence over $N$.
\textbf{Claim 1}: $\bigcup_{i < \omega} \Gamma(x;N,b_{i})$ is consistent.
\emph{Proof of claim}: By induction on $n$, we will choose $c_{n} \ind^{K}_{M} Nb_{< n}$ such that
$$
c_{n} \models \bigcup_{i < n} \Gamma(x;N,b_{i}).
$$
For $n = 0$, we may set $c_{0} = c$ and the condition is satisfied since $c \ind^{K}_{M} N$.
Suppose we are given $c_{n} \ind^{K}_{M} Nb_{< n}$ realizing $\bigcup_{i < n} \Gamma(x;N,b_{i})$. By extension, choose $c' \equiv_{M} c$ with $c' \ind^{K}_{M} b_{n}$. As $b_{n} \ind^{K}_{M} Nb_{< n}$, we may apply the strengthened independence theorem, Fact \ref{fact:Kim Morley is consistent}(3), to find $c_{n+1} \models \text{tp}(c_{n}/Nb_{< n}) \cup \text{tp}(c'/Mb_{n})$ with $c_{n+1} \ind^{K}_{M} Nb_{< n+1}$ and $b_{n}c_{n+1} \ind^{K}_{M} Nb_{< n}$. In particular, $b_{n}c_{n+1} \ind^{K}_{M} N$, so $c_{n+1} \models \Gamma(x;N,b_{n})$. This gives $c_{n+1} \models \bigcup_{i < n+1} \Gamma(x;N,b_{i})$. The claim follows by compactness.\qed
Now define a partial type $\Delta(x;N,b)$ by
\begin{eqnarray*}
\Delta(x;N,b) &=& \Gamma(x;N,b)\\
& \cup& \{ \neg \psi(x;b): \psi(x;b) \in L(Nb)\text{ Kim-divides over }N\}.
\end{eqnarray*}
\textbf{Claim 2}: $\Delta(x;N,b)$ is consistent.
\emph{Proof of claim}: Suppose not. Then, by definition of $\Delta(x;N,b)$, compactness, and the equality of Kim-forking and Kim-dividing, we have
$$
\Gamma(x;N,b) \vdash \psi(x;b),
$$
for some $\psi(x;b) \in L(Nb)$ that Kim-divides over $N$. Then we have
$$
\bigcup_{i < \omega} \Gamma(x;N,b_{i}) \vdash \{\psi(x;b_{i}) : i < \omega\}.
$$
The left-hand side is consistent by Claim 1, but the right-hand side is inconsistent by Kim's lemma and the choice of the sequence $\langle b_{i} : i < \omega \rangle$, a contradiction that proves the claim.
\qed
Now let $c' \models \Delta(x;N,b)$. Then $c' \equiv_{N} c$ since $c'$ realizes $\text{tp}(c/N)$ and, by symmetry, $bc' \ind^{K}_{M} N$ and $b \ind^{K}_{N} c'$, which is what we want.
\end{proof}
\begin{prop}\label{goodseq}
Suppose $T$ is NSOP$_{1}$ and $M \prec N \models T$. If $a \ind^{K}_{M} N$, then there is a sequence $\langle a_{i} : i < \omega \rangle$ with $a_{0} = a$ which is a tree Morley sequence over $M$ and an $\ind^{K}$-Morley sequence over $N$.
\end{prop}
\begin{proof}
By induction on $\alpha$, we will construct trees $(a^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ satisfying the following conditions:
\begin{enumerate}
\item For all $\eta \in \mathcal{T}_{\alpha}$, $a^{\alpha}_{\eta} \models \text{tp}(a/N)$.
\item $(a^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is $s$-indiscernible over $N$, spread out over $M$.
\item If $\alpha$ is a successor, then $a^{\alpha}_{\emptyset} \ind^{K}_{N} a^{\alpha}_{\vartriangleright \emptyset}$.
\item $(a^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}} \ind^{K}_{M} N$.
\item If $\alpha < \beta$, then for all $\eta \in \mathcal{T}_{\alpha}$, $a^{\beta}_{\iota_{\alpha \beta}(\eta)} = a^{\alpha}_{\eta}$.
\end{enumerate}
To begin, put $a^{0}_{\emptyset} = a$. At a limit stage $\delta$, we define $(a^{\delta}_{\eta})_{\eta \in \mathcal{T}_{\delta}}$ by $a^{\delta}_{\iota_{\alpha \delta}(\eta)} = a^{\alpha}_{\eta}$ for all $\alpha < \delta$ and $\eta \in \mathcal{T}_{\alpha}$. This is well-defined by (5) and the definition of $\mathcal{T}_{\delta}$. Moreover, (1) and (2) clearly hold, (3) is vacuous at a limit, and (4) is satisfied by finite character.
Now in the successor stage, we will construct $(a^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$. Let $\overline{b} = \langle (a^{\alpha}_{\eta,i})_{\eta \in \mathcal{T}_{\alpha}} : i < \omega \rangle$ be a coheir sequence over $M$ with $a^{\alpha}_{\eta,0} = a^{\alpha}_{\eta}$ for all $\eta \in \mathcal{T}_{\alpha}$. By (4), we may assume $\overline{b}$ is $N$-indiscernible and $\overline{b} \ind^{K}_{M} N$ by the chain condition (Lemma \ref{chain condition for indk}). Apply Lemma \ref{weirdextension} to get $b \equiv_{N} a$ such that $b \ind^{K}_{N} \overline{b}$ and $b\overline{b} \ind^{K}_{M} N$. Define a tree $(b_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ by $b_{\emptyset} = b$ and $b_{\langle i \rangle \frown \eta} = a^{\alpha}_{\eta,i}$ for all $i < \omega$, $\eta \in \mathcal{T}_{\alpha}$. Now let $(a^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ be an $s$-indiscernible tree over $N$, locally based on the tree $(b_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ over $N$. By an automorphism, we may assume that $a^{\alpha+1}_{\langle 0 \rangle \frown \eta} = a^{\alpha}_{\eta}$ for all $\eta \in \mathcal{T}_{\alpha}$, so (5) is satisfied. By construction and induction, $(b_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ is spread out over $M$ and $b_{\eta} \models \text{tp}(a/N)$ for all $\eta \in \mathcal{T}_{\alpha+1}$, so $(a^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ satisfies (1) and (2). Likewise, $b \ind^{K}_{N} \overline{b}$ and $(b_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}} \ind^{K}_{M} N$ by construction; since Kim-forking is witnessed by formulas and $(a^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ is locally based on $(b_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ over $N$, it follows that $a^{\alpha+1}_{\emptyset} \ind^{K}_{N} a^{\alpha+1}_{\vartriangleright \emptyset}$ and $(a^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}} \ind^{K}_{M} N$ as well, so (3) and (4) are satisfied.
This completes the construction.
Let $(a_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ be the tree obtained by applying Fact \ref{modeling}. Then $(a_{\zeta_{\alpha}})_{\alpha < \omega}$ is the desired sequence.
\end{proof}
\begin{thm} \label{transitivity theorem}
Suppose $T$ is NSOP$_{1}$, $M \prec N \models T$, $a \ind^{K}_{M} N$ and $a \ind^{K}_{N}b$. Then $a \ind^{K}_{M} Nb$.
\end{thm}
\begin{proof}
Suppose $a,b,M,$ and $N$ are given as in the statement. By Proposition \ref{goodseq}, there is a sequence $I = \langle a_{i} : i < \omega \rangle$ with $a_{0} = a$ such that $I$ is a tree Morley sequence over $M$ and an $\ind^{K}$-Morley sequence over $N$. Since $b \ind^{K}_{N} a$, there is $I' \equiv_{Na} I$ such that $I'$ is $Nb$-indiscernible, by compactness, Ramsey, and Fact \ref{witnessfacts}. Then $I'$ is still a tree Morley sequence over $M$ so, by Kim's lemma for tree Morley sequences, $Nb \ind^{K}_{M} a$, and we may conclude by symmetry.
\end{proof}
Transitivity allows one to easily obtain analogues for Kim-independence of the ``algebraically reasonable" properties of Kim- and algebraic-independence proved in \cite{kruckman2018generic}. For example, the following is the analogue of ``algebraically reasonable extension" \cite[Theorem 2.15]{kruckman2018generic}:
\begin{cor}
Suppose $T$ is NSOP$_{1}$, $M \prec N \models T$, and $a \ind^{K}_{M} N$. Then given any $b$, there is $a'$ with $a' \equiv_{N} a$, $a' \ind^{K}_{M} Nb$, and $a' \ind^{K}_{N} b$.
\end{cor}
\begin{proof}
Applying extension, we obtain $a' \equiv_{N} a$ so that $a' \ind^{K}_{N} b$. By invariance, $a' \ind^{K}_{M} N$ so by transitivity, $a' \ind^{K}_{M} Nb$.
\end{proof}
\subsection{An example}
In this subsection, we present an example that illustrates two important phenomena simultaneously. First, it shows that if $T$ is NSOP$_{1}$, $M \prec N \models T$ and $a \ind^{K}_{M} N$, then it is not necessarily possible to find $I = \langle a_{i} : i < \omega \rangle$ that is a coheir sequence over $N$ with $a_{0} = a$ and $I \ind^{K}_{M} N$. Our example shows that Lemma \ref{kms over M tms over N}, which shows that, in this situation, a coheir sequence over $N$ starting with $a$ is $\ind^{K}$-Morley over $M$, is optimal: in general, one cannot improve it to conclude that a coheir sequence over $N$ is a stronger form of Morley sequence over $M$. In particular, it is not the case that every tree Morley sequence over $N$ is automatically a tree Morley sequence over $M$, as one might hope, since we produce a coheir sequence (which is therefore tree Morley over $N$) that is not tree Morley over $M$. Secondly, our example shows that it is possible, in an NSOP$_{1}$ theory, for an $\ind^{K}$-Morley sequence to be neither a tree Morley sequence nor a total $\ind^{K}$-Morley sequence (later, in Corollary \ref{tms = tree}, we will show that the notions of tree Morley sequence and total $\ind^{K}$-Morley sequence are equivalent). In particular, we show that there is a sequence $\langle a_{i} : i < \omega \rangle$ that is $\ind^{K}$-Morley over $M$ but satisfies $a_{2}a_{3} \nind^{K}_{M} a_{0}a_{1}$.
\begin{fact} \label{winkler facts}
Let $L$ be the language consisting of a single binary function $f$.
\begin{enumerate}
\item The empty $L$-theory has a model completion $T_{f}$, which eliminates quantifiers. \cite{winkler1975model} \cite[Corollary 3.10]{kruckman2018generic}
\item Modulo $T_{f}$, for all sets $A$, $\text{acl}(A) = \text{dcl}(A) = \langle A \rangle$, where $\langle A \rangle$ denotes the substructure generated by $A$. \cite[Corollary 3.11]{kruckman2018generic}
\item $T_{f}$ is NSOP$_{1}$ and Kim-independence coincides with algebraic independence: for any tuples $a,b$, if $M \models T_{f}$, $a \ind^{K}_{M} b$ if and only if $\langle aM \rangle \cap \langle bM \rangle = M$. \cite[Corollary 3.13]{kruckman2018generic}
\end{enumerate}
\end{fact}
\begin{exmp}
Let $M$ be a countable model of $T_{f}$ and $N$ an $\aleph_{1}$-saturated elementary extension, all contained in the monster model $\mathbb{M} \models T_{f}$. Pick elements $m_{*} \in M$ and $n_{*} \in N \setminus M$. In $N \setminus M$, we can find a countable set of distinct elements $B = \{b_{i} : i < \omega\}$ such that $f$ is constantly $m_{*}$ on $(B \times M) \cup (M \times B) \cup \Delta$ and constantly $n_{*}$ on $(B \times B) \setminus \Delta$, where $\Delta = \{(b_{i},b_{i}) : i < \omega\}$. Let $\mathcal{D}$ be a non-principal ultrafilter on $N$ concentrating on $B$ and $q = \text{Av}(\mathcal{D},\mathbb{M})$. Let $I = (a_{i})_{i < \omega} \models q^{\otimes \omega}|_{N}$ be a Morley sequence in $q$ over $N$.
We claim $a_{0} \ind^{K}_{M} N$. This is equivalent to the assertion that $\langle a_{0}M \rangle \cap N = M$.
Suppose $c \in \langle a_{0}M \rangle \cap N$. Then there is a term $t(x;m)$, possibly with parameters from $M$, such that $t(a_{0};m) = c$ and therefore $\{i : t(b_{i};m) = c\} \in \mathcal{D}$. One may easily check, by induction on the complexity of terms, that if $s$ is a constant term in the language $L(Mb_{i})$, i.e.\ $L$ with constants for $M$ and $b_{i}$, then either $s = b_{i}$ or there is $m' \in M$ with $s = m'$: this is clear for the constants and, since $f(n,b_{i}) = f(b_{i},n) = f(b_{i},b_{i}) = m_{*}$ for all $n \in M$, the induction goes through. Since the $b_{i}$ are pairwise distinct and $\{i : t(b_{i};m) = c\} \in \mathcal{D}$, it is clear that $c$ is not equal to any $b_{i}$, so it follows that $c \in M$.
However, $f(b_{i},b_{j}) = n_{*} \in N\setminus M$ for all distinct $i,j < \omega$, so $f(a_{0},a_{1}) = n_{*}$. Therefore $\text{dcl}(a_{0}a_{1}M) \cap N \neq M$, which shows $a_{0}a_{1} \nind^{K}_{M} N$ and, in particular, $I \nind^{K}_{M} N$.
Next, in the proof of Lemma \ref{kms over M tms over N}, we showed that if $T$ is NSOP$_{1}$, $M \prec N$, and $b \ind^{K}_{M} N$, then for any coheir sequence $\langle b_{i} : i < \omega \rangle$ in $\text{tp}(b/N)$, we have $b_{>i} \ind^{K}_{M} b_{i}$ for all $i < \omega$. It follows that $a_{>i} \ind^{K}_{M} a_{i}$, and thus $I$ is an $\ind^{K}$-Morley sequence over $M$ when indexed in reverse. However, we have $f(a_{2},a_{3}) = f(a_{0},a_{1}) = n_{*} \not\in M$, so $a_{0}a_{1} \nind^{K}_{M} a_{2}a_{3}$, which shows that $I$ is not a total $\ind^{K}$-Morley sequence.
\end{exmp}
\section{Transitivity implies NSOP$_{1}$}
In this section, we complete the characterization of NSOP$_{1}$ theories by the transitivity of Kim-independence. The argument is loosely inspired by the proof due to Kim that transitivity of non-forking independence implies simplicity \cite[Theorem 2.4]{kim2001simplicity}. However, we have to deal with a more complicated combinatorial configuration as well as the need to produce \emph{models} over which we may observe a failure of transitivity from SOP$_{1}$. We begin by observing a combinatorial consequence of SOP$_{1}$ arising from the witnessing array of pairs and then work in a Skolemization of a given SOP$_{1}$ theory to find the desired counter-example to transitivity.
\begin{lem}\label{goodindiscernible}
Suppose $T$ has SOP$_{1}$. Then there is a formula $\varphi(x;y)$ and an indiscernible sequence $(a_{i},c_{i,0},c_{i,1})_{i < \omega}$ such that
\begin{enumerate}
\item For all $i < \omega$, $a_{i} \models \{\varphi(x;c_{j,0}) : j \leq i\}$.
\item $\{\varphi(x;c_{i,1}) : i < \omega\}$ is $2$-inconsistent.
\item For all $i < \omega$, $c_{i,0} \equiv_{a_{<i},c_{<i,0}c_{<i,1}} c_{i,1}$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\kappa$ be a cardinal sufficiently large to apply Fact \ref{modeling}. Because $T$ has SOP$_{1}$, we know, by Fact \ref{karyversion} and compactness, that there are a formula $\varphi(x;y)$ and an indiscernible sequence $(c_{i,0},c_{i,1})_{i < \kappa}$ such that
\begin{itemize}
\item $\{\varphi(x;c_{i,0}) : i < \kappa\}$ is consistent.
\item $\{\varphi(x;c_{i,1}) : i < \kappa\}$ is $2$-inconsistent.
\item For all $i < \kappa$, $c_{i,0} \equiv_{c_{<i,0}c_{<i,1}} c_{i,1}$.
\end{itemize}
By induction on $n < \omega$, we will build $(a_{i})_{i < n}$ and $(c^{n}_{i,0},c^{n}_{i,1})_{i < \kappa}$ such that, for all $n < \omega$,
\begin{enumerate}
\item $\{\varphi(x;c^{n}_{i,0}) : i < \kappa\}$ is consistent.
\item $\{\varphi(x;c^{n}_{i,1}) : i < \kappa\}$ is $2$-inconsistent.
\item For all $i < \kappa$, $c^{n}_{i,0} \equiv_{c^{n}_{<i,0}c^{n}_{<i,1}} c^{n}_{i,1}$ and $c^{n}_{n,0} \equiv_{a_{<n}c^{n}_{<n,0}c^{n}_{<n,1}} c^{n}_{n,1}$.
\item For all $i < n$, $a_{i} \models \{\varphi(x;c^{n}_{j,0}) : j \leq i\}$.
\item For $m \leq n$, $(c^{m}_{m,0},c^{m}_{m,1}) = (c^{n}_{m,0},c^{n}_{m,1})$.
\end{enumerate}
To begin, we define $(c^{0}_{i,0},c^{0}_{i,1})_{i < \kappa}$ by setting $(c^{0}_{i,0},c^{0}_{i,1}) = (c_{i,0}, c_{i,1})$ for all $i < \kappa$. This, together with the empty sequence of $a_{i}$'s satisfies (1)\textemdash(3). For $n=0$, (4) and (5) are vacuous, so this handles the base case.
Now suppose for $n$, we have constructed $(a_{i})_{i < n}$, $(c^{n}_{i,0},c^{n}_{i,1})_{i < \kappa}$. Choose $a_{n} \models \{\varphi(x;c^{n}_{i,0}) : i \leq n\}$. Now by the pigeonhole principle, there are $i_{*} > j_{*} > n$ such that $c^{n}_{i_{*},1} \equiv_{a_{\leq n}c^{n}_{\leq n,0}c^{n}_{\leq n,1}} c^{n}_{j_{*},1}$. As $c^{n}_{i_{*},0} \equiv_{c^{n}_{<i_{*},0}c^{n}_{<i_{*},1}} c^{n}_{i_{*},1}$, there is $\sigma \in \text{Aut}(\mathbb{M}/c^{n}_{<i_{*},0}c^{n}_{<i_{*},1})$ with $\sigma(c^{n}_{i_{*},0}) = c^{n}_{i_{*},1}$. Define a new array by setting $(c^{n+1}_{m,0},c^{n+1}_{m,1}) = (c^{n}_{m,0},c^{n}_{m,1})$ for all $m \leq n$, $(c^{n+1}_{n+1,0},c^{n+1}_{n+1,1}) = (c^{n}_{i_{*},1},c^{n}_{j_{*},1})$, and finally $(c^{n+1}_{n+\alpha,0},c^{n+1}_{n+\alpha,1}) = \sigma(c^{n}_{i_{*}+\alpha,0},c^{n}_{i_{*}+\alpha,1})$ for all $2 \leq \alpha < \kappa$.
Now we check that this satisfies the requirements. For (1), note that $\{\varphi(x;c^{n+1}_{i,0}) : i < \kappa\}$ is equal to $\{\varphi(x;\sigma(c^{n}_{i,0})) : i \leq n \text{ or }i \geq i_{*}\}$ and this is consistent because $\{\varphi(x;c^{n}_{i,0}) : i \leq n \text{ or } i \geq i_{*}\}$ is consistent and $\sigma$ is an automorphism. Likewise, $\{\varphi(x;c^{n+1}_{i,1}) : i < \kappa\}$ is equal to $\{\varphi(x;\sigma(c^{n}_{i,1})) : i \leq n, i = j_{*}, \text{ or }i > i_{*}\}$, so this is $2$-inconsistent because $\{\varphi(x;c^{n}_{i,1}) : i < \kappa\}$ is $2$-inconsistent and $\sigma$ is an automorphism. (3)\textemdash(5) are immediate from our construction. This completes the induction.
Now define $(c^{\omega}_{i,0},c^{\omega}_{i,1})_{i < \omega}$ such that $(c^{\omega}_{i,0},c^{\omega}_{i,1}) = (c^{j}_{i,0},c^{j}_{i,1})$ for some, equivalently all, $j \geq i$. Then $(a_{i},c^{\omega}_{i,0},c^{\omega}_{i,1})_{i < \omega}$ satisfies conditions (1)\textemdash(3) so, after extracting an indiscernible sequence, we conclude.
\end{proof}
\begin{rem}
If $\varphi(x;y)$ witnesses SOP$_{1}$ in $T$, it is clear from the definition that $\varphi$ will witness SOP$_{1}$ in any expansion $T'$ of $T$, and hence we may apply the above lemma to find $(a_{i},c_{i,0},c_{i,1})_{i < \omega}$ which are moreover $L^{\text{Sk}}$-indiscernible and satisfy $c_{i,0} \equiv^{L^{\text{Sk}}}_{a_{<i},c_{<i,0}c_{<i,1}} c_{i,1}$ for all $i$ in $\mathbb{M}^{\text{Sk}}$, where the $L^{\text{Sk}}$-structure $\mathbb{M}^{\text{Sk}}$ is a monster model of an expansion of $T$ with Skolem functions. See, e.g., \cite[Remark 2.5]{kaplan2017kim}.
\end{rem}
\begin{prop} \label{transitivity implies nsop1}
Suppose $T$ has SOP$_{1}$. Then there are models $M \prec N \models T$ and tuples $a$ and $c$ such that $a \ind^{u}_{M} N$, $a \ind^{u}_{N}c$ and $a \nind^{K}_{M} Nc$.
\end{prop}
\begin{proof}
Fix a Skolemization $T^{\text{Sk}}$ of $T$ in the language $L^{\text{Sk}}$ and work in a monster model $\mathbb{M}^{\text{Sk}} \models T^{\text{Sk}}$. We will write $\equiv^{L^{\text{Sk}}}$ to denote equality of types in the language $L^{\text{Sk}}$ and $\equiv$ to denote equality of types in the language $L$. By Lemma \ref{goodindiscernible} and compactness, we can find an $L$-formula $\varphi(x;y)$ and an $L^{\text{Sk}}$-indiscernible sequence $(a_{i},c_{i,0},c_{i,1})_{i \in \mathbb{Q}}$ such that
\begin{enumerate}
\item For all $i \in \mathbb{Q}$, $a_{i} \models \{\varphi(x;c_{j,0}) : j \leq i\}$.
\item $\{\varphi(x;c_{i,1}) : i \in \mathbb{Q}\}$ is $2$-inconsistent.
\item For all $i \in \mathbb{Q}$, $c_{i,0} \equiv^{L^{\text{Sk}}}_{a_{<i},c_{<i,0}c_{<i,1}} c_{i,1}$.
\end{enumerate}
Define $M = \text{Sk}(a_{<0}c_{<0,0}c_{<0,1})$ and $N = \text{Sk}(a_{<0},c_{<0,0},c_{<0,1},a_{>1})$. Note that we have $M \prec N$. In the claims below, independence is understood to mean independence with respect to the $L$-theory $T$.
\textbf{Claim 1}: $a_{1} \ind^{u}_{M} N$.
\emph{Proof of claim}: Fix a formula $\psi(x;n) \in \text{tp}(a_{1}/N)$. We can write the tuple $n = t(a,c)$ where $t$ is a tuple of Skolem terms, $a$ is a finite tuple from $a_{<0}a_{>1}$ and $c$ is a finite tuple from $c_{<0,0}c_{<0,1}$. As $a$ and $c$ are finite, there is some rational $\epsilon <0$ such that $a$ and $c$ come from $a_{<\epsilon}a_{>1}$ and $c_{<\epsilon,0}c_{<\epsilon,1}$ respectively. By indiscernibility, $\psi(x;n)$ is realized also by any $a_{\delta}$ with $\epsilon < \delta < 0$, which is in $M$.\qed
\textbf{Claim 2}: $a_{1} \ind^{u}_{N} c_{0,0}$.
\emph{Proof of claim}: The proof is similar to that of Claim 1. Given any $\psi(x;n,c_{0,0}) \in \text{tp}(a_{1}/Nc_{0,0})$, as before, we can write the tuple $n = t(a,c)$ where $t$ is a tuple of Skolem terms, $a$ is a finite tuple from $a_{<0}a_{>1}$ and $c$ is a finite tuple from $c_{<0,0}c_{<0,1}$. Because these tuples are finite, there is a rational $\epsilon > 1$ such that $a$ comes from $a_{<0}a_{>\epsilon}$. Then by indiscernibility, $\psi(x;n,c_{0,0})$ is satisfied by any $a_{\delta}$ with $1 < \delta < \epsilon$, all of which are in $N$. \qed
\textbf{Claim 3}: $a_{1} \nind^{K}_{M} Nc_{0,0}$.
\emph{Proof of claim}: We will show even $a_{1} \nind^{K}_{M} c_{0,0}$. Let $\mathcal{D}$ be an ultrafilter on $M$ containing $\{c_{i,1} : i \in (\epsilon,0)\}$ for every $\epsilon < 0$. By $L^{\mathrm{Sk}}$-indiscernibility, we have $c_{0,1} \models \text{Av}(\mathcal{D},M)$. Then there is a sequence $(b_{i})_{i < \omega} \models \text{Av}(\mathcal{D}, \mathbb{M}^{\text{Sk}})^{\otimes \omega}|_{M}$ with $b_{0} = c_{0,1}$. By (2) and the choice of $\mathcal{D}$, we know $\{\varphi(x;b_{i}) : i < \omega\}$ is $2$-inconsistent so $\varphi(x;c_{0,1})$ Kim-divides over $M$. Moreover, $c_{0,0} \equiv^{L^{\text{Sk}}}_{a_{<0}c_{<0,0}c_{<0,1}} c_{0,1}$ so, in particular, $c_{0,0} \equiv_{M} c_{0,1}$ from which it follows also that $\varphi(x;c_{0,0})$ Kim-divides over $M$. By (1), we have $\models \varphi(a_{1},c_{0,0})$ so $a_{1} \nind^{K}_{M} c_{0,0}$. \qed
The claims taken together show $a_{1} \ind^{u}_{M} N$, $a_{1} \ind^{u}_{N} c_{0,0}$, and $a_{1} \nind^{K}_{M} c_{0,0}$, which completes the proof.
\end{proof}
\begin{cor}
The following are equivalent:
\begin{enumerate}
\item $T$ is NSOP$_{1}$.
\item $\ind^{K}$ satisfies the following weak form of transitivity: if $M \prec N \models T$, $a \ind^{u}_{M} N$ and $a \ind^{u}_{N} b$, then $a \ind^{K}_{M} Nb$.
\item $\ind^{K}$ satisfies transitivity: if $M \prec N \models T$, $a \ind^{K}_{M} N$ and $a \ind^{K}_{N} b$, then $a \ind^{K}_{M} Nb$.
\end{enumerate}
\end{cor}
\begin{proof}
Theorem \ref{transitivity theorem} establishes (1)$\implies$(3), Proposition \ref{transitivity implies nsop1} shows (2)$\implies$(1), and (3)$\implies$(2) is immediate from the fact that $\ind^{u}$ implies $\ind^{K}$.
\end{proof}
\section{$\ind^{K}$-Morley sequences are witnesses}
In this section, we characterize NSOP$_{1}$ by the property that $\ind^{K}$-Morley sequences are witnesses to Kim-dividing. The non-structure direction of this characterization was already observed in \cite[Theorem 3.16]{kaplan2017kim}: if $T$ has SOP$_{1}$ then $\ind^{K}$-Morley sequences will not always witness Kim-dividing. The more interesting direction goes the other way, showing that in the NSOP$_{1}$ context, $\ind^{K}$-Morley sequences are witnesses. This is a significant technical development in the study of NSOP$_{1}$ theories, as it, for example, obviates the need in many cases to construct tree Morley sequences. We give some applications below.
\begin{thm} \label{witnessing theorem}
Suppose $T$ is NSOP$_{1}$ and $M \models T$. If $\varphi\left(x,a\right)$ Kim-divides over $M$
and $\langle a_{i}: i<\omega\rangle$ is an $\ind^{K}$-Morley sequence over $M$
starting with $a$, then $\{\varphi\left(x,a_{i}\right):i<\omega\}$
is inconsistent.
\end{thm}
\begin{proof}
Suppose not. Let $\kappa=\left|M\right|+|T|$ and extend the sequence to have length $\kappa^{+}$.
It suffices to find an increasing continuous sequence of models $\langle N_{i}: i<\kappa^{+}\rangle$
such that $N_{i}$ contains $a_{<i}$, $\left|N_{i}\right|\leq \kappa$,
$N_{0}=M$ and such that $a_{i}\ind_{M}^{K}N_{i}$. To see this, suppose that $c\models\{\varphi\left(x,a_{i}\right):i<\kappa^{+}\}$.
Then by local character, Remark \ref{local character for bigger cardinals}, for some $i<\kappa^{+}$, $c\ind_{N_{i}}^{K}N_{\kappa^{+}}$
where $N_{\kappa^{+}}=\bigcup_{i<\kappa^{+}}N_{i}$, as $\{N_{i} : \kappa \leq i < \kappa^{+}\}$ is a club subset of $[N_{\kappa^{+}}]^{\kappa}$. Hence $c\ind_{N_{i}}^{K}a_{i}$.
However, $a_{i}\ind_{M}^{K}N_{i}$ and hence by transitivity and symmetry,
$N_{i}c\ind_{M}^{K}a_{i}$ contradicting our assumption that $\varphi\left(x,a\right)$
Kim-divides over $M$ and hence also $\varphi(x;a_{i})$, by invariance.
\textbf{Claim}: There is a partial type $\Gamma(\overline{x})$ over $a_{<\kappa^{+}}M$ such that:
\begin{enumerate}
\item $\overline{x}=\langle \overline{x}_{\alpha}: \alpha<\kappa^{+} \rangle$ is an increasing
continuous sequence of tuples of variables such that $\left|\overline{x}_{\alpha}\right|=\kappa$
and $\overline{x}_{\alpha+1}$ contains $\kappa$
new variables not in $\overline{x}_{\alpha}$, for all $\alpha<\kappa^{+}$.
\item $\Gamma\left(\overline{x}\right)$ asserts that $\overline{x}_{\alpha}$ enumerates a model containing
$Ma_{<\alpha}$ for all $\alpha < \kappa^{+}$.
\end{enumerate}
\emph{Proof of claim}: We define $\Gamma(\overline{x})$ as a continuous increasing union of partial types $\Gamma_{\alpha}(\overline{x}_{\alpha})$ for $\alpha < \kappa^{+}$. Suppose we are given $\Gamma_{\delta}(\overline{x}_{\delta})$ for $\delta < \alpha$.
If $\alpha = \beta+1$, then we define $\overline{x}_{\alpha,0} = \overline{x}_{\beta}$ and then, given $\overline{x}_{\alpha,i}$, we define $\Lambda_{i}$ to be the set of all partitioned formulas $\varphi(y;\overline{x})$ where the parameters of $\varphi$ come from $a_{<\alpha}M$ and the parameter variables $\overline{x}$ of $\varphi$ are among $\overline{x}_{\alpha,i}$. Now define $\overline{x}_{\alpha,i+1} = \overline{x}_{\alpha,i}$ together with a new variable $x_{\lambda}$ for each $\lambda \in \Lambda_{i}$. Finally $\overline{x}_{\alpha} = \bigcup_{i<\omega} \overline{x}_{\alpha,i}$. Let $\Gamma_{\alpha,0} = \Gamma_{\beta}$ and, given $\Gamma_{\alpha,i}$, we define $\Gamma_{\alpha,i+1}$ by
$$
\Gamma_{\alpha,i+1}(\overline{x}_{\alpha,i+1}) = \Gamma_{\alpha,i}(\overline{x}_{\alpha,i}) \cup \{(\exists y)\varphi(y;\overline{x}) \to \varphi(x_{\lambda};\overline{x}) : \lambda = \varphi(y;\overline{x}) \in \Lambda_{i}\}.
$$
Then $\Gamma_{\alpha}(\overline{x}_{\alpha}) = \bigcup_{i < \omega} \Gamma_{\alpha,i}(\overline{x}_{\alpha,i})$. Note that because $\mathbb{M} \models (\exists y)[y = c]$ for each $c \in a_{<\alpha}M$, any realization of $\Gamma_{\alpha}(\overline{x}_{\alpha})$ will contain $a_{<\alpha}M$ and will be a model by the Tarski-Vaught test.
To complete the induction, we note that if $\alpha$ is a limit and we are given $\Gamma_{\delta}$ for all $\delta < \alpha$, then we can set $\overline{x}_{\alpha} = \bigcup_{\delta < \alpha} \overline{x}_{\delta}$ and $\Gamma_{\alpha}(\overline{x}_{\alpha}) = \bigcup_{\delta < \alpha} \Gamma_{\delta}(\overline{x}_{\delta})$, which has the desired property as the union of an elementary chain is a model. \qed
Lastly, we define $\Delta(\overline{x})$ as follows:
$$
\Delta(\overline{x}) = \Gamma(\overline{x}) \cup \{\neg \varphi(\overline{x}_{\alpha};a_{\alpha}) : \varphi(\overline{x}_{\alpha},y) \in L(M), \varphi(\overline{x}_{\alpha};a) \text{ Kim-divides over }M, \alpha < \kappa^{+}\},
$$
where we write $\varphi(\overline{x}_{\alpha};a_{\alpha})$ to denote a formula whose variables are a finite subtuple of $\overline{x}_{\alpha}$. To conclude, it is enough, by symmetry, to show that $\Delta\left(\overline{x}\right)$ is consistent.
By compactness, it is enough to prove this when we replace $\kappa^{+}$ by a natural number $n < \omega$,
so we prove it by finding such a sequence by induction on $n$. Suppose
we have found such an increasing sequence of models $N_{i}$ for $i<n$.
Let $N_{n}$ be a model containing $MN_{n-1}a_{<n}$ of size $\kappa$.
Since $a_{n}\ind^{K}_{M}a_{<n}$, we may assume by extension that $a_{n}\ind^{K}_{M}N_{n}$,
preserving all the previous types, so we are done.
\end{proof}
\begin{cor}
\label{cor:witnessing}Suppose $T$ is NSOP$_{1}$ and $M \models T$. If $\langle a_{i}: i<\omega \rangle$
is an $\ind^{K}$-Morley sequence over $M$ starting with $a_{0}=a$, then
$\varphi\left(x,a\right)$ Kim-divides over $M$ iff $\{\varphi\left(x,a_{i}\right): i<\omega\}$
is $k$-inconsistent for some $k < \omega$.
\end{cor}
\begin{proof}
One direction is Fact \ref{fact:Kim Morley is consistent}(2). The other is Theorem \ref{witnessing theorem}, since, by compactness and indiscernibility, if $\{\varphi(x;a_{i}) : i < \omega\}$ is inconsistent, it is $k$-inconsistent for some $k < \omega$.
\end{proof}
\begin{rem}
In fact, in Corollary \ref{cor:witnessing} we only need to assume $\langle a_{i} : i < \omega \rangle$ satisfies $a_{i} \ind^{K}_{M} a_{<i}$ and $a_{i} \models \text{tp}(a_{0}/M)$ for all $i < \omega$ (i.e.\ it is not necessary to assume that this sequence is $M$-indiscernible). If $\varphi(x;a)$ does not Kim-divide over $M$, then $\{\varphi(x;a_{i}) : i < \omega\}$ is consistent by the independence theorem over $M$. Conversely, if $\{\varphi(x;a_{i}) : i < \omega\}$ is not $k$-inconsistent for any $k < \omega$, then the partial type $\Gamma(y_{i} : i < \omega)$ containing, for all $i < \omega$,
\begin{itemize}
\item $y_{i} \models \text{tp}(a/M)$,
\item $\{\neg \psi(y_{<i};y_{i}) : \psi(y_{<i};a) \in L(Ma) \text{ Kim-forks over }M\}$,
\item $(\exists x)\bigwedge_{j < i} \varphi(x;y_{j})$
\end{itemize}
together with a schema asserting $\langle y_{i} : i < \omega \rangle$ is $M$-indiscernible is finitely satisfiable in $\langle a_{i} : i < \omega \rangle$ by compactness, Ramsey, and symmetry. A realization contradicts Corollary \ref{cor:witnessing}.
\end{rem}
\begin{cor} \label{witnesschar}
Suppose $T$ is NSOP$_{1}$, $M \models T$, and $I = \langle a_{i} : i < \omega\rangle$ is an $M$-indiscernible sequence. Then $I$ is a witness for Kim-dividing over $M$ if and only if $I$ is an $\ind^{K}$-Morley sequence over $M$.
\end{cor}
\begin{proof}
Note that if $I$ is a witness for Kim-dividing over $M$, then $a_{i} \ind^{K}_{M} a_{<i}$ for all $i < \omega$ by symmetry: if $\varphi(x;a_{i}) \in \text{tp}(a_{<i}/Ma_{i})$, then, by $M$-indiscernibility, $a_{<i} \models \{\varphi(x;a_{j}): j \geq i\}$ so $\varphi(x;a_{i})$ does not Kim-divide over $M$, hence $a_{<i} \ind^{K}_{M} a_{i}$. This shows that witnesses for Kim-dividing over $M$ are $\ind^{K}$-Morley over $M$. The other direction is Theorem \ref{witnessing theorem}.
\end{proof}
\begin{cor} \label{tms = tree}
Suppose $T$ is NSOP$_{1}$ and $M \models T$. A sequence $I$ over $M$ is tree Morley over $M$ if and only if $I$ is a total $\ind^{K}$-Morley sequence over $M$.
\end{cor}
\begin{proof}
By Fact \ref{sequence implications}(1), if $I$ is tree Morley over $M$ then $I$ is a total $\ind^{K}$-Morley sequence over $M$. For the other direction, suppose $I = \langle a_{i} : i < \omega \rangle$ is a total $\ind^{K}$-Morley sequence over $M$; we will show it is tree Morley over $M$. By Fact \ref{witnessfacts}, it suffices to show $I$ is a strong witness to Kim-dividing over $M$. Because $a_{>i} \ind^{K}_{M} a_{\leq i}$ for all $i < \omega$, if $1 \leq n < \omega$ we know $\langle (a_{n \cdot i}, a_{n \cdot i + 1}, \ldots, a_{n \cdot i + (n-1)}) : i < \omega \rangle$ satisfies
$$
(a_{n \cdot i}, a_{n \cdot i + 1}, \ldots, a_{n \cdot i + (n-1)}) \ind^{K}_{M} (a_{n \cdot j}, a_{n \cdot j + 1}, \ldots, a_{n \cdot j + (n-1)})_{j < i},
$$
for all $i < \omega$, or, in other words, $\langle (a_{n \cdot i}, a_{n \cdot i + 1}, \ldots, a_{n \cdot i + (n-1)} ) : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$, hence a witness to Kim-dividing over $M$ by Theorem \ref{witnessing theorem}. It follows that $I$ is a strong witness to Kim-dividing, so $I$ is tree Morley over $M$.
\end{proof}
\section{Applications} \label{applications}
\subsection{Lifting lemmas}
The first application of the transitivity and witnessing theorems will be two `lifting lemmas' that concern $\ind^{K}$-Morley and tree Morley sequences over two bases simultaneously. In Lemma \ref{kms over M tms over N}, we showed that if $M \prec N$ and $a \ind^{K}_{M} N$, then it is possible to construct an $\ind^{K}$-Morley sequence over $M$ beginning with $a$ which is also a tree Morley sequence over $N$. Later, in Proposition \ref{goodseq}, we showed under the same hypotheses that we can construct a tree Morley sequence over $M$ starting with $a$ which is also an $\ind^{K}$-Morley sequence over $N$. These raise two natural questions: first, is it possible, under these hypotheses, to construct sequences that are tree Morley over both bases simultaneously? And if so, are such sequences somehow special? We show that the answer to the first question is yes, and, moreover, address the second by showing that every $\ind^{K}$-Morley sequence (tree Morley sequence) over $M$ beginning with $a$ is conjugate over $Ma$ to a sequence that is $\ind^{K}$-Morley (tree Morley) over $N$.
\begin{defn}
We say that $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is $\ind^{K}$\emph{-spread out} over $M$ if for all $\eta \in \mathcal{T}_{\alpha}$ with $\text{dom}(\eta) =[\beta+1,\alpha)$ for some $\beta < \alpha$, the sequence $(a_{\unrhd \eta \frown \langle i \rangle})_{i < \omega}$ is an $\ind^{K}$-Morley sequence over $M$.
\end{defn}
\begin{lem} \label{indkERarg}
Suppose $(a_{\eta})_{\eta \in \mathcal{T}_{\kappa}}$ is a tree of tuples, $\ind^{K}$-spread out and $s$-indiscernible over $M$. If $\kappa$ is sufficiently large, then there is a tree $(b_{\eta})_{\eta \in \mathcal{T}_{\omega}}$, $s$-indiscernible and $\ind^{K}$-spread out over $M$, such that:
\begin{enumerate}
\item For all $w \in [\omega]^{<\omega}$,
$$
(a_{\eta})_{\eta \in \mathcal{T}_{\kappa} \upharpoonright v} \equiv_{M} (b_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright w}
$$
for some $v \in [\kappa \setminus \text{lim}(\kappa)]^{<\omega}$.
\item For all $w,v \in [\omega]^{<\omega}$ with $|w| = |v|$,
$$
(b_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright w} \equiv_{M} (b_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright v}.
$$
\end{enumerate}
\end{lem}
\begin{proof}
The proof of \cite[Lemma 5.10]{kaplan2017kim} (Fact \ref{modeling}(2)) shows that there is $(b_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ satisfying (1) and (2); see that proof for details. As $(a_{\eta})_{\eta \in \mathcal{T}_{\kappa}}$ is $s$-indiscernible and $\ind^{K}$-spread out over $M$, (1) implies that $(b_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ is $s$-indiscernible and $\ind^{K}$-spread out over $M$ as well.
\end{proof}
\begin{lem} \label{weak tree}
Suppose $M$ is a model and $(a_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is a tree which is $\ind^{K}$-spread out and $s$-indiscernible over $M$ and for all $w,v \in [\alpha \setminus \text{lim}(\alpha)]^{<\omega}$ with $|w| = |v|$,
$$
(a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright w} \equiv_{M} (a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright v}
$$
then $(a_{\zeta_{\beta}})_{\beta \in \alpha \setminus \text{lim}(\alpha)}$ is a tree Morley sequence over $M$.
\end{lem}
\begin{proof}
The condition that for all $w,v \in [\alpha \setminus \text{lim}(\alpha)]^{<\omega}$ with $|w| = |v|$,
$$
(a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright w} \equiv_{M} (a_{\eta})_{\eta \in \mathcal{T}_{\alpha} \upharpoonright v}
$$
implies that $(a_{\zeta_{\beta}})_{\beta \in \alpha \setminus \text{lim}(\alpha)}$ is an $M$-indiscernible sequence. By Corollary \ref{tms = tree}, it suffices to show that $(a_{\zeta_{\beta}})_{\beta \in \alpha \setminus \text{lim}(\alpha)}$ is a total $\ind^{K}$-Morley sequence over $M$. Fix any non-limit $\beta < \alpha$. We know that $a_{\zeta_{\leq \beta}}$ is a subtuple of $a_{\unrhd \zeta_{\beta}} = a_{\vartriangleright \zeta_{\beta+1} \frown \langle 0 \rangle}$ and $\langle a_{\vartriangleright \zeta_{\beta+1} \frown \langle i \rangle} : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$ which is $Ma_{\zeta_{>\beta}}$-indiscernible, so $a_{\zeta_{>\beta}} \ind^{K}_{M} a_{\zeta_{\leq \beta}}$ by Theorem \ref{witnessing theorem}.
\end{proof}
\begin{prop} \label{tree lifting lemma}
Suppose $T$ is NSOP$_{1}$, $M \prec N \models T$, and $I = \langle b_{i} : i < \omega \rangle$ is a tree Morley sequence over $M$. If $b_{0} \ind^{K}_{M} N$, then there is $I' \equiv_{Mb_{0}} I$ such that $I'$ is a tree Morley sequence over $N$.
\end{prop}
\begin{proof}
By compactness, we may stretch the sequence so that $I = \langle b_{i} : i \in \kappa \setminus \text{lim}(\kappa) \rangle$ for some cardinal $\kappa$ large relative to $|N|$. By the chain condition, Lemma \ref{chain condition for indk}, we may also assume $I$ is $N$-indiscernible and $I \ind^{K}_{M} N$ after moving by an automorphism over $Mb_{0}$. By induction on $\alpha \leq \kappa$, we will construct trees $(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ and sequences $I_{\alpha} = \langle b_{\alpha,i} : i \in \kappa \setminus \text{lim}(\kappa)\rangle$ satisfying the following conditions for all $\alpha$:
\begin{enumerate}
\item For all non-limit $i \leq \alpha$, $b^{\alpha}_{\zeta_{i}} = b_{\alpha,i}$.
\item $(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ is $\ind^{K}$-spread out over $N$ and $s$-indiscernible over $NI_{\alpha,>\alpha}$.
\item If $\beta < \alpha$, $I_{\alpha} \equiv_{M} I_{\beta}$ and $I_{0} = I$.
\item $I_{\alpha,>\alpha}$ is $M(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$-indiscernible.
\item If $\alpha < \beta$, then $b^{\alpha}_{\eta} = b^{\beta}_{\iota_{\alpha \beta}(\eta)}$ for $\eta \in \mathcal{T}_{\alpha}$.
\item $I_{\alpha}(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}} \ind^{K}_{M} N$.
\item $b^{\alpha}_{\eta} \equiv_{N} b_{\alpha,i} \equiv_{N} b_{0}$ for all $\eta \in \mathcal{T}_{\alpha}$, $i \in \kappa \setminus \text{lim}(\kappa)$.
\end{enumerate}
For the base case, we define $b^{0}_{\emptyset} = b_{0}$ and $I_{0} = I$, which satisfies all the demands. Next, suppose we are given $(b^{\beta}_{\eta})_{\eta \in \mathcal{T}_{\beta}}$ for all $\beta \leq \alpha$ and we will construct $(b^{\alpha+1}_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$. By (6) and Proposition \ref{goodseq}, we may obtain a sequence $J = \langle (b^{\alpha}_{\eta,i})_{\eta \in \mathcal{T}_{\alpha}} : i < \omega \rangle$ with $(b^{\alpha}_{\eta,0})_{\eta \in \mathcal{T}_{\alpha}} = (b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ which is tree Morley over $M$ and $\ind^{K}$-Morley over $N$. As $J$ is a tree Morley sequence over $M$ which is $N$-indiscernible, we have:
\begin{equation} \tag{a}
J \ind^{K}_{M} N.
\end{equation}
Likewise, as $I_{\alpha, > \alpha}$ is a tree Morley sequence over $M$ by (3) which is $M(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$-indiscernible by (4), we have $I_{\alpha,>\alpha} \ind^{K}_{M} (b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$. By the chain condition (Lemma \ref{chain condition for indk}), there is $I'_{\alpha,>\alpha} \equiv_{M(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}} I_{\alpha, > \alpha}$ so that $J$ is $MI'_{\alpha,>\alpha}$-indiscernible and also:
\begin{equation}\tag{b}
I'_{\alpha,>\alpha} \ind^{K}_{M} J.
\end{equation}
Choose $N'$ so that $NI_{\alpha,>\alpha} \equiv_{M} N'I'_{\alpha,>\alpha}$. By (6) and invariance, we have
\begin{equation} \tag{c}
N' \ind^{K}_{M} I'_{\alpha,>\alpha}.
\end{equation}
By (a), (b), and (c), we may apply the independence theorem to find a model $N''$ with $N'' \equiv_{MJ} N$, $N'' \equiv_{MI'_{\alpha,>\alpha}} N'$, and $N'' \ind^{K}_{M} I'_{\alpha,>\alpha}J$. Now choose $I''_{\alpha,>\alpha} = \langle b''_{\alpha,i} : i \in \kappa \setminus (\text{lim}(\kappa) \cup \alpha) \rangle$ so that $N I''_{\alpha,>\alpha}\equiv_{MJ} N'' I'_{\alpha,>\alpha}$.
Define a tree $(c_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ by setting $c_{\emptyset} = b''_{\alpha, \alpha+1}$ and $c_{\langle i \rangle \frown \eta} = b^{\alpha}_{\eta,i}$ for all $\eta \in \mathcal{T}_{\alpha}$ and $i < \omega$. With this definition, we have $N \ind^{K}_{M} (c_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}} I''_{\alpha,>\alpha+1}$. Let $(c'_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ be a tree which is $s$-indiscernible over $NI''_{\alpha,> \alpha+1}$ locally based on $(c_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$. By symmetry and finite character, we have $N \ind^{K}_{M} (c'_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}I''_{\alpha,>\alpha+1}$. Finally, let $I'''_{\alpha,>\alpha+1} = \langle b'''_{\alpha,i} : i \in \kappa \setminus (\text{lim}(\kappa) \cup (\alpha+1)) \rangle$ be an $N(c'_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$-indiscernible sequence locally based on $I''_{\alpha,>\alpha+1}$. By symmetry, we have $N \ind^{K}_{M} I'''_{\alpha,>\alpha+1}(c'_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$. Note that, by (2) and the construction, $(c'_{\eta})_{\eta \in \mathcal{T}_{\alpha+1}}$ is $\ind^{K}$-spread out over $N$ and $s$-indiscernible over $NI'''_{\alpha,>\alpha+1}$. Moreover, by (2) and the construction, there is an automorphism $\sigma \in \text{Aut}(\mathbb{M}/N)$ such that $\sigma(c'_{\langle 0 \rangle \frown \eta}) = b^{\alpha}_{\eta}$ for all $\eta \in \mathcal{T}_{\alpha}$, so we will define $b^{\alpha+1}_{\eta} = \sigma(c'_{\eta})$ for all $\eta \in \mathcal{T}_{\alpha+1}$. Likewise, we define $I_{\alpha+1} = \langle b_{\alpha+1,i} : i \in \kappa \setminus \text{lim}(\kappa)\rangle$ by $b_{\alpha+1,i} = b^{\alpha+1}_{\zeta_{i}}$ for non-limit $i \leq \alpha+1$ and $b_{\alpha+1,i} = \sigma(b'''_{\alpha,i})$ for non-limit $i > \alpha+1$. It is immediate that this construction satisfies (6) and (7) by induction and the construction of $N''$. 
To check (3), note that, by induction, using (1),(2), and (3), for any function $\eta : \alpha \to \omega$, we have $(b^{\alpha}_{\eta |_{[\beta,\alpha)}})_{\beta \in \alpha \setminus \text{lim}(\alpha)} I_{\alpha, > \alpha} \equiv_{M} I$, and therefore, for any $i < \omega$, we have
$$
(b^{\alpha}_{\eta|_{[\beta,\alpha)},i})_{\beta \in \alpha \setminus \text{lim}(\alpha)} I''_{\alpha,>\alpha} \equiv_{M} (b^{\alpha}_{\eta|_{[\beta,\alpha)},i})_{\beta \in \alpha \setminus \text{lim}(\alpha)} I'_{\alpha,>\alpha} \equiv_{M} I.
$$
By the definition of $(c_{\eta})_{\eta \in T_{\alpha+1}}$ and $s$-indiscernibility over $M$, it follows that, for any function $\eta' : (\alpha+1) \to \omega$,
$$
(c'_{\eta'|_{[\beta,\alpha+1)}})_{\beta \in (\alpha+1) \setminus \text{lim}(\alpha+1)}I'''_{\alpha,>\alpha+1} \equiv_{M} (c_{\eta'|_{[\beta,\alpha+1)}})_{\beta \in (\alpha+1) \setminus \text{lim}(\alpha+1)}I''_{\alpha,>\alpha+1} \equiv_{M} I,
$$
from which (3) follows.
The remaining constraints are easily seen to be satisfied by the construction.
Now for $\delta$ limit, if we are given $(b^{\alpha}_{\eta})_{\eta \in \mathcal{T}_{\alpha}}$ for $\alpha < \delta$, we may define $b^{\delta}_{\iota_{\alpha \delta}(\eta)} = b^{\alpha}_{\eta}$ for all $\alpha < \delta$ and $\eta \in \mathcal{T}_{\alpha}$. We define $I_{\delta}$ as follows: $I_{\delta,<\delta}$ will be defined by $b_{\delta,i} = b_{i,i}$ for all non-limit $i < \delta$. By (1), (3), and induction, we have $I_{\delta,<\delta} \equiv_{M} I_{<\delta}$. Choose $J$ so that $I_{\delta,<\delta}J \equiv_{M} I_{<\delta}I_{>\delta}$. Write $\overline{x}$ for $\langle x_{i} : i \in \kappa \setminus (\text{lim}(\kappa) \cup \delta) \rangle$ and $\varphi(\overline{x};c,n)$ to denote any formula whose free variables form a finite subtuple of $\overline{x}$. By (6), induction, and compactness, the partial type containing $\text{tp}_{\overline{x}}(J/MI_{\delta,<\delta})$ and $\{\neg \varphi(\overline{x};c,n) : c \in (b^{\delta}_{\eta})_{\eta \in \mathcal{T}_{\delta}}, n \in N, \varphi(\overline{x};c,n) \text{ Kim-divides over }M\}$, together with formulas naturally expressing that $(b^{\delta}_{\eta})_{\eta \in \mathcal{T}_{\delta}}$ is $s$-indiscernible over $N\overline{x}$ and that $\overline{x}$ is $N(b^{\delta}_{\eta})_{\eta \in \mathcal{T}_{\delta}}$-indiscernible, is consistent. Let $I_{\delta,>\delta}$ be a realization of this type, completing the definition of $I_{\delta}$. It is easy to check that these are well-defined and satisfy all of the requirements by induction and the finite character of Kim-independence.
This completes the recursion and yields $(b^{\kappa}_{\eta})_{\eta \in \mathcal{T}_{\kappa}}$, defined, as in the limit case, by $b^{\kappa}_{\iota_{\alpha \kappa}(\eta)} = b^{\alpha}_{\eta}$ for all $\alpha < \kappa$ and $\eta \in \mathcal{T}_{\alpha}$. Apply Lemma \ref{indkERarg} to obtain a tree $(c_{\eta})_{\eta \in \mathcal{T}_{\omega}}$ so that for all $w \in [\omega]^{<\omega}$, there is $v \in [\kappa \setminus \text{lim}(\kappa)]^{<\omega}$ such that
$$
(b^{\kappa}_{\eta})_{\eta \in \mathcal{T}_{\kappa} \upharpoonright v} \equiv_{N} (c_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright w},
$$
and, moreover, for all $w,v \in [\omega]^{<\omega}$ with $|w| = |v|$,
$$
(c_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright w} \equiv_{N} (c_{\eta})_{\eta \in \mathcal{T}_{\omega} \upharpoonright v}.
$$
By an automorphism, we can assume $c_{\zeta_{0}} = b_{0}$, hence, setting $I' = \langle c_{\zeta_{i}} : i < \omega \rangle$, we have $I' \equiv_{Mb_{0}} I$. Moreover, by Lemma \ref{weak tree}, $I'$ is a tree Morley sequence over $N$, completing the proof.
\end{proof}
The second lifting lemma is an analogue of Proposition \ref{tree lifting lemma} for $\ind^{K}$-Morley sequences.
\begin{prop}\label{lifting lemma}
Suppose $T$ is NSOP$_{1}$, $M \prec N \models T$, and $I = \langle b_{i} : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$. If $b_{0} \ind^{K}_{M} N$, then there is $I' \equiv_{Mb_{0}} I$ satisfying the following conditions:
\begin{enumerate}
\item $I' \ind^{K}_{M} N$
\item $I'$ is an $\ind^{K}$-Morley sequence over $N$.
\end{enumerate}
\end{prop}
\begin{proof}
For $i < \omega$, let $q_{i}(x_{j} : j \leq i) = \text{tp}(b_{\leq i}/M)$ and let $p(x;N) = \text{tp}(b_{0}/N)$. For a natural number $K$, define $\Gamma_{K}$ to be the partial type defined as the union of the following:
\begin{enumerate}[(a)]
\item $q_{K}(x_{i} : i \leq K)$.
\item $\bigcup_{i \leq K} p(x_{i};N)$.
\item $\{\neg \varphi(x_{\leq K};c) : \varphi(x_{\leq K},y) \in L(M),c \in N,\varphi(x_{\leq K};c) \text{ Kim-divides over }M\}$.
\item $\{\neg \varphi(x_{<i};x_{i}) : i \leq K,\varphi(x_{<i};x_{i}) \in L(N), \varphi(x_{<i};b_{0}) \text{ Kim-divides over }N\}$.
\end{enumerate}
By Ramsey and compactness, it is enough to show the consistency of $\Gamma = \bigcup_{K < \omega} \Gamma_{K}$.
As $b_{0} \ind^{K}_{M} N$, $\Gamma_{0}$ is consistent. Suppose $\Gamma_{K}$ is consistent and we will show that $\Gamma_{K+1}$ is consistent. Let $\Delta(x_{0},\ldots, x_{K+1}) \subseteq \Gamma_{K+1}$ be the partial type defined as the union of the following:
\begin{enumerate}
\item $q_{K+1}(x_{i} : i \leq K+1)$.
\item $\bigcup_{i \leq K+1} p(x_{i};N)$.
\item $\{\neg \varphi(x_{\leq K+1};c) : \varphi(x_{\leq K+1},y) \in L(M),c \in N,\varphi(x_{\leq K+1};c) \text{ Kim-divides over }M\} $.
\item $\{\neg \varphi(x_{<i};x_{i}) : i \leq K,\varphi(x_{<i};x_{i}) \in L(N), \varphi(x_{<i};b_{0}) \text{ Kim-divides over }N\} $.
\end{enumerate}
Note that $\Delta$ is identical to $\Gamma_{K+1}$ except that in the final set of formulas, $i$ ranges up to $K$ rather than $K+1$.
\textbf{Claim 1}: $\Delta$ is consistent.
\emph{Proof of claim}: Let $(b'_{0},\ldots, b'_{K}) \models \Gamma_{K}$ and choose $b'_{K+1}$ so that $b'_{\leq K+1} \equiv_{M} b_{\leq K+1}$. Next, choose a model $N'$ so that $b'_{K+1}N' \equiv_{M} b_{0}N$. Now by definition of $\Gamma_{K}$ and symmetry, we have $N \ind^{K}_{M} b'_{\leq K}$ and our assumption that $b_{0} \ind^{K}_{M} N$ implies $N' \ind^{K}_{M} b'_{K+1}$ by symmetry and invariance. Moreover, because $I$ is an $\ind^{K}$-Morley sequence, we likewise have $b'_{K+1} \ind^{K}_{M} b'_{\leq K}$. Therefore, we may apply the independence theorem to find $N'' \models \text{tp}(N/Mb'_{\leq K}) \cup \text{tp}(N'/Mb'_{K+1})$ such that $N'' \ind^{K}_{M} b'_{\leq K+1}$. There is an automorphism $\sigma \in \text{Aut}(\mathbb{M}/Mb'_{\leq K})$ with $\sigma(N'') = N$. Let $b''_{K+1} = \sigma(b'_{K+1})$. Then $(b'_{0},\ldots, b'_{K},b''_{K+1}) \models \Delta$.\qed
\textbf{Claim 2}: Suppose $J = \langle c_{K+1,i} : i < \omega \rangle$ is an $\ind^{K}$-Morley sequence over $M$ with $b_{0} = c_{K+1,0}$. If $J$ is $N$-indiscernible and $J \ind^{K}_{M} N$, then $\bigcup_{i < \omega} \Delta(x_{0},\ldots, x_{K},c_{K+1,i})$ is consistent.
\emph{Proof of claim}: Choose $(c_{0},\ldots, c_{K})$ so that $(c_{0},\ldots, c_{K},c_{K+1,0}) \models \Delta$. Then $c_{K+1,0} \ind^{K}_{M} c_{\leq K}$, so, by the chain condition for $\ind^{K}$-Morley sequences (Lemma \ref{chain condition for indk}), there is $J' \equiv_{Mc_{K+1,0}} J$ such that $J'$ is $Mc_{\leq K}$-indiscernible and $J' \ind^{K}_{M} c_{\leq K}$. Moreover, by definition of $\Delta$, we have $c_{\leq K} \ind^{K}_{M} N$. By assumption, $J \ind^{K}_{M} N$ and, since $J \equiv_{M} J'$, we may therefore apply the strengthened independence theorem, Fact \ref{fact:Kim Morley is consistent}(3), to find $J_{*}$ that simultaneously realizes $\text{tp}(J'/Mc_{\leq K})$, to satisfy condition $(1)$ in the definition of $\Delta$, and $\text{tp}(J/N)$, to satisfy condition (2), and, moreover, such that $N \ind^{K}_{M} J_{*}c_{\leq K}$, to satisfy (3). Choose $c'_{\leq K}$ so that $c'_{\leq K}J \equiv_{N} c_{\leq K}J_{*}$. Then, by definition of $\Delta$, $c'_{\leq K} \models \bigcup_{i < \omega} \Delta(x_{0},\ldots, x_{K},c_{K+1,i})$.\qed
To conclude, we use Proposition \ref{tree lifting lemma} to select $J = \langle b'_{K+1,i} : i < \omega \rangle$ which is simultaneously a tree Morley sequence over $M$ and a tree Morley sequence over $N$ with $J \ind^{K}_{M} N$. In particular, $J$ is an $\ind^{K}$-Morley sequence over $M$. Hence, by Claim 2,
$$
\bigcup_{i < \omega} \Delta(x_{0},\ldots, x_{K}, b'_{K+1,i})
$$
is consistent, so we may realize it with $(b'_{0},\ldots, b'_{K})$. By compactness and Ramsey, we may additionally assume that $\langle b'_{K+1,i} : i < \omega \rangle$ is $Nb'_{\leq K}$-indiscernible. Put $b'_{K+1} = b'_{K+1,0}$. It follows, by Kim's Lemma for tree Morley sequences (Fact \ref{witnessfacts}(1)), that $b'_{K+1} \ind^{K}_{N} b'_{\leq K}$ and therefore, by definition of $\Delta$, $b'_{<i} \ind^{K}_{N} b'_{i}$ for all $i \leq K+1$. Additionally, by definition of $\Delta$, we have $b'_{\leq K+1} \models q_{K+1}$, $b'_{i} \models p(x;N)$ for all $i \leq K+1$, and $b'_{\leq K+1} \ind^{K}_{M} N$. This shows $b'_{\leq K+1} \models \Gamma_{K+1}$.
\end{proof}
\subsection{Doubly local character}
In \cite[Lemma 3.7]{24-Kaplan2017}, the following variant of local character was established: if $\langle M_{i} : i < \alpha \rangle$ is an increasing sequence of elementary submodels of $N$ and $p \in S(N)$ does not Kim-divide over $M_{i}$ for all $i < \alpha$, then $p$ does not Kim-divide over $M_{\alpha} = \bigcup_{i < \alpha} M_{i}$. The proof there uses the fact that $p$ is a complete type in an essential way, which left open whether or not a local version of this form of local character (hence the name \emph{doubly} local character) might also hold, where the type $p$ is replaced by a formula over $N$. We prove this in Proposition \ref{doubly local character}, answering \cite[Question 3.17]{24-Kaplan2017}.
\begin{defn}
Suppose $\alpha$ is an ordinal and $\mathcal{U}$ is an ultrafilter on $\alpha$. Given a sequence of sequences $\langle \overline{b}_{i} : i < \alpha \rangle$, where $\overline{b}_{i} = \langle b_{i,j} : j < \omega \rangle$ for all $i < \alpha$, we say that $\overline{a}$ is a $\mathcal{U}$\emph{-average} of $\langle \overline{b}_{i} : i < \alpha \rangle$ over $A$ if, for all $n< \omega$ and $\varphi(x_{0},\ldots, x_{n-1}) \in L(A)$, we have
$$
\mathbb{M} \models \varphi(a_{<n}) \iff \{ i \in \alpha : \mathbb{M} \models \varphi(b_{i,<n})\} \in \mathcal{U}.
$$
\end{defn}
It is an easy exercise to show that $\mathcal{U}$-averages exist for any sequence of sequences and parameter sets $A$.
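\begin{rem}
One way to carry out that exercise, spelled out for convenience (the notation $\operatorname{Av}_{\mathcal{U}}$ is introduced only for this remark): consider the set of $L(A)$-formulas
$$
\operatorname{Av}_{\mathcal{U}} = \left\{ \varphi(x_{0},\ldots,x_{n-1}) \in L(A) : \{ i \in \alpha : \mathbb{M} \models \varphi(b_{i,<n})\} \in \mathcal{U} \right\}.
$$
Since $\mathcal{U}$ is an ultrafilter, exactly one of $\varphi$, $\neg \varphi$ belongs to $\operatorname{Av}_{\mathcal{U}}$ for each $\varphi$, so this is a complete type over $A$ in the variables $\langle x_{j} : j < \omega \rangle$; and since $\mathcal{U}$ is closed under finite intersections, any finite subset of $\operatorname{Av}_{\mathcal{U}}$ is realized by $b_{i,<n}$ for a suitable $i$, so it is consistent by compactness. Any realization $\overline{a}$ of $\operatorname{Av}_{\mathcal{U}}$ is then a $\mathcal{U}$-average of $\langle \overline{b}_{i} : i < \alpha \rangle$ over $A$.
\end{rem}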
\begin{lem} \label{lemma:limit of heirs}
Suppose we are given:
\begin{enumerate}
\item An increasing continuous elementary chain $\langle M_{i} : i \leq \alpha \rangle$ of models of $T$.
\item For every $i< \alpha$, $\bar{b}_{i}=\langle b_{i,j}: j<\omega\rangle$ is an indiscernible heir sequence over $M_{i}$.
\item For all $i \leq j$, $b_{i,0} \equiv_{M_{i}} b_{j,0}$.
\end{enumerate}
Then for any ultrafilter $\mathcal{U}$ on $\alpha$ concentrating on end segments of $\alpha$, if $\bar{a}=\langle a_{j}: j<\omega \rangle$ realizes the $\mathcal{U}$-average of $\langle \bar{b}_{i} : i<\alpha\rangle$
over $M_{\alpha}$, then $\langle a_{j} : j<\omega \rangle$ is an heir
sequence over $M_{\alpha}$ such that $a_{0}\equiv_{M_{i}} b_{i,0}$ for all $i<\alpha$.
\end{lem}
\begin{proof}
The fact that $\bar{a}$ is an indiscernible sequence over $M_{\alpha}$ and $a_{0} \equiv_{M_{i}} b_{i,0}$ is
clear by construction. We are left with showing that $\bar{a}$ is
an heir sequence over $M_{\alpha}$. Suppose that $\psi\left(a_{j},a_{<j},m\right)$
holds, where $m\in M_{\alpha}$ and $\psi\left(y,z,w\right)$ is an $L$-formula. By the definition of the $\mathcal{U}$-average, and since $\{i < \alpha : m \in M_{i}\}$ is an end segment of $\alpha$, hence in $\mathcal{U}$, there is $i<\alpha$ with $m\in M_{i}$ such that $\psi\left(b_{i,j},b_{i,<j},m\right)$
holds. Hence for some $n\in M_{i}$, $\psi\left(b_{i,j},n,m\right)$
holds. Hence $\psi\left(a_{0},n,m\right)$ holds (as $b_{i,j}\equiv_{M_{i}}a_{0}$),
and hence $\psi\left(a_{j},n,m\right)$ holds by $M_{\alpha}$-indiscernibility of $\bar{a}$.
\end{proof}
\begin{defn}
Suppose $M$ is a model and $k<\omega$. Say that a formula $\varphi\left(x,a\right)$
\emph{$k$-Kim-divides over $M$} if there is an $\ind^{K}$-Morley sequence
$\langle a_{i}: i<\omega \rangle$ over $M$ starting with $a_{0}=a$ such
that $\{\varphi\left(x,a_{i}\right): i<\omega\}$ is $k$-inconsistent.
\end{defn}
\begin{rem}
There is a choice involved in defining \emph{$k$-Kim-dividing}, since it is not known if, in an NSOP$_{1}$ theory, a formula that $k$-divides with respect to some $\ind^{K}$-Morley sequence will also $k$-divide along a Morley sequence in a global invariant type. The above definition differs from the one implicitly used in \cite{24-Kaplan2017}, but in light of Corollary \ref{witnesschar} this definition seems reasonably canonical, given that any sequence which is a witness to Kim-dividing over $M$ will be an $\ind^{K}$-Morley sequence over $M$ and hence $\varphi\left(x,a\right)$
$k$-Kim-divides over $M$ for some $k<\omega$ iff $\varphi\left(x,a\right)$
Kim-divides over $M$.
\end{rem}
\begin{prop} \label{doubly local character}
Suppose that $\langle M_{i}: i<\alpha\rangle$ is an increasing sequence
of models of $T$ with union $M=\bigcup_{i<\alpha}M_{i}$. Let $\varphi\left(x,y\right)$
be some formula (over $\emptyset$) and $a\in\mathbb{M}^{y}$. Fix some $k<\omega$.
\begin{enumerate}
\item If $\varphi\left(x,a\right)$ Kim-divides over $M$ then $\varphi\left(x,a\right)$
Kim-divides over $M_{i}$ for some $i<\alpha$.
\item If $\varphi\left(x,a\right)$ $k$-Kim-divides over $M_{i}$ for
all $i<\alpha$ then $\varphi\left(x,a\right)$ $k$-Kim-divides
over $M$.
\end{enumerate}
\end{prop}
\begin{proof}
Note that this proposition, once proved, is immediately also true when
we allow parameters from $M$ inside $\varphi$, as long as we assume
these parameters are from $M_{0}$, by adding constants to the language. As the statement is trivial when $\alpha$ is a successor, we may assume $\alpha$ is a limit ordinal.
(1) Suppose that $\varphi\left(x,a\right)$ does not Kim-divide
over any $M_{i}$. For $i<\alpha$, let $\bar{b}_{i}=\langle b_{i,j} : j<\omega \rangle$
be an indiscernible heir sequence over $M_{i}$ starting with $b_{i,0}=a$
(such a sequence exists by, e.g., taking a coheir sequence
in reverse). In particular, $\bar{b}_{i}$ is an $\ind^{K}$-Morley sequence over $M_{i}$ by
symmetry. By Corollary \ref{cor:witnessing}, $\{\varphi\left(x,b_{i,j}\right): j<\omega\}$
is consistent. Let $\mathcal{U}$ be an ultrafilter on $\alpha$, concentrating on end-segments of $\alpha$.
Let $\bar{a}=\langle a_{j} : j<\omega \rangle$ be a $\mathcal{U}$-average
of $\langle \bar{b}_{i} : i<\alpha \rangle$ over $M$. Then Lemma \ref{lemma:limit of heirs}
and symmetry imply that $\bar{a}$ is an $\ind^{K}$-Morley sequence over
$M$, and by construction $\{\varphi\left(x,a_{j}\right): j<\omega\}$
is consistent. By Corollary \ref{cor:witnessing}, $\varphi\left(x,a\right)$
does not Kim-divide over $M$.
(2) Suppose that $\varphi\left(x,a\right)$ $k$-Kim-divides over $M_{i}$
for all $i<\alpha$. For $i<\alpha$ let $\bar{b}_{i}=\langle b_{i,j} : j<\omega \rangle$
be an $\ind^{K}$-Morley sequence over $M_{i}$ witnessing this, i.e., $\{\varphi\left(x,b_{i,j}\right): j<\omega\}$
is $k$-inconsistent and $b_{i,0}=a$. As above, we let $\mathcal{U}$ be an
ultrafilter on $\alpha$, concentrating on end-segments, and let $\bar{a}= \langle a_{j} : j<\omega\rangle$
be a $\mathcal{U}$-average of $\langle \bar{b}_{i} : i<\alpha \rangle$ over
$M$. Then $\overline{a}$ is an $M$-indiscernible sequence in $\text{tp}(a/M)$ such that $\{\varphi(x;a_{j}) : j < \omega\}$ is $k$-inconsistent, so it is enough to show that $\bar{a}$ is an $\ind^{K}$-Morley sequence
over $M$. By symmetry it is enough to show that $a_{<j}\ind_{M}^{K}a_{j}$
for all $j<\omega$. Suppose this is not the case, i.e., $\models \psi\left(a_{<j},a_{j},m\right)$ for some $m\in M$ and $j < \omega$, where $\psi\left(z,y,w\right)$ is an $L$-formula
and $\psi\left(z,a_{j},m\right)$ Kim-divides over $M$, so also $\psi\left(z,a,m\right)$
Kim-divides over $M$. Hence, for some $S\in\mathcal{U}$, $m\in M_{i}$
and $\models \psi\left(b_{i,<j},b_{i,j},m\right)$ for all $i\in S$.
Let $\langle N_{i}: i<\beta \rangle$ be an increasing enumeration of $\langle M_{i} : i\in S \rangle$.
By (1), applied to $\langle N_{i} : i<\beta \rangle$ and the formula
$\psi\left(z,a,m\right)$, we have that $\psi(z,a,m)$ Kim-divides over $M_{i}$ for some $i\in S$.
Hence also $\psi\left(z,b_{i,j},m\right)$ Kim-divides over $M_{i}$
(as $b_{i,j}\equiv_{M_{i}}a$), contradicting the fact that $\bar{b}_{i}$
is a $\ind^{K}$-Morley sequence over $M_{i}$.
\end{proof}
\subsection{Reformulating the Kim-Pillay-style characterization}
Our final application will be an easy corollary of witnessing for $\ind^{K}$-Morley sequences, allowing us to give a more satisfying formulation of the Kim-Pillay-style characterization of Kim-independence. In \cite[Proposition 5.3]{ArtemNick}, a Kim-Pillay-style criterion was given for NSOP$_{1}$, consisting of 5 axioms for an abstract independence relation on subsets of the monster model. Later, it was shown in \cite[Theorem 9.1]{24-Kaplan2017} that any independence relation $\ind$ satisfying these axioms must \emph{strengthen} $\ind^{K}$ in the sense that whenever $M \models T$ and $a \ind_{M} b$, then also $a \ind^{K}_{M} b$. In order to characterize $\ind^{K}$, it was necessary to add an additional axiom to the list called \emph{witnessing}: if $a \nind_{M} b$ is witnessed by $\varphi(x;b)$ and $(b_{i})_{i < \omega}$ is a Morley sequence over $M$ in a global $M$-invariant (or even $M$-finitely satisfiable) type extending $\text{tp}(b/M)$, then $\{\varphi(x;b_{i}) : i < \omega\}$ is inconsistent. Though useful in practice, this is somewhat unsatisfying, as it requires reference to independence notions like invariance or finite satisfiability instead of a property intrinsic to $\ind$.
\begin{thm} \label{criterion}
Assume there is an \(\text{Aut}(\mathbb{M})\)-invariant ternary relation \(\ind\) on small subsets of the monster \(\mathbb{M} \models T\) which satisfies the following properties, for an arbitrary \(M \models T\) and arbitrary tuples from $\mathbb{M}$.
\begin{enumerate}
\item Strong finite character: if \(a \nind_{M} b\), then there is a formula \(\varphi(x,b,m) \in \text{tp}(a/bM)\) such that for any \(a' \models \varphi(x,b,m)\), \(a' \nind_{M} b\).
\item Existence over models: \(M \models T\) implies \(a \ind_{M} M\) for any \(a\).
\item Monotonicity: \(aa' \ind_{M} bb'\) \(\implies\) \(a \ind_{M} b\).
\item Symmetry: \(a \ind_{M} b \iff b \ind_{M} a\).
\item The independence theorem: \(a \ind_{M} b\), \(a' \ind_{M} c\), \(b \ind_{M} c\) and $a \equiv_{M} a'$ implies there is $a''$ with $a'' \equiv_{Mb} a$, $a'' \equiv_{Mc} a'$ and $a'' \ind_{M} bc$.
\item $\ind$-Morley sequences are witnesses: if $M \models T$ and $I = (b_{i})_{i < \omega}$ is an $M$-indiscernible sequence with $b_{0} = b$ satisfying $b_{i} \ind_{M} b_{<i}$, then whenever $a \nind_{M} b$, there is $\varphi(x;m,b) \in \text{tp}(a/Mb)$ such that $\{\varphi(x;m,b_{i}) : i < \omega\}$ is inconsistent.
\end{enumerate}
Then \(T\) is NSOP\(_{1}\) and $\ind = \ind^{K}$ over models, i.e. if $M \models T$, $a \ind_{M} b$ if and only if $a \ind^{K}_{M} b$.
\end{thm}
\begin{proof}
Because $\ind$ satisfies axioms (1) through (5), it follows that $T$ is NSOP$_{1}$ and, for any $M \models T$, if $a \ind_{M} b$ then $a \ind^{K}_{M} b$, by \cite[Theorem 9.1]{24-Kaplan2017}. For the other direction, suppose $a \ind^{K}_{M} b$. Let $I = \langle b_{i} : i < \omega \rangle$ be an $M$-finitely satisfiable Morley sequence over $M$ with $b_{0} = b$. As $a \ind^{K}_{M} b$, we find $a' \equiv_{Mb} a$ so that $I$ is $Ma'$-indiscernible. By the claim in the proof of \cite[Proposition 5.3]{ArtemNick}, for any relation $\ind$ satisfying (1)--(4), $c \ind^{u}_{M} d$ implies $c \ind_{M} d$. Therefore the sequence $I$ is, in particular, an $\ind$-Morley sequence over $M$, and $a'$ realizes $\{\varphi(x;m,b_{i}) : i < \omega\}$ for every $\varphi(x;m,b) \in \text{tp}(a/Mb)$, so $a \ind_{M} b$ by (6).
\end{proof}
\begin{rem}
In any NSOP$_{1}$ theory, $\ind^{K}$ satisfies properties (1)--(6), by Fact \ref{basic kimindep facts} and Theorem \ref{witnessing theorem}, so the existence of such a relation characterizes NSOP$_{1}$ theories.
\end{rem}
\bibliographystyle{alpha}
\section{Introduction}
Metamaterials are extremely important because they hold the promise of huge advancements in optical devices and new technologies \citep{2010opme.book.....C,Zheludev582,2011NaPho...5..523S,2016NatNa..11...23J}. Motivation for them stems from Veselago's studies of media with negative dielectric coefficients and the unusual properties such media could exhibit \citep{1968SvPhU..10..509V}. Nowadays, metamaterials are an experimental reality and they are broadly understood as man-made media with subwavelength structures such that their dielectric coefficients (and hence their light properties) are controllable \citep{book-metamaterials}. Experimental approaches to metamaterials started with the tailoring of ``meta-atoms'' (for an example, see \citet{1999ITMTT..47.2075P}), leading to media with negative refractive indices \citep{2000PhRvL..84.4184S,2001Sci...292...77S,2004PhT....57f..37P}, but by now metamaterials encompass an incredible abundance of phenomena and media settings (see for instance \citep{2010opme.book.....C,Zheludev582,2011NaPho...5..523S,2016NatNa..11...23J} and references therein).
Metamaterials can also be used to analogously test several gravity models; see for instance \citep{2014Galax...2...72S,2009NatPh...5..687G,2013NaPho...7..902S} and references therein.
Through transformation optics \cite{2009PrOpt..53...69L,2010NatMa...9..387C}, recipes for metamaterials with desired light trajectories
can also be readily obtained.
For the majority of the above aspects, focus is put on linear metamaterials, naturally due to the direct relationship of the dielectric coefficients with controllable parameters.
However, nonlinear metamaterials \citep{2014RvMP...86.1093L}
are also rapidly gaining interest, especially for the unique uses they could have \citep{2014RvMP...86.1093L,2012PhRvA..86c3816R,2012ApPhL.101e1103R}.
A special class of nonlinear media on which we will focus in this work is magnetoelectric materials \citep{2005JPhD...38R.123F}.
In such media, permittivity (polarization) can depend on the magnetic field and permeability (magnetization) can have an electric field dependence.
The advantage of metamaterials is that nonlinear magnetoelectric effects could be created and controlled in them. Such effects are already known to occur, for instance, in periodic, subwavelength, polarizable structures such as split ring resonators \citep{2012ApPhL.101e1103R}. It has also been recently shown that nonlinear magnetoelectric metamaterials could lead to trirefringence \citep{2012PhRvA..86a3801D}, associated with three different solutions to the Fresnel equation \citep{landau1984electrodynamics}. In this work we concentrate on some facets of this phenomenon, including its theory and experimental proposals for it.
Known not to exist in linear media \citep{1969AcOpt..16..133W}, trirefringence would thus be an intrinsically nonlinear effect. Previous analysis has suggested that it might appear in an anisotropic medium presenting a negative component of its permittivity tensor and a permeability that depends on the electric field (electrically induced magnetization) \cite{2012PhRvA..86a3801D}. Symmetries of Maxwell's equations for material media also suggest that trirefringence could take place in media with constant anisotropic permeability tensors and permittivity tensors depending on magnetic fields (magnetically induced polarization). The interest associated with trirefringence is that media exhibiting it could support three linearly independent light polarizations. For each polarization there would be a specific refractive index (phase velocity). Therefore, upon refraction of an incident light ray on a trirefringent medium, the ray should split into three rays, each with its specific ray velocity and polarization.
Interestingly, trirefringence might also occur in nonlinear theories of the electromagnetism related to large QED and QCD field regimes \citep{2013PhRvD..88f5015D}. In both cases, though, it is important to stress that trirefringence is an effective phenomenon. The same happens in metamaterials, given that the dielectric coefficients there are clearly effective and just valid for a range of frequencies.
Applications for trirefringence could be thought of regarding the extra degree of freedom brought by the third linearly independent polarization it presents. For instance, if information is stored in polarization modes, then trirefringent media would be $30\%$ more efficient than birefringent media. Another possible application could be related to its intrinsic light propagation asymmetry \citep{2012PhRvA..86a3801D}.
The reason is that trirefringent media lead to three waves propagating in a given range of wave directions (two extraordinary and one ordinary), while the opposite range supports just one (the ordinary wave) (see Fig. 1 of Ref. \citep{2012PhRvA..86a3801D}).
However, our main motivation in this work is conceptual: we want to argue there might already be feasible ways of having trirefringence with known metamaterials, and hence the possibility of having three linearly independent polarizations to electromagnetic waves in the realm of Maxwell's electrodynamics. In this sense, our analysis is a natural extension of the first ideas presented in \citet{2012PhRvA..86a3801D}.
We structure our work as follows. In Sec. \ref{wave-propagation} we elaborate on light propagation in nonlinear media in the limit of geometric optics.
Section \ref{trirefringence} is particularly devoted to trirefringence analysis and aspects of the magnetoelectric media which could support it.
Estimates of the effect, as well as proposals of possible physical media where trirefringence could be found, are given in Sec. \ref{toy-model-trirefringence}. In particular, general estimates and a graphic visualization of the effect are presented in Sec. \ref{general}, and easy-to-build proposals for trirefringent systems based on layered media, along with estimates of the strength and resolution of the effect, are presented in Sec. \ref{estimates}.
Finally, in Sec. \ref{discussion}, we discuss and summarize the main points raised in this work.
An alternative method to derive the Fresnel equation from Maxwell's equations and constitutive relations is presented in an appendix.
Unless otherwise specified, we work in SI units.
Throughout the text Latin indices $i, j, k . . .$ run from 1 to 3 (the three spatial directions), and we use the Einstein convention that repeated indices in a monomial indicate summation.
\section{Wave propagation in material media}
\label{wave-propagation}
We start with Maxwell's equations in an optical medium at rest in the absence of free charges and currents \citep{2012PhRvA..86a3801D},
\begin{eqnarray}
\partial_{i}D_{i}&=&0, \hspace{5mm} \epsilon_{ijk}\partial_{i}E_{j}=-\partial_{t}B_{k},
\label{1}
\\
\partial_{i}B_{i}&=&0, \hspace{5mm} \epsilon_{ijk}\partial_{i}H_{j}=\partial_{t}D_{k},
\label{2}
\end{eqnarray}
together with the general constitutive relations between the fundamental fields $\vec{E}$ and $\vec{B}$ and the induced ones $\vec{D}$ and $\vec{H}$,
\begin{eqnarray}
D_{i}&=&\varepsilon_{ij}(\vec{E},\vec{B})E_{j}+\tilde{\varepsilon}_{ij}(\vec{E},\vec{B})B_{j},
\label{constitutive1}
\\
H_{i}&=&\mu_{ij}^{\mbox{\tiny $(-1)$}}(\vec{E},\vec{B})B_{j}+\tilde{\mu}_{ij}^{\mbox{\tiny $(-1)$}}(\vec{E},\vec{B})E_{j}.
\label{constitutive2}
\end{eqnarray}
Here the coefficients $\varepsilon_{ij}$, $\tilde{\varepsilon}_{ij}$, $\mu_{ij}^{\mbox{\tiny $(-1)$}}$, and $\tilde{\mu}_{ij}^{\mbox{\tiny $(-1)$}}$ describe the optical properties of the material.
Our interest lies in the study of light rays in a nonlinear magnetoelectric material. Hence, we may restrict our investigation to the domain of geometrical optics.
We use the method of field disturbance \cite{hadamard1903} and define \cite{delorenci2004} $\Sigma$, given by $\phi(t,\vec{x})=0$, to be a smooth (differentiable of class $C^{n},n > 2$) hypersurface. The function $\phi(t,\vec{x})$ is understood to be a real valued smooth function and regular in a neighbourhood $U$ of $\Sigma$. The spacetime is divided by $\Sigma$ into two disjoint regions: $U^{-}$, for which $\phi(t,\vec{x})<0$, and $U^{+}$, corresponding to $\phi(t,\vec{x})>0$.
The jump of an arbitrary function $f(t,\vec{x})$ (assumed to be smooth in the interior of $U^{\pm}$) across the borderless surface $\Sigma$ is calculated by
\begin{equation}
[f(t,\vec{x})]_{\Sigma}\doteq \underset{{P^{\pm}}\rightarrow P}{\lim} [f(P^{+})-f(P^{-})],
\end{equation}
with $P^{+}$,$P^{-}$ and $P$ belonging to $U^{+}$,$U^{-}$ and $\Sigma$, respectively. The electromagnetic fields are supposed to be smooth functions in the interior of $U^{+}$ and $U^{-}$ and continuous across $\Sigma$ ($\phi$ is now taken as the eikonal of the wave). However, they have a discontinuity in their first derivatives such that \cite{hadamard1903}
\begin{equation}
[\partial_{t}E_{i}]_{\Sigma}=\omega e_{i}, \hspace{5mm} [\partial_{t}B_{i}]_{\Sigma}=\omega b_{i},
\label{5}
\end{equation}
\begin{equation}
[\partial_{i}E_{j}]_{\Sigma}=-q_{i} e_{j}, \hspace{5mm} [\partial_{i}B_{j}]_{\Sigma}=-q_{i} b_{j},
\label{6}
\end{equation}
where $e_{i}$ and $b_{i}$ are related to the derivatives of the fields on $\Sigma$ and correspond to the components of the polarization of the propagating waves. The quantities $\omega$ and $q_{i}$ are the angular frequency and the $i$-th component of the wave-vector.
Thus, substituting Eqs. (\ref{5}) and (\ref{6}) into the Maxwell equations, together with the constitutive relations, we obtain the eigenvalue equation,
\begin{equation}
Z_{ij}e_{j}=0,
\label{ev}
\end{equation}
where $Z_{ij}$ denotes the elements of the Fresnel matrix, given by (see the appendix for an alternative way of deriving these results)
\begin{equation}
Z_{ij}=C_{ij}v^{2}+\left(\epsilon_{ikl}\tilde{H}_{lj}+\epsilon_{nkj}\tilde{C}_{in} \right)\hat{q}_{k}v + \epsilon_{ikl}\epsilon_{npj}H_{ln}\hat{q}_{k}\hat{q}_{p},
\label{Zij}
\end{equation}
with the definitions
\begin{eqnarray}
C_{ij}&=&\varepsilon_{ij}+\frac{\partial\varepsilon_{in}}{\partial E_{j}}E_{n}+\frac{\partial\tilde{\varepsilon}_{in}}{\partial E_{j}}B_{n},
\label{C}
\\
\tilde{C}_{ij}&=&\tilde{\varepsilon}_{ij}+\frac{\partial\tilde{\varepsilon}_{in}}{\partial B_{j}}B_{n}+\frac{\partial\varepsilon_{in}}{\partial B_{j}}E_{n},
\label{ctilde}
\\
H_{ij}&=&\mu^{\mbox{\tiny $(-1)$}} _{ij}+\frac{\partial\mu^{\mbox{\tiny $(-1)$}} _{in}}{\partial B_{j}}B_{n}+\frac{\partial\tilde{\mu}^{\mbox{\tiny $(-1)$}} _{in}}{\partial B_{j}}E_{n},
\label{h}
\\
\tilde{H}_{ij}&=&\tilde{\mu}^{\mbox{\tiny $(-1)$}} _{ij}+\frac{\partial\tilde{\mu}^{\mbox{\tiny $(-1)$}} _{in}}{\partial E_{j}}E_{n}+\frac{\partial\mu^{\mbox{\tiny $(-1)$}} _{in}}{\partial E_{j}}B_{n}.
\label{htilde}
\end{eqnarray}
Furthermore, we have defined the phase velocity as $v=\omega/q$, the unit wave vector as $\hat{q}=\vec{q}/q$, and its $i$-th component as $\hat{q}_{i}$.
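As a sanity check of Eq. (\ref{Zij}), the Fresnel matrix can be assembled numerically by direct tensor contraction. The sketch below (Python/NumPy; the coefficient values are illustrative, not those of any specific medium) verifies that, for vacuum coefficients, transverse polarizations satisfy $Z_{ij}e_{j}=0$ at $v=c$:

```python
import numpy as np

# Levi-Civita symbol in three dimensions
LC = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    LC[i, j, k], LC[i, k, j] = 1.0, -1.0

def fresnel_matrix(C, Ct, H, Ht, qhat, v):
    """Assemble Z_ij of Eq. (Zij) by direct tensor contraction."""
    t2 = (np.einsum('ikl,lj,k->ij', LC, Ht, qhat)
          + np.einsum('nkj,in,k->ij', LC, Ct, qhat)) * v
    t3 = np.einsum('ikl,npj,ln,k,p->ij', LC, LC, H, qhat, qhat)
    return C * v**2 + t2 + t3

# Vacuum check: C = eps0*I, H = I/mu0, Ct = Ht = 0; at v = c the matrix
# reduces to (1/mu0) qhat qhat^T, which annihilates transverse polarizations.
eps0, mu0 = 8.8541878e-12, 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)
qhat = np.array([0.0, 0.0, 1.0])
Z = fresnel_matrix(eps0 * np.eye(3), np.zeros((3, 3)),
                   np.eye(3) / mu0, np.zeros((3, 3)), qhat, c)
print(np.allclose(Z @ np.array([1.0, 0.0, 0.0]), 0.0, atol=1e-3))  # → True
```

The same routine accepts arbitrary (anisotropic, magnetoelectric) coefficient matrices, so it can be reused for the specific medium considered in the next section.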
\section{Trirefringence in nonlinear magnetoelectric media}
\label{trirefringence}
Magnetoelectric phenomena in material media are related to the induction of magnetization or polarization (or both) by means of applied electric or magnetic fields, respectively. In order to describe these phenomena, we start by expanding the free energy of the material in terms of the applied fields as follows \citep{2005JPhD...38R.123F},
\begin{eqnarray}
F(\vec{E},\vec{H})=&F_{0}& - P^{S}_{i}E_{i} - M^{S}_{i}H_{i}
\nonumber
\\
&-& \frac{1}{2}\varepsilon_{0}\chi_{ij}E_{i}E_{j} -\frac{1}{2}\mu_{0}\chi^{\mbox{\tiny $(m)$}} _{ij}H_{i}H_{j} - \alpha_{ij}E_{i}H_{j}
\nonumber
\\
&-& \frac{1}{2}\beta_{ijk}E_{i}H_{j}H_{k}-\frac{1}{2}\gamma_{ijk}H_{i}E_{j}E_{k} + ...\,,
\end{eqnarray}
with $F_{0}$ the free energy of the material in the absence of applied fields; the remaining coefficients in this expansion will be addressed in the discussion that follows.
Differentiation of $F$ with respect to the $E_i$ and $H_i$ fields leads to the polarization and magnetization vectors,
\begin{align}
P_{i}(\vec{E},\vec{H})=&-\frac{\partial F}{\partial E_{i}} = P^{S}_{i}+\varepsilon_{0}\chi_{ij}E_{j}+\alpha_{ij}H_{j}
\nonumber
\\
&+ \frac{1}{2}\beta_{ijk}H_{j}H_{k}+\gamma_{j(ik)}H_{j}E_{k}+...
\label{Pi}
\\
M_{i}(\vec{E},\vec{H})=&-\frac{\partial F}{\partial H_{i}} = M^{S}_{i}+\mu_{0}\chi^{\mbox{\tiny $(m)$}} _{ij}H_{j}+\alpha_{ji}E_{j}
\nonumber
\\
&+ \beta_{j(ik)}E_{j}H_{k}+\frac{1}{2}\gamma_{ijk}E_{j}E_{k}+...,
\label{Mi}
\end{align}
where $P^{S}_{i}$ and $M^{S}_{i}$ represent the components of the spontaneous polarization and magnetization, respectively, whose contribution will be suppressed in our discussion. We use the notation that parentheses enclosing a pair of indices denote symmetrization, as for instance $\beta_{j(ik)} = (\beta_{jik}+\beta_{jki})/2$.
Coefficients $\alpha_{ij}$ and $\gamma_{ijk}$ are responsible for linear and nonlinear electric-field-induced effects, respectively. However,
we specialize our discussion to the case of nonlinear magnetic-field-induced effects parametrized by the coefficients $\beta_{ijk}$. Hence,
\begin{eqnarray}
P_{i}&=&\varepsilon_{0}\chi_{ij}E_{j}+\frac{1}{2}\beta_{ijk} H_{j}H_{k},
\label{Pi2}
\\
M_{i}&=&\mu_{0}\chi^{\mbox{\tiny $(m)$}} _{ij}H_{j}+\beta_{j(ik)}E_{j}H_{k}.
\label{Mi2}
\end{eqnarray}
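As a concrete illustration of Eqs. (\ref{Pi2}) and (\ref{Mi2}), the sketch below evaluates $\vec P$ and $\vec M$ for assumed coefficient values; the susceptibilities, the two nonzero $\beta$ components, and the field strengths are placeholders chosen only for illustration:

```python
import numpy as np

eps0, mu0 = 8.8541878e-12, 4e-7 * np.pi

# Illustrative (assumed) material coefficients
chi_e = np.diag([2.0, 1.5, 1.5])        # linear electric susceptibility (diagonal)
chi_m = 1e-3                            # isotropic magnetic susceptibility
beta = np.zeros((3, 3, 3))
beta[0, 1, 1] = 1e-16                   # beta_xyy  [s/A]
beta[1, 1, 1] = 1e-16                   # beta_yyy  [s/A]
beta_sym = 0.5 * (beta + beta.transpose(0, 2, 1))   # beta_{j(ik)}

def P(E, H):
    """Polarization, Eq. (Pi2): P_i = eps0 chi_ij E_j + (1/2) beta_ijk H_j H_k."""
    return eps0 * chi_e @ E + 0.5 * np.einsum('ijk,j,k->i', beta, H, H)

def M(E, H):
    """Magnetization, Eq. (Mi2): M_i = mu0 chi_m H_i + beta_{j(ik)} E_j H_k."""
    return mu0 * chi_m * H + np.einsum('jik,j,k->i', beta_sym, E, H)

E = np.array([1e5, 0.0, 0.0])   # external E along x
H = np.array([0.0, 1e7, 0.0])   # external H along y
print(P(E, H), M(E, H))
```

With these external-field directions, only $M_y$ acquires the magnetoelectric term $\beta_{xyy}EH$ and only $P_y$ acquires the term $\beta_{yyy}H^2/2$, in agreement with the configuration analyzed below.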
Furthermore, we assume that the linear electric susceptibility $\chi_{ij}$ is described by a real diagonal matrix, and that the linear magnetic susceptibility is isotropic, $\chi^{\mbox{\tiny $(m)$}} _{ij}=\chi^{\mbox{\tiny $(m)$}} \delta_{ij}$. Losses are neglected. In the regime of geometrical optics the wave fields are considered negligible compared with the external fields, which we set as $\vec{E}_{ext}=E\hat{x}$ and $\vec{B}_{ext}=B\hat{y}$.
Now, as $B_{i}=M_{i}+\mu_{0}H_{i}$ and $D_{i}=P_{i}+\varepsilon_{0}E_{i}$, we obtain that
\begin{eqnarray}
B_{i}&=&\mu_{0}(1+\chi^{\mbox{\tiny $(m)$}} )H_{i}+\beta_{j(ik)}E_{j}H_{k},
\label{Bi}
\\
D_{i}&=&\varepsilon_{0}(\delta_{ij}+\chi_{ij})E_{j}+\frac{1}{2}\beta_{ijk} H_{j}H_{k}.
\label{Di}
\end{eqnarray}
We restrict our analysis to the case where $\beta_{ijk} \ne 0$ only if $j=k$, which implies that $\vec{H}$ is parallel to $\vec{B}$. Additionally, assuming that $\beta_{zyy}$ is negligible and that $\varepsilon_{0}\chi_{xx}E \gg \beta_{xyy}H^{2}$, we obtain $\vec P = (\varepsilon_{0}\chi_{xx}E,\, \beta_{yyy}H^{2}/2,\, 0)$ and $\vec M = (0,\, \mu_{0}\chi^{\mbox{\tiny $(m)$}} H+\beta_{xyy}EH,\,0)$. Substituting these results back into the $\vec D$ and $\vec B$ fields and comparing with the general expressions for the constitutive relations, we find that the optical system under consideration is characterized by the following electromagnetic coefficients:
\begin{align}
&\varepsilon_{ij} = \varepsilon_{0}(\delta_{ij}+\chi_{ij}),
\label{eij}
\\
&\tilde\varepsilon_{ij} = \frac{\beta_{yyy}B}{2\mu^{2}} \delta_{ij},
\label{etij}
\\
&\mu_{ij}^{\mbox{\tiny $(-1)$}} =\frac{1}{\mu}\delta_{ij},
\label{mij}
\\
&\tilde{\mu}_{ij}^{\mbox{\tiny $(-1)$}} =0,
\label{mtij}
\end{align}
where we have defined the nonlinear magnetic permeability
\begin{equation}
\mu \doteq \mu_{0}(1+\chi^{\mbox{\tiny $(m)$}} )+\beta_{xyy}E.
\label{m}
\end{equation}
The Fresnel matrix from Eq.~(\ref{Zij}) is now given by
\begin{align}
Z_{ij}=&\left(\varepsilon_{ij}-\frac{\mu'}{\mu^{3}}\delta_{iy}\beta_{yyy}B^{2}E_{j}\right)v^{2}
\nonumber
\\ &+ \left[\frac{\delta_{iy}}{\mu^{2}}\beta_{yyy}(\vec{B}\times\hat{q})_{j}+\frac{\mu'}{\mu^{2}}(\vec{B}\times\hat{q})_{i}E_{j}\right]v
\nonumber
\\
&- \frac{1}{\mu}\left(\delta_{ij}-\hat{q}_{i}\hat{q}_{j}\right),
\label{zij}
\end{align}
where
\begin{equation}
\mu'\doteq\frac{1}{E}\frac{\partial\mu}{\partial E}=\frac{\beta_{xyy}}{E}.
\label{m'}
\end{equation}
We will be particularly interested in materials presenting a natural optical axis, and set
\begin{equation}
\varepsilon_{ij}={\rm diag}(\varepsilon_{\parallel},\varepsilon_{\perp},\varepsilon_{\perp}).
\label{eps}
\end{equation}
Nontrivial solutions of the eigenvalue problem stated by Eq. (\ref{ev}) can be found by means of the generalized Fresnel equation $\det\vert Z_{ij}\vert=0$, where $\vert Z_{ij}\vert$ symbolizes the matrix whose elements are given by $Z_{ij}$. This equation can be cast as
\begin{equation}
\det\vert Z_{ij}\vert=-\frac{1}{6}(Z_{1})^{3}+\frac{1}{2}Z_{1}Z_{2}-\frac{1}{3}Z_{3}=0,
\label{fresnelleq}
\end{equation}
where we defined the traces $Z_{1}=Z_{ii}$, $Z_{2}=Z_{ij}Z_{ji}$, and $Z_{3}=Z_{ij}Z_{jk}Z_{ki}$.
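This trace combination can be checked against the Cayley--Hamilton identity for $3\times3$ matrices, $\det Z = \tfrac{1}{6}Z_1^3 - \tfrac{1}{2}Z_1 Z_2 + \tfrac{1}{3}Z_3$; the combination above equals minus this determinant, an overall sign that is immaterial since the Fresnel condition sets the expression to zero. A quick numerical check on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 3))

Z1 = np.trace(Z)            # Z_ii
Z2 = np.trace(Z @ Z)        # Z_ij Z_ji
Z3 = np.trace(Z @ Z @ Z)    # Z_ij Z_jk Z_ki

# Combination appearing in Eq. (fresnelleq); it equals -det(Z) for any 3x3 matrix
expr = -(Z1**3) / 6.0 + Z1 * Z2 / 2.0 - Z3 / 3.0
print(np.isclose(expr, -np.linalg.det(Z)))  # → True
```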
Now, calculating the above traces and substituting them into Eq. (\ref{fresnelleq}), we obtain the following quartic equation for the phase velocity of the propagating wave,
\begin{equation}
av^{4}+bv^{3}+cv^{2}+dv+e=0,
\label{quartic}
\end{equation}
where
\begin{align}
a=&6\varepsilon_{\parallel}\varepsilon^{2}_{\perp},
\label{a}
\\
b=&6\frac{\mu'}{\mu^{2}}EB\varepsilon^{2}_{\perp}\hat{q}_{z},
\label{b}
\\
c=&-\frac{6}{\mu}\varepsilon_{\perp}\bigg[\varepsilon_{\parallel}(\hat{q}^{2}_{x}+1) +\varepsilon_{\perp}(\hat{q}^{2}_{y}+\hat{q}^{2}_{z})
\nonumber\\
&- \frac{\mu' \beta_{yyy}}{\mu^{3}}EB^{2}\hat{q}_{x}\hat{q}_{y}\bigg],
\label{c}
\\
d=&-\frac{6}{\mu^3}B\hat{q}_{z}\left[\mu' E\varepsilon_{\perp}+\beta_{yyy}\hat{q}_{x}\hat{q}_{y}(\varepsilon_{\perp}-\varepsilon_{\parallel})\right],
\label{d}
\\
e=&\frac{6}{\mu^{2}}\left[\varepsilon_{\parallel}\hat{q}^{2}_{x}+\varepsilon_{\perp}(\hat{q}^{2}_{y}+\hat{q}^{2}_{z})-\frac{\mu'\beta_{yyy}}{\mu^{3}}EB^{2}\hat{q}_{x}\hat{q}^{2}_{y}\right] .
\label{e}
\end{align}
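As a consistency check, the quartic can be solved numerically and compared with the closed-form ordinary and extraordinary solutions quoted below for propagation in the $xz$-plane ($\hat q_y=0$). The parameter values in this sketch are illustrative and deliberately non-trirefringent ($\varepsilon_{\parallel}>0$), so that all four roots are real and the factorization is easy to verify:

```python
import numpy as np

eps0, mu0 = 8.8541878e-12, 4e-7 * np.pi

# Illustrative parameters (eps_par > 0 keeps all four roots real)
eps_par, eps_perp = 1.5 * eps0, 2.0 * eps0
mu, beta, B = mu0, 1e-16, 10.0          # note mu' E = beta_xyy, by Eq. (m')
theta = 0.5
qx, qz = np.sin(theta), np.cos(theta)   # propagation in the xz-plane, q_y = 0

# Quartic coefficients, Eqs. (a)-(e) evaluated at q_y = 0
a = 6 * eps_par * eps_perp**2
b = 6 * beta * B * eps_perp**2 * qz / mu**2
c = -(6 / mu) * eps_perp * (eps_par * (qx**2 + 1) + eps_perp * qz**2)
d = -(6 / mu**3) * B * qz * beta * eps_perp
e = (6 / mu**2) * (eps_par * qx**2 + eps_perp * qz**2)
roots = np.sort(np.real(np.roots([a, b, c, d, e])))

# Closed-form ordinary and extraordinary solutions
v_o = 1.0 / np.sqrt(mu * eps_perp)
sigma = beta * B / (2 * mu**2 * eps_par)
disc = (sigma * qz)**2 + ((eps_par / eps_perp - 1) * qx**2 + 1) / (mu * eps_par)
v_ep, v_em = -sigma * qz + np.sqrt(disc), -sigma * qz - np.sqrt(disc)
expected = np.sort(np.array([v_o, -v_o, v_ep, v_em]))
print(np.allclose(roots, expected, rtol=1e-6))  # → True
```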
This result generalizes the study presented in Ref. \cite{2012PhRvA..86a3801D}. Particularly, the term containing the coefficient $\beta_{yyy}$ is linked to the coefficient $\tilde\epsilon_{ij}$ in the constitutive relations, and makes the whole system consistent with the free energy approach used here to derive polarization and magnetization vectors that characterize the medium.
Now, if we set the propagation in the $xz$-plane, i.e., $\hat{q}= \sin\theta\, \hat{x}+\cos\theta\, \hat{z}$, the phase velocity solutions of Eq. (\ref{quartic}) reduce to the ones studied in Ref. \cite{2012PhRvA..86a3801D}, and are given by
\begin{equation}
v_{o}=\pm\frac{1}{\sqrt{\mu\varepsilon_{\perp}}},
\end{equation}
\begin{equation}
v^{\pm}_{e}=-\sigma\hat{q}_{z}\pm\sqrt{(\sigma\hat{q}_{z})^{2}+\frac{1}{\mu\varepsilon_{\parallel}}\left[\left(\frac{\varepsilon_{\parallel}}{\varepsilon_{\perp}}-1\right)\hat{q}^{2}_x+1\right]},
\end{equation}
where we defined the quantity $\sigma\doteq {\mu'EB}/{2\mu^{2}\varepsilon_{\parallel}}$, which has dimensions of velocity.
Here, the solution $v_{o}$ does not depend on the direction of the wave propagation and is referred to as the ordinary wave, whereas $v^{\pm}_{e}$ depend on the direction of the wave propagation, and are called extraordinary waves.
Now, the polarization modes $\vec{e}$ corresponding to the above wave solutions are given by the eigenvectors of Eq. (\ref{ev}). Substituting $Z_{ij}$ from Eq. (\ref{zij}) into the eigenvalue equation, we get the following equations relating the components of $\vec{e}$,
\begin{widetext}
\begin{align}
&\left[\frac{\varepsilon_{\parallel}}{\varepsilon_{\perp}}v^{2} + 2\mu\varepsilon_{\parallel}\sigma v_{o}^{2}\hat{q}^{2}_{z} v -\left(\frac{\varepsilon_{\parallel}}{\varepsilon_{\perp}}\hat{q}^{2}_{x}+\hat{q}^{2}_{z}\right)v^{2}_{o}\right]e_{x}=0,
\label{first}
\\
&\left[-2\mu\varepsilon_{\parallel}\sigma v^{2}+\left(\hat{q}_{z}+\frac{\varepsilon_{\parallel}\hat{q}^{2}_{x}}{\varepsilon_{\perp}\hat{q}_{z}}\right)v\right]\frac{\beta_{yyy}B}{\mu}e_{x}+\left(\frac{v^{2}}{v^{2}_{o}}-1\right)e_{y}=0,
\label{second}
\\
&\left(\mu\varepsilon_{\perp}v^{2}-\hat{q}^{2}_{x}\right)e_{z} - (2\mu\varepsilon_{\parallel}\sigma v-\hat{q}_{z})\hat{q}_{x}e_{x}=0.
\label{third}
\end{align}
Note that the coefficient of $e_x$ in Eq. (\ref{first}) is zero only when $v=v^{\pm}_{e}$. Therefore, straightforward calculations lead us to the polarization vectors $\vec{e}_{o}$ and $\vec{e}^{\;\pm}_{e}$ corresponding to the ordinary and extraordinary waves, respectively,
\begin{align}
\vec{e}_{o}=&\hat{y},
\label{pe0}
\\
\vec{e}^{\;\pm}_{e}=&N^{\pm}\bigg\lbrace\hat{x}-\frac{\varepsilon_{\parallel}\beta_{yyy}Bv^{\pm}_{e}}{\mu\varepsilon_{\perp}}\bigg[1
+\frac{1}{(\varepsilon_{\perp}/\varepsilon_{\parallel}-1)\hat{q}^{2}_{z}-2\mu\varepsilon_{\perp}\sigma\hat{q}_{z}v^{\pm}_{e}}\bigg]\hat{y}-\frac{\varepsilon_{\parallel}\hat{q}_{x}}{\varepsilon_{\perp}\hat{q}_{z}}\hat{z}\bigg\rbrace,
\label{peext}
\end{align}
where $N^{\pm}$ holds for the normalization constants related to the extraordinary modes.
\end{widetext}
As we see, there will be up to three distinct polarization vectors, depending on the phase velocity solutions. Along a given direction there will be only one ordinary-wave solution, as the two ordinary roots always have opposite signs. On the other hand, two distinct solutions for the extraordinary waves may exist in a same direction, i.e., with the same sign. The condition behind this possibility can be expressed as follows \cite{2012PhRvA..86a3801D},
\begin{equation}
-1<\frac{1}{\mu\varepsilon_{\parallel}\sigma^{2}}\left(\frac{\varepsilon_{\parallel}}{\varepsilon_{\perp}}\tan^{2}\theta+1\right)<0.
\label{tri}
\end{equation}
However, in order to guarantee the existence of the ordinary wave solution we should set $\mu\varepsilon_{\perp} > 0$, which implies that the above condition requires that $\varepsilon_{\parallel}=-\epsilon_{\parallel} < 0$. Hence, the range of $\theta$ for which trirefringence occurs is such that
$1 >(\epsilon_{\parallel}/\varepsilon_{\perp}) \tan^{2}\theta > 1-\epsilon_{\parallel}\mu\sigma^{2}$.
\section{Estimates for trirefringence}
\label{toy-model-trirefringence}
\subsection{General estimates and pictorial analysis of the effect}
\label{general}
As we have seen, in the particular scenario here investigated, trirefringence occurrence is constrained by the condition set by Eq. (\ref{tri}), which requires a material presenting one negative component of the permittivity tensor. This behavior can naturally be found in plasmonic materials. Furthermore, artificial materials constructed with wire structures are known to follow this behavior for an adjustable range of frequencies. Nevertheless, an estimate of the effect can still be presented in terms of the magnitude of certain coefficients found in natural materials. It is expected that such results can be similarly produced in a tailored material.
Measurements of magnetoelectric polarization in some crystal systems \cite{liang2011,begunov2013,kharkovskiy2016} have shown that values of the $\beta$ coefficient as large as $\beta_{xyy}\approx 10^{-16}{\rm sA^{-1}}$ can be found, as for instance in ${\rm HoAl}_3({\rm BO}_3)_4$, for which magnetic fields up to about 10\,T were applied.
Thereby, the angular range for which trirefringence may occur can be presented as
\begin{eqnarray}
&1-\frac{1.42\times10^{-2}}{(\epsilon_\parallel/\varepsilon_0)(\mu/\mu_0)^3}\left(\frac{\beta_{xyy}}{10^{-16} \rm{sA^{-1}}}\right)^2\left(\frac{B}{10 \rm{T}}\right)^2
< \frac{\epsilon_\parallel}{\varepsilon_\perp}\tan^2{\theta} < 1. \notag \\
& \label{range}
\end{eqnarray}
In the above result we can approximate $\mu/\mu_0 \approx 1+ \chi^{\mbox{\tiny $(m)$}}$, as higher order terms in $\beta$ can be neglected.
The permittivity coefficients can be expressed in terms of linear susceptibilities as $\varepsilon_{\perp}=\varepsilon_0(1+\chi_{22})$ and $\epsilon_{\parallel}=\varepsilon_0(\vert\chi_{11}\vert-1)$.
If we assume typical values for the permittivity coefficients as $\epsilon_{\parallel}/\varepsilon_0 = 1.5$ and $\varepsilon_{\perp}/\varepsilon_0 = 2$, we obtain $1>0.75\tan^2{\theta} >1-9.4857\times10^{-3}(\mu/\mu_0)^{-3}\left(\beta_{xyy}/10^{-16} \rm{sA^{-1}}\right)^2\left(B/10 \rm{T}\right)^2$. Depending on the magnetic properties of the material the above range can be large enough to be experimentally measured.
Just to have a numerical result, assuming the above results and approximating $\mu/\mu_0 \approx 1$, we get $0.85471 < \theta(\mbox{rad}) < 0.85707$, i.e., there will be a window of about $0.0024\, \mbox{rad}$ ($\approx 0.14$ degrees) where the effect can be found.
For instance, if we set the direction of propagation at $\theta = 0.8570\, \mbox{rad}$, the ordinary and extraordinary velocities turn out to be $v_{o} \approx 0.707c$, $v_e^+ \approx 8.04\times10^{-4} c$ and $v_e^- \approx 0.103 c$, each presenting a distinct polarization vector, as given by Eqs. (\ref{pe0}) and (\ref{peext}).
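These numbers can be reproduced with a short script using the stated parameters, $\mu\approx\mu_0$ and $\varepsilon_{\parallel}=-1.5\varepsilon_0$; which extraordinary root is labeled $v_e^+$ or $v_e^-$ depends on the sign convention adopted for $\sigma$, so the sketch simply reports the slow and fast roots:

```python
import numpy as np

eps0, mu0 = 8.8541878e-12, 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)

# Parameters from the text: eps_par = -1.5 eps0 (note the sign), eps_perp = 2 eps0,
# beta_xyy = 1e-16 s/A, B = 10 T, mu ~ mu0
eps_par, eps_perp = -1.5 * eps0, 2.0 * eps0
mu, beta, B = mu0, 1e-16, 10.0

# Angular window: 1 > (|eps_par|/eps_perp) tan^2(theta) > 1 - delta,
# with delta = |eps_par| mu sigma^2 = beta^2 B^2 / (4 mu^3 |eps_par|)
delta = beta**2 * B**2 / (4 * mu**3 * abs(eps_par))
ratio = abs(eps_par) / eps_perp
th_min = np.arctan(np.sqrt((1 - delta) / ratio))
th_max = np.arctan(np.sqrt(1.0 / ratio))
print(th_min, th_max)                    # ~0.85471 and ~0.85707 rad

# Phase velocities at theta = 0.8570 rad
theta = 0.8570
qx, qz = np.sin(theta), np.cos(theta)
v_o = 1.0 / np.sqrt(mu * eps_perp)
sigma = beta * B / (2 * mu**2 * eps_par)
disc = (sigma * qz)**2 + ((eps_par / eps_perp - 1) * qx**2 + 1) / (mu * eps_par)
v_fast = -sigma * qz + np.sqrt(disc)
v_slow = -sigma * qz - np.sqrt(disc)
print(v_o / c, v_slow / c, v_fast / c)   # ~0.707, ~8.04e-4, ~0.103
```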
\begin{figure}[h]
\includegraphics[scale=0.54]{fig1}
\caption{Normal surfaces \cite{landau,born} of the nonlinear magnetoelectric medium characterized by Eqs. (\ref{m}) and (\ref{eps}).
Here trirefringence phenomenon occurs in the shaded angular sectors. The solution for the phase velocity of the ordinary wave is depicted by the circular line (labeled by $v_o$), and the extraordinary waves are represented by solid and dashed curves (labeled by $v_e^+$ and $v_e^-$).}
\label{fig1}
\end{figure}
In order to better visualize the effect, let us assume that a system with $\beta = 10^{-15}\,{\rm sA^{-1}}$ can be found (or tailored), and assume an applied magnetic field of 7\,T. In this case, keeping all the other assumptions, the angular opening for which the effect occurs increases to about $0.1556\, \mbox{rad}$ ($\approx 8.92$ degrees), within $0.70144 < \pm\theta(\mbox{rad}) < 0.85707$, as depicted in Fig. \ref{fig1}.
Note that the trirefringent region is symmetric with respect to the $Z$ direction. This is a consequence of the fact that $v_e^\pm(-\theta) = - v_e^\mp(\theta)$, i.e., the roles of the extraordinary solutions are exchanged when $\theta \rightarrow -\theta$.
If we concentrate only on the first quadrant of Fig.~\ref{fig1}, and choose a direction of propagation inside the trirefringent window, we clearly see that in such direction there will be three possible solutions -- one ordinary wave solution and two extraordinary ones. This aspect is explored in Fig.~\ref{fig2}, where we have selected the direction $\theta = 0.730\, \mbox{rad}$ ($\approx 41.8^{\circ}$).
\begin{figure}[h]
\includegraphics[scale=0.51]{fig2}
\caption{Here the dotted straight line represents a specific direction of propagation inside the trirefringence region. It can be clearly seen that in this direction there exist three distinct solutions for wave propagation, which are given by the intersection points between the straight line and the normal surfaces.}
\label{fig2}
\end{figure}
Notice that the dotted straight line meets the normal surfaces in three distinct points, corresponding to the three distinct solutions of wave propagation in that direction.
Another aspect unveiled by Fig.~\ref{fig2} is the existence of wave solutions for one of the extraordinary rays ($v_e^-$ in the plot) presenting a phase velocity that can be arbitrarily close to zero. As we see, $v_e^-$ goes to zero as we approach the maximum angle for which trirefringence occurs. This issue is further addressed in the final remarks.
\subsection{Estimates based on a possible multilayered system}
\label{estimates}
Now we elaborate on a specific proposal for a trirefringent medium. From the above estimate one learns that, ordinarily, trirefringence would be limited to small angular regions because the expected $\beta_{xyy}$ values are small. However, Eq. \eqref{range} already suggests a way to increase the angular aperture in which trirefringence takes place: one should simply choose a small enough $\epsilon_{\parallel}$. Fortunately, this can be achieved with the effective dielectric coefficients of layered media, which are relatively easy to tailor and hence are potential experimental candidates for our proposal.
Assume a layered medium whose constituents are two materials with homogeneous dielectric coefficients $(\epsilon_1,\mu_1)$ and $(\epsilon_2,\mu_2)$. For definiteness, take medium ``1'' as an ordinary dielectric material and medium ``2'' as a metal-like material. If one (indefinitely) juxtaposes alternating layers of medium 1 with thickness $d_1$ and medium 2 with thickness $d_2$ such that the directions perpendicular to the thicknesses of the layers are ideally infinite, then one has an idealized layered medium \cite{2006PhRvB..74k5116W}. A clear illustration of what has been just described can be seen in Ref. \cite{2006PhRvB..74k5116W}. If one defines a coordinate system such that the $x$-axis is in the direction of the alternating layers, then the principal effective permittivity components of this layered medium are \cite{2006JOSAB..23..498W}
\begin{equation}
\epsilon_y=\epsilon_z=\frac{\epsilon_1+\eta \epsilon_2}{1+\eta}\label{epsilon-parallel}
\end{equation}
and
\begin{equation}
\frac{1}{\epsilon_x}=\frac{1}{1+\eta}\left(\frac{1}{\epsilon_1} +\frac{\eta}{\epsilon_2}\right)\label{epsilon-perpendicular},
\end{equation}
where $\eta\equiv d_2/d_1$. Identical expressions also hold for the anisotropic components of the effective permeability tensor of the layered medium \citep{2006PhRvB..74k5116W}. One clearly sees from Eqs. \eqref{epsilon-parallel} and \eqref{epsilon-perpendicular} that when $\eta \rightarrow 0$ the effective dielectric parameters approach $\epsilon_1$, while they approach $\epsilon_2$ when $\eta\rightarrow \infty$. This is exactly what one expects because, for instance, when $d_2\rightarrow 0$ ($\eta\rightarrow 0$) the layered medium is basically medium 1. As will be important in the sequel, if one chooses a particular value for $\epsilon_x$ in Eq. \eqref{epsilon-perpendicular}, with $\epsilon_1$ free, it follows that
\begin{equation}
\epsilon_2=\frac{\epsilon_x \eta \epsilon_1}{\epsilon_1(1+\eta) -\epsilon_x}\label{epsilon_metal}.
\end{equation}
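The sketch below implements Eqs. \eqref{epsilon-parallel}--\eqref{epsilon_metal} and verifies that Eq. \eqref{epsilon_metal} indeed inverts Eq. \eqref{epsilon-perpendicular}; the values of $\epsilon_1$, $\eta$, and the target $\epsilon_x$ are illustrative:

```python
eps0 = 8.8541878e-12

def eps_layers(eps1, eps2, eta):
    """Effective principal permittivities of a two-constituent layered medium,
    Eqs. (epsilon-perpendicular) and (epsilon-parallel); eta = d2/d1."""
    eps_x = (1 + eta) / (1 / eps1 + eta / eps2)
    eps_y = (eps1 + eta * eps2) / (1 + eta)
    return eps_x, eps_y

def eps2_for_target(eps_x, eps1, eta):
    """Metal permittivity required for a target eps_x, Eq. (epsilon_metal)."""
    return eps_x * eta * eps1 / (eps1 * (1 + eta) - eps_x)

eps1, eta = 2.0 * eps0, 10.0      # illustrative dielectric and thickness ratio
target = -1.5e-2 * eps0           # small negative eps_x, as in the estimate below
eps2 = eps2_for_target(target, eps1, eta)
print(eps2 / eps0, eps_layers(eps1, eps2, eta)[0] / eps0)
```

As expected, the required $\epsilon_2$ comes out negative (metal-like), and substituting it back recovers the target $\epsilon_x$.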
Consider now that the layered medium is under the influence of applied (controllable) fields such that the electric field is in the $x$-direction and the magnetic field is in the $y$-direction. With our previous notation for trirefringence, we have $\epsilon_x\equiv \varepsilon_{\parallel}\equiv-\epsilon_{\parallel}<0$ and $\epsilon_y\equiv \varepsilon_{\perp}$. From Eq. \eqref{epsilon_metal}, it is thus clear that a layered medium under applied fields in our model can only be trirefringent if $\epsilon_2$ is negative. Given that we have chosen medium 2 as a metal-like medium, $\epsilon_2<0$ is always the case if the frequency $\omega$ of the propagating waves is smaller than the plasma frequency $\omega_p$ of the material. Indeed, the real (R) part of $\epsilon_2$ for metal-like structures is given by the modified Drude-Lorentz model \citep{2010opme.book.....C},
\begin{eqnarray}
\epsilon_2^R=\epsilon_m(\infty)-\epsilon_0\left(\frac{\omega_p}{\omega}\right)^2\label{permittivity-metal},
\end{eqnarray}
where $\epsilon_m(\infty)$ is a positive constant (changing from material to material) \citep{2010opme.book.....C}. Therefore, for any value of $\epsilon_x=-\epsilon_{\parallel}$, one can always find an associated $\omega$ fulfilling it from Eqs. \eqref{epsilon_metal} and \eqref{permittivity-metal}.
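Inverting Eq. \eqref{permittivity-metal} for the frequency at which a prescribed $\epsilon_2^R<0$ is attained gives $\omega=\omega_p\sqrt{\epsilon_0/[\epsilon_m(\infty)-\epsilon_2^R]}$. A numerical sketch, in which the plasma frequency and $\epsilon_m(\infty)$ are assumed values:

```python
import numpy as np

eps0 = 8.8541878e-12

def eps2_real(omega, omega_p, eps_m_inf):
    """Real part of the metal permittivity, Eq. (permittivity-metal)."""
    return eps_m_inf - eps0 * (omega_p / omega)**2

def omega_for_eps2(target, omega_p, eps_m_inf):
    """Frequency at which eps2^R equals a prescribed (negative) target."""
    return omega_p * np.sqrt(eps0 / (eps_m_inf - target))

omega_p = 2 * np.pi * 10e9     # assumed plasma frequency (microwave wire medium)
eps_m_inf = 1.0 * eps0         # assumed high-frequency constant
target = -1.3627e-2 * eps0     # eps2 value of the layered-medium example above
omega = omega_for_eps2(target, omega_p, eps_m_inf)
print(omega / omega_p)         # slightly below the plasma frequency, as expected
```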
In general, losses attenuate electromagnetic waves and they could be estimated by the imaginary (I) parts of the dielectric coefficients. If one takes $\epsilon_{1}$ far from resonance, which we assume here, then its imaginary part is negligible \cite{2010opme.book.....C}. In this case, losses would just be associated with $\epsilon_2$ and for metal-like structures they are of the form $\epsilon^I_2=\epsilon_0\Gamma \omega_p^2/\omega^3$, with $\Gamma$ the damping constant and usually $\Gamma \ll \omega_p$ \cite{2010opme.book.....C}. Simple calculations from Eq. \eqref{epsilon-perpendicular} tell us that
\begin{equation}
\varepsilon_{\parallel}^I= \frac{\eta (\eta + 1)\epsilon_2^I(\epsilon_{1}^R)^2}{(\epsilon_2^I)^2 + (\epsilon_2^R+\eta \epsilon_{1
}^R)^2}\label{losses}.
\end{equation}
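A numerical sketch of Eq. \eqref{losses}, confirming that $\varepsilon_{\parallel}^I\approx\epsilon_2^I$ in the $\eta\gg1$ regime (all coefficient values are illustrative):

```python
eps0 = 8.8541878e-12

def eps_par_imag(eps1R, eps2R, eps2I, eta):
    """Imaginary part of eps_parallel for the layered medium, Eq. (losses)."""
    return (eta * (eta + 1) * eps2I * eps1R**2
            / (eps2I**2 + (eps2R + eta * eps1R)**2))

# Illustrative values: dielectric far from resonance, slightly lossy metal, eta >> 1
eps1R, eps2R, eps2I, eta = 2.0 * eps0, -0.1 * eps0, 0.01 * eps0, 1e3
print(eps_par_imag(eps1R, eps2R, eps2I, eta) / eps2I)  # ~1, i.e. eps_par^I ~ eps2^I
```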
Since small values of $\varepsilon_{\parallel}$ are of interest for trirefringence, Eq. \eqref{epsilon_metal} implies that $\epsilon_2^R$ should be small as well; consequently, frequencies of interest should be close to the plasma frequency [see Eq. \eqref{permittivity-metal}]. Metal losses should then also be small. If one assumes that $\eta \gg 1$, then $\varepsilon_{\parallel}^I\approx \epsilon_2^I$. Losses in layered media are negligible when the real parts of their dielectric coefficients are much larger than their imaginary parts. For $|\varepsilon_{\parallel}^R|\gg |\varepsilon_{\parallel}^I|$, from the above, one would need $|\epsilon_{2}^R|\gg |\epsilon_{2}^I|$. The condition $\varepsilon_{\perp}^R\gg \varepsilon_{\perp}^I$ always holds when frequencies are near the plasma frequency and far from resonant frequencies of the dielectric medium, exactly as we have assumed in our analysis.
Another important condition for a trirefringent medium would be a nonlinear magnetization induced by an electric field.
Regarding its permittivity, we have taken $\eta \gg 1$. Thus, if the permeabilities of the constituent media are close to a given constant, convenient choices for $\eta$ will render effective coefficients such that $\mu_x\approx \mu_y=\mu_z \approx \mu_2$. In order to have the desired nonlinear response, one should just take a metal-like constituent medium whose permeability is naturally nonlinear. This could be achieved if medium 2 were a ``metametal'', for example a metal array (wire medium \cite{2010opme.book.....C}) of split ring resonators embedded in a given host medium. In addition to the nonlinear response from the split ring resonators \cite{2010opme.book.....C}, such media could also have controllable plasma frequencies \cite{2010opme.book.....C}, which could be set to lie in convenient regions of the electromagnetic spectrum, such as the microwave region \cite{2010opme.book.....C}. (Good conducting metals have plasma frequencies usually in the near-ultraviolet and optical ranges \cite{2010opme.book.....C}.) In this case, larger structures could be tailored while still behaving as an effective continuous medium, which is of experimental relevance. Moreover, adaptations of split ring resonators could also lead to effective nonlinear responses. Structures such as varactor-loaded or coupled split ring resonators already demonstrate such a response at microwave frequencies \cite{2012ApPhL.101e1103R} and hence could also be used in magnetoelectric layered media.
Let us make some estimates for relevant parameters of our trirefringent layered media. Take $\epsilon_{\parallel}/\epsilon_0=1.5\times 10^{-2}$, and keep the other parameters as in the above estimate. For this case, $0.053<7.5\times 10^{-3} \tan^2\theta<1$, or $69.4<\theta(\mbox{degree})<85.1$ [$1.21 <\theta (\mbox{rad})<1.44$], which is considerably larger than the angular opening of the previous estimate. For typical metal-like parameters, $\Gamma/\omega_p\approx 10^{-3}-10^{-2}$ \cite{2010opme.book.....C}, which means losses could indeed be small for extraordinary waves and rays. If $\beta_{xyy}$ turns out to be smaller than the values estimated, then larger external magnetic fields could be used to increase the angle apertures. Smaller values of $\epsilon_{\parallel}$ could also work but losses in these cases should be investigated with more care, due to the relevance of dispersive effects.
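The angular window quoted above can be cross-checked numerically. The short script below is an illustrative sketch of ours: the prefactor $7.5\times 10^{-3}$ and the lower bound $0.053$ are taken directly from the estimate in the text, and the inequality is simply inverted for $\theta$.

```python
import math

# Trirefringence window quoted in the text: 0.053 < 7.5e-3 * tan(theta)**2 < 1.
prefactor = 7.5e-3
lower, upper = 0.053, 1.0

# Invert the inequality for the two critical angles.
theta_min = math.degrees(math.atan(math.sqrt(lower / prefactor)))
theta_max = math.degrees(math.atan(math.sqrt(upper / prefactor)))

print(f"{theta_min:.1f} deg < theta < {theta_max:.1f} deg")  # ~69.4 to ~85.1
```

The output reproduces the quoted range $69.4^{\circ}\lesssim\theta\lesssim 85.1^{\circ}$ to rounding accuracy.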
\section{Final remarks}
\label{discussion}
We have shown that combinations of already known nonlinear magnetoelectric materials are feasible media for the occurrence of trirefringence. This sort of effect was previously reported \cite{2012PhRvA..86a3801D} in a more restrictive model, and it has been generalized here. In particular, when we restrict the propagation of light rays to the xz-plane, the same solutions for phase and group velocities are found. As clearly seen from Eqs. \eqref{pe0} and \eqref{peext} for light rays propagating in the xz-plane, the polarization vectors do not lie in this plane because they depend on an extra $\beta$ coefficient that the velocities do not. As an important consequence thereof, the three polarizations of a trirefringent medium are linearly independent. We should also note that, in the same way as in the idealized case \cite{2012PhRvA..86a3801D}, there are three distinct group velocities for each direction of the wave vector in the region where trirefringence takes place. This is an important result because it demonstrates that, analogously to the birefringent case, as a light ray enters a trirefringent slab it splits into three rays with linearly independent polarizations, each following the trajectory imposed by Snell's law.
We have also given estimates and examples of possible trirefringent media. They have been based on layered media, whose tailoring is relatively easy experimentally. In order to have all ingredients for trirefringence, convenient fields should be applied on these media and one of their constituent parts might have an intrinsic nonlinear magnetic response. This could be achieved, for instance, with arrays of split ring resonators in a given host medium, or any other metallic-meta-atoms which present nonlinear magnetic responses. The trirefringent media proposed would also have controllable plasma frequencies, which would be useful for practical realizations since all the structures should have subwavelength sizes. With layered media one could easily control allowed values for the permittivity components, even render some of them close to zero with small resultant losses. This is an important aspect for trirefringence, given that the set of angles where it takes place would generally depend on the inverse of the permittivity.
Closing, we should mention that the model investigated in this work also allows for the possibility of exploring slow light phenomena. This aspect can be understood by examining Eq.~(\ref{tri}), which sets the condition for the occurrence of trirefringence, or directly by inspecting the specific example explored in Fig.~\ref{fig2}. The angular opening for which this effect occurs is characterized by two critical angles: the minimum one, $\theta_{\tt min}$, which depends on the magnetoelectric properties of the medium, and the maximum one, $\theta_{\tt max}$, which depends only on the permittivity coefficients. We particularly see that $v_{e}^{-}(\theta_{\tt max}) = 0$, which means that the velocity of one of the extraordinary rays continually decreases to zero as we move from the trirefringent region to a birefringent region. The whole picture is clearly displayed in Fig.~\ref{fig2}. If we start with $\theta < \theta_{\tt min}$ there will be only one solution for wave propagation, which corresponds to the ordinary wave. When we reach $\theta_{\tt min}$ an extraordinary ray appears, and this particular direction exhibits birefringence. In fact, at this direction both extraordinary wave solutions coincide. Directions such that $\theta_{\tt min} < \theta < \theta_{\tt max}$ present three distinct solutions as discussed before, characterizing a trirefringent domain. However, as $\theta \rightarrow \theta_{\tt max}$ we have that $v_{e}^{-}(\theta) \rightarrow v_{e}^{-}(\theta_{\tt max}) = 0$. Hence, we can produce an extraordinary solution with a phase velocity as close to zero as we wish just by adjusting the direction of propagation as close to $\theta_{\tt max}$ as possible.
\begin{acknowledgments}
This work was partially supported by the Brazilian research agencies CNPq (Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico) under Grant No. 302248/2015-3, and FAPESP (Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo) under grants Nos. 2015/04174-9 and 2017/21384-2.
\end{acknowledgments}
\section{Appendix}
Alternatively to the method of field disturbances, proposed by Hadamard \cite{hadamard1903,papapetrou1974,boillat}, one can derive the Fresnel equation [Eq.~(\ref{Zij})] directly through Maxwell equations [Eqs.~(\ref{1}) and (\ref{2})], and the constitutive relations [Eqs.~(\ref{constitutive1}) and (\ref{constitutive2})], together with some assumptions on the fields.
The field can be decomposed into a probe field $\vec{E}^{{}^{p}}$ and a background field $\vec{E}^{{}^{o}}$, $\vec{E}=\vec{E}^{{}^{p}}+\vec{E}^{{}^{o}}$, where $\vert \vec{E}^{{}^{o}} \vert \gg \vert \vec{E}^{{}^{p}} \vert$, $\vert \nabla\cdot\vec{E}^{{}^{o}} \vert \ll \vert \nabla\cdot\vec{E}^{{}^{p}} \vert$, and $\vert \partial_{t}\vec{E}^{{}^{o}} \vert \ll \vert \partial_{t}\vec{E}^{{}^{p}} \vert$. Note that
\begin{equation}
\frac{\partial}{\partial E_{j}}=\frac{\partial}{\partial E^{{}^{o}}_{n}}\frac{\partial E^{{}^{o}}_{n}}{\partial E_{j}}+\frac{\partial}{\partial E^{{}^{p}}_{n}}\frac{\partial E^{{}^{p}}_{n}}{\partial E_{j}}\simeq \frac{\partial}{\partial E^{{}^{o}}_{j}}.
\nonumber
\end{equation}
The same holds for the magnetic field: $\vec{B}=\vec{B}^{{}^{p}}+\vec{B}^{{}^{o}}$, and so on.
We assume plane wave solutions for the electric and magnetic probe field modes as $\vec{E}^{{}^{p}}=\vec{e}\;\mbox{exp}[i(\omega t-\vec{q}\cdot\vec{r})]$ and $\vec{B}^{{}^{p}}=\vec{b}\;\mbox{exp}[i(\omega t-\vec{q}\cdot\vec{r})]$, from which,
\begin{align}
&\partial_{t} E^{{}^{p}}_{j}=i \omega E^{{}^{p}}_{j}, \hspace{2mm} \partial_{t} B^{{}^{p}}_{j}=i \omega B^{{}^{p}}_{j},
\nonumber
\\
&
\partial_{i} E^{{}^{p}}_{j}=-i q_{i} E^{{}^{p}}_{j}, \hspace{2mm} \partial_{i} B^{{}^{p}}_{j}=-i q_{i} B^{{}^{p}}_{j}.
\nonumber
\end{align}
Substituting these relations into Faraday's law [Eq. (\ref{1})] yields $\omega B^{{}^{p}}_{k}=\epsilon_{ijk}q_{i}E^{{}^{p}}_{j}$. Finally, inserting this result into Eq. (\ref{2}), we obtain after some algebra the Fresnel equation $Z_{ij}E^{{}^{p}}_{j}=0$, where $Z_{ij}$ is the same as obtained in Eq. (\ref{Zij}).
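The sign conventions in this step can be verified symbolically. The sketch below is our own consistency check (not part of the derivation): it substitutes the probe plane waves into Faraday's law $\nabla\times\vec{E}=-\partial_t\vec{B}$ and confirms that the amplitude relation $\omega b_k=\epsilon_{ijk}q_i e_j$, i.e. $\omega\vec{b}=\vec{q}\times\vec{e}$, makes the residual vanish identically.

```python
import sympy as sp

# Probe plane waves E^p = e*exp(i(w t - q.r)), B^p = b*exp(i(w t - q.r)).
t, w = sp.symbols('t omega', positive=True)
x, y, z = sp.symbols('x y z', real=True)
qx, qy, qz = sp.symbols('q_x q_y q_z', real=True)
ex, ey, ez = sp.symbols('e_x e_y e_z')

r = sp.Matrix([x, y, z])
q = sp.Matrix([qx, qy, qz])
e = sp.Matrix([ex, ey, ez])
phase = sp.exp(sp.I * (w * t - q.dot(r)))

E = e * phase
b = q.cross(e) / w          # candidate amplitude from the text: w*b = q x e
B = b * phase

def curl(F):
    # Cartesian curl, component by component.
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

# Faraday residual: curl(E) + dB/dt should vanish identically.
residual = (curl(E) + sp.diff(B, t)).applyfunc(sp.simplify)
print(residual.T)
```

The residual simplifies to the zero vector, confirming the stated amplitude relation.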
\section{Introduction}
Graphene is a remarkable material that has generated enormous interest in both the theoretical and experimental community since its discovery in 2004 \cite{Geim2009}. While there are many reasons for interest in this unusual material, one particularly exciting feature is that it is possible to produce graphene samples with extreme purity and thus observe an electronic regime that had not previously been explored --- the hydrodynamic regime \cite{lucas2018hydrodynamics,Ho2018,Bandurin1055,Levitov2016,Mendoza2013}. In this regime the shortest scattering time is that for electron-electron collisions, and collisions with phonons and impurities are subdominant. As a result, semi-classical kinetic theory reduces to a form of quantum hydrodynamics \cite{Irving1951}.
In general, quantum hydrodynamics (QH) describes the dynamics of systems that vary slowly in space and time. The foundation of QH is the ability to obtain a set of conservation laws for the electron liquid. These conservation laws are derived from the quantum Boltzmann equation (QBE), which is the equation of motion for the electron fluid's phase space distribution function \cite{Erdos2004,Kadanoff}.
The QH approach has been extremely successful in studying electron plasmas, fractional quantum Hall fluids \cite{Passot2005,Wiegmann2013,Son2013}, and now also the electron fluid of graphene \cite{Fritz2008,lucas2018hydrodynamics}. In the current work we will derive QH equations from the QBE for the case of bilayer graphene (BLG) near charge neutrality (CN), where there is no well-defined Fermi surface. We note, however, that our kinetic theory formalism is more general than QH.
Whereas QBE and QH have been studied extensively in monolayer graphene \cite{Fritz2008,Mueller2008,Mueller2008b,Lux2012}, they have not been as well studied in the case of bilayer graphene. There are, however, several reasons why the BLG case should be of substantial interest. Firstly, the band structure of BLG is fundamentally different from that of monolayer graphene. In the nearest-neighbor hopping approximation, A-B stacked bilayer graphene has quadratic bands of electron and hole-like excitations at low energies \cite{Neto:2007} which touch at zero energy. This interesting band structure, which was confirmed experimentally in Ref \cite{Nam2017}, provides the unique quantum transport properties of BLG \cite{McCann2013}.
A second reason that BLG is now of interest is due to recent experimental advances that have allowed measurements with unprecedented precision --- in particular \cite{Nam2017} reports measurements of the electrical conductivity of BLG. These advances have been possible due to the development of suspended BLG devices \cite{Nam2017,sBLG1,sBLG2,sBLG3,sBLG4,sBLG5,sBLG6,sBLG7,sBLG8}. As with monolayer graphene, BLG on a substrate suffers from an inhomogeneous potential, which can lead to charge-puddle physics and superlattice effects. Suspended samples, in comparison, are far cleaner. The current limitation on suspended samples is the size of the devices; however, recent BLG devices have achieved sizes longer than the disorder scattering length \cite{Ki2013}.
Further, due to the low impurity scattering rates in clean samples, there has also been recent interest in studying the viscosity of the electron fluid in materials such as graphene \cite{Levitov2016,Torre2015}. Some signatures of electron viscosity, such as negative non-local resistance, have already been measured experimentally in monolayer graphene \cite{Levitov2016a}. The extension of these experiments to BLG seems natural.
Finally, measurements of the thermal conductivity of suspended single-layer graphene have been performed \cite{Baladin2008} and it may be possible to extend this study to BLG as well.
In this paper, we develop the QBE formalism for calculating transport properties of BLG which can in principle be compared against existing and future transport experiments. The analogous formalism for the electrical conductivity of single-layer graphene was worked out in \cite{Fritz2008}. This work was later extended to study Coulomb drag between two monolayers of graphene \cite{Lux2012} and BLG exactly at CN \cite{Lux2013}. We extend this work away from CN. To do so, we start with the derivation of the quantum transport operators including the charge current, the heat current and the stress tensor in terms of electron and hole fields. We include the Zitterbewegung contribution to these operators. Taking the expectation value of these operators in the DC limit, we obtain the main results for the electrical current \eqref{eq:currentexpect}, heat current \eqref{eq:Qcurrentexpect} and stress tensor \eqref{eq:Tijexpect}.
We then calculate the collision integral, which in general will have four contributions. A central result of our work is the collision integral \eqref{eq:CoItot}, which contains contributions from the Coulomb interactions \eqref{eq:CoI}, impurity scattering \eqref{eq:CoD}, scattering off the boundary \eqref{eq:CoS} and phonon scattering \eqref{eq:I_phonon}.
Once we have derived the QBE formalism, we can study the linear response of the system to perturbations. In particular, we study the behavior under an applied external electric field, thermal gradient, and straining motion in order to calculate the electrical conductivity, thermal conductivity, and viscosity respectively. Finally, in anticipation of
future experiments, we also consider the behavior of the transport properties in an applied magnetic field which leads to, e.g., nonzero Hall-conductivity. The inclusion of a magnetic field is possible for both electrical conductivity and thermal conductivity calculations.
Since the dominant scattering mechanisms are electron-electron and hole-hole collisions, we should be able to represent the transport with a two-fluid hydrodynamics --- where the electron fluid and the hole fluid are individually in thermal equilibrium, each having a well-defined temperature, chemical potential, and velocity. As such we use the QBE to derive a two-fluid model which describes the evolution of the mean fluid velocities of the electron and hole fluids on timescales long compared to the electron-electron collision time. These are given by equations \eqref{eq:fluid1} and \eqref{eq:fluid2}. The two-fluid model includes Coulomb drag between the electron and hole fluids and the momentum-relaxing scattering from phonons. This then allows us to derive simple analytical expressions for the transport properties.
The plan of the paper is as follows. In section \ref{sec:wf} we review the electronic structure of bilayer graphene and the low-energy effective Hamiltonian. Section \ref{sec:Coul} deals with the Coulomb interaction between electrons, in particular the screening thereof in the RPA approximation. We perform the calculation in flat space first and then generalize to curved space so that we can later calculate the stress tensor, which gives the response to a change in the metric. In section \ref{sec:Conserved_currents} we calculate the conserved currents. We then derive the kinetic equation in section \ref{sec:Kinetic_Equation}. Section \ref{sec:Bfield} explores the effect of a constant magnetic field on the kinetic equation. In section \ref{sec:Coll} we write down the collision integral. Then in section \ref{sec:sym} we discuss the symmetries of the collision integral. We introduce the two-fluid model in section \ref{sec:2fm} and derive analytical expressions for the transport properties in terms of the fundamental parameters of the problem. In section \ref{sec:num_res} we evaluate the collision integral numerically and show the results for some transport properties of interest. We compare to the results from the two-fluid model and find good agreement. Detailed calculations are left for the Appendices.
\section{Review of the electronic structure of bilayer graphene}
To start off, we briefly review the derivation of the band structure and the explicit expression of the wave function in BLG. This part serves to make this work self-contained and to introduce notation.
\label{sec:wf}
\subsection{Hamiltonian}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{tight_binding.pdf}
\caption{Sketch of the bilayer graphene lattice used for the tight-binding Hamiltonian. There are two layers 1 and 2 and in each layer there are two inequivalent sites per unit cell labelled A and B. The couplings $\gamma_0$, $\gamma_1$ and $\gamma_3$ are defined in the figure.}
\label{fig:tight-binding}
\end{figure}
The tight-binding Hamiltonian of A-B stacked bilayer graphene with the coupling defined in Fig. \ref{fig:tight-binding} and external gauge field $A_\mu$ has the explicit form \cite{McCann2006,Misumi2008,McCann2013}
\begin{align}
\label{eq:Ham}
H_\xi=\xi\begin{pmatrix}
0 &v_3 \pi &0 &v_F \pi^\dagger\\ v_3 \pi^\dagger &0 &v_F \pi &0\\ 0 &v_F \pi^\dagger &0&\xi \gamma_1\\ v_F \pi &0 &\xi \gamma_1 &0
\end{pmatrix}-eA_0 I,
\end{align}
where
the velocity $v_3$ is given by $v_3=\frac{\sqrt 3}{2}a\gamma_3 /\hbar$, where $a$ is the lattice constant, and the Fermi-velocity is given by $v_F=\frac{\sqrt 3}{2}a\gamma_0 /\hbar$ \footnote{We omit the spin indices here and will take the spin degeneracy into account later on, when we introduce the fermion flavor. The Zeeman splitting is negligible for the fields that we consider here ($B<100$Gauss).}. Here,
$\xi=1$ corresponds to $K$ valley with corresponding wave function \footnote{Here we use the integral over $\mathbf{k}$ as the summation over $\mathbf{k}$ about each Dirac cone $K$ or $K'$ with a cut off $\Lambda$ of order $2\pi/a$. The physical results in our paper do not depend on the cut off since we consider the physics in the long wavelength limit where $k\ll \Lambda$.}
\begin{align}
\psi_K(\mathbf{x})&=\begin{pmatrix}
\varphi_{A_1}(\mx)\\\varphi_{B_2}(\mx)\\\varphi_{A_2}(\mx)\\\varphi_{B_1}(\mx)
\end{pmatrix}=\int \frac{d^2k}{(2\pi)^2}\psi_K(\mathbf{k})e^{i\mk\mx},
\end{align}
and $\xi=-1$ corresponds to the $K'$ valley with corresponding wave function $
\psi_{K'}(\mathbf{x})=(
\varphi_{B_2}(\mx),\varphi_{A_1}(\mx),\varphi_{B_1}(\mx),\varphi_{A_2}(\mx))$.
We have defined the momentum operator and its holomorphic and anti-holomorphic notation
\begin{align}
p_i=-i\hbar \partial_i -eA_i\\
\pi=p_x +i p_y, \qquad \pi^\dagger=p_x -i p_y.
\end{align}
$e<0$ is the electron charge. In the following, we will set $v_3=0$, since we are only interested in the quadratic bands (see Ref.~\cite{McCann2013} for details).
\subsection{Effective Hamiltonian}
Since the Hamiltonian \eqref{eq:Ham} provides information about both high energy and low energy states, it will be useful to create a low-energy effective Hamiltonian.
To simplify our model, we consider only the low-energy bands near the $K$ valley. In the long wavelength limit $v_F k \ll \gamma_1$, one can derive for $\xi=1$
\begin{equation}
\label{eq:Hameff1}
H_{K}=-\frac{1}{2m}\begin{pmatrix}
0 &(\pi^\dagger)^2 \\
\pi^2 & 0
\end{pmatrix}, \qquad \psi_K=\begin{pmatrix}
\varphi_{A_1}(\mathbf{x}) \\
\varphi_{B_2}(\mathbf{x})
\end{pmatrix},
\end{equation}
where $m=\frac{\gamma_1}{2v_F^2}$.
The wave function is given by
\begin{equation}
\label{eq:wfeK}
\psi^\lambda_K=\frac{1}{\sqrt{2}}\begin{pmatrix}
-\lambda e^{-2i\theta_{\mathbf{k}}} \\
1
\end{pmatrix}, \qquad \epsilon^\lambda_{\mathbf{k}}=\lambda\frac{k^2}{2m},
\end{equation}
where $\theta_{\mk}$ is the angle between the vector $\mk$ and the $x$-axis. $\lambda=-1$ denotes electrons in the valence band and $\lambda=1$ denotes electrons in the conduction band. In the low energy limit, we only consider the electrons appearing at sites $A_1$ and $B_2$.
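As a minimal numerical check of Eqs. \eqref{eq:Hameff1} and \eqref{eq:wfeK} (an illustrative sketch of ours, with $\hbar=1$, $A_\mu=0$, and arbitrary units for $m$ and $\mk$), one can diagonalize the two-band Hamiltonian directly:

```python
import numpy as np

# Two-band effective Hamiltonian H_K = -(1/2m)[[0, kbar^2], [k^2, 0]],
# with k = kx + i*ky; expected dispersion eps_lam = lam*k^2/(2m), lam = +-1.
m = 1.0
kx, ky = 0.3, 0.4
k = kx + 1j * ky
k2 = abs(k) ** 2                   # |k|^2 = 0.25 here

H = -np.array([[0.0, np.conj(k) ** 2],
               [k ** 2, 0.0]]) / (2 * m)

evals = np.linalg.eigvalsh(H)      # Hermitian: real eigenvalues, ascending
print(evals)                       # [-k2/(2m), +k2/(2m)] = [-0.125, 0.125]

# Conduction-band spinor (-exp(-2i*theta), 1)/sqrt(2) from Eq. (wfeK):
theta = np.angle(k)
psi_plus = np.array([-np.exp(-2j * theta), 1.0]) / np.sqrt(2)
print(np.allclose(H @ psi_plus, (k2 / (2 * m)) * psi_plus))  # True
```

The eigenvalues and the eigenvector phase structure agree with the quoted $\lambda k^2/2m$ dispersion and the $2\theta_{\mk}$ phase winding.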
In the following sections, we will omit the spin indices for simplicity and consider them back in the counting factors.
Similarly, we can derive the effective Hamiltonian near the $K'$ valley; the only difference is that the wave function is now $\psi_{K'}=( \varphi_{B_2}(\mathbf{x}), \varphi_{A_1}(\mathbf{x}))$.
\section{Coulomb interaction and screening}
\label{sec:Coul}
Coulomb screening is the damping of the electric field due to mobile charge carriers which are quasi-particles and quasi-holes. As a result of screening, the long-range Coulomb interaction becomes short-range. In this section, we will calculate the screening effect of the Coulomb interactions in BLG, or in other words we will calculate the screening momentum $q_{TF}$.
\subsection{Charge density operator}
The Hamiltonian \eqref{eq:Ham} shows explicitly the coupling of BLG with an external gauge field. The free Lagrangian density is given by
\begin{equation}
\label{eq:freeL}
\mathcal{L}_\xi=i\hat \Psi^\dagger_{\xi}\overleftrightarrow{\partial_t} \hat \Psi_\xi -\hat\Psi^\dagger_{\xi} H_\xi \hat \Psi_\xi,
\end{equation}
where $\overleftrightarrow{\partial_t}=\frac{1}{2}(\overrightarrow{\partial_t}-\overleftarrow{\partial_t})$. The field operator in the $K$ valley in second quantization language is given by
\begin{equation}
\hat{\Psi}_K(\mx)=\int \frac{d^2k}{(2\pi)^2}\begin{pmatrix}
c_{K;A_1}(\mk)\\c_{K;B_2}(\mk)\\c_{K;A_2}(\mk)\\c_{K;B_1}(\mk)
\end{pmatrix}e^{i\mk\mx},
\end{equation}
where the operator $c_{K;a}(\mk)$ is the annihilation operator of an electron on the sublattice $a$ at momentum $\mathbf{K+k}$. Similarly, the field operator in the $K'$ valley in second quantization language can be derived. The free action in flat space is given by
\begin{equation}
\mathcal{S}_{\xi}=\int d^3 x \mathcal{L}_\xi.
\end{equation}
The total number density operator in both valleys is given by the definition
\begin{equation}
\rho(\mathbf{x})=\sum_{\xi}\rho_{\xi}(\mathbf{x})=\sum_{\xi,s}\frac{1}{e}\frac{\delta \mathcal{S}_{\xi}}{\delta A_0}(\mx)=\sum_{\xi,s}\hat{\Psi}^\dagger_\xi(\mx)\hat{\Psi}_{\xi}(\mx).
\end{equation}
From the wave function of the electron and hole bands at low energies, we can derive the transformation of the field operator
\begin{equation}
c_{\pm K}(\mk)=\frac{1}{\sqrt 2} \left(\mp e^{-2i\theta_{\mk}}c_{K;A_1}(\mk)+c_{K;B_2}(\mk)\right),
\end{equation}
and similarly for the $K'$ bands.
Combining the spin index and the valley index to a flavor index, we obtain the effective low-energy density
\begin{multline}
\rho^{\text{eff}}(\mq)=\sum_{f}\int \frac{d^2 k}{(2\pi)^2}\frac{1}{2}\\\times\sum_{\lambda,\lambda'}c^\dagger_{\lambda f}(\mk-\mq)c_{\lambda' f}(\mk)\left(1+\lambda \lambda' e^{-2 i(\theta_{\mk-\mq}-\theta_{\mk})}\right).
\label{eq:rho_eff}
\end{multline}
We can separate the effective charge density \eqref{eq:rho_eff} into a normal part where $\lambda=\lambda'$ and the \textit{Zitterbewegung} part where $\lambda=-\lambda'$. The homogeneous contribution to the charge density $\rho^{\text{eff}}(\mathbf{0})$ is only due to the normal part. The effective Coulomb interaction of the effective theory is given by
\begin{equation}
\label{eq:Coulomb}
\hat{V}_C=\frac{1}{2}\int \frac{d^2 q}{(2\pi)^2} V_C(\mq)\rho^{\text{eff}}(-\mq)\rho^{\text{eff}}(\mq).
\end{equation}
The screened Coulomb interaction $V_C(q)$ will be calculated in the next subsection. The bare Coulomb interaction is
\begin{equation}
V(q)=\frac{2\pi\alpha}{q}
\label{eq:bare}
\end{equation}
and $\alpha=\frac{e^2}{4\pi\varepsilon_0}$.
\subsection{Screening in flat background metric}
We need to account for the screening of the long-range Coulomb interaction by the mobile charge carriers. In the random phase approximation (RPA) the dressed interaction is given by
\begin{equation}
V_C(q,\omega)=\frac{V(q,\omega)}{1-\varPi^0(q,\omega) V(q,\omega)},
\label{eq:RPA}
\end{equation}
where $\varPi^0(q,\omega)$ is the bare susceptibility. In order to calculate $\varPi^0(q,\omega)$, we need to calculate the fermion loop of BLG at a finite temperature and at a given chemical potential $\mu$. This is the textbook Lindhart calculation and in the regime $\beta\mu\lesssim1$ and $\beta q^2/m\lesssim 1$ we use the approximate result from Ref \cite{Lv2010}
\begin{equation}
\varPi^0(\mq,0)\approx-\frac{m N_f}{2\pi}(1+\frac{\beta q^2}{12m}).
\end{equation}
This equation starts deviating from the full result in Ref \cite{Lv2010} for $\beta q^2/m>1$; however, large momentum transfers are suppressed by the Fermi occupation factors. This is the textbook Lindhard calculation and in the regime $\beta\mu\lesssim1$ and $\beta q^2/m\lesssim 1$ the approximation above applies. According to \eqref{eq:RPA}, the screened potential is then given by
\begin{equation}
V_C(q)=\frac{2\pi \alpha}{q+q_{TF}(q)}
\label{eq:screened_V}
\end{equation}
with the screening momentum
\begin{equation}
\label{eq:qTF}
q_{TF}(q)=-\varPi^0(\mq,0)\, 2\pi \alpha.
\end{equation}
For $\beta\mu\lesssim 1$, the typical momentum is $k_T=\sqrt{2 k_B T m/\hbar^2}\ll q_{TF}$ for any realistic temperature, so we can safely approximate
\begin{equation}
V_C(q)=\frac{2\pi \alpha}{q_{TF}(q)}.
\end{equation}
\subsection{Screening in a homogeneous metric}
In order to calculate the stress tensor, we need to generalize the formalism to curved space. For a homogeneous metric $g_{ij}=\delta_{ij}+\delta g_{ij}$, we can follow the steps in the last subsection and obtain the screened Coulomb interaction as
\begin{align}
\label{eq:CoulombCur}
\tilde{\mathcal{V}}(\mq,0)=\frac{2\pi \alpha}{\sqrt{g}}\frac{1}{|\mq|+q_{TF}(|\mq|)},
\end{align}
where $|\mq|=\sqrt{g^{ij}q_iq_j}$ and $g=\textrm{det}(g_{ij})$ and where $q_{TF}(|\mq|)$ takes the form of \eqref{eq:qTF}. The detailed derivation of equation \eqref{eq:CoulombCur} is given in Appendix \ref{sec:CoulbCur}. Equation \eqref{eq:CoulombCur} is a new result of this paper and is required in order to calculate the stress tensor operator in the next section.
\section{Conserved current operators}
\label{sec:Conserved_currents}
In order to calculate the transport coefficients, we need to start by deriving the conserved current operators in the effective theory in second quantization language. The detailed derivations for the energy current and the stress tensor operators of BLG are new contributions of this paper. As has been shown in Refs \cite{Fritz2008} and \cite{Lux2013}, there are \textit{Zitterbewegung} contributions to the charge current operator of graphene as well as BLG. To obtain the DC transport coefficients of BLG, one can neglect the Zitterbewegung part which is just the contribution from the off-diagonal component of the Green's function in the generalized Kadanoff-Baym formalism \footnote{The detailed discussions of the Kadanoff-Baym formalism are left for Appendix \ref{sec:BoltzA}.}. However, this contribution will be necessary for studying the quantum transport at finite frequency and momentum using the QBE and we therefore include it for future extensions of this work. In this paper, we are only interested in spatially-independent current operators.
\subsection{Charge current operator}
The current operator is by definition
\begin{equation}
\label{eq:chargecurrent}
J^i_{\xi}(\mx)=\frac{\delta \mathcal{S}_\xi}{\delta A_i (\mx)}=-v_Fe\sum_{s} \xi\hat{\Psi}^\dagger_{\xi}(\mx)\begin{pmatrix}
0 & (\sigma^i)^{\dagger} \\
(\sigma^i)^{\dagger}& 0
\end{pmatrix}\hat{\Psi}_{\xi}(\mx)
\end{equation}
where $s$ stands for spin.
The current density is given by
\begin{equation}
\mathbf{J}(\mx)=\sum_{\xi}\mathbf{J}^{\text{I}}_{\xi}(\mx)+\mathbf{J}^{\text{II}}_{\xi}(\mx)
\end{equation}
where $\mathbf{J}^{\text{I}}_{\xi}(\mx)$ is the contribution of quasi-particle and quasi-hole flow, and the operator $\mathbf{J}^{\text{II}}_{\xi}(\mx)$ creates a quasi-particle-quasi-hole pair.
Using the explicit wave function of low-energy modes \eqref{eq:WaveK1e}, \eqref{eq:WaveK2e} and equation \eqref{eq:chargecurrent}, one can derive the spatially independent current operator
\begin{equation}
\label{eq:chargecurrent1}
\mathbf{J}^{\text{I}}(\mq=0)=\frac{e}{m}\sum_{f}\sum_{\lambda}\int \frac{d^2k}{(2\pi)^2} \lambda \mk c^\dagger_{\lambda f}(\mk) c_{\lambda f}(\mk),
\end{equation}
where we combined the spin index $s$ and valley index $\xi$ to flavor index $f$. Similarly, the \textit{Zitterbewegung} current operator is given by
\begin{multline}
\mathbf{J}^{\text{II}}(\mq=0)=-i\frac{e}{m}\sum_{f}\int \frac{d^2k}{(2\pi)^2}(\hat{z}\times \mk) \\\times\left[c^\dagger_{+ f}(\mk) c_{- f}(\mk)-c^\dagger_{- f}(\mk) c_{+ f}(\mk)\right].
\end{multline}
\subsection{Energy current operator}
The heat current is related to the energy current via
\begin{equation}
\label{eq:heat}
\mathbf{J}^{Q}(\mq=0)=\mathbf{J}^E(\mq=0)-\frac{\mu}{e} \mathbf{J}(\mq=0),
\end{equation}
where $\mu$ is the chemical potential and so we now calculate the energy current, which has contributions from both the kinetic and interaction energy terms in the Hamiltonian. We will follow Ref \cite{Jonson1980} in deriving the energy current operator. The conservation of energy gives us the continuity equation
\begin{equation}
\label{eq:Encont}
\nabla\cdot \mathbf{J}^{E}(\mx,t)+ \dot{\cE}(\mx,t)=0,
\end{equation}
where $\mathbf{J}^{E}$ is the total energy current, which includes both kinetic and interaction contributions, and $\cE$ is the energy density. We will use equation \eqref{eq:Encont} as the definition of the energy current.
\subsubsection{Kinetic contribution}
The kinetic energy density operator is given by
\begin{equation}
\cE_{kin}(\mx)=\sum_{\xi}\mathbf{H}_{\xi}(\mx)=\sum_{\xi}\hat{\Psi}^\dagger_\xi(\mx)\overleftrightarrow{H}_{\xi}\hat{\Psi}_\xi(\mx)
\end{equation}
where $\overleftrightarrow{H}_{\xi}$ means we replace $\partial_i$ in $H_{\xi}$ by $\overleftrightarrow{\partial}_i=\frac{1}{2}(\overrightarrow{\partial}_i-\overleftarrow{\partial}_i)$. We can write down the kinetic energy density in momentum space by Fourier transformation
\begin{align}
\mathbf{H}_{\xi}(\mq)&=\int d^2 \mx\, e^{-i\mq\mx}\mathbf{H}_{\xi}(\mx)\\&=\int \frac{d^2 k}{(2\pi)^2}\hat{\Psi}^\dagger_\xi(\mathbf{k-q})\hat{\mathbf{H}}_{\xi}(\mk,\mq)\hat{\Psi}_\xi(\mk),\nonumber
\end{align}
where $\hat{\mathbf{H}}_{\xi}(\mk,\mq)$ is given explicitly as follows
\begin{multline}
\label{eq:hkin}
\hat{\mathbf{H}}_{\xi}(\mk,\mq)=\\\xi\begin{pmatrix}
0 &0 &0 &v_F (\bar{k}-\frac{1}{2} \bar q)\\ 0 &0 &v_F (k-\frac{1}{2}q) &0\\ 0 &v_F (\bar k -\frac{1}{2}\bar q) &0&\xi \gamma_1\\ v_F (k-\frac{1}{2}q) &0 &\xi \gamma_1 &0
\end{pmatrix},
\end{multline}
where we defined the holomorphic and anti-holomorphic vectors $X=X_1+i X_2,\qquad \bar{X}=X_1-iX_2$. Using Heisenberg's equation $\dot{\cE}_{kin}=-i[\cE_{kin},\mathcal{H}]$
and the continuity equation of energy \eqref{eq:Encont} in momentum space we obtain the formula to determine the kinetic contribution to the energy current
\begin{equation}
\label{eq:Energycurrent}
\mq\cdot \mathbf{J}_{kin}^E(\mq)=[\cE_{kin}(\mq),\mathcal{H}],
\end{equation}
where the total Hamiltonian $\mathcal{H}$ is defined in equation \eqref{eq:htot}. We leave the detailed calculation, which is quite technical, to Appendix \ref{sec:operaotsApp}. We only quote here the results after taking the limit $\mathbf{q}\rightarrow 0$
\begin{equation}
\label{eq:energykin}
\mathbf{J}_{kin}^{E}(\mq=0)=\sum_{f}\sum_{\lambda}\int \frac{d^2k}{(2\pi)^2} \frac{ \mk k^2}{2m^2} c^\dagger_{\lambda f}(\mk) c_{\lambda f}(\mk).
\end{equation}
In comparison with the charge current operator, there is no \textit{Zitterbewegung} contribution to the kinetic part of the energy current. We also see that the quasi-particle and quasi-hole bands contribute to the energy current equally: at each momentum $\mathbf{k}$, the two bands have the same energy and velocity and hence make the same contribution to the energy current.
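This equal weighting can be checked in one line of algebra. The sketch below (our own symbolic check) takes the band energy $\epsilon_\lambda=\lambda k^2/2m$, forms the group velocity $v_\lambda=\partial\epsilon_\lambda/\partial k=\lambda k/m$, and confirms that the single-mode energy flux $\epsilon_\lambda v_\lambda=k^3/2m^2$ is independent of $\lambda$, consistent with Eq. \eqref{eq:energykin}.

```python
import sympy as sp

k, m = sp.symbols('k m', positive=True)

fluxes = []
for lam in (+1, -1):
    eps = lam * k ** 2 / (2 * m)   # band energy, lam = +1 electrons, -1 holes
    v = sp.diff(eps, k)            # group velocity lam*k/m
    fluxes.append(sp.simplify(eps * v))

print(fluxes)  # [k**3/(2*m**2), k**3/(2*m**2)]
```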
\subsubsection{Interaction contribution}
In the linear response calculation, the contribution of the Coulomb interaction to the energy density is given by
\begin{align}
\delta \mathbf{H}^C(\mx)&\equiv\frac{\delta \hat{V}_C}{\delta \rho(\mathbf{x})}=N_0 \int d^2 Y V_C(|\mathbf{x}-\mathbf{Y}|)\delta \rho (\mx)\\&=N_0 V_C(\mq=0)\delta\rho(\mx)=N_0 \frac{2\pi \alpha}{q_{TF}} \delta \rho(\mx),
\end{align}
where $N_0$ is the total background charge number.
The contribution to the energy current from the Coulomb interaction is then given by
\begin{equation}
\label{eq:EnergyCoulombJ}
\mathbf{J}^E_C(\mq=0)=N_0\frac{2\pi \alpha}{e q_{TF}}\mathbf{J}(\mq=0),
\end{equation}
where $\mathbf{J}(\mq=0)$ is nothing but the charge current. However in the kinetic formalism, we consider $\delta \mathbf{H}^C(\mx)$ as the shift of the chemical potential due to the background charge (the Hartree diagram).
\subsection{Stress tensor operator}
The effective Lagrangian in curved space is defined as
\begin{equation}
\label{eq:effactcur}
\mathcal{S}= \int dt \left(\int d^2 \x \sqrt{g(\x)} \sum_{\xi}\mathcal{L}_{\xi}(\x)-\hat{V}_C \right),
\end{equation}
where the free Lagrangian density $\mathcal{L}_{\xi}(\x)$ is defined in equation \eqref{eq:freeL} and $\hat{V}_C $ is the effective Coulomb interaction.
The stress tensor is defined as the response of the system with respect to a perturbation of the local metric,
\begin{equation}
\label{eq:Tijdef}
T^{ij}(\x)=-\frac{2}{\sqrt{g(\x)}}\frac{\delta \mathcal{S}(g_{ij}(\x),\tilde{\hat{\Psi}}^\dagger,\tilde{\hat{\Psi}})}{\delta g_{ij}(\x)}\arrowvert_{\delta g_{ij}=0},
\end{equation}
where $g_{ij}(\x)=\delta_{ij}+\delta g_{ij}(\x)$ and the rescaled field is
\begin{equation}
\label{eq:curvedpsi}
\tilde{\hat{\Psi}}=g^{1/4}\hat{\Psi}.
\end{equation}
\subsubsection{Kinetic contribution}
We calculate the stress tensor operator for the kinetic Hamiltonian \eqref{eq:Ham} following Ref.~\cite{Fujikawa1981}. We leave the detailed calculation to Appendix \ref{sec:operaotsApp}, where we derive the results directly using the definition \eqref{eq:Tijdef} and the explicit form of the kinetic Hamiltonian in curved space. Here we quote the result for the kinetic contribution to the stress tensor
\begin{equation}
\label{eq:stresskin}
T^{ij}(\mq=0)=T^{(I)ij}(\mq=0)+T^{(II)ij}(\mq=0),
\end{equation}
where the normal contribution to the kinetic part of stress tensor is
\begin{align}
\label{eq:Tijnormal}
T^{(I)ij}(\mq=0)&=\sum_{\xi}T^{(I)ij}_\xi(\mq=0)\nonumber\\&=\sum_f \sum_\lambda\int \frac{d^2 k}{(2\pi)^2}\frac{\lambda k^i k^j}{m} c^\dagger_{\lambda f}(\mk) c_{\lambda f}(\mk).
\end{align}
Equation \eqref{eq:Tijnormal} has a form similar to that of the stress tensor operator for a quadratic semimetal \cite{Dumitrescu:2015} such as HgTe.
The \textit{Zitterbewegung} contribution to the kinetic part of the stress tensor is given by
\begin{multline}
T^{(\text{II})11}(\mq=0)=-T^{(\text{II})22}(\mq=0)\\=i\sum_f \int \frac{d^2 k}{(2\pi)^2} \frac{k^1 k^2}{m}\left[c^\dagger_{+ f}(\mk) c_{- f}(\mk)-c^\dagger_{- f}(\mk) c_{+ f}(\mk)\right],
\end{multline}
\begin{multline}
T^{(\text{II})12}(\mq=0)=T^{(\text{II})21}(\mq=0)=-i\sum_f \int \frac{d^2 k}{(2\pi)^2} \\\times\frac{ (k^1)^2 -(k^2)^2}{2m}\left[c^\dagger_{+ f}(\mk) c_{- f}(\mk)-c^\dagger_{- f}(\mk) c_{+ f}(\mk)\right].
\end{multline}
\subsubsection{Interaction contribution}
Besides the kinetic contribution, the stress tensor also has a contribution from the interactions, which we will calculate now. We turn on the homogeneous metric perturbation $\delta g_{ij}$. The Coulomb interaction in curved space near charge neutrality is given by
\begin{equation}
\label{eq:Coulombcurved}
\hat{V}_C=\frac{1}{2\sqrt{g}}\int \frac{d^2 p}{(2\pi)^2} \frac{2\pi \alpha}{|\mathbf{p}|+q_{TF}}\tilde{\rho}^{\text{eff}}(-\mathbf{p})\tilde{\rho}^{\text{eff}}(\mathbf{p}),
\end{equation}
where $\tilde{\rho}^{\text{eff}}(\mq)$ is given by substituting
\begin{equation}
\label{eq:curvedc}
c^\dagger_{\lambda f}\rightarrow g^{1/4} c^\dagger_{\lambda f}, \qquad c_{\lambda f}\rightarrow g^{1/4} c_{\lambda f},
\end{equation}
in \eqref{eq:rho_eff}, and $q=|\mq|=\sqrt{g^{ij}q_iq_j}$.
The factor $g^{-1/2}$ appears in equation \eqref{eq:Coulombcurved} as was shown in the previous section. The transformation \eqref{eq:curvedc} is equivalent to the transformation \eqref{eq:curvedpsi}. We can now use the definition of the stress tensor \eqref{eq:Tijdef} to derive the contribution of the Coulomb interaction to the stress tensor in flat space-time by taking the derivative of $\hat{V}_C$ with respect to the homogeneous metric \footnote{There is a metric dependence in the definition of $|\mathbf{p}|$.}
\begin{align}
\label{eq:TC}
T_C^{ij}(\mq=0)=&\pi \alpha \int \frac{d^2 p}{(2\pi)^2}\Big[-\frac{p^i p^j}{p}\frac{1}{(p+q_{TF})^2}+\delta^{ij}\frac{1}{p+q_{TF}}\Big]\nonumber\\&\times\rho^{\text{eff}}(\mathbf{p})\rho^{\text{eff}}(-\mathbf{p}).
\end{align}
We leave the detailed derivation of \eqref{eq:TC} to Appendix \ref{sec:operaotsApp}. The contribution to $T^{ij}_C(\mq=0)$ up to linear order in the perturbation is given by\footnote{In this paper, we are only interested in the linear transport calculation.}
\begin{align}
T_C^{ij}(\mq=0)&=2\pi\alpha \frac{\delta^{ij}}{q_{TF}(0)}N_0 \rho^{\text{eff}}(0)\\&=2\pi \alpha N_0 \sum_f \sum_{\lambda} \int \frac{d^2 k}{(2\pi)^2} \frac{\delta^{ij}}{q_{TF}(0)} c^\dagger_{\lambda f}(\mk) c_{\lambda f}(\mk), \nonumber
\end{align}
where $N_0$ is the background charge. We can view this contribution simply as a shift in the chemical potential. In the calculation for the shear stress tensor in section \ref{sec:Kinetic_Equation}, we will calculate $T^{12}$ under a constant shear. The contribution from the interactions $T^{ij}_C \sim \delta^{ij}$ will hence not enter our calculation.
\section{Kinetic equation and quantum transport}
\label{sec:Kinetic_Equation}
Having derived the conserved currents, we are now ready to derive the actual QBE. To this end, we set up the semi-classical problem of electron and hole transport in bilayer graphene at a finite temperature $T$. We define the lesser Green's function as \footnote{We ignore the flavor index $f$.}
\begin{multline}
\label{eq:Green}
g^{<}_{\lambda \lambda'} (\mathbf{k},\omega,\mathbf{x},t)=i\int d^2 \mathbf{r}d\tau e^{i(\omega\tau-\mathbf{k}\cdot \mathbf{r})}\\\times\<\Psi^{\dagger}_{\lambda'}(\mathbf{x}-\frac{\mathbf{r}}{2},t-\frac{\tau}{2})\Psi_\lambda(\mathbf{x}+\frac{\mathbf{r}}{2},t+\frac{\tau}{2})\>,
\end{multline}
where $\lambda$ and $\lambda'$ are the band indices. The expectation value $\langle \,\, \rangle$ is evaluated at finite temperature as explained in more detail in Appendix \ref{sec:BoltzA}. In order to study DC transport, we can ignore the off-diagonal part of the lesser Green's function, since this part depends explicitly on time, as explained in Appendix \ref{sec:BoltzA}. Equation \eqref{eq:Green} then takes the explicit form
\begin{equation}
\label{eq:gdiag}
g^{<}_{\lambda \lambda'} (\mathbf{k},\omega,\mathbf{x},t)=2\pi i\delta(\omega-\epsilon_{\lambda}(\mathbf{k}))f_{\lambda}(\mathbf{k},\mathbf{x},t)\delta_{\lambda {\lambda'}},
\end{equation}
where $f_{+(-)}(\mathbf{p},\mathbf{x},t)$ is the distribution function of electrons in the conduction (valence) band. We can formally write down the QBE for the distribution function as
\begin{multline}
\label{eq:Boltz1}
\left(\frac{\partial}{\partial t}+\mathbf{v}_\lambda(\mathbf{k}) \cdot\frac{\partial}{\partial \x} +e\mathbf{E}(\mathbf{x},t) \cdot \frac{\partial}{\partial \mk}\right)f_{\lambda}(\mk,\x,t)\\=-I_{\lambda}[\{f_{\lambda_i}\}](\mk,\mathbf{x},t),
\end{multline}
where the group velocity of band $\lambda$ is defined as
\begin{equation}
\mathbf{v}_{\lambda}(\mathbf{k})=\partial_{\mathbf{k}}\epsilon_{\lambda}(\mathbf{k}).
\label{eq:v_group}
\end{equation}
$\mathbf{E}(\mathbf{x},t)$ is the slowly varying applied electric field. The right-hand side of the equation is the collision integral, which can be derived explicitly from first principles. In section \ref{sec:Coll}, we discuss the collision integral in detail; it accounts for the scattering of quasi-particles off each other, off impurities, and off the boundary. The microscopic derivation of equation \eqref{eq:Boltz1} is left for Appendix \ref{sec:BoltzA}. In the subsequent subsections, we employ equation \eqref{eq:Boltz1} to set up the calculation of the transport coefficients. In DC transport, we can ignore the \textit{Zitterbewegung} contribution, which comes from the off-diagonal part of the Green's function \eqref{eq:Green}. From the results \eqref{eq:chargecurrent1}, \eqref{eq:heat} with \eqref{eq:energykin}, and \eqref{eq:Tijnormal} in the previous section, we obtain expressions for the expectation values of the normal contributions to the conserved currents in terms of the local distribution function as follows
\begin{equation}
\label{eq:currentexpect}
\mathbf{J}= N_f\frac{e}{m}\sum_{\lambda}\int \frac{d^2k}{(2\pi)^2} \lambda \mk f_{\lambda}(\mk),
\end{equation}
\begin{equation}
\label{eq:Qcurrentexpect}
\mathbf{J}^{Q}= N_f\sum_{\lambda}\int \frac{d^2k}{(2\pi)^2} \frac{\lambda \mk}{m}(\epsilon_{\lambda}(\mk)-\mu) f_{\lambda}(\mk),
\end{equation}
and the kinetic contribution to the stress tensor has the expectation value
\begin{align}
\label{eq:Tijexpect}
T^{ij}=N_f \sum_\lambda\int \frac{d^2 k}{(2\pi)^2}\frac{\lambda k^i k^j}{m} f_{\lambda}(\mk).
\end{align}
We note that the results in Eqns.~\eqref{eq:currentexpect}-\eqref{eq:Tijexpect} look similar to the Fermi liquid results for two types of particles, although we are in a very different regime without a well-formed Fermi surface. If we replace the distribution function $f_{\lambda}(\mk)$ in equations \eqref{eq:currentexpect}, \eqref{eq:Qcurrentexpect} and \eqref{eq:Tijexpect} by the unperturbed Fermi distribution $f^0_\lambda(\mk)$, we get zero. In this section, we will use the above equations to obtain the expectation value of the conserved currents in terms of the distribution function perturbations:
\begin{equation}
f_\lambda(\mk,\mathbf{x})=f^0_\lambda(\mk)+f^0_\lambda(\mk)[1-f^0_\lambda(\mk)]h_\lambda(\mk,\mathbf{x}).
\label{eq:f_pert}
\end{equation}
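As a quick numerical check of \eqref{eq:currentexpect} together with \eqref{eq:f_pert}, the sketch below (Python, units $\hbar=e=k_B=1$; the constant ansatz $h_\lambda=\lambda\beta E k_x/m$ is a hypothetical illustration, not a solution of the QBE) evaluates the charge current on a symmetric momentum grid. The equilibrium part of the integrand is odd under $\mathbf{k}\to-\mathbf{k}$ and drops out, so only the perturbation $\delta f_\lambda=f^0_\lambda(1-f^0_\lambda)h_\lambda$ contributes:

```python
import numpy as np

beta, mu, m, Nf, E = 1.0, 0.3, 1.0, 4, 0.01   # units hbar = e = k_B = 1, field E along x

k = np.linspace(-8.0, 8.0, 401)               # symmetric grid: equilibrium part cancels exactly
kx, ky = np.meshgrid(k, k, indexing="ij")
dk = k[1] - k[0]

Jx = 0.0
for lam in (+1, -1):
    eps = lam * (kx**2 + ky**2) / (2 * m)     # quadratic bands eps_lambda = lambda k^2 / 2m
    f0 = 1.0 / (1.0 + np.exp(beta * (eps - mu)))
    h = lam * beta * E * kx / m               # hypothetical ansatz (chi = 1)
    df = f0 * (1 - f0) * h                    # perturbation of the distribution function
    Jx += Nf / m * np.sum(lam * kx * df) * dk**2 / (2 * np.pi) ** 2

print(Jx)   # > 0: current along the applied field
```

The heat current \eqref{eq:Qcurrentexpect} and the stress tensor \eqref{eq:Tijexpect} can be evaluated on the same grid with the obvious replacement of the integrand.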
\subsection{Constant applied magnetic field}
\label{sec:Bfield}
So far, experiments on the electrical conductivity of suspended BLG have been performed in zero magnetic field. However, we believe it is eminently possible to extend the experiments in this direction, and to this end we set up the calculation of the transport coefficients in an applied magnetic field $\mathbf{B}=B\hat{\mathbf{z}}$. In order to use the kinetic equation with a magnetic field, the field must be weak. In a Fermi liquid at zero temperature, the requirement is $k_F \ell_B \gg 1$, where the magnetic length is given by $\ell_B=\sqrt{\frac{\hbar c}{e B}}$. For neutral BLG at finite temperature, one expects the kinetic equation to be valid for $k_T \ell_B \gg 1$, where the thermal momentum is defined as $k_T=\sqrt{2 k_B T m/\hbar^2}$.
For a temperature $T=10\,\mathrm{K}$, the appropriate magnetic field is $B<100$ Gauss. Such a small magnetic field also guarantees that the Zeeman energy is small enough that we can neglect the energy difference between the two spin species.
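As a rough consistency check of these numbers (a back-of-the-envelope sketch; the BLG effective mass $m\approx 0.03\,m_e$ is an assumed illustrative input), one can evaluate the field $B^*$ at which $k_T\ell_B=1$ and verify that $B=100$ Gauss lies comfortably below it:

```python
# CODATA constants (SI); the BLG effective mass is an assumed illustrative value.
hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J / K
m_e  = 9.1093837e-31     # kg
e    = 1.602176634e-19   # C

T = 10.0                 # K
m = 0.03 * m_e           # assumed BLG effective mass
kT2 = 2 * m * k_B * T / hbar**2          # thermal momentum squared k_T^2
B_star = hbar * kT2 / e                  # field at which k_T * l_B = 1 (l_B = sqrt(hbar/(e B)) in SI)

B = 100e-4               # 100 Gauss in Tesla
print(B_star, (B_star / B) ** 0.5)       # B* ~ 0.45 T, so k_T l_B ~ 7 at 100 G
```

So at $B=100$ Gauss one still has $k_T\ell_B\gg1$, consistent with the weak-field requirement above.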
In the presence of a magnetic field, we need to add one more term on the left-hand side of the kinetic equation to take into account the Lorentz force \cite{Maciejko2007}\footnote{We ignore the Zeeman effect, since it is small in comparison with the experimental temperature \cite{Nam2017}.}
\begin{equation}
\label{eq:B}
e\left[\mathbf{v}_{\lambda}({\mathbf{k}})\times \mathbf{B} \right]\cdot \nabla_{\mathbf{k}} f_{\lambda}(\mathbf{k}, \mathbf{x},t),
\end{equation}
where the group velocity is given by \eqref{eq:v_group}.
In this section, we only consider the charge conductivity and the thermal conductivity in the presence of a magnetic field. We can rewrite \eqref{eq:B} as
\begin{equation}
\label{eq:ExtraB}
\frac{eB\lambda}{m}f^0_\lambda(\mathbf{k})[1-f^0_\lambda(\mathbf{k})]\epsilon^{ij}k_j \partial_{k_i}h_\lambda(\mathbf{k},\mathbf{x}).
\end{equation}
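The reduction of \eqref{eq:B} to the form \eqref{eq:ExtraB} can be checked symbolically: since $f^0_\lambda$ and $f^0_\lambda(1-f^0_\lambda)$ depend on $|\mathbf{k}|$ only, the Lorentz term annihilates them and only the derivative of $h_\lambda$ survives. A sketch with sympy (the overall sign depends on the convention for $\epsilon^{ij}$; here $\epsilon^{12}=+1$):

```python
import sympy as sp

kx, ky, B, q_e, m, beta, mu, lam = sp.symbols("k_x k_y B e m beta mu lambda", real=True)
h = sp.Function("h")(kx, ky)                              # generic perturbation h_lambda

eps = lam * (kx**2 + ky**2) / (2 * m)                     # quadratic bands
f0 = 1 / (1 + sp.exp(beta * (eps - mu)))                  # equilibrium distribution
f = f0 + f0 * (1 - f0) * h                                # perturbed distribution

vx, vy = lam * kx / m, lam * ky / m                       # group velocity
lorentz = q_e * B * (vy * sp.diff(f, kx) - vx * sp.diff(f, ky))   # e (v x B) . grad_k f

# only the derivative of h survives: the isotropic factors f0, f0(1-f0) are annihilated
target = (q_e * B * lam / m) * f0 * (1 - f0) * (ky * sp.diff(h, kx) - kx * sp.diff(h, ky))
print(sp.simplify(sp.expand(lorentz - target)))           # 0
```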
\subsection{Thermoelectric coefficients}
We define the electrical conductivity $\sigma$, the thermal conductivity $K$ and the thermopower $\Theta$ by
\begin{equation}
\begin{pmatrix}
\vec{J}\\
\vec{J}^Q
\end{pmatrix}
=\begin{pmatrix}
\sigma &\Theta\\
T\Theta&K
\end{pmatrix}
\begin{pmatrix}
\vec{E}\\
-\vec{\nabla}T
\end{pmatrix}
\label{eq:def_thermo}
\end{equation}
where each of the thermoelectric coefficients is a $2\times2$ matrix. The fact that $\Theta$ appears twice in \eqref{eq:def_thermo} is due to the Onsager reciprocity relation \cite{Onsager1931}. Using these definitions, the Seebeck coefficient is $S=\sigma^{-1}\Theta$ and the Peltier coefficient is $\Pi=TS$. In experiments, the heat current is often measured such that $\vec{J}=0$, in which case the proportionality constant between $\vec{J}^Q$ and $-\vec{\nabla}T$ is $\kappa=K-T\Theta\sigma^{-1}\Theta$ \cite{Mueller2008,lucas2018hydrodynamics}.
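The relations between these coefficients are plain $2\times2$ matrix algebra; a minimal numerical sketch (the matrices below are illustrative numbers, not computed transport coefficients):

```python
import numpy as np

T = 1.0
sigma = np.array([[2.0, 0.3], [-0.3, 2.0]])   # illustrative 2x2 coefficient matrices
Theta = np.array([[0.5, 0.1], [-0.1, 0.5]])
K     = np.array([[1.5, 0.2], [-0.2, 1.5]])

S     = np.linalg.solve(sigma, Theta)         # Seebeck  S = sigma^{-1} Theta
Pi    = T * S                                 # Peltier  Pi = T S
kappa = K - T * Theta @ S                     # open-circuit kappa = K - T Theta sigma^{-1} Theta
print(kappa)
```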
\subsection{Charge conductivity}
In order to derive the coefficients of DC conductivity, we apply a constant electric field $\mathbf{E}$. The unperturbed distribution function is given by
\begin{equation}
f^0_\lambda(\mathbf{p})=\frac{1}{1+e^{\beta(\epsilon_\lambda(\mathbf{p})-\mu)}}.
\label{eq:f_0}
\end{equation}
We need to solve the equation \eqref{eq:Boltz1} in the following simplified form
\begin{align}
\label{eq:BolE}
-\lambda& \beta\frac{e\mathbf{E}\cdot\mathbf{p}}{m} f^0_\lambda(\mathbf{p})[1-f^0_\lambda(\mathbf{p})]\nonumber\\+&\frac{eB\lambda}{m}f^0_\lambda(\mathbf{p})[1-f^0_\lambda(\mathbf{p})]\epsilon^{ij}p_j \nabla_{p_i}h_\lambda(\mathbf{p})\\&=-I^{(1)}_\lambda[\{h_{\lambda_i}(\mk_i)\}](\mathbf{p})\nonumber
\end{align}
for $\lambda=+$ and $\lambda=-$ to obtain $h_\lambda(\mathbf{p})$. In equation \eqref{eq:BolE}, the right-hand side denotes the linear order in the perturbation of the collision integral derived in section \ref{sec:Coll}. The left-hand side is derived in the Green's function formalism as \eqref{eq:KineticBandL1linear1}. The suggested ansatz in this calculation is
\begin{equation}
\label{eq:as12}
h_\lambda(\mathbf{p})=\beta\frac{e\mathbf{E}}{m}\cdot\bigg(\mathbf{p}\chi_\lambda^\parallel(p)+\mathbf{p}\times\hat{\vec{z}}\chi_\lambda^\perp(p)\bigg)
\end{equation}
and we solve for $\chi_\lambda^{\parallel}(p)$ and $\chi_\lambda^{\perp}(p)$ numerically. The second term in the ansatz becomes relevant when we have a magnetic field. The charge current is given by \eqref{eq:currentexpect}, and the DC conductivity can be directly read off. Due to the symmetry of the collision integral that will be discussed in section \ref{sec:sym}, we can show that $\sigma_{xx}=\sigma_{yy}$ because of rotational invariance, and that $\sigma_{xy}=\sigma_{yx}=0$ in the absence of a magnetic field due to parity. The external magnetic field $\mathbf{B}$ breaks parity, which gives us $\sigma_{xy}=-\sigma_{yx} \neq 0$.
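To illustrate how the conductivity is read off, one can replace the full collision integral by a simple relaxation-time approximation $I^{(1)}_\lambda=f^0_\lambda(1-f^0_\lambda)h_\lambda/\tau$ (an assumption for illustration only, not the interacting kernel of section \ref{sec:Coll}). At $B=0$ this gives $\chi^\parallel_\lambda=\lambda\tau$, and at charge neutrality the momentum integral closes to $\sigma_{xx}=N_f e^2\tau\ln 2/(\pi\beta\hbar^2)$; the sketch below (units $\hbar=e=1$) checks this numerically:

```python
import numpy as np

# Relaxation-time toy model at charge neutrality (mu = 0), units hbar = e = 1.
beta, m, tau, Nf = 1.0, 1.0, 1.0, 4

p = np.linspace(1e-6, 10.0, 20001)
dp = p[1] - p[0]
g = 1.0 / (4.0 * np.cosh(beta * p**2 / (4 * m)) ** 2)   # f0(1-f0), same for both bands at mu = 0

# h_lambda = lambda beta tau e E p_x / m solves the RTA kinetic equation, so
# sigma_xx = (Nf e^2 beta tau / m^2) * 2 * int d^2p/(2pi)^2 p_x^2 f0(1-f0),  p_x^2 -> p^2/2
integral = np.sum(p * (p**2 / 2) * g) * dp / (2 * np.pi)
sigma_xx = Nf * beta * tau / m**2 * 2 * integral

print(sigma_xx, Nf * np.log(2) * tau / (np.pi * beta))  # numeric vs closed form
```

The $\propto k_BT$ scaling of this toy result is the usual signature of a thermally generated carrier density at charge neutrality.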
\subsection{Thermal conductivity}
We consider a spatially dependent background temperature $T(\mathbf{x})=T+\delta T(\mathbf{x})$. The local equilibrium distribution function takes the form
\begin{equation}
f^0_\lambda(\mathbf{p},T(\mathbf{x}),\mu)=\frac{1}{1+e^{\frac{1}{k_BT(\mathbf{x})}\left(\epsilon_\lambda(\mathbf{p}) -\mu\right)}}.
\end{equation}
We now consider a constant gradient in temperature by introducing the space-time independent driving \textit{force}
$\mathbf{F}^T=-\frac{\nabla_{\mathbf{x}}\delta T}{T}$. We then need to solve equation \eqref{eq:Boltz1} in the following simplified form
\begin{align}
-\lambda& \beta\frac{\mathbf{F}^T\cdot\mathbf{p}}{m}(\epsilon_\lambda(\mathbf{p})-\mu) f^0_\lambda(\mathbf{p})[1-f^0_\lambda(\mathbf{p})]\nonumber\\+&\frac{eB\lambda}{m}f^0_\lambda(\mathbf{p})[1-f^0_\lambda(\mathbf{p})]\epsilon^{ij}p_j \nabla_{p_i}h_\lambda(\mathbf{p})\\&=-I^{(1)}_\lambda[\{h_{\lambda_i}(\mk_i)\}](\mathbf{p})\nonumber
\end{align}
for $\lambda=+$ and $\lambda=-$ to obtain $h_\lambda(\mathbf{p})$. The left-hand side is obtained from \eqref{eq:KineticBandL4linear3}. The suggested ansatz is
\begin{equation}
\label{eq:as32}
h_\lambda(\mathbf{p})=\beta\frac{\mathbf{F}^T}{m}(\epsilon_\lambda(\mathbf{p})-\mu)\cdot\bigg(\mathbf{p}\phi_\lambda^\parallel(p)+\mathbf{p}\times\hat{\vec{z}}\phi_\lambda^\perp(p)\bigg).
\end{equation}
From the heat current \eqref{eq:Qcurrentexpect} we can read off the thermal conductivity. For the thermopower, we consider the same ansatz as for the thermal conductivity, but calculate the charge current, which is given by \eqref{eq:currentexpect}.
\subsection{Viscosity}
To calculate the DC shear viscosity, we consider a background velocity for the particles and holes. Therefore, the local equilibrium distribution function takes the form
\begin{equation}
f^0_\lambda(\mathbf{p},\mathbf{u}_\lambda(\mathbf{x}),\mu)=\frac{1}{1+e^{\beta(\epsilon_\lambda(\mathbf{p})-\mathbf{u}_\lambda(\mathbf x)\cdot\mathbf{p}-\mu)}},
\end{equation}
where $\mathbf{u}_{+(-)}(\x)$ is the perturbed background velocity of electrons (holes).
We apply a constant shear with the explicit form
\begin{equation}
u^\lambda_{ 12}=F_\lambda, \qquad u^{\lambda}_{11}= u^{\lambda}_{22}=0
\label{eq:strain}
\end{equation}
where $F_\lambda$ is a space-time independent perturbation and the definition of strain is
\begin{equation}
\label{eq:strain1}
u^\lambda_{ ij}=\frac{1}{2}\left( \partial_i u_{\lambda}^j+\partial_j u_{\lambda}^i\right).
\end{equation}
We need to solve equation \eqref{eq:Boltz1} in the following simplified form
\begin{equation}
\lambda \beta\frac{2p^1p^2F_\lambda}{m} f^0_\lambda(\mathbf{p})[1-f^0_\lambda(\mathbf{p})]=-I^{(1)}_\lambda[\{h_{\lambda_i}(\mk_i)\}](\mathbf{p})
\end{equation}
for $\lambda=+$ and $\lambda=-$ to obtain $h_\lambda(\mathbf{p})$. The left-hand side comes from \eqref{eq:KineticBandL3Linear2}. The suggested ansatz is
\begin{equation}
\label{eq:as2}
h_\lambda(\mathbf{p})=\beta\frac{2p^1 p^2}{m}\chi^\lambda_{\eta}(\mathbf{p};F_+,F_-).
\end{equation}
The stress tensor $T^{12}_\lambda$ is given by \eqref{eq:Tijexpect} and the shear viscosity coefficients are given by the definition
\begin{equation}
T^{12}_\lambda=-\eta_{\lambda \lambda'}F_{\lambda'}.
\end{equation}
In the experiment of Ref.~\cite{Kumar2017}, the authors found that quasi-particle collisions can significantly affect transport in monolayer graphene. The results showed that, in the clean limit, the electrons behave as a highly viscous fluid due to the electron-electron interactions. Even though there has been no analogous experiment for BLG yet, we expect that similarly viscous behaviour will be found in BLG in the near future. The viscosity coefficients will play an important role in simulations of electronic transport in BLG and in comparisons with experimental results.
\section{Collision integral}
\label{sec:Coll}
We now focus on the right-hand side of the QBE---the collision integral. We discuss the contributions from quasi-particle interactions, scattering on disorder and scattering off the boundary separately in subsections \ref{subsec:CI}, \ref{subsec:DI} and \ref{subsec:BI}, respectively. In the quasi-particle scattering channel, we ignore Umklapp processes at low energies near charge neutrality: since in our regime $k_F a\ll 1$, Umklapp scattering is negligible due to the lack of available phase space. Inter-valley scattering is also ignored due to the long-range nature of the Coulomb interaction. Up to linear order in the perturbation, the generalized collision integral on the right-hand side of the kinetic equation \eqref{eq:Boltz1} includes contributions from quasi-particle interactions, scattering on disorder, finite-size effects and scattering on phonons, respectively:
\begin{equation}
\label{eq:CoItot}
I^{(1)}_{\lambda}=I^{(1)}_{\lambda,\textrm{int}}+I^{(1)}_{\lambda,\textrm{dis}}+I^{(1)}_{\lambda,\textrm{size}}+I^{(1)}_{\lambda,\textrm{phonon}}.
\end{equation}
In the following subsection, we will discuss in detail each contribution of \eqref{eq:CoItot}.
\subsection{Quasi-particles' Coulomb interaction}
\label{subsec:CI}
The first contribution to the collision integral that we consider is that coming from the Coulomb interaction of the quasi-particles. We are interested in the experimental regime of sufficiently clean BLG \cite{Nam2017} in which the transport properties are dominated by quasi-particle interactions. In this section, we formulate the quasi-particle interactions of the form \eqref{eq:Coulomb} via the screened Coulomb potential $V_C(\mathbf{q})$ that was derived previously in section \ref{sec:Coul}. To derive the contribution $I^{(1)}_{\lambda,\textrm{int}}$, we generalize the Kadanoff-Baym equations \cite{Kadanoff} to BLG. We again only consider the diagonal component of the Green's functions \eqref{eq:gdiag} and calculate the collision integral contribution due to the interaction \eqref{eq:Coulomb}. The technical details of the derivation are left for Appendix \ref{sec:BoltzA}; in this subsection we only quote the main result. The collision integral due to interactions for each band index $\lambda$ is then given by
\begin{widetext}
\begin{multline}
\label{eq:CoI}
I_{\lambda,\textrm{int}}[\{f_{\lambda_i}(\mk_i)\}](\mathbf{p})=-(2\pi)\sum_{\lambda_1\lambda_2\lambda_3}\int \frac{d^2 \mk_1}{(2\pi)^2}\frac{d^2 \mq}{(2\pi)^2}\delta(\epsilon_\lambda(\mathbf{p})+\epsilon_{\lambda_1}(\mk_1)-\epsilon_{\lambda_2}(\mathbf{p}+\mq)-\epsilon_{\lambda_3}(\mk_1-\mq))\\
\left[N_f|T_{\lambda\lambda_1\lambda_3\lambda_2}(\mathbf{p},\mk_1,\mq)|^2-T_{\lambda\lambda_1\lambda_3\lambda_2}(\mathbf{p},\mk_1,\mq)T^*_{\lambda\lambda_1\lambda_2\lambda_3}(\mathbf{p},\mk_1,\mk_1-\mathbf{p}-\mq)\right]\\
\Big[[1-f_\lambda(\mathbf{p})][1-f_{\lambda_1}(\mk_1)]f_{\lambda_2}(\mathbf{p}+\mq)f_{\lambda_3}(\mk_1-\mq)-f_\lambda(\mathbf{p})f_{\lambda_1}(\mk_1)[1-f_{\lambda_2}(\mathbf{p}+\mq)][1-f_{\lambda_3}(\mk_1-\mq)]\Big],
\end{multline}
\end{widetext}
where we follow \cite{Fritz2008} and define the form factor
\begin{equation}
M_{\lambda\lambda'}(\mk,\mk')=\frac{1}{2}\left(1+\lambda\lambda'e^{i(2\theta_{\mk'}-2\theta_{\mk})}\right),
\end{equation}
as well as the channel dependent scattering matrix
\begin{equation}
T_{\lambda_1\lambda_2\lambda_3\lambda_4}(\mk,\mk',\mq)=V(-\mq)M_{\lambda_1\lambda_4}(\mk+\mq,\mk)M_{\lambda_2\lambda_3}(\mk'-\mq,\mk').
\end{equation}
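The form factor has a simple geometric meaning: $|M_{\lambda\lambda'}|^2=\frac{1}{2}(1+\lambda\lambda'\cos 2\Delta\theta)$, i.e.\ $\cos^2\Delta\theta$ for intraband and $\sin^2\Delta\theta$ for interband scattering, reflecting the winding number two of the BLG spinors; in particular, intraband backscattering is not suppressed. A quick numerical check:

```python
import numpy as np

def M(lam, lamp, th_kp, th_k):
    """BLG spinor-overlap form factor M_{lambda lambda'} as a function of the angles."""
    return 0.5 * (1 + lam * lamp * np.exp(2j * (th_kp - th_k)))

d = np.linspace(0.0, 2 * np.pi, 25)            # angle difference theta_k' - theta_k
assert np.allclose(np.abs(M(+1, +1, d, 0.0)) ** 2, np.cos(d) ** 2)  # intraband: cos^2
assert np.allclose(np.abs(M(+1, -1, d, 0.0)) ** 2, np.sin(d) ** 2)  # interband: sin^2
print("intraband backscattering unsuppressed, interband backscattering forbidden")
```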
The collision integral vanishes when we substitute the Fermi distribution \eqref{eq:f_0}. The collision integral is linearized by inserting the perturbed distribution \eqref{eq:f_pert}; the linearized collision integral is then given by
\begin{widetext}
\begin{multline}
I_{\lambda,\textrm{int}}^{(1)}[\{h_{\lambda_i} (\vec{k}_i)\}](\vec{p})=-(2\pi)\sum_{\lambda_1\lambda_2\lambda_3}\int \frac{\diff^2\vec{k}}{(2\pi)^2} \frac{\diff^2\vec{q}}{(2\pi)^2}\delta(\epsilon_{\lambda}(\vec{p})+\epsilon_{\lambda_1}(\vec{k})-\epsilon_{\lambda_2}(\vec{p+q})-\epsilon_{\lambda_3}(\vec{k-q}))\\
\times\bigg[N_f|T_{\lambda\lambda_1\lambda_3\lambda_2}(\vec{p},\vec{k},\vec{q})|^2-T_{\lambda\lambda_1\lambda_3\lambda_2}(\vec{p},\vec{k},\vec{q})T_{\lambda\lambda_1\lambda_2\lambda_3}^*(\vec{p},\vec{k},\vec{k-p-q})\bigg]\\
\times \bigg[[1-f_\lambda^0(\vec{p})][1-f_{\lambda_1}^0(\vec{k})]f_{\lambda_2}^0(\vec{p+q})f_{\lambda_3}^0(\vec{k-q})\bigg]\bigg[-h_{\lambda}(\vec{p})-h_{\lambda_1}(\vec{k})+h_{\lambda_2}(\vec{p+q})+h_{\lambda_3}(\vec{k-q})\bigg],
\label{eq:lin_coll_int}
\end{multline}
\end{widetext}
where we defined $h_\lambda(\mk)$ in equation \eqref{eq:f_pert}. The collision integral \eqref{eq:lin_coll_int} is similar to that of monolayer graphene in Ref.~\cite{Fritz2008}. However, due to the difference in the quasi-particle dispersion relation, which is quadratic for BLG and linear for monolayer graphene, the allowed scattering channels differ qualitatively. In the case of BLG, we have to consider the scattering channel in which one quasi-particle decays into two quasi-particles and one hole. This scattering channel is kinematically forbidden in monolayer graphene by momentum and energy conservation, and this contribution was missed in a previous publication on the kinetic theory of BLG \cite{Lux2013}. However, due to kinematic restrictions, the phase space for this scattering process is small, and this channel therefore does not contribute significantly to the collision integral. We have checked this statement numerically.
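The kinematic restriction can be made explicit: for quadratic bands, energy conservation in the channel $(+,-)\to(+,+)$, $\epsilon_+(\mathbf{p})+\epsilon_-(\mk_1)=\epsilon_+(\mathbf{p}+\mq)+\epsilon_+(\mk_1-\mq)$, confines $\mq$ to the circle $|\mq+(\mathbf{p}-\mk_1)/2|^2=|\mathbf{p}-\mk_1|^2/4-k_1^2$, which is non-empty only when $|\mathbf{p}-\mk_1|>2k_1$. A sketch verifying this (the example momenta are assumed values):

```python
import numpy as np

def radius_sq(p, k1):
    """Energy conservation eps_+(p) + eps_-(k1) = eps_+(p+q) + eps_+(k1-q) for quadratic
    bands confines q to a circle |q + (p - k1)/2|^2 = |p - k1|^2 / 4 - |k1|^2."""
    d = p - k1
    return d @ d / 4.0 - k1 @ k1

p  = np.array([1.0, 0.0])                      # assumed example momenta
k1 = np.array([0.2, 0.0])
r2 = radius_sq(p, k1)                          # > 0: channel open since |p - k1| > 2 |k1|

c = -(p - k1) / 2.0                            # centre of the circle of allowed q
for th in np.linspace(0.0, 2 * np.pi, 13):
    q = c + np.sqrt(r2) * np.array([np.cos(th), np.sin(th)])
    lhs = p @ p - k1 @ k1                      # 2m (eps_+(p) + eps_-(k1))
    rhs = (p + q) @ (p + q) + (k1 - q) @ (k1 - q)
    assert abs(lhs - rhs) < 1e-12              # energy conserved on the whole circle

print(r2, radius_sq(p, np.array([0.5, 0.0])))  # 0.12 and -0.1875: channel closes
```

For a linear dispersion the analogous condition can only be met for collinear momenta, a set of measure zero, which is why the channel is absent in monolayer graphene.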
\subsection{Contribution from disorder}
\label{subsec:DI}
Due to the Galilean invariance of our system in the absence of disorder, the collision integral is unchanged under a Galilean boost. However, under a uniform boost of all particles by
$\vec{u}$, the current density transforms as $ \vec{J}\to\vec{J}+en\vec{u}$. As long as the charge density $n\neq0$ (i.e.\ $\mu\neq0$), boosting frames thus changes the current density, and the conductivity is ill-defined in the absence of a momentum-relaxing mechanism. Including one or several momentum-relaxing scattering channels is therefore crucial for calculating the transport coefficients away from $\beta\mu=0$.
One such momentum relaxing process is the scattering of electrons off impurities in the sample. For this calculation, we put our system in a box of side length $L$ with periodic boundary conditions. We follow \cite{Fritz2008} and consider a disorder Hamiltonian
\begin{equation}
H_{\textrm{dis}}=\sum_f\int d^2\vec{x}\ V_{\textrm{dis}}(\vec{x})\Psi_f^\dagger(\vec{x})\Psi_f(\vec{x}),
\label{eq:H_dis}
\end{equation}
where $V_{\textrm{dis}}$ is the interaction potential between an electron and the impurities, which we take to be charges $Ze$ located at random positions $\vec{x}_i$ and having number density $n_{\textrm{imp}}=N_{\textrm{imp}}/L^2$. We use the screened Coulomb interactions to obtain
\begin{equation}
V_{\textrm{dis}}(\vec{x})=\sum_{i=1}^{N_{\textrm{imp}}}\frac{Ze^2}{\epsilon_r|\vec{x}-\vec{x}_i|}e^{-q_{TF}|\vec{x}-\vec{x}_i|}.
\end{equation}
From the interaction \eqref{eq:H_dis}, we can calculate the scattering rate of quasi-particles off the disorder. We then obtain the contribution to the collision integral from disorder up to linear order in the perturbation
\begin{equation}
\label{eq:CoD}
I^{(1)}_{\lambda,\textrm{dis}}[h_{\lambda_i} (\vec{k}_i)](\vec{p})=\tau_\textrm{imp}^{-1} f_\lambda^0(p)[1-f_\lambda^0(p)]h_\lambda(\vec{p}),
\end{equation}
where we define a short hand notation for the impurity scattering rate
\begin{equation}
\label{eq:tau_imp}
\tau_\textrm{imp}^{-1}=\frac{1}{2}mn_{\textrm{imp}}\bigg(\frac{2\pi Ze^2}{\epsilon_rq_{TF}}\bigg)^2.
\end{equation}
The corresponding dimensionless parameter is $\alpha_\textrm{imp}\equiv \beta\tau_\textrm{imp}^{-1}=\frac{1}{2}\left(\frac{8\pi^2Z}{N_f\epsilon_r}\right)^2\frac{\beta n_{\textrm{imp}}}{m}$.
The detailed derivation of \eqref{eq:CoD} is left for Appendix \ref{sec:CoD}.
\subsection{Effect of finite system size}
\label{subsec:BI}
In very clean samples of bilayer graphene, the scattering length due to impurities is expected to be longer than the system size $L$, which is currently limited in suspended graphene samples \cite{Ki2013}. In this case, in order to have a well-defined conductivity, we need to include the effect of the finite size of the system. Electrons scatter off the boundary, which effectively acts as an additional scattering time. We take the scattering time due to collisions with the boundary to be $ \tau(p)=\frac{L}{v}=\frac{Lm}{p}$, where $L$ is the size of the sample up to a factor depending on the geometry of the BLG sample. Here we make the simplifying assumption that the scattering does not depend on the direction of the momentum; the neglected angular dependence of the boundary scattering is likely to contribute only a geometric factor to the scattering time. The collision integral is then
\begin{equation}
\label{eq:CoS}
I^{(1)}_{\lambda,\textrm{size}}[h_{\lambda_i} (\vec{k}_i)](\vec{p})=\frac{p}{mL}f_\lambda^0(p)[1-f_\lambda^0(p)]h_\lambda(\vec{p})
\end{equation}
which is just the Bhatnagar-Gross-Krook (BGK) collision operator \cite{BGK} with $\tau$ given by $\tau(p)$. The corresponding dimensionless parameter is $\alpha_L=\frac{\sqrt{\beta}}{\sqrt{m}L}$.
\subsection{Phonon scattering}
We should also consider the effect of the electrons scattering off phonons. The maximum energy of an acoustic phonon is $\varepsilon_{\textrm{max}}=2c\sqrt{2m\textrm{max}(k_BT,\mu)}$, where $c$ is the speed of sound in graphene. In the experimental setting, we are at high temperatures compared to the Bloch-Gr\"uneisen temperature
\begin{equation}
T_{BG}=\frac{2c}{v_F}\frac{\sqrt{\gamma_1|\mu|}}{k_B},
\end{equation}
and additionally we have $T\gg 2\gamma_1/k_B(c/v)^2$. Thus we are in the high-temperature regime $k_BT\gg\varepsilon_{\textrm{max}}$, where we can treat the phonons as introducing another scattering time \cite{Viljas2010}\footnote{We ignore the scattering channel from conduction electrons to valence electrons due to the emission and absorption of phonons, because of the suppression in the scattering matrix \cite{Viljas2010}.}
\begin{equation}
\tau_\textrm{phonon}^{-1}=\frac{D^2mk_BT}{2\rho\hbar^3 c^2},
\label{eq:tau_phonon}
\end{equation}
where $D$ is the deformation potential and $\rho$ is the mass density. Then the collision integral is
\begin{equation}
I^{(1)}_{\lambda,\textrm{phonon}}[h_{\lambda_i} (\vec{k}_i)](\vec{p})=\tau_\textrm{phonon}^{-1}f_\lambda^0(p)[1-f_\lambda^0(p)]h_\lambda(\vec{p}).
\label{eq:I_phonon}
\end{equation}
The corresponding dimensionless parameter is $\alpha_{\textrm{ph}}=\beta \tau_\textrm{phonon}^{-1}$. It is crucial to note that whereas $\alpha_{\textrm{imp}}=\alpha_{\textrm{imp}}(T)$ and $\alpha_{\textrm{L}}=\alpha_{\textrm{L}}(T)$, $\alpha_{\textrm{ph}}$ does not depend on temperature.
\section{Symmetries}
\label{sec:sym}
\subsection{Spatial symmetries}
The electrical conductivity is rotationally symmetric. The only rotationally invariant rank-two tensors in 2D are $\delta_{ij}$ and $\epsilon_{ij}$, so any rotationally invariant tensor $\sigma_{ij}$ can be written as
\begin{equation}
\sigma_{ij}=\sigma_{xx}\delta_{ij}+\sigma_{xy}\epsilon_{ij}.
\end{equation}
In the absence of a magnetic field, we have an additional symmetry, namely 2D parity $y\to-y$, which implies
\begin{equation}
\sigma_{xy}=\sigma_{yx}=0.
\end{equation}
With a magnetic field, 2D parity implies
\begin{align}
\sigma_{xx}(B)&=\sigma_{xx}(-B),\\
\sigma_{xy}(B)&=-\sigma_{xy}(-B).
\end{align}
The thermal conductivity and thermopower satisfy the same relations.
\subsection{Particle-hole symmetry}
Under the particle-hole transformation, we have $\lambda\to-\lambda$, $\mu\to-\mu$ and $B\to-B$. First consider the electrical conductivity. Due to the particle-hole symmetry, we have
\begin{align}
\sigma_{xx}(\beta\mu,B)&=\sigma_{xx}(-\beta\mu,-B),\\ \sigma_{yx}(\beta\mu,B)&=\sigma_{yx}(-\beta\mu,-B).
\end{align}
Now consider the viscosity which we only calculate for $B=0$. Particle-hole symmetry implies that we have
\begin{equation}
\eta_{\lambda,\lambda'}(\beta\mu)=\eta_{-\lambda,-\lambda'}(-\beta\mu).
\end{equation}
These symmetries follow directly from the form of the collision integral \eqref{eq:lin_coll_int}.
\section{Two-fluid model}
\label{sec:2fm}
We introduce the two-fluid model, which reproduces the salient features of our numerical results. Motivated by comparison with experiment \cite{Wagner} we choose to only include the phonon scattering as a momentum relaxing mechanism. We multiply the kinetic equation by $\lambda \vec{p}/m$ and integrate over momentum space in order to derive the evolution of the mean fluid velocities as
\begin{equation}
m\partial_t\vec{u}^e=-\frac{m}{\tau_{eh}}(\vec{u}^e-\vec{u}^h)-\frac{m\vec{u}^e}{\tau_{se}}+e(\vec{E}+\vec{u}^e\times\vec{B})-\Lambda^e k_B\nabla T
\label{eq:fluid1}
\end{equation}
\begin{equation}
m\partial_t\vec{u}^h=\frac{m}{\tau_{he}}(\vec{u}^e-\vec{u}^h)-\frac{m\vec{u}^h}{\tau_{sh}}-e(\vec{E}+\vec{u}^h\times\vec{B})-\Lambda^h k_B\nabla T,
\label{eq:fluid2}
\end{equation}
where we defined the electron and hole velocities as
\begin{equation}
\mathbf{u}^e=\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2}\frac{\mathbf{p}}{m} f_+(\mathbf{p})}{\int \frac{d^2 \vec{p}}{(2\pi)^2} f^0_+(\mathbf{p})},\quad \mathbf{u}^h=-\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2}\frac{\mathbf{p}}{m} (1-f_-(\mathbf{p}))}{\int \frac{d^2 \vec{p}}{(2\pi)^2} (1-f^0_-(\mathbf{p}))}.
\end{equation}
The coefficients $\Lambda^{e,h}$ account for the fact that the average entropy per particle is $\Lambda k_B$:
\begin{equation}
\label{eq:Lambdae}
k_BT\Lambda^e=\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2} p^2(\epsilon_+(p)-\mu)f^0_+(\mathbf{p})[1-f^0_+(\mathbf{p})]}{\int \frac{d^2 \vec{p}}{(2\pi)^2} f^0_+(\mathbf{p})}
\end{equation}
\begin{equation}
\label{eq:Lambdah}
k_BT\Lambda^h=\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2} p^2(-\epsilon_-(p)+\mu)f^0_-(\mathbf{p})[1-f^0_-(\mathbf{p})]}{\int \frac{d^2 \vec{p}}{(2\pi)^2}(1-f^0_-(\mathbf{p}))}
\end{equation}
The definitions \eqref{eq:Lambdae} and \eqref{eq:Lambdah} follow from the $\nabla T$ term in the QBE when \eqref{eq:fluid1} and \eqref{eq:fluid2} are derived. The Coulomb drag term can be derived explicitly from the collision integral
\begin{multline}
\int \frac{d^2 \vec{p}}{(2\pi)^2} \frac{\lambda\vec{p}}{m}I_{\lambda,\textrm{int}}^{(1)}\bigg[h_{\lambda_i} (\vec{k}_i)=\lambda_i\beta \vec{k}_i\cdot \bigg(\frac{\vec{u}^e-\vec{u}^h}{2}\bigg)\bigg](\vec{p})\\=
\left\{\begin{array}{ll}-\frac{mn^e}{\tau_{eh}}(\vec{u}^e-\vec{u}^h) & \lambda=+ \\
\frac{mn^h}{\tau_{he}}(\vec{u}^e-\vec{u}^h) & \lambda=-
\end{array} \right.
\label{eq:Coul_drag}
\end{multline}
This allows us to calculate $\tau_{eh}$ and $\tau_{he}$. We perform the calculation at charge neutrality and then use \eqref{eq:away_from_CN} to extrapolate. $\tau_{se}$ is the momentum-relaxing scattering time for electrons and $\tau_{sh}$ is the corresponding time for holes ($s$ stands for ``scattering''). They are given by
\begin{equation}
\tau_{se}^{-1}=\tau_{\textrm{phonon}}^{-1}+\tau_{\textrm{imp}}^{-1}+\tau_{Le}^{-1},
\end{equation}
\begin{equation}
\tau_{sh}^{-1}=\tau_{\textrm{phonon}}^{-1}+\tau_{\textrm{imp}}^{-1}+\tau_{Lh}^{-1},
\end{equation}
where $\tau_{\textrm{phonon}}$ and $\tau_{\textrm{imp}}$ are given by \eqref{eq:tau_phonon} and \eqref{eq:tau_imp} respectively, and the scattering times off the boundary are
\begin{equation}
\label{eq:tauLE}
\tau_{Le}^{-1}=\frac{\beta}{2m^2L}\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2} p^3 f^0_+(\mathbf{p})[1-f^0_+(\mathbf{p})]}{\int \frac{d^2 \vec{p}}{(2\pi)^2} f^0_+(\mathbf{p})},
\end{equation}
\begin{equation}
\label{eq:tauLH}
\tau_{Lh}^{-1}=\frac{\beta}{2m^2L}\frac{\int \frac{d^2 \vec{p}}{(2\pi)^2} p^3 f^0_-(\mathbf{p})[1-f^0_-(\mathbf{p})]}{\int \frac{d^2 \vec{p}}{(2\pi)^2} (1-f^0_-(\mathbf{p}))}.
\end{equation}
We consider the steady state $\partial_t\vec{u}^e=\partial_t\vec{u}^h=0$ and calculate the electric current and energy current
\begin{equation}
\vec{J}=e(n^e\vec{u}^e-n^h\vec{u}^h)
\end{equation}
\begin{equation}
\vec{J}^E= k_BT(\Lambda^en^e\vec{u}^e+\Lambda^hn^h\vec{u}^h)
\end{equation}
where the number densities calculated from the Fermi distribution are
\begin{equation}
n^e=\frac{N_fm}{2\pi\beta}\ln(1+e^{\beta\mu}), \qquad n^h=\frac{N_fm}{2\pi\beta}\ln(1+e^{-\beta\mu})
\end{equation}
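These closed forms follow from a single radial integral over the Fermi function. A quick numerical cross-check (units $\hbar=k_B=1$, illustrative parameter values):

```python
import numpy as np
from scipy.integrate import quad

Nf, m, beta, mu = 4, 1.0, 1.0, 0.5   # illustrative values

def fermi(x):
    # 1/(e^x + 1), written in an overflow-safe form
    return 0.5 * (1.0 - np.tanh(0.5 * x))

# n^e from the integral of f^0_+, with eps_+(p) = +p^2/(2m)
n_e_num = Nf * quad(lambda p: p / (2 * np.pi)
                    * fermi(beta * (p**2 / (2 * m) - mu)), 0.0, 60.0)[0]
n_e_cf = Nf * m / (2 * np.pi * beta) * np.log(1.0 + np.exp(beta * mu))

# n^h from the integral of 1 - f^0_-, with eps_-(p) = -p^2/(2m)
n_h_num = Nf * quad(lambda p: p / (2 * np.pi)
                    * (1.0 - fermi(beta * (-p**2 / (2 * m) - mu))), 0.0, 60.0)[0]
n_h_cf = Nf * m / (2 * np.pi * beta) * np.log(1.0 + np.exp(-beta * mu))
```

Substituting $u=\beta p^2/2m$ reduces both integrals to $\int_0^\infty du\,(e^{u\mp\beta\mu}+1)^{-1}=\ln(1+e^{\pm\beta\mu})$, which is what the check confirms.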
From this, we can derive the thermoelectric coefficients in the absence of a magnetic field
\begin{equation}
\sigma_{xx}=\frac{e^2(n^e\tau_{sh}^{-1}+n^h\tau_{se}^{-1}+(\tau_{he}^{-1}-\tau_{eh}^{-1})(n^e-n^h))}{m(\tau_{eh}^{-1}\tau_{sh}^{-1}+\tau_{he}^{-1}\tau_{se}^{-1}+\tau_{se}^{-1}\tau_{sh}^{-1})}
\end{equation}
\begin{equation}
\Theta_{xx}=\frac{ek_B\bigg(n^e\tilde\Lambda^e-n^h\tilde\Lambda^h\bigg)}{m(\tau_{eh}^{-1}\tau_{sh}^{-1}+\tau_{he}^{-1}\tau_{se}^{-1}+\tau_{se}^{-1}\tau_{sh}^{-1})}
\end{equation}
\begin{equation}
K_{xx}=\frac{k_B^2T\bigg(\Lambda^en^e\tilde\Lambda^e+\Lambda^hn^h\tilde\Lambda^h\bigg)}{m(\tau_{eh}^{-1}\tau_{sh}^{-1}+\tau_{he}^{-1}\tau_{se}^{-1}+\tau_{se}^{-1}\tau_{sh}^{-1})}
\end{equation}
where
\begin{equation}
\tilde\Lambda^e=\Lambda^e(\tau_{he}^{-1}+\tau_{sh}^{-1})+\Lambda^h\tau_{eh}^{-1}
\end{equation}
\begin{equation}
\tilde\Lambda^h=\Lambda^h(\tau_{eh}^{-1}+\tau_{se}^{-1})+\Lambda^e\tau_{he}^{-1}
\end{equation}
For momentum conservation we require
\begin{equation}
\frac{n^e}{\tau_{eh}}=\frac{n^h}{\tau_{he}}
\label{eq:mmtm_cons}
\end{equation}
We verify explicitly that the Onsager relations for the thermoelectric coefficients are satisfied if equation \eqref{eq:mmtm_cons} is satisfied.
Thus we can choose
\begin{equation}
\tau_{eh}=\tau_0\frac{n^e+n^h}{n^h}, \qquad \tau_{he}=\tau_0\frac{n^e+n^h}{n^e}.
\label{eq:away_from_CN}
\end{equation}
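The drag terms in the two fluid equations transfer momentum between the fluids at rates proportional to $n^e/\tau_{eh}$ (loss by electrons) and $n^h/\tau_{he}$ (gain by holes), and the ansatz balances these for any chemical potential, not only at charge neutrality. A minimal sketch with illustrative units ($\hbar=k_B=1$, $\tau_0=1$):

```python
import numpy as np

Nf, m, beta, tau0 = 4, 1.0, 1.0, 1.0   # illustrative units

def densities(mu):
    n_e = Nf * m / (2 * np.pi * beta) * np.log(1.0 + np.exp(beta * mu))
    n_h = Nf * m / (2 * np.pi * beta) * np.log(1.0 + np.exp(-beta * mu))
    return n_e, n_h

# Momentum conservation: electron loss rate n^e/tau_eh must equal the
# hole gain rate n^h/tau_he at every chemical potential.
for mu in (-2.0, 0.0, 0.7, 3.0):
    n_e, n_h = densities(mu)
    tau_eh = tau0 * (n_e + n_h) / n_h
    tau_he = tau0 * (n_e + n_h) / n_e
    assert np.isclose(n_e / tau_eh, n_h / tau_he)
```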
This ansatz agrees with the full numerical result obtained from \eqref{eq:Coul_drag} to within 10\% in the entire range of $\beta\mu$ and is therefore a satisfactory approximation.
By evaluating the collision integral in \eqref{eq:Coul_drag} numerically, we find
\begin{equation}
\alpha_0\equiv \beta\tau_0^{-1}=0.15
\end{equation}
We define the dimensionless electrical conductivity as $\sigma_{ij}=\frac{N_fe^2}{2\hbar}\tilde\sigma_{ij}$. With a magnetic field and at CN, we calculate from the two fluid model that the Hall conductivity at small fields behaves like
\begin{equation}
\lim_{B\to 0}\frac{\tilde \sigma_{xy}}{\beta\omega_c}=\frac{\beta}{m}\frac{(n^e-n^h)[(n^e+n^h)^2(\alpha_0+ \alpha_{s})-4\alpha_0^2n^en^h]}{\alpha_{s}^2(\alpha_0+\alpha_{s})^2(n^e+n^h)^2}
\end{equation}
where $\alpha_{s}=\alpha_{\textrm{imp}}+\alpha_{\textrm{ph}}$ and we have neglected the boundary scattering. We plot this quantity in Fig. \ref{fig:Sigma_xx_B} and show that the result from the two-fluid model agrees perfectly with the numerical result.
We define the dimensionless electrical conductivity via $\sigma_{ij}=\frac{N_fe^2}{2\hbar}\tilde\sigma_{ij}$. In the presence of a magnetic field, we calculate from the two-fluid model that the Hall conductivity at small fields behaves as
\label{sec:num_res}
Armed with the full formalism for the QBE, we are in a position to numerically calculate the transport properties. In our companion paper \cite{Wagner} we plot the three thermoelectric coefficients as a function of $\beta\mu$. In this section we therefore focus on the behaviour of the transport coefficients at CN and in a magnetic field and we discuss the viscosity.
\subsection{Transport at CN}
In Fig. \ref{fig:sigma_T} we show how the electrical conductivity at charge neutrality depends on temperature. In order to obtain a non-trivial temperature dependence we need to go beyond the Coulomb and phonon scattering. We assume that collisions off impurities can be neglected, as claimed in the experimental work \cite{Nam2017}. With Coulomb interactions and phonons alone, the conductivity at CN would be independent of temperature, since at CN, the conductivity would only depend on the dimensionless parameter $\alpha_\textrm{ph}$, which is temperature-independent. So the temperature-dependence is entirely due to the finite size scattering, which comes with the dimensionless number $\alpha_L$, which does depend on temperature. This figure shows qualitative agreement with the behaviour seen in Fig. 4 of \cite{Nam2017}. The thermal conductivity at charge neutrality shows the same type of behaviour as the electrical conductivity and for the same reasons.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{sigma_T.pdf}
\caption{Electrical conductivity at CN $\tilde\sigma_{xx}(\mu=0)$ plotted as a function of temperature for the canonical value $\alpha_{\textrm{ph}}=0.05$. We have also chosen $\alpha_L=\frac{\sqrt[]{\beta}}{\sqrt[]{m}L}=0.03$ (at $T=25$K) using the scale $L\sim 3\mu$m set by the sample size in \cite{Nam2017}.}
In Fig. \ref{fig:sigma_T} we show how the electrical conductivity at charge neutrality depends on temperature. In order to obtain a non-trivial temperature dependence we need to go beyond Coulomb and phonon scattering. We assume that collisions off impurities can be neglected, as claimed in the experimental work \cite{Nam2017}. With Coulomb interactions and phonons alone, the conductivity at CN would be independent of temperature, since at CN it would depend only on the dimensionless parameter $\alpha_\textrm{ph}$, which is temperature-independent. The temperature dependence is therefore entirely due to finite-size scattering, which enters through the dimensionless number $\alpha_L$, which does depend on temperature. This figure shows qualitative agreement with the behaviour seen in Fig. 4 of \cite{Nam2017}. The thermal conductivity at charge neutrality shows the same type of behaviour as the electrical conductivity, and for the same reasons.
\end{figure}
\begin{figure}
\centering
\caption{Electrical conductivity at CN $\tilde\sigma_{xx}(\mu=0)$ plotted as a function of temperature for the canonical value $\alpha_{\textrm{ph}}=0.05$. We have also chosen $\alpha_L=\frac{\sqrt{\beta}}{\sqrt{m}L}=0.03$ (at $T=25$\,K) using the scale $L\sim 3\,\mu$m set by the sample size in \cite{Nam2017}.}
\caption{Plot of the slope of the off-diagonal component of the electrical conductivity $\sigma_{xy}/B$ from the QBE (solid) and the two fluid model (dashed) as a function of $\beta\mu$. This plot uses the experimentally motivated value $\alpha_{\textrm{ph}}=0.05$. The two-fluid model agrees perfectly with the full QBE calculation.}
\label{fig:Sigma_xx_B}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{eta.pdf}
\caption{Plot of the dimensionless viscosity $\tilde \eta_{\lambda,\lambda'}=(N_fm/\beta)^{-1}\eta_{\lambda,\lambda'}$ plotted for $\alpha_{\textrm{ph}}=0.05$. We plot $\eta_{++}$ (solid line), $\eta_{--}$ (dotted), $\eta_{+-}$ (dashed), $\eta_{-+}$ (dashed).}
\label{fig:viscosity}
\end{figure}
\subsection{Lorenz number}
\begin{figure}
\caption{Plot of the dimensionless viscosity $\tilde \eta_{\lambda,\lambda'}=(N_fm/\beta)^{-1}\eta_{\lambda,\lambda'}$ for $\alpha_{\textrm{ph}}=0.05$. We plot $\eta_{++}$ (solid), $\eta_{--}$ (dotted), and $\eta_{+-}$ and $\eta_{-+}$ (dashed).}
\caption{Plot of the Lorenz number for $\alpha_\textrm{s}=\beta\tau_{s}^{-1}=0.05$. We compare the QBE results (solid) with the two-fluid model results (dashed). }
\end{figure}
From the Lorenz number $\mathcal{L}=\kappa_{xx}/\sigma_{xx}T$ and the Hall Lorenz number $\mathcal{L}_H=\kappa_{xy}/\sigma_{xy}T$ we deduce a further signature of the two-fluid model. The Lorenz number is enhanced relative to the Wiedemann-Franz (WF) law which predicts $\mathcal{L}=\pi^2/3(k_B/e)^2$. The violation of the WF law has been reported in a recent theoretical work \cite{Vignale}. On the other hand, the violation of the WF law is much less severe for the Hall Lorenz number $\mathcal{L}_H$. Both these observations can be explained in the following simple picture. We find the Lorenz number at charge neutrality
\begin{equation}
\mathcal{L}\equiv \frac{\kappa_{xx}}{\sigma_{xx}T}=\Lambda^2\bigg(\frac{k_B}{e}\bigg)^2\bigg(1+\frac{\tau_{eh}^{-1}+\tau_{he}^{-1}}{\tau_{s}^{-1}}\bigg)
\label{eq:Lorenz_2fluid}
\end{equation}
where $\Lambda^e=\Lambda^h\equiv\Lambda$ at CN and $\tau_{s}^{-1}=\tau_{\textrm{imp}}^{-1}+\tau_{\textrm{phonon}}^{-1}$. We neglect scattering off the boundary in this section. From Drude theory, $\kappa_{xx}\propto \tau_{\kappa}$, where $\tau_{\kappa}^{-1}$ is the scattering rate due to all collisions that relax the energy current. At CN for an applied thermal gradient, $\vec{u}^e=\vec{u}^h$ so there is no Coulomb drag between the electrons and holes and hence only the momentum relaxing scattering limits the thermal conductivity $\tau_\kappa^{-1}=\tau_{s}^{-1}.$ On the other hand the Coulomb drag is important for the electrical conductivity, hence $\sigma_{xx}\propto \tau_\sigma$ where $\tau_\sigma^{-1}=\tau_{eh}^{-1}+\tau_{he}^{-1}+\tau_{s}^{-1}$, where we have added the scattering rates according to Matthiessen's rule \footnote{Matthiessen's rule is only valid at CN, where the electrons and holes have equal densities and scattering times.}. This immediately yields equation \eqref{eq:Lorenz_2fluid}.
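The algebra behind this result can be verified directly from the general two-fluid expressions for $\sigma_{xx}$ and $K_{xx}$ evaluated at charge neutrality. A minimal numerical sketch (dimensionless rates $\beta\tau^{-1}$, illustrative values $\alpha_0=0.15$ and $\alpha_s=0.05$, units $k_B=e=1$; the value $\Lambda\approx 2.37$ is illustrative):

```python
import numpy as np

# At charge neutrality: n^e = n^h = n, Lambda^e = Lambda^h = Lam,
# tau_se = tau_sh = tau_s, and tau_eh = tau_he = 2*tau_0.
n, Lam, T, m = 1.0, 2.37, 1.0, 1.0          # illustrative values
inv_eh = inv_he = 0.15 / 2.0                # beta/tau_eh with alpha_0 = 0.15
inv_s = 0.05                                # beta/tau_s

denom = m * (inv_eh * inv_s + inv_he * inv_s + inv_s * inv_s)
sigma_xx = (n * inv_s + n * inv_s) / denom               # general formula at CN
Lt_e = Lam * (inv_he + inv_s) + Lam * inv_eh             # tilde Lambda^e
Lt_h = Lam * (inv_eh + inv_s) + Lam * inv_he             # tilde Lambda^h
K_xx = T * (Lam * n * Lt_e + Lam * n * Lt_h) / denom

L_direct = K_xx / (sigma_xx * T)                         # Lorenz number
L_formula = Lam**2 * (1.0 + (inv_eh + inv_he) / inv_s)   # two-fluid prediction
```

With these illustrative numbers the enhancement factor is $1+(0.15/0.05)=4$, of the same order as the $\mathcal{L}\approx25(k_B/e)^2$ found in the numerics.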
The Hall Lorenz number on the other hand is
\begin{equation}
\mathcal{L}_H\equiv \frac{\kappa_{xy}}{\sigma_{xy}T}=\bigg(\frac{k_B}{e}\bigg)^2\Lambda^2
\label{eq:Lorenz_H_2fluid}
\end{equation}
We can again derive this result using simple arguments. For an applied thermal gradient, electrons and holes move in the same direction, so the only friction along the direction of the gradient, $x$, is $\tau_{s}^{-1}$. Electrons and holes are deflected in opposite directions by the applied magnetic field, so the friction in the perpendicular $y$ direction is $\tau_{s}^{-1}+\tau_{eh}^{-1}$. This increased friction between the two fluids limits the value of $\kappa_{xy}$. For an applied electric field, the electrons and holes move in opposite directions, so the friction in the $x$ direction is $\tau_{s}^{-1}+\tau_{eh}^{-1}$. The magnetic field deflects them in the same direction, and the two fluids feel the reduced friction $\tau_{s}^{-1}$ between them in the $y$ direction. From these considerations, \eqref{eq:Lorenz_H_2fluid} follows.
Thus,
\begin{equation}
\frac{\mathcal{L}}{\mathcal{L}_H}=1+\frac{\tau_{eh}^{-1}+\tau_{he}^{-1}}{\tau_{s}^{-1}}\gg 1
\end{equation}
From the numerics we indeed find $\mathcal{L}\approx25(k_B/e)^2$ and $\mathcal{L}_H\approx6(k_B/e)^2$.
\subsection{Viscosity}
We calculate the shear viscosity as the response of the stress tensor when a shear flow is applied to either of the particle species.
The viscosity tensor $\eta_{\lambda\lambda'}$ is then defined via $T_\lambda^{12}=-\eta_{\lambda\lambda'}F_{\lambda'}$ and $\eta_{\lambda,\lambda'}=\frac{N_fm}{\beta}\tilde\eta_{\lambda,\lambda'}$. $\eta_{\lambda,\lambda'}$ is a measure of the friction between particle species $\lambda$ and $\lambda'$. In the numerical data of Fig. \ref{fig:viscosity} we see that at large $\beta\mu$, $\eta_{++}$ dominates, since electron-electron collisions are the dominant ones. Conversely, $\eta_{+-}$ decreases at large $\beta\mu$ since there are fewer holes present to exert a friction on the electrons.
We also note that $\eta_{+-}\ll\eta_{++}$. This can be understood from the kinematics of collisions. Energy and momentum conservation constrain the available phase space more for electron-hole collisions than for electron-electron collisions. In addition, the matrix elements for electron-hole collisions favour large momentum exchange, which is suppressed by the potential.
This justifies the two-fluid model: since the intra-fluid collisions of the electron and hole fluids dominate over the inter-fluid collisions, we can treat the two fluids as weakly interacting.
A possible probe of the viscosity is via the negative non-local resistance \cite{Bandurin1055,Levitov2016}. In order to make this measurement quantitative, the relation between viscosity and negative nonlocal resistance must be determined which requires a full solution of the fluid equations in the relevant geometry. Another method for measuring the viscosity of graphene has been proposed using a Corbino-disc device \cite{Tomadin2014}.
We note that the famous KSS result \cite{KSS} provides a lower bound for the ratio of the shear viscosity $\eta$ to the entropy density $s$ in a strongly interacting quantum fluid, $\eta/s\geq \hbar/(4\pi k_B)$. In our case the entropy density is $s=n^ek_B\Lambda^e+n^hk_B\Lambda^h=(2\pi/3)mk_B/\beta$ and so
\begin{equation}
\frac{4\pi k_B}{\hbar}\frac{\eta}{s}=24\tilde \eta\gg 1,
\end{equation}
since $\tilde\eta\gtrsim 0.5$ from Fig. \ref{fig:viscosity}. Since the bound is saturated for an infinitely strongly coupled conformal field theory, the fact that we are away from the bound is consistent with the previous arguments that we are in a weakly coupled regime and the semiclassical method is valid.
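The quoted entropy density and the factor of 24 can be checked against the closed-form densities. A short numerical sketch (units $\hbar=k_B=1$, and $\tilde\eta=0.5$ taken as an illustrative value from the figure):

```python
import numpy as np

Nf, m, beta = 4, 1.0, 1.0          # units hbar = k_B = 1

# Densities at charge neutrality and the entropy density quoted above
n = Nf * m / (2 * np.pi * beta) * np.log(2.0)   # n^e = n^h at CN
s = (2 * np.pi / 3) * m / beta                  # s = (2 pi / 3) m k_B / beta

# Entropy per particle in units of k_B: s = 2 n k_B Lambda at CN
Lam = s / (2.0 * n)                             # -> pi^2/(6 ln 2) ~ 2.37

# KSS ratio: with eta = (Nf m / beta) * eta_tilde one finds
# 4 pi (eta / s) = 6 Nf eta_tilde = 24 eta_tilde for Nf = 4
eta_tilde = 0.5
kss_ratio = 4 * np.pi * (Nf * m / beta) * eta_tilde / s
```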
\subsection{Detailed benchmarking}
In order to assess the usefulness of the two-fluid model, we now perform a detailed analysis of the agreement between the QBE and the two-fluid model over a large range of the parameter space of the problem. In the top rows of Figs.~\ref{fig:sigma_benchmarking} and \ref{fig:K_benchmarking} we check the agreement for phonon or impurity scattering, which is described by the dimensionless strength $\alpha_\textrm{ph}$ or $\alpha_\textrm{imp}$. These two cases are identical in both the QBE and the two-fluid model at fixed $\alpha$; they differ only in the temperature dependence of the dimensionless number $\alpha$. The bottom row shows the corresponding results for finite-size scattering, which is described by the dimensionless strength $\alpha_\textrm{L}$.
We see from Figs.~\ref{fig:sigma_benchmarking} and \ref{fig:K_benchmarking} that in general the agreement is very good for weak momentum-relaxing scattering, i.e., small values of $\alpha_\textrm{ph}$ or $\alpha_L$. In this limit, the Coulomb-mediated electron-electron collisions are dominant and the hydrodynamic description works well. For larger values the agreement gets worse, especially in the case of the thermal conductivity. This is due to the fact that our two-fluid model only includes the equation for the first moment of the QBE. To get the thermal conductivity accurately, one would have to include the second moment as well; however, this renders the two-fluid model too complicated to solve analytically, defeating the purpose of introducing it in the first place. We note, however, that there is no reason to trust either the QBE or the two-fluid model in the strongly coupled regime $\alpha\gtrsim 1$. We also note that, at least for the conductivity, the agreement is significantly better for phonon and impurity scattering than for boundary scattering. The reason is that the boundary scattering has a scattering time that depends on momentum, and in the two-fluid model we parametrize this by an average scattering time.
We note that even when the agreement with the two-fluid model fails, our QBE solution still satisfies the symmetries listed in section \ref{sec:sym}, since these are exact symmetries of the QBE. We have checked our numerical solution to find that the symmetries are indeed obeyed to an accuracy of $10^{-7}$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{sigma_benchmarking.pdf}
\caption{Plots of the dimensionless conductivity defined via $\sigma_{ij}=\frac{N_fe^2}{2\hbar}\tilde\sigma\delta_{ij}$ for various values of $\alpha_L$ and $\alpha_\textrm{ph}$. We compare the QBE results (solid) with the two-fluid model results (dashed). We see that in the case of phonon or impurity scattering (top row), the results for the electrical conductivity are good for all values of the phonon coupling strength. On the other hand, for scattering off the boundary of the sample (bottom row), the agreement gets worse as the scattering off the boundary increases.}
\label{fig:sigma_benchmarking}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{K_benchmarking.pdf}
\caption{Plots of the dimensionless thermal conductivity defined via $K_{ij}=\frac{N_fk_B^2T}{2\hbar} \tilde K\delta_{ij}$. We compare the QBE results (solid) with the two-fluid model results (dashed) for the cases of phonon or impurity scattering (top row) and for scattering off the boundary of the sample (bottom row). We see that the agreement is good for weak momentum relaxing scattering. However, as the momentum relaxing scattering is increased, the agreement gets significantly worse. Note that at large $\alpha_\textrm{ph}/\alpha_\textrm{imp}$, the QBE predicts $\tilde K$ to increase with $\beta\mu$, whereas the two fluid model predicts a decrease, so even the qualitative behaviour is incorrect in this regime.}
\label{fig:K_benchmarking}
\end{figure}
\section{Conclusion}
This paper sets up the Quantum Boltzmann formalism for bilayer graphene. It will serve as a reference work for numerical studies of the QBE that can be compared to experimental results. The experimentally relevant transport quantities that we focus on are the thermoelectric coefficients (electrical conductivity, thermopower, thermal conductivity) and the shear viscosity. So far, only the electrical conductivity has been measured in experiment.
The calculation of the transport coefficients requires two ingredients: Firstly, we need to calculate the conserved currents associated with the coefficients in terms of the distribution function. In the case of the viscosity for instance, one has to calculate the stress tensor. This requires working out the coupling of BLG to a curved background metric, a calculation that is performed in this paper for the first time. Secondly, we need to work out the change in the distribution function due to the applied external fields. We use the Kadanoff-Baym equations as a starting point. The most technical part of this derivation is the calculation of the collision integral, which is performed in detail in the appendices. Once we have these ingredients, we can plug the change of the distribution function into the expressions for the conserved currents to find the linear response to the applied external fields. This allows us to read off the transport coefficients.
The dominant term in the collision integral in the hydrodynamic regime of BLG will be the Coulomb interactions. However, in order to obtain a finite conductivity, we need to break the Galilean invariance of the system. There are three possible terms that can be added to the collision integral: the scattering of the electrons off (1) phonons, (2) impurities, (3) the boundary of the sample. Depending on the experimental parameters, one or the other may dominate and in this work we have calculated all three contributions.
In the case of monolayer graphene, the electrons obey a linear dispersion relation. Energy conservation together with momentum conservation then places tight constraints on the phase space of collisions and this allows analytical results for the collision integral to be obtained \cite{Fritz2008}. A similar simplification in the case of BLG is not possible due to the quadratic energy dispersion. Due to the analogy with monolayer graphene, some previous authors have neglected scattering terms that are forbidden for monolayer graphene but allowed for BLG. We explicitly included these terms in our work. The collision integral must be evaluated numerically.
We derived from the QBE the two-fluid model --- a simple hydrodynamic model for the evolution of the mean fluid velocity of the electron and hole fluids. There is Coulomb drag between the two fluids and they are both subject to scattering off phonons. This model is simple enough to be able to obtain analytical formulae for the transport coefficients. We show that the two-fluid model provides quantitatively accurate results in the hydrodynamic regime where the electron-electron collisions are dominant and momentum-relaxing collisions are subdominant.
Our predictions for the temperature and magnetic field dependence of the conductivity can be verified experimentally. It should be possible to add a magnetic field to the experiment and perform the measurement of the electrical and thermal conductivities. This is another interesting probe of the hydrodynamic regime in BLG and can be used to check the agreement between the experimental behaviour and the theoretical predictions.
Our formalism can be adapted to consider BLG far from charge neutrality by modifying the screening calculation. Due to the quadratic dispersion, we expect that in the $\beta\mu\gg1$ regime, one recovers the standard Fermi liquid results.
It is also possible to generalize the formalism to treat multilayer graphene. A further possible avenue of research is to extend the present formalism to finite frequencies. Besides adding an extra term to the collision integral corresponding to the time-derivative in the Boltzmann equation, this may require taking into account the off-diagonal components of the probability distribution matrix as well as considering the \textit{Zitterbewegung} contributions.
Finally, another direction of research is the calculation of the Hall viscosity, which has been measured experimentally in monolayer graphene and BLG experiments \cite{berdyugin2018measuring}.
\section{Acknowledgements}
We would like to acknowledge helpful discussions with Philipp Dumitrescu. We thank Christopher Herzog and Dam Thanh Son for their comments on the KSS bound. This work was supported by EP/N01930X/1 and EP/S020527/1. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data.
\emph{Note added:} During the completion of this work we became aware of related work \cite{Vignale} which looks at the thermoelectric properties for BLG. However, in our work, we present a detailed derivation of the quantum Boltzmann formalism and we have a different form of the collision integral. In addition, we deduce the effect of the experimentally relevant finite-size effect as well as phonon scattering. We aim to provide quantitative results that can be compared to current and future experimental data.
\section{Introduction}
The existence of zero-point fluctuations distinguishes a quantum field from a classical field. For a free field in empty flat spacetime these fluctuations are not observable and one usually neglects them. In other words, one considers renormalized quantities in which the contribution of free vacuum zero-point fluctuations is omitted by their subtraction. However, in the presence of matter interacting with the quantum field zero-point fluctuations might lead to observable effects. A famous example is the Casimir effect: The presence of conducting metals and dielectrics changes the propagation of zero-point modes. Their contribution to the vacuum expectation value of the energy is modified, and this energy depends on the shape and position of macroscopic bodies. Thus, as a result of vacuum fluctuations, there appear forces acting on these bodies. This effect was described by Casimir in 1948 \cite{Casmir:1947hx,Casimir:1948dh}. In 1997 Lamoreaux \cite{Lamoreaux:1996wh} directly measured the force between two closely spaced conducting surfaces to within 5\% and experimentally confirmed the existence of the Casimir effect.
There are different ways to calculate the Casimir force that give the same result \cite{Milton:2004ya,Bordag:2009}. Let us consider two parallel conducting plates. As a result of the fluctuations, there exist microscopic currents in the plates. The average of the (retarded) forces between such currents does not vanish and depends on the distance between the plates, and thereby gives rise to the Casimir force. In the other way of calculation, one can focus on the electromagnetic zero-point fluctuations in the cavity between the plates. Taking the presence of the plates into account by properly choosing boundary conditions for the quantum field then yields the Casimir force. In the second approach one can also calculate the renormalized quantum average of stress-energy tensor $\langle \hat{T}_{\mu\nu}\rangle$.
Quantum vacuum averages of quantities that are quadratic in the field depend on the boundary conditions and on an external potential or a current. Quite often, these quantum averages are called \emph{vacuum polarization}. Using this terminology, one may say that the Casimir effect is a result of the vacuum polarization produced by conducting plates. Certainly, one can characterize the vacuum polarization by considering other quantities instead of the stress-energy tensor. For example, for a scalar field $\hat{\varphi}$ one may study the properties of $\langle \hat{\varphi}^2\rangle$, and one may consider this object as a ``poor man's version of $\langle \hat{T}_{\mu\nu}\rangle$.'' In the present paper we use this option. Namely, we consider the field $\hat{\varphi}$ obeying the following equation
\begin{equation} \n{LEQ}
[\hat{{\cal D}}-V(x)]\hat{\varphi}=0\, .
\end{equation}
In order to specify the operator $\hat{{\cal D}}$, let us consider an analytic function ${\cal D}(z)$ of the complex variable $z$. The operator $\hat{{\cal D}}$ is then obtained by substitution $z\to \Box-m^2$.
We consider and compare two different cases. In both cases $V(x)$ is an external potential producing the vacuum polarization. In the first case we put ${\cal D}(z)=z$, such that the operator $\hat{{\cal D}}$
is just the Klein--Gordon operator $\Box-m^2$ and Eq.~\eqref{LEQ} describes a local massive scalar field.
In the second case let us instead consider the function ${\cal D}(z)=z \exp[f(z)]$, where $f(z)$ is an entire function (and therefore has no poles in the complex plane). Then, the inverse of this function
\begin{equation}
{1\over {\cal D}(z)}\equiv {\exp[-f(z)]\over z}
\end{equation}
has only one pole at $z=0$. This implies that the propagator $1/\hat{{\cal D}}$ does not have ghosts at tree level, and hence the theory \eqref{LEQ} has the same number of propagating degrees of freedom as in the first case, which is why these theories are called \emph{ghost-free}. Since an exponential of a derivative operator contains infinitely many derivatives by means of its series expansion, these ghost-free theories are also called ``infinite-derivative theories'' or ``non-local theories.'' We use these terms interchangeably.
Later on, we shall consider a special class of ghost-free theories specified by a positive integer number $N$,
\begin{align}
\label{eq:gfn}
f(z)=(-z \ell^2)^N \, ,
\end{align}
which we call $\mathrm{GF}_N$. The parameter $\ell$ is a critical length (or time) at which the modifications connected with the non-locality become important. Technically, this length scale appears in order to form the dimensionless combination $\ell^2(\Box-m^2)$. Let us introduce the symbol
\begin{equation}
\label{eq:gfn-form}
\alpha(z)=\exp[-f(z)]\, ,
\end{equation}
which we call a \emph{form factor}. These form factors need to have the proper behavior such that we can reproduce the local theory in a certain limit. For this purpose let us consider again the $\mathrm{GF_N}$ class of theories. In a Fourier basis one has $\ell^2(\Box-m^2) \rightarrow \ell^2(\omega^2 - q^2 - m^2)$, where $\omega$ and $q$ denote the temporal and spatial Fourier frequencies, respectively. The local limit is obtained when $\omega\ell \ll 1$, $q\ell \ll 1$, and $m\ell \ll 1$. Hence, in a more general case, it corresponds to the behavior of the differential operator $\hat{\mathcal{D}}(z)$ at $z=0$. Therefore, in order to obtain the correct infrared behavior that reproduces the standard local theory in the limit $z\to 0$, one needs to demand that all physical form factors satisfy $\alpha(0)=1$. This is evidently the case for the class of $\mathrm{GF_N}$ theories \eqref{eq:gfn}, but there are other choices as well.
Ghost-free field theories, and especially ghost-free gravity, have been discussed in a large number of publications, starting from the papers \cite{Tomboulis:1997gg,Biswas:2011ar,Modesto:2011kw,Biswas:2013cha}; see also \cite{Shapiro:2015uxa,Buoninfante:2018gce} for recent developments. The main driving force of the study of such theories is an attempt to improve the ultraviolet behavior of the theory without introducing unphysical (ghost) degrees of freedom. For applications of ghost-free gravity for resolving cosmological as well as black hole singularities, see e.g.\ \cite{Biswas:2010zk,Biswas:2013kla,Conroy:2015nva,Modesto:2017sdr,Buoninfante:2018xiw,Koshelev:2018hpt}; in the context of gravitational waves see \cite{Kilicarslan:2018unm,Kilicarslan:2019njc}.
The main goal of the present paper is to study the properties of zero-point fluctuations in the ghost-free theory. To probe such fluctuations we consider their response to a specially chosen potential $V(x)$. We restrict ourselves to the simplest case when this potential is static and is of the form of a $\delta$-like barrier. We demonstrate that for such a potential both problems, local and non-local one, are exactly solvable. In the main part of the paper we assume that the flat spacetime is two-dimensional. At the end we discuss the higher dimensional versions of the theory, and we also make remarks on the thermal fluctuations in the ghost-free theory in the presence of the potential $V(x)$.
\section{Scalar ghost-free theory}
We begin by considering a simple two-dimensional model of a ghost-free massive scalar field interacting with a potential $V$. We denote Cartesian coordinates by $X=(t,x)$, such that the Minkowski metric is
\begin{equation}
\mbox{d} s^2 = -\mbox{d} t^2 + \mbox{d} x^2 \, .
\end{equation}
The action of the theory reads
\begin{equation}\begin{split}\label{S2D}
S&=\frac12 \int \mbox{d}^2{X}\ \Big[\varphi\, \hat{{\cal D}} \varphi-V\varphi^2\Big] \, .
\end{split}
\end{equation}
For a quantum field $\hat{\varphi}$ this action gives Eq.~\eqref{LEQ}. The operator $\hat{{\cal D}}$ is a function of the Klein--Gordon operator $\Box-m^2$. Its explicit form for the local and non-local ghost-free theories was discussed in the Introduction. In order to study the vacuum polarization we choose a static potential $V(x)$ that has the form of a simple $\delta$-function
\begin{equation}\label{V}
V = \lambda\,\delta(x) \, ,
\end{equation}
where we assume that this potential is repulsive such that $\lambda>0$. For the calculations we shall employ the formalism of Green functions. Since there exists a wide set of different Green functions related to our problem, let us first discuss them and introduce notations that will be used throughout the rest of this paper.
\subsection{Green functions ``zoo''}
In general, we denote a Green function by $G(X,X')$, using different fonts to distinguish the various cases. For the Green functions in the local theory, in the presence of the potential, we use the bold font
\begin{equation}
\ts{G}^\bullet(X,X') \, ,
\end{equation}
where $\bullet=\big(+,-,(1),\mathrm{F,R,A}\big)$ denotes the type of the Green function:
\begin{equation}
\ts{G}^\bullet=\begin{cases}
\ts{G}^{+}& \mbox{positive frequency Wightman function}\\
\ts{G}^{-}& \mbox{negative frequency Wightman function}\\
\ts{G}^\ind{(1)}& \mbox{Hadamard function}\\
\ts{G}^\ind{F}& \mbox{Feynman propagator}\\
\ts{G}^\ind{R}& \mbox{retarded Green function}\\
\ts{G}^\ind{A}& \mbox{advanced Green function}
\end{cases}
\end{equation}
The first three objects satisfy the homogeneous equation
\begin{align}
[\hat{\mathcal{D}} - V(x)] \ \ts{G}^{+,-,}{}^\ind{(1)}(X,X')=0\, ,
\end{align}
while the last three objects are solutions of the inhomogeneous equation
\begin{align}
[\hat{\mathcal{D}} - V(x)]\ \ts{G}^\ind{F,R,A}(X,X')=-\delta(X-X')\, ,
\end{align}
where $\hat{\mathcal{D}} = \Box-m^2$ in this local case.
Similarly, in the non-local ghost-free theory the corresponding Green functions (in the presence of the potential) are denoted by the bold font version of the calligraphic letters
\begin{align}
{\ts{\mathcal{G}}}^\bullet(X,X')\, .
\end{align}
These Green functions obey the equations
\begin{align}
[\hat{\mathcal{D}} - V(x)]\, \ts{\mathcal{G}}^{+,-,}{}^\ind{(1)}(X,X') &= 0 \, , \\
[\hat{\mathcal{D}} - V(x)]\, \ts{\mathcal{G}}^\ind{F,R,A}(X,X') &= -\delta(X-X') \, .
\end{align}
In the absence of the potential, that is, when $V(x)=0$, we shall use the same notation for the Green functions, but without boldface. The expressions
\begin{equation}
G^\bullet(X-X') \, , \quad \mathcal{G}^\bullet(X-X')
\end{equation}
denote free Green functions in the local and ghost-free theories, respectively.
It should be noted that not every method of quantization of local theories is applicable to the case of non-local theories. There are different approaches toward adapting traditional methods of quantization to non-local ghost-free theories:
For example, defining the quantization procedure via a Wick rotation from Euclidean to Lorentzian signature may not work for ghost-free theories. However, one may postulate that the quantum field theory is well defined only in the Euclidean setup \cite{Buoninfante:2018mre,Asorey:2018wot} and then try to extract information about observables in the physical domain. This approach is attractive from a mathematical point of view because in the Euclidean geometry and in the local case the propagator is unique and well defined. In the non-local case, however, the propagator picks up essential singularities in several asymptotic directions in the complex momentum plane, rendering the evaluation of correlators via contour integrals impossible.
An alternative approach consists of defining the non-local quantum theory directly in the physical domain with Lorentzian signature, without ever resorting to Wick rotation. In the present paper we adopt this second approach, which is technically more involved but conceptually clearer: the quantization employed in the present paper makes use of Green functions as well as their asymptotic boundary conditions, which are well known in local field theory. As we will show, in this particular setting (a static, $\delta$-shaped potential) these methods are sufficient to construct a unique non-local quantum theory.
Of course, non-locality requires us to reassess concepts such as local causality and time-ordering. In particular, time-ordering is no longer applicable in the non-local theory in the traditional way, and the notions of retarded and advanced propagators need to be properly generalized because local causality in ghost-free theories is not respected. Usually, theories without local causality are prone to instabilities and hence undesirable from a physical point of view. This generally accepted belief, however, does not necessarily apply to the entire class of non-local theories: there are some ghost-free theories that are free from any instabilities.
In local theories the retarded (advanced) propagator $G^\ind{R(A)}(x,y)$ vanishes provided $x$ lies everywhere outside the future (past) null cone of the point $y$. As formulated by DeWitt \cite{DeWitt:1965jb}, in non-local theories this boundary condition is to be replaced by an asymptotic condition: the causal propagators vanish only in the ``remote past'' (``remote future''). Similarly, the boundary conditions for the Feynman propagator have to be replaced by asymptotic conditions. This approach is well-defined and adequate for the computation of various scattering amplitudes in the presence of external potentials, in spite of the fact that some acausal effects appear in the vicinity of the potential. We shall comment on these conceptual issues in more detail elsewhere \cite{BFZ:upcoming}.
The presence of the potential $V(x)$ breaks the Poincar\'e invariance of the free theory in two ways: first, it violates translational invariance, and second, it selects a reference frame in which the potential is at rest. However, since the potential is static, the model preserves the translation invariance in time. This means that all Green functions depend only on the time difference $t-t'$ of their arguments. This makes it possible and convenient to use the temporal Fourier transformation. For a function $\varphi(t,x)$ we denote
\begin{eqnarray}
\varphi_{\omega}(x)&=&\int\limits_{-\infty}^{\infty} \mbox{d} t \,e^{i\omega t} \varphi(t,x)\, ,\\
\varphi(t,x)&=&\int\limits_{-\infty}^{\infty} {\mbox{d}\omega\over 2\pi} \,e^{-i\omega t} \varphi_{\omega}(x) \, .
\end{eqnarray}
The Fourier transform of the operator $\hat{{\cal D}}$ is
\begin{equation}\n{FFF}
\hat{{\cal D}}_{\omega}={\cal D}(\partial_x^2+\varpi^2),\hspace{0.5cm}
\varpi=\sqrt{\omega^2-m^2}\, .
\end{equation}
The temporal Fourier transforms of the above Green functions are marked by the subscript $\omega$:
\begin{equation}
\ts{G}^\bullet_\omega(x,x'),\hspace{0.2cm} \ts{\mathcal{G}}^\bullet_\omega(x,x'),\hspace{0.2cm} G^\bullet_\omega(x-x'),\hspace{0.2cm} \mathcal{G}^\bullet_\omega(x-x')\, .
\end{equation}
In the presence of the $\delta$-potential the model also has the discrete reflection symmetry $x\to - x$. This implies that
\begin{eqnarray}
\ts{G}^\bullet_\omega(x,x')&=&\ts{G}^\bullet_\omega(-x,-x')\, ,\\
\ts{\mathcal{G}}^\bullet_\omega(x,x')&=&\ts{\mathcal{G}}^\bullet_\omega(-x,-x')\, .
\end{eqnarray}
\subsection{Free local and ghost-free Green functions}
Non-local equations are well known in condensed matter theory. For example, the propagation of perturbations in a homogeneous dispersive medium can be described by \eqref{LEQ} with $\hat{{\cal D}}=-\partial_t^2 -f(\bigtriangleup)$, where $\bigtriangleup$ is the Laplace operator. Quasiparticles associated with such a theory have the dispersion relation $\omega^2=f(-k^2)$, where $\omega$ is the energy and $k$ is the momentum of the quasiparticle. A property which distinguishes the ghost-free theory from other non-local theories is that its action is locally Lorentz invariant.
The corresponding dispersion relation is ${\cal D}(\omega^2-k^2-m^2)=0$. Since the form factor of the ghost-free theory has no zeros, this relation reduces to the local mass-shell condition. This means that any solution of the homogeneous equation \eqref{LEQ} in the local theory is automatically a solution of the homogeneous ghost-free equation. In other words, the on-shell solutions in the local and ghost-free cases are the same.
Let us now present useful expressions for the temporal Fourier transforms of some Green functions which will be used later. We use the following notation:
\begin{eqnarray}
\varpi&=&\sqrt{\omega^2-m^2}\ \mbox{ for } |\omega|\ge m\, ,\\
\kappa&=&\sqrt{m^2-\omega^2}\ \mbox{ for } |\omega|< m\, .
\end{eqnarray}
With this definition both quantities are real and non-negative. Let us also notice that in the absence of the potential $V$ the Green functions (for both the local and non-local cases) depend only on the difference $x-x'$ of their arguments. In what follows we denote this difference simply by $x$.
In the local theory the Hadamard function reads
\begin{equation}
\label{eq:hdm-free}
{G}^\ind{(1)}_{\omega}(x)=\theta(|\omega|- m)\,{\cos (\varpi\,x)\over \varpi}\, ,
\end{equation}
while the Feynman propagator and the retarded Green function are
\begin{align}
\label{eq:feynman-free}
{G}^\ind{F}_{\omega}(x)&=\begin{cases} \frac{i}{2 \varpi}\,{e^{i\varpi\,|x|}}, & \ \mbox{ for } |\omega|\ge m\,;\\
{1\over 2 \kappa}\,{e^{-\kappa\,|x|}},& \ \mbox{ for } |\omega|< m \, .
\end{cases} \\
{G}^\ind{R}_{\omega}(x)&=\begin{cases}{i\varepsilon_{\omega}\over 2 \varpi}\,{e^{i\varepsilon_{\omega}\varpi\,|x|}},& \ \mbox{ for } |\omega|\ge m \, ;\\
{1\over 2\kappa}\,{e^{-\kappa\,|x|}},& \ \mbox{ for } |\omega|< m \, .
\end{cases}
\end{align}
Here and in what follows we denote $\varepsilon_{\omega}=\sgn(\omega)$. As mentioned previously, all these functions are invariant under the change $x\to -x$. For $\omega\ge -m$ the retarded Green function ${G}^\ind{R}_{\omega}(x)$ coincides with ${G}^\ind{F}_{\omega}(x)$, and for $\omega\ge 0$ the following relation is valid:
\begin{equation} \n{FD0}
{G}^\ind{(1)}_{\omega}(x) = 2\Im [ {G}^\ind{R}_{\omega}(x) ]\, .
\end{equation}
The last equality is nothing but the fluctuation-dissipation theorem for the vacuum (zero temperature) case, and we shall comment on this in the Conclusion.
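These explicit formulas are easy to tabulate; the following minimal numerical sketch (not part of the paper; units with $m=1$ assumed) encodes Eqs.~\eqref{eq:hdm-free} and \eqref{eq:feynman-free} and verifies the two properties just stated:

```python
import cmath, math

def G1(omega, x, m=1.0):
    """Free Hadamard function G^(1)_omega(x), Eq. (eq:hdm-free)."""
    if abs(omega) < m:
        return 0.0
    w = math.sqrt(omega**2 - m**2)            # varpi
    return math.cos(w * x) / w

def GF(omega, x, m=1.0):
    """Free Feynman propagator G^F_omega(x), Eq. (eq:feynman-free)."""
    if abs(omega) >= m:
        w = math.sqrt(omega**2 - m**2)
        return 1j * cmath.exp(1j * w * abs(x)) / (2 * w)
    k = math.sqrt(m**2 - omega**2)            # kappa
    return math.exp(-k * abs(x)) / (2 * k)

def GR(omega, x, m=1.0):
    """Free retarded Green function G^R_omega(x)."""
    if abs(omega) >= m:
        w = math.sqrt(omega**2 - m**2)
        eps = 1.0 if omega >= 0 else -1.0
        return 1j * eps * cmath.exp(1j * eps * w * abs(x)) / (2 * w)
    k = math.sqrt(m**2 - omega**2)
    return math.exp(-k * abs(x)) / (2 * k)

for om in (0.3, 1.5, 3.0):                    # all omega >= 0 here
    for x in (-0.7, 0.0, 1.2):
        # G^R coincides with G^F for omega >= -m
        assert abs(GR(om, x) - GF(om, x)) < 1e-12
        # fluctuation-dissipation relation (FD0): G^(1) = 2 Im G^R for omega >= 0
        assert abs(G1(om, x) - 2 * complex(GR(om, x)).imag) < 1e-12
```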
Let us now discuss the free Green functions for a generic non-local ghost-free theory.\footnote{A comprehensive discussion of the Green functions in the ghost-free theory can also be found in \cite{Buoninfante:2018mre}.} Note that the discussion which follows is valid for any non-local theory that can be formulated in terms of one form factor $\alpha$. To begin with, in the absence of the potential one has
\begin{equation}\label{GG}
\mathcal{G}_{\omega}^\ind{(1)}(x)=G_{\omega}^\ind{(1)}(x)\, .
\end{equation}
In quantum field theory the free Hadamard function is defined as the symmetrized expectation value
\begin{align}
\label{eq:def-hdm-qft-free}
\mathcal{G}^{(1)}(X,X') \equiv \langle \hat{\varphi}(X)\hat{\varphi}(X')+\hat{\varphi}(X')\hat{\varphi}(X)\rangle \, ,
\end{align}
where the expectation value is taken in the vacuum state; it readily reproduces Eq.~\eqref{eq:hdm-free}. As seen in Eq.~\eqref{GG}, the non-local free theory has the same Hadamard function as the local one. The Feynman propagators and the retarded Green functions in the non-local theory differ from their local versions by a universal term ${\Delta \mathcal{G}}_{\omega}(x)$ as follows:
\begin{align}
\label{eq:deltaG}
\mathcal{G}^\ind{F,R}_{\omega}(x) = G^\ind{F,R}_{\omega}(x)+{\Delta \mathcal{G}}_{\omega}(x)\, .
\end{align}
This additional term is given by the integral
\begin{align}
\Delta\mathcal{G}_{\omega}(x)&=\int\limits_{-\infty}^{\infty}{\mbox{d} q\over 2\pi}\cos(q x) {1-\alpha(\varpi^2-q^2)\over \varpi^2-q^2} \, .\n{INT}
\end{align}
Since the form factor $\alpha$ has the property $\alpha(0)= 1$, the integrand is a regular function at $q^2=\varpi^2$. Let us also notice that $\Delta\mathcal{G}_{\omega}(x)$ is a real function which is invariant under the transformation $x\to -x$. Finally, in the local case, where $\alpha=1$, one has $\Delta\mathcal{G}_\omega(x)=0$.
In what follows, we will recast all our results in terms of this modification term $\Delta\mathcal{G}_\omega(x)$ since it captures the impact of the non-local modification on the local theory.
\section{Green functions in the presence of the potential}
In this part we will derive exact expressions for the Hadamard function as well as the causal propagators (retarded and Feynman) for the ghost-free theory in the presence of the $\delta$-potential.
\subsection{Lippmann--Schwinger equation and its solution}
For the calculation of the response of zero-point fluctuations to an external potential one needs to find the corresponding Hadamard Green function. For our choice of the potential it is possible to obtain it in an explicit form. Consider the equation
\begin{align}
\label{eq:eom-phi-modes}
\hat{ {\cal D}}_{\omega}\varphi_{\omega}(x) -V(x)\varphi_{\omega}(x)=0\, .
\end{align}
Denote by $\varphi^0_{\omega}(x)$ a solution of the equation for $V=0$. Then one can write a solution of \eqref{eq:eom-phi-modes} for the mode function $\varphi_{\omega}(x)$ as
\begin{equation}
\label{eq:ls-fourier}
\varphi_{\omega}(x)=\varphi^0_{\omega}(x)-\int\limits_{-\infty}^\infty \! \mbox{d} x' \, \mathcal{G}^\ind{R}_{\omega}(x,x')V(x') \varphi_{\omega}(x')\, .
\end{equation}
This is the Lippmann--Schwinger equation \cite{Lippmann:1950zz}.
For $V(x)=\lambda \delta(x)$ the integral can be taken explicitly and one obtains
\begin{equation}
\varphi_{\omega}(x)=\varphi^0_{\omega}(x)-\lambda \mathcal{G}^\ind{R}_{\omega}(x)\varphi_{\omega}(0)\, .
\end{equation}
Here we used that the free Green function $\mathcal{G}^\ind{R}_{\omega}(x,x')$ depends only on the difference of the coordinates $x-x'$; we denote such a function of one variable for $x'=0$ as $\mathcal{G}^\ind{R}_{\omega}(x)$.
Provided $1+\lambda\mathcal{G}^\ind{R}_\omega(0)\not=0$ this algebraic equation can be easily solved and one obtains
\begin{align}
\begin{split}
\n{FLS}
\varphi_{\omega}(x) & =\varphi^0_{\omega}(x)-\Lambda_\omega \varphi^0_{\omega}(0) \mathcal{G}^\ind{R}_{\omega}(x) \, , \\
\Lambda_\omega & ={\lambda \over 1+\lambda \mathcal{G}^\ind{R}_{\omega}(0)}\, .
\end{split}
\end{align}
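For illustration (a sketch restricted to the local theory, where $\mathcal{G}^\ind{R}_\omega(0)=i/(2\varpi)$ for $|\omega|>m$; not part of the paper's derivation), the solution \eqref{FLS} with an incident plane wave $\varphi^0_\omega(x)=e^{i\varpi x}$ reproduces ordinary $\delta$-potential scattering, with $\Lambda_\omega$ fixing the reflection and transmission amplitudes:

```python
import cmath

def scatter(varpi, lam):
    """Scattering of e^{i varpi x} on V = lam*delta(x) via Eq. (FLS), local case."""
    GR0 = 1j / (2 * varpi)                  # free retarded Green function at x = 0
    Lam = lam / (1 + lam * GR0)             # Lambda_omega of Eq. (FLS)
    # phi(x) = e^{i varpi x} - Lam * phi0(0) * GR(x),  GR(x) = i e^{i varpi |x|}/(2 varpi):
    t = 1 - 1j * Lam / (2 * varpi)          # transmission amplitude (x > 0)
    r = -1j * Lam / (2 * varpi)             # reflection amplitude  (x < 0)
    return r, t

r, t = scatter(varpi=1.0, lam=0.5)
assert abs(abs(r)**2 + abs(t)**2 - 1) < 1e-12           # unitarity
assert abs(abs(r)**2 - 1/(1 + (2*1.0/0.5)**2)) < 1e-12  # |r|^2 = lam^2/(4 varpi^2 + lam^2)
```

The last assertion shows that $|r|^2=\lambda^2/(4\varpi^2+\lambda^2)$, which will reappear below as the factor $1/(1+C^2)$ of the local theory.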
Formally one can employ the free advanced Green function $\mathcal{G}^\ind{A}_\omega(x)$ as well, and it will also solve Eq.~\eqref{eq:eom-phi-modes}. Expanding a physical wave packet with ``advanced modes'' instead of ``retarded modes'' will correspond to different boundary conditions. However, we will prove below that both modes give rise to the same Hadamard function.
\subsection{Hadamard function}
The Hadamard function in the $X$-representation is defined as the symmetric expression
\begin{equation}
\ts{\mathcal{G}}^{(1)}(X,X') \equiv \langle \hat{\varphi}(X)\hat{\varphi}(X')+\hat{\varphi}(X')\hat{\varphi}(X)\rangle \, ,
\end{equation}
such that $\ts{\mathcal{G}}{}^{(1)}(X,X') = \ts{\mathcal{G}}{}^{(1)}(X',X)$. Applying a temporal Fourier transform results in the expression
\begin{equation}
\ts{\mathcal{G}}^{(1)}_{\omega}(x,x')=\langle \hat{\varphi}_{\omega}(x)\hat{\varphi}_{-\omega}(x')+\hat{\varphi}_{-\omega}(x') \hat{\varphi}_{\omega}(x)\rangle\, ,
\end{equation}
and the symmetry of $X \leftrightarrow X'$ implies that
\begin{align}
\label{eq:hdm-fourier-smtry}
\ts{\mathcal{G}}^{(1)}_{-\omega}(x,x') = \ts{\mathcal{G}}^{(1)}_{\omega}(x',x) \, .
\end{align}
These are formal expressions, but due to Eq.~\eqref{GG} we can relate them to local expressions in a unique way: Using Eq.~\eqref{FLS} for the field operator $\hat{\varphi}_{\omega}(x)$ and the property \eqref{GG} one obtains
\begin{align}
\begin{split}
\label{eq:hdm-result}
&\ts{\mathcal{G}}^{(1)}_{\omega}(x,x')\equiv{G}^{(1)}_{\omega}(x-x') \\
&-\Lambda_\omega {\mathcal{G}}^{R}_{\omega}(x) {G}^{(1)}_{-\omega}(x') -\Lambda_{-\omega}{\mathcal{G}}^{R}_{-\omega}(x') {G}^{(1)}_{\omega}(x)\\
&+{G}^{(1)}_{\omega}(0) \Lambda_\omega {\mathcal{G}}^{R}_{\omega}(x) \Lambda_{-\omega}{\mathcal{G}}^{R}_{-\omega}(x')\, .
\end{split}
\end{align}
We take this as a unique prescription for obtaining the non-local, interacting Hadamard function. In the case of vanishing potential, $\lambda=0$, or in the case of vanishing non-locality, $\Delta\mathcal{G}=0$, we recover the local results.
Let us now discuss the properties of relation \eqref{eq:hdm-result}. First, by construction, this expression satisfies \eqref{eq:hdm-fourier-smtry}. Second, by means of Eq.~\eqref{eq:hdm-free}, it is proportional to $\theta(|\omega|-m)$ and
\begin{align}
\label{eq:hdm-integral-bounds}
\ts{\mathcal{G}}^{(1)}_{\omega}(x,x') = 0 \quad \text{for} \quad |\omega| < m \, .
\end{align}
Last, let us notice that
\begin{align}
\label{eq:hdm-properties-0}
\ts{\mathcal{G}}^{(1)}_{-\omega}(x,x') = \ts{\mathcal{G}}^{(1)}_{\omega}(x,x') \, .
\end{align}
This, combined with \eqref{eq:hdm-fourier-smtry}, finally implies
\begin{align}
\label{eq:hdm-properties}
\ts{\mathcal{G}}^{(1)}_{\omega}(x,x') = \ts{\mathcal{G}}^{(1)}_{|\omega|}(x,x') = \ts{\mathcal{G}}^{(1)}_{|\omega|}(x',x) \, .
\end{align}
Again, one might substitute the free advanced Green function $\mathcal{G}^\ind{A}_\omega(x)$ in the above relations. It is related to the free retarded Green function via
\begin{align}
\label{eq:relation-free-adv-free-ret}
\Lambda^\ind{A}_\omega = \Lambda_{-\omega} \, , \quad \mathcal{G}^\ind{A}_\omega(x) = \mathcal{G}^\ind{R}_{-\omega}(x) \, ,
\end{align}
where we defined the analogous quantity
\begin{align}
\Lambda^\ind{A}_\omega := \frac{\lambda}{1+\lambda\mathcal{G}^\ind{A}_\omega(0)} \, .
\end{align}
Then one may define
\begin{align}
\begin{split}
\label{eq:hdm-result-adv}
&\ts{\mathcal{G}}^{(1)}_{\omega}(x,x')_\ind{A} ={G}^{(1)}_{\omega}(x-x') \\
&-\Lambda^\ind{A}_\omega {\mathcal{G}}^{A}_{\omega}(x) {G}^{(1)}_{-\omega}(x') -\Lambda^\ind{A}_{-\omega}{\mathcal{G}}^{A}_{-\omega}(x') {G}^{(1)}_{\omega}(x)\\
&+{G}^{(1)}_{\omega}(0) \Lambda^\ind{A}_\omega {\mathcal{G}}^{A}_{\omega}(x) \Lambda^\ind{A}_{-\omega}{\mathcal{G}}^{A}_{-\omega}(x')\, ,
\end{split}
\end{align}
but using the relations \eqref{eq:relation-free-adv-free-ret} as well as \eqref{eq:hdm-properties-0} one sees that
\begin{align}
\ts{\mathcal{G}}^{(1)}_{\omega}(x,x')_\ind{A} = \ts{\mathcal{G}}^{(1)}_{\omega}(x,x') \, .
\end{align}
Hence, for the calculation of the vacuum polarization in the static case considered here, the retarded and advanced free Green functions can be used interchangeably.
\subsection{Causal propagators}
In this part we denote the causal propagators (Feynman and retarded) by the superscript ``C'' and write them in the form
\begin{align}
\ts{\mathcal{G}}^\ind{C}_\omega(x,x')&={\mathcal{G}}^\ind{C}_\omega(x-x')+\mathcal{A}_{\omega}(x,x') \, , \label{GW}
\end{align}
where $\mathcal{A}_\omega(x,x')$ satisfies the equation
\begin{align}
\label{WW}
\big[\hat{\mathcal{D}} - V(x)\big]\mathcal{A}_\omega(x,x')=V(x)\,{\mathcal{G}}^\ind{C}_\omega(x-x') \, .
\end{align}
The solution is given by
\begin{align}
\mathcal{A}_\omega(x,x')&=-\int\limits_{-\infty}^\infty \mbox{d} x''\,\ts{\mathcal{G}}^\ind{C}_\omega(x,x'')\,V(x'')\,{\mathcal{G}}^\ind{C}_\omega(x''-x') \, . \label{W1}
\end{align}
One may think of this relation as a version of the Lippmann--Schwinger equation for the causal propagators.
Again, for $V(x) = \lambda\delta(x)$ the above integral can be taken and one finds
\begin{equation}
\mathcal{A}_\omega(x,x')=-\lambda\, \ts{\mathcal{G}}^\ind{C}_\omega(x,0)\,{\mathcal{G}}^\ind{C}_\omega(x') \, .
\end{equation}
Combining this relation with \eq{GW} one gets
\begin{equation}
\ts{\mathcal{G}}^\ind{C}_\omega(x,x')={\mathcal{G}}^\ind{C}_\omega(x-x')-\lambda\, \ts{\mathcal{G}}^\ind{C}_\omega(x,0)\,{\mathcal{G}}^\ind{C}_\omega(x') \, .
\end{equation}
For $x'=0$ it reduces to the consistency relation
\begin{equation}
\ts{\mathcal{G}}^\ind{C}_\omega(x,0)={\mathcal{G}}^\ind{C}_\omega(x)-\lambda \,\ts{\mathcal{G}}^\ind{C}_\omega(x,0)\,{\mathcal{G}}^\ind{C}_\omega(0) \, .
\end{equation}
Provided that $1 + \lambda\mathcal{G}^\ind{C}_\omega(0) \not= 0$, solving this algebraic equation yields
\begin{equation}
\ts{\mathcal{G}}^\ind{C}_\omega(x,0)={{\mathcal{G}}^\ind{C}_\omega(x)\over 1+\lambda \,{\mathcal{G}}^\ind{C}_\omega(0)} \, .
\end{equation}
Therefore one finally obtains for the causal propagators
\begin{align}
\label{eq:feynman-result}
\ts{\mathcal{G}}{}^\ind{C}_\omega(x,x') = \mathcal{G}{}^\ind{C}_\omega(x-x') - \lambda\,{{\mathcal{G}}^\ind{C}_\omega(x){\mathcal{G}}^\ind{C}_\omega(x')\over 1+\lambda \,{\mathcal{G}}^\ind{C}_\omega(0)} \, ,
\end{align}
where C=F or C=R for the Feynman or the retarded propagator, respectively. By construction, see Eq.~\eqref{eq:feynman-free}, the Feynman propagator satisfies
\begin{align}
\ts{\mathcal{G}}{}^\ind{F}_{\omega}(x,x') &= \ts{\mathcal{G}}{}^\ind{F}_\omega(x',x) \, , \\
\ts{\mathcal{G}}{}^\ind{F}_{-\omega}(x,x') &= \ts{\mathcal{G}}{}^\ind{F}_\omega(x,x') = \ts{\mathcal{G}}{}^\ind{F}_{|\omega|}(x,x') \, ,
\end{align}
as well as
\begin{align}
\label{eq:feynman-integral-bounds}
\Im \, [ \ts{\mathcal{G}}{}^\ind{F}_\omega(x,x') ] = 0 \quad \text{for} \quad |\omega| < m \, .
\end{align}
The retarded propagator, however, satisfies
\begin{align}
\ts{\mathcal{G}}{}^\ind{R}_{\omega}(x,x') &= \ts{\mathcal{G}}{}^\ind{R}_\omega(x',x) \, , \\
\ts{\mathcal{G}}{}^\ind{R}_{-\omega}(x,x') &= \overline{\ts{\mathcal{G}}{}^\ind{R}_\omega}(x',x) \, ,
\end{align}
where the bar denotes complex conjugation.
\subsection{Interrelation between Hadamard function and causal propagators in the static case}
Having the exact expressions for the Hadamard function \eqref{eq:hdm-result} as well as the causal propagators \eqref{eq:feynman-result} at our disposal, it is straightforward to show that they are related via
\begin{align}
\label{eq:relation-green-functions}
\ts{\mathcal{G}}{}^\ind{F}_\omega(x,x') = \frac12 \left( \ts{\mathcal{G}}{}^\ind{R}_\omega(x,x') + \ts{\mathcal{G}}{}^\ind{A}_\omega(x,x') + i \, \ts{\mathcal{G}}{}^\ind{(1)}_\omega(x,x') \right) \, .
\end{align}
Here, $\ts{\mathcal{G}}{}^\ind{A}_\omega(x,x')$ denotes the advanced propagator which can be defined as
\begin{align}
\ts{\mathcal{G}}{}^\ind{A}_\omega(x,x') \equiv \ts{\mathcal{G}}{}^\ind{R}_{-\omega}(x,x') \, .
\end{align}
This implies that also in the $X$-representation one has
\begin{align}
\begin{split}
\ts{\mathcal{G}}{}^\ind{F}(X,X') &= \frac12 \Big( \ts{\mathcal{G}}{}^\ind{R}(X,X') + \ts{\mathcal{G}}{}^\ind{A}(X,X') \\
& \hspace{30pt} + i \, \ts{\mathcal{G}}{}^\ind{(1)}(X,X') \Big) \, .
\end{split}
\end{align}
In particular, one can also show that the Hadamard function $\ts{\mathcal{G}}^\ind{(1)}(X,X')$ and the Feynman propagator $\ts{\mathcal{G}}^\ind{F}(X,X')$ are related via
\begin{align}
\label{eq:relation-1}
\ts{\mathcal{G}}^\ind{(1)}(X,X')=2\Im\big[ \ts{\mathcal{G}}^\ind{F}(X,X')\big] \, ,
\end{align}
which again is due to the Fourier space relation
\begin{align}
\label{eq:relation-2}
\ts{\mathcal{G}}^\ind{(1)}_\omega(x,x')=2\Im\big[ \ts{\mathcal{G}}^\ind{F}_\omega(x,x')\big] \, .
\end{align}
Evidently, similar relations hold for $V=0$ as well as in the local theory. We prove these relations in Appendix \ref{app:comparison-hadamard-feynman}. It is important to stress that these interrelations are valid for any non-local modification $\Delta \mathcal{G}_\omega(x)$.
Ultimately, we are interested in calculating the vacuum polarization which is defined in terms of the Hadamard function. The above relations show that it is also possible to perform the computations using the Feynman propagator, and take the imaginary part only at the end. We will make this more precise in the next section.
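The interrelation \eqref{eq:relation-2} can also be checked numerically; the sketch below (local case with $\omega>m$, chosen for its explicit Green functions; not part of the paper) builds the Hadamard function from \eqref{eq:hdm-result} and the Feynman propagator from \eqref{eq:feynman-result} and verifies both the reality of the former and the interrelation:

```python
import cmath, math

m, lam = 1.0, 0.7
omega = 1.8
k = math.sqrt(omega**2 - m**2)                  # varpi

GR  = lambda x: 1j * cmath.exp(1j * k * abs(x)) / (2 * k)  # free retarded = Feynman (omega > 0)
G1f = lambda x: math.cos(k * x) / k                         # free Hadamard

Lam_p = lam / (1 + lam * GR(0))                 # Lambda_{+omega}
Lam_m = Lam_p.conjugate()                       # Lambda_{-omega}: G^R_{-omega} = conj(G^R_omega)

def hadamard(x, xp):
    """Eq. (eq:hdm-result) in the local case."""
    return (G1f(x - xp)
            - Lam_p * GR(x) * G1f(xp)
            - Lam_m * GR(xp).conjugate() * G1f(x)
            + G1f(0) * Lam_p * GR(x) * Lam_m * GR(xp).conjugate())

def feynman(x, xp):
    """Eq. (eq:feynman-result) in the local case (G^F = G^R for omega > 0)."""
    return GR(x - xp) - lam * GR(x) * GR(xp) / (1 + lam * GR(0))

x, xp = 0.35, -0.6
lhs = hadamard(x, xp)
rhs = 2 * feynman(x, xp).imag
assert abs(lhs.imag) < 1e-12                    # the Hadamard function is real
assert abs(lhs.real - rhs) < 1e-12              # Eq. (eq:relation-2)
```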
\section{Vacuum fluctuations}
\subsection{General expression for $\langle \varphi^2(x)\rangle_\ind{ren}$ }
We are interested in the quantity
\begin{align}
\langle\varphi^2(x)\rangle_\ind{ren} &:= \left.\left[ \left\langle \varphi(X) \varphi(X') \right\rangle_{V\not=0} - \left\langle \varphi(X) \varphi(X') \right\rangle_{V=0} \right]\right|_{X=X'} \nonumber \\
&= \frac12 \left. \left(\ts{\mathcal{G}}^\ind{(1)}(X,X') - {\mathcal{G}}^\ind{(1)}(X-X') \right) \right|_{X=X'} \label{eq:phisquared} \\
&= \frac12 \left[ \ts{\mathcal{G}}^\ind{(1)}(X,X') -{G}^\ind{(1)}(X-X') \right] |_{X=X'} \, . \nonumber
\end{align}
Inserting \eqref{eq:hdm-result} into \eqref{eq:phisquared} and using \eqref{eq:hdm-properties} one obtains
\begin{align}
\begin{split}
\label{eq:phisquared-hadamard}
\langle \varphi^2(x) \rangle_\ind{ren} &= -\int\limits_m^\infty \frac{\mbox{d} \omega}{2\pi} \Big[ 2\Lambda_\omega {\mathcal{G}}^\ind{R}_{\omega}(x) {G}^\ind{(1)}_{-\omega}(x) \\
&\hspace{45pt} -{G}^\ind{(1)}_{\omega}(0) \left|\Lambda_\omega {\mathcal{G}}^\ind{R}_{\omega}(x)\right|^2 \Big] \, .
\end{split}
\end{align}
Alternatively, inserting \eqref{eq:feynman-result} into \eqref{eq:phisquared} as well as making use of the interrelation \eqref{eq:relation-2} yields
\begin{align}
\label{eq:phisquared-feynman}
\langle \varphi^2(x) \rangle_\ind{ren} = - \Im \left[ \int\limits_m^\infty \frac{\mbox{d}\omega}{2\pi} \lambda\,\frac{\left[{\mathcal{G}}^\ind{F}_\omega(x)\right]^2}{1+\lambda \,{\mathcal{G}}^\ind{F}_\omega(0)} \right] \, .
\end{align}
The integration limits follow directly from Eqs.~\eqref{eq:hdm-integral-bounds} and \eqref{eq:feynman-integral-bounds}, respectively. At first glance these two expressions look quite different, but they are, in fact, identical. This can be shown by using the relations detailed in the previous section, as well as in Appendix \ref{app:comparison-hadamard-feynman}. Using expression \eqref{eq:phisquared-hadamard} it is easy to see that in the absence of the potential barrier, that is, when $\lambda=0$, $\langle \varphi^2(x)\rangle_{\mbox{\tiny{ren}}} =0$ as it should be.
Using Eq.~\eqref{eq:deltaG} we can isolate the terms encoding the non-locality and obtain (after changing the integration variable from $\omega$ to $\varpi$) the following expression:
\begin{align}
\begin{split}
\langle \varphi^2(x) \rangle_\ind{ren} &= \int \limits_0^\infty \frac{\mbox{d}\varpi}{4\pi} \frac{\Phi_{\omega}(x)}{\sqrt{\varpi^2+m^2}} \, ,
\\
\Phi_{\omega}(x) &= \frac{B^2-\cos^2(\varpi x)-2\cos(\varpi x)BC}{1+C^2} \, , \\
B &= 2g_\omega(x) - \sin(\varpi|x|) \, , \label{eq:main-integral} \\
C &= 2g_\omega(0) + 2\varpi/\lambda \, , \\
g_\omega(x) &= \varpi\Delta\mathcal{G}_\omega(x) \, .
\end{split}
\end{align}
This is a general expression for the renormalized vacuum polarization for any non-local theory specified by $\Delta\mathcal{G}_\omega(x)$, which enters via the dimensionless quantity $g_\omega(x)$.\footnote{The scattering of a scalar field on a $\delta$-like potential in a ghost-free theory was studied in \cite{Boos:2018kir}. By comparing \eqref{eq:main-integral} with the results of this paper one can conclude that the factor $1/(1+C^2)$ which enters the integral \eqref{eq:main-integral} coincides with the reflection probability $R$.}
In what follows, it is our goal to evaluate this expression in the local case, as well as for various non-local cases.
\subsection{Vacuum polarization in the local theory}
\label{subsec:local}
Let us first consider the vacuum fluctuations in the local theory which was studied earlier; see \cite{Bordag:1992cm,Milton:2004ya} and references therein. In terms of calculational techniques our approach is quite similar to the one employed in \cite{Munoz-Castaneda:2013yga}. In what follows we shall use the results of the local theory for the comparison with the results in the ghost-free models. This will allow us to better understand the effects of the non-locality.
In the local case one has $\Delta\mathcal{G}_\omega(x) = 0$, and hence
\begin{align}
B = -\sin(\varpi|x|) \, , \quad C = \frac{2\varpi}{\lambda} \, .
\end{align}
The integral \eqref{eq:main-integral} then takes the form
\begin{align}
\label{eq:vac-pol-local}
\langle \varphi^2(x) \rangle_\ind{ren}^\ind{loc.} &= \lambda \int\limits_0^\infty \frac{\mbox{d} \varpi}{4\pi} \frac{2\varpi\sin(2\varpi|x|) - \lambda\cos(2\varpi x)}{\sqrt{\varpi^2+m^2}(4\varpi^2 + \lambda^2)} \, .
\end{align}
Provided $m > 0$ this integral converges, but it is difficult to evaluate analytically.
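As a consistency check (a numerical sketch, not from the paper), one can verify that the general formula \eqref{eq:main-integral} with the local values of $B$ and $C$ reproduces the integrand of \eqref{eq:vac-pol-local} pointwise:

```python
import math

lam = 0.5   # potential strength lambda

def Phi_local(varpi, x):
    """Phi_omega(x) of Eq. (eq:main-integral) with Delta G = 0, i.e. local B and C."""
    B = -math.sin(varpi * abs(x))
    C = 2 * varpi / lam
    return (B**2 - math.cos(varpi*x)**2 - 2*math.cos(varpi*x)*B*C) / (1 + C**2)

def integrand_local(varpi, x):
    """Integrand of Eq. (eq:vac-pol-local), up to the common factor 1/(4 pi sqrt(varpi^2+m^2))."""
    return lam * (2*varpi*math.sin(2*varpi*abs(x)) - lam*math.cos(2*varpi*x)) / (4*varpi**2 + lam**2)

for varpi in (0.1, 0.9, 2.3):
    for x in (-1.1, 0.0, 0.4):
        assert abs(Phi_local(varpi, x) - integrand_local(varpi, x)) < 1e-12
```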
For $x=0$ we can calculate \eqref{eq:vac-pol-local} analytically and obtain
\begin{align}
\begin{split}
\langle \varphi^2(0) \rangle_\ind{ren}^\ind{loc.} &= - \int\limits_0^\infty \frac{\mbox{d} \varpi}{4\pi} \frac{1}{\sqrt{\varpi^2+\mu^2}} \frac{1}{1 + 4\varpi^2} \\
&= \begin{cases} \displaystyle -\frac{\text{arcosh}\left(\frac{1}{2\mu}\right)}{4\pi\sqrt{1-4\mu^2}} & \text{for~} \mu < \tfrac12 \, , \\[15pt]
\displaystyle -\frac{1}{4\pi} & \text{for~} \mu=\tfrac12 \, , \\[15pt]
\displaystyle -\frac{\text{arccos}\left(\frac{1}{2\mu}\right)}{4\pi\sqrt{4\mu^2-1}} & \text{for~} \mu > \tfrac12 \, ,
\end{cases}
\end{split}
\end{align}
where $\mu := m/\lambda$. Note that $\langle \varphi^2(0) \rangle_\ind{ren}^\ind{loc.}$ is always negative, and asymptotically one has
\begin{alignat}{3}
\langle \varphi^2(0) \rangle_\ind{ren}^\ind{loc.} &\rightarrow -\infty \quad &&\text{for} \quad \mu \rightarrow 0 \, , \\
\langle \varphi^2(0) \rangle_\ind{ren}^\ind{loc.} &\rightarrow 0 \quad &&\text{for} \quad \mu \rightarrow \infty \, .
\end{alignat}
The divergence for $\mu\rightarrow 0$ corresponds to the well-known IR divergence for a massless scalar field theory in two dimensions.
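The $x=0$ result can be cross-checked numerically; the sketch below (assuming $\mu<1/2$, here $\mu=0.1$) integrates the dimensionless integral with Simpson's rule and compares it with the closed-form answer:

```python
import math

mu = 0.1   # mu = m / lambda < 1/2

def f(u):
    """Dimensionless integrand at x = 0."""
    return 1.0 / (math.sqrt(u*u + mu*mu) * (1.0 + 4.0*u*u))

# composite Simpson's rule on [0, U]; the tail beyond U falls off like 1/(4 u^3)
U, n = 200.0, 400000
h = U / n
I = f(0.0) + f(U)
for i in range(1, n):
    I += (4 if i % 2 else 2) * f(i * h)
I *= h / 3

closed = math.acosh(1.0/(2.0*mu)) / math.sqrt(1.0 - 4.0*mu*mu)
assert abs(I - closed) < 1e-4
# the renormalized vacuum polarization is then <phi^2(0)> = -I/(4 pi)
```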
In the case of $x\not=0$ the vacuum polarization \eqref{eq:vac-pol-local} can be evaluated numerically. In Fig.~\ref{fig:phisquared-loc} we plot the local vacuum polarization $\langle \varphi^2(x) \rangle_\ind{ren}^\ind{loc.}$ as a function of $x$ for different values of the mass $m$.
\begin{figure}[!htb]%
\centering
\includegraphics[width=0.47\textwidth]{{plot-phisquared-loc}.pdf}\\[-15pt]
\caption{The local vacuum polarization $\langle \varphi^2(x)\rangle_\ind{ren}$\ as a function of the dimensionless distance $x/\ell$ for a fixed potential parameter ($\lambda\ell=0.5$) and for various values of the dimensionless mass parameter $m\ell$.}
\label{fig:phisquared-loc}
\end{figure}
For the remainder of this paper we shall focus on $\mathrm{GF_N}$ non-local theories for which the non-local modification takes the explicit form
\begin{align}
\label{eq:int-gfn}
\Delta\mathcal{G}_{\omega}(x) &= \int\limits_{-\infty}^\infty\frac{\mbox{d} q}{2\pi} \cos(q x) \frac{1-e^{-[\ell^2(q^2-\varpi^2)]^N}}{\varpi^2-q^2} \, .
\end{align}
Note that the integrand is manifestly regular at $q=\varpi$ for all values of $N$. It is also clear that for even $N$ the integrand remains bounded as $\varpi\rightarrow\infty$, whereas for odd $N$ it grows exponentially in this limit. This feature will become important in the following discussion.
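This parity property is easy to check at $q=0$, where the exponent in \eqref{eq:int-gfn} equals $-(-\varpi^2\ell^2)^N$; a minimal sketch:

```python
import math

def numerator(q, varpi, N, ell=1.0):
    """1 - exp(-[ell^2 (q^2 - varpi^2)]^N), the numerator in Eq. (eq:int-gfn)."""
    return 1.0 - math.exp(-(ell**2 * (q*q - varpi*varpi))**N)

# At q = 0 the exponent is -(-varpi^2 ell^2)^N:
#   N odd  -> exp(+(varpi ell)^{2N}): exponential growth as varpi -> infinity
#   N even -> exp(-(varpi ell)^{2N}): the numerator stays bounded by 1
assert abs(numerator(0.0, 5.0, 2)) <= 1.0
assert abs(numerator(0.0, 5.0, 1)) > math.exp(20.0)
```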
\subsection{Vacuum polarization in $\mathrm{GF_1}$ theory}
\label{subsec:gf1}
The non-local $\mathrm{GF}_1$ theory is defined by the form factor
\begin{align}
\alpha(z) = \exp(\ell^2 z) \, ,
\end{align}
which is obtained by setting $N=1$ in Eqs.~\eqref{eq:gfn} and \eqref{eq:gfn-form}. In this case the integral \eqref{INT} can be calculated analytically. For $|\omega|\ge m$ (that is, $\varpi>0$) the result is
\begin{align}
\Delta\mathcal{G}_\omega(x) = \frac{1}{2\varpi}\left\{ \sin(\varpi|x|) - \Im \left[ e^{i\varpi |x|} \erf\left( x_+ \right) \right] \right\} \, ,\n{DG}
\end{align}
where we defined
\begin{align}
x_\pm = \frac{|x|}{2\ell} \pm i\varpi\ell \, ,
\end{align}
and $\erf(z)$ denotes the error function. In what follows we shall use the fact that the asymptotics of this function for fixed $\Re(z)$ and $\Im(z)\to \pm \infty$ is
\begin{align}
\erf(z) \sim -\frac{e^{-z^2}}{\sqrt{\pi} z} \, .
\end{align}
From expression \eqref{DG} we can read off
\begin{align}
B &= - \Im \left[ e^{i\varpi |x|} \erf\left( x_+ \right) \right] \, , \\
C &= \frac{2\varpi}{\lambda} - \erfi (\varpi\ell) \, ,
\end{align}
where $\erfi(z) = -i\erf(i z)$ denotes the imaginary error function \cite{Olver:2010}. Its asymptotic behavior for real $z\to \infty$ is
\begin{align}
\erfi(z) \sim \frac{e^{z^2}}{\sqrt{\pi} z} \, .
\end{align}
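At $x=0$, Eq.~\eqref{DG} gives $\Delta\mathcal{G}_\omega(0)=-\erfi(\varpi\ell)/(2\varpi)$, which can be cross-checked against a direct evaluation of the integral \eqref{INT} for the $\mathrm{GF_1}$ form factor (a numerical sketch in units $\ell=1$, not part of the paper):

```python
import math

def erfi(z, terms=60):
    """Imaginary error function for real z via its Taylor series."""
    s, term = 0.0, z
    for k in range(terms):
        s += term / (2*k + 1)          # term = z^{2k+1}/k!
        term *= z*z / (k + 1)
    return 2.0 * s / math.sqrt(math.pi)

def dG0_closed(varpi, ell=1.0):
    """Delta G_omega(0) = -erfi(varpi*ell)/(2*varpi), from Eq. (DG) at x = 0."""
    return -erfi(varpi*ell) / (2.0*varpi)

def dG0_numeric(varpi, ell=1.0, Q=300.0, n=300000):
    """Direct evaluation of Eq. (INT) at x = 0 with alpha(z) = exp(ell^2 z)."""
    def f(q):
        s = varpi*varpi - q*q
        # (1 - exp(ell^2 s))/s, written with expm1 for accuracy near s = 0
        return -math.expm1(ell*ell*s) / s if s != 0.0 else -ell*ell
    h = Q / n
    I = f(0.0)/2 + f(Q)/2 + sum(f(i*h) for i in range(1, n))
    I *= h                             # trapezoid rule on [0, Q]
    I += -1.0/Q                        # analytic tail: integrand -> -1/q^2 for q >> varpi
    return I / math.pi                 # factor 2/(2 pi) from the symmetric q-integral

assert abs(dG0_closed(1.0) - dG0_numeric(1.0)) < 1e-3
```

The check also makes the exponential growth with $\varpi\ell$ manifest, since $\erfi(\varpi\ell)$ grows as $e^{\varpi^2\ell^2}$.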
Asymptotically, for finite $\lambda>0$ and $\varpi\to \infty$, one has
\begin{align}
B \sim -\frac{1}{\sqrt{\pi}\varpi\ell} e^{\varpi^2\ell^2 - x^2/(4\ell^2)} ,\hspace{0.2cm}
C \sim -\frac{1}{\sqrt{\pi}\varpi\ell} e^{\varpi^2\ell^2} \, .
\end{align}
The magnitudes of both of these quantities grow exponentially for large frequencies $\varpi$. However, the ratio $B/C$ remains finite in this limit:
\begin{align}
{B\over C}\sim e^{-x^2/(4\ell^2)}\,,
\end{align}
and one has
\begin{align}
\Phi_\omega(x) \sim e^{-x^2/(2\ell^2)} - 2\cos(\varpi x) e^{-x^2/(4\ell^2)} \, .
\end{align}
The first term on the right-hand side of this expression does not depend on the frequency, and hence the corresponding contribution to $\langle \varphi^2(x)\rangle_\ind{ren}$\ is logarithmically divergent. By introducing a UV cutoff $\Omega$ one obtains the following expression for the regularized divergent integral:
\begin{align}
Z_0=\int\limits_0^\Omega \frac{\mbox{d}\varpi}{4\pi} \frac{1}{\sqrt{\varpi^2+m^2}}={1\over 4\pi}\ln\left({\Omega+\sqrt{\Omega^2+m^2}\over m} \right)\, .
\end{align}
One also has
\begin{align}
Z_1=-\int\limits_0^\infty \frac{\mbox{d}\varpi}{4\pi} \frac{2\cos(\varpi x)}{\sqrt{\varpi^2+m^2}}
= -\frac{1}{2\pi} K_0(m|x|) \, ,
\end{align}
where $K_0$ is the modified Bessel function of the second kind. Using these results one can write the expression for $\langle \varphi^2(x)\rangle_\ind{ren}$\ in the $\mathrm{GF_1}$ theory as follows:
\begin{align}
\langle \varphi^2(x) \rangle_\ind{ren}^\mathrm{GF_1}&=e^{-x^2/(2\ell^2)} Z_0+e^{-x^2/(4\ell^2)} Z_1+\Psi(x)\, ,\\
\Psi(x)&=\int \limits_0^\infty \frac{\mbox{d}\varpi}{4\pi} \frac{\widetilde{\Phi}_{\omega}(x)}{\sqrt{\varpi^2+m^2}}\, ,\\
\widetilde{\Phi}_{\omega}(x)&=\Phi_\omega(x) - e^{-x^2/(2\ell^2)} + 2\cos(\varpi x) e^{-x^2/(4\ell^2)}\, .
\end{align}
The integral for $\Psi(x)$ is convergent. Adding the Bessel function contribution $Z_1$ to $\Psi(x)$, we arrive at a ``renormalized vacuum polarization'' that we can compare to the local expression for $\langle \varphi^2(x)\rangle_\ind{ren}$. A graphical comparison of these quantities is shown in Fig.~\ref{fig:phisquared-loc-gf1}.
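The Bessel identity behind $Z_1$ can be checked numerically: the substitution $\varpi=m\sinh t$ turns the cosine integral into the standard representation $K_0(z)=\int_0^\infty \cos(z\sinh t)\,\mathrm{d}t$, which may be compared with the exponentially damped representation $K_0(z)=\int_0^\infty e^{-z\cosh t}\,\mathrm{d}t$. A rough standard-library Python sketch (the cutoffs and grids are our own choices; the oscillatory truncation error is of order $1/\cosh 8\sim 10^{-3}$):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (odd) points
    h = (b - a) / (n - 1)
    s = f(a) + f(b)
    for k in range(1, n - 1):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

z = 1.0
# damped representation: integrand falls off double-exponentially
k0_damped = simpson(lambda t: math.exp(-z * math.cosh(t)), 0.0, 8.0, 20001)
# oscillatory representation: needs a fine grid near the upper cutoff
k0_osc = simpson(lambda t: math.cos(z * math.sinh(t)), 0.0, 8.0, 400001)
print(k0_damped, k0_osc)
```

Both numbers agree to the expected truncation accuracy with $K_0(1)\approx 0.4210$.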
\begin{figure}[H]%
\centering
\includegraphics[width=0.47\textwidth]{plot-phisquared-loc-gf1.pdf}\\[-15pt]
\caption{We plot $\langle \varphi^2(x)\rangle_\ind{ren}$\ in the local case as well as in the $\mathrm{GF_1}$ case (where we subtracted the logarithmically divergent term $Z_0$) as a function of the dimensionless distance $x/\ell$ for a fixed potential parameter ($\lambda\ell=0.5$) and mass parameter ($m\ell=0.01$). At large distance scales, remarkably, the ``renormalized vacuum polarization'' agrees with the local result. Its shape for small values of $x/\ell$ is drastically different from that of the local theory.}
\label{fig:phisquared-loc-gf1}
\end{figure}
Our main insights regarding the vacuum polarization in the $\mathrm{GF_1}$ theory are the following: the Gaussian form of the form factor $\alpha(z)$ in this model makes it possible to obtain the Fourier transform of the non-local part of the Green functions \eqref{DG} in explicit form. This is a very attractive property of this class of ghost-free theories. Precisely for this reason, $\mathrm{GF_1}$ theory has been widely used in the study of solutions for static sources. In particular, such theories effectively regularize the field of a point-like source in four and higher spacetime dimensions (see e.g.\ \cite{Boos:2018bxf} and references therein). However, the propagator in this model behaves poorly in the high-frequency regime, resulting in the peculiar behavior of the field created by a time-dependent source in its near zone (see e.g.\ \cite{Frolov:2016xhq}). In the above calculations of $\langle \varphi^2(x)\rangle_\ind{ren}$\ we found that the frequency integral for this quantity is logarithmically divergent at high frequencies. The origin of this divergence is easily traced: the integrand in expression \eqref{eq:int-gfn} grows exponentially as $\varpi\to\infty$. The same is true of any $\mathrm{GF_{2n+1}}$ theory, for which the factor in the numerator grows as $\exp[(\varpi\ell)^{2(2n+1)}]$.
The situation is quite different in the case of $\mathrm{GF_{2n}}$ theories: the corresponding form factor $\alpha(z)$ decreases for both spacelike and timelike momenta when their absolute values tend to infinity. In particular, the integrand in the expression \eqref{eq:int-gfn} exponentially decreases when $\varpi\to\infty$ and is of the order of $\exp[-(\varpi\ell)^{4n}]$. Thus non-local contributions of $\mathrm{GF_{2n}}$ theories are well-defined and divergence-free. However, the analytic calculations in these theories are more involved. In the next section we calculate $\langle \varphi^2(x)\rangle_\ind{ren}$\ for the $\mathrm{GF_2}$ theory and show that our expectations regarding the finiteness of the vacuum polarization are correct.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{{plot-fomega}.pdf}
\caption{The shape of the function $f_\omega(\xi)$, which enters the integral \eqref{eq:int-gf2}, changes drastically for different values of the dimensionless quantity $\varpi\ell$: for small values it is a smooth function of small amplitude (the solid line in the above plot; to increase visibility we scaled the function by a factor of 5). For larger values of $\varpi\ell$ that surpass the critical value $\sqrt{2a_2}\approx 1.058\dots$, however, the function begins to vary sharply around $\xi=1$, which strongly influences its Fourier transform.}
\label{fig:plot-fb-1}
\end{figure}
\subsection{Vacuum polarization in $\mathrm{GF_2}$ theory}
\label{subsec:gf2}
The non-local $\mathrm{GF}_2$ theory is defined by the form factor
\begin{align}
\alpha(z) = \exp(-\ell^4 z^2) \, ,
\end{align}
which is obtained from setting $N=2$ in Eqs.~\eqref{eq:gfn}--\eqref{eq:gfn-form}. The non-local modification $g_\omega(x)$ then takes the form
\begin{align}
\label{eq:int-gf2}
g_\omega(x) &= \int\limits_0^\infty\frac{\mbox{d} \xi}{\pi} \cos(\xi \tilde{x})f_{\omega}(\xi) \, ,\\
f_{\omega}(\xi)& =\frac{1-e^{-(\varpi\ell)^4(1-\xi^2)^2}}{1-\xi^2} \, ,
\end{align}
where we introduced the dimensionless quantity $\tilde{x}= \varpi x$. We are not aware of any analytic expression for this integral; this distinguishes the present theory from $\mathrm{GF_1}$ theory and necessitates more involved numerical calculations.
Quite remarkably, at the position of the potential the quantity $g_{\omega}(0)$ can be found analytically. One can use the following representation:
\begin{align}
g_\omega(0) &={1\over 2(\varpi\ell)^2{\pi}^{3/2}}\int\limits_{-\infty}^{\infty}\mbox{d} y\, e^{-{y^2\over 4(\varpi\ell)^4}}\int\limits_0^y \mbox{d} z P(z) \, ,
\end{align}
where
\begin{align}
P(z)=\int\limits_0^{\infty}\mbox{d}\xi\,\sin\left[(1-\xi^2)z\right]={\sqrt{2\,\pi}\over 4\sqrt{z}}\Big(\sin z-\cos z\Big).
\end{align}
The integration over the parameter $z$ and then over $y$ leads to the result
\begin{align}
\begin{split}
g_\omega(0) & = \frac{\sqrt{2} (\varpi\ell)^3}{6\Gamma\left(\tfrac34\right)}~{}_\ins{2}F_\ins{2}\left[\tfrac34,\tfrac54;\tfrac32,\tfrac74;-(\varpi\ell)^4\right] \\
& -\Gamma\left(\tfrac34\right) \frac{\varpi\ell}{\pi} ~{}_\ins{2}F_\ins{2}\left[\tfrac14,\tfrac34;\tfrac12,\tfrac54;-(\varpi\ell)^4\right] \, .
\end{split}
\end{align}
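This closed form can be cross-checked against direct quadrature of $g_\omega(0)=\tfrac{1}{\pi}\int_0^\infty f_\omega(\xi)\,\mathrm{d}\xi$. The standard-library Python sketch below (the series truncation, integration cutoff, grid, and the sample value $\varpi\ell=0.5$ are our own choices) implements the ${}_2F_2$ functions by their everywhere-convergent Taylor series:

```python
import math

def hyp2f2(a1, a2, b1, b2, z, kmax=300):
    # Taylor series of the entire hypergeometric function 2F2(a1,a2;b1,b2;z)
    term, total = 1.0, 1.0
    for k in range(kmax):
        term *= (a1 + k) * (a2 + k) / ((b1 + k) * (b2 + k)) * z / (k + 1)
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

def g0_closed(b):
    # closed-form result for g_omega(0), with b = varpi * ell
    z = -b**4
    return (math.sqrt(2.0) * b**3 / (6.0 * math.gamma(0.75))
            * hyp2f2(0.75, 1.25, 1.5, 1.75, z)
            - math.gamma(0.75) * b / math.pi
            * hyp2f2(0.25, 0.75, 0.5, 1.25, z))

def g0_quad(b, cutoff=1000.0, n=1000000):
    # midpoint rule for (1/pi) int_0^inf (1 - exp(-b^4 (1-xi^2)^2))/(1-xi^2) dxi;
    # midpoints avoid the removable point xi = 1, where the integrand tends to 0
    b4, h, s = b**4, cutoff / n, 0.0
    for k in range(n):
        xi = (k + 0.5) * h
        u = 1.0 - xi * xi
        if u != 0.0:
            s += -math.expm1(-b4 * u * u) / u
    return (s * h - 1.0 / cutoff) / math.pi  # analytic -1/xi^2 tail correction

b = 0.5
print(g0_closed(b), g0_quad(b))
```

For small $\varpi\ell$ both evaluations reproduce the leading behavior $g_\omega(0)\approx-\Gamma(3/4)\,\varpi\ell/\pi$, consistent with the expansion of the second hypergeometric term.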
Let us now consider the case when $x\ne 0$. The integrand in \eqref{eq:int-gf2} contains the function $f_{\omega}(\xi)$; for small values of $\varpi\ell$ it is quite smooth, but for large values of this parameter it has rather sharp features (see Fig.~\ref{fig:plot-fb-1}).
To work numerically, we shall employ a hybrid approach: we approximate the main features of the non-local modification \eqref{eq:int-gf2} analytically and use numerics only for the residual difference between our approximation and the exact expressions (see Appendix \ref{app:gf2} for a detailed explanation of our methods).
We find the following large-$\varpi$ asymptotics:
\begin{align}
g_\omega(0) &= -\frac{1}{4\sqrt{\pi} \varpi^2\ell^2} + \mathcal{O}\left(\varpi^{-6}\right) \, , \\
g_\omega(x) &= \frac{\sin(\varpi |x|)}{2} - \frac{a_2}{3\pi} \left( 2 + e^{-4a_2^2} \right) \frac{x \sin(\varpi x)}{\varpi\ell^2} \nonumber \\
&\hspace{10pt} - \frac{a_2}{2\pi} \left( 3-e^{-4a_2^2} \right)\frac{\cos(\varpi x)}{\varpi^2\ell^2} + \mathcal{O}\left(\varpi^{-4}\right) \, . \nonumber
\end{align}
Here $a_2$ is a special parameter which we use in our approximation,
\begin{align}
a_2 \approx 0.5604532115\dots \, .
\end{align}
For more details see Appendix \ref{app:gf2}. Thus one obtains the following asymptotic formulas for the parameters $B$ and $C$ which enter \eqref{eq:main-integral} valid in the limit of large values $\varpi$:
\begin{align}
B &\sim -\frac{2a_2}{3\pi} \left( 2 + e^{-4a_2^2} \right) \frac{x \sin(\varpi x)}{\varpi\ell^2} \, , \\
C &\sim \frac{2\varpi}{\lambda} - \frac{1}{4\sqrt{\pi} \varpi^2\ell^2} \, .
\end{align}
The asymptotics for $C$ can readily be reproduced using an alternative analytical approximation scheme, see Appendix \ref{app:gf2n-asymptotics}. As a result we obtain the following asymptotic expression for $\Phi_\omega(x)$ in the limit of large $\varpi$:
\begin{align}
\begin{split}
\Phi_\omega(x) \sim &-\frac{\lambda^2\cos^2(\varpi x)}{4\varpi^2 + \lambda^2}\\
&+ \frac{8a_2\lambda}{3\pi\ell^2}\left(1-e^{-4a_2^2}\right) \frac{x\cos(\varpi x)\sin(\varpi x)}{4\varpi^2 + \lambda^2} \, .
\end{split}
\end{align}
We see that $\Phi_\omega(x)$ is a decreasing function of $\varpi$. Together with the factor $\sqrt{\varpi^2+m^2}$ in \eqref{eq:main-integral}, the convergence is improved even further. These considerations imply that---unlike in $\mathrm{GF_1}$ theory---the vacuum polarization for $\mathrm{GF_2}$ theory is well-defined and finite for any value of $x$.
Having a numerical evaluation of $g_\omega(x)$ at our disposal, we can now numerically evaluate $\langle \varphi^2(x) \rangle_\ind{ren}^\mathrm{GF_2}$. The plot of this function (and the comparison to the local theory) can be found in Fig.~\ref{fig:phisquared-loc-gf2}.
\begin{figure}[!htb]%
\centering
\iffalse \subfloat{{\includegraphics[width=0.47\textwidth]{{plot-phisquared-loc-gf2-detail}.pdf} }}%
\qquad \fi
\subfloat{{\includegraphics[width=0.47\textwidth]{{plot-phisquared-loc-gf2}.pdf} }}\\[-15pt]
\caption{Local and non-local vacuum polarization $\langle\varphi^2(x)\rangle_\text{ren}$ plotted against the dimensionless distance parameter $x/\ell$ for two different potential parameters ($\lambda\ell=0.5$ and $\lambda\ell=2$). For large distances the local and non-local polarizations approach each other, but for small distance scales $x/\ell \sim \tfrac12$ there is a crossover between the local and non-local vacuum polarization which we previously discussed elsewhere \cite{Boos:2018bhd} on a heuristic level. The effect of the non-locality is a smoothing of the polarization around $x=0$.}
\label{fig:phisquared-loc-gf2}
\end{figure}
There are a few observations: (i) \emph{Asymptotics.}---For large distances $x\gg\ell$ the vacuum polarization in $\mathrm{GF_2}$ theory approaches that of the local theory, as expected. As this feature is built into all ghost-free theories considered in this paper, this result confirms that our numerical methods work well.
(ii) \emph{Smoothing.}---At small distance scales $x \sim \ell$, however, there is a difference between the local theory and $\mathrm{GF_2}$ theory: the vacuum polarization is smoothed out at the origin $x=0$ as compared to the local case. This implies that all quantities related to the derivative of the vacuum polarization ($\sim \partial_x \varphi^2$) are now regular in the presence of the $\delta$-potential, whereas in the local theory they are not necessarily continuous.
(iii) \emph{Overshoot.}---Across a wide range of masses and potential parameters (quite possibly for \emph{all} values) the vacuum polarization at the location of the $\delta$-potential is numerically larger than in the local case. We call this an ``overshoot,'' and this feature is plotted in Fig.~\ref{fig:phisquared0-difference}.
(iv) \emph{Crossing.}---Lastly, at intermediate locations $x\sim\ell$, there is a crossing of the local and $\mathrm{GF_2}$ vacuum polarizations. This implies that the difference of the local and non-local vacuum polarization can be both positive and negative. In the $\mathrm{GF_1}$ theory this feature is even more pronounced, with multiple crossings, see Fig.~\ref{fig:phisquared-loc-gf1}. We previously discussed these features in the effective energy density in linearized classical non-local gravity \cite{Boos:2018bhd}, and it seems that these crossings or oscillations are a generic feature of ghost-free theories.
\begin{figure}[!htb]%
\centering
\subfloat{{\includegraphics[width=0.47\textwidth]{{plot-phisquared0-comparison}.pdf} }}\\[-10pt]
\caption{We plot the difference of the vacuum polarization at the location of the potential at $x=0$ as a function of the potential strength $\lambda\ell$. We see that the difference is a function of the dimensionless mass parameter $m\ell$: for larger masses $m$ at fixed non-locality $\ell$ the difference decreases. In the limiting case $\lambda\rightarrow 0$ the renormalized vacuum polarization vanishes as expected.}
\label{fig:phisquared0-difference}
\end{figure}
In the regularized vacuum polarization obtained in the context of $\mathrm{GF_1}$ theory many of these features appear as well, with the notable exception of point (iii): there, the vacuum polarization at the location of the potential is more negative than that of the local theory, which we may call an ``undershoot.''
\section{Discussion}
In this paper we discussed a non-local two-dimensional massive scalar quantum field. For the calculations of the vacuum fluctuations of such a field in the presence of a $\delta$-like potential we employed Green-function techniques.
The calculation of $\langle \varphi^2(x)\rangle $ \ in the usual local quantum field theory is rather simple. It is greatly simplified by employing a Wick rotation and using the standard methods of the Euclidean theory. In the class of non-local theories considered in this paper, however, this method usually does not work: the corresponding form factor $\alpha(z)$ -- see \eqref{eq:gfn-form} -- can grow without bound when its complex argument $z$ tends to infinity along certain directions in the complex plane. As a result, one cannot perform a Wick rotation, and all the required calculations have to be done in the ``physical domain'' of the momentum variables. This makes the calculation of the vacuum fluctuations in ghost-free theories much more complicated. In this paper we developed the tools required for these calculations, and this is one of its results.
In order to find $\langle \varphi^2(x)\rangle $ \ it is sufficient to obtain the Hadamard Green function. We demonstrated that in the absence of the potential the corresponding Hadamard Green function in the ghost-free theory coincides with that of the local theory. We defined $\langle \varphi^2(x)\rangle_\ind{ren}$\ as the coincidence limit $x'\to x$ of the difference of the Hadamard Green function of our model and the free local one. This means that $\langle \varphi^2(x)\rangle_\ind{ren}$\ vanishes in the absence of the potential. However, in the presence of the potential, $\langle \varphi^2(x)\rangle_\ind{ren}$\ is non-vanishing in both the non-local and the local case, and the corresponding functions depend on the choice of the theory. The second objective of this paper was to study this effect.
In order to simplify calculations we chose the simple model of a repulsive $\delta$-potential. For such a potential one can find the required Green function in an explicit form by solving the field equations by means of the Lippmann--Schwinger method. The expressions for the Hadamard Green function for a general ghost-free theory, as well as integral representations for $\langle \varphi^2(x)\rangle_\ind{ren}$, have been obtained in this paper explicitly.
We focused on the calculation of $\langle \varphi^2(x)\rangle_\ind{ren}$\ for two ghost-free theories ($\mathrm{GF_1}$ and $\mathrm{GF_2}$) and demonstrated that the properties of $\langle \varphi^2(x)\rangle_\ind{ren}$\ in these models are quite different. In the $\mathrm{GF_1}$ theory the quantity $\langle \varphi^2(x)\rangle_\ind{ren}$\ is logarithmically divergent, whereas in the $\mathrm{GF_2}$ theory it is a finite smooth function of $x$ for any choice of the mass parameter $m$ and the scale of non-locality $\ell$. The logarithmic divergence of $\langle \varphi^2(x)\rangle_\ind{ren}$\ in the $\mathrm{GF_1}$ theory is an ultraviolet problem connected with the behavior of the $\mathrm{GF_1}$ form factor in the high-frequency domain. In the $\mathrm{GF_2}$ theory (as well as in any $\mathrm{GF_{2n}}$ theory) this problem does not exist. For the $\mathrm{GF_2}$ theory we also managed to find an exact analytic expression for $\langle\varphi^2(0)\rangle_\ind{ren}$ at the position of the potential. This provided us with a good test of our numerical computations.
We showed that non-local contributions arise from the universal non-local correction term $\Delta\mathcal{G}_\omega(x)$, see Eq.~\eqref{INT}, which is added to the local causal propagators (retarded, advanced, and Feynman). This correction is real-valued and well-defined in the physical Minkowski space for all $\mathrm{GF_N}$ theories.
Our numerical computations demonstrated (see Figs.~\ref{fig:phisquared-loc-gf1} and \ref{fig:phisquared-loc-gf2}), as we expected, that non-locality smooths the vacuum polarization in the narrow vicinity of the potential, and that the vacuum polarization asymptotically approaches the corresponding value of the local theory at large distances. Moreover, at some distance $x < \ell$, there is a crossover between the local and the non-local vacuum polarization. At the location of the potential the ``renormalized'' vacuum polarization of $\mathrm{GF_1}$ is more negative than the local polarization, whereas the completely regular $\mathrm{GF_2}$ vacuum polarization is larger than the local polarization at $x=0$.
One might think that the model of a two-dimensional massive scalar field considered in this paper is oversimplified. However, the methods developed here can easily be generalized and adapted to more realistic cases. Suppose that there is more than one spatial dimension, and denote the coordinates in this space by $(x,\vec{y}_{\perp})$. If the potential barrier still has the form $\lambda\delta(x)$, one can perform the Fourier transform not only with respect to time $t$ but also with respect to the transverse coordinates $\vec{y}_{\perp}$. This is possible since the translational invariance in the perpendicular directions is unbroken by the presence of the potential. Denote by $\vec{k}_{\perp}$ the momenta conjugate to $\vec{y}_{\perp}$. Then one can use the same expression \eqref{FFF} for the operator $\hat{{\cal D}}_{\omega}$, where now the quantity $\varpi$ takes the form
\begin{align}
\varpi=\sqrt{\omega^2-m^2-\vec{k}^2_{\perp}} \, .
\end{align}
Lastly, an additional factor depending on $\omega$ appears in formula \eqref{eq:phisquared-feynman} for $\langle \varphi^2(x)\rangle_\ind{ren}$, connected with the phase volume in momentum space. We hope to address the higher-dimensional problem in future work.
As a final remark, it would be interesting as well to study the vacuum fluctuations beyond the vacuum state in a thermal bath of finite temperature $T$. An important connected problem lies in studying under which conditions the fluctuation-dissipation theorem is valid in the class of non-local ghost-free theories.
\section*{Acknowledgments}
J.B.\ is grateful for a Vanier Canada Graduate Scholarship administered by the Natural Sciences and Engineering Research Council of Canada as well as for the Golden Bell Jar Graduate Scholarship in Physics by the University of Alberta.
V.F.\ and A.Z.\ thank the Natural Sciences and Engineering Research Council of Canada and the Killam Trust for their financial support.
\section{Introduction}
Since the birth of quantum mechanics, the concept of entanglement has been at the core of quantum theory. Its profound and sometimes subtle links to various aspects of physics, ranging from the connection to thermal entropy \cite{Takahashi-book,leshouches-PC} to applications in the early universe e.g.~\cite{Boyanovsky2018,Brahma2020,Martin2021}, have stimulated enormous scientific activity in recent years. Nowadays, entanglement measures have become a commonly recognised (and very efficient) tool for the investigation and understanding of quantum correlations. This is particularly true in low-dimensional quantum systems, which are notable for hosting strong quantum correlations, see e.g. \cite{vNE1,vNE2,vNE3,vNE4} for recent reviews and e.g. \cite{Islam2015,Kaufman2016,Elben2018,Brydges2019,Lukin2019} for some experimental tests.
Alongside the still fascinating research on entanglement in exotic and/or out-of-equilibrium contexts, refined tools have begun to be formulated that go beyond the conventional entanglement measures and, in this way, provide more information on such quantum correlations. One example is the so-called entanglement Hamiltonian e.g. \cite{Li2008,Eisler2017,ep-18,Eisler2019,Cardy2016,wrl-18,Hislop1982,Dalmonte:2017bzm,ksz-21,kbe-21,zksz-22,Javerzat2021} (and more recently the negativity Hamiltonian \cite{Murciano2022}), which encodes in a single object the full description of the entanglement spectrum and of its topological properties; another is the study of the structure of entanglement with respect to an internal symmetry of the model under analysis \cite{Lukin2019}, which is the main subject of this work. Despite the enormous success that the idea of the symmetry resolution of entanglement is experiencing, it was put forward only very recently in Ref.~\cite{gs-18}.
Typically, one considers models having a $U(1)$ internal symmetry associated with the conservation of the particle number $\braket{\hat{N}}$, but the case of non-abelian symmetries has also been investigated, see Ref.~\cite{cdmWZW-21}. Here, we focus on the usual case of an abelian internal symmetry by considering a pure quantum state $\vert \Psi \rangle$
such that
\begin{equation}
[\hat{\rho},\hat{N}]=0\,,\;\;\hat{\rho}=\vert\Psi\rangle\langle\Psi\vert\,.
\end{equation}
Since the particle number operator is made out of the sum of local densities, i.e., $\hat{N}=\sum_{j\in\mathbb{Z}} \hat{n}_j$, by taking any spatial bi-partition $A\cup \bar{A}\equiv[-\infty,\ell]\cup[\ell+1,\infty]$ of the system with a cut at a certain position $\ell\in\mathbb{Z}$, one finds that
\begin{equation}
\hat{N}=\hat{N}_A \otimes \mathds{1}_{\bar{A}}+ \mathds{1}_{A} \otimes \hat{N}_{\bar{A}},
\end{equation}
where $\hat{N}_A=\sum_{j\leq\ell} \hat{n}_j$ and similarly for $\hat{N}_{\bar{A}}$. It is then easy to show \cite{cdmWZW-21} that the reduced density matrix $\hat{\rho}_A={\rm tr}_{\bar{A}} (\hat{\rho})$ still commutes with $\hat{N}_A$, that is,
\begin{equation}
[\hat\rho_A,\hat{N}_A]=0\,.
\label{RDMCommutation}
\end{equation}
Moreover, it is evident that since $\hat{N}$ is a global conserved quantity $[\hat{N},\hat{H}]=0$, Eq.~\eqref{RDMCommutation} remains valid during the course of time evolution as well, i.e.,
\begin{equation}
[\hat\rho_A(t),\hat{N}_A]=0\,.
\label{RDMCommutationTimeEvol}
\end{equation}
The commutation relations \eqref{RDMCommutation} and \eqref{RDMCommutationTimeEvol} imply a block-diagonal structure of $\hat\rho_A$ in terms of the eigenvalues $N$ of $\hat{N}_A$ as
\begin{equation}
\label{rhoN}
\hat\rho_A=\bigoplus_N \hat\Pi_N \ \hat\rho_A = \bigoplus_N \left[ p(N) \hat\rho_A(N)\right]
\end{equation}
where $p(N)={\rm tr}(\hat\Pi_N \hat\rho_A)$ is the probability of having $N$ particles in the subsystem $A$. The symmetry-resolved R\'enyi entropy is then defined as usual
\begin{equation}\label{SRRE}
S_{n,N}=\frac{1}{1-n} \log\tr\left(\hat\rho_A(N)^n\right)\,,
\end{equation}
via the symmetry-resolved reduced density matrix $\hat\rho_A(N)$. The calculation of $\hat\rho_A(N)$ is typically very challenging, due to the non-local action of the projector $\hat\Pi_N$ on the subspace with $N$ particles, making the calculation of $S_{n,N}$ almost out of reach if one tries to work it out directly from its definition. Nevertheless, it was noticed in Ref.~\cite{gs-18} that an equivalent (and very convenient) formulation of the problem can be achieved by focusing on the so-called symmetry-resolved charged moments, using the path-integral formalism or known lattice techniques. These quantities were already introduced before their connection to symmetry-resolved entropies was established \cite{bym-13,cms-13,cnn-16,d-16, d-16b,ssr-17,srrc-19}, and in particular, are defined as
\begin{equation}
\label{c}
Z_n(\alpha)=\tr\left[\hat\rho_A^n e^{\ensuremath{\mathbf{i}} \alpha\hat{N}_A}\right],
\end{equation}
implying that their computation is indeed feasible in the path integral formulation of the replicated model at the price of introducing an additional flux on one of the Riemann sheets.
Equally importantly, one finds that
\begin{equation}
\label{SRPartitionFunction}
{\cal Z}_n(N)=\int_{-\pi}^\pi \frac{\mathrm{d}\alpha}{2\pi} \ e^{-\ensuremath{\mathbf{i}} N \alpha} Z_n(\alpha)\equiv \tr\left[\hat\Pi_N\hat\rho_A^n\right]\,,
\end{equation}
for the Fourier transform of the charged moments, aka the symmetry-resolved partition function, and hence the symmetry-resolved R\'enyi entropy \eqref{SRRE} can be expressed as
\begin{equation}\label{SRRE2}
S_{n,N}=\frac{1}{1-n} \log \left[\frac{{\cal Z}_n(N)}{({\cal Z}_1(N))^n}\right].
\end{equation}
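To make Eqs.~\eqref{c}--\eqref{SRRE2} concrete, consider the simplest possible example (a toy illustration of our own, independent of the quench protocol studied below): for the two-site state $(\vert 01\rangle + \vert 10\rangle)/\sqrt{2}$ the reduced density matrix of the first site is diagonal, with eigenvalue $1/2$ in each of the sectors $N=0$ and $N=1$. A standard-library Python sketch then verifies that the Fourier transform of the charged moments reproduces the projected traces $\tr[\hat\Pi_N\hat\rho_A^n]$, and that the symmetry-resolved R\'enyi entropy vanishes identically in both sectors (a trivial instance of equipartition):

```python
import cmath, math

# reduced density matrix of site 1 for (|01> + |10>)/sqrt(2):
# eigenvalues 1/2 (sector N_A = 0) and 1/2 (sector N_A = 1)
lams = [0.5, 0.5]
Ns = [0, 1]

def Z_charged(n, alpha):
    # charged moments: Z_n(alpha) = tr[rho_A^n exp(i alpha N_A)]
    return sum(l**n * cmath.exp(1j * alpha * N) for l, N in zip(lams, Ns))

def Z_resolved(n, N, m=2001):
    # Fourier transform of the charged moments over alpha in (-pi, pi]
    h = 2.0 * math.pi / m
    s = 0.0 + 0.0j
    for k in range(m):
        a = -math.pi + (k + 0.5) * h
        s += cmath.exp(-1j * N * a) * Z_charged(n, a)
    return (s * h / (2.0 * math.pi)).real

# compare with the direct projected trace sum_{i : N_i = N} lam_i^n
for n in (1, 2):
    for N in (0, 1):
        direct = sum(l**n for l, Nq in zip(lams, Ns) if Nq == N)
        print(n, N, Z_resolved(n, N), direct)

# symmetry-resolved Renyi-2 entropy: here it vanishes in both sectors
S2 = lambda N: -math.log(Z_resolved(2, N) / Z_resolved(1, N)**2)
print(S2(0), S2(1))
```

Since the charged moments here are trigonometric polynomials in $\alpha$, the uniform midpoint grid integrates them essentially exactly.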
Immediately after the introduction of these concepts in Ref.~\cite{gs-18}, the symmetry resolution of entanglement has been investigated in a large variety of systems, including e.g.~1+1 conformal field theories \cite{lr-14,gs-18,Equipartitioning,SREQuench,bc-20,crc-20,Zn,mbc-21, Chen-21, cc-21, cdmWZW-21,ThermalCorrectionSRECFT}, free \cite{mdc-20b,U(1)FreeFF}
and interacting integrable quantum field theories \cite{Z2IsingShg, SGSRE, PottsSRE} and also holographic settings \cite{znm,Zhao2021,Zhao2022}. These studies are generically carried out in lattice models \cite{lr-14,Equipartitioning,SREQuench,brc-19,fg-20,FreeF1,FreeF2,mdc-20,ccdm-20,pbc-20,bc-20,HigherDimFermions,mrc-20,FFChriticalChain,Ma2021}, but other systems exhibiting more exotic types of dynamics have been also considered \cite{trac-20,MBL,MBL2,Topology,Anyons, as-20,QuantumHallSRE}. Moreover, the interest for symmetry-resolved entanglement measures on the theoretical side is accompanied and consolidated by their experimental feasibility, see e.g. Ref.~\cite{ncv-21,vecd-20}. \\
In addition, the investigation of symmetry resolution in out-of-equilibrium situations has also been initiated, although so far it has been carried out only for very few cases, see \cite{SREQuench, pbc-20,ncv-21,fg-21, pbc-21, pbc-22}. The reason for this gap in the literature is quite evident: the study of out-of-equilibrium entanglement is known to be challenging, and its symmetry-resolved counterpart is even harder to analyse. However, besides being interesting in its own right, probing the non-equilibrium properties of the symmetry-resolved entanglement could significantly help us thoroughly understand these quantities and the physical systems they are associated with. With this scope, in this manuscript, we wish to connect the idea of symmetry resolution with the framework of quantum generalised hydrodynamics \cite{Ruggiero2019,Ruggiero2020,Collura2020,StefanoJerome,Scopa2021b,Rottoli2022,Ruggiero2022,Ares2022}, which has recently enabled very accurate predictions for the non-equilibrium dynamics of the total entropy in non-homogeneous quench settings. \\
More precisely, the aim of this work is twofold: on the one hand, we detail the exact asymptotic solution for a prototypical non-homogeneous and out-of-equilibrium setting, that is, a bi-partitioning quench protocol for a one-dimensional gas of free spinless fermions, see e.g. Ref.~\cite{Dubail2017,StefanoJerome,Scopa2021b,Rottoli2022,Ares2022} and Sec.~\ref{sec:intro} below. To the best of our knowledge, there are no similar studies of the symmetry-resolved entanglement of such inhomogeneous quench protocols. On the other hand, and most importantly, our discussion of symmetry resolution has general validity and applies to any inhomogeneous quench protocol which is accessible by quantum generalised hydrodynamics.\\
{\it Outline}~---~In Sec.~\ref{sec:intro}, we briefly introduce the model and the quench protocol considered in this work. Similarly, Sec.~\ref{SemiClassicalHydroDescription} is a short introduction to phase-space hydrodynamics, following the discussions in recent works (e.g. \cite{Dubail2017,Collura2020,StefanoJerome,Scopa2021b,Rottoli2022,Ares2022}) and those of previous studies on the quasi-classical evolution of the conserved charges (e.g. \cite{Antal1999,Antal2008,Karevski2002, Rigol2004,Platini2005,Platini2007,Eisler2008,Alba2014,Rigol2015,Allegra2016}). After preparing this ground, in Sec.~\ref{sec:QGHD} we re-quantise the hydrodynamic solution at low energy in terms of a Luttinger liquid by following the recent literature on quantum generalised hydrodynamics, see e.g. \cite{Ruggiero2022,StefanoJerome}. In Sec.~\ref{sec:charge-mom} we specialise to the calculation of symmetry-resolved quantities and, in particular, we detail the strategy for calculating the charged moments in the quantum generalised hydrodynamic framework. Finally, Sec.~\ref{sec:SRRE} contains the analysis of the symmetry-resolved partition function \eqref{SRPartitionFunction} and of the symmetry-resolved R\'enyi entropy \eqref{SRRE2}, while Sec.~\ref{sec:conclusions} summarises our work and our results. We provide a numerical check of our major results based on exact lattice calculations, whose implementation details are reported in \ref{app:NUM}.
\section{The model, the quench and the hydrodynamic descriptions}
\subsection{Quantum model and quench protocol}\label{sec:intro}
In this work, we consider a one-dimensional gas of non-interacting spinless fermions on a semi-infinite lattice $j\in [-L,\infty]$ with nearest-neighbour hopping, whose Hamiltonian reads
\begin{equation}\label{model}
\Ha=-\frac{1}{2}\sum_{j=-L}^\infty\left(\hat{c}^\dagger_j\hat{c}_{j+1} + \hat{c}^\dagger_{j+1}\hat{c}_j \right),
\end{equation}
where $\hat{c}^\dagger_j$, $\hat{c}_j$ are standard fermionic lattice operators satisfying canonical anti-commutation relations $\{ \hat{c}_i , \hat{c}_j^\dagger \} = \delta_{ij}$. The system is initially prepared in a state $\left| \Omega \right>$ obtained as the ground state of the trapped Hamiltonian
\begin{equation}
\label{free-gas}
\Ha_{t<0}=-\frac{1}{2}\sum_{j=-L}^\infty \left(\hat{c}^\dagger_j\hat{c}_{j+1} + \hat{c}^\dagger_{j+1}\hat{c}_j +(V_j-\mu) \hat{c}^\dagger_j\hat{c}_{j} \right)\,,
\end{equation}
where $\mu$ is a chemical potential and $V_j$ is a confining potential specified as
\begin{equation}\label{potential}
V_j \, = \, \left\{ \begin{array}{rcl}
0 & \text{if \quad $-L\leq j \leq -1$}; \\
+\infty & \text{otherwise}.
\end{array} \right.
\end{equation}
This setup can be equivalently interpreted as a quench protocol where the trap in Eq.~\eqref{free-gas} is suddenly released at $t=0$ and the model is subsequently evolved with Hamiltonian \eqref{model}. Notice that if we set $\mu=0$ in Eq.~\eqref{free-gas}, the ground state contains exactly $L/2$ particles (we assume $L$ is even). This can be easily seen by diagonalising the Hamiltonian~\eqref{free-gas} with the potential \eqref{potential} yielding
\begin{equation}
\Ha_{t<0} =- \sum_k \cos(k)\; \hat\eta_k^\dagger \hat\eta_k,
\end{equation}
with Fourier modes of momentum $k= \pi q/(L+1)$, $q=1,\dots, L$, given as
\begin{equation}
\hat\eta_k^\dagger= \sqrt{\frac{2}{L+1}} \sum_{j=-L}^{-1} \sin \left( k j \right) \hat{c}_j^\dagger,\qquad \{ \hat{\eta}_k^\dagger, \hat{\eta}_{k'} \}=\delta_{kk'}.
\end{equation}
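This diagonalization can be verified directly: acting with the tridiagonal hopping Hamiltonian on the single-particle wavefunction $v_k(j)\propto\sin(kj)$ (sites relabelled $j=1,\dots,L$ for convenience) returns $-\cos(k)\,v_k$, using $\sin(k\cdot 0)=\sin(k(L+1))=0$ at the open boundaries. A short standard-library Python check (the values of $L$ and $q$ are arbitrary choices):

```python
import math

L, q = 12, 3
k = math.pi * q / (L + 1)

# single-particle wavefunction on sites j = 1, ..., L (open chain)
v = [math.sin(k * j) for j in range(1, L + 1)]

def apply_H(w):
    # H = -(1/2) * (hop left + hop right), open boundary conditions
    n = len(w)
    return [-0.5 * ((w[j - 1] if j > 0 else 0.0) +
                    (w[j + 1] if j < n - 1 else 0.0)) for j in range(n)]

Hv = apply_H(v)
E = -math.cos(k)
err = max(abs(Hv[j] - E * v[j]) for j in range(L))
print(err)  # vanishes to machine precision
```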
Indeed, the single-particle energy $-\cos(k)$ is negative for $q=1, \dots,L/2$ and the ground state is obtained by acting on the fermion vacuum $ \left| 0 \right>$ with such single-particle creation operators
\begin{equation}\label{half-filled-GS}
\ket{\Omega}\equiv \left| \{\rho=1/2\} \right>=\hat{\eta}_1^\dagger \hat{\eta}_2^\dagger \dots \hat{\eta}_{\nicefrac{L}{2}}^\dagger \left| 0 \right>.
\end{equation}
Here, we introduced the notation $ \left| \{\rho=1/2\} \right>$ to emphasise that the ground state at $\mu=0$ is half-filled, that is, on average, every second site is occupied by a fermion. Together with the state in Eq.~\eqref{half-filled-GS}, we consider also the case of a fully-filled ground state, obtained by setting $\mu<-1$ in Eq.~\eqref{free-gas} as
\begin{equation}
\ket{\Omega}\equiv \left| \{\rho=1\} \right>=\hat{\eta}_1^\dagger \hat{\eta}_2^\dagger \dots \hat{\eta}_{L}^\dagger \left| 0 \right>.
\end{equation}
Both the variants of the initial states allow for an intuitive spin chain interpretation since the model in Eq.~\eqref{free-gas} is known to map to a spin-$1/2$ XX-chain
\begin{equation}\label{xx-model}
\Ha= - \frac{1}{4} \sum_{j=-L}^{\infty} \left(\hat{\sigma}^x_j \hat{\sigma}^x_{j+1} + \hat{\sigma}^y_j\hat{\sigma}^y_{j+1}\right) \, + \, \frac{1}{2}\sum_{j=-L}^{\infty} (V_j-\mu) \ssz_j \, + \, {\rm constant}
\end{equation}
through the Jordan-Wigner transformation \cite{Jordan1928}
\begin{equation}
\hat{c}_j^\dagger= \exp\left(\ensuremath{\mathbf{i}} \pi \sum_{i<j} \hat{\sigma}^+_i\hat{\sigma}^-_i\right) \hat{\sigma}^+_j ,
\end{equation}
where $\hat\sigma^\pm_j=(\hat{\sigma}^x_j\pm \ensuremath{\mathbf{i}} \hat{\sigma}^y_j)/2$ and $\hat\sigma_j^{a}$, $a=x,y,z$, are spin-$1/2$ operators acting at site $j$. In particular, the specific choice $\ket{\Omega}\equiv\left| \{\rho=1\} \right>$ corresponds to the standard domain wall, where the left and the right parts of the system display opposite values of the magnetisation, equal to $+\frac{1}{2}$ and $-\frac{1}{2}$ respectively, while $\ket{\Omega}\equiv\left| \{\rho=1/2\} \right>$ is regarded as a zero-magnetisation ground state. \\
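The Jordan-Wigner map can likewise be checked explicitly on a few sites. The sketch below (ours, with $\hat\sigma^{x,y,z}$ the Pauli matrices and an illustrative chain length) verifies that the string-dressed operators obey the canonical anticommutation relations.

```python
import numpy as np

# Build Jordan-Wigner fermions c^dag_j = [prod_{i<j} exp(i pi s+_i s-_i)] s+_j
# on a 3-site chain and check {c_i, c^dag_j} = delta_ij, {c^dag_i, c^dag_j} = 0.
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+
string = np.diag([-1.0 + 0j, 1.0])              # exp(i pi sigma^+ sigma^-)
NSITES = 3

def c_dag(j):
    op = np.eye(1, dtype=complex)
    for i in range(NSITES):
        factor = string if i < j else (sp if i == j else np.eye(2))
        op = np.kron(op, factor)
    return op

Id = np.eye(2 ** NSITES)
for i in range(NSITES):
    for j in range(NSITES):
        ci = c_dag(i).conj().T
        assert np.allclose(ci @ c_dag(j) + c_dag(j) @ ci,
                           Id if i == j else 0 * Id)
        assert np.allclose(c_dag(i) @ c_dag(j) + c_dag(j) @ c_dag(i), 0 * Id)
```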
Hence, the quench dynamics takes place by switching off the initial potential $V_j$ at time $t=0$. Consequently, the gas expands freely to the right side of the chain and develops a non-trivial profile of density around the junction at $j=0$, which enlarges with time. More precisely, in the hydrodynamic limit $L\to \infty$, $t \rightarrow \infty$, $ j \rightarrow \infty $ with $t\leq L$ and $j/t$ fixed (see Subsec.~\ref{SemiClassicalHydroDescription}), a non-trivial density profile forms in the region $-t\leq j \leq t$ according to~\cite{Antal1999,Antal2008}
\begin{equation}\label{dens-intro}
\rho(j,t) \, =
\begin{cases}
\frac{1}{2\pi} {\rm arccos} \frac{j}{t} & \text{ if } \left| \Omega \right>= \left| \{ \rho=1/2\} \right> \\
\frac{1}{\pi} {\rm arccos} \frac{j}{t} & \text{ if } \left| \Omega \right>= \left| \{ \rho=1 \} \right>
\end{cases}
\end{equation}
at times $0<t \leq L$.
As a result of the expansion of the gas, quantum correlations spread from the junction $j=0$ towards outer regions, leading to a growth of the entanglement with a non-homogeneous behaviour along the chain. In Refs.~\cite{Alba2014,Gruber2019,StefanoJerome} (resp.~Refs.~\cite{Dubail2017,Rottoli2022,Ares2022}) such growth for the $n$-R\'enyi entropy, defined as
\begin{equation}
S_n(j,t)= \frac{1}{1-n} \log \tr \ (\hat{\rho}_A(t) )^n,
\end{equation}
and its limit $n \to 1$ where it reduces to the von Neumann entanglement entropy
\begin{equation}
S_1(j,t)= - \tr \hat{\rho}_A(t) \log \hat{\rho}_A(t),
\end{equation}
have been thoroughly studied for the reduced density matrix of the subsystem $A= [j,+\infty]$ for the half-filled (resp.\ fully-filled) case.
In this work, we compute and study the symmetry-resolved counterparts of the above quantities, that are
\begin{equation}
S_{n,N}(j,t)= \frac{1}{1-n} \log \tr \ (\hat{\rho}_A(t,N) )^n
\end{equation}
for the symmetry-resolved $n$-R\'enyi entropy and
\begin{equation}
S_{1,N}(j,t)= - \tr \hat{\rho}_A(t,N) \log \hat{\rho}_A(t,N)
\end{equation}
for the von Neumann entanglement. Here, $\hat{\rho}_A(t,N)$ is defined via Eq.~\eqref{rhoN} by taking the hydrodynamic limit of the expanding quantum gas in Eq.~\eqref{model}, as further specified below.
\subsection{Phase-space hydrodynamics} \label{SemiClassicalHydroDescription}
As a first step towards the calculation of the symmetry-resolved entanglement during the quench dynamics, we move to a hydrodynamic description of the problem. In this way, we eventually obtain a quasi-classical description of the time evolution from which asymptotically exact results for the conserved charges readily follow, see e.g. Refs.~\cite{Antal1999,Antal2008,Karevski2002,Rigol2004,Platini2005,Platini2007}. However, this machinery is not sufficient for the description of quantum effects such as entanglement at zero temperature. In fact, the latter are captured only after the re-introduction of large-scale quantum fluctuations on top of the phase-space hydrodynamics, as detailed in the next subsection.\\
The quasi-classical treatment consists of taking an appropriate hydrodynamic limit, i.e., considering the model at large space-time scales by sending $L,j,t\to\infty$ while keeping the ratio $j/t$ fixed. In such a limit, the model can be described in terms of fluid cells $\Delta x=[j,j+M]$ with $M\gg 1$, labeled by the position $x$, each containing a large number of particles. It follows that the Hamiltonian \eqref{model} can be rewritten as \cite{Wendenbaum2013,StefanoJerome}
\begin{equation}
\Ha=-\frac{1}{2}\int_{-L}^{\infty} \mathrm{d} x \int_0^{\Delta x} \frac{\mathrm{d} y}{\Delta x} \ \left(\hat{c}^\dagger_{x+y} \hat{c}_{x+y+1} + \text{h.c.}\right)
\end{equation}
and can be diagonalised in Fourier basis within each fluid cell as
\begin{equation}
\Ha= -\int_{-L}^{\infty} \mathrm{d} x \ \int_{-\pi}^\pi \frac{\mathrm{d} k}{2\pi} \cos k \ \hat\eta^\dagger_{k,x} \ \hat\eta_{k,x}
\end{equation}
where
\begin{equation}
\hat{c}^\dagger_{x+y}=\int_{-\pi}^\pi \frac{\mathrm{d} k}{2\pi} \ e^{\ensuremath{\mathbf{i}} k y} \ \hat\eta^\dagger_{k,x}, \qquad \hat\eta_{k,x}=(\hat\eta_{k,x}^\dagger)^\dagger.
\end{equation}
The two variants of the initial state that we investigate fill the left part of the system with modes $-\pi\rho_0\leq k \leq \pi\rho_0$ ($0\leq \rho_0\leq 1$), leaving the right side empty. The specific choice $\rho_0=1$ (resp.~$\rho_0=1/2$) associated with $\left| \{\rho=1\} \right>$ ($\left| \{\rho=1/2\} \right>$) corresponds to the l.h.s. of the system being entirely filled (resp.\ half filled). \\
Crucially, both initial states are asymptotically described by a Wigner function which is, in our cases, equivalent to the local occupation function of the free particles and reads as
\begin{equation}
W_0(x,k)=\begin{cases} 1, \quad \text{if $x\leq 0$ and $-\pi\rho_0\leq k \leq \pi\rho_0$}; \\[4pt] 0, \quad\text{otherwise.}\end{cases}
\end{equation}
Its evolution in phase-space is dictated by the Euler equation \cite{Ruggiero2019,StefanoJerome}
\begin{equation}
\partial_t \ W_t(x,k) + \sin k \ \partial_x \ W_t(x,k)=0
\end{equation}
with the simple solution
\begin{equation}
W_t(x,k)=W_0(x-t\sin k,k),
\end{equation}
see also Refs.~\cite{Fagotti2017,Fagotti2020} for details on the derivation. An important consequence of the above solution is that the zero-temperature dynamics is characterised by the zero-entropy condition $W_t\in\{0,1\}$ of the local macro-states at each time. It follows that one can focus only on the hydrodynamic evolution of the local Fermi points $k^\pm_F(x,t)$, satisfying the so-called zero-entropy GHD equation \cite{Doyon2017}
\begin{equation}\label{zero-entropyGHD}
\left(\partial_t + \sin k^\pm_F \ \partial_x\right) k^\pm_F=0.
\end{equation}
The solution of Eq.~\eqref{zero-entropyGHD} allows us to build the Fermi contour $\Gamma_t$ as
\begin{equation}\label{gamma}
\Gamma_t=\left\{ (x,k) \ : \ k_F^-(x,t)\leq k \leq k_F^+(x,t)\right\} \end{equation}
and to re-construct the time-evolved Wigner function simply as
\begin{equation}
W_t(x,k)=\begin{cases} 1, \quad \text{if $k_F^-(x,t)\leq k \leq k_F^+(x,t)$}; \\[4pt] 0, \quad \text{otherwise.}\end{cases}
\end{equation}
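Concretely, transporting $W_0$ along the free characteristics and integrating over momentum reproduces the density profile of Eq.~\eqref{dens-intro}; the short sketch below (ours, with an illustrative momentum grid) checks this for both fillings.

```python
import numpy as np

# Density from the transported Wigner function, rho(x,t) = int dk/2pi W_t(x,k),
# with W_t(x,k) = W_0(x - t sin k, k); compare with (rho_0/pi) arccos(x/t).
def density_from_wigner(x, t, rho0, nk=200001):
    k = np.linspace(-np.pi, np.pi, nk)
    W = ((x - t * np.sin(k)) <= 0) & (np.abs(k) <= np.pi * rho0)
    return W.mean()                 # uniform-grid estimate of int dk/(2 pi) W

t = 1.0
for rho0 in (1.0, 0.5):
    for x in (-0.7, -0.2, 0.0, 0.3, 0.8):
        exact = rho0 / np.pi * np.arccos(x / t)
        assert abs(density_from_wigner(x, t, rho0) - exact) < 1e-3
```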
Notice that the quench problem under analysis is characterised by a connected Fermi sea at each time \cite{Allegra2016,Dubail2017,Gruber2019,StefanoJerome,Scopa2021b}, i.e., it displays only two local Fermi points $k_F^-(x,t)\leq k_F^+(x,t)$ resulting in the Fermi contour of Eq.~\eqref{gamma}. The time-evolved Fermi contour $\Gamma_t$ is a key quantity for our study, not only because it fully encodes the quasi-classical dynamics of the model, but also because it constitutes the background over which quantum fluctuations are re-introduced, as presented shortly. \\
As already mentioned, once the Fermi contour is determined, one has immediate access to the exact asymptotic profiles of the conserved charge densities $q$ and currents $j_q$ as
\begin{subequations}\begin{equation}
q(x,t)=\int_{k_F^-(x,t)}^{k_F^+(x,t)} \frac{\mathrm{d} k}{2\pi} \ h_q(k);
\end{equation}
\begin{equation}
j_q(x,t)=\int_{k_F^-(x,t)}^{k_F^+(x,t)} \frac{\mathrm{d} k}{2\pi} \ \sin k \ h_q(k),
\end{equation}
\end{subequations}
where $h_q(k)$ is the single-particle eigenvalue associated with the charge $q$ (for instance, $h_1\equiv 1$ for the particle density, $h_2\equiv -\cos k$ for the energy density, and so on).\\
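These Fermi-sea integrals are straightforward to evaluate numerically. The sketch below (ours; it anticipates the fully-filled Fermi points quoted just below) checks that $h_1\equiv1$ reproduces the $\arccos$ density profile and that the continuity equation $\partial_t q+\partial_x j_q=0$ holds.

```python
import numpy as np

# Charge density/current from the local Fermi sea [k_F^-, k_F^+] (rho_0 = 1):
# k_F^- = arcsin(x/t), k_F^+ = pi - arcsin(x/t).
def fermi_points(x, t):
    return np.arcsin(x / t), np.pi - np.arcsin(x / t)

def charge(x, t, hq, nk=20001):
    km, kp = fermi_points(x, t)
    k = np.linspace(km, kp, nk)
    return (kp - km) * np.mean(hq(k)) / (2 * np.pi)

def current(x, t, hq, nk=20001):
    km, kp = fermi_points(x, t)
    k = np.linspace(km, kp, nk)
    return (kp - km) * np.mean(np.sin(k) * hq(k)) / (2 * np.pi)

one = lambda k: np.ones_like(k)
x, t, d = 0.3, 1.0, 1e-4
rho = charge(x, t, one)
assert abs(rho - np.arccos(x / t) / np.pi) < 1e-9
dt_rho = (charge(x, t + d, one) - charge(x, t - d, one)) / (2 * d)
dx_j = (current(x + d, t, one) - current(x - d, t, one)) / (2 * d)
assert abs(dt_rho + dx_j) < 1e-3   # continuity equation
```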
For the sake of concreteness, in the cases $\rho_0=\{1,1/2\}$, one finds for $|x|/t\leq1$ the solutions
\begin{equation}
k_F^\pm(x,t)=\begin{cases} \left\{ \pi-\arcsin(x/t); \ \arcsin(x/t)\right\}; \quad \rho_0=1
\\[4pt]
\left\{\pi/2; \ \arcsin(x/t)\right\}; \qquad \rho_0=1/2
\end{cases}
\end{equation}
for the Fermi points, and
\begin{equation}\label{density}
\rho(x,t)=\begin{cases}
(\rho_0/\pi) \arccos(x/t),\qquad \text{if $|x|/t\leq 1$}; \\[4pt]
\rho_0, \qquad \text{if $x/t<-1$};\\[4pt]
0, \qquad \text{otherwise}
\end{cases}
\end{equation}
for the density profile. Given a bi-partition of the system as
\begin{equation}\label{bi-partition}
A\cup \bar{A} \qquad \text{with $A=[-L, x]$,}
\end{equation}
we compute, for future convenience, the number of particles in $A$ as function of the cutting point $x$ and of time $t$
\begin{equation}\label{NA}
N_A(x,t)=\int_{-L}^x \mathrm{d} y \ \rho(y,t)\equiv \rho_0\ {\cal N}(x,t)
\end{equation}
with scaling function
\begin{equation}\label{resc-NA}
{\cal N}(x,t)=\begin{cases}
L-t/\pi\sqrt{1-x^2/t^2}+(x/\pi)\arccos(x/t), \\
\text{if $|x|/t\leq 1$};\\[8pt]
(L+x), \qquad \text{if $x/t<-1$};\\[4pt]
L, \qquad \text{otherwise}.
\end{cases}
\end{equation}
At $x=0$, we find simply
\begin{equation}
{\cal N}(0,t)=L-t/\pi.
\end{equation}
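The closed form of ${\cal N}(x,t)$ can be cross-checked by direct quadrature of the density profile; a short sketch (ours, written for $\rho_0=1$ so that $\rho=\arccos(y/t)/\pi$, with illustrative values of $L$ and $t$):

```python
import numpy as np

# Compare the closed-form scaling function N(x,t) with a direct integral of
# the hydrodynamic density profile (rho_0 = 1, so rho = arccos(y/t)/pi).
def calN_closed(x, t, L):
    if x < -t:
        return L + x
    if x > t:
        return float(L)
    return (L - (t / np.pi) * np.sqrt(1 - (x / t) ** 2)
              + (x / np.pi) * np.arccos(x / t))

def calN_quad(x, t, L, n=200001):
    y = np.linspace(-L, x, n)
    u = np.clip(y / t, -1.0, 1.0)
    rho = np.where(y < -t, 1.0, np.where(y > t, 0.0, np.arccos(u) / np.pi))
    return (x + L) * rho.mean()

L, t = 100.0, 40.0
for x in (-60.0, -20.0, 0.0, 20.0):
    assert abs(calN_quad(x, t, L) - calN_closed(x, t, L)) < 1e-2
assert abs(calN_closed(0.0, t, L) - (L - t / np.pi)) < 1e-12
```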
\begin{figure}[t]
\centering
(a) \hspace{3cm} (b) \\
\includegraphics[width=\textwidth]{semi-classical.pdf}
\caption{(a) Particle density $\rho(x,t)$ in Eq.~\eqref{density} and (b) rescaled particle number ${\cal N}(x,t)$ of the subsystem $A=[-L,x]$ in Eq.~\eqref{resc-NA} as function of the rescaled position at different instants of time. The symbols are numerical data obtained for a lattice of $300$ sites while the full line is the hydrodynamic prediction.}\label{fig1}
\end{figure}
In Fig.~\ref{fig1}, the semi-classical hydrodynamic results in Eqs.~\eqref{density} and \eqref{NA} for the particle density $\rho(x,t)$ and particle number $N_A(x,t)$ are compared to exact numerical data obtained for the lattice model in Eq.~\eqref{model}; see Appendix~\ref{app:NUM} for details on the numerical implementation.
\subsection{Quantum fluctuating hydrodynamics}\label{sec:QGHD}
As mentioned above, for the calculation of the entanglement entropy it is essential to restore the quantum fluctuations on top of the semi-classical hydrodynamic solution determined previously \cite{Ruggiero2019,Ruggiero2020,Collura2020,StefanoJerome,Ruggiero2022}. A useful and successful way to do so is to incorporate only those quantum processes that are relevant at low energy, which can be described in terms of a Luttinger liquid. Therefore, we introduce a large-scale density fluctuation field as
\begin{equation}
\delta\hat\rho=\frac{1}{2\pi} \partial_x \hat\varphi
\end{equation}
and we expand the time-dependent fermionic operators in terms of the low-energy fields of the underlying Luttinger liquid
\begin{equation}
\begin{matrix}
&\hat{c}^\dagger_x(t) \propto \exp\left[\frac{\ensuremath{\mathbf{i}}}{2}\left( \hat\varphi_+ - \hat\varphi_-\right)\right] + \dots \\[4pt]
&\hat{c}_x(t) \propto \exp\left[\frac{\ensuremath{\mathbf{i}}}{2}\left( \hat\varphi_- - \hat\varphi_+\right)\right] + \dots
\end{matrix}
\end{equation}
retaining only the leading-order terms, i.e., those with smallest scaling dimensions. The above identification is valid up to a non-universal amplitude and a semi-classical phase that are unimportant for our purposes. It is customary and useful to decompose $\hat\varphi$ into chiral components as $\hat\varphi=\hat\varphi_+ +\hat\varphi_-$. The dynamics of these quantum fluctuations is then established by the following effective Hamiltonian \cite{Dubail2017,Brun2017,Brun2018, Ruggiero2019,Scopa2020,Bastianello2020, Ruggiero2020,StefanoJerome,Scopa2021b,Ares2022}
\begin{equation}\label{LL}
\Ha[\Gamma_t]=\int_{\Gamma_t} \frac{\mathrm{d}\theta}{2\pi} {\cal J}(\theta)\ \sin(k(\theta)) \ (\partial_\theta \hat\varphi_a)^2
\end{equation}
together with the parametrisation of the Fermi contour
\begin{equation}
\Gamma_t=\left\{ (x(\theta),k(\theta)) \ : \ k(\theta)=k_F(x(\theta),t)\right\},
\end{equation}
in terms of $\theta\in 2\pi \mathbb{R}/\mathbb{Z}$, which is a coordinate along the contour $\Gamma_t$, $a\equiv a(\theta)=\pm$ iff $k(\theta)\gtrless 0$ and ${\cal J}(\theta)$ is simply the Jacobian of the coordinate change. In our cases, the large-scale quantum fluctuations are obtained from the ground state of the Luttinger liquid Hamiltonian \eqref{LL} at time $t=0$. Over the course of time evolution, these fluctuations are then simply transported along the Fermi contour, which gets modified according to the semi-classical hydrodynamics of Sec.~\ref{SemiClassicalHydroDescription}. \\
Importantly, in our quench setting, any bi-partition of the system $A\cup\bar{A}$ with cut in real space at position $x$ (i.e., $A=[-L,x]$) can be identified with two boundary points $\theta_{1,2}$ along the curve $\Gamma_t$ such that $k(x(\theta_{1,2}),t)=k_F^\pm(x,t)$, see Fig.~\ref{fig:illustration-cut} for an illustration.\\
We conclude this section with a remark. Although the semi-classical hydrodynamics of Sec.~\ref{SemiClassicalHydroDescription} is found to be the same for $\rho_0=\{1,1/2\}$ (up to a simple rescaling of the profiles), the same is not true for the behaviour of quantum fluctuations. In fact, the parametrisation of the Fermi contour $\Gamma_t$ as well as the final result for the entanglement entropy is strongly dependent on the value of $\rho_0$, see the subsequent section for a brief summary and Refs.~\cite{Dubail2017}-\cite{StefanoJerome} for a comprehensive discussion of the two cases.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{disegno-1.pdf}
\caption{Illustration of the Fermi contour $\Gamma_t$ at $t>0$ (left panel) and at $t=0$ (right panel) for the cases $\rho_0=1/2,1$. It is shown that a bi-partition $A=[-L,x]$ at a time $t>0$ can be encoded by the coordinates $\theta_{1,2}$ along the Fermi contour, which are then mapped backward in time to the initial Fermi contour where they can be more easily parametrised, see Ref.~\cite{StefanoJerome} for details.}\label{fig:illustration-cut}
\end{figure}
\section{Total R\'enyi entropies and charged moments}\label{sec:charge-mom}
The quantum fluctuating hydrodynamic framework enables us to exactly determine the non-equilibrium dynamics of both the total R\'enyi entropies (first computed in Refs.~\cite{Dubail2017,StefanoJerome}) and of the symmetry-resolved charged moments (cf. Eq.~\eqref{c}) in a similar fashion, as we now discuss.\\
In the original formulation, the essence of this computation for $S_n$ is in fact the determination of the one-point function of a specific field, namely the branch-point twist field $\hat{\cal T}_n$, associated with the permutation symmetry of the $n$ copies of the Luttinger liquid \eqref{LL} in the replica approach, see e.g. Refs.~\cite{cc-04,Cardy2008,Calabrese2009} and the discussion below. Similarly, the extension to symmetry resolution requires the replacement of $\hat{\cal T}_n$ with the recently discovered composite (branch point) twist fields $\hat{\cal T}_{n,\alpha}$ \cite{gs-18}. This field can be regarded as the fusion of the standard branch point twist field $\hat{\cal T}_n$ with a $U(1)$ vertex field $\hat{\cal V}_\alpha$, that is,
\begin{equation}
\hat{\cal T}_{n,\alpha}=\hat{\cal T}_n \times \hat{\cal V}_\alpha\ .
\end{equation}
The vertex operator is associated with the internal symmetry of the non-replicated model and corresponds to the insertion of the flux on one of the Riemann sheets.
Notice that in the absence of flux insertions (i.e., setting $\alpha=0$), the vertex field reduces to the identity, $\hat{\cal V}_0\equiv \mathds{1}$, and we recover the usual twist field.
With these considerations, we can relate the charged moments in Eq.~\eqref{c} to the expectation value of the composite twist field as \cite{gs-18}
\begin{equation}
\label{logZ}
\begin{split}
\log Z_{n,\alpha}(x,t) &\equiv \log\tr\left[\hat\rho_A^n\ e^{\ensuremath{\mathbf{i}}\alpha \hat{N}_A}\right]=\log\left[ \varepsilon(x,t)^{2\Delta_{n,\alpha}}\braket{\hat{\cal T}_{n,\alpha}(x,t)}\right] +\ensuremath{\mathbf{i}} \alpha N_A(x,t)\\[4pt]
&=\log\left( \varepsilon(x,t)^{2\Delta_{n,\alpha}} \left|\frac{\mathrm{d}\theta}{\mathrm{d} x}\right|_{\theta=\theta_1}^{\Delta_{n,\alpha}} \left|\frac{\mathrm{d}\theta}{\mathrm{d} x}\right|_{\theta=\theta_2}^{\Delta_{n,\alpha}} \braket{\hat\tau^+_{n,\alpha}(\theta_1)\hat\tau^-_{n,\alpha}(\theta_2)}\right) +\ensuremath{\mathbf{i}} \alpha N_A(x,t)\,,
\end{split}
\end{equation}
where $\hat\tau_{n,\alpha}^\pm$ are the chiral components of the composite (or standard if $\alpha=0$) branch point twist field $\hat{\cal T}_{n,\alpha}$ living at the boundary points $\theta_{1,2}$ of subsystem $A$ with scaling dimension
\begin{equation}\label{dimensions}
\Delta_{n,\alpha}=\frac{h_n}{2}+\frac{h_\alpha}{2n}
\end{equation}
where
\begin{equation}
h_n=\frac{c}{12}\left(n- n^{-1}\right) , \quad h_\alpha=\frac{\alpha^2}{(2\pi)^2}
\end{equation}
are the scaling dimensions of $\hat{\cal T}_n$ and $\hat{\cal V}_\alpha$ respectively, and the central charge is $c=1$ for the free Fermi gas. The factor $\varepsilon(x,t)$ appearing in Eq.~\eqref{logZ} is a short-distance regularisation, which also guarantees that the quantity on the r.h.s.\ of Eq.~\eqref{logZ} is dimensionless. As was already shown in Refs.~\cite{Dubail2017,StefanoJerome,Scopa2021b,Ruggiero2022,Ares2022,Rottoli2022}, the expression of $\varepsilon$ for a connected Fermi sea is
\begin{equation}\label{cutoff}
\varepsilon(x,t)=\frac{C_{n,\alpha}}{\sin \pi \rho(x,t)}\,,
\end{equation}
where $\rho(x,t)$ is the particle density in Eq.~\eqref{density} and $C_{n,\alpha}$ is a known non-universal constant, see Ref.~\cite{Calabrese2010,Jin2004} and the discussion below.\\
Eq.~\eqref{logZ} is the building block for the calculation of the total and of the symmetry-resolved entropies. For concreteness, we report below the explicit derivation for the saturated case $\rho_0=1$. The same logic then applies to the half-filled case $\rho_0=1/2$, with some additional technicalities for which we refer the interested reader to Ref.~\cite{StefanoJerome}.\\
For $\rho_0=1$, one finds that the coordinate $\theta$ along the Fermi contour can be simply written as
\begin{equation}
\theta \equiv k + \pi
\end{equation}
and, therefore, one obtains
\begin{equation}
\theta_1=2\pi-\arcsin\frac{x}{t}; \qquad \theta_2=\pi+\arcsin\frac{x}{t},
\end{equation}
for the Fermi points. The Weyl factors in Eq.~\eqref{logZ} associated with the change of coordinates read as
\begin{equation}
\left|\frac{\mathrm{d}\theta}{\mathrm{d} x}\right|_{\theta=\theta_{1,2}}=\left(t \sqrt{1-\frac{x^2}{t^2}}\right)^{-1}
\end{equation}
and the two-point correlation function is expressed as
\begin{equation}
\braket{\hat\tau^+_{n,\alpha}(\theta_1)\hat\tau^-_{n,\alpha}(\theta_2)}=\left|2\sin\frac{\theta_1-\theta_2}{2}\right|^{-2\Delta_{n,\alpha}}=\left(2\sqrt{1-\frac{x^2}{t^2}}\right)^{-2\Delta_{n,\alpha}}.
\end{equation}
Finally, using Eq.~\eqref{density}, we write the UV cutoff $\varepsilon$ in Eq.~\eqref{cutoff} explicitly as
\begin{equation}
\label{epsilon}
\varepsilon(x,t)=\frac{C_{n,\alpha}}{\sqrt{1-x^2/t^2}}\,.
\end{equation}
Putting all the elements together, one eventually obtains
\begin{equation}\label{eq-L-saturated}
\log Z_{n,\alpha} = -2\Delta_{n,\alpha} \log\left[2t \left|1-\frac{x^2}{t^2}\right|^{3/2}\right] + \ensuremath{\mathbf{i}} \alpha N_A(x,t)+\Upsilon_{n,\alpha},
\end{equation}
(with $N_A$ given in \eqref{NA}) which we rewrite as
\begin{equation}\label{logZ-res}
\log Z_{n,\alpha} = -2\Delta_{n,\alpha} \log \mathcal{L} (x,t) +\ensuremath{\mathbf{i}} \alpha\rho_0 \ {\cal N}(x,t) + \Upsilon_{n,\alpha} \,,
\end{equation}
in terms of the function $\mathcal{L}(x,t)$, introduced for convenience. Indeed, it is possible to show that the structure in Eq.~\eqref{logZ-res} for the charged moments holds also for the case $\rho_0=1/2$ and that the details of the specific quench protocol under consideration enter only through the definition of ${\cal L}(x,t)$. From Eq.~\eqref{eq-L-saturated} and Refs.~\cite{Dubail2017,StefanoJerome}, we find that
\begin{equation}\label{L}
{\cal L}(x,t)=\begin{cases}
2t \left|1-\frac{x^2}{t^2}\right|^{3/2}; \quad \text{if $\rho_0=1$};\\[10pt]
\frac{2L}{\pi}\sqrt{|\frac{x}{t}-t(1-\frac{x^2}{t^2})|}\times\\
|\sqrt{1+\sqrt{1-\frac{x^2}{t^2}}}-\mathrm{sgn}(x)\sqrt{1-\sqrt{1-\frac{x^2}{t^2}}}| |\sin\frac{\pi(x-t)}{2L}|;\\[3pt]
\text{if $\rho_0=1/2$.}
\end{cases}\end{equation}
The additive constant $\Upsilon_{n,\alpha}$ in Eq.~\eqref{logZ-res} is related to $C_{n,\alpha}$ in Eq.~\eqref{epsilon} as
\begin{equation}
\Upsilon_{n,\alpha}\equiv 2\Delta_{n,\alpha}\log \frac{C_{n,\alpha}}{2}
\end{equation}
and it has been analytically determined in Ref.~\cite{brc-19} by exploiting the Fisher-Hartwig conjecture
\begin{equation}
\Upsilon_{n,\alpha}=\frac{\ensuremath{\mathbf{i}} n}{2} \int_{-\infty}^\infty \mathrm{d} w \ \left(\tanh(\pi w)-\tanh(\pi n w + \ensuremath{\mathbf{i}} \alpha/2)\right) \log\frac{\Gamma(\frac{1}{2}+\ensuremath{\mathbf{i}} w)}{\Gamma(\frac{1}{2}-\ensuremath{\mathbf{i}} w)} -2\Delta_{n,\alpha} \log(2).
\end{equation}
It is then customary to rewrite this constant as
\begin{equation}\label{non-uni-cst}
\Upsilon_{n,\alpha}= \Upsilon_n + \alpha^2 \nu_n +e_{n,\alpha},
\end{equation}
with
\begin{equation}
\nu_n=-\frac{\log(2)}{4\pi^2 n}+\frac{\ensuremath{\mathbf{i}} n}{8}\int_{-\infty}^{\infty} \mathrm{d} w \left(\tanh^3(\pi n w) -\tanh(\pi n w)\right) \log\frac{\Gamma(\frac{1}{2}+\ensuremath{\mathbf{i}} w)}{\Gamma(\frac{1}{2}-\ensuremath{\mathbf{i}} w)},
\end{equation}
and $e_{n,\alpha}= {\cal O}(\alpha^4)$ (cf.~Ref.~\cite{brc-19}), such that the first term $\Upsilon_n\equiv \Upsilon_{n,0}$ reproduces the non-universal constant obtained in Refs.~\cite{Calabrese2010,Jin2004} in the absence of fluxes ($\alpha=0$).\\
The total R\'enyi entropies are recovered from Eq.~\eqref{logZ} by including the prefactor $1/(1-n)$ and by setting the flux $\alpha=0$ \cite{Dubail2017,StefanoJerome}, i.e.,
\begin{equation}\label{tot-RE}\begin{split}
S_n(x,t)&=\frac{1}{1-n} \log Z_{n,\alpha}\Big\vert_{\alpha=0}=\frac{n+1}{12n}\log{\cal L}(x,t) + \frac{\Upsilon_{n}}{1-n}.
\end{split}
\end{equation}
Notice that the constant $\Upsilon_n/(1-n)$ is related to $C_{n, 0}$ in \eqref{epsilon} as
\begin{equation}
\frac{\Upsilon_n}{1-n}=-\frac{n+1}{12n}\log\frac{C_{n,0}}{2},
\end{equation}
and gives for $n\to 1$
\begin{equation}
\lim_{n\to 1} \frac{\Upsilon_n}{1-n}= \frac{\widetilde\Upsilon +\log(2)/3}{2}\,,
\end{equation}
where $\widetilde\Upsilon\approx 0.49502$ is the Korepin-Jin constant \cite{Jin2004}, consistently with the known results for the total entropy \cite{StefanoJerome,Scopa2021b,Eisler2021}. In Fig.~\ref{fig:tot-EE}, we show the exact numerical results for the total von Neumann entanglement entropy alongside the hydrodynamic formula in Eq.~\eqref{tot-RE} for the cases $\rho_0=1/2,1$ respectively. The agreement of the hydrodynamic prediction with the data is remarkably good.
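The $\rho_0=1$ branch of this prediction is simple enough to be coded directly; the sketch below (ours; the non-universal additive constant is dropped for simplicity) checks that the entropy grows with the prefactor $(n+1)/(12n)$.

```python
import numpy as np

# Hydrodynamic scale L(x,t) for rho_0 = 1 and the Renyi entropy of Eq. (tot-RE)
# without the non-universal additive constant.
def calL(x, t):
    return 2 * t * abs(1 - (x / t) ** 2) ** 1.5

def S_n(x, t, n):
    return (n + 1) / (12 * n) * np.log(calL(x, t))

assert abs(calL(0.0, 10.0) - 20.0) < 1e-12           # L(0,t) = 2t at the junction
for n in (1, 2, 3):
    growth = S_n(0.0, 200.0, n) - S_n(0.0, 100.0, n)
    assert abs(growth - (n + 1) / (12 * n) * np.log(2.0)) < 1e-12
```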
\begin{figure}[t]
\centering
(a) \hspace{3cm} (b) \\
\includegraphics[width=\textwidth]{total-entropy.pdf}
\caption{Total von Neumann entanglement entropy for (a) $\rho_0=1/2$ and (b) $\rho_0=1$. The solid line shows the analytical prediction in Eq.~\eqref{tot-RE} provided by quantum fluctuating hydrodynamics while symbols show the numerical data obtained for a lattice of $300$ sites. Although the two settings are characterised by the same semi-classical description (up to a rescaling), cf.~Fig.~\ref{fig1}, the entanglement properties are very different. In both the cases (a)-(b), the hydrodynamic prediction is very accurate.}\label{fig:tot-EE}
\end{figure}
\section{Symmetry-resolved R\'enyi entropies}\label{sec:SRRE}
We now move towards the calculation of the symmetry-resolved R\'enyi entropies, starting from the charged moments that we computed in the previous section.\\
First, let us write down explicitly the real part of the charged moments in Eq.~\eqref{logZ-res}
\begin{equation}\label{re-mom}
{\rm Re}\log Z_{n,\alpha}(x,t)=-2\Delta_{n,\alpha} \log{\cal L}(x,t)+ \Upsilon_{n,\alpha}
\end{equation}
which at half system and for large times becomes
\begin{equation}\label{exp-ReLogZ}
{\rm Re}\log Z_{n,\alpha}(0,t)\sim -2(2-\rho_0)\Delta_{n,\alpha} \log t + \delta_{n,\alpha}(\rho_0)
\end{equation}
with $\delta_{n,\alpha}(\rho_0)\equiv\Upsilon_{n,\alpha}-2\Delta_{n,\alpha} \log(2^{\rho_0})$, and it displays a logarithmic growth for both $\rho_0=\{1,1/2\}$, see Fig.~\ref{fig:charged-mom-2}. The imaginary part reads instead
\begin{equation}
{\rm Im}\log Z_{n,\alpha}(x,t)= \alpha \rho_0 \ {\cal N}(x,t)\,,
\end{equation}
and at half-system it decreases linearly in time
\begin{equation}\label{exp-ImLogZ}
{\rm Im}\log Z_{n,\alpha}(0,t)= \alpha\rho_0 \left(L-t/\pi\right).
\end{equation}
In Figs.~\ref{fig:charged-mom}-\ref{fig:charged-mom-3}, the above predictions for $Z_{n,\alpha}$ given by quantum fluctuating hydrodynamics are tested against exact lattice calculations. In particular, Fig.~\ref{fig:charged-mom} shows the real part of the charged moments as function of $x$ at different $t$ and $\alpha$, while Fig.~\ref{fig:charged-mom-2} is an analysis of the logarithmic growth in Eq.~\eqref{exp-ReLogZ} observed at half system. Finally, Fig.~\ref{fig:charged-mom-3} contains the result for the imaginary part of $Z_{n,\alpha}$. In all cases, the hydrodynamic results are found to be in very good agreement with the data.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{re-charge-mom.pdf}
\caption{Real part of the charged moments in Eq.~\eqref{re-mom} as function of the cutting position $x$ for $n=1$, different values of $\alpha$ (see plots legend) and at different times $t/L=0.5,0.67,0.8$ from the leftmost to the rightmost panel. The top (bottom) row shows the results for a half (fully) filled initial state $\rho_0=1/2$ ($\rho_0=1$). In each plot, the symbols show the numerical data obtained for a lattice of $300$ sites while the solid line is the analytical prediction in \eqref{re-mom}; the vertical lines mark the light cone position $|x|=t$.}\label{fig:charged-mom}
\end{figure}
\begin{figure}[t!]
\centering
(a) \hspace{5cm} (b) \\
\includegraphics[width=0.8\textwidth]{re-charge-half-sys-v2.pdf}
\caption{Half system behaviour of the real part of the charged moments as function of time for different values of $n$ and $\alpha$ (see plots legend) for (a) half-filled initial state $\rho_0=1/2$; (b) fully-filled initial state $\rho_0=1$. The analytical prediction in Eq.~\eqref{exp-ReLogZ} ({\it thick solid line}) is compared with exact lattice calculations ({\it symbols}) obtained for a lattice of $300$ sites.}\label{fig:charged-mom-2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{im-charge-mom-v2.pdf}
\caption{Imaginary part of the charged moments as function of time for different cutting position $x$ (panels) and values of $\alpha$, $n$ (see plots legend). The hydrodynamic prediction in Eq.~\eqref{exp-ImLogZ} is found in agreement with exact numerical data obtained for a system of size $300$.}\label{fig:charged-mom-3}
\end{figure}
\subsection{Fourier transform of the charged moments}
The next step is to compute the Fourier transform of the charged moments, yielding the symmetry-resolved partition function \eqref{SRPartitionFunction}, which we denote by $\mathcal{Z}_{n,N}(x,t)$ to stress the space-time dependence.
Explicitly, one obtains for ${\cal Z}_{n,N}$
\begin{equation}\label{ZnN}\begin{split}
{\cal Z}_{n,N}(x,t)&=\int_{-\pi}^\pi \frac{\mathrm{d}\alpha}{2\pi} \ e^{-\ensuremath{\mathbf{i}} N \alpha} Z_{n,\alpha}(x,t)\\
&=Z_{n,0}(x,t) \int_{-\pi}^\pi \frac{\mathrm{d}\alpha}{2\pi} e^{-\ensuremath{\mathbf{i}}\alpha(N-N_A(x,t))} e^{-b_n(x,t) \alpha^2 + e_{n,\alpha}}
\end{split}\end{equation}
where
\begin{equation}\label{b-def}
b_n(x,t)=\frac{1}{4\pi^2 n} \log{\cal L}(x,t) - \nu_n
\end{equation}
and
\begin{equation}
Z_{n,0}(x,t)=\exp\left(-h_n \log{\cal L}(x,t) + \Upsilon_n\right).
\end{equation}
The computation of the integral in Eq.~\eqref{ZnN} is performed using the saddle-point approximation, which in our case reduces to a Gaussian (quadratic-order) integral. In particular, this amounts to ignoring the contribution of $\exp(e_{n,\alpha})$ (since $\exp(e_{n,\alpha})=1+{\cal O}(\alpha^4)$, cf. Ref.~\cite{brc-19}), hence obtaining
\begin{equation}\label{Zfn}
{\cal Z}_{n,N}(x,t)= \frac{Z_{n,0}(x,t)}{\sqrt{4\pi b_n(x,t)}} \exp\left(-\frac{(N-N_A(x,t))^2}{4 b_n(x,t)}\right).
\end{equation}
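The quality of this Gaussian approximation is easy to probe numerically: the sketch below (ours, with an illustrative width $b=5$) compares the exact Fourier integral of a purely Gaussian charged moment with the closed form of Eq.~\eqref{Zfn}.

```python
import numpy as np

# Exact Fourier transform of exp(-b alpha^2) over [-pi, pi] versus the
# saddle-point (Gaussian) result exp(-dN^2/(4b)) / sqrt(4 pi b).
def fourier_exact(dN, b, n=400001):
    a = np.linspace(-np.pi, np.pi, n)
    vals = np.exp(-1j * a * dN - b * a ** 2)
    return np.real(vals.mean())     # uniform-grid estimate of int da/(2 pi)

def fourier_saddle(dN, b):
    return np.exp(-dN ** 2 / (4 * b)) / np.sqrt(4 * np.pi * b)

for dN in (0, 1, 2):
    assert abs(fourier_exact(dN, 5.0) - fourier_saddle(dN, 5.0)) < 1e-6
```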
It is useful to comment on the validity of this approximation, for which one can use an argument analogous to that of Ref.~\cite{SGSRE}. In our case, one can identify the small parameter $\varepsilon$ of Ref.~\cite{SGSRE} with the inverse of $b_n(x,t)$, which indeed becomes small as time progresses. The comparison with Ref.~\cite{SGSRE} tells us that neglecting $\exp(e_{n,\alpha})$ is legitimate if $b_n(x,t)\gg1$ and, more importantly,
\begin{equation}
(N-N_A(x,t))^2 \ll b_n(x,t)\,,
\end{equation}
that is, $(N-N_A(x,t))^2$ need not vanish, but it must remain small compared to $b_n(x,t)$. In Fig.~\ref{fig:Zfn-2}, we show the probabilities ${\cal Z}_{1,N}$ at half system as function of the charge imbalance $\Delta N\equiv N-N_A(x,t)$ at different times. Notice that, due to particle number conservation, the width $b_n(x,t)$ of the Gaussian in \eqref{Zfn} remains rather small, growing only logarithmically in time
(for instance, at half system and large times, $b_n(0,t) \propto \log{\cal L}(0,t)\sim \log t$), and therefore the fluctuations with $|\Delta N| \gtrsim 2$ particles are strongly suppressed.
In Fig.~\ref{fig:Zfn}, ${\cal Z}_{1,N}$ is visualised as function of $x$ for different choices of $N$ and $t$, and compared with exact lattice numerics.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{ZFn-dN.pdf}
\caption{Symmetry-resolved partition function in Eq.~\eqref{ZnN} at half system ($x=0$) as function of $\Delta N$, at $n=1$ and at different times $t/L=0.33,0.8$ (see plots legend). In each plot, the symbols show the numerical data obtained for a lattice of $300$ sites while the solid line is the analytical prediction in \eqref{Zfn}.}\label{fig:Zfn-2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Zfn_final.pdf}
\caption{Symmetry-resolved partition function in Eq.~\eqref{ZnN} as function of the cutting position $x$ for $n=1$, different values of $\Delta N=0,\pm 1$ (see plots legend) and at different times $t/L=0.5,0.67,0.8$ from the leftmost to the rightmost panel. The top (bottom) row shows the results for a half (fully) filled initial state $\rho_0=1/2$ ($\rho_0=1$). In each plot, the symbols show the numerical data obtained for a lattice of $300$ sites while the solid line is the analytical prediction in \eqref{Zfn}; the vertical lines mark the light cone position $|x|=t$.}\label{fig:Zfn}
\end{figure}
\subsection{Symmetry-resolved R\'enyi entropies}
Finally, from the symmetry-resolved partition function in Eq.~\eqref{Zfn}, the symmetry-resolved R\'enyi entropies are straightforwardly obtained as (cf.~Eq.~\eqref{SRRE2})
\begin{equation}\begin{split}\label{SRRE-fin}
S_{n,N}(x,t)&\equiv\frac{1}{1-n}\log\left[\frac{{\cal Z}_{n,N}(x,t)}{({\cal Z}_{1,N}(x,t))^n}\right]=S_n(x,t)+ \frac{1}{1-n} \left[\frac{n(N-N_A(x,t))^2}{4b_1(x,t)}-\frac{(N-N_A(x,t))^2}{4b_n(x,t)}\right]\\[4pt]
&\qquad -\log(2\sqrt{\pi})+\frac{1}{1-n}\log\left[\frac{b_1(x,t)^{n/2}}{b_n(x,t)^{1/2}}\right].
\end{split}\end{equation}
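The algebra leading to Eq.~\eqref{SRRE-fin} can be verified numerically with arbitrary stand-in values for $\log{\cal L}$, $\Upsilon_n$, $\nu_n$ and $N-N_A$ (all chosen here purely for illustration):

```python
import numpy as np

# Consistency check: (1/(1-n)) log[Z_{n,N}/Z_{1,N}^n] versus the expanded
# right-hand side of Eq. (SRRE-fin). All numerical values are stand-ins.
logL, dN, n = 7.0, 0.5, 3
h   = lambda m: (m - 1.0 / m) / 12.0                  # h_n at c = 1
Ups = lambda m: 0.2 * (m - 1.0)                       # any Upsilon_n, Upsilon_1 = 0
b   = lambda m: logL / (4 * np.pi ** 2 * m) - 0.05    # Eq. (b-def) with nu_n = 0.05
Z0  = lambda m: np.exp(-h(m) * logL + Ups(m))
g   = lambda m: np.exp(-dN ** 2 / (4 * b(m))) / np.sqrt(4 * np.pi * b(m))

direct = np.log(Z0(n) * g(n) / (Z0(1) * g(1)) ** n) / (1 - n)
S_n = np.log(Z0(n) / Z0(1) ** n) / (1 - n)
expanded = (S_n
            + (n * dN ** 2 / (4 * b(1)) - dN ** 2 / (4 * b(n))) / (1 - n)
            - np.log(2 * np.sqrt(np.pi))
            + np.log(b(1) ** (n / 2) / np.sqrt(b(n))) / (1 - n))
assert abs(direct - expanded) < 1e-12
```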
At this point, we wish to consider the analytic continuation of $S_{n,N}$ in \eqref{SRRE-fin} for $n\to1$ and obtain a closed expression for the symmetry-resolved von Neumann entropy. To this end, we write the symmetry-resolved von Neumann entropy as $S_{1,N}=-\partial_n \left[\mathcal{Z}_{n,N}/\mathcal{Z}_{1,N}^{n}\right]\Big\vert_{n=1}$ and differentiate the quantity $-\mathcal{Z}_{n,N}/\mathcal{Z}_{1,N}^{n}$ with respect to $n$, exploiting the product form $\mathcal{Z}_{n,N}\equiv {Z}_{n,0}\times g_{n,N}$
with
\begin{equation}
g_{n,N}(x,t)=\frac{\exp\left(-\frac{(N-N_{A}(x,t))^{2}}{4b_{n}(x,t)}\right)}{\sqrt{4\pi b_n(x,t)}} \ .
\end{equation}
Doing so, the analytic continuation of the symmetry-resolved von Neumann entropy takes the form
\begin{equation}
\begin{split}S_{1,N}(x,t) & =\log {Z}_{1,0}+\log g_{1,N}-\frac{g_{n,N}\ \partial_{n}{Z}_{n,0}+{Z}_{1,0} \ \partial_{n}g_{n,N}}{{Z}_{1,0} \ g_{1,N}}\\[5pt]
& =S_{1}(x,t)+\log g_{1,N}(x,t)-\frac{\partial_{n}g_{1,N}(x,t)}{g_{1,N}(x,t)}
\end{split}
\end{equation}
since ${Z}_{1,0}\equiv 1$ by construction, even in the non-homogeneous quench setting under analysis. This means that the symmetry-resolved von Neumann entropy can eventually be expressed as
\begin{equation}
\label{SRvNFinal}
\begin{split} S_{1,N}(x,t) &= S_{1}(x,t)-\frac{(N-N_{A}(x,t))^{2}}{4b_{1}(x,t)}-\log\left[(4\pi b_{1}(x,t))^{1/2}\right]\\[4pt]
& -b'_{1}(x,t) \frac{(N-N_{A}(x,t))^{2}-2b_{1}(x,t)}{4b_{1}(x,t)^{2}},
\end{split}
\end{equation}
where $b'_{1}(x,t)$ denotes the derivative of $b_n$ with respect to $n$ evaluated at $n=1$, i.e.,
\begin{equation}
b'_{1}(x,t)\equiv \partial_n b_n(x,t) \Big\vert_{n=1} =-(b_{1}(x,t)+\nu_1+\nu'_1)\,,
\end{equation}
following directly from the definition of $b_n$ in \eqref{b-def}. We recall that $\nu_n$ is related to the non-universal constant appearing in Eq.~\eqref{non-uni-cst}.\\
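As a consistency check on the Gaussian form above, the weights $g_{1,N}$ behave as probabilities over the integer charge sectors and should sum to one. The following sketch (with illustrative values of $N_A$ and $b_1$, not taken from the text) evaluates $g_{n,N}$ and verifies the normalization; by Poisson summation, the deviation from one is exponentially small once $b_1=O(1)$.

```python
import numpy as np

def g_weight(N, N_A, b_n):
    """Gaussian weight g_{n,N}: exp(-(N - N_A)^2 / (4 b_n)) / sqrt(4 pi b_n)."""
    return np.exp(-(N - N_A) ** 2 / (4.0 * b_n)) / np.sqrt(4.0 * np.pi * b_n)

# at n = 1 the weights g_{1,N} are sector probabilities, so summing over
# integer N should give ~1 (the Poisson-summation correction is ~exp(-4 pi^2 b_1))
N = np.arange(-60, 61)
total = g_weight(N, N_A=0.3, b_n=2.0).sum()
```

The illustrative values $N_A=0.3$, $b_1=2.0$ are arbitrary; any $b_1$ of order one gives normalization to machine precision.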
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{SRRE_final.pdf}
\caption{Symmetry-resolved von Neumann entropy in Eq.~\eqref{SRvNFinal} as a function of the cutting position $x$ at different values of $\Delta N=0,\pm 1$ (see legends) and at different times $t/L=0.5,0.67,0.8$ from the leftmost to the rightmost panel. The top (bottom) row shows the results for a half (fully) filled initial state $\rho_0=1/2$ ($\rho_0=1$). In each plot, the symbols show the numerical data obtained for a lattice of $300$ sites while the solid line is the analytical prediction in \eqref{SRvNFinal}; the vertical lines mark the light cone position $|x|=t$. The additive constants are fitted to the numerics at half system.}\label{fig:SRRE}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{ent_conf_num.pdf}
\caption{Configurational entropy $S^{(c)}$ and number entropy $S^{(n)}$ as a function of the cutting position $x$ at time $t/L=0.67$, for different initial states $\rho_0=1,1/2$ (see legend). The data are obtained for a system of $300$ sites with exact lattice numerics, retaining only terms with $|\Delta N|\leq 2$ in Eq.~\eqref{Sc-Sn}. The sum of the two contributions (symbols) is in remarkable agreement with the hydrodynamic prediction in Eq.~\eqref{tot-RE} (solid line).}\label{fig:Sc_Sn}
\end{figure}
From the analytic expressions of the symmetry-resolved von Neumann entropy in Eq.~\eqref{SRvNFinal}, one can easily investigate the limit $b_{1}(x,t)\rightarrow\infty$, which physically corresponds to a long time limit beyond the Euler scaling regime.
Expanding Eq.~\eqref{SRvNFinal} in such a $b_{1}(x,t)\rightarrow\infty$ limit, we find that
\begin{equation}
\begin{split}S_{1,N}(x,t)= & S_{1}(x,t)-\log\left[(4\pi b_{1}(x,t))^{1/2}\right]-\frac{1}{2}-\frac{2(\nu_1+\nu'_1)}{4b_{1}(x,t)}\\
& +\frac{\left(\nu_1+\nu'_1\right)(N-N_{A}(x,t))^{2}}{b_{1}(x,t)^{2}}+\mathcal{O}\left(b_{1}(x,t)^{-3}\right)\,,
\end{split}
\end{equation}
and we recall that the validity of the above expression requires $\Delta N^2\ll b_1(x,t)$ (notice that $\nu_1+\nu'_1$ is non-vanishing). Since $b_{1}(x,t)\propto S_{1}(x,t)$, we conclude that the equipartition of entanglement among the symmetry sectors is asymptotically restored according to
\begin{equation}\label{deltaS}
\delta S_{1,N}(x,t) \sim \frac{(N-N_{A}(x,t))^{2}}{S_{1}(x,t)^{2}},
\end{equation}
with a non-trivial prefactor that depends on non-universal quantities. We finally notice that the total von Neumann entropy profile in Eq.~\eqref{tot-RE} can be recovered, for each position $x$ and time $t$, as
\begin{equation}\label{Sc-Sn}
S_1(x,t)=\sum_{N}{\cal Z}_{1,N}(x,t) S_{1,N}(x,t) - \sum_{N} {\cal Z}_{1,N}(x,t)\log {\cal Z}_{1,N}(x,t)\equiv S^{(c)} + S^{(n)}.
\end{equation}
The two terms appearing in the sum are known as configurational entanglement entropy $S^{(c)}$, measuring the total entropy due to each charge sector, and the number entropy $S^{(n)}$, which accounts for the entropy due to the charge fluctuations among different sectors, see e.g. \cite{Lukin2019}. In Fig.~\ref{fig:Sc_Sn}, we show these two contributions and we compare their sum to the hydrodynamic prediction in Eq.~\eqref{tot-RE}. Notice that the oscillations observed in $S^{(c)}$ and $S^{(n)}$ (coming from those of the charged moments, cf.~Fig.~\ref{fig:charged-mom}) nicely disappear once the two contributions are summed.
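The decomposition in Eq.~\eqref{Sc-Sn} is straightforward to sketch numerically. The snippet below (with made-up sector weights ${\cal Z}_{1,N}$ and entropies $S_{1,N}$, chosen only for illustration) computes the configurational and number contributions:

```python
import numpy as np

def entropy_decomposition(Z, S):
    """Split the total entropy S_1 = S^(c) + S^(n).

    Z : symmetry-resolved probabilities Z_{1,N} (should sum to ~1)
    S : symmetry-resolved entropies S_{1,N}, one per charge sector
    """
    Z = np.asarray(Z, dtype=float)
    S = np.asarray(S, dtype=float)
    S_conf = np.sum(Z * S)                        # weighted entropy per sector
    mask = Z > 0                                  # avoid log(0)
    S_num = -np.sum(Z[mask] * np.log(Z[mask]))    # Shannon entropy of the weights
    return S_conf, S_num

# toy example: three charge sectors with Gaussian-like weights
Z = [0.25, 0.5, 0.25]
S = [0.8, 1.2, 0.8]
Sc, Sn = entropy_decomposition(Z, S)
```

In a realistic calculation, `Z` would be the Gaussian weights of Eq.~\eqref{Zfn} and `S` the symmetry-resolved entropies of Eq.~\eqref{SRvNFinal}.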
\section{Summary and conclusions}\label{sec:conclusions}
We considered a one-dimensional gas of non-interacting fermions initially prepared in a bi-partite state $\ket{\Omega}$, characterised by the absence of particles on the right part ($j\geq 0$) of the chain and by a left part ($j<0$) filled with density $\rho_0=1/2$ or $1$. We subsequently let $\ket{\Omega}$ evolve unitarily with the hopping Hamiltonian in Eq.~\eqref{model} and we studied the non-equilibrium dynamics after the quench in the Euler hydrodynamic limit of large space-time scales $j,t\to \infty$ at fixed $j/t$, see Sec.~\ref{SemiClassicalHydroDescription} for details.
For this prototypical model of an inhomogeneous quench setting, the non-equilibrium dynamics of conserved charges was determined long ago (see e.g. Ref.~\cite{Antal1999,Antal2008,Karevski2002,Platini2005,Rigol2015,Rigol2004,Platini2007}) and recently complemented by results on the dynamics of the total entanglement (Ref.~\cite{Dubail2017,StefanoJerome}) and on the entanglement Hamiltonian \cite{Rottoli2022}, obtained through quantum generalised hydrodynamics. In this manuscript, we completed this study of entanglement by providing a careful analysis of the symmetry-resolved R\'enyi entropies as a function of time and of the entangling position along the inhomogeneous system, see Sec.~\ref{sec:charge-mom} and \ref{sec:SRRE}. We found that the charged moments at half system display a logarithmic growth in time (see Eq.~\eqref{exp-ReLogZ} and Fig.~\ref{fig:charged-mom-2}) and that the symmetry-resolved von Neumann entropy is distributed among symmetry sectors with equal weights, up to corrections that scale as the inverse of the square of the total entanglement (see Eq.~\eqref{deltaS}). Our analytical results for symmetry-resolved quantities are based on quantum generalised hydrodynamics and have been checked against exact lattice numerics (see \ref{app:NUM} for details on the implementation), finding very good agreement between the hydrodynamic predictions and the data.
\\
Besides the intrinsic interest of our results for the initial bi-partite state, our work aims to connect two active branches of research on entanglement, namely symmetry resolution and quantum generalised hydrodynamics. Indeed, our discussion in Sec.~\ref{sec:charge-mom} and \ref{sec:SRRE} has general validity and can be straightforwardly extended to the study of symmetry-resolved quantities in any inhomogeneous quench setting that is accessible with quantum generalised hydrodynamics, opening the door to several subsequent analyses. For instance, it would be interesting to consider the symmetry resolution in a quartic-to-quadratic quench protocol (see e.g. \cite{Kinoshita2006}), whose total entanglement has been calculated recently in Ref.~\cite{Ruggiero2022} and realised in Ref.~\cite{Schemmer2019} with rubidium atom chips for an experimental test of the hydrodynamic results on conserved charges.
\vspace{0.5cm}
{\it Acknowledgments --- } The authors acknowledge support from ERC under Consolidator grant number 771536 (NEMO). We are very thankful to Pasquale Calabrese for discussions on the project at various stages of its development and for valuable comments on the manuscript. We acknowledge Riccarda Bonsignori, Sara Murciano and Filiberto Ares for useful discussions and remarks on the manuscript.
\section{Introduction}
\subsection{Applications of the Adjoint Equations}
The solution of many nonlinear problems involves successive linearization, and as such variational equations and their adjoints play a critical role in a variety of applications. Adjoint equations are of particular interest when the parameter space is of significantly higher dimension than that of the output or objective. In particular, adjoint equations arise in sensitivity analysis~\cite{Ca1981, CaLiPeSe2003}, adaptive mesh refinement~\cite{LiPe2003}, uncertainty quantification~\cite{WaDuAlIa2012}, automatic differentiation~\cite{Gr2003}, superconvergent functional recovery~\cite{PiGi2000}, optimal control~\cite{Ro2005}, optimal design~\cite{GiPi2000}, optimal estimation~\cite{NgGeBe2016}, and deep learning viewed as an optimal control problem~\cite{DeCeEhOwSc2019}.
The study of geometric aspects of adjoint systems arose from the observation that the combination of any system of differential equations and its adjoint equations is described by a formal Lagrangian~\cite{Ib2006, Ib2007}. This naturally leads to the question of when the formation of adjoints and discretization commute~\cite{SoTz1997}; prior work on this includes the Ross--Fahroo lemma~\cite{RoFa2001} and the observation by \citet{Sa2016} that adjoints and discretization commute if and only if the discretization is symplectic.
\subsection{Symplectic and Presymplectic Geometry}\label{SymplecticGeometrySection}
Throughout the paper, we will assume that all manifolds and maps are smooth, unless otherwise stated. Let $(P,\Omega)$ be a (finite-dimensional) symplectic manifold, i.e., $\Omega$ is a closed nondegenerate two-form on $P$. Given a Hamiltonian $H: P \rightarrow \mathbb{R}$, the Hamiltonian system is defined by
$$ i_{X_H} \Omega = dH, $$
where the vector field $X_H$ is a section of the tangent bundle to $P$. By nondegeneracy, the vector field $X_H$ exists and is uniquely determined. For an open interval $I \subset \mathbb{R}$, we say that a curve $z: I \rightarrow P$ is a solution of Hamilton's equations if $z$ is an integral curve of $X_H$, i.e., $\dot{z}(t) = X_H(z(t))$ for all $t \in I$.
A particularly important example for our purposes is when the symplectic manifold is the cotangent bundle of a manifold, $P = T^*M$, equipped with the canonical symplectic form $\Omega = dq \wedge dp$ in natural coordinates $(q,p)$ on $T^*M$. A Hamiltonian system has the coordinate expression
\begin{align*}
\dot{q} &= \frac{\partial H(q,p)}{\partial p}, \\
\dot{p} &= - \frac{\partial H(q,p)}{\partial q}.
\end{align*}
By Darboux's theorem, any symplectic manifold is locally symplectomorphic to a cotangent bundle equipped with its canonical symplectic form. As such, any Hamiltonian system can be locally expressed in the above form (even when $P$ is not a cotangent bundle), using Darboux coordinates.
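As a minimal numerical illustration of Hamilton's equations in canonical coordinates, the following sketch integrates a hypothetical harmonic-oscillator Hamiltonian $H(q,p)=(q^2+p^2)/2$ with the symplectic Euler method; the near-conservation of $H$ over long times reflects the symplecticity of the exact flow.

```python
# Hamilton's equations qdot = dH/dp, pdot = -dH/dq in canonical coordinates,
# integrated with the symplectic Euler method for an illustrative
# harmonic-oscillator Hamiltonian H(q,p) = (p^2 + q^2)/2.
def dH_dq(q, p):
    return q

def dH_dp(q, p):
    return p

def symplectic_euler(q, p, h, steps):
    for _ in range(steps):
        p = p - h * dH_dq(q, p)   # explicit for separable H
        q = q + h * dH_dp(q, p)   # uses the updated momentum
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = symplectic_euler(q0, p0, h=0.01, steps=1000)
H0 = 0.5 * (q0**2 + p0**2)
H1 = 0.5 * (q1**2 + p1**2)
```

The energy error stays bounded (here below $10^{-2}$ after $1000$ steps) rather than drifting, which is the characteristic behavior of a symplectic integrator.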
We now consider the generalization of Hamiltonian systems where we relax the condition that $\Omega$ is nondegenerate, i.e., presymplectic geometry. Let $(P,\Omega)$ be a presymplectic manifold, i.e., $\Omega$ is a closed two-form on $P$ with constant rank. As before, given a Hamiltonian $H: P \rightarrow \mathbb{R}$, we define the associated Hamiltonian system as
$$ i_{X_H}\Omega = dH. $$
Note that since $\Omega$ is now degenerate, $X_H$ is not guaranteed to exist and if it does, it need not be unique and in general is only partially defined on a submanifold of $P$. Again, we say a curve on $P$ is a solution to Hamilton's equations if it is an integral curve of $X_H$. Using Darboux coordinates $(q,p,r)$ adapted to $(P,\Omega)$, where $\Omega = dq \wedge dp$ and $\ker(\Omega) = \text{span}\{\partial/\partial r\}$, the local expression for Hamilton's equations is given by
\begin{align*}
\dot{q} &= \frac{\partial H(q,p,r)}{\partial p}, \\
\dot{p} &= -\frac{\partial H(q,p,r)}{\partial q}, \\
0 &= \frac{\partial H(q,p,r)}{\partial r}.
\end{align*}
The third equation above is interpreted as a constraint equation which any solution curve must satisfy. We will assume that the constraint defines a submanifold of $P$. It is clear that in order for a solution vector field $X_H$ to exist, it must be restricted to lie on this submanifold. However, in order for its flow to remain on the submanifold, it must be tangent to this submanifold, which further restricts where $X_H$ can be defined. Alternately restricting to satisfy these two conditions yields the presymplectic constraint algorithm of \citet{GoNeHi1978}. The presymplectic constraint algorithm begins with the observation that if $X$ satisfies the above system, then so does $X+Z$ for any $Z \in \text{ker}(\Omega)$. In order to obtain such a vector field $X$, one considers the subset $P_1$ of $P$ such that $Z_p(H) = 0$ for any $Z \in \text{ker}(\Omega), p \in P_1$. We will assume that the set $P_1$ is a submanifold of $P$, which we refer to as the primary constraint manifold. In order for the flow of the resulting Hamiltonian vector field $X$ to remain on $P_1$, one further requires that $X$ is tangent to $P_1$. The set of points satisfying this property defines a subsequent secondary constraint submanifold $P_2$. Iterating this process, one obtains a sequence of submanifolds
$$ \dots \rightarrow P_k \rightarrow \dots \rightarrow P_1 \rightarrow P_0 \equiv P, $$
defined by
\begin{equation}\label{Presymplectic Constraint Algorithm Sequence}
P_{k+1} = \{ p \in P_k : Z_p(H_k) = 0 \text{ for all } Z \in \text{ker}(\Omega_k)\},
\end{equation}
where
\begin{align*}
\Omega_{k+1} &= \Omega_k|_{P_{k+1}}, \\
H_{k+1} &= H_k|_{P_{k+1}}.
\end{align*}
If there exists a nontrivial fixed point in this sequence, i.e., a submanifold $P_k$ of $P$ such that $P_{k} = P_{k+1}$, we refer to $P_{k}$ as the final constraint manifold. If such a fixed point exists, we denote by $\nu_P$ the minimum integer such that $P_{\nu_P} = P_{\nu_P+1}$, i.e., $\nu_P$ is the number of steps necessary for the presymplectic constraint algorithm to terminate. If such a final constraint manifold $P_{\nu_P}$ exists, there always exists a solution vector field $X$ defined on and tangent to $P_{\nu_P}$ such that $i_X \Omega_{\nu_P} = dH_{\nu_P}$ and $X$ is unique up to the kernel of $\Omega_{\nu_P}$. Furthermore, such a final constraint manifold is maximal in the sense that if there exists a submanifold $N$ of $P$ which admits a vector field $X$ defined on and tangent to $N$ such that $i_X\Omega|_N = dH|_N$, then $N \subset P_{\nu_P}$ (\citet{GoNe1979}).
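For a linear presymplectic system, the constraint algorithm reduces to linear algebra, and the sequence \eqref{Presymplectic Constraint Algorithm Sequence} can be iterated directly on subspaces. The sketch below does this for an illustrative quadratic Hamiltonian $H(z)=\tfrac12 z^\top A z$ in Darboux coordinates $(q,p,r)$ with $\Omega = dq\wedge dp$; the matrices are assumptions chosen for the example, not taken from the text.

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of M, via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def constraint_algorithm(Omega, A, tol=1e-10):
    """Iterate P_{k+1} = {z in P_k : Z^T A z = 0 for all Z in ker(Omega_k)}
    for the linear model H(z) = z^T A z / 2 with presymplectic matrix Omega.
    Each P_k is a linear subspace represented by an orthonormal basis B."""
    B = np.eye(Omega.shape[0])          # basis of P_0 = the whole space
    while True:
        O_k = B.T @ Omega @ B           # restricted presymplectic form
        A_k = B.T @ A @ B               # restricted Hamiltonian matrix
        K = null_basis(O_k, tol)        # kernel directions of Omega_k
        C = K.T @ A_k                   # constraint equations Z^T A_k y = 0
        N = null_basis(C, tol)
        if N.shape[1] == B.shape[1]:    # no new constraints: fixed point
            return B
        B = B @ N                       # descend to the next submanifold

# toy example in coordinates (q, p, r): Omega = dq ^ dp,
# H = (q^2 + p^2)/2 + r q  (an illustrative choice)
Omega = np.array([[0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
B_final = constraint_algorithm(Omega, A)
```

For this example the primary constraint is $q=0$, the secondary constraint is $p=0$, and the algorithm terminates on the one-dimensional subspace spanned by the kernel direction $\partial/\partial r$.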
\subsection{Main Contributions}
In this paper, we explore the geometric properties of adjoint systems associated with ordinary differential equations (ODEs) and differential-algebraic equations (DAEs). For a discussion of adjoint systems associated with ODEs and DAEs, see \citet{Sa2016} and \citet{CaLiPeSe2003}, respectively. In particular, we utilize the machinery of symplectic and presymplectic geometry as a basis for understanding such systems.
In Section \ref{AdjointVectorSpaceSection}, we review the notion of adjoint equations associated with ODEs over vector spaces. We show that the quadratic conservation law, which is the key to adjoint sensitivity analysis, arises from the symplecticity of the flow of the adjoint system. In Section \ref{AdjointManifoldSection}, we investigate the symplectic geometry of adjoint systems associated with ODEs on manifolds. We additionally discuss augmented adjoint systems, which are useful in the adjoint sensitivity of running cost functions. In Section \ref{AdjointDAESection}, we investigate the presymplectic geometry of adjoint systems associated with DAEs on manifolds. We investigate the relation between the index of the base DAE and the index of the associated adjoint system, using the notions of DAE reduction and the presymplectic constraint algorithm. We additionally consider augmented systems for such adjoint DAE systems. For the various adjoint systems that we consider, we derive various quadratic conservation laws which are useful in adjoint sensitivity analysis of terminal and running cost functions. We additionally discuss symmetry properties and present variational characterizations of such systems that provide a useful perspective for constructing geometric numerical methods for these systems.
In Section \ref{ApplicationsSection}, we discuss applications of the various adjoint systems to adjoint sensitivity and optimal control. In Section \ref{AdjointSensitivitySection}, we show how the quadratic conservation laws developed in Section \ref{AdjointSystems Main Section} can be used for adjoint sensitivity analysis of running and terminal cost functions, subject to ODE or DAE constraints. In Section \ref{DiscreteAdjointSystemsSection}, we construct structure-preserving discretizations of adjoint systems using the Galerkin Hamiltonian variational integrator construction of \citet{LeZh2011}. For adjoint DAE systems, we introduce a presymplectic analogue of the Galerkin Hamiltonian variational integrator construction. We show that such discretizations admit discrete analogues of the aforementioned quadratic conservation laws and hence are suitable for the numerical computation of adjoint sensitivities. Furthermore, we show that such discretizations are natural when applied to DAE systems, in the sense that reduction, forming the adjoint system, and discretization all commute (for particular choices of these processes). As an application of this naturality, we derive a variational error analysis result for the resulting presymplectic variational integrator for adjoint DAE systems. Finally, in Section \ref{OCPSection}, we discuss adjoint systems in the context of optimal control problems, where we prove a similar naturality result, in that suitable choices of reduction, extremization, and discretization commute.
By developing a geometric theory for adjoint systems, the application areas that utilize such adjoint systems can benefit from the existing work on geometric and structure-preserving methods.
\subsection{Main Results}
In this paper, we prove that, starting with an index 1 DAE, appropriate choices of reduction, discretization, and forming the adjoint system commute. That is, the following diagram commutes.
\[\small\begin{tikzcd}[column sep=-4ex,row sep=10ex]
\txt{Index 1 DAE} && \txt{ODE} \\
& \txt{Discrete DAE} && \txt{Discrete ODE} \\
\txt{Presymplectic Adjoint\\ DAE System} && \txt{Symplectic Adjoint\\ ODE System} \\
& \txt{Presymplectic Galerkin\\ Hamiltonian Variational \\ Integrator} && \txt{Symplectic Galerkin \\ Hamiltonian Variational \\ Integrator}
\arrow["{\text{Reduce}}", from=1-1, to=1-3]
\arrow["{\text{Reduce}}"{pos=0.25}, from=3-1, to=3-3]
\arrow["{\text{Adjoint}}"'{pos=0.6}, from=1-1, to=3-1]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=1-3, to=3-3]
\arrow["{\text{Reduce}}"{pos=0.3}, from=2-2, to=2-4,crossing over]
\arrow["{\text{Reduce}}"', from=4-2, to=4-4]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=2-2, to=4-2,crossing over]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=2-4, to=4-4]
\arrow["{\text{Discretize}}"', from=1-1, to=2-2]
\arrow["{\text{Discretize}}"{pos=0.6}, from=1-3, to=2-4]
\arrow["{\text{Discretize}}"', from=3-1, to=4-2]
\arrow["{\text{Discretize}}"', from=3-3, to=4-4]
\end{tikzcd}\]
In order to prove this result, we develop along the way the definitions of the various vertices and arrows in the above diagram. Roughly speaking, the four ``Adjoint'' arrows are defined by forming the appropriate continuous or discrete action and enforcing the variational principle; the four ``Reduce'' arrows are defined by solving the algebraic variables in terms of the kinematic variables through the continuous or discrete constraint equations; the two ``Discretize'' arrows on the top face are given by a Runge--Kutta method, while the two ``Discretize'' arrows on the bottom face are given by the associated symplectic partitioned Runge--Kutta method. The above commutative diagram can be understood as an extension of the result of \citet{Sa2016} (that discretization and forming the adjoint of an ODE commute when the discretization is a symplectic Runge--Kutta method) by adding the reduction operation. In order to appropriately define this reduction operation, we will show that the presymplectic adjoint DAE system has index 1 if the base DAE has index 1, so that the reduction of the presymplectic adjoint DAE system results in a symplectic adjoint ODE system; the tool for this will be the presymplectic constraint algorithm.
In the process of defining the ingredients in the above diagram, we will additionally prove various properties of adjoint systems associated with ODEs and DAEs. The key properties that we will prove for such adjoint systems are the adjoint variational quadratic conservation laws, Propositions \ref{ManifoldQuadraticInvariantProp}, \ref{AugmentedQuadraticInvariantProp}, \ref{Quadratic Invariant Adjoint DAE Prop}, \ref{Quadratic Invariant Augmented DAE Adjoint Prop}. As we will show, these conservation laws can be used to compute adjoint sensitivities of running and terminal cost functions under the flow of an ODE or DAE. In order to prove these conservation laws, we will need to define the variational equations associated with an adjoint system. We will define them as the linearization of the base ODE or DAE; for the DAE case, we will show that the variational equations have the same index as the base DAE so that they have the same (local) solvability.
\section{Adjoint Systems}\label{AdjointSystems Main Section}
\subsection{Adjoint Equations on Vector Spaces}\label{AdjointVectorSpaceSection}
In this section, we review the notion of adjoint equations on vector spaces and their properties, as preparation for adjoint systems on manifolds.
Let $Q$ be a finite-dimensional vector space and consider the ordinary differential equation on $Q$ given by
\begin{equation}\label{ODEvectorspace}
\dot{q} = f(q),
\end{equation}
where $f: Q \rightarrow Q$ is a differentiable vector field on $Q$. Let $Df(q)$ denote the linearization of $f$ at $q \in Q$, $Df(q) \in L(Q,Q)$. Denoting its adjoint by $[Df(q)]^* \in L(Q^*,Q^*)$, the adjoint equation associated with \eqref{ODEvectorspace} is given by
\begin{equation}\label{AdjointODEvectorspace}
\dot{p} = -[Df(q)]^* p,
\end{equation}
where $p$ is a curve on $Q^*$.
Let $q^A$ be coordinates for $Q$ and let $p_A$ be the associated dual coordinates for $Q^*$, so that the duality pairing is given by $\langle p,q\rangle = p_Aq^A$. The linearization of $f$ at $q$ is given in coordinates by
$$ (Df(q))^A_B = \frac{\partial f^A(q)}{\partial q^B}, $$
where its action on $v \in Q$ in coordinates is
$$ (Df(q) v)^A = \frac{\partial f^A(q)}{\partial q^B} v^B. $$
Its adjoint then acts on $p \in Q^*$ by
$$ ([Df(q)]^* p)_A = \frac{\partial f^B(q)}{\partial q^A} p_B. $$
Thus, the ODE and its adjoint can be expressed in coordinates as
\begin{align*}
\dot{q}^A &= f^A(q), \\
\dot{p}_A &= - \frac{\partial f^B(q)}{\partial q^A} p_B.
\end{align*}
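A minimal numerical sketch of the adjoint system in these coordinates is the following (the vector field $f$ and the initial data are illustrative assumptions, and the Jacobian $Df$ is approximated by forward differences). The conservation of $H(q,p)=\langle p,f(q)\rangle$ along the flow, established below, provides a convenient correctness check.

```python
import numpy as np

def f(q):
    # illustrative nonlinear field (pendulum-like, not from the text)
    return np.array([q[1], -np.sin(q[0])])

def Df(q, eps=1e-7):
    # forward-difference approximation of (Df(q))^A_B = d f^A / d q^B
    n = q.size
    f0 = f(q)
    J = np.empty((n, n))
    for B in range(n):
        dq = np.zeros(n)
        dq[B] = eps
        J[:, B] = (f(q + dq) - f0) / eps
    return J

def rhs(z):
    # combined adjoint system: qdot = f(q), pdot = -[Df(q)]^T p
    q, p = z[:2], z[2:]
    return np.concatenate([f(q), -Df(q).T @ p])

def rk4_step(z, h):
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2)
    k4 = rhs(z + h * k3)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.array([1.0, 0.0, 0.2, -0.1])      # initial (q, p)
H0 = z[2:] @ f(z[:2])                     # H(q,p) = <p, f(q)>
for _ in range(100):
    z = rk4_step(z, 0.01)
H1 = z[2:] @ f(z[:2])
```

Since the system is autonomous and Hamiltonian, $H$ is conserved along exact trajectories, and a fourth-order integrator preserves it here to well below $10^{-4}$.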
Next, we recall that the combined system \eqref{ODEvectorspace}-\eqref{AdjointODEvectorspace}, which we refer to as the adjoint system, arises from a variational principle. Letting $\langle \cdot,\cdot\rangle$ denote the duality pairing between $Q^*$ and $Q$, we define the Hamiltonian
\begin{align*}
H: Q \times Q^* &\rightarrow \mathbb{R}, \\
(q,p) &\mapsto H(q,p) \equiv \langle p, f(q)\rangle.
\end{align*}
The associated action, defined on the space of curves on $Q \times Q^*$ covering some interval $(t_0,t_1)$, is given by
$$ S[q,p] = \int_{t_0}^{t_1} \left( \langle p,\dot{q}\rangle - H(q,p) \right) dt = \int_{t_0}^{t_1} \left(\langle p,\dot{q}\rangle - \langle p,f(q)\rangle \right) dt. $$
\begin{prop}\label{VariationalPrincipleVectorSpaceCase}
The variational principle $\delta S = 0$, subject to variations $(\delta q,\delta p)$ which fix the endpoints $\delta q(t_0) = 0$, $\delta q(t_1) = 0$, yields the adjoint system \eqref{ODEvectorspace}-\eqref{AdjointODEvectorspace}.
\begin{proof}
Compute the variation of $S$ with respect to a compactly supported variation $(\delta q, \delta p)$,
\begin{align*}
\delta S[q,p] \cdot (\delta q, \delta p) &= \frac{d}{d\epsilon}\Big|_{\epsilon = 0} S[q + \epsilon \delta q, p + \epsilon \delta p] = \int_{t_0}^{t_1} \frac{d}{d\epsilon}\Big|_{\epsilon = 0} \langle p + \epsilon \delta p, \dot{q} + \epsilon \dot{\delta q} - f(q + \epsilon \delta q) \rangle dt \\
&= \int_{t_0}^{t_1} \Big( \langle \delta p, \dot{q} - f(q)\rangle + \langle p, \dot{\delta q} - Df(q) \delta q \rangle \Big) dt \\
&= \int_{t_0}^{t_1} \Big( \langle \delta p, \dot{q} - f(q)\rangle + \langle -\dot{p} - [Df(q)]^* p, \delta q\rangle \Big) dt.
\end{align*}
The fundamental lemma of the calculus of variations then yields \eqref{ODEvectorspace}-\eqref{AdjointODEvectorspace}.
\end{proof}
\end{prop}
\begin{remark}
Note that an analogous statement of Proposition \ref{VariationalPrincipleVectorSpaceCase} can also be stated using the Type II variational principle, where one instead considers the generating function
$$ H_+(q_0,p_1) = \ext \left[ \langle p(t_1), q(t_1)\rangle - \int_{t_0}^{t_1} \left( \langle p,\dot{q}\rangle - H(q,p) \right) dt \right], $$
and one extremizes over $C^2$ curves from $[t_0,t_1]$ to $T^*Q$ such that $q(t_0) = q_0, p(t_1) = p_1$. The Type II variational principle again gives the above adjoint system, but with differing boundary conditions. These boundary conditions are typical in adjoint sensitivity analysis, where one fixes the initial position and the final momenta.
\end{remark}
The variational principle utilized above is formulated so that the stationarity condition $\delta S = 0$ is equivalent to Hamilton's equations, where we view $Q \times Q^* \cong T^*Q$ with the canonical symplectic form on the cotangent bundle $\Omega = dq \wedge dp$ and with the corresponding Hamiltonian $H: T^*Q \rightarrow \mathbb{R}$ given as above. It then follows that the flow of the adjoint system is symplectic.
The symplecticity of the adjoint system is a key feature of the system. In fact, the symplecticity of the adjoint system implies that a certain quadratic invariant is preserved along the flow of the system. This quadratic invariant is the key ingredient to the use of adjoint equations for sensitivity analysis. To state the quadratic invariant, consider the variational equation associated with equation \eqref{ODEvectorspace},
\begin{equation}\label{VariationalODEvectorspace}
\frac{d}{dt}\delta q = Df(q)\delta q,
\end{equation}
which corresponds to the linearization of \eqref{ODEvectorspace} at $q \in Q$. For solution curves $p$ and $\delta q$ to \eqref{AdjointODEvectorspace} and \eqref{VariationalODEvectorspace}, respectively, over the same curve $q$, one has that the quantity $\langle p, \delta q\rangle$ is preserved along the flow of the system, since
\begin{align*}
\frac{d}{dt} \langle p,\delta q\rangle &= \langle \dot{p},\delta q\rangle + \langle p, \frac{d}{dt}\delta q\rangle = \langle -[Df(q)]^*p,\delta q\rangle + \langle p, Df(q)\delta q\rangle \\
&= - \langle p, Df(q)\delta q\rangle + \langle p, Df(q)\delta q\rangle = 0.
\end{align*}
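This conservation law is easy to verify numerically. For the linear field $f(q)=Aq$ (an illustrative assumption), the variational equation is $\dot{\delta q}=A\,\delta q$ and the adjoint equation is $\dot p = -A^\top p$, so $\langle p,\delta q\rangle$ is exactly constant along exact trajectories. Classical RK4 is not symplectic, so it preserves the invariant only to $O(h^6)$ per step (negligible here), whereas a symplectic method would preserve it exactly.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])            # illustrative matrix

def rk4_step(rhs, x, h):
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dq = np.array([1.0, 0.5])               # solution of the variational equation
p = np.array([0.3, -0.7])               # solution of the adjoint equation
inv0 = p @ dq                           # the quadratic invariant <p, dq>
h = 0.01
for _ in range(200):
    dq = rk4_step(lambda v: A @ v, dq, h)
    p = rk4_step(lambda v: -A.T @ v, p, h)
drift = abs(p @ dq - inv0)
```

The observed drift is at the level of roundoff, illustrating why such quadratic invariants make adjoint integration a reliable tool for sensitivity computation.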
To see that symplecticity implies the preservation of this quantity, recall that symplecticity is the statement that, along a solution curve of the adjoint system \eqref{ODEvectorspace}-\eqref{AdjointODEvectorspace}, one has
$$ \frac{d}{dt}\Omega (V,W) = 0,$$
where $V$ and $W$ are first variations to the adjoint system (i.e., that the flow of $V$ and $W$ on solutions are again solutions). Infinitesimally, first variations $V$ and $W$ correspond to solutions of the linearization of the adjoint system \eqref{ODEvectorspace}-\eqref{AdjointODEvectorspace}. At a solution $(q,p)$ to the adjoint system, the linearization of the system is given by
\begin{align*}
\frac{d}{dt} \delta q &= Df(q)\delta q, \\
\frac{d}{dt} \delta p &= -[Df(q)]^* \delta p.
\end{align*}
Note that the first equation is just the variational equation \eqref{VariationalODEvectorspace} while the second equation is the adjoint equation \eqref{AdjointODEvectorspace}, with $p$ replaced by $\delta p$, since the adjoint equation is linear in $p$. The first variation vector field $V$ corresponding to a solution $(\delta q, \delta p)$ of this linearized system is
$$ V = \delta q \frac{\partial}{\partial q} + \delta p \frac{\partial}{\partial p}. $$
Now, we make two choices for the first variations $V$ and $W$. For $W$, we take the solution $\delta q=0$, $\delta p = p$ of the linearized system, which gives $W = p\, \partial/\partial p$. For $V$, we take a solution with $\delta p = 0$ and $\delta q$ solving the variational equation, which gives $V = \delta q\, \partial/\partial q$. Inserting these into $\Omega$ gives
$$ \Omega(V,W) = p \frac{\partial}{\partial p} \lrcorner \left( \delta q \frac{\partial}{\partial q} \lrcorner ( dq \wedge dp ) \right) = \langle p,\delta q\rangle. $$
Thus, symplecticity $\frac{d}{dt}\Omega(V,W) = 0$ with this particular choice of first variations $V,W$ gives the preservation of the quadratic invariant $\langle p,\delta q\rangle$.
\subsection{Adjoint Systems on Manifolds}\label{AdjointManifoldSection}
We now extend the notion of the adjoint system to the case where the configuration space of the base ODE is a manifold. We will provide a symplectic characterization of these adjoint systems, prove the associated adjoint variational quadratic conservation laws, and additionally discuss symmetries and variational principles associated with these systems.
Let $M$ be a manifold and consider the ODE on $M$ given by
\begin{equation}\label{ODEmanifold}
\dot{q} = f(q),
\end{equation}
where $f$ is a vector field on $M$. Letting $\pi: TM \rightarrow M$ denote the tangent bundle projection, we recall that a vector field $f$ is a map $f: M \rightarrow TM$ which satisfies $\pi \circ f = \mathbf{1}_M$, i.e., $f$ is a section of the tangent bundle.
Analogous to the adjoint system on vector spaces, we will define the adjoint system on a manifold as an ODE on the cotangent bundle $T^*M$ which covers \eqref{ODEmanifold}, such that the time evolution of the momenta in the fibers of $T^*M$ are given by an adjoint linearization of $f$.
To do this, in analogy with the vector space case, consider the Hamiltonian $H: T^*M \rightarrow \mathbb{R}$ given by $H(q,p) = \langle p, f(q) \rangle_q$ where $\langle\cdot,\cdot\rangle_q$ is the duality pairing of $T^*_qM$ with $T_qM$. When there is no possibility for confusion of the base point, we simply denote this duality pairing as $\langle\cdot,\cdot\rangle$. Recall that the cotangent bundle $T^*M$ possesses a canonical symplectic form $\Omega = -d\Theta$ where $\Theta$ is the tautological one-form on $T^*M$. With coordinates $(q,p) = (q^A, p_A)$ on $T^*M$, this symplectic form has the coordinate expression $\Omega = dq\wedge dp \equiv dq^A \wedge dp_A$.
We define the adjoint system as the ODE on $T^*M$ given by Hamilton's equations, with the above choice of Hamiltonian $H$ and the canonical symplectic form. Thus, the adjoint system is given by the equation
$$ i_{X_H}\Omega = dH, $$
whose solution curves on $T^*M$ are the integral curves of the Hamiltonian vector field $X_H$. As is well-known, for the particular choice of Hamiltonian $H(q,p) = \langle p, f(q)\rangle$, the Hamiltonian vector field $X_H$ is given by the cotangent lift $\widehat{f}$ of $f$, which is a vector field on $T^*M$ that covers $f$ (see, for example, \citet{BuLe2014}). With coordinates $z = (q,p)$ on $T^*M$, the adjoint system is the ODE on $T^*M$ given by
\begin{equation}\label{AdjointODEmanifold}
\dot{z} = \widehat{f}(z).
\end{equation}
To be more explicit, recall that the cotangent lift of $f$ is constructed as follows. Let $\Phi_{\epsilon}: M \rightarrow M$ denote the one-parameter family of diffeomorphisms generated by $f$. Then, we consider the cotangent lifted diffeomorphisms given by $(\Phi_{-\epsilon})^*: T^*M \rightarrow T^*M$. This covers $\Phi_{\epsilon}$ in the sense that $\pi_{T^*M} \circ (\Phi_{-\epsilon})^* = \Phi_{\epsilon} \circ \pi_{T^*M} $ where $\pi_{T^*M}: T^*M \rightarrow M$ is the cotangent projection. The cotangent lift $\widehat{f}$ is then defined to be the infinitesimal generator of the cotangent lifted flow,
$$ \widehat{f}(z) = \frac{d}{d\epsilon}\Big|_{\epsilon=0} (\Phi_{-\epsilon})^* (z). $$
We can directly verify that $\widehat{f}$ is the Hamiltonian vector field for $H$, which follows from
$$ i_{\widehat{f}}\Omega = -i_{\widehat{f}}d\Theta = -\mathcal{L}_{\widehat{f}}\Theta + d( i_{\widehat{f}}\Theta ) = d( i_{\widehat{f}}\Theta) = dH, $$
where $\mathcal{L}_{\widehat{f}}\Theta = 0$ follows from the fact that cotangent lifted flows preserve the tautological one-form, and $H = i_{\widehat{f}}\Theta$ follows from a direct computation (where $i_{\widehat{f}}\Theta$ is interpreted as the function on the cotangent bundle which maps $(q,p)$ to $\langle \Theta(q,p), \widehat{f}(q,p)\rangle$).
The adjoint system \eqref{AdjointODEmanifold} covers \eqref{ODEmanifold} in the following sense.
\begin{prop}\label{LiftIntegralCurveProp}
Integral curves to the adjoint system \eqref{AdjointODEmanifold} lift integral curves to the system \eqref{ODEmanifold}.
\begin{proof}
Let $z = (q,p)$ be coordinates on $T^*M$. Let $(\dot{q},\dot{p}) \in T_{(q,p)}T^*M$. Then, $T\pi_{T^*M} (\dot{q},\dot{p}) = \dot{q}$ where $T\pi_{T^*M}$ is the pushforward of the cotangent projection. Furthermore,
\begin{align*}
T\pi_{T^*M} \widehat{f}(q,p) &= T\pi_{T^*M} \frac{d}{d\epsilon}\Big|_{\epsilon = 0} (\Phi_{-\epsilon})^*(q,p) = \frac{d}{d\epsilon}\Big|_{\epsilon = 0} (\pi_{T^*M} \circ (\Phi_{-\epsilon})^*)(q,p) \\
&= \frac{d}{d\epsilon}\Big|_{\epsilon = 0} (\Phi_{\epsilon} \circ \pi_{T^*M})(q,p) = \frac{d}{d\epsilon}\Big|_{\epsilon=0} \Phi_{\epsilon}(q) = f(q).
\end{align*}
Thus, the pushforward of the cotangent projection applied to \eqref{AdjointODEmanifold} gives \eqref{ODEmanifold}. It then follows that integral curves of \eqref{AdjointODEmanifold} lift integral curves of \eqref{ODEmanifold}.
\end{proof}
\end{prop}
\begin{remark}
This can also be seen explicitly in coordinates. Recalling that $i_{\widehat{f}}\Omega = dH$, one has
$$ dH = d(p_A f^A(q)) = f^A(q) dp_A + p_B \frac{\partial f^B(q)}{\partial q^A} dq^A,$$
and, on the other hand, denoting $\widehat{f}(q,p) = X^A(q,p) \partial/\partial{q^A} + Y_A(q,p) \partial/\partial{p_A}$,
$$ i_{\widehat{f}}\Omega = (X^A(q,p) \partial_{q^A} + Y_A(q,p) \partial_{p_A}) \lrcorner\, (dq^B \wedge dp_B) = X^A(q,p)dp_A - Y_A(q,p) dq^A. $$
Equating these two gives the coordinate expression for the cotangent lift $\widehat{f}$,
$$ \widehat{f}(q,p) = f^A(q) \frac{\partial}{\partial q^A} - p_B \frac{\partial f^B(q)}{\partial q^A} \frac{\partial}{\partial p_A}. $$
Thus, the system $\dot{z} = \widehat{f}(z)$ can be expressed in coordinates as
\begin{subequations}
\begin{align}
\dot{q}^A &= f^A(q), \label{AdjointODEa} \\
\dot{p}_A &=- p_B \frac{\partial f^B(q)}{\partial q^A}, \label{AdjointODEb}
\end{align}
\end{subequations}
which clearly covers the original ODE $\dot{q}^A = f^A(q)$. Also, note that this coordinate expression for the adjoint system recovers the coordinate expression for the adjoint system in the vector space case.
\end{remark}
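The coordinate form \eqref{AdjointODEa}-\eqref{AdjointODEb} and the covering property of Proposition \ref{LiftIntegralCurveProp} can be checked numerically. The sketch below is purely illustrative and not part of the formal development: the pendulum-type vector field $f$, the initial data, and the fixed-step RK4 integrator are all arbitrary assumptions.

```python
import math

def f(q):
    # arbitrary nonlinear vector field on M = R^2 (pendulum-type choice)
    return [q[1], -math.sin(q[0])]

def jac_f(q):
    # J[B][A] = df^B/dq^A
    return [[0.0, 1.0], [-math.cos(q[0]), 0.0]]

def adjoint_rhs(z):
    # z = (q^1, q^2, p_1, p_2): qdot^A = f^A(q), pdot_A = -p_B J[B][A]
    q, p = z[:2], z[2:]
    J = jac_f(q)
    return f(q) + [-(p[0]*J[0][A] + p[1]*J[1][A]) for A in range(2)]

def rk4(rhs, z, h, n):
    # classical fixed-step RK4
    for _ in range(n):
        k1 = rhs(z)
        k2 = rhs([zi + 0.5*h*k for zi, k in zip(z, k1)])
        k3 = rhs([zi + 0.5*h*k for zi, k in zip(z, k2)])
        k4 = rhs([zi + h*k for zi, k in zip(z, k3)])
        z = [zi + (h/6.0)*(a + 2.0*b + 2.0*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

q0, p0 = [1.0, 0.0], [0.3, -0.2]
zT = rk4(adjoint_rhs, q0 + p0, 1e-3, 1000)  # adjoint flow on T*M
qT = rk4(f, q0, 1e-3, 1000)                 # base flow on M alone
# covering property: the q-component of the adjoint flow is the base flow
cover_err = max(abs(zT[A] - qT[A]) for A in range(2))
```

Since the $q$-equations decouple from $p$, the $q$-component of the adjoint trajectory reproduces the flow of $\dot{q} = f(q)$ step for step.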
Analogous to the vector space case, the adjoint system possesses a quadratic invariant associated with the variational equations of \eqref{ODEmanifold}. The variational equation is given by considering the tangent lifted vector field on $TM$, $\widetilde{f}: TM \rightarrow TTM$, which is defined in terms of the flow $\Phi_{\epsilon}$ generated by $f$ by
$$ \widetilde{f}(q,\delta q) = \frac{d}{d\epsilon}\Big|_{\epsilon = 0} T\Phi_{\epsilon} (q,\delta q), $$
where $(q,\delta q)$ are coordinates on $TM$. That is, $\widetilde{f}$ is the infinitesimal generator of the tangent lifted flow. The variational equation associated with \eqref{ODEmanifold} is the ODE associated with the tangent lifted vector field. In coordinates,
\begin{equation}\label{VariationalODEmanifold}
\frac{d}{dt}(q,\delta q) = \widetilde{f}(q,\delta q).
\end{equation}
\begin{prop}\label{ManifoldQuadraticInvariantProp}
For integral curves $(q,p)$ of \eqref{AdjointODEmanifold} and $(q,\delta q)$ of \eqref{VariationalODEmanifold}, which cover the same curve $q$,
\begin{equation}\label{ManifoldVariationalQuadratic}
\frac{d}{dt} \Big\langle (q(t),p(t)), (q(t),\delta q(t))\Big\rangle_{q(t)} = 0. \end{equation}
\begin{proof}
Note that $(q(t),p(t)) \in T^*_{q(t)}M$ and $(q(t),\delta q(t)) \in T_{q(t)}M$ so the duality pairing is well-defined. Then,
\begin{align*}
\Big\langle (q(t),p(t)), (q(t),\delta q(t))\Big\rangle_{q(t)} &= \Big\langle (\Phi_{-t})^* (q(0),p(0)), T\Phi_t (q(0),\delta q(0))\Big\rangle_{q(t)} \\
&= \Big\langle (q(0),p(0)), T\Phi_{-t} \circ T\Phi_t (q(0),\delta q(0))\Big\rangle_{q(0)} \\
&= \Big\langle (q(0),p(0)), T(\Phi_{-t} \circ \Phi_t) (q(0),\delta q(0))\Big\rangle_{q(0)} \\
&= \Big\langle (q(0),p(0)), (q(0),\delta q(0))\Big\rangle_{q(0)},
\end{align*}
so the pairing is constant.
\end{proof}
\end{prop}
\begin{remark}\label{ManifoldQuadraticInvariantSymplecticityRemark}
In the vector space case, we saw that the preservation of the quadratic invariant is implied by symplecticity. The above result is analogously implied by symplecticity, noting that the flow of the adjoint system is symplectic since $\widehat{f}$ is a Hamiltonian vector field.
\end{remark}
Another conserved quantity for the adjoint system \eqref{AdjointODEmanifold} is the Hamiltonian, since the adjoint system corresponds to a time-independent Hamiltonian flow, $\frac{d}{dt} H = \Omega(X_H,X_H) = 0.$
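Both conserved quantities, the pairing $\langle p, \delta q\rangle$ of Proposition \ref{ManifoldQuadraticInvariantProp} and the Hamiltonian $H = \langle p, f(q)\rangle$, can be verified numerically along a single trajectory by integrating the adjoint and variational equations side by side. The following sketch is illustrative only; the pendulum-type field and all numerical values are arbitrary assumptions.

```python
import math

def f(q):
    # arbitrary nonlinear vector field on R^2 (pendulum-type choice)
    return [q[1], -math.sin(q[0])]

def jac_f(q):
    # J[B][A] = df^B/dq^A
    return [[0.0, 1.0], [-math.cos(q[0]), 0.0]]

def rhs(z):
    # z = (q, p, dq): base ODE + adjoint equation + variational equation
    q, p, dq = z[:2], z[2:4], z[4:]
    J = jac_f(q)
    pdot = [-(p[0]*J[0][A] + p[1]*J[1][A]) for A in range(2)]  # pdot = -J^T p
    dqdot = [J[B][0]*dq[0] + J[B][1]*dq[1] for B in range(2)]  # dqdot = J dq
    return f(q) + pdot + dqdot

def rk4(rhs_, z, h, n):
    for _ in range(n):
        k1 = rhs_(z)
        k2 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k1)])
        k3 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k2)])
        k4 = rhs_([zi + h*k for zi, k in zip(z, k3)])
        z = [zi + (h/6.0)*(a + 2.0*b + 2.0*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

def pairing(z):          # <p, dq>
    return z[2]*z[4] + z[3]*z[5]

def hamiltonian(z):      # H(q, p) = <p, f(q)>
    fq = f(z[:2])
    return z[2]*fq[0] + z[3]*fq[1]

z0 = [1.0, 0.0, 0.3, -0.2, 0.5, 0.7]
zT = rk4(rhs, z0, 1e-3, 2000)
pair_drift = abs(pairing(zT) - pairing(z0))
H_drift = abs(hamiltonian(zT) - hamiltonian(z0))
```

Both drifts are zero up to the $O(h^4)$ discretization error of the (non-symplectic) integrator.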
Additionally, conserved quantities for adjoint systems are generated, via cotangent lift, by symmetries of the original ODE \eqref{ODEmanifold}, where we say that a vector field $g$ is a symmetry of the ODE $\dot{x} = h(x)$ if $[g,h] = 0$.
\begin{prop}\label{AdjointSystemSymmetry}
Let $g$ be a symmetry of \eqref{ODEmanifold}, i.e., $[g,f] = 0$. Then, its cotangent lift $\widehat{g}$ is a symmetry of \eqref{AdjointODEmanifold} and additionally, the function
$$ \langle \Theta, \widehat{g}\rangle $$
on $T^*M$ is preserved along the flow of $\widehat{f}$, i.e., under the flow of the adjoint system \eqref{AdjointODEmanifold}.
\begin{proof}
We first show that $\widehat{g}$ is a symmetry of \eqref{AdjointODEmanifold}, i.e., that $[\widehat{g},\widehat{f}] = 0$. To see this, we recall that the cotangent lift of the Lie bracket of two vector fields equals the Lie bracket of their cotangent lifts,
$$ \widehat{[g,f]} = [\widehat{g},\widehat{f}]. $$
Then, since $[g,f]=0$ by assumption, $[\widehat{g},\widehat{f}] = \widehat{[g,f]} = \widehat{0} = 0$.
To see that $\langle \Theta, \widehat{g}\rangle$ is preserved along the flow of $\widehat{f}$, we have
\begin{align*}
\mathcal{L}_{\widehat{f}}\langle \Theta,\widehat{g}\rangle &= \langle \mathcal{L}_{\widehat{f}} \Theta, \widehat{g}\rangle + \langle \Theta, \mathcal{L}_{\widehat{f}} \widehat{g}\rangle = \langle 0, \widehat{g}\rangle + \langle \Theta, [\widehat{f},\widehat{g}]\rangle = 0,
\end{align*}
where we used that $\mathcal{L}_{\widehat{f}}\Theta = 0$ since $\widehat{f}$ is a cotangent lifted vector field.
\end{proof}
\end{prop}
\begin{remark}
The above proposition states that when $[f,g]=0$, the Hamiltonian for the adjoint system associated with $g$, $\langle \Theta,\widehat{g}\rangle$, is preserved along the Hamiltonian flow corresponding to the Hamiltonian for the adjoint system associated with $f$, $\langle \Theta,\widehat{f}\rangle$, and vice versa. Note that $\langle \Theta, \widehat{g}\rangle$ can be interpreted as the momentum map corresponding to the action on $T^*M$ given by the flow of $\widehat{g}$.
The above proposition shows that (at least some) symmetries of the adjoint system \eqref{AdjointODEmanifold} can be found by cotangent lifting symmetries of the original ODE \eqref{ODEmanifold}. Additionally, the above proposition states that such cotangent lifted symmetries give rise to conserved quantities.
\end{remark}
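Proposition \ref{AdjointSystemSymmetry} can be illustrated with a pair of commuting linear vector fields (an arbitrary choice made here for the sake of a check): for $f(q) = Aq$ with $A$ the rotation generator and $g(q) = q$ the radial scaling field, $[g,f] = Aq - Aq = 0$, so the function $\langle \Theta, \widehat{g}\rangle(q,p) = \langle p, g(q)\rangle = p \cdot q$ should be conserved along the adjoint flow of $f$.

```python
# commuting linear fields: f(q) = A q (rotation), g(q) = q (scaling), [g, f] = 0
J = [[0.0, -1.0], [1.0, 0.0]]   # constant Jacobian of f; f^1 = -q^2, f^2 = q^1

def f(q):
    return [-q[1], q[0]]

def g(q):
    return [q[0], q[1]]

def adjoint_rhs(z):
    # qdot^A = f^A(q), pdot_A = -p_B J[B][A]
    q, p = z[:2], z[2:]
    return f(q) + [-(p[0]*J[0][A] + p[1]*J[1][A]) for A in range(2)]

def conserved(z):
    # <Theta, ghat>(q, p) = <p, g(q)>, which is p . q for this g
    q, p = z[:2], z[2:]
    gq = g(q)
    return p[0]*gq[0] + p[1]*gq[1]

def rk4(rhs, z, h, n):
    for _ in range(n):
        k1 = rhs(z)
        k2 = rhs([zi + 0.5*h*k for zi, k in zip(z, k1)])
        k3 = rhs([zi + 0.5*h*k for zi, k in zip(z, k2)])
        k4 = rhs([zi + h*k for zi, k in zip(z, k3)])
        z = [zi + (h/6.0)*(a + 2.0*b + 2.0*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

z0 = [1.0, 0.5, 0.3, -0.2]
zT = rk4(adjoint_rhs, z0, 1e-3, 2000)
drift = abs(conserved(zT) - conserved(z0))
```

Analytically, $\frac{d}{dt}(p \cdot q) = (-A^{\mathsf T}p)\cdot q + p \cdot Aq = 0$; numerically the drift is at the level of the integrator's truncation error.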
In light of the above proposition, it is natural to ask the following question. Given a symmetry $G$ of the adjoint system \eqref{AdjointODEmanifold} (i.e., $[G,\widehat{f}] = 0$), does it arise from a cotangent lifted symmetry in the sense of Proposition \ref{AdjointSystemSymmetry}? In general, the answer is no. However, for a projectable vector field $G$ which is a symmetry of the adjoint system, its projection by $T\pi_{T^*M}$ to a vector field on $M$ does satisfy the assumptions of Proposition \ref{AdjointSystemSymmetry}. This gives the following partial converse to the above proposition.
\begin{prop}
Let $G$ be a projectable vector field on the bundle $\pi_{T^*M}: T^*M \rightarrow M$ which is a symmetry of \eqref{AdjointODEmanifold}, i.e., $[G,\widehat{f}] = 0$. Then, the pushforward vector field $g = T\pi_{T^*M}(G)$ on $M$ satisfies the assumptions of Proposition \ref{AdjointSystemSymmetry} and $T\pi_{T^*M}\widehat{g} = T\pi_{T^*M}G$.
\begin{proof}
Since $G$ is a projectable vector field on the cotangent bundle, $g = T\pi_{T^*M}G$ defines a well-defined vector field on $M$. Thus,
$$ [g,f] = [T\pi_{T^*M}G, T\pi_{T^*M}\widehat{f}] = T\pi_{T^*M}[G,\widehat{f}] = T\pi_{T^*M} 0 = 0, $$
so $g$ is a symmetry of \eqref{ODEmanifold}. Furthermore, we also have
$$ T\pi_{T^*M}\widehat{g} = T\pi_{T^*M} \widehat{(T\pi_{T^*M} G)} = T\pi_{T^*M} G. $$
\end{proof}
\end{prop}
The preceding proposition shows that, for the class of projectable symmetries of the adjoint system \eqref{AdjointODEmanifold}, it is always possible to find an associated symmetry of the original ODE \eqref{ODEmanifold} which, by Proposition \ref{AdjointSystemSymmetry}, corresponds to a Hamiltonian symmetry. Note that this implies that we can associate a conserved quantity $\langle \Theta, \widehat{g}\rangle$ to $G$, where $g = T\pi_{T^*M}G$. Furthermore, since $T\pi_{T^*M}\widehat{g} = T\pi_{T^*M}G$ and the canonical form $\Theta$ is a horizontal one-form, this implies that $\langle \Theta, G\rangle$ equals $\langle \Theta, \widehat{g}\rangle$ and hence, is conserved.
These two propositions show that symmetries of an ODE can be identified with equivalence classes of projectable symmetries of the associated adjoint system, where two projectable symmetries are equivalent if their difference lies in the kernel of $T\pi_{T^*M}$.
We also recall that the adjoint system \eqref{AdjointODEmanifold} formally arises from a variational principle. To see this, let $\Theta$ be the tautological one-form on $T^*M$. The action is defined to be
\begin{equation}\label{AdjointAction}
S[\psi] = \int_I [\psi^* \Theta - (H \circ \psi) dt ],
\end{equation}
where $\psi(t) = (q(t),p(t))$ is a curve on $T^*M$ over the interval $I= (t_0,t_1)$. We consider the variational principle $\delta S[\psi] = 0$, subject to variations which fix the endpoints $q(t_0)$, $q(t_1)$.
\begin{prop}
Let $\psi$ be a curve on $T^*M$ over the interval $I$. Then, $\psi$ is a stationary point of $S$ with respect to variations which fix $q(t_0)$, $q(t_1)$ if and only if \eqref{AdjointODEmanifold} holds.
\end{prop}
The proof of the above proposition is standard in the literature, so we will omit it.
\begin{remark}
In coordinates, the above action \eqref{AdjointAction} takes the form
$$ S = \int_{t_0}^{t_1}(\langle p,\dot{q}\rangle - \langle p, f(q)\rangle)dt, $$
which is the same coordinate expression as the action in the vector space case.
\end{remark}
\subsubsection{Adjoint Systems with Augmented Hamiltonians}\label{AugmentedAdjointODESection}
In this section, we consider a class of modified adjoint systems, where some function on the base manifold $M$ is added to the Hamiltonian of the adjoint system. More precisely, let $H: T^*M \rightarrow \mathbb{R}, H(q,p) = \langle p, f(q)\rangle$ be the Hamiltonian of the previous section, corresponding to the ODE $\dot{q} = f(q)$. Let $L: M \rightarrow \mathbb{R}$ be a function on $M$. We identify $L$ with its pullback through $\pi_{T^*M}: T^*M \rightarrow M$. Then, we define the augmented Hamiltonian
\begin{align*}
H_L \equiv H+L: T^*M &\rightarrow \mathbb{R} \\
(q,p) &\mapsto H(q,p) + L(q) = \langle p, f(q)\rangle + L(q).
\end{align*}
We define the augmented adjoint system as the Hamiltonian system associated with $H_L$ relative to the canonical symplectic form $\Omega$ on $T^*M$,
\begin{equation}\label{AugmentedAdjointSystem}
i_{X_{H_L}}\Omega = dH_L.
\end{equation}
\begin{remark}
The motivation for such systems arises from adjoint sensitivity analysis and optimal control. For adjoint sensitivity analysis of a running cost function, one is concerned with the sensitivity of some functional
$$ \int_{0}^{t} L(q)dt $$
along the flow of the ODE $\dot{q} = f(q)$. In the setting of optimal control, the goal is to minimize such a functional, constrained to curves satisfying the ODE (see, for example, \citet{AgCaFo2021}). We will discuss such applications in more detail in Section \ref{ApplicationsSection}.
\end{remark}
In coordinates, the augmented adjoint system \eqref{AugmentedAdjointSystem} takes the form
\begin{subequations}
\begin{align}
\dot{q}^A &= \frac{\partial H}{\partial p_A} = f^A(q), \label{AugmentedAdjointCoordinate1} \\
\dot{p}_A &= - \frac{\partial H}{\partial q^A} = -p_B \frac{\partial f^B(q)}{\partial q^A} - \frac{\partial L(q)}{\partial q^A}. \label{AugmentedAdjointCoordinate2}
\end{align}
\end{subequations}
We now prove various properties of the augmented adjoint system, analogous to the previous section. To start, first note that we can decompose the Hamiltonian vector field $X_{H_L}$ as follows. Let $\widehat{f}$ be the cotangent lift of $f$. Let $X_L \equiv X_{H_L} - \widehat{f}$. Then, observe that
$$ i_{X_L}\Omega = i_{X_{H_L}}\Omega - i_{\widehat{f}}\Omega = dH_L - dH = dL. $$
Thus, we have the decomposition $X_{H_L} = \widehat{f} + X_L$, where $\widehat{f}$ and $X_L$ are the Hamiltonian vector fields for $H$ and $L$, respectively. In coordinates,
$$ X_L = - \frac{\partial L}{\partial q^A} \frac{\partial}{\partial p_A}. $$
From the coordinate expression, we see that $X_L$ is a vertical vector field over the bundle $T^*M \rightarrow M$. We can also see this intrinsically, since $dL$ is a horizontal one-form on $T^*M$, $X_L$ satisfies $i_{X_L}\Omega = dL$, and $\Omega$ restricts to an isomorphism from vertical vector fields on $T^*M$ to horizontal one-forms on $T^*M$. Thus, it is immediate to see intrinsically that an analogous statement to Proposition \ref{LiftIntegralCurveProp} holds, since the flow of $\widehat{f}$ lifts the flow of $f$, while the flow of $X_L$ is purely vertical. That is, since $T\pi_{T^*M}X_L = 0$,
$$ T\pi_{T^*M}X_{H_L} = T\pi_{T^*M}\widehat{f} = f. $$
We can of course also see that the augmented adjoint system lifts the original ODE from the coordinate expression for the augmented adjoint system, \eqref{AugmentedAdjointCoordinate1}-\eqref{AugmentedAdjointCoordinate2}.
We now prove analogous statements to Propositions \ref{ManifoldQuadraticInvariantProp} and \ref{AdjointSystemSymmetry}, modified appropriately for the presence of $L$ in the augmented Hamiltonian.
\begin{prop}\label{AugmentedQuadraticInvariantProp}
Let $(q,p)$ be an integral curve of the augmented adjoint system \eqref{AugmentedAdjointSystem} and let $(q,\delta q)$ be an integral curve of the variational equation \eqref{VariationalODEmanifold}, covering the same curve $q$. Then,
$$ \frac{d}{dt} \langle p,\delta q\rangle = - \langle dL,\delta q\rangle. $$
\begin{remark}
Note that the variational equation associated with the above system is the same as in the nonaugmented case, equation \eqref{VariationalODEmanifold}, since adding $L$ to the Hamiltonian only shifts the Hamiltonian vector field in the vertical direction.
\end{remark}
\begin{proof}
We will prove this in coordinates. We have the equations
\begin{align*}
\dot{p}_A &= -p_B \frac{\partial f^B}{\partial q^A} - \frac{\partial L}{\partial q^A}, \\
\frac{d}{dt}\delta q^B &= \frac{\partial f^B}{\partial q^A} \delta q^A.
\end{align*}
Then,
\begin{align*}
\frac{d}{dt} \langle p,\delta q\rangle &= \frac{d}{dt} p_A\delta q^A = \dot{p}_A \delta q^A + p_B \frac{d}{dt}\delta q^B \\
&= -p_B \frac{\partial f^B}{\partial q^A}\delta q^A - \frac{\partial L}{\partial q^A}\delta q^A + p_B \frac{\partial f^B}{\partial q^A}\delta q^A \\
&= - \frac{\partial L}{\partial q^A}\delta q^A = - \langle dL,\delta q\rangle.
\end{align*}
\end{proof}
\end{prop}
\begin{remark}\label{Augmented ODE Quadratic Invariant Remark}
Interestingly, the above proposition states that in the augmented case, $\langle p,\delta q\rangle$ is no longer preserved; rather, its change measures the change of $L$ with respect to the variation $\delta q$. This may at first seem contradictory, since both the augmented and nonaugmented Hamiltonian vector fields, $X_{H_L}$ and $X_H$, preserve $\Omega$, and as we noted previously in Remark \ref{ManifoldQuadraticInvariantSymplecticityRemark}, the preservation of the quadratic invariant is implied by symplecticity. However, upon closer inspection, there is no contradiction, because the two cases have different first variations; recall that a first variation is a symmetry vector field of the Hamiltonian system, and that symplecticity can be stated as
$$ \frac{d}{dt}\Omega(V,W) = 0, $$
for first variation vector fields $V$ and $W$. In the nonaugmented case, the equations satisfied by the first variation of the momenta $p$ can be identified with $p$ itself, since the adjoint equation for $p$ is linear in $p$. On the other hand, in the augmented case, the adjoint equation for $p$, \eqref{AugmentedAdjointCoordinate2}, is no longer linear in $p$; rather, it is affine in $p$, and the failure of this equation to be linear in $p$ is given precisely by $-dL$. Thus, in the augmented case, first variations in $p$ can no longer be identified with $p$, and this leads to the additional term $-\langle dL,\delta q\rangle$ in the above proposition.
\end{remark}
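Proposition \ref{AugmentedQuadraticInvariantProp} can likewise be checked numerically by integrating an accumulator $s$ with $\dot{s} = -\langle dL, \delta q\rangle$ alongside the augmented adjoint and variational equations; the quantity $\langle p, \delta q\rangle - s$ should then be constant. In the sketch below, the running cost $L(q) = (q^1)^2 + q^1 q^2$ and the pendulum-type $f$ are arbitrary illustrative assumptions.

```python
import math

def f(q):
    # arbitrary nonlinear vector field on R^2 (pendulum-type choice)
    return [q[1], -math.sin(q[0])]

def jac_f(q):
    # J[B][A] = df^B/dq^A
    return [[0.0, 1.0], [-math.cos(q[0]), 0.0]]

def dL(q):
    # gradient of the (arbitrary) running cost L(q) = (q^1)^2 + q^1 q^2
    return [2.0*q[0] + q[1], q[0]]

def rhs(z):
    # z = (q, p, dq, s): augmented adjoint + variational eqns + accumulator s
    q, p, dq = z[:2], z[2:4], z[4:6]
    J = jac_f(q)
    grad = dL(q)
    pdot = [-(p[0]*J[0][A] + p[1]*J[1][A]) - grad[A] for A in range(2)]
    dqdot = [J[B][0]*dq[0] + J[B][1]*dq[1] for B in range(2)]
    sdot = -(grad[0]*dq[0] + grad[1]*dq[1])   # sdot = -<dL, dq>
    return f(q) + pdot + dqdot + [sdot]

def rk4(rhs_, z, h, n):
    for _ in range(n):
        k1 = rhs_(z)
        k2 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k1)])
        k3 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k2)])
        k4 = rhs_([zi + h*k for zi, k in zip(z, k3)])
        z = [zi + (h/6.0)*(a + 2.0*b + 2.0*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

def invariant(z):
    # <p, dq> - s, constant along the augmented flow
    return z[2]*z[4] + z[3]*z[5] - z[6]

z0 = [1.0, 0.0, 0.3, -0.2, 0.5, 0.7, 0.0]
zT = rk4(rhs, z0, 1e-3, 2000)
drift = abs(invariant(zT) - invariant(z0))
```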
To prove an analogous statement to Proposition \ref{AdjointSystemSymmetry}, we need the additional assumption that the symmetry vector field $g$ leaves $L$ invariant, $\mathcal{L}_gL = 0$.
\begin{prop}
Let $g$ be a symmetry of the ODE $\dot{q} = f(q)$, i.e., $[g,f] = 0$. Additionally, assume that $g$ is a symmetry of $L$, i.e., $\mathcal{L}_gL = 0$. Then, its cotangent lift $\widehat{g}$ is a symmetry of the augmented adjoint system, $[\widehat{g},X_{H_L}] = 0$ and additionally, the function
$$ \langle \Theta, \widehat{g}\rangle $$
on $T^*M$ is preserved along the flow of $X_{H_L}$.
\begin{proof}
To see that $[\widehat{g},X_{H_L}] = 0$, note that with the decomposition $X_{H_L} = \widehat{f} + X_L$, we have
$$ [\widehat{g}, X_{H_L}] = [\widehat{g},\widehat{f}] + [\widehat{g}, X_L] = [\widehat{g},X_L], $$
where we used that $[\widehat{g},\widehat{f}] = \widehat{[g,f]} = 0$. To see that $[\widehat{g},X_L] = 0$, we note that $[\widehat{g},X_L]$ can be expressed as
$$ [\widehat{g},X_L] = \mathcal{L}_{\widehat{g}}X_L = \mathcal{L}_{\widehat{g}} (\Omega^{-1}(dL)), $$
where we interpret $\Omega: T(T^*M) \rightarrow T^*(T^*M)$. Then, note that $\widehat{g}$ preserves $\Omega$ since $\widehat{g}$ is a cotangent lift and it also preserves $L$ (where, since we identify $L$ with its pullback through $\pi_{T^*M}$, this is equivalent to $g$ preserving $L$). More precisely, since we are identifying $L$ with its pullback $(\pi_{T^*M})^*L$, we have
$$ \mathcal{L}_{\widehat{g}}((\pi_{T^*M})^*L) =\langle (\pi_{T^*M})^* dL, \widehat{g}\rangle = \langle dL, T\pi_{T^*M}\widehat{g}\rangle =\langle dL, g\rangle = \mathcal{L}_gL = 0. $$
Hence, $\mathcal{L}_{\widehat{g}} (\Omega^{-1}(dL)) = 0$. One can also verify this in coordinates, and a direct computation yields
$$ [\widehat{g}, X_L] = \frac{\partial}{\partial q^A}\left( g^B(q) \frac{\partial L}{\partial q^B} \right) \frac{\partial}{\partial p_A}, $$
which vanishes since $\mathcal{L}_gL = 0$.
Now, to show that $\langle \Theta, \widehat{g}\rangle$ is preserved along the flow of $X_{H_L}$, compute
\begin{align*}
\mathcal{L}_{X_{H_L}} \langle \Theta, \widehat{g} \rangle &= \mathcal{L}_{\widehat{f}}\langle \Theta, \widehat{g} \rangle + \mathcal{L}_{X_L} \langle \Theta, \widehat{g} \rangle = \mathcal{L}_{X_L}\langle \Theta, \widehat{g} \rangle,
\end{align*}
where we used that $\mathcal{L}_{\widehat{f}}\langle\Theta,\widehat{g}\rangle = 0$ by Proposition \ref{AdjointSystemSymmetry}. Now, we have
\begin{align*}
\mathcal{L}_{X_{H_L}} \langle \Theta, \widehat{g} \rangle &= \mathcal{L}_{X_L}\langle \Theta,\widehat{g}\rangle = \langle \mathcal{L}_{X_L}\Theta, \widehat{g}\rangle + \langle \Theta, \mathcal{L}_{X_L} \widehat{g}\rangle = \langle \mathcal{L}_{X_L}\Theta, \widehat{g}\rangle + \langle \Theta, \underbrace{[X_L,\widehat{g}]}_{=0}\rangle \\
&= \langle i_{X_L}d\Theta + d(i_{X_L}\Theta), \widehat{g} \rangle = \langle -i_{X_L}\Omega, \widehat{g}\rangle + \langle d(i_{X_L}\Theta),\widehat{g}\rangle \\
&= -\langle dL, \widehat{g}\rangle + \langle d(i_{X_L}\Theta),\widehat{g}\rangle.
\end{align*}
The first term above vanishes since $\mathcal{L}_g L = 0$.
Furthermore, $\langle d(i_{X_L}\Theta),\widehat{g}\rangle = 0$ since $X_L$ is a vertical vector field while $\Theta$ is a horizontal one-form. Hence, $\mathcal{L}_{X_{H_L}}\langle\Theta,\widehat{g}\rangle = 0$.
\end{proof}
\end{prop}
\subsection{Adjoint Systems for DAEs via Presymplectic Mechanics}\label{AdjointDAESection}
In this section, we generalize the notion of adjoint system to the case where the base equation is a (semi-explicit) DAE. We will prove analogous results to the ODE case. However, more care is needed than the ODE case, since the DAE constraint introduces issues with solvability. As we will see, the adjoint system associated with a DAE is a presymplectic system, so we will approach the solvability of such systems through the presymplectic constraint algorithm.
We consider the following setup for a differential-algebraic equation. Let $M_d$ and $M_a$ be two manifolds, where we regard $M_d$ as the configuration space of the ``dynamical" or ``differential" variables and $M_a$ as the configuration space of the ``algebraic" variables. Let $\pi_{\Phi}: \Phi \rightarrow M_d \times M_a$ be a vector bundle over $M_d \times M_a$. Furthermore, let $\pi_d: M_d \times M_a \rightarrow M_d$ be the projection onto the first factor and let $\pi_{\overline{TM}_d}: \overline{TM}_d \rightarrow M_d \times M_a$ be the pullback bundle of the tangent bundle $\pi_{TM_d}: TM_d \rightarrow M_d$ by $\pi_d$, i.e., $\overline{TM}_d = \pi_d^*(TM_d)$. Then, a (semi-explicit) DAE is specified by a section $f \in \Gamma(\overline{TM}_d)$ and a section $\phi \in \Gamma(\Phi)$, via the system
\begin{subequations}
\begin{align}
\dot{q} &= f(q,u), \label{DAEa} \\
0 &= \phi(q,u), \label{DAEb}
\end{align}
\end{subequations}
where $(q,u)$ are coordinates on $M_d \times M_a$. We refer to $\overline{TM}_d$ as the differential tangent bundle, with coordinates $(q,u,v)$ and to $\Phi$ as the constraint bundle.
\begin{remark}\label{DAE local solvability}
For the local solvability of \eqref{DAEa}-\eqref{DAEb}, regard $\phi$ locally as a map $\mathbb{R}^{\dim(M_d)} \times \mathbb{R}^{\dim(M_a)} \rightarrow \mathbb{R}^{\normalfont{\text{rank}}(\Phi)}$. If $\partial\phi/\partial u$ is an isomorphism at a point $(q_0,u_0)$ where $\phi(q_0,u_0)=0$, then by the implicit function theorem, one can locally solve $u = u(q)$ about $(q_0,u_0)$ such that $\phi(q,u(q))=0$, and subsequently solve the unconstrained differential equation $\dot{q} = f(q,u(q))$ locally. This is the case for semi-explicit index $1$ DAEs.
In order for the $\normalfont{\text{rank}}(\Phi) \times \dim(M_a)$ matrix $\partial\phi/\partial u(q_0,u_0)$ to be an isomorphism, it is necessary that $\normalfont{\text{rank}}(\Phi) = \dim(M_a)$. However, we will make no such assumption, so as to treat the theory in full generality, allowing for, e.g., nonunique solutions.
\end{remark}
Now, let $\overline{T^*M}_d$ be the pullback bundle of the cotangent bundle $T^*M_d$ by $\pi_d$, with coordinates $(q,u,p)$, which we refer to as the differential cotangent bundle. Furthermore, let $\Phi^*$ be the dual vector bundle to $\Phi$, with coordinates $(q,u,\lambda)$. Let $\overline{T^*M}_d \oplus \Phi^*$ be the Whitney sum of these two vector bundles over $M_d \times M_a$ with coordinates $(q,u,p,\lambda)$, which we refer to as the generalized phase space bundle. We define a Hamiltonian on the generalized phase space,
\begin{align*}
&H: \overline{T^*M}_d \oplus \Phi^* \rightarrow \mathbb{R}, \\
&H(q,u,p,\lambda) = \langle p, f(q,u) \rangle + \langle \lambda, \phi(q,u)\rangle.
\end{align*}
Let $\Omega_d$ denote the canonical symplectic form on $T^*M_d$, with coordinate expression $\Omega_d = dq \wedge dp$. We define a presymplectic form $\Omega_0$ on $\overline{T^*M}_d \oplus \Phi^*$ as follows: the pullback bundle admits the map $\tilde{\pi}_d: \overline{T^*M}_d \rightarrow T^*M_d$ which covers $\pi_d$ and acts as the identity on fibers; furthermore, the generalized phase space bundle admits the projection $\Pi: \overline{T^*M}_d \oplus \Phi^* \rightarrow \overline{T^*M}_d$, since the Whitney sum has the structure of a double vector bundle. Hence, we can pull back $\Omega_d$ along the sequence of maps
$$ \overline{T^*M}_d \oplus \Phi^* \overset{\Pi}{\longrightarrow} \overline{T^*M}_d \overset{\tilde{\pi}_d}{\longrightarrow} T^*M_d, $$
which allows us to define a two-form $\Omega_0 \equiv \Pi^* \circ \tilde{\pi}_d^* (\Omega_d)$ on the generalized phase space bundle. Clearly, $\Omega_0$ is closed as the pullback of a closed form. In general, $\Omega_0$ will be degenerate except in the trivial case where $M_a$ is empty and the fibers of $\Phi$ are the zero vector space. Hence, $\Omega_0$ is a presymplectic form. Note that since $\Pi$ acts by projection and $\tilde{\pi}_d$ acts as the identity on fibers, the coordinate expression for $\Omega_0$ on $\overline{T^*M}_d \oplus \Phi^*$ with coordinates $(q,u,p,\lambda)$ is the same as the coordinate expression for $\Omega_d$, $\Omega_0 = dq \wedge dp$. The various spaces and their coordinates are summarized in the diagram below.
\adjustbox{scale=0.8,center}{%
\begin{tikzcd}[column sep=3ex,row sep=5ex]
{(q,u,p,\lambda) \in \overline{T^*M}_d\oplus \Phi^*} && {(q,u,\lambda)\in\Phi^*} && \Phi \\
\\
{(q,u,p) \in \overline{T^*M}_d} &&& {M_d \times M_a \ni (q,u)} &&& {\overline{TM}_d \ni (q,u,v)} \\
\\
{(q,p) \in T^*M_d} &&& {M_d \ni q} &&& {TM_d \ni (q,v)}
\arrow["{\pi_d}", from=3-4, to=5-4]
\arrow[from=5-7, to=5-4]
\arrow[from=5-1, to=5-4]
\arrow[from=3-7, to=3-4]
\arrow[from=3-7, to=5-7]
\arrow[from=3-1, to=5-1]
\arrow[from=3-1, to=3-4]
\arrow[from=1-3, to=3-4]
\arrow[from=1-5, to=3-4]
\arrow[from=1-1, to=3-1]
\arrow[from=1-1, to=1-3]
\end{tikzcd}
}
We now define the adjoint system associated with the DAE \eqref{DAEa}-\eqref{DAEb} as the Hamiltonian system
\begin{equation}\label{DAE Adjoint System Intrinsic}
i_X\Omega_0 = dH.
\end{equation}
Given a (generally, partially defined) vector field $X$ on the generalized phase space satisfying \eqref{DAE Adjoint System Intrinsic}, we say a curve $(q(t),u(t),p(t),\lambda(t))$ is a solution curve of \eqref{DAE Adjoint System Intrinsic} if it is an integral curve of $X$.
Let us find a coordinate expression for the above system. Expressing our coordinates with indices $(q^i, u^a, p_j, \lambda_A)$, the left hand side of \eqref{DAE Adjoint System Intrinsic} along a solution curve has the expression
\begin{align*}
i_X\Omega_0 &= \left( \dot{q}^i \frac{\partial}{\partial q^i} + \dot{u}^a \frac{\partial}{\partial u^a} + \dot{p}_j \frac{\partial}{\partial p_j} + \dot{\lambda}_A \frac{\partial}{\partial \lambda_A} \right) \lrcorner\, dq^k \wedge dp_k \\
&= \dot{q}^i dp_i - \dot{p}_j dq^j.
\end{align*}
On the other hand, the right hand side of \eqref{DAE Adjoint System Intrinsic} has the expression
\begin{align*}
dH &= d\Big(p_if^i(q,u) + \lambda_A \phi^A(q,u)\Big) \\
&= f^i(q,u) dp_i + \left( p_i \frac{\partial f^i}{\partial q^j} + \lambda_A \frac{\partial \phi^A}{\partial q^j} \right) dq^j + \phi^A(q,u) d\lambda_A + \left( p_i \frac{\partial f^i}{\partial u^a} + \lambda_A \frac{\partial \phi^A}{\partial u^a} \right) du^a.
\end{align*}
Equating these expressions gives the coordinate expression for the adjoint DAE system,
\begin{subequations}
\begin{align}
\dot{q}^i &= f^i(q,u), \label{DAE Adjoint System Coord 1} \\
\dot{p}_j &= -p_i \frac{\partial f^i}{\partial q^j} - \lambda_A \frac{\partial \phi^A}{\partial q^j}, \label{DAE Adjoint System Coord 2}\\
0 &= \phi^A(q,u),\label{DAE Adjoint System Coord 3} \\
0 &= p_i \frac{\partial f^i}{\partial u^a} + \lambda_A \frac{\partial \phi^A}{\partial u^a}.\label{DAE Adjoint System Coord 4}
\end{align}
\end{subequations}
\begin{remark}\label{Adjoint DAE local solvability}
As mentioned in Remark \ref{DAE local solvability}, in the index $1$ case, one can locally solve the original DAE \eqref{DAE Adjoint System Coord 1} and \eqref{DAE Adjoint System Coord 3}. Viewing such a solution $(q,u)$ as fixed, one can subsequently locally solve for $\lambda$ in equation \eqref{DAE Adjoint System Coord 4} as a function of $p$, since $\partial \phi/\partial u$ is locally invertible. Substituting this into \eqref{DAE Adjoint System Coord 2} gives an ODE solely in the variable $p$, which can be solved locally.
Stated another way, if the original DAE \eqref{DAEa}-\eqref{DAEb} is an index $1$ system, then the adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} is an index $1$ system with dynamical variables $(q,p)$ and algebraic variables $(u,\lambda)$. To see this, if one denotes the constraints for the adjoint system \eqref{DAE Adjoint System Coord 3} and \eqref{DAE Adjoint System Coord 4} as
$$ 0 = \tilde{\phi}(q,u,p,\lambda) \equiv \begin{pmatrix} \phi^A(q,u) \\ p_i \frac{\partial f^i}{\partial u^a} + \lambda_A \frac{\partial \phi^A}{\partial u^a} \end{pmatrix}, $$
then the matrix derivative of $\tilde{\phi}$ with respect to the algebraic variables $(u,\lambda)$ can be locally expressed in block form as
$$ \begin{pmatrix} \partial\phi/\partial u & A \\ 0 & \partial\phi/\partial u \end{pmatrix}, $$
where the block $A$ has components given by the derivative of the right hand side of \eqref{DAE Adjoint System Coord 4} with respect to $u$. It is clear from the block triangular form of this matrix that it is pointwise invertible if $\partial\phi/\partial u$ is.
\end{remark}
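The local solvability procedure of Remark \ref{Adjoint DAE local solvability} can be made concrete for a toy index-$1$ DAE. With the arbitrary illustrative choices $f(q,u) = -q + u$ and $\phi(q,u) = u - q^2$ (so $\partial\phi/\partial u = 1$), the algebraic equations \eqref{DAE Adjoint System Coord 3}-\eqref{DAE Adjoint System Coord 4} give $u = q^2$ and $\lambda = -p$, and substitution into \eqref{DAE Adjoint System Coord 2} yields an ODE in $(q,p)$ alone. The sketch below integrates this reduction, compares $q$ against the exact solution of the reduced logistic-type equation, and checks that $\langle p, \delta q\rangle$ is conserved for the reduced adjoint/variational pair.

```python
import math

# toy index-1 DAE (arbitrary choice):
#   qdot = f(q, u) = -q + u,   0 = phi(q, u) = u - q^2
# algebraic solves: u = q^2 and, from 0 = p df/du + lam dphi/du, lam = -p

def rhs(z):
    q, p, dq = z
    lam = -p
    qdot = -q + q*q                    # f(q, u(q)): the reduced ODE
    pdot = -p*(-1.0) - lam*(-2.0*q)    # -p df/dq - lam dphi/dq = p(1 - 2q)
    dqdot = (-1.0 + 2.0*q)*dq          # variational eqn of the reduced ODE
    return [qdot, pdot, dqdot]

def rk4(rhs_, z, h, n):
    for _ in range(n):
        k1 = rhs_(z)
        k2 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k1)])
        k3 = rhs_([zi + 0.5*h*k for zi, k in zip(z, k2)])
        k4 = rhs_([zi + h*k for zi, k in zip(z, k3)])
        z = [zi + (h/6.0)*(a + 2.0*b + 2.0*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

z0 = [0.5, 1.2, 0.8]
h, n = 1e-3, 1000
zT = rk4(rhs, z0, h, n)
T = h*n
# exact solution of qdot = -q + q^2 with q(0) = 0.5 is q(t) = 1/(1 + e^t)
q_err = abs(zT[0] - 1.0/(1.0 + math.exp(T)))
# <p, dq> is conserved along the reduced adjoint/variational pair
drift = abs(zT[1]*zT[2] - z0[1]*z0[2])
```

For this reduction, $\frac{d}{dt}(p\,\delta q) = p(1-2q)\delta q + p(2q-1)\delta q = 0$, so the numerical drift reflects only the integrator's truncation error.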
\begin{remark}
It is clear from the coordinate expression \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} that a solution curve of the adjoint DAE system, if it exists, covers a solution curve of the original DAE system.
\end{remark}
We now prove several results regarding the structure of the adjoint DAE system.
First, we show that the constraint equations \eqref{DAE Adjoint System Coord 3}-\eqref{DAE Adjoint System Coord 4} can be interpreted as the statement that the Hamiltonian $H$ has the same time dependence as the ``dynamical" Hamiltonian,
\begin{align*}
&H_d: \overline{T^*M}_d \oplus \Phi^* \rightarrow \mathbb{R}, \\
&H_d(q,u,p,\lambda) = \langle p, f(q,u) \rangle,
\end{align*}
when evaluated along a solution curve.
\begin{prop}\label{Dynamical Hamiltonian Time Dependence Prop}
For a solution curve $(q,u,p,\lambda)$ of \eqref{DAE Adjoint System Intrinsic},
$$ \frac{d}{dt}H(q(t),u(t),p(t),\lambda(t)) = \frac{d}{dt}H_d(q(t),u(t),p(t),\lambda(t)). $$
\begin{proof}
For brevity, all functions below are appropriately evaluated along the solution curve. We have
\begin{align*}
\frac{d}{dt}H &= \frac{\partial H}{\partial q^i} \dot{q}^i + \frac{\partial H}{\partial p_j} \dot{p}_j + \frac{\partial H}{\partial u^a} \dot{u}^a + \frac{\partial H}{\partial \lambda_A} \dot{\lambda}_A \\
&= \frac{\partial H}{\partial q^i} \dot{q}^i + \frac{\partial H}{\partial p_j} \dot{p}_j + \left(p_i \frac{\partial f^i}{\partial u^a} + \lambda_A \frac{\partial \phi^A}{\partial u^a} \right) \dot{u}^a + \phi^A \dot{\lambda}_A \\
&= \frac{\partial H}{\partial q^i} \dot{q}^i + \frac{\partial H}{\partial p_j} \dot{p}_j \\
&= \frac{\partial H_d}{\partial q^i} \dot{q}^i + \lambda_A \frac{\partial \phi^A}{\partial q^i} \dot{q}^i + \frac{\partial H_d}{\partial p_j} \dot{p}_j \\
&= \frac{\partial H_d}{\partial q^i} \dot{q}^i + \frac{\partial H_d}{\partial u^a} \dot{u}^a + \frac{\partial H_d}{\partial p_j} \dot{p}_j = \frac{d}{dt}H_d,
\end{align*}
where in the third equality, we used \eqref{DAE Adjoint System Coord 3} and \eqref{DAE Adjoint System Coord 4}. In the fourth equality, we expanded $\partial H/\partial q^i = \partial H_d/\partial q^i + \lambda_A\, \partial\phi^A/\partial q^i$ and used $\partial H/\partial p_j = \partial H_d/\partial p_j$. In the fifth equality, differentiating the constraint $\phi^A(q(t),u(t)) = 0$ in time gives $\frac{\partial \phi^A}{\partial q^i}\dot{q}^i = -\frac{\partial \phi^A}{\partial u^a}\dot{u}^a$, so that, by \eqref{DAE Adjoint System Coord 4}, $\lambda_A \frac{\partial \phi^A}{\partial q^i}\dot{q}^i = p_i \frac{\partial f^i}{\partial u^a}\dot{u}^a = \frac{\partial H_d}{\partial u^a}\dot{u}^a$.
\end{proof}
\end{prop}
\begin{remark}
A more geometric way to view the above proposition is as follows: note that if a partially-defined vector field $X$ exists such that $i_X\Omega_0 = dH$, then the change of $H$ in a given direction $Y$, at any point where $X$ is defined, can be computed as $dH(Y) = \Omega_0(X,Y)$. Observe that the kernel of $\Omega_0$ is locally spanned by $\partial/\partial u$, $\partial/\partial \lambda$, i.e., it is spanned by the coordinate vectors in the algebraic coordinates. Hence, the change of $H$ in the algebraic coordinate directions is zero. This justifies referring to $(u,\lambda)$ as ``algebraic'' variables.
\end{remark}
We now show that the adjoint system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} formally arises from a variational principle. To do so, let $\Theta_0$ be the pullback of the tautological one-form $\Theta_d$ on the cotangent bundle $T^*M_d$ by the maps $\Pi$ and $\tilde{\pi}_d$, $\Theta_0 = \Pi^*\circ \tilde{\pi}_d^* (\Theta_d)$. Of course, one has $\Omega_0 = -d\Theta_0$. Consider the action $S$ defined by
$$ S[\psi] = \int_I [\psi^* \Theta_0 - (H \circ \psi) dt ], $$
where $\psi(t) = (q(t),u(t),p(t),\lambda(t))$ is a curve on the generalized phase space bundle over the interval $I = (t_0,t_1)$. We consider the variational principle $\delta S[\psi] = 0$, subject to variations which fix the endpoints $q(t_0)$, $q(t_1)$.
\begin{prop}\label{AdjointDAEVariationalPrincipleProp}
Let $\psi$ be a curve on the generalized phase space bundle over the interval $I$. Then, $\psi$ is a stationary point of $S$ with respect to variations which fix $q(t_0)$, $q(t_1)$ if and only if \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} hold.
\begin{proof}
In $\psi = (q,u,p,\lambda)$ coordinates, the action has the expression
$$ S[q,u,p,\lambda] = \int_I \left(p_i\dot{q}^i - p_if^i(q,u) - \lambda_A \phi^A(q,u) \right) dt = \int_I \left(p_i (\dot{q}^i - f^i(q,u)) - \lambda_A\phi^A(q,u) \right) dt. $$
The variation of the action reads
\begin{align*}
\delta &S[q,u,p,\lambda]\cdot (\delta q, \delta u, \delta p, \delta \lambda) \\
&= \int_I \left[ \delta p_i (\dot{q}^i - f^i) + p_j \frac{d}{dt} \delta q^j - p_i\frac{\partial f^i}{\partial q^j} \delta q^j - \lambda_A \frac{\partial\phi^A}{\partial q^j}\delta q^j - \delta\lambda_A \phi^A + \left(-p_i \frac{\partial f^i}{\partial u^a} - \lambda_A \frac{\partial \phi^A}{\partial u^a}\right)\delta u^a \right] dt\\
&=\int_I \left[ \delta p_i (\dot{q}^i - f^i) - \left(\dot{p}_j + p_i\frac{\partial f^i}{\partial q^j} + \lambda_A \frac{\partial \phi^A}{\partial q^j} \right) \delta q^j - \delta\lambda_A \phi^A + \left(-p_i \frac{\partial f^i}{\partial u^a} - \lambda_A \frac{\partial \phi^A}{\partial u^a}\right)\delta u^a \right] dt,
\end{align*}
where we used integration by parts and the vanishing of the variations at the endpoints to drop any boundary terms. Clearly, if \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} hold, then $\delta S = 0$ for all such variations. Conversely, by the fundamental lemma of the calculus of variations, if $\delta S = 0$ for all such variations, then \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} hold.
\end{proof}
\end{prop}
\begin{remark}
We will use the variational structure associated with the adjoint DAE system to construct numerical integrators in Section \ref{DiscreteAdjointSystemsSection}.
\end{remark}
We now prove a result regarding the conservation of a quadratic invariant, analogous to the case of cotangent lifted adjoint systems in the ODE case. To do this, we define the variational equations as the linearization of the DAE \eqref{DAEa}-\eqref{DAEb}. The coordinate expressions for the variational equations are obtained by taking the variation of equations \eqref{DAEa}-\eqref{DAEb} with respect to variations $(\delta q, \delta u$),
\begin{subequations}
\begin{align}
\dot{q}^i &= f^i(q,u), \label{DAEVariationalEqn1}\\
0 &= \phi^A(q,u), \label{DAEVariationalEqn2} \\
\frac{d}{dt}\delta q^i &= \frac{\partial f^i(q,u)}{\partial q^j}\delta q^j + \frac{\partial f^i(q,u)}{\partial u^a}\delta u^a, \label{DAEVariationalEqn3}\\
0 &= \frac{\partial \phi^A(q,u)}{\partial q^j} \delta q^j + \frac{\partial\phi^A(q,u)}{\partial u^a} \delta u^a. \label{DAEVariationalEqn4}
\end{align}
\end{subequations}
\begin{prop}\label{Quadratic Invariant Adjoint DAE Prop}
For a solution $(q,u,p,\lambda)$ of the adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} and a solution $(q,u,\delta q,\delta u)$ of the variational equations \eqref{DAEVariationalEqn1}-\eqref{DAEVariationalEqn4}, covering the same curve $(q,u)$, one has
$$ \frac{d}{dt} \langle p(t),\delta q(t) \rangle = 0. $$
\begin{proof}
This follows from a direct computation,
\begin{align*}
\frac{d}{dt} \langle p, \delta q \rangle &= \frac{d}{dt} \left(p_i \delta q^i\right) = \dot{p}_j \delta q^j + p_i \frac{d}{dt}\delta q^i \\
&= -p_i \frac{\partial f^i}{\partial q^j} \delta q^j - \lambda_A \frac{\partial\phi^A}{\partial q^j}\delta q^j + p_i \frac{\partial f^i}{\partial q^j}\delta q^j + p_i\frac{\partial f^i}{\partial u^a}\delta u^a \\
&= - \lambda_A \frac{\partial \phi^A}{\partial q^j}\delta q^j + p_i \frac{\partial f^i}{\partial u^a}\delta u^a \\
&= \left(\lambda_A \frac{\partial \phi^A}{\partial u^a} + p_i\frac{\partial f^i}{\partial u^a} \right)\delta u^a = 0,
\end{align*}
where we used \eqref{DAE Adjoint System Coord 2}, \eqref{DAEVariationalEqn3}, \eqref{DAEVariationalEqn4}, and \eqref{DAE Adjoint System Coord 4}.
\end{proof}
\end{prop}
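To illustrate Proposition \ref{Quadratic Invariant Adjoint DAE Prop} concretely, the following minimal Python sketch (our own toy example, not from the text) integrates the adjoint and variational equations for the hypothetical scalar index 1 DAE $\dot{q} = u - q$, $0 = u - q^2$, for which $u = q^2$ and $\lambda = -p$ on the constraint; the product $p\,\delta q$ should then be constant up to discretization error.

```python
# Toy index-1 DAE (hypothetical example, not from the text):
#   qdot = f(q,u) = u - q,   0 = phi(q,u) = u - q**2.
# On the constraint, u = q**2 and lambda = -p (from 0 = p*df/du + lam*dphi/du).
# Adjoint:      pdot      = -(p*df/dq + lam*dphi/dq) = p*(1 - 2q)
# Variational:  d(dq)/dt  = (2q - 1)*dq   (after eliminating du = 2q*dq)

h, N = 1e-4, 10_000          # step size and number of steps (T = 1)
q, p, dq = 0.5, 1.3, 0.7     # initial data; p, dq arbitrary
inv0 = p * dq                # the claimed invariant <p, delta q>

for _ in range(N):
    u = q * q                                   # solve phi(q,u) = 0 for u
    p_new  = p  * (1.0 + h * (1.0 - 2.0 * q))   # adjoint equation (forward Euler)
    dq_new = dq * (1.0 + h * (2.0 * q - 1.0))   # variational equation
    q += h * (u - q)                            # qdot = f(q,u)
    p, dq = p_new, dq_new

drift = abs(p * dq - inv0)
print(drift)   # O(h): the continuous invariant is preserved up to discretization error
assert drift < 1e-3
```

With forward Euler the invariant drifts only at $O(h)$; a presymplectic integrator would preserve it exactly.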
\begin{remark}\label{PresymplecticityQuadraticInvariantDAERemark}
Although we proved the previous proposition in coordinates, it can be understood intrinsically through the presymplecticity of the adjoint DAE flow. To see this, assume a partially-defined vector field $X$ exists such that $i_X\Omega_0 = dH$. Then, the flow of $X$ preserves $\Omega_0$, which follows from
$$ \mathcal{L}_X\Omega_0 = i_X d\Omega_0 + d(i_X\Omega_0) = d(i_X\Omega_0) = d^2H = 0. $$
The coordinate expression for the preservation of the presymplectic form $\Omega_0 = dq^i \wedge dp_i$, with the appropriate choice of first variations, gives the previous proposition, analogous to the argument that we made in the symplectic (unconstrained) case.
Additionally, as we will see in Section \ref{AdjointSensitivitySection}, Proposition \ref{Quadratic Invariant Adjoint DAE Prop} will provide a method for computing adjoint sensitivities.
These two observations are relevant when constructing numerical methods to compute adjoint sensitivities: if we can construct integrators that preserve the presymplectic form, then they will also preserve the quadratic invariant and hence be suitable for computing adjoint sensitivities efficiently.
\end{remark}
\begin{remark}\label{DAEVariationalEqnExistenceRemark}
For an index 1 DAE \eqref{DAEa}-\eqref{DAEb}, since $\partial\phi/\partial u$ is (pointwise) invertible for a fixed curve $(q,u)$, one can solve for $\delta u$ as a function of $\delta q$ in the variational equation \eqref{DAEVariationalEqn4} and substitute this into \eqref{DAEVariationalEqn3} to obtain an explicit ODE for $\delta q$. Hence, in the index 1 case, given a solution $(q,u)$ of the DAE \eqref{DAEa}-\eqref{DAEb} and an initial condition $\delta q(0)$ in the tangent fiber over $q(0)$, there is a corresponding (at least local) unique solution of the variational equations.
\end{remark}
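The elimination described in the remark can be sketched numerically with NumPy; the Jacobian values below are hypothetical placeholders, with $D_u\phi$ invertible as in the index 1 case.

```python
import numpy as np

# One step of the index-1 elimination for the variational equations
# (hypothetical Jacobian data): given Jacobians of f and phi at a point
# (q,u) on the constraint, eliminate delta_u from
#   0 = D_q phi * dq + D_u phi * du,
# then evaluate the resulting explicit ODE for delta_q.

Dqf   = np.array([[0.0, 1.0], [-1.0, 0.0]])   # df/dq   (2x2, assumed)
Duf   = np.array([[0.0], [1.0]])              # df/du   (2x1, assumed)
Dqphi = np.array([[2.0, 0.5]])                # dphi/dq (1x2, assumed)
Duphi = np.array([[1.5]])                     # dphi/du (1x1, invertible: index 1)

dq = np.array([0.3, -0.2])                    # a variation delta_q
du = -np.linalg.solve(Duphi, Dqphi @ dq)      # delta_u as a function of delta_q
dq_dot = Dqf @ dq + Duf @ du                  # explicit ODE right hand side for delta_q

# the eliminated pair satisfies the constraint variation identically
assert np.allclose(Dqphi @ dq + Duphi @ du, 0.0)
```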
\subsubsection{DAE Index and the Presymplectic Constraint Algorithm}\label{DAEIndexPCASection}
In this section, we relate the index of the DAE \eqref{DAEa}-\eqref{DAEb} to the number of steps for convergence in the presymplectic constraint algorithm associated with the adjoint DAE system \eqref{DAE Adjoint System Intrinsic}. In particular, we show that for an index 1 DAE, the presymplectic constraint algorithm for the associated adjoint DAE system converges after $\nu_P = 1$ step. Subsequently, we discuss how one can formally handle the more general index $\nu$ DAE case.
We consider again the presymplectic system given by the adjoint DAE system, $P = \overline{T^*M}_d \oplus \Phi^*$ equipped with the presymplectic form $\Omega_0 = dq \wedge dp$ and Hamiltonian $H(q,u,p,\lambda) = \langle p,f(q,u)\rangle + \langle \lambda, \phi(q,u)\rangle$, as discussed in the previous section. Our goal is to bound the number of steps in the presymplectic constraint algorithm $\nu_P$ for this presymplectic system in terms of the index $\nu$ of the underlying DAE \eqref{DAEa}-\eqref{DAEb}.
Recall the presymplectic constraint algorithm discussed in Section \ref{SymplecticGeometrySection}. We first determine the primary constraint manifold $P_1$. Observe that since $\Omega_0 = dq \wedge dp$, we have the local expression $\text{ker}(\Omega_0)|_{(q,u,p,\lambda)} = \text{span}\{\partial/\partial u, \partial/\partial \lambda\}$. Thus, we require that
\begin{align*}
\frac{\partial H}{\partial u} &= 0, \\
\frac{\partial H}{\partial \lambda} &= 0,
\end{align*}
i.e., $P_1$ consists of the points $(q,u,p,\lambda)$ such that
\begin{align*}
0 &= \frac{\partial H(q,u,p,\lambda)}{\partial u^a} = p_i \frac{\partial f^i(q,u)}{\partial u^a} + \lambda_A \frac{\partial\phi^A(q,u)}{\partial u^a}, \\
0 &= \frac{\partial H(q,u,p,\lambda)}{\partial \lambda_A} = \phi^A(q,u).
\end{align*}
These are of course the constraint equations \eqref{DAE Adjoint System Coord 3}-\eqref{DAE Adjoint System Coord 4} of the adjoint DAE system.
We now consider first the case when the DAE system \eqref{DAEa}-\eqref{DAEb} has index $\nu=1$ and subsequently, consider the general case $\nu \geq 1$.
\textbf{The Presymplectic Constraint Algorithm for $\nu=1$.} For the case $\nu=1$, we will show that the presymplectic constraint algorithm terminates after $1$ step, i.e., $\nu_P = \nu = 1$.
Now, assume that the DAE system \eqref{DAEa}-\eqref{DAEb} has index $\nu=1$, i.e., for each $(q,u) \in M_d \times M_a$ such that $\phi(q,u) = 0$, the matrix with $A^{\text{th}}$ row and $a^{\text{th}}$ column entry
$$ \frac{\partial\phi^A(q,u)}{\partial u^a} $$
is invertible. Observe that the definition of the presymplectic constraint algorithm, equation \eqref{Presymplectic Constraint Algorithm Sequence}, is local and hence, we seek a local coordinate expression for $\Omega_1 \equiv \Omega_0|_{P_1}$ and its kernel.
Let $(q,u,p,\lambda) \in P_1$. In particular, $\phi(q,u) = 0$. Since $\partial\phi(q,u)/\partial u$ is invertible, by the implicit function theorem, one can locally solve for $u$ as a function of $q$, which we denote $u = u(q)$, such that $\phi(q,u(q)) = 0$. Then, one can furthermore locally solve for $\lambda$ as a function of $q$ and $p$ from the second constraint equation,
$$ \lambda_A(q,p) = - \left[ \left( \frac{\partial\phi(q,u(q))}{\partial u} \right)^{-1}\right]^a_A p_i \frac{\partial f^i(q,u(q))}{\partial u^a}. $$
Thus, we can coordinatize $P_1$ via coordinates $(q',p')$, where the inclusion $i_1: P_1 \hookrightarrow P$ is given by the coordinate expression
$$ i_1: (q',p') \mapsto (q', u(q'), p', \lambda(q',p')). $$
Then, one obtains the local expression for $\Omega_1$,
$$ \Omega_1 = i_1^*\Omega_0 = i_1^*(dq) \wedge i_1^*(dp) = dq' \wedge dp'. $$
This is clearly nondegenerate, i.e., $Z|_p = 0$ for any $Z \in \text{ker}(\Omega_1)$ and $p \in P_1$, so the presymplectic constraint algorithm terminates, $P_2 = P_1$. We conclude that $\nu_P = 1$.
To conclude the discussion of the index $1$ case, we obtain coordinate expressions for the resulting nondegenerate Hamiltonian system. The Hamiltonian on $P_1$ can be expressed as
$$ H_1(q',p') = H(i_1(q',p')) = \langle p', f(q',u(q'))\rangle + \langle \lambda(q',p'), \phi(q',u(q'))\rangle = \langle p', f(q',u(q'))\rangle. $$
Thus, with the coordinate expression $X = \dot{q}'^i \partial/\partial q'^i + \dot{p}'_i \partial/\partial p'_i$, Hamilton's equations $i_X \Omega_1 = dH_1$ can be expressed as
\begin{align*}
\dot{q}'^i &= \frac{\partial H_1}{\partial p'_i} = f^i(q',u(q')), \\
\dot{p}'_i &= - \frac{\partial H_1}{\partial q'^i} = -p'_j \frac{\partial f^j(q',u(q'))}{\partial q^i} - p'_j \frac{\partial f^j(q',u(q'))}{\partial u^a} \frac{\partial u^a(q')}{\partial q'^i}.
\end{align*}
We will now show explicitly that this Hamiltonian system solves \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} along the submanifold $P_1$. Clearly, the latter two equations \eqref{DAE Adjoint System Coord 3}-\eqref{DAE Adjoint System Coord 4} are satisfied, by definition of $P_1$. So, we want to show that the first two equations \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 2} are satisfied. Using the second constraint equation \eqref{DAE Adjoint System Coord 4}, we have
$$- p'_j \frac{\partial f^j(q',u(q'))}{\partial u^a} = \lambda_A(q',p') \frac{\partial\phi^A(q',u(q'))}{\partial u^a}.$$
Substituting this into the equation for $\dot{p}'_i$ above gives
$$ \dot{p}'_i = -p'_j \frac{\partial f^j(q',u(q'))}{\partial q^i} + \lambda_A(q',p') \frac{\partial\phi^A(q',u(q'))}{\partial u^a} \frac{\partial u^a(q')}{\partial q'^i}. $$
By the implicit function theorem, one has
$$ \frac{\partial\phi^A(q',u(q'))}{\partial u^a} \frac{\partial u^a(q')}{\partial q'^i} = - \frac{\partial \phi^A(q',u(q'))}{\partial q^i}. $$
Hence, the Hamiltonian system on $P_1$ can be equivalently expressed as
\begin{align*}
\dot{q}'^i &= f^i(q',u(q')), \\
\dot{p}'_i &= -p'_j \frac{\partial f^j(q',u(q'))}{\partial q^i} - \lambda_A(q',p') \frac{\partial \phi^A(q',u(q'))}{\partial q^i}.
\end{align*}
Thus, we have explicitly verified that \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} are satisfied along $P_1$. Note that since the presymplectic constraint algorithm terminates at $\nu_P = 1$, $X$ is guaranteed to be tangent to $P_1$. One can also verify this explicitly by computing the pushforward $Ti_1(X)$ and verifying that it annihilates the constraint functions whose zero level set defines $P_1$,
\begin{align*}
(q,u,p,\lambda) &\mapsto \phi^A(q,u), \\
(q,u,p,\lambda) &\mapsto p_i \frac{\partial f^i(q,u)}{\partial u^a} + \lambda_A \frac{\partial\phi^A(q,u)}{\partial u^a}.
\end{align*}
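The local solves used above can be mimicked numerically. The sketch below (a hypothetical scalar example with $f(q,u) = u - q$ and $\phi(q,u) = u + 0.1\sin u - q^2$, chosen only so that $\partial\phi/\partial u$ is invertible) computes $u(q)$ by Newton iteration and $\lambda(q,p)$ from the linear constraint, then checks that both constraint equations hold.

```python
import math

# Minimal sketch (hypothetical scalar DAE) of the index-1 reduction:
# solve phi(q,u) = 0 for u = u(q) by Newton, then solve the linear
# constraint 0 = p*df/du + lam*dphi/du for lam = lam(q,p).

f      = lambda q, u: u - q
phi    = lambda q, u: u + 0.1 * math.sin(u) - q * q
dphidu = lambda q, u: 1.0 + 0.1 * math.cos(u)    # bounded below, so invertible
dfdu   = lambda q, u: 1.0

def u_of_q(q, u0=0.0, tol=1e-12):
    u = u0
    for _ in range(50):                  # Newton iteration on u -> phi(q,u)
        step = phi(q, u) / dphidu(q, u)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton failed")

def lam_of_qp(q, p):
    u = u_of_q(q)
    return -p * dfdu(q, u) / dphidu(q, u)

q, p = 1.2, 0.8
u = u_of_q(q)
lam = lam_of_qp(q, p)
assert abs(phi(q, u)) < 1e-10                             # constraint phi(q,u) = 0
assert abs(p * dfdu(q, u) + lam * dphidu(q, u)) < 1e-10   # constraint for lambda
```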
\begin{remark}\label{Index1ReductionPCA}
It is interesting to note that the Hamiltonian system $i_X\Omega_1 = dH_1$, which we obtained by forming the adjoint system of the underlying index 1 DAE and subsequently, reducing the index of the adjoint DAE system through the presymplectic constraint algorithm, can be equivalently obtained (at least locally) by first reducing the index of the underlying DAE and then forming the adjoint system.
More precisely, if one locally solves $\phi(q,u) = 0$ for $u = u(q)$, then the index 1 DAE can be reduced to an ODE,
$$ \dot{q} = f(q,u(q)). $$
Subsequently, we can form the adjoint system to this ODE, as discussed in Section \ref{AdjointManifoldSection}. The corresponding Hamiltonian is $H(q,p) = \langle p, f(q,u(q)) \rangle$, which is the same as $H_1$.
Thus, in the index 1 case, the operations of forming the adjoint system and reducing the index commute.
\end{remark}
\begin{remark}
In the language of the presymplectic constraint algorithm, Proposition~\ref{Dynamical Hamiltonian Time Dependence Prop} can be restated as saying that the Hamiltonian $H$ and its first derivatives, restricted to the primary constraint manifold, agree with the dynamical Hamiltonian $H_1$ and its first derivatives.
\end{remark}
\begin{remark}
An alternative view of the solution theory of the presymplectic adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} is through singular perturbation theory (see, for example, \citet{Be2007} and \citet{ChTr2021}). We proceed by writing \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} as
\begin{align*}
\dot{q} &= \frac{\partial H}{\partial p} = f(q,u), \\
\dot{p} &= -\frac{\partial H}{\partial q} = - [D_qf(q,u)]^*p - [D_q\phi(q,u)]^*\lambda, \\
0 &= \frac{\partial H}{\partial \lambda} = \phi(q,u), \\
0 &= - \frac{\partial H}{\partial u} = - [D_uf(q,u)]^*p - [D_u\phi(q,u)]^*\lambda.
\end{align*}
Applying a singular perturbation to the constraint equations yields the system
\begin{align*}
\dot{q} &= \frac{\partial H}{\partial p}, \\
\dot{p} &= -\frac{\partial H}{\partial q}, \\
\epsilon \dot{u} &= \frac{\partial H}{\partial \lambda}, \\
\epsilon \dot{\lambda} &= - \frac{\partial H}{\partial u},
\end{align*}
where $\epsilon > 0$. Observe that this is a nondegenerate Hamiltonian system with $H(q,u,p,\lambda)$ as previously defined but with the modified symplectic form $\Omega_\epsilon = dq \wedge dp + \epsilon\, du \wedge d\lambda$. Then, the above system can be expressed $i_{X_H}\Omega_\epsilon = dH$. In the language of perturbation theory, the primary constraint manifold for the presymplectic system is precisely the slow manifold of the singularly perturbed system. One can utilize techniques from singular perturbation theory to develop a solution theory for this system, using Tihonov's theorem, whose assumptions for this particular system depend on the eigenvalues of the algebraic Hessian $D_{u,\lambda}^2H$ (see, \citet{Be2007}). Although we will not elaborate on this here, this could be an interesting approach for the existence, stability, and approximation theory of such systems. In particular, the slow manifold integrators introduced in \citet{BuKl2020} may be relevant to their discretization. It is also interesting to note that for a solution $(q_\epsilon, p_\epsilon, u_\epsilon, \lambda_\epsilon)$ of the singularly perturbed system and a solution $(\delta q_\epsilon, \delta u_\epsilon)$ of the variational equations,
\begin{align*}
\frac{d}{dt} \delta q_\epsilon &= D_qf(q_\epsilon, u_\epsilon) \delta q_\epsilon + D_uf(q_\epsilon,u_\epsilon)\delta u_\epsilon, \\
\epsilon \frac{d}{dt} \delta u_\epsilon &= D_q\phi(q_\epsilon,u_\epsilon)\delta q_\epsilon + D_u\phi(q_\epsilon, u_\epsilon) \delta u_\epsilon,
\end{align*}
one has the perturbed adjoint variational quadratic conservation law
$$ \frac{d}{dt} \Big( \langle p_\epsilon, \delta q_\epsilon \rangle + \epsilon \langle \lambda_\epsilon, \delta u_\epsilon\rangle \Big) = 0, $$
which follows immediately from the preservation of $\Omega_\epsilon$ under the symplectic flow.
\end{remark}
\textbf{The Presymplectic Constraint Algorithm for General $\nu \geq 1$.} Note that for the general case, we assume that the index of the DAE is finite, $1 \leq \nu < \infty$.
In this case, there are two possible approaches to reduce the adjoint system: either form the adjoint system associated with the index $\nu$ DAE and then successively apply the presymplectic constraint algorithm or, alternatively, reduce the index of the DAE, form the adjoint system, and then apply the presymplectic constraint algorithm as necessary.
Since we have already worked out the presymplectic constraint algorithm for the index 1 case, we will take the latter approach. Namely, we reduce an index $\nu$ DAE to an index $1$ DAE, and subsequently, apply the presymplectic constraint algorithm to the reduced index 1 DAE. Given an index $\nu$ DAE, it is generally possible to reduce the DAE to an index 1 DAE using the algorithm introduced in \citet{MaSo1993}. The process of index reduction is given by differentiating the equations of the DAE to reveal hidden constraints. Geometrically, the process of index reduction can be understood as the successive jet prolongation of the DAE and subsequent projection back onto the first jet (see, \citet{ReLiWi2001}).
Thus, given an index $\nu$ DAE $\dot{x} = \tilde{f}(x,y)$, $\tilde{\phi}(x,y) = 0$, we can, after $\nu-1$ reduction steps, transform it into an index 1 DAE of the form $\dot{q} = f(q,u)$, $\phi(q,u) = 0$. Subsequently, we can form the adjoint DAE system and apply one iteration of the presymplectic constraint algorithm to obtain the underlying nondegenerate dynamical system. If we let $\nu_{R,P}$ denote the minimum number of DAE index reduction steps plus presymplectic constraint algorithm iterations necessary to take an index $\nu$ DAE and obtain the underlying nondegenerate Hamiltonian system associated with the adjoint, we have $\nu_{R,P} \leq \nu$.
\begin{remark}
Note that we could have reduced the index $\nu$ DAE to an explicit ODE after $\nu$ reduction steps, and subsequently, formed the adjoint. While this is formally equivalent to the above procedure by Remark \ref{Index1ReductionPCA}, we prefer to keep the DAE in index 1 form. This is especially preferable from the viewpoint of numerics: if one reduces an index 1 DAE to an ODE and attempts to apply a numerical integrator, it is generically the case that the discrete flow drifts off the constraint manifold. For this reason, it is preferable to develop numerical integrators for the index 1 adjoint DAE system directly to prevent constraint violation.
\end{remark}
\begin{example}[Hessenberg Index 2 DAE]
Consider a Hessenberg index 2 DAE, i.e., a DAE of the form
\begin{align*}
\dot{q} &= f(q,u), \\
0 &= g(q),
\end{align*}
where $(q,u) \in \mathbb{R}^n \times \mathbb{R}^m$, $f: \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}^n$, $g: \mathbb{R}^n \rightarrow \mathbb{R}^m$, and $\frac{\partial g}{\partial q} \frac{\partial f}{\partial u}$ is pointwise invertible. We reduce this to an index 1 DAE \eqref{DAEa}-\eqref{DAEb} as follows. Let $M_d = g^{-1}(\{0\})$ be the dynamical configuration space which we will assume is a submanifold of $\mathbb{R}^n$. For example, this is true if $g$ is a constant rank map. Furthermore, let $M_a = \mathbb{R}^m$ be the algebraic configuration space. To reduce the index, we differentiate the constraint $g(q) = 0$ with respect to time. This is equivalent to enforcing that the dynamics are tangent to $M_d$. This gives
$$ 0 = \frac{\partial g^A(q)}{\partial q^i}{\dot{q}^i} = \frac{\partial g^A(q)}{\partial q^i} f^i(q,u) \equiv \phi^A(q,u). $$
Hence, we can form the semi-explicit index 1 system on $M_d \times M_a$ given by
\begin{align*}
\dot{q} &= f(q,u), \\
0 &= \phi(q,u).
\end{align*}
The above system is an index 1 DAE since $\frac{\partial \phi}{\partial u} = \frac{\partial g}{\partial q}\frac{\partial f}{\partial u}$ is pointwise invertible.
We now form the adjoint DAE system associated with this index 1 DAE, \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4}. Expressing the constraint in terms of $g$ and $f$, instead of $\phi$, gives
\begin{align*}
\dot{q}^i &= f^i(q,u), \\
\dot{p}_j &= -p_i \frac{\partial f^i(q,u)}{\partial q^j} - \lambda_A \left( \frac{\partial^2 g^A(q)}{\partial q^j \partial q^i} f^i(q,u) + \frac{\partial g^A(q)}{\partial q^i} \frac{\partial f^i(q,u)}{\partial q^j} \right), \\
0 &= \frac{\partial g^A(q)}{\partial q^i} f^i(q,u),\\
0 &= p_i \frac{\partial f^i(q,u)}{\partial u^a} + \lambda_A \left( \frac{\partial g^A(q)}{\partial q^i} \frac{\partial f^i(q,u)}{\partial u^a} \right).
\end{align*}
We can then apply one iteration of the presymplectic constraint algorithm, as discussed above in the index $\nu=1$ case, to obtain the underlying nondegenerate Hamiltonian dynamics. Restricting to the primary constraint manifold, using the first constraint equation to solve for $u=u(q)$ by the implicit function theorem and subsequently, using the second constraint equation to solve for $\lambda = \lambda(q,p)$ by inverting $\left(\frac{\partial g}{\partial q} \frac{\partial f}{\partial u}\right)^T$, gives the Hamiltonian system
\begin{align*}
\dot{q}'^i &= f^i(q',u(q')), \\
\dot{p}'_j &= -p'_i \frac{\partial f^i(q',u(q'))}{\partial q^j} - \lambda_A(q',p') \left( \frac{\partial^2 g^A(q')}{\partial q^j \partial q^i} f^i(q',u(q')) + \frac{\partial g^A(q')}{\partial q^i} \frac{\partial f^i(q',u(q'))}{\partial q^j} \right).
\end{align*}
\end{example}
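As a quick numerical illustration of this reduction (with hypothetical data, not from the text), take $q = (x,y)$, $f(q,u) = (y,u)$, and $g(q) = (x^2 + y^2 - 1)/2$, so that $\phi(q,u) = y(x + u)$ and $\partial\phi/\partial u = y$ is invertible away from $y = 0$; the reduced dynamics with $u(q) = -x$ rotate on the unit circle, and the hidden constraint is maintained under integration.

```python
# Hypothetical instance of the Hessenberg index-2 reduction above:
#   qdot = f(q,u) = (y, u),   0 = g(q) = (x^2 + y^2 - 1)/2.
# Differentiating g gives phi(q,u) = Dg(q) f(q,u) = y*(x + u), and
# dphi/du = y is invertible away from y = 0, so the index-1 constraint
# yields u(q) = -x and the reduced dynamics rotate on the circle.

def vec(q):
    x, y = q
    u = -x                      # u solved from phi(q,u) = 0
    return (y, u)

def rk4_step(q, h):
    k1 = vec(q)
    k2 = vec((q[0] + 0.5*h*k1[0], q[1] + 0.5*h*k1[1]))
    k3 = vec((q[0] + 0.5*h*k2[0], q[1] + 0.5*h*k2[1]))
    k4 = vec((q[0] + h*k3[0], q[1] + h*k3[1]))
    return (q[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            q[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

q, h = (0.6, 0.8), 1e-2         # initial condition on the circle
for _ in range(500):
    q = rk4_step(q, h)

x, y = q
assert abs(x*x + y*y - 1.0) < 1e-8    # hidden constraint g(q) = 0 is maintained
```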
\subsubsection{Adjoint Systems for DAEs with Augmented Hamiltonians}\label{AugmentedAdjointDAESection}
In Section \ref{AugmentedAdjointODESection}, we augmented the adjoint ODE Hamiltonian by some function $L$. In this section, we proceed analogously for the adjoint DAE system.
To begin, let $H(q,u,p,\lambda) = \langle p,f(q,u)\rangle + \langle \lambda,\phi(q,u)\rangle$ be the Hamiltonian on the generalized phase space bundle corresponding to the DAE $\dot{q}=f(q,u)$, $0 = \phi(q,u)$, and let $L: M_d \times M_a \rightarrow \mathbb{R}$ be the function that we would like to augment. We identify $L$ with its pullback through $\overline{T^*M}_d \oplus \Phi^* \rightarrow M_d \times M_a$. Then, we define the augmented Hamiltonian
\begin{align*}
H_L \equiv H+L: \overline{T^*M}_d \oplus \Phi^* &\rightarrow \mathbb{R} \\
(q,u,p,\lambda) &\mapsto H(q,u,p,\lambda) + L(q,u).
\end{align*}
We define the augmented adjoint DAE system as the presymplectic system
\begin{equation}\label{AugmentedAdjointDAEIntrinsic}
i_{X_{H_L}}\Omega_0 = dH_L.
\end{equation}
A direct calculation yields the coordinate expression, along an integral curve of such a (generally, partially-defined) vector field $X_{H_L}$,
\begin{subequations}
\begin{align}
\dot{q}^i &= f^i(q,u), \label{Augmented Adjoint DAE System Coord 1} \\
\dot{p}_j &= -p_i \frac{\partial f^i}{\partial q^j} - \lambda_A \frac{\partial \phi^A}{\partial q^j} - \frac{\partial L}{\partial q^j}, \label{Augmented Adjoint DAE System Coord 2}\\
0 &= \phi^A(q,u),\label{Augmented Adjoint DAE System Coord 3} \\
0 &= p_i \frac{\partial f^i}{\partial u^a} + \lambda_A \frac{\partial \phi^A}{\partial u^a} + \frac{\partial L}{\partial u^a}.\label{Augmented Adjoint DAE System Coord 4}
\end{align}
\end{subequations}
\begin{remark}
Observe that if the base DAE \eqref{DAEa}-\eqref{DAEb} has index 1, then the above system has index 1 by the exact same argument given in the nonaugmented case. After reduction by applying the presymplectic constraint algorithm and solving for $u$ as a function of $q$ and $\lambda$ as a function of $(q,p)$, the underlying nondegenerate Hamiltonian system on the primary (final) constraint manifold corresponds to the Hamiltonian
$$(H_L)_1(q',p') = \langle p',f(q',u(q'))\rangle + L(q',u(q')),$$
which is the adjoint Hamiltonian for the ODE $\dot{q}' = f(q',u(q'))$, augmented by $L(q',u(q'))$.
However, as we will discuss in Section \ref{OCPSection}, it is not uncommon in optimal control problems for $\partial\phi/\partial u$ to be singular, but the presence of $\int L\, dt$ in the minimization objective may uniquely specify the singular degrees of freedom.
\end{remark}
We now prove an analogous proposition to Proposition \ref{Quadratic Invariant Adjoint DAE Prop}, modified by the presence of $L$ in the Hamiltonian. We again consider the variational equations \eqref{DAEVariationalEqn1}-\eqref{DAEVariationalEqn4} associated with the base DAE \eqref{DAEa}-\eqref{DAEb}, which for simplicity we express in matrix derivative notation as
\begin{subequations}
\begin{align}
\dot{q} &= f(q,u), \label{Full q,u DAE variation 1} \\
0 &= \phi(q,u), \label{Full q,u DAE variation 2} \\
\frac{d}{dt} \delta q &= D_qf(q,u)\delta q + D_uf(q,u)\delta u, \label{Full q,u DAE variation 3} \\
0 &= D_q\phi(q,u)\delta q + D_u\phi(q,u)\delta u. \label{Full q,u DAE variation 4}
\end{align}
\end{subequations}
\begin{prop}\label{Quadratic Invariant Augmented DAE Adjoint Prop}
For a solution $(q,u,p,\lambda)$ of the augmented adjoint DAE system \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4} and a solution $(q,u,\delta q, \delta u)$ of the variational equations \eqref{Full q,u DAE variation 1}-\eqref{Full q,u DAE variation 4}, covering the same solution $(q,u)$ of the base DAE \eqref{DAEa}-\eqref{DAEb},
\begin{equation}\label{Quadratic Invariant Augmented DAE Adjoint Eqn}
\frac{d}{dt} \langle p,\delta q \rangle = -\langle \nabla_qL, \delta q\rangle - \langle \nabla_uL,\delta u\rangle.
\end{equation}
\begin{proof}
This follows from a direct computation:
\begin{align*}
\frac{d}{dt} \langle p,\delta q\rangle &= \langle \dot{p},\delta q\rangle + \langle p, \frac{d}{dt}\delta q\rangle \\
&= - \langle [D_qf]^*p, \delta q \rangle - \langle [D_q\phi]^*\lambda, \delta q\rangle - \langle \nabla_qL, \delta q\rangle + \langle p, D_qf \delta q\rangle + \langle p, D_uf \delta u\rangle \\
&= - \langle \lambda, D_q\phi\delta q\rangle - \langle \nabla_qL, \delta q\rangle + \langle p, D_uf \delta u\rangle \\
&= \langle \lambda, D_u\phi \delta u\rangle - \langle \nabla_qL, \delta q\rangle + \langle p, D_uf \delta u\rangle \\
&= - \langle \nabla_qL, \delta q\rangle + \langle [D_u\phi]^*\lambda + [D_uf]^*p, \delta u\rangle \\
&= -\langle \nabla_qL, \delta q\rangle - \langle \nabla_uL,\delta u\rangle,
\end{align*}
where in the fourth equality above we used \eqref{Full q,u DAE variation 4} and in the sixth equality above we used \eqref{Augmented Adjoint DAE System Coord 4}.
\end{proof}
\end{prop}
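The identity \eqref{Quadratic Invariant Augmented DAE Adjoint Eqn} can be checked numerically. The Python sketch below uses the hypothetical scalar DAE $\dot{q} = u - q$, $0 = u - q^2$ with $L(q,u) = qu$ (our own toy choice, not from the text): it integrates the augmented adjoint and variational equations with forward Euler and compares the change in $p\,\delta q$ to the accumulated right hand side.

```python
# Hypothetical toy model: f(q,u) = u - q, phi(q,u) = u - q**2, L(q,u) = q*u.
# Augmented adjoint constraint: 0 = p*df/du + lam*dphi/du + dL/du
#                                 = p + lam + q   =>  lam = -p - q.
# Augmented adjoint: pdot = -(p*df/dq + lam*dphi/dq + dL/dq) = p + 2*q*lam - u.
# Variational eqns:  d(dq)/dt = (2q - 1)*dq,  du = 2q*dq.

h, N = 1e-4, 10_000
q, p, dq = 0.5, 0.4, 1.0
acc = 0.0                                # integral of -<grad_q L, dq> - <grad_u L, du>
val0 = p * dq

for _ in range(N):
    u  = q * q                           # algebraic variable from phi = 0
    du = 2.0 * q * dq                    # variation of u along the constraint
    rhs = -(u * dq) - (q * du)           # grad_q L = u, grad_u L = q
    lam = -p - q                         # augmented adjoint constraint
    q_new  = q  + h * (u - q)
    p_new  = p  + h * (p + 2.0 * q * lam - u)
    dq_new = dq + h * ((2.0 * q - 1.0) * dq)
    acc += h * rhs
    q, p, dq = q_new, p_new, dq_new

# d/dt <p, dq> should equal the accumulated right hand side, up to O(h)
assert abs((p * dq - val0) - acc) < 1e-2
```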
\begin{remark}
Analogous to the ODE case discussed in Remark \ref{Augmented ODE Quadratic Invariant Remark}, for the nonaugmented adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4}, $\langle p, \delta q\rangle$ is preserved by virtue of presymplecticity. On the other hand, for the augmented adjoint DAE system, despite preserving the same presymplectic form, the change of $\langle p,\delta q\rangle$ now measures the change in $L$ with respect to variations in $q$ and $u$. This can be understood from the fact that the adjoint equations for $(p,\lambda)$ in the nonaugmented case, \eqref{DAE Adjoint System Coord 2} and \eqref{DAE Adjoint System Coord 4}, are linear in $(p,\lambda)$, so that one can identify first variations in $(p,\lambda)$ with $(p,\lambda)$; whereas, in the augmented case, equations \eqref{Augmented Adjoint DAE System Coord 2} and \eqref{Augmented Adjoint DAE System Coord 4} are affine in $(p,\lambda)$, so such an identification cannot be made. Furthermore, the failure of \eqref{Augmented Adjoint DAE System Coord 2} and \eqref{Augmented Adjoint DAE System Coord 4} to be linear in $(p,\lambda)$ is given precisely by $\nabla_qL$ and $\nabla_uL$, respectively. Thus, in the augmented case, this leads to the additional terms $-\langle \nabla_qL,\delta q\rangle - \langle \nabla_uL,\delta u\rangle$ in equation \eqref{Quadratic Invariant Augmented DAE Adjoint Eqn}.
\end{remark}
\section{Applications}\label{ApplicationsSection}
\subsection{Adjoint Sensitivity Analysis for Semi-explicit Index 1 DAEs}\label{AdjointSensitivitySection}
In this section, we discuss how one can utilize adjoint systems to compute sensitivities. We will split this into four cases; namely, we want to compute sensitivities for ODEs or DAEs (we will focus on index 1 DAEs), and whether we are computing the sensitivity of a terminal cost or the sensitivity of a running cost.
The relevant adjoint system used to compute sensitivities in each of the four cases is summarized in the following table:
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
& Terminal Cost & Running Cost \\
\hline
ODE & Adjoint ODE System \eqref{AdjointODEa}-\eqref{AdjointODEb} & Augmented Adjoint ODE System \eqref{AugmentedAdjointCoordinate1}-\eqref{AugmentedAdjointCoordinate2} \\
DAE & Adjoint DAE System \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} & Augmented Adjoint DAE System \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4} \\
\hline
\end{tabular}
\end{center}
Note that in our calculations below, the top row (the ODE case) can be formally obtained from the bottom row (the DAE case) simply by ignoring the algebraic variables $(u,\lambda)$ and letting the constraint function $\phi$ be identically zero. Thus, we will focus on the bottom row, i.e., computing sensitivities of a terminal cost function and of a running cost function, subject to a DAE constraint. In both cases, we will first show how the adjoint sensitivity can be derived using a traditional variational argument. Subsequently, we will show how the adjoint sensitivity can be derived more simply by using Propositions \ref{Quadratic Invariant Adjoint DAE Prop} and \ref{Quadratic Invariant Augmented DAE Adjoint Prop}.
\textbf{Adjoint Sensitivity of a Terminal Cost.}
Consider the DAE $\dot{q} = f(q,u)$, $0 = \phi(q,u)$ as in Section \ref{AdjointDAESection}. We will assume that $M_d$ is a vector space and additionally, that the DAE has index 1. We would like to extract the gradient of a terminal cost function $C(q(t_f))$ with respect to the initial condition $q(0) = \alpha$, i.e., we want to extract the sensitivity of $C(q(t_f))$ with respect to an infinitesimal perturbation in the initial condition, given by $\nabla_\alpha C(q(t_f))$. Consider the functional $J$ defined by
$$ J = C(q(t_f)) - \langle p_0, q(0) - \alpha\rangle - \int_0^{t_f} [\langle p, \dot{q}-f(q,u)\rangle - \langle \lambda, \phi(q,u)\rangle ]dt. $$
Observe that for $(q,u)$ satisfying the given DAE with initial condition $q(0) = \alpha$, $J$ coincides with $C(q(t_f))$. We think of $p_0$ as a free parameter. For simplicity, we will use matrix derivative notation instead of indices. Computing the variation of $J$ yields
\begin{align*}
\delta J &= \langle \nabla_qC(q(t_f)), \delta q(t_f)\rangle - \langle p_0, \delta q(0) - \delta \alpha\rangle \\
&\qquad - \int_0^{t_f} \Big[ \langle p, \frac{d}{dt} \delta q - D_qf(q,u)\delta q\rangle - \langle p, D_uf(q,u)\delta u\rangle - \langle \lambda, D_q\phi(q,u)\delta q + D_u\phi(q,u)\delta u\rangle \Big]dt.
\end{align*}
Integrating by parts in the term containing $\frac{d}{dt}\delta q$ and restricting to a solution $(q,u,p,\lambda)$ of the adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} yields
\begin{align*}
\delta J &= \langle \nabla_qC(q(t_f)) - p(t_f), \delta q(t_f)\rangle - \langle p_0, \delta \alpha\rangle + \langle p(0) - p_0, \delta q(0)\rangle .
\end{align*}
We enforce the endpoint condition $p(t_f) = \nabla_qC(q(t_f))$ and choose $p_0 = p(0)$, which yields
$$ \delta J = \langle p(0), \delta\alpha\rangle. $$
Hence, the sensitivity of $C(q(t_f))$ is given by
$$ p(0) = \nabla_\alpha J = \nabla_\alpha C(q(t_f)), $$
with initial condition $q(0) = \alpha$ and terminal condition $p(t_f) = \nabla_qC(q(t_f))$. Thus, the adjoint sensitivity can be computed by setting the terminal condition on $p(t_f)$ above and subsequently solving for the momenta $p$ at time $0$. In order for this to be well-defined, we have to verify that the given initial and terminal conditions lie on the primary constraint manifold $P_1$. However, as discussed in Section \ref{DAEIndexPCASection}, since the DAE has index 1, we can always solve for the algebraic variables $u = u(q)$ and $\lambda = \lambda(q,p)$ and thus, we are free to choose the initial and terminal values of $q$ and $p$, respectively. For higher index DAEs, one has to ensure that these conditions are compatible with the final constraint manifold. For example, this is done in \cite{CaLiPeSe2003} in the case of Hessenberg index 2 DAEs. Alternatively, at least theoretically, for higher index DAEs, one can reduce the DAE to an index 1 DAE and then the above discussion applies; however, this reduction may fail in practice due to numerical cancellation.
Note that the above adjoint sensitivity result is also a consequence of the preservation of the quadratic invariant $\langle p,v\rangle$ as in Proposition \ref{Quadratic Invariant Adjoint DAE Prop}. From this proposition, one has that
$$ \langle p(t_f), \delta q(t_f) \rangle = \langle p(0), \delta q(0)\rangle, $$
where $\delta q$ satisfies the variational equations. Setting $p(t_f) = \nabla_q C(q(t_f))$ and $\delta q(0) = \delta\alpha$ gives the same result. As mentioned in Remark \ref{PresymplecticityQuadraticInvariantDAERemark}, this quadratic invariant arises from the presymplecticity of the adjoint DAE system. Thus, a numerical integrator which preserves the presymplectic structure is desirable for computing adjoint sensitivities, as it exactly preserves the quadratic invariant that allows the adjoint sensitivities to be accurately and efficiently computed. We will discuss this in more detail in Section \ref{DiscreteAdjointSystemsSection}.
\textbf{Adjoint Sensitivity of a Running Cost.} Again, consider an index 1 DAE $\dot{q} = f(q,u)$, $0 = \phi(q,u)$. We would like to extract the sensitivity of a running cost function
$$ \int_0^{t_f} L(q,u) dt,$$
where $L: M_d \times M_a \rightarrow \mathbb{R}$, with respect to an infinitesimal perturbation in the initial condition $q(0) = \alpha$. Consider the functional $J$ defined by
$$ J = -\langle p_0, q(0)-\alpha\rangle + \int_{0}^{t_f}[L(q,u) + \langle p,f(q,u) - \dot{q}\rangle + \langle \lambda, \phi(q,u)\rangle]dt. $$
Observe that when the DAE is satisfied with initial condition $q(0)=\alpha$, $J = \int_0^{t_f}L\, dt$. Now, we would like to compute the implicit change in $\int_0^{t_f}L\,dt$ with respect to a perturbation $\delta\alpha$ in the initial condition. Taking the variation in $J$ yields
\begin{align*}
\delta J &= -\langle p_0, \delta q(0)-\delta \alpha\rangle\\
&\qquad + \int_0^{t_f} \Big[\langle \nabla_qL, \delta q\rangle + \langle \nabla_uL, \delta u\rangle + \langle p, D_qf \delta q - \frac{d}{dt}\delta q \rangle + \langle p, D_uf \delta u\rangle + \langle \lambda, D_q\phi \delta q + D_u\phi \delta u \rangle \Big]dt \\
&= -\langle p_0, \delta q(0)-\delta \alpha\rangle - \langle p(t_f), \delta q(t_f)\rangle + \langle p(0), \delta q(0)\rangle \\
&\qquad + \int_0^{t_f} \Big[ \langle \nabla_qL + [D_qf]^*p + [D_q\phi]^*\lambda + \dot{p}, \delta q\rangle + \langle \nabla_uL + [D_uf]^*p + [D_u\phi]^*\lambda, \delta u\rangle \Big] dt.
\end{align*}
Restricting to a solution $(q,u,p,\lambda)$ of the augmented adjoint DAE system \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4}, setting the terminal condition $p(t_f) = 0$, and choosing $p_0 = p(0)$ gives $ \delta J = \langle p(0), \delta \alpha\rangle.$ Hence, the implicit sensitivity of $\int_{0}^{t_f} L\, dt$ with respect to a change $\delta\alpha$ in the initial condition is given by
$$ p(0) = \nabla_\alpha J = \nabla_\alpha \int_0^{t_f}L(q,u)dt.$$
Thus, the adjoint sensitivity of a running cost functional with respect to a perturbation in the initial condition can be computed by using the augmented adjoint DAE system \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4} with terminal condition $p(t_f) = 0$ to solve for the momenta $p$ at time 0.
Note that the above adjoint sensitivity result can be obtained from Proposition \ref{Quadratic Invariant Augmented DAE Adjoint Prop} as follows. We write equation \eqref{Quadratic Invariant Augmented DAE Adjoint Eqn} as
$$ \frac{d}{dt} \langle p,\delta q\rangle = -\langle dL, (\delta q, \delta u)\rangle, $$
to highlight that the right hand side measures the total induced variation of $L$. Now, we integrate this equation from $0$ to $t_f$, which gives
$$ \langle p(t_f), \delta q(t_f)\rangle - \langle p(0),\delta q(0)\rangle = - \int_0^{t_f} \langle dL, (\delta q, \delta u)\rangle dt. $$
Since we want to determine the change in the running cost functional with respect to a perturbation in the initial condition, we set $p(t_f) = 0$ which yields
$$ \langle p(0),\delta q(0)\rangle = \int_0^{t_f} \langle dL, (\delta q, \delta u)\rangle dt. $$
The right hand side is the total change induced on the running cost functional, whereas the left hand side tells us how this change is implicitly induced from a perturbation $\delta q(0)$ in the initial condition. Note that a perturbation in the initial condition $\delta q(0)$ will generally induce perturbations in both $q$ and $u$, according to the variational equations. Such a curve $(\delta q, \delta u)$ satisfying the variational equations exists in the index 1 case as noted in Remark \ref{DAEVariationalEqnExistenceRemark}. Thus, we arrive at the same conclusion as the variational argument: $p(0)$ is the desired adjoint sensitivity.
To summarize, adjoint sensitivities for terminal and running costs can be computed using the properties of adjoint systems, such as the various aforementioned propositions regarding $\frac{d}{dt} \langle p, \delta q\rangle$, which is zero in the nonaugmented case and measures the variation of $L$ in the augmented case. In the case of a terminal cost, one sets an inhomogeneous terminal condition $p(t_f) = \nabla_qC(q(t_f))$ and backpropagates the momenta through the nonaugmented adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4} to obtain the sensitivity $p(0)$. On the other hand, in the case of a running cost, one sets a homogeneous terminal condition $p(t_f) = 0$ and backpropagates the momenta through the augmented adjoint DAE system \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4} to obtain the sensitivity $p(0)$.
The various propositions used to derive the above adjoint sensitivity results are summarized below. We also include the ODE case, since it follows similarly.
\renewcommand{\arraystretch}{1.7}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
& Terminal Cost & Running Cost \\
\hline ODE & Proposition \ref{ManifoldQuadraticInvariantProp}, $\frac{d}{dt}\langle p,\delta q\rangle = 0$ & Proposition \ref{AugmentedQuadraticInvariantProp}, $\frac{d}{dt} \langle p,\delta q\rangle = - \langle dL, \delta q\rangle$ \\
DAE & Proposition \ref{Quadratic Invariant Adjoint DAE Prop}, $\frac{d}{dt}\langle p,\delta q\rangle = 0$ & Proposition \ref{Quadratic Invariant Augmented DAE Adjoint Prop}, $\frac{d}{dt}\langle p,\delta q\rangle = - \langle dL, (\delta q,\delta u)\rangle$ \\
\hline
\end{tabular}
\end{center}
In Section \ref{DiscreteAdjointSystemsSection}, we will construct integrators that admit discrete analogues of the above propositions, and hence, are suitable for computing discrete adjoint sensitivities.
\subsection{Structure-Preserving Discretizations of Adjoint Systems}\label{DiscreteAdjointSystemsSection}
In this section, we utilize the Galerkin Hamiltonian variational integrators of \citet{LeZh2011} to construct structure-preserving integrators which admit discrete analogues of Propositions \ref{ManifoldQuadraticInvariantProp}, \ref{AugmentedQuadraticInvariantProp}, \ref{Quadratic Invariant Adjoint DAE Prop}, and \ref{Quadratic Invariant Augmented DAE Adjoint Prop}, and are therefore suitable for numerical adjoint sensitivity analysis. For brevity, the proofs of these discrete analogues can be found in Appendix \ref{QuadraticConservationProofs}.
We start by recalling the construction of Galerkin Hamiltonian variational integrators as introduced in \citet{LeZh2011}. We assume that the base manifold $Q$ is a vector space and thus, we have the identification $T^*Q \cong Q \times Q^*$. To construct a variational integrator for a Hamiltonian system on $T^*Q$, one starts with the exact Type II generating function
$$ H^+_{d,\text{exact}}(q_0,p_1) = \ext\left[\langle p_1,q_1\rangle - \int_0^{\Delta t} [\langle p,\dot{q}\rangle - H(q,p)]dt\right], $$
where one extremizes over $C^2$ curves on the cotangent bundle satisfying $q(0) = q_0, p(\Delta t) = p_1$. This is a Type II generating function in the sense that it defines a symplectic map $(q_0,p_1) \mapsto (q_1, p_0)$ by $q_1 = D_2H^+_{d,\text{exact}}(q_0,p_1)$, $p_0 = D_1H^+_{d,\text{exact}}(q_0,p_1)$.
To approximate this generating function, one approximates the integral above using a quadrature rule and extremizes the resulting expression over a finite-dimensional subspace satisfying the prescribed boundary conditions. This yields the Galerkin discrete Hamiltonian
$$ H_d^+(q_0,p_1) = \ext \left[\langle p_1, q_1\rangle - \Delta t \sum_i b_i \Big( \langle P^i, V^i\rangle - H(Q^i,P^i) \Big) \right], $$
where $\Delta t > 0$ is the timestep, $q_0, q_1, p_0, p_1$ are numerical approximations to $q(0), q(\Delta t), p(0), p(\Delta t)$, respectively, $b_i > 0$ are quadrature weights corresponding to quadrature nodes $c_i \in [0,1]$, $Q^i$ and $P^i$ are internal stages representing $q(c_i\Delta t), p(c_i\Delta t)$, respectively, and $V$ is related to $Q$ by $Q^i = q_0 + \Delta t \sum_j a_{ij}V^j$, where the coefficients $a_{ij}$ arise from the choice of function space. The expression above is extremized over the internal stages $Q^i, P^i$ and subsequently, one applies the discrete right Hamilton's equations
\begin{align*}
q_1 &= D_2H_d^+(q_0,p_1), \\
p_0 &= D_1H_d^+(q_0,p_1),
\end{align*}
to obtain a Galerkin Hamiltonian variational integrator. The extremization conditions and the discrete right Hamilton's equations can be expressed as
\begin{subequations}
\begin{align}
q_1 &= q_0 + \Delta t \sum_i b_i D_pH(Q^i,P^i), \label{GHVI1} \\
Q^i &= q_0 + \Delta t \sum_j a_{ij} D_pH(Q^j,P^j), \label{GHVI2} \\
p_1 &= p_0 - \Delta t \sum_i b_i D_qH(Q^i,P^i), \label{GHVI3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} D_qH(Q^j,P^j), \label{GHVI4}
\end{align}
\end{subequations}
where we interpret $a_{ij}$ as Runge--Kutta coefficients and $\tilde{a}_{ij} = (b_ib_j - b_ja_{ji})/b_i$ as the symplectic adjoint of the $a_{ij}$ coefficients. Thus, \eqref{GHVI1}-\eqref{GHVI4} can be viewed as a symplectic partitioned Runge--Kutta method.
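For a concrete instance of the coefficient relation $\tilde{a}_{ij} = (b_ib_j - b_ja_{ji})/b_i$, the short sketch below uses the 2-stage Gauss--Legendre coefficients (a standard choice, not specific to this paper) and verifies both the symplecticity condition $b_ia_{ij} + b_ja_{ji} = b_ib_j$ and the fact that, for such self-adjoint methods, $\tilde{a}_{ij}$ coincides with $a_{ij}$:

```python
import math

# 2-stage Gauss--Legendre coefficients (a standard symplectic RK scheme,
# used here as a hypothetical concrete instance)
s3 = math.sqrt(3.0)
a = [[0.25, 0.25 - s3 / 6.0],
     [0.25 + s3 / 6.0, 0.25]]
b = [0.5, 0.5]
s = 2

# Gauss methods satisfy the symplecticity condition b_i a_ij + b_j a_ji = b_i b_j
for i in range(s):
    for j in range(s):
        assert abs(b[i] * a[i][j] + b[j] * a[j][i] - b[i] * b[j]) < 1e-15

# symplectic adjoint coefficients a~_ij = (b_i b_j - b_j a_ji) / b_i
a_tilde = [[(b[i] * b[j] - b[j] * a[j][i]) / b[i] for j in range(s)]
           for i in range(s)]

# for a symplectic base method the adjoint coefficients coincide with a itself,
# so the partitioned scheme applies the same Gauss method to both q and p
for i in range(s):
    for j in range(s):
        assert abs(a_tilde[i][j] - a[i][j]) < 1e-15
```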
We will consider such methods in four cases: adjoint systems corresponding to a base ODE or DAE, and whether or not the corresponding system is augmented. Note that in the DAE case, we will have to modify the above construction because the system is presymplectic. Furthermore, we will assume that all of the relevant configuration spaces are vector spaces.
\textbf{Nonaugmented Adjoint ODE System.} The simplest case to consider is the nonaugmented adjoint ODE system \eqref{AdjointODEa}-\eqref{AdjointODEb}. Since the quadratic conservation law in Proposition \ref{ManifoldQuadraticInvariantProp},
$$\frac{d}{dt} \langle p,\delta q\rangle = 0,$$
arises from symplecticity, a structure-preserving discretization can be obtained by applying a symplectic integrator. This case is already discussed in \citet{Sa2016}, so we will only outline it briefly.
Applying the Galerkin Hamiltonian variational integrator \eqref{GHVI1}-\eqref{GHVI4} to the Hamiltonian for the adjoint ODE system, $H(q,p) = \langle p, f(q)\rangle, $
yields
\begin{subequations}
\begin{align}
q_1 &= q_0 + \Delta t \sum_i b_i f(Q^i), \label{SPRKAdjointODE1} \\
Q^i &= q_0 + \Delta t \sum_j a_{ij} f(Q^j), \label{SPRKAdjointODE2} \\
p_1 &= p_0 - \Delta t \sum_i b_i [Df(Q^i)]^*P^i, \label{SPRKAdjointODE3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} [Df(Q^j)]^*P^j. \label{SPRKAdjointODE4}
\end{align}
\end{subequations}
In the setting of adjoint sensitivity analysis of a terminal cost function, the appropriate boundary condition to prescribe on the momenta is $p_1 = \nabla_qC(q(t_f))$, as discussed in Section \ref{AdjointSensitivitySection}.
Since the above integrator is symplectic, we have the symplectic conservation law,
$$ dq_1 \wedge dp_1 = dq_0 \wedge dp_0, $$
when evaluated on discrete first variations of \eqref{SPRKAdjointODE1}-\eqref{SPRKAdjointODE4}. In this setting, a discrete first variation can be identified with solutions of the linearization of \eqref{SPRKAdjointODE1}-\eqref{SPRKAdjointODE4}. For the linearization of the equations in the position variables, \eqref{SPRKAdjointODE1}-\eqref{SPRKAdjointODE2}, we have
\begin{subequations}
\begin{align}
\delta q_1 &= \delta q_0 + \Delta t \sum_i b_i Df(Q^i)\delta Q^i, \label{DiscreteVariationalEquations1} \\
\delta Q^i &= \delta q_0 + \Delta t \sum_j a_{ij} Df(Q^j) \delta Q^j. \label{DiscreteVariationalEquations2}
\end{align}
\end{subequations}
As observed in \citet{Sa2016}, while we obtained this by linearizing the discrete equations, one could also obtain it by first linearizing \eqref{ODEvectorspace} and subsequently applying the Runge--Kutta scheme to the linearization. For the linearization of the equations for the adjoint variables, \eqref{SPRKAdjointODE3}-\eqref{SPRKAdjointODE4}, observe that they are already linear in the adjoint variables, so we can identify the linearization with itself. Thus, for first variations, we choose the vector field $V$ corresponding to a solution of the linearized position equations and the vector field $W$ corresponding to a solution of the adjoint equations themselves. With these choices, the above symplectic conservation law yields
$$ 0 = dq_1 \wedge dp_1(V,W)|_{(q_1,p_1)} - dq_0 \wedge dp_0 (V,W)|_{(q_0,p_0)} = \langle p_1, \delta q_1\rangle - \langle p_0, \delta q_0\rangle. $$
This is of course a discrete analogue of Proposition \ref{ManifoldQuadraticInvariantProp}. Note that one can derive the conservation law $\langle p_1,\delta q_1 \rangle = \langle p_0,\delta q_0\rangle$ directly by starting with the expression $\langle p_1,\delta q_1\rangle$ and substituting the discrete equations where appropriate. We will do this in the more general augmented case below.
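As a numerical sanity check of this discrete conservation law, consider the one-stage case of \eqref{SPRKAdjointODE1}-\eqref{SPRKAdjointODE4} with implicit midpoint coefficients $a_{11} = 1/2$, $b_1 = 1$, $\tilde{a}_{11} = 1/2$, applied to the hypothetical scalar field $f(q) = \sin(q)$; the pairing $\langle p, \delta q\rangle$ is then conserved to roundoff:

```python
import math

# one implicit-midpoint step of the symplectic scheme for the adjoint ODE
# system, on the hypothetical scalar test problem f(q) = sin(q)
f, df = math.sin, math.cos
h = 0.1
q0, p0, dq0 = 0.8, 1.3, 0.5

# internal stage Q = q0 + (h/2) f(Q), solved by fixed-point iteration
Q = q0
for _ in range(100):
    Q = q0 + 0.5 * h * f(Q)
q1 = q0 + h * f(Q)

# adjoint stage P = p0 - (h/2) [Df(Q)]^* P is linear in P; solve directly
P = p0 / (1.0 + 0.5 * h * df(Q))
p1 = p0 - h * df(Q) * P

# discrete variational equation dQ = dq0 + (h/2) Df(Q) dQ
dQ = dq0 / (1.0 - 0.5 * h * df(Q))
dq1 = dq0 + h * df(Q) * dQ

# discrete quadratic invariant <p1, dq1> = <p0, dq0>
assert abs(p1 * dq1 - p0 * dq0) < 1e-12
```

Note that the invariant holds identically in the stage value $Q$, so the fixed-point tolerance does not affect the conservation law.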
\textbf{Augmented Adjoint ODE System.} We now consider the case of the augmented adjoint ODE system \eqref{AugmentedAdjointCoordinate1}-\eqref{AugmentedAdjointCoordinate2}. In the continuous setting, we have from Proposition \ref{AugmentedQuadraticInvariantProp},
$$ \frac{d}{dt}\langle p,\delta q\rangle = -\langle dL, \delta q\rangle. $$
We would like to construct an integrator which admits a discrete analogue of this equation. To do this, we apply the Galerkin Hamiltonian variational integrator, equations \eqref{GHVI1}-\eqref{GHVI4}, to the augmented Hamiltonian $H_L(q,p) = \langle p,f(q)\rangle + L(q)$. This gives
\begin{subequations}
\begin{align}
q_1 &= q_0 + \Delta t \sum_i b_i f(Q^i), \label{SPRKAugmentedAdjointODE1} \\
Q^i &= q_0 + \Delta t \sum_j a_{ij} f(Q^j), \label{SPRKAugmentedAdjointODE2} \\
p_1 &= p_0 - \Delta t \sum_i b_i ([Df(Q^i)]^*P^i + dL(Q^i)) , \label{SPRKAugmentedAdjointODE3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} ([Df(Q^j)]^*P^j + dL(Q^j)). \label{SPRKAugmentedAdjointODE4}
\end{align}
\end{subequations}
We now prove a discrete analogue of Proposition \ref{AugmentedQuadraticInvariantProp}. To do this, we again consider the discrete variational equations for the position variables, \eqref{DiscreteVariationalEquations1}-\eqref{DiscreteVariationalEquations2}.
\begin{prop}\label{DiscreteAugmentedQuadraticInvariantProp}
With the above notation, the above integrator satisfies
\begin{equation}\label{DiscreteAugmentedQuadraticInvariant}
\langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle - \Delta t \sum_i b_i \langle dL(Q^i), \delta Q^i\rangle.
\end{equation}
\begin{proof}
See Appendix \ref{QuadraticConservationProofs}.
\end{proof}
\end{prop}
\begin{remark}
To see that this is a discrete analogue of $\frac{d}{dt} \langle p,\delta q\rangle = -\langle dL,\delta q\rangle$, we write it in integral form as
$$ \langle p_1, \delta q_1\rangle = \langle p_0,\delta q_0\rangle - \int_0^{\Delta t} \langle dL(q),\delta q\rangle dt. $$
Then, applying the quadrature rule on $[0,\Delta t]$ given by quadrature weights $b_i\Delta t$ and quadrature nodes $c_i\Delta t$, the above integral is approximated by
$$ \int_0^{\Delta t} \langle dL(q),\delta q\rangle dt \approx \Delta t\sum_i b_i \langle dL(q(c_i\Delta t)), \delta q(c_i\Delta t) \rangle = \Delta t \sum_i b_i \langle dL(Q^i), \delta Q^i\rangle,$$
which yields equation \eqref{DiscreteAugmentedQuadraticInvariant}. The discrete analogue is natural in the sense that the quadrature rule for which the discrete equation \eqref{DiscreteAugmentedQuadraticInvariant} approximates the continuous equation is the same as the quadrature rule used to approximate the exact discrete generating function. This occurs more generally for such Hamiltonian variational integrators, as noted in \citet{TrLe2022} for the more general setting of multisymplectic Hamiltonian variational integrators.
\end{remark}
For adjoint sensitivity analysis of a running cost $\int L \, dt$, the appropriate boundary condition to prescribe on the momenta is $p_1 = 0$, as discussed in Section \ref{AdjointSensitivitySection}. With such a boundary condition, equation \eqref{DiscreteAugmentedQuadraticInvariant} reduces to
$$ \langle p_0, \delta q_0\rangle = \Delta t\sum_i b_i\langle dL(Q^i), \delta Q^i\rangle. $$
Thus, $p_0$ gives the discrete sensitivity, i.e., the change in the quadrature approximation of $\int L\, dt$ induced by a change in the initial condition along a discrete solution trajectory.
One can compute this quantity via the direct method, where one integrates the discrete variational equations for every desired search direction $\delta q_0$. Alternatively, by the above proposition, one can compute it using the adjoint method: one integrates the adjoint equation with $p_1 = 0$ once to compute $p_0$ and subsequently pairs $p_0$ with any search direction $\delta q_0$ to obtain the sensitivity in that direction. Both methods give the same sensitivities. However, assuming the search space has dimension $n>1$, the adjoint method is more efficient, since it only requires $\mathcal{O}(1)$ integrations and $\mathcal{O}(n)$ vector-vector products, whereas the direct method requires $\mathcal{O}(n)$ integrations and $\mathcal{O}(ns)$ vector-vector products, where $s \geq 1$ is the number of Runge--Kutta stages, since, in the direct method, one has to compute $\langle dL(Q^i), \delta Q^i\rangle$ for each $i$ and for each choice of $\delta q_0$.
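A one-stage (implicit midpoint) instance of equation \eqref{DiscreteAugmentedQuadraticInvariant} can be verified directly; the scalar test problem $f(q) = \sin(q)$, $L(q) = q^2$ below is hypothetical:

```python
import math

# one implicit-midpoint step of the augmented scheme, on the hypothetical
# scalar test problem f(q) = sin(q) with running cost density L(q) = q^2
f, df = math.sin, math.cos
dL = lambda q: 2.0 * q
h = 0.1
q0, p0, dq0 = 0.8, 1.3, 0.5

Q = q0
for _ in range(100):                       # stage Q = q0 + (h/2) f(Q)
    Q = q0 + 0.5 * h * f(Q)
q1 = q0 + h * f(Q)

# P = p0 - (h/2)([Df(Q)]^* P + dL(Q)) is affine in P; solve directly
P = (p0 - 0.5 * h * dL(Q)) / (1.0 + 0.5 * h * df(Q))
p1 = p0 - h * (df(Q) * P + dL(Q))

dQ = dq0 / (1.0 - 0.5 * h * df(Q))         # discrete variational equation
dq1 = dq0 + h * df(Q) * dQ

# discrete analogue: <p1, dq1> = <p0, dq0> - h * b_1 * dL(Q) * dQ, with b_1 = 1
lhs = p1 * dq1
rhs = p0 * dq0 - h * dL(Q) * dQ
assert abs(lhs - rhs) < 1e-12
```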
\textbf{Nonaugmented Adjoint DAE System.}
We will now construct discrete Hamiltonian variational integrators for the adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4}, where we assume that the base DAE has index 1. To construct such a method, we have to modify the Galerkin Hamiltonian variational integrator \eqref{GHVI1}-\eqref{GHVI4}, so that it is applicable to the presymplectic adjoint DAE system.
First, consider a general presymplectic system $i_X\Omega' = dH$. Note that, locally, any presymplectic system can be transformed to the canonical form (see \citet{CaIbGoRo1987}),
\begin{align*}
\dot{q} &= D_pH(q,p,r), \\
\dot{p} &= -D_qH(q,p,r), \\
0 &= D_rH(q,p,r),
\end{align*}
where, in these coordinates, $\Omega' = dq \wedge dp$, so that $\text{ker}(\Omega') = \text{span}\{\partial/\partial r\}.$ The action for this system is given by $\int_0^{\Delta t} (\langle p, \dot{q} \rangle - H(q,p,r) )dt$. We approximate this integral by quadrature, introduce internal stages for $q,p$ as before, and additionally introduce internal stages $R^i$ approximating $r(c_i\Delta t)$. This gives the discrete generating function
$$ H_d^+(q_0,p_1) = \text{ext}\Big[ \langle p_1, q_1\rangle - \Delta t \sum_i b_i \left( \langle P^i,V^i\rangle - H(Q^i,P^i,R^i) \right) \Big], $$
where again $V$ is related to the internal stages of $Q$ by $Q^i = q_0 + \Delta t \sum_j a_{ij}V^j$ and the above expression is extremized over the internal stages $Q^i, P^i, R^i$. The discrete right Hamilton's equations are again given by
$$ q_1 = D_2H_d^+(q_0,p_1),\ p_0 = D_1H_d^+(q_0,p_1), $$
which we interpret as the evolution equations of the system. There are no evolution equations for $r$ due to the presymplectic structure and the absence of derivatives of $r$ in the action. This gives the integrator
\begin{subequations}
\begin{align}
q_1 &= q_0 + \Delta t \sum_i b_i D_pH(Q^i, P^i, R^i), \label{PresymplecticGHVI1}\\
Q^i &= q_0 + \Delta t \sum_j a_{ij} D_pH(Q^j,P^j,R^j), \label{PresymplecticGHVI2} \\
p_1 &= p_0 - \Delta t \sum_i b_i D_qH(Q^i, P^i, R^i), \label{PresymplecticGHVI3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} D_qH(Q^j, P^j, R^j), \label{PresymplecticGHVI4} \\
0 &= D_rH(Q^i,P^i,R^i), \label{PresymplecticGHVI5}
\end{align}
\end{subequations}
where \eqref{PresymplecticGHVI2}, \eqref{PresymplecticGHVI4}, \eqref{PresymplecticGHVI5} arise from extremizing with respect to $P^i, Q^i, R^i$, respectively, while \eqref{PresymplecticGHVI1} and \eqref{PresymplecticGHVI3} arise from the discrete right Hamilton's equations. This integrator is presymplectic, in the sense that
$$ dq_1 \wedge dp_1 = dq_0 \wedge dp_0, $$
when evaluated on discrete first variations. The proof is formally identical to the symplectic case. For this reason, we refer to \eqref{PresymplecticGHVI1}-\eqref{PresymplecticGHVI5} as a presymplectic Galerkin Hamiltonian variational integrator.
\begin{remark}
In general, the system \eqref{PresymplecticGHVI1}-\eqref{PresymplecticGHVI5} evolves on the primary constraint manifold given implicitly by the zero level set of $D_rH$; however, it may not evolve on the final constraint manifold. This is not an issue for us, since we are dealing with adjoint DAE systems for index 1 DAEs, for which we know the primary constraint manifold and the final constraint manifold coincide. For the general case, one may need to additionally differentiate the constraint equation $D_rH = 0$ to obtain hidden constraints.
Thus, the method \eqref{PresymplecticGHVI1}-\eqref{PresymplecticGHVI5} is generally only applicable to index 1 presymplectic systems, unless we add in further hidden constraints. In order for the continuous presymplectic system to have index 1, it is sufficient that the Hessian of $H$ with respect to the algebraic variables, $D_r^2H$, is (pointwise) invertible on the primary constraint manifold. This is the case for the adjoint DAE system corresponding to an index 1 DAE.
\end{remark}
We now specialize to the adjoint DAE system \eqref{DAE Adjoint System Coord 1}-\eqref{DAE Adjoint System Coord 4}, corresponding to an index 1 DAE, which is already in the above canonical form with $r = (u,\lambda)$ and $H(q,u,p,\lambda) = \langle p,f(q,u)\rangle + \langle \lambda, \phi(q,u)\rangle$. Note that we reordered the argument of $H$, $(q,p,r) = (q,p,u,\lambda) \rightarrow (q,u,p,\lambda)$, in order to be consistent with the previous notation used throughout. We label the internal stages for the algebraic variables as $R^i = (U^i, \Lambda^i)$. Applying the presymplectic Galerkin Hamiltonian variational integrator to this particular system yields
\begin{subequations}
\begin{align} \label{SPRKAdjointDAE1}
q_1 &= q_0 + \Delta t \sum_i b_i f(Q^i,U^i), \\
Q^i &= q_0 + \Delta t \sum_j a_{ij} f(Q^j, U^j), \label{SPRKAdjointDAE2}\\
p_1 &= p_0 - \Delta t \sum_i b_i \left( [D_qf(Q^i,U^i)]^*P^i + [D_q\phi(Q^i,U^i)]^*\Lambda^i \right), \label{SPRKAdjointDAE3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} \left( [D_qf(Q^j,U^j)]^*P^j + [D_q\phi(Q^j,U^j)]^*\Lambda^j \right), \label{SPRKAdjointDAE4} \\
0 &= \phi(Q^i,U^i), \label{SPRKAdjointDAE5} \\
0 &= [D_uf(Q^i,U^i)]^*P^i + [D_u\phi(Q^i,U^i)]^*\Lambda^i, \label{SPRKAdjointDAE6}
\end{align}
\end{subequations}
where \eqref{SPRKAdjointDAE2}, \eqref{SPRKAdjointDAE4}, \eqref{SPRKAdjointDAE5}, \eqref{SPRKAdjointDAE6} arise from extremizing over $P^i, Q^i, \Lambda^i, U^i$, respectively, while \eqref{SPRKAdjointDAE1}, \eqref{SPRKAdjointDAE3} arise from the discrete right Hamilton's equations.
\begin{remark}
In order for $q_1$ to appropriately satisfy the constraint, we should take the final quadrature point to be $c_s = 1$ (for an $s$-stage method), so that $\phi(q_1, U^s) = \phi(Q^s,U^s) = 0$. In this case, equation \eqref{SPRKAdjointDAE1} and equation \eqref{SPRKAdjointDAE2} with $i=s$ are redundant. Note that with the choice $c_s=1$, they are still consistent (i.e., are the same equation), since in the Galerkin construction, the coefficients $a_{ij}$ and $b_i$ are defined as
$$ a_{ij} = \int_0^{c_i} \phi_j(\tau)d\tau,\ b_j = \int_0^1 \phi_j(\tau)d\tau, $$
where $\phi_j$ are functions on $[0,1]$ which interpolate the nodes $c_j$ (see \citet{LeZh2011}). Hence, $a_{sj} = b_j$, so that the two equations are consistent. However, we will write the system as above for conceptual clarity. Furthermore, even in the case where one does not take $c_s = 1$, the proposition that we prove below still holds, despite the possibility of constraint violations.
A similar remark holds for the adjoint variable $p$ and the associated constraint \eqref{SPRKAdjointDAE6}, except we think of $p_0$ as the unknown, instead of $p_1$.
\end{remark}
Note that \eqref{SPRKAdjointDAE1}, \eqref{SPRKAdjointDAE2}, \eqref{SPRKAdjointDAE5} is a standard Runge--Kutta discretization of an index 1 DAE $\dot{q} = f(q,u)$, $0 = \phi(q,u)$, where again, usually $c_s = 1$. Associated with these equations are the variational equations given by their linearization,
\begin{subequations}
\begin{align}\label{DiscreteDAEVariationalEquations1}
\delta q_1 &= \delta q_0 + \Delta t \sum_i b_i(D_qf(Q^i,U^i)\delta Q^i + D_uf(Q^i,U^i)\delta U^i), \\
\delta Q^i &= \delta q_0 + \Delta t \sum_j a_{ij}(D_qf(Q^j,U^j)\delta Q^j + D_uf(Q^j,U^j)\delta U^j), \label{DiscreteDAEVariationalEquations2}\\
0 &= D_q\phi(Q^i,U^i)\delta Q^i + D_u\phi(Q^i,U^i) \delta U^i,\label{DiscreteDAEVariationalEquations3}
\end{align}
\end{subequations}
which is the Runge--Kutta discretization of the continuous variational equations \eqref{DAEVariationalEqn3}-\eqref{DAEVariationalEqn4}.
\begin{prop}\label{DiscreteDAEQuadraticInvariantProp}
With the above notation, the above integrator satisfies
$$ \langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle. $$
\begin{proof}
See Appendix \ref{QuadraticConservationProofs}.
\end{proof}
\end{prop}
Thus, the above integrator admits a discrete analogue of Proposition \ref{Quadratic Invariant Adjoint DAE Prop} for the nonaugmented adjoint DAE system. By setting $p_1 = \nabla_q C(q(t_f))$, one can use this integrator to compute the sensitivity $p_0$ of a terminal cost function with respect to a perturbation in the initial condition. As discussed before, this only requires $\mathcal{O}(1)$ integrations instead of $\mathcal{O}(n)$ integrations via the direct method (for a dimension $n$ search space). Furthermore, the adjoint method requires only $\mathcal{O}(1)$ numerical solves of the constraints, while the direct method requires $\mathcal{O}(n)$ numerical solves.
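The discrete DAE invariant can likewise be checked on a toy semi-explicit index 1 DAE. In the sketch below, we take the hypothetical system $\dot{q} = u$, $0 = u - g(q)$ with $g(q) = \cos(q)$ and a one-stage (implicit midpoint) discretization; the constraint \eqref{SPRKAdjointDAE6} reduces to $P^1 + \Lambda^1 = 0$, so the algebraic adjoint stage can be eliminated by hand:

```python
import math

# hypothetical index 1 DAE: dq/dt = f(q,u) = u, constraint 0 = phi(q,u) = u - g(q),
# with g(q) = cos(q); one implicit-midpoint stage (a_11 = 1/2, b_1 = 1)
g, dg = math.cos, lambda q: -math.sin(q)
h = 0.1
q0, p0, dq0 = 0.8, 1.3, 0.5

# forward stage: Q = q0 + (h/2) U with 0 = U - g(Q), by fixed-point iteration
Q = q0
for _ in range(100):
    Q = q0 + 0.5 * h * g(Q)
U = g(Q)
q1 = q0 + h * U

# adjoint stage: the constraint [D_u f]^* P + [D_u phi]^* Lam = P + Lam = 0
# gives Lam = -P, and the P equation becomes linear in P
P = p0 / (1.0 + 0.5 * h * dg(Q))
Lam = -P
p1 = p0 - h * (-dg(Q)) * Lam          # p1 = p0 - h b_1 [D_q phi]^* Lam, D_q f = 0

# variational equations: dU = dg(Q) dQ from the linearized constraint
dQ = dq0 / (1.0 - 0.5 * h * dg(Q))
dU = dg(Q) * dQ
dq1 = dq0 + h * dU

# discrete quadratic invariant <p1, dq1> = <p0, dq0>
assert abs(p1 * dq1 - p0 * dq0) < 1e-12
```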
\begin{remark}
Since we are assuming the DAE has index 1, it is always possible to prescribe an arbitrary initial condition $q_0$ (and $\delta q_0$) and terminal condition $p_1$, since the corresponding algebraic variables can always formally be solved for using the corresponding constraints. In practice, one generally has to solve the constraints to some tolerance, e.g., through an iterative scheme. If the constraints are only satisfied to a tolerance $\mathcal{O}(\epsilon)$, then the above proposition holds to $\mathcal{O}(s\epsilon)$, where $s$ is the number of Runge--Kutta stages.
\end{remark}
\begin{remark}
The above method \eqref{SPRKAdjointDAE1}-\eqref{SPRKAdjointDAE6} is presymplectic, since it is a special case of the more general presymplectic Galerkin Hamiltonian variational integrator \eqref{PresymplecticGHVI1}-\eqref{PresymplecticGHVI5}. Although we proved it directly, the above proposition could also have been proven from presymplecticity, with the appropriate choices of first variations.
\end{remark}
\textbf{Augmented Adjoint DAE System.} Finally, we construct a discrete Hamiltonian variational integrator for the augmented adjoint DAE system \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4} associated with an index 1 DAE. To do this, we apply the presymplectic Galerkin Hamiltonian variational integrator \eqref{PresymplecticGHVI1}-\eqref{PresymplecticGHVI5} with $r = (u,\lambda)$ and with Hamiltonian given by the augmented adjoint DAE Hamiltonian,
$$ H_L(q,u,p,\lambda) = \langle p,f(q,u)\rangle + \langle \lambda,\phi(q,u)\rangle + L(q,u). $$
The presymplectic integrator is then
\begin{subequations}
\begin{align} \label{SPRKAugmentAdjointDAE1}
q_1 &= q_0 + \Delta t \sum_i b_i f(Q^i,U^i), \\
Q^i &= q_0 + \Delta t \sum_j a_{ij} f(Q^j, U^j), \label{SPRKAugmentAdjointDAE2}\\
p_1 &= p_0 - \Delta t \sum_i b_i \left( [D_qf(Q^i,U^i)]^*P^i + [D_q\phi(Q^i,U^i)]^*\Lambda^i + D_qL(Q^i,U^i) \right), \label{SPRKAugmentAdjointDAE3} \\
P^i &= p_0 - \Delta t \sum_j \tilde{a}_{ij} \left( [D_qf(Q^j,U^j)]^*P^j + [D_q\phi(Q^j,U^j)]^*\Lambda^j + D_qL(Q^j,U^j) \right), \label{SPRKAugmentAdjointDAE4} \\
0 &= \phi(Q^i,U^i), \label{SPRKAugmentAdjointDAE5} \\
0 &= [D_uf(Q^i,U^i)]^*P^i + [D_u\phi(Q^i,U^i)]^*\Lambda^i + D_uL(Q^i,U^i). \label{SPRKAugmentAdjointDAE6}
\end{align}
\end{subequations}
The associated variational equations are again \eqref{DiscreteDAEVariationalEquations1}-\eqref{DiscreteDAEVariationalEquations3}.
Remarks analogous to those in the nonaugmented case, regarding setting the quadrature node $c_s=1$ and the solvability of these systems under the index 1 assumption, apply here as well.
\begin{prop}\label{DiscreteAugmentedAdjointDAEQuadraticProp}
With the above notation, the integrator \eqref{SPRKAugmentAdjointDAE1}-\eqref{SPRKAugmentAdjointDAE6} satisfies
$$ \langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle - \Delta t \sum_i b_i \langle dL(Q^i,U^i), (\delta Q^i, \delta U^i)\rangle. $$
\begin{proof}
See Appendix \ref{QuadraticConservationProofs}.
\end{proof}
\end{prop}
\begin{remark}
Analogous to the remark in the augmented adjoint ODE case, the above proposition is a discrete analogue of Proposition \ref{Quadratic Invariant Augmented DAE Adjoint Prop}, in integral form,
$$ \langle p_1, \delta q_1\rangle - \langle p_0,\delta q_0\rangle = - \int_0^{\Delta t}\langle dL(q,u), (\delta q, \delta u)\rangle dt. $$
The discrete analogue is natural in the sense that it is just quadrature applied to the right hand side of this equation, with the same quadrature rule used to discretize the generating function.
\end{remark}
\begin{remark}
As with the augmented adjoint ODE case, the above proposition allows one to compute numerical sensitivities of a running cost function by solving for $p_0$ with $p_1 = 0$, which is more efficient than the direct method.
\end{remark}
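The remark above can be checked numerically. The sketch below is purely illustrative: the scalar index 1 DAE $\dot q = -q + u$ with constraint $0 = u - \sin q$, the running cost $L(q,u) = \frac{1}{2}(q^2+u^2)$, and the one-stage implicit Euler tableau ($a_{11} = b_1 = 1$, $c_1 = 1$, for which the symplectic adjoint coefficient is $\tilde a_{11} = b_1 - a_{11}b_1/b_1 = 0$) are all choices of ours, not taken from the text. Sweeping the adjoint scheme backward from $p_N = 0$ then reproduces the gradient of the discrete running cost with respect to $q_0$, as asserted in the remark.

```python
import math

def forward(q0, dt, n_steps):
    """One-stage implicit Euler on  q' = f(q,u) = -q + u,  0 = phi(q,u) = u - sin(q).
    Since b1 = a11, the stage value Q equals q_{n+1}. Returns the stage list and
    the discrete running cost  C = dt * sum_n L(Q_n, U_n)  with L = (q^2 + u^2)/2."""
    q, stages, cost = q0, [], 0.0
    for _ in range(n_steps):
        Q = q
        for _ in range(60):                       # fixed-point solve of Q = q + dt*f(Q, sin Q)
            Q = (q + dt * math.sin(Q)) / (1.0 + dt)
        U = math.sin(Q)                           # algebraic variable from the constraint
        stages.append((Q, U))
        cost += dt * 0.5 * (Q * Q + U * U)
        q = Q                                     # b1 = a11  =>  q_{n+1} = Q
    return stages, cost

def adjoint_sweep(stages, dt):
    """Backward sweep with terminal condition p_N = 0. With atilde_11 = 0 the
    stage momentum is P = p_n; the stage constraint
        0 = [D_u f]^* P + [D_u phi]^* Lam + D_u L = P + Lam + U
    gives Lam = -(P + U), and the momentum update
        p_{n+1} = p_n - dt*([D_q f]^* P + [D_q phi]^* Lam + D_q L)
                = p_n - dt*(-p_n + cos(Q)*(p_n + U) + Q)
    is solved (linearly) for p_n given p_{n+1}."""
    p = 0.0
    for Q, U in reversed(stages):
        p = (p + dt * (math.cos(Q) * U + Q)) / (1.0 + dt * (1.0 - math.cos(Q)))
    return p

q0, dt, n = 1.0, 0.01, 200
stages, C = forward(q0, dt, n)
p0 = adjoint_sweep(stages, dt)

eps = 1e-6
fd = (forward(q0 + eps, dt, n)[1] - forward(q0 - eps, dt, n)[1]) / (2.0 * eps)
print(p0, fd)   # p0 matches the finite-difference gradient of the discrete cost
```

Because the backward sweep is the exact discrete adjoint of the forward scheme, the agreement is limited only by the finite-difference truncation and roundoff, not by the step size $\Delta t$.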
To summarize, we have utilized Galerkin Hamiltonian variational integrators to construct methods which admit natural discrete analogues of the various propositions used for sensitivity analysis. The results are collected in the table below.
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
& Terminal Cost & Running Cost \\
\hline
ODE & $\langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle$ & $\langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle - \Delta t \sum_i b_i \langle dL(Q^i), \delta Q^i\rangle$ \\
DAE & $\langle p_1, \delta q_1\rangle = \langle p_0, \delta q_0\rangle$ & $\langle p_1,\delta q_1\rangle = \langle p_0, \delta q_0\rangle - \Delta t\sum_i b_i \langle dL(Q^i,U^i), (\delta Q^i,\delta U^i)\rangle$ \\
\hline
\end{tabular}
\end{center}
\subsubsection{Naturality of the Adjoint DAE System Discretization}\label{NaturalityDiscretizationSection}
To conclude our discussion of discretizing adjoint systems, we prove a discrete extension of the fact that, for an index 1 DAE, the process of index reduction and forming the adjoint system commute, as discussed in Section \ref{DAEIndexPCASection}. Namely, we will show that, starting from an index 1 DAE \eqref{DAEa}-\eqref{DAEb}, the processes of reduction, forming the adjoint system, and discretization all commute, for particular choices of these processes which we will define and choose below. This can be summarized in the following commutative diagram.
\[\small\begin{tikzcd}[column sep=-4ex,row sep=10ex]
\txt{Index 1 DAE} && \txt{ODE} \\
& \txt{Discrete DAE} && \txt{Discrete ODE} \\
\txt{Presymplectic Adjoint\\ DAE System} && \txt{Symplectic Adjoint\\ ODE System} \\
& \txt{Presymplectic Galerkin\\ Hamiltonian Variational \\ Integrator} && \txt{Symplectic Galerkin \\ Hamiltonian Variational \\ Integrator}
\arrow["{\text{Reduce}}", from=1-1, to=1-3]
\arrow["{\text{Reduce}}"{pos=0.25}, from=3-1, to=3-3]
\arrow["{\text{Adjoint}}"'{pos=0.6}, from=1-1, to=3-1]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=1-3, to=3-3]
\arrow["{\text{Reduce}}"{pos=0.3}, from=2-2, to=2-4,crossing over]
\arrow["{\text{Reduce}}"', from=4-2, to=4-4]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=2-2, to=4-2,crossing over]
\arrow["{\text{Adjoint}}"{pos=0.6}, from=2-4, to=4-4]
\arrow["{\text{Discretize}}"', from=1-1, to=2-2]
\arrow["{\text{Discretize}}"{pos=0.6}, from=1-3, to=2-4]
\arrow["{\text{Discretize}}"', from=3-1, to=4-2]
\arrow["{\text{Discretize}}"', from=3-3, to=4-4]
\end{tikzcd}\]
In the above diagram, we will use the convention that the ``Discretize" arrows point forward, the ``Adjoint" arrows point downward, and the ``Reduce" arrows point to the right. For the ``Discretize" arrows on the top face, we take the discretization to be a Runge--Kutta discretization (of a DAE on the left and of an ODE on the right, with the same Runge--Kutta coefficients in both cases). For the ``Discretize" arrows on the bottom face, we take the discretization to be the symplectic partitioned Runge--Kutta discretization induced by the discretization of the base DAE or ODE, i.e., the momenta expansion coefficients $\tilde{a}_{ij}$ are the symplectic adjoint of the coefficients $a_{ij}$ used on the top face. We have already defined the ``Adjoint" arrows on the back face, as discussed in Section \ref{AdjointSystems Main Section}. For the ``Adjoint" arrows on the front face, we define them as forming the discrete adjoint system corresponding to a discrete (and generally nonlinear) system of equations and we will review this notion where needed in the proof. We have already defined the ``Reduce" arrows on the back face, as discussed in Section \ref{DAEIndexPCASection}. For the ``Reduce" arrows on the front face, we define this as solving for the discrete algebraic variables in terms of the discrete kinematic variables through the discrete constraint equations. With these choices, the above diagram commutes, as we will show. To prove this, it suffices to prove that the diagram on each of the six faces commutes. To keep the exposition concise, we provide the proof in Appendix \ref{AppendixNaturalityProof} and move on to discuss the implications of this result.
The previous discussion shows that the presymplectic Galerkin Hamiltonian variational integrator construction is natural for discretizing adjoint (index 1) DAE systems, in the sense that the integrator is equivalent to the integrator produced from applying a symplectic Galerkin Hamiltonian variational integrator to the underlying nondegenerate Hamiltonian system. Of course, in practice, one cannot generally determine the function $u = u(q)$ needed to reduce the DAE to an ODE. Therefore, one generally works with the presymplectic Galerkin Hamiltonian variational integrator instead, where one iteratively solves the constraint equations. However, although reduction then symplectic integration is often impractical, one can utilize this naturality to derive properties of the presymplectic integrator. For example, we will use this naturality to prove a variational error analysis result.
The basic idea for the variational error analysis result goes as follows: one utilizes the naturality to relate the presymplectic variational integrator to a symplectic variational integrator of the underlying nondegenerate Hamiltonian system and subsequently applies the variational error analysis result in the symplectic case (\citet{ScLe2017}). Recall the discrete generating function for the previously constructed presymplectic variational integrator,
$$ H_d^+(q_0,p_1; \Delta t) = \text{ext}\Big[ \langle p_1, q_1\rangle - \Delta t \sum_i b_i \left( \langle P^i,V^i\rangle - H(Q^i,U^i,P^i,\Lambda^i) \right) \Big], $$
where we have now explicitly included the timestep dependence in $H_d^+$ and $H$ is the Hamiltonian for the adjoint DAE system (augmented or nonaugmented), corresponding to an index 1 DAE.
\begin{prop}
Suppose the discrete generating function $H_d^+(q_0,p_1; \Delta t)$ for the presymplectic variational integrator approximates the exact discrete generating function $H_d^{+,E}(q_0,p_1; \Delta t)$ to order $r$, i.e.,
$$ H_d^+(q_0,p_1; \Delta t) = H_d^{+,E}(q_0,p_1;\Delta t) + \mathcal{O}(\Delta t^{r+1}), $$
and that the Hamiltonian $H$ is continuously differentiable. Then, the Type II map $(q_0,p_1) \mapsto (q_1, p_0)$ and the evolution map $(q_0,p_0) \mapsto (q_1, p_1)$ are order-$r$ accurate.
\begin{proof}
The proof follows from two simple steps. First, observe that the discrete generating function $H_d^+(q_0,p_1; \Delta t)$ for the presymplectic integrator is also the discrete generating function for the symplectic integrator for the underlying nondegenerate Hamiltonian system. This follows since in the definition of $H_d^+$, one extremizes over the algebraic variables $U^i,\Lambda^i$, which enforces the constraints and hence, determines $U^i,\Lambda^i$ as functions of the kinematic variables $Q^i,P^i$. Thus, the discrete (or continuous) Type II map determined by $H_d^+$ (or $H_d^{+,E}$, respectively), $(q_0,p_1) \mapsto (q_1,p_0)$, is the same as the Type II map for the underlying nondegenerate Hamiltonian system, which is just another consequence of the aforementioned naturality. Second, one applies the variational error analysis result in \citet{ScLe2017}.
\end{proof}
\end{prop}
\begin{remark}
Another way to view this result is that the order of an implicit (partitioned) Runge--Kutta scheme for index 1 DAEs is the same as the order of an implicit (partitioned) Runge--Kutta scheme for ODEs (\citet{Ro1989}), since the aforementioned discretization generates a partitioned Runge--Kutta scheme. To be complete, we should determine the order for the full presymplectic flow, i.e., including also the algebraic variables. As discussed in \citet{Ro1989}, as long as $a_{si} = b_i$ for each $i$, which, as we have discussed, is a natural choice and holds as long as $c_s=1$, there is no order reduction arising from the algebraic variables. Thus, with this assumption, the presymplectic variational integrator in the previous proposition approximates the presymplectic flow, in both the kinematic and algebraic variables, to order $r$.
\end{remark}
\begin{remark}
In the above proposition, we considered both the Type II map $(q_0, p_1) \mapsto (q_1, p_0)$ and the evolution map $(q_0,p_0) \mapsto (q_1,p_1)$. The latter is of course the traditional way to view the map corresponding to a numerical method, but the former is the form of the map used in adjoint sensitivity analysis.
\end{remark}
Furthermore, in light of this naturality, we can view Propositions \ref{DiscreteDAEQuadraticInvariantProp} and \ref{DiscreteAugmentedAdjointDAEQuadraticProp} as following from the analogous propositions for symplectic Galerkin Hamiltonian variational integrators, applied to the underlying nondegenerate Hamiltonian system.
\subsection{Optimal Control of DAE Systems}\label{OCPSection}
In this section, we derive the optimality conditions for an optimal control problem (OCP) subject to a semi-explicit DAE constraint. It is known that the optimality conditions can be described as a presymplectic system on the generalized phase space bundle (\citet{DeIb2003}, \citet{EcMaMuRo2003}). We will subsequently consider a variational discretization of such OCPs and discuss the naturality of such discretizations.
Consider the following optimal control problem in Bolza form, subject to a DAE constraint, which we refer to as (OCP-DAE),
\begin{align*}
&\text{min } C(q(t_f)) + \int_0^{t_f}L(q,u)dt \\
&\text{subject to} \\
& \qquad \dot{q} = f(q,u), \\
& \qquad 0 = \phi(q,u), \\
& \qquad q_0 = q(0), \\
& \qquad 0 = \phi_f(q(t_f)),
\end{align*}
where the DAE system $\dot{q} = f(q,u)$, $0 = \phi(q,u)$ is over $M_d \times M_a$ as described in Section \ref{AdjointDAESection}, $C: M_d \rightarrow \mathbb{R}$ is the terminal cost, $L: M_d \times M_a \rightarrow \mathbb{R}$ is the running cost, the initial condition $q(0) = q_0$ is prescribed, and for generality, a terminal constraint $\phi_f(q(t_f)) = 0$ is also imposed, where $\phi_f$ is a map from $M_d$ into some vector space $V$.
We assume a local optimum of (OCP-DAE) and denote the cost functional by $J = C(q(t_f)) + \int_0^{t_f}L(q,u)\,dt$. We then adjoin the constraints to $J$ using adjoint variables, which gives the adjoined functional
$$ \mathcal{J} = C(q(t_f)) + \langle \lambda_f, \phi_f(q(t_f))\rangle + \int_{0}^{t_f} \left[ L(q,u) + \langle p, f(q,u) - \dot{q}\rangle + \langle \lambda, \phi(q,u)\rangle \right]dt. $$
The optimality conditions are given by the condition that $\mathcal{J}$ is stationary about the local optimum, $\delta \mathcal{J} = 0$ (\citet{Bi2010}). For simplicity in the notation, we will use matrix derivatives instead of indices. Note also that we will implicitly leave out the variation of the adjoint variables, since those terms pair with the DAE constraints, which vanish at the local optimum. The optimality condition $\delta \mathcal{J} = 0$ is then
\begin{align*}
0 &= \delta \mathcal{J} = \langle \nabla_q C(q(t_f)), \delta q(t_f) \rangle + \langle \lambda_f, D_q\phi_f(q(t_f)) \delta q(t_f) \rangle \\
&\qquad \qquad \quad + \int_0^{t_f} \Big[ \langle \nabla_q L(q,u), \delta q\rangle + \langle \nabla_u L(q,u), \delta u \rangle + \langle p, D_qf(q,u) \delta q \rangle + \langle p, D_uf(q,u)\delta u\rangle \\
& \qquad \qquad \qquad \qquad - \langle p, \frac{d}{dt} \delta q\rangle + \langle \lambda, D_q\phi(q,u)\delta q \rangle + \langle \lambda, D_u\phi(q,u)\delta u \rangle \Big]dt \\
&= \langle \nabla_q C(q(t_f)) + [D_q\phi_f(q(t_f))]^* \lambda_f - p(t_f), \delta q(t_f)\rangle \\
& \qquad + \int_0^{t_f} \Big[ \langle \nabla_qL(q,u) + [D_qf(q,u)]^*p + \dot{p} + [D_q\phi(q,u)]^*\lambda, \delta q \rangle \\
& \qquad \quad\quad \quad + \langle \nabla_u L(q,u) + [D_uf(q,u)]^*p + [D_u\phi(q,u)]^*\lambda, \delta u\rangle \Big] dt,
\end{align*}
where we integrated by parts on the term $\langle p, \frac{d}{dt} \delta q\rangle$ and used $\delta q(0) = 0$ since the initial condition is fixed. Enforcing stationarity for all such variations gives the optimality conditions,
\begin{subequations}
\begin{align}
\dot{q} &= f(q,u), \label{OCPDAEoptimality1} \\
\dot{p} &= -[D_qf(q,u)]^*p - [D_q\phi(q,u)]^*\lambda - \nabla_qL(q,u), \label{OCPDAEoptimality2} \\
0 &= \phi(q,u), \label{OCPDAEoptimality3} \\
0 &= \nabla_u L(q,u) + [D_uf(q,u)]^*p + [D_u\phi(q,u)]^*\lambda, \label{OCPDAEoptimality4} \\
0 &= \phi_f(q(t_f)), \label{OCPDAEoptimality5} \\
p(t_f) &= \nabla_q C(q(t_f)) + [D_q\phi_f(q(t_f))]^* \lambda_f. \label{OCPDAEoptimality6}
\end{align}
\end{subequations}
The first four optimality conditions \eqref{OCPDAEoptimality1}-\eqref{OCPDAEoptimality4} are precisely the augmented adjoint DAE equations, \eqref{Augmented Adjoint DAE System Coord 1}-\eqref{Augmented Adjoint DAE System Coord 4}. The last two optimality conditions \eqref{OCPDAEoptimality5}, \eqref{OCPDAEoptimality6} are the terminal constraint and the associated transversality condition, respectively. Note that these conditions are only sufficient for a trajectory $(q,u,p,\lambda)$ to be an extremum of the optimal control problem; whether or not the trajectory is optimal depends on the properties of the DAE constraint and cost function, e.g., convexity of $L$.
\textbf{Regular Index 1 Optimal Control.} In the literature, the problem (OCP-DAE) is usually formulated by making a distinction between algebraic variables and control variables, $(q,y,u)$, instead of $(q,u)$ (see, for example, \citet{Bi2010} and \citet{AgCaFo2021}). This does not change any of the previous discussion of the optimality conditions, except that \eqref{OCPDAEoptimality4} splits into two equations for $y$ and $u$. That is, the distinction is not formally important for the previous discussion. It is of course important when actually solving such an optimal control problem. For example, the constraint function $\phi(q,y,u)$ may have a singular matrix derivative with respect to $(y,u)$ but may have a nonsingular matrix derivative with respect to $y$. In such a case, one interprets $y$ as the algebraic variable, in that it can locally be solved in terms of $(q,u)$ via the constraint, and the control variable $u$ as ``free" to optimize over. We now briefly elaborate on this case.
We take the configuration manifold for the algebraic variables to be $M_a = Y_a \times U \ni (y,u)$, where $y$ is interpreted as the algebraic constraint variable and $u$ is interpreted as the control variable. We will assume that the control space $U$ is compact. The constraint has the form $\phi(q,y,u) = 0$, and we assume that $\partial\phi/\partial y$ is pointwise invertible. We consider the following optimal control problem,
\begin{align*}
&\text{min } \int_0^{t_f}L(q,y,u)dt \\
&\text{subject to} \\
& \qquad \dot{q} = f(q,y,u), \\
& \qquad 0 = \phi(q,y,u), \\
& \qquad q_0 = q(0).
\end{align*}
We perform an analogous argument to before, except that, in this case, since $U$ may have a boundary, the optimality for the control variable $u$ will either require $u$ to lie on $\partial U$ or will require the stationarity of the adjoined functional with respect to variations in $u$. In any case, the necessary conditions for optimality can be expressed as
\begin{subequations}
\begin{align}
\dot{q} &= f(q,y,u), \label{SecondOCPDAEoptimality1} \\
\dot{p} &= -[D_qf(q,y,u)]^*p - [D_q\phi(q,y,u)]^*\lambda - \nabla_qL(q,y,u), \label{SecondOCPDAEoptimality2} \\
0 &= \phi(q,y,u), \label{SecondOCPDAEoptimality3} \\
0 &= \nabla_y L(q,y,u) + [D_yf(q,y,u)]^*p + [D_y\phi(q,y,u)]^*\lambda, \label{SecondOCPDAEoptimality4} \\
u &= \argmin_{u' \in U} H_L(q,y,u'), \label{SecondOCPDAEoptimality5} \\
0 &= p(t_f), \label{SecondOCPDAEoptimality6}
\end{align}
\end{subequations}
where $H_L$ is the augmented Hamiltonian $H_L(q,y,u) = L(q,y,u) + \langle p,f(q,y,u)\rangle + \langle \lambda,\phi(q,y,u)\rangle$. Assuming that $u$ lies in the interior of $U$, \eqref{SecondOCPDAEoptimality5} can be expressed as
$$ 0 = \nabla_u L(q,y,u) + [D_uf(q,y,u)]^*p + [D_u\phi(q,y,u)]^*\lambda, $$
or $D_u H_L(q,y,u) = 0.$ We say that an optimal control problem with a DAE constraint forms a regular index 1 system if both $\partial\phi/\partial y$ and the Hessian $D_u^2 H_L$ are pointwise invertible. In this case, whenever $u$ lies in the interior of $U$, $(y,u,\lambda)$ can be locally solved as functions of $(q,p)$. Thus, in principle, the resulting Hamiltonian ODE for $(q,p)$ can be integrated to yield extremal trajectories for the optimal control problem. As mentioned before, without additional assumptions on the DAE and cost function, such a trajectory will only generally be an extremum but not necessarily optimal.
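To make the elimination concrete, consider a minimal regular index 1 example; the choices of $f$, $\phi$, and $L$ below are purely illustrative and not taken from the text. With $f(q,y,u) = y$, $\phi(q,y,u) = y - q - u$, and $L(q,y,u) = \tfrac12(q^2 + u^2)$, the augmented Hamiltonian is $H_L = \tfrac12(q^2+u^2) + p\,y + \lambda(y - q - u)$, and the stationarity and constraint equations can be solved explicitly:

```latex
\begin{align*}
0 &= D_y H_L = p + \lambda  &&\Longrightarrow\quad \lambda = -p, \\
0 &= D_u H_L = u - \lambda  &&\Longrightarrow\quad u = -p, \qquad D_u^2 H_L = 1, \\
0 &= \phi(q,y,u)            &&\Longrightarrow\quad y = q + u = q - p, \\
\dot{q} &= f(q,y,u) = y = q - p, \\
\dot{p} &= -[D_qf]^*p - [D_q\phi]^*\lambda - \nabla_qL = -p - q.
\end{align*}
```

Here both $\partial\phi/\partial y = 1$ and $D_u^2 H_L = 1$ are invertible, so $(y,u,\lambda)$ are eliminated globally, leaving a linear Hamiltonian ODE for $(q,p)$; in general, of course, this elimination is only local and implicit.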
Of course, in practice, one cannot generally analytically integrate the resulting ODE nor determine the functions which give $(y,u,\lambda)$ in terms of $(q,p)$. Thus, the only practical option is to discretize the presymplectic system above to compute approximate extremal trajectories. To integrate such a presymplectic system, one can again use the presymplectic Galerkin Hamiltonian variational integrator construction discussed in Section \ref{DiscreteAdjointSystemsSection}. Such an integrator would be natural in the following sense. First, as discussed in Section \ref{DiscreteAdjointSystemsSection}, a presymplectic Galerkin Hamiltonian variational integrator applied to the augmented adjoint DAE system is equivalent to applying a symplectic Galerkin Hamiltonian variational integrator to the underlying Hamiltonian ODE, with the same Runge--Kutta expansions for $q_1, Q^i$ in both methods. Furthermore, as shown in \citet{Sa2016}, utilizing a symplectic integrator to discretize the extremality conditions is equivalent to first discretizing the ODE constraint by a Runge--Kutta method and then enforcing the associated discrete extremality conditions. This also holds in the DAE case.
More precisely, beginning with a regular index 1 optimal control problem, the processes of reduction, extremization, and discretization commute, for suitable choices of these processes, analogous to those used in the naturality result discussed in Section \ref{NaturalityDiscretizationSection}. The proof is similar to the naturality result discussed in Section \ref{NaturalityDiscretizationSection}, where the arrow given by forming the adjoint is replaced by extremization. In essence, these are the same, since the extremization condition is given by the adjoint system, so we will just elaborate briefly. We already know how to extremize the continuous optimal control problem, with either a DAE constraint or an ODE constraint after reduction, which results in an adjoint system. We also already know how to discretize the resulting adjoint system after discretization, using a (pre)symplectic partitioned Runge--Kutta method. Furthermore, at any step, reduction is just defined to be solving the continuous or discrete constraints for $y$ in terms of $(q,u)$. Thus, the only major difference compared to the previous naturality result is defining the discretization of the optimal control problem and subsequently, how to extremize the discrete optimal control problem. For the regular index 1 optimal control problem,
\begin{align*}
&\text{min } \int_0^{t_f}L(q,y,u)dt \\
&\text{subject to} \\
& \qquad \dot{q} = f(q,y,u), \\
& \qquad 0 = \phi(q,y,u), \\
& \qquad q_0 = q(0),
\end{align*}
its discretization is obtained by replacing the constraints with a Runge--Kutta discretization and replacing the cost function with its quadrature approximation, using the same quadrature weights as those in the Runge--Kutta discretization. This can be written as
\begin{align*}
&\text{min } \Delta t \sum_i b_i L(Q^i,Y^i,U^i) \\
&\text{subject to} \\
& \qquad V^i = f(Q^i,Y^i,U^i), \\
& \qquad 0 = \phi(Q^i,Y^i,U^i),
\end{align*}
where $Q^i = q_0 + \Delta t\sum_j a_{ij}V^j$, which implicitly encodes $q(0)=q_0$. One can then extremize this discrete system, which is given by the discrete Euler--Lagrange equations for the discrete action
$$ \mathbb{S} = \Delta t \sum_i b_i \Big( \langle P^i,V^i-f(Q^i,Y^i,U^i) \rangle - \langle \Lambda^i,\phi(Q^i,Y^i,U^i)\rangle - L(Q^i,Y^i,U^i) \Big). $$
That is, we enforce the discrete constraints by adding to the discrete Lagrangian the appropriate Lagrange multiplier terms paired with the constraints, where we weighted the Lagrange multipliers $P^i,\Lambda^i$ by $\Delta t b_i$ purely as a convention, in order to interpret them as the appropriate variables, as discussed in Appendix \ref{AppendixNaturalityProof}. Enforcing extremality of this action recovers a partitioned Runge--Kutta method applied to the adjoint system corresponding to extremizing the continuous optimal control problem, as discussed in Appendix \ref{AppendixNaturalityProof}, where the Runge--Kutta coefficients for the momenta are the symplectic adjoint of the original Runge--Kutta coefficients. Alternatively, starting from the original continuous optimal control problem, one could first reduce the DAE constraint to an ODE constraint using the invertibility of $D_y\phi$ to give
\begin{align*}
&\text{min } \int_0^{t_f}L(q,y(q,u),u)dt \\
&\text{subject to} \\
& \qquad \dot{q} = f(q,y(q,u),u), \\
& \qquad q_0 = q(0).
\end{align*}
One can then discretize this using the same Runge--Kutta method as before, where the cost function is replaced with a quadrature approximation, and then extremize using Lagrange multipliers. Alternatively, one can extremize the continuous problem to yield an adjoint system and then apply a partitioned Runge--Kutta method to that system, where the momenta Runge--Kutta coefficients are again the symplectic adjoint of the original Runge--Kutta coefficients. Having defined all of these processes, a direct computation yields that all of the processes commute, analogous to the computation in Appendix \ref{AppendixNaturalityProof}.
\section{Conclusion and Future Research Directions}
In this paper, we utilized symplectic and presymplectic geometry to study the properties of adjoint systems associated with ODEs and DAEs, respectively. The (pre)symplectic structure of these adjoint systems led us to a geometric characterization of the adjoint variational quadratic conservation law used in adjoint sensitivity analysis. As an application of this geometric characterization, we constructed structure-preserving discretizations of adjoint systems by utilizing (pre)symplectic integrators, which led to natural discrete analogues of the quadratic conservation laws.
A natural research direction is to extend the current framework to adjoint systems for differential equations with nonholonomic constraints, in order to more generally allow for constraints between configuration variables and their derivatives. In this setting, it is reasonable to expect that the geometry of the associated adjoint systems can be described using Dirac structures (see, for example, \citet{YoMa2006a, YoMa2006b}), which generalize the symplectic and presymplectic structures of adjoint ODE and DAE systems, respectively. Structure-preserving discretizations of such systems could then be studied through the lens of discrete Dirac structures (\citet{LeOh2008}). These discrete Dirac structures make use of the notion of a retraction (\citet{AbMaSe2008}). The tangent and cotangent lifts of a retraction also provide a useful framework for constructing geometric integrators (\citet{BaMa2021}). It would be interesting to synthesize the notion of tangent and cotangent lifts of retraction maps with discrete Dirac structures in order to construct discrete Dirac integrators for adjoint systems with nonholonomic constraints which generalize the presymplectic integrators constructed in \citet{BaMa2022}.
Another natural research direction is to extend the current framework to evolutionary partial differential equations (PDEs). There are two possible approaches in this direction. The first is to consider evolutionary PDEs as ODEs evolving on infinite-dimensional spaces, such as Banach or Hilbert manifolds. One can then investigate the geometry of the infinite-dimensional symplectic structure associated with the corresponding adjoint system. In practice, adjoint systems for evolutionary PDEs are often formed after semi-discretization, leading to an ODE on a finite-dimensional space. Understanding the reduction of the infinite-dimensional symplectic structure of the adjoint system to a finite-dimensional symplectic structure under semi-discretization could provide useful insights into structure-preservation. The second approach would be to explore the multisymplectic structure of the adjoint system associated with a PDE. This approach would be insightful for several reasons. First, an adjoint variational quadratic conservation law arising from multisymplecticity would be adapted to spacetime instead of just time. With appropriate spacetime splitting and boundary conditions, such a quadratic conservation law would induce either a temporal or spatial conservation law. As such, one could use the multisymplectic conservation law to determine adjoint sensitivities for a PDE with respect to spatial or temporal directions, which could be useful in practice \citep{LiPe2004}. Furthermore, the multisymplectic framework would apply equally well to nonevolutionary (elliptic) PDEs, where there is no interpretation of a PDE as an infinite-dimensional evolutionary ODE. Additionally, adjoint systems for PDEs with constraints could be investigated with multi-Dirac structures (\citet{VaYoLeMa2012}).
In future work, we aim to explore both approaches, relate them once a spacetime splitting has been chosen, and investigate structure-preserving discretizations of such systems by utilizing the multisymplectic variational integrators constructed in \citet{TrLe2022}.
\section*{Acknowledgements}
BT was supported by the NSF Graduate Research Fellowship DGE-2038238, and by NSF under grant DMS-1813635. ML was supported by NSF under grants DMS-1345013 and DMS-1813635, and by AFOSR under grant FA9550-18-1-0288.
\section*{Data Availability Statement}
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
\section{Introduction}
A key prediction of the standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) cosmological model \citep{Davis-1985} is that galaxies such as the Milky Way (MW) are surrounded by a dark matter halo and by satellite galaxies formed within its substructures. In apparent contradiction, the ``plane of satellites" describes the arrangement of satellite galaxies in a thin \citep[e.g.][]{Lynden-Bell-1976, Kroupa-2005, Pawlowski-2018}, possibly rotationally supported \citep{Metz-2008} plane.
Whereas energy dissipation can lead to the gas inside of galaxies settling into rotating thin disks, such a configuration is highly unlikely to form from the collisionless dark matter halo. While other apparent discrepancies between predictions and observations of Milky Way satellites have been resolved through baryonic effects \citep{Navarro-1996, Pontzen-2014, Sawala-2016a} there is no plausible formation mechanism for rotating satellite planes within dispersion-supported dark matter halos. Consequently, the ``plane of satellites problem" has emerged as the most persistent challenge to the dark matter paradigm \citep{Kroupa-2012, Bullock-2017, Perivolaropoulos-2021}.
That the ``plane of satellites" problem has so far eluded resolution
is not for lack of trying. Planes of satellites were found with the same (low) frequency in collisionless and hydrodynamic cosmological simulations \citep{Cautun-2015, Ahmed-2017, Muller-2021} and in MW analogues in isolation or in pairs \citep{Forero-Romero-2018, Pawlowski-2019b}; planes show no significant correlation with other properties of the host halo \citep{Pawlowski-2019b}. There is evidence that filamentary accretion \citep{Libeskind-2005, Shao-2018} or the presence of massive satellites \citep{Samuel-2021} can generate some anisotropy, but systems as thin as the Milky Way's are still very rare \citep{Cautun-2015}. Moreover, any planes that do form in $\Lambda$CDM are transient, chance alignments of substructures \citep{Muller-2021, Buck-2016, Shao-2019, Samuel-2021}, rather than long-lived, rotationally supported disks.
With no apparent explanation within $\Lambda$CDM, the ``plane of satellites" might constitute evidence for MOND \citep{Famaey-2012}, an entirely different cosmological framework, in which the Milky Way's satellite galaxies are dark-matter-free ``tidal" dwarf galaxies formed during a hypothetical past close encounter between the MW and M31 \citep{Yang-2014, Bilek-2018, Banik-2018}.
Here, we re-examine the contention that the MW contains an exceptional plane of satellites, explain the origin of the observed anisotropy, and study its time evolution in light of proper motion measurements by the {\it Gaia} space telescope \citep{McConnachie-2020}.
This paper is organised as follows. We define the metrics for spatial and orbital anisotropy in Sections~\ref{sec:methods:spatial} and~\ref{sec:methods:look-elsewhere}, and introduce the Gini coefficient formalism that captures the dependence of the anisotropy on the radial distribution in Section~\ref{sec:methods:gini}. We describe the observational data and its analysis in Sections~\ref{sec:methods:observations}--\ref{sec:methods:integration}, and our $\Lambda$CDM simulations in Sections~\ref{sec:methods:simulations}--\ref{sec:methods:orphans}. Our results for the Milky Way's plane of satellites are shown in Section~\ref{sec:results}. We analyse the spatial anisotropy in Section~\ref{sec:spatial}, the orbital anisotropy in Section~\ref{sec:orbital}, and the time evolution in Section~\ref{sec:evolution}. We conclude with a summary in Section~\ref{sec:summary}.
\section{Methods}
\subsection{Definition of spatial anisotropy}\label{sec:methods:spatial}
The Milky Way's ``plane of satellites" canonically consists of the 11 ``classical" satellites, the brightest within $r=300$~kpc of the Galactic centre, believed to constitute a complete sample. To characterise the spatial anisotropy of a satellite system, it is customary to consider the inertia tensor, defined as
\begin{equation}
I_{ij} = \sum_{n=1}^N x_{n,i} x_{n,j},
\label{eqn:inertia}
\end{equation}
where $x_n$ are the coordinates of the $n$-th satellite relative to the centre of positions. We label the square roots of its eigenvalues as $a$, $b$ and $c$, corresponding to the dispersions in position along the unit eigenvectors, $\vec{x_a}$, $\vec{x_b}$ and $\vec{x_c}$. A related metric is the ``reduced" inertia tensor, defined after projection of the positions onto a unit sphere. We label the square roots of its eigenvalues as $a_\mathrm{red}$, $b_\mathrm{red}$ and $c_\mathrm{red}$. Both $c/a$ and $(c/a)_\mathrm{red} \equiv c_\mathrm{red}/a_\mathrm{red}$ parametrise the spatial anisotropy; smaller values imply greater anisotropy. Note that for small $N$, the expectation values of $c/a$ and $(c/a)_\mathrm{red}$ decrease, regardless of the underlying anisotropy \citep{Santos-Santos-2020}.
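As a concrete illustration, the following sketch (one possible direct translation, assuming the positions are supplied as an $(N,3)$ array) computes $c/a$ and $(c/a)_\mathrm{red}$ by eigendecomposition of Equation~\ref{eqn:inertia}, with coordinates taken relative to the centre of positions as in the text.

```python
import numpy as np

def flattening(positions):
    """c/a for the plain and reduced inertia tensors of Equation (1).

    `positions` is an (N, 3) array; coordinates are taken relative to the
    centre of positions before forming I_ij = sum_n x_ni x_nj.
    Returns (c/a, (c/a)_red); smaller values mean a flatter configuration.
    """
    x = positions - positions.mean(axis=0)

    def axis_ratio(y):
        eigvals = np.linalg.eigvalsh(y.T @ y)    # ascending order: c^2 <= b^2 <= a^2
        c, _, a = np.sqrt(np.maximum(eigvals, 0.0))
        return c / a

    # "reduced" tensor: project the (centred) positions onto the unit sphere
    unit = x / np.linalg.norm(x, axis=1, keepdims=True)
    return axis_ratio(x), axis_ratio(unit)
```

For points confined exactly to a plane both ratios vanish, while a large isotropic sample gives values near unity; for $N = 11$ isotropic points, the expected ratios are noticeably below one, as noted above.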
\subsection{Definition of orbital anisotropy} \label{sec:methods:look-elsewhere}
To characterise the clustering of orbital poles, we adopt the orbital pole dispersion for a subset of $N_{s}$ satellites, $\Delta_{\rm{std}}$, defined by \cite{Pawlowski-2019a} as:
\begin{equation}
\Delta_{\rm{std}} (N_{s}) = \sqrt{ \frac{1}{N_{s}} \sum_{i=1}^{N_{s}} \theta_i^2 },
\end{equation}
where $\theta_i$ is the angle between the orbital pole of the $i$th satellite and the mean orbital pole of the satellites in the subset. To compute the clustering of an observed system relative to expectation, the same analysis is performed on the observations and simulations.
Based on earlier {\it Gaia} DR2 and HST proper motions, \cite{Pawlowski-2019a} calculated orbital pole dispersions for all possible satellite subsets of size $N_s = 3, \ldots, 11$ in the MW and in simulations, and discovered that $N_s = 7$ yielded the most unusual configuration. However, there is no a priori reason to single out $N_s=7$. When considering only a proper subset of satellites, the interpretation of $\Delta_{\rm{std}} (N_{s})$ as evidence for unusual clustering is subject to the ``look elsewhere effect''. To account for this, we follow the method of \cite{Cautun-2015} and perform the same analysis on the simulated systems: as in the observations, we consider all subsets of size $N_s = 3, \ldots, 11$ in each simulated system and identify the one least likely to arise by chance from an isotropic distribution, where the chance probabilities are computed from the distributions of $\Delta_{\rm{std}} (N_s)$ measured for $10^5$ isotropic realisations of $N=11$ points.
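The subset search is straightforward to express in code. The sketch below (Python/NumPy) takes the mean orbital pole to be the normalised vector sum of the unit poles, which is an assumption on our part; the enumeration over subsets mirrors the procedure described above:

```python
import numpy as np
from itertools import combinations

def pole_dispersion(poles):
    """Delta_std in degrees for an (N, 3) array of unit orbital poles."""
    mean = poles.sum(axis=0)
    mean /= np.linalg.norm(mean)          # assumed mean-pole definition
    theta = np.arccos(np.clip(poles @ mean, -1.0, 1.0))
    return np.degrees(np.sqrt(np.mean(theta ** 2)))

def most_clustered(poles, n_s):
    """Smallest Delta_std over all subsets of size n_s."""
    return min(pole_dispersion(poles[np.array(idx)])
               for idx in combinations(range(len(poles)), n_s))
```

Comparing the most clustered subset of an observed system against the same statistic measured in isotropic realisations then yields the chance probabilities used in the text.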
\subsection{The Gini coefficient of inertia} \label{sec:methods:gini}
As each satellite contributes to the inertia (Equation~\ref{eqn:inertia}) proportional to $r_i^2$, $c/a$ is sensitive to the radial profile. To quantify this relationship, we introduce the Gini coefficient formalism. The central panel of Figure~\ref{fig:radial} shows the summed weights of the closest $i$ satellites from the centre, $\sum\limits_{j=1}^{i}r_j^2$, normalised by the total weight of all 11 satellites, $\sum\limits_{j=1}^{11}r_j^2$. The area between each curve and the diagonal measures the inequality of the satellites' contributions to the inertia, or the sample {\it Gini coefficient of inertia},
\begin{equation}
G = \left. \frac{1}{N-1} \sum\limits_{i=1}^N (2i - N -1) r^2_i \middle/ \right. \sum\limits_{i=1}^{N}r^2_i . \label{eqn:gini}
\end{equation}
Figure~\ref{fig:radial-synthetic} illustrates the relation between the radial distribution and the Gini coefficient, $G$, and the correlation between $G$ and the measured anisotropy, $c/a$, for samples of 11 points drawn from isotropic angular distributions. A more centrally concentrated radial distribution (higher $G$, as shown in the top panel) leads to greater anisotropy (lower $c/a$).
Compared to a more equal distribution, the Milky Way's centrally concentrated radial profile is equivalent to sampling a system with fewer points. For the purpose of computing $c/a$, the effective sample size \cite{Kish-1965} is only $N_\mathrm{eff}=4.16$.
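Equation~\ref{eqn:gini} and the effective sample size are both one-liners; the sketch below assumes the Kish definition with weights $w_i = r_i^2$, matching the inertia weighting (the function names are ours, for illustration):

```python
import numpy as np

def gini_of_inertia(radii):
    """Sample Gini coefficient of inertia (Equation gini); radii in any units."""
    w = np.sort(np.asarray(radii, dtype=float) ** 2)   # weights r_i^2, ascending
    n = len(w)
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * w) / ((n - 1) * w.sum())

def kish_n_eff(radii):
    """Kish effective sample size for weights w_i = r_i^2."""
    w = np.asarray(radii, dtype=float) ** 2
    return w.sum() ** 2 / np.sum(w ** 2)
```

Equal radii give $G = 0$ and $N_\mathrm{eff} = N$; a single dominant satellite drives $G \to 1$ and $N_\mathrm{eff} \to 1$.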
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{plots/radial_distribution_isotropic.pdf}
\caption{Relation between Gini coefficient of inertia, $G$, and
anisotropy, $c/a$, for $10^5$ random samples of 11 points, drawn
from isotropic angular distributions and radial distributions
uniformly distributed in $r^{1/2}$ (top) or $r$ (bottom). In the right panel, black lines denote the $90^\mathrm{th}$, $50^\mathrm{th}$ and $10^\mathrm{th}$ percentiles of each dataset, and grey lines repeat the corresponding percentiles for the other dataset. For clarity, the left
and middle panels only show the first 200 samples. }
\label{fig:radial-synthetic}
\end{figure}
\subsection{Observations} \label{sec:methods:observations}
We adopt the sky positions and radial velocities from the \cite{McConnachie-2012} catalogue, and combine these, where available, with the \cite{McConnachie-2020b} proper motion measurements based on {\it Gaia} EDR3 \citep{McConnachie-2020a}. The systemic proper motions were measured within a Bayesian framework that combines information from stars with full astrometric data with information from stars with only photometric and proper motion data. The method is a mixture model that assigns each candidate star a probability of being associated with the target galaxy, taking into account foreground and background contaminants. For the three innermost satellites (Sagittarius dSph, the LMC and the SMC), for which the \cite{McConnachie-2020b} catalogue does not include proper motions, we use the {\it Gaia} DR2 proper motions of \cite{Riley-2019}. We further compiled the most recent estimates of the distance moduli of each satellite. The distance moduli, sky coordinates, radial velocities and proper motions used in this study, including their sources, are listed in Table~\ref{tab:observations}. As discussed in Appendix~\ref{appendix:gaia}, we also repeated our analysis using the {\it Gaia} EDR3 proper motions of \cite{Battaglia-2022}, and using only the {\it Gaia} DR2 proper motions described in \cite{Riley-2019}.
We account for measurement errors by generating Monte Carlo samples of the satellites in the space of observed quantities: distance modulus, radial velocity and proper motions, as well as the position of the Sun relative to the Galactic centre. We model each observable as a Gaussian distribution with the mean value and standard deviation given by the measurements and their quoted errors. For the Sun's distance from the Galactic centre, we assume $R_\odot= (8.178 \pm 0.022) ~\rm{kpc}$ \citep{Gravity-2019}, for the circular velocity at the Sun's position, $V_{\rm circ} = (234.7\pm1.7)~\rm{km/s}$ \citep{Nitschai-2021}, and for the Sun's motion with respect to the local standard of rest, $(U,V,W)=(11.10 \pm 0.72, 12.24\pm0.47, 7.25\pm0.37)~\rm{km/s}$ \citep{Schonrich-2010}.
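As an illustration of the sampling step, the snippet below draws Monte Carlo distances from a Gaussian distance modulus; the modulus value used here is hypothetical, not a measurement from Table~\ref{tab:observations}:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_distance_kpc(mu, sigma_mu, n=10000):
    """Draw distances (kpc) from a Gaussian distance modulus mu +/- sigma_mu.

    The distance modulus is mu = 5 log10(d / 10 pc), so d[kpc] = 10**(mu/5 - 2).
    """
    mus = rng.normal(mu, sigma_mu, size=n)
    return 10.0 ** (mus / 5.0 - 2.0)

# hypothetical satellite with mu = 19.0 +/- 0.1 mag, i.e. d ~ 63 kpc
distances = sample_distance_kpc(19.0, 0.1)
```

The same Gaussian sampling is applied to radial velocities, proper motions and the solar parameters, and every derived quantity is then recomputed per sample.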
\subsection{Orbital Integration}\label{sec:methods:integration}
To infer the time evolution of the Milky Way satellite system, the orbits of the satellites are integrated numerically as massless test particles in a static potential using the Gala package \citep{gala}. The potential consists of a disk, stellar nucleus and bulge, and a dark matter halo. The disk is modelled as an
axisymmetric Miyamoto-Nagai disk
\citep{Miyamoto-Nagai-1975}, which, for our default model,
has a disk mass of $5.17\times 10^{10}~\mathrm{M}_\odot$, $a = 3$~kpc and $b = 0.028$~kpc
\citep{Licquia-2015}. The nucleus and stellar bulge are both
modelled as spherically symmetric Hernquist profiles
\citep{Hernquist-1990}. For the nucleus we assume a mass of
$1.71 \times 10^9~ \mathrm{M}_\odot $, and a scale radius $a = 0.07$~kpc, and for the
bulge we assume a mass of $5.0 \times 10^9 ~ \mathrm{M}_\odot $ and $a = 1.0$
kpc. For the dark matter halo we assume a spherically symmetric
NFW \citep{Navarro-1996} potential.
Until recently, the Milky Way halo mass may have been a prohibitive
source of uncertainty for calculating the orbital evolution of the
satellites, as its value was known only to within a factor of two \citep[e.g.][]{Wang-2020}. However, the Galactic
halo mass has now been estimated with an uncertainty of only about 20\% using {\it Gaia} data. Multiple dynamical probes, such as the stellar rotation
curve, the escape velocity, the motions of halo stars, globular
clusters, and satellite galaxies
\citep{Monari-2018,Callingham-2019,deason-2019,Cautun-2020,Koppelman-2021}, consistently imply a dark matter halo mass for the MW of $M_{200}=(1.0\pm0.2)\times10^{12}~ \mathrm{M}_\odot $ and NFW concentration, $c_{200}=11\pm2$.
Based on these results, we adopt a reference MW halo of mass $1.0 \times 10^{12} \mathrm{M}_\odot $ and a concentration parameter, $c_{200}=11$, corresponding to an NFW scale radius of $r_s=19.2$~kpc. The positions and velocities relative to the plane of satellites, and the orbital periods and apocentre distances for the default potential, are listed in Table~\ref{tab:kinematics}, where the quoted uncertainties reflect $68\%$ confidence intervals for all quantities based on Monte Carlo sampling. Varying the MW potential within the observational uncertainties does not significantly affect the conclusions of our study, as we show in Appendix~\ref{appendix:potential}.
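Our integrations use the Gala package with the full multi-component potential described above. To make the mechanics concrete, the sketch below integrates a massless test particle in the NFW halo alone (kick-drift-kick leapfrog, halo parameters as adopted above); the disk, bulge and nucleus are deliberately omitted, so this is an illustration rather than the production setup:

```python
import numpy as np

G = 4.30091e-6         # gravitational constant in kpc (km/s)^2 / Msun
GYR = 1.02271          # 1 km/s = 1.02271 kpc/Gyr; dt[Gyr] * GYR is in kpc/(km/s)
M200, C200, R_S = 1.0e12, 11.0, 19.2   # adopted MW halo parameters

def nfw_acceleration(pos):
    """Acceleration in (km/s)^2/kpc at position pos (kpc) in an NFW halo."""
    r = np.linalg.norm(pos)
    f = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    m_enclosed = M200 * f(r / R_S) / f(C200)   # mass within radius r
    return -G * m_enclosed / r**3 * pos

def integrate_orbit(pos, vel, dt_gyr=1e-3, n_steps=1000):
    """Kick-drift-kick leapfrog; pos in kpc, vel in km/s."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    dt = dt_gyr * GYR
    for _ in range(n_steps):
        vel += 0.5 * dt * nfw_acceleration(pos)   # kick
        pos += dt * vel                           # drift
        vel += 0.5 * dt * nfw_acceleration(pos)   # kick
    return pos, vel
```

A particle launched at the local circular velocity remains on a near-circular orbit over a gigayear, a basic sanity check of the integrator; the production runs additionally Monte-Carlo sample the initial conditions, as described above.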
\subsection{$\Lambda$CDM simulations} \label{sec:methods:simulations}
The simulations used in this work are cosmological zoom-in constrained simulations, based on initial conditions created for the {\sc Sibelius} project \citep{Sawala-2021, McAlpine-2022} and designed to reproduce Local Group (LG) analogues within the observed large-scale structure. The simulations assume a $\Lambda$CDM cosmology with $\Omega_0 = 0.307$, $\Omega_\Lambda = 0.693$, $\sigma_8 = 0.8288$, and $h = 0.6777$. We use physical units throughout this work. In total, we generated 60,000 simulations, resulting in several thousand loosely defined Local Group analogues. From these, we selected 101 for the re-simulations, performed at a mass resolution of $1.0\times10^6 \mathrm{M}_\odot $ with the public {\sc Gadget-4} code \citep{Springel-2021}. At this resolution, a MW analogue halo contains approximately $10^6$ particles and an average of $\sim 200$ subhalos down to $2\times 10^7 \mathrm{M}_\odot $ can be identified within 300 kpc from the centre.
Structures and self-bound substructures were identified using the Friends-of-Friends (FoF) and {\sc Subfind} algorithms implemented in {\sc Gadget-4} at 60 snapshots equally spaced in time, from $z=4$ until a lookback time of 1 Gyr, and a further 40 snapshots equally spaced over the final 1 Gyr up to $z=0$. Throughout this work, we refer to the two principal self-bound substructures of each LG analogue at $z=0$ simply as ``halos'', and to the lower mass substructures within 300 kpc of the centre of each halo as ``satellites''. For the purposes of this work, we consider both halos as Milky Way analogues.
We use {\sc Gadget}'s on-the-fly merger tree construction, and cut the chain of links when a subhalo's progenitor is no longer found, or when a clear discontinuity in mass and position indicates that a satellite's progenitor has been erroneously identified as the main halo. At each snapshot, we record the maximum circular velocity of each subhalo,
$v_\mathrm{max} = \mathrm{max}\left(\sqrt{G M(<r)/r}\right)$, and define $v_\mathrm{peak}$ as the highest value of $v_\mathrm{max}$ attained by a subhalo and its progenitors over time. Following \cite{Libeskind-2005}, we rank satellites by $v_\mathrm{peak}$, and identify the top 11 within 300 kpc of each MW analogue at $z=0$ as analogues to the classical MW satellites.
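The ranking step can be sketched as follows (Python/NumPy; the function names are ours, for illustration):

```python
import numpy as np

def v_peak(vmax_history):
    """Highest v_max attained by a subhalo across its recorded snapshots."""
    return np.max(vmax_history)

def classical_analogues(vpeaks, radii_z0, n=11, r_max=300.0):
    """Indices of the top-n subhalos by v_peak within r_max kpc at z=0."""
    inside = np.flatnonzero(np.asarray(radii_z0) < r_max)
    order = inside[np.argsort(np.asarray(vpeaks)[inside])[::-1]]
    return order[:n]
```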
\subsection{Orphan subhalos} \label{sec:methods:orphans}
As noted above, the radial distribution of satellites is important for the anisotropy. Numerical simulations suffer from the artificial disruption of substructures, which can affect subhalos far beyond the particle number limit at which they can theoretically be identified \citep{VanDenBosch-2018, Guo-2011}.
This effect can, however, be mitigated using semi-analytical models, which populate merger trees constructed from simulated dark matter subhalos with galaxies. These models include so-called ``orphan'' galaxies, that is, galaxies whose dark matter subhalo has been numerically disrupted. After the subhalo is disrupted, its subsequent evolution is followed by tracing the position of its most bound particle \citep{Simha-2017}. Our ``complete'' sample includes these ``orphan'' subhalos.
One important result of this work is that the ``incomplete'' and ``complete'' samples of satellite halos have different radial distributions. Even though our high-resolution simulations resolve, on average, 200 surviving satellite halos inside 300 kpc of each MW analogue at $z=0$, and although we rank the satellites by $v_\mathrm{peak}$ ($v_\mathrm{max}$ being more strongly affected by tidal stripping), we find that the radial distribution of the top 11 surviving satellites in the ``incomplete'' samples is systematically and significantly less centrally concentrated than that of the MW's brightest satellites.
\begin{figure}
\centering
\includegraphics[width=.8\columnwidth]{plots/orbits_gala_MC_N11.pdf}
\caption{Maximum likelihood positions (arrowheads) and orbits of
the 11 brightest MW satellites within 300 kpc, projected face-on
(top) and edge-on (bottom) according to the eigenvectors of the inertia tensor. Bold lines show maximum-likelihood orbits integrated for 1~Gyr into
the past and future in a halo of mass $10^{12} \mathrm{M}_\odot $, faint lines show 200 Monte-Carlo samples. The {\it Gaia} EDR3 measurements tightly constrain the proper motions, except for the LMC and SMC. Several galaxies, including the two outermost, Leo I and II, are presently crossing the plane (indicated by grey horizontal lines in the bottom panels), which soon disperses as a result.}
\label{fig:projections}
\end{figure}
\section{The Milky Way's Plane of Satellites in light of {\it Gaia} EDR3} \label{sec:results}
Figure~\ref{fig:projections} shows the present most likely positions and estimated orbits of the 11 brightest MW satellites projected along the principal axes of inertia. For the present positions, we measure $c/a = 0.183 \pm 0.004$ and $(c/a)_\mathrm{red}=0.3676 \pm 0.0004$. In the bottom two panels, the solid grey line shows the plane of satellites projected edge-on. However, from a visual inspection of the orbits, shown here integrated over $\pm$ 1 Gyr and including Monte-Carlo sampling of the observational uncertainties, it is already apparent that this configuration is short lived.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/radial_distribution_complete_EDR3-McConnachie.pdf}
\caption{Radial distribution and anisotropy of the classical MW satellites and those of simulated $\Lambda$CDM counterparts. On all panels, black symbols and lines represent the MW, lines coloured by $c/a$ represent the simulations. Left panel: radius, $r_i$, of the $i^{\mathrm{th}}$ closest satellite. Centre panel: sum of the squares of the radii of the closest $i$ satellites, normalised by the sum of all 11 satellites, i.e. the cumulative contributions to the inertia. The {\it Gini coefficient of inertia}, $G$, corresponds to the area between each line and the diagonal. Right panel: correlation between $G$ and anisotropy, $c/a$, accounting for artificial disruption (``complete'', circles), or without accounting for artificial disruption (``incomplete'', crosses). Grey lines indicate median and 10$^\mathrm{th}$ and 90$^\mathrm{th}$ percentiles. The black circle denotes the Milky Way's present values of $G$ and $c/a$, lines show its most likely (bold) and Monte-Carlo sampled (thin) evolution over the past 0.5 Gyr. Accounting for artificial disruption, the MW lies within the distribution.}
\label{fig:radial}
\end{figure}
\subsection{Spatial Anisotropy} \label{sec:spatial}
Earlier comparisons to $\Lambda$CDM systems \citep{Pawlowski-2019a} found that only $0.7\%$ of $\Lambda$CDM simulations produce systems as anisotropic as the Milky Way. However, we find this to be an artefact caused by the disruption of satellites in numerical simulations, which results in artificially extended radial profiles \citep{Guo-2014, VanDenBosch-2018, Webb-2020}. Accounting for this effect through the inclusion of orphans (see Section~\ref{sec:methods:orphans}) in our ``complete'' sample of satellites, we recover radial distributions resembling the MW's, as shown in the left panel of Figure~\ref{fig:radial}.
The right panel of Figure~\ref{fig:radial} shows the relationship between $G$ and $c/a$. Systems with higher central concentration (higher $G$) tend to be more anisotropic (lower $c/a$). Accounting for artificial disruption (filled circles), $58\%$ of $\Lambda$CDM systems have $G$ above the MW's, and 11 ($5.5\%$) have $c/a < 0.183$. Neglecting this effect (faint crosses) produces no systems with $G$ as high as the MW's, and only two ($1\%$) with $c/a$ as low, in line with earlier studies \citep{Shao-2019, Pawlowski-2019a}, which found $c/a$ values as low as the MW's to be exceedingly rare.
The outsized influence of the outermost satellites on the measured anisotropy is shown in Figure~\ref{fig:anisotropy-distributions}, which shows the probability distributions of $c/a$ (left panel) and $(c/a)_\mathrm{red}$ (right panel) when one satellite is placed at random angular coordinates at its observed radius, while all other satellites remain fixed. Satellites are ordered from top to bottom in order of decreasing distance. Vertical lines show the values for all 11 satellites at their observed positions. We also list the median values of the distributions and, in brackets, the range corresponding to $1\sigma$ around the median. For Sagittarius, the satellite with the smallest distance, the $c/a$ distribution is extremely narrow: due to its close proximity, it contributes less than $1\%$ to the inertia tensor. For Fornax, the third most distant galaxy, located 38 kpc above the plane, randomising the angular coordinates can result in both significantly greater or smaller anisotropy. However, most significantly, for the two most distant satellites, Leo I and Leo II, randomising the angular coordinates of just one object raises the median value of $c/a$ to 0.28 and 0.31, respectively, with maxima of 0.53 and 0.63. In other words, randomising the position of Leo I or Leo II alone could turn the Milky Way's classical satellites into a system more {\it isotropic} than the majority of $\Lambda$CDM systems.
In addition to the radial distribution, the Milky Way's present anisotropy results from the fact that its two outermost satellites, Leo I and Leo II, which contribute two thirds of the total inertia, are currently in close proximity to each other. However, as is already apparent from Figure~\ref{fig:projections}, and as we discuss in more detail below, this configuration is short-lived.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/ca_distributions_randomised_EDR3-McConnachie.pdf}
\caption{Probability density functions of $c/a$ (left) and
$(c/a)_\mathrm{red}$ (right) for the 11 brightest satellites when
the angular coordinates of each galaxy are randomised in turn,
with the distance set to the observed value, and the coordinates
of all other galaxies kept fixed. Numbers show the median values of $c/a$, and
$(c/a)_\mathrm{red}$; those in brackets show the
$10^\mathrm{th}$ and $90^\mathrm{th}$ percentiles. Galaxies are
sorted from top to bottom in decreasing order of radius. Black
horizontal lines indicate the vertical offset, the black vertical
lines show the values with all 11 galaxies at their observed
positions. Each galaxy impacts the distribution of $c/a $
differently, and the range of possible $c/a$ values correlates
with the radius of the satellite. Just placing either one of Leo I
or Leo II at
different angular coordinates at their respective radius could
result in a completely different anisotropy, including cases that
are more
isotropic than the majority of $\Lambda$CDM systems. }
\label{fig:anisotropy-distributions}
\end{figure}
\clearpage
\begin{figure}
\includegraphics[width=\columnwidth]{plots/poles_hammer_N11.pdf}
\caption{Hammer projection of Milky Way satellite orbital poles, using {\it Gaia} EDR3 proper motions. Large circles show the most likely values, small circles show 200 Monte Carlo samples of the observational errors, within $\pm 1 \sigma$ of the most likely values. The dotted black curve indicates the dispersion reported by \cite{Pawlowski-2019a} for seven MW satellites. The solid black curve indicates the dispersion we find for the same set based on {\it Gaia} EDR3, the dashed black curve indicates the minimum dispersion we find for seven satellites exchanging Leo I and Leo II. The orbital poles of the MW satellites are significantly clustered, but several of our simulated $\Lambda$CDM systems contain equally or more strongly clustered satellite systems.}
\label{fig:hammer}
\end{figure}
\subsection{Orbital Anisotropy} \label{sec:orbital}
Supporting the notion that the satellite plane constitutes a spinning disk, the orbital poles of 7 of the 11 classical satellites -- the LMC, the SMC, Fornax, Leo II, Carina, Draco and Ursa Minor -- are reportedly clustered with a standard deviation in direction of only $\Delta_{\rm{std}} (N_s=7)=16.0^{\circ}$, found in only $0.04\%$ of $\Lambda$CDM systems \citep{Pawlowski-2019a}.
Using the more precise proper motions from {\it Gaia} EDR3 \citep{McConnachie-2020b} for the same seven satellites, we find that this angle increases significantly, to $\Delta_{\rm{std}} (N_s=7)=23.2^{\circ}\,^{+3.5}_{-2.8}$. This configuration has a 0.087\% probability to arise from an isotropic distribution. Repeating the analysis for all subsets of seven satellites, we find that a different subset (one that includes Leo I instead of Leo II) yields a smaller dispersion of $\Delta_{\rm{std}} (N_s=7)=18.9^{\circ}\,^{+1.9}_{-1.4}$, with a corresponding 0.011\% probability. In Figure~\ref{fig:hammer}, we show the orbital poles of the 11 classical satellites, and the orbital pole dispersions calculated by \cite{Pawlowski-2019a} (dotted), by us for the same set of seven satellites based on {\it Gaia} EDR3 (solid), and by us for the most clustered set of seven satellites (dashed).
Among our sample of 202 simulated systems, adopting either $\Delta_{\rm{std}} (N_s=7) = 18.9^{\circ}$ or $23.2^{\circ}$ and accounting for the minimum look-elsewhere effect (see Section~\ref{sec:methods:look-elsewhere}), we find three or five systems, respectively, with subsets of satellites with a smaller probability to arise from isotropic distributions. That is, we find that $\sim 2\%$ of $\Lambda$CDM systems contain satellites whose orbital poles are even more anisotropic than the most clustered subset of the Milky Way, a $\sim$ 50-fold increase over previous results. The orbital clustering of a subset of the Milky Way satellites is unusual in $\Lambda$CDM, but not {\it astronomically} improbable.
Importantly, while the ``plane of satellites'' includes all 11 classical satellites, the orbital anisotropy only concerns a subset, which is in fact more spatially isotropic than the system as a whole. The orbital pole clustering does not drive the spatial anisotropy.
\subsection{Time Evolution} \label{sec:evolution}
Another defining feature of a rotationally supported disk would be a significantly higher velocity dispersion parallel to the plane than perpendicular to it. However, for the MW's classical satellites, we measure $\sigma_{v\parallel} = 165.1 \pm 1.2$~km\,s$^{-1}$ and $\sigma_{v\perp} = 121.6 \pm 0.4$~km\,s$^{-1}$. The ratio, $\sigma_{v\parallel} / \sigma_{v\perp} = 1.36$, is close to the purely geometrical factor of $\sqrt{2}$ expected for an isotropic velocity distribution. By this basic measure, the plane is not rotation-supported.
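The $\sqrt{2}$ factor arises from component counting: an isotropic velocity field has two independent components parallel to any plane but only one perpendicular to it. A quick numerical check (assuming the dispersions are defined as root-mean-square values):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(0.0, 100.0, size=(200_000, 3))   # isotropic Gaussian velocities
normal = np.array([0.0, 0.0, 1.0])              # an arbitrary plane normal

v_perp = v @ normal                             # component along the normal
v_par = np.linalg.norm(v - np.outer(v_perp, normal), axis=1)  # in-plane speed

sigma_par = np.sqrt(np.mean(v_par ** 2))
sigma_perp = np.sqrt(np.mean(v_perp ** 2))
ratio = sigma_par / sigma_perp                  # -> sqrt(2) for isotropy
```

A rotationally supported disk would push this ratio well above $\sqrt{2}$; the measured 1.36 is close to the isotropic expectation.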
The longevity of the plane can also be tested directly via orbital integration, as described in Section~\ref{sec:methods:integration}. This method was first, but inconclusively, applied using pre-{\it Gaia} data by \cite{Maji-2017}. In this work, as described in Section~\ref{sec:methods:observations}, we benefit from significantly more precise observations, including {\it Gaia} EDR3 proper motions and more accurate distances.
In Figure~\ref{fig:projections}, we saw that several satellites are presently crossing the plane, while Leo I and II, which dominate the inertia, are moving apart. To elucidate the impact of such ``fortuitous alignments'' on the anisotropy, we show at the top of Figure~\ref{fig:distributions} the anisotropy distributions when each satellite moves along its orbit while the others remain at their present positions. The time-averaged anisotropy of the system is then calculated over one full orbital period centred on the present time. Depending on the orbital phase of Leo II alone, $c/a$ could be as high as 0.39, more {\it isotropic} than most $\Lambda$CDM systems.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plots/ca_distributions_phase_EDR3-McConnachie.pdf}
\caption{Top: time-averaged probability densities for the
anisotropy, $c/a$ (left) and $(c/a)_\mathrm{red}$ (right), when each satellite evolves along its most likely orbit during one period, $\tau$, while the other 10
satellites remain at their present positions. Bottom: probability density functions over lookback times of 0.5, 1 and 2 Gyr, evolving all orbits
simultaneously. Triangles below each graph indicate the time-averaged medians, filled areas extend from the 10$^\mathrm{th}$ to 90$^\mathrm{th}$ percentiles. Black vertical lines indicate the present anisotropy, $c/a=0.183$, $(c/a)_\mathrm{red}=0.364$. Downward triangles at the top show percentiles in the $\Lambda$CDM simulations. Over the past 1~Gyr, $c/a$ varied between 0.17 and 0.31, while $(c/a)_\mathrm{red}$ varied between 0.22 and 0.52. The present value of $c/a$ is an outlier even compared to the past 0.5 Gyr.}
\label{fig:distributions}
\end{figure}
The bottom panels of Figure~\ref{fig:distributions} show time-averaged probability densities of $c/a$ and $(c/a)_\mathrm{red}$ when all satellites evolve simultaneously. The current value of $c/a$ is significantly lower than in the recent past: over the past 0.5 and 1~Gyr, the time-averaged medians of $c/a$ are 0.23 and 0.27 respectively, greater than $13\%$ and $23\%$ of $\Lambda$CDM systems. $(c/a)_\mathrm{red}$ has varied widely. Neither metric is an invariant of the satellite system. Instead, both are sensitive to the orbital phases.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{plots/evolution_MC_N11_EDR3-McConnachie.pdf}
\caption{Evolution of $c/a$, $(c/a)_{\mathrm{red}}$, and directions of the normals to the plane of the full and reduced inertia tensors, $\vec{x_c}$ and $\vec{x_{c,\mathrm{red}}}$. Dashed dark blue lines show the most likely results, light blue lines show 200 Monte-Carlo samples, until one satellite is beyond 300 kpc from the Galactic centre. The small inset shows $t=0 \pm 1$~Myr. Grey vertical bars at $t=0$ show 10$^{\mathrm{th}}$ to 90$^{\mathrm{th}}$ percentiles of simulated $\Lambda$CDM analogues at $z=0$, dotted horizontal lines show the medians of these distributions. The MW's $c/a$ evolves towards the median of the $\Lambda$CDM systems, while $(c/a)_{\mathrm{red}}$ varies significantly, exceeding the median in both the near past and near future. The evolution of $\vec{x_c}$ and $\vec{x_{c,\mathrm{red}}}$ reveals the plane to be tilting by either definition.}
\label{fig:MC-evolution}
\end{figure}
The four panels of Figure~\ref{fig:MC-evolution} show the evolution of $c/a$, $(c/a)_\mathrm{red}$, and of the orientations of the planes defined by the full and reduced inertia tensors, which we parametrise by the angles between the vectors normal to the planes, $\vec{x_c}$ and $\vec{x_{c,\mathrm{red}}}$, and their present day equivalents, $\vec{x_{c,0}}$ and $\vec{x_{c,0,\mathrm{red}}}$. The value of $c/a$ approaches the $\Lambda$CDM median within a lookback time of 0.5 Gyr. $(c/a)_\mathrm{red}$ evolves more rapidly, twice exceeding the $\Lambda$CDM median. That $c/a$ and $(c/a)_\mathrm{red}$ vary on such different timescales is a further consequence of the radial distribution. While Leo I and Leo II, which largely determine $c/a$, have orbital periods of $3.4 \pm 0.2$ and $7.2 \pm 0.3$ Gyr, respectively, the eight closest satellites, which largely determine $(c/a)_\mathrm{red}$, have orbital periods under 2~Gyr.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{plots/orbits_gala_plane_evolution_EDR3-McConnachie.pdf}
\caption{Evolution of the orientation of the plane of satellites. Shown on all panels are the most likely orbits of the MW satellites for $\pm$ 1 Gyr from the present time, analogous to Figure~\ref{fig:projections}. From left to right, panels show the position of satellites at 1 Gyr and 0.5 Gyr in the past, today (denoted with $t_0$), and 0.5 and 1 Gyr into the future. All panels are plotted in the frame of the major and minor eigenvectors of the inertia tensor at time $t_0$, $\vec{x_{a,0}}$ and $\vec{x_{c,0}}$. Thick black lines show the plane at each time edge-on. In the centre panel, by construction the plane axes align with the coordinate axes. In the other panels, the tilt of the plane can be observed. The orientation of the plane closely follows the locations of the most distant satellites.}
\label{fig:plane-tilt}
\end{figure}
We further see that the orientation of the satellite plane is not stable, but has tilted by $\sim 17^{\circ}$ over the past 0.5 Gyr (and by $\sim 40^{\circ}$ for the reduced definition). The tilt of the plane is illustrated most clearly in Figure~\ref{fig:plane-tilt}, where we show the most likely orbits and positions of the classical MW satellites, projected in the frame of the major and minor eigenvectors at time $t_0$. In each panel, the thick black line shows the orientation of the plane edge-on, computed at the respective time. In the centre panel, by construction the plane axes align with the coordinate axes. In the other panels, the tilt of the plane can be observed. Rather than satellites orbiting inside a stable plane, the plane itself tilts, as it tracks the positions of its two most distant members. Neither observational errors (see Appendix~\ref{appendix:gaia}) nor uncertainties regarding the MW potential (see Appendix~\ref{appendix:potential}) significantly affect these results.
\section{Summary} \label{sec:summary}
The high reported anisotropy of the MW satellite system can largely be attributed to its high central concentration, not previously reproduced in simulations, combined with the close but fleeting proximity of its two most distant members. Accounting for the radial distribution reveals the MW satellites to be consistent with $\Lambda$CDM expectations. Compared to previous works, we also find a much higher likelihood of subsets whose orbital poles are as clustered as the MW's. Although the Milky Way contains such a subset, the plane of satellites does not constitute a rotationally supported disk. Instead, it evolves on timescales similar to the ``transient'' alignments previously found in $\Lambda$CDM simulations.
Our orbital integration assumes a static MW potential with satellites as massless test particles. We Monte-Carlo sample all sources of observational error and also vary the components of the potential within the uncertainties (see Section~\ref{sec:methods:observations} and Appendix~\ref{appendix:gaia}). We find our results to be robust, and while the real potential is more complex, for example due to the presence of the LMC, these simplifications are valid within a dynamical time of the halo ($\sim 2$ Gyr at $z=0$), particularly for the important outer satellites \citep{Garavito-Camargo-2021, Battaglia-2022}. A more complex potential would only accelerate the dissolution of a rotating disk \citep{Nuwanthika-2018}. The true Milky Way potential also evolves with time, but the halo's dynamical time is significantly longer than the time scale of the reported dissolution of the plane of satellites (several hundred Myr).
This work only directly addresses the archetypal ``plane of satellites" around the MW. Anisotropic satellite distributions have also been reported around several other galaxies \citep{Ibata-2014, Muller-2021} with different criteria, which can exacerbate the look-elsewhere effect. Assessing their significance requires a careful statistical analysis \citep{Cautun-2015}. While not all criteria are equally sensitive to the radial distribution, we also expect that the significantly higher anisotropy we report here for simulated MW analogues will apply to $\Lambda$CDM analogues of other, similarly defined systems.
After centuries of observations, the Milky Way and its satellites are the best studied system of galaxies in the Universe. Viewed in sufficient detail, every system inevitably reveals some remarkable features, and the Milky Way is no exception. However, based on the best currently available data, there is no evidence for a plane of satellites incompatible with, or even particularly remarkable in, $\Lambda$CDM. On the contrary, as judged by the spatial anisotropy of the brightest satellite galaxies, we live in a fairly typical $\Lambda$CDM system.
\subsection*{Acknowledgements}
This work used the DiRAC Data Centric system at
Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk), and facilities hosted by the CSC - IT Centre for Science, Finland. The DiRAC system was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1,
STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
TS is an Academy of Finland Research Fellow. This work was supported by Academy of Finland grant numbers 314238 and 335607. PHJ acknowledges the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930) and the Academy of Finland grant 339127. MS is supported by the Netherlands Organisation for Scientific
Research (NWO) through VENI grant 639.041.749. CSF acknowledges
support by the European Research Council(ERC) through Advanced
Investigator DMIDAS (GA 786910). GL acknowledges financial support by
the French National Research Agency for the project BIG4, under
reference ANR-16-CE23-0002, and MMUniverse, under reference
ANR-19-CE31-0020.
\subsection*{Data and Code}
The analysis in this paper was performed in python3 and makes extensive use of open-source libraries, including Matplotlib 3.4.2, NumPy 1.21.1 \citep{numpy}, SciPy 1.7.0 \citep{2020SciPy-NMeth}, GalPy 1.7.0 \citep{galpy}, Py-SPHViewer \citep{pysphviewer}, TensorFlow \citep{tensorflow2015-whitepaper} and Gala 1.4.1 \citep{gala}.
The full data and analysis code is available at \url{https://github.com/TillSawala/plane-of-satellites}.
\section{Introduction}\label{sec:intro}
Collimated sprays of hadrons called jets are the manifestation of Quantum Chromodynamics (QCD) at high-energy colliders \cite{Hanson:1975fe,Sterman:1977wj}.
The seminal introduction of experimentally robust infrared-and-collinear-safe jet algorithms \cite{Cacciari:2005hq,Cacciari:2008gp,Cacciari:2011ma}, combined with the remarkable resolution of the Large Hadron Collider (LHC) detectors \cite{CMSPF,ATLAS:2017ghe}, has enabled the precision study of the detailed structure of energy flow within jets, a field referred to as jet substructure \cite{Larkoski:2017jix,Marzani:2019hun}.
The ability to exploit the detailed internal structure of jets has opened up numerous new avenues to search for new physics signals \cite{Butterworth:2008iy,Kaplan:2008ie,Krohn:2009th}, and it continues to provide some of the most innovative searches for new physics \cite{CMS:2022jed,CMS:2021yqw}.
It has also provided numerous new ways to probe QCD, both in vacuum and in the medium \cite{Andrews:2018jcm,Cunqueiro:2021wls}.
These new approaches necessitate the theoretical understanding of the detailed structure of energy flow \emph{within} jets.
While there has been extensive progress in the theoretical understanding of jet cross sections, with remarkable calculations including $W+9$-jet cross sections at tree level \cite{Hoche:2019flt}, 5-jet cross sections at next-to-leading order (NLO) \cite{Badger:2013yda}, and 3-jet cross sections at NNLO \cite{Czakon:2021mjy}, much less is understood about the structure of energy flow within jets themselves.
\begin{figure}[t]
\begin{center}
\subfloat[]{
\includegraphics[scale=0.22]{figures/spacetimes_v2.pdf}\label{fig:intro_a}
}\qquad\qquad
\subfloat[]{
\includegraphics[scale=0.22]{figures/jet_fancy_threepoint_noperson.pdf}\label{fig:intro_b}
}
\end{center}
\caption{
(a) Correlation functions measured on the asymptotic boundaries of different spacetimes.
(b) In the flat space case, a wealth of data exists inside high-energy jets at the LHC, allowing for the direct measurement of higher-point correlators.}
\label{fig:intro}
\end{figure}
Asymptotic energy flow in collider experiments is characterized by correlation functions
$\langle \mathcal{E}(\vec n_1) \mathcal{E}(\vec n_2) \cdots \mathcal{E}(\vec n_k) \rangle$ of the energy flow operator~\cite{Sveshnikov:1995vi,Tkachov:1995kk,Korchemsky:1999kt,Bauer:2008dt,Hofman:2008ar,Belitsky:2013xxa,Belitsky:2013bja,Kravchuk:2018htv}:
\begin{align}
\label{energy_flow_operator}
\mathcal{E}(\vec n) = \lim_{r\rightarrow \infty} \int\limits_0^\infty dt \, r^2 \, n^i \, T_{0i}(t,r \vec n)\,.
\end{align}
These energy flow operators are illustrated in the Penrose diagram in \Fig{fig:intro_a}, which shows their non-local spacetime structure, and inside a high-energy jet at the LHC in \Fig{fig:intro_b}, where they are shown as points, illustrating their locality on the celestial sphere.
The underlying microscopic details of the collision are imprinted in the detailed structure of these correlation functions.
This is in analogy with how the details of inflation are imprinted in cosmological correlation functions of scalar $\langle \zeta_{\vec k_1} \zeta_{\vec k_2} \cdots \zeta_{\vec k_n} \rangle $ or tensor $\langle \gamma_{\vec k_1} \gamma_{\vec k_2} \cdots \gamma_{\vec k_n} \rangle$ fluctuations, illustrated schematically in \Fig{fig:intro_a}.
Despite their direct relation to experiments of interest, correlation functions that live on the boundary of flat or de-Sitter space are much less understood than the correlation functions of local operators in a conformal field theory on the boundary of AdS \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, but they have recently received significant interest.
The cosmological three-point function $\langle \zeta_{\vec k_1} \zeta_{\vec k_2} \zeta_{\vec k_3} \rangle $ was first computed in single-field inflation in the seminal work of Maldacena \cite{Maldacena:2011nz}, and it has since been studied extensively \cite{Babich:2004gb,Chen:2006nt,Cheung:2007sv,Cheung:2007st,Weinberg:2008hq} and extended to an understanding of three-point functions of tensor fluctuations \cite{Maldacena:2011nz}.
These three-point functions encode a wealth of information about the underlying dynamics through their shape dependence.
Limiting shapes are shown in \Fig{fig:shapes}, each of which encode different physics, see e.g.~\cite{Bartolo:2004if,Babich:2004gb,Chen:2006nt,Baumann:2009ds,Chen:2010xka,Meerburg:2019qqi,Planck:2015zfm,Planck:2019kim}.
Phenomenological applications of the shapes of four-point correlators have also been studied~\cite{Arroja:2009pd,Chen:2009bc,Hindmarsh:2009es,Senatore:2010jy,Bartolo:2010di,Lewis:2011au}.
There have recently been significant advances in the understanding of cosmological correlation functions, driven by progress in the amplitudes and conformal bootstrap programs; see e.g.~\cite{DiPietro:2021sjt,Cabass:2021fnw,Goodhew:2021oqg,Melville:2021lst,Benincasa:2020aoj,Arkani-Hamed:2018bjr,Arkani-Hamed:2017fdk,Baumann:2021fxj,Baumann:2020dch,Baumann:2019oyu,Arkani-Hamed:2018kmz,Lee:2016vti}.
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.2\textwidth]{figures/shape_c}\label{fig:shapes_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.30\textwidth]{figures/shape_b}\label{fig:shapes_b}
}\qquad
\subfloat[]{
\includegraphics[width=0.30\textwidth]{figures/shape_a}\label{fig:shapes_c}
}
\end{center}
\caption{
Limiting shapes for the three-point correlation function that will play a central role in our discussion: (a) equilateral, (b) squeezed, and (c) flattened.
The behavior of the correlation function in these different limits is determined by distinct physics.
The naming conventions used depend on the particular community.
}
\label{fig:shapes}
\end{figure}
By contrast, despite the wealth of collider data, the understanding of the asymptotic structure of energy flow operators lags behind its cosmological counterparts.
The two-point correlator was studied early on in the history of QCD~\cite{Basham:1978bw,Basham:1977iq,Basham:1979gh,Basham:1978zq,Konishi:1979cb}, measured at $e^+e^-$ colliders~\cite{SLD:1994idb,L3:1992btq,OPAL:1991uui,TOPAZ:1989yod,TASSO:1987mcs,JADE:1984taa,Fernandez:1984db,Wood:1987uf,CELLO:1982rca,PLUTO:1985yzc}, and more recently computed analytically to higher perturbative orders~\cite{Belitsky:2013ofa,Dixon:2018qgp,Henn:2019gkr,Luo:2019nig}.
However, the first calculation of multi-point correlators of energy flow operators was the seminal work of Hofman and Maldacena \cite{Hofman:2008ar} as an expansion about strong coupling.
This motivated significant theoretical study of these correlators, particularly in the context of conformal field theories~\cite{Belitsky:2013xxa,Belitsky:2013bja,Belitsky:2013ofa,Belitsky:2014zha,Korchemsky:2015ssa} and the development of the light-ray-operator product expansion (OPE)~\cite{Hofman:2008ar,Kravchuk:2018htv,Kologlu:2019bco,Kologlu:2019mfz,Chang:2020qpj}.
Recently, the three-point correlator was computed in the collinear limit at weak coupling in both QCD and $\mathcal{N}=4$ super Yang-Mills~\cite{Chang:2022ryc,Chen:2022jhb}, where it was analyzed in detail and expressed as a sum over celestial blocks incorporating the symmetries of the Lorentz group.
(This has since been extended to a calculation of the full angular dependence~\cite{Yan:2022cye}.)
It was also shown that it can be directly analyzed inside jets at the LHC using CMS Open Data~\cite{Komiske:2022enw}.
This is part of a broader program to reformulate the study of jet substructure in terms of energy correlators~\cite{Dixon:2019uzg,Chen:2019bpb,Chen:2020adz,Chen:2020vvp,Chen:2021gdk,Chen:2022jhb,Holguin:2022epo}.%
\footnote{The study of energy correlators in the back-to-back (Sudakov) region has also seen significant progress; see e.g.~\cite{Gao:2019ojf,Moult:2018jzp,Moult:2019vou,Ebert:2020sfi,Li:2021txc}.}
An important part of this program includes the development of techniques to compute energy correlators on charged particles (tracks) \cite{Chang:2013rca,Chang:2013iba,Chen:2020vvp,Li:2021zcf,Jaarsma:2022kdd} to enable the experimental measurement of higher-point correlators.
The availability of higher-point correlators in the collider context allows us to begin asking questions similar to those studied in cosmology, namely about the shape dependence of correlations in the QCD energy flux.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{figures/decorated_opendata}
\caption{
The full shape dependence of the celestial non-gaussianity $Q_\mathcal{E}$ in CMS Open Data, showing a strong peak in the ``flattened triangle'' region.
The $(\xi, \phi)$ coordinates parametrize the shape of the three-point correlator and are defined in \Eq{eq:transf}, but the representative shapes from \Fig{fig:shapes} are drawn to guide the reader.
The squeezed limit is characterized by $\xi\rightarrow 0$, while the flattened triangle is characterized by $\phi\rightarrow 0$. To our knowledge, this is the first study of non-gaussianities in QCD energy flux.}
\label{fig:shape_intro}
\end{center}
\end{figure}
A first step in the use of the three-point correlator in collider physics was taken in \Refs{Chen:2020adz,Chen:2021gdk}, which focused on the squeezed or operator product expansion (OPE) limit, as illustrated in \Fig{fig:shapes_b}.
There, it was shown that interference effects associated with the spin of the gluon are encoded in the azimuthal structure as two squeezed correlators are rotated with respect to a third.
This is in analogy to similar effects in the squeezed limit of the cosmological three-point correlator~\cite{Arkani-Hamed:2015bza}.
The analytic resummation of these effects \cite{Chen:2020adz,Chen:2021gdk} was then used to verify their incorporation into parton showers \cite{Karlberg:2021kwr}.
Note that in the squeezed limit, the three-point correlator factorizes into a product of two-point correlators, much in analogy with the consistency relations in the cosmological case \cite{Maldacena:2002vr,Creminelli:2004yq,Cheung:2007sv,Goldberger:2013rsa}.
The full shape dependence, and the wealth of physics incorporated into it, has not yet been exploited.
In this paper, we introduce ``celestial non-gaussianities'', which are a particular ratio of the three-point correlator to a product of two-point correlators, in analogy to non-gaussianities for correlation functions of local operators in conformal field theories and for cosmological correlation functions.
We show that this observable is robust to hadronization effects, allowing it to be computed perturbatively and compared with data from high-energy jets at the LHC, and we study it in detail using perturbative results and parton showers.
We then plot the celestial non-gaussianities using publicly available data from the CMS experiment at the LHC~\cite{CMS:JetPrimary2011A}, finding good agreement with our theoretical calculations for a track-based analysis.
A plot of the shape dependence of the celestial non-gaussianity is shown in \Fig{fig:shape_intro}. The coordinates $(\xi, \phi)$ will be defined in \Sec{sec:NG}; however, we have drawn representative shapes of the three-point correlator to guide the reader.
Interestingly, we find that the non-gaussianity is highly peaked in the flattened triangle region, from which we can draw an analogy with the case of cosmological correlation functions.
Our results provide the first study of the non-gaussianities of QCD radiation, and they show for the first time control over three-point correlations within jets at the LHC.
This provides a significant step in our understanding of the structure of energy flow inside jets, which we believe will be useful in a number of different directions for improving our understanding of QCD in the high-energy regime.
First, and most obviously, our results provide new detailed probes of the perturbative interactions of quarks and gluons in QCD.
Second, increasingly sophisticated properties of energy flow are being exploited by machine learning techniques to search for increasingly subtle imprints of new physics within jets (see e.g.~\cite{Komiske:2018cqr,Qu:2019gqs,CMS:2020poo}), greatly extending the reach of previous observables~\cite{Thaler:2010tr,Thaler:2011gf,Larkoski:2013eya,Larkoski:2014zma,Larkoski:2014gra,Larkoski:2015kga,Moult:2016cvt,Larkoski:2017cqq,Larkoski:2017iuy,Komiske:2017aww,Komiske:2018cqr,Komiske:2019fks}.
Supervised machine learning relies on the accurate description of this energy flow using parton shower Monte Carlo generators.
There is currently a push to extend these to incorporate higher-order effects into shower generators, from a variety of different directions~\cite{Li:2016yez,Hoche:2017hno,Hoche:2017iem,Dulat:2018vuy,Gellersen:2021eci,Hamilton:2020rcu,Dasgupta:2020fwr,Hamilton:2021dyz,Karlberg:2021kwr}.
Having an analytic understanding of properties of the radiation flow will prove important for this goal.
Indeed, the understanding of the three-point correlator in the collinear limit \cite{Chen:2020adz} has already been useful for verifying the incorporation of spin effects \cite{Hamilton:2021dyz,Karlberg:2021kwr}.
Finally, jet substructure has begun to play an important role in the study of heavy-ion collisions \cite{Andrews:2018jcm,Cunqueiro:2021wls}.
Much like in the case of inflation, the three-point correlator provides a wealth of information potentially allowing one to distinguish different mechanisms of medium modification.
All of these motivate pushing our understanding of radiation patterns within jets to the level of three-point correlations.
An outline of this paper is as follows.
In \Sec{sec:NG}, we introduce our definition of ``celestial non-gaussianity'' and describe its basic features and theoretical motivation.
We study its basic theoretical properties in \Sec{sec:NG_analytic}, using both perturbative fixed-order results and parton shower generators.
In \Sec{sec:open_data}, we analyze the celestial non-gaussianity using CMS Open Data, and compare with our analytic results.
We conclude in \Sec{sec:conc}.
Additional technical details related to both the theory and data analysis are provided in the Appendices.
\section{Celestial Non-Gaussianities}\label{sec:NG}
A main point of this paper is to introduce a robust definition of non-gaussianity for asymptotic energy flow, which we term ``celestial non-gaussianity''.
This is inspired by the definition of non-gaussianities often used in the study of condensed matter systems, conformal field theory, and cosmology.
Taking as an example the four-point correlator of a local operator $\sigma$ in a CFT, one traditionally defines the non-gaussianity by dividing the four-point correlator by the sum of Wick contractions:
\begin{align} \label{eq:Ising_NG_def}
Q_\sigma(1,2,3,4)=\frac{\langle \sigma_1 \sigma_2 \sigma_3 \sigma_4 \rangle}{\langle \sigma_1 \sigma_2 \rangle \langle \sigma_3 \sigma_4 \rangle + \langle \sigma_1 \sigma_3 \rangle \langle \sigma_2 \sigma_4 \rangle + \langle \sigma_1 \sigma_4 \rangle \langle \sigma_2 \sigma_3 \rangle}\,.
\end{align}
In the case of a Gaussian (free) theory, $Q_\sigma=1$.
The non-gaussianity $Q_\sigma$ is known exactly in the 2d Ising model~\cite{Belavin:1984vu}, and it has recently been studied in the 3d Ising model in \Ref{Rychkov:2016mrc} using state-of-the-art data \cite{Kos:2016ysd,Komargodski:2016auf,Simmons-Duffin:2016wlq} from the conformal bootstrap program \cite{Rattazzi:2008pe,El-Showk:2012cjh,El-Showk:2014dwa,Poland:2018epd}.
There, it was found that the non-gaussianity of the 3d Ising model is strongly peaked for an equilateral configuration.
Similar measures are used in cosmology \cite{Baumann:2009ds}.
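As a quick sanity check (ours, not taken from the references), the statement that $Q_\sigma=1$ in a Gaussian theory follows from Wick's theorem and can be verified by Monte Carlo: sample four jointly Gaussian zero-mean variables standing in for the field values $\sigma_i$ (the covariance matrix below is an arbitrary positive-definite choice) and compare the four-point function to the sum of Wick contractions.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary positive-definite covariance for four zero-mean Gaussians,
# standing in for the field values sigma_1 ... sigma_4.
C = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
s = rng.multivariate_normal(np.zeros(4), C, size=400_000)

# Four-point function versus the sum of Wick contractions in the denominator
# of the Q_sigma ratio.
four_pt = np.mean(s[:, 0] * s[:, 1] * s[:, 2] * s[:, 3])
wick = (np.mean(s[:, 0] * s[:, 1]) * np.mean(s[:, 2] * s[:, 3])
        + np.mean(s[:, 0] * s[:, 2]) * np.mean(s[:, 1] * s[:, 3])
        + np.mean(s[:, 0] * s[:, 3]) * np.mean(s[:, 1] * s[:, 2]))
Q_sigma = four_pt / wick  # approaches 1 for a Gaussian (free) theory
```

Any deviation of $Q_\sigma$ from unity in an interacting theory then directly measures the connected part of the four-point function.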
Here, we will introduce a similar quantity for the asymptotic energy flux, which is both experimentally robust and insensitive to hadronization effects.
\subsection{Definition}\label{sec:def}
In the collinear limit, the first correlator with a non-trivial shape dependence is the three-point correlator (EEEC):
\begin{equation}
\langle\mathcal{E}(\vec{n}_1)\, \mathcal{E}(\vec{n}_2)\, \mathcal{E}(\vec{n}_3)\rangle,
\end{equation}
as illustrated in \Fig{fig:intro_b}.
Much like the four-point correlator of local operators discussed above, the three-point correlator contains in it iterations of two-point correlations.
In the case of asymptotic energy flux, these are interpreted physically as iterated $1\rightarrow 2$ splittings, as implemented in standard parton shower generators.
For a recent discussion in terms of the $1\rightarrow 3$ splitting function, see \Ref{Braun-White:2022rtg}.
These iterations give rise to the leading singular behavior in the squeezed/OPE limit, where the three-point correlator factorizes into a product of two-point correlators~\cite{Chen:2021gdk,Chen:2020adz}.
To derive an appropriate non-gaussianity for asymptotic energy flux $Q_\mathcal{E}$, one therefore wants to remove these contributions from the three-point correlator, defining an observable such that $Q_\mathcal{E}\rightarrow \text{const}$ in the squeezed limits.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{figures/NG_schematic.pdf}
\end{center}
\caption{The celestial non-gaussianity $Q_{\mathcal{E}}(\vec n_1, \vec n_2, \vec n_3)$, as defined in \Eq{eq:NG_def} as a ratio of correlators. }
\label{fig:NG_def}
\end{figure}
At a hadron collider, the three-point correlator is expressed in terms of the boost-invariant angles $R_{ij}=\sqrt{\Delta y_{ij}^2 + \Delta \phi_{ij}^2}$ on the rapidity-azimuth cylinder.
Without loss of generality, we assume the angles are ordered as $R_{12} < R_{23} < R_{13}$. We then define the ``celestial non-gaussianity''%
\footnote{We use the word ``non-gaussianity'', since in the squeezed limit, the three-point correlator reduces to a product of two-point correlators. More generally, in squeezed limits of higher-point functions, they reduce to products of two-point functions. A non-standard feature is that these two-point functions involve higher powers $\mathcal{E}^n$ of the energy flow operator. In this sense, it is different from a standard non-gaussianity as in \Eq{eq:Ising_NG_def}, where the fields are strictly gaussian (free). However, we believe that \Eq{eq:NG_def} is the appropriate generalization for the case of asymptotic energy flow.} as the ratio
\begin{align}\label{eq:NG_def}
\boxed{Q_{\mathcal{E}}(\vec n_1, \vec n_2, \vec n_3)=\frac{\langle\mathcal{E}(\vec{n}_1)\, \mathcal{E}(\vec{n}_2)\, \mathcal{E}(\vec{n}_3)\rangle_{\Psi} ~ \langle\mathcal{E}^2(\vec{n}_1)\rangle_{\Psi} }
{\langle\mathcal{E}(\vec{n}_1)\, \mathcal{E}(\vec{n}_2)\rangle_{\Psi} ~ \langle\mathcal{E}^2(\vec{n}_1)\, \mathcal{E}(\vec{n}_3)\rangle_{\Psi} }\,. }
\end{align}
Here, we have divided the three-point correlator by a product of two two-point correlators, where $\mathcal{E}^m(\vec{n})$ denotes the measurement of the $m$-th power of the energy flow along direction $\vec{n}$.
Precise definitions of all observables appearing in the definition of the celestial non-gaussianity are provided in \App{sec:def_app}, and for our later studies, individual energy correlators are computed using \Ref{EEC_github}.%
\footnote{We are indebted to Patrick Komiske for creating and maintaining this software package.}
We have made the state dependence explicit, where $\Psi$ denotes a generic state in which the energy correlators are evaluated.
This ratio is illustrated in \Fig{fig:NG_def}.
One can think of the terms in the denominator, which reproduce the squeezed limit through a product of two-point correlators, as being a form of Wick contraction.
Although the motivation for introducing $Q_\mathcal{E}$ was to isolate the non-gaussianity, we find that it also turns out to be quite robust to hadronization, capturing in a clean way the perturbative structure of the three-point correlator.
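As an illustrative sketch (our own, not the software package actually used for the analysis), the raw ingredients of the correlators entering \Eq{eq:NG_def} can be accumulated by enumerating particle triples within a jet; here particles are represented as hypothetical $(p_T, y, \phi)$ tuples, with the standard transverse-momentum weighting.

```python
import itertools
import math

def cylinder_angle(p1, p2):
    """Boost-invariant angle R_ij = sqrt(dy^2 + dphi^2) on the
    rapidity-azimuth cylinder; particles are (pt, y, phi) tuples."""
    dy = p1[1] - p2[1]
    dphi = abs(p1[2] - p2[2])
    dphi = min(dphi, 2.0 * math.pi - dphi)  # wrap the azimuth
    return math.hypot(dy, dphi)

def eeec_contributions(particles, jet_pt):
    """Contributions of one jet to the three-point correlator: a list of
    (weight, R_L, R_M, R_S) with R_S < R_M < R_L and pt^3 energy weighting."""
    rows = []
    for a, b, c in itertools.combinations(particles, 3):
        w = (a[0] * b[0] * c[0]) / jet_pt**3
        r_s, r_m, r_l = sorted(
            (cylinder_angle(a, b), cylinder_angle(b, c), cylinder_angle(a, c)))
        rows.append((w, r_l, r_m, r_s))
    return rows
```

The two-point and $\langle\mathcal{E}^2\mathcal{E}\rangle$ denominators would be accumulated analogously over particle pairs, with the appropriate powers of $p_T$ in the weight.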
For simplicity, in this paper, we consider the simplest case where the state $\Psi$ is unpolarized, which we will denote by $\Phi$ to distinguish it from the more general case.
(For a detailed discussion of energy correlators in polarized states, see e.g.~\cite{Chang:2020qpj}).
In this case, the energy correlators only depend on the relative angles between the $\vec{n}_i$, which we order as $R_{12} < R_{23} < R_{13}$.
To study the celestial non-gaussianity experimentally, it is convenient to map the region over which it is defined into a square so that it can be binned in a simple manner.
A mapping allowing this was introduced in \Ref{Komiske:2022enw}, which we follow here.
To simplify the notation, we define the long, medium, and small sides of the correlator by $(R_L, R_M, R_S)$.
%
We then change to the coordinates $(R_L, \xi, \phi)$, where
\begin{equation}
\xi=\frac{R_S}{R_M} \,, \qquad \phi=\mathrm{sgn} (\tau) \, \arcsin \sqrt{1 - \frac{(R_L-R_M)^2}{R_S^2}}\,.
\label{eq:transf}
\end{equation}
Here $\mathrm{sgn}(\tau)$ is the sign of the determinant $\tau = \mathrm{det}(\vec{n}_3, \vec{n}_2, \vec{n}_1)$, characterizing whether the ordering of $(R_S, R_M, R_L)$ is clockwise or counter-clockwise on the celestial sphere. With this choice of coordinates, $R_L$ is used to characterize the overall size of the correlator, and $(\xi, \phi)$ are used to characterize its shape. Since the primary focus of this paper is on the shape dependence, plots in this paper have a fixed $R_L$. Detailed studies of the scaling behavior in $R_L$ were presented in \Ref{Komiske:2022enw}.
The $(\xi, \phi)$ coordinates in \Eq{eq:transf} blow up the OPE region into a line, with $\xi \in (0,1)$ acting as a radial coordinate about the OPE limit and $\phi \in (-\pi/2, \pi/2)$ as an azimuthal coordinate.
In QCD, this observable is $\mathbb{Z}_2$-symmetric under $\phi \rightarrow -\phi$, and hence we restrict ourselves to the region $\phi \in (0, \pi/2)$.
A detailed discussion of the experimental implementation, under the $\mathbb{Z}_2$-symmetric assumption, is provided in \App{sec:algorithm}.
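Concretely, the map of \Eq{eq:transf} takes only a few lines; this is a minimal sketch (the function name and interface are ours), with the orientation sign $\mathrm{sgn}(\tau)$ passed in rather than computed from the directions $\vec{n}_i$.

```python
import math

def shape_coordinates(r1, r2, r3, orientation=1):
    """Map the three pairwise angles of a correlator triangle to the
    (xi, phi) shape coordinates; `orientation` stands in for sgn(tau)."""
    r_s, r_m, r_l = sorted((r1, r2, r3))     # small, medium, large sides
    xi = r_s / r_m                           # radial coordinate; xi -> 0 is squeezed
    arg = 1.0 - (r_l - r_m) ** 2 / r_s ** 2
    phi = orientation * math.asin(math.sqrt(max(arg, 0.0)))  # clip rounding errors
    return xi, phi

xi_eq, phi_eq = shape_coordinates(0.1, 0.1, 0.1)     # equilateral: xi = 1, phi = pi/2
xi_fl, phi_fl = shape_coordinates(0.15, 0.05, 0.1)   # flattened: R_L = R_M + R_S, phi = 0
xi_sq, phi_sq = shape_coordinates(1e-4, 0.1, 0.1)    # squeezed: xi -> 0
```

The three example calls reproduce the limiting shapes of \Fig{fig:shapes}.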
In addition to the full shape dependence of the celestial non-gaussianity, we also consider an azimuthally averaged version:
\begin{equation}
\langle Q_\mathcal{E} \rangle_\phi (\xi) =\frac{
\int_{R_{12}<R_{23}<R_{13}} d\vec{n}_1 \, d\vec{n}_2 \, d\vec{n}_3 \, \delta(\xi - R_{12}/R_{23}) ~ \langle\mathcal{E}(\vec{n}_1)\, \mathcal{E}(\vec{n}_2)\, \mathcal{E}(\vec{n}_3)\rangle_\Phi ~ \langle\mathcal{E}^2(\vec{n}_1)\rangle_\Phi}
{\int_{R_{12}<R_{23}<R_{13}} d\vec{n}_1 \, d\vec{n}_2 \, d\vec{n}_3 \, \delta(\xi - R_{12}/R_{23}) ~ \langle\mathcal{E}(\vec{n}_1)\, \mathcal{E}(\vec{n}_2)\rangle_\Phi ~ \langle\mathcal{E}^2(\vec{n}_1)\, \mathcal{E}(\vec{n}_3)\rangle_\Phi}\,.
\label{eq:azimuthal_average}
\end{equation}
Being a one-dimensional distribution, this projection is easier to visualize than the full shape dependence; however, it does not allow us to study the shape of the non-gaussianities.
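A hypothetical binning sketch for \Eq{eq:azimuthal_average} (names and interface are ours): the important point is that the numerator and denominator contributions are histogrammed in $\xi$ separately and divided bin by bin, rather than averaging $Q_\mathcal{E}$ itself.

```python
import numpy as np

def azimuthal_average(num_xi, num_w, den_xi, den_w, nbins=25):
    """Bin-by-bin ratio in xi of numerator and denominator contributions,
    as in the azimuthally averaged celestial non-gaussianity."""
    edges = np.linspace(0.0, 1.0, nbins + 1)
    num, _ = np.histogram(num_xi, bins=edges, weights=num_w)
    den, _ = np.histogram(den_xi, bins=edges, weights=den_w)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Avoid division by zero in empty bins.
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return centers, ratio
```

Taking the ratio of binned sums, rather than binning the pointwise ratio, keeps the estimator well defined in sparsely populated regions of the squeezed limit.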
\subsection{Theoretical Motivation} \label{sec:NG_motivation}
At first sight, the definition of the celestial non-gaussianity $Q_{\mathcal{E}}$ in \Eq{eq:NG_def} may not look straightforward, nor does it have the same form as the non-gaussianity $Q_\sigma$ of \Eq{eq:Ising_NG_def}.
We now describe in more detail the theoretical motivation for this definition in terms of the factorization of the three-point correlator in the squeezed limits.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{figures/sequential.pdf}
\caption{An illustration of the squeezed limit of a $1 \rightarrow 3$ splitting from iterated $1 \rightarrow 2$ splittings. }
\label{fig:sequential}
\end{center}
\end{figure}
An intuitive motivation for \Eq{eq:NG_def} comes from momentum-space particle splitting.
The EEEC in the squeezed limit is dominated by iterated $1 \rightarrow 2$ splittings.
At leading order in QCD, it requires two iterated $1 \rightarrow 2$ splittings, as shown schematically in \Fig{fig:sequential}, where the first branch has momentum fractions $x$ and $1-x$, while the second branch has momentum fractions $z$ and $1-z$.
The EEEC is then obtained by weighting the cross section with the product of the final-state particle energies, $x^2 (1-x) z (1-z) E^3$.
This motivates us to put in the denominator of the ratio \Eq{eq:NG_def} a $\langle \mathcal{E} \mathcal{E} \rangle$ for the second splitting, which is weighted by $z (1 - z) E^2$, and a $\langle \mathcal{E}^2 \mathcal{E} \rangle$ for the first splitting, which is weighted by $x^2 (1-x) E^3$.
While this choice of denominator mimics the momentum fraction structure in the numerator, it has two issues: (a) the power of $E$ is not balanced between numerator and denominator; and (b) $\langle \mathcal{E}^2 \mathcal{E} \rangle$ is not collinear safe since it is not linear in particle energies.
Interestingly, both issues can be overcome if we put a one-point $\langle \mathcal{E}^2 \rangle$ function in the numerator.
While $\langle \mathcal{E}^2 \rangle$ by itself is also collinear unsafe, it nicely cancels the collinear unsafety of $\langle \mathcal{E}^2 \mathcal{E} \rangle$ in the denominator, in the sense that the uncancelled collinear divergences in perturbation theory for the one-point function $\langle \mathcal{E}^2 \rangle$ and the two-point correlator $\langle \mathcal{E}^2 \mathcal{E} \rangle$ are the same.
When there are multiple flavors, this cancellation is not exact, so the observable is, strictly speaking, collinear unsafe. Nevertheless, it has been observed in the study of track functions that the low moments of track/fragmentation functions, as they appear in the calculation of the celestial non-gaussianity, are numerically very similar \cite{Jaarsma:2022kdd}. Therefore, while the celestial non-gaussianity is not technically collinear safe, we will see that it is numerically insensitive to hadronization. We thereby achieve an effectively collinear-safe ratio observable constructed from collinear-unsafe ingredients. Furthermore, it can be systematically computed in perturbation theory using the techniques of \Ref{Li:2021zcf}.
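The power counting of issue (a) above can be made explicit with a trivial numerical check (variable names are ours): including the one-point $\langle\mathcal{E}^2\rangle$ weight $E^2$ in the numerator exactly balances the powers of $E$ against the product of the two denominator weights for a single iterated splitting.

```python
def weight_balance(x, z, E):
    """Energy weights for an iterated 1 -> 2 splitting with momentum
    fractions x (first branch) and z (second branch)."""
    eeec_weight = x**2 * (1 - x) * z * (1 - z) * E**3  # <EEE> weight
    one_point   = E**2                                 # <E^2> weight
    ee_weight   = z * (1 - z) * E**2                   # <EE> weight
    e2e_weight  = x**2 * (1 - x) * E**3                # <E^2 E> weight
    return eeec_weight * one_point, ee_weight * e2e_weight

num, den = weight_balance(0.3, 0.7, 100.0)  # numerator and denominator agree
```

This bookkeeping is why $Q_\mathcal{E}$ asymptotes to a constant in the squeezed limit; the residual nontrivial structure comes from the matrix-valued OPE coefficients discussed below.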
We believe that this construction can be generalized to more general energy weightings.
The above argument can be made rigorous using the leading-power perturbative light-ray OPE for the squeezed limit of the EEEC in QCD~\cite{Chen:2020adz,Chen:2021gdk}:
\begin{equation}
\langle\mathcal{E}(\vec{n}_1) \, \mathcal{E}(\vec{n}_2) \, \mathcal{E}(\vec{n}_3)\rangle_\Phi^{\mathrm{LP,\, LO}}
=\frac{1}{(2\pi)^2} \frac{2}{R_S^2}\frac{2}{R_L^2} {\mathcal{J}}\cdot {C}_{R_S}^{(1,1)}
\cdot {C}_{R_L}^{(1,2)}
\cdot \mathcal{S}_\Phi\,,
\end{equation}
where the details of the notation are explained in \App{sec:resum_formula}.
Roughly speaking, $\mathcal{J}$ and $\mathcal{S}_\Phi$ can be considered as projection vectors to select the correct matrix components.
The more interesting ingredients are the two OPE coefficient matrices $C_{R_S}^{(1,1)}$ and $C_{R_L}^{(1,2)}$ that respectively appear in the LO OPE for $\langle\mathcal{E}(\vec{n}_1)\mathcal{E}(\vec{n}_2)\rangle_\Phi^{\mathrm{LP,\, LO}}$ and $\langle\mathcal{E}^2(\vec{n}_1)\mathcal{E}(\vec{n}_3)\rangle_\Phi^{\mathrm{LP,\, LO}}$:
\begin{align}
\langle\mathcal{E}(\vec{n}_1)\mathcal{E}(\vec{n}_2)\rangle_\Phi^{\mathrm{LP,\, LO}}
&= -\frac{1}{2\pi} \frac{2}{R_S^2} \mathcal{J} \cdot C_{R_S}^{(1,1)} \cdot \mathcal{S}_\Phi\,, \\
\langle\mathcal{E}^2(\vec{n}_1)\mathcal{E}(\vec{n}_3)\rangle_\Phi^{\mathrm{LP,\, LO}}
&= -\frac{1}{2\pi} \frac{2}{R_L^2} \mathcal{J} \cdot C_{R_L}^{(1,2)} \cdot \mathcal{S}_\Phi\,.
\end{align}
This formula implies that, neglecting the matrix multiplication, the three-point function factorizes into the product of two two-point functions with energy weightings $(1,1)$ and $(1,2)$.
\section{Theoretical Properties}
\label{sec:NG_analytic}
While our ultimate goal is to study $Q_{\mathcal{E}}$ in actual collider data, we begin by studying its properties using perturbative calculations as well as comparisons to parton showers.
In addition to gaining some intuition for its behavior, it is crucial to demonstrate that this observable is under perturbative control for jet energies accessible at the LHC.
We first consider the properties of the azimuthally averaged $\langle Q_\mathcal{E} \rangle_\phi$, highlighting its general behavior as well as how it can be understood in terms of celestial blocks.
We then consider the behavior of the full shape dependence of $Q_\mathcal{E}$.
Throughout this section, we use the analytic results for the three-point correlator from \Ref{Chen:2019bpb}, as well as resummed results for the squeezed limits derived and discussed in detail in \App{sec:resum_formula}.
These factorized expressions can be embedded into jets at the LHC using the fragmenting jets formalism \cite{Procura:2009vm,Kang:2016ehg,Kang:2016mcy}, though we will not discuss this detail here.
\subsection{Azimuthal Averaging}
\label{sec:blocks}
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/gluon_5TeV_logxi_plot_parton_vs_hadron_4}\label{fig:azi_5TeV_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/gluon_5TeV_logxi_plot_parton_vs_hadron_1}\label{fig:azi_5TeV_b}
}\qquad
\end{center}
\caption{
The azimuthally averaged celestial non-gaussianity for 5 TeV gluon jets for (a) $R_L = 0.15$ and (b) $R_L = 0.38$.
We observe minimal corrections from hadronization and excellent perturbative control.
In the squeezed limit of $\xi \rightarrow 0$, $\langle Q \rangle_\phi (\xi)$ asymptotes to a constant, illustrating the correct construction of the celestial non-gaussianity ratio.
The celestial non-gaussianity increases monotonically away from the squeezed limit.}
\label{fig:azi_5TeV}
\end{figure}
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/quark_LL_logxi_1.pdf}\label{fig:azi_500GeV_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/gluon_LL_logxi_1.pdf}\label{fig:azi_500GeV_b}
}\\
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/quark_LL_xi_1.pdf}\label{fig:azi_500GeV_c}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/gluon_LL_xi_1.pdf}\label{fig:azi_500GeV_d}
}
\end{center}
\caption{
The azimuthally averaged celestial non-gaussianity for 500 GeV quark jets (left column) and gluon jets (right column).
The logarithmic (top row) and linear (bottom row) scalings respectively highlight the squeezed ($\xi\rightarrow 0$) and perturbative ($\xi\rightarrow 1$) limits.
The green error bands are from scale variation.
}
\label{fig:azi_500GeV}
\end{figure}
For simplicity, we start with the azimuthally-averaged non-gaussianity $\langle Q \rangle_\phi$, defined in \Eq{eq:azimuthal_average}.
We want to verify that the basic features of the celestial non-gaussianity are satisfied, namely that it flattens to a constant in the squeezed limit and that it is insensitive to hadronization.
We begin by considering jets at extremely high energies, namely 5 TeV anti-$k_T$ \cite{Cacciari:2008gp}, $R=0.5$ gluon jets simulated using \textsc{Pythia} 8.305 \cite{Sjostrand:2014zea,Sjostrand:2007gs} and clustered using \textsc{FastJet} 3.4.0 \cite{Cacciari:2011ma}, where these properties should be maximally apparent.
In \Fig{fig:azi_5TeV}, we show results for $\langle Q \rangle_\phi$ for two different values of $R_L$ at both parton level and hadron level.
We see clearly that the desired features of a non-gaussianity are indeed reproduced.
First, $\langle Q \rangle_\phi$ asymptotes to a constant (i.e.~becomes $\xi$ independent) in the squeezed limit of $\xi\rightarrow 0$, where the three-point correlator factorizes into a product of two-point correlators.
Second, despite the observable being infrared and collinear unsafe, hadronization corrections are small, illustrating that the non-gaussianity is determined dominantly by perturbative physics.
Finally, we see a sharp rise in the non-gaussianity as $\xi\rightarrow 1$, showing that the three-point correlator deviates strongly from a product of two two-point correlators away from the squeezed limit.
For any fixed jet energy, non-perturbative effects become larger at smaller $R_L$, as illustrated by comparing the results for $R_L=0.15$ and $R_L=0.38$.
When showing results in future sections, we will choose the largest value of $R_L$ possible for which we are unaffected by the jet boundary \cite{Komiske:2022enw}.
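The azimuthal average itself is simple to illustrate numerically. In the sketch below, the toy shape function `Q_toy` and the uniform sampling of $\phi$ over $[0,\pi)$ are assumptions made purely for illustration, not the definition used in \Eq{eq:azimuthal_average}:

```python
import math

def azimuthal_average(Q, xi, n_phi=200):
    """Average Q(xi, phi) over phi at fixed xi.
    Uniform weighting over [0, pi) is an assumption of this sketch."""
    phis = [math.pi * k / n_phi for k in range(n_phi)]
    return sum(Q(xi, p) for p in phis) / n_phi

def Q_toy(xi, phi):
    """Toy shape with a mild phi modulation (purely illustrative)."""
    return 1.0 + xi * (1.0 + 0.1 * math.cos(2 * phi))

# The cos(2*phi) modulation averages away, leaving 1 + xi.
print(azimuthal_average(Q_toy, 0.5))  # approximately 1.5
```

Any genuine $\phi$ modulation (e.g.~from transverse-spin blocks) averages to zero here, which is why $\langle Q \rangle_\phi$ isolates the spin-0 content.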
We now consider the more realistic energy of 500 GeV, which is the scale accessible with the CMS Open Data.
Results for both quark and gluon jets are shown in \Fig{fig:azi_500GeV}.
Since this is our ultimate energy of interest, we also show analytic results at leading order (LO) and with leading-logarithmic (LL) resummation.
In the squeezed limit ($\xi \rightarrow 0$, highlighted with the logarithmic axis), the non-gaussianity asymptotes to a constant, although not quite as clearly as at 5 TeV, and has minimal non-perturbative corrections.
In the perturbative regime ($\xi\rightarrow 1$, highlighted with the linear axis), the analytic results provide a good description of the parton shower behavior.
The error bands in green are the result of scale variation, and are quite large since we have only considered the LL result.
While any comparison between physically distinct systems comes with numerous caveats, it is amusing to compare the level of non-gaussianity in high-energy jets to that found in the 2d and 3d Ising models.
In the Ising model, the non-gaussianity is minimized at $Q_{\text{min}}=1/\sqrt{6} \simeq 0.408$ in two dimensions \cite{Belavin:1984vu} and $Q_{\text{min}}\simeq 0.683$ in three dimensions \cite{Rychkov:2016mrc}.
Interestingly, the fractional variation in $Q$, by approximately a factor of $2$ (slightly larger in 2d, slightly smaller in 3d), is qualitatively similar to the QCD results shown in \Figs{fig:azi_5TeV}{fig:azi_500GeV}.
This provides some intuition for the amount of celestial non-gaussianity in the QCD energy flux, and also suggests that our measure is reasonable.
We note also that Hofman and Maldacena found that at strong coupling, the three-point correlator of energy flow operators was also highly non-gaussian \cite{Hofman:2008ar}.
\subsection{Celestial Block Structure}
It is intuitive that the level of non-gaussianity increases as $\xi$ moves away from the squeezed $\xi \rightarrow 0$ region, since genuine $1 \rightarrow 3$ effects start to dominate over iterated $1 \rightarrow 2$ splittings.
It is interesting to understand if this intuition is a generic feature.
We now present an argument that the monotonically increasing behavior of the non-gaussianity in $\xi$ on the line $\phi=0$, which corresponds to the flattened triangle configuration, is a consequence of Lorentz symmetry and the average null energy condition (ANEC) \cite{Tipler:1978zz,Klinkhammer:1991ki,Wald:1991xn,Hofman:2008ar,Faulkner:2016mzt,Hartman:2016lgu}.
We will later show that the flattened triangle configuration dominates the shape of the non-gaussianity when there are poles associated with the massless propagators of the particles initiating the jets, and therefore the behavior in this region dominates the angular averaged non-gaussianity.
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/block_quark.pdf}\label{fig:block_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/block_gluon.pdf}\label{fig:block_b}
}\qquad
\end{center}
\caption{
The celestial block expansion of the leading order result for the azimuthally averaged $\langle Q \rangle_\phi (\xi)$ for (a) quark jets and (b) gluon jets.
At LO, the result is independent of $R_L$.
The twist-4 expansion describes the dominant features of the non-gaussianity. The positivity of the non-gaussianity can then be derived from the properties of the celestial block expansion.}
\label{fig:block_expand}
\end{figure}
To understand the monotonicity of the non-gaussianity, we must first understand how the shape of the three-point correlator is constrained by the symmetries of the Lorentz group.
It was shown in \Refs{Chang:2022ryc,Chen:2022jhb} that the three-point correlation function can be expanded in a basis of appropriate partial waves of definite quantum numbers under the Lorentz group, which are referred to as celestial blocks.
This allows us to express the LO results for the quark and gluon three-point correlators as
\begin{equation}\label{eq:cb_expand}
G_{q/g} = \sum_{\delta,\,j} c_{\delta,j}^{(q/g)} G_{\delta,\,j}(\xi,\phi)\,,
\end{equation}
where the sum is over celestial quantum numbers $(\delta, j)$, which describe respectively the quantum number under boosts ($\delta$) and the transverse spin ($j$), $c_{\delta,j}^{(q/g)}$ are coefficients encoding the dynamics of the theory, and $G_{\delta,\,j}(\xi,\phi)$ are the celestial partial waves.
More details can be found in \Refs{Chang:2022ryc,Chen:2022jhb}, and a short review is given in \App{sec:resum_formula}.
The celestial block expansion is an expansion about the squeezed limit.
In \Fig{fig:block_expand}, we show the celestial block expansion of $\langle Q_{\mathcal{E}}\rangle_\phi$, including the leading twist-2 contribution and the twist-4 expansion.%
\footnote{
Intermediate states of light-ray OPEs are light transforms of twist operators $O_{\Delta, J}$ with dimension $\Delta$ and collinear spin $J$, where the twist is defined as $\Delta - J$. Similar to the twist expansion in deep inelastic scattering, the light-ray OPE twist expansion provides a power expansion for timelike parton fragmentation. The expansion is systematic in a conformal field theory, while in perturbative QCD it receives corrections from the running coupling. }
At LO, the result is independent of $R_L$. Here, we see that the block expansion converges well. In our coordinates, the leading block (twist-2, transverse spin $j=0$) has the form
\begin{align}
G_{4,0}(\xi, \phi=0)=\left[_2F_1\left(1,2,4,\,\frac{\xi}{\xi+1}\right)\right]^2\,.
\end{align}
From the series representation $_2F_1(a,b,c,z) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{n=0}^{\infty} \frac{\Gamma(n+a) \Gamma(n+b)}{\Gamma(n+c)} \frac{z^n}{n!}$, whose coefficients are all positive for $a,b,c>0$, we see that $_2F_1(a,b,c,z)$ is monotonically increasing on its domain of convergence $z\in (0,1)$.
This structure is fixed by Lorentz invariance.
The coefficient of the leading block is positive by the ANEC, and hence this guarantees the growth of the non-gaussianity (at least in some small window) as one increases $\xi$ away from $0$.
Due to the near cancellation between bosonic and fermionic contributions, the leading twist transverse spin-2 block gives a negligible contribution, and so we do not consider it \cite{Chen:2020adz,Chen:2021gdk}.
The twist-4 transverse spin-0 and transverse spin-2 celestial blocks take the form of similar hypergeometric functions and are therefore also monotonic.
Unlike for the OPE of local operators, the positivity of OPE coefficients in the light-ray OPE is less well understood.
However, we again find that the OPE coefficients for the twist-4 operators are positive; it would be interesting to understand this better.\footnote{Starting at twist-6, the OPE coefficients in the EEEC are no longer found to be positive, which was identified as arising from soft (zero-mode) contributions \cite{Chen:2022jhb}. Again, it would be interesting to understand this issue better.}
Therefore, while the leading twist-2 contribution guarantees the increase of $\langle Q_{\mathcal{E}}\rangle_\phi$ in some small window of the squeezed limit, this extends to the whole range due to the positivity of the higher twist OPE coefficients.
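The monotonicity of the leading block can be cross-checked numerically. For $(a,b,c)=(1,2,4)$ the series coefficient simplifies to $6/((n+2)(n+3))$, so a pure-Python evaluation is straightforward; the truncation at 200 terms is the only assumption of this sketch:

```python
def hyp2f1_124(z, n_terms=200):
    """Series for 2F1(1,2;4;z): the n-th term simplifies to
    6 z^n / ((n+2)(n+3)), convergent for |z| < 1."""
    return sum(6.0 * z**n / ((n + 2) * (n + 3)) for n in range(n_terms))

def G40(xi):
    """Leading twist-2, transverse spin-0 celestial block at phi = 0,
    as quoted in the text: [2F1(1,2;4; xi/(xi+1))]^2."""
    return hyp2f1_124(xi / (xi + 1.0)) ** 2

# G40 increases monotonically as xi moves away from the squeezed limit.
values = [G40(0.01 + 0.98 * k / 50) for k in range(51)]
assert all(a < b for a, b in zip(values, values[1:]))
```

Since every series coefficient is positive and $z(\xi)=\xi/(\xi+1)$ is itself increasing, the monotonicity in $\xi$ follows term by term.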
\subsection{Shapes of Non-Gaussianities}
\label{sec:shape_explain}
\begin{figure}[t]
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/Quark_LO_3Dplot_mesh.pdf}\label{fig:pert_shape_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/Gluon_LO_3Dplot_mesh.pdf}\label{fig:pert_shape_b}
}%
\end{center}
\caption{
The full shape dependence of the celestial non-gaussianity, $Q_\mathcal{E}$, in the leading order perturbative calculations for (a) quark jets and (b) gluon jets. The non-gaussianity is strongly peaked in the ``flattened triangle'' region, illustrating the presence of the propagator associated with the highly boosted quark or gluon state producing the jet.}
\label{fig:pert_shape}
\end{figure}
The most exciting aspect of the celestial non-gaussianity is that we can study its full shape dependence, much like what is done for cosmological correlators.
In \Fig{fig:pert_shape}, we show the shape dependence of the celestial non-gaussianity for both quark and gluon jets using the leading order calculation of \Ref{Chen:2019bpb}.
In both cases we find that it is peaked in the ``flattened triangle'' region (illustrated in \Fig{fig:shapes_c}).
Furthermore, we see that the overall shape of the non-gaussianity is quite similar for both quark and gluon jets.
As we will see shortly, this is due to the fact that in both cases it is dominated by the leading twist celestial block, whose form is fixed by Lorentz symmetry.
While the shape is quite similar, the non-gaussianity is larger for quark jets than for gluon jets.
Many jet substructure observables scale simply with the Casimir factors, but this is not true for the non-gaussianities.
For example, for a three-point splitting function that is the exact iteration of the $1\rightarrow 2$ splitting functions, the celestial non-gaussianity is constructed to be unity, independent of the color factors.
The fact that the non-gaussianity is larger for quark jets illustrates a non-trivial feature of the $1\rightarrow 3$ splitting functions, namely that they are further from iterated $1\rightarrow 2$ splittings for quark jets than for gluon jets.
It would be interesting to understand this more intuitively.
\begin{figure}[t]
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/toy_s123_3Dplot_mesh.pdf}\label{fig:toy_a}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/toy_INVs123_3Dplot_mesh.pdf}\label{fig:toy_b}
}
\end{center}
\caption{
A comparison of the shape dependence of the celestial non-gaussianity for toy functions: (a) $s_{123}$ and (b) $1/s_{123}$. We see that the function $s_{123}$ gives rise to a non-gaussianity that is peaked in the equilateral region, whereas the function $1/s_{123}$ gives rise to a non-gaussianity peaked in the flattened triangle region. }
\label{fig:toy}
\end{figure}
It is enlightening to compare the shape of the celestial non-gaussianity with non-gaussianities studied in other systems.
In the 2d and 3d Ising models, the non-gaussianity is peaked in equilateral triangle configurations \cite{Rychkov:2016mrc}, making it quite distinct from the case of celestial non-gaussianities.
The ``shapes of non-gaussianities'' have been extensively studied for cosmological correlators, where one can find models of inflation that give rise to enhanced non-gaussianities for a wide variety of shapes; see e.g.~\cite{Babich:2004gb,Chen:2006nt} or more recently for the tensor case \cite{Cabass:2021fnw}.
For example, ghost inflation \cite{Arkani-Hamed:2003juy}, DBI \cite{Alishahiha:2004eh,Silverstein:2003hf}, and general higher-derivative operators have non-gaussianities peaked in the equilateral region \cite{Babich:2004gb,Chen:2006nt,Baumann:2011su}; solid inflation is peaked in the squeezed limit \cite{Endlich:2012pz}; and excited initial states lead to enhanced non-gaussianities in the flattened limit \cite{Chen:2006nt,Holman:2007na,Meerburg:2009ys,LopezNacir:2011kk,Flauger:2013hra,Green:2020whw,Green:2022fwg}.
This last example has been studied in detail recently, with the aim of using the absence of an enhanced non-gaussianity in the flattened limit to prove the quantum nature of cosmological fluctuations \cite{Green:2020whw,Green:2022fwg}.
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/G40_3Dplot_mesh.pdf}\label{fig:G40_3d}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/G42_3Dplot_mesh.pdf}\label{fig:G42_3d}
}\\
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/G60_3Dplot_mesh.pdf}\label{fig:G60_3d}
}\qquad
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/G62_3Dplot_mesh.pdf}\label{fig:G62_3d}
}
\end{center}
\caption{
The celestial non-gaussianity for twist-2 blocks $G_{4, j}$ (top row) and twist-4 blocks $G_{6, j}$ (bottom row), with $j = 0$ (left column) and $j = 2$ (right column). The leading twist block with zero transverse spin---$G_{4, 0}$ in (a)---provides the dominant contribution to the shape for high-energy jets, since it appears with the largest pre-factor in the celestial block expansion.}
\label{fig:twist2_blocks}
\end{figure}
Much like in the cosmological case, we can also gain some intuition for the shape of the celestial non-gaussianity by studying the shape of $Q_\mathcal{E}$ for certain toy functions, instead of the full QCD splitting functions.
In \Fig{fig:toy_a}, we show the result for the toy function $s_{123} = s_{12} + s_{23} + s_{13}$.
This is peaked in the equilateral triangle configuration, and is thus completely different in shape from the QCD result.
This can be viewed analogously to the case of a higher-derivative operator for cosmological correlators.
By contrast, in \Fig{fig:toy_b} we show the result for the toy function $1/s_{123}$, which provides a simplified description of the pole structure for the initiating parton in QCD.
Here, we see that the result is maximized for a flattened triangle configuration.
This should be viewed in analogy with the case of an excited initial state in cosmology~\cite{Green:2020whw,Green:2022fwg}.
While the full case of QCD is of course more complicated than a single pole in $s_{123}$, the enhancement in the flattened limit arises for the same general reason, namely due to the presence of the propagator associated with the highly boosted quark/gluon state.
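The origin of the flattened enhancement can be illustrated with a small numerical check. In the collinear limit $s_{ij} \propto E_i E_j \theta_{ij}^2$; assuming equal energies and a fixed longest side $R_L$ (both simplifications made for this sketch), the flattened arrangement minimizes $s_{123}$ and hence maximizes the toy function $1/s_{123}$:

```python
def s123_proxy(theta12, theta23, theta13):
    """Collinear-limit proxy for s123 = s12 + s23 + s13 with
    s_ij ~ theta_ij^2 (equal particle energies assumed)."""
    return theta12**2 + theta23**2 + theta13**2

RL = 0.38
equilateral = s123_proxy(RL, RL, RL)        # all three sides equal to RL
flattened = s123_proxy(RL / 2, RL / 2, RL)  # three points along a line

# The flattened triangle gives the smaller s123 (1.5 RL^2 vs 3 RL^2),
# hence the larger 1/s123, matching the enhancement in the flattened region.
assert flattened < equilateral
```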
We find it satisfying that for the first time we are starting to be able to understand the flux of energy within jets at this level of detail.
This shape is robust, and we will see that it is well borne out in CMS Open Data.
As discussed in \Sec{sec:blocks}, another way of analyzing the three-point correlation function is to expand it in celestial blocks.
Since the celestial block expansion converges rapidly, it is interesting to plot the shape dependence of the contributions to the non-gaussianities for the low twist blocks.
This is analogous to the visualization of the spherical harmonics for the shapes of atomic orbitals.
We show the contributions to the non-gaussianity for the twist-2 and twist-4 blocks in \Fig{fig:twist2_blocks}.
The blocks with a non-zero transverse spin $j$ exhibit a clear modulation in $\phi$.
We see that the contribution from the leading twist, $j=0$ block, $G_{4,0}$, has a similar shape to the full result, showing that it captures most of the shape dependence.
We emphasize that the blocks themselves, like spherical harmonics, are purely kinematic, and their contributions should be weighted by the numerical coefficients given in \Eqs{eq:quark_block_approx}{eq:gluon_block_approx} for quark and gluon jets, respectively.
For example, while $G_{4,2}$ has quite a different shape, it is negligible in both quark and gluon jets because the corresponding coefficients are small due to the cancellation between bosonic and fermionic statistics~\cite{Chen:2020adz,Chen:2021gdk}.
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/LL_vs_CMS_logxi_1}\label{fig:OD_average_a}
}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{figures/plots/LL_vs_CMS_xi_1}\label{fig:OD_average_b}
}\qquad
\end{center}
\caption{
The angular averaged non-gaussianity $\langle Q \rangle_\phi$, plotted using CMS Open Data, and compared with our LL+LO calculation with (a) logarithmic and (b) linear scales. Excellent agreement is seen between data and our calculation in the perturbative regime. As expected, the result is strongly peaked in the $\xi\rightarrow 1$ limit. }
\label{fig:OD_average}
\end{figure}
\section{Celestial Non-Gaussianities in CMS Open Data}\label{sec:open_data}
We now investigate the behavior of celestial non-gaussianities inside jets on actual LHC data.
This is possible due to the remarkable release~\cite{CERNOpenDataPortal} of research-grade data by the CMS collaboration~\cite{Chatrchyan:2008aa,CMS:OpenAccessPolicy}.
For examples of past analyses using these datasets, see e.g.~\cite{Larkoski:2017bvj,Tripathee:2017ybi,PaktinatMehdiabadi:2019ujl,Cesarotti:2019nax,Komiske:2019fks,Lester:2019bso,Apyan:2019ybx,Komiske:2019jim,Bhaduri:2019zkd,refId0,An:2021yqd,Elgammal:2021rne}.
In \Ref{Komiske:2022enw}, the three-point correlator was first studied using a jet dataset from the CMS 2011A Open Data~\cite{CMS:JetPrimary2011A}, which has been made public in the ``MIT Open Data'' (MOD) format by \Refs{Komiske:2019jim,komiske_patrick_2019_3340205}.
Here, we follow precisely this analysis, but extend it to the study of non-gaussianities.
In particular, we select $R = 0.5$ anti-$k_t$~\cite{Cacciari:2008gp} jets with $p_T \in [500,550]$ GeV and pseudo-rapidity $|\eta| < 1.9$.
Following \Ref{Komiske:2019jim}, we use charged hadron subtraction (CHS)~\cite{CMS:2014ata} and restrict to particle flow candidates (PFCs) with $p_T > 1$ GeV; this is done to mitigate pileup and minimize acceptance effects.
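A minimal sketch of the selection just described; the dictionary layout of the jet and PFC records is an assumption made for illustration, while the cuts are those quoted in the text:

```python
def select_jets(jets, pt_range=(500.0, 550.0), eta_max=1.9, pfc_pt_min=1.0):
    """Keep jets with pT in [500, 550) GeV and |eta| < 1.9, and keep
    only PFCs with pT > 1 GeV inside each selected jet."""
    selected = []
    for jet in jets:
        if pt_range[0] <= jet["pt"] < pt_range[1] and abs(jet["eta"]) < eta_max:
            pfcs = [p for p in jet["pfcs"] if p["pt"] > pfc_pt_min]
            selected.append({**jet, "pfcs": pfcs})
    return selected

# Two illustrative jet records: only the first passes the pT window,
# and its soft PFC is dropped by the 1 GeV cut.
jets = [
    {"pt": 520.0, "eta": 0.3, "pfcs": [{"pt": 5.0}, {"pt": 0.4}]},
    {"pt": 620.0, "eta": 0.1, "pfcs": [{"pt": 3.0}]},
]
kept = select_jets(jets)
assert len(kept) == 1 and len(kept[0]["pfcs"]) == 1
```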
We compute the celestial non-gaussianity using only charged particles, capitalizing on the fantastic track reconstruction performance of CMS~\cite{CMS:2014pgm}.
The use of charged particles allows us to study three-point correlators with exceptional angular resolution, and it also minimizes the impact of detector distortions.
We do not perform any unfolding in this paper, so we cannot include systematic error bars.
(For ease of display, we omit statistical error bars, though they can be inferred from the size of bin-to-bin fluctuations.)
As with the scaling features of the energy correlators studied in \Ref{Komiske:2022enw}, we find that the non-gaussianity $Q_{\mathcal{E}}$ is robust, and the expected features are clearly reproduced in the CMS Open Data.
Nevertheless, it would be highly interesting to properly unfold the data to perform quantitative studies.
We start in \Fig{fig:OD_average} by plotting the azimuthally averaged celestial non-gaussianity $\langle Q \rangle_\phi$, as extracted from CMS Open Data, along with our theoretical predictions.
At LL+LO, our perturbative predictions require knowing the quark/gluon fraction, which we have extracted from \textsc{Pythia} to be $43\%$ quarks and $57\%$ gluons.
As before, we plot the results with both logarithmic and linear scales to emphasize the squeezed ($\xi\rightarrow 0$) and perturbative ($\xi\rightarrow 1$) limits, respectively.
We see that our calculations describe the Open Data results remarkably well, emphasizing that we have identified a robust perturbative quantity within the three-point correlator.
Since we have only performed a LL calculation, the uncertainty band from scale variation is quite large, and it would be interesting to improve this.
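The quark and gluon predictions must be combined using the extracted fractions. One natural way to do this, and the convention assumed in this sketch (with made-up correlator values), is to mix the three-point and two-point correlators with the fractions before forming the ratio, since it is the measured correlators, not the ratios, that add linearly over a mixed sample:

```python
def mixed_Q(f_q, G3_q, G3_g, G2prod_q, G2prod_g):
    """Mix the three-point correlators and the products of two-point
    correlators with the quark fraction f_q, then form the ratio."""
    f_g = 1.0 - f_q
    G3 = f_q * G3_q + f_g * G3_g
    G2prod = f_q * G2prod_q + f_g * G2prod_g
    return G3 / G2prod

# Illustrative (made-up) correlator values at one (xi, phi) point:
Q_mix = mixed_Q(0.43, G3_q=2.4, G3_g=3.0, G2prod_q=1.6, G2prod_g=2.5)

# Note this differs from naively averaging the ratios themselves:
naive = 0.43 * (2.4 / 1.6) + 0.57 * (3.0 / 2.5)
assert abs(Q_mix - naive) > 1e-3
```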
\begin{figure}
\begin{center}
\subfloat[]{
\includegraphics[scale=0.45]{figures/plots/CMS_3Dplot}\label{fig:OD_shape_a}
}\qquad
\subfloat[]{
\includegraphics[scale=0.45]{figures/plots/LL_3Dplot_mesh}\label{fig:OD_shape_b}
}\qquad
\end{center}
\caption{The full shape dependence of the celestial non-gaussianity $Q_\mathcal{E}$ in (a) CMS Open Data and (b) our theoretical prediction at LL+LO. The non-gaussianity is strongly peaked in the ``flattened triangle'' region. To our knowledge, this is the first study of non-gaussianities in QCD energy flux.}
\label{fig:OD_shape}
\end{figure}
In \Fig{fig:OD_shape_a}, we show the shape dependence of the celestial non-gaussianity on CMS Open Data, and in \Fig{fig:OD_shape_b} we show our analytic prediction at LL+LO.
As expected from the analysis in \Sec{sec:shape_explain}, the data exhibits an enhanced non-gaussianity in the ``flattened triangle'' region.
We find it quite remarkable that we can study the energy flux inside QCD jets at this level of detail, with control over the shape dependence of the three-point correlations.
To our knowledge, this is the first study of non-gaussianities of QCD energy flux, so we are finally able to match the beautiful studies of the shapes of non-gaussianities for cosmological correlators, but in the collider context!
We believe that these observables will be extremely useful for increasing the perturbative accuracy of parton shower Monte Carlo programs used at the LHC.
In \Fig{fig:OD_shape_error} we show the ratio between the CMS Open Data and our LL+LO calculation.
Overall, we find remarkably good agreement in the perturbative ($\xi\rightarrow 1$) limit, even though we only have a LL+LO calculation and have not incorporated any non-perturbative corrections.
This illustrates the robustness of this observable, as well as the fact that the celestial non-gaussianity isolates the perturbative component of the three-point correlator.
There are deviations in the squeezed ($\xi\rightarrow 0$) limit, which are of course expected, since non-perturbative corrections should dominate in the squeezed limit which probes lower scales.
All in all, we find it quite remarkable to see these theoretical predictions borne out in real LHC data.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.6]{figures/plots/LL_compare_densityplot}
\caption{
The ratio of the CMS Open Data to the LL+LO perturbative prediction. Perturbative control is lost in the squeezed limit ($\xi \rightarrow 0$), which we have illustrated in grey. Excellent agreement is observed in the perturbative regime, particularly for a LL+LO prediction.}
\label{fig:OD_shape_error}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:conc}
In the last few years, there has been remarkable progress in our theoretical understanding of jet substructure, allowing for the first time the study of multi-point correlations of energy flow within high-energy jets at the LHC.
This progress has been driven both by advances in perturbative calculations in quantum field theory, as well as by new techniques in the study of light-ray operators from the conformal bootstrap program.
In this paper, we took the next step by defining a notion of ``celestial non-gaussianity", allowing us to isolate the robust features of three-point correlations of energy flow in jets.
The celestial non-gaussianity is designed to capture features that go beyond iterated $1\rightarrow 2$ limits.
We studied the properties of this celestial non-gaussianity using both analytic results and parton shower simulations, and we found that the shape of the celestial non-gaussianity is highly peaked in the ``flattened triangle'' configuration due to the initial-state propagator.
Using CMS Open Data, we were able to directly study the celestial non-gaussianity inside high-energy jets and compare with our analytic results.
To our knowledge, this is the first study of non-gaussianities in QCD energy flux, and it shows that we have theoretical control over multi-point correlation functions.
We find this extremely exciting, particularly since the remarkable LHC dataset allows these higher-point correlation functions to be directly measured with exceptional resolution.
Although we have emphasized the robustness of the celestial non-gaussianities, to perform true unfolded measurements of these observables will require several developments.
First, our analysis relied crucially on the use of charged-particle tracks for angular resolution.
One of the key features of energy correlators is that they can be easily calculated on tracks \cite{Chen:2020vvp,Li:2021zcf,Jaarsma:2022kdd} using the track function formalism \cite{Chang:2013rca,Chang:2013iba}.
This motivates further understanding of track functions theoretically, as well as their experimental measurement.
Second, the ability to calculate increasingly sophisticated features of the statistical properties of energy flow within jets also motivates the development of unfolding techniques to handle the complete phase space.
Recent examples of this using deep learning are \textsc{OmniFold} \cite{Andreassen:2019cjw} and conditional invertible neural networks~\cite{Bellagente:2020piv}, and it will be important to further develop these approaches on LHC data.
Achieving theoretical control over multi-point correlators has a number of applications for improving our understanding of jet substructure.
There has been significant recent work on improving the theoretical accuracy of parton showers~\cite{Li:2016yez,Hoche:2017hno,Hoche:2017iem,Dulat:2018vuy,Gellersen:2021eci,Hamilton:2020rcu,Dasgupta:2020fwr,Hamilton:2021dyz,Karlberg:2021kwr}, with the goal of better describing energy flow within jets at the LHC.
The ability to compute and measure higher-point correlators will provide crucial theoretical data for the development of higher-order parton showers.
These parton showers can then be used more broadly in increasingly sophisticated searches for new physics at the LHC.
Another application of the celestial non-gaussianities is to understanding the interactions of jets with the medium in heavy-ion collisions, where the use of jet substructure techniques has seen significant recent interest~\cite{Andrews:2018jcm,Cunqueiro:2021wls}.
One difficulty with current measurements is that it is often hard to disentangle multiple possible sources of modification due to the medium.
Much like the study of the shapes of non-gaussianity in inflation, the shapes of the celestial non-gaussianity contain significantly more information about energy flow within jets, and may therefore prove useful in unravelling different sources of medium modification.
Another interesting feature of the celestial non-gaussianities is that their shape is quite similar for both quark and gluon jets, even though their normalizations are different.
While for many jet substructure observables, the observed modification in the medium can be explained by a modification of the quark/gluon fractions for the jets, a modification of the non-gaussianity would illustrate a genuine modification from interaction with the medium.
The triple collinear splitting functions in the medium are known \cite{Fickinger:2013xwa}, but to our knowledge have not yet been used in applications.
We believe that this is an interesting avenue for future study.
The results in this paper are a significant step in our understanding of the energy flow within jets at the LHC, demonstrating that we have robust control over three-point correlations.
Going forward, it will be important to continue to push towards higher-point correlators, with the next obvious step being four-point correlators.
Shapes of four-point correlators have been studied in the cosmological context~\cite{Arroja:2009pd,Chen:2009bc,Hindmarsh:2009es,Senatore:2010jy,Bartolo:2010di,Lewis:2011au}.
Our success in isolating the perturbative physics of three-point correlation functions of energy flow suggests that this may also be possible for four-point correlators, allowing them to be robustly measured within high-energy jets at the LHC.
An initial step in this direction will be figuring out how to normalize the four-point correlators to isolate physics beyond the iterated $1 \rightarrow 2$ limit while also suppressing non-perturbative contributions.
An efficient understanding of multi-point correlators will also require the development of improved theoretical tools.
Motivated by the progress in the understanding of cosmological correlators, we are optimistic that insights from scattering amplitudes and the conformal bootstrap can have an impact, and many of the required theoretical ingredients are already available~\cite{DelDuca:2019ggv,DelDuca:2020vst}.
We believe that higher-point non-gaussianities will be interesting both for improving our understanding of quantum field theory and precision studies of QCD, but also for better exploiting jet substructure to hunt for ever more subtle features of potential new physics encoded in the radiation pattern within jets.
\begin{acknowledgments}
We thank Cyuan-Han Chang, David Simmons-Duffin, Patrick Komiske, and Jingjing Pan for many useful discussions.
H.C. is supported by the Natural Science Foundation of
China under contract No. 11975200.
I.M. is supported by start-up funds from Yale University.
J.T. is supported by the U.S. Department of Energy (DOE) Office of High Energy Physics under contract DE-SC0012567.
H.X.Z. is supported by the Natural Science Foundation of
China under contract No.~11975200.
\end{acknowledgments}
\section{Introduction}
With the rapid growth and unprecedented success of artificial intelligence (AI) applications, a huge amount of data is transferred every day from distributed clients to data centers for further processing. However, in many applications, the data is sensitive, and sending it to data centers constitutes a major privacy concern \cite{10.1145/335191.335438,duchi2013local,Zhou2018}. Federated learning (FL) has emerged as a promising solution to this privacy concern~\cite{pmlr-v54-mcmahan17a,DBLP:journals/corr/abs-1912-04977,9530714}. With FL, multiple distributed clients train a global machine learning model without sharing their local data. Local computations are carried out at the different clients and only the model parameter updates (or gradients) are sent to the central server (CS). The latter aggregates the local updates and forwards the result to the distributed clients for another training round.
Although data privacy is preserved using FL, allowing the clients to perform the model parameter updates opens the way for possible adversarial attacks. Indeed, in some cases, some of the distributed clients may send modified parameter updates with the intention of misleading the learning process \cite{6582732,8454396}. In this context, Byzantine attacks are a popular class, where certain clients aim to prevent the model from converging or to cause convergence to a falsified model. The Byzantine clients may act independently or collectively. Unfortunately, even a single malicious client in a distributed setup such as FL can seriously affect the end model \cite{blanchard2017machine}. Developing countermeasures to these attacks has gained increasing interest in recent years. Considering the FL setup without communication constraints, several aggregation techniques have been proposed to robustify stochastic gradient descent (SGD) in the distributed setup. Stochastic algorithms tolerating a few Byzantine attackers have been developed by aggregating the stochastic gradient updates using the median \cite{Xie2018GeneralizedBS}, geometric median \cite{minsker2015geometric,10.1145/3154503}, trimmed mean \cite{yin2018byzantine}, and iterative filtering \cite{10.1145/3322205.3311083}. The Krum aggregation in \cite{blanchard2017machine} selects the stochastic gradient having a minimum distance from a certain number of nearest stochastic gradients. A robust stochastic aggregation (RSA) scheme has been developed in \cite{li2019rsa}, which tolerates heterogeneous datasets and Byzantine attacks. Other related works include leveraging gradient redundancy to enhance robustness \cite{chen2018draco,Rajput2019DETOXAR} and avoiding saddle points in non-convex distributed learning in the presence of Byzantine attacks \cite{yin2019defending}.
Moreover, the advantages of reducing the variance of the stochastic gradients to defend against Byzantine attackers have been investigated in \cite{9153949,peng2020byzantine}.
All the above works are applicable when the individual local updates are sent separately to the CS. Sending the individual model updates at each training round constitutes a significant communication overhead, especially for wireless devices, and this overhead grows with the number of participating clients, which is huge in most real-world applications of FL. To alleviate this communication overhead, over-the-air computation has been proposed as a potential solution, where the local model updates are sent simultaneously over the multiple-access channel \cite{Kai2020,9042352}. Such an approach can significantly reduce the communication bottleneck of FL \cite{9014530,Sery2020}. Many works have investigated the convergence of over-the-air FL (OTA-FL). However, its robustness to Byzantine attacks has not been well investigated.
This paper is one of the first attempts to tackle Byzantine attacks in the case of OTA-FL. To the best of our knowledge, the only works that have attempted to address the Byzantine attack problem in OTA-FL are \cite{9473694,fan2022bevsgd}. While the Weiszfeld algorithm for geometric median aggregation using AirComp has been implemented in \cite{9473694}, a best effort voting (BEV) power control policy for OTA SGD has been proposed in \cite{fan2022bevsgd}, which defends against Byzantine attacks by letting the clients transmit with their maximum power.
In this work, to accommodate Byzantine attacks in the case of OTA-FL, we propose a transmission and aggregation approach that exploits the benefits of over-the-air computation while being robust to Byzantine attacks, named ROTAF for robust OTA-FL. Our approach consists of dividing the participating clients into several groups at each global training round, assigning a transmission time slot to each group, and aggregating the model updates of the different groups using the geometric median. The convergence of the proposed approach is analyzed under some assumptions on the loss functions. Based on our analysis, when the number of attackers is less than half of the number of groups, the proposed algorithm converges at a linear rate to a neighborhood of the optimal solution with an asymptotic learning error that depends on the noise variance and the number of Byzantine attackers. Moreover, as evidenced by numerical results, the proposed approach is robust to Byzantine attacks compared with simple averaging OTA-FL. The approach is then extended to handle the case of non-i.i.d. data, where a simple resampling step is added before the geometric median aggregation. Such a step can significantly reduce the variance of the group updates and thus enhance the performance of the geometric median aggregation.
The main contributions of the paper are summarized as follows:
\begin{itemize}
\item[1)] We propose a transmission and aggregation framework to deal with Byzantine attacks in OTA-FL. The proposed framework can handle both i.i.d. and non-i.i.d. data distributions.
\item[2)] We conduct a theoretical convergence analysis of the proposed framework for both i.i.d. and non-i.i.d. data distributions. Specifically, we show that our proposed algorithm converges at a linear rate to the neighborhood of the optimal solution.
\item[3)] We provide extensive numerical experiments on both i.i.d. and non-i.i.d. data distributions. The experimental results show the robustness of our proposed approach to different types of Byzantine attacks.
\end{itemize}
The remainder of the paper is organized as follows. In the next section, we introduce analog OTA-FL and the transmission model. In Section \ref{Proposed_approach}, our proposed framework is presented. In Section \ref{noniid}, our approach is extended to handle non-i.i.d. data distributions among clients. Numerical experiments are provided in Section \ref{numerical_simulation} and concluding remarks are drawn in Section \ref{conclusion}.
\section{System Model}
\label{system_model}
We consider a FL system where $N$ clients, each with a local dataset $D_n = \{({\bf x}_i\in \mathbb{R}^d ,y_i \in\mathbb{R})\}_{i=1}^{m_n}$, communicate with a CS to collaboratively train a global model. The output of the FL process is the optimal parameter vector ${\bf w}^\star\in\mathbb{R}^p$ that minimizes a global loss function $f({\bf w})$ given by
\begin{equation}
f({\bf w})=\frac{1}{N}\sum_{n=1}^N {\mathbb E}_{\boldsymbol{\boldsymbol\xi} \sim \mathcal{D}_n}f_n({\bf w}, {\boldsymbol\xi}),
\end{equation}
where $f_n({\bf w},{\boldsymbol\xi})$ is the local loss function at client $n$, and $\mathcal{D}_n$ is the data distribution of the local dataset of client $n$.
One of the main challenges of FL is the communication bottleneck: the parameter updates (or the stochastic gradients) of the participating clients need to be sent to the server at each global training round, usually over limited-bandwidth wireless channels. This has motivated researchers to propose over-the-air computation as a promising solution to reduce the communication overhead. In the sequel, we introduce analog OTA-FL, where the parameter updates are sent simultaneously over the multiple-access channel.
\subsection{Analog over-the-air FL}
By virtue of its communication efficiency, OTA-FL has attracted increasing interest from many researchers. Different variants of OTA-FL have been proposed considering different design criteria, such as power control, scheduling, beamforming design, learning rate optimization, and gradient compression. Our proposed framework can be integrated with any of these variants, but, for the sake of clarity, we adopt the OTA-FL design proposed in \cite{Sery2020}.
For OTA-FL, at each global training round $t$, the CS sends the model parameter vector, ${\bf w}_t$, to the clients. It is usually assumed that the downlink communication is perfect due to the high power available at the CS. Thus, each client receives the global model without distortions. Then, client $n$ sets its local model as ${\bf w}_{t,0}^n={\bf w}_t$ and runs its local SGD for $H$ iterations based on its local dataset
\begin{equation}
{\bf w}_{t,i+1}^n={\bf w}_{t,i}^n-\eta_t f_{n,j_{n,i}^t}'({\bf w}_{t,i}^n), \ \ {\rm for} \ \ i = 0,1,\cdots, H-1,
\label{SGD}
\end{equation}
where $\eta_t$ is the SGD step size at round $t$ and $f_{n,j_{n,i}^t}'({\bf w}_{t,i}^n)$ denotes the stochastic gradient computed using a sample with index $j_{n,i}^t$ chosen uniformly at random from the local dataset of client $n$. In practice, a minibatch can be used instead of a single sample to compute the stochastic gradient.
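As a concrete illustration of this local computation step, the recursion in \eqref{SGD} can be sketched as follows; the function names and the least-squares gradient used below are our own choices for illustration, not part of the paper's implementation:

```python
import numpy as np

def local_sgd(w_global, X, y, eta, H, grad_fn, rng):
    """Run H steps of local SGD starting from the global model w_t.

    grad_fn(w, x_j, y_j) returns the stochastic gradient computed on the
    single sample with index j, drawn uniformly at random (index j_{n,i}^t).
    """
    w = w_global.copy()
    for _ in range(H):
        j = rng.integers(len(y))             # uniform random sample index
        w = w - eta * grad_fn(w, X[j], y[j]) # one SGD step
    return w

# toy example: least-squares loss on a consistent linear system
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
grad = lambda w, xj, yj: 2.0 * (xj @ w - yj) * xj
w_local = local_sgd(np.zeros(2), X, y, eta=0.1, H=300,
                    grad_fn=grad, rng=np.random.default_rng(0))
```

In practice, as noted above, a minibatch gradient would replace the single-sample gradient inside the loop.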
The clients then send their model updates,
\begin{equation}
{\bf m}_t^n = {\bf w}_{t,H}^n - {\bf w}_{t},
\end{equation}
simultaneously to the CS via analog OTA. The model updates should be precoded in order to mitigate the effect of channel fading. Let $\tilde h_{t,n}= h_{t,n}e^{j \phi_t^n}$ be the block-fading channel of user $n$ at the transmission time of global round $t$, where $h_{t,n} >0$ and $\phi_t^n \in [-\pi,\pi]$ are its magnitude and phase, respectively. As in \cite{Kai2020,9076343,9042352,Sery2020,Liu2021}, we assume that perfect channel state information (CSI) is available at the clients and the CS. The imperfect CSI case is left for future investigation. Moreover, since the power budget at the clients is limited, the transmitted signal should satisfy the following average power constraint
\begin{equation}
\mathbb{E} \left[ \| {\bf x}_{t,n} \|^2 \right] \leq P.
\label{power_const}
\end{equation}
In practice, weak channels might cause a high amplification of transmit power, possibly violating the transmission power constraint \eqref{power_const}. To overcome this issue, a threshold $h_{\min}$ can be set and clients with channel fading coefficients less than $h_{\min}$ in magnitude will not transmit in that training round.
We adopt in this paper the precoding scheme proposed in \cite{Sery2020}, where every client $n$ precodes its model update ${\bf m}_t^n$ as
\begin{equation}
{\bf x}_{t,n}=\begin{cases}\rho_{t}\frac{h_{ \rm min}}{ h_{t,n}} e^{-j \phi_t^n} {\bf m}_t^n, \ \ \ {\rm if} \ \ h_{t,n}>h_{\min}\\ 0, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if} \ \ h_{t,n} \leq h_{\min}\end{cases}
\label{precoding}
\end{equation}
where $\rho_{t}$ is a factor to satisfy the average power constraint at client $n$, which is set as follows
\begin{equation}
\rho_{t} =\sqrt{ \frac{P}{ \max_{n} \mathbb{E} \| {\bf m}_t^n \|^2}}.
\end{equation}
Note that, in practice, the clients do not have access to each other's updates in order to compute $\rho_t$ at each training round. A possible way to deal with this issue is for the CS to estimate this parameter offline using a small dataset and then forward it to the clients so they can use it at every training round. Another solution is to obtain an upper bound on $\rho_t$ and use it at every iteration \cite{Sery2020}.
The received signal at the CS is
\begin{equation}
{\bf y}_t=\sum_{n \in \mathcal{K}_t} \rho_t h_{\min}{\bf m}^n_{t} +\tilde{\bf z}_t,
\end{equation}
where $\mathcal{K}_t$ is the set of client indices with channel fading satisfying $h_{t,n}>h_{\min}$ and $\tilde{\bf z}_t \sim \mathcal{N}(\boldsymbol{0},\sigma^2{\bf I}_p)$ stands for additive noise. In order to update the global model, the CS sets
\begin{equation}
{\bf w}_{t+1}=\frac{{\bf y}_t}{|\mathcal{K}_t|\rho_t h_{\min}}+{\bf w}_t,
\label{global_update}
\end{equation}
where $|\mathcal{K}_t|$ is the cardinality of the set $\mathcal{K}_t$. The global model update in \eqref{global_update} can be also written as
\begin{equation}
{\bf w}_{t+1}=\frac{1}{|\mathcal{K}_t|}\sum_{n\in \mathcal{K}_t}{\bf w}^n_{t,H} + {\bf z}_t,
\end{equation}
where ${\bf z}_t \triangleq \frac{\tilde{\bf z}_t}{|\mathcal{K}_t|\rho_t h_{\min}} \sim \mathcal{N}(\boldsymbol{0}, \frac{\sigma^2}{ |\mathcal{K}_t|^2 \rho_t^2 h_{\min}^2 } {\bf I}_p)$.
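A minimal simulation of one analog OTA aggregation round, combining the precoding \eqref{precoding}, the received superposed signal, and the normalization step \eqref{global_update}, may look as follows. The function and variable names are ours, and $\rho_t$ is computed here from the realized update norms rather than their expectations:

```python
import numpy as np

def ota_aggregate(updates, h, P, h_min, sigma, rng):
    """Estimate the average model update from one OTA transmission round.

    updates : list of update vectors m_t^n (one per client)
    h       : channel magnitudes h_{t,n}
    """
    # power-control factor rho_t (sample norms stand in for E||m_t^n||^2)
    rho = np.sqrt(P / max(np.sum(m ** 2) for m in updates))
    # clients whose channel falls below the threshold stay silent
    active = [n for n, hn in enumerate(h) if hn > h_min]
    # phases are pre-compensated by the precoder, so magnitudes suffice:
    # each active client contributes rho * h_min * m_t^n at the receiver
    y = sum(rho * h_min * updates[n] for n in active)
    y = y + rng.normal(0.0, sigma, size=updates[0].shape)  # additive noise
    return y / (len(active) * rho * h_min)                 # normalization

rng = np.random.default_rng(0)
ups = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
# client 1 (h = 0.2 < h_min) is truncated; noiseless case recovers the mean
est = ota_aggregate(ups, h=[1.0, 0.2, 2.0], P=10.0, h_min=0.5,
                    sigma=0.0, rng=rng)
```

With $\sigma=0$ the estimate equals the exact average of the active clients' updates, which is the noiseless limit of the update rule above.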
\subsection{Byzantine attacks}
Although FL addresses the issue of sending the sensitive data of the clients, it opens the way to possible adversarial attacks since it allows the clients to perform the model update. A popular class of adversarial attacks in this context is Byzantine attacks, where some clients send falsified parameter updates aiming at affecting the convergence of the training phase or promoting a particular model. Byzantine attacks also include modifications due to possible hardware failures, for example. The challenge of designing a Byzantine-robust FL process has attracted many researchers \cite{blanchard2017machine,Xie2018GeneralizedBS,minsker2015geometric,10.1145/3154503,yin2018byzantine,10.1145/3322205.3311083,li2019rsa,chen2018draco,Rajput2019DETOXAR,yin2019defending,9153949}. However, these works are applicable in the case where the model updates of the clients are sent separately, which is not the case in OTA-FL. The challenge becomes more significant in OTA-FL, where all the model updates are sent simultaneously over the analog wireless channel. This may heavily affect the applicability of OTA-FL in practice. In the next section, we propose a transmission and aggregation framework to deal with Byzantine attacks in the context of OTA-FL.
\section{Byzantine Resilient OTA-FL}
\label{Proposed_approach}
In this section, we first develop our approach under the assumption that the local datasets at all the clients are identically distributed and then analyze its convergence. The extension to the case of non-i.i.d. data is considered in the next section.
\subsection{Algorithm development}
We assume that $B$ clients are Byzantine and the rest $R=N-B$ clients are regular.
In order to reduce the effect of Byzantine attacks, we propose the following approach. At each global training round $t$, the CS divides the $N$ clients uniformly at random into $G=N/m$ groups, where each group is composed of $m$ clients. Each group is allocated a time slot for the transmission of its model updates. Precisely, the clients of group $g$ transmit their updates simultaneously over-the-air. This allows the CS to obtain $G$ model updates. Then, using a robust aggregation technique, the model updates of the different groups are aggregated to update the global model. We will demonstrate that this approach is robust to Byzantine attacks.
At global iteration $t$, the global model, ${\bf w}_t$, is forwarded to all the clients. The clients in group $g$ perform $H_g$ steps of SGD using their local datasets as in $\eqref{SGD}$, and compute their model updates
\begin{equation}
{\bf m}_t^n = {\bf w}_{t,H_g}^n - {\bf w}_{t}, \ \ {\rm for} \ \ n \in \mathcal{G}_{g,t},
\end{equation}
where $\mathcal{G}_{g,t}$ is the set of user indices belonging to group $g$ at the global training round $t$. Note that, since the clients in different groups do not transmit in the same time slot, we can let the number of SGD steps vary across groups. In other words, the clients of a group with a later transmission time can perform more SGD steps than those in the currently transmitting group. However, for simplicity, we assume in the sequel that all clients perform the same number of SGD steps $H$ regardless of their transmission time.
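The per-round random grouping can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def random_groups(N, G, rng):
    """Split client indices 0..N-1 uniformly at random into G groups of
    m = N/G clients each; the partition is re-drawn at every round t."""
    assert N % G == 0, "the paper assumes equal-size groups, m = N/G"
    return np.split(rng.permutation(N), G)

groups = random_groups(12, 3, np.random.default_rng(0))
```

Each call returns a fresh partition, so a Byzantine client lands in a different group at every round, which is what prevents it from contaminating the same group update repeatedly.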
The clients in group $g$ compute their precoded signal as
\begin{equation}
{\bf x}_{t,n}=\begin{cases}\rho_{t}\frac{h_{ \rm min}}{ h_{t,n}} e^{-j \phi_t^n} {\bf m}_t^n, \ \ \ {\rm if} \ \ h_{t,n}>h_{\min}\\ 0, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if} \ \ h_{t,n} \leq h_{\min}\end{cases}
\label{precoding_11}
\end{equation}
and send their updates simultaneously during their allocated transmission time slot $T_g$. At the CS, the received signal vector corresponding to group $g$ can be expressed as
\begin{equation}
{\bf y}_{t,g}=\sum_{n \in \mathcal{K}_{t,g}} \rho_t h_{\min}{\bf m}^n_{t} +\tilde{\bf z}_{t,g},
\end{equation}
where $\mathcal{K}_{t,g}$ is the set of clients indices of group $g$ with channels such that $h_{n,t}>h_{\min}$ and $\tilde{\bf z}_{t,g} \sim \mathcal{N}(\boldsymbol{0},\sigma^2{\bf I}_p)$ is the additive noise. The CS estimates the model update of the group $g$ as
\begin{equation}
{\bf u}_g ^t= \frac{{\bf y}_{t,g}}{\rho_t h_{\min} |\mathcal{K}_{t,g}|}.
\label{group_update}
\end{equation}
After all the group updates are collected, the CS has $G$ update vectors ${\bf u}_1^t,\cdots,{\bf u}_G^t$ and can aggregate them to obtain the global model. For instance, one of the most efficient aggregation techniques that can be used is the geometric median \cite{bhagoji2019analyzing}. Other aggregation techniques can also be used, such as the Krum aggregation rule proposed in \cite{blanchard2017machine}. In this work, we focus on geometric median aggregation.
The global model is updated as
\begin{equation}
{\bf w}_{t+1} = {\rm geomed}({\bf u}_1^t,\cdots,{\bf u}_G^t) + {\bf w}_t,
\label{global_update_}
\end{equation}
where ${\rm geomed}(.)$ stands for the geometric median aggregation defined as
$$
{\rm geomed}(\{ {\bf u}_i\}_{i\in \mathcal{K}}) = {\rm arg} \min_{\bf z} \sum_{i\in \mathcal{K}} \|{\bf z}-{\bf u}_i\|.
$$
\begin{algorithm}[ht]
\SetAlgoLined
\caption{ROTAF}
\BlankLine
\KwIn{Initial model ${\bf w}_0$}
\For{$t = 0 , 1, 2 ,\cdots$}{The CS forwards ${\bf w}_t$ to the clients\;
\For{each client $n$}{
${\bf m}_t^n\gets LocalComp({\bf w}_t,H, b,\eta,D_n)$
}
\For{$g = 1,2,\cdots,G$}{
\For{each client $n $ in group $g$ $(n\in \mathcal{G}_{g,t}) $}{
client $n$ transmits its model update ${\bf x}_{t,n}$ precoded via \eqref{precoding_11} during transmission time slot $T_g$
}
The CS receives ${\bf y}_{t,g}$ of group $g$ and computes ${\bf u}_g^t$ as in \eqref{group_update}
}
The CS aggregates the group updates using \eqref{global_update_} to obtain the new global parameter vector ${\bf w}_{t+1}$
}
\label{sum_alg}
\end{algorithm}
The geometric median aggregation has been proposed as an efficient solution to Byzantine attacks when the individual updates of the clients are sent separately to the CS \cite{li2019rsa,9153949}. In fact, it approximates well the mean of the honest clients' model updates when $B<N/2$ \cite{9153949}. In our approach, it is used to aggregate the group updates. Thus, the number of Byzantine clients should satisfy $B<G/2$ in order for the geometric median to well approximate the mean of the group updates composed only of regular clients.
The geometric median can be efficiently computed using the Weiszfeld algorithm \cite{weiszfeld1937point}. To avoid numerical instabilities, a smoothed version of the Weiszfeld algorithm can be used in practice \cite{pillutla2019robust}, which computes a smoothed geometric median defined as
$$
{\rm geomed}_{\epsilon}(\{ {\bf u}_i\}_{i\in \mathcal{K}}) = {\rm arg} \min_{\bf z} \sum_{i\in \mathcal{K}} \|{\bf z}-{\bf u}_i\|_\epsilon,
$$
where
$$
\|{\bf x}\|_\epsilon=\begin{cases}
\frac{1}{2\epsilon}\|{\bf x}\|^2+\frac{\epsilon}{2} \ \ \ {\rm if} \ \ \|{\bf x}\|\leq\epsilon\\
\|{\bf x}\| \ \ \ \ \ \ \ \ \ \ \ \ {\rm if} \ \ \|{\bf x}\|>\epsilon,
\end{cases}
$$
where $\epsilon>0$ is a smoothing parameter. For sake of completeness, we present hereafter the smoothed Weiszfeld algorithm.
The steps of the proposed approach are summarized in Algorithm \ref{sum_alg}, where $LocalComp({\bf w}_t,H,b,\eta,D_n)$ consists of $H$ steps of batch-SGD using local dataset $D_n$ with learning rate $\eta$ and initial parameter vector ${\bf w}_t$ as described in \eqref{SGD}.
\begin{algorithm}
\SetAlgoLined
\caption{Smoothed Weiszfeld algorithm}
\BlankLine
\KwIn{ Set of vectors $\{{\bf v}_i\}_{i=1}^k$, smoothing parameter $\epsilon$}
\kwInit{$ {\bf z} = {\bf w}_t$}
\Repeat{${\bf z}$ converges}{\For{$i=1,\cdots,k$}{$ \theta_i= \frac{1}{\max(\epsilon, \|{\bf z}-{\bf v}_i\| )}$}
${\bf z} =\frac{ \sum_{i=1}^k \theta_i {\bf v}_i}{\sum_{i=1}^k \theta_i}$
}
\label{Weiszfeld_algorithm}
\end{algorithm}
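A direct transcription of Algorithm \ref{Weiszfeld_algorithm} in Python reads as follows; here we initialize at the mean of the inputs rather than at ${\bf w}_t$, and the stopping rule is a simple tolerance check, both our own choices:

```python
import numpy as np

def smoothed_geomed(vs, eps=1e-6, max_iter=200, tol=1e-10):
    """Smoothed Weiszfeld iteration for the geometric median of {v_i}."""
    vs = np.asarray(vs, dtype=float)
    z = vs.mean(axis=0)                      # initialization (paper uses w_t)
    for _ in range(max_iter):
        # theta_i = 1 / max(eps, ||z - v_i||), as in Algorithm 2
        theta = 1.0 / np.maximum(eps, np.linalg.norm(z - vs, axis=1))
        z_new = theta @ vs / theta.sum()     # weighted average update
        if np.linalg.norm(z_new - z) < tol:  # declare convergence
            break
        z = z_new
    return z

# three honest (identical) updates and one large outlier
z_hat = smoothed_geomed([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [10.0, 0.0]])
```

On a clean majority with a single outlier, the output stays near the majority cluster, which is exactly the robustness property exploited by the group aggregation above.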
\subsection{Convergence Analysis}
\label{conver_analysis}
In this section, the convergence of the proposed framework is studied. As in most of the works that studied the convergence of SGD, the analysis is conducted under the following assumptions.
\begin{assumption}
\begin{itemize}
\item[(i)] Strong convexity: The objective function $f$ is $\mu$-strongly convex, that is, for all ${\bf x},{\bf y} \in \mathbb{R}^p$
$$
f({\bf x})\geq f({\bf y}) + \langle f'({\bf y}), {\bf x}-{\bf y}\rangle+\frac{\mu}{2}\|{\bf x}-{\bf y}\|^2.
$$
for some $\mu>0$.
\item[(ii)] Lipschitz continuity of gradients: The objective function $f$ has $L$-Lipschitz continuous gradients, that is, for all ${\bf x},{\bf y} \in \mathbb{R}^p$
$$
\|f'({\bf x})-f'({\bf y})\| \leq L \|{\bf x}-{\bf y}\|.
$$
for some $L>0$.
\item[(iii)] Bounded outer variation: For each honest client $n$, the variation of its aggregated gradients with respect to the overall gradient is bounded as
$$
\| f'_{n}({\bf x})-f'({\bf x}) \|^2 \leq \delta^2, \ {\rm for \ all} \ {\bf x} \in \mathbb{R}^p.
$$
\item[(iv)] Bounded inner variation: For each honest client $n$, the variation of its stochastic gradients with respect to its aggregated gradients is bounded as
$$
\mathbb{E}\|f'_{n,j_{n,i}^t}({\bf x})-f_n'({\bf x})\|^2\leq \kappa^2, \ {\rm for \ all} \ {\bf x} \in \mathbb{R}^p.
$$
\item[(v)] Bounded stochastic gradients: For each honest client $n$, stochastic gradient $f'_{n,j_{n,i}^t}({\bf x})$ satisfies
$$
\mathbb{E}\|f'_{n,j_{n,i}^t}({\bf x})\|^2\leq K^2, \ {\rm for \ all} \ {\bf x} \in \mathbb{R}^p,
$$
for some fixed $K^2>0$.
\end{itemize}
\label{assump1}
\end{assumption}
Items (i) and (ii) in Assumption \ref{assump1} are common in convex analysis. Items (iii) and (iv) are needed to bound the inner and outer variations of the stochastic gradients and the gradients of the honest clients, respectively. These assumptions are adopted in most of the existing works considering distributed SGD in the presence of Byzantine attacks \cite{9153949,pmlr-v80-tang18a}. The convergence of the proposed approach is presented in the following theorem, which is proved in the Appendix. For simplicity, we assume in this section that the learning rate is constant and that the clients perform one SGD step at each global iteration, that is, $H=1$ and $\eta_t=\eta$ for all $t$.
\begin{theorem} Under Assumption \ref{assump1}, when the number of Byzantine attackers satisfies $B< \frac{G}{2}$ and the step size $\eta$ verifies $\eta< \min(\frac{\mu}{2L^2} , \frac{2}{\mu})$, then
$$
\mathbb{E}\|{\bf w}_t-{\bf w}^*\|^2\leq ( 1-\eta\mu)^{t} B_1+A_1,
$$
where
\begin{align}
B_1&= \|{\bf w}_0-{\bf w}^*\|^2 -A_1,\\
A_1&= \frac{2}{\mu^2}C_\alpha^2 \left(\delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{\min}^2} K^2\right),\label{err1}
\end{align}
with $C_\alpha=\frac{2-2\alpha}{1-2\alpha}$ and $\alpha= \frac{B}{G}$.
\label{thm1}
\end{theorem}
Theorem \ref{thm1} states that the proposed approach converges at a linear rate to a neighborhood of ${\bf w}^*$. The asymptotic learning error, $A_1$, depends on the number of Byzantine attackers through $C_\alpha$ and on the noise variance. $C_\alpha$ increases with the number of Byzantine attackers, which yields a higher asymptotic error. Moreover, the error is composed of three terms which are proportional to the outer variations of the gradients, the inner variations of the stochastic gradients, and the noise variance, respectively.
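To make the dependence on the attack fraction concrete, the constant $C_\alpha=(2-2\alpha)/(1-2\alpha)$ can be evaluated for a few values of $\alpha=B/G$ (an illustrative computation of ours, not from the paper):

```python
def C(alpha):
    """Geometric-median constant C_alpha = (2 - 2a) / (1 - 2a), valid for a < 1/2."""
    assert alpha < 0.5
    return (2.0 - 2.0 * alpha) / (1.0 - 2.0 * alpha)

# C_alpha starts at 2 with no attackers and blows up as alpha -> 1/2
for B, G in [(0, 10), (1, 10), (2, 10), (4, 10)]:
    print(f"B={B}, G={G}: C_alpha = {C(B / G):.2f}")
```

For instance, with $B/G=1/4$ the constant is $3$, while at $B/G=2/5$ it already reaches $6$, so the asymptotic error $A_1$ grows quickly as the attack fraction approaches the $G/2$ limit.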
\section{Byzantine resilience for non-i.i.d. data}
\label{noniid}
In the previous section, we have focused on the case of identically distributed local datasets. However, this is not the case in most real applications of federated learning \cite{smith2017federated}. In this section, we modify our proposed approach to handle the case of non-i.i.d. data.
\subsection{Resampling before aggregation}
The main concern with non-i.i.d. datasets is that the model updates (or the stochastic gradients) are not identically distributed and may have a large variance. Unfortunately, a large variance can heavily affect the performance of most of the existing robust aggregation techniques, including the geometric median, Krum, and the coordinate-wise median \cite{karimireddy2021byzantinerobust}. Recently, a simple solution has been proposed in \cite{karimireddy2021byzantinerobust} to robustify the aggregation techniques in the case of non-i.i.d. data. The resampling process consists of multiple rounds where $s$ vectors of the model updates are sampled uniformly at random at each round, with the constraint that each vector is sampled at most $s$ times. The average of these samples is then computed to generate a new message. The new messages are then fed to the aggregation technique. The steps of the resampling process are presented in Algorithm \ref{sam_alg}.
Since our proposed framework is based on the geometric median, its performance can be heavily degraded in the case of non-i.i.d. data. To address this issue, we propose to apply the resampling process to the group updates before computing their geometric median. To motivate the importance of reducing the variance of the model updates before aggregation, we review the following lemma from \cite{9153949}, which characterizes the error of the geometric median compared to the true mean.
\begin{lemma}\cite[Lemma 1]{9153949} Let $\mathcal V$ be a subset of random vectors distributed in
a normed vector space with cardinality $N$. Let $\mathcal{B} \subset \mathcal{V}$ be a subset of $\mathcal V$ with cardinality $B$ such that $B< \frac{N}{2}$ and let $R=N-B$. Then, it holds that
$$
\mathbb{E}\|\underset{{\bf v}\in \mathcal{V}}{\rm geomed} ({\bf v}) -\bar{\bf v} \|^2 \leq \frac{ C_\alpha^2}{R}{\sum_{{\bf v} \notin \mathcal{B}}\|{\bf v} -\mathbb{E}{\bf v}\|^2}+\frac{ C_\alpha^2}{R} {\sum_{{\bf v} \notin \mathcal{B}}\|{\mathbb E}{\bf v} -\bar{\bf v}\|^2},
$$
where $\bar{\bf v} = \frac{1}{R}\sum_{{\bf v} \notin \mathcal{B}} {\mathbb E}{\bf v}$, $C_\alpha=\frac{2-2\alpha}{1-2\alpha}$ and $\alpha= \frac{B }{N}$.\label{gm_Byzantine}\end{lemma}
\begin{algorithm}
\SetAlgoLined
\caption{$s$-resampling process}
\BlankLine
\KwIn{ Set of vectors $\mathcal{S} = \{{\bf v}_i\}_{i=1}^R$ of size $R$, resampling rate $s$}
\KwOut{ A new set of vectors $\tilde{ \mathcal{S}} = \{ \tilde {\bf v}_i \}_{i=1}^R$ }
\For{$i = 1 , 2 ,\cdots, R$}{Choose uniformly at random a set $\mathcal{S}_i$ of $s$ vectors from $\mathcal{S}$ such that any vector is chosen at most $s$ times overall \;
Compute a new vector $\tilde {\bf v}_i$ as the average of the chosen vectors, $\tilde{\bf v}_i =\frac{1}{s} \sum_{j\in \mathcal{S}_i}{\bf v}_j$
}
\label{sam_alg}
\end{algorithm}
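The resampling step of Algorithm \ref{sam_alg} can be sketched as follows; enforcing the at-most-$s$ constraint by shuffling a pool that contains each index exactly $s$ times is one simple implementation choice of ours, not specified in the paper:

```python
import numpy as np

def s_resample(vs, s, rng):
    """s-resampling: return R new vectors, each the average of s inputs,
    with every input vector used exactly s times in total."""
    R = len(vs)
    # pool holds each index exactly s times; a random permutation then
    # assigns s (distinct-slot) draws to each output vector
    pool = rng.permutation(np.repeat(np.arange(R), s))
    vs = np.asarray(vs, dtype=float)
    return [vs[pool[i * s:(i + 1) * s]].mean(axis=0) for i in range(R)]

rng = np.random.default_rng(0)
vs = [np.array([1.0, 0.0]), np.array([3.0, 0.0]),
      np.array([5.0, 4.0]), np.array([7.0, 4.0])]
out = s_resample(vs, 2, rng)
```

Because the pool is balanced, the overall mean of the outputs equals the mean of the inputs, while the spread of the vectors fed to the geometric median is reduced by the averaging.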
The bound on the error of the geometric median reported in Lemma \ref{gm_Byzantine} is composed of two terms. The first one is related to the inner variations of the regular clients' updates, that is, the deviation of the stochastic gradient computed at a regular client from its true gradient. The inner variations can be reduced by increasing the size of the minibatch at every iteration, for instance. The second term is related to the outer variations, which reflect the differences between the distributions of the local datasets. The resampling process helps reduce the outer variations in the case of non-i.i.d. data. To further motivate the importance of the resampling step, we compare in Table \ref{table:resamp} the performance of our proposed approach, ROTAF, with and without resampling for different numbers of Byzantine attackers. From the table, without resampling, the performance of the proposed approach is heavily degraded for non-i.i.d. data, even in the case where no Byzantine attacks are applied, since the geometric median is sensitive to a high variance of the input vectors. Moreover, we remark that with resampling the performance is enhanced and approaches that obtained in the case of i.i.d. data distributions.
\begin{table}[ht]
\centering
\caption{Test accuracy of ROTAF for different values of the resampling rate $s$ on non-i.i.d. MNIST dataset. The simulation settings are detailed in section \ref{numerical_simulation}. The test accuracy after 500 global iterations is reported.}
\subtable[Logistic regression]{
\centering
\begin{tabular}{c c c c c}
\hline\hline
Number of attacks & non-i.i.d. $s = 1$ & non-i.i.d. $s =2$ & non-i.i.d. $s=3$ & i.i.d. $s=1$ \\ [0.5ex]
\hline
No Byzantine attacks & 0.6842 & 0.8690 & 0.8971 & 0.9151\\
2 Byzantine attackers & 0.6842 & 0.8817 & 0.9090 & 0.9139\\
5 Byzantine attackers & 0.6996 & 0.8829 &0.9102 &0.9112\\ [1ex]
\hline
\end{tabular}
}
\hfill
\subtable[CNN model]{
\centering
\begin{tabular}{c c c c c}
\hline\hline
Number of attacks & non-i.i.d. $s = 1$ & non-i.i.d. $s =2$ & non-i.i.d. $s=3$ & i.i.d. $s=1$ \\ [0.5ex]
\hline
No Byzantine attacks & 0.6557 & 0.9034 & 0.9379 & 0.9529 \\
2 Byzantine attackers & 0.6799 & 0.9260 & 0.9533 & 0.9528 \\
5 Byzantine attackers & 0.6814 & 0.9280 & 0.9553 & 0.9526 \\ [1ex]
\hline
\end{tabular}
}
\label{table:resamp}
\end{table}
\subsection{Convergence Analysis}
The convergence of our proposed approach for non-i.i.d. data among clients is quite different from that of i.i.d. data due to the resampling step before the geometric median aggregation. In the following theorem, the convergence of our proposed framework with resampling is studied under the assumptions stated in Section \ref{conver_analysis}. The proof is given in the Appendix.
\begin{theorem} Under Assumption \ref{assump1}, when the number of Byzantine attackers satisfies $B < \frac{G}{2s}$ and the step size $\eta$ verifies $\eta< \min(\frac{\mu}{2L^2} , \frac{2}{\mu})$, then
$$
\mathbb{E}\|{\bf w}_t-{\bf w}^*\|^2\leq ( 1-\eta\mu)^{t} B_2+A_2,
$$
where
\begin{align}
B_2&= \|{\bf w}_0-{\bf w}^*\|^2 -A_2,\\
A_2 &= \frac{2}{\mu^2}C_{s\alpha}^2 \left(d+ \frac{1-d}{G-B}\right)\left(\delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{\min}^2} K^2\right),
\label{err2}
\end{align}
with $C_{s\alpha}=\frac{2-2s\alpha}{1-2s\alpha}$, $\alpha= \frac{B}{G}$ and $d = \frac{G-1}{Gs-1}$.
\label{thm3}
\end{theorem}
Theorem \ref{thm3} establishes the convergence of the proposed framework for non-i.i.d. data to a neighborhood of the optimal solution with an asymptotic learning error, $A_2$, that depends on the number of Byzantine attackers, the resampling rate, and the noise variance. Note that the resampling step comes at the cost of tolerating a lower number of attackers, $B<\frac{G}{2s}$. This is due to the fact that more vectors are contaminated by malicious updates as the resampling rate $s$ increases. This is not a major issue since only a low value of $s$ is needed, as we will see later in the numerical experiments. Specifically, $s=2$ or $3$ is sufficient to robustify the geometric median aggregation against the high variance of the model updates in non-i.i.d. data settings.
By comparing the asymptotic errors in \eqref{err1} and \eqref{err2}, the difference is in the coefficients $C_{\alpha}^2$ and $C_{s\alpha}^2 \left(d+ \frac{1-d}{G-B}\right)$, respectively. For $s=1$, the asymptotic errors are equal, while for $s>1$ the coefficient $C_{s\alpha}^2 \left(d+ \frac{1-d}{G-B}\right)\approx C_{s\alpha}^2 d$ is smaller than $C_{\alpha}^2$ when $\alpha$ is sufficiently small. This implies that the asymptotic error is reduced after resampling. However, this comes at the cost of tolerating a smaller number of Byzantine attacks.
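This comparison can be checked numerically; the values of $G$, $B$, and $s$ below are illustrative choices of ours:

```python
def coef_iid(B, G):
    """Error coefficient C_alpha^2 from Theorem 1, with alpha = B/G."""
    a = B / G
    return ((2 - 2 * a) / (1 - 2 * a)) ** 2

def coef_resampled(B, G, s):
    """Error coefficient C_{s alpha}^2 (d + (1-d)/(G-B)) from Theorem 2,
    with d = (G-1)/(Gs-1)."""
    a = B / G
    assert s * a < 0.5              # Theorem 2 requires B < G/(2s)
    d = (G - 1) / (G * s - 1)
    Cs = (2 - 2 * s * a) / (1 - 2 * s * a)
    return Cs ** 2 * (d + (1 - d) / (G - B))

# with few attackers, resampling with s=2 shrinks the error coefficient
print(coef_iid(1, 20), coef_resampled(1, 20, 2))
```

For $G=20$ and $B=1$, the resampled coefficient with $s=2$ is roughly $2.6$ versus about $4.5$ without resampling, in line with the discussion above.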
\section{Numerical Results}
\label{numerical_simulation}
In this section, the performance of the proposed approach is studied using real datasets, and compared with the existing OTA-FL design.
We consider two popular image-classification datasets, MNIST and CIFAR10.\\
{\bf MNIST:} the dataset is composed of 28x28 images of handwritten digits corresponding to 10 classes. The dataset contains 60,000 training samples and 10,000 testing samples. For the MNIST dataset, we use a multi-class logistic regression model and a CNN model composed of two 5x5 convolutional layers (the first with 32 channels, the second with 64 channels, each followed by 2x2 max pooling), a fully connected layer with 128 neurons and ReLU activation, and a softmax output layer.\\
{\bf CIFAR10:} the dataset is composed of 32x32x3 colour images corresponding to 10 classes. The dataset contains 50,000 training images and 10,000 test images. For this dataset, we adopt a CNN model composed of three 3x3 convolutional layers (the first with 32 channels, the second with 64 channels, the third with 128 channels, each followed by 2x2 max pooling), two fully connected layers with 512 and 128 neurons, respectively, with ReLU activation, and a softmax output layer.
\begin{figure}
\begin{center}
\subfigure[Logistic regression]{
\begin{tikzpicture}[scale=0.75, spy using outlines={black, circle, magnification=4, size=1.5cm,
connect spies}]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
ymin=0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
legend style={nodes={scale=0.7, transform shape}},
legend image post style={}
]
\addplot+[semithick, color =red, mark=x, mark options={mark repeat=20,mark phase=1}]
coordinates {
(0,0.0856)(1,0.1038)(2,0.123)(3,0.1469)(4,0.172)(5,0.1933)(6,0.2096)(7,0.2311)(8,0.2542)(9,0.2785)(10,0.3011)(11,0.3214)(12,0.3402)(13,0.3549)(14,0.3711)(15,0.3881)(16,0.3989)(17,0.414)(18,0.4294)(19,0.4416)(20,0.4573)(21,0.467)(22,0.4776)(23,0.4866)(24,0.4948)(25,0.5051)(26,0.5144)(27,0.5229)(28,0.5333)(29,0.5415)(30,0.5485)(31,0.5538)(32,0.5556)(33,0.5655)(34,0.5742)(35,0.5819)(36,0.5871)(37,0.5958)(38,0.5998)(39,0.6043)(40,0.6106)(41,0.6143)(42,0.6181)(43,0.6235)(44,0.6284)(45,0.6381)(46,0.6394)(47,0.6427)(48,0.6473)(49,0.6485)(50,0.6519)(51,0.6555)(52,0.6583)(53,0.6634)(54,0.6666)(55,0.6718)(56,0.6752)(57,0.6781)(58,0.6795)(59,0.6811)(60,0.6842)(61,0.6861)(62,0.688)(63,0.6909)(64,0.6965)(65,0.6978)(66,0.6983)(67,0.7024)(68,0.707)(69,0.7109)(70,0.7157)(71,0.7191)(72,0.7251)(73,0.7252)(74,0.724)(75,0.7259)(76,0.7267)(77,0.7309)(78,0.7307)(79,0.7321)(80,0.7334)(81,0.7351)(82,0.7368)(83,0.7382)(84,0.7391)(85,0.74)(86,0.7402)(87,0.7434)(88,0.7441)(89,0.7452)(90,0.7482)(91,0.7498)(92,0.7522)(93,0.7526)(94,0.7556)(95,0.7567)(96,0.7562)(97,0.7589)(98,0.7603)(99,0.7613)(100,0.7628)(101,0.7638)(102,0.7649)(103,0.7657)(104,0.7686)(105,0.7682)(106,0.7711)(107,0.7721)(108,0.7752)(109,0.7765)(110,0.7767)(111,0.7776)(112,0.7779)(113,0.7776)(114,0.7792)(115,0.7788)(116,0.7799)(117,0.7806)(118,0.7828)(119,0.7841)(120,0.7849)(121,0.786)(122,0.7858)(123,0.7852)(124,0.7863)(125,0.7888)(126,0.7896)(127,0.7891)(128,0.7914)(129,0.7925)(130,0.7927)(131,0.7959)(132,0.796)(133,0.7966)(134,0.7972)(135,0.7969)(136,0.7986)(137,0.7986)(138,0.7998)(139,0.8009)(140,0.8012)(141,0.802)(142,0.8012)(143,0.8024)(144,0.8029)(145,0.803)(146,0.8015)(147,0.8029)(148,0.8029)(149,0.8028)(150,0.8031)(151,0.8044)(152,0.8048)(153,0.8053)(154,0.8059)(155,0.8069)(156,0.8071)(157,0.8081)(158,0.8082)(159,0.8078)(160,0.8106)(161,0.8098)(162,0.8114)(163,0.8112)(164,0.8107)(165,0.8124)(166,0.8134)(167,0.812)(168,0.8125)(169,0.8138)(170,0.8128)(171,0.8143)(172,0.8154)(173,0.8147)(174,0.8164)(175,0.8158)(176,0.8159)
(177,0.8167)(178,0.8168)(179,0.8167)(180,0.816)(181,0.8159)(182,0.8169)(183,0.8172)(184,0.8172)(185,0.8156)(186,0.8174)(187,0.8168)(188,0.8163)(189,0.8164)(190,0.8186)(191,0.8187)(192,0.8196)(193,0.8215)(194,0.8225)(195,0.8213)(196,0.8223)(197,0.8219)(198,0.8226)(199,0.8224)(200,0.8218)(201,0.8215)(202,0.8223)(203,0.8223)(204,0.8225)(205,0.8224)(206,0.8215)(207,0.8222)(208,0.8235)(209,0.8236)(210,0.8241)(211,0.824)(212,0.8224)(213,0.8237)(214,0.8238)(215,0.8236)(216,0.8241)(217,0.8238)(218,0.8239)(219,0.824)(220,0.8237)(221,0.8238)(222,0.824)(223,0.823)(224,0.8237)(225,0.8244)(226,0.8263)(227,0.8263)(228,0.8279)(229,0.8267)(230,0.8269)(231,0.8256)(232,0.8271)(233,0.8261)(234,0.827)(235,0.8263)(236,0.8267)(237,0.8263)(238,0.8279)(239,0.828)(240,0.8288)(241,0.8289)(242,0.8289)(243,0.8305)(244,0.8304)(245,0.8313)(246,0.8294)(247,0.8305)(248,0.8298)(249,0.831)(250,0.8303)(251,0.8308)(252,0.8309)(253,0.8328)(254,0.8316)(255,0.8321)(256,0.8322)(257,0.8325)(258,0.8329)(259,0.8324)(260,0.8331)(261,0.832)(262,0.8319)(263,0.832)(264,0.8322)(265,0.8337)(266,0.8324)(267,0.8331)(268,0.833)(269,0.8336)(270,0.8339)(271,0.834)(272,0.8348)(273,0.8331)(274,0.8328)(275,0.8325)(276,0.8338)(277,0.8347)(278,0.8333)(279,0.8346)(280,0.8348)(281,0.8345)(282,0.835)(283,0.8339)(284,0.8342)(285,0.8356)(286,0.8375)(287,0.8368)(288,0.8361)(289,0.8376)(290,0.8367)(291,0.8371)(292,0.8376)(293,0.8371)(294,0.8374)(295,0.8365)(296,0.8377)(297,0.8367)(298,0.8353)(299,0.8346)(300,0.8348)(301,0.8348)(302,0.8346)(303,0.8353)(304,0.8366)(305,0.8366)(306,0.8369)(307,0.8366)(308,0.8359)(309,0.8345)(310,0.8361)(311,0.8351)(312,0.8364)(313,0.8367)(314,0.8355)(315,0.8361)(316,0.8378)(317,0.838)(318,0.8376)(319,0.8382)(320,0.838)(321,0.8377)(322,0.8382)(323,0.8378)(324,0.8389)(325,0.8384)(326,0.8394)(327,0.8375)(328,0.839)(329,0.8399)(330,0.8407)(331,0.8408)(332,0.8417)(333,0.8422)(334,0.8415)(335,0.8427)(336,0.8424)(337,0.8419)(338,0.8418)(339,0.8428)(340,0.8411)(341,0.8419)(342,0.8423)(343,0.8406)(344,0.8423)
(345,0.8411)(346,0.8414)(347,0.8417)(348,0.8419)(349,0.8421)(350,0.8412)(351,0.843)(352,0.8426)(353,0.8423)(354,0.843)(355,0.8431)(356,0.8433)(357,0.8431)(358,0.8428)(359,0.8421)(360,0.8431)(361,0.8419)(362,0.842)(363,0.8422)(364,0.8425)(365,0.8422)(366,0.8413)(367,0.8412)(368,0.8416)(369,0.8427)(370,0.8423)(371,0.8422)(372,0.8407)(373,0.8408)(374,0.8417)(375,0.8428)(376,0.842)(377,0.8415)(378,0.8406)(379,0.8416)(380,0.8412)(381,0.843)(382,0.8425)(383,0.8424)(384,0.8423)(385,0.8426)(386,0.8433)(387,0.8434)(388,0.8435)(389,0.8428)(390,0.8438)(391,0.8435)(392,0.8437)(393,0.8431)(394,0.8435)(395,0.844)(396,0.8435)(397,0.8445)(398,0.8433)(399,0.8441)(400,0.8438)(401,0.8439)(402,0.8451)(403,0.8447)(404,0.8454)(405,0.8452)(406,0.8439)(407,0.8439)(408,0.8462)(409,0.8457)(410,0.8462)(411,0.8465)(412,0.8479)(413,0.8474)(414,0.8468)(415,0.8457)(416,0.8464)(417,0.8469)(418,0.8476)(419,0.8465)(420,0.8464)(421,0.845)(422,0.8454)(423,0.8455)(424,0.8448)(425,0.8452)(426,0.8454)(427,0.8439)(428,0.8447)(429,0.8457)(430,0.8451)(431,0.8464)(432,0.8454)(433,0.8459)(434,0.8462)(435,0.8463)(436,0.8463)(437,0.8461)(438,0.8456)(439,0.8461)(440,0.8453)(441,0.8459)(442,0.8457)(443,0.8443)(444,0.845)(445,0.846)(446,0.8467)(447,0.8481)(448,0.8481)(449,0.8493)(450,0.8496)(451,0.849)(452,0.8492)(453,0.8495)(454,0.8501)(455,0.8493)(456,0.8489)(457,0.8505)(458,0.8507)(459,0.8506)(460,0.851)(461,0.8525)(462,0.8517)(463,0.8523)(464,0.8523)(465,0.8514)(466,0.8521)(467,0.8525)(468,0.8523)(469,0.852)(470,0.8517)(471,0.8542)(472,0.853)(473,0.8521)(474,0.8516)(475,0.8521)(476,0.8523)(477,0.853)(478,0.8513)(479,0.8515)(480,0.8527)(481,0.8519)(482,0.8524)(483,0.8529)(484,0.8532)(485,0.854)(486,0.8539)(487,0.8542)(488,0.8552)(489,0.8549)(490,0.854)(491,0.854)(492,0.8539)(493,0.8533)(494,0.8532)(495,0.8537)(496,0.8535)(497,0.8546)(498,0.8542)(499,0.8559)
};\addlegendentry{ROTAF $B=0$}
\addplot+[semithick, color =blue, mark=square, mark options={mark repeat=20,mark phase=1}]
coordinates {
(0,0.0856)(1,0.1003)(2,0.1241)(3,0.1459)(4,0.1678)(5,0.1863)(6,0.2066)(7,0.2279)(8,0.2504)(9,0.2754)(10,0.2968)(11,0.3152)(12,0.3356)(13,0.3547)(14,0.3706)(15,0.3866)(16,0.4041)(17,0.4183)(18,0.4298)(19,0.4422)(20,0.4552)(21,0.4656)(22,0.4785)(23,0.4899)(24,0.5014)(25,0.5132)(26,0.5224)(27,0.5316)(28,0.5403)(29,0.5473)(30,0.5565)(31,0.5631)(32,0.57)(33,0.5777)(34,0.5845)(35,0.5908)(36,0.5977)(37,0.6035)(38,0.6097)(39,0.6152)(40,0.6214)(41,0.6262)(42,0.6324)(43,0.6372)(44,0.6433)(45,0.6475)(46,0.6519)(47,0.6559)(48,0.66)(49,0.664)(50,0.6692)(51,0.6723)(52,0.6763)(53,0.6789)(54,0.6812)(55,0.6852)(56,0.6883)(57,0.6915)(58,0.6945)(59,0.6964)(60,0.6992)(61,0.702)(62,0.7058)(63,0.7084)(64,0.7112)(65,0.7142)(66,0.7175)(67,0.7194)(68,0.7209)(69,0.7243)(70,0.7259)(71,0.7285)(72,0.7305)(73,0.7312)(74,0.7337)(75,0.7368)(76,0.7405)(77,0.7418)(78,0.7429)(79,0.746)(80,0.7477)(81,0.7496)(82,0.751)(83,0.753)(84,0.7545)(85,0.7562)(86,0.7577)(87,0.76)(88,0.761)(89,0.7618)(90,0.7632)(91,0.7661)(92,0.7665)(93,0.7674)(94,0.7687)(95,0.7696)(96,0.7717)(97,0.7732)(98,0.7745)(99,0.776)(100,0.7776)(101,0.7789)(102,0.7799)(103,0.7813)(104,0.782)(105,0.7834)(106,0.7842)(107,0.7853)(108,0.786)(109,0.787)(110,0.7879)(111,0.789)(112,0.7901)(113,0.7913)(114,0.7918)(115,0.7943)(116,0.7956)(117,0.796)(118,0.7967)(119,0.7982)(120,0.7988)(121,0.7991)(122,0.7999)(123,0.8004)(124,0.8018)(125,0.8023)(126,0.8032)(127,0.8046)(128,0.8044)(129,0.8049)(130,0.8057)(131,0.8065)(132,0.8076)(133,0.809)(134,0.8091)(135,0.8105)(136,0.8099)(137,0.8106)(138,0.8105)(139,0.8117)(140,0.812)(141,0.8127)(142,0.8124)(143,0.8139)(144,0.8145)(145,0.8155)(146,0.816)(147,0.8167)(148,0.8168)(149,0.8173)(150,0.8179)(151,0.8187)(152,0.8189)(153,0.8194)(154,0.8198)(155,0.8207)(156,0.8208)(157,0.8216)(158,0.8219)(159,0.8224)(160,0.823)(161,0.8234)(162,0.8238)(163,0.8247)(164,0.8247)(165,0.8255)(166,0.8258)(167,0.8265)(168,0.8268)(169,0.8284)(170,0.8282)(171,0.8285)(172,0.8287)(173,0.8292)(174,0.8297)(175,0.8303)(176,0.8303)(177,0.8
305)(178,0.831)(179,0.8313)(180,0.8322)(181,0.8326)(182,0.8325)(183,0.8327)(184,0.8332)(185,0.8338)(186,0.8341)(187,0.8345)(188,0.8349)(189,0.8357)(190,0.8359)(191,0.8364)(192,0.8367)(193,0.837)(194,0.8381)(195,0.8382)(196,0.8376)(197,0.8386)(198,0.8392)(199,0.8393)(200,0.8403)(201,0.8399)(202,0.8401)(203,0.8401)(204,0.8403)(205,0.8411)(206,0.8418)(207,0.8415)(208,0.8425)(209,0.8423)(210,0.8426)(211,0.8427)(212,0.8429)(213,0.8435)(214,0.8439)(215,0.844)(216,0.8445)(217,0.8453)(218,0.845)(219,0.8459)(220,0.846)(221,0.8467)(222,0.847)(223,0.847)(224,0.8466)(225,0.8471)(226,0.8472)(227,0.8469)(228,0.8472)(229,0.8468)(230,0.8473)(231,0.8474)(232,0.8476)(233,0.8477)(234,0.8476)(235,0.8474)(236,0.8475)(237,0.8475)(238,0.8477)(239,0.8481)(240,0.8481)(241,0.8489)(242,0.8493)(243,0.8495)(244,0.8491)(245,0.8495)(246,0.8497)(247,0.8493)(248,0.8493)(249,0.8498)(250,0.8498)(251,0.8505)(252,0.8504)(253,0.8504)(254,0.8507)(255,0.8508)(256,0.8507)(257,0.8504)(258,0.8511)(259,0.8514)(260,0.8515)(261,0.8518)(262,0.852)(263,0.8519)(264,0.853)(265,0.8532)(266,0.8533)(267,0.8538)(268,0.8536)(269,0.8539)(270,0.8534)(271,0.854)(272,0.8539)(273,0.8544)(274,0.854)(275,0.8542)(276,0.8539)(277,0.8541)(278,0.854)(279,0.854)(280,0.8545)(281,0.8543)(282,0.8546)(283,0.8545)(284,0.855)(285,0.8552)(286,0.8551)(287,0.8555)(288,0.8553)(289,0.8552)(290,0.8553)(291,0.8549)(292,0.8553)(293,0.8558)(294,0.8561)(295,0.8558)(296,0.856)(297,0.8562)(298,0.8561)(299,0.8563)(300,0.8567)(301,0.8568)(302,0.8565)(303,0.8569)(304,0.8572)(305,0.8573)(306,0.8575)(307,0.8575)(308,0.8582)(309,0.8586)(310,0.859)(311,0.8589)(312,0.8591)(313,0.8592)(314,0.8595)(315,0.8596)(316,0.8592)(317,0.8595)(318,0.8593)(319,0.8593)(320,0.8594)(321,0.8599)(322,0.86)(323,0.8607)(324,0.8606)(325,0.8606)(326,0.8605)(327,0.8604)(328,0.8606)(329,0.8603)(330,0.8599)(331,0.8603)(332,0.86)(333,0.8601)(334,0.8607)(335,0.8608)(336,0.8609)(337,0.8611)(338,0.8624)(339,0.8616)(340,0.862)(341,0.8623)(342,0.8621)(343,0.8622)(344,0.8627)(345,0.8623)(
346,0.8624)(347,0.8622)(348,0.8623)(349,0.863)(350,0.8629)(351,0.8624)(352,0.8632)(353,0.8635)(354,0.8639)(355,0.8638)(356,0.8641)(357,0.8638)(358,0.8642)(359,0.8645)(360,0.8644)(361,0.8645)(362,0.8645)(363,0.8641)(364,0.8645)(365,0.8648)(366,0.8654)(367,0.8649)(368,0.8646)(369,0.8653)(370,0.8652)(371,0.8658)(372,0.8659)(373,0.8655)(374,0.866)(375,0.8664)(376,0.8666)(377,0.8665)(378,0.8665)(379,0.8668)(380,0.8667)(381,0.8667)(382,0.8661)(383,0.867)(384,0.8667)(385,0.8664)(386,0.8671)(387,0.8664)(388,0.8666)(389,0.8671)(390,0.8669)(391,0.8668)(392,0.8669)(393,0.8674)(394,0.8679)(395,0.8684)(396,0.868)(397,0.8677)(398,0.8685)(399,0.8688)(400,0.8686)(401,0.8686)(402,0.8686)(403,0.8686)(404,0.8689)(405,0.8687)(406,0.8686)(407,0.8686)(408,0.8691)(409,0.8692)(410,0.8692)(411,0.8691)(412,0.8695)(413,0.8695)(414,0.8694)(415,0.8695)(416,0.8699)(417,0.8695)(418,0.8699)(419,0.8697)(420,0.8693)(421,0.8698)(422,0.8695)(423,0.8692)(424,0.8699)(425,0.8699)(426,0.87)(427,0.8698)(428,0.87)(429,0.8698)(430,0.8699)(431,0.8701)(432,0.8703)(433,0.8705)(434,0.8698)(435,0.8708)(436,0.8703)(437,0.8702)(438,0.8703)(439,0.8708)(440,0.8709)(441,0.8706)(442,0.8704)(443,0.8705)(444,0.8709)(445,0.8707)(446,0.8703)(447,0.8708)(448,0.8706)(449,0.8709)(450,0.8708)(451,0.8706)(452,0.8701)(453,0.8703)(454,0.8705)(455,0.8706)(456,0.8707)(457,0.8711)(458,0.871)(459,0.8709)(460,0.8716)(461,0.871)(462,0.8709)(463,0.871)(464,0.8715)(465,0.8709)(466,0.8704)(467,0.8711)(468,0.8712)(469,0.8715)(470,0.8714)(471,0.8713)(472,0.8715)(473,0.8712)(474,0.8718)(475,0.8719)(476,0.8719)(477,0.8724)(478,0.8719)(479,0.8723)(480,0.8721)(481,0.8724)(482,0.8724)(483,0.8727)(484,0.8724)(485,0.8718)(486,0.8715)(487,0.8717)(488,0.8716)(489,0.8721)(490,0.8728)(491,0.8729)(492,0.8735)(493,0.873)(494,0.8734)(495,0.8737)(496,0.8737)(497,0.874)(498,0.8742)(499,0.8744)
};\addlegendentry{COTAF $B=0$}
\addplot [semithick, mark size=3pt, color =green, mark = triangle, mark options={mark repeat=20,mark phase=1}]
coordinates {
(0,0.0856)(1,0.1046)(2,0.1278)(3,0.1456)(4,0.1682)(5,0.188)(6,0.2085)(7,0.2299)(8,0.2504)(9,0.2725)(10,0.2913)(11,0.3129)(12,0.3225)(13,0.3427)(14,0.3603)(15,0.3688)(16,0.3829)(17,0.3958)(18,0.416)(19,0.4246)(20,0.4351)(21,0.4447)(22,0.4588)(23,0.4699)(24,0.4785)(25,0.4884)(26,0.4987)(27,0.5088)(28,0.5204)(29,0.5301)(30,0.5404)(31,0.5488)(32,0.5561)(33,0.5625)(34,0.5718)(35,0.5772)(36,0.5844)(37,0.5873)(38,0.5915)(39,0.5964)(40,0.6029)(41,0.6081)(42,0.6143)(43,0.6221)(44,0.628)(45,0.6343)(46,0.6386)(47,0.643)(48,0.6467)(49,0.6485)(50,0.651)(51,0.656)(52,0.6616)(53,0.6649)(54,0.6675)(55,0.6714)(56,0.6743)(57,0.6794)(58,0.6807)(59,0.6819)(60,0.6865)(61,0.6879)(62,0.6911)(63,0.6917)(64,0.6952)(65,0.7007)(66,0.7027)(67,0.7057)(68,0.7104)(69,0.7132)(70,0.7145)(71,0.7172)(72,0.72)(73,0.7243)(74,0.7239)(75,0.7252)(76,0.7274)(77,0.727)(78,0.7292)(79,0.729)(80,0.7334)(81,0.7327)(82,0.7353)(83,0.7375)(84,0.74)(85,0.7439)(86,0.7458)(87,0.7443)(88,0.7457)(89,0.7474)(90,0.748)(91,0.7512)(92,0.7539)(93,0.7559)(94,0.758)(95,0.758)(96,0.7582)(97,0.7589)(98,0.7596)(99,0.7607)(100,0.7619)(101,0.7633)(102,0.7649)(103,0.7644)(104,0.7666)(105,0.7689)(106,0.7693)(107,0.7694)(108,0.7695)(109,0.7717)(110,0.7729)(111,0.7736)(112,0.7726)(113,0.7744)(114,0.7743)(115,0.7739)(116,0.7761)(117,0.7764)(118,0.7779)(119,0.7797)(120,0.7813)(121,0.7807)(122,0.7816)(123,0.781)(124,0.7825)(125,0.7848)(126,0.7856)(127,0.7859)(128,0.7857)(129,0.7876)(130,0.7884)(131,0.788)(132,0.7901)(133,0.7909)(134,0.7894)(135,0.7914)(136,0.7912)(137,0.7922)(138,0.7918)(139,0.7905)(140,0.7926)(141,0.7947)(142,0.7941)(143,0.7936)(144,0.793)(145,0.7936)(146,0.7931)(147,0.7976)(148,0.7977)(149,0.7966)(150,0.7984)(151,0.7978)(152,0.7979)(153,0.799)(154,0.7993)(155,0.8007)(156,0.8028)(157,0.803)(158,0.8029)(159,0.8027)(160,0.8042)(161,0.8051)(162,0.8045)(163,0.8057)(164,0.8057)(165,0.8079)(166,0.8061)(167,0.8064)(168,0.8085)(169,0.8069)(170,0.8093)(171,0.8092)(172,0.8079)(173,0.8071)(174,0.8081)(175,0.808)(176,0.81)(177,0.80
92)(178,0.8101)(179,0.8078)(180,0.8094)(181,0.81)(182,0.8096)(183,0.8088)(184,0.8108)(185,0.8106)(186,0.8111)(187,0.8107)(188,0.8119)(189,0.8117)(190,0.8121)(191,0.8119)(192,0.8121)(193,0.8136)(194,0.8119)(195,0.813)(196,0.8131)(197,0.8133)(198,0.8128)(199,0.8126)(200,0.8118)(201,0.8131)(202,0.8153)(203,0.8157)(204,0.8176)(205,0.8206)(206,0.8193)(207,0.8205)(208,0.8194)(209,0.8184)(210,0.82)(211,0.8182)(212,0.8181)(213,0.8208)(214,0.8208)(215,0.8218)(216,0.8221)(217,0.8212)(218,0.8212)(219,0.8185)(220,0.8208)(221,0.8195)(222,0.8202)(223,0.8204)(224,0.8174)(225,0.8197)(226,0.8194)(227,0.8204)(228,0.8234)(229,0.8219)(230,0.8227)(231,0.8222)(232,0.8238)(233,0.823)(234,0.8234)(235,0.8237)(236,0.8239)(237,0.8243)(238,0.8255)(239,0.8253)(240,0.8269)(241,0.8278)(242,0.8285)(243,0.8269)(244,0.8286)(245,0.827)(246,0.8279)(247,0.8273)(248,0.8266)(249,0.8283)(250,0.8284)(251,0.828)(252,0.8296)(253,0.8308)(254,0.8296)(255,0.8323)(256,0.8321)(257,0.8323)(258,0.8325)(259,0.8325)(260,0.8323)(261,0.8312)(262,0.8311)(263,0.8311)(264,0.831)(265,0.8306)(266,0.8303)(267,0.8286)(268,0.8299)(269,0.8294)(270,0.8285)(271,0.8306)(272,0.8296)(273,0.83)(274,0.8313)(275,0.8323)(276,0.8312)(277,0.8319)(278,0.8337)(279,0.8357)(280,0.8356)(281,0.8361)(282,0.8355)(283,0.8346)(284,0.8354)(285,0.835)(286,0.8354)(287,0.8366)(288,0.8376)(289,0.8381)(290,0.8377)(291,0.8368)(292,0.8373)(293,0.8371)(294,0.8374)(295,0.8366)(296,0.8354)(297,0.8353)(298,0.835)(299,0.8339)(300,0.8362)(301,0.8347)(302,0.8381)(303,0.8372)(304,0.8363)(305,0.8352)(306,0.8359)(307,0.8366)(308,0.8359)(309,0.8348)(310,0.834)(311,0.8351)(312,0.835)(313,0.8345)(314,0.8335)(315,0.835)(316,0.8352)(317,0.8377)(318,0.8353)(319,0.8361)(320,0.8362)(321,0.8348)(322,0.8373)(323,0.8378)(324,0.8389)(325,0.8379)(326,0.837)(327,0.8371)(328,0.8352)(329,0.8347)(330,0.8337)(331,0.8339)(332,0.8355)(333,0.8352)(334,0.8357)(335,0.8371)(336,0.8361)(337,0.8362)(338,0.8362)(339,0.838)(340,0.8369)(341,0.8367)(342,0.8371)(343,0.8361)(344,0.8374)(345,0.8376
)(346,0.8371)(347,0.8388)(348,0.8376)(349,0.8368)(350,0.8376)(351,0.8364)(352,0.8366)(353,0.8365)(354,0.8375)(355,0.8375)(356,0.8355)(357,0.8361)(358,0.8366)(359,0.8357)(360,0.8366)(361,0.835)(362,0.8359)(363,0.8358)(364,0.8381)(365,0.8381)(366,0.8393)(367,0.8384)(368,0.8388)(369,0.8388)(370,0.8403)(371,0.8403)(372,0.8395)(373,0.8405)(374,0.8397)(375,0.8384)(376,0.8387)(377,0.84)(378,0.8406)(379,0.8417)(380,0.8417)(381,0.8427)(382,0.8425)(383,0.844)(384,0.8447)(385,0.8432)(386,0.8434)(387,0.8451)(388,0.8449)(389,0.8434)(390,0.8431)(391,0.8435)(392,0.8431)(393,0.844)(394,0.8436)(395,0.8435)(396,0.8438)(397,0.8426)(398,0.8429)(399,0.843)(400,0.843)(401,0.8427)(402,0.8422)(403,0.8418)(404,0.8426)(405,0.8426)(406,0.8424)(407,0.8425)(408,0.8415)(409,0.8417)(410,0.8422)(411,0.8428)(412,0.8427)(413,0.8434)(414,0.8435)(415,0.8431)(416,0.8424)(417,0.8417)(418,0.8417)(419,0.842)(420,0.8439)(421,0.843)(422,0.8415)(423,0.8429)(424,0.8432)(425,0.8445)(426,0.8434)(427,0.8443)(428,0.8442)(429,0.8442)(430,0.8433)(431,0.8448)(432,0.8438)(433,0.8431)(434,0.8434)(435,0.8428)(436,0.8427)(437,0.843)(438,0.8433)(439,0.844)(440,0.8424)(441,0.843)(442,0.8443)(443,0.8426)(444,0.8431)(445,0.8435)(446,0.8433)(447,0.8427)(448,0.8434)(449,0.8443)(450,0.8458)(451,0.8453)(452,0.8455)(453,0.8457)(454,0.8466)(455,0.8456)(456,0.8456)(457,0.8462)(458,0.8458)(459,0.846)(460,0.8444)(461,0.8442)(462,0.8459)(463,0.8444)(464,0.8436)(465,0.8448)(466,0.8445)(467,0.8453)(468,0.8451)(469,0.8464)(470,0.8465)(471,0.8461)(472,0.845)(473,0.8459)(474,0.846)(475,0.8461)(476,0.8442)(477,0.8459)(478,0.8458)(479,0.8451)(480,0.8464)(481,0.8462)(482,0.8441)(483,0.8463)(484,0.8451)(485,0.8447)(486,0.8432)(487,0.8465)(488,0.8457)(489,0.8463)(490,0.847)(491,0.8472)(492,0.8474)(493,0.847)(494,0.8466)(495,0.8489)(496,0.8494)(497,0.8501)(498,0.8509)(499,0.852)
};\addlegendentry{ROTAF $B=5$}
\addplot [semithick, color =olive, mark=None]
coordinates {
(0,0.0856)(1,0.1322)(2,0.1423)(3,0.1195)(4,0.1005)(5,0.1156)(6,0.0762)(7,0.1014)(8,0.1335)(9,0.1717)(10,0.1505)(11,0.168)(12,0.2046)(13,0.1789)(14,0.1776)(15,0.1334)(16,0.116)(17,0.1231)(18,0.126)(19,0.118)(20,0.1364)(21,0.1332)(22,0.1153)(23,0.1311)(24,0.1459)(25,0.1506)(26,0.1394)(27,0.1316)(28,0.1265)(29,0.1013)(30,0.0987)(31,0.0864)(32,0.0708)(33,0.0745)(34,0.1077)(35,0.1163)(36,0.0945)(37,0.0891)(38,0.0776)(39,0.0904)(40,0.0877)(41,0.0881)(42,0.1062)(43,0.108)(44,0.1039)(45,0.1045)(46,0.0933)(47,0.0855)(48,0.1013)(49,0.1078)(50,0.0932)(51,0.1111)(52,0.1127)(53,0.1115)(54,0.1222)(55,0.1237)(56,0.113)(57,0.1219)(58,0.1245)(59,0.115)(60,0.1109)(61,0.1239)(62,0.1365)(63,0.1231)(64,0.1345)(65,0.1171)(66,0.1061)(67,0.1124)(68,0.1239)(69,0.1263)(70,0.1178)(71,0.0986)(72,0.1212)(73,0.1258)(74,0.1156)(75,0.1049)(76,0.0998)(77,0.1227)(78,0.1112)(79,0.1083)(80,0.107)(81,0.1028)(82,0.1232)(83,0.1043)(84,0.1294)(85,0.1553)(86,0.1561)(87,0.1225)(88,0.1118)(89,0.096)(90,0.0956)(91,0.1191)(92,0.1439)(93,0.1625)(94,0.1506)(95,0.1364)(96,0.1504)(97,0.1408)(98,0.1459)(99,0.1071)(100,0.1309)(101,0.1152)(102,0.1039)(103,0.1019)(104,0.1315)(105,0.1513)(106,0.1401)(107,0.1128)(108,0.1214)(109,0.1014)(110,0.1044)(111,0.1028)(112,0.0846)(113,0.0866)(114,0.0824)(115,0.0927)(116,0.0946)(117,0.1045)(118,0.0987)(119,0.1225)(120,0.1197)(121,0.1244)(122,0.1158)(123,0.099)(124,0.09)(125,0.0892)(126,0.0933)(127,0.1195)(128,0.1311)(129,0.1022)(130,0.0973)(131,0.0907)(132,0.0966)(133,0.1087)(134,0.1221)(135,0.112)(136,0.1232)(137,0.1029)(138,0.0712)(139,0.0969)(140,0.099)(141,0.0834)(142,0.0863)(143,0.1129)(144,0.1031)(145,0.1129)(146,0.1292)(147,0.1199)(148,0.148)(149,0.1211)(150,0.1133)(151,0.0869)(152,0.0951)(153,0.0893)(154,0.0801)(155,0.09)(156,0.0951)(157,0.087)(158,0.0923)(159,0.0908)(160,0.0914)(161,0.1263)(162,0.1157)(163,0.1186)(164,0.1184)(165,0.1206)(166,0.1322)(167,0.1442)(168,0.1629)(169,0.1465)(170,0.1294)(171,0.0913)(172,0.0711)(173,0.0725)(174,0.0809)(175,0.0827)(176,0.0729)(177
,0.0989)(178,0.1224)(179,0.1191)(180,0.168)(181,0.1363)(182,0.1363)(183,0.1301)(184,0.1249)(185,0.1288)(186,0.1551)(187,0.1523)(188,0.1341)(189,0.0995)(190,0.1015)(191,0.0881)(192,0.1045)(193,0.1055)(194,0.1045)(195,0.107)(196,0.1447)(197,0.1243)(198,0.1169)(199,0.0982)(200,0.1294)(201,0.1272)(202,0.1662)(203,0.1696)(204,0.1746)(205,0.1685)(206,0.1468)(207,0.1506)(208,0.1482)(209,0.1452)(210,0.1696)(211,0.1813)(212,0.1638)(213,0.1684)(214,0.1641)(215,0.1199)(216,0.0945)(217,0.1265)(218,0.1329)(219,0.1766)(220,0.1722)(221,0.1454)(222,0.1391)(223,0.1625)(224,0.1554)(225,0.1107)(226,0.1229)(227,0.1207)(228,0.1115)(229,0.0995)(230,0.1192)(231,0.1218)(232,0.1231)(233,0.1166)(234,0.1084)(235,0.0982)(236,0.102)(237,0.0954)(238,0.0968)(239,0.0745)(240,0.0826)(241,0.0741)(242,0.0756)(243,0.0855)(244,0.1018)(245,0.1197)(246,0.1282)(247,0.1468)(248,0.1616)(249,0.1586)(250,0.1626)(251,0.146)(252,0.1476)(253,0.1529)(254,0.151)(255,0.1552)(256,0.1436)(257,0.1065)(258,0.1287)(259,0.1188)(260,0.119)(261,0.1024)(262,0.0899)(263,0.077)(264,0.073)(265,0.0702)(266,0.0873)(267,0.1053)(268,0.1319)(269,0.1405)(270,0.1342)(271,0.1362)(272,0.1325)(273,0.1339)(274,0.1464)(275,0.1489)(276,0.149)(277,0.1278)(278,0.1199)(279,0.1323)(280,0.1339)(281,0.1176)(282,0.1216)(283,0.1307)(284,0.1265)(285,0.1269)(286,0.1109)(287,0.1081)(288,0.1077)(289,0.1246)(290,0.159)(291,0.1537)(292,0.1489)(293,0.1262)(294,0.1328)(295,0.1089)(296,0.0805)(297,0.0789)(298,0.1096)(299,0.1022)(300,0.0834)(301,0.0692)(302,0.058)(303,0.0846)(304,0.0565)(305,0.0538)(306,0.0563)(307,0.065)(308,0.0596)(309,0.0628)(310,0.0674)(311,0.0606)(312,0.0615)(313,0.0733)(314,0.0804)(315,0.0861)(316,0.0789)(317,0.0946)(318,0.1123)(319,0.138)(320,0.1752)(321,0.1827)(322,0.2073)(323,0.1749)(324,0.1481)(325,0.131)(326,0.1165)(327,0.129)(328,0.1321)(329,0.1324)(330,0.1106)(331,0.1302)(332,0.1442)(333,0.1249)(334,0.119)(335,0.1492)(336,0.1499)(337,0.1278)(338,0.1308)(339,0.1358)(340,0.1496)(341,0.1206)(342,0.117)(343,0.1183)(344,0.1108)(345,
0.1093)(346,0.1201)(347,0.1061)(348,0.0653)(349,0.0795)(350,0.0952)(351,0.1013)(352,0.0862)(353,0.0827)(354,0.073)(355,0.0777)(356,0.074)(357,0.0873)(358,0.1137)(359,0.1107)(360,0.0949)(361,0.113)(362,0.0841)(363,0.0955)(364,0.1091)(365,0.1221)(366,0.1202)(367,0.1112)(368,0.1369)(369,0.1475)(370,0.1647)(371,0.1624)(372,0.1575)(373,0.1484)(374,0.1602)(375,0.1779)(376,0.151)(377,0.1608)(378,0.1379)(379,0.1471)(380,0.1468)(381,0.1736)(382,0.1727)(383,0.169)(384,0.1951)(385,0.1892)(386,0.2018)(387,0.191)(388,0.1984)(389,0.1877)(390,0.2087)(391,0.1779)(392,0.1908)(393,0.1776)(394,0.2165)(395,0.1931)(396,0.181)(397,0.1647)(398,0.1618)(399,0.1609)(400,0.158)(401,0.152)(402,0.156)(403,0.1293)(404,0.1329)(405,0.1244)(406,0.1122)(407,0.1098)(408,0.0988)(409,0.082)(410,0.0873)(411,0.0851)(412,0.0773)(413,0.0851)(414,0.0982)(415,0.0848)(416,0.0875)(417,0.0925)(418,0.1157)(419,0.1221)(420,0.1225)(421,0.1342)(422,0.1334)(423,0.1265)(424,0.1266)(425,0.117)(426,0.0999)(427,0.1249)(428,0.1389)(429,0.15)(430,0.1816)(431,0.1708)(432,0.1535)(433,0.1434)(434,0.1277)(435,0.1213)(436,0.1126)(437,0.0957)(438,0.0817)(439,0.0931)(440,0.1102)(441,0.1326)(442,0.155)(443,0.1125)(444,0.1111)(445,0.113)(446,0.1099)(447,0.1442)(448,0.1416)(449,0.158)(450,0.1354)(451,0.1664)(452,0.1492)(453,0.1897)(454,0.1921)(455,0.1588)(456,0.1641)(457,0.1674)(458,0.1694)(459,0.1534)(460,0.1377)(461,0.089)(462,0.1056)(463,0.1107)(464,0.1103)(465,0.1126)(466,0.1117)(467,0.1263)(468,0.0851)(469,0.092)(470,0.1007)(471,0.1307)(472,0.1104)(473,0.1638)(474,0.1553)(475,0.1657)(476,0.19)(477,0.2027)(478,0.2497)(479,0.219)(480,0.2112)(481,0.2005)(482,0.2262)(483,0.2427)(484,0.2509)(485,0.2671)(486,0.2512)(487,0.2509)(488,0.2278)(489,0.2025)(490,0.2187)(491,0.2209)(492,0.2434)(493,0.2418)(494,0.2603)(495,0.2623)(496,0.2599)(497,0.2365)(498,0.2454)(499,0.2658)
};\addlegendentry{COTAF $B=5$}
\coordinate (c1) at (axis cs: 180,0.58);
\coordinate (c2) at (axis cs: 280,0.4);
\spy on (c1) in node at (c2);
\end{axis}
\end{tikzpicture}}
\subfigure[CNN model]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
ymin=0.0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
legend style={nodes={scale=0.7, transform shape}},
legend image post style={},
spy using outlines={red, circle, magnification=4, size=25 * 4,
connect spies}
]
\addplot+[semithick, color =red, mark=x, mark options={mark repeat=20,mark phase=1}]
coordinates {
(0,0.1039)(1,0.0854)(2,0.1161)(3,0.1608)(4,0.231)(5,0.3012)(6,0.3559)(7,0.3963)(8,0.4384)(9,0.4805)(10,0.5132)(11,0.5604)(12,0.5872)(13,0.6112)(14,0.6324)(15,0.659)(16,0.6793)(17,0.6978)(18,0.7176)(19,0.7358)(20,0.7456)(21,0.7637)(22,0.7676)(23,0.7787)(24,0.7802)(25,0.7871)(26,0.7854)(27,0.8019)(28,0.7985)(29,0.8074)(30,0.8096)(31,0.8048)(32,0.7934)(33,0.8035)(34,0.7901)(35,0.8052)(36,0.7944)(37,0.8046)(38,0.767)(39,0.8031)(40,0.7984)(41,0.8107)(42,0.783)(43,0.802)(44,0.7754)(45,0.806)(46,0.7881)(47,0.8083)(48,0.8037)(49,0.8313)(50,0.8275)(51,0.8286)(52,0.8222)(53,0.8375)(54,0.8413)(55,0.854)(56,0.858)(57,0.868)(58,0.8578)(59,0.8603)(60,0.8485)(61,0.8566)(62,0.8566)(63,0.8668)(64,0.8656)(65,0.8637)(66,0.8654)(67,0.8726)(68,0.8716)(69,0.8798)(70,0.875)(71,0.878)(72,0.8786)(73,0.8871)(74,0.8855)(75,0.894)(76,0.8933)(77,0.896)(78,0.896)(79,0.8954)(80,0.8924)(81,0.8985)(82,0.8982)(83,0.8983)(84,0.8976)(85,0.9011)(86,0.8976)(87,0.9022)(88,0.8973)(89,0.9012)(90,0.9023)(91,0.9026)(92,0.9029)(93,0.9018)(94,0.907)(95,0.9049)(96,0.9055)(97,0.9012)(98,0.9048)(99,0.9061)(100,0.9046)(101,0.9034)(102,0.8998)(103,0.9026)(104,0.9046)(105,0.9055)(106,0.907)(107,0.9036)(108,0.8975)(109,0.8973)(110,0.8907)(111,0.9005)(112,0.8945)(113,0.9043)(114,0.9029)(115,0.9073)(116,0.9049)(117,0.9084)(118,0.9083)(119,0.9083)(120,0.9075)(121,0.9123)(122,0.9113)(123,0.9118)(124,0.907)(125,0.9074)(126,0.9065)(127,0.9089)(128,0.9098)(129,0.9144)(130,0.9131)(131,0.9171)(132,0.9183)(133,0.9175)(134,0.9156)(135,0.9187)(136,0.9181)(137,0.9174)(138,0.9181)(139,0.9156)(140,0.9167)(141,0.9132)(142,0.9136)(143,0.9079)(144,0.9103)(145,0.905)(146,0.9079)(147,0.907)(148,0.9106)(149,0.9079)(150,0.9088)(151,0.9147)(152,0.9154)(153,0.9179)(154,0.9178)(155,0.9173)(156,0.9182)(157,0.9175)(158,0.9204)(159,0.9151)(160,0.9168)(161,0.9174)(162,0.9127)(163,0.9114)(164,0.918)(165,0.9146)(166,0.9185)(167,0.9164)(168,0.9195)(169,0.9185)(170,0.9203)(171,0.9176)(172,0.9206)(173,0.9184)(174,0.9164)(175,0.9174)(176,0.9177)(177,0
.918)(178,0.9188)(179,0.9171)(180,0.9189)(181,0.9182)(182,0.9194)(183,0.914)(184,0.9196)(185,0.9189)(186,0.9187)(187,0.9163)(188,0.9198)(189,0.9185)(190,0.9164)(191,0.9197)(192,0.9216)(193,0.9209)(194,0.9197)(195,0.9219)(196,0.9223)(197,0.9194)(198,0.9211)(199,0.9227)(200,0.9192)(201,0.9209)(202,0.9194)(203,0.9213)(204,0.9188)(205,0.9231)(206,0.917)(207,0.9129)(208,0.9047)(209,0.9117)(210,0.9086)(211,0.9169)(212,0.9153)(213,0.9176)(214,0.9164)(215,0.9181)(216,0.9172)(217,0.913)(218,0.9111)(219,0.9168)(220,0.9226)(221,0.9225)(222,0.9222)(223,0.9237)(224,0.9223)(225,0.9166)(226,0.9196)(227,0.9191)(228,0.9195)(229,0.9237)(230,0.9187)(231,0.9163)(232,0.9104)(233,0.9097)(234,0.9135)(235,0.9096)(236,0.8987)(237,0.8972)(238,0.888)(239,0.8941)(240,0.898)(241,0.8994)(242,0.9032)(243,0.9139)(244,0.9138)(245,0.9201)(246,0.9217)(247,0.9268)(248,0.9241)(249,0.9286)(250,0.9259)(251,0.9246)(252,0.9253)(253,0.9272)(254,0.9259)(255,0.9275)(256,0.9269)(257,0.9275)(258,0.9263)(259,0.9266)(260,0.9274)(261,0.9275)(262,0.928)(263,0.9283)(264,0.9278)(265,0.9276)(266,0.9274)(267,0.9293)(268,0.9248)(269,0.9259)(270,0.9295)(271,0.9294)(272,0.9273)(273,0.9267)(274,0.9268)(275,0.9296)(276,0.929)(277,0.9296)(278,0.9294)(279,0.9283)(280,0.9292)(281,0.9287)(282,0.9301)(283,0.9296)(284,0.929)(285,0.9295)(286,0.9282)(287,0.925)(288,0.9264)(289,0.93)(290,0.9286)(291,0.9275)(292,0.9277)(293,0.929)(294,0.9272)(295,0.9274)(296,0.9283)(297,0.9278)(298,0.9302)(299,0.9289)(300,0.9299)(301,0.9259)(302,0.9308)(303,0.9244)(304,0.9293)(305,0.9253)(306,0.9279)(307,0.9254)(308,0.9288)(309,0.926)(310,0.9278)(311,0.9263)(312,0.9259)(313,0.9253)(314,0.9256)(315,0.923)(316,0.9262)(317,0.9278)(318,0.9263)(319,0.9263)(320,0.9268)(321,0.9273)(322,0.9273)(323,0.9248)(324,0.9237)(325,0.9183)(326,0.9171)(327,0.9184)(328,0.9255)(329,0.9238)(330,0.9287)(331,0.9298)(332,0.9286)(333,0.9236)(334,0.9317)(335,0.9287)(336,0.9309)(337,0.9297)(338,0.9316)(339,0.9298)(340,0.9309)(341,0.9258)(342,0.9283)(343,0.9283)(344,0.9303)(345,
0.9287)(346,0.9305)(347,0.9305)(348,0.9306)(349,0.9312)(350,0.9325)(351,0.9306)(352,0.9314)(353,0.9323)(354,0.9305)(355,0.928)(356,0.931)(357,0.9298)(358,0.9309)(359,0.9319)(360,0.9315)(361,0.9332)(362,0.931)(363,0.9309)(364,0.9303)(365,0.9336)(366,0.9316)(367,0.9315)(368,0.9313)(369,0.9324)(370,0.931)(371,0.9306)(372,0.9318)(373,0.9254)(374,0.9318)(375,0.9243)(376,0.9276)(377,0.9216)(378,0.9318)(379,0.9263)(380,0.9311)(381,0.9278)(382,0.9334)(383,0.932)(384,0.9339)(385,0.9288)(386,0.9351)(387,0.931)(388,0.934)(389,0.9344)(390,0.9333)(391,0.936)(392,0.9341)(393,0.9342)(394,0.9306)(395,0.9342)(396,0.9345)(397,0.9349)(398,0.9356)(399,0.9346)(400,0.9342)(401,0.9337)(402,0.9337)(403,0.9345)(404,0.9343)(405,0.9337)(406,0.9356)(407,0.9332)(408,0.9358)(409,0.9368)(410,0.9357)(411,0.9343)(412,0.9347)(413,0.9352)(414,0.9376)(415,0.9362)(416,0.9339)(417,0.9333)(418,0.9359)(419,0.9328)(420,0.9356)(421,0.9328)(422,0.936)(423,0.9359)(424,0.9368)(425,0.9331)(426,0.9348)(427,0.932)(428,0.9337)(429,0.9357)(430,0.9333)(431,0.9353)(432,0.933)(433,0.9376)(434,0.9375)(435,0.9362)(436,0.9382)(437,0.9367)(438,0.9369)(439,0.9381)(440,0.9351)(441,0.9373)(442,0.9365)(443,0.9387)(444,0.9343)(445,0.936)(446,0.9361)(447,0.9389)(448,0.9357)(449,0.9364)(450,0.9337)(451,0.9339)(452,0.9374)(453,0.9347)(454,0.9363)(455,0.9334)(456,0.9386)(457,0.9371)(458,0.9362)(459,0.9351)(460,0.9363)(461,0.9357)(462,0.9378)(463,0.9343)(464,0.9363)(465,0.9357)(466,0.9365)(467,0.9368)(468,0.9356)(469,0.9361)(470,0.9344)(471,0.9326)(472,0.9322)(473,0.9308)(474,0.9294)(475,0.9315)(476,0.929)(477,0.9314)(478,0.9302)(479,0.9297)(480,0.9318)(481,0.9299)(482,0.9315)(483,0.9318)(484,0.9315)(485,0.9333)(486,0.9341)(487,0.9338)(488,0.9366)(489,0.9336)(490,0.9356)(491,0.9334)(492,0.9336)(493,0.9341)(494,0.9333)(495,0.9336)(496,0.9349)(497,0.9327)(498,0.9326)(499,0.9332)
};\addlegendentry{ROTAF $B=0$}
\addplot+[semithick, color =blue, mark=square, mark options={mark repeat=20,mark phase=1}]
coordinates {
(0,0.1039)(1,0.0909)(2,0.1163)(3,0.1589)(4,0.2231)(5,0.2889)(6,0.3482)(7,0.4051)(8,0.4583)(9,0.5077)(10,0.5463)(11,0.5843)(12,0.6146)(13,0.6409)(14,0.666)(15,0.6871)(16,0.7044)(17,0.7184)(18,0.7348)(19,0.7459)(20,0.7577)(21,0.7678)(22,0.7749)(23,0.7835)(24,0.7899)(25,0.7965)(26,0.804)(27,0.8102)(28,0.8146)(29,0.8193)(30,0.8211)(31,0.8259)(32,0.8296)(33,0.8337)(34,0.8355)(35,0.8407)(36,0.8411)(37,0.8448)(38,0.8451)(39,0.8492)(40,0.8451)(41,0.8519)(42,0.8444)(43,0.8518)(44,0.8392)(45,0.8496)(46,0.8343)(47,0.8482)(48,0.8312)(49,0.8477)(50,0.831)(51,0.8504)(52,0.8363)(53,0.8534)(54,0.8448)(55,0.8608)(56,0.852)(57,0.8675)(58,0.8609)(59,0.8702)(60,0.8646)(61,0.8752)(62,0.869)(63,0.8787)(64,0.8744)(65,0.881)(66,0.8784)(67,0.8846)(68,0.883)(69,0.8864)(70,0.8851)(71,0.8891)(72,0.8891)(73,0.8918)(74,0.8903)(75,0.8932)(76,0.8914)(77,0.895)(78,0.8936)(79,0.8971)(80,0.8958)(81,0.8992)(82,0.8981)(83,0.8994)(84,0.8991)(85,0.9021)(86,0.9018)(87,0.9041)(88,0.9015)(89,0.9036)(90,0.9039)(91,0.9051)(92,0.9052)(93,0.9049)(94,0.9061)(95,0.9072)(96,0.9079)(97,0.9082)(98,0.908)(99,0.9094)(100,0.9089)(101,0.9101)(102,0.9104)(103,0.9112)(104,0.9106)(105,0.9111)(106,0.9117)(107,0.9132)(108,0.9127)(109,0.9137)(110,0.9127)(111,0.9128)(112,0.9143)(113,0.9146)(114,0.9147)(115,0.9148)(116,0.9149)(117,0.9148)(118,0.916)(119,0.916)(120,0.916)(121,0.916)(122,0.9169)(123,0.9167)(124,0.9172)(125,0.9174)(126,0.9178)(127,0.9177)(128,0.919)(129,0.9175)(130,0.9194)(131,0.9189)(132,0.9199)(133,0.9197)(134,0.9207)(135,0.92)(136,0.9213)(137,0.9197)(138,0.9206)(139,0.92)(140,0.9207)(141,0.9201)(142,0.9209)(143,0.9215)(144,0.9226)(145,0.9214)(146,0.923)(147,0.9225)(148,0.9227)(149,0.9227)(150,0.9234)(151,0.9235)(152,0.9239)(153,0.9236)(154,0.9245)(155,0.9248)(156,0.9255)(157,0.9245)(158,0.9246)(159,0.9247)(160,0.9252)(161,0.9256)(162,0.9253)(163,0.9266)(164,0.9261)(165,0.9252)(166,0.9266)(167,0.9255)(168,0.9262)(169,0.9263)(170,0.9258)(171,0.927)(172,0.9276)(173,0.9274)(174,0.9276)(175,0.9271)(176,0.9284)(177,0
.927)(178,0.928)(179,0.9275)(180,0.9275)(181,0.9271)(182,0.9279)(183,0.9275)(184,0.9279)(185,0.9288)(186,0.9283)(187,0.9291)(188,0.9299)(189,0.9295)(190,0.9303)(191,0.9297)(192,0.9307)(193,0.9298)(194,0.9312)(195,0.9307)(196,0.93)(197,0.9318)(198,0.9307)(199,0.9309)(200,0.9304)(201,0.9318)(202,0.9304)(203,0.9326)(204,0.9311)(205,0.9326)(206,0.9321)(207,0.9324)(208,0.9312)(209,0.9333)(210,0.9326)(211,0.9328)(212,0.9326)(213,0.9332)(214,0.9319)(215,0.9334)(216,0.9322)(217,0.9325)(218,0.9326)(219,0.9338)(220,0.933)(221,0.9332)(222,0.9342)(223,0.9351)(224,0.9335)(225,0.9349)(226,0.9343)(227,0.9343)(228,0.9347)(229,0.9348)(230,0.9348)(231,0.9355)(232,0.9357)(233,0.935)(234,0.9354)(235,0.9361)(236,0.9354)(237,0.9357)(238,0.9358)(239,0.9373)(240,0.9361)(241,0.9365)(242,0.9366)(243,0.9365)(244,0.9369)(245,0.9367)(246,0.9373)(247,0.9371)(248,0.9379)(249,0.9375)(250,0.9379)(251,0.9381)(252,0.9373)(253,0.9378)(254,0.9378)(255,0.9382)(256,0.9383)(257,0.9385)(258,0.9385)(259,0.9378)(260,0.9388)(261,0.938)(262,0.939)(263,0.938)(264,0.9393)(265,0.9388)(266,0.9395)(267,0.9396)(268,0.9396)(269,0.9398)(270,0.9396)(271,0.9397)(272,0.9405)(273,0.94)(274,0.9398)(275,0.9399)(276,0.9406)(277,0.9406)(278,0.9405)(279,0.9413)(280,0.9408)(281,0.9411)(282,0.9412)(283,0.9415)(284,0.9411)(285,0.9412)(286,0.9414)(287,0.9412)(288,0.9416)(289,0.9421)(290,0.9419)(291,0.942)(292,0.9412)(293,0.9424)(294,0.9415)(295,0.9422)(296,0.9421)(297,0.9429)(298,0.9429)(299,0.9424)(300,0.943)(301,0.9432)(302,0.9422)(303,0.9434)(304,0.9432)(305,0.944)(306,0.9431)(307,0.9434)(308,0.9441)(309,0.9439)(310,0.9439)(311,0.9443)(312,0.9437)(313,0.9443)(314,0.9436)(315,0.9434)(316,0.9437)(317,0.9438)(318,0.9435)(319,0.9442)(320,0.9435)(321,0.9438)(322,0.9442)(323,0.944)(324,0.9446)(325,0.9454)(326,0.9458)(327,0.9458)(328,0.9445)(329,0.946)(330,0.945)(331,0.9448)(332,0.9461)(333,0.9446)(334,0.9449)(335,0.9463)(336,0.9461)(337,0.9459)(338,0.9457)(339,0.9466)(340,0.9457)(341,0.9459)(342,0.9466)(343,0.9467)(344,0.9463)(345,0.
9457)(346,0.9464)(347,0.9453)(348,0.9464)(349,0.9458)(350,0.9456)(351,0.9454)(352,0.9458)(353,0.9452)(354,0.9464)(355,0.9468)(356,0.9463)(357,0.947)(358,0.9469)(359,0.9468)(360,0.9472)(361,0.9474)(362,0.9469)(363,0.9479)(364,0.9463)(365,0.947)(366,0.9471)(367,0.9466)(368,0.9468)(369,0.9471)(370,0.9469)(371,0.9477)(372,0.9477)(373,0.9474)(374,0.9473)(375,0.9475)(376,0.9481)(377,0.948)(378,0.9474)(379,0.9478)(380,0.9477)(381,0.9474)(382,0.9473)(383,0.948)(384,0.9478)(385,0.9476)(386,0.9476)(387,0.9478)(388,0.9476)(389,0.9482)(390,0.9475)(391,0.9477)(392,0.9476)(393,0.9484)(394,0.9479)(395,0.9478)(396,0.9486)(397,0.9481)(398,0.9482)(399,0.9485)(400,0.9488)(401,0.949)(402,0.9489)(403,0.9487)(404,0.9489)(405,0.9493)(406,0.9491)(407,0.9487)(408,0.9492)(409,0.9488)(410,0.949)(411,0.9496)(412,0.949)(413,0.949)(414,0.949)(415,0.9492)(416,0.9489)(417,0.9495)(418,0.949)(419,0.9485)(420,0.949)(421,0.9489)(422,0.9493)(423,0.9497)(424,0.9497)(425,0.9497)(426,0.9496)(427,0.9496)(428,0.9488)(429,0.95)(430,0.9491)(431,0.9495)(432,0.9498)(433,0.9497)(434,0.9493)(435,0.9495)(436,0.9507)(437,0.9494)(438,0.9494)(439,0.9498)(440,0.9487)(441,0.9495)(442,0.9487)(443,0.9498)(444,0.9497)(445,0.9493)(446,0.9504)(447,0.9497)(448,0.9497)(449,0.9497)(450,0.9491)(451,0.9497)(452,0.9491)(453,0.9495)(454,0.9491)(455,0.9497)(456,0.9499)(457,0.9506)(458,0.9507)(459,0.95)(460,0.9504)(461,0.9507)(462,0.9506)(463,0.95)(464,0.9503)(465,0.9506)(466,0.9503)(467,0.9506)(468,0.9506)(469,0.9506)(470,0.9499)(471,0.9502)(472,0.9509)(473,0.9504)(474,0.9508)(475,0.951)(476,0.9511)(477,0.9508)(478,0.9511)(479,0.9507)(480,0.9509)(481,0.951)(482,0.9505)(483,0.9513)(484,0.951)(485,0.951)(486,0.9507)(487,0.95)(488,0.9508)(489,0.9503)(490,0.9514)(491,0.951)(492,0.9508)(493,0.9511)(494,0.9511)(495,0.9509)(496,0.9506)(497,0.9503)(498,0.9514)(499,0.9507)
};\addlegendentry{COTAF $B=0$}
\addplot+[semithick, color =olive, mark=none, mark options={mark repeat=20,mark phase=1}]coordinates {
(0,0.1039)(1,0.1239)(2,0.0994)(3,0.1148)(4,0.1204)(5,0.0969)(6,0.0927)(7,0.1216)(8,0.0921)(9,0.153)(10,0.1551)(11,0.158)(12,0.1236)(13,0.1642)(14,0.137)(15,0.1632)(16,0.1189)(17,0.1204)(18,0.1649)(19,0.1245)(20,0.228)(21,0.1319)(22,0.1807)(23,0.1833)(24,0.1333)(25,0.1696)(26,0.1589)(27,0.1318)(28,0.1331)(29,0.1511)(30,0.1533)(31,0.175)(32,0.1926)(33,0.1498)(34,0.1501)(35,0.1733)(36,0.1068)(37,0.1564)(38,0.1368)(39,0.1549)(40,0.147)(41,0.2414)(42,0.1967)(43,0.1765)(44,0.1257)(45,0.1607)(46,0.1808)(47,0.2084)(48,0.1435)(49,0.1964)(50,0.277)(51,0.146)(52,0.1862)(53,0.1672)(54,0.2002)(55,0.1542)(56,0.1754)(57,0.1515)(58,0.1793)(59,0.0891)(60,0.1692)(61,0.1518)(62,0.1328)(63,0.2523)(64,0.2074)(65,0.2358)(66,0.208)(67,0.1854)(68,0.1404)(69,0.1606)(70,0.1322)(71,0.1468)(72,0.1333)(73,0.1613)(74,0.1282)(75,0.1142)(76,0.0906)(77,0.1653)(78,0.1374)(79,0.1501)(80,0.1898)(81,0.1625)(82,0.1611)(83,0.0884)(84,0.1302)(85,0.161)(86,0.1413)(87,0.1521)(88,0.1428)(89,0.1509)(90,0.1373)(91,0.1422)(92,0.0915)(93,0.107)(94,0.1269)(95,0.1817)(96,0.1763)(97,0.2538)(98,0.1667)(99,0.1785)(100,0.2147)(101,0.1862)(102,0.1753)(103,0.1624)(104,0.1993)(105,0.1756)(106,0.1798)(107,0.2384)(108,0.1249)(109,0.1232)(110,0.2203)(111,0.0927)(112,0.1611)(113,0.165)(114,0.2267)(115,0.2186)(116,0.1451)(117,0.1808)(118,0.2551)(119,0.1909)(120,0.1768)(121,0.1714)(122,0.2582)(123,0.1935)(124,0.1867)(125,0.285)(126,0.1933)(127,0.2176)(128,0.2072)(129,0.1321)(130,0.2169)(131,0.1258)(132,0.1945)(133,0.142)(134,0.1415)(135,0.1647)(136,0.1862)(137,0.0979)(138,0.2066)(139,0.1402)(140,0.2213)(141,0.2113)(142,0.1646)(143,0.1884)(144,0.2478)(145,0.234)(146,0.1822)(147,0.1681)(148,0.1544)(149,0.1197)(150,0.1296)(151,0.1104)(152,0.1103)(153,0.131)(154,0.1925)(155,0.1571)(156,0.1383)(157,0.1333)(158,0.2063)(159,0.1692)(160,0.232)(161,0.2377)(162,0.2331)(163,0.233)(164,0.166)(165,0.1368)(166,0.1578)(167,0.1733)(168,0.1642)(169,0.1627)(170,0.183)(171,0.1734)(172,0.1809)(173,0.1414)(174,0.1789)(175,0.1835)(176,0.2004)(177,0
.2013)(178,0.1912)(179,0.1949)(180,0.2101)(181,0.1295)(182,0.2532)(183,0.2029)(184,0.1363)(185,0.184)(186,0.1515)(187,0.1575)(188,0.1791)(189,0.1476)(190,0.1565)(191,0.2075)(192,0.1848)(193,0.1952)(194,0.2193)(195,0.1974)(196,0.2006)(197,0.1984)(198,0.183)(199,0.224)(200,0.2381)(201,0.2164)(202,0.3028)(203,0.2324)(204,0.2598)(205,0.2553)(206,0.2111)(207,0.2434)(208,0.1747)(209,0.1669)(210,0.1639)(211,0.2775)(212,0.1912)(213,0.1184)(214,0.1848)(215,0.2083)(216,0.1884)(217,0.1639)(218,0.19)(219,0.1691)(220,0.1401)(221,0.1689)(222,0.1587)(223,0.1326)(224,0.1469)(225,0.0977)(226,0.1624)(227,0.2076)(228,0.1336)(229,0.1443)(230,0.2152)(231,0.1826)(232,0.1792)(233,0.187)(234,0.154)(235,0.2619)(236,0.2362)(237,0.2071)(238,0.1795)(239,0.1953)(240,0.2045)(241,0.1569)(242,0.1961)(243,0.2198)(244,0.2031)(245,0.1902)(246,0.1351)(247,0.1756)(248,0.1382)(249,0.1408)(250,0.1433)(251,0.1893)(252,0.2219)(253,0.2237)(254,0.1524)(255,0.1705)(256,0.2107)(257,0.2102)(258,0.2624)(259,0.167)(260,0.216)(261,0.176)(262,0.227)(263,0.1771)(264,0.233)(265,0.2178)(266,0.1277)(267,0.1529)(268,0.1771)(269,0.189)(270,0.169)(271,0.163)(272,0.1638)(273,0.1708)(274,0.1552)(275,0.1476)(276,0.2106)(277,0.2305)(278,0.168)(279,0.2382)(280,0.2136)(281,0.2802)(282,0.2338)(283,0.1871)(284,0.2062)(285,0.149)(286,0.1462)(287,0.1387)(288,0.2033)(289,0.2305)(290,0.194)(291,0.172)(292,0.175)(293,0.1902)(294,0.2014)(295,0.2005)(296,0.1798)(297,0.163)(298,0.1472)(299,0.1296)(300,0.1401)(301,0.181)(302,0.1768)(303,0.1451)(304,0.2022)(305,0.1244)(306,0.1779)(307,0.1718)(308,0.1963)(309,0.2161)(310,0.1753)(311,0.1779)(312,0.209)(313,0.1123)(314,0.2196)(315,0.158)(316,0.2749)(317,0.1854)(318,0.2806)(319,0.2823)(320,0.193)(321,0.1875)(322,0.2212)(323,0.2106)(324,0.1507)(325,0.1836)(326,0.2657)(327,0.315)(328,0.2559)(329,0.237)(330,0.2008)(331,0.1311)(332,0.1976)(333,0.1975)(334,0.2267)(335,0.2336)(336,0.1918)(337,0.1208)(338,0.1616)(339,0.2077)(340,0.2707)(341,0.1993)(342,0.2376)(343,0.2741)(344,0.3294)(345,0.2515)(346,
0.2311)(347,0.2643)(348,0.2717)(349,0.2134)(350,0.1937)(351,0.1504)(352,0.11)(353,0.0767)(354,0.2148)(355,0.1745)(356,0.1796)(357,0.1246)(358,0.1568)(359,0.169)(360,0.2008)(361,0.1997)(362,0.1983)(363,0.2031)(364,0.23)(365,0.1784)(366,0.1802)(367,0.194)(368,0.2376)(369,0.1987)(370,0.2623)(371,0.2141)(372,0.2486)(373,0.1973)(374,0.2863)(375,0.1488)(376,0.227)(377,0.2076)(378,0.1679)(379,0.2837)(380,0.1439)(381,0.1317)(382,0.1751)(383,0.2714)(384,0.178)(385,0.1618)(386,0.2787)(387,0.2347)(388,0.2432)(389,0.2361)(390,0.297)(391,0.1498)(392,0.1933)(393,0.1535)(394,0.2386)(395,0.2115)(396,0.203)(397,0.2307)(398,0.2505)(399,0.2309)(400,0.2225)(401,0.2522)(402,0.2156)(403,0.224)(404,0.1903)(405,0.1991)(406,0.2418)(407,0.2529)(408,0.1515)(409,0.1777)(410,0.1968)(411,0.1564)(412,0.1965)(413,0.1687)(414,0.1212)(415,0.2874)(416,0.2494)(417,0.2538)(418,0.1851)(419,0.2725)(420,0.1649)(421,0.2092)(422,0.2215)(423,0.1556)(424,0.2412)(425,0.2167)(426,0.1587)(427,0.2173)(428,0.1982)(429,0.2333)(430,0.1994)(431,0.1606)(432,0.2209)(433,0.1063)(434,0.1379)(435,0.1835)(436,0.1604)(437,0.1537)(438,0.1415)(439,0.196)(440,0.1962)(441,0.1912)(442,0.1929)(443,0.2157)(444,0.141)(445,0.1839)(446,0.2546)(447,0.1573)(448,0.2008)(449,0.2691)(450,0.265)(451,0.1912)(452,0.2599)(453,0.1813)(454,0.2344)(455,0.1223)(456,0.1486)(457,0.151)(458,0.1853)(459,0.2599)(460,0.1959)(461,0.1919)(462,0.2596)(463,0.1987)(464,0.198)(465,0.2038)(466,0.1431)(467,0.1717)(468,0.2261)(469,0.1895)(470,0.2013)(471,0.1353)(472,0.1858)(473,0.1961)(474,0.1886)(475,0.2147)(476,0.2944)(477,0.2568)(478,0.262)(479,0.2108)(480,0.236)(481,0.1648)(482,0.1531)(483,0.1423)(484,0.1433)(485,0.2086)(486,0.1629)(487,0.1608)(488,0.2525)(489,0.2188)(490,0.1796)(491,0.214)(492,0.2092)(493,0.1596)(494,0.1177)(495,0.1501)(496,0.2027)(497,0.1647)(498,0.1304)(499,0.1959)};\addlegendentry{COTAF $B=5$}
\addplot+[semithick, color = green, mark = triangle, mark options={mark repeat=20,mark phase=1}]coordinates {
(0,0.1039)(1,0.0964)(2,0.1184)(3,0.1662)(4,0.2338)(5,0.297)(6,0.3571)(7,0.4125)(8,0.4732)(9,0.525)(10,0.5711)(11,0.6049)(12,0.6281)(13,0.6598)(14,0.6719)(15,0.6906)(16,0.6902)(17,0.7172)(18,0.7256)(19,0.7348)(20,0.7542)(21,0.7599)(22,0.7667)(23,0.7738)(24,0.7849)(25,0.7929)(26,0.7978)(27,0.7906)(28,0.8005)(29,0.8043)(30,0.8154)(31,0.8134)(32,0.8199)(33,0.8155)(34,0.8212)(35,0.8153)(36,0.8245)(37,0.8033)(38,0.8239)(39,0.8241)(40,0.8236)(41,0.8274)(42,0.8436)(43,0.8388)(44,0.8488)(45,0.8342)(46,0.846)(47,0.8276)(48,0.8544)(49,0.8473)(50,0.8639)(51,0.8598)(52,0.8601)(53,0.8646)(54,0.8643)(55,0.862)(56,0.8719)(57,0.8692)(58,0.8735)(59,0.8693)(60,0.8738)(61,0.8749)(62,0.8766)(63,0.872)(64,0.8764)(65,0.8695)(66,0.8726)(67,0.8709)(68,0.8779)(69,0.8768)(70,0.8766)(71,0.8736)(72,0.8775)(73,0.8726)(74,0.8746)(75,0.8631)(76,0.8702)(77,0.8634)(78,0.8736)(79,0.8493)(80,0.8369)(81,0.8173)(82,0.7785)(83,0.8315)(84,0.8151)(85,0.8238)(86,0.8087)(87,0.8475)(88,0.8558)(89,0.8766)(90,0.8827)(91,0.8862)(92,0.8908)(93,0.8917)(94,0.8926)(95,0.8908)(96,0.8936)(97,0.8952)(98,0.8962)(99,0.8974)(100,0.8957)(101,0.8952)(102,0.8958)(103,0.8953)(104,0.8962)(105,0.8881)(106,0.8916)(107,0.8927)(108,0.8937)(109,0.8905)(110,0.8857)(111,0.8835)(112,0.8874)(113,0.8953)(114,0.8956)(115,0.8995)(116,0.8957)(117,0.9009)(118,0.8985)(119,0.9022)(120,0.8997)(121,0.9027)(122,0.9017)(123,0.8997)(124,0.8999)(125,0.8935)(126,0.8973)(127,0.8928)(128,0.8997)(129,0.8916)(130,0.9)(131,0.8955)(132,0.9022)(133,0.8891)(134,0.8988)(135,0.8918)(136,0.894)(137,0.8857)(138,0.89)(139,0.8865)(140,0.9004)(141,0.8958)(142,0.9037)(143,0.9017)(144,0.9057)(145,0.8998)(146,0.9024)(147,0.8932)(148,0.8938)(149,0.8853)(150,0.8915)(151,0.8771)(152,0.8689)(153,0.8792)(154,0.8901)(155,0.8949)(156,0.9023)(157,0.9003)(158,0.9001)(159,0.9059)(160,0.9068)(161,0.9065)(162,0.9062)(163,0.9072)(164,0.9087)(165,0.9114)(166,0.9123)(167,0.9103)(168,0.9113)(169,0.9137)(170,0.9128)(171,0.9166)(172,0.913)(173,0.9154)(174,0.9145)(175,0.9164)(176,0.914
8)(177,0.917)(178,0.9171)(179,0.9143)(180,0.9164)(181,0.9126)(182,0.9132)(183,0.9085)(184,0.9077)(185,0.9011)(186,0.9117)(187,0.9112)(188,0.9102)(189,0.9158)(190,0.9169)(191,0.9144)(192,0.9177)(193,0.9118)(194,0.9143)(195,0.913)(196,0.9152)(197,0.9127)(198,0.9156)(199,0.9143)(200,0.9131)(201,0.914)(202,0.9129)(203,0.905)(204,0.9068)(205,0.9088)(206,0.913)(207,0.9079)(208,0.9069)(209,0.8912)(210,0.9008)(211,0.9019)(212,0.9006)(213,0.8949)(214,0.8926)(215,0.8841)(216,0.8902)(217,0.8762)(218,0.8842)(219,0.8754)(220,0.8801)(221,0.8817)(222,0.8873)(223,0.8929)(224,0.8947)(225,0.8969)(226,0.9036)(227,0.9084)(228,0.9038)(229,0.9036)(230,0.9085)(231,0.9146)(232,0.9139)(233,0.9115)(234,0.9113)(235,0.9149)(236,0.914)(237,0.913)(238,0.9127)(239,0.9152)(240,0.9206)(241,0.9183)(242,0.9176)(243,0.9177)(244,0.9103)(245,0.914)(246,0.9191)(247,0.9193)(248,0.9192)(249,0.9155)(250,0.9122)(251,0.9171)(252,0.9179)(253,0.9198)(254,0.9194)(255,0.9186)(256,0.9155)(257,0.9154)(258,0.918)(259,0.9202)(260,0.9209)(261,0.92)(262,0.9185)(263,0.9203)(264,0.9127)(265,0.9165)(266,0.9187)(267,0.9217)(268,0.9172)(269,0.9188)(270,0.9163)(271,0.9194)(272,0.9086)(273,0.9138)(274,0.9162)(275,0.9197)(276,0.9141)(277,0.9168)(278,0.9093)(279,0.9171)(280,0.9156)(281,0.9197)(282,0.9191)(283,0.9195)(284,0.9218)(285,0.9224)(286,0.9183)(287,0.9219)(288,0.919)(289,0.9202)(290,0.9189)(291,0.9214)(292,0.9159)(293,0.9138)(294,0.906)(295,0.9036)(296,0.8914)(297,0.8795)(298,0.8681)(299,0.8942)(300,0.8949)(301,0.9139)(302,0.9185)(303,0.9208)(304,0.9191)(305,0.9212)(306,0.9224)(307,0.9225)(308,0.92)(309,0.9191)(310,0.9184)(311,0.9194)(312,0.9178)(313,0.9204)(314,0.9179)(315,0.9193)(316,0.9165)(317,0.9178)(318,0.9159)(319,0.9194)(320,0.9207)(321,0.9201)(322,0.9184)(323,0.9226)(324,0.9216)(325,0.9175)(326,0.9188)(327,0.9149)(328,0.9203)(329,0.9185)(330,0.9184)(331,0.9181)(332,0.9188)(333,0.9196)(334,0.9182)(335,0.9227)(336,0.9255)(337,0.9226)(338,0.9231)(339,0.9242)(340,0.9221)(341,0.9208)(342,0.9198)(343,0.9219)(344,0.92
3)(345,0.925)(346,0.9248)(347,0.9227)(348,0.9251)(349,0.9223)(350,0.9237)(351,0.9216)(352,0.9238)(353,0.9216)(354,0.9235)(355,0.9247)(356,0.9269)(357,0.9238)(358,0.9238)(359,0.9259)(360,0.9217)(361,0.9248)(362,0.9251)(363,0.9246)(364,0.9259)(365,0.9261)(366,0.9231)(367,0.9253)(368,0.927)(369,0.9245)(370,0.9284)(371,0.9254)(372,0.9274)(373,0.9274)(374,0.9285)(375,0.9264)(376,0.9257)(377,0.9283)(378,0.9255)(379,0.9268)(380,0.9272)(381,0.9276)(382,0.9254)(383,0.9216)(384,0.9254)(385,0.9238)(386,0.9228)(387,0.9244)(388,0.9233)(389,0.926)(390,0.9253)(391,0.9167)(392,0.919)(393,0.9265)(394,0.924)(395,0.9214)(396,0.9221)(397,0.9239)(398,0.9259)(399,0.9249)(400,0.9254)(401,0.9253)(402,0.928)(403,0.9246)(404,0.9295)(405,0.9268)(406,0.9274)(407,0.9248)(408,0.927)(409,0.9144)(410,0.9178)(411,0.9165)(412,0.9251)(413,0.9295)(414,0.9291)(415,0.9242)(416,0.9233)(417,0.9237)(418,0.9287)(419,0.9203)(420,0.9303)(421,0.9274)(422,0.9213)(423,0.9259)(424,0.9289)(425,0.9237)(426,0.9244)(427,0.9207)(428,0.9272)(429,0.928)(430,0.9283)(431,0.9271)(432,0.9247)(433,0.9189)(434,0.9231)(435,0.9117)(436,0.92)(437,0.918)(438,0.9212)(439,0.9186)(440,0.9225)(441,0.9165)(442,0.9216)(443,0.9195)(444,0.9163)(445,0.9196)(446,0.9192)(447,0.919)(448,0.9175)(449,0.9205)(450,0.9211)(451,0.9237)(452,0.9211)(453,0.9218)(454,0.917)(455,0.9188)(456,0.9222)(457,0.9252)(458,0.9248)(459,0.9246)(460,0.9266)(461,0.9239)(462,0.9249)(463,0.9237)(464,0.9219)(465,0.9224)(466,0.924)(467,0.9274)(468,0.9244)(469,0.9235)(470,0.9263)(471,0.9253)(472,0.9272)(473,0.9262)(474,0.9239)(475,0.9261)(476,0.9223)(477,0.9206)(478,0.9245)(479,0.9226)(480,0.9243)(481,0.9263)(482,0.9248)(483,0.9223)(484,0.9267)(485,0.9171)(486,0.9212)(487,0.9023)(488,0.8982)(489,0.8539)(490,0.8474)(491,0.8304)(492,0.8372)(493,0.8526)(494,0.8715)(495,0.9009)(496,0.9254)(497,0.9262)(498,0.9288)(499,0.9269)};\addlegendentry{ROTAF $B=5$}
\end{axis}
\end{tikzpicture}}
\end{center}
\caption{Test accuracy vs. global training rounds with $B$ Byzantine clients applying Gaussian attacks (MNIST dataset, i.i.d. data).}
\label{gaussian_attacks}
\end{figure}
We explain hereafter how the training sets are divided among the clients in the i.i.d. and non-i.i.d. data settings:\\
{\bf i.i.d. data:} the whole training dataset is split uniformly at random among the clients, with the same number of samples per client.\\
{\bf non-i.i.d. data:} first, the classes are subsampled with exponentially decreasing proportions; that is, a fraction $\gamma^i$ of the samples of class $i$ is kept, for some $\gamma\in (0,1]$. Note that all classes keep the same number of samples when $\gamma = 1$. The same procedure is applied to the test dataset. The dataset is then sorted by label and split equally among the clients. This results in each client having samples from only a few classes.
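The non-i.i.d. partitioning described above can be sketched as follows. This is a minimal illustration, assuming labels are given as a NumPy array; the function name and toy dataset are ours, not part of the original experiments.

```python
import numpy as np

def partition_noniid(labels, num_clients, gamma=0.5, seed=0):
    """Keep a gamma**i fraction of each class i, sort the kept
    samples by label, then split them equally among the clients."""
    rng = np.random.default_rng(seed)
    kept = []
    for i in np.unique(labels):
        idx = np.where(labels == i)[0]
        n_keep = max(1, int(len(idx) * gamma ** i))
        kept.append(rng.choice(idx, n_keep, replace=False))
    kept = np.concatenate(kept)
    kept = kept[np.argsort(labels[kept], kind="stable")]  # sort by label
    return np.array_split(kept, num_clients)  # contiguous label shards

# Toy dataset: 10 classes with 100 samples each
labels = np.repeat(np.arange(10), 100)
shards = partition_noniid(labels, num_clients=20, gamma=0.7)
```

Because the kept indices are sorted by label before splitting, each shard covers a contiguous label range, so every client ends up with samples from only a few classes.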
At every global training round, each client performs one step of local batch-SGD with a minibatch of size $b=50$. The learning rate $\eta_t$ is fixed to $0.01$ in all local SGD steps. In all experiments, the number of groups is fixed to $G=20$, while the noise variance and the power constraint are set such that $\frac{P}{\sigma^2}=20$~dB, unless stated otherwise. The smoothing parameter of the Weiszfeld algorithm is $\epsilon=10^{-4}$. In all experiments, the transmission threshold $h_{\min}$ and the scaling factor $\rho_t$ are fixed to $0.1$ and $10$, respectively.
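For reference, the smoothed Weiszfeld iteration mentioned above can be sketched as below. This is a generic sketch of the classical fixed-point update for the geometric median with the smoothing parameter $\epsilon$, not the authors' exact implementation; the initialization at the mean and the iteration count are our own choices.

```python
import numpy as np

def smoothed_weiszfeld(points, eps=1e-4, n_iters=100):
    """Fixed-point iteration for the geometric median of the rows of
    `points`; eps bounds the distances away from zero (smoothing)."""
    z = points.mean(axis=0)  # initialize at the coordinate-wise mean
    for _ in range(n_iters):
        dists = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        weights = 1.0 / dists
        z = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return z

# Three clustered updates and one outlier: the median resists the outlier
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [100.0, 100.0]])
gm = smoothed_weiszfeld(pts)
```

Unlike the plain average, which the single outlier drags to roughly $(25, 25)$, the geometric median stays near the three clustered points, which is what makes it a robust aggregation rule.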
We consider different types of attacks to study the robustness of the proposed approach:\\
{\bf Gaussian attacks:} each Byzantine client sends, instead of its actual model update, a Gaussian vector with entries of mean $0$ and variance $30$.\\
{\bf Class-flip attacks:} each Byzantine client changes the labels of its local dataset as $y=9-y$.\\
{\bf Mimic attacks:} all Byzantine clients pick one regular client and mimic its local update. This introduces a consistent bias that over-emphasizes the chosen client while underrepresenting the others. This attack is applied in the non-i.i.d. data setting.
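The three attacks can be sketched as follows, assuming model updates are flat vectors; the function names are ours and the magnitudes follow the experiment description above.

```python
import numpy as np

def byzantine_update(kind, honest_updates, dim, rng):
    """Return the vector a Byzantine client transmits."""
    if kind == "gaussian":
        # Gaussian attack: entries with mean 0 and variance 30
        return rng.normal(0.0, np.sqrt(30.0), size=dim)
    if kind == "mimic":
        # Mimic attack: all Byzantine clients copy one fixed regular client
        return honest_updates[0].copy()
    raise ValueError(f"unknown attack: {kind}")

def class_flip(y):
    # Class-flip attack: relabel the local dataset as y -> 9 - y
    return 9 - y
```

The class-flip attack corrupts the data before local training, while the Gaussian and mimic attacks corrupt the transmitted update itself.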
In \figref{gaussian_attacks}, we compare our proposed approach with the simple averaging scheme COTAF \cite{Sery2020} described in Section \ref{system_model}. The MNIST dataset is used with i.i.d. local data at the clients. As expected, the performance of COTAF is heavily degraded by Byzantine attacks, even for a small number of attackers. In contrast, the proposed approach guarantees fast convergence and provides almost the same performance as in the attack-free case. Moreover, both approaches perform identically when there are no attacks $(B=0)$.
In the second experiment, we consider class-flip attacks, where the Byzantine clients change the labels of their local datasets as $y=9-y$. \figref{class_flip_attacks} demonstrates the effect of increasing the number of Byzantine clients. The figure shows that the proposed approach remains robust to Byzantine attacks, although the test accuracy decreases somewhat as the number of Byzantine clients grows. This is expected, since in the presence of malicious clients the training is effectively done over a smaller sample size, as only the honest clients contribute to the learning process. This was also predicted by the convergence analysis in Section \ref{conver_analysis}, which showed that the asymptotic learning error increases with the number of Byzantine clients.
In the third experiment, we study the effect of non-i.i.d. data distributions among clients. Specifically, \figref{sample_dup_attack_noniid} plots the performance of the proposed approach on the MNIST dataset with non-i.i.d. data in the presence of $B=5$ attackers applying mimic attacks. As seen, the performance of ROTAF is significantly enhanced by resampling. Moreover, we note that $s=3$ is sufficient, with no need for larger values of the resampling rate.
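The resampling step with rate $s$ can be sketched as follows. This is a generic sketch of the resampling idea, assuming it averages $s$ randomly drawn updates to reduce client heterogeneity before robust aggregation; the exact sampling scheme used in the experiments may differ.

```python
import numpy as np

def resample(updates, s, rng):
    """Replace each update by the average of s updates drawn uniformly
    at random without replacement, before robust aggregation."""
    n = len(updates)
    out = []
    for _ in range(n):
        idx = rng.choice(n, size=s, replace=False)
        out.append(np.mean([updates[i] for i in idx], axis=0))
    return out
```

Averaging over $s$ clients smooths out the heterogeneity of non-i.i.d. local updates, so the downstream robust aggregator sees inputs that are closer to each other.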
\begin{figure}[ht]
\begin{center}
\subfigure[Logistic regression]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ymin=0.0, ymax=1,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
]
\addplot [semithick, color =red]
coordinates {
(0,0.0856)(1,0.1021)(2,0.1197)(3,0.1403)(4,0.1619)(5,0.1814)(6,0.1946)(7,0.2099)(8,0.2282)(9,0.2493)(10,0.2699)(11,0.2886)(12,0.3063)(13,0.3217)(14,0.336)(15,0.3505)(16,0.3605)(17,0.3725)(18,0.386)(19,0.4001)(20,0.4126)(21,0.4243)(22,0.4344)(23,0.4422)(24,0.4509)(25,0.4616)(26,0.4708)(27,0.4805)(28,0.4898)(29,0.4989)(30,0.5032)(31,0.5076)(32,0.5135)(33,0.5205)(34,0.5296)(35,0.5351)(36,0.5418)(37,0.55)(38,0.5566)(39,0.5607)(40,0.5673)(41,0.5702)(42,0.5742)(43,0.5794)(44,0.5879)(45,0.5979)(46,0.6017)(47,0.6017)(48,0.6061)(49,0.6085)(50,0.6124)(51,0.6169)(52,0.6198)(53,0.6245)(54,0.6293)(55,0.6318)(56,0.6364)(57,0.6382)(58,0.6405)(59,0.6441)(60,0.6484)(61,0.6467)(62,0.6513)(63,0.6545)(64,0.6573)(65,0.6587)(66,0.6594)(67,0.6634)(68,0.6689)(69,0.6729)(70,0.6759)(71,0.679)(72,0.6845)(73,0.6871)(74,0.686)(75,0.6856)(76,0.6892)(77,0.6909)(78,0.6915)(79,0.6934)(80,0.6915)(81,0.6936)(82,0.6967)(83,0.6963)(84,0.6979)(85,0.7002)(86,0.7024)(87,0.7018)(88,0.7051)(89,0.7063)(90,0.708)(91,0.7115)(92,0.712)(93,0.7138)(94,0.7148)(95,0.7164)(96,0.716)(97,0.718)(98,0.7182)(99,0.7218)(100,0.7205)(101,0.7236)(102,0.728)(103,0.7268)(104,0.7289)(105,0.7284)(106,0.7313)(107,0.7326)(108,0.7342)(109,0.7367)(110,0.7366)(111,0.7375)(112,0.737)(113,0.739)(114,0.7429)(115,0.7428)(116,0.7443)(117,0.7451)(118,0.7455)(119,0.7466)(120,0.7478)(121,0.7497)(122,0.7458)(123,0.7451)(124,0.7449)(125,0.749)(126,0.7482)(127,0.7496)(128,0.75)(129,0.7511)(130,0.752)(131,0.7529)(132,0.7539)(133,0.7541)(134,0.7541)(135,0.7542)(136,0.7562)(137,0.7583)(138,0.757)(139,0.759)(140,0.7607)(141,0.7616)(142,0.7607)(143,0.761)(144,0.7624)(145,0.7625)(146,0.7634)(147,0.7655)(148,0.7642)(149,0.7627)(150,0.7631)(151,0.765)(152,0.7648)(153,0.7668)(154,0.7674)(155,0.7684)(156,0.77)(157,0.7683)(158,0.7688)(159,0.7679)(160,0.7706)(161,0.7705)(162,0.7724)(163,0.7745)(164,0.7761)(165,0.7732)(166,0.7721)(167,0.7735)(168,0.7744)(169,0.7758)(170,0.7761)(171,0.7775)(172,0.7776)(173,0.7758)(174,0.7783)(175,0.7789)(176,0.7791)(177,0.77
74)(178,0.778)(179,0.7765)(180,0.7768)(181,0.7771)(182,0.7765)(183,0.7757)(184,0.7763)(185,0.7783)(186,0.7789)(187,0.7782)(188,0.777)(189,0.7782)(190,0.7785)(191,0.7775)(192,0.778)(193,0.7789)(194,0.7784)(195,0.7772)(196,0.7789)(197,0.7769)(198,0.7778)(199,0.7783)(200,0.7785)(201,0.7775)(202,0.7764)(203,0.7775)(204,0.7768)(205,0.7768)(206,0.7762)(207,0.7788)(208,0.7777)(209,0.7778)(210,0.7772)(211,0.7791)(212,0.7781)(213,0.7804)(214,0.7789)(215,0.7818)(216,0.7817)(217,0.78)(218,0.7812)(219,0.7807)(220,0.7805)(221,0.7813)(222,0.7815)(223,0.7828)(224,0.7818)(225,0.7811)(226,0.782)(227,0.7816)(228,0.7832)(229,0.7833)(230,0.7822)(231,0.782)(232,0.781)(233,0.7797)(234,0.7822)(235,0.7811)(236,0.7816)(237,0.7822)(238,0.7823)(239,0.782)(240,0.784)(241,0.784)(242,0.7858)(243,0.7881)(244,0.7873)(245,0.788)(246,0.7879)(247,0.7874)(248,0.7899)(249,0.7914)(250,0.7885)(251,0.7885)(252,0.7883)(253,0.7891)(254,0.7888)(255,0.7887)(256,0.7885)(257,0.7889)(258,0.7887)(259,0.7887)(260,0.7873)(261,0.7868)(262,0.7858)(263,0.787)(264,0.7855)(265,0.7861)(266,0.7895)(267,0.788)(268,0.788)(269,0.788)(270,0.7873)(271,0.7871)(272,0.788)(273,0.7852)(274,0.7865)(275,0.7877)(276,0.7867)(277,0.7882)(278,0.7895)(279,0.7893)(280,0.7881)(281,0.7897)(282,0.7892)(283,0.7898)(284,0.7903)(285,0.7908)(286,0.7918)(287,0.7909)(288,0.7906)(289,0.7914)(290,0.7905)(291,0.7924)(292,0.7918)(293,0.7905)(294,0.7887)(295,0.7904)(296,0.79)(297,0.7897)(298,0.788)(299,0.7888)(300,0.7901)(301,0.7899)(302,0.7907)(303,0.7893)(304,0.7902)(305,0.7913)(306,0.7913)(307,0.7902)(308,0.7913)(309,0.7916)(310,0.7909)(311,0.7911)(312,0.7912)(313,0.7915)(314,0.7926)(315,0.7921)(316,0.7934)(317,0.7944)(318,0.7949)(319,0.7957)(320,0.7949)(321,0.7927)(322,0.7937)(323,0.7938)(324,0.7953)(325,0.7954)(326,0.7959)(327,0.7962)(328,0.7962)(329,0.7966)(330,0.7968)(331,0.7967)(332,0.7969)(333,0.7979)(334,0.7965)(335,0.7984)(336,0.7965)(337,0.7963)(338,0.7964)(339,0.7969)(340,0.7964)(341,0.7957)(342,0.7964)(343,0.7971)(344,0.7976)(345,0.7988)(
346,0.7981)(347,0.8)(348,0.7985)(349,0.7975)(350,0.7974)(351,0.7984)(352,0.7989)(353,0.7974)(354,0.7976)(355,0.7977)(356,0.7977)(357,0.7966)(358,0.7955)(359,0.7959)(360,0.7977)(361,0.7952)(362,0.795)(363,0.7966)(364,0.796)(365,0.7957)(366,0.7956)(367,0.7964)(368,0.796)(369,0.7975)(370,0.7981)(371,0.797)(372,0.7951)(373,0.796)(374,0.7966)(375,0.7966)(376,0.7961)(377,0.7964)(378,0.7954)(379,0.797)(380,0.7947)(381,0.7968)(382,0.7983)(383,0.7967)(384,0.7978)(385,0.7971)(386,0.7972)(387,0.7967)(388,0.7975)(389,0.797)(390,0.7962)(391,0.7957)(392,0.7964)(393,0.7978)(394,0.7974)(395,0.7974)(396,0.7968)(397,0.7969)(398,0.797)(399,0.796)(400,0.7956)(401,0.7951)(402,0.7955)(403,0.796)(404,0.7963)(405,0.7963)(406,0.7975)(407,0.7967)(408,0.7966)(409,0.7979)(410,0.7978)(411,0.7968)(412,0.7973)(413,0.7977)(414,0.797)(415,0.7957)(416,0.7959)(417,0.7972)(418,0.7995)(419,0.7986)(420,0.798)(421,0.7982)(422,0.7995)(423,0.7982)(424,0.7987)(425,0.7995)(426,0.7999)(427,0.7997)(428,0.7992)(429,0.7984)(430,0.7993)(431,0.799)(432,0.8005)(433,0.8012)(434,0.8024)(435,0.8021)(436,0.8022)(437,0.8002)(438,0.8022)(439,0.8017)(440,0.801)(441,0.8006)(442,0.7989)(443,0.7994)(444,0.7993)(445,0.7994)(446,0.8018)(447,0.8013)(448,0.7997)(449,0.8013)(450,0.8019)(451,0.8019)(452,0.8023)(453,0.8031)(454,0.8055)(455,0.805)(456,0.8052)(457,0.8047)(458,0.8048)(459,0.8045)(460,0.8038)(461,0.8048)(462,0.8037)(463,0.8044)(464,0.8041)(465,0.8048)(466,0.8059)(467,0.8052)(468,0.8053)(469,0.8058)(470,0.8059)(471,0.8055)(472,0.8067)(473,0.8063)(474,0.8056)(475,0.8055)(476,0.8059)(477,0.8045)(478,0.8052)(479,0.8058)(480,0.8052)(481,0.8059)(482,0.8061)(483,0.8073)(484,0.8055)(485,0.8069)(486,0.8054)(487,0.8064)(488,0.8065)(489,0.8065)(490,0.807)(491,0.8073)(492,0.8072)(493,0.8091)(494,0.8089)(495,0.8084)(496,0.8084)(497,0.8088)(498,0.8088)(499,0.8085)
}; \addlegendentry{$B = 10$ }
\addplot [semithick, color=blue]
coordinates {
(0,0.0856)(1,0.1029)(2,0.1213)(3,0.1441)(4,0.1662)(5,0.1871)(6,0.2024)(7,0.2188)(8,0.2397)(9,0.2647)(10,0.2852)(11,0.3062)(12,0.3237)(13,0.3398)(14,0.3571)(15,0.3704)(16,0.3803)(17,0.3953)(18,0.4107)(19,0.422)(20,0.4368)(21,0.4474)(22,0.4573)(23,0.4676)(24,0.4761)(25,0.4867)(26,0.4942)(27,0.5056)(28,0.5147)(29,0.5213)(30,0.5281)(31,0.5328)(32,0.5369)(33,0.5467)(34,0.5557)(35,0.5608)(36,0.5659)(37,0.5734)(38,0.5806)(39,0.5866)(40,0.5909)(41,0.5959)(42,0.6025)(43,0.6067)(44,0.6122)(45,0.6219)(46,0.6248)(47,0.6261)(48,0.6311)(49,0.6328)(50,0.6349)(51,0.6404)(52,0.6426)(53,0.6482)(54,0.6513)(55,0.6549)(56,0.6581)(57,0.6599)(58,0.6626)(59,0.6644)(60,0.6677)(61,0.6681)(62,0.6713)(63,0.6758)(64,0.6806)(65,0.6819)(66,0.6825)(67,0.6865)(68,0.6928)(69,0.6951)(70,0.699)(71,0.7023)(72,0.7057)(73,0.7084)(74,0.7067)(75,0.7059)(76,0.7085)(77,0.7117)(78,0.7132)(79,0.7158)(80,0.7154)(81,0.7175)(82,0.7208)(83,0.7212)(84,0.7221)(85,0.7241)(86,0.7261)(87,0.7255)(88,0.7273)(89,0.7285)(90,0.73)(91,0.7312)(92,0.7362)(93,0.737)(94,0.7391)(95,0.7399)(96,0.7393)(97,0.7425)(98,0.7438)(99,0.7458)(100,0.746)(101,0.7485)(102,0.7511)(103,0.752)(104,0.753)(105,0.7521)(106,0.7564)(107,0.7566)(108,0.7592)(109,0.7599)(110,0.7609)(111,0.7612)(112,0.7616)(113,0.7632)(114,0.7647)(115,0.7641)(116,0.7669)(117,0.7668)(118,0.7673)(119,0.7685)(120,0.7692)(121,0.7715)(122,0.7709)(123,0.7696)(124,0.7702)(125,0.7738)(126,0.7723)(127,0.7714)(128,0.7744)(129,0.7738)(130,0.7744)(131,0.7763)(132,0.7771)(133,0.7775)(134,0.7772)(135,0.7777)(136,0.7791)(137,0.7814)(138,0.7807)(139,0.7819)(140,0.7834)(141,0.7833)(142,0.7834)(143,0.7861)(144,0.7859)(145,0.7859)(146,0.7853)(147,0.7869)(148,0.7858)(149,0.7846)(150,0.7859)(151,0.7877)(152,0.789)(153,0.7901)(154,0.7895)(155,0.7909)(156,0.7908)(157,0.7901)(158,0.791)(159,0.7931)(160,0.7942)(161,0.795)(162,0.7946)(163,0.7958)(164,0.7973)(165,0.7948)(166,0.7946)(167,0.7946)(168,0.7963)(169,0.7977)(170,0.7965)(171,0.7972)(172,0.7988)(173,0.7996)(174,0.8013)(175,0.8008)(176,0.79
86)(177,0.7984)(178,0.7999)(179,0.7975)(180,0.7984)(181,0.7977)(182,0.7974)(183,0.7985)(184,0.7994)(185,0.7997)(186,0.8009)(187,0.8016)(188,0.8006)(189,0.8005)(190,0.8013)(191,0.8003)(192,0.8005)(193,0.8017)(194,0.8031)(195,0.8039)(196,0.8039)(197,0.8029)(198,0.8024)(199,0.8022)(200,0.8023)(201,0.8023)(202,0.8038)(203,0.8034)(204,0.8032)(205,0.8032)(206,0.8029)(207,0.803)(208,0.8045)(209,0.8055)(210,0.8043)(211,0.805)(212,0.8049)(213,0.8066)(214,0.8083)(215,0.8061)(216,0.8058)(217,0.8057)(218,0.807)(219,0.8073)(220,0.8057)(221,0.8047)(222,0.8055)(223,0.8063)(224,0.806)(225,0.8068)(226,0.8078)(227,0.8079)(228,0.8091)(229,0.8099)(230,0.809)(231,0.8068)(232,0.8071)(233,0.8072)(234,0.8074)(235,0.8081)(236,0.8058)(237,0.8078)(238,0.8086)(239,0.809)(240,0.8087)(241,0.8087)(242,0.8094)(243,0.812)(244,0.8133)(245,0.8136)(246,0.8138)(247,0.8149)(248,0.8135)(249,0.8147)(250,0.813)(251,0.8136)(252,0.815)(253,0.8164)(254,0.8169)(255,0.8154)(256,0.8163)(257,0.8156)(258,0.8181)(259,0.8157)(260,0.8161)(261,0.8151)(262,0.8167)(263,0.8166)(264,0.8156)(265,0.8171)(266,0.8179)(267,0.8164)(268,0.816)(269,0.8156)(270,0.8162)(271,0.8165)(272,0.8171)(273,0.8168)(274,0.8159)(275,0.8162)(276,0.8161)(277,0.8163)(278,0.8161)(279,0.816)(280,0.8169)(281,0.8172)(282,0.8164)(283,0.8161)(284,0.8179)(285,0.8192)(286,0.8198)(287,0.8204)(288,0.8211)(289,0.8205)(290,0.8212)(291,0.8201)(292,0.8207)(293,0.8197)(294,0.8196)(295,0.82)(296,0.8203)(297,0.8193)(298,0.819)(299,0.8197)(300,0.8202)(301,0.819)(302,0.8176)(303,0.8177)(304,0.8184)(305,0.8187)(306,0.819)(307,0.8179)(308,0.8176)(309,0.8171)(310,0.817)(311,0.8179)(312,0.8171)(313,0.8171)(314,0.8177)(315,0.8174)(316,0.8169)(317,0.8194)(318,0.8196)(319,0.8212)(320,0.8198)(321,0.8193)(322,0.8193)(323,0.8205)(324,0.8202)(325,0.8209)(326,0.8203)(327,0.8202)(328,0.8214)(329,0.8219)(330,0.8217)(331,0.8213)(332,0.822)(333,0.8231)(334,0.822)(335,0.8245)(336,0.823)(337,0.8235)(338,0.8244)(339,0.8237)(340,0.8223)(341,0.8215)(342,0.822)(343,0.8213)(344,0.8225)(3
45,0.8229)(346,0.8224)(347,0.8222)(348,0.8223)(349,0.8219)(350,0.8224)(351,0.824)(352,0.8242)(353,0.8237)(354,0.8231)(355,0.8227)(356,0.8236)(357,0.8225)(358,0.8233)(359,0.8222)(360,0.8232)(361,0.8226)(362,0.8223)(363,0.8231)(364,0.8228)(365,0.8229)(366,0.8225)(367,0.8219)(368,0.8224)(369,0.8233)(370,0.8236)(371,0.8238)(372,0.8215)(373,0.8222)(374,0.8214)(375,0.824)(376,0.8236)(377,0.8236)(378,0.8217)(379,0.823)(380,0.8217)(381,0.8219)(382,0.823)(383,0.8217)(384,0.8228)(385,0.8219)(386,0.8228)(387,0.823)(388,0.8228)(389,0.8248)(390,0.8245)(391,0.8225)(392,0.8224)(393,0.8235)(394,0.8228)(395,0.8216)(396,0.8216)(397,0.8226)(398,0.8238)(399,0.8242)(400,0.8239)(401,0.8227)(402,0.8241)(403,0.8244)(404,0.8258)(405,0.8233)(406,0.8237)(407,0.8239)(408,0.8253)(409,0.8254)(410,0.8249)(411,0.8261)(412,0.8253)(413,0.8271)(414,0.8261)(415,0.8263)(416,0.8264)(417,0.8275)(418,0.8291)(419,0.8278)(420,0.8269)(421,0.8261)(422,0.8249)(423,0.8263)(424,0.8255)(425,0.8258)(426,0.8272)(427,0.8265)(428,0.8256)(429,0.824)(430,0.8269)(431,0.8258)(432,0.8256)(433,0.8265)(434,0.827)(435,0.8273)(436,0.8272)(437,0.8267)(438,0.8266)(439,0.8257)(440,0.8259)(441,0.8262)(442,0.8258)(443,0.8269)(444,0.828)(445,0.8284)(446,0.8284)(447,0.8282)(448,0.8282)(449,0.8281)(450,0.8275)(451,0.8284)(452,0.8279)(453,0.829)(454,0.829)(455,0.8302)(456,0.828)(457,0.8299)(458,0.8304)(459,0.8292)(460,0.8304)(461,0.8324)(462,0.8313)(463,0.8321)(464,0.8316)(465,0.8304)(466,0.8304)(467,0.8308)(468,0.8308)(469,0.8324)(470,0.8322)(471,0.8324)(472,0.8333)(473,0.8329)(474,0.833)(475,0.8338)(476,0.8344)(477,0.8327)(478,0.8314)(479,0.8317)(480,0.8317)(481,0.8318)(482,0.8319)(483,0.8331)(484,0.8328)(485,0.8333)(486,0.8319)(487,0.8325)(488,0.8325)(489,0.8326)(490,0.8323)(491,0.8341)(492,0.8348)(493,0.8336)(494,0.8325)(495,0.8332)(496,0.8331)(497,0.8343)(498,0.8334)(499,0.8336)
}; \addlegendentry{$B = 5$}
\addplot [semithick, color=olive]
coordinates{
(0,0.0856)(1,0.1038)(2,0.123)(3,0.1469)(4,0.172)(5,0.1933)(6,0.2097)(7,0.2311)(8,0.2542)(9,0.2785)(10,0.3011)(11,0.3217)(12,0.3401)(13,0.355)(14,0.3709)(15,0.388)(16,0.399)(17,0.4138)(18,0.4294)(19,0.4418)(20,0.4574)(21,0.4668)(22,0.4774)(23,0.4867)(24,0.495)(25,0.5052)(26,0.5144)(27,0.5232)(28,0.5333)(29,0.5414)(30,0.5485)(31,0.554)(32,0.5556)(33,0.5651)(34,0.5741)(35,0.5819)(36,0.5871)(37,0.5958)(38,0.5999)(39,0.6044)(40,0.6108)(41,0.6145)(42,0.6182)(43,0.6235)(44,0.6283)(45,0.638)(46,0.6396)(47,0.6427)(48,0.6472)(49,0.6485)(50,0.652)(51,0.6556)(52,0.6583)(53,0.6634)(54,0.6666)(55,0.6719)(56,0.6751)(57,0.6781)(58,0.6794)(59,0.6811)(60,0.6842)(61,0.6861)(62,0.688)(63,0.6912)(64,0.6965)(65,0.698)(66,0.6983)(67,0.7023)(68,0.707)(69,0.7109)(70,0.7157)(71,0.7191)(72,0.7251)(73,0.7253)(74,0.7242)(75,0.7259)(76,0.7267)(77,0.7309)(78,0.731)(79,0.7323)(80,0.7334)(81,0.7351)(82,0.7367)(83,0.7383)(84,0.7392)(85,0.74)(86,0.7402)(87,0.7434)(88,0.7441)(89,0.7453)(90,0.7483)(91,0.7498)(92,0.7522)(93,0.7525)(94,0.7558)(95,0.7569)(96,0.7561)(97,0.7588)(98,0.7602)(99,0.7613)(100,0.7628)(101,0.7639)(102,0.7651)(103,0.7657)(104,0.7687)(105,0.7681)(106,0.771)(107,0.7719)(108,0.7753)(109,0.7765)(110,0.7768)(111,0.7776)(112,0.7779)(113,0.7776)(114,0.7792)(115,0.7787)(116,0.78)(117,0.7806)(118,0.7829)(119,0.7842)(120,0.7849)(121,0.786)(122,0.7857)(123,0.785)(124,0.7863)(125,0.7888)(126,0.7897)(127,0.7891)(128,0.7914)(129,0.7925)(130,0.7926)(131,0.796)(132,0.7959)(133,0.7966)(134,0.7972)(135,0.797)(136,0.7985)(137,0.7986)(138,0.7999)(139,0.8011)(140,0.8013)(141,0.8019)(142,0.8012)(143,0.8024)(144,0.8029)(145,0.803)(146,0.8016)(147,0.8029)(148,0.8031)(149,0.803)(150,0.8032)(151,0.8043)(152,0.8047)(153,0.8054)(154,0.8058)(155,0.8069)(156,0.8071)(157,0.8079)(158,0.8083)(159,0.8078)(160,0.8107)(161,0.8099)(162,0.8115)(163,0.8111)(164,0.8106)(165,0.8124)(166,0.8134)(167,0.812)(168,0.8125)(169,0.8138)(170,0.8128)(171,0.8141)(172,0.8155)(173,0.8149)(174,0.8164)(175,0.8158)(176,0.8159)(177,0.8167
)(178,0.8167)(179,0.8168)(180,0.8161)(181,0.816)(182,0.8169)(183,0.8173)(184,0.8172)(185,0.8156)(186,0.8174)(187,0.8168)(188,0.8161)(189,0.8163)(190,0.8186)(191,0.8186)(192,0.8196)(193,0.8215)(194,0.8225)(195,0.8213)(196,0.8225)(197,0.8219)(198,0.8227)(199,0.8223)(200,0.8218)(201,0.8215)(202,0.8221)(203,0.8224)(204,0.8225)(205,0.8224)(206,0.8215)(207,0.8222)(208,0.8234)(209,0.8237)(210,0.8241)(211,0.824)(212,0.8223)(213,0.8238)(214,0.8239)(215,0.8237)(216,0.824)(217,0.8239)(218,0.8239)(219,0.824)(220,0.8237)(221,0.8238)(222,0.8239)(223,0.8229)(224,0.8237)(225,0.8244)(226,0.8263)(227,0.8265)(228,0.8279)(229,0.8267)(230,0.8269)(231,0.8258)(232,0.8269)(233,0.8261)(234,0.8271)(235,0.8264)(236,0.8267)(237,0.8263)(238,0.8279)(239,0.828)(240,0.8288)(241,0.8289)(242,0.8289)(243,0.8305)(244,0.8304)(245,0.8313)(246,0.8295)(247,0.8305)(248,0.8299)(249,0.831)(250,0.8302)(251,0.8308)(252,0.8309)(253,0.8327)(254,0.8316)(255,0.8321)(256,0.8321)(257,0.8325)(258,0.8329)(259,0.8324)(260,0.8332)(261,0.832)(262,0.8319)(263,0.8319)(264,0.8324)(265,0.8335)(266,0.8323)(267,0.833)(268,0.8328)(269,0.8335)(270,0.8339)(271,0.834)(272,0.8348)(273,0.8331)(274,0.8328)(275,0.8325)(276,0.8338)(277,0.8347)(278,0.8332)(279,0.8346)(280,0.8347)(281,0.8345)(282,0.835)(283,0.8339)(284,0.8343)(285,0.8356)(286,0.8374)(287,0.8368)(288,0.836)(289,0.8376)(290,0.8367)(291,0.837)(292,0.8377)(293,0.8371)(294,0.8373)(295,0.8365)(296,0.8376)(297,0.8369)(298,0.8353)(299,0.8346)(300,0.8349)(301,0.8348)(302,0.8346)(303,0.8355)(304,0.8366)(305,0.8366)(306,0.8369)(307,0.8366)(308,0.8359)(309,0.8344)(310,0.8361)(311,0.8351)(312,0.8365)(313,0.8368)(314,0.8355)(315,0.8361)(316,0.8378)(317,0.8381)(318,0.8376)(319,0.8382)(320,0.8381)(321,0.8377)(322,0.8382)(323,0.8379)(324,0.8389)(325,0.8385)(326,0.8395)(327,0.8374)(328,0.839)(329,0.8399)(330,0.8407)(331,0.8407)(332,0.8417)(333,0.8422)(334,0.8416)(335,0.8428)(336,0.8424)(337,0.8418)(338,0.8417)(339,0.8428)(340,0.841)(341,0.8419)(342,0.8425)(343,0.8405)(344,0.8422)(345,0.84
11)(346,0.8414)(347,0.8417)(348,0.842)(349,0.8421)(350,0.8413)(351,0.843)(352,0.8426)(353,0.8424)(354,0.8429)(355,0.8431)(356,0.8434)(357,0.8432)(358,0.8428)(359,0.8422)(360,0.8431)(361,0.8419)(362,0.842)(363,0.8422)(364,0.8425)(365,0.8422)(366,0.8414)(367,0.8412)(368,0.8416)(369,0.8426)(370,0.8423)(371,0.8423)(372,0.8407)(373,0.841)(374,0.8417)(375,0.8428)(376,0.8421)(377,0.8415)(378,0.8408)(379,0.8416)(380,0.8412)(381,0.843)(382,0.8427)(383,0.8424)(384,0.8423)(385,0.8426)(386,0.8433)(387,0.8435)(388,0.8435)(389,0.8429)(390,0.8438)(391,0.8435)(392,0.8437)(393,0.8431)(394,0.8433)(395,0.8439)(396,0.8435)(397,0.8445)(398,0.8433)(399,0.8441)(400,0.8438)(401,0.8439)(402,0.845)(403,0.8448)(404,0.8454)(405,0.8452)(406,0.8439)(407,0.8439)(408,0.846)(409,0.8458)(410,0.8462)(411,0.8465)(412,0.8479)(413,0.8474)(414,0.8468)(415,0.8457)(416,0.8464)(417,0.8469)(418,0.8476)(419,0.8465)(420,0.8464)(421,0.8451)(422,0.8453)(423,0.8455)(424,0.8448)(425,0.8452)(426,0.8454)(427,0.8439)(428,0.8447)(429,0.8457)(430,0.8451)(431,0.8465)(432,0.8455)(433,0.8459)(434,0.8463)(435,0.8463)(436,0.8464)(437,0.8461)(438,0.8456)(439,0.8461)(440,0.8453)(441,0.8459)(442,0.8457)(443,0.8442)(444,0.845)(445,0.8461)(446,0.8468)(447,0.8481)(448,0.848)(449,0.8493)(450,0.8496)(451,0.8491)(452,0.8493)(453,0.8496)(454,0.8501)(455,0.8493)(456,0.8489)(457,0.8505)(458,0.8507)(459,0.8506)(460,0.8511)(461,0.8524)(462,0.8516)(463,0.8523)(464,0.8523)(465,0.8514)(466,0.8521)(467,0.8525)(468,0.8523)(469,0.8519)(470,0.8516)(471,0.8541)(472,0.8531)(473,0.8522)(474,0.8515)(475,0.8521)(476,0.8522)(477,0.8529)(478,0.8513)(479,0.8516)(480,0.8527)(481,0.852)(482,0.8523)(483,0.8529)(484,0.8532)(485,0.854)(486,0.8539)(487,0.8542)(488,0.8551)(489,0.8549)(490,0.854)(491,0.8541)(492,0.8539)(493,0.8535)(494,0.8532)(495,0.8537)(496,0.8535)(497,0.8547)(498,0.854)(499,0.8559)
}; \addlegendentry{$B = 0$}
\end{axis}
\end{tikzpicture}}
\subfigure[CNN model]{
\begin{tikzpicture}[scale=0.75, spy using outlines={black, circle, magnification=4, size=1cm,
connect spies}]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ymin=0.0, ymax=1,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
]
\addplot [semithick, color=red]
coordinates {
(0,0.104)(1,0.0856)(2,0.1113)(3,0.1458)(4,0.2078)(5,0.2662)(6,0.3194)(7,0.3544)(8,0.3881)(9,0.4247)(10,0.4609)(11,0.5023)(12,0.5273)(13,0.5553)(14,0.578)(15,0.6032)(16,0.6244)(17,0.6439)(18,0.6689)(19,0.694)(20,0.7054)(21,0.7212)(22,0.7282)(23,0.7463)(24,0.7548)(25,0.7575)(26,0.7608)(27,0.7744)(28,0.7751)(29,0.7775)(30,0.7876)(31,0.7856)(32,0.7865)(33,0.8019)(34,0.7933)(35,0.8103)(36,0.8069)(37,0.8063)(38,0.811)(39,0.8172)(40,0.813)(41,0.8122)(42,0.8067)(43,0.7996)(44,0.8024)(45,0.788)(46,0.7945)(47,0.7689)(48,0.7894)(49,0.7441)(50,0.7632)(51,0.7335)(52,0.769)(53,0.7575)(54,0.8026)(55,0.7786)(56,0.8091)(57,0.7998)(58,0.8436)(59,0.8396)(60,0.8546)(61,0.8379)(62,0.8471)(63,0.8381)(64,0.8551)(65,0.8537)(66,0.8619)(67,0.8502)(68,0.8612)(69,0.8435)(70,0.8605)(71,0.8505)(72,0.8537)(73,0.8358)(74,0.8613)(75,0.8397)(76,0.8603)(77,0.8503)(78,0.8656)(79,0.859)(80,0.8659)(81,0.865)(82,0.8747)(83,0.8703)(84,0.8785)(85,0.8742)(86,0.8811)(87,0.8803)(88,0.8807)(89,0.8802)(90,0.8814)(91,0.8765)(92,0.8804)(93,0.8781)(94,0.8837)(95,0.8789)(96,0.8843)(97,0.8782)(98,0.879)(99,0.884)(100,0.8849)(101,0.8804)(102,0.8716)(103,0.8799)(104,0.8772)(105,0.8809)(106,0.8755)(107,0.8779)(108,0.8542)(109,0.8597)(110,0.8335)(111,0.8643)(112,0.8424)(113,0.8689)(114,0.851)(115,0.8678)(116,0.855)(117,0.8693)(118,0.8526)(119,0.8692)(120,0.8631)(121,0.8812)(122,0.8679)(123,0.8795)(124,0.8612)(125,0.8771)(126,0.87)(127,0.8848)(128,0.8807)(129,0.8928)(130,0.8877)(131,0.8953)(132,0.895)(133,0.8985)(134,0.8968)(135,0.9003)(136,0.8971)(137,0.8971)(138,0.8966)(139,0.8957)(140,0.8963)(141,0.8923)(142,0.8913)(143,0.8769)(144,0.8819)(145,0.8589)(146,0.8674)(147,0.8499)(148,0.858)(149,0.8458)(150,0.8741)(151,0.8802)(152,0.8898)(153,0.894)(154,0.8948)(155,0.8952)(156,0.8956)(157,0.8951)(158,0.8968)(159,0.8882)(160,0.8946)(161,0.8929)(162,0.8929)(163,0.8817)(164,0.8989)(165,0.8881)(166,0.8979)(167,0.8912)(168,0.8987)(169,0.892)(170,0.8901)(171,0.8896)(172,0.8952)(173,0.8892)(174,0.8822)(175,0.8876)(176,0.8837)(177,
0.8915)(178,0.8902)(179,0.8931)(180,0.8892)(181,0.8941)(182,0.8953)(183,0.8934)(184,0.9003)(185,0.891)(186,0.8986)(187,0.8872)(188,0.8996)(189,0.8879)(190,0.8876)(191,0.8722)(192,0.8824)(193,0.8608)(194,0.8817)(195,0.8543)(196,0.8592)(197,0.8339)(198,0.8771)(199,0.8698)(200,0.8906)(201,0.8714)(202,0.8912)(203,0.8843)(204,0.8979)(205,0.8946)(206,0.8949)(207,0.9016)(208,0.9005)(209,0.9032)(210,0.8977)(211,0.8979)(212,0.8941)(213,0.896)(214,0.8958)(215,0.8954)(216,0.8828)(217,0.8917)(218,0.8751)(219,0.8842)(220,0.8702)(221,0.8791)(222,0.8879)(223,0.8925)(224,0.8967)(225,0.8974)(226,0.8995)(227,0.8971)(228,0.8941)(229,0.8953)(230,0.8986)(231,0.8985)(232,0.8932)(233,0.8971)(234,0.8963)(235,0.8973)(236,0.8937)(237,0.8939)(238,0.8842)(239,0.886)(240,0.8823)(241,0.8825)(242,0.8733)(243,0.8844)(244,0.8626)(245,0.8767)(246,0.8781)(247,0.893)(248,0.8856)(249,0.9004)(250,0.8958)(251,0.8943)(252,0.9003)(253,0.897)(254,0.9005)(255,0.9042)(256,0.9034)(257,0.9022)(258,0.8981)(259,0.9021)(260,0.9)(261,0.8987)(262,0.8976)(263,0.902)(264,0.8955)(265,0.8961)(266,0.8916)(267,0.8957)(268,0.8802)(269,0.88)(270,0.8779)(271,0.8774)(272,0.8647)(273,0.8776)(274,0.8655)(275,0.8774)(276,0.8684)(277,0.8789)(278,0.8599)(279,0.872)(280,0.8829)(281,0.8899)(282,0.8939)(283,0.9061)(284,0.9004)(285,0.9072)(286,0.8992)(287,0.8994)(288,0.9008)(289,0.9019)(290,0.9003)(291,0.9047)(292,0.9035)(293,0.904)(294,0.8996)(295,0.9007)(296,0.9)(297,0.9035)(298,0.9026)(299,0.8958)(300,0.9006)(301,0.8907)(302,0.9031)(303,0.8966)(304,0.9024)(305,0.8849)(306,0.8958)(307,0.8931)(308,0.9038)(309,0.8952)(310,0.8972)(311,0.8945)(312,0.898)(313,0.8951)(314,0.8975)(315,0.885)(316,0.8881)(317,0.8864)(318,0.8886)(319,0.8989)(320,0.8964)(321,0.9034)(322,0.9)(323,0.9005)(324,0.897)(325,0.8901)(326,0.8866)(327,0.8938)(328,0.8934)(329,0.8911)(330,0.8985)(331,0.9002)(332,0.9032)(333,0.8938)(334,0.9032)(335,0.8972)(336,0.9002)(337,0.9002)(338,0.9048)(339,0.9018)(340,0.9013)(341,0.8959)(342,0.9036)(343,0.9009)(344,0.9055)(345,0.8959
)(346,0.9004)(347,0.8881)(348,0.8995)(349,0.8778)(350,0.896)(351,0.8824)(352,0.8981)(353,0.8865)(354,0.9002)(355,0.9009)(356,0.9042)(357,0.9037)(358,0.9063)(359,0.897)(360,0.9036)(361,0.8945)(362,0.9057)(363,0.8966)(364,0.9032)(365,0.8912)(366,0.9)(367,0.8849)(368,0.8999)(369,0.8879)(370,0.9018)(371,0.8813)(372,0.8924)(373,0.8818)(374,0.9034)(375,0.8948)(376,0.9012)(377,0.8952)(378,0.9039)(379,0.8896)(380,0.9)(381,0.8803)(382,0.897)(383,0.8877)(384,0.8968)(385,0.886)(386,0.8943)(387,0.8829)(388,0.8938)(389,0.8965)(390,0.9054)(391,0.9042)(392,0.906)(393,0.8975)(394,0.9002)(395,0.8901)(396,0.8925)(397,0.8896)(398,0.8956)(399,0.8851)(400,0.8945)(401,0.8865)(402,0.8952)(403,0.8892)(404,0.8948)(405,0.8831)(406,0.8974)(407,0.887)(408,0.8995)(409,0.8921)(410,0.9045)(411,0.8943)(412,0.9036)(413,0.8937)(414,0.9048)(415,0.8978)(416,0.9031)(417,0.9023)(418,0.9086)(419,0.8998)(420,0.9014)(421,0.8959)(422,0.8994)(423,0.8947)(424,0.8995)(425,0.8829)(426,0.8975)(427,0.8784)(428,0.8986)(429,0.8883)(430,0.8969)(431,0.8813)(432,0.8999)(433,0.8891)(434,0.9021)(435,0.873)(436,0.8896)(437,0.8773)(438,0.898)(439,0.8823)(440,0.8946)(441,0.8872)(442,0.8979)(443,0.8953)(444,0.8982)(445,0.9029)(446,0.9062)(447,0.9032)(448,0.9025)(449,0.8948)(450,0.9026)(451,0.8878)(452,0.9049)(453,0.8913)(454,0.9017)(455,0.8964)(456,0.9017)(457,0.8954)(458,0.8994)(459,0.8882)(460,0.8971)(461,0.8867)(462,0.8971)(463,0.8867)(464,0.9026)(465,0.8989)(466,0.9075)(467,0.9034)(468,0.9023)(469,0.9005)(470,0.8989)(471,0.8887)(472,0.8993)(473,0.8864)(474,0.8935)(475,0.8805)(476,0.8911)(477,0.874)(478,0.8798)(479,0.8742)(480,0.891)(481,0.8833)(482,0.8941)(483,0.883)(484,0.8938)(485,0.8846)(486,0.8967)(487,0.8909)(488,0.8976)(489,0.891)(490,0.8996)(491,0.8963)(492,0.9068)(493,0.9031)(494,0.9063)(495,0.9024)(496,0.9042)(497,0.889)(498,0.8996)(499,0.8869)
}; \addlegendentry{$B = 10$}
\addplot [semithick, color=blue]
coordinates {
(0,0.104)(1,0.0856)(2,0.1138)(3,0.1527)(4,0.2187)(5,0.2839)(6,0.3405)(7,0.3742)(8,0.4116)(9,0.4536)(10,0.4864)(11,0.533)(12,0.5577)(13,0.5838)(14,0.6045)(15,0.6318)(16,0.6533)(17,0.6736)(18,0.6943)(19,0.7186)(20,0.7259)(21,0.7436)(22,0.7495)(23,0.7646)(24,0.7683)(25,0.7738)(26,0.7756)(27,0.7901)(28,0.788)(29,0.7961)(30,0.8015)(31,0.797)(32,0.798)(33,0.8082)(34,0.8034)(35,0.8166)(36,0.8139)(37,0.8205)(38,0.7996)(39,0.8145)(40,0.7977)(41,0.8191)(42,0.7949)(43,0.8278)(44,0.8081)(45,0.8319)(46,0.811)(47,0.8274)(48,0.8083)(49,0.828)(50,0.7999)(51,0.7911)(52,0.7585)(53,0.7949)(54,0.7835)(55,0.8227)(56,0.8198)(57,0.8445)(58,0.8245)(59,0.8359)(60,0.8114)(61,0.8417)(62,0.8402)(63,0.8597)(64,0.8577)(65,0.8587)(66,0.8602)(67,0.8717)(68,0.8709)(69,0.8793)(70,0.8743)(71,0.8784)(72,0.8785)(73,0.8808)(74,0.881)(75,0.8822)(76,0.886)(77,0.8871)(78,0.8885)(79,0.8885)(80,0.8876)(81,0.888)(82,0.8901)(83,0.8888)(84,0.8924)(85,0.8902)(86,0.892)(87,0.8917)(88,0.89)(89,0.8948)(90,0.8954)(91,0.8946)(92,0.8955)(93,0.8941)(94,0.8972)(95,0.8955)(96,0.8947)(97,0.8925)(98,0.8951)(99,0.8984)(100,0.8941)(101,0.8928)(102,0.8821)(103,0.8923)(104,0.8909)(105,0.8953)(106,0.8944)(107,0.8937)(108,0.8808)(109,0.8853)(110,0.8737)(111,0.8902)(112,0.8814)(113,0.8953)(114,0.8935)(115,0.9007)(116,0.8981)(117,0.9021)(118,0.8947)(119,0.9001)(120,0.8974)(121,0.9035)(122,0.8998)(123,0.9029)(124,0.8891)(125,0.8931)(126,0.8909)(127,0.897)(128,0.8974)(129,0.905)(130,0.9021)(131,0.9076)(132,0.9077)(133,0.9081)(134,0.9089)(135,0.911)(136,0.9085)(137,0.9082)(138,0.9099)(139,0.9082)(140,0.909)(141,0.9045)(142,0.905)(143,0.8959)(144,0.9002)(145,0.8853)(146,0.8921)(147,0.8872)(148,0.8892)(149,0.8846)(150,0.8942)(151,0.9015)(152,0.905)(153,0.9073)(154,0.9077)(155,0.9076)(156,0.9078)(157,0.9059)(158,0.9088)(159,0.902)(160,0.9067)(161,0.9039)(162,0.9007)(163,0.8944)(164,0.9059)(165,0.9004)(166,0.9072)(167,0.9039)(168,0.9091)(169,0.9049)(170,0.9068)(171,0.9067)(172,0.9093)(173,0.9048)(174,0.9027)(175,0.9033)(176,0.9079)(177,0
.9085)(178,0.9057)(179,0.9086)(180,0.9078)(181,0.9071)(182,0.9091)(183,0.9053)(184,0.91)(185,0.9076)(186,0.9066)(187,0.9055)(188,0.9091)(189,0.9064)(190,0.9053)(191,0.9045)(192,0.9049)(193,0.9028)(194,0.906)(195,0.9028)(196,0.902)(197,0.8926)(198,0.9001)(199,0.8951)(200,0.8977)(201,0.8818)(202,0.8919)(203,0.882)(204,0.8949)(205,0.8889)(206,0.8991)(207,0.8995)(208,0.908)(209,0.905)(210,0.9052)(211,0.905)(212,0.9072)(213,0.9088)(214,0.9147)(215,0.9148)(216,0.9111)(217,0.9145)(218,0.91)(219,0.9115)(220,0.9086)(221,0.9083)(222,0.9121)(223,0.9141)(224,0.9132)(225,0.912)(226,0.9139)(227,0.9107)(228,0.9098)(229,0.9116)(230,0.9118)(231,0.9119)(232,0.907)(233,0.9099)(234,0.9102)(235,0.9105)(236,0.9073)(237,0.9079)(238,0.9042)(239,0.9046)(240,0.9084)(241,0.905)(242,0.9038)(243,0.9121)(244,0.9028)(245,0.9069)(246,0.9071)(247,0.9122)(248,0.9084)(249,0.9148)(250,0.9118)(251,0.9105)(252,0.912)(253,0.9125)(254,0.911)(255,0.9138)(256,0.9133)(257,0.9123)(258,0.9078)(259,0.9123)(260,0.913)(261,0.9153)(262,0.9114)(263,0.9158)(264,0.9114)(265,0.9136)(266,0.9095)(267,0.9129)(268,0.9045)(269,0.9084)(270,0.909)(271,0.9101)(272,0.9039)(273,0.909)(274,0.9112)(275,0.9149)(276,0.9114)(277,0.912)(278,0.905)(279,0.907)(280,0.9098)(281,0.9084)(282,0.9077)(283,0.9101)(284,0.9072)(285,0.9077)(286,0.9002)(287,0.8939)(288,0.8941)(289,0.899)(290,0.8943)(291,0.8954)(292,0.891)(293,0.8926)(294,0.8803)(295,0.8861)(296,0.8806)(297,0.894)(298,0.9013)(299,0.9082)(300,0.9121)(301,0.9165)(302,0.9176)(303,0.9186)(304,0.9172)(305,0.9151)(306,0.916)(307,0.9143)(308,0.916)(309,0.916)(310,0.9136)(311,0.9154)(312,0.9125)(313,0.9147)(314,0.918)(315,0.9122)(316,0.9167)(317,0.9176)(318,0.9176)(319,0.9196)(320,0.9191)(321,0.9206)(322,0.9168)(323,0.9178)(324,0.9138)(325,0.912)(326,0.9071)(327,0.9134)(328,0.9135)(329,0.9135)(330,0.9135)(331,0.9165)(332,0.9157)(333,0.912)(334,0.9183)(335,0.9168)(336,0.9144)(337,0.9156)(338,0.9194)(339,0.918)(340,0.9182)(341,0.9146)(342,0.9179)(343,0.917)(344,0.9206)(345,0.917)(346,0.9163
)(347,0.917)(348,0.9187)(349,0.9164)(350,0.9184)(351,0.9143)(352,0.919)(353,0.9179)(354,0.9176)(355,0.9176)(356,0.9186)(357,0.9195)(358,0.9211)(359,0.9195)(360,0.9204)(361,0.9175)(362,0.9204)(363,0.9175)(364,0.9186)(365,0.915)(366,0.9167)(367,0.9125)(368,0.9151)(369,0.9142)(370,0.9188)(371,0.9125)(372,0.9131)(373,0.9066)(374,0.914)(375,0.9097)(376,0.9118)(377,0.9065)(378,0.9181)(379,0.9088)(380,0.9157)(381,0.9104)(382,0.9177)(383,0.9126)(384,0.917)(385,0.9098)(386,0.9177)(387,0.9113)(388,0.9191)(389,0.9198)(390,0.9204)(391,0.9222)(392,0.9226)(393,0.9209)(394,0.9175)(395,0.9193)(396,0.9212)(397,0.9217)(398,0.9238)(399,0.9194)(400,0.9229)(401,0.919)(402,0.9215)(403,0.9192)(404,0.9198)(405,0.9144)(406,0.9188)(407,0.9138)(408,0.9219)(409,0.9183)(410,0.9214)(411,0.9177)(412,0.918)(413,0.9186)(414,0.9163)(415,0.9201)(416,0.9151)(417,0.9187)(418,0.914)(419,0.9182)(420,0.9161)(421,0.916)(422,0.9132)(423,0.9134)(424,0.9108)(425,0.9169)(426,0.9057)(427,0.8982)(428,0.8922)(429,0.8962)(430,0.8843)(431,0.8864)(432,0.8744)(433,0.8811)(434,0.883)(435,0.9052)(436,0.902)(437,0.9195)(438,0.9186)(439,0.9264)(440,0.9245)(441,0.9265)(442,0.9244)(443,0.9253)(444,0.9231)(445,0.9238)(446,0.9248)(447,0.9237)(448,0.921)(449,0.9207)(450,0.9223)(451,0.917)(452,0.9211)(453,0.9194)(454,0.9213)(455,0.9199)(456,0.9198)(457,0.92)(458,0.9179)(459,0.9184)(460,0.9184)(461,0.9194)(462,0.9184)(463,0.9183)(464,0.9208)(465,0.92)(466,0.9214)(467,0.9219)(468,0.9164)(469,0.9189)(470,0.9162)(471,0.9137)(472,0.9164)(473,0.9147)(474,0.9145)(475,0.9132)(476,0.9124)(477,0.912)(478,0.9111)(479,0.9118)(480,0.9141)(481,0.9147)(482,0.9136)(483,0.9136)(484,0.914)(485,0.9106)(486,0.9122)(487,0.9137)(488,0.917)(489,0.9127)(490,0.9145)(491,0.9133)(492,0.9182)(493,0.915)(494,0.9157)(495,0.9145)(496,0.9129)(497,0.8973)(498,0.8948)(499,0.8797)
}; \addlegendentry{$B = 5$}
\addplot [semithick, color=olive]
coordinates{
(0,0.104)(1,0.0857)(2,0.1162)(3,0.1604)(4,0.2311)(5,0.3011)(6,0.3558)(7,0.396)(8,0.4387)(9,0.4807)(10,0.514)(11,0.5601)(12,0.5873)(13,0.6113)(14,0.6321)(15,0.6585)(16,0.6795)(17,0.6978)(18,0.7174)(19,0.736)(20,0.7456)(21,0.7633)(22,0.7675)(23,0.7788)(24,0.78)(25,0.7872)(26,0.7854)(27,0.8013)(28,0.7985)(29,0.8074)(30,0.8094)(31,0.8051)(32,0.7941)(33,0.8035)(34,0.7894)(35,0.8055)(36,0.7945)(37,0.8045)(38,0.768)(39,0.805)(40,0.7997)(41,0.8109)(42,0.7849)(43,0.8032)(44,0.7759)(45,0.8061)(46,0.787)(47,0.8072)(48,0.8022)(49,0.8304)(50,0.8254)(51,0.8271)(52,0.821)(53,0.8365)(54,0.8404)(55,0.8531)(56,0.858)(57,0.8681)(58,0.8574)(59,0.86)(60,0.8485)(61,0.8566)(62,0.857)(63,0.8673)(64,0.8666)(65,0.8642)(66,0.8658)(67,0.8726)(68,0.8718)(69,0.8804)(70,0.8755)(71,0.8786)(72,0.879)(73,0.8875)(74,0.8857)(75,0.8937)(76,0.893)(77,0.896)(78,0.8961)(79,0.8953)(80,0.8925)(81,0.8984)(82,0.8984)(83,0.8985)(84,0.8974)(85,0.9011)(86,0.8974)(87,0.9019)(88,0.8973)(89,0.9012)(90,0.9024)(91,0.9025)(92,0.9029)(93,0.9022)(94,0.9068)(95,0.905)(96,0.9053)(97,0.9013)(98,0.9051)(99,0.9061)(100,0.9046)(101,0.9034)(102,0.9)(103,0.903)(104,0.9045)(105,0.9057)(106,0.9063)(107,0.9035)(108,0.8968)(109,0.8968)(110,0.8902)(111,0.8995)(112,0.8942)(113,0.9039)(114,0.9027)(115,0.9075)(116,0.9051)(117,0.9084)(118,0.9074)(119,0.9082)(120,0.9074)(121,0.9123)(122,0.911)(123,0.9118)(124,0.9073)(125,0.907)(126,0.9065)(127,0.9089)(128,0.9098)(129,0.9145)(130,0.913)(131,0.9173)(132,0.918)(133,0.9172)(134,0.9158)(135,0.9188)(136,0.9182)(137,0.9177)(138,0.9181)(139,0.9157)(140,0.9167)(141,0.913)(142,0.9134)(143,0.908)(144,0.9104)(145,0.9054)(146,0.9086)(147,0.9078)(148,0.9112)(149,0.9082)(150,0.9091)(151,0.9151)(152,0.9151)(153,0.9183)(154,0.9178)(155,0.9171)(156,0.9184)(157,0.9174)(158,0.9207)(159,0.9148)(160,0.917)(161,0.9175)(162,0.9127)(163,0.9115)(164,0.9182)(165,0.9145)(166,0.9186)(167,0.9163)(168,0.9196)(169,0.9185)(170,0.9201)(171,0.9176)(172,0.921)(173,0.9185)(174,0.9159)(175,0.9172)(176,0.9177)(177,0.9179)(178
,0.919)(179,0.9175)(180,0.9185)(181,0.918)(182,0.9194)(183,0.914)(184,0.9195)(185,0.919)(186,0.919)(187,0.9164)(188,0.9202)(189,0.9189)(190,0.9162)(191,0.9195)(192,0.9215)(193,0.9211)(194,0.9197)(195,0.9224)(196,0.9224)(197,0.92)(198,0.9212)(199,0.9223)(200,0.9197)(201,0.9213)(202,0.9188)(203,0.9214)(204,0.9184)(205,0.9226)(206,0.9166)(207,0.9107)(208,0.8986)(209,0.9065)(210,0.9013)(211,0.9113)(212,0.9076)(213,0.9133)(214,0.9084)(215,0.9126)(216,0.9126)(217,0.9088)(218,0.9065)(219,0.9155)(220,0.9228)(221,0.9217)(222,0.9221)(223,0.9239)(224,0.9233)(225,0.9173)(226,0.9214)(227,0.922)(228,0.9217)(229,0.9249)(230,0.921)(231,0.9195)(232,0.9172)(233,0.916)(234,0.9206)(235,0.918)(236,0.9155)(237,0.9137)(238,0.9136)(239,0.9144)(240,0.9199)(241,0.9175)(242,0.9175)(243,0.9214)(244,0.9193)(245,0.9218)(246,0.9231)(247,0.9268)(248,0.924)(249,0.9279)(250,0.9253)(251,0.9237)(252,0.9251)(253,0.9265)(254,0.9253)(255,0.9264)(256,0.9264)(257,0.9262)(258,0.9245)(259,0.9257)(260,0.9278)(261,0.9272)(262,0.9274)(263,0.9275)(264,0.9269)(265,0.9267)(266,0.9276)(267,0.9285)(268,0.9236)(269,0.9255)(270,0.9287)(271,0.9283)(272,0.9268)(273,0.9269)(274,0.9271)(275,0.9288)(276,0.9287)(277,0.9283)(278,0.9286)(279,0.9276)(280,0.9292)(281,0.9289)(282,0.9298)(283,0.9284)(284,0.9287)(285,0.9289)(286,0.9281)(287,0.9247)(288,0.927)(289,0.9299)(290,0.9283)(291,0.9277)(292,0.9275)(293,0.9287)(294,0.9257)(295,0.9272)(296,0.9269)(297,0.928)(298,0.9292)(299,0.9288)(300,0.9301)(301,0.926)(302,0.9308)(303,0.9242)(304,0.9284)(305,0.9256)(306,0.9271)(307,0.9248)(308,0.928)(309,0.924)(310,0.9261)(311,0.9243)(312,0.9248)(313,0.9229)(314,0.9226)(315,0.9141)(316,0.9186)(317,0.9158)(318,0.9163)(319,0.914)(320,0.9157)(321,0.915)(322,0.9132)(323,0.9095)(324,0.9052)(325,0.8975)(326,0.8904)(327,0.9009)(328,0.9138)(329,0.9182)(330,0.9269)(331,0.9287)(332,0.9302)(333,0.9256)(334,0.9323)(335,0.9304)(336,0.932)(337,0.9312)(338,0.9322)(339,0.9307)(340,0.9307)(341,0.9264)(342,0.9287)(343,0.9288)(344,0.9303)(345,0.9292)(346,0.9
3)(347,0.9314)(348,0.9311)(349,0.9319)(350,0.9329)(351,0.9311)(352,0.9316)(353,0.9328)(354,0.9312)(355,0.929)(356,0.9311)(357,0.9306)(358,0.9315)(359,0.9324)(360,0.9322)(361,0.9332)(362,0.9311)(363,0.9318)(364,0.9309)(365,0.934)(366,0.932)(367,0.9314)(368,0.9325)(369,0.9323)(370,0.9308)(371,0.9308)(372,0.9326)(373,0.9277)(374,0.933)(375,0.9266)(376,0.9293)(377,0.9252)(378,0.9329)(379,0.9295)(380,0.9318)(381,0.9308)(382,0.9337)(383,0.9339)(384,0.9337)(385,0.9294)(386,0.9352)(387,0.9314)(388,0.9336)(389,0.9346)(390,0.9336)(391,0.9358)(392,0.9346)(393,0.9338)(394,0.9311)(395,0.9342)(396,0.9345)(397,0.9349)(398,0.9357)(399,0.9347)(400,0.9344)(401,0.9338)(402,0.9341)(403,0.9345)(404,0.9342)(405,0.9333)(406,0.9354)(407,0.9339)(408,0.9359)(409,0.9367)(410,0.9357)(411,0.9341)(412,0.9347)(413,0.9348)(414,0.9372)(415,0.9364)(416,0.934)(417,0.9332)(418,0.936)(419,0.933)(420,0.9356)(421,0.9325)(422,0.937)(423,0.9362)(424,0.9365)(425,0.9328)(426,0.9349)(427,0.9322)(428,0.9336)(429,0.9361)(430,0.9338)(431,0.9359)(432,0.934)(433,0.9381)(434,0.9374)(435,0.9359)(436,0.9378)(437,0.9367)(438,0.9375)(439,0.9377)(440,0.935)(441,0.9369)(442,0.9362)(443,0.9389)(444,0.9344)(445,0.9369)(446,0.9359)(447,0.9391)(448,0.9359)(449,0.9356)(450,0.9341)(451,0.9335)(452,0.9373)(453,0.9343)(454,0.9362)(455,0.9338)(456,0.9385)(457,0.937)(458,0.9365)(459,0.9353)(460,0.9365)(461,0.9363)(462,0.9381)(463,0.9347)(464,0.9363)(465,0.9369)(466,0.9366)(467,0.9378)(468,0.9357)(469,0.9368)(470,0.9355)(471,0.9349)(472,0.9347)(473,0.9332)(474,0.9325)(475,0.9342)(476,0.933)(477,0.9352)(478,0.9346)(479,0.9345)(480,0.9357)(481,0.9353)(482,0.9357)(483,0.9343)(484,0.9346)(485,0.9363)(486,0.936)(487,0.9366)(488,0.9371)(489,0.9347)(490,0.9365)(491,0.9363)(492,0.9342)(493,0.9352)(494,0.9348)(495,0.9351)(496,0.9359)(497,0.9344)(498,0.935)(499,0.934)
}; \addlegendentry{$B = 0$}
\end{axis}
\end{tikzpicture}}
\end{center}
\caption{Test loss and test accuracy vs. the number of global rounds for different numbers of Byzantine clients applying class-flip attacks.}
\label{class_flip_attacks}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[CNN model]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
ymin=0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
grid=major,
]
\addplot [semithick, color=red]
coordinates {
(0,0.043925603482390184)(1,0.37514839730906213)(2,0.6121883656509696)(3,0.6426592797783933)(4,0.6501780767708746)(5,0.6509695290858726)(6,0.6513652552433716)(7,0.6513652552433716)(8,0.6513652552433716)(9,0.6513652552433716)(10,0.6513652552433716)(11,0.6513652552433716)(12,0.6513652552433716)(13,0.6513652552433716)(14,0.6513652552433716)(15,0.6513652552433716)(16,0.6513652552433716)(17,0.6513652552433716)(18,0.6517609814008706)(19,0.6517609814008706)(20,0.6517609814008706)(21,0.6525524337158686)(22,0.6525524337158686)(23,0.6525524337158686)(24,0.6525524337158686)(25,0.6525524337158686)(26,0.6529481598733676)(27,0.6525524337158686)(28,0.6529481598733676)(29,0.6529481598733676)(30,0.6529481598733676)(31,0.6529481598733676)(32,0.6529481598733676)(33,0.6529481598733676)(34,0.6529481598733676)(35,0.6529481598733676)(36,0.6525524337158686)(37,0.6529481598733676)(38,0.6529481598733676)(39,0.6529481598733676)(40,0.6529481598733676)(41,0.6529481598733676)(42,0.6529481598733676)(43,0.6529481598733676)(44,0.6529481598733676)(45,0.6529481598733676)(46,0.6529481598733676)(47,0.6529481598733676)(48,0.6529481598733676)(49,0.6529481598733676)(50,0.6529481598733676)(51,0.6529481598733676)(52,0.6529481598733676)(53,0.6529481598733676)(54,0.6529481598733676)(55,0.6529481598733676)(56,0.6529481598733676)(57,0.6529481598733676)(58,0.6529481598733676)(59,0.6529481598733676)(60,0.6529481598733676)(61,0.6529481598733676)(62,0.6529481598733676)(63,0.6529481598733676)(64,0.6529481598733676)(65,0.6529481598733676)(66,0.6529481598733676)(67,0.6529481598733676)(68,0.6529481598733676)(69,0.6529481598733676)(70,0.6529481598733676)(71,0.6529481598733676)(72,0.6529481598733676)(73,0.6529481598733676)(74,0.6529481598733676)(75,0.6529481598733676)(76,0.6533438860308667)(77,0.6533438860308667)(78,0.6533438860308667)(79,0.6533438860308667)(80,0.6533438860308667)(81,0.6533438860308667)(82,0.6533438860308667)(83,0.6533438860308667)(84,0.6533438860308667)(85,0.6533438860308667)(86,0.6533438860308667)(87,0.
6533438860308667)(88,0.6533438860308667)(89,0.6533438860308667)(90,0.6533438860308667)(91,0.6537396121883656)(92,0.6533438860308667)(93,0.6537396121883656)(94,0.6533438860308667)(95,0.6533438860308667)(96,0.6537396121883656)(97,0.6537396121883656)(98,0.6533438860308667)(99,0.6537396121883656)(100,0.6537396121883656)(101,0.6537396121883656)(102,0.6537396121883656)(103,0.6537396121883656)(104,0.6537396121883656)(105,0.6537396121883656)(106,0.6541353383458647)(107,0.6541353383458647)(108,0.6541353383458647)(109,0.6541353383458647)(110,0.6541353383458647)(111,0.6549267906608627)(112,0.6549267906608627)(113,0.6549267906608627)(114,0.6549267906608627)(115,0.6549267906608627)(116,0.6549267906608627)(117,0.6549267906608627)(118,0.6553225168183617)(119,0.6553225168183617)(120,0.6553225168183617)(121,0.6553225168183617)(122,0.6557182429758607)(123,0.6557182429758607)(124,0.6557182429758607)(125,0.6557182429758607)(126,0.6557182429758607)(127,0.6557182429758607)(128,0.6557182429758607)(129,0.6557182429758607)(130,0.6557182429758607)(131,0.6557182429758607)(132,0.6557182429758607)(133,0.6565096952908587)(134,0.6565096952908587)(135,0.6569054214483577)(136,0.6569054214483577)(137,0.6569054214483577)(138,0.6569054214483577)(139,0.6569054214483577)(140,0.6569054214483577)(141,0.6569054214483577)(142,0.6569054214483577)(143,0.6569054214483577)(144,0.6569054214483577)(145,0.6569054214483577)(146,0.6569054214483577)(147,0.6569054214483577)(148,0.6569054214483577)(149,0.6569054214483577)(150,0.6569054214483577)(151,0.6569054214483577)(152,0.6569054214483577)(153,0.6569054214483577)(154,0.6573011476058568)(155,0.6569054214483577)(156,0.6569054214483577)(157,0.6573011476058568)(158,0.6573011476058568)(159,0.6573011476058568)(160,0.6573011476058568)(161,0.6573011476058568)(162,0.6576968737633557)(163,0.6576968737633557)(164,0.6576968737633557)(165,0.6580925999208548)(166,0.6580925999208548)(167,0.6580925999208548)(168,0.6580925999208548)(169,0.6580925999208548)(170,0.6580925999208548)(17
1,0.6584883260783537)(172,0.6584883260783537)(173,0.6584883260783537)(174,0.6584883260783537)(175,0.6584883260783537)(176,0.6584883260783537)(177,0.6584883260783537)(178,0.6584883260783537)(179,0.6584883260783537)(180,0.6584883260783537)(181,0.6584883260783537)(182,0.6588840522358528)(183,0.6588840522358528)(184,0.6588840522358528)(185,0.6588840522358528)(186,0.6588840522358528)(187,0.6592797783933518)(188,0.6588840522358528)(189,0.6588840522358528)(190,0.6588840522358528)(191,0.6588840522358528)(192,0.6588840522358528)(193,0.6588840522358528)(194,0.6592797783933518)(195,0.6592797783933518)(196,0.6596755045508508)(197,0.6596755045508508)(198,0.6596755045508508)(199,0.6600712307083498)(200,0.6600712307083498)(201,0.6600712307083498)(202,0.6600712307083498)(203,0.6600712307083498)(204,0.6600712307083498)(205,0.6600712307083498)(206,0.6600712307083498)(207,0.6600712307083498)(208,0.6600712307083498)(209,0.6604669568658489)(210,0.6604669568658489)(211,0.6604669568658489)(212,0.6604669568658489)(213,0.6608626830233478)(214,0.6604669568658489)(215,0.6608626830233478)(216,0.6608626830233478)(217,0.6608626830233478)(218,0.6612584091808469)(219,0.6616541353383458)(220,0.6620498614958449)(221,0.6620498614958449)(222,0.6632370399683419)(223,0.6632370399683419)(224,0.6632370399683419)(225,0.6632370399683419)(226,0.6632370399683419)(227,0.6632370399683419)(228,0.6632370399683419)(229,0.6628413138108429)(230,0.6632370399683419)(231,0.6636327661258409)(232,0.6632370399683419)(233,0.6636327661258409)(234,0.6636327661258409)(235,0.6648199445983379)(236,0.665215670755837)(237,0.664424218440839)(238,0.664424218440839)(239,0.6648199445983379)(240,0.6648199445983379)(241,0.665215670755837)(242,0.665215670755837)(243,0.665215670755837)(244,0.665215670755837)(245,0.665215670755837)(246,0.665215670755837)(247,0.665215670755837)(248,0.665215670755837)(249,0.6656113969133359)(250,0.6656113969133359)(251,0.6656113969133359)(252,0.6656113969133359)(253,0.6656113969133359)(254,0.665611396913335
9)(255,0.666402849228334)(256,0.666798575385833)(257,0.666402849228334)(258,0.666798575385833)(259,0.666007123070835)(260,0.666402849228334)(261,0.666798575385833)(262,0.667194301543332)(263,0.667194301543332)(264,0.667194301543332)(265,0.667194301543332)(266,0.667194301543332)(267,0.667194301543332)(268,0.667194301543332)(269,0.66798575385833)(270,0.66798575385833)(271,0.6683814800158291)(272,0.668777206173328)(273,0.6691729323308271)(274,0.6691729323308271)(275,0.6691729323308271)(276,0.6691729323308271)(277,0.669568658488326)(278,0.669568658488326)(279,0.669568658488326)(280,0.6703601108033241)(281,0.6703601108033241)(282,0.6711515631183221)(283,0.6711515631183221)(284,0.6711515631183221)(285,0.6711515631183221)(286,0.6711515631183221)(287,0.6707558369608231)(288,0.6711515631183221)(289,0.6711515631183221)(290,0.6711515631183221)(291,0.6715472892758211)(292,0.6715472892758211)(293,0.6715472892758211)(294,0.6715472892758211)(295,0.6715472892758211)(296,0.6715472892758211)(297,0.6715472892758211)(298,0.6715472892758211)(299,0.6715472892758211)(300,0.6715472892758211)(301,0.6715472892758211)(302,0.6715472892758211)(303,0.6715472892758211)(304,0.6715472892758211)(305,0.6715472892758211)(306,0.6723387415908192)(307,0.6715472892758211)(308,0.6715472892758211)(309,0.6723387415908192)(310,0.6723387415908192)(311,0.6723387415908192)(312,0.6723387415908192)(313,0.6723387415908192)(314,0.6719430154333201)(315,0.6719430154333201)(316,0.6719430154333201)(317,0.6715472892758211)(318,0.6715472892758211)(319,0.6715472892758211)(320,0.6719430154333201)(321,0.6715472892758211)(322,0.6715472892758211)(323,0.6723387415908192)(324,0.6723387415908192)(325,0.6727344677483181)(326,0.6727344677483181)(327,0.6731301939058172)(328,0.6731301939058172)(329,0.6731301939058172)(330,0.6735259200633161)(331,0.6735259200633161)(332,0.6735259200633161)(333,0.6735259200633161)(334,0.6735259200633161)(335,0.6735259200633161)(336,0.6735259200633161)(337,0.6739216462208152)(338,0.6739216462208152)(339
,0.6739216462208152)(340,0.6739216462208152)(341,0.6739216462208152)(342,0.6739216462208152)(343,0.6739216462208152)(344,0.6739216462208152)(345,0.6743173723783142)(346,0.6743173723783142)(347,0.6747130985358132)(348,0.6747130985358132)(349,0.6747130985358132)(350,0.6747130985358132)(351,0.6747130985358132)(352,0.6747130985358132)(353,0.6747130985358132)(354,0.6751088246933122)(355,0.6755045508508113)(356,0.6755045508508113)(357,0.6755045508508113)(358,0.6751088246933122)(359,0.6751088246933122)(360,0.6755045508508113)(361,0.6755045508508113)(362,0.6751088246933122)(363,0.6762960031658093)(364,0.6766917293233082)(365,0.6766917293233082)(366,0.6766917293233082)(367,0.6766917293233082)(368,0.6762960031658093)(369,0.6766917293233082)(370,0.6766917293233082)(371,0.6766917293233082)(372,0.6762960031658093)(373,0.6766917293233082)(374,0.6762960031658093)(375,0.6762960031658093)(376,0.6766917293233082)(377,0.6766917293233082)(378,0.6770874554808073)(379,0.6770874554808073)(380,0.6770874554808073)(381,0.6766917293233082)(382,0.6770874554808073)(383,0.6766917293233082)(384,0.6770874554808073)(385,0.6770874554808073)(386,0.6770874554808073)(387,0.6774831816383063)(388,0.6770874554808073)(389,0.6770874554808073)(390,0.6774831816383063)(391,0.6770874554808073)(392,0.6770874554808073)(393,0.6770874554808073)(394,0.6774831816383063)(395,0.6774831816383063)(396,0.6774831816383063)(397,0.6774831816383063)(398,0.6778789077958053)(399,0.6782746339533043)(400,0.6778789077958053)(401,0.6778789077958053)(402,0.6778789077958053)(403,0.6778789077958053)(404,0.6778789077958053)(405,0.6778789077958053)(406,0.6778789077958053)(407,0.6778789077958053)(408,0.6778789077958053)(409,0.6778789077958053)(410,0.6778789077958053)(411,0.6778789077958053)(412,0.6778789077958053)(413,0.6778789077958053)(414,0.6778789077958053)(415,0.6778789077958053)(416,0.6778789077958053)(417,0.6778789077958053)(418,0.6782746339533043)(419,0.6778789077958053)(420,0.6778789077958053)(421,0.6778789077958053)(422,0.67787
89077958053)(423,0.6778789077958053)(424,0.6778789077958053)(425,0.6778789077958053)(426,0.6778789077958053)(427,0.6778789077958053)(428,0.6778789077958053)(429,0.6778789077958053)(430,0.6778789077958053)(431,0.6794618124258014)(432,0.6794618124258014)(433,0.6794618124258014)(434,0.6794618124258014)(435,0.6794618124258014)(436,0.6798575385833003)(437,0.6798575385833003)(438,0.6798575385833003)(439,0.6798575385833003)(440,0.6798575385833003)(441,0.6798575385833003)(442,0.6798575385833003)(443,0.6798575385833003)(444,0.6798575385833003)(445,0.6798575385833003)(446,0.6802532647407994)(447,0.6802532647407994)(448,0.6802532647407994)(449,0.6802532647407994)(450,0.6802532647407994)(451,0.6802532647407994)(452,0.6802532647407994)(453,0.6802532647407994)(454,0.6802532647407994)(455,0.6802532647407994)(456,0.6802532647407994)(457,0.6802532647407994)(458,0.6802532647407994)(459,0.6802532647407994)(460,0.6802532647407994)(461,0.6802532647407994)(462,0.6802532647407994)(463,0.6802532647407994)(464,0.6802532647407994)(465,0.6802532647407994)(466,0.6802532647407994)(467,0.6802532647407994)(468,0.6802532647407994)(469,0.6802532647407994)(470,0.6802532647407994)(471,0.6802532647407994)(472,0.6802532647407994)(473,0.6802532647407994)(474,0.6806489908982983)(475,0.6806489908982983)(476,0.6806489908982983)(477,0.6810447170557974)(478,0.6810447170557974)(479,0.6810447170557974)(480,0.6810447170557974)(481,0.6810447170557974)(482,0.6814404432132964)(483,0.6814404432132964)(484,0.6810447170557974)(485,0.6810447170557974)(486,0.6814404432132964)(487,0.6814404432132964)(488,0.6814404432132964)(489,0.6814404432132964)(490,0.6814404432132964)(491,0.6814404432132964)(492,0.6814404432132964)(493,0.6814404432132964)(494,0.6822318955282944)(495,0.6818361693707954)(496,0.6818361693707954)(497,0.6822318955282944)(498,0.6822318955282944)(499,0.6822318955282944)
};\addlegendentry{ROTAF $(s=1)$}
\addplot [semithick, color = blue]
coordinates {
(0,0.043925603482390184)(1,0.37158686189157103)(2,0.6327661258409181)(3,0.6462208151958845)(4,0.6505738029283735)(5,0.6549267906608627)(6,0.6624455876533439)(7,0.7063711911357341)(8,0.6842105263157895)(9,0.6683814800158291)(10,0.7162643450732094)(11,0.7202216066481995)(12,0.7415908191531461)(13,0.7320933913731698)(14,0.7538583300356153)(15,0.738425009893154)(16,0.7245745943806886)(17,0.7692916501780768)(18,0.7700831024930748)(19,0.7613771270280966)(20,0.77997625643055)(21,0.7578155916106054)(22,0.777206173328057)(23,0.7831420656905421)(24,0.7902651365255243)(25,0.7724574594380689)(26,0.8060941828254847)(27,0.8250890383854372)(28,0.8286505738029284)(29,0.81598733676296)(30,0.7950138504155124)(31,0.8195488721804511)(32,0.8286505738029284)(33,0.8215275029679462)(34,0.8322121092204194)(35,0.8199445983379502)(36,0.8393351800554016)(37,0.8413138108428967)(38,0.8389394538979027)(39,0.8444796201028888)(40,0.8425009893153937)(41,0.8389394538979027)(42,0.8302334784329244)(43,0.8464582508903838)(44,0.84804115552038)(45,0.8472497032053818)(46,0.8512069647803719)(47,0.8476454293628809)(48,0.850415512465374)(49,0.8484368816778789)(50,0.855164226355362)(51,0.85199841709537)(52,0.8547685001978631)(53,0.850019786307875)(54,0.855164226355362)(55,0.8571428571428571)(56,0.8583300356153541)(57,0.8567471309853582)(58,0.8622872971903442)(59,0.8599129402453503)(60,0.8591214879303521)(61,0.8642659279778393)(62,0.8611001187178472)(63,0.8638702018203404)(64,0.8666402849228334)(65,0.8646616541353384)(66,0.8642659279778393)(67,0.8646616541353384)(68,0.8662445587653343)(69,0.8678274633953305)(70,0.8646616541353384)(71,0.8642659279778393)(72,0.8646616541353384)(73,0.8709932726553226)(74,0.8737633557578156)(75,0.8733676296003166)(76,0.8741590819153146)(77,0.8741590819153146)(78,0.8705975464978235)(79,0.8694103680253265)(80,0.8694103680253265)(81,0.8674317372378314)(82,0.8729719034428176)(83,0.8713889988128215)(84,0.8769291650178077)(85,0.8765334388603087)(86,0.8777206173328057)(87,0.87613771270280
96)(88,0.8785120696478037)(89,0.8765334388603087)(90,0.8777206173328057)(91,0.8816778789077958)(92,0.8804907004352988)(93,0.8836565096952909)(94,0.8832607835377918)(95,0.8832607835377918)(96,0.8816778789077958)(97,0.8800949742777998)(98,0.8761377127028096)(99,0.8773248911753067)(100,0.8769291650178077)(101,0.8812821527502968)(102,0.8820736050652949)(103,0.8832607835377918)(104,0.8840522358527899)(105,0.8812821527502968)(106,0.8793035219628017)(107,0.8824693312227938)(108,0.8864265927977839)(109,0.886822318955283)(110,0.8832607835377918)(111,0.8836565096952909)(112,0.8808864265927978)(113,0.888800949742778)(114,0.888405223585279)(115,0.8848436881677879)(116,0.886822318955283)(117,0.887613771270281)(118,0.888405223585279)(119,0.8899881282152751)(120,0.889196675900277)(121,0.8899881282152751)(122,0.889196675900277)(123,0.888800949742778)(124,0.8899881282152751)(125,0.88800949742778)(126,0.8864265927977839)(127,0.886030866640285)(128,0.890383854372774)(129,0.888405223585279)(130,0.8872180451127819)(131,0.887613771270281)(132,0.888800949742778)(133,0.8872180451127819)(134,0.8852394143252869)(135,0.890383854372774)(136,0.888405223585279)(137,0.8872180451127819)(138,0.8864265927977839)(139,0.8848436881677879)(140,0.886822318955283)(141,0.8872180451127819)(142,0.8872180451127819)(143,0.8907795805302731)(144,0.8899881282152751)(145,0.889196675900277)(146,0.8899881282152751)(147,0.891175306687772)(148,0.8919667590027701)(149,0.8907795805302731)(150,0.8907795805302731)(151,0.890383854372774)(152,0.888405223585279)(153,0.8935496636327661)(154,0.8935496636327661)(155,0.8935496636327661)(156,0.8947368421052632)(157,0.8951325682627622)(158,0.8951325682627622)(159,0.8931539374752672)(160,0.8931539374752672)(161,0.8971111990502573)(162,0.8967154728927582)(163,0.8967154728927582)(164,0.8975069252077562)(165,0.8967154728927582)(166,0.8959240205777602)(167,0.8943411159477641)(168,0.8963197467352592)(169,0.8967154728927582)(170,0.8947368421052632)(171,0.8951325682627622)(172,0.896715472
8927582)(173,0.8975069252077562)(174,0.8967154728927582)(175,0.8963197467352592)(176,0.8971111990502573)(177,0.8986941036802533)(178,0.8994855559952513)(179,0.8975069252077562)(180,0.8982983775227542)(181,0.8982983775227542)(182,0.9002770083102493)(183,0.8986941036802533)(184,0.9006727344677483)(185,0.9006727344677483)(186,0.9002770083102493)(187,0.9006727344677483)(188,0.8994855559952513)(189,0.9006727344677483)(190,0.8998812821527503)(191,0.8994855559952513)(192,0.8990898298377523)(193,0.8986941036802533)(194,0.8990898298377523)(195,0.8986941036802533)(196,0.8967154728927582)(197,0.8994855559952513)(198,0.8994855559952513)(199,0.8994855559952513)(200,0.9010684606252474)(201,0.9018599129402454)(202,0.9022556390977443)(203,0.9014641867827463)(204,0.9006727344677483)(205,0.9022556390977443)(206,0.9018599129402454)(207,0.9030470914127424)(208,0.9022556390977443)(209,0.9022556390977443)(210,0.9026513652552434)(211,0.9018599129402454)(212,0.9014641867827463)(213,0.9022556390977443)(214,0.9026513652552434)(215,0.9002770083102493)(216,0.9030470914127424)(217,0.9034428175702414)(218,0.9034428175702414)(219,0.9030470914127424)(220,0.9034428175702414)(221,0.9022556390977443)(222,0.9022556390977443)(223,0.9010684606252474)(224,0.8998812821527503)(225,0.9014641867827463)(226,0.9030470914127424)(227,0.9010684606252474)(228,0.9014641867827463)(229,0.9014641867827463)(230,0.9042342698852394)(231,0.9038385437277404)(232,0.9030470914127424)(233,0.9046299960427384)(234,0.9038385437277404)(235,0.9026513652552434)(236,0.9022556390977443)(237,0.9034428175702414)(238,0.9042342698852394)(239,0.9042342698852394)(240,0.9034428175702414)(241,0.9030470914127424)(242,0.9038385437277404)(243,0.9046299960427384)(244,0.9042342698852394)(245,0.9038385437277404)(246,0.9062129006727345)(247,0.9066086268302335)(248,0.9081915314602296)(249,0.9074000791452315)(250,0.9074000791452315)(251,0.9058171745152355)(252,0.9034428175702414)(253,0.9054214483577364)(254,0.9034428175702414)(255,0.9034428175702414)
(256,0.9038385437277404)(257,0.9066086268302335)(258,0.9077958053027305)(259,0.9081915314602296)(260,0.9058171745152355)(261,0.9074000791452315)(262,0.9070043529877325)(263,0.9062129006727345)(264,0.9062129006727345)(265,0.9062129006727345)(266,0.9062129006727345)(267,0.9070043529877325)(268,0.9058171745152355)(269,0.9050257222002375)(270,0.9046299960427384)(271,0.9066086268302335)(272,0.9062129006727345)(273,0.9093787099327265)(274,0.9089829837752276)(275,0.9077958053027305)(276,0.9070043529877325)(277,0.9066086268302335)(278,0.9058171745152355)(279,0.9062129006727345)(280,0.9070043529877325)(281,0.9070043529877325)(282,0.9077958053027305)(283,0.9066086268302335)(284,0.9101701622477246)(285,0.9085872576177285)(286,0.9085872576177285)(287,0.9081915314602296)(288,0.9074000791452315)(289,0.9074000791452315)(290,0.9066086268302335)(291,0.9062129006727345)(292,0.9070043529877325)(293,0.9085872576177285)(294,0.9093787099327265)(295,0.9077958053027305)(296,0.9077958053027305)(297,0.9074000791452315)(298,0.9077958053027305)(299,0.9089829837752276)(300,0.9077958053027305)(301,0.9097744360902256)(302,0.9097744360902256)(303,0.9121487930352197)(304,0.9117530668777206)(305,0.9133359715077166)(306,0.9117530668777206)(307,0.9097744360902256)(308,0.9101701622477246)(309,0.9113573407202216)(310,0.9093787099327265)(311,0.9081915314602296)(312,0.9074000791452315)(313,0.9066086268302335)(314,0.9070043529877325)(315,0.9085872576177285)(316,0.9089829837752276)(317,0.9097744360902256)(318,0.9097744360902256)(319,0.9125445191927186)(320,0.9113573407202216)(321,0.9093787099327265)(322,0.9101701622477246)(323,0.9109616145627226)(324,0.9113573407202216)(325,0.9121487930352197)(326,0.9121487930352197)(327,0.9129402453502177)(328,0.9129402453502177)(329,0.9129402453502177)(330,0.9133359715077166)(331,0.9153146022952117)(332,0.9133359715077166)(333,0.9137316976652157)(334,0.9133359715077166)(335,0.9153146022952117)(336,0.9157103284527107)(337,0.9149188761377127)(338,0.9153146022952117)(339,0.9
161060546102098)(340,0.9165017807677087)(341,0.9161060546102098)(342,0.9149188761377127)(343,0.9157103284527107)(344,0.9165017807677087)(345,0.9145231499802137)(346,0.9145231499802137)(347,0.9149188761377127)(348,0.9161060546102098)(349,0.9161060546102098)(350,0.9153146022952117)(351,0.9161060546102098)(352,0.9168975069252078)(353,0.9149188761377127)(354,0.9168975069252078)(355,0.9153146022952117)(356,0.9161060546102098)(357,0.9184804115552038)(358,0.9172932330827067)(359,0.9153146022952117)(360,0.9157103284527107)(361,0.9149188761377127)(362,0.9141274238227147)(363,0.9149188761377127)(364,0.9145231499802137)(365,0.9161060546102098)(366,0.9161060546102098)(367,0.9165017807677087)(368,0.9161060546102098)(369,0.9168975069252078)(370,0.9161060546102098)(371,0.9165017807677087)(372,0.9153146022952117)(373,0.9141274238227147)(374,0.9165017807677087)(375,0.9153146022952117)(376,0.9157103284527107)(377,0.9161060546102098)(378,0.9141274238227147)(379,0.9172932330827067)(380,0.9176889592402058)(381,0.9172932330827067)(382,0.9161060546102098)(383,0.9157103284527107)(384,0.9145231499802137)(385,0.9149188761377127)(386,0.9149188761377127)(387,0.9149188761377127)(388,0.9137316976652157)(389,0.9157103284527107)(390,0.9176889592402058)(391,0.9176889592402058)(392,0.9172932330827067)(393,0.9180846853977048)(394,0.9188761377127028)(395,0.9180846853977048)(396,0.9168975069252078)(397,0.9165017807677087)(398,0.9157103284527107)(399,0.9153146022952117)(400,0.9161060546102098)(401,0.9168975069252078)(402,0.9168975069252078)(403,0.9176889592402058)(404,0.9180846853977048)(405,0.9180846853977048)(406,0.9192718638702018)(407,0.9168975069252078)(408,0.9172932330827067)(409,0.9196675900277008)(410,0.9188761377127028)(411,0.9216462208151959)(412,0.9212504946576969)(413,0.9200633161851999)(414,0.9188761377127028)(415,0.9200633161851999)(416,0.9196675900277008)(417,0.9200633161851999)(418,0.9184804115552038)(419,0.9184804115552038)(420,0.9192718638702018)(421,0.9184804115552038)(422,0.921250494
6576969)(423,0.9200633161851999)(424,0.9216462208151959)(425,0.9212504946576969)(426,0.9224376731301939)(427,0.9196675900277008)(428,0.9192718638702018)(429,0.9208547685001979)(430,0.9216462208151959)(431,0.9212504946576969)(432,0.9220419469726949)(433,0.925999208547685)(434,0.924812030075188)(435,0.9220419469726949)(436,0.9212504946576969)(437,0.9208547685001979)(438,0.9216462208151959)(439,0.9224376731301939)(440,0.9216462208151959)(441,0.9216462208151959)(442,0.9228333992876929)(443,0.9220419469726949)(444,0.9244163039176889)(445,0.92402057776019)(446,0.923229125445192)(447,0.923229125445192)(448,0.923229125445192)(449,0.9224376731301939)(450,0.9236248516026909)(451,0.9228333992876929)(452,0.9224376731301939)(453,0.9224376731301939)(454,0.925603482390186)(455,0.925207756232687)(456,0.927582113177681)(457,0.927582113177681)(458,0.926394934705184)(459,0.925603482390186)(460,0.924812030075188)(461,0.9228333992876929)(462,0.926790660862683)(463,0.925999208547685)(464,0.925603482390186)(465,0.92402057776019)(466,0.925603482390186)(467,0.925207756232687)(468,0.9244163039176889)(469,0.92402057776019)(470,0.92402057776019)(471,0.92402057776019)(472,0.9236248516026909)(473,0.9228333992876929)(474,0.9236248516026909)(475,0.9244163039176889)(476,0.924812030075188)(477,0.923229125445192)(478,0.925603482390186)(479,0.9244163039176889)(480,0.9224376731301939)(481,0.9236248516026909)(482,0.9216462208151959)(483,0.9216462208151959)(484,0.923229125445192)(485,0.9236248516026909)(486,0.9236248516026909)(487,0.926394934705184)(488,0.9271863870201821)(489,0.926790660862683)(490,0.9279778393351801)(491,0.9279778393351801)(492,0.9271863870201821)(493,0.9271863870201821)(494,0.926790660862683)(495,0.9271863870201821)(496,0.925999208547685)(497,0.928373565492679)(498,0.927582113177681)(499,0.927582113177681)
};\addlegendentry{ROTAF $(s = 2)$}
\addplot [semithick, color = green]
coordinates {
(0,0.043925603482390184)(1,0.38306292045904233)(2,0.6367233874159082)(3,0.6474079936683815)(4,0.6509695290858726)(5,0.6521567075583696)(6,0.666402849228334)(7,0.6739216462208152)(8,0.7214087851206965)(9,0.7332805698456668)(10,0.738425009893154)(11,0.7364463791056589)(12,0.7621685793430946)(13,0.7688959240205777)(14,0.7748318163830629)(15,0.7843292441630392)(16,0.7954095765730115)(17,0.8116343490304709)(18,0.8207360506529482)(19,0.8274633953304313)(20,0.8215275029679462)(21,0.8266719430154333)(22,0.816383062920459)(23,0.8381480015829046)(24,0.8401266323703996)(25,0.8345864661654135)(26,0.8405223585278987)(27,0.8456667985753858)(28,0.8476454293628809)(29,0.8460625247328848)(30,0.852394143252869)(31,0.850415512465374)(32,0.8579343094578552)(33,0.8674317372378314)(34,0.8678274633953305)(35,0.8662445587653343)(36,0.8694103680253265)(37,0.8614958448753463)(38,0.8698060941828255)(39,0.8698060941828255)(40,0.8658488326078354)(41,0.8702018203403245)(42,0.8678274633953305)(43,0.8713889988128215)(44,0.8765334388603087)(45,0.8800949742777998)(46,0.8820736050652949)(47,0.8745548080728136)(48,0.8800949742777998)(49,0.8800949742777998)(50,0.8757419865453107)(51,0.8844479620102889)(52,0.8832607835377918)(53,0.8828650573802929)(54,0.886030866640285)(55,0.8844479620102889)(56,0.8844479620102889)(57,0.888800949742778)(58,0.8852394143252869)(59,0.886030866640285)(60,0.8844479620102889)(61,0.886822318955283)(62,0.887613771270281)(63,0.886822318955283)(64,0.890383854372774)(65,0.891175306687772)(66,0.8919667590027701)(67,0.8915710328452711)(68,0.891175306687772)(69,0.8923624851602691)(70,0.8943411159477641)(71,0.8935496636327661)(72,0.8935496636327661)(73,0.8915710328452711)(74,0.8919667590027701)(75,0.8931539374752672)(76,0.8947368421052632)(77,0.8935496636327661)(78,0.8947368421052632)(79,0.9002770083102493)(80,0.8975069252077562)(81,0.8979026513652553)(82,0.8994855559952513)(83,0.8982983775227542)(84,0.8979026513652553)(85,0.8986941036802533)(86,0.9014641867827463)(87,0.89948555599525
13)(88,0.8975069252077562)(89,0.8994855559952513)(90,0.8998812821527503)(91,0.9002770083102493)(92,0.8994855559952513)(93,0.9002770083102493)(94,0.9018599129402454)(95,0.9022556390977443)(96,0.9034428175702414)(97,0.9042342698852394)(98,0.9042342698852394)(99,0.9038385437277404)(100,0.9034428175702414)(101,0.9038385437277404)(102,0.9038385437277404)(103,0.9042342698852394)(104,0.9038385437277404)(105,0.9022556390977443)(106,0.9046299960427384)(107,0.9034428175702414)(108,0.9042342698852394)(109,0.9038385437277404)(110,0.9062129006727345)(111,0.9066086268302335)(112,0.9062129006727345)(113,0.9077958053027305)(114,0.9054214483577364)(115,0.9074000791452315)(116,0.9062129006727345)(117,0.9066086268302335)(118,0.9074000791452315)(119,0.9081915314602296)(120,0.9093787099327265)(121,0.9085872576177285)(122,0.9101701622477246)(123,0.9093787099327265)(124,0.9109616145627226)(125,0.9121487930352197)(126,0.9113573407202216)(127,0.9117530668777206)(128,0.9101701622477246)(129,0.9101701622477246)(130,0.9117530668777206)(131,0.9129402453502177)(132,0.9121487930352197)(133,0.9141274238227147)(134,0.9105658884052236)(135,0.9125445191927186)(136,0.9141274238227147)(137,0.9145231499802137)(138,0.9117530668777206)(139,0.9141274238227147)(140,0.9129402453502177)(141,0.9125445191927186)(142,0.9145231499802137)(143,0.9129402453502177)(144,0.9161060546102098)(145,0.9176889592402058)(146,0.9168975069252078)(147,0.9200633161851999)(148,0.9180846853977048)(149,0.9172932330827067)(150,0.9165017807677087)(151,0.9157103284527107)(152,0.9188761377127028)(153,0.9176889592402058)(154,0.9168975069252078)(155,0.9176889592402058)(156,0.9176889592402058)(157,0.9184804115552038)(158,0.9176889592402058)(159,0.9180846853977048)(160,0.9188761377127028)(161,0.9200633161851999)(162,0.9204590423426988)(163,0.9216462208151959)(164,0.9212504946576969)(165,0.9220419469726949)(166,0.9204590423426988)(167,0.9216462208151959)(168,0.9204590423426988)(169,0.9208547685001979)(170,0.9208547685001979)(171,0.9232291254
45192)(172,0.9208547685001979)(173,0.9180846853977048)(174,0.9180846853977048)(175,0.9220419469726949)(176,0.92402057776019)(177,0.9228333992876929)(178,0.9244163039176889)(179,0.924812030075188)(180,0.925999208547685)(181,0.9236248516026909)(182,0.927582113177681)(183,0.925603482390186)(184,0.9279778393351801)(185,0.925603482390186)(186,0.9236248516026909)(187,0.926790660862683)(188,0.9271863870201821)(189,0.925603482390186)(190,0.926394934705184)(191,0.925603482390186)(192,0.9271863870201821)(193,0.9279778393351801)(194,0.9299564701226751)(195,0.9303521962801741)(196,0.9299564701226751)(197,0.9271863870201821)(198,0.9303521962801741)(199,0.9295607439651761)(200,0.928373565492679)(201,0.9291650178076771)(202,0.9311436485951722)(203,0.9299564701226751)(204,0.9327265532251682)(205,0.9343094578551642)(206,0.9339137316976652)(207,0.9331222793826672)(208,0.9319351009101702)(209,0.9299564701226751)(210,0.9311436485951722)(211,0.9315393747526711)(212,0.9327265532251682)(213,0.9307479224376731)(214,0.9303521962801741)(215,0.9327265532251682)(216,0.9331222793826672)(217,0.9299564701226751)(218,0.9315393747526711)(219,0.9315393747526711)(220,0.9311436485951722)(221,0.9311436485951722)(222,0.9323308270676691)(223,0.9319351009101702)(224,0.9315393747526711)(225,0.9347051840126632)(226,0.9335180055401662)(227,0.9351009101701623)(228,0.9347051840126632)(229,0.9351009101701623)(230,0.9362880886426593)(231,0.9343094578551642)(232,0.9343094578551642)(233,0.9347051840126632)(234,0.9347051840126632)(235,0.9347051840126632)(236,0.9343094578551642)(237,0.9351009101701623)(238,0.9347051840126632)(239,0.9347051840126632)(240,0.9347051840126632)(241,0.9343094578551642)(242,0.9358923624851603)(243,0.9374752671151563)(244,0.9378709932726553)(245,0.9343094578551642)(246,0.9351009101701623)(247,0.9339137316976652)(248,0.9351009101701623)(249,0.9354966363276612)(250,0.9331222793826672)(251,0.9362880886426593)(252,0.9354966363276612)(253,0.9358923624851603)(254,0.9347051840126632)(255,0.9358923
624851603)(256,0.9370795409576573)(257,0.9362880886426593)(258,0.9394538979026513)(259,0.9406410763751484)(260,0.9398496240601504)(261,0.9410368025326474)(262,0.9402453502176494)(263,0.9398496240601504)(264,0.9386624455876533)(265,0.9410368025326474)(266,0.9398496240601504)(267,0.9402453502176494)(268,0.9394538979026513)(269,0.9398496240601504)(270,0.9414325286901464)(271,0.9418282548476454)(272,0.9382667194301544)(273,0.9390581717451524)(274,0.9410368025326474)(275,0.9418282548476454)(276,0.9414325286901464)(277,0.9398496240601504)(278,0.9410368025326474)(279,0.9402453502176494)(280,0.9406410763751484)(281,0.9426197071626434)(282,0.9414325286901464)(283,0.9414325286901464)(284,0.9418282548476454)(285,0.9414325286901464)(286,0.9414325286901464)(287,0.9394538979026513)(288,0.9398496240601504)(289,0.9422239810051445)(290,0.9418282548476454)(291,0.9422239810051445)(292,0.9426197071626434)(293,0.9422239810051445)(294,0.9430154333201425)(295,0.9430154333201425)(296,0.9434111594776414)(297,0.9438068856351405)(298,0.9430154333201425)(299,0.9430154333201425)(300,0.9426197071626434)(301,0.9438068856351405)(302,0.9426197071626434)(303,0.9434111594776414)(304,0.9426197071626434)(305,0.9434111594776414)(306,0.9422239810051445)(307,0.9430154333201425)(308,0.9430154333201425)(309,0.9438068856351405)(310,0.9438068856351405)(311,0.9434111594776414)(312,0.9426197071626434)(313,0.9430154333201425)(314,0.9438068856351405)(315,0.9438068856351405)(316,0.9438068856351405)(317,0.9438068856351405)(318,0.9434111594776414)(319,0.9434111594776414)(320,0.9430154333201425)(321,0.9430154333201425)(322,0.9438068856351405)(323,0.9426197071626434)(324,0.9426197071626434)(325,0.9449940641076375)(326,0.9438068856351405)(327,0.9438068856351405)(328,0.9434111594776414)(329,0.9449940641076375)(330,0.9445983379501385)(331,0.9457855164226355)(332,0.9442026117926395)(333,0.9442026117926395)(334,0.9457855164226355)(335,0.9449940641076375)(336,0.9449940641076375)(337,0.9438068856351405)(338,0.943806885635140
5)(339,0.9438068856351405)(340,0.9453897902651365)(341,0.9442026117926395)(342,0.9445983379501385)(343,0.9442026117926395)(344,0.9453897902651365)(345,0.9442026117926395)(346,0.9445983379501385)(347,0.9445983379501385)(348,0.9438068856351405)(349,0.9445983379501385)(350,0.9430154333201425)(351,0.9449940641076375)(352,0.9449940641076375)(353,0.9457855164226355)(354,0.9442026117926395)(355,0.9445983379501385)(356,0.9449940641076375)(357,0.9449940641076375)(358,0.9457855164226355)(359,0.9465769687376335)(360,0.9453897902651365)(361,0.9457855164226355)(362,0.9465769687376335)(363,0.9445983379501385)(364,0.9461812425801346)(365,0.9445983379501385)(366,0.9453897902651365)(367,0.9445983379501385)(368,0.9449940641076375)(369,0.9449940641076375)(370,0.9453897902651365)(371,0.9457855164226355)(372,0.9465769687376335)(373,0.9453897902651365)(374,0.9453897902651365)(375,0.9485555995251286)(376,0.9473684210526315)(377,0.9473684210526315)(378,0.9481598733676296)(379,0.9485555995251286)(380,0.9481598733676296)(381,0.9477641472101306)(382,0.9465769687376335)(383,0.9473684210526315)(384,0.9489513256826276)(385,0.9489513256826276)(386,0.9481598733676296)(387,0.9473684210526315)(388,0.9481598733676296)(389,0.9481598733676296)(390,0.9485555995251286)(391,0.9497427779976256)(392,0.9501385041551247)(393,0.9505342303126236)(394,0.9481598733676296)(395,0.9493470518401267)(396,0.9497427779976256)(397,0.9485555995251286)(398,0.9477641472101306)(399,0.9485555995251286)(400,0.9477641472101306)(401,0.9481598733676296)(402,0.9465769687376335)(403,0.9481598733676296)(404,0.9497427779976256)(405,0.9497427779976256)(406,0.9513256826276217)(407,0.9505342303126236)(408,0.9501385041551247)(409,0.9513256826276217)(410,0.9513256826276217)(411,0.9517214087851207)(412,0.9513256826276217)(413,0.9525128611001187)(414,0.9513256826276217)(415,0.9517214087851207)(416,0.9525128611001187)(417,0.9513256826276217)(418,0.9509299564701227)(419,0.9513256826276217)(420,0.9521171349426197)(421,0.9513256826276217)(422,0
.9513256826276217)(423,0.9521171349426197)(424,0.9529085872576177)(425,0.9525128611001187)(426,0.9529085872576177)(427,0.9521171349426197)(428,0.9521171349426197)(429,0.9525128611001187)(430,0.9533043134151168)(431,0.9517214087851207)(432,0.9521171349426197)(433,0.9525128611001187)(434,0.9533043134151168)(435,0.9521171349426197)(436,0.9529085872576177)(437,0.9529085872576177)(438,0.9533043134151168)(439,0.9533043134151168)(440,0.9533043134151168)(441,0.9513256826276217)(442,0.9509299564701227)(443,0.9517214087851207)(444,0.9517214087851207)(445,0.9525128611001187)(446,0.9525128611001187)(447,0.9537000395726157)(448,0.9533043134151168)(449,0.9544914918876137)(450,0.9544914918876137)(451,0.9529085872576177)(452,0.9525128611001187)(453,0.9521171349426197)(454,0.9525128611001187)(455,0.9521171349426197)(456,0.9521171349426197)(457,0.9533043134151168)(458,0.9537000395726157)(459,0.9537000395726157)(460,0.9529085872576177)(461,0.9544914918876137)(462,0.9548872180451128)(463,0.9533043134151168)(464,0.9540957657301148)(465,0.9525128611001187)(466,0.9533043134151168)(467,0.9540957657301148)(468,0.9537000395726157)(469,0.9525128611001187)(470,0.9529085872576177)(471,0.9533043134151168)(472,0.9537000395726157)(473,0.9533043134151168)(474,0.9544914918876137)(475,0.9537000395726157)(476,0.9529085872576177)(477,0.9533043134151168)(478,0.9548872180451128)(479,0.9548872180451128)(480,0.9529085872576177)(481,0.9544914918876137)(482,0.9548872180451128)(483,0.9548872180451128)(484,0.9544914918876137)(485,0.9529085872576177)(486,0.9537000395726157)(487,0.9552829442026118)(488,0.9556786703601108)(489,0.9544914918876137)(490,0.9552829442026118)(491,0.9564701226751088)(492,0.9548872180451128)(493,0.9560743965176098)(494,0.9544914918876137)(495,0.9544914918876137)(496,0.9525128611001187)(497,0.9548872180451128)(498,0.9552829442026118)(499,0.9548872180451128)
};\addlegendentry{ROTAF $(s = 3)$}
\addplot [semithick, color = black]
coordinates {
(0,0.043925603482390184)(1,0.3802928373565493)(2,0.6276216857934309)(3,0.6474079936683815)(4,0.6521567075583696)(5,0.6588840522358528)(6,0.6703601108033241)(7,0.6952908587257618)(8,0.7127028096557182)(9,0.7277404036406806)(10,0.7506925207756233)(11,0.7554412346656114)(12,0.7716660071230709)(13,0.7870993272655322)(14,0.7977839335180056)(15,0.7902651365255243)(16,0.813612979817966)(17,0.8187574198654531)(18,0.8270676691729323)(19,0.8306292045904234)(20,0.8306292045904234)(21,0.8377522754254056)(22,0.8444796201028888)(23,0.8476454293628809)(24,0.8484368816778789)(25,0.8567471309853582)(26,0.8618915710328453)(27,0.8614958448753463)(28,0.8666402849228334)(29,0.8678274633953305)(30,0.8721804511278195)(31,0.8721804511278195)(32,0.8721804511278195)(33,0.8694103680253265)(34,0.8769291650178077)(35,0.8741590819153146)(36,0.8769291650178077)(37,0.8793035219628017)(38,0.8828650573802929)(39,0.8836565096952909)(40,0.8820736050652949)(41,0.8828650573802929)(42,0.8824693312227938)(43,0.8836565096952909)(44,0.8856351404827859)(45,0.886030866640285)(46,0.887613771270281)(47,0.88800949742778)(48,0.891175306687772)(49,0.8935496636327661)(50,0.8923624851602691)(51,0.889196675900277)(52,0.8919667590027701)(53,0.8919667590027701)(54,0.8935496636327661)(55,0.8947368421052632)(56,0.8959240205777602)(57,0.8939453897902652)(58,0.8982983775227542)(59,0.8963197467352592)(60,0.8979026513652553)(61,0.8971111990502573)(62,0.8994855559952513)(63,0.9002770083102493)(64,0.9010684606252474)(65,0.8986941036802533)(66,0.9014641867827463)(67,0.9014641867827463)(68,0.9006727344677483)(69,0.9030470914127424)(70,0.9038385437277404)(71,0.9042342698852394)(72,0.9026513652552434)(73,0.9038385437277404)(74,0.9074000791452315)(75,0.9097744360902256)(76,0.9089829837752276)(77,0.9074000791452315)(78,0.9074000791452315)(79,0.9070043529877325)(80,0.9077958053027305)(81,0.9081915314602296)(82,0.9105658884052236)(83,0.9089829837752276)(84,0.9089829837752276)(85,0.9077958053027305)(86,0.9093787099327265)(87,0.90898298
37752276)(88,0.9101701622477246)(89,0.9077958053027305)(90,0.9101701622477246)(91,0.9085872576177285)(92,0.9070043529877325)(93,0.9085872576177285)(94,0.9105658884052236)(95,0.9113573407202216)(96,0.9121487930352197)(97,0.9113573407202216)(98,0.9133359715077166)(99,0.9137316976652157)(100,0.9149188761377127)(101,0.9165017807677087)(102,0.9153146022952117)(103,0.9180846853977048)(104,0.9153146022952117)(105,0.9145231499802137)(106,0.9168975069252078)(107,0.9172932330827067)(108,0.9157103284527107)(109,0.9165017807677087)(110,0.9172932330827067)(111,0.9188761377127028)(112,0.9212504946576969)(113,0.924812030075188)(114,0.924812030075188)(115,0.925999208547685)(116,0.9228333992876929)(117,0.9204590423426988)(118,0.9216462208151959)(119,0.9236248516026909)(120,0.925999208547685)(121,0.9216462208151959)(122,0.923229125445192)(123,0.924812030075188)(124,0.923229125445192)(125,0.925603482390186)(126,0.926790660862683)(127,0.925603482390186)(128,0.925207756232687)(129,0.926790660862683)(130,0.926394934705184)(131,0.92402057776019)(132,0.9271863870201821)(133,0.9291650178076771)(134,0.9279778393351801)(135,0.928373565492679)(136,0.9291650178076771)(137,0.9291650178076771)(138,0.9279778393351801)(139,0.9295607439651761)(140,0.9303521962801741)(141,0.9287692916501781)(142,0.9319351009101702)(143,0.9307479224376731)(144,0.9319351009101702)(145,0.9295607439651761)(146,0.9291650178076771)(147,0.9323308270676691)(148,0.9327265532251682)(149,0.9323308270676691)(150,0.9327265532251682)(151,0.9299564701226751)(152,0.9303521962801741)(153,0.9343094578551642)(154,0.9347051840126632)(155,0.9362880886426593)(156,0.9351009101701623)(157,0.9354966363276612)(158,0.9343094578551642)(159,0.9339137316976652)(160,0.9362880886426593)(161,0.9366838148001583)(162,0.9347051840126632)(163,0.9331222793826672)(164,0.9354966363276612)(165,0.9347051840126632)(166,0.9366838148001583)(167,0.9362880886426593)(168,0.9362880886426593)(169,0.9358923624851603)(170,0.9370795409576573)(171,0.9386624455876533)(17
2,0.9386624455876533)(173,0.9382667194301544)(174,0.9386624455876533)(175,0.9386624455876533)(176,0.9394538979026513)(177,0.9378709932726553)(178,0.9374752671151563)(179,0.9386624455876533)(180,0.9394538979026513)(181,0.9374752671151563)(182,0.9382667194301544)(183,0.9394538979026513)(184,0.9382667194301544)(185,0.9394538979026513)(186,0.9390581717451524)(187,0.9390581717451524)(188,0.9398496240601504)(189,0.9402453502176494)(190,0.9386624455876533)(191,0.9398496240601504)(192,0.9382667194301544)(193,0.9402453502176494)(194,0.9386624455876533)(195,0.9398496240601504)(196,0.9394538979026513)(197,0.9382667194301544)(198,0.9390581717451524)(199,0.9402453502176494)(200,0.9394538979026513)(201,0.9406410763751484)(202,0.9402453502176494)(203,0.9410368025326474)(204,0.9398496240601504)(205,0.9410368025326474)(206,0.9402453502176494)(207,0.9398496240601504)(208,0.9394538979026513)(209,0.9382667194301544)(210,0.9390581717451524)(211,0.9414325286901464)(212,0.9410368025326474)(213,0.9426197071626434)(214,0.9414325286901464)(215,0.9430154333201425)(216,0.9414325286901464)(217,0.9410368025326474)(218,0.9410368025326474)(219,0.9406410763751484)(220,0.9398496240601504)(221,0.9406410763751484)(222,0.9418282548476454)(223,0.9418282548476454)(224,0.9414325286901464)(225,0.9426197071626434)(226,0.9426197071626434)(227,0.9410368025326474)(228,0.9414325286901464)(229,0.9394538979026513)(230,0.9434111594776414)(231,0.9422239810051445)(232,0.9438068856351405)(233,0.9422239810051445)(234,0.9445983379501385)(235,0.9434111594776414)(236,0.9430154333201425)(237,0.9449940641076375)(238,0.9426197071626434)(239,0.9438068856351405)(240,0.9430154333201425)(241,0.9438068856351405)(242,0.9453897902651365)(243,0.9442026117926395)(244,0.9457855164226355)(245,0.9453897902651365)(246,0.9449940641076375)(247,0.9453897902651365)(248,0.9449940641076375)(249,0.9449940641076375)(250,0.9461812425801346)(251,0.9473684210526315)(252,0.9461812425801346)(253,0.9461812425801346)(254,0.9453897902651365)(255,0.9457
855164226355)(256,0.9465769687376335)(257,0.9469726948951326)(258,0.9465769687376335)(259,0.9465769687376335)(260,0.9457855164226355)(261,0.9469726948951326)(262,0.9453897902651365)(263,0.9473684210526315)(264,0.9465769687376335)(265,0.9465769687376335)(266,0.9465769687376335)(267,0.9481598733676296)(268,0.9485555995251286)(269,0.9481598733676296)(270,0.9477641472101306)(271,0.9477641472101306)(272,0.9493470518401267)(273,0.9473684210526315)(274,0.9485555995251286)(275,0.9485555995251286)(276,0.9477641472101306)(277,0.9485555995251286)(278,0.9485555995251286)(279,0.9481598733676296)(280,0.9493470518401267)(281,0.9501385041551247)(282,0.9481598733676296)(283,0.9481598733676296)(284,0.9497427779976256)(285,0.9481598733676296)(286,0.9473684210526315)(287,0.9469726948951326)(288,0.9493470518401267)(289,0.9497427779976256)(290,0.9497427779976256)(291,0.9493470518401267)(292,0.9493470518401267)(293,0.9473684210526315)(294,0.9473684210526315)(295,0.9489513256826276)(296,0.9501385041551247)(297,0.9497427779976256)(298,0.9509299564701227)(299,0.9505342303126236)(300,0.9505342303126236)(301,0.9509299564701227)(302,0.9505342303126236)(303,0.9513256826276217)(304,0.9501385041551247)(305,0.9493470518401267)(306,0.9493470518401267)(307,0.9489513256826276)(308,0.9517214087851207)(309,0.9513256826276217)(310,0.9505342303126236)(311,0.9529085872576177)(312,0.9537000395726157)(313,0.9540957657301148)(314,0.9537000395726157)(315,0.9537000395726157)(316,0.9540957657301148)(317,0.9537000395726157)(318,0.9529085872576177)(319,0.9533043134151168)(320,0.9529085872576177)(321,0.9525128611001187)(322,0.9537000395726157)(323,0.9544914918876137)(324,0.9533043134151168)(325,0.9540957657301148)(326,0.9540957657301148)(327,0.9533043134151168)(328,0.9533043134151168)(329,0.9525128611001187)(330,0.9525128611001187)(331,0.9521171349426197)(332,0.9525128611001187)(333,0.9529085872576177)(334,0.9517214087851207)(335,0.9521171349426197)(336,0.9521171349426197)(337,0.9529085872576177)(338,0.953304313415
1168)(339,0.9529085872576177)(340,0.9537000395726157)(341,0.9548872180451128)(342,0.9537000395726157)(343,0.9540957657301148)(344,0.9537000395726157)(345,0.9540957657301148)(346,0.9537000395726157)(347,0.9548872180451128)(348,0.9540957657301148)(349,0.9548872180451128)(350,0.9548872180451128)(351,0.9556786703601108)(352,0.9540957657301148)(353,0.9537000395726157)(354,0.9552829442026118)(355,0.9560743965176098)(356,0.9556786703601108)(357,0.9548872180451128)(358,0.9564701226751088)(359,0.9544914918876137)(360,0.9568658488326078)(361,0.9568658488326078)(362,0.9552829442026118)(363,0.9556786703601108)(364,0.9560743965176098)(365,0.9556786703601108)(366,0.9556786703601108)(367,0.9568658488326078)(368,0.9552829442026118)(369,0.9556786703601108)(370,0.9556786703601108)(371,0.9556786703601108)(372,0.9556786703601108)(373,0.9580530273051049)(374,0.9548872180451128)(375,0.9552829442026118)(376,0.9564701226751088)(377,0.9560743965176098)(378,0.9544914918876137)(379,0.9556786703601108)(380,0.9568658488326078)(381,0.9564701226751088)(382,0.9572615749901069)(383,0.9564701226751088)(384,0.9560743965176098)(385,0.9556786703601108)(386,0.9564701226751088)(387,0.9576573011476058)(388,0.9572615749901069)(389,0.9576573011476058)(390,0.9576573011476058)(391,0.9572615749901069)(392,0.9580530273051049)(393,0.9576573011476058)(394,0.9560743965176098)(395,0.9572615749901069)(396,0.9580530273051049)(397,0.9576573011476058)(398,0.9584487534626038)(399,0.9568658488326078)(400,0.9592402057776019)(401,0.9576573011476058)(402,0.9584487534626038)(403,0.9588444796201029)(404,0.9584487534626038)(405,0.9584487534626038)(406,0.9580530273051049)(407,0.9572615749901069)(408,0.9576573011476058)(409,0.9592402057776019)(410,0.9592402057776019)(411,0.9580530273051049)(412,0.9568658488326078)(413,0.9572615749901069)(414,0.9580530273051049)(415,0.9588444796201029)(416,0.9588444796201029)(417,0.9580530273051049)(418,0.9572615749901069)(419,0.9600316580925999)(420,0.9600316580925999)(421,0.9576573011476058)(42
2,0.9592402057776019)(423,0.9600316580925999)(424,0.9596359319351009)(425,0.9600316580925999)(426,0.9592402057776019)(427,0.9592402057776019)(428,0.9588444796201029)(429,0.9588444796201029)(430,0.9596359319351009)(431,0.9588444796201029)(432,0.9592402057776019)(433,0.9596359319351009)(434,0.9592402057776019)(435,0.9600316580925999)(436,0.9600316580925999)(437,0.9592402057776019)(438,0.9592402057776019)(439,0.9608231104075979)(440,0.9596359319351009)(441,0.9588444796201029)(442,0.9604273842500989)(443,0.9584487534626038)(444,0.9596359319351009)(445,0.9588444796201029)(446,0.9592402057776019)(447,0.9600316580925999)(448,0.9584487534626038)(449,0.9604273842500989)(450,0.9604273842500989)(451,0.9604273842500989)(452,0.9592402057776019)(453,0.9604273842500989)(454,0.9608231104075979)(455,0.9592402057776019)(456,0.9596359319351009)(457,0.9592402057776019)(458,0.9584487534626038)(459,0.9584487534626038)(460,0.9572615749901069)(461,0.9592402057776019)(462,0.961218836565097)(463,0.9592402057776019)(464,0.9600316580925999)(465,0.9592402057776019)(466,0.9600316580925999)(467,0.9600316580925999)(468,0.9604273842500989)(469,0.9604273842500989)(470,0.9624060150375939)(471,0.962801741195093)(472,0.962010288880095)(473,0.961218836565097)(474,0.9608231104075979)(475,0.9616145627225959)(476,0.961218836565097)(477,0.9604273842500989)(478,0.9604273842500989)(479,0.9604273842500989)(480,0.961218836565097)(481,0.9592402057776019)(482,0.961218836565097)(483,0.961218836565097)(484,0.9616145627225959)(485,0.963593193510091)(486,0.963197467352592)(487,0.963593193510091)(488,0.96398891966759)(489,0.9624060150375939)(490,0.963593193510091)(491,0.962801741195093)(492,0.963593193510091)(493,0.9624060150375939)(494,0.963593193510091)(495,0.9624060150375939)(496,0.96398891966759)(497,0.963593193510091)(498,0.963197467352592)(499,0.962801741195093)
};\addlegendentry{ROTAF $(s=4)$}
\end{axis}
\end{tikzpicture}}
\subfigure[Logistic regression]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
ymin=0.0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
]
\addplot [semithick, color=red]
coordinates {
(0,0.14166996438464582)(1,0.2853185595567867)(2,0.44242184408389396)(3,0.5373961218836565)(4,0.589236248516027)(5,0.6141669964384646)(6,0.629600316580926)(7,0.6387020182034032)(8,0.6549267906608627)(9,0.6584883260783537)(10,0.6648199445983379)(11,0.66798575385833)(12,0.6683814800158291)(13,0.6699643846458251)(14,0.6727344677483181)(15,0.6743173723783142)(16,0.6774831816383063)(17,0.6778789077958053)(18,0.6778789077958053)(19,0.6790660862683023)(20,0.6798575385833003)(21,0.6806489908982983)(22,0.6818361693707954)(23,0.6846062524732884)(24,0.6846062524732884)(25,0.6838148001582904)(26,0.6834190740007915)(27,0.6830233478432924)(28,0.6830233478432924)(29,0.6834190740007915)(30,0.6834190740007915)(31,0.6838148001582904)(32,0.6842105263157895)(33,0.6846062524732884)(34,0.6842105263157895)(35,0.6842105263157895)(36,0.6846062524732884)(37,0.6850019786307875)(38,0.6853977047882865)(39,0.6850019786307875)(40,0.6853977047882865)(41,0.6861891571032845)(42,0.6861891571032845)(43,0.6861891571032845)(44,0.6861891571032845)(45,0.6861891571032845)(46,0.6857934309457855)(47,0.6857934309457855)(48,0.6857934309457855)(49,0.6857934309457855)(50,0.6857934309457855)(51,0.6865848832607835)(52,0.6865848832607835)(53,0.6861891571032845)(54,0.6857934309457855)(55,0.6857934309457855)(56,0.6857934309457855)(57,0.6857934309457855)(58,0.6861891571032845)(59,0.6857934309457855)(60,0.6861891571032845)(61,0.6861891571032845)(62,0.6861891571032845)(63,0.6861891571032845)(64,0.6861891571032845)(65,0.6861891571032845)(66,0.6861891571032845)(67,0.6857934309457855)(68,0.6857934309457855)(69,0.6857934309457855)(70,0.6853977047882865)(71,0.6861891571032845)(72,0.6857934309457855)(73,0.6857934309457855)(74,0.6861891571032845)(75,0.6861891571032845)(76,0.6861891571032845)(77,0.6861891571032845)(78,0.6865848832607835)(79,0.6865848832607835)(80,0.6865848832607835)(81,0.6865848832607835)(82,0.6865848832607835)(83,0.6869806094182825)(84,0.6869806094182825)(85,0.6869806094182825)(86,0.6873763355757816)(87,0.68777
20617332805)(88,0.6877720617332805)(89,0.6881677878907796)(90,0.6877720617332805)(91,0.6877720617332805)(92,0.6877720617332805)(93,0.6877720617332805)(94,0.6877720617332805)(95,0.6877720617332805)(96,0.6877720617332805)(97,0.6881677878907796)(98,0.6881677878907796)(99,0.6881677878907796)(100,0.6881677878907796)(101,0.6881677878907796)(102,0.6881677878907796)(103,0.6881677878907796)(104,0.6885635140482786)(105,0.6885635140482786)(106,0.6881677878907796)(107,0.6881677878907796)(108,0.6881677878907796)(109,0.6881677878907796)(110,0.6881677878907796)(111,0.6881677878907796)(112,0.6881677878907796)(113,0.6885635140482786)(114,0.6885635140482786)(115,0.6881677878907796)(116,0.6881677878907796)(117,0.6877720617332805)(118,0.6877720617332805)(119,0.6873763355757816)(120,0.6873763355757816)(121,0.6873763355757816)(122,0.6873763355757816)(123,0.6873763355757816)(124,0.6873763355757816)(125,0.6873763355757816)(126,0.6873763355757816)(127,0.6873763355757816)(128,0.6873763355757816)(129,0.6873763355757816)(130,0.6873763355757816)(131,0.6873763355757816)(132,0.6873763355757816)(133,0.6873763355757816)(134,0.6877720617332805)(135,0.6877720617332805)(136,0.6877720617332805)(137,0.6881677878907796)(138,0.6881677878907796)(139,0.6881677878907796)(140,0.6881677878907796)(141,0.6881677878907796)(142,0.6881677878907796)(143,0.6881677878907796)(144,0.6881677878907796)(145,0.6881677878907796)(146,0.6885635140482786)(147,0.6885635140482786)(148,0.6885635140482786)(149,0.6885635140482786)(150,0.6885635140482786)(151,0.6885635140482786)(152,0.6885635140482786)(153,0.6885635140482786)(154,0.6885635140482786)(155,0.6885635140482786)(156,0.6885635140482786)(157,0.6885635140482786)(158,0.6885635140482786)(159,0.6885635140482786)(160,0.6889592402057776)(161,0.6889592402057776)(162,0.6889592402057776)(163,0.6889592402057776)(164,0.6889592402057776)(165,0.6889592402057776)(166,0.6889592402057776)(167,0.6889592402057776)(168,0.6893549663632766)(169,0.6893549663632766)(170,0.6893549663632766)(171,0.6
893549663632766)(172,0.6893549663632766)(173,0.6893549663632766)(174,0.6893549663632766)(175,0.6893549663632766)(176,0.6893549663632766)(177,0.6893549663632766)(178,0.6893549663632766)(179,0.6893549663632766)(180,0.6889592402057776)(181,0.6889592402057776)(182,0.6889592402057776)(183,0.6889592402057776)(184,0.6893549663632766)(185,0.6893549663632766)(186,0.6893549663632766)(187,0.6893549663632766)(188,0.6893549663632766)(189,0.6893549663632766)(190,0.6893549663632766)(191,0.6893549663632766)(192,0.6893549663632766)(193,0.6893549663632766)(194,0.6893549663632766)(195,0.6893549663632766)(196,0.6893549663632766)(197,0.6893549663632766)(198,0.6893549663632766)(199,0.6893549663632766)(200,0.6897506925207756)(201,0.6893549663632766)(202,0.6897506925207756)(203,0.6897506925207756)(204,0.6893549663632766)(205,0.6893549663632766)(206,0.6893549663632766)(207,0.6897506925207756)(208,0.6897506925207756)(209,0.6897506925207756)(210,0.6897506925207756)(211,0.6897506925207756)(212,0.6897506925207756)(213,0.6897506925207756)(214,0.6897506925207756)(215,0.6897506925207756)(216,0.6897506925207756)(217,0.6897506925207756)(218,0.6897506925207756)(219,0.6901464186782746)(220,0.6901464186782746)(221,0.6901464186782746)(222,0.6901464186782746)(223,0.6901464186782746)(224,0.6905421448357737)(225,0.6901464186782746)(226,0.6901464186782746)(227,0.6905421448357737)(228,0.6905421448357737)(229,0.6905421448357737)(230,0.6905421448357737)(231,0.6905421448357737)(232,0.6905421448357737)(233,0.6905421448357737)(234,0.6905421448357737)(235,0.6913335971507717)(236,0.6913335971507717)(237,0.6913335971507717)(238,0.6917293233082706)(239,0.6917293233082706)(240,0.6917293233082706)(241,0.6917293233082706)(242,0.6917293233082706)(243,0.6917293233082706)(244,0.6917293233082706)(245,0.6917293233082706)(246,0.6917293233082706)(247,0.6917293233082706)(248,0.6917293233082706)(249,0.6917293233082706)(250,0.6917293233082706)(251,0.6917293233082706)(252,0.6917293233082706)(253,0.6917293233082706)(254,0.692125049
4657697)(255,0.6917293233082706)(256,0.6921250494657697)(257,0.6921250494657697)(258,0.6925207756232687)(259,0.6925207756232687)(260,0.6925207756232687)(261,0.6925207756232687)(262,0.6925207756232687)(263,0.6925207756232687)(264,0.6925207756232687)(265,0.6925207756232687)(266,0.6925207756232687)(267,0.6925207756232687)(268,0.6925207756232687)(269,0.6925207756232687)(270,0.6925207756232687)(271,0.6925207756232687)(272,0.6925207756232687)(273,0.6925207756232687)(274,0.6929165017807677)(275,0.6929165017807677)(276,0.6929165017807677)(277,0.6929165017807677)(278,0.6929165017807677)(279,0.6929165017807677)(280,0.6925207756232687)(281,0.6925207756232687)(282,0.6925207756232687)(283,0.6925207756232687)(284,0.6925207756232687)(285,0.6925207756232687)(286,0.6925207756232687)(287,0.6929165017807677)(288,0.6929165017807677)(289,0.6929165017807677)(290,0.6929165017807677)(291,0.6929165017807677)(292,0.6929165017807677)(293,0.6933122279382667)(294,0.6933122279382667)(295,0.6933122279382667)(296,0.6933122279382667)(297,0.6933122279382667)(298,0.6933122279382667)(299,0.6933122279382667)(300,0.6933122279382667)(301,0.6933122279382667)(302,0.6933122279382667)(303,0.6933122279382667)(304,0.6933122279382667)(305,0.6933122279382667)(306,0.6933122279382667)(307,0.6933122279382667)(308,0.6933122279382667)(309,0.6933122279382667)(310,0.6933122279382667)(311,0.6933122279382667)(312,0.6933122279382667)(313,0.6937079540957657)(314,0.6937079540957657)(315,0.6937079540957657)(316,0.6941036802532647)(317,0.6941036802532647)(318,0.6944994064107638)(319,0.6944994064107638)(320,0.6944994064107638)(321,0.6944994064107638)(322,0.6944994064107638)(323,0.6944994064107638)(324,0.6944994064107638)(325,0.6944994064107638)(326,0.6944994064107638)(327,0.6944994064107638)(328,0.6944994064107638)(329,0.6944994064107638)(330,0.6944994064107638)(331,0.6944994064107638)(332,0.6944994064107638)(333,0.6948951325682627)(334,0.6948951325682627)(335,0.6948951325682627)(336,0.6948951325682627)(337,0.6948951325682627)
(338,0.6948951325682627)(339,0.6948951325682627)(340,0.6948951325682627)(341,0.6952908587257618)(342,0.6952908587257618)(343,0.6952908587257618)(344,0.6952908587257618)(345,0.6952908587257618)(346,0.6956865848832607)(347,0.6956865848832607)(348,0.6956865848832607)(349,0.6956865848832607)(350,0.6960823110407598)(351,0.6960823110407598)(352,0.6960823110407598)(353,0.6960823110407598)(354,0.6960823110407598)(355,0.6960823110407598)(356,0.6960823110407598)(357,0.6960823110407598)(358,0.6960823110407598)(359,0.6960823110407598)(360,0.6960823110407598)(361,0.6960823110407598)(362,0.6960823110407598)(363,0.6960823110407598)(364,0.6960823110407598)(365,0.6960823110407598)(366,0.6960823110407598)(367,0.6960823110407598)(368,0.6960823110407598)(369,0.6960823110407598)(370,0.6960823110407598)(371,0.6960823110407598)(372,0.6960823110407598)(373,0.6956865848832607)(374,0.6956865848832607)(375,0.6956865848832607)(376,0.6956865848832607)(377,0.6956865848832607)(378,0.6956865848832607)(379,0.6956865848832607)(380,0.6956865848832607)(381,0.6956865848832607)(382,0.6956865848832607)(383,0.6956865848832607)(384,0.6956865848832607)(385,0.6956865848832607)(386,0.6960823110407598)(387,0.6960823110407598)(388,0.6960823110407598)(389,0.6960823110407598)(390,0.6964780371982588)(391,0.6964780371982588)(392,0.6960823110407598)(393,0.6964780371982588)(394,0.6964780371982588)(395,0.6964780371982588)(396,0.6964780371982588)(397,0.6964780371982588)(398,0.6964780371982588)(399,0.6964780371982588)(400,0.6964780371982588)(401,0.6964780371982588)(402,0.6964780371982588)(403,0.6964780371982588)(404,0.6964780371982588)(405,0.6968737633557578)(406,0.6968737633557578)(407,0.6968737633557578)(408,0.6968737633557578)(409,0.6968737633557578)(410,0.6968737633557578)(411,0.6968737633557578)(412,0.6968737633557578)(413,0.6968737633557578)(414,0.6968737633557578)(415,0.6968737633557578)(416,0.6968737633557578)(417,0.6972694895132568)(418,0.6968737633557578)(419,0.6972694895132568)(420,0.6972694895132568)(421,0.6
972694895132568)(422,0.6972694895132568)(423,0.6972694895132568)(424,0.6972694895132568)(425,0.6972694895132568)(426,0.6972694895132568)(427,0.6972694895132568)(428,0.6972694895132568)(429,0.6972694895132568)(430,0.6972694895132568)(431,0.6972694895132568)(432,0.6972694895132568)(433,0.6972694895132568)(434,0.6972694895132568)(435,0.6972694895132568)(436,0.6972694895132568)(437,0.6972694895132568)(438,0.6972694895132568)(439,0.6972694895132568)(440,0.6972694895132568)(441,0.6972694895132568)(442,0.6972694895132568)(443,0.6972694895132568)(444,0.6972694895132568)(445,0.6972694895132568)(446,0.6976652156707558)(447,0.6976652156707558)(448,0.6976652156707558)(449,0.6976652156707558)(450,0.6976652156707558)(451,0.6976652156707558)(452,0.6976652156707558)(453,0.6976652156707558)(454,0.6976652156707558)(455,0.6976652156707558)(456,0.6976652156707558)(457,0.6976652156707558)(458,0.6980609418282548)(459,0.6980609418282548)(460,0.6980609418282548)(461,0.6980609418282548)(462,0.6980609418282548)(463,0.6980609418282548)(464,0.6980609418282548)(465,0.6980609418282548)(466,0.6984566679857539)(467,0.6984566679857539)(468,0.6988523941432528)(469,0.6988523941432528)(470,0.6988523941432528)(471,0.6988523941432528)(472,0.6988523941432528)(473,0.6988523941432528)(474,0.6992481203007519)(475,0.6992481203007519)(476,0.6992481203007519)(477,0.6992481203007519)(478,0.6992481203007519)(479,0.6992481203007519)(480,0.6992481203007519)(481,0.6992481203007519)(482,0.6992481203007519)(483,0.6992481203007519)(484,0.6992481203007519)(485,0.6992481203007519)(486,0.6992481203007519)(487,0.6996438464582508)(488,0.6996438464582508)(489,0.6996438464582508)(490,0.6996438464582508)(491,0.6996438464582508)(492,0.6996438464582508)(493,0.6996438464582508)(494,0.7000395726157499)(495,0.7000395726157499)(496,0.7000395726157499)(497,0.700435298773249)(498,0.7008310249307479)(499,0.7008310249307479)
};\addlegendentry{ROTAF $(s=1)$}
\addplot [semithick, color=blue]
coordinates {
(0,0.14166996438464582)(1,0.27502967946181245)(2,0.43213296398891965)(3,0.5350217649386625)(4,0.5829046299960428)(5,0.6197071626434507)(6,0.6450336367233874)(7,0.6596755045508508)(8,0.667194301543332)(9,0.6782746339533043)(10,0.6873763355757816)(11,0.6925207756232687)(12,0.700435298773249)(13,0.705184012663237)(14,0.7119113573407202)(15,0.7146814404432132)(16,0.7146814404432132)(17,0.7210130589631975)(18,0.7249703205381876)(19,0.7261574990106846)(20,0.7305104867431738)(21,0.7320933913731698)(22,0.7368421052631579)(23,0.7380292837356549)(24,0.740403640680649)(25,0.740403640680649)(26,0.7455480807281362)(27,0.7506925207756233)(28,0.7558369608231104)(29,0.7546497823506134)(30,0.7550455085081124)(31,0.7597942223981005)(32,0.7637514839730907)(33,0.7653343886030867)(34,0.7673130193905817)(35,0.7688959240205777)(36,0.7720617332805698)(37,0.776810447170558)(38,0.776414721013059)(39,0.7795805302730511)(40,0.77997625643055)(41,0.7803719825880491)(42,0.779184804115552)(43,0.7803719825880491)(44,0.7811634349030471)(45,0.7819548872180451)(46,0.7835377918480412)(47,0.7835377918480412)(48,0.7847249703205382)(49,0.7859121487930352)(50,0.7863078749505342)(51,0.7874950534230313)(52,0.7878907795805302)(53,0.7882865057380293)(54,0.7890779580530273)(55,0.7890779580530273)(56,0.7914523149980214)(57,0.7910565888405223)(58,0.7930352196280174)(59,0.7938266719430155)(60,0.7942223981005144)(61,0.7962010288880095)(62,0.7954095765730115)(63,0.7973882073605065)(64,0.7977839335180056)(65,0.7973882073605065)(66,0.7981796596755045)(67,0.8005540166204986)(68,0.8013454689354966)(69,0.8017411950929957)(70,0.8017411950929957)(71,0.8021369212504946)(72,0.8025326474079937)(73,0.8056984566679858)(74,0.8068856351404828)(75,0.8076770874554808)(76,0.8096557182429759)(77,0.8104471705579739)(78,0.8100514444004748)(79,0.8100514444004748)(80,0.8104471705579739)(81,0.8104471705579739)(82,0.8112386228729719)(83,0.812821527502968)(84,0.812821527502968)(85,0.8140087059754649)(86,0.8140087059754649)(87,0.815195884447
962)(88,0.815195884447962)(89,0.8167787890779581)(90,0.8183616937079541)(91,0.8199445983379502)(92,0.8203403244954491)(93,0.8239018599129403)(94,0.8254847645429363)(95,0.8266719430154333)(96,0.8266719430154333)(97,0.8258804907004353)(98,0.8262762168579343)(99,0.8258804907004353)(100,0.8258804907004353)(101,0.8266719430154333)(102,0.8278591214879304)(103,0.8286505738029284)(104,0.8290462999604273)(105,0.8298377522754254)(106,0.8294420261179264)(107,0.8302334784329244)(108,0.8314206569054214)(109,0.8326078353779185)(110,0.8318163830629205)(111,0.8322121092204194)(112,0.8326078353779185)(113,0.8345864661654135)(114,0.8345864661654135)(115,0.8345864661654135)(116,0.8349821923229126)(117,0.8349821923229126)(118,0.8357736446379106)(119,0.8365650969529086)(120,0.8361693707954095)(121,0.8365650969529086)(122,0.8369608231104076)(123,0.8369608231104076)(124,0.8373565492679066)(125,0.8369608231104076)(126,0.8369608231104076)(127,0.8357736446379106)(128,0.8365650969529086)(129,0.8365650969529086)(130,0.8373565492679066)(131,0.8373565492679066)(132,0.8377522754254056)(133,0.8377522754254056)(134,0.8381480015829046)(135,0.8381480015829046)(136,0.8389394538979027)(137,0.8389394538979027)(138,0.8397309062129007)(139,0.8393351800554016)(140,0.8397309062129007)(141,0.8401266323703996)(142,0.8401266323703996)(143,0.8405223585278987)(144,0.8409180846853977)(145,0.8405223585278987)(146,0.8417095370003957)(147,0.8421052631578947)(148,0.8432924416303917)(149,0.8436881677878908)(150,0.8421052631578947)(151,0.8428967154728928)(152,0.8432924416303917)(153,0.8452710724178868)(154,0.8460625247328848)(155,0.8464582508903838)(156,0.84804115552038)(157,0.8484368816778789)(158,0.8484368816778789)(159,0.84804115552038)(160,0.848832607835378)(161,0.850415512465374)(162,0.850019786307875)(163,0.852394143252869)(164,0.852394143252869)(165,0.852394143252869)(166,0.852789869410368)(167,0.852394143252869)(168,0.85199841709537)(169,0.852394143252869)(170,0.85199841709537)(171,0.852394143252869)(172,0.8531
85595567867)(173,0.854372774040364)(174,0.853581321725366)(175,0.853581321725366)(176,0.853581321725366)(177,0.854372774040364)(178,0.8563514048278591)(179,0.8563514048278591)(180,0.8563514048278591)(181,0.8567471309853582)(182,0.8567471309853582)(183,0.8571428571428571)(184,0.8571428571428571)(185,0.8571428571428571)(186,0.8571428571428571)(187,0.8591214879303521)(188,0.8587257617728532)(189,0.8603086664028492)(190,0.8595172140878512)(191,0.8603086664028492)(192,0.8599129402453503)(193,0.8595172140878512)(194,0.8599129402453503)(195,0.8595172140878512)(196,0.8599129402453503)(197,0.8603086664028492)(198,0.8595172140878512)(199,0.8607043925603483)(200,0.8611001187178472)(201,0.8611001187178472)(202,0.8626830233478433)(203,0.8630787495053424)(204,0.8634744756628413)(205,0.8634744756628413)(206,0.8638702018203404)(207,0.8638702018203404)(208,0.8638702018203404)(209,0.8638702018203404)(210,0.8642659279778393)(211,0.8642659279778393)(212,0.8638702018203404)(213,0.8642659279778393)(214,0.8646616541353384)(215,0.8642659279778393)(216,0.8650573802928374)(217,0.8646616541353384)(218,0.8646616541353384)(219,0.8650573802928374)(220,0.8654531064503364)(221,0.8650573802928374)(222,0.8650573802928374)(223,0.8658488326078354)(224,0.8658488326078354)(225,0.8658488326078354)(226,0.8658488326078354)(227,0.8650573802928374)(228,0.8654531064503364)(229,0.8658488326078354)(230,0.8658488326078354)(231,0.8658488326078354)(232,0.8658488326078354)(233,0.8654531064503364)(234,0.8654531064503364)(235,0.8654531064503364)(236,0.8654531064503364)(237,0.8658488326078354)(238,0.8658488326078354)(239,0.8654531064503364)(240,0.8658488326078354)(241,0.8662445587653343)(242,0.8658488326078354)(243,0.8666402849228334)(244,0.8666402849228334)(245,0.8658488326078354)(246,0.8670360110803325)(247,0.8674317372378314)(248,0.8674317372378314)(249,0.8670360110803325)(250,0.8678274633953305)(251,0.8678274633953305)(252,0.8682231895528294)(253,0.8682231895528294)(254,0.8686189157103285)(255,0.8686189157103285)(
256,0.8674317372378314)(257,0.8690146418678275)(258,0.8686189157103285)(259,0.8686189157103285)(260,0.8686189157103285)(261,0.8686189157103285)(262,0.8690146418678275)(263,0.8690146418678275)(264,0.8690146418678275)(265,0.8694103680253265)(266,0.8698060941828255)(267,0.8694103680253265)(268,0.8694103680253265)(269,0.8694103680253265)(270,0.8694103680253265)(271,0.8694103680253265)(272,0.8694103680253265)(273,0.8698060941828255)(274,0.8698060941828255)(275,0.8698060941828255)(276,0.8694103680253265)(277,0.8698060941828255)(278,0.8698060941828255)(279,0.8698060941828255)(280,0.8694103680253265)(281,0.8702018203403245)(282,0.8698060941828255)(283,0.8705975464978235)(284,0.8705975464978235)(285,0.8705975464978235)(286,0.8705975464978235)(287,0.8709932726553226)(288,0.8709932726553226)(289,0.8713889988128215)(290,0.8713889988128215)(291,0.8713889988128215)(292,0.8713889988128215)(293,0.8717847249703206)(294,0.8717847249703206)(295,0.8709932726553226)(296,0.8709932726553226)(297,0.8709932726553226)(298,0.8713889988128215)(299,0.8713889988128215)(300,0.8709932726553226)(301,0.8709932726553226)(302,0.8709932726553226)(303,0.8717847249703206)(304,0.8717847249703206)(305,0.8721804511278195)(306,0.8721804511278195)(307,0.8717847249703206)(308,0.8721804511278195)(309,0.8721804511278195)(310,0.8721804511278195)(311,0.8721804511278195)(312,0.8721804511278195)(313,0.8717847249703206)(314,0.8717847249703206)(315,0.8729719034428176)(316,0.8729719034428176)(317,0.8733676296003166)(318,0.8725761772853186)(319,0.8729719034428176)(320,0.8733676296003166)(321,0.8733676296003166)(322,0.8741590819153146)(323,0.8745548080728136)(324,0.8745548080728136)(325,0.8745548080728136)(326,0.8749505342303127)(327,0.8749505342303127)(328,0.8753462603878116)(329,0.8753462603878116)(330,0.8757419865453107)(331,0.8765334388603087)(332,0.8765334388603087)(333,0.8765334388603087)(334,0.8765334388603087)(335,0.8761377127028096)(336,0.8769291650178077)(337,0.8773248911753067)(338,0.8769291650178077)(339,0.87
77206173328057)(340,0.8777206173328057)(341,0.8773248911753067)(342,0.8769291650178077)(343,0.8773248911753067)(344,0.8769291650178077)(345,0.8773248911753067)(346,0.8777206173328057)(347,0.8773248911753067)(348,0.8773248911753067)(349,0.8769291650178077)(350,0.8773248911753067)(351,0.8773248911753067)(352,0.8773248911753067)(353,0.8769291650178077)(354,0.8769291650178077)(355,0.8769291650178077)(356,0.8773248911753067)(357,0.8781163434903048)(358,0.8777206173328057)(359,0.8773248911753067)(360,0.8773248911753067)(361,0.8769291650178077)(362,0.8773248911753067)(363,0.8773248911753067)(364,0.8773248911753067)(365,0.8773248911753067)(366,0.8773248911753067)(367,0.8777206173328057)(368,0.8781163434903048)(369,0.8781163434903048)(370,0.8781163434903048)(371,0.8781163434903048)(372,0.8777206173328057)(373,0.8773248911753067)(374,0.8781163434903048)(375,0.8781163434903048)(376,0.8785120696478037)(377,0.8785120696478037)(378,0.8785120696478037)(379,0.8789077958053028)(380,0.8789077958053028)(381,0.8789077958053028)(382,0.8789077958053028)(383,0.8789077958053028)(384,0.8789077958053028)(385,0.8789077958053028)(386,0.8785120696478037)(387,0.8785120696478037)(388,0.8785120696478037)(389,0.8789077958053028)(390,0.8789077958053028)(391,0.8785120696478037)(392,0.8785120696478037)(393,0.8789077958053028)(394,0.8789077958053028)(395,0.8789077958053028)(396,0.8777206173328057)(397,0.8781163434903048)(398,0.8781163434903048)(399,0.8781163434903048)(400,0.8781163434903048)(401,0.8781163434903048)(402,0.8785120696478037)(403,0.8781163434903048)(404,0.8785120696478037)(405,0.8785120696478037)(406,0.8793035219628017)(407,0.8785120696478037)(408,0.8789077958053028)(409,0.8789077958053028)(410,0.8789077958053028)(411,0.8793035219628017)(412,0.8793035219628017)(413,0.8793035219628017)(414,0.8789077958053028)(415,0.8796992481203008)(416,0.8796992481203008)(417,0.8796992481203008)(418,0.8793035219628017)(419,0.8793035219628017)(420,0.8789077958053028)(421,0.8793035219628017)(422,0.8793035219
628017)(423,0.8793035219628017)(424,0.8796992481203008)(425,0.8793035219628017)(426,0.8800949742777998)(427,0.8796992481203008)(428,0.8789077958053028)(429,0.8800949742777998)(430,0.8804907004352988)(431,0.8804907004352988)(432,0.8804907004352988)(433,0.8804907004352988)(434,0.8804907004352988)(435,0.8804907004352988)(436,0.8800949742777998)(437,0.8800949742777998)(438,0.8804907004352988)(439,0.8804907004352988)(440,0.8808864265927978)(441,0.8808864265927978)(442,0.8816778789077958)(443,0.8820736050652949)(444,0.8812821527502968)(445,0.8812821527502968)(446,0.8812821527502968)(447,0.8812821527502968)(448,0.8812821527502968)(449,0.8812821527502968)(450,0.8812821527502968)(451,0.8812821527502968)(452,0.8812821527502968)(453,0.8820736050652949)(454,0.8816778789077958)(455,0.8816778789077958)(456,0.8824693312227938)(457,0.8824693312227938)(458,0.8820736050652949)(459,0.8820736050652949)(460,0.8820736050652949)(461,0.8820736050652949)(462,0.8820736050652949)(463,0.8820736050652949)(464,0.8820736050652949)(465,0.8820736050652949)(466,0.8824693312227938)(467,0.8824693312227938)(468,0.8824693312227938)(469,0.8824693312227938)(470,0.8824693312227938)(471,0.8824693312227938)(472,0.8828650573802929)(473,0.8828650573802929)(474,0.8824693312227938)(475,0.8828650573802929)(476,0.8828650573802929)(477,0.8828650573802929)(478,0.8828650573802929)(479,0.8828650573802929)(480,0.8824693312227938)(481,0.8824693312227938)(482,0.8820736050652949)(483,0.8820736050652949)(484,0.8820736050652949)(485,0.8820736050652949)(486,0.8824693312227938)(487,0.8824693312227938)(488,0.8828650573802929)(489,0.8828650573802929)(490,0.8828650573802929)(491,0.8828650573802929)(492,0.8828650573802929)(493,0.8828650573802929)(494,0.8828650573802929)(495,0.8828650573802929)(496,0.8824693312227938)(497,0.8828650573802929)(498,0.8828650573802929)(499,0.8828650573802929)
};\addlegendentry{ROTAF $(s=2)$}
\addplot [semithick, color=green]
coordinates {
(0,0.14166996438464582)(1,0.28690146418678275)(2,0.43925603482390185)(3,0.5441234665611397)(4,0.5951721408785121)(5,0.6264345073209339)(6,0.6462208151958845)(7,0.6573011476058568)(8,0.6691729323308271)(9,0.6834190740007915)(10,0.6913335971507717)(11,0.6984566679857539)(12,0.704392560348239)(13,0.7091412742382271)(14,0.7174515235457064)(15,0.7225959635931936)(16,0.7305104867431738)(17,0.7356549267906609)(18,0.7431737237831421)(19,0.7463395330431342)(20,0.7499010684606252)(21,0.7530668777206173)(22,0.7542540561931144)(23,0.7582113177681045)(24,0.7613771270280966)(25,0.7665215670755837)(26,0.7685001978630788)(27,0.7716660071230709)(28,0.7716660071230709)(29,0.7736446379105659)(30,0.7736446379105659)(31,0.776810447170558)(32,0.777601899485556)(33,0.7807677087455481)(34,0.7851206964780372)(35,0.7867036011080333)(36,0.7890779580530273)(37,0.7882865057380293)(38,0.7902651365255243)(39,0.7930352196280174)(40,0.7934309457855164)(41,0.7985753858330036)(42,0.7977839335180056)(43,0.8005540166204986)(44,0.8021369212504946)(45,0.8029283735654926)(46,0.8056984566679858)(47,0.8041155520379897)(48,0.8049070043529878)(49,0.8096557182429759)(50,0.8084685397704788)(51,0.8116343490304709)(52,0.8120300751879699)(53,0.8124258013454689)(54,0.814404432132964)(55,0.814800158290463)(56,0.815591610605461)(57,0.8183616937079541)(58,0.8195488721804511)(59,0.8211317768104471)(60,0.8227146814404432)(61,0.8231104075979422)(62,0.8242975860704392)(63,0.8250890383854372)(64,0.8282548476454293)(65,0.8294420261179264)(66,0.8298377522754254)(67,0.8314206569054214)(68,0.8314206569054214)(69,0.8322121092204194)(70,0.8333992876929165)(71,0.8337950138504155)(72,0.8349821923229126)(73,0.8345864661654135)(74,0.8349821923229126)(75,0.8357736446379106)(76,0.8377522754254056)(77,0.8389394538979027)(78,0.8401266323703996)(79,0.8413138108428967)(80,0.8413138108428967)(81,0.8428967154728928)(82,0.8428967154728928)(83,0.8440838939453897)(84,0.8440838939453897)(85,0.8448753462603878)(86,0.8460625247328848)(87,0.846458
2508903838)(88,0.8468539770478829)(89,0.8476454293628809)(90,0.8472497032053818)(91,0.84804115552038)(92,0.8484368816778789)(93,0.850019786307875)(94,0.85199841709537)(95,0.852789869410368)(96,0.8539770478828651)(97,0.854372774040364)(98,0.8547685001978631)(99,0.8555599525128611)(100,0.8555599525128611)(101,0.8559556786703602)(102,0.8567471309853582)(103,0.8563514048278591)(104,0.8575385833003561)(105,0.8579343094578552)(106,0.8587257617728532)(107,0.8591214879303521)(108,0.8607043925603483)(109,0.8599129402453503)(110,0.8611001187178472)(111,0.8603086664028492)(112,0.8599129402453503)(113,0.8599129402453503)(114,0.8603086664028492)(115,0.8607043925603483)(116,0.8607043925603483)(117,0.8611001187178472)(118,0.8611001187178472)(119,0.8622872971903442)(120,0.8626830233478433)(121,0.8630787495053424)(122,0.8614958448753463)(123,0.8622872971903442)(124,0.8630787495053424)(125,0.8642659279778393)(126,0.8650573802928374)(127,0.8658488326078354)(128,0.8666402849228334)(129,0.8678274633953305)(130,0.8678274633953305)(131,0.8682231895528294)(132,0.8666402849228334)(133,0.8670360110803325)(134,0.8666402849228334)(135,0.8674317372378314)(136,0.8678274633953305)(137,0.8678274633953305)(138,0.8678274633953305)(139,0.8690146418678275)(140,0.8694103680253265)(141,0.8690146418678275)(142,0.8694103680253265)(143,0.8709932726553226)(144,0.8717847249703206)(145,0.8709932726553226)(146,0.8705975464978235)(147,0.8717847249703206)(148,0.8721804511278195)(149,0.8729719034428176)(150,0.8737633557578156)(151,0.8737633557578156)(152,0.8733676296003166)(153,0.8737633557578156)(154,0.8725761772853186)(155,0.8725761772853186)(156,0.8729719034428176)(157,0.8737633557578156)(158,0.8737633557578156)(159,0.8725761772853186)(160,0.8729719034428176)(161,0.8729719034428176)(162,0.8757419865453107)(163,0.8761377127028096)(164,0.8761377127028096)(165,0.8761377127028096)(166,0.8765334388603087)(167,0.8773248911753067)(168,0.8769291650178077)(169,0.8773248911753067)(170,0.8777206173328057)(171,0.877720617
3328057)(172,0.8781163434903048)(173,0.8777206173328057)(174,0.8785120696478037)(175,0.8789077958053028)(176,0.8785120696478037)(177,0.8789077958053028)(178,0.8789077958053028)(179,0.8789077958053028)(180,0.8793035219628017)(181,0.8796992481203008)(182,0.8804907004352988)(183,0.8804907004352988)(184,0.8804907004352988)(185,0.8804907004352988)(186,0.8808864265927978)(187,0.8804907004352988)(188,0.8804907004352988)(189,0.8804907004352988)(190,0.8800949742777998)(191,0.8800949742777998)(192,0.8800949742777998)(193,0.8800949742777998)(194,0.8808864265927978)(195,0.8804907004352988)(196,0.8804907004352988)(197,0.8820736050652949)(198,0.8820736050652949)(199,0.8824693312227938)(200,0.8824693312227938)(201,0.8824693312227938)(202,0.8824693312227938)(203,0.8828650573802929)(204,0.8828650573802929)(205,0.8828650573802929)(206,0.8828650573802929)(207,0.8828650573802929)(208,0.8828650573802929)(209,0.8828650573802929)(210,0.8832607835377918)(211,0.8836565096952909)(212,0.8836565096952909)(213,0.8844479620102889)(214,0.8844479620102889)(215,0.8848436881677879)(216,0.8844479620102889)(217,0.8848436881677879)(218,0.8840522358527899)(219,0.8844479620102889)(220,0.8840522358527899)(221,0.8836565096952909)(222,0.8840522358527899)(223,0.8840522358527899)(224,0.8844479620102889)(225,0.8836565096952909)(226,0.8836565096952909)(227,0.8832607835377918)(228,0.8840522358527899)(229,0.8836565096952909)(230,0.8840522358527899)(231,0.8844479620102889)(232,0.8848436881677879)(233,0.8848436881677879)(234,0.8844479620102889)(235,0.8848436881677879)(236,0.8848436881677879)(237,0.8848436881677879)(238,0.8848436881677879)(239,0.8852394143252869)(240,0.8852394143252869)(241,0.8852394143252869)(242,0.8856351404827859)(243,0.8856351404827859)(244,0.8852394143252869)(245,0.8852394143252869)(246,0.8848436881677879)(247,0.8852394143252869)(248,0.8852394143252869)(249,0.8856351404827859)(250,0.8856351404827859)(251,0.8852394143252869)(252,0.8856351404827859)(253,0.8856351404827859)(254,0.8856351404827859)
(255,0.886030866640285)(256,0.886030866640285)(257,0.886030866640285)(258,0.886822318955283)(259,0.8872180451127819)(260,0.8872180451127819)(261,0.887613771270281)(262,0.887613771270281)(263,0.8872180451127819)(264,0.886822318955283)(265,0.886822318955283)(266,0.888800949742778)(267,0.889592402057776)(268,0.889196675900277)(269,0.888800949742778)(270,0.889592402057776)(271,0.889196675900277)(272,0.888800949742778)(273,0.889592402057776)(274,0.889196675900277)(275,0.889196675900277)(276,0.889196675900277)(277,0.889592402057776)(278,0.889592402057776)(279,0.889592402057776)(280,0.888800949742778)(281,0.889592402057776)(282,0.889592402057776)(283,0.889592402057776)(284,0.8899881282152751)(285,0.8899881282152751)(286,0.890383854372774)(287,0.890383854372774)(288,0.8899881282152751)(289,0.8899881282152751)(290,0.8899881282152751)(291,0.8899881282152751)(292,0.8899881282152751)(293,0.8899881282152751)(294,0.8899881282152751)(295,0.890383854372774)(296,0.890383854372774)(297,0.8907795805302731)(298,0.891175306687772)(299,0.891175306687772)(300,0.8919667590027701)(301,0.8915710328452711)(302,0.8915710328452711)(303,0.8923624851602691)(304,0.8923624851602691)(305,0.8919667590027701)(306,0.8923624851602691)(307,0.8919667590027701)(308,0.8923624851602691)(309,0.8923624851602691)(310,0.8923624851602691)(311,0.8923624851602691)(312,0.8931539374752672)(313,0.8927582113177681)(314,0.8931539374752672)(315,0.8939453897902652)(316,0.8939453897902652)(317,0.8947368421052632)(318,0.8947368421052632)(319,0.8947368421052632)(320,0.8935496636327661)(321,0.8935496636327661)(322,0.8947368421052632)(323,0.8943411159477641)(324,0.8943411159477641)(325,0.8955282944202612)(326,0.8959240205777602)(327,0.8963197467352592)(328,0.8959240205777602)(329,0.8963197467352592)(330,0.8963197467352592)(331,0.8959240205777602)(332,0.8967154728927582)(333,0.8967154728927582)(334,0.8971111990502573)(335,0.8975069252077562)(336,0.8975069252077562)(337,0.8979026513652553)(338,0.8979026513652553)(339,0.898694103
6802533)(340,0.8982983775227542)(341,0.8982983775227542)(342,0.8982983775227542)(343,0.8986941036802533)(344,0.8986941036802533)(345,0.8979026513652553)(346,0.8982983775227542)(347,0.8986941036802533)(348,0.8990898298377523)(349,0.8990898298377523)(350,0.8994855559952513)(351,0.8986941036802533)(352,0.8986941036802533)(353,0.8990898298377523)(354,0.8994855559952513)(355,0.8990898298377523)(356,0.8990898298377523)(357,0.8994855559952513)(358,0.8994855559952513)(359,0.9002770083102493)(360,0.8998812821527503)(361,0.8998812821527503)(362,0.9002770083102493)(363,0.9002770083102493)(364,0.8998812821527503)(365,0.9006727344677483)(366,0.8998812821527503)(367,0.9002770083102493)(368,0.9010684606252474)(369,0.9006727344677483)(370,0.9002770083102493)(371,0.9002770083102493)(372,0.9010684606252474)(373,0.9006727344677483)(374,0.9018599129402454)(375,0.9018599129402454)(376,0.9018599129402454)(377,0.9018599129402454)(378,0.9022556390977443)(379,0.9022556390977443)(380,0.9022556390977443)(381,0.9022556390977443)(382,0.9022556390977443)(383,0.9022556390977443)(384,0.9022556390977443)(385,0.9026513652552434)(386,0.9026513652552434)(387,0.9026513652552434)(388,0.9030470914127424)(389,0.9030470914127424)(390,0.9030470914127424)(391,0.9030470914127424)(392,0.9030470914127424)(393,0.9038385437277404)(394,0.9042342698852394)(395,0.9038385437277404)(396,0.9042342698852394)(397,0.9042342698852394)(398,0.9042342698852394)(399,0.9038385437277404)(400,0.9038385437277404)(401,0.9038385437277404)(402,0.9042342698852394)(403,0.9046299960427384)(404,0.9046299960427384)(405,0.9054214483577364)(406,0.9050257222002375)(407,0.9058171745152355)(408,0.9062129006727345)(409,0.9062129006727345)(410,0.9058171745152355)(411,0.9058171745152355)(412,0.9058171745152355)(413,0.9058171745152355)(414,0.9058171745152355)(415,0.9054214483577364)(416,0.9054214483577364)(417,0.9058171745152355)(418,0.9066086268302335)(419,0.9058171745152355)(420,0.9058171745152355)(421,0.9058171745152355)(422,0.9062129006727345)
(423,0.9066086268302335)(424,0.9062129006727345)(425,0.9062129006727345)(426,0.9070043529877325)(427,0.9070043529877325)(428,0.9066086268302335)(429,0.9066086268302335)(430,0.9074000791452315)(431,0.9074000791452315)(432,0.9074000791452315)(433,0.9074000791452315)(434,0.9077958053027305)(435,0.9077958053027305)(436,0.9074000791452315)(437,0.9074000791452315)(438,0.9077958053027305)(439,0.9077958053027305)(440,0.9070043529877325)(441,0.9074000791452315)(442,0.9074000791452315)(443,0.9081915314602296)(444,0.9081915314602296)(445,0.9085872576177285)(446,0.9081915314602296)(447,0.9089829837752276)(448,0.9089829837752276)(449,0.9085872576177285)(450,0.9085872576177285)(451,0.9085872576177285)(452,0.9085872576177285)(453,0.9085872576177285)(454,0.9089829837752276)(455,0.9089829837752276)(456,0.9089829837752276)(457,0.9089829837752276)(458,0.9089829837752276)(459,0.9089829837752276)(460,0.9093787099327265)(461,0.9093787099327265)(462,0.9093787099327265)(463,0.9093787099327265)(464,0.9093787099327265)(465,0.9089829837752276)(466,0.9097744360902256)(467,0.9097744360902256)(468,0.9097744360902256)(469,0.9097744360902256)(470,0.9089829837752276)(471,0.9089829837752276)(472,0.9089829837752276)(473,0.9089829837752276)(474,0.9089829837752276)(475,0.9089829837752276)(476,0.9085872576177285)(477,0.9093787099327265)(478,0.9093787099327265)(479,0.9093787099327265)(480,0.9097744360902256)(481,0.9097744360902256)(482,0.9097744360902256)(483,0.9097744360902256)(484,0.9097744360902256)(485,0.9101701622477246)(486,0.9097744360902256)(487,0.9097744360902256)(488,0.9101701622477246)(489,0.9101701622477246)(490,0.9101701622477246)(491,0.9101701622477246)(492,0.9101701622477246)(493,0.9101701622477246)(494,0.9101701622477246)(495,0.9105658884052236)(496,0.9105658884052236)(497,0.9101701622477246)(498,0.9101701622477246)(499,0.9101701622477246)
};\addlegendentry{ROTAF $(s = 3)$}
\addplot [semithick, color = black]
coordinates {
(0,0.14166996438464582)(1,0.28254847645429365)(2,0.43490304709141275)(3,0.5263157894736842)(4,0.5864661654135338)(5,0.6220815195884448)(6,0.6406806489908983)(7,0.664424218440839)(8,0.6759002770083102)(9,0.6850019786307875)(10,0.6901464186782746)(11,0.6984566679857539)(12,0.705975464978235)(13,0.7130985358132172)(14,0.7194301543332015)(15,0.7285318559556787)(16,0.7340720221606648)(17,0.7380292837356549)(18,0.741986545310645)(19,0.7487138899881283)(20,0.7530668777206173)(21,0.7574198654531065)(22,0.7613771270280966)(23,0.7641472101305896)(24,0.7665215670755837)(25,0.7677087455480808)(26,0.7728531855955678)(27,0.7740403640680649)(28,0.777206173328057)(29,0.7795805302730511)(30,0.7815591610605461)(31,0.7815591610605461)(32,0.7835377918480412)(33,0.7863078749505342)(34,0.7894736842105263)(35,0.7914523149980214)(36,0.7942223981005144)(37,0.7958053027305105)(38,0.7985753858330036)(39,0.8009497427779976)(40,0.8017411950929957)(41,0.8037198258804907)(42,0.8053027305104867)(43,0.8080728136129798)(44,0.8096557182429759)(45,0.8116343490304709)(46,0.814404432132964)(47,0.814800158290463)(48,0.817965967550455)(49,0.8195488721804511)(50,0.8215275029679462)(51,0.8223189552829442)(52,0.8227146814404432)(53,0.8246933122279383)(54,0.8254847645429363)(55,0.8266719430154333)(56,0.8266719430154333)(57,0.8274633953304313)(58,0.8294420261179264)(59,0.8302334784329244)(60,0.8302334784329244)(61,0.8310249307479224)(62,0.8322121092204194)(63,0.8333992876929165)(64,0.8349821923229126)(65,0.8349821923229126)(66,0.8357736446379106)(67,0.8357736446379106)(68,0.8385437277404036)(69,0.8401266323703996)(70,0.8409180846853977)(71,0.8409180846853977)(72,0.8409180846853977)(73,0.8421052631578947)(74,0.8432924416303917)(75,0.8452710724178868)(76,0.8476454293628809)(77,0.8476454293628809)(78,0.8484368816778789)(79,0.848832607835378)(80,0.850019786307875)(81,0.849624060150376)(82,0.851602690937871)(83,0.853185595567867)(84,0.8539770478828651)(85,0.8547685001978631)(86,0.8555599525128611)(87,0.856351404827
8591)(88,0.8567471309853582)(89,0.8567471309853582)(90,0.8567471309853582)(91,0.8571428571428571)(92,0.8579343094578552)(93,0.8591214879303521)(94,0.8595172140878512)(95,0.8595172140878512)(96,0.8595172140878512)(97,0.8607043925603483)(98,0.8607043925603483)(99,0.8614958448753463)(100,0.8618915710328453)(101,0.8626830233478433)(102,0.8618915710328453)(103,0.8634744756628413)(104,0.8630787495053424)(105,0.8622872971903442)(106,0.8638702018203404)(107,0.8638702018203404)(108,0.8638702018203404)(109,0.8650573802928374)(110,0.8658488326078354)(111,0.8662445587653343)(112,0.8662445587653343)(113,0.8662445587653343)(114,0.8662445587653343)(115,0.8670360110803325)(116,0.8674317372378314)(117,0.8674317372378314)(118,0.8678274633953305)(119,0.8686189157103285)(120,0.8690146418678275)(121,0.8702018203403245)(122,0.8702018203403245)(123,0.8702018203403245)(124,0.8698060941828255)(125,0.8702018203403245)(126,0.8705975464978235)(127,0.8698060941828255)(128,0.8705975464978235)(129,0.8717847249703206)(130,0.8705975464978235)(131,0.8713889988128215)(132,0.8725761772853186)(133,0.8733676296003166)(134,0.8737633557578156)(135,0.8729719034428176)(136,0.8737633557578156)(137,0.8741590819153146)(138,0.8745548080728136)(139,0.8749505342303127)(140,0.8745548080728136)(141,0.8749505342303127)(142,0.8757419865453107)(143,0.8757419865453107)(144,0.8761377127028096)(145,0.8761377127028096)(146,0.8761377127028096)(147,0.8761377127028096)(148,0.8765334388603087)(149,0.8773248911753067)(150,0.8777206173328057)(151,0.8785120696478037)(152,0.8785120696478037)(153,0.8785120696478037)(154,0.8785120696478037)(155,0.8773248911753067)(156,0.8785120696478037)(157,0.8781163434903048)(158,0.8781163434903048)(159,0.8785120696478037)(160,0.8785120696478037)(161,0.8793035219628017)(162,0.8800949742777998)(163,0.8800949742777998)(164,0.8804907004352988)(165,0.8812821527502968)(166,0.8812821527502968)(167,0.8816778789077958)(168,0.8816778789077958)(169,0.8824693312227938)(170,0.8820736050652949)(171,0.88326078
35377918)(172,0.8840522358527899)(173,0.8836565096952909)(174,0.8840522358527899)(175,0.8844479620102889)(176,0.8844479620102889)(177,0.8852394143252869)(178,0.8852394143252869)(179,0.8848436881677879)(180,0.8856351404827859)(181,0.8852394143252869)(182,0.8856351404827859)(183,0.8852394143252869)(184,0.8856351404827859)(185,0.8856351404827859)(186,0.886030866640285)(187,0.886030866640285)(188,0.8864265927977839)(189,0.886822318955283)(190,0.886822318955283)(191,0.8872180451127819)(192,0.8864265927977839)(193,0.886822318955283)(194,0.886822318955283)(195,0.8872180451127819)(196,0.8872180451127819)(197,0.8872180451127819)(198,0.886822318955283)(199,0.88800949742778)(200,0.887613771270281)(201,0.88800949742778)(202,0.8872180451127819)(203,0.886822318955283)(204,0.887613771270281)(205,0.88800949742778)(206,0.88800949742778)(207,0.88800949742778)(208,0.88800949742778)(209,0.88800949742778)(210,0.888405223585279)(211,0.888800949742778)(212,0.889196675900277)(213,0.888800949742778)(214,0.889196675900277)(215,0.888405223585279)(216,0.888800949742778)(217,0.888800949742778)(218,0.889196675900277)(219,0.889196675900277)(220,0.889592402057776)(221,0.889592402057776)(222,0.8899881282152751)(223,0.889592402057776)(224,0.890383854372774)(225,0.890383854372774)(226,0.890383854372774)(227,0.890383854372774)(228,0.8915710328452711)(229,0.8907795805302731)(230,0.8915710328452711)(231,0.8915710328452711)(232,0.8923624851602691)(233,0.8919667590027701)(234,0.8927582113177681)(235,0.8919667590027701)(236,0.8923624851602691)(237,0.8923624851602691)(238,0.8931539374752672)(239,0.8935496636327661)(240,0.8935496636327661)(241,0.8935496636327661)(242,0.8931539374752672)(243,0.8939453897902652)(244,0.8935496636327661)(245,0.8939453897902652)(246,0.8935496636327661)(247,0.8939453897902652)(248,0.8935496636327661)(249,0.8943411159477641)(250,0.8947368421052632)(251,0.8943411159477641)(252,0.8943411159477641)(253,0.8955282944202612)(254,0.8955282944202612)(255,0.8955282944202612)(256,0.896319746
7352592)(257,0.8967154728927582)(258,0.8971111990502573)(259,0.8967154728927582)(260,0.8967154728927582)(261,0.8967154728927582)(262,0.8967154728927582)(263,0.8967154728927582)(264,0.8967154728927582)(265,0.8971111990502573)(266,0.8967154728927582)(267,0.8967154728927582)(268,0.8975069252077562)(269,0.8967154728927582)(270,0.8963197467352592)(271,0.8967154728927582)(272,0.8967154728927582)(273,0.8986941036802533)(274,0.8986941036802533)(275,0.8990898298377523)(276,0.8982983775227542)(277,0.8990898298377523)(278,0.8982983775227542)(279,0.9002770083102493)(280,0.9002770083102493)(281,0.9006727344677483)(282,0.9002770083102493)(283,0.9002770083102493)(284,0.9006727344677483)(285,0.9010684606252474)(286,0.9010684606252474)(287,0.9010684606252474)(288,0.9010684606252474)(289,0.9010684606252474)(290,0.9014641867827463)(291,0.9014641867827463)(292,0.9014641867827463)(293,0.9014641867827463)(294,0.9018599129402454)(295,0.9022556390977443)(296,0.9026513652552434)(297,0.9026513652552434)(298,0.9022556390977443)(299,0.9022556390977443)(300,0.9026513652552434)(301,0.9030470914127424)(302,0.9026513652552434)(303,0.9038385437277404)(304,0.9034428175702414)(305,0.9034428175702414)(306,0.9034428175702414)(307,0.9038385437277404)(308,0.9038385437277404)(309,0.9034428175702414)(310,0.9034428175702414)(311,0.9038385437277404)(312,0.9034428175702414)(313,0.9038385437277404)(314,0.9042342698852394)(315,0.9046299960427384)(316,0.9050257222002375)(317,0.9050257222002375)(318,0.9054214483577364)(319,0.9050257222002375)(320,0.9054214483577364)(321,0.9054214483577364)(322,0.9050257222002375)(323,0.9058171745152355)(324,0.9062129006727345)(325,0.9062129006727345)(326,0.9058171745152355)(327,0.9054214483577364)(328,0.9066086268302335)(329,0.9062129006727345)(330,0.9062129006727345)(331,0.9066086268302335)(332,0.9062129006727345)(333,0.9062129006727345)(334,0.9062129006727345)(335,0.9066086268302335)(336,0.9070043529877325)(337,0.9070043529877325)(338,0.9074000791452315)(339,0.9074000791452315)
(340,0.9074000791452315)(341,0.9074000791452315)(342,0.9077958053027305)(343,0.9074000791452315)(344,0.9074000791452315)(345,0.9077958053027305)(346,0.9077958053027305)(347,0.9081915314602296)(348,0.9081915314602296)(349,0.9085872576177285)(350,0.9081915314602296)(351,0.9081915314602296)(352,0.9089829837752276)(353,0.9085872576177285)(354,0.9081915314602296)(355,0.9081915314602296)(356,0.9085872576177285)(357,0.9089829837752276)(358,0.9093787099327265)(359,0.9085872576177285)(360,0.9093787099327265)(361,0.9093787099327265)(362,0.9097744360902256)(363,0.9097744360902256)(364,0.9085872576177285)(365,0.9093787099327265)(366,0.9093787099327265)(367,0.9089829837752276)(368,0.9085872576177285)(369,0.9085872576177285)(370,0.9093787099327265)(371,0.9089829837752276)(372,0.9093787099327265)(373,0.9089829837752276)(374,0.9085872576177285)(375,0.9089829837752276)(376,0.9089829837752276)(377,0.9089829837752276)(378,0.9089829837752276)(379,0.9093787099327265)(380,0.9093787099327265)(381,0.9093787099327265)(382,0.9093787099327265)(383,0.9093787099327265)(384,0.9093787099327265)(385,0.9093787099327265)(386,0.9089829837752276)(387,0.9089829837752276)(388,0.9093787099327265)(389,0.9097744360902256)(390,0.9097744360902256)(391,0.9097744360902256)(392,0.9097744360902256)(393,0.9093787099327265)(394,0.9093787099327265)(395,0.9093787099327265)(396,0.9093787099327265)(397,0.9093787099327265)(398,0.9093787099327265)(399,0.9097744360902256)(400,0.9097744360902256)(401,0.9097744360902256)(402,0.9101701622477246)(403,0.9101701622477246)(404,0.9093787099327265)(405,0.9097744360902256)(406,0.9093787099327265)(407,0.9097744360902256)(408,0.9101701622477246)(409,0.9101701622477246)(410,0.9101701622477246)(411,0.9101701622477246)(412,0.9105658884052236)(413,0.9105658884052236)(414,0.9109616145627226)(415,0.9105658884052236)(416,0.9105658884052236)(417,0.9101701622477246)(418,0.9101701622477246)(419,0.9101701622477246)(420,0.9105658884052236)(421,0.9101701622477246)(422,0.9105658884052236)(423,0.9
105658884052236)(424,0.9113573407202216)(425,0.9109616145627226)(426,0.9109616145627226)(427,0.9105658884052236)(428,0.9105658884052236)(429,0.9105658884052236)(430,0.9105658884052236)(431,0.9105658884052236)(432,0.9105658884052236)(433,0.9105658884052236)(434,0.9105658884052236)(435,0.9113573407202216)(436,0.9113573407202216)(437,0.9117530668777206)(438,0.9113573407202216)(439,0.9121487930352197)(440,0.9121487930352197)(441,0.9121487930352197)(442,0.9121487930352197)(443,0.9121487930352197)(444,0.9121487930352197)(445,0.9121487930352197)(446,0.9125445191927186)(447,0.9121487930352197)(448,0.9125445191927186)(449,0.9133359715077166)(450,0.9125445191927186)(451,0.9129402453502177)(452,0.9129402453502177)(453,0.9125445191927186)(454,0.9125445191927186)(455,0.9129402453502177)(456,0.9125445191927186)(457,0.9129402453502177)(458,0.9125445191927186)(459,0.9125445191927186)(460,0.9141274238227147)(461,0.9149188761377127)(462,0.9149188761377127)(463,0.9145231499802137)(464,0.9145231499802137)(465,0.9141274238227147)(466,0.9137316976652157)(467,0.9137316976652157)(468,0.9137316976652157)(469,0.9137316976652157)(470,0.9141274238227147)(471,0.9145231499802137)(472,0.9145231499802137)(473,0.9149188761377127)(474,0.9153146022952117)(475,0.9149188761377127)(476,0.9145231499802137)(477,0.9145231499802137)(478,0.9149188761377127)(479,0.9149188761377127)(480,0.9145231499802137)(481,0.9145231499802137)(482,0.9149188761377127)(483,0.9149188761377127)(484,0.9149188761377127)(485,0.9153146022952117)(486,0.9161060546102098)(487,0.9161060546102098)(488,0.9157103284527107)(489,0.9161060546102098)(490,0.9161060546102098)(491,0.9161060546102098)(492,0.9161060546102098)(493,0.9165017807677087)(494,0.9176889592402058)(495,0.9176889592402058)(496,0.9172932330827067)(497,0.9172932330827067)(498,0.9168975069252078)(499,0.9172932330827067)
};\addlegendentry{ROTAF $(s = 4)$}
\end{axis}
\end{tikzpicture}}
\end{center}
\caption{Test accuracy vs. global rounds for the MNIST non-i.i.d. dataset with $\gamma = 0.6$, $P=1$, and $\sigma^2=0$. The performance of the proposed approach with resampling $(s>1)$ and without resampling $(s=1)$ is reported.}
\label{sample_dup_attack_noniid}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[$N=20$, $G=5$]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=500,
ymin=0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
]
\addplot [semithick, color =red]
coordinates {
(0,0.1009)(1,0.1967)(2,0.2572)(3,0.2904)(4,0.3157)(5,0.3308)(6,0.3451)(7,0.3565)(8,0.3767)(9,0.3801)(10,0.3992)(11,0.3984)(12,0.4108)(13,0.4147)(14,0.4214)(15,0.4284)(16,0.4395)(17,0.4338)(18,0.4432)(19,0.4422)(20,0.4484)(21,0.4558)(22,0.4528)(23,0.4659)(24,0.4669)(25,0.473)(26,0.4753)(27,0.4789)(28,0.4743)(29,0.48)(30,0.4834)(31,0.4923)(32,0.4882)(33,0.4936)(34,0.4959)(35,0.51)(36,0.505)(37,0.5052)(38,0.5026)(39,0.5127)(40,0.5105)(41,0.5164)(42,0.5215)(43,0.5214)(44,0.5188)(45,0.5233)(46,0.528)(47,0.5267)(48,0.5344)(49,0.5346)(50,0.5335)(51,0.5357)(52,0.5416)(53,0.5395)(54,0.5453)(55,0.5468)(56,0.548)(57,0.5524)(58,0.5563)(59,0.5528)(60,0.5523)(61,0.5504)(62,0.5541)(63,0.5576)(64,0.5584)(65,0.5611)(66,0.5661)(67,0.5723)(68,0.5696)(69,0.5691)(70,0.5689)(71,0.5708)(72,0.5725)(73,0.5764)(74,0.5751)(75,0.5791)(76,0.5769)(77,0.5774)(78,0.579)(79,0.5842)(80,0.591)(81,0.5874)(82,0.5858)(83,0.5861)(84,0.593)(85,0.5959)(86,0.5941)(87,0.6008)(88,0.5955)(89,0.5991)(90,0.6008)(91,0.6062)(92,0.6022)(93,0.6024)(94,0.6066)(95,0.6063)(96,0.5993)(97,0.6075)(98,0.6086)(99,0.6057)(100,0.6099)(101,0.6157)(102,0.612)(103,0.613)(104,0.6123)(105,0.6236)(106,0.6179)(107,0.6229)(108,0.6181)(109,0.6253)(110,0.62)(111,0.6229)(112,0.6274)(113,0.6259)(114,0.6334)(115,0.626)(116,0.6336)(117,0.6302)(118,0.6332)(119,0.6327)(120,0.6282)(121,0.6343)(122,0.6342)(123,0.6336)(124,0.6397)(125,0.6376)(126,0.6383)(127,0.6367)(128,0.641)(129,0.6402)(130,0.6442)(131,0.6405)(132,0.6486)(133,0.6481)(134,0.6555)(135,0.6447)(136,0.6457)(137,0.6457)(138,0.6511)(139,0.6491)(140,0.6463)(141,0.6461)(142,0.6508)(143,0.6527)(144,0.6565)(145,0.6553)(146,0.6553)(147,0.6559)(148,0.6523)(149,0.6586)(150,0.6596)(151,0.6588)(152,0.6578)(153,0.6621)(154,0.6567)(155,0.6582)(156,0.662)(157,0.6655)(158,0.6641)(159,0.6663)(160,0.6623)(161,0.6596)(162,0.6598)(163,0.6671)(164,0.6735)(165,0.6708)(166,0.6695)(167,0.6719)(168,0.67)(169,0.6741)(170,0.6718)(171,0.6692)(172,0.6689)(173,0.6716)(174,0.6712)(175,0.6764)(176,0.6675)(177,0
.6713)(178,0.6758)(179,0.6753)(180,0.6757)(181,0.6737)(182,0.6788)(183,0.6755)(184,0.6765)(185,0.6771)(186,0.678)(187,0.6766)(188,0.6822)(189,0.6791)(190,0.6813)(191,0.6763)(192,0.6829)(193,0.6784)(194,0.6831)(195,0.6842)(196,0.6836)(197,0.6816)(198,0.6807)(199,0.6831)(200,0.6849)(201,0.6871)(202,0.6842)(203,0.6863)(204,0.688)(205,0.688)(206,0.6882)(207,0.6869)(208,0.6845)(209,0.6913)(210,0.6871)(211,0.6918)(212,0.6874)(213,0.6883)(214,0.6874)(215,0.6917)(216,0.688)(217,0.6888)(218,0.6884)(219,0.6909)(220,0.6911)(221,0.6936)(222,0.6948)(223,0.6946)(224,0.6923)(225,0.6913)(226,0.6965)(227,0.6946)(228,0.6945)(229,0.6995)(230,0.6916)(231,0.6945)(232,0.697)(233,0.692)(234,0.6995)(235,0.6954)(236,0.7005)(237,0.7004)(238,0.7006)(239,0.7065)(240,0.7021)(241,0.7027)(242,0.6976)(243,0.6976)(244,0.7004)(245,0.6947)(246,0.7014)(247,0.7068)(248,0.6965)(249,0.7011)(250,0.703)(251,0.7)(252,0.7047)(253,0.6979)(254,0.704)(255,0.7053)(256,0.704)(257,0.7029)(258,0.6983)(259,0.7048)(260,0.7035)(261,0.7034)(262,0.6991)(263,0.7061)(264,0.7056)(265,0.7054)(266,0.7045)(267,0.7011)(268,0.7056)(269,0.7056)(270,0.7131)(271,0.7036)(272,0.71)(273,0.7045)(274,0.7012)(275,0.704)(276,0.7056)(277,0.7075)(278,0.7104)(279,0.7058)(280,0.7074)(281,0.7046)(282,0.706)(283,0.7068)(284,0.711)(285,0.7082)(286,0.7094)(287,0.7096)(288,0.7074)(289,0.7055)(290,0.7096)(291,0.7091)(292,0.7118)(293,0.706)(294,0.7132)(295,0.7096)(296,0.7154)(297,0.7087)(298,0.7084)(299,0.7039)(300,0.7082)(301,0.7106)(302,0.714)(303,0.7118)(304,0.7136)(305,0.7119)(306,0.7111)(307,0.7132)(308,0.7171)(309,0.714)(310,0.7132)(311,0.711)(312,0.713)(313,0.7154)(314,0.7131)(315,0.7133)(316,0.7136)(317,0.7144)(318,0.7158)(319,0.7143)(320,0.7089)(321,0.7171)(322,0.7146)(323,0.7143)(324,0.7134)(325,0.7177)(326,0.7149)(327,0.7143)(328,0.7151)(329,0.7213)(330,0.7141)(331,0.7165)(332,0.7103)(333,0.7128)(334,0.7179)(335,0.714)(336,0.7134)(337,0.7162)(338,0.7156)(339,0.718)(340,0.7142)(341,0.7111)(342,0.7173)(343,0.7137)(344,0.7115)(345,0.7146)(3
46,0.7185)(347,0.7142)(348,0.7218)(349,0.7202)(350,0.7138)(351,0.717)(352,0.718)(353,0.7173)(354,0.7169)(355,0.7206)(356,0.7184)(357,0.7137)(358,0.7205)(359,0.716)(360,0.7182)(361,0.7182)(362,0.7208)(363,0.7219)(364,0.7229)(365,0.7213)(366,0.7201)(367,0.7196)(368,0.7204)(369,0.7183)(370,0.721)(371,0.7207)(372,0.7221)(373,0.725)(374,0.7207)(375,0.7228)(376,0.7139)(377,0.7214)(378,0.7196)(379,0.7217)(380,0.7199)(381,0.7161)(382,0.7175)(383,0.7225)(384,0.7222)(385,0.7194)(386,0.7243)(387,0.7185)(388,0.7176)(389,0.7172)(390,0.7261)(391,0.7191)(392,0.7275)(393,0.7238)(394,0.722)(395,0.7251)(396,0.7194)(397,0.7198)(398,0.7222)(399,0.7237)(400,0.7223)(401,0.7196)(402,0.726)(403,0.7239)(404,0.7221)(405,0.7233)(406,0.7196)(407,0.7197)(408,0.7253)(409,0.7207)(410,0.721)(411,0.7245)(412,0.7212)(413,0.7215)(414,0.7238)(415,0.7259)(416,0.7247)(417,0.7223)(418,0.7286)(419,0.7219)(420,0.7241)(421,0.7205)(422,0.724)(423,0.7261)(424,0.7211)(425,0.7263)(426,0.7284)(427,0.7237)(428,0.7258)(429,0.7248)(430,0.7268)(431,0.721)(432,0.7264)(433,0.7196)(434,0.7207)(435,0.7274)(436,0.7237)(437,0.7258)(438,0.724)(439,0.7258)(440,0.7221)(441,0.7278)(442,0.7252)(443,0.7212)(444,0.7214)(445,0.7258)(446,0.7223)(447,0.7276)(448,0.7228)(449,0.7283)(450,0.7235)(451,0.7244)(452,0.7281)(453,0.7263)(454,0.7279)(455,0.7249)(456,0.7278)(457,0.7263)(458,0.7254)(459,0.7239)(460,0.7249)(461,0.7231)(462,0.727)(463,0.7228)(464,0.7259)(465,0.7313)(466,0.7242)(467,0.7252)(468,0.7285)(469,0.7276)(470,0.7249)(471,0.7265)(472,0.7207)(473,0.7258)(474,0.7254)(475,0.7254)(476,0.7287)(477,0.7237)(478,0.7251)(479,0.7246)(480,0.7276)(481,0.7276)(482,0.7283)(483,0.7281)(484,0.7291)(485,0.7263)(486,0.7252)(487,0.7277)(488,0.7299)(489,0.7293)(490,0.7289)(491,0.7281)(492,0.7254)(493,0.7269)(494,0.7273)(495,0.7265)(496,0.7274)(497,0.7253)(498,0.7247)(499,0.7302)
};\addlegendentry{COTAF $B=0$}
\addplot [semithick, color =blue]
coordinates {
(0,0.1009)(1,0.2019)(2,0.2536)(3,0.2897)(4,0.3099)(5,0.3292)(6,0.3482)(7,0.3583)(8,0.3665)(9,0.383)(10,0.3939)(11,0.3994)(12,0.4038)(13,0.4158)(14,0.419)(15,0.4247)(16,0.4315)(17,0.4423)(18,0.4413)(19,0.441)(20,0.4494)(21,0.4543)(22,0.4634)(23,0.4593)(24,0.4656)(25,0.4652)(26,0.4736)(27,0.4745)(28,0.4744)(29,0.4866)(30,0.4832)(31,0.4843)(32,0.4825)(33,0.4972)(34,0.4924)(35,0.4912)(36,0.4994)(37,0.5068)(38,0.5066)(39,0.5069)(40,0.5054)(41,0.5089)(42,0.5109)(43,0.5143)(44,0.5131)(45,0.5128)(46,0.5256)(47,0.5192)(48,0.532)(49,0.5316)(50,0.5361)(51,0.5358)(52,0.5325)(53,0.5394)(54,0.5368)(55,0.5354)(56,0.5427)(57,0.5426)(58,0.5446)(59,0.5478)(60,0.5498)(61,0.5513)(62,0.5539)(63,0.5539)(64,0.5582)(65,0.5541)(66,0.5623)(67,0.562)(68,0.5652)(69,0.5617)(70,0.5591)(71,0.5705)(72,0.5729)(73,0.5749)(74,0.5721)(75,0.5697)(76,0.5772)(77,0.5765)(78,0.5734)(79,0.5794)(80,0.5829)(81,0.5823)(82,0.5844)(83,0.5826)(84,0.5867)(85,0.5857)(86,0.5881)(87,0.5899)(88,0.5872)(89,0.5946)(90,0.5913)(91,0.5963)(92,0.5923)(93,0.5962)(94,0.6041)(95,0.6013)(96,0.6026)(97,0.6056)(98,0.6061)(99,0.6045)(100,0.6068)(101,0.6102)(102,0.615)(103,0.6156)(104,0.6167)(105,0.6114)(106,0.608)(107,0.6182)(108,0.6167)(109,0.6191)(110,0.6189)(111,0.6193)(112,0.6279)(113,0.6223)(114,0.6262)(115,0.6243)(116,0.62)(117,0.6207)(118,0.6222)(119,0.6287)(120,0.6272)(121,0.628)(122,0.6336)(123,0.63)(124,0.6284)(125,0.6278)(126,0.6381)(127,0.6384)(128,0.6299)(129,0.6352)(130,0.6333)(131,0.6377)(132,0.6373)(133,0.6362)(134,0.6386)(135,0.6383)(136,0.6391)(137,0.64)(138,0.6464)(139,0.6404)(140,0.6408)(141,0.6474)(142,0.6445)(143,0.6498)(144,0.6504)(145,0.6449)(146,0.6503)(147,0.6436)(148,0.6436)(149,0.6488)(150,0.6473)(151,0.6505)(152,0.6535)(153,0.6555)(154,0.6502)(155,0.6607)(156,0.6502)(157,0.6513)(158,0.6553)(159,0.6555)(160,0.6537)(161,0.6507)(162,0.654)(163,0.6561)(164,0.6567)(165,0.6533)(166,0.6581)(167,0.6589)(168,0.6588)(169,0.6583)(170,0.6625)(171,0.6613)(172,0.6643)(173,0.662)(174,0.6651)(175,0.6603)(176,0.6669)(1
77,0.6698)(178,0.6668)(179,0.6682)(180,0.6637)(181,0.6616)(182,0.6639)(183,0.6657)(184,0.6669)(185,0.6661)(186,0.6668)(187,0.6655)(188,0.6688)(189,0.6693)(190,0.67)(191,0.6682)(192,0.6717)(193,0.6657)(194,0.6696)(195,0.6746)(196,0.6699)(197,0.6728)(198,0.6694)(199,0.6705)(200,0.6773)(201,0.6719)(202,0.6742)(203,0.6794)(204,0.6787)(205,0.6762)(206,0.6701)(207,0.6748)(208,0.6775)(209,0.6758)(210,0.6788)(211,0.6774)(212,0.6764)(213,0.6762)(214,0.6834)(215,0.6742)(216,0.6764)(217,0.6853)(218,0.6866)(219,0.6785)(220,0.6762)(221,0.6816)(222,0.6795)(223,0.6813)(224,0.6814)(225,0.6807)(226,0.6789)(227,0.6824)(228,0.6782)(229,0.6852)(230,0.6862)(231,0.6831)(232,0.6846)(233,0.6819)(234,0.6779)(235,0.687)(236,0.6843)(237,0.6901)(238,0.6867)(239,0.6865)(240,0.6861)(241,0.687)(242,0.6899)(243,0.6899)(244,0.6838)(245,0.6888)(246,0.6892)(247,0.6898)(248,0.6893)(249,0.6898)(250,0.6867)(251,0.6933)(252,0.6956)(253,0.6817)(254,0.6887)(255,0.6859)(256,0.6916)(257,0.6958)(258,0.6914)(259,0.6887)(260,0.6914)(261,0.6887)(262,0.6932)(263,0.6933)(264,0.6944)(265,0.6872)(266,0.6922)(267,0.6943)(268,0.6888)(269,0.6871)(270,0.6906)(271,0.693)(272,0.6954)(273,0.6957)(274,0.6932)(275,0.6948)(276,0.6887)(277,0.6932)(278,0.6978)(279,0.6953)(280,0.6936)(281,0.698)(282,0.6915)(283,0.6963)(284,0.6944)(285,0.6953)(286,0.696)(287,0.6944)(288,0.6993)(289,0.6951)(290,0.6995)(291,0.6977)(292,0.7013)(293,0.6957)(294,0.6986)(295,0.6984)(296,0.7013)(297,0.7019)(298,0.7012)(299,0.6961)(300,0.6965)(301,0.6967)(302,0.6997)(303,0.7022)(304,0.6985)(305,0.6981)(306,0.6978)(307,0.7013)(308,0.7008)(309,0.7012)(310,0.6954)(311,0.7023)(312,0.6975)(313,0.7058)(314,0.7032)(315,0.7057)(316,0.7037)(317,0.6998)(318,0.7054)(319,0.7005)(320,0.7056)(321,0.6987)(322,0.7056)(323,0.6995)(324,0.7007)(325,0.705)(326,0.7055)(327,0.7049)(328,0.7021)(329,0.7039)(330,0.7036)(331,0.7081)(332,0.7017)(333,0.7037)(334,0.7064)(335,0.7069)(336,0.7016)(337,0.705)(338,0.7059)(339,0.7037)(340,0.7059)(341,0.6959)(342,0.701)(343,0.708)(344,0.7043)(345,0.6986)(346,0.7097)(347,0.7053)(348,0.7011)(349,0.7066)(350,0.7081)(351,0.7026)(352,0.7021)(353,0.7049)(354,0.7081)(355,0.7062)(356,0.7059)(357,0.7022)(358,0.7075)(359,0.7028)(360,0.7029)(361,0.7076)(362,0.7059)(363,0.7045)(364,0.7073)(365,0.7043)(366,0.7092)(367,0.7048)(368,0.7021)(369,0.7085)(370,0.7064)(371,0.7106)(372,0.7061)(373,0.7073)(374,0.7049)(375,0.7075)(376,0.7087)(377,0.7073)(378,0.7077)(379,0.7074)(380,0.7031)(381,0.7022)(382,0.7097)(383,0.7068)(384,0.7077)(385,0.7095)(386,0.7078)(387,0.7073)(388,0.7089)(389,0.7104)(390,0.707)(391,0.7056)(392,0.7097)(393,0.7123)(394,0.7055)(395,0.7104)(396,0.7102)(397,0.7086)(398,0.71)(399,0.7091)(400,0.7082)(401,0.7125)(402,0.7089)(403,0.7111)(404,0.7109)(405,0.7083)(406,0.7117)(407,0.7111)(408,0.7132)(409,0.7113)(410,0.7093)(411,0.7099)(412,0.711)(413,0.7092)(414,0.7117)(415,0.7126)(416,0.7102)(417,0.7101)(418,0.7095)(419,0.7151)(420,0.7151)(421,0.7133)(422,0.7091)(423,0.7095)(424,0.7117)(425,0.7116)(426,0.7144)(427,0.7088)(428,0.7102)(429,0.7118)(430,0.7137)(431,0.7115)(432,0.7164)(433,0.7114)(434,0.7105)(435,0.7131)(436,0.7117)(437,0.7109)(438,0.715)(439,0.7103)(440,0.7133)(441,0.7115)(442,0.7092)(443,0.7089)(444,0.7165)(445,0.7112)(446,0.7176)(447,0.714)(448,0.7113)(449,0.711)(450,0.7168)(451,0.7115)(452,0.7119)(453,0.7186)(454,0.7116)(455,0.7143)(456,0.7105)(457,0.7128)(458,0.7122)(459,0.7151)(460,0.7135)(461,0.712)(462,0.7107)(463,0.7178)(464,0.7137)(465,0.7148)(466,0.7159)(467,0.7123)(468,0.7174)(469,0.7136)(470,0.7117)(471,0.7149)(472,0.7146)(473,0.7134)(474,0.7154)(475,0.7146)(476,0.7162)(477,0.7194)(478,0.7172)(479,0.714)(480,0.7172)(481,0.7133)(482,0.7154)(483,0.7141)(484,0.7147)(485,0.7145)(486,0.7183)(487,0.7168)(488,0.7188)(489,0.7125)(490,0.7146)(491,0.7208)(492,0.7071)(493,0.7143)(494,0.7152)(495,0.7116)(496,0.7123)(497,0.7122)(498,0.7141)(499,0.7141)
};\addlegendentry{ROTAF $B=0$}
\addplot [semithick, color = olive]
coordinates {
(0,0.1009)(1,0.1024)(2,0.0964)(3,0.1104)(4,0.125)(5,0.0907)(6,0.095)(7,0.0988)(8,0.0974)(9,0.0994)(10,0.1099)(11,0.1121)(12,0.1007)(13,0.1169)(14,0.1031)(15,0.0939)(16,0.0886)(17,0.107)(18,0.1057)(19,0.1023)(20,0.098)(21,0.1001)(22,0.1009)(23,0.1122)(24,0.0968)(25,0.0957)(26,0.098)(27,0.0926)(28,0.0965)(29,0.1051)(30,0.097)(31,0.0999)(32,0.1207)(33,0.0921)(34,0.1164)(35,0.1144)(36,0.0865)(37,0.0961)(38,0.1031)(39,0.0894)(40,0.1026)(41,0.0968)(42,0.1064)(43,0.1075)(44,0.1042)(45,0.097)(46,0.1033)(47,0.1137)(48,0.1126)(49,0.0959)(50,0.1026)(51,0.1166)(52,0.0951)(53,0.111)(54,0.1078)(55,0.1003)(56,0.0971)(57,0.1072)(58,0.104)(59,0.1111)(60,0.1)(61,0.1036)(62,0.1107)(63,0.1093)(64,0.1021)(65,0.1)(66,0.1042)(67,0.0974)(68,0.0977)(69,0.0995)(70,0.1047)(71,0.0984)(72,0.1066)(73,0.0951)(74,0.1033)(75,0.1091)(76,0.1097)(77,0.0909)(78,0.0978)(79,0.0858)(80,0.107)(81,0.1015)(82,0.103)(83,0.1063)(84,0.1096)(85,0.0983)(86,0.0982)(87,0.1115)(88,0.098)(89,0.1117)(90,0.1125)(91,0.1063)(92,0.1015)(93,0.0939)(94,0.1008)(95,0.1015)(96,0.1132)(97,0.0919)(98,0.1034)(99,0.1047)(100,0.0986)(101,0.1013)(102,0.099)(103,0.1049)(104,0.1021)(105,0.0929)(106,0.1018)(107,0.1016)(108,0.1054)(109,0.1056)(110,0.1001)(111,0.096)(112,0.0955)(113,0.0966)(114,0.0952)(115,0.1002)(116,0.0918)(117,0.0958)(118,0.0966)(119,0.0959)(120,0.0964)(121,0.094)(122,0.1072)(123,0.091)(124,0.1034)(125,0.0969)(126,0.097)(127,0.0973)(128,0.1082)(129,0.0929)(130,0.101)(131,0.1174)(132,0.104)(133,0.0972)(134,0.108)(135,0.0902)(136,0.0988)(137,0.0984)(138,0.0957)(139,0.0976)(140,0.0922)(141,0.0997)(142,0.1072)(143,0.0994)(144,0.1006)(145,0.0929)(146,0.1028)(147,0.106)(148,0.0939)(149,0.1066)(150,0.1031)(151,0.1023)(152,0.1018)(153,0.0921)(154,0.0907)(155,0.1042)(156,0.0929)(157,0.0931)(158,0.1031)(159,0.1032)(160,0.1046)(161,0.1022)(162,0.1037)(163,0.1021)(164,0.0999)(165,0.0853)(166,0.1043)(167,0.109)(168,0.0969)(169,0.0917)(170,0.1007)(171,0.1089)(172,0.1177)(173,0.1083)(174,0.1018)(175,0.1008)(176,0.1035)(177,0.1004)(178,0.1033)(179,0.1094)(180,0.1015)(181,0.1036)(182,0.105)(183,0.0987)(184,0.0989)(185,0.0925)(186,0.0973)(187,0.0993)(188,0.1145)(189,0.1142)(190,0.1082)(191,0.1061)(192,0.0949)(193,0.1101)(194,0.0977)(195,0.1055)(196,0.0921)(197,0.1003)(198,0.102)(199,0.104)(200,0.089)(201,0.111)(202,0.0944)(203,0.1043)(204,0.1121)(205,0.094)(206,0.0959)(207,0.1056)(208,0.1005)(209,0.0926)(210,0.1114)(211,0.1019)(212,0.1004)(213,0.1103)(214,0.1107)(215,0.1096)(216,0.1075)(217,0.0947)(218,0.1022)(219,0.0966)(220,0.1011)(221,0.1026)(222,0.0993)(223,0.1015)(224,0.0864)(225,0.0981)(226,0.1005)(227,0.0991)(228,0.119)(229,0.1021)(230,0.1032)(231,0.1063)(232,0.1019)(233,0.1029)(234,0.113)(235,0.1006)(236,0.0969)(237,0.0904)(238,0.0916)(239,0.1084)(240,0.0995)(241,0.0868)(242,0.0964)(243,0.0948)(244,0.1111)(245,0.0896)(246,0.0907)(247,0.0991)(248,0.0994)(249,0.0938)(250,0.097)(251,0.1188)(252,0.1091)(253,0.1061)(254,0.1082)(255,0.1025)(256,0.1159)(257,0.1029)(258,0.1052)(259,0.0941)(260,0.1013)(261,0.1135)(262,0.1045)(263,0.1065)(264,0.0947)(265,0.1078)(266,0.1078)(267,0.0921)(268,0.1094)(269,0.0935)(270,0.1079)(271,0.1014)(272,0.0868)(273,0.0955)(274,0.0937)(275,0.1081)(276,0.1049)(277,0.0888)(278,0.0957)(279,0.1173)(280,0.0993)(281,0.0975)(282,0.0972)(283,0.0961)(284,0.1089)(285,0.1149)(286,0.0974)(287,0.1029)(288,0.0976)(289,0.099)(290,0.104)(291,0.0982)(292,0.0973)(293,0.102)(294,0.0981)(295,0.1009)(296,0.0964)(297,0.1006)(298,0.1205)(299,0.1061)(300,0.0916)(301,0.0989)(302,0.1182)(303,0.0948)(304,0.1119)(305,0.1072)(306,0.0924)(307,0.0838)(308,0.1007)(309,0.0966)(310,0.1098)(311,0.0964)(312,0.1126)(313,0.1016)(314,0.1219)(315,0.1087)(316,0.1043)(317,0.102)(318,0.0969)(319,0.1014)(320,0.0947)(321,0.1103)(322,0.1086)(323,0.1007)(324,0.1082)(325,0.0908)(326,0.093)(327,0.1005)(328,0.1075)(329,0.0918)(330,0.1016)(331,0.0998)(332,0.1085)(333,0.1094)(334,0.0948)(335,0.1083)(336,0.1148)(337,0.1012)(338,0.0966)(339,0.0991)(340,0.1057)(341,0.0903)(342,0.1066)(343,0.0952)(344,0.1028)(345,0.0951)
(346,0.0953)(347,0.0994)(348,0.0965)(349,0.0994)(350,0.0934)(351,0.1088)(352,0.0915)(353,0.1089)(354,0.1159)(355,0.1035)(356,0.1105)(357,0.0957)(358,0.1071)(359,0.1026)(360,0.1016)(361,0.1059)(362,0.1153)(363,0.0999)(364,0.0962)(365,0.1075)(366,0.1141)(367,0.1043)(368,0.1023)(369,0.103)(370,0.0936)(371,0.1089)(372,0.0988)(373,0.1062)(374,0.1037)(375,0.103)(376,0.0968)(377,0.0854)(378,0.115)(379,0.1067)(380,0.0991)(381,0.0955)(382,0.1099)(383,0.1035)(384,0.105)(385,0.1103)(386,0.1099)(387,0.0965)(388,0.0918)(389,0.1073)(390,0.0977)(391,0.0989)(392,0.0922)(393,0.0973)(394,0.1035)(395,0.0901)(396,0.0996)(397,0.0993)(398,0.1086)(399,0.1015)(400,0.0981)(401,0.1095)(402,0.1042)(403,0.1115)(404,0.1067)(405,0.1067)(406,0.0984)(407,0.1053)(408,0.1021)(409,0.1142)(410,0.102)(411,0.0997)(412,0.0864)(413,0.0958)(414,0.1005)(415,0.096)(416,0.1009)(417,0.0993)(418,0.1034)(419,0.1045)(420,0.1052)(421,0.1026)(422,0.1028)(423,0.0981)(424,0.1)(425,0.0974)(426,0.0942)(427,0.0987)(428,0.0888)(429,0.102)(430,0.1038)(431,0.1035)(432,0.1025)(433,0.1048)(434,0.1045)(435,0.1015)(436,0.108)(437,0.0988)(438,0.1083)(439,0.1026)(440,0.1079)(441,0.1017)(442,0.1089)(443,0.1021)(444,0.1002)(445,0.0996)(446,0.0939)(447,0.094)(448,0.1067)(449,0.1038)(450,0.1101)(451,0.0998)(452,0.105)(453,0.0953)(454,0.1111)(455,0.097)(456,0.1059)(457,0.0925)(458,0.0932)(459,0.098)(460,0.0868)(461,0.1071)(462,0.0937)(463,0.1053)(464,0.1036)(465,0.099)(466,0.1164)(467,0.1054)(468,0.0943)(469,0.1124)(470,0.0978)(471,0.1098)(472,0.1007)(473,0.1016)(474,0.1001)(475,0.1034)(476,0.0981)(477,0.0978)(478,0.0977)(479,0.1021)(480,0.114)(481,0.1078)(482,0.1063)(483,0.1085)(484,0.1041)(485,0.106)(486,0.1053)(487,0.0972)(488,0.0972)(489,0.0998)(490,0.0922)(491,0.0961)(492,0.1007)(493,0.1016)(494,0.1061)(495,0.1106)(496,0.1068)(497,0.1051)(498,0.0911)(499,0.0996)
};\addlegendentry{COTAF $B=2$}
\addplot [semithick, color = green]
coordinates {
(0,0.1009)(1,0.2067)(2,0.2499)(3,0.2915)(4,0.3131)(5,0.3201)(6,0.3413)(7,0.3518)(8,0.3678)(9,0.3828)(10,0.3849)(11,0.3962)(12,0.4066)(13,0.4064)(14,0.4204)(15,0.4195)(16,0.4259)(17,0.4278)(18,0.4415)(19,0.4427)(20,0.4464)(21,0.4516)(22,0.4513)(23,0.4559)(24,0.4581)(25,0.458)(26,0.4638)(27,0.4678)(28,0.4689)(29,0.4762)(30,0.4753)(31,0.4806)(32,0.4803)(33,0.4854)(34,0.4786)(35,0.4932)(36,0.4851)(37,0.4938)(38,0.4987)(39,0.4997)(40,0.5041)(41,0.5007)(42,0.5103)(43,0.51)(44,0.5071)(45,0.5102)(46,0.5159)(47,0.5222)(48,0.5183)(49,0.5124)(50,0.5212)(51,0.5231)(52,0.5254)(53,0.5258)(54,0.5227)(55,0.5295)(56,0.5311)(57,0.5307)(58,0.5354)(59,0.5384)(60,0.5486)(61,0.5437)(62,0.5411)(63,0.5442)(64,0.5463)(65,0.5448)(66,0.5489)(67,0.5495)(68,0.5504)(69,0.551)(70,0.5556)(71,0.5505)(72,0.5535)(73,0.5555)(74,0.5548)(75,0.5637)(76,0.5669)(77,0.5558)(78,0.5613)(79,0.5647)(80,0.5703)(81,0.5714)(82,0.5718)(83,0.5637)(84,0.5702)(85,0.5682)(86,0.5771)(87,0.5796)(88,0.5833)(89,0.5715)(90,0.5835)(91,0.5816)(92,0.5832)(93,0.5802)(94,0.5797)(95,0.5847)(96,0.5847)(97,0.5929)(98,0.5913)(99,0.5892)(100,0.5871)(101,0.5879)(102,0.5904)(103,0.5943)(104,0.5925)(105,0.5873)(106,0.6013)(107,0.5999)(108,0.5963)(109,0.5968)(110,0.6026)(111,0.6047)(112,0.6046)(113,0.605)(114,0.6008)(115,0.603)(116,0.6071)(117,0.6084)(118,0.6048)(119,0.6066)(120,0.604)(121,0.6104)(122,0.6087)(123,0.6093)(124,0.6108)(125,0.6147)(126,0.6084)(127,0.6122)(128,0.613)(129,0.6166)(130,0.6228)(131,0.6199)(132,0.6193)(133,0.6228)(134,0.6212)(135,0.6233)(136,0.6199)(137,0.62)(138,0.6222)(139,0.6248)(140,0.6271)(141,0.6259)(142,0.6277)(143,0.6248)(144,0.6288)(145,0.6332)(146,0.6315)(147,0.6378)(148,0.6326)(149,0.6347)(150,0.6372)(151,0.6308)(152,0.6322)(153,0.6399)(154,0.6322)(155,0.639)(156,0.6375)(157,0.6321)(158,0.6372)(159,0.6397)(160,0.6378)(161,0.6394)(162,0.64)(163,0.6353)(164,0.6406)(165,0.6443)(166,0.6443)(167,0.6431)(168,0.6427)(169,0.6429)(170,0.6443)(171,0.6404)(172,0.6482)(173,0.6414)(174,0.643)(175,0.6436)(176,0.6457)(177,0.6488)(178,0.6474)(179,0.6445)(180,0.6452)(181,0.6539)(182,0.6528)(183,0.6471)(184,0.6477)(185,0.6493)(186,0.6518)(187,0.6491)(188,0.6495)(189,0.6522)(190,0.6577)(191,0.6524)(192,0.6544)(193,0.6537)(194,0.6509)(195,0.6463)(196,0.6516)(197,0.6488)(198,0.651)(199,0.6521)(200,0.6542)(201,0.6534)(202,0.6605)(203,0.6564)(204,0.6546)(205,0.6584)(206,0.6561)(207,0.6639)(208,0.6624)(209,0.6607)(210,0.6583)(211,0.6575)(212,0.6594)(213,0.662)(214,0.66)(215,0.6661)(216,0.6607)(217,0.6602)(218,0.6595)(219,0.6618)(220,0.6672)(221,0.6611)(222,0.6633)(223,0.6596)(224,0.6637)(225,0.6621)(226,0.6672)(227,0.6664)(228,0.6673)(229,0.6732)(230,0.669)(231,0.668)(232,0.6651)(233,0.67)(234,0.6677)(235,0.6741)(236,0.673)(237,0.6687)(238,0.6704)(239,0.6677)(240,0.668)(241,0.6695)(242,0.6686)(243,0.6693)(244,0.6671)(245,0.6732)(246,0.6731)(247,0.6771)(248,0.6699)(249,0.6722)(250,0.6715)(251,0.6741)(252,0.6701)(253,0.675)(254,0.6794)(255,0.6746)(256,0.6737)(257,0.6735)(258,0.6776)(259,0.6739)(260,0.6748)(261,0.6758)(262,0.6739)(263,0.6761)(264,0.6801)(265,0.6801)(266,0.6792)(267,0.6741)(268,0.6788)(269,0.6787)(270,0.6747)(271,0.6728)(272,0.6794)(273,0.6806)(274,0.6741)(275,0.6813)(276,0.6737)(277,0.6776)(278,0.6813)(279,0.6763)(280,0.6767)(281,0.6808)(282,0.6744)(283,0.681)(284,0.6792)(285,0.6782)(286,0.675)(287,0.6797)(288,0.6809)(289,0.6845)(290,0.6796)(291,0.6873)(292,0.6804)(293,0.6775)(294,0.6866)(295,0.6798)(296,0.6789)(297,0.6815)(298,0.6808)(299,0.6809)(300,0.6803)(301,0.6816)(302,0.6833)(303,0.6787)(304,0.6733)(305,0.6838)(306,0.6803)(307,0.6841)(308,0.6833)(309,0.6797)(310,0.6798)(311,0.6831)(312,0.6863)(313,0.6837)(314,0.6804)(315,0.6847)(316,0.6819)(317,0.6827)(318,0.6842)(319,0.6845)(320,0.6871)(321,0.683)(322,0.6864)(323,0.6875)(324,0.6841)(325,0.6841)(326,0.6813)(327,0.6874)(328,0.6877)(329,0.6877)(330,0.6822)(331,0.6866)(332,0.6866)(333,0.6826)(334,0.6839)(335,0.6829)(336,0.6871)(337,0.6899)(338,0.6845)(339,0.6836)(340,0.6891)(341,0.6875)(342,0.6842)(343,0.6857)(344,0.6888)(345,0.6865)(346,0.6828)(347,0.6839)(348,0.6882)(349,0.6862)(350,0.6857)(351,0.6894)(352,0.6883)(353,0.6851)(354,0.6896)(355,0.6886)(356,0.6885)(357,0.6901)(358,0.6878)(359,0.6938)(360,0.687)(361,0.691)(362,0.6867)(363,0.685)(364,0.6892)(365,0.6853)(366,0.695)(367,0.6888)(368,0.6881)(369,0.683)(370,0.6896)(371,0.6877)(372,0.6924)(373,0.6873)(374,0.6867)(375,0.687)(376,0.6898)(377,0.6859)(378,0.686)(379,0.6878)(380,0.6859)(381,0.6907)(382,0.6937)(383,0.6881)(384,0.6877)(385,0.6885)(386,0.6915)(387,0.6865)(388,0.6883)(389,0.6871)(390,0.6899)(391,0.6889)(392,0.6939)(393,0.6909)(394,0.6896)(395,0.688)(396,0.695)(397,0.6867)(398,0.6895)(399,0.6905)(400,0.6926)(401,0.6878)(402,0.6914)(403,0.6936)(404,0.695)(405,0.6895)(406,0.6933)(407,0.6938)(408,0.6893)(409,0.6934)(410,0.699)(411,0.6924)(412,0.6886)(413,0.6878)(414,0.6912)(415,0.6932)(416,0.6944)(417,0.6953)(418,0.6875)(419,0.692)(420,0.6958)(421,0.6929)(422,0.6885)(423,0.6919)(424,0.6899)(425,0.6904)(426,0.6945)(427,0.6908)(428,0.6891)(429,0.6929)(430,0.6921)(431,0.6984)(432,0.6932)(433,0.6975)(434,0.6902)(435,0.6979)(436,0.6948)(437,0.6911)(438,0.6951)(439,0.7043)(440,0.6964)(441,0.6958)(442,0.6944)(443,0.7029)(444,0.6994)(445,0.6943)(446,0.6971)(447,0.6985)(448,0.7004)(449,0.6974)(450,0.7022)(451,0.7025)(452,0.6952)(453,0.6916)(454,0.6975)(455,0.6986)(456,0.6928)(457,0.6958)(458,0.6983)(459,0.6964)(460,0.6966)(461,0.6958)(462,0.6988)(463,0.6986)(464,0.697)(465,0.6983)(466,0.7057)(467,0.699)(468,0.701)(469,0.696)(470,0.697)(471,0.6987)(472,0.6982)(473,0.6986)(474,0.7023)(475,0.7038)(476,0.7027)(477,0.6953)(478,0.6935)(479,0.702)(480,0.6987)(481,0.7003)(482,0.6972)(483,0.7062)(484,0.7041)(485,0.6973)(486,0.702)(487,0.7037)(488,0.7024)(489,0.6954)(490,0.7041)(491,0.699)(492,0.7009)(493,0.6959)(494,0.6993)(495,0.7052)(496,0.7023)(497,0.7018)(498,0.6996)(499,0.6977)
};\addlegendentry{ROTAF $B=2$}
\end{axis}
\end{tikzpicture}}
\subfigure[$N=100$, $G=20$]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xlabel={Global rounds},
xmajorgrids,
xmin=0, xmax=1000,
ymin=0.0, ymax=1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ylabel={Test accuracy},
ymajorgrids,
ytick style={color=black},
grid=major,
scaled ticks=true,
legend pos=south east,
grid style=densely dashed,
]
\addplot [semithick, color = green]
coordinates {
(0,0.1009)(1,0.1301)(2,0.1525)(3,0.1656)(4,0.1957)(5,0.2106)(6,0.2154)(7,0.2316)(8,0.2441)(9,0.2528)(10,0.2609)(11,0.2679)(12,0.2688)(13,0.286)(14,0.2899)(15,0.2988)(16,0.2966)(17,0.3001)(18,0.302)(19,0.314)(20,0.3156)(21,0.3135)(22,0.324)(23,0.324)(24,0.3288)(25,0.3335)(26,0.3311)(27,0.3367)(28,0.3467)(29,0.3456)(30,0.3401)(31,0.3581)(32,0.3514)(33,0.3555)(34,0.3629)(35,0.3669)(36,0.3716)(37,0.3619)(38,0.369)(39,0.373)(40,0.3758)(41,0.3838)(42,0.381)(43,0.381)(44,0.3833)(45,0.3846)(46,0.3912)(47,0.394)(48,0.3973)(49,0.398)(50,0.3921)(51,0.3918)(52,0.394)(53,0.4035)(54,0.4033)(55,0.3988)(56,0.4053)(57,0.4057)(58,0.4059)(59,0.4102)(60,0.4151)(61,0.4097)(62,0.4102)(63,0.4205)(64,0.4201)(65,0.42)(66,0.4247)(67,0.4179)(68,0.4202)(69,0.4217)(70,0.4212)(71,0.4267)(72,0.4279)(73,0.4294)(74,0.4268)(75,0.4375)(76,0.428)(77,0.4322)(78,0.4324)(79,0.4312)(80,0.4401)(81,0.4368)(82,0.4403)(83,0.4402)(84,0.4337)(85,0.4427)(86,0.4422)(87,0.4433)(88,0.4433)(89,0.4402)(90,0.4443)(91,0.4416)(92,0.4436)(93,0.4484)(94,0.4441)(95,0.4473)(96,0.4469)(97,0.4575)(98,0.4547)(99,0.4561)(100,0.4488)(101,0.449)(102,0.4578)(103,0.4574)(104,0.457)(105,0.4588)(106,0.4584)(107,0.4642)(108,0.458)(109,0.4665)(110,0.4604)(111,0.4669)(112,0.4705)(113,0.4622)(114,0.4598)(115,0.4655)(116,0.473)(117,0.4637)(118,0.4646)(119,0.4674)(120,0.467)(121,0.4653)(122,0.4664)(123,0.4692)(124,0.4695)(125,0.475)(126,0.4672)(127,0.4718)(128,0.4763)(129,0.4716)(130,0.4696)(131,0.4774)(132,0.4725)(133,0.4751)(134,0.4717)(135,0.4805)(136,0.4739)(137,0.4794)(138,0.4811)(139,0.4783)(140,0.4792)(141,0.4826)(142,0.4764)(143,0.4871)(144,0.4825)(145,0.478)(146,0.4838)(147,0.4894)(148,0.4937)(149,0.4811)(150,0.4829)(151,0.485)(152,0.4884)(153,0.4918)(154,0.4877)(155,0.4884)(156,0.4885)(157,0.4872)(158,0.4936)(159,0.4927)(160,0.4874)(161,0.4924)(162,0.493)(163,0.4923)(164,0.495)(165,0.4899)(166,0.4966)(167,0.4933)(168,0.4933)(169,0.4957)(170,0.4922)(171,0.5001)(172,0.5001)(173,0.5009)(174,0.4984)(175,0.499)(176,0.4995)(177,0.5004)
(178,0.5022)(179,0.5024)(180,0.5055)(181,0.5022)(182,0.5046)(183,0.4989)(184,0.5048)(185,0.5041)(186,0.5007)(187,0.5133)(188,0.5094)(189,0.5049)(190,0.5115)(191,0.5101)(192,0.5128)(193,0.5064)(194,0.5141)(195,0.5091)(196,0.5175)(197,0.5081)(198,0.5133)(199,0.5055)(200,0.5093)(201,0.5085)(202,0.5114)(203,0.5092)(204,0.5095)(205,0.5151)(206,0.5107)(207,0.5124)(208,0.516)(209,0.5171)(210,0.5181)(211,0.523)(212,0.5185)(213,0.5177)(214,0.5136)(215,0.5165)(216,0.5211)(217,0.5197)(218,0.5202)(219,0.5144)(220,0.5213)(221,0.5229)(222,0.5276)(223,0.5222)(224,0.5199)(225,0.523)(226,0.5209)(227,0.522)(228,0.5194)(229,0.5234)(230,0.5238)(231,0.5207)(232,0.5314)(233,0.5226)(234,0.5241)(235,0.5266)(236,0.5285)(237,0.5315)(238,0.5247)(239,0.5287)(240,0.5238)(241,0.529)(242,0.523)(243,0.5274)(244,0.5308)(245,0.5294)(246,0.5275)(247,0.5272)(248,0.5262)(249,0.5332)(250,0.5333)(251,0.5356)(252,0.5319)(253,0.5323)(254,0.5378)(255,0.5326)(256,0.5285)(257,0.5315)(258,0.5318)(259,0.5321)(260,0.5364)(261,0.5349)(262,0.5298)(263,0.5341)(264,0.537)(265,0.5328)(266,0.5422)(267,0.5343)(268,0.538)(269,0.5389)(270,0.54)(271,0.5357)(272,0.5406)(273,0.5412)(274,0.5419)(275,0.5405)(276,0.5413)(277,0.5404)(278,0.5427)(279,0.5366)(280,0.5458)(281,0.5415)(282,0.5433)(283,0.5436)(284,0.5486)(285,0.5422)(286,0.548)(287,0.5497)(288,0.5449)(289,0.5425)(290,0.5447)(291,0.5534)(292,0.5474)(293,0.5459)(294,0.5482)(295,0.5489)(296,0.5453)(297,0.5514)(298,0.5498)(299,0.5428)(300,0.5468)(301,0.5407)(302,0.5495)(303,0.548)(304,0.5496)(305,0.5499)(306,0.5496)(307,0.553)(308,0.5546)(309,0.5518)(310,0.5522)(311,0.5534)(312,0.5554)(313,0.5562)(314,0.5533)(315,0.5517)(316,0.5529)(317,0.5586)(318,0.5539)(319,0.5476)(320,0.5602)(321,0.5547)(322,0.55)(323,0.5578)(324,0.5533)(325,0.5576)(326,0.5572)(327,0.5574)(328,0.5609)(329,0.558)(330,0.5576)(331,0.5574)(332,0.5555)(333,0.5586)(334,0.5551)(335,0.5627)(336,0.5627)(337,0.5624)(338,0.5628)(339,0.5606)(340,0.562)(341,0.5636)(342,0.5646)(343,0.5639)(344,0.5597)(345,0.5617)(346,0.5643)(347,0.5627)(348,0.5673)(349,0.5645)(350,0.5677)(351,0.5627)(352,0.5655)(353,0.5633)(354,0.564)(355,0.5637)(356,0.568)(357,0.5728)(358,0.5662)(359,0.5678)(360,0.5645)(361,0.5707)(362,0.5715)(363,0.5701)(364,0.5693)(365,0.5678)(366,0.5721)(367,0.5671)(368,0.5677)(369,0.5703)(370,0.5747)(371,0.5695)(372,0.5678)(373,0.5791)(374,0.5712)(375,0.5724)(376,0.5773)(377,0.576)(378,0.5731)(379,0.5797)(380,0.5693)(381,0.5725)(382,0.5705)(383,0.575)(384,0.5694)(385,0.5759)(386,0.5726)(387,0.5719)(388,0.5761)(389,0.5737)(390,0.575)(391,0.5748)(392,0.5813)(393,0.5781)(394,0.5729)(395,0.5807)(396,0.5769)(397,0.5777)(398,0.5784)(399,0.5794)(400,0.5812)(401,0.5819)(402,0.5792)(403,0.5812)(404,0.5791)(405,0.5848)(406,0.5804)(407,0.5831)(408,0.5827)(409,0.5842)(410,0.5841)(411,0.5781)(412,0.582)(413,0.5839)(414,0.5823)(415,0.5885)(416,0.5844)(417,0.5848)(418,0.5872)(419,0.5872)(420,0.588)(421,0.5905)(422,0.5858)(423,0.5839)(424,0.5851)(425,0.5896)(426,0.5888)(427,0.5872)(428,0.5786)(429,0.5893)(430,0.5906)(431,0.5871)(432,0.587)(433,0.5881)(434,0.5918)(435,0.5865)(436,0.5834)(437,0.5939)(438,0.5878)(439,0.5893)(440,0.5899)(441,0.588)(442,0.5898)(443,0.5904)(444,0.5891)(445,0.5914)(446,0.5949)(447,0.5934)(448,0.59)(449,0.5942)(450,0.5959)(451,0.5902)(452,0.5963)(453,0.5984)(454,0.5969)(455,0.5948)(456,0.5965)(457,0.5975)(458,0.595)(459,0.6047)(460,0.5963)(461,0.5947)(462,0.5969)(463,0.5955)(464,0.5957)(465,0.5954)(466,0.5987)(467,0.5957)(468,0.5971)(469,0.6016)(470,0.5972)(471,0.5948)(472,0.6009)(473,0.594)(474,0.5963)(475,0.5993)(476,0.6001)(477,0.5955)(478,0.6008)(479,0.5986)(480,0.6029)(481,0.5983)(482,0.5949)(483,0.5943)(484,0.5984)(485,0.5992)(486,0.6001)(487,0.6016)(488,0.6012)(489,0.6059)(490,0.5997)(491,0.6048)(492,0.5984)(493,0.6016)(494,0.6059)(495,0.6038)(496,0.6077)(497,0.608)(498,0.6006)(499,0.6048)(500,0.6056)(501,0.6096)(502,0.6013)(503,0.6063)(504,0.6019)(505,0.6067)(506,0.6082)(507,0.6039)(508,0.6087)(509,0.6081)(510,0.6088)(511,0.6076)(512,0.6105)(513,0.6106)(514,0.6101)(515,0.6075)(516,0.6101)(517,0.6028)(518,0.6136)(519,0.6127)(520,0.6124)(521,0.6071)(522,0.6099)(523,0.6081)(524,0.6106)(525,0.6129)(526,0.6101)(527,0.6114)(528,0.6117)(529,0.6116)(530,0.6085)(531,0.6128)(532,0.6112)(533,0.609)(534,0.6103)(535,0.6125)(536,0.6109)(537,0.6133)(538,0.6136)(539,0.6126)(540,0.6127)(541,0.6125)(542,0.6097)(543,0.6118)(544,0.6179)(545,0.6171)(546,0.6092)(547,0.6133)(548,0.6092)(549,0.6122)(550,0.6168)(551,0.6121)(552,0.6153)(553,0.6186)(554,0.6191)(555,0.6186)(556,0.6166)(557,0.6234)(558,0.617)(559,0.6196)(560,0.6148)(561,0.6208)(562,0.6193)(563,0.6158)(564,0.6178)(565,0.6152)(566,0.6224)(567,0.6199)(568,0.6178)(569,0.6163)(570,0.6209)(571,0.6205)(572,0.6154)(573,0.617)(574,0.6176)(575,0.6213)(576,0.6249)(577,0.6186)(578,0.6201)(579,0.6206)(580,0.6191)(581,0.6227)(582,0.6215)(583,0.6205)(584,0.6197)(585,0.6223)(586,0.6231)(587,0.622)(588,0.6205)(589,0.6216)(590,0.6185)(591,0.6212)(592,0.6216)(593,0.6177)(594,0.6218)(595,0.6305)(596,0.6197)(597,0.6258)(598,0.6225)(599,0.6229)(600,0.6299)(601,0.6204)(602,0.6271)(603,0.6244)(604,0.622)(605,0.6249)(606,0.6278)(607,0.6263)(608,0.6206)(609,0.6242)(610,0.6217)(611,0.6281)(612,0.623)(613,0.6264)(614,0.6283)(615,0.6235)(616,0.6328)(617,0.6295)(618,0.625)(619,0.6299)(620,0.6283)(621,0.6285)(622,0.6291)(623,0.6297)(624,0.625)(625,0.6295)(626,0.6296)(627,0.6295)(628,0.6328)(629,0.6294)(630,0.6315)(631,0.6313)(632,0.6321)(633,0.6322)(634,0.6288)(635,0.6252)(636,0.6328)(637,0.6322)(638,0.63)(639,0.6304)(640,0.6295)(641,0.6333)(642,0.633)(643,0.6332)(644,0.6311)(645,0.6237)(646,0.6311)(647,0.6313)(648,0.6361)(649,0.6373)(650,0.6299)(651,0.6308)(652,0.6319)(653,0.6387)(654,0.6351)(655,0.632)(656,0.6354)(657,0.6381)(658,0.6375)(659,0.6345)(660,0.6348)(661,0.6344)(662,0.6343)(663,0.6322)(664,0.6339)(665,0.636)(666,0.6355)(667,0.638)(668,0.6351)(669,0.6327)(670,0.6398)(671,0.6349)(672,0.6359)(673,0.6335)(674,0.6357)(675,0.6313)(676,0.6382)(677,0.6335)(678,0.6411)(679,0.6363)(680,0.6313)(681,0.6397)(682,0.635)(683,0.6325)(684,0.6345)(685,0.6382)(686,0.6372)(687,0.6433)(688,0.6362)(689,0.6382)(690,0.6376)(691,0.6411)(692,0.636)(693,0.6437)(694,0.645)(695,0.6385)(696,0.6386)(697,0.6401)(698,0.6383)(699,0.6386)(700,0.6361)(701,0.6385)(702,0.6416)(703,0.6439)(704,0.6434)(705,0.6463)(706,0.6384)(707,0.6422)(708,0.6396)(709,0.6405)(710,0.645)(711,0.6411)(712,0.6435)(713,0.6421)(714,0.6432)(715,0.6435)(716,0.6409)(717,0.6441)(718,0.6403)(719,0.6428)(720,0.6483)(721,0.645)(722,0.6483)(723,0.6428)(724,0.6399)(725,0.6398)(726,0.6458)(727,0.6462)(728,0.6419)(729,0.6422)(730,0.6471)(731,0.6396)(732,0.642)(733,0.6478)(734,0.6493)(735,0.647)(736,0.6407)(737,0.6482)(738,0.6509)(739,0.648)(740,0.6481)(741,0.646)(742,0.6453)(743,0.6431)(744,0.6452)(745,0.6479)(746,0.6477)(747,0.6406)(748,0.6465)(749,0.6479)(750,0.653)(751,0.6452)(752,0.6473)(753,0.6469)(754,0.6527)(755,0.6517)(756,0.6452)(757,0.6492)(758,0.6486)(759,0.6503)(760,0.6517)(761,0.6494)(762,0.6452)(763,0.6484)(764,0.6551)(765,0.6569)(766,0.6449)(767,0.6494)(768,0.6479)(769,0.6514)(770,0.6518)(771,0.6487)(772,0.6529)(773,0.6461)(774,0.6536)(775,0.6513)(776,0.6551)(777,0.6491)(778,0.6508)(779,0.6571)(780,0.6503)(781,0.6528)(782,0.6493)(783,0.6536)(784,0.6498)(785,0.6515)(786,0.6546)(787,0.6494)(788,0.6523)(789,0.6521)(790,0.6522)(791,0.6563)(792,0.6542)(793,0.6551)(794,0.6495)(795,0.6559)(796,0.6527)(797,0.6551)(798,0.6534)(799,0.6604)(800,0.6571)(801,0.6546)(802,0.6609)(803,0.6546)(804,0.6555)(805,0.6554)(806,0.6606)(807,0.6562)(808,0.6562)(809,0.6541)(810,0.6573)(811,0.6554)(812,0.6577)(813,0.6562)(814,0.6585)(815,0.6576)(816,0.6568)(817,0.6579)(818,0.6557)(819,0.6583)(820,0.6592)(821,0.658)(822,0.6608)(823,0.66)(824,0.6544)(825,0.6572)(826,0.6552)(827,0.6569)(828,0.6598)(829,0.659)(830,0.66)(831,0.6624)(832,0.659)(833,0.6584)(834,0.6581)(835,0.6606)(836,0.6597)(837,0.6587)(838,0.6611)(839,0.6593)(840,0.655)(841,0.6566)(842,0.659)(843,0.6626)(844,0.6592)(845,0.6553)(846,0.6608)(847,0.6561)(848,0.6593)(849,0.666)(850,0.6627)(851,0.6561)(852,0.6594)(853,0.6571)(854,0.659)(855,0.6602)(856,0.6674)(857,0.6603)(858,0.6552)(859,0.6579)(860,0.66)(861,0.6628)(862,0.6634)(863,0.6622)(864,0.6608)(865,0.661)(866,0.6613)(867,0.6626)(868,0.663)(869,0.6619)(870,0.6627)(871,0.6636)(872,0.6643)(873,0.6688)(874,0.6585)(875,0.6665)(876,0.6648)(877,0.6644)(878,0.6582)(879,0.662)(880,0.6661)(881,0.665)(882,0.6631)(883,0.6633)(884,0.6624)(885,0.66)(886,0.6646)(887,0.6646)(888,0.6669)(889,0.6636)(890,0.6651)(891,0.6638)(892,0.6699)(893,0.6687)(894,0.6688)(895,0.6622)(896,0.6675)(897,0.6683)(898,0.6654)(899,0.6668)(900,0.6663)(901,0.6638)(902,0.6608)(903,0.6636)(904,0.6665)(905,0.6631)(906,0.6668)(907,0.6683)(908,0.6628)(909,0.665)(910,0.668)(911,0.67)(912,0.6646)(913,0.6639)(914,0.6647)(915,0.6646)(916,0.6663)(917,0.6681)(918,0.6737)(919,0.6666)(920,0.6734)(921,0.6662)(922,0.6688)(923,0.6678)(924,0.6718)(925,0.6705)(926,0.668)(927,0.666)(928,0.6668)(929,0.6635)(930,0.6733)(931,0.6706)(932,0.6718)(933,0.6691)(934,0.671)(935,0.6728)(936,0.6677)(937,0.6741)(938,0.6706)(939,0.6639)(940,0.6716)(941,0.6696)(942,0.6711)(943,0.6636)(944,0.673)(945,0.6663)(946,0.6672)(947,0.6701)(948,0.6699)(949,0.6733)(950,0.6739)(951,0.6709)(952,0.664)(953,0.6695)(954,0.6725)(955,0.6746)(956,0.6735)(957,0.6695)(958,0.6742)(959,0.6715)(960,0.6757)(961,0.6718)(962,0.6764)(963,0.6699)(964,0.6722)(965,0.67)(966,0.6751)(967,0.6751)(968,0.6764)(969,0.6717)(970,0.6761)(971,0.6738)(972,0.6772)(973,0.6786)(974,0.6767)(975,0.6813)(976,0.6715)(977,0.6757)(978,0.6752)(979,0.6816)(980,0.6738)(981,0.6781)(982,0.6773)(983,0.681)(984,0.6812)(985,0.6682)(986,0.6762)(987,0.6714)(988,0.678)(989,0.6713)(990,0.6775)(991,0.6779)(992,0.6776)(993,0.6775)(994,0.6749)(995,0.6776)(996,0.676)(997,0.6739)(998,0.6782)(999,0.6744)
};\addlegendentry{ROTAF $B=2$}
\addplot [semithick, color = red]
coordinates {
(0,0.1009)(1,0.1236)(2,0.1512)(3,0.1699)(4,0.1908)(5,0.2079)(6,0.2216)(7,0.2364)(8,0.2455)(9,0.2467)(10,0.2552)(11,0.2724)(12,0.2809)(13,0.2816)(14,0.2947)(15,0.3008)(16,0.305)(17,0.3062)(18,0.315)(19,0.3162)(20,0.3185)(21,0.3238)(22,0.3283)(23,0.3253)(24,0.3337)(25,0.3361)(26,0.3338)(27,0.342)(28,0.3456)(29,0.3478)(30,0.3597)(31,0.3568)(32,0.3576)(33,0.3567)(34,0.3607)(35,0.367)(36,0.3646)(37,0.3695)(38,0.3699)(39,0.3726)(40,0.3679)(41,0.3774)(42,0.3804)(43,0.3837)(44,0.3801)(45,0.3884)(46,0.3892)(47,0.3941)(48,0.3962)(49,0.3992)(50,0.3962)(51,0.4026)(52,0.3942)(53,0.406)(54,0.401)(55,0.4107)(56,0.4071)(57,0.4105)(58,0.4118)(59,0.4175)(60,0.4177)(61,0.4094)(62,0.4111)(63,0.414)(64,0.4225)(65,0.4166)(66,0.4184)(67,0.4235)(68,0.4247)(69,0.4241)(70,0.421)(71,0.4304)(72,0.4296)(73,0.4305)(74,0.4294)(75,0.4315)(76,0.4364)(77,0.4381)(78,0.4356)(79,0.4353)(80,0.4378)(81,0.4388)(82,0.4384)(83,0.4436)(84,0.4423)(85,0.4455)(86,0.4484)(87,0.4435)(88,0.4427)(89,0.4556)(90,0.452)(91,0.4467)(92,0.4557)(93,0.4461)(94,0.4509)(95,0.4515)(96,0.4523)(97,0.4545)(98,0.4513)(99,0.4545)(100,0.4559)(101,0.4594)(102,0.461)(103,0.4579)(104,0.4583)(105,0.4595)(106,0.4634)(107,0.4598)(108,0.4665)(109,0.4683)(110,0.4655)(111,0.4692)(112,0.4757)(113,0.4611)(114,0.4728)(115,0.4655)(116,0.4662)(117,0.4706)(118,0.4669)(119,0.4706)(120,0.4705)(121,0.4704)(122,0.4677)(123,0.4719)(124,0.4758)(125,0.472)(126,0.4794)(127,0.4777)(128,0.4754)(129,0.4807)(130,0.476)(131,0.4809)(132,0.4833)(133,0.4788)(134,0.4836)(135,0.4857)(136,0.4826)(137,0.4855)(138,0.4889)(139,0.4859)(140,0.4855)(141,0.4935)(142,0.4882)(143,0.486)(144,0.49)(145,0.4948)(146,0.4878)(147,0.4931)(148,0.4912)(149,0.4892)(150,0.4896)(151,0.4945)(152,0.494)(153,0.4923)(154,0.4998)(155,0.4948)(156,0.4944)(157,0.4905)(158,0.4946)(159,0.4877)(160,0.4996)(161,0.4992)(162,0.5002)(163,0.5009)(164,0.4975)(165,0.4969)(166,0.5042)(167,0.5049)(168,0.5011)(169,0.5006)(170,0.5034)(171,0.5019)(172,0.5019)(173,0.5065)(174,0.5076)(175,0.5084)(176,0.5048)(177,0.505)(178,0.5053)(179,0.5096)(180,0.5043)(181,0.5112)(182,0.5113)(183,0.5099)(184,0.5121)(185,0.5162)(186,0.5093)(187,0.5161)(188,0.514)(189,0.5117)(190,0.5131)(191,0.5147)(192,0.512)(193,0.5151)(194,0.521)(195,0.519)(196,0.5157)(197,0.5205)(198,0.5187)(199,0.5202)(200,0.5237)(201,0.5204)(202,0.5184)(203,0.5168)(204,0.5243)(205,0.5207)(206,0.5242)(207,0.524)(208,0.523)(209,0.5216)(210,0.5201)(211,0.519)(212,0.5195)(213,0.525)(214,0.5308)(215,0.5293)(216,0.525)(217,0.5241)(218,0.5197)(219,0.5216)(220,0.5245)(221,0.53)(222,0.5277)(223,0.5255)(224,0.5309)(225,0.5224)(226,0.5252)(227,0.532)(228,0.527)(229,0.53)(230,0.5312)(231,0.5362)(232,0.5355)(233,0.5322)(234,0.5308)(235,0.5291)(236,0.5302)(237,0.5313)(238,0.5349)(239,0.5351)(240,0.539)(241,0.5325)(242,0.5339)(243,0.5373)(244,0.5347)(245,0.5401)(246,0.5378)(247,0.5411)(248,0.5367)(249,0.5344)(250,0.5415)(251,0.5401)(252,0.538)(253,0.5442)(254,0.5394)(255,0.5434)(256,0.5433)(257,0.5403)(258,0.5448)(259,0.5385)(260,0.5469)(261,0.5445)(262,0.5407)(263,0.548)(264,0.5503)(265,0.5511)(266,0.5443)(267,0.5409)(268,0.5439)(269,0.5418)(270,0.5461)(271,0.5487)(272,0.5461)(273,0.5492)(274,0.5465)(275,0.5518)(276,0.5507)(277,0.5461)(278,0.546)(279,0.5531)(280,0.5542)(281,0.5502)(282,0.5495)(283,0.5529)(284,0.5502)(285,0.5542)(286,0.5581)(287,0.5529)(288,0.5554)(289,0.5495)(290,0.5505)(291,0.5563)(292,0.5531)(293,0.5541)(294,0.552)(295,0.5522)(296,0.5531)(297,0.5564)(298,0.5598)(299,0.5568)(300,0.5597)(301,0.557)(302,0.5572)(303,0.5613)(304,0.5644)(305,0.5615)(306,0.5583)(307,0.5622)(308,0.5568)(309,0.561)(310,0.5565)(311,0.5591)(312,0.5618)(313,0.5652)(314,0.5658)(315,0.5605)(316,0.5602)(317,0.56)(318,0.5661)(319,0.5607)(320,0.5713)(321,0.5643)(322,0.5636)(323,0.5671)(324,0.5642)(325,0.5611)(326,0.5639)(327,0.5683)(328,0.5711)(329,0.5699)(330,0.5695)(331,0.5687)(332,0.5668)(333,0.5674)(334,0.5639)(335,0.5716)(336,0.564)(337,0.5654)(338,0.574)(339,0.5677)(340,0.5685)(341,0.5708)(342,0.5723)(343,0.5717)(344,0.5756)(345,0.5724)(346,0.5733)(347,0.5722)(348,0.5699)(349,0.5718)(350,0.577)(351,0.5747)(352,0.5674)(353,0.5728)(354,0.5739)(355,0.578)(356,0.5772)(357,0.5751)(358,0.5766)(359,0.5791)(360,0.5786)(361,0.5768)(362,0.5759)(363,0.5837)(364,0.5757)(365,0.5774)(366,0.5775)(367,0.5768)(368,0.5796)(369,0.5836)(370,0.5793)(371,0.5811)(372,0.5791)(373,0.5794)(374,0.5858)(375,0.5839)(376,0.5802)(377,0.5869)(378,0.583)(379,0.5858)(380,0.5863)(381,0.5831)(382,0.5797)(383,0.5834)(384,0.5801)(385,0.5767)(386,0.5847)(387,0.5821)(388,0.5914)(389,0.5797)(390,0.5869)(391,0.5884)(392,0.5891)(393,0.5843)(394,0.5856)(395,0.5887)(396,0.5885)(397,0.5868)(398,0.5915)(399,0.5846)(400,0.5904)(401,0.5908)(402,0.593)(403,0.5885)(404,0.5879)(405,0.5871)(406,0.5902)(407,0.5933)(408,0.5916)(409,0.5933)(410,0.592)(411,0.5922)(412,0.5983)(413,0.5977)(414,0.5929)(415,0.5928)(416,0.5948)(417,0.5929)(418,0.5887)(419,0.597)(420,0.5978)(421,0.5936)(422,0.5996)(423,0.5942)(424,0.5953)(425,0.5944)(426,0.594)(427,0.6007)(428,0.5963)(429,0.5974)(430,0.5926)(431,0.5969)(432,0.5941)(433,0.5973)(434,0.5969)(435,0.6043)(436,0.5965)(437,0.5967)(438,0.6005)(439,0.5986)(440,0.5995)(441,0.5957)(442,0.5961)(443,0.6047)(444,0.6001)(445,0.6001)(446,0.6045)(447,0.6013)(448,0.6059)(449,0.6016)(450,0.6054)(451,0.5972)(452,0.6011)(453,0.6046)(454,0.6065)(455,0.6073)(456,0.6011)(457,0.6034)(458,0.6048)(459,0.6082)(460,0.6059)(461,0.6076)(462,0.6064)(463,0.6009)(464,0.6051)(465,0.6053)(466,0.6047)(467,0.6058)(468,0.607)(469,0.6096)(470,0.6079)(471,0.6089)(472,0.607)(473,0.6094)(474,0.6083)(475,0.6069)(476,0.6075)(477,0.611)(478,0.6109)(479,0.6094)(480,0.6112)(481,0.6112)(482,0.6131)(483,0.6103)(484,0.611)(485,0.6106)(486,0.6133)(487,0.6082)(488,0.6135)(489,0.6105)(490,0.6095)(491,0.6098)(492,0.6166)(493,0.6107)(494,0.6074)(495,0.6083)(496,0.6157)(497,0.6124)(498,0.6181)(499,0.6193)(500,0.6144)(501,0.6165)(502,0.6104)(503,0.6105)(504,0.6185)(505,0.6177)(506,0.6139)(507,0.6104)(508,0.6162)(509,0.6176)(510,0.6178)(511,0.6139)(512,0.6173)(513,0.6155)(514,0.616)(515,0.6144)(516,0.6203)(517,0.619)(518,0.6135)(519,0.6222)(520,0.622)(521,0.6157)(522,0.6192)(523,0.6195)(524,0.6172)(525,0.6187)(526,0.6295)(527,0.6179)(528,0.619)(529,0.6252)(530,0.6203)(531,0.6218)(532,0.6156)(533,0.62)(534,0.6206)(535,0.6223)(536,0.6231)(537,0.6188)(538,0.6182)(539,0.63)(540,0.6259)(541,0.6223)(542,0.6249)(543,0.623)(544,0.6262)(545,0.6244)(546,0.6278)(547,0.6269)(548,0.6267)(549,0.6214)(550,0.6279)(551,0.6271)(552,0.6244)(553,0.629)(554,0.6258)(555,0.6279)(556,0.6244)(557,0.6281)(558,0.6279)(559,0.6294)(560,0.6249)(561,0.6299)(562,0.6206)(563,0.6268)(564,0.6236)(565,0.6353)(566,0.6266)(567,0.6267)(568,0.6277)(569,0.6247)(570,0.6293)(571,0.6324)(572,0.6298)(573,0.6216)(574,0.6325)(575,0.6265)(576,0.6309)(577,0.6327)(578,0.63)(579,0.6338)(580,0.6319)(581,0.6336)(582,0.6327)(583,0.6285)(584,0.6313)(585,0.6357)(586,0.6303)(587,0.632)(588,0.6324)(589,0.6331)(590,0.6354)(591,0.6343)(592,0.6327)(593,0.6373)(594,0.6358)(595,0.6353)(596,0.6326)(597,0.6337)(598,0.6333)(599,0.6347)(600,0.6319)(601,0.6346)(602,0.6356)(603,0.6321)(604,0.6377)(605,0.6384)(606,0.6375)(607,0.6362)(608,0.64)(609,0.6386)(610,0.6366)(611,0.6352)(612,0.6274)(613,0.6353)(614,0.6402)(615,0.6375)(616,0.6406)(617,0.639)(618,0.6421)(619,0.6434)(620,0.6382)(621,0.6411)(622,0.6393)(623,0.6447)(624,0.6393)(625,0.6367)(626,0.6386)(627,0.6416)(628,0.6402)(629,0.6416)(630,0.6366)(631,0.6418)(632,0.6407)(633,0.6395)(634,0.6361)(635,0.6413)(636,0.6442)(637,0.641)(638,0.6422)(639,0.6403)(640,0.6378)(641,0.6436)(642,0.643)(643,0.6399)(644,0.6447)(645,0.6414)(646,0.6465)(647,0.6414)(648,0.6471)(649,0.6422)(650,0.6375)(651,0.6435)(652,0.6399)(653,0.6443)(654,0.6481)(655,0.6446)(656,0.6417)(657,0.6465)(658,0.6435)(659,0.6434)(660,0.6461)(661,0.6439)(662,0.6439)(663,0.6473)(664,0.6469)(665,0.6495)(666,0.6484)(667,0.6534)(668,0.6477)(669,0.6483)(670,0.6461)(671,0.6513)(672,0.6477)(673,0.6421)(674,0.652)(675,0.6457)(676,0.6449)(677,0.6448)(678,0.6446)(679,0.6521)(680,0.6459)(681,0.6442
)(682,0.6494)(683,0.6512)(684,0.648)(685,0.6514)(686,0.6505)(687,0.6501)(688,0.6552)(689,0.6436)(690,0.6441)(691,0.6508)(692,0.65)(693,0.6444)(694,0.6515)(695,0.6496)(696,0.6446)(697,0.651)(698,0.6526)(699,0.6485)(700,0.6533)(701,0.6526)(702,0.6524)(703,0.6515)(704,0.6497)(705,0.6471)(706,0.6539)(707,0.6506)(708,0.6501)(709,0.6489)(710,0.6531)(711,0.6536)(712,0.6597)(713,0.6503)(714,0.664)(715,0.6584)(716,0.6604)(717,0.6477)(718,0.653)(719,0.6534)(720,0.6569)(721,0.6541)(722,0.6559)(723,0.652)(724,0.6553)(725,0.6539)(726,0.6497)(727,0.654)(728,0.6557)(729,0.6541)(730,0.6536)(731,0.6558)(732,0.6584)(733,0.6547)(734,0.6585)(735,0.6522)(736,0.6571)(737,0.6584)(738,0.656)(739,0.6552)(740,0.6509)(741,0.6572)(742,0.6551)(743,0.6567)(744,0.6566)(745,0.6581)(746,0.6621)(747,0.6562)(748,0.6597)(749,0.6548)(750,0.6566)(751,0.6535)(752,0.6583)(753,0.6597)(754,0.657)(755,0.6589)(756,0.6596)(757,0.6589)(758,0.6628)(759,0.6566)(760,0.6614)(761,0.6572)(762,0.6622)(763,0.6612)(764,0.6611)(765,0.6594)(766,0.6631)(767,0.6631)(768,0.6647)(769,0.6606)(770,0.6502)(771,0.6612)(772,0.6606)(773,0.6638)(774,0.6598)(775,0.6602)(776,0.6613)(777,0.6612)(778,0.6606)(779,0.6568)(780,0.6633)(781,0.6654)(782,0.6637)(783,0.6588)(784,0.6666)(785,0.6622)(786,0.6662)(787,0.6625)(788,0.6685)(789,0.6613)(790,0.6612)(791,0.6633)(792,0.6654)(793,0.6619)(794,0.6649)(795,0.6663)(796,0.6633)(797,0.6639)(798,0.6639)(799,0.6599)(800,0.6652)(801,0.665)(802,0.6687)(803,0.6659)(804,0.6657)(805,0.6657)(806,0.6635)(807,0.6639)(808,0.6617)(809,0.664)(810,0.6641)(811,0.6646)(812,0.6673)(813,0.6621)(814,0.6704)(815,0.6718)(816,0.6658)(817,0.6695)(818,0.6686)(819,0.6638)(820,0.6681)(821,0.6674)(822,0.6659)(823,0.6684)(824,0.67)(825,0.6715)(826,0.6727)(827,0.6602)(828,0.6679)(829,0.6732)(830,0.6654)(831,0.6656)(832,0.6637)(833,0.6629)(834,0.6655)(835,0.6675)(836,0.6704)(837,0.6696)(838,0.6662)(839,0.6679)(840,0.6686)(841,0.6682)(842,0.6688)(843,0.6709)(844,0.6686)(845,0.6679)(846,0.6646)(847,0.6712)(848,0.67)(849,0.6677
)(850,0.6679)(851,0.6688)(852,0.67)(853,0.6678)(854,0.6718)(855,0.6704)(856,0.6744)(857,0.6677)(858,0.6692)(859,0.6682)(860,0.6709)(861,0.6737)(862,0.6648)(863,0.6692)(864,0.6719)(865,0.6731)(866,0.6694)(867,0.669)(868,0.6736)(869,0.6717)(870,0.674)(871,0.6776)(872,0.6694)(873,0.6699)(874,0.6727)(875,0.672)(876,0.669)(877,0.677)(878,0.6737)(879,0.6771)(880,0.672)(881,0.6735)(882,0.6727)(883,0.6724)(884,0.6714)(885,0.6736)(886,0.6725)(887,0.6702)(888,0.6779)(889,0.6757)(890,0.6711)(891,0.6742)(892,0.6775)(893,0.6759)(894,0.674)(895,0.6744)(896,0.6717)(897,0.6764)(898,0.6769)(899,0.676)(900,0.6751)(901,0.6806)(902,0.6756)(903,0.6755)(904,0.6729)(905,0.6817)(906,0.6711)(907,0.6781)(908,0.6773)(909,0.6774)(910,0.676)(911,0.6729)(912,0.6789)(913,0.6751)(914,0.6728)(915,0.6742)(916,0.6801)(917,0.6817)(918,0.6773)(919,0.6716)(920,0.6775)(921,0.6763)(922,0.6793)(923,0.6816)(924,0.6792)(925,0.6814)(926,0.6748)(927,0.676)(928,0.681)(929,0.6795)(930,0.6758)(931,0.6756)(932,0.6743)(933,0.6762)(934,0.6789)(935,0.6795)(936,0.68)(937,0.6795)(938,0.6753)(939,0.6813)(940,0.6798)(941,0.6782)(942,0.6796)(943,0.6836)(944,0.6763)(945,0.6796)(946,0.6782)(947,0.6778)(948,0.6831)(949,0.6758)(950,0.6813)(951,0.6789)(952,0.6761)(953,0.682)(954,0.6796)(955,0.6814)(956,0.6818)(957,0.6826)(958,0.6847)(959,0.6792)(960,0.6863)(961,0.6799)(962,0.6782)(963,0.678)(964,0.6815)(965,0.684)(966,0.6757)(967,0.6819)(968,0.6852)(969,0.6788)(970,0.6851)(971,0.6838)(972,0.6839)(973,0.6849)(974,0.68)(975,0.6826)(976,0.6847)(977,0.6819)(978,0.6842)(979,0.6827)(980,0.6837)(981,0.6821)(982,0.6804)(983,0.6838)(984,0.6822)(985,0.6805)(986,0.6819)(987,0.6864)(988,0.6862)(989,0.6865)(990,0.6837)(991,0.6826)(992,0.6814)(993,0.6858)(994,0.687)(995,0.6853)(996,0.6833)(997,0.6862)(998,0.6877)(999,0.684)
};\addlegendentry{COTAF $B = 0$}
\addplot [semithick, color = olive]
coordinates {
(0,0.1009)(1,0.1119)(2,0.1248)(3,0.1206)(4,0.1205)(5,0.1246)(6,0.1365)(7,0.1385)(8,0.1484)(9,0.1526)(10,0.1526)(11,0.159)(12,0.1478)(13,0.154)(14,0.1627)(15,0.1813)(16,0.1675)(17,0.1594)(18,0.1557)(19,0.1647)(20,0.1695)(21,0.18)(22,0.1709)(23,0.1698)(24,0.1723)(25,0.1778)(26,0.1821)(27,0.1698)(28,0.1842)(29,0.1937)(30,0.1977)(31,0.1991)(32,0.1885)(33,0.1837)(34,0.1945)(35,0.1939)(36,0.2004)(37,0.2036)(38,0.1941)(39,0.1927)(40,0.2043)(41,0.1832)(42,0.1808)(43,0.1914)(44,0.1941)(45,0.1825)(46,0.1854)(47,0.191)(48,0.1881)(49,0.1884)(50,0.1924)(51,0.1798)(52,0.1842)(53,0.1813)(54,0.1911)(55,0.1844)(56,0.1857)(57,0.1957)(58,0.1975)(59,0.1914)(60,0.1919)(61,0.1885)(62,0.1945)(63,0.2014)(64,0.2044)(65,0.1974)(66,0.1974)(67,0.2057)(68,0.1897)(69,0.1942)(70,0.198)(71,0.2093)(72,0.2068)(73,0.194)(74,0.1856)(75,0.1938)(76,0.1908)(77,0.186)(78,0.1913)(79,0.1936)(80,0.1886)(81,0.1947)(82,0.1893)(83,0.1881)(84,0.1931)(85,0.1953)(86,0.204)(87,0.1976)(88,0.1943)(89,0.1927)(90,0.2089)(91,0.1934)(92,0.187)(93,0.1899)(94,0.1824)(95,0.1742)(96,0.1829)(97,0.1824)(98,0.1829)(99,0.1715)(100,0.1748)(101,0.1963)(102,0.1944)(103,0.1847)(104,0.1936)(105,0.21)(106,0.196)(107,0.19)(108,0.1995)(109,0.1797)(110,0.1809)(111,0.1847)(112,0.1892)(113,0.1947)(114,0.2014)(115,0.2056)(116,0.2124)(117,0.1993)(118,0.2043)(119,0.1993)(120,0.2075)(121,0.2022)(122,0.205)(123,0.2179)(124,0.2161)(125,0.2092)(126,0.2083)(127,0.2125)(128,0.1949)(129,0.2032)(130,0.1942)(131,0.1933)(132,0.1937)(133,0.1861)(134,0.1856)(135,0.1912)(136,0.1851)(137,0.2019)(138,0.1918)(139,0.1979)(140,0.2005)(141,0.198)(142,0.1927)(143,0.1919)(144,0.2012)(145,0.2024)(146,0.198)(147,0.1887)(148,0.1899)(149,0.1993)(150,0.201)(151,0.2104)(152,0.2023)(153,0.2107)(154,0.213)(155,0.2149)(156,0.2073)(157,0.2051)(158,0.1987)(159,0.2052)(160,0.2061)(161,0.198)(162,0.1911)(163,0.203)(164,0.2046)(165,0.1854)(166,0.1861)(167,0.1925)(168,0.2015)(169,0.1798)(170,0.1883)(171,0.1989)(172,0.1982)(173,0.2055)(174,0.1893)(175,0.1949)(176,0.189)(177,0.18
39)(178,0.201)(179,0.1863)(180,0.1949)(181,0.1891)(182,0.1816)(183,0.1819)(184,0.1842)(185,0.1798)(186,0.202)(187,0.1946)(188,0.1963)(189,0.1885)(190,0.194)(191,0.1992)(192,0.2069)(193,0.1989)(194,0.1833)(195,0.1911)(196,0.1973)(197,0.1998)(198,0.1996)(199,0.1983)(200,0.1949)(201,0.1854)(202,0.1935)(203,0.1859)(204,0.1972)(205,0.1919)(206,0.2024)(207,0.1905)(208,0.2073)(209,0.1977)(210,0.2)(211,0.1858)(212,0.1987)(213,0.1866)(214,0.1938)(215,0.1912)(216,0.1931)(217,0.1964)(218,0.2002)(219,0.195)(220,0.1701)(221,0.2057)(222,0.191)(223,0.2043)(224,0.2102)(225,0.2027)(226,0.2002)(227,0.2052)(228,0.1944)(229,0.2062)(230,0.1975)(231,0.1873)(232,0.1922)(233,0.1983)(234,0.1978)(235,0.1966)(236,0.193)(237,0.1947)(238,0.2038)(239,0.2)(240,0.1903)(241,0.1982)(242,0.1816)(243,0.194)(244,0.1947)(245,0.1904)(246,0.1959)(247,0.2037)(248,0.1974)(249,0.1803)(250,0.1917)(251,0.1873)(252,0.2061)(253,0.1958)(254,0.1871)(255,0.2029)(256,0.1921)(257,0.1852)(258,0.1903)(259,0.1978)(260,0.201)(261,0.1847)(262,0.1907)(263,0.1783)(264,0.1829)(265,0.1871)(266,0.1859)(267,0.1896)(268,0.1762)(269,0.199)(270,0.1824)(271,0.1983)(272,0.2069)(273,0.1921)(274,0.1962)(275,0.1948)(276,0.1819)(277,0.1928)(278,0.1816)(279,0.1926)(280,0.1941)(281,0.1978)(282,0.1946)(283,0.2027)(284,0.2022)(285,0.2149)(286,0.2081)(287,0.2062)(288,0.1911)(289,0.1954)(290,0.1944)(291,0.1852)(292,0.186)(293,0.1975)(294,0.1952)(295,0.1992)(296,0.189)(297,0.1928)(298,0.2093)(299,0.1991)(300,0.1856)(301,0.2011)(302,0.1933)(303,0.2074)(304,0.2088)(305,0.2058)(306,0.2038)(307,0.1995)(308,0.1845)(309,0.1893)(310,0.1855)(311,0.1909)(312,0.198)(313,0.2051)(314,0.1896)(315,0.2017)(316,0.2109)(317,0.2094)(318,0.213)(319,0.2029)(320,0.2123)(321,0.206)(322,0.2062)(323,0.2088)(324,0.2018)(325,0.1926)(326,0.1938)(327,0.2043)(328,0.2096)(329,0.2098)(330,0.2042)(331,0.1929)(332,0.1953)(333,0.1919)(334,0.2034)(335,0.1965)(336,0.1977)(337,0.1963)(338,0.2028)(339,0.1941)(340,0.1986)(341,0.184)(342,0.1917)(343,0.1947)(344,0.1982)(345,0.1994)(3
46,0.2021)(347,0.1978)(348,0.1908)(349,0.1931)(350,0.19)(351,0.1996)(352,0.1918)(353,0.1975)(354,0.1918)(355,0.1982)(356,0.1978)(357,0.2101)(358,0.2213)(359,0.2078)(360,0.2034)(361,0.1995)(362,0.1998)(363,0.1884)(364,0.1971)(365,0.1893)(366,0.1985)(367,0.198)(368,0.198)(369,0.1942)(370,0.182)(371,0.1826)(372,0.1856)(373,0.1927)(374,0.1987)(375,0.1927)(376,0.1908)(377,0.1851)(378,0.1831)(379,0.1924)(380,0.1847)(381,0.1926)(382,0.193)(383,0.1981)(384,0.1847)(385,0.1945)(386,0.1911)(387,0.1807)(388,0.1922)(389,0.1635)(390,0.1736)(391,0.1877)(392,0.1933)(393,0.1808)(394,0.1883)(395,0.1881)(396,0.1933)(397,0.1924)(398,0.2037)(399,0.1965)(400,0.1907)(401,0.2015)(402,0.2008)(403,0.1862)(404,0.1927)(405,0.2013)(406,0.2013)(407,0.1965)(408,0.1714)(409,0.1955)(410,0.1976)(411,0.2088)(412,0.2025)(413,0.2127)(414,0.2053)(415,0.2112)(416,0.1966)(417,0.1922)(418,0.2098)(419,0.2043)(420,0.1884)(421,0.1863)(422,0.2022)(423,0.1983)(424,0.1895)(425,0.2007)(426,0.1984)(427,0.205)(428,0.1946)(429,0.1934)(430,0.1947)(431,0.1914)(432,0.1943)(433,0.1999)(434,0.1972)(435,0.2076)(436,0.1828)(437,0.1993)(438,0.1898)(439,0.1914)(440,0.1904)(441,0.2013)(442,0.1974)(443,0.1889)(444,0.1752)(445,0.1844)(446,0.1704)(447,0.1735)(448,0.186)(449,0.1965)(450,0.1996)(451,0.2034)(452,0.1933)(453,0.2005)(454,0.1924)(455,0.1999)(456,0.1905)(457,0.1988)(458,0.1832)(459,0.1939)(460,0.1902)(461,0.1949)(462,0.1994)(463,0.1958)(464,0.1824)(465,0.1866)(466,0.2048)(467,0.1911)(468,0.1981)(469,0.1943)(470,0.1834)(471,0.1868)(472,0.1935)(473,0.1927)(474,0.1984)(475,0.2001)(476,0.1891)(477,0.1956)(478,0.1954)(479,0.2014)(480,0.1681)(481,0.1831)(482,0.1888)(483,0.189)(484,0.1874)(485,0.197)(486,0.1786)(487,0.1971)(488,0.2022)(489,0.1997)(490,0.1968)(491,0.1963)(492,0.1909)(493,0.1908)(494,0.1904)(495,0.1884)(496,0.1988)(497,0.2169)(498,0.2051)(499,0.207)(500,0.1891)(501,0.1871)(502,0.1838)(503,0.189)(504,0.1938)(505,0.1954)(506,0.2013)(507,0.1912)(508,0.1913)(509,0.1889)(510,0.1918)(511,0.1932)(512,0.2076)(513,0.203
3)(514,0.2007)(515,0.2092)(516,0.2035)(517,0.1958)(518,0.2145)(519,0.2039)(520,0.2045)(521,0.1917)(522,0.1914)(523,0.1805)(524,0.183)(525,0.1983)(526,0.2019)(527,0.2022)(528,0.1936)(529,0.1939)(530,0.2026)(531,0.2024)(532,0.205)(533,0.1938)(534,0.2032)(535,0.209)(536,0.1965)(537,0.1868)(538,0.2101)(539,0.2023)(540,0.2075)(541,0.1916)(542,0.1978)(543,0.1908)(544,0.1852)(545,0.1998)(546,0.1941)(547,0.1971)(548,0.1993)(549,0.1835)(550,0.2015)(551,0.189)(552,0.189)(553,0.1957)(554,0.1917)(555,0.1917)(556,0.1919)(557,0.1941)(558,0.1975)(559,0.1768)(560,0.1945)(561,0.1981)(562,0.1981)(563,0.2048)(564,0.2007)(565,0.206)(566,0.2098)(567,0.2134)(568,0.2044)(569,0.2117)(570,0.2033)(571,0.2005)(572,0.2054)(573,0.2017)(574,0.2129)(575,0.2001)(576,0.2038)(577,0.211)(578,0.1988)(579,0.1872)(580,0.208)(581,0.2129)(582,0.2092)(583,0.2104)(584,0.2003)(585,0.2095)(586,0.2182)(587,0.1955)(588,0.2059)(589,0.2102)(590,0.2059)(591,0.201)(592,0.1934)(593,0.2069)(594,0.1983)(595,0.1975)(596,0.1953)(597,0.1981)(598,0.1936)(599,0.2029)(600,0.1906)(601,0.1863)(602,0.1799)(603,0.1842)(604,0.1995)(605,0.2037)(606,0.2055)(607,0.1876)(608,0.185)(609,0.1971)(610,0.1941)(611,0.1897)(612,0.1914)(613,0.1851)(614,0.1986)(615,0.1824)(616,0.1915)(617,0.1946)(618,0.1986)(619,0.2056)(620,0.2025)(621,0.2002)(622,0.1952)(623,0.1993)(624,0.1952)(625,0.1849)(626,0.2005)(627,0.207)(628,0.2167)(629,0.2094)(630,0.1967)(631,0.1974)(632,0.1976)(633,0.2017)(634,0.2155)(635,0.2005)(636,0.1975)(637,0.194)(638,0.1849)(639,0.1994)(640,0.188)(641,0.2157)(642,0.1929)(643,0.2175)(644,0.2109)(645,0.2135)(646,0.2032)(647,0.2191)(648,0.2072)(649,0.2116)(650,0.2078)(651,0.2106)(652,0.2107)(653,0.2037)(654,0.2015)(655,0.1959)(656,0.202)(657,0.2091)(658,0.1924)(659,0.1906)(660,0.1898)(661,0.1907)(662,0.1865)(663,0.1876)(664,0.1977)(665,0.1941)(666,0.2058)(667,0.2031)(668,0.1944)(669,0.2031)(670,0.2039)(671,0.205)(672,0.1989)(673,0.199)(674,0.2135)(675,0.2054)(676,0.2031)(677,0.202)(678,0.1915)(679,0.1989)(680,0.1999)(681,0.1913
)(682,0.1893)(683,0.1911)(684,0.1875)(685,0.2028)(686,0.1966)(687,0.1778)(688,0.1761)(689,0.1765)(690,0.1952)(691,0.1782)(692,0.1994)(693,0.1913)(694,0.1934)(695,0.1976)(696,0.1928)(697,0.2003)(698,0.1871)(699,0.1875)(700,0.1897)(701,0.1762)(702,0.1886)(703,0.1924)(704,0.1909)(705,0.1911)(706,0.2015)(707,0.1981)(708,0.2016)(709,0.1903)(710,0.172)(711,0.1955)(712,0.198)(713,0.1865)(714,0.1957)(715,0.2005)(716,0.19)(717,0.1984)(718,0.2069)(719,0.1906)(720,0.1887)(721,0.1857)(722,0.1957)(723,0.1908)(724,0.1982)(725,0.1964)(726,0.2)(727,0.1919)(728,0.1772)(729,0.1895)(730,0.1822)(731,0.1849)(732,0.1926)(733,0.1762)(734,0.1901)(735,0.2047)(736,0.197)(737,0.1911)(738,0.1796)(739,0.1861)(740,0.1915)(741,0.1869)(742,0.2018)(743,0.2044)(744,0.1978)(745,0.1978)(746,0.2007)(747,0.2002)(748,0.1921)(749,0.2085)(750,0.1998)(751,0.1962)(752,0.2055)(753,0.2015)(754,0.1999)(755,0.1851)(756,0.1906)(757,0.1965)(758,0.1858)(759,0.1885)(760,0.1869)(761,0.1807)(762,0.1853)(763,0.1902)(764,0.1755)(765,0.1861)(766,0.2002)(767,0.1821)(768,0.1847)(769,0.194)(770,0.2011)(771,0.1948)(772,0.2027)(773,0.2031)(774,0.2086)(775,0.1995)(776,0.2053)(777,0.1969)(778,0.2078)(779,0.2003)(780,0.2175)(781,0.2085)(782,0.1992)(783,0.1881)(784,0.2122)(785,0.1832)(786,0.2033)(787,0.195)(788,0.2058)(789,0.1907)(790,0.1815)(791,0.2002)(792,0.2135)(793,0.1994)(794,0.1914)(795,0.195)(796,0.1906)(797,0.1826)(798,0.195)(799,0.201)(800,0.1897)(801,0.1974)(802,0.1959)(803,0.1872)(804,0.1863)(805,0.1854)(806,0.1879)(807,0.2106)(808,0.1956)(809,0.1954)(810,0.1949)(811,0.1973)(812,0.2035)(813,0.1917)(814,0.1901)(815,0.2029)(816,0.1935)(817,0.2034)(818,0.2001)(819,0.2012)(820,0.213)(821,0.2028)(822,0.2013)(823,0.2004)(824,0.1972)(825,0.196)(826,0.2027)(827,0.1947)(828,0.1879)(829,0.1966)(830,0.185)(831,0.196)(832,0.1889)(833,0.1929)(834,0.185)(835,0.1875)(836,0.1807)(837,0.19)(838,0.1821)(839,0.1798)(840,0.1891)(841,0.1845)(842,0.1926)(843,0.1864)(844,0.1896)(845,0.182)(846,0.1985)(847,0.1914)(848,0.1673)(849,0.1829)(850
,0.1953)(851,0.1927)(852,0.1951)(853,0.1981)(854,0.1978)(855,0.1889)(856,0.1986)(857,0.1901)(858,0.1979)(859,0.1954)(860,0.2019)(861,0.1974)(862,0.2049)(863,0.1881)(864,0.1926)(865,0.1916)(866,0.1951)(867,0.2031)(868,0.204)(869,0.1936)(870,0.1965)(871,0.1947)(872,0.192)(873,0.1979)(874,0.1924)(875,0.1959)(876,0.1936)(877,0.1915)(878,0.1846)(879,0.2049)(880,0.1995)(881,0.1984)(882,0.1945)(883,0.1974)(884,0.1907)(885,0.202)(886,0.1971)(887,0.1903)(888,0.1992)(889,0.1966)(890,0.1988)(891,0.2025)(892,0.2078)(893,0.1923)(894,0.1964)(895,0.1882)(896,0.1908)(897,0.1953)(898,0.198)(899,0.1941)(900,0.1949)(901,0.2049)(902,0.2022)(903,0.2026)(904,0.1996)(905,0.2063)(906,0.2126)(907,0.2047)(908,0.2102)(909,0.2074)(910,0.2026)(911,0.2067)(912,0.2011)(913,0.1878)(914,0.2062)(915,0.2)(916,0.2049)(917,0.2096)(918,0.1951)(919,0.1918)(920,0.2009)(921,0.1997)(922,0.2086)(923,0.2048)(924,0.2094)(925,0.215)(926,0.2247)(927,0.2227)(928,0.2152)(929,0.2113)(930,0.2138)(931,0.2039)(932,0.2113)(933,0.2089)(934,0.1944)(935,0.2104)(936,0.2107)(937,0.2099)(938,0.218)(939,0.2174)(940,0.2138)(941,0.211)(942,0.2098)(943,0.2103)(944,0.2074)(945,0.2044)(946,0.2038)(947,0.1957)(948,0.2014)(949,0.203)(950,0.2035)(951,0.2023)(952,0.2113)(953,0.2089)(954,0.1976)(955,0.1924)(956,0.205)(957,0.1992)(958,0.1886)(959,0.1882)(960,0.1872)(961,0.2021)(962,0.1977)(963,0.2118)(964,0.2167)(965,0.2086)(966,0.1879)(967,0.2011)(968,0.1979)(969,0.1961)(970,0.2067)(971,0.2118)(972,0.2176)(973,0.2078)(974,0.2046)(975,0.2024)(976,0.2135)(977,0.204)(978,0.1947)(979,0.2018)(980,0.2061)(981,0.2079)(982,0.2067)(983,0.2143)(984,0.2142)(985,0.2143)(986,0.2081)(987,0.2029)(988,0.2029)(989,0.2072)(990,0.2072)(991,0.1973)(992,0.2011)(993,0.2023)(994,0.195)(995,0.1949)(996,0.1885)(997,0.1975)(998,0.1859)(999,0.1995)
};\addlegendentry{COTAF $B = 2$}
\end{axis}
\end{tikzpicture}}
\end{center}
\caption{Test accuracy vs. global rounds on the CIFAR10 i.i.d. dataset when $B$ Byzantine clients apply Gaussian attacks.}
\label{gaussian_attacks_cifar}
\end{figure}
In \figref{gaussian_attacks_cifar}, we compare the proposed framework to COTAF on the CIFAR10 i.i.d. dataset. As expected, the performance of COTAF is heavily degraded by Byzantine attacks, while the performance of ROTAF remains close to that of the attack-free case.
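The mechanism behind this gap can be illustrated with a minimal NumPy sketch (our own toy construction, not the full OTA pipeline of this paper): honest group averages cluster around the true update, a few attacked groups report wildly wrong vectors, and the geometric median, approximated here by Weiszfeld iterations, ignores the outliers while the plain mean does not.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Approximate the geometric median with Weiszfeld's algorithm."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)  # avoid /0
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

rng = np.random.default_rng(0)
true_update = np.ones(10)          # the "clean" aggregated model update
G, B = 20, 2                       # number of groups, attacked groups

# Honest group averages cluster around true_update; the B attacked groups
# report a large constant vector (a crude stand-in for a strong attack).
groups = true_update + 0.1 * rng.standard_normal((G, 10))
groups[:B] = 100.0

err_mean = np.linalg.norm(groups.mean(axis=0) - true_update)
err_gm = np.linalg.norm(geometric_median(groups) - true_update)
print(err_mean, err_gm)            # the geometric median is far more accurate
```

With only 2 of 20 groups hijacked, the plain mean is dragged far from the true update while the geometric median stays within the honest cluster, mirroring the COTAF/ROTAF gap in the figure.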
\section{Conclusion}
\label{conclusion}
In this paper, we have proposed a novel framework to account for Byzantine attacks in OTA-FL. By dividing the distributed clients into groups, the parameter server is able to reduce the effect of Byzantine attacks by robustly aggregating the group parameter updates. The convergence of the proposed algorithm has been studied analytically, and we have extended our approach to handle the case of non-i.i.d. data. The simulation results show the robustness of the proposed method to different Byzantine attacks and its convergence for both i.i.d. and non-i.i.d. data. This work can be extended by studying robust aggregation techniques other than the geometric median. Considering imperfect CSI is also a possible future direction.
\section*{Appendix}
\subsection{Proof of Theorem \ref{thm1}}
To simplify notation, we further assume that all the clients in each group transmit at each global training round, that is, $|\mathcal{K}_{t,g}|= |\mathcal{G}_{t,g}|=m$. The extension to the general case is straightforward and leads to the same final convergence results. We first state the following lemma, which will be used later in the proof.
\begin{lemma}\cite[Lemma 2]{9153949} Let $\mathcal V$ be a set of random vectors in
a normed vector space. If $\mathcal{V}' \subset \mathcal{V}$ is such that $|\mathcal{V}'| < \frac{|\mathcal{V}|}{2}$,
then it holds that
$$
\mathbb{E}\|\underset{{\bf v}\in \mathcal{V}}{\rm geomed} ({\bf v}) \|^2 \leq C_\alpha^2 \frac{\sum_{{\bf v} \notin \mathcal{V}'}\mathbb{E}\|{\bf v}\|^2}{|\mathcal{V}| - |\mathcal{V}'| },
$$
where $C_\alpha=\frac{2-2\alpha}{1-2\alpha}$ and $\alpha= \frac{|\mathcal{V}'| }{|\mathcal{V}| }$.\label{ineq_gm_Byzantine}\end{lemma}
Define $\delta_t \triangleq \mathbb{E}\|{\bf w}_t-{\bf w}^*\|^2$, where the expectation is taken over the stochastic gradients and the channel noise. To prove Theorem \ref{thm1}, we start by finding an upper bound for $\delta_{t+1}$,
\begin{align*}
\delta_{t+1} & = \mathbb{E}\|{\bf w}_{t+1}-{\bf w}^*\|^2\\&= \mathbb{E}\|{\bf w}_t-\eta f'({\bf w}_t)-{\bf w}^*+{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2\\
&\leq \frac{1}{1-\gamma}\mathbb{E}\|{\bf w}_t-\eta f'({\bf w}_t)-{\bf w}^*\|^2 +\frac{1}{\gamma} \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2,
\end{align*}
for any $0<\gamma<1$, where we have used the inequality $\|{\bf x}+{\bf y}\|^2\leq \frac{1}{1-\gamma}\|{\bf x}\|^2+\frac{1}{\gamma}\|{\bf y}\|^2$, valid for any such $\gamma$. Since $f'({\bf w}^*)=0$, we can write
\begin{align*}
& \|{\bf w}_t-\eta f'({\bf w}_t)-{\bf w}^*\|^2\\& =\|{\bf w}_t-\eta ( f'({\bf w}_t)-f'({\bf w}^*) )-{\bf w}^*\|^2\\ &=
\|{\bf w}_t-{\bf w}^*\|^2 -2 \eta \langle f'({\bf w}_t)-f'({\bf w}^*), {\bf w}_t -{\bf w}^*\rangle + \eta^2 \|f'({\bf w}_t)-f'({\bf w}^*)\|^2
\\ & \overset{(a)}{\leq} \|{\bf w}_t-{\bf w}^*\|^2 -2 \eta \mu \|{\bf w}_t-{\bf w}^*\|^2 + \eta^2 L^2 \|{\bf w}_t-{\bf w}^*\|^2
\end{align*}
where $(a)$ follows from items $(i)$ and $(ii)$ of Assumption \ref{assump1}. Thus,
\begin{align*}
&\delta_{t+1} \leq \frac{1-2\eta\mu +\eta^2L^2}{1-\gamma}\delta_t + \frac{1}{\gamma} \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2.
\end{align*}
For $\eta<\frac{2}{\mu}$, we can take $\gamma = \frac{\eta\mu}{2}$. Assuming further that $\eta \leq \frac{\mu}{2L^2}$, it holds that
$
\frac{1-2\eta\mu +\eta^2L^2}{1-\gamma} \leq 1-\eta\mu
$. Hence, for $\eta <\min(\frac{2}{\mu}, \frac{\mu}{2L^2})$,
\begin{align}
\delta_{t+1} &\leq ( 1-\eta\mu) \delta_t +\frac{2}{\eta \mu} \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2.
\label{eq:102}
\end{align}
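As a side remark (added for completeness), the inequality $\|{\bf x}+{\bf y}\|^2\leq \frac{1}{1-\gamma}\|{\bf x}\|^2+\frac{1}{\gamma}\|{\bf y}\|^2$ invoked above follows from the Cauchy--Schwarz inequality together with Young's inequality $2ab\leq c a^2+ b^2/c$, taken with $c=\frac{\gamma}{1-\gamma}$:

```latex
\begin{align*}
\|{\bf x}+{\bf y}\|^2 &= \|{\bf x}\|^2 + 2\langle {\bf x},{\bf y}\rangle + \|{\bf y}\|^2 \\
&\leq \|{\bf x}\|^2 + \frac{\gamma}{1-\gamma}\|{\bf x}\|^2 + \frac{1-\gamma}{\gamma}\|{\bf y}\|^2 + \|{\bf y}\|^2
= \frac{1}{1-\gamma}\|{\bf x}\|^2 + \frac{1}{\gamma}\|{\bf y}\|^2.
\end{align*}
```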
We now treat the second term on the right-hand side of \eqref{eq:102}. From the update rule \eqref{global_update_}, it follows that
$
{\bf w}_{t+1} -{\bf w}_{t} = {\rm geomed}(\{{\bf u}_g^t\}_{g=1}^G)
$
where ${\bf u}_g^t$ is given by
\begin{align*}
{\bf u}_g^t&= -\frac{1}{m} \sum_{n\in \mathcal{G}_{t,g}} \eta f_{n,i_{n}^t}'({\bf w}_{t}) + {\bf z}_{t,g}.
\end{align*}
Define $\mathcal{B}_t$ as the set of groups containing at least one Byzantine attacker at global iteration $t$ and $\mathcal{R}_t$ as the set of groups without any Byzantine attackers.
Applying Lemma \ref{ineq_gm_Byzantine} yields
\begin{align*}
& \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 = \mathbb{E}\|{\rm geomed}(\{{\bf u}_g^t\}_{g=1}^G)+\eta f'({\bf w}_t)\|^2
= \mathbb{E}\|{\rm geomed}(\{{\bf u}_g^t+\eta f'({\bf w}_t)\}_{g=1}^G)\|^2\\
& \leq \frac{C_\alpha^2 }{|\mathcal R_t|}{\sum_{g\in \mathcal R_t}\mathbb{E}\left\| {\bf u}_{g}^t + \eta f'({\bf w}_t) \right\|^2}.
\end{align*}
Replacing ${\bf u}_{g}^t$ by its expression, it follows that
{\small
\begin{align}
& \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 \nonumber \\&\leq \frac{C_\alpha^2 }{|\mathcal R_t|} {\sum_{g \in \mathcal R_t}\mathbb{E} \left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} f'_{n,i_{n}^t}({\bf w}_{t}) + {\bf z}_{t,g} +\eta f'({\bf w}_t) \right\|^2} \nonumber \\
&= \frac{C_\alpha^2 }{|\mathcal R_t|} {\sum_{g \in \mathcal R_t}\mathbb{E} \left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} ( f'_{n,i_{n}^t}({\bf w}_{t}) -f'_{n}({\bf w}_{t})) + {\bf z}_{t,g} \right\|^2}+ \frac{C_\alpha^2 }{|\mathcal R_t|} {\sum_{g \in \mathcal R_t} \left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} f'_{n}({\bf w}_{t})+\eta f'({\bf w}_t)\right\|^2}
\label{main}
\end{align}}
where the last equality is obtained by noting that the cross terms vanish. First, we note that
\begin{align}
&\mathbb{E} \left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} ( f'_{n,i_{n}^t}({\bf w}_{t}) -f'_{n}({\bf w}_{t})) + {\bf z}_{t,g} \right\|^2 \\&= \mathbb{E} \left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} ( f'_{n,i_{n}^t}({\bf w}_{t}) -f'_{n}({\bf w}_{t})) \right\|^2 +\mathbb{E} \left\| {\bf z}_{t,g} \right\|^2 \label{eq1118}
\\& \leq \frac{\eta^2}{m}\sum_{n \in \mathcal G_{t,g}} \mathbb{E} \left\| ( f'_{n,i_{n}^t}({\bf w}_{t}) -f'_{n}({\bf w}_{t})) \right\|^2 +\mathbb{E} \left\| {\bf z}_{t,g} \right\|^2 \label{eq1119}
\end{align}
where the equality in \eqref{eq1118} follows by noting that the cross terms vanish due to the independence of the noise and the stochastic gradients, while the inequality in \eqref{eq1119} follows from the fact that $\left\|\sum_{i=1}^k {\bf v}_i \right\|^2\leq k \sum_{i=1}^k\|{\bf v}_i\|^2$ for any sequence of vectors $\{{\bf v}_i\}_{i=1}^k$. Similarly, we have
\begin{align}
\left\|-\frac{\eta}{m} \sum_{n \in \mathcal G_{t,g}} f'_{n}({\bf w}_{t})+\eta f'({\bf w}_t)\right\|^2 \leq \frac{\eta^2}{m} \sum_{n \in \mathcal G_{t,g}} \| f_n'({\bf w}_t) -f'({\bf w}_t)\|^2
\end{align}
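Both bounds above rely on the elementary inequality $\left\|\sum_{i=1}^k {\bf v}_i \right\|^2\leq k \sum_{i=1}^k\|{\bf v}_i\|^2$, which can be sanity-checked numerically; the short sketch below (ours, with arbitrary test vectors) does so:

```python
import numpy as np

# Check ||sum v_i||^2 <= k * sum ||v_i||^2 on a few concrete vectors.
vs = [np.array([1.0, -2.0, 3.0]),
      np.array([0.5, 0.5, -1.0]),
      np.array([-3.0, 1.0, 2.0]),
      np.array([2.0, 2.0, 2.0])]
k = len(vs)

lhs = np.linalg.norm(sum(vs)) ** 2                      # ||sum v_i||^2
rhs = k * sum(np.linalg.norm(v) ** 2 for v in vs)       # k * sum ||v_i||^2
print(lhs, rhs)   # lhs never exceeds rhs
```

Equality holds exactly when all the vectors are identical, which is why the bound loses a factor of $k$ in the worst case.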
Thus, \eqref{main} can be further bounded as
\begin{align}
\mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 \nonumber \leq & \frac{C_\alpha^2 }{|\mathcal R_t|} {\sum_{g \in \mathcal R_t} \left[ \frac{\eta^2}{m}\sum_{n \in \mathcal G_{t,g}} \mathbb{E} \left\| ( f'_{n,i_{n}^t}({\bf w}_{t}) -f'_{n}({\bf w}_{t})) \right\|^2 +\mathbb{E} \left\| {\bf z}_{t,g} \right\|^2 \right] }\nonumber\\&+ \frac{C_\alpha^2 }{|\mathcal R_t|} {\sum_{g \in \mathcal R_t} \frac{\eta^2}{m} \sum_{n \in \mathcal G_{t,g}} \| f_n'({\bf w}_t) -f'({\bf w}_t)\|^2
}
\end{align}
We deal first with the term $\mathbb{E}\|{\bf z}_{t,g}\|^2$,
\begin{align}
&\mathbb{E}\|{\bf z}_{t,g}\|^2= \frac{p\sigma^2}{h_{min}^2 m^2\rho_t^2} = \frac{p\sigma^2}{h_{min}^2 m^2} \frac{\max_n\mathbb{E}\|{\bf m}_n^t\|^2}{P}\nonumber
\end{align}
Using item (v) in Assumption \ref{assump1}, we get
\begin{align}
\mathbb{E}\|{\bf m}_n^t\|^2 = \mathbb{E}\left\| \eta f'_{n,i_{n}^t} ({\bf w}^n_{t}) \right\|^2 \leq \eta^2K^2
\end{align}
Thus,
\begin{align}
\mathbb{E}\|{\bf z}_{t,g}\|^2\leq \frac{p\sigma^2}{m^2P h_{min}^2} \eta^2K^2.
\label{noise_bound}
\end{align}
From Assumption \ref{assump1}, we have
\begin{align}
\mathbb{E}\left\|f'_{n,i_{n}^t}({\bf w}_{t}^n)- f'_n({\bf w}_t) \right\|^2\leq \kappa^2.
\label{second}
\end{align}
Moreover, the term $\left\| f'_{n}({\bf w}_t) - f'({\bf w}_t) \right\|^2$ verifies by Assumption \ref{assump1},
\begin{align}
\left\| f'_{n}({\bf w}_t) - f'({\bf w}_t) \right\|^2 \leq \delta^2.
\label{third}
\end{align}
Combining \eqref{main} with \eqref{noise_bound}, \eqref{second}, and \eqref{third}, it holds that
\begin{align}
\mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 \!\leq C_\alpha^2 \eta^2\! \left(\delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{min}^2} K^2\right)\!.
\label{eq202}
\end{align}
Combining \eqref{eq:102} and \eqref{eq202} yields, for $\eta< \min(\frac{\mu}{2L^2}, \frac{2}{\mu})$,
$$
\delta_{t+1}\leq ( 1-\eta\mu) \delta_t +\eta \mu A_1,
$$
where
$
A_1 \triangleq \frac{2}{\mu^2}C_\alpha^2 \left( \delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{min}^2} K^2\right)
$. Thus,
\begin{align}
\delta_{t+1} & \leq ( 1-\eta\mu)^{t+1} \delta_0 + \eta \mu A_1\sum_{i=0}^{t}( 1-\eta\mu)^{i}\\
& = ( 1-\eta\mu)^{t+1} ( \delta_0 - A_1)+A_1,
\end{align}
which completes the proof.
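The closed form of this recursion can also be verified numerically. The sketch below (our own, with arbitrary illustrative constants) iterates the worst case of the bound, $\delta_{t+1} = (1-\eta\mu)\delta_t + \eta\mu A_1$, and checks that it matches $(1-\eta\mu)^{t+1}(\delta_0 - A_1) + A_1$ and approaches the error floor $A_1$:

```python
# Worst-case iteration of the bound delta_{t+1} <= (1 - eta*mu)*delta_t + eta*mu*A1.
eta, mu, A1 = 0.1, 1.0, 0.5      # illustrative constants; eta < min(2/mu, mu/(2L^2)) assumed
delta0 = 10.0

delta = delta0
for t in range(200):
    delta = (1 - eta * mu) * delta + eta * mu * A1

# Closed form after 200 steps, as derived in the proof.
closed_form = (1 - eta * mu) ** 200 * (delta0 - A1) + A1
print(delta, closed_form)        # both approach the error floor A1 = 0.5
```

Since $0 < \eta\mu < 1$, the geometric factor $(1-\eta\mu)^{t+1}$ decays to zero and $\delta_t$ converges linearly to the residual error $A_1$.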
\subsection{Proof of Theorem \ref{thm3}}
We first state the following preliminary results, which will be used later in the proof.
\begin{lemma}\cite[Proposition 1]{karimireddy2021byzantinerobust} Let $\{{\bf v}_k, k \in \mathcal{G}\}$ be a set of vectors with cardinality $G=|\mathcal{G}|$ and $\{\tilde{\bf v}_k, k\in \mathcal{G}\}$ a new set generated using the resampling method with $s$-replacement. Assume that $R=G-B$ vectors of the set $\{{\bf v}_k, k \in \mathcal{G}\}$ are uncontaminated (not affected by malicious updates), with $\mathcal{R}$ denoting the set of indices of these vectors. When $B<\frac{G}{2s}$, there exists a set $\mathcal{R}' \subseteq \mathcal{G}$ with at least $G-sB$ elements, such that for any $k' \in \mathcal{R}'$
\begin{align}
\mathbb{E} \tilde{\bf v}_{k'} = \frac{1}{R}\sum_{k\in \mathcal{R}}{\bf v}_k,\label{res1}
\end{align}
\begin{align}
\mathbb{E} \| \tilde{\bf v}_{k'} - \mathbb{E}\tilde{\bf v}_{k'} \|^2 = \frac{d}{R}\sum_{k\in \mathcal{R}} \left\|{\bf v}_k - \frac{1}{R}\sum_{k\in \mathcal{R}} {\bf v}_k\right\|^2,\label{res2}
\end{align}
where $d:= \frac{G-1}{sG-1}$ and the expectation is taken over the resampling process.
\label{lem3}
\end{lemma}
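To make the resampling step concrete, the sketch below implements one simple scheme consistent with $s$-replacement (our own simplified construction, not necessarily the exact procedure of \cite{karimireddy2021byzantinerobust}): each output vector averages $s$ inputs, and every input is used exactly $s$ times overall, which immediately preserves the overall mean:

```python
import numpy as np

def resample_s_replacement(vectors, s, rng):
    """Each output averages s inputs; every input is used exactly s times."""
    G = len(vectors)
    pool = np.repeat(np.arange(G), s)    # each index appears s times
    rng.shuffle(pool)                    # random assignment of uses
    chunks = pool.reshape(G, s)          # G groups of s indices each
    return np.array([vectors[idx].mean(axis=0) for idx in chunks])

rng = np.random.default_rng(0)
G, s, dim = 10, 2, 4
vectors = rng.standard_normal((G, dim))

resampled = resample_s_replacement(vectors, s, rng)

# Because every input is used exactly s times, the overall mean is preserved.
print(np.allclose(resampled.mean(axis=0), vectors.mean(axis=0)))  # -> True
```

Averaging $s$ vectors per output is what dilutes the influence of any single Byzantine update, at the price of mixing some contamination into up to $sB$ of the outputs, which is exactly the trade-off captured by the set $\mathcal{R}'$ of size at least $G-sB$.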
\begin{lemma}
Let $\{{\bf v}_k, k \in \mathcal{G}\}$ be a set of random vectors in a normed vector space, and assume that the vectors of the subset $\{{\bf v}_k, k \in \mathcal{R}\}$ are independent, where $\mathcal{R}$ is the set of indices defined in Lemma \ref{lem3}. Let $\{\tilde{\bf v}_k, k\in \mathcal{G}\}$ be a new set generated from $\{{\bf v}_k, k \in \mathcal{G}\}$ using the resampling method with $s$-replacement. When $B<\frac{G}{2s}$, there exists a set $\mathcal{R}' \subseteq \mathcal{G}$ with at least $G-sB$ elements, such that
{
\begin{align}
\frac{1}{R'}\sum_{k'\in\mathcal{R}'}\mathbb{E}\left\|\tilde{\bf v}_{k'}-\bar{\bf v} \right\|^2=&\left(d+\frac{1-d}{R}\right)\frac{1}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\mathbb E {\bf v}_k\right\|^2+\frac{d}{R}\sum_{k\in\mathcal{R}}\left\| \mathbb E {\bf v}_k -\bar{\bf v}\right\|^2, \label{eqlem45}
\end{align}}
where $\bar{\bf v}=\frac{1}{R}\sum_{n\in \mathcal{R}}\mathbb E {\bf v}_n$, $d= \frac{G-1}{sG-1}$ and the expectation is taken over the resampling process and the randomness of vectors ${\bf v}_k$.
\label{lem4}
\end{lemma}
\begin{IEEEproof}
First, we note that the left-hand side of \eqref{eqlem45} can be written as
\begin{align}
\frac{1}{R'}\sum_{k'\in\mathcal{R}'}\mathbb{E}\left\|\tilde{\bf v}_{k'}-\bar{\bf v}\right\|^2&=\frac{1}{R'}\sum_{k'\in\mathcal{R}'}\mathbb{E}\left\|\tilde{\bf v}_{k'}-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g\right\|^2 + \mathbb{E}\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\right\|^2
\label{eqlem41}
\end{align}
where we have used the fact that the expectation of the cross terms over the resampling process, which we denote by $\mathbb{E}_{R}$, vanishes as follows
$$\mathbb{E}_{R}\langle\tilde{\bf v}_{k'}-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g,\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\rangle =\langle\mathbb{E}_{R} \tilde{\bf v}_{k'}-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g,\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\rangle= 0$$
where the last equality follows from \eqref{res1}.
Applying Lemma \ref{lem3} to the first term on the right-hand side of \eqref{eqlem41} yields
\begin{align}
\frac{1}{R'}\sum_{k'\in\mathcal{R}'}\mathbb{E}\left\|\tilde{\bf v}_{k'}-\bar{\bf v}\right\|^2&=\frac{d}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g\right\|^2 + \mathbb{E}\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\right\|^2
\label{eqlem42}
\end{align}
Let us now rewrite the first term of \eqref{eqlem42} as follows
\begin{align}
&\frac{d}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g\right\|^2\\ &=\frac{d}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|({\bf v}_k-\mathbb E {\bf v}_k ) + \left (\mathbb E {\bf v}_k -\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E {\bf v}_g\right)+\left (\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E {\bf v}_g-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g\right) \right\|^2 \\
&=\frac{d}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\mathbb E {\bf v}_k\right\|^2+\frac{d}{R}\sum_{k\in\mathcal{R}}\left\| \mathbb E {\bf v}_k-\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E {\bf v}_g\right\|^2+d\mathbb E\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E{\bf v}_g\right\|^2\label{eqlem43} \\&-2 \frac{d}{R} \sum_{k\in\mathcal{R}}\mathbb{E}\left\langle {\bf v}_k-\mathbb E {\bf v}_k ,\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E{\bf v}_g \right\rangle\nonumber
\end{align}
where the remaining cross terms vanish. Moreover, it can easily be seen that
\begin{align}
\frac{1}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\langle {\bf v}_k-\mathbb E {\bf v}_k ,\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\frac{1}{R}\sum_{g\in\mathcal{R}} \mathbb E{\bf v}_g \right\rangle = \mathbb E\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\right\|^2
\end{align}
which can also be written as
\begin{align}
\mathbb E\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g-\bar{\bf v}\right\|^2
=\mathbb E\left\|\frac{1}{R}\sum_{g\in\mathcal{R}} ({\bf v}_g- \mathbb E{\bf v}_g) \right\|^2 =\frac{1}{R^2}\sum_{g\in\mathcal{R}} \mathbb E \left\|{\bf v}_g- \mathbb E{\bf v}_g\right\|^2
\label{e201}
\end{align}
where we note here that the cross terms vanish due to the independence between the vectors $\{{\bf v}_g\}$.
Thus,
\begin{align}
\frac{d}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\frac{1}{R}\sum_{g\in\mathcal{R}} {\bf v}_g\right\|^2=\left(d -\frac{d}{R}\right)\frac{1}{R}\sum_{k\in\mathcal{R}}\mathbb{E}\left\|{\bf v}_k-\mathbb E {\bf v}_k\right\|^2+\frac{d}{R}\sum_{k\in\mathcal{R}}\left\| \mathbb E {\bf v}_k -\bar {\bf v}\right\|^2.
\label{e200}
\end{align}
Combining \eqref{e200}, \eqref{e201} and \eqref{eqlem42} yields the desired result in \eqref{eqlem45}.
\end{IEEEproof}
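As an aside, the independence identity \eqref{e201} used above -- the variance of the average of independent vectors equals the average of the variances divided by $R$ -- can be illustrated with a quick Monte Carlo check. This is a sketch, not part of the proof; the Gaussian distributions below are arbitrary illustrative choices.

```python
import random

random.seed(0)
R, dim, trials = 4, 3, 50_000

# Independent vectors v_g, one per g, with known per-coordinate means and scales.
means = [[float(g + j) for j in range(dim)] for g in range(R)]
scales = [0.5 + 0.3 * g for g in range(R)]

# Monte Carlo estimate of E || (1/R) sum_g (v_g - E v_g) ||^2.
lhs = 0.0
for _ in range(trials):
    dev = [0.0] * dim
    for g in range(R):
        for j in range(dim):
            dev[j] += (random.gauss(means[g][j], scales[g]) - means[g][j]) / R
    lhs += sum(d * d for d in dev)
lhs /= trials

# Exact value of (1/R^2) sum_g E || v_g - E v_g ||^2 for the chosen Gaussians.
rhs = sum(dim * s * s for s in scales) / R**2
print(lhs, rhs)  # the two values agree up to Monte Carlo error
```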
Compared to the proof of Theorem \ref{thm1}, the main difference is the term ${\bf w}_{t+1}-{\bf w}_t$ appearing on the right-hand side of \eqref{eq:102}, which can be expressed as
$$
{\bf w}_{t+1}-{\bf w}_t = {\rm geomed}(\{\tilde{\bf u}_g^t\}_{g=1}^G)
$$
where $\{\tilde{\bf u}_g^t\}$ are generated from $\{{\bf u}_g^t\}$ using the resampling method. We start by bounding the last term in \eqref{eq:102}
\begin{align}
& \mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 = \mathbb{E}\|{\rm geomed}(\{ \tilde{\bf u}_g^t\}_{g=1}^G)+\eta f'({\bf w}_t)\|^2\nonumber \\&
= \mathbb{E}\|{\rm geomed}(\{\tilde{\bf u}_g^t+\eta f'({\bf w}_t)\}_{g=1}^G)\|^2
\leq \frac{C_{s\alpha}^2 }{|R_t'|}{\sum_{g \in \mathcal R_t'}\mathbb{E}\left\| \tilde {\bf u}_{g}^t + \eta f'({\bf w}_t) \right\|^2},\label{eqth2}
\end{align}
where the last inequality follows from Lemma \ref{ineq_gm_Byzantine}. Define $\overline{\bf u}^t = \frac{1}{R_t}\sum_{g\in\mathcal R_t}{\mathbb E}{\bf u}_g^t$. The vectors ${\bf u}_g^t$ are composed of two terms as follows
$$
{\bf u}_g^t= -\frac{1}{m} \sum_{n\in \mathcal{G}_{t,g}} \eta f_{n,i_{n}^t}'({\bf w}_{t}) + {\bf z}_{t,g} = {\bf v}_g^t+ {\bf z}_{t,g}.
$$
where
$$
{\bf v}_g^t = -\frac{1}{m} \sum_{n\in \mathcal{G}_{t,g}} \eta f_{n,i_{n}^t}'({\bf w}_{t}).
$$
The randomness in ${\bf v}_g^t$ is with respect to the random client selection and the stochastic gradients, while the randomness in ${\bf z}_{t,g}$ is with respect to the channel noise. Using Lemma \ref{lem3}, the expectation of the random vectors ${\bf v}_g^t$, for $g\in \mathcal{R}_t$, over the random selection of clients can be written as
$$
{\mathbb E}_{S}{\bf v}_g^t = -\frac{1}{R}\sum_{n\in\mathcal{R}} \eta f_{n,i_{n}^t}'({\bf w}_{t}),
$$
where ${\mathbb E}_{S}$ denotes the expectation over the random selection of clients. Then, taking the expectation over the stochastic gradients,
$$
{\mathbb E}{\bf v}_g^t = -\frac{1}{R}\sum_{n\in\mathcal{R}} \eta f_{n}'({\bf w}_{t})
$$
which implies
$$
{\mathbb E}{\bf u}_g^t = -\frac{1}{R}\sum_{n\in\mathcal{R}} \eta f_{n}'({\bf w}_{t}) = -\eta f'({\bf w}_t),
$$
and thus
$$
\overline{\bf u}^t = -\eta f'({\bf w}_t).
$$
Applying Lemma \ref{lem4}, we get
\begin{align}
&\frac{1 }{|R_t'|}\sum_{g \in \mathcal R_t'}\mathbb{E}\left\| \tilde {\bf u}_{g}^t + \eta f'({\bf w}_t) \right\|^2 = \frac{1 }{|R_t'|}\sum_{g \in \mathcal R_t'}\mathbb{E}\left\| \tilde {\bf u}_{g}^t -\overline{\bf u}^t \right\|^2 \\ &=\left(d+ \frac{1-d}{R_t} \right)\frac{1}{R_t} \sum_{g\in \mathcal{R}_t}\mathbb{E} \| {\bf u}_g^t- \mathbb{E}{\bf u}_g^t \|^2 +\frac{d}{R_t} \sum_{g\in \mathcal{R}_t }\norm{ \mathbb{E} {\bf u}_g^t -\overline{\bf u}^t }^2 \\ &=\left(d+ \frac{1-d}{R_t} \right)\frac{1}{R_t} \sum_{g\in \mathcal{R}_t}\mathbb{E} \| {\bf u}_g^t + \eta f'({\bf w}_t) \|^2,
\label{lem4eq44}
\end{align}
where the last result is obtained by noting that $\mathbb{E}{\bf u}_g^t= \overline{\bf u}^t = -\eta f'({\bf w}_t)$.
From the proof of Theorem \ref{thm1}, it holds that
\begin{align}
\frac{1 }{R_t}{\sum_{g\in \mathcal R_t}\mathbb{E}\left\| {\bf u}_{g}^t + \eta f'({\bf w}_t) \right\|^2}\leq \eta^2\! \left(\delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{min}^2} K^2\right).
\end{align}
Thus,
\begin{align}
\frac{1 }{|R_t'|}\sum_{g \in \mathcal R_t'}\mathbb{E}\left\| \tilde {\bf u}_{g}^t + \eta f'({\bf w}_t) \right\|^2 & \leq \left(d+ \frac{1-d}{R_t} \right) \eta^2 \left( \delta^2 +\kappa^2 + \frac{p\sigma^2}{m P h_{min}^2} K^2\right).
\label{eqth3}
\end{align}
Combining \eqref{eqth3} with \eqref{eqth2} and noting that $R_t\geq G-B$, it holds that
\begin{align}
\mathbb{E}\|{\bf w}_{t+1}-{\bf w}_t+\eta f'({\bf w}_t)\|^2 \!\leq C_{s\alpha}^2 \eta^2\! \left(d+ \frac{1-d}{G-B}\right)\left(\delta^2+\kappa^2+\frac{p\sigma^2}{m P h_{min}^2} K^2\right)\!.
\label{eq202}
\end{align}
Combining \eqref{eq:102} and \eqref{eq202} yields, for $\eta< \min(\frac{\mu}{2L^2}, \frac{2}{\mu})$,
$$
\delta_{t+1}\leq ( 1-\eta\mu) \delta_t +\eta \mu A_2,
$$
where
$
A_2 \triangleq \frac{2}{\mu^2}C_{s\alpha}^2 \left(d+ \frac{1-d}{G-B}\right)\left(\delta^2+\kappa^2+\frac{p\sigma^2}{mPh_{min}^2} K^2\right)
$. Thus,
\begin{align}
\delta_{t+1} & \leq ( 1-\eta\mu)^{t+1} \delta_0 + \eta \mu A_2\sum_{i=0}^{t}( 1-\eta\mu)^{i}\\
& = ( 1-\eta\mu)^{t+1} ( \delta_0 - A_2)+A_2,
\end{align}
which completes the proof.
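As a side check, the geometric-series unrolling in the last display can be verified numerically. Below is a minimal sketch with arbitrary illustrative values of $\eta\mu$, $A_2$, and $\delta_0$, not tied to the constants of the theorem.

```python
eta_mu, A2, delta0, T = 0.1, 2.0, 10.0, 50

# Iterate the equality case of delta_{t+1} <= (1 - eta*mu) * delta_t + eta*mu * A2.
delta = delta0
for _ in range(T):
    delta = (1.0 - eta_mu) * delta + eta_mu * A2

# Closed form obtained by summing the geometric series sum_i (1 - eta*mu)^i.
closed_form = (1.0 - eta_mu) ** T * (delta0 - A2) + A2
print(delta, closed_form)  # identical up to floating-point round-off
```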
\bibliographystyle{IEEEtran}
\section{Introduction}
Defects in materials, and particularly atomic vacancies, arise as a possible route toward the experimental control of properties \cite{ACSNANOliang2021, CPCcavallini2022}. Despite the plethora of possible material defects, ranging from localized to extended, even the elementary structural defect (the single atomic vacancy) allows complex phenomena to arise, for instance governing molecular self-assembly on two-dimensional (2D) substrates \cite{NATMATlin2017}. Additionally, control of such point defects through the precise manipulation of vacancies has enabled the construction and engineering of metamaterials \cite{PRLnguyen2018, ACSNANOli2015}. To further explore materials modification through vacancies, a clear picture of the intrinsic vacancy properties is needed.
Two-dimensional transition metal dichalcogenides (TMDs) are an interesting class of materials holding tremendous potential for applications, encompassing catalysis \cite{ADVMATvoiry2016, NATCOMtsai2017}, (opto)electronics \cite{NLcheng2014, NATNANOcui2015, NATNANOwang2012}, spintronics \cite{NATNANOwang2012, RMPavsar2020}, magnetism \cite{NATCOMavsar2020}, and energy storage \cite{CCdu2010, ACSNchang2011}. In particular, the presence of defects in 2D-TMD materials can directly impact their properties, causing a variety of phenomena that can be either detrimental, with defects acting as carrier scattering and localization sites \cite{NLedelberg2019}, or beneficial, as in the case of active catalytic sites and magnetic orderings \cite{JACScai2015}. Furthermore, spin-orbit effects in these materials combined with point defects may give rise to interesting phenomena such as giant magnetoresistance \cite{JPCMbai2018} and topological phases \cite{NLcrasto2021}, all of which could be employed in spintronic devices. Additionally, recent experimental works have shown that the localization character of the vacancy states governs electronic transport \cite{NATCOMqiu2013}, and that magnetic ordering arises at transition-metal vacancies \cite{PRBabsor2017}.
In this study, we perform ab initio calculations based on density functional theory to clarify and quantify the localization character of vacancy states in TMDs. We consider pristine and defective MX$_2$ 2D systems (for M $=$ Ni, Mo, Pd, W, and Pt, and X $=$ S, Se, and Te) in their 1H, 1T, and 1T' phases (see Fig.~\ref{fig:ucell}). We identify the most stable structural phase and the most stable vacancies (metal or chalcogen), and present an analysis of the localization nature of the vacancy states and its consequences for the energetic, electronic, and magnetic properties.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{1H, 1T, and 1T' MX$_2$ 2D structural phases.}
\label{fig:ucell}
\end{figure}
\section{Computational Approach}
Spin-polarized calculations based on density functional theory (DFT) \cite{PRhohenberg1964, PRkohn1965} were performed within the semi-local exchange-correlation functional proposed by Perdew--Burke--Ernzerhof (PBE) \cite{PRLperdew1996}. To treat the long-range dispersive van der Waals (vdW) interactions, the pairwise D3 correction framework proposed by Grimme was employed \cite{JCPgrimme2010, JCCgrimme2011}.
\begin{figure*}[ht!]
\includegraphics[width=2\columnwidth]{fig2.pdf}
\caption{(a) Relative cohesive energies for all pristine systems and structural parameters, namely (b) in-plane lattice and (c) monolayer thickness, for the most stable MX$_2$ structures.}
\label{fig:cohen}
\end{figure*}
For the total energies, $E_{tot}^{\text{DFT+vdW}}$, the electron-ion core interactions were treated within the projector augmented wave (PAW) method \cite{PRBblochl1994, PRBkresse1999}, as implemented in the Vienna {\it Ab-Initio} Simulation Package (VASP) \cite{PRBkresse1993, PRBkresse1996}. Spin-orbit coupling was considered for the most stable configurations. For all calculations, the cutoff energy for the plane-wave expansion of the Kohn--Sham orbitals was set to $520$\,eV, under an energy convergence criterion of $10^{-6}$\,eV, with all atoms relaxed until the force on every atom was smaller than $10^{-3}$\,eV\,{\AA}$^{-1}$. A uniform $4\times4\times1$ k-point mesh was considered for the Brillouin zone (BZ) integration, while a denser grid, $8\times8\times1$, was used for the density of states (DOS) calculations.
The 2D in-plane lattice parameters ($a$ and $b$) were optimized for a $1\times1$ unit cell with a fixed vacuum distance ($c$) of at least $20$\,{\AA}. These lattice parameters were used to build the pristine and single-vacancy $5\times5$ and $3\times3$ supercells for the 1H and 1T (1T') structures, with the former being large enough to avoid spurious interactions between periodic images.
\section{Results and discussion}
\subsection{Structural Stability}
To analyze the structural stability of the pristine structures, we calculated the cohesive energies -- the total energy difference between the material and its isolated constituent atoms -- for the 1H, 1T, and 1T' phases. To better present the relative stability of one phase against another, we evaluated the relative cohesive energies, namely $\Delta E^{1H-1T}$, $\Delta E^{1H-1T'}$, and $\Delta E^{1T-1T'}$, as shown in Fig.~\ref{fig:cohen}~(a); here $\Delta E^{1H-1T}<0$ indicates that the 1H phase is more stable than the 1T phase, and similarly for the other differences. As one can see, the negative 1H-1T ($200$--$400$\,meV\,atom$^{-1}$ in magnitude) and 1H-1T' ($0$--$200$\,meV\,atom$^{-1}$) values for \ce{MoX2} and \ce{WX2} indicate the 1H phase as the most stable, with this stability decreasing with increasing chalcogen atomic radius. In contrast, although only slightly positive, the cohesive energy differences for \ce{NiX2}, \ce{PdX2}, and \ce{PtX2} point to the 1T phase as the most stable, with the 1T-1T' differences being close to zero: although some calculations were started in the 1T' phase, after relaxation they converged barrierlessly into the 1T phase.
In general, the 1T phase has larger in-plane lattice parameters [Fig.~\ref{fig:cohen}(b)] with a smaller monolayer thickness as compared to the 1H phase and its more open structure (hexagonal holes). Consequently the monolayer thickness [Fig.~\ref{fig:cohen}(c)] decreases as the chalcogen planes move down to stabilize the equilibrium covalent bond length. In the 1H phase, the in-plane lattice parameter and monolayer thickness increase going from S$\rightarrow$Te, as a result of accommodating larger chalcogen atoms within the structure. In the case of the 1T phase, only PtX$_2$ follows a similar trend as the 1H systems. On the other hand, 1T-NiX$_2$ and 1T-PdX$_2$ undergo larger relaxation effects.
\subsection{Vacancy formation energy}
After optimizing the pristine systems, we built $5\times5$ defective monolayers considering the most stable phase for each system, namely the 1H phase for the MoX$_2$ and WX$_2$ systems and the 1T phase for the NiX$_2$, PdX$_2$, and PtX$_2$ systems. Both native point defects, comprising chalcogen (V$_X$) and transition metal (V$_M$) vacancies, were considered, with vacancy-vacancy distances in the 1H (1T) phase of around $15.8$--$17.6$\,{\AA} ($15.7$--$20.0$\,{\AA}).
In Figure~\ref{fig:formen}, we present the formation energies, $E_f$, for both M and X vacancies for the selected systems, evaluated according to
\begin{equation}
E_f = E_{def}^{MX_2} - (E_{pristine}^{MX_2} - E_{M,X}^{free-atom} ),
\label{eq:formen}
\end{equation}
in which $E_{def}^{MX_2}$ is the energy of the defective system, $E_{pristine}^{MX_2}$ is the energy of the pristine system, and $E_{M,X}^{free-atom}$ is the free-atom energy of the M or X atom removed to generate the vacancy.
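Equation \eqref{eq:formen} is straightforward to evaluate from total energies; below is a minimal sketch, where the numerical values are hypothetical and for illustration only (they are not results of this work).

```python
def vacancy_formation_energy(e_defect: float, e_pristine: float, e_free_atom: float) -> float:
    """E_f = E_def - (E_pristine - E_free_atom); all energies in eV."""
    return e_defect - (e_pristine - e_free_atom)

# Hypothetical total energies (eV) for a pristine supercell, the same supercell
# with one chalcogen atom removed, and the isolated (free) chalcogen atom:
e_f = vacancy_formation_energy(e_defect=-540.2, e_pristine=-545.0, e_free_atom=-1.1)
print(e_f)  # positive, i.e. endothermic formation
```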
As indicated in Fig.~\ref{fig:formen}, both formation energies are endothermic, with the formation energies of transition metal vacancies [Fig.~\ref{fig:formen}(a)] being higher than those of the chalcogen vacancies [Fig.~\ref{fig:formen}(b)]. Considering the different structural phases, one can see that the defect formation energies in the 1T phase are always smaller than those in the 1H phase. In particular, the Ni and Pd systems with the heavier chalcogens (Se and Te) present transition metal vacancy formation energies close to the scale of the chalcogen vacancy values, indicating a possible occurrence of such defects (although less favorable than X-vacancies). Additionally, for the X-vacancies, we present the formation energy in a $3\times3$ supercell, i.e., at increased vacancy density, where the interaction between neighboring vacancies is stronger. The NiSe$_2$, NiTe$_2$, and PdTe$_2$ structures present a larger difference in the chalcogen vacancy formation energy in the $3\times3$ cell when compared with the lower vacancy density case. As we show in the next sections, this behavior is a consequence of the more delocalized nature of the vacancy states in those systems.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{Native point defect formation energies for (a) transition metal vacancies and (b) chalcogen vacancies. In the latter, we show formation energies for both $5\times5$ (dashed lines) and $3\times3$ (solid lines) supercell sizes.}
\label{fig:formen}
\end{figure}
\subsection{Band structure analysis}
In Fig.~\ref{fig:soc-bands}, we show a defective $5\times5$ supercell model for both the 1H and 1T structures, alongside their characteristic band structures with and without spin-orbit coupling. The introduction of a chalcogen vacancy leads to three defect states, corresponding to the three transition metal dangling bonds around the vacancy. The triangular $C_{3v}$ local environment splits the dangling-bond states into $E$ and $A_1$ irreducible representations. The interaction among the three intra-vacancy dangling bonds is larger in the 1H structure than in the 1T one, owing to the remaining chalcogen atoms coupling those transition metals; this stronger coupling leads to a larger energy separation between the $E$ and $A_1$ representations. As a result, the 1T phase hosts three vacancy states ($E$ and $A_1$) neatly lying in the energy gap, while in the 1H phase only the two $E$ states remain in the gap, with the $A_1$ state lying within the valence bands. On top of this picture, the SOC splits the $E$ states into two sets, with a splitting magnitude directly related to the SOC strength of each system. In Fig.~\ref{fig:soc-bands}(c) we show the SOC splitting at $\Gamma$ for the lower vacancy density case ($5\times5$ cell). The SOC contribution is mostly ruled by the TM atom, with a smaller variation with the chalcogen atom; this variation is larger for the WX$_2$ systems, $\sim$50\,meV, and smaller for the MoX$_2$ systems, $\sim$25\,meV. Among the 5d TMs, the 1H tungsten phases present the highest splitting, with WS$_2$ reaching close to 0.2\,eV, followed by the 1T Pt phases. The 4d TMs follow a similar trend, with the 1H Mo phases presenting higher SOC splittings than the Pd 1T phases. Interestingly, towards higher vacancy density, where the interaction between adjacent vacancy states becomes important, the SOC trend is altered.
First note that the vacancy states become dispersive [Fig.~\ref{fig:soc-bands}(b), right panels]. The combined effect of the band dispersion and SOC induces a larger gap opening at the $\Gamma$ point for the 1T phases compared with the 1H ones. For instance, this gap exceeds 0.3\,eV for PtTe$_2$, with a more pronounced dependence on the chalcogen atom. This dependence arises because the indirect coupling between adjacent vacancy states is mediated by the TMD matrix \cite{NLcrasto2021}. To further explore this indirect coupling, we quantify the localization of the vacancy states.
\begin{figure*}
\includegraphics[width=2\columnwidth]{fig4.pdf}
\caption{(a) $5\times5$ defect supercell for 1H and 1T phases as indicated; (b), the universal schematic of band structure of each semiconducting phase, respectively, with and without spin-orbit corrections. The green arrows indicated spin-orbit splitting of the corresponding $k$-point; (c) Spin-orbit splittings calculated for both $3\times3$ and $5\times5$ defect supercells. $E^{\Gamma}_g$ is the spin-orbit splitting at the $\Gamma$ point.}
\label{fig:soc-bands}
\end{figure*}
\subsection{(De)Localization}
\begin{figure}
\includegraphics[width=\columnwidth]{fig5.pdf}
\caption{Inverse participation ratio (IPR) as a function of the Bloch wavefunction ($|\psi_{n,k}\rangle$) energy: (a1)-(a3) IPR for the 1H-MX$_2$ phases with M$=$Mo and W, and (b1)-(b3) IPR for the 1T-MX$_2$ phases with M$=$Ni, Pd, and Pt, from S to Te, respectively.}
\label{fig:ipr}
\end{figure}
To take into account (de)localization effects due to the vacancy formation, we characterized the defective systems through the inverse participation ratio (IPR) \cite{PRBfocassio2021} given by
\begin{equation}
\text{IPR}_{n,k} = \frac{ \sum\limits_{i=1}^{N} | \langle i | \psi_{n,k} \rangle |^4}{ \left( \sum\limits_{i=1}^{N} | \langle i | \psi_{n,k} \rangle |^2 \right)^2},
\label{eq:ipr}
\end{equation}
in which $\langle i|\psi_{n,k}\rangle$ is taken as the sum of the orbital-projected KS eigenstate on each site/atom $i$, and $N$ is the total number of atoms in the cell. Thus, for a fully localized state the IPR equals one, while a fully delocalized one corresponds to the limit $\text{IPR} = 1/N$. In Fig.~\ref{fig:ipr}, we show the IPR value of each state $| \psi_{n,k} \rangle$ as a function of its eigenvalue, for chalcogen vacancies in the most stable phases. It is worth pointing out that these defective systems are in general semiconducting, with the exception of NiSe$_2$, NiTe$_2$, and PdTe$_2$, which are metallic, as indicated by the energy gap on the $x$-axis of Fig.~\ref{fig:ipr}.
In the semiconducting cases, the localized states that appear within the gap correspond to the vacancy states depicted in Fig.\,\ref{fig:soc-bands}. The 1H phases present a slightly larger IPR when compared to the 1T phases. Taking the 1H phase as an example [Fig.~\ref{fig:ipr}(a1)-(a3)]: after the chalcogen vacancy formation, if (in the ideal case) only the three transition metal orbitals ($|M_i \rangle$) neighboring the vacancy contribute, and contribute equally (with $\langle M_i | \psi_{vac-state}\rangle =a$), we have $\text{IPR} = 3a^4 / (3a^2)^2 = 1/3$. That is, the localization limit is $1/3$ for a state confined to the chalcogen vacancy surroundings. The observed IPR values are around $0.20$, close to this ideal limit, reduced by the spread of the vacancy states onto hybridized neighboring orbitals. The same limit (IPR$=1/3$) holds for the 1T structures [Fig.~\ref{fig:ipr}(b1)-(b3)]; however, despite the vacancy states lying within the band gap, the observed values are slightly smaller, around IPR$=0.15$. In this case, an enhanced interaction with the environment is observed. To better visualize it, in Fig.~\ref{fig:ldos} we show the squared wave functions (partial charge densities) of the vacancy states of MoS$_2$ and PtSe$_2$ with an S and Se vacancy, which present IPRs of $0.2$ and $0.1$, respectively. The 1T phase forms a pyramidal-like configuration [see Fig.~\ref{fig:ldos}(b)], in which the localized state spreads to opposite-surface chalcogen atoms neighboring the dangling-bond Pt atoms. On the other hand, the 1H-phase LDOS [see Fig.~\ref{fig:ldos}(a)] is mostly localized on the M atoms close to the vacancy. Thus, the IPR of these localized states decreases in the 1T phase as compared to the 1H one. The spatial distributions of the vacancy states, although with different spreads, follow the same three-fold symmetry of the vacancy structure.
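The counting argument above can be made concrete with a few lines of code implementing Eq.~\eqref{eq:ipr} from per-site projection weights $|\langle i|\psi_{n,k}\rangle|^2$. This is an illustrative sketch; the supercell atom count below is an assumption chosen only to show the delocalized limit.

```python
def ipr(weights):
    """IPR of a state from per-site weights w_i = |<i|psi_{n,k}>|^2."""
    return sum(w * w for w in weights) / sum(weights) ** 2

# Ideal chalcogen-vacancy state: three equally contributing TM dangling bonds,
# giving the localization limit 1/3 discussed in the text.
print(ipr([0.2, 0.2, 0.2]))

# Fully delocalized state over N atoms gives IPR = 1/N.
N = 74  # e.g. a 5x5 MX2 supercell (75 atoms) with one chalcogen vacancy
print(ipr([1.0 / N] * N))
```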
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{LDOS for the chalcogen-defected systems (a) 1H-MoS$_2$ and (b) 1T-PtSe$_2$, with the same isosurface of 0.007\,e/{\AA}$^3$.}
\label{fig:ldos}
\end{figure}
In light of the quantification of localization through the IPR, the chalcogen vacancy formation energies and the dependence of the band structure on vacancy density can be readily explained. The more localized nature of the Mo, W, and Pt vacancy states, all with IPR above $0.1$, dictates that adjacent vacancies interact weakly, leaving the formation energy essentially unchanged. In contrast, Ni and Pd, with IPR$<0.1$, show a stronger interaction between adjacent vacancies, which reduces the formation energy at higher vacancy densities. In particular, the 1T phases with Te [Fig.~\ref{fig:ipr}(b3)], which present the lowest IPR values for the vacancy states, also present the greatest variability in the chalcogen vacancy formation energy, see Fig.~\ref{fig:formen}(b). A similar analysis follows from the band structure: the 1T phases, allowing a longer range of interaction between adjacent vacancies, lead to more dispersive states when compared with the 1H structure [Fig.~\ref{fig:soc-bands}(b)]. While the localization of the chalcogen vacancy states thus explains their mutual interaction, TM vacancies can additionally lead to localized magnetic effects.
\subsection{Magnetism}
Although less energetically favorable, TM vacancies can also be found in TMDs, where their introduction can induce a local net magnetic moment. In Table~\ref{tab:mag} we summarize the systems presenting a net magnetic moment after a TM vacancy is formed; systems absent from the table present no magnetic response. After the TM vacancy is formed, localized magnetic moments arise on the neighboring chalcogen atoms. A ferromagnetic (FM) phase can be stabilized in some of the explored systems, being the only phase observed for the 1H structure. Interestingly, some of the 1T systems can present an antiferromagnetic (AFM) arrangement of the chalcogen magnetic moments, which can be more stable than the ferromagnetic one, as already observed for 1T-PtSe$_2$ \cite{NATCOMavsar2020}. Aware of this behavior, we probed this antiferromagnetic configuration for our 1T-based systems.
The systems presenting a possible antiferromagnetic phase were 1T-NiS$_2$, 1T-PdS$_2$, 1T-PdSe$_2$, 1T-PtS$_2$, and 1T-PtSe$_2$. However, as shown in Table~\ref{tab:mag}, for the Ni- and Pd-based systems the antiferromagnetic phase was not the most stable ($\Delta E^{AFM-FM}>0$). On the other hand, for the Pt-based systems we found more stable antiferromagnetic phases for 1T-PtS$_2$ and 1T-PtSe$_2$, with energy differences of about $-54$\,meV/vacancy and $-33$\,meV/vacancy, respectively. These values for the AFM phases, together with the FM one of 1T-PdSe$_2$ [with $\Delta E^{AFM-FM}=43$\,meV/vacancy], indicate that such magnetic configurations remain robust close to ambient temperature.
\begin{table}[h!]
\begin{ruledtabular}
\caption{Magnetic moments, $m$ ($\mu_B$), induced in the most stable phases by the introduction of vacancies. The last column, $\Delta E^{AFM-FM}$ (meV/vacancy), is the energy difference between the antiferromagnetic (AFM) and ferromagnetic (FM) phases; $\Delta E^{AFM-FM}<0$ indicates that the AFM phase is more stable.}
\label{tab:mag}
\begin{tabular}{lccc}
MX$_2$ & vacancy & $m$ & $\Delta E^{AFM-FM}$ \\
\hline
1H-MoSe$_2$ & Mo & 4.00 & -- \\
1H-MoTe$_2$ & Mo & 2.00 & -- \\
1H-WTe$_2$ & W & 1.96 & -- \\
1T-NiS$_2$ & Ni & 4.00 & 19 \\
1T-PdS$_2$ & Pd & 4.00 & 17 \\
1T-PdSe$_2$ & Pd & 4.00 & 43 \\
1T-PtS$_2$ & Pt & 4.00 & -54 \\
1T-PtSe$_2$ & Pt & 4.00 & -33
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions}
We systematically investigated the energetic and electronic properties of a series of two-dimensional transition metal dichalcogenides (MX$_2$, with M$=$Ni, Mo, Pd, W, and Pt; and X$=$S, Se, and Te) presenting native point defects, namely chalcogen and transition metal vacancies, in different structural phases. We found the chalcogen vacancy to be the most stable defect in all systems, with lower formation energies in the 1T phase (Ni, Pd, and Pt systems) than in the 1H phase (Mo and W systems). However, transition metal vacancies can still be found under experimental conditions; in this sense, our results show the appearance of localized magnetic moments induced by metal vacancies in 1T-PtS$_2$ and 1T-PtSe$_2$, which could be exploited to design new 2D magnets. Furthermore, we explored the localization of the chalcogen vacancy states. Such localized states give rise to three energy levels that can lie neatly within the TMD matrix energy gap. The localization strength, quantified by the inverse participation ratio of the defect states, is shown to be greater in the 1H phases. This leads both (i) to a stronger repulsion between the three defect states (of $C_{3v}$ symmetry, with $E$ and $A_1$ irreducible representations), increasing the gap between the $E$ and $A_1$ states, and (ii) to lower dispersion at higher vacancy densities, that is, a weaker vacancy-vacancy interaction. For the 1T phases, the more delocalized nature of the vacancy states gives rise to a stronger hopping-like interaction between adjacent vacancies. Additionally, we showed that vacancy-vacancy interactions (set by this localization) rule not only the band dispersion but also the SOC splittings. This investigation brings insightful discussion of the energetic and electronic effects of vacancy defects across the different 2D-TMD material phases and vacancy concentrations.
\section*{Acknowledgments}
The authors acknowledge financial support from the Brazilian agencies FAPESP (grant 20/14067-3 and 17/02317-2), INCT-Nanomateriais de Carbono, and Laborat\'{o}rio Nacional de Computa\c{c}\~{a}o Cient\'{i}fica for computer time.
\section{Introduction} \subsection{Overview} Given a compact set $W\subset\mathbb{R}^{n+1}$ ($n\ge 1$), we consider the classical {\bf exterior isoperimetric problem} associated to $W$, namely
\begin{equation}
\label{psiv}
\psi_W(v)=\inf\big\{P(E;\Omega):E\subset\Omega=\mathbb{R}^{n+1}\setminus W\,,|E|=v\big\}\,,\qquad v>0\,,
\end{equation}
in the large volume regime $v\to\infty$. Here $|E|$ denotes the volume (Lebesgue measure) of $E$, and $P(E;\Omega)$ the (distributional) perimeter of $E$ relative to $\Omega$, so that $P(E;\Omega)=\H^n(\Omega\cap\partial E)$ whenever $\partial E$ is locally Lipschitz. Relative isoperimetric problems are well-known for their analytical \cite[Sections 6.4-6.6]{mazyaBOOKSobolevPDE} and geometric \cite[Chapter V]{ChavelBOOK} relevance.
They are also important in physical applications: beyond the obvious example of capillarity theory \cite{FinnBOOK}, exterior isoperimetry at large volumes provides an elegant approach to the Huisken-Yau theorem in general relativity, see \cite{eichmair_metzgerINV}.
When $v\to\infty$, we expect minimizers $E_v$ in \eqref{psiv} to closely resemble balls of volume $v$. Indeed, by minimality and isoperimetry, denoting by $B^{(v)}(x)$ the ball of center $x$ and volume $v$, and with $B^{(v)}=B^{(v)}(0)$, we find that
\begin{equation}
\label{basic energy estimate}
\lim_{v\to\infty}\frac{\psi_W(v)}{P(B^{(v)})}=1\,.
\end{equation}
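For reference, since $|B_r|=\omega_{n+1}\,r^{n+1}$ and $P(B_r)=(n+1)\,\omega_{n+1}\,r^{n}$, the ball of volume $v$ has radius $r(v)=(v/\omega_{n+1})^{1/(n+1)}$, and the isoperimetric benchmark appearing in \eqref{basic energy estimate} takes the explicit form

```latex
P(B^{(v)}) = (n+1)\,\omega_{n+1}\,r(v)^{n}
           = (n+1)\,\omega_{n+1}^{1/(n+1)}\,v^{n/(n+1)}\,.
```

In particular, $P(B^{(v)})$ diverges as $v\to\infty$, so \eqref{basic energy estimate} determines $\psi_W(v)$ only to leading order.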
Additional information can be obtained by combining \eqref{basic energy estimate} with quantitative isoperimetry \cite{fuscomaggipratelli,FigalliMaggiPratelliINVENTIONES}: if $0<|E|<\infty$, then
\begin{equation}
\label{quantitative euclidean isop}
P(E)\ge P(B^{(|E|)})\Big\{1+ c(n)\,\inf_{x\in\mathbb{R}^{n+1}}\,\Big(\frac{|E\Delta B^{(|E|)}(x)|}{|E|}\Big)^2\Big\}\,.
\end{equation}
The combination of \eqref{basic energy estimate} and \eqref{quantitative euclidean isop} shows that minimizers $E_v$ of $\psi_W(v)$ are close, in $L^1$-distance, to balls. Based on that, a somewhat classical argument exploiting the local regularity theory of perimeter minimizers shows the existence of $v_0>0$ and of a function $R_0(v)\to 0^+$ with $R_0(v)\,v^{1/(n+1)}\to\infty$ as $v\to\infty$, both depending on $W$, such that,
\begin{figure}
\input{noinfo2.pstex_t}
\caption{\small{Quantitative isoperimetry gives no information on how $W$ affects $\psi_W(v)$ for $v$ large.
}}
\label{fig noinfo}
\end{figure}
if $E_v$ is a minimizer of \eqref{psiv} with $v>v_0$, then (see Figure \ref{fig noinfo})
\begin{eqnarray}
\label{basic C1 estimate}
&&\mbox{$(\partial E_v)\setminus B_{R_0\,v^{1/(n+1)}}\subset$ a $C^1$-small normal graph over $\partial B^{(v)}(x)$},
\\\nonumber
&&\mbox{for some $x\in\mathbb{R}^{n+1}$ with $|x|=(v/\omega_{n+1})^{1/(n+1)}+{\rm o}(v^{1/(n+1)})$ as $v\to\infty$}\,;
\end{eqnarray}
here $\omega_m$ stands for the volume of the unit ball in $\mathbb{R}^m$, $B_r(x)$ is the ball of center $x$ and radius $r$ in $\mathbb{R}^{n+1}$, and $B_r=B_r(0)$. The picture of the situation offered by \eqref{basic energy estimate} and \eqref{basic C1 estimate} is thus incomplete under one important aspect: it offers no information related to the specific ``obstacle'' $W$ under consideration -- in other words, {\it two different obstacles are completely unrecognizable from \eqref{basic energy estimate} and \eqref{basic C1 estimate} alone}.
The first step to obtain obstacle-dependent information on $\psi_W$ is studying $L^1_{\rm loc}$-subsequential limits $F$ of exterior isoperimetric sets $E_v$ as $v\to\infty$. Since the mean curvature of $\partial E_v$ has order $v^{-1/(n+1)}$ as $v\to\infty$ in $\Omega$, each $\partial F$ is easily seen to be a minimal surface in $\Omega$. A finer analysis leads to establish a more useful characterization of such limits $F$ as minimizers in a ``Plateau's problem with free boundary on the obstacle and at infinity'', whose negative is precisely defined in \eqref{def RW} below and denoted by $\mathcal{R}(W)$. We call $\mathcal{R}(W)$ the {\bf isoperimetric residue of $W$} because it captures the ``residual effect'' of $W$ in \eqref{basic energy estimate}, as expressed by the limit identity
\begin{equation}
\label{main energy estimate}
\lim_{v\to\infty}\,\psi_W(v)-P(B^{(v)})=-\mathcal{R}(W)\,.
\end{equation}
The study of the geometric information about $W$ stored in $\mathcal{R}(W)$ is particularly interesting: roughly, $\mathcal{R}(W)$ is close to an $n$-dimensional sectional area of $W$, although its precise value is elusively determined by the behavior of certain ``plane-like'' minimal surfaces with free boundary on $W$. The proof of \eqref{main energy estimate} itself requires proving a blowdown result for such exterior minimal surfaces, and then extracting sharp decay information towards hyperplane blowdown limits. In particular, in the process of proving \eqref{main energy estimate}, we shall prove the existence of a positive $R_2$ (depending on $n$ and $W$ only) such that for every maximizer $F$ of $\mathcal{R}(W)$, $(\partial F)\setminus B_{R_2}$ is the graph of a smooth solution to the minimal surfaces equation. An application of Allard's regularity theorem \cite{Allard} then leads to complement \eqref{basic C1 estimate} with the following ``local'' resolution formula: for every $S>R_2$, if $v$ is large in terms of $n$, $W$, and $S$,
\begin{eqnarray}
\label{local C1 estimate}
&&\mbox{$(\partial E_v)\cap \big(B_S\setminus B_{R_2}\big)\subset$ a $C^1$-small normal graph over $\partial F$},
\\
\nonumber
&&\mbox{where $F$ is optimal for the isoperimetric residue $\mathcal{R}(W)$ of $W$}\,.
\end{eqnarray}
Interestingly, this already fine analysis gives no information on $\partial E_v$ in the {\it mesoscale} region $B_{R_0(v)\,v^{1/(n+1)}}\setminus B_S$ between the resolution formulas \eqref{basic C1 estimate} and \eqref{local C1 estimate}. To address this issue, we are compelled to develop what we have called a {\bf mesoscale flatness criterion} for hypersurfaces with bounded mean curvature. This kind of statement is qualitatively novel with respect to the flatness criteria typically used in the study of blowups and blowdowns of minimal surfaces -- although it is clearly related to those tools at a purely technical level -- and holds promise for applications to other geometric variational problems. In the study of the exterior isoperimetric problem, it allows us to prove the existence of positive constants $v_0$ and $R_1$, depending on $n$ and $W$ only, such that if $v>v_0$ and $E_v$ is a minimizer of $\psi_W(v)$, then
\begin{eqnarray} \nonumber
&&\mbox{$(\partial E_v)\cap \big(B_{R_1\,v^{1/(n+1)}}\setminus B_{R_2}\big)\subset$ a $C^1$-small normal graph over $\partial F$},
\\\label{main C1 estimate}
&&\mbox{where $F$ is optimal for the isoperimetric residue $\mathcal{R}(W)$ of $W$}\,.
\end{eqnarray}
The key difference between \eqref{local C1 estimate} and \eqref{main C1 estimate} is that the domain of resolution given in \eqref{main C1 estimate} {\it overlaps} with that of \eqref{basic C1 estimate}: indeed, $R_0(v)\to0^+$ as $v\to\infty$ implies that $R_0(v)\,v^{1/(n+1)}< R_1\,v^{1/(n+1)}$ for $v>v_0$. As a by-product of this overlapping and of the graphicality of $\partial F$ outside of $B_{R_2}$, we deduce that {\it boundaries of exterior isoperimetric sets, outside of $B_{R_2}$, are diffeomorphic to $n$-dimensional disks}. Finally, when $n\le 6$, and maximizers $F$ of $\mathcal{R}(W)$ have locally smooth boundaries in $\Omega$, \eqref{main C1 estimate} can be propagated up to the obstacle itself; see Remark \ref{remark up to the obstacle} below.
Concerning the rest of this introduction: In section \ref{subsection isoperimetric residues} we present our analysis of isoperimetric residues, see Theorem \ref{thm main of residue}. In section \ref{subsection resolution of isop sets} we gather all our results concerning exterior isoperimetric sets with large volumes, see Theorem \ref{thm main psi}. Finally, we present the mesoscale flatness criterion in section \ref{subsection mesoscale flatness criterion intro} and the organization of the paper in section \ref{subsection organization}.
\subsection{Isoperimetric residues}\label{subsection isoperimetric residues} To define $\mathcal{R}(W)$ we introduce the class
\[
\mathcal F
\]
of those pairs $(F,\nu)$ with $\nu\in\SS^n$ ($=$ the unit sphere of $\mathbb{R}^{n+1}$) and $F\subset\mathbb{R}^{n+1}$ a set of locally finite perimeter in $\Omega$ (i.e., $P(F;\Omega')<\infty$ for every $\Omega'\subset\subset\Omega$), contained in a slab around $\nu^\perp=\{x:x\cdot\nu=0\}$, and whose boundary (see Remark \ref{remark boundaries in RW} below) has full projection over $\nu^\perp$ itself: i.e., for some $\a,\b\in\mathbb{R}$,
\begin{eqnarray}\label{def Sigma nu 1}
&&\partial F\subset \big\{x:\a< x\cdot\nu<\b\big\}\,,
\\\label{def Sigma nu 2}
&&\mathbf{p}_{\nu^\perp}(\partial F)=\nu^\perp:=\big\{x:x\cdot\nu=0\big\}\,,
\end{eqnarray}
where $\mathbf{p}_{\nu^\perp}(x)=x-(x\cdot\nu)\,\nu$, $x\in\mathbb{R}^{n+1}$. Given a compact set $W$, we define the {\bf residual perimeter functional}, ${\rm res}_W:\mathcal F\to\mathbb{R}\cup\{\pm\infty\}$, by
\[
{\rm res}_W(F,\nu)=\varlimsup_{R\to\infty}\omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)\,,\qquad (F,\nu)\in\mathcal F\,,
\]
where $\textbf{\textup{C}}_R^\nu=\{x\in\mathbb{R}^{n+1}:|\mathbf{p}_{\nu^\perp}(x)|<R\}$ denotes the (unbounded) cylinder of radius $R$ with axis along $\nu$ -- and where the limsup is actually a monotone decreasing limit thanks to \eqref{def Sigma nu 1} and \eqref{def Sigma nu 2} (see \eqref{perimeter is decreasing} below for a proof). For a reasonably ``well-behaved'' $F$, e.g. if $\partial F$ is the graph of a Lipschitz function over $\nu^\perp$, $\omega_n\,R^n$ is the (obstacle-independent) leading order term of the expansion of $P(F;\textbf{\textup{C}}_R^\nu\setminus W)$ as $R\to\infty$, while ${\rm res}_W(F,\nu)$ is expected to capture the first obstacle-dependent ``residual perimeter'' contribution of $P(F;\textbf{\textup{C}}_R^\nu\setminus W)$ as $R\to\infty$. The {\bf isoperimetric residue} of $W$ is then defined by maximizing ${\rm res}_W$ over $\mathcal F$, so that
\begin{equation}
\label{def RW}
\mathcal{R}(W)=\sup_{(F,\nu)\in\mathcal F}\,{\rm res}_W(F,\nu)\,;
\end{equation}
see
\begin{figure}
\input{fnu.pstex_t}\caption{\small{If $(F,\nu)\in\mathcal F$ then $F$ is contained in a slab around $\nu^\perp$ and is such that $\partial F$ has full projection over $\nu^\perp$. Only the behavior of $\partial F$ outside $W$ matters in computing ${\rm res}_W(F,\nu)$. The perimeter of $F$ in $\mathbf{C}_R^\nu\setminus W$ (depicted as a bold line) is compared to $\omega_n\,R^n$ ($=$ the perimeter of a half-space orthogonal to $\nu$ in $\mathbf{C}_R^\nu$); the corresponding ``residual'' perimeter as $R\to\infty$ is ${\rm res}_W(F,\nu)$.}}\label{fig fnu}
\end{figure}
Figure \ref{fig fnu}. Clearly $\mathcal{R}(\l\,W)=\l^n\,\mathcal{R}(W)$ if $\l>0$, and $\mathcal{R}(W)$ is trapped between the areas of the largest hyperplane section and directional projection of $W$, see \eqref{R larger than S} below. In the simple case when $n=1$ and $W$ is connected, $\mathcal{R}(W)={\rm diam}\,(W)$ by \eqref{characterization 1} and \eqref{characterization 3} below, although, in general, $\mathcal{R}(W)$ does not seem to admit a simple characterization, and it is finely tuned to the near-to-the-obstacle behavior of ``plane-like'' minimal surfaces with free boundary on $W$. Our first main result collects these (and other) properties of isoperimetric residues and of their maximizers.
\begin{theorem}[Isoperimetric residues]\label{thm main of residue} If $W\subset\mathbb{R}^{n+1}$ is compact, then there are $R_2$ and $C_0$ positive and depending on $W$ with the following property.
\noindent {\bf (i):} If $\mathcal{S}(W)=\sup\{\H^n(W\cap\Pi):\mbox{$\Pi$ is a hyperplane in $\mathbb{R}^{n+1}$}\}$ and $\P(W)=\sup\{\H^n(\mathbf{p}_{\nu^{\perp}}(W)):\nu\in\SS^n\}$, then we have
\begin{eqnarray}
\label{R larger than S}
\mathcal{S}(W)\le \mathcal{R}(W)\le \P(W)\,.
\end{eqnarray}
\noindent {\bf (ii):} The family ${\rm Max}[\mathcal{R}(W)]$ of maximizers of $\mathcal{R}(W)$ is non-empty. If $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, then $F$ is a {\bf perimeter minimizer with free boundary in $\Omega=\mathbb{R}^{n+1}\setminus W$}, i.e.
\begin{equation}\label{local perimeter minimizer}
P(F;\Omega \cap B) \leq P(G;\Omega \cap B)\,,\qquad\mbox{$\forall F\Delta G\subset\subset B$, $B$ a ball}\,;
\end{equation}
and if $\mathcal{R}(W)>0$, then $\partial F$ is contained in the smallest slab $\{x:\a\le x\cdot\nu\le\b\}$ containing $W$, and there are $a,b\in\mathbb{R}$, $c\in\nu^\perp$ with $\max\{|a|,|b|,|c|\}\le C_0$ and $f\in C^\infty(\nu^\perp)$ such that
\begin{equation}
\label{main residue graphicality of F}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(\partial F) \setminus \textbf{\textup{C}}^\nu_{R_2}=\big\{x+f(x)\,\nu:x\in\nu^\perp\,,|x|>R_2\big\}\,,
\end{equation}
\begin{eqnarray}\nonumber
&&f(x)=a\,,\hspace{5.4cm} (n=1)
\\\label{asymptotics of F}
&&\Big|f(x)-\Big(a+\frac{b}{|x|^{n-2}}+\frac{c\cdot x}{|x|^n}\Big)\Big|\le\frac{C_0}{|x|^n}\,,\qquad (n\ge 2)
\\\nonumber
&&\max\big\{|x|^{n-1}\,|\nabla f(x)|,|x|^n\,|\nabla^2f(x)|\big\}\le C_0\,,\qquad\forall x\in\nu^\perp\,,|x|>R_2\,.
\end{eqnarray}
\noindent {\bf (iii):} At fixed diameter, isoperimetric residues are maximized by balls, i.e.
\begin{equation}\label{optimal RW}
\mathcal{R}(W) \leq \omega_n \big({\rm diam}\, W/2\big)^n\,= \mathcal{R}\big({\rm cl}\big(B_{{\rm diam}\, W /2}\big)\big)\,,
\end{equation}
where $\mathrm{cl}\,(X)$ denotes topological closure of $X\subset\mathbb{R}^{n+1}$. Moreover, if equality holds in \eqref{optimal RW} and $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, then \eqref{asymptotics of F} holds with $b=0$ and $c=0$, and setting $\Pi=\big\{y:y\cdot\nu=a\big\}$, we have
\begin{equation}
\label{characterization 2}
(\partial F)\setminus W=\Pi \setminus\mathrm{cl}\,\big(B_{{\rm diam}\, W/2}(x)\big)\,,
\end{equation}
for some $x\in\Pi$. Finally, equality holds in \eqref{optimal RW} if and only if there are a hyperplane $\Pi$ and a point $x\in\Pi$ such that
\begin{eqnarray}\label{characterization 1}
\partial B_{{\rm diam}\, W/2}(x) \cap \Pi\subset W\,,
\end{eqnarray}
i.e., $W$ contains an $(n-1)$-dimensional sphere of diameter ${\rm diam}\,(W)$, and
\begin{eqnarray}
\label{characterization 3}
&&\mbox{$\Omega\setminus \big(\Pi \setminus \mathrm{cl}\,\big(B_{{\rm diam}\, W/2}(x)\big)\big)$}
\\\nonumber
&&\mbox{has exactly two unbounded connected components}.
\end{eqnarray}
\end{theorem}
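To illustrate part (iii) with a simple computation (a sketch of ours, not part of the statement): if $W=\mathrm{cl}\,(B_{d/2})$ with $d={\rm diam}\,W$, then for any $\nu\in\SS^n$ the pair $(F,\nu)$ with $F=\{x:x\cdot\nu<0\}$ belongs to $\mathcal F$, and, for every $R>d/2$,
\[
P\big(F;\textbf{\textup{C}}_R^\nu\setminus W\big)=\H^n\big(\nu^\perp\cap\textbf{\textup{C}}_R^\nu\big)-\H^n\big(\nu^\perp\cap B_{d/2}\big)=\omega_n\,R^n-\omega_n\,(d/2)^n\,,
\]
so that ${\rm res}_W(F,\nu)=\omega_n\,(d/2)^n$; hence $\mathcal{R}(W)\ge\omega_n\,(d/2)^n$, and equality holds by \eqref{optimal RW}, in agreement with the fact that \eqref{characterization 1} and \eqref{characterization 3} hold with $\Pi=\nu^\perp$ and $x=0$.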
\begin{remark}
{\rm The assumption $\mathcal{R}(W)>0$ is quite weak: indeed, {\bf if $\mathcal{R}(W)=0$, then $W$ is purely $\H^n$-unrectifiable}; see Proposition \ref{prop RW zero} in the appendix. For the role of the topological condition \eqref{characterization 3}, see Figure
\begin{figure}\input{second.pstex_t}\caption{\small{The obstacle $W$ (depicted in grey) is obtained by removing a cylinder $\textbf{\textup{C}}_r^{e_{n+1}}$ from a ball $B_{d/2}$ with $d/2>r$. In this way $d={\rm diam}\,(W)$ and $B_{d/2}$ is the only ball such that \eqref{characterization 1} can hold. Hyperplanes $\Pi$ satisfying \eqref{characterization 1} are exactly those passing through the center of $B_{d/2}$, and intersecting $W$ in an $(n-1)$-dimensional sphere of radius $d/2$. For every such $\Pi$, $\Omega\setminus(\Pi\setminus B_{d/2})$ has exactly one unbounded connected component, and \eqref{characterization 3} does not hold.}}\label{fig second}\end{figure}
\ref{fig second}.}
\end{remark}
\begin{remark}[Regularity of isoperimetric residues]\label{remark regularity}
{\rm In the physical dimension $n=2$, and provided $\Omega$ has boundary of class $C^{1,1}$, maximizers of $\mathcal{R}(W)$ are $C^{1,1/2}$-regular up to the obstacle, and smooth away from it. More generally, condition \eqref{local perimeter minimizer} implies that $M=\mathrm{cl}\,(\Omega\cap\partial F)$ is a smooth hypersurface with boundary in $\Omega\setminus\Sigma$, where $\Sigma$ is a closed set such that $\Sigma\cap\Omega$ is empty if $1\le n\le 6$, is locally discrete in $\Omega$ if $n=7$, and is locally $\H^{n-7}$-rectifiable in $\Omega$ if $n\ge 8$; see, e.g. \cite[Part III]{maggiBOOK}, \cite{nabervaltortaJEMS}. Of course, by \eqref{main residue graphicality of F}, $\Sigma\setminus B_{R_2}=\emptyset$ in every dimension. Moreover, justifying the initial claim concerning the case $n=2$, if we assume that $\Omega$ is an open set with $C^{1,1}$-boundary, then $M$ is a $C^{1,1/2}$-hypersurface with boundary in $\mathbb{R}^{n+1}\setminus\Sigma$, with boundary contained in $\partial\Omega$, $\Sigma\cap\partial\Omega$ is $\H^{n-3+\varepsilon}$-negligible for every $\varepsilon>0$, and Young's law $\nu_F\cdot\nu_\Omega=0$ holds on $(M\cap\partial\Omega)\setminus\Sigma$; see, e.g. \cite{gruter,gruterjost,dephilippismaggiCAP-ARMA,dephilippismaggiCAP-CRELLE}.}
\end{remark}
\begin{remark}
{\rm An interesting open direction is finding additional geometric information on $\mathcal{R}(W)$, e.g. in the class of convex obstacles.}
\end{remark}
\begin{remark}[Normalization of competitors]\label{remark boundaries in RW}
{\rm We adopt the convention that any set of locally finite perimeter $F$ in $\Omega$ open is tacitly modified by a set of zero Lebesgue measure so as to ensure $\Omega\cap\partial F=\Omega\cap\mathrm{cl}\,(\partial^*F)$, where $\partial^*F$ is the reduced boundary of $F$ in $\Omega$; see \cite[Proposition 12.19]{maggiBOOK}. Under this normalization, local perimeter minimality conditions like \eqref{local perimeter minimizer} (or \eqref{uniform lambda minimality} below) imply that $F\cap\Omega$ is open in $\mathbb{R}^{n+1}$; see, e.g. \cite[Lemma 2.16]{dephilippismaggiCAP-ARMA}.}
\end{remark}
\subsection{Resolution of exterior isoperimetric sets}\label{subsection resolution of isop sets} Denoting the family of minimizers of $\psi_W(v)$ by ${\rm Min}[\psi_W(v)]$, our second main result is as follows:
\begin{theorem}[Resolution of exterior isoperimetric sets]\label{thm main psi}
If $W\subset\mathbb{R}^{n+1}$ is compact, then ${\rm Min}[\psi_W(v)]\ne\emptyset\,\,\forall v>0$. Moreover, if $\mathcal{R}(W)>0$, then
\begin{equation}
\label{main asymptotic expansion}
\lim_{v\to\infty}\psi_W(v)-P(B^{(v)})=-\mathcal{R}(W)\,,
\end{equation}
and, depending on $n$ and $W$ only, there are $v_0$, $C_0$, $R_1$, and $R_2$ positive, and $R_0(v)$ with $R_0(v)\to 0^+$, $R_0(v)\,v^{1/(n+1)}\to\infty$ as $v\to\infty$, such that, if $E_v\in {\rm Min}[\psi_W(v)]$ and $v>v_0$, then:
\noindent {\bf (i)}: $E_v$ determines $x\in\mathbb{R}^{n+1}$ and $u\in C^\infty(\partial B^{(1)})$ such that
\begin{eqnarray}
\label{nonsharp rates}
&&\frac{|E_v\Delta B^{(v)}(x)|}v \le \frac{C_0}{v^{1/[2(n+1)]}}\,,
\\\label{x and u of Ev}
&&(\partial E_v)\setminus B_{R_0(v)\,v^{1/(n+1)}}
\\\nonumber
&&=\Big\{y+v^{1/(n+1)}\,u\Big(\frac{y-x}{v^{1/(n+1)}}\Big)\,\nu_{B^{(v)}(x)}(y):y\in\partial B^{(v)}(x)\Big\}\setminus B_{R_0(v)\,v^{1/(n+1)}}\,,
\end{eqnarray}
where, for any $G\subset\mathbb{R}^{n+1}$ with locally finite perimeter, $\nu_G$ is the outer unit normal to $G$;
\noindent {\bf (ii):} $E_v$ determines $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ and $f\in C^\infty((\partial F)\setminus B_{R_2})$ with
\begin{equation}
\label{f of Ev}
(\partial E_v)\cap A_{R_2}^{R_1\,v^{1/(n+1)}}=\big\{y+f(y)\,\nu_F(y):y\in\partial F\big\}\cap A_{R_2}^{R_1\,v^{1/(n+1)}}\,;
\end{equation}
\noindent {\bf (iii):} $(\partial E_v)\setminus B_{R_2}$ is diffeomorphic to an $n$-dimensional disk;
\noindent {\bf (iv):} Finally, with $(x,u)$ as in \eqref{x and u of Ev} and $(F,\nu,f)$ as in \eqref{f of Ev},
\begin{eqnarray*}
&&\lim_{v\to\infty}\sup_{E_v\in
{\rm Min}[\psi_W(v)]}\Big\{\Big|\frac{|x|}{v^{1/(n+1)}}-\frac{1}{\omega_{n+1}^{1/(n+1)}}\Big|\,,\Big|\nu-\frac{x}{|x|}\Big|\,,
\|u\|_{C^1(\partial B^{(1)})}\Big\}=0\,,
\\
&&\lim_{v\to\infty}\sup_{E_v\in{\rm Min}[\psi_W(v)]}\,\|f\|_{C^1(B_M\cap\partial F)}=0\,,\hspace{2cm}\forall M>R_2\,.
\end{eqnarray*}
\end{theorem}
\begin{remark}[Resolution up to the obstacle]\label{remark up to the obstacle}
{\rm By Remark \ref{remark regularity} and a covering argument, if $n\le 6$, $\delta>0$, and $v>v_0(n,W,\delta)$, then \eqref{f of Ev} holds with $B_{R_1\,v^{1/(n+1)}}\setminus I_\delta(W)$ in place of $B_{R_1\,v^{1/(n+1)}}\setminus B_{R_2}$, where $I_\delta(W)$ is the open $\delta$-neighborhood of $W$. Similarly, when $\partial\Omega\in C^{1,1}$ and $n=2$ (and thus $\Omega\cap\partial F$ is regular up to the obstacle), we can find $v_0$ (depending on $n$ and $W$ only) such that \eqref{f of Ev} holds with $B_{R_1\,v^{1/(n+1)}}\cap \Omega$ in place of $B_{R_1\,v^{1/(n+1)}}\setminus B_{R_2}$, that is, graphicality over $\partial F$ holds up to the obstacle itself.}
\end{remark}
\begin{remark}
{\rm If $W$ is convex and $J$ is a half-space, then $\psi_W(v)\ge\psi_J(v)$ for every $v>0$, with equality for $v>0$ if and only if $\partial W$ contains a flat facet supporting a half-ball of volume $v$; see \cite{choeghomiritore,fuscomoriniCONVEX}. Since $\psi_J(v)=P(B^{(v)})/2^{1/(n+1)}$ and $\psi_W(v)-P(B^{(v)})\to-\mathcal{R}(W)$ as $v\to\infty$, the bound $\psi_W(v)\ge\psi_J(v)$ is far from optimal if $v$ is large. Are there stronger global bounds than $\psi_W\ge\psi_J$ on convex obstacles? Similarly, it would be interesting to quantify the convergence towards $\mathcal{R}(W)$ in \eqref{main asymptotic expansion}, or even that of $\partial E_v$ towards $\partial B^{(v)}$ and $\partial F$ (where \eqref{nonsharp rates} is not expected to be sharp).}
\end{remark}
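The identity $\psi_J(v)=P(B^{(v)})/2^{1/(n+1)}$ used in the previous remark reflects the optimality of half-balls for the half-space obstacle $J$; the following numerical sketch (ours, not part of the paper) verifies the scaling identity from the standard formulas for $\omega_{n+1}$:

```python
import math

def unit_ball_volume(d):
    """Volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def ball_perimeter(n, v):
    """P(B^{(v)}): surface area of the ball of volume v in R^{n+1}."""
    omega = unit_ball_volume(n + 1)
    r = (v / omega) ** (1.0 / (n + 1))
    return (n + 1) * omega * r ** n

def half_ball_free_perimeter(n, v):
    """Free (spherical) perimeter of the half-ball of volume v resting on
    the boundary of a half-space J in R^{n+1}; this equals psi_J(v)."""
    omega = unit_ball_volume(n + 1)
    r = (2 * v / omega) ** (1.0 / (n + 1))  # omega * r^{n+1} / 2 = v
    return (n + 1) * omega * r ** n / 2

# psi_J(v) = P(B^{(v)}) / 2^{1/(n+1)} for every n and v
for n in (1, 2, 3):
    for v in (1.0, 10.0):
        lhs = half_ball_free_perimeter(n, v)
        rhs = ball_perimeter(n, v) / 2 ** (1.0 / (n + 1))
        assert abs(lhs - rhs) < 1e-10 * rhs
```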
\subsection{The mesoscale flatness criterion}\label{subsection mesoscale flatness criterion intro} We work with hypersurfaces $M$ whose mean curvature is bounded by $\Lambda\ge0$ in an annulus $B_{1/\Lambda}\setminus\overline{B}_R$, $R\in(0,1/\Lambda)$. Even without information on $M$ inside $B_R$ (where $M$ could have a non-trivial boundary, or topology, etc.), the classical proof of the monotonicity formula can be adapted to show the monotone increasing character on $r\in(R,1/\Lambda)$ of
\begin{eqnarray}\nonumber
\Theta_{M,R,\Lambda}(r)&=&
\frac{\H^n\big(M\cap(B_r\setminus B_R)\big)}{r^n}+\frac{R}{n\,r^n}\,\int_{M\cap\partial B_R}\frac{|x^{TM}|}{|x|}\,d\H^{n-1}
\\\label{theta smooth}
&&
+\Lambda\,\int_R^r\frac{\H^n\big(M\cap(B_\rho\setminus B_R)\big)}{\rho^n}\,d\rho\,,\,\,\,\,
\end{eqnarray}
(here $x^{TM}={\rm proj}_{T_xM}(x)$);
moreover, if $\Theta_{M,R,\Lambda}$ is constant over $(a,b)\subset(R,1/\Lambda)$, then $M\cap(B_b\setminus\overline{B}_a)$ is a cone. Since the constant density value corresponding to $M=H\setminus B_R$, $H$ a hyperplane through the origin, is $\omega_n$ (as a result of a double cancellation which also involves the ``boundary term'' in $\Theta_{H\setminus B_R,R,0}$), we consider the {\bf area deficit}
\begin{equation}
\label{def of delta}
\delta_{M,R,\Lambda}(r)=\omega_n-\Theta_{M,R,\Lambda}(r)\,,\qquad r\in(R,1/\Lambda)\,,
\end{equation}
which defines a decreasing quantity on $(R,1/\Lambda)$. Here we use the term ``deficit'', rather than the more usual term ``excess'', since $\delta_{M,R,\Lambda}$ does not necessarily have non-negative sign (which is one of the crucial properties of ``excess quantities'' typically used in $\varepsilon$-regularity theorems, see, e.g., \cite[Lemma 22.11]{maggiBOOK}). Recalling that $A_r^s=B_s\setminus\mathrm{cl}\,(B_r)$ if $s>r>0$, we are now ready to state the following ``smooth version'' of our mesoscale flatness criterion (see Theorem \ref{theorem mesoscale criterion} below for the varifold version).
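Before the statement, the normalization value $\omega_n$ for $M=H\setminus B_R$ (the ``double cancellation'' mentioned above) can be verified directly; a minimal numerical sketch of ours, not part of the argument:

```python
import math

def theta_plane(n, R, r):
    """Theta_{M,R,Lambda}(r) for Lambda = 0 and M = H minus B_R,
    with H a hyperplane through the origin in R^{n+1}."""
    omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # volume of unit ball in R^n
    area = omega_n * (r ** n - R ** n)          # H^n(M in (B_r \ B_R))
    # on H and ∂B_R we have x^{TM} = x, so |x^{TM}|/|x| = 1 and the boundary
    # integral equals H^{n-1}(H on ∂B_R) = n * omega_n * R^{n-1}
    bdry = (R / (n * r ** n)) * n * omega_n * R ** (n - 1)
    return area / r ** n + bdry                 # Lambda = 0: no third term

# double cancellation: Theta is identically omega_n, independently of r and R
for n in (1, 2, 3):
    omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    for r in (1.0, 2.0, 7.5):
        assert abs(theta_plane(n, 0.5, r) - omega_n) < 1e-12
```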
\begin{theorem}[Mesoscale flatness criterion (smooth version)]\label{theorem mesoscale smooth}
If $n\ge 2$, $\Gamma\ge 0$, and $\sigma>0$, then there are $M_0$ and $\varepsilon_0$ positive and depending on $n$, $\Gamma$ and $\sigma$ only, with the following property. Let $\Lambda\ge0$, $R\in(0,1/\Lambda)$, and $M$ be a smooth hypersurface with mean curvature bounded by $\Lambda$ in $A^{1/\Lambda}_R$, and with
\begin{equation}
\label{meso intro bounds Gamma}
\H^{n-1}\big(M\cap\partial B_{R}\big)\le\Gamma\,R^{n-1}\,,
\quad
\sup_{\rho\in(R,1/\Lambda)}\frac{\H^n\big(M\cap(B_\rho\setminus B_R)\big)}{\rho^n}\le\Gamma\,.
\end{equation}
If there is $s>0$ such that
\begin{eqnarray}\label{meso intro range of s}
\max\{M_0,64\}\,R<s<\frac{\varepsilon_0}{4\,\Lambda}\,,\qquad \H^n\big(M\cap A_{s/6}^{s/4}\big)>0\,,
\end{eqnarray}
if, for a hyperplane $H$ with $0\in H$ and unit normal $\nu_H$, the flatness conditions
\begin{equation}
\label{meso intro flat hp}
|\delta_{M,R,\Lambda}(s/8)|\le \varepsilon_0\,,\qquad
\frac1{s^n}\,\int_{M\cap A_{s/2}^{s/8}}\,\Big(\frac{|y\cdot\nu_H|}{|\mathbf{p}_H y|}\Big)^2\,d\,\H^n_y\le\varepsilon_0\,,
\end{equation}
hold (with $\mathbf{p}_H y=y-(y\cdot\nu_H)\,\nu_H$), and if, setting
\begin{equation}
\label{meso intro propagation}
R_*=\sup\Big\{\rho\ge\frac{s}8: \delta_{M,R,\Lambda}(\rho)\ge -\varepsilon_0\Big\}\,,\qquad S_*=\min\Big\{R_*,\frac{\varepsilon_0}{\Lambda}\Big\}\,,
\end{equation}
we have $R_*>4\,s$ (and thus $S_*>4\,s$), then
\begin{equation}
\label{meso intro conclusion}
\begin{split}
&M\cap A_{s/16}^{S_*/32}=\big\{x+f(x)\,\nu_K: x\in K\big\}\cap A_{s/16}^{S_*/32}\,,
\\
&\sup\big\{|x|^{-1}\,|f(x)|+|\nabla f(x)|:x\in K\big\}\le C(n)\,\sigma\,,
\end{split}
\end{equation}
for a hyperplane $K$ with $0\in K$ and unit normal $\nu_K$, and for $f\in C^1(K)$.
\end{theorem}
\begin{remark}[Structure of the statement]\label{remark structure of the statement}
{\rm Assumption \eqref{meso intro range of s} implicitly requires $R$ to be sufficiently small in terms of $1/\Lambda$, as it introduces a mesoscale $s$ with $s<\!\!<1/\Lambda$ and $s>\!\!>R$. Assumption \eqref{meso intro flat hp} expresses the flatness of $M$ at the mesoscale $s$, both in terms of its area deficit, and in terms of its ``angular flatness'' with respect to a hyperplane through the origin $H$ (notice that this is different from the $L^2$-excess often used in similar contexts, which would correspond to taking $(|y\cdot\nu_H|/s)^2$ in place of $(|y\cdot\nu_H|/|\mathbf{p}_H y|)^2$; see, e.g., \cite[Definition 1.12(i)]{Simon83}). The final key assumption, $R_*>4\,s$, expresses the requirement that the area deficit does not decrease too abruptly, and stays above $-\varepsilon_0$ at least up to the scale $4\,s$. Graphicality with respect to (a possibly different) hyperplane $K$ is then inferred between a scale $s/16$ (i.e., comparable to $s$), and up to a scale $S_*/32$, which can be as large as the decay of the area deficit allows (potentially up to $\varepsilon_0/(32\,\Lambda)$ if $R_*=\infty$), but in any case not too large with respect to $1/\Lambda$.}
\end{remark}
\begin{remark}[Sharpness of the statement]\label{remark unduloids}
{\rm The statement is sharp in the sense that for a surface ``with bounded mean curvature and non-trivial topology inside a hole'', flatness can only be established on a mesoscale which is both large with respect to the size of the hole and small with respect to the size of the inverse mean curvature. An example is provided by unduloids $M_\varepsilon$ with waist size $\varepsilon$ and mean curvature $n$ in $\mathbb{R}^{n+1}$; see
\begin{figure}\input{undo.pstex_t}
\caption{\small{A half-period of an unduloid with mean curvature $n$ and waist size $\varepsilon$ in $\mathbb{R}^{n+1}$. By \eqref{def of veps}, the flatness of $M_\varepsilon$ is no smaller than ${\rm O}(\varepsilon^{2(n-1)/n})$, and is exactly ${\rm O}(\varepsilon^{2(n-1)/n})$ on an annulus sitting in the mesoscale ${\rm O}(\varepsilon^{(n-1)/n})$. This mesoscale is both very large with respect to waist size $\varepsilon$, and very small with respect to the size of the inverse mean curvature, which is order one.}}
\label{fig undo}
\end{figure}
Figure \ref{fig undo}. A ``half-period'' of $M_\varepsilon$ is the graph $\{x+f_\varepsilon(x)\,e_{n+1}:x\in\mathbb{R}^n\,,\varepsilon<|x|<R_\varepsilon\}$ of
\begin{equation}
\label{def of veps}
f_\varepsilon(x)=\int_\varepsilon^{|x|}\Big\{\Big(\frac{r^{n-1}}{r^n-\varepsilon^n+\varepsilon^{n-1}}\Big)^2-1\Big\}^{-1/2}\,dr\,,\qquad\varepsilon<|x|<R_\varepsilon\,,
\end{equation}
where $\varepsilon$ and $R_\varepsilon$ are the only solutions of $r^{n-1}=r^n-\varepsilon^n+\varepsilon^{n-1}$. Clearly $f_\varepsilon$ solves $-{\rm div}\,(\nabla f_\varepsilon/\sqrt{1+|\nabla f_\varepsilon|^2})=n$ with $f_\varepsilon=0$, $|\nabla f_\varepsilon|=+\infty$ on $\{|x|=\varepsilon\}$, and $|\nabla f_\varepsilon|=+\infty$ on $\{|x|=R_\varepsilon\}$, where $R_\varepsilon=1-{\rm O}(\varepsilon^{n-1})$; moreover, $\min|\nabla f_\varepsilon|$ is achieved at $r={\rm O}(\varepsilon^{(n-1)/n})$, and if $r\in(a\,\varepsilon^{(n-1)/n},b\,\varepsilon^{(n-1)/n})$ for some $b>a>0$, then $|\nabla f_\varepsilon|^2={\rm O}_{a,b}(\varepsilon^{2(n-1)/n})$. Thus, the horizontal flatness of $M_\varepsilon$ is no smaller than ${\rm O}(\varepsilon^{2(n-1)/n})$, and has that exact order on a scale which is both very large with respect to the hole ($\varepsilon^{(n-1)/n}>\!\!\!>\varepsilon$) and very small with respect to the inverse mean curvature ($\varepsilon^{(n-1)/n}<\!\!\!<1$).}
\end{remark}
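The scaling claims in the previous remark can be checked directly from \eqref{def of veps}; the following numerical sketch (ours, not part of the paper) evaluates the slope of $f_\varepsilon$ at the critical radius $r^n=(n-1)(\varepsilon^{n-1}-\varepsilon^n)$ of the integrand:

```python
import math

def unduloid_slope(n, eps, r):
    """|f_eps'(r)| for the unduloid profile f_eps of the remark above."""
    X = r ** (n - 1) / (r ** n - eps ** n + eps ** (n - 1))
    return 1.0 / math.sqrt(X * X - 1.0)

def min_slope(n, eps):
    # |f_eps'| is minimal where X is maximal, i.e. at r^n = (n-1)*(eps^{n-1} - eps^n)
    a = eps ** (n - 1) - eps ** n
    r_star = ((n - 1) * a) ** (1.0 / n)
    return r_star, unduloid_slope(n, eps, r_star)

# the argmin sits at the mesoscale eps^{(n-1)/n}, and the squared slope there
# is O(eps^{2(n-1)/n}), matching the flatness order quoted in the remark
n = 2
r1, s1 = min_slope(n, 1e-4)
r2, s2 = min_slope(n, 1e-6)
assert abs(r1 / 1e-4 ** 0.5 - 1.0) < 0.01            # r* ~ eps^{1/2} when n = 2
assert abs((s1 ** 2 / s2 ** 2) / 100.0 - 1.0) < 0.05 # squared slope scales like eps
```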
\begin{remark}[On the application to $\psi_W(v)$]
{\rm Exterior isoperimetric sets $E_v$ with large volume $v$ have small constant mean curvature of order $\Lambda=\Lambda_0(n,W)/v^{1/(n+1)}$. We will work with ``holes'' of size $R=R_3(n,W)$, for some $R_3$ sufficiently large with respect to the radius $R_2$ appearing in Theorem \ref{thm main of residue}-(ii), and determined through the sharp decay rates \eqref{asymptotics of F}. The hyperplane $H$ such that the second condition in \eqref{meso intro flat hp} holds is $H=\nu^\perp$ for $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ such that $E_v$ is close to $F$. The decay properties of $F$ towards $\{x:x\cdot\nu=a\}$, the $C^1$-proximity of $\partial E_v$ to $\partial B^{(v)}(x)$ for $|x|\approx(v/\omega_{n+1})^{1/(n+1)}$, and the $C^1$-proximity of $\partial E_v$ to $\partial F$ on bounded annuli of the form $A^{2\,R_3}_{R_2}$ are used in checking that \eqref{meso intro bounds Gamma} holds with $\Gamma=\Gamma(n,W)$, that $E_v$ is flat in the sense of \eqref{meso intro flat hp}, and, most importantly, that the area deficit $\delta_{M,R,\Lambda}$ of $M=(\partial E_v)\setminus B_{R_3}$ lies above $-\varepsilon_0$ up to scale $r={\rm O}(v^{1/(n+1)})$ (which is the key information to deduce $R_*\approx1/\Lambda$), and thus obtain overlapping domains of resolution in terms of $\partial B^{(v)}(x)$ and $\partial F$.}
\end{remark}
\begin{remark}
{\rm While Theorem \ref{theorem mesoscale smooth} seems clearly applicable to other problems, there are situations where one may need to develop considerably finer ``mesoscale flatness criteria''. For example, consider the problem of ``resolving'' almost CMC boundaries undergoing bubbling \cite{ciraolomaggi2017,delgadinomaggimihailaneumayer,delgadinomaggi}. When the oscillation of the mean curvature around a constant $\Lambda$ is small, such boundaries are close to finite unions of mutually tangent spheres of radius $n/\Lambda$, and can be covered by $C^1$-small normal graphs over such spheres away from their tangency points up to distance $\varepsilon/\Lambda$, with $\varepsilon=\varepsilon(n)$, and provided the mean curvature oscillation is small in terms of $\varepsilon$. For propagating flatness up to a distance directly related to the oscillation of the mean curvature (which, in turn, seems a key step in addressing the conjecture, based on \cite{Scho83,bernsteinmaggi}, according to which, near limit tangency points, boundaries with almost constant mean curvature converge to catenoids), one would need a version of Theorem \ref{theorem mesoscale smooth} for ``double'' spherical graphs; in the setting of blowup/blowdown theorems, this would be similar to passing to the harder case of multiplicity larger than one.}
\end{remark}
\begin{remark}[Comparison with blowup/blowdown results]
{\rm From the technical viewpoint, Theorem \ref{theorem mesoscale smooth} fits into the framework set up by Allard and Almgren in \cite{allardalmgrenRADIAL} for the study of blowups and blowdowns of minimal surfaces with tangent integrable cones. At the same time, as exemplified by Remark \ref{remark unduloids}, Theorem \ref{theorem mesoscale smooth} really points in a different direction, since it pertains to situations where neither blowup nor blowdown limits make sense.
Another interesting point is that, in \cite{allardalmgrenRADIAL}, the area deficit $\delta_{M,R,\Lambda}$ is considered with a sign, non-positive for blowups, and non-negative for blowdowns, see \cite[Theorem 5.9(4), Theorem 9.6(4)]{allardalmgrenRADIAL}. These sign restrictions are used there to deduce continuous decay towards limit tangent cones, and, thus, their uniqueness; however, they {\it are not needed in propagating graphicality}; what is more, dismissing them is {\it crucial} for obtaining overlapping domains of resolution in \eqref{basic C1 estimate} and \eqref{main C1 estimate}; see also Remark \ref{remark negative deficit}.}
\end{remark}
\begin{remark}[Extension to general minimal cones]\label{remark extensions}
{\rm Proving Theorem \ref{theorem mesoscale smooth} in higher codimension and with arbitrary {\it integrable} minimal cones should be possible with essentially the same proof presented here. We do not pursue this extension because, first, only the case of hypersurfaces and hyperplanes is needed in studying $\psi_W(v)$; and, second, in going for generality, one should work in the framework set up by Simon in \cite{Simon83,SimonMontecatini,simonETH}, which, at variance with the simpler Allard--Almgren framework used here, allows one to dispense with the integrability assumption. In this direction, we notice that Theorem \ref{theorem mesoscale smooth} with $\Lambda=0$ and $R_*=+\infty$ is a blowdown result for exterior minimal surfaces (see also Theorem \ref{theorem mesoscale criterion}-(ii, iii)). A blowdown result for exterior minimal surfaces is outside the scope of \cite[Theorem 9.6]{allardalmgrenRADIAL}, which pertains to {\it entire} minimal surfaces, but it is claimed, with a sketch of proof, on \cite[Page 269]{SimonMontecatini} as a modification of \cite[Theorem 5.5, $m<0$]{SimonMontecatini}. It should be mentioned that, to cover the case of exterior minimal surfaces, an additional term of the form $C\int_{\Sigma}(\dot u(t))^{2}$ should be added on the right side of assumption \cite[5.3, $m<0$]{SimonMontecatini}. This additional term seems not to cause difficulties with the rest of the arguments leading to \cite[Theorem 5.5, $m<0$]{SimonMontecatini}. Thus Simon's approach, in addition to giving the blowdown analysis of exterior minimal surfaces, should also be viable for generalizing our mesoscale flatness criterion.}
\end{remark}
\subsection{Organization of the paper}\label{subsection organization} In section \ref{section mesoscale} we prove Theorem \ref{theorem mesoscale smooth} (actually, its generalization to varifolds, i.e. Theorem \ref{theorem mesoscale criterion}). In section \ref{section existence and quantitative isoperimetry} we prove those parts of Theorem \ref{thm main psi} which follow simply by quantitative isoperimetry (i.e., those which require neither isoperimetric residues nor our mesoscale flatness analysis); see Theorem \ref{thm existence and uniform min}. Section \ref{section isoperimetric residues} is devoted to the study of isoperimetric residues and of their maximizers, and contains the proof of Theorem \ref{thm main of residue}. We also present there a statement, repeatedly used in our analysis, which summarizes some results from \cite{Scho83}; see Proposition \ref{prop schoen}. Finally, in section \ref{section resolution for exterior}, we prove the energy expansion \eqref{main asymptotic expansion} and those parts of Theorem \ref{thm main psi} left out in section \ref{section existence and quantitative isoperimetry} (i.e., statements (ii, iii, iv)). This final section is, from a certain viewpoint, the most interesting part of the paper: indeed, it is only the detailed examination of those arguments that clearly illustrates the degree of fine tuning of the preliminary analysis of exterior isoperimetric sets and of maximizers of isoperimetric residues which is needed in order to allow for the application of the mesoscale flatness criterion.
\noindent {\bf Acknowledgements:} Supported by NSF-DMS RTG 1840314, NSF-DMS FRG 1854344, and NSF-DMS 2000034. We thank William Allard and Leon Simon for clarifications on \cite{allardalmgrenRADIAL} and \cite{SimonMontecatini} respectively, and Luca Spolaor for his comments on some preliminary drafts of this work.
\section{A mesoscale flatness criterion for varifolds}\label{section mesoscale} In section \ref{subsection statement meso} we introduce the class $\mathcal{V}_n(\Lambda,R,S)$ of varifolds used to reformulate Theorem \ref{theorem mesoscale smooth}, see Theorem \ref{theorem mesoscale criterion}. In sections \ref{subsection spherical graphs}-\ref{subsection energy estimates on annuli} we present two reparametrization lemmas (Lemma \ref{lemma step one sigma} and Lemma \ref{lemma step one}) and some ``energy estimates'' (Theorem \ref{theorem 7.15AA main estimate lambda}) for spherical graphs; in section \ref{subsection monotonicity of exterior varifolds} we state the monotonicity formula in $\mathcal{V}_n(\Lambda,R,S)$ and some energy estimates involving the monotonicity gap;
in section \ref{subsection mesoscale flatness criterion}, we prove Theorem \ref{theorem mesoscale criterion}.
\subsection{Statement of the criterion}\label{subsection statement meso} Given an $n$-dimensional integer rectifiable varifold $V=\mathbf{var}\,(M,\theta)$ in $\mathbb{R}^{n+1}$, defined by a locally $\H^n$-rectifiable set $M$, and by a multiplicity function $\theta:M\to\mathbb{N}$ (see \cite{SimonLN}), we denote by $\|V\|=\theta\,\H^n\llcorner M$ the weight of $V$, and by $\delta V$ the first variation of $V$, so that
$\delta V(X)=\int\,{\rm div}\,^T\,X(x)\,dV(x,T)=\int_M\,{\rm div}\,^M\,X(x)\,\theta\,d\H^n_x$ for every $X\in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$. Given $S>R>0$ and $\Lambda\ge0$, we consider the family
\[
\mathcal{V}_n(\Lambda,R,S)\,,
\]
of those $n$-dimensional integral varifolds $V$ with ${\rm spt}\,V\subset\mathbb{R}^{n+1}\setminus B_R$ and
\begin{eqnarray*}
\delta V(X)=\int\,X\cdot\vec{H}\,d\|V\|+\int X\cdot\nu_V^{\rm co}\,d \,\,{\rm bd}_V\,,\qquad \forall X\in C^1_c(B_S;\mathbb{R}^{n+1})\,,
\end{eqnarray*}
holds for a Radon measure ${\rm bd}_V$ in $\mathbb{R}^{n+1}$ supported in $\partial B_R$, and Borel vector fields $\vec{H}:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ with $|\vec{H}|\le \Lambda$ and $\nu_V^{\rm co}:\partial B_R\to\mathbb{R}^{n+1}$ with $|\nu_V^{\rm co}|=1$. We let $\mathcal{M}_n(\Lambda,R,S)=\{V\in\mathcal{V}_n(\Lambda,R,S):\mbox{$V=\mathbf{var}\,(M,1)$ for $M$ smooth}\}$, that is, $M\subset \mathbb{R}^{n+1}\setminus B_R$ is a smooth hypersurface with boundary in $A_R^S$, ${\rm bdry}\,(M)\subset \partial B_R$, and $|H_M|\le\Lambda$. If $V\in\mathcal{M}_n(\Lambda,R,S)$, then $\vec{H}$ is the mean curvature vector of $M$, ${\rm bd}_V=\H^{n-1}\llcorner{\rm bdry}\,(M)$, and $\nu_V^{\rm co}$ is the outer unit conormal to $M$ along $\partial B_R$. Given $V\in\mathcal{V}_n(\Lambda,R,S)$, we define $\Theta_{V,R,\Lambda}(r)$ as
\begin{eqnarray}\label{def theta}
\frac{\|V\|(B_r\setminus B_R)}{r^n}
-\frac1{n\,r^n}\,\int x\cdot\nu^{{\rm co}}_V\,d\,{\rm bd}_V+\Lambda\,\int_R^r\,\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,d\rho\,;
\end{eqnarray}
$\Theta_{V,R,\Lambda}(r)$ is increasing for $r\in(R,S)$ (Theorem \ref{theorem 7.17AA exterior lambda}-(i) below), and equal to \eqref{theta smooth} when $V\in\mathcal{M}_n(\Lambda,R,S)$. The {\bf area deficit} of $V$ is then defined as in \eqref{def of delta}, while given a hyperplane $H$ in $\mathbb{R}^{n+1}$ with $0\in H$ we call the quantity
\begin{equation}
\label{def of omega H}
\int_{A_r^s}\,\omega_H(y)^2\,d\|V\|_y\,,\qquad \omega_H(y)={\rm arctn}\Big(\frac{|y\cdot\nu_H|}{|\mathbf{p}_H y|}\Big)\,,
\end{equation}
the {\bf angular flatness of $V$ on the annulus $A_r^s=B_s\setminus\mathrm{cl}\,(B_r)$ with respect to $H$}. (See \eqref{seeee} for the notation concerning $H$.)
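As a quick sanity check on \eqref{def theta} (this computation is illustrative and is not used elsewhere), consider the truncated hyperplane $V=\mathbf{var}\,(H\setminus B_R,1)$ with $0\in H$ and $\Lambda=0$: since $V$ is the exterior part of a cone, one expects $\Theta_{V,R,0}$ to be constant. Indeed
\[
\|V\|(B_r\setminus B_R)=\omega_n\,(r^n-R^n)\,,\qquad \nu_V^{\rm co}=-\frac{x}{R}\quad\mbox{on $H\cap\partial B_R$}\,,
\]
so that $x\cdot\nu_V^{\rm co}=-R$ and ${\rm bd}_V=\H^{n-1}\llcorner(H\cap\partial B_R)$ has total mass $n\,\omega_n\,R^{n-1}$, whence
\[
\Theta_{V,R,0}(r)=\frac{\omega_n\,(r^n-R^n)}{r^n}+\frac{R}{n\,r^n}\,n\,\omega_n\,R^{n-1}=\omega_n\,,\qquad\forall r\in(R,S)\,,
\]
in agreement with the monotonicity statement recalled above.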
\begin{theorem}[Mesoscale flatness criterion]\label{theorem mesoscale criterion}
If $n\ge 2$, $\Gamma\ge 0$, and $\sigma>0$ then there are positive constants $M_0$ and $\varepsilon_0$, depending on $n$, $\Gamma$ and $\sigma$ only, with the following property. If $\Lambda\ge0$, $R\in(0,1/\Lambda)$, $V\in\mathcal{V}_n(\Lambda,R,1/\Lambda)$,
\begin{equation}
\label{Gamma bounds}
\|{\rm bd}_V\|(\partial B_R)\le\Gamma\,R^{n-1}\,,\qquad \sup_{\rho\in(R,1/\Lambda)}\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\le\Gamma\,,
\end{equation}
and for some hyperplane $H\subset\mathbb{R}^{n+1}$ with $0\in H$ and for some $s>0$ we have
\begin{eqnarray}
\label{mesoscale bounds}
&&\frac{\varepsilon_0}{4\,\Lambda}>s>\max\{M_0,64\}\,R\,,
\\
\label{mesoscale delta small s8}
&&|\delta_{V,R,\Lambda}(s/8)|\le \varepsilon_0\,,
\\
\label{mesoscale Rstar larger s4}
&&
R_*:=\sup\Big\{\rho\ge\frac{s}8: \delta_{V,R,\Lambda}(\rho)\ge -\varepsilon_0\Big\}\ge 4\,s\,,
\\\label{mesoscale small angular flatness}
&&
\frac1{s^n}\,\int_{A_{s/8}^{s/2}}\,\omega_H^2\,d\|V\|\le\varepsilon_0\,,
\\\label{mesoscale positive area in annulus}
&&\|V\|\big(A_{s/6}^{s/4}\big)>0\,,
\end{eqnarray}
then {\bf (i):} if $S_*=\min\{R_*,\varepsilon_0/\Lambda\}<\infty$,
then there is a hyperplane $K\subset\mathbb{R}^{n+1}$ with $0\in K$ and $u\in C^1((K\cap\SS^n)\times(s/32,S_*/16))$ with
\begin{eqnarray}\nonumber
&&({\rm spt}\,V)\cap A_{s/32}^{S_*/16}=\Big\{r\,\frac{\omega+u(r,\omega)\,\nu_K}{\sqrt{1+u(r,\omega)^2}}:\omega\in K\cap\SS^n\,,r\in\big(s/32,S_*/16\big)\Big\}
\\\label{mesoscale thesis graphicality}
&&\sup_{(K\cap\SS^n)\times\big(s/32,S_*/16\big)}\Big\{|u|+|\nabla^{K\cap\SS^n}u|+|r\,\partial_ru|\Big\}\le C(n)\,\sigma\,;
\end{eqnarray}
\noindent {\bf (ii):} if $\Lambda=0$ and $\delta_{V,R,0}\ge -\varepsilon_0$ on $(s/8,\infty)$, then \eqref{mesoscale thesis graphicality} holds with $S_*=\infty$;
\noindent {\bf (iii):} if $\Lambda=0$ and $\delta_{V,R,0}\ge 0$ on $(s/8,\infty)$, then \eqref{mesoscale thesis graphicality} holds with $S_*=\infty$, and one has decay estimates, continuous in the radius, of the form
\begin{eqnarray}
\label{decay deficit for exterior minimal surfaces}
\!\!\!\!\!\!\delta_{V,R,0}(r)\!\!&\le& \!\!C(n)\,\Big(\frac{s}{r}\Big)^\a\,\delta_{V,R,0}\Big(\frac{s}8\Big)\,,\qquad\forall r>\frac{s}4\,,
\\
\label{decay flatness for exterior minimal surfaces}
\!\!\!\!\!\!\frac1{r^n}\,\int_{A_r^{2\,r}}\,\omega_K^2\,d\|V\|\!\!&\le&\!\! C(n)\,(1+\Gamma)\,\Big(\frac{s}{r}\Big)^\a\,\delta_{V,R,0}\Big(\frac{s}8\Big)\,,\qquad\forall r>\frac{s}4\,,
\end{eqnarray}
for some $\a(n)\in(0,1)$.
\end{theorem}
\begin{remark}
{\rm In Theorem \ref{theorem mesoscale criterion}, graphicality is formulated in terms of the notion of {\it spherical graph} (see section \ref{subsection spherical graphs}) which is more natural than the usual notion of ``cylindrical graph'' in setting up the iteration procedure behind Theorem \ref{theorem mesoscale criterion}. Spherical graphicality in terms of a $C^1$-small $u$ as in \eqref{mesoscale thesis graphicality} translates into cylindrical graphicality in terms of $f$ as in \eqref{meso intro conclusion} with
$f(x)/|x|\approx u(|x|,\hat{x})$ and $\nabla_{\hat x} f(x)-(f(x)/|x|)\approx |x|\,\partial_r\,u(|x|,\hat{x})$ for $x\ne 0$ and $\hat{x}=x/|x|$; see, in particular, Lemma \ref{lemma D1} in appendix \ref{appendix spherical cylindrical}.}
\end{remark}
\begin{remark}[Decay rates and negativity of the area deficit]\label{remark negative deficit}
{\rm Even when $\Lambda>0$, estimates analogous to \eqref{decay deficit for exterior minimal surfaces} and \eqref{decay flatness for exterior minimal surfaces} hold (with the same proofs) on the {\it bounded} range of radii $r$ such that
\[
s/4<r<\min\big\{R_{**}/16,\varepsilon_0/(16\,\Lambda)\big\}\,,\,\,
R_{**}=\sup\big\{\rho\ge s/8: \delta_{V,R,\Lambda}(\rho)\ge 0\big\}\,.
\]
In particular, without the possibility of sending $r\to\infty$, the resulting estimates will hold for several possible choices of $K$. In the framework provided by \cite{allardalmgrenRADIAL} the non-negativity of $\delta_{V,R,\Lambda}$ is necessary to set up continuous-in-$r$ decay estimates like \eqref{decay deficit for exterior minimal surfaces} and \eqref{decay flatness for exterior minimal surfaces} (see, e.g. \eqref{ex cont} below), but it is actually dispensable if one is just interested in the iteration scheme needed for propagating ``flat graphicality'' (see \eqref{claim is spherical graph}--\eqref{claim Tj smaller deltaj-1} below). The gain is not just theoretical (because of the obvious fact that $R_*>R_{**}$). In our application to exterior isoperimetry one can see that $R_{**}$ of $V=\mathbf{var}\,((\partial E_v)\setminus\partial B_R,1)$ with $v$ large can be at most of order ${\rm O}(v^\a)$ for some $\a<1/(n+1)$: indeed, on a scale ${\rm O}(v^{1/(n+1)})$, the smooth proximity of $\partial E_v$ to a sphere $\partial B^{(v)}(x)$ will force $\delta_{V,R,\Lambda}$ to be negative. As a consequence, the overlapping of the domain of resolution of \eqref{basic C1 estimate} and \eqref{main C1 estimate} would be lost if working with $R_{**}$ (and with it, the complete resolution of exterior isoperimetric sets).}
\end{remark}
\subsection{Spherical graphs}\label{subsection spherical graphs} We begin by setting up some notation. We denote by
\[
\H
\]
the family of the oriented hyperplanes $H\subset\mathbb{R}^{n+1}$ with $0\in H$, so that the choice of $H\in\H$ carries with it the choice of a unit normal vector $\nu_H$ to $H$. Given $H\in\H$, we set
\begin{equation}
\label{seeee}
\Sigma_H=H\cap\SS^n\,,\qquad \mathbf{p}_H:\mathbb{R}^{n+1}\to H\,,\qquad \mathbf{q}_H:\mathbb{R}^{n+1}\to H^\perp\,,
\end{equation}
for the equatorial sphere defined by $H$ on $\SS^n$ and for the orthogonal projections of $\mathbb{R}^{n+1}$ onto $H$ and onto $H^\perp=\{t\,\nu_H:t\in\mathbb{R}\}$. We set
\[
\mathcal{X}_\sigma(\Sigma_H)=\big\{u\in C^1(\S_H):\|u\|_{C^1(\S_H)}<\sigma\big\}\,,\qquad\sigma>0\,.
\]
Clearly there is $\sigma_0=\sigma_0(n)>0$ such that if $H\in\H$ and $u\in\mathcal{X}_{\sigma_0}(\S_H)$, then
\[
f_u(\omega)=\frac{\omega+u(\omega)\,\nu_H}{\sqrt{1+u(\omega)^2}}\,,\qquad \omega\in\S_H\,,
\]
defines a diffeomorphism of $\S_H$ onto a hypersurface $\S_H(u)\subset\SS^n$, namely
\begin{equation}
\label{def of spherical graph}
\S_H(u)=f_u(\Sigma_H)=\Big\{\frac{\omega+u(\omega)\,\nu_H}{\sqrt{1+u(\omega)^2}}:\omega\in\S_H\Big\}\,.
\end{equation}
We call $\S_H(u)$ a {\bf spherical graph} over $\S_H$. Exploiting the fact that $\S_H$ is a minimal hypersurface in $\SS^n$ and that if $\{\tau_i\}_i$ is a local orthonormal frame on $\S_H$ then $\nu_H\cdot\nabla_{\tau_i}\tau_j=0$, a second variation computation (see, e.g., \cite[Lemma 2.1]{EngelSpolaVelichGandT}) gives, for $u\in\mathcal{X}_\sigma(\S_H)$,
\[\Big|\H^{n-1}(\S_H(u))-n\,\omega_n-\frac12\,\int_{\S_H}\!\!\!\big(|\nabla^{\S_H} u|^2-(n-1)\,u^2\big)\Big|\le\! C(n)\,\sigma\!\int_{\S_H}\!\!\!\big(u^2+|\nabla^{\S_H} u|^2\big)\,,
\](where $n\,\omega_n=\H^{n-1}(\S_H)=\H^{n-1}(\S_H(0))$). We recall that $u\in L^2(\S_H)$ is a unit norm Jacobi field of $\S_H$ (i.e., a zero eigenvector of $\Delta^{\S_H}+(n-1)\,{\rm Id}\,$ with unit $L^2(\S_H)$-norm) if and only if there is $\tau\in\SS^n$ with $\tau\cdot\nu_H=0$ and $u(\omega)=c_0(n)\,(\omega\cdot\tau)$ ($\omega\in\S_H$) for $c_0(n)=(n/\H^{n-1}(\S_H))^{1/2}$. We denote by $E^0_{\S_H}$ the orthogonal projection operator of $L^2(\S_H)$ onto the span of the Jacobi fields of $\S_H$. The following lemma provides a way to reparameterize spherical graphs over equatorial spheres so that the projection onto the Jacobi fields is annihilated.
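The characterization of the Jacobi fields just recalled can be verified by a standard computation, which we include for the reader's convenience: since $\S_H$ is isometric to the unit sphere $\SS^{n-1}$, the restriction to $\S_H$ of a linear function $\omega\mapsto\omega\cdot\tau$ (with $\tau\in H$, $|\tau|=1$) is a degree-one spherical harmonic, so that
\[
\Delta^{\S_H}(\omega\cdot\tau)=-(n-1)\,(\omega\cdot\tau)\,,\qquad\mbox{i.e.}\qquad \big(\Delta^{\S_H}+(n-1)\,{\rm Id}\,\big)(\omega\cdot\tau)=0\,;
\]
moreover, if $\{\tau^i\}_{i=1}^n$ is an orthonormal basis of $H$, then summing $\int_{\S_H}(\omega\cdot\tau^i)^2$ over $i$ gives $\int_{\S_H}|\omega|^2=\H^{n-1}(\S_H)$, so that, by symmetry,
\[
\int_{\S_H}(\omega\cdot\tau)^2\,d\H^{n-1}_\omega=\frac{\H^{n-1}(\S_H)}{n}\,,
\]
which explains the normalization $c_0(n)=(n/\H^{n-1}(\S_H))^{1/2}$.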
\begin{lemma}\label{lemma step one sigma}
There exist constants $C_0$, $\varepsilon_0$ and $\sigma_0$, depending on the dimension $n$ only, with the following properties:
\noindent {\bf (i):} if $H,K\in\H$, $|\nu_H-\nu_K|\le\varepsilon<\varepsilon_0$, and $u\in\mathcal{X}_\sigma(\S_H)$ for $\sigma<\sigma_0$, then the map $T_u^K:\S_H\to\S_K$ defined by
\[
T_u^K(\omega)=\frac{\mathbf{p}_K(f_u(\omega))}{|\mathbf{p}_K(f_u(\omega))|}=\frac{\mathbf{p}_K\omega+u(\omega)\,\mathbf{p}_K\nu_H}{|\mathbf{p}_K\omega+u(\omega)\,\mathbf{p}_K\nu_H|}\,,\qquad\omega\in\S_H\,,
\]
is a diffeomorphism between $\S_H$ and $\S_K$, and $v_u^K:\S_K\to\mathbb{R}$ defined by
\begin{equation}
\label{vuK def}
v_u^K(T_u^K(\omega))=\frac{\mathbf{q}_K(f_u(\omega))}{|\mathbf{p}_K(f_u(\omega))|}
=\frac{\nu_K\cdot(\omega+u(\omega)\,\nu_H)}{|\mathbf{p}_K\omega+u(\omega)\,\mathbf{p}_K\nu_H|}\,,\qquad\omega\in\S_H\,,
\end{equation}
is such that
\begin{eqnarray}
\label{for later}
&&v_u^K\in\mathcal{X}_{C(n)\,(\sigma+\varepsilon)}(\S_K)\,,\quad \S_H(u)=\S_K(v_u^K)\,,
\\
\label{auto 4 pre}
&&\Big|\int_{\S_K}(v_u^K)^2-\int_{\S_H}u^2\Big|\le C(n)\,\Big\{|\nu_H-\nu_K|^2+\int_{\S_H}u^2\Big\}\,.
\end{eqnarray}
\noindent {\bf (ii):} if $H\in\H$ and $u\in\mathcal{X}_{\sigma_0}(\Sigma_H)$, then there exist $K\in\H$ with $|\nu_H-\nu_K|<\varepsilon_0$ and $v\in\mathcal{X}_{C_0\,\sigma_0}(\Sigma_K)$ such that
\begin{eqnarray}\label{auto 1}
&&\Sigma_H(u)=\Sigma_K(v)\,,
\\\label{auto 2}
&&E_{\Sigma_K}^0[v]=0\,,
\\\label{auto 3}
&&|\nu_K-\nu_H|^2\le C_0(n)\,\int_{\Sigma_H}\,\big(E_{\Sigma_H}^0[u]\big)^2\,,
\\\label{auto 4}
&&\Big|\int_{\Sigma_K}v^2-\int_{\Sigma_H}u^2\Big|\le C_0(n)\,\int_{\S_H}u^2\,.
\end{eqnarray}
\end{lemma}
\begin{remark}
{\rm It may seem unnecessary to present a detailed proof of Lemma \ref{lemma step one sigma}, as we are about to do, given that, when $\Sigma_H$ is replaced by a generic integrable minimal surface $\Sigma$ in $\SS^n$, similar statements are found in the first four sections of \cite[Chapter 5]{allardalmgrenRADIAL}. However, two of those statements, namely \cite[5.3(4), 5.3(5)]{allardalmgrenRADIAL}, seem not to be correct; and the issue requires clarification, since those statements are used in the iteration arguments for the blowup/blowdown theorems \cite[Theorem 5.9/Theorem 9.6]{allardalmgrenRADIAL}; see, e.g., the second displayed chain of inequalities on \cite[Page 254]{allardalmgrenRADIAL}. To explain this issue {\bf we momentarily adopt the notation of \cite{allardalmgrenRADIAL}}. In \cite[Chapter 5]{allardalmgrenRADIAL} they consider a family of minimal surfaces $\{M_t\}_{t\in U}$ in $\SS^n$ obtained as diffeomorphic images of a minimal surface $M=M_0$. The parameter $t$ ranges in an open ball $U\subset\mathbb{R}^j$, where $j$ is the dimension of the space of Jacobi fields of $M$. Given a vector field $Z$ in $\SS^n$, defined on and normal to $M_t$, they denote by $F_t(Z)$ the diffeomorphism of $M_t$ into $\SS^n$ obtained by combining $Z$ with the exponential map of $\SS^n$ (up to lower than second order corrections in $Z$, this is equivalent to taking the graph of $Z$ over $M_t$, and then projecting it back on $\SS^n$, which is what we do, following \cite{Simon83}, in \eqref{def of spherical graph}). 
Then, in \cite[5.2(2)]{allardalmgrenRADIAL}, they define $\Lambda_t$ as the family of those $Z$ such that ${\rm Image}(F_t(Z))={\rm Image}(F_0(W))$ for some vector field $W$ normal to $M$, and, given $t,u\in U$ and $Z\in\Lambda_t$, they define $F_t^u:\Lambda_t\to\Lambda_u$ as the map between such classes of normal vector fields with the property that ${\rm Image}(F_t(Z))={\rm Image}(F_u(F_t^u(Z)))$: in particular, $F_t^u(Z)$ is the vector field that takes $M_u$ to the same surface to which $Z$ takes $M_t$. With this premise, in \cite[5.3(5)]{allardalmgrenRADIAL} they say that if $t,u\in U$, and $Z\in\Lambda_t$, then
\begin{equation}
\label{from AA}
\Big|\int_{M_u}|F_t^u(Z)|^2-\int_{M_t}|Z|^2\Big|\le C\,|t-u|\,\int_{M_t}|Z|^2\,,
\end{equation}
for a constant $C$ depending on $M$ only. Testing this with $Z=0$ (notice that $0\in\Lambda_t$ by \cite[5.3(1)]{allardalmgrenRADIAL}) one finds $F_t^u(0)=0$, and thus $M_t={\rm Image}(F_t(0))={\rm Image}(F_u(F_t^u(0)))={\rm Image}(F_u(0))=M_u$. In particular, $M_u=M_t$ for every $t,u\in U$, that is, $\{M_t\}_{t\in U}$ consists of a single surface, $M$ itself. But this is never the case since $\{M_t\}_{t\in U}$ always contains, to the least, every sufficiently small rotation of $M$ in $\SS^n$. An analogous problem is contained in \cite[5.3(4)]{allardalmgrenRADIAL}. Coming back to our notation, the analogous estimate to \eqref{from AA} in our setting would mean that, for every $H,K\in\H$ with $|\nu_K-\nu_H|<\varepsilon_0$ and $u\in\mathcal{X}_{\sigma_0}(\S_H)$, $v_u^K$ defined in \eqref{vuK def} satisfies
\begin{equation}
\label{equivalent to from AA}
\Big|\int_{\S_K}(v_u^K)^2-\int_{\Sigma_H}u^2\Big|\le C(n)\,|\nu_H-\nu_K|\,\int_{\S_H}u^2\,,
\end{equation}
which again gives a contradiction if $u=0$. A correct estimate, analogous in spirit to \eqref{equivalent to from AA} and still sufficiently precise to be used in iterations, is \eqref{auto 4 pre} in Lemma \ref{lemma step one sigma}. There should be no difficulty in adapting our proof to the more general context of integrable cones, and then in using the resulting generalization of \eqref{auto 4 pre} to implement the iterations needed in \cite[Theorem 5.9, Theorem 9.6]{allardalmgrenRADIAL}.}
\end{remark}
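To make the failure of \eqref{equivalent to from AA} at $u=0$ fully explicit (an elementary check, not contained in \cite{allardalmgrenRADIAL}): taking $u=0$ in \eqref{vuK def}, and using $\omega\cdot\nu_H=0$ for $\omega\in\S_H$, gives
\[
v_0^K(T_0^K(\omega))=\frac{\nu_K\cdot\omega}{|\mathbf{p}_K\omega|}=\frac{(\nu_K-\nu_H)\cdot\omega}{|\mathbf{p}_K\omega|}\,,\qquad\omega\in\S_H\,,
\]
which is not identically zero as soon as $K\ne H$; hence $\int_{\S_K}(v_0^K)^2>0$ (in fact, of order $|\nu_H-\nu_K|^2$), while the right-hand side of \eqref{equivalent to from AA} vanishes when $u=0$. This is precisely the discrepancy accounted for by the term $|\nu_H-\nu_K|^2$ in \eqref{auto 4 pre}.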
\begin{proof}
[Proof of Lemma \ref{lemma step one sigma}] The constants $\varepsilon_0$ and $\sigma_0$ in the statement will be such that $\sigma_0=\varepsilon_0/C_*$ for a sufficiently large dimension dependent constant $C_*$.
\noindent {\bf Step one:} To prove statement (i), let $H,K\in\H$, $|\nu_H-\nu_K|\le\varepsilon<\varepsilon_0$ and $u\in\mathcal{X}_{\sigma}(\S_H)$ with $\sigma<\sigma_0$. Setting (for $\omega\in\S_H$ and $x\in\mathbb{R}^{n+1}\setminus\{0\}$)
\[
g_u^K(\omega)=\mathbf{p}_K\omega+u(\omega)\,\mathbf{p}_K\nu_H\,,\qquad \Phi(x)=x/|x|\,,
\]
we have $T_u^K=\Phi\circ g_u^K$, and, if $u$ is identically $0$,
\[
g_0^K(\omega)=\mathbf{p}_K\omega\,,\qquad T_0^K(\omega)=\frac{\mathbf{p}_K\omega}{|\mathbf{p}_K\omega|}\,,\qquad\forall\omega\in\S_H\,.
\]
By $|\mathbf{p}_K\nu_H|^2=1-(\nu_H\cdot\nu_K)^2\le 2\,(1-(\nu_H\cdot\nu_K))=|\nu_H-\nu_K|^2$,
\begin{eqnarray*}
&&|g_u^K-g_0^K|=|u|\,|\mathbf{p}_K\nu_H|\le|u|\,|\nu_H-\nu_K|\,,
\\
&&|\nabla^{\S_H}g_u^K-\nabla^{\S_H}g_0^K|\le|\nabla^{\S_H}u|\,|\nu_H-\nu_K|\,.
\end{eqnarray*}
In particular, $|g_0^K|=|\mathbf{p}_K\omega|\ge\sqrt{1-\varepsilon_0^2}$, so that $|g_u^K|\ge \sqrt{1-\varepsilon_0^2}-\sigma_0\,\varepsilon_0\ge 1/2$, and since $\Phi$ and $\nabla\Phi$ are Lipschitz continuous on $\{|x|\ge 1/2\}$, we find
\begin{equation}
\label{guK and TuK near g0K and T0K}
\max\big\{\|g_u^K-g_0^K\|_{C^1(\S_H)},\|T_u^K-T_0^K\|_{C^1(\S_H)}\big\}\le C(n)\, \|u\|_{C^1(\S_H)}\,|\nu_H-\nu_K|\,.
\end{equation}
Similarly, since $\omega\cdot\nu_K=\omega\cdot(\nu_K-\nu_H)$ for $\omega\in\S_H$, we find that
\begin{equation}
\label{g0K and T0K close to id}
\|g_0^K-{\rm id}\|_{C^1(\S_H)}\le C(n)\,|\nu_H-\nu_K|\,,\qquad \|T_0^K-{\rm id}\|_{C^1(\S_H)}\le C(n)\,|\nu_H-\nu_K|\,,
\end{equation}
and we thus conclude that $T_u^K$ is a diffeomorphism between $\S_H$ and $\S_K$. As a consequence, the definition \eqref{vuK def} of $v_u^K$ is well-posed, and \eqref{for later} immediately follows (in particular, $\S_H(u)=\S_K(v_u^K)$ is deduced easily from \eqref{vuK def} and \eqref{def of spherical graph}). Finally, if we set $F_u^K(\omega)=v_u^K(T_u^K(\omega))^2\,J^{\S_H}\,T_u^K(\omega)$ ($\omega\in\S_H$), then
\begin{eqnarray*}
\int_{\S_K}(v_u^K)^2-\int_{\S_H}u^2=\int_{\S_H}\Big(\frac{\nu_K\cdot(\omega+u\,\nu_H)}{|g_u^K(\omega)|}\Big)^2\,J^{\S_H}\,T_u^K(\omega)-u^2\,,
\end{eqnarray*}
where, using again $|\omega\cdot\nu_K|\le|\nu_H-\nu_K|$ for every $\omega\in\S_H$, we find
\begin{eqnarray*}
&&|J^{\S_H}T_u^K(\omega)-1|\le C(n)\,\|T_u^K-{\rm id}\|_{C^1(\S_H)}\le C(n)\,|\nu_H-\nu_K|\,,
\\
&&\hspace{0.4cm}\big|1-|g_u^K(\omega)|^2\big|\le\big|1-|\mathbf{p}_K\omega|^2\big|+|\mathbf{p}_K\nu_H|\,u^2+2\,|u|\,|\mathbf{p}_K\nu_H|\,|\mathbf{p}_K\omega|
\\
&&\hspace{2.9cm}\le C\,\big(|\nu_H-\nu_K|^2+u^2\big)\,,
\\
&&\big|(\nu_K\cdot(\omega+u\,\nu_H))^2-u^2\big|
\\
&&\le|\nu_K\cdot\omega|^2+u^2\,(1-(\nu_H\cdot\nu_K)^2)+2\,|u|\,|\nu_H\cdot\nu_K|\,|\omega\cdot\nu_K|
\\
&&
\le |\nu_H-\nu_K|^2+2\,u^2\,|\nu_H-\nu_K|+2\,|u|\,|\nu_H-\nu_K| \le C\,\big(|\nu_H-\nu_K|^2+u^2\big)\;
\end{eqnarray*}
and thus, \eqref{auto 4 pre}, thanks to
\begin{eqnarray*}\hspace{-1cm}
&&\!\!\!\!\!\!\Big|\int_{\S_K}(v_u^K)^2-\int_{\S_H}u^2\Big|\le
\int_{\S_H}\!\!|J^{\S_H}\,T_u^K-1|\,u^2
+2\,\int_{\S_H}\frac{|(\nu_K\cdot(\omega+u\,\nu_H))^2-u^2|}{|g_u^K|^2}
\\
&&\hspace{2cm}+2\,\int_{\S_H}\,\Big|1-\frac1{|g_u^K|^2}\Big|\,u^2\le C(n)\,\Big(|\nu_H-\nu_K|^2+\int_{\S_H}u^2\Big)\,.
\end{eqnarray*}
\noindent {\bf Step two:} We prove (ii). If $E_{\Sigma_H}^0[u]=0$, then we conclude with $K=H$, $v=u$. We thus assume $\gamma^2=\int_{\Sigma_H}\,(E_{\Sigma_H}^0[u])^2>0$, and pick an orthonormal basis $\{\phi_H^i\}_{i=1}^n$ of the span of the Jacobi fields of $\S_H$ (i.e., of the image of $E_{\Sigma_H}^0$ in $L^2(\S_H)$) with $E_{\Sigma_H}^0[u]=\gamma\,\phi_H^1$ and $\gamma=\int_{\Sigma_H}u\,\phi_H^1\ne 0$. This corresponds to choosing an orthonormal basis $\{\tau_H^i\}_{i=1}^n$ of $H$ such that
\begin{equation}
\label{what are the jacobi fields}
\phi_H^i(\omega)=c_0(n)\,\omega\cdot\tau_H^i\,,\qquad\omega\in\S_H\,,
\end{equation}
for $c_0(n)=(n/\H^{n-1}(\S_H))^{1/2}$. For each $K\in\H$ with ${\rm dist}_{\SS^n}(\nu_H,\nu_K)<\varepsilon_0$ we define an orthonormal basis $\{\tau_K^i\}_{i=1}^n$ of $K$ by parallel transport of $\{\tau_H^i\}_{i=1}^n\subset H\equiv T_{\nu_H}\SS^n$ to $K\equiv T_{\nu_K}\SS^n$. The maps $\nu\mapsto\tau^i(\nu):=\tau_{K(\nu)}^i$ define an orthonormal frame $\{\tau^i\}_{i=1}^n$ of $\SS^n$ on the open set $A=B_{\varepsilon_0}^{\SS^n}(\nu_H)=\{\nu\in\SS^n:{\rm dist}_{\SS^n}(\nu,\nu_H)<\varepsilon_0\}$. We denote by $\rho_H^K$ the rotation of $\mathbb{R}^{n+1}$ which takes $H$ into $K$ by setting $\rho_H^K(\tau_H^i)=\tau_K^i$ and $\rho_H^K(\nu_H)=\nu_K$. By the properties of parallel transport we have that
\begin{equation}
\label{small rotation}
\|\rho_H^K-{\rm Id}\|_{C^0(\Sigma_K)}\le C(n)\,{\rm dist}_{\SS^n}(\nu_H,\nu_K)\le C(n)\,\varepsilon_0\,.
\end{equation}
Finally, we define an $L^2(\Sigma_K)$-orthonormal basis $\{\phi_K^i\}_{i=1}^n$ of the span of the Jacobi fields of $\S_K$ by setting
$\phi_K^i(\omega)=c_0(n)\,\omega\cdot\tau_K^i$ ($\omega\in\S_K$), and correspondingly consider the map $\Psi_u:A\to\mathbb{R}^n$ defined by setting
\[
\Psi_u(\nu)=\Big(\int_{\S_{K(\nu)}}v_u^{K(\nu)}\,\phi_{K(\nu)}^1,\dots,\int_{\S_{K(\nu)}}v_u^{K(\nu)}\,\phi_{K(\nu)}^n\Big)\,,\qquad\nu\in A\,,
\]
where $v_u^{K(\nu)}$ is well-defined for every $\nu\in A$ thanks to step one. {\bf We now claim} the existence of $\nu_*\in A$ such that
$\Psi_u(\nu_*)=0$. By the area formula, \eqref{vuK def}, and $\mathbf{q}_{K(\nu)}[e]=\nu\cdot e$, we find
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!(e_j\cdot\Psi_u)(\nu)
:=\int_{\S_{K(\nu)}}\!\!\!\!\!\!\!\!v_u^{K(\nu)}\,\phi_{K(\nu)}^j
=\!\!
\int_{\S_H}\!\!\!v_u^{K(\nu)}(T_u^{K(\nu)})\,\phi_{K(\nu)}^j(T_u^{K(\nu)})\,J^{\S_H}T_u^{K(\nu)}
\\
&=&
\!\!\!\!c_0(n)\,\int_{\S_H}\!\!\nu\cdot(\omega+u\,\nu_H)\Big(\rho_H^{K(\nu)}[\tau_H^j]\cdot\frac{\mathbf{p}_K(\omega+u\,\nu_H)}{|\mathbf{p}_K(\omega+u\,\nu_H)|^2}\Big)
J^{\S_H}T_u^{K(\nu)}d\H^{n-1}_\omega\,,
\end{eqnarray*}
so that \eqref{guK and TuK near g0K and T0K} gives
\begin{eqnarray}
\label{psiu vicina psi0}
&&\!\!\!\!\!\!\!\!\|\Psi_u-\Psi_0\|_{C^1(A)} \le C(n)\,\sigma_0\,,\qquad\mbox{where}
\\\nonumber
&&\!\!\!\!\!\!\!\!e_j\cdot\Psi_0(\nu)=c_0(n)\,\int_{\S_H}(\nu\cdot\omega)\,\,\Big(\rho_H^{K(\nu)}[\tau_H^j]\cdot\frac{\mathbf{p}_K\omega}{|\mathbf{p}_K\omega|^2}\Big)\,\,
J^{\S_H}\Big[\frac{\mathbf{p}_K\omega}{|\mathbf{p}_K\omega|}\Big]\,d\H^{n-1}_\omega\,.
\end{eqnarray}
By definition of $A$ and by \eqref{g0K and T0K close to id} and \eqref{small rotation},
\begin{eqnarray}\nonumber
&&\sup_{\nu\in A}\sup_{\omega\in\S_H}\,\Big|\tau_H^j\cdot\omega- \Big(\rho_H^{K(\nu)}[\tau_H^j]\cdot\frac{\mathbf{p}_K\omega}{|\mathbf{p}_K\omega|^2}\Big)\,\,
J^{\S_H}\Big[\frac{\mathbf{p}_K\omega}{|\mathbf{p}_K\omega|}\Big]\Big|\le C(n)\,\varepsilon_0\,,
\\\label{psi0 vicina psistar}
&&\mbox{and thus}\,\,\|\Psi_0-\Psi_*\|_{C^1(A)}\le C(n)\,(\sigma_0+\varepsilon_0)\,,
\end{eqnarray}
where $\Psi_*:A\to\mathbb{R}^n$ is defined by $e_j\cdot\Psi_*(\nu)=c_0(n)\,\int_{\S_H}
(\nu\cdot\omega)\,(\tau_H^j\cdot\omega)\,d\H^{n-1}_\omega$ ($\nu\in A$).
Recalling that $\{\tau^i\}_{i=1}^n$ is an orthonormal frame of $\SS^n$ on $A$, with $\nabla_{\tau^i}\nu=\tau^i(\nu)=\tau_{K(\nu)}^i=\rho_H^{K(\nu)}[\tau^i_H]$, we find
\begin{eqnarray*}
&&e_j\cdot\nabla_{\tau^i}\Psi_*(\nu)=
c_0(n)\,\int_{\S_H}
(\rho_H^{K(\nu)}[\tau^i_H]\cdot\omega)\,(\tau_H^j\cdot\omega)\,d\H^{n-1}_\omega\,,
\\
&&e_j\cdot\nabla_{\tau^i}\Psi_*(\nu_H)=c_0(n)\,\int_{\S_H}(\tau^i_{H}\cdot\omega)\,(\tau_H^j\cdot\omega)\,d\H^{n-1}_\omega=\delta_{ij}/c_0(n)\,.
\end{eqnarray*}
By \eqref{small rotation}, \eqref{psiu vicina psi0} and \eqref{psi0 vicina psistar} we conclude that
\begin{eqnarray}\label{pino 1}
&&\|\Psi_u-\Psi_*\|_{C^1(A)}\le C(n)\,(\sigma_0+\varepsilon_0)\,,
\\\label{pino 2}
&&\big\|\nabla^{\SS^n}\Psi_u-c_0(n)^{-1}\,\sum_{j=1}^n\,e_j\otimes\tau^j\big\|_{C^0(A)}\le C(n)\,(\sigma_0+\varepsilon_0)\,.
\end{eqnarray}
Let us finally consider the map $h:A\times[0,1]\to\mathbb{R}^n$,
\[
h(\nu,t)=h_t(\nu)=t\,\Psi_*(\nu)+(1-t)\,\Psi_u(\nu)\,,\qquad(\nu,t)\in A\times[0,1]\,,
\]
which defines a homotopy between $\Psi_*$ and $\Psi_u$. By \eqref{pino 1} and \eqref{pino 2} we see that if $\nu\in\partial A$, that is, if ${\rm dist}_{\SS^n}(\nu,\nu_H)=\varepsilon_0$, then, denoting by $[\nu_H,\nu]_s$ the unit-speed length minimizing geodesic from $\nu_H$ to $\nu$, considering that $[\nu_H,\nu]_s\in A$ for every $s\in(0,\varepsilon_0)$, and that $\SS^n$ is close to being flat in $A$, we find
\begin{eqnarray*}
|h_t(\nu)|&\ge&\Big|\int_0^{\varepsilon_0}\frac{d}{ds}\,h_t([\nu_H,\nu]_s)\,ds\Big|-|h_t(\nu_H)|
\\
&\ge&\Big(\frac1{c_0(n)}-C(n)\,(\varepsilon_0+\sigma_0)\Big)\,\varepsilon_0-C(n)\,\sigma_0\ge\frac{\varepsilon_0}{2\,c_0(n)}\,,
\end{eqnarray*}
provided $\sigma_0=\varepsilon_0/C_*$ is small enough with respect to $\varepsilon_0$ (i.e., provided $C_*$ is large), $\varepsilon_0$ is small in terms of $c_0$, and where we have used $\Psi_*(\nu_H)=0$ and
\begin{equation}
\label{pino 3}
|\Psi_u(\nu_H)|=|\gamma|=\Big|\int_{\S_H}u\,\phi^1_H\Big|\le C(n)\,\sigma_0\,,
\end{equation}
to deduce $|h_t(\nu_H)|\le C(n)\,\sigma_0$. This proves that
$0\not\in h_t(\partial A)$ for every $t\in[0,1]$,
so that $\deg(h_t,A,0)$ is independent of $t\in[0,1]$. In particular, $h_0=\Psi_u$ and $h_1=\Psi_*$ give
$\deg(\Psi_u,A,0)=\deg(\Psi_*,A,0)=1$,
where we have used $\Psi_*(\nu_H)=0$ and the fact that, up to decreasing the value of $\varepsilon_0$, $\Psi_*$ is injective on $A$. By $\deg(\Psi_u,A,0)=1$, there is $\nu_*\in A$ such that $\Psi_u(\nu_*)=0$, {\bf as claimed}. With $K=K(\nu_*)$ and $v=v_u^K$ we deduce \eqref{auto 1} from \eqref{for later} and \eqref{auto 2} from $\Psi_u(\nu_*)=0$. By \eqref{pino 2} and \eqref{pino 3}, if $\eta={\rm dist}_{\SS^n}(\nu_*,\nu_H)$, then
\begin{eqnarray*}
&&\Big(\int_{\Sigma_H}\!\!\!\big(E_{\Sigma_H}^0[u]\big)^2\Big)^{1/2}=|\gamma|=|\Psi_u(\nu_H)|=|\Psi_u(\nu_H)-\Psi_u(\nu_*)|
\\
&=&\Big|\int_0^{\eta}\!\!\!\frac{d}{ds}\,\Psi_u([\nu_H,\nu_*]_s)\,ds\Big|
\ge\Big(\frac1{c_0(n)}-C(n)\,(\varepsilon_0+\sigma_0)\Big)\,\eta
\ge \frac{|\nu_*-\nu_H|}{2\,c_0(n)}\,\,,
\end{eqnarray*}
that is, \eqref{auto 3}. Finally, \eqref{auto 4} follows from \eqref{auto 3} and \eqref{auto 4 pre}.
\end{proof}
\subsection{Energy estimates for spherical graphs over annuli}\label{subsection energy estimates on annuli} Given $H\in\H$ and $0<r_1<r_2$ we let $\mathcal{X}_\sigma(\Sigma_H,r_1,r_2)$ be the class of those $u\in C^1(\Sigma_H\times(r_1,r_2))$ such that, setting $u_r=u(\cdot,r)$, one has
$u_r\in\mathcal{X}_\sigma(\Sigma_H)$ for every $r\in(r_1,r_2)$ and $|r\,\partial_r u|\le\sigma$ on $\Sigma_H\times(r_1,r_2)$. If $u\in\mathcal{X}_\sigma(\Sigma_H,r_1,r_2)$, then the spherical graph of $u$ over $\Sigma_H\times(r_1,r_2)$, given by
\[
\Sigma_H(u,r_1,r_2)=\Big\{r\,\frac{\omega+u_r(\omega)\,\nu_H}{\sqrt{1+u_r(\omega)^2}}:\omega\in\Sigma_H\,,r\in(r_1,r_2)\Big\}\,,
\]
is a hypersurface in $A_{r_1}^{r_2}$. It is useful to keep in mind that
$\Sigma_H(0,r_1,r_2)=\{r\,\omega:\omega\in\Sigma_H\,,r\in(r_1,r_2)\}=H\cap A_{r_1}^{r_2}$ is a flat annular region of area $\omega_n\,(r_2^n-r_1^n)$, and that if $\sigma<\sigma_1=\sigma_1(n)$, then
\begin{equation}
\label{equuivalence between u square and omega square}
\frac1{C(n)}\,\int_{\S_H(u,r_1,r_2)}\!\!\!\!\omega_H^2\,d\H^n\le \int_{\S_H\times(r_1,r_2)}\!\!r^{n-1}\,u^2\le C(n)\,\int_{\S_H(u,r_1,r_2)}\!\!\!\!\omega_H^2\,d\H^n\,.
\end{equation}
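The equivalence \eqref{equuivalence between u square and omega square} can be checked directly; we sketch the elementary computation. If $y=r\,f_{u_r}(\omega)\in\S_H(u,r_1,r_2)$, then, since $\omega\perp\nu_H$,
\[
|\mathbf{q}_Hy|=\frac{r\,|u_r(\omega)|}{\sqrt{1+u_r(\omega)^2}}\,,\qquad |\mathbf{p}_Hy|=\frac{r}{\sqrt{1+u_r(\omega)^2}}\,,\qquad\mbox{so that}\quad \omega_H(y)={\rm arctn}\,|u_r(\omega)|\,,
\]
and ${\rm arctn}\,|u_r(\omega)|$ is comparable to $|u_r(\omega)|$ when $|u_r|\le\sigma_1$; since, for $\sigma$ small, the area element of $\S_H(u,r_1,r_2)$ is comparable to $r^{n-1}\,d\H^{n-1}_\omega\,dr$, \eqref{equuivalence between u square and omega square} follows.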
\begin{lemma}\label{lemma step one}
There are $\varepsilon_0$, $\sigma_0$, $C_0$ positive, depending on $n$ only, such that:
\noindent {\bf (i):} if $H,K\in\H$, $\nu_H\cdot\nu_K>0$, $|\nu_H-\nu_K|=\varepsilon<\varepsilon_0$, $u\in\mathcal{X}_{\sigma}(\S_H,r_1,r_2)$, and $\sigma<\sigma_0$, then there is $v\in \mathcal{X}_{C_0(\sigma+\varepsilon)}(\S_K,r_1,r_2)$ such that $\S_K(v,r_1,r_2)=\S_H(u,r_1,r_2)$.
\noindent {\bf (ii):} if $H\in\H$, $u\in\mathcal{X}_{\sigma_0}(\Sigma_H,r_1,r_2)$, and $(a,b)\subset\subset(r_1,r_2)$, then there exist $K\in\H$, $v\in\mathcal{X}_{C_0\,\sigma_0}(\Sigma_K,r_1,r_2)$, and $r_*\in[a,b]$ such that
\begin{eqnarray}\label{biro 1}
&&\Sigma_H(u,r_1,r_2)=\Sigma_K(v,r_1,r_2)\,,
\\\label{biro 2}
&&E_{\Sigma_K}^0\big[v_{r_*}\big]=0\,,
\\\label{biro 3}
&&|\nu_H-\nu_K|^2\le C_0(n)\,\min_{\rho\in[a,b]}\int_{\Sigma_H}\,\big(E_{\Sigma_H}^0[u_\rho]\big)^2\,.
\end{eqnarray}
Moreover, for every $r\in(r_1,r_2)$,
\begin{equation}
\label{biro 4}
\Big|\int_{\Sigma_K}(v_r)^2-\int_{\Sigma_H}(u_r)^2\Big|\le C_0(n)\,\Big\{
\min_{\rho\in[a,b]}\int_{\Sigma_H}\,(u_\rho)^2+\int_{\Sigma_H}(u_r)^2\Big\}\,.
\end{equation}
\end{lemma}
\begin{proof}
\noindent {\bf Step one:} We prove statement (i). If $|\nu_H-\nu_K|=\varepsilon<\varepsilon_0$, since $u_r\in\mathcal{X}_{\sigma}(\S_H)$ for every $r\in(r_1,r_2)$, by Lemma \ref{lemma step one sigma}-(i) we see that $T_r:\S_H\to\S_K$,
\begin{equation}
\label{tr}
T_r(\omega)=|\mathbf{p}_K[\omega+u_r(\omega)\,\nu_H]|^{-1}\,\mathbf{p}_K[\omega+u_r(\omega)\,\nu_H]\qquad\omega\in\S_H\,,
\end{equation}
is a diffeomorphism between $\S_H$ and $\S_K$, and $v_r:\S_K\to\mathbb{R}$,
\begin{equation}
\label{vrtr}
v_r(T_r(\omega))=\frac{\nu_K\cdot(\omega+u_r(\omega)\,\nu_H)}{|\mathbf{p}_K[\omega+u_r(\omega)\,\nu_H]|}\,,\qquad\omega\in\S_H\,,
\end{equation}
satisfies $v_r\in\mathcal{X}_{C_0\,(\sigma+\varepsilon)}(\S_K)$, $\S_H(u_r)=\S_K(v_r)$ for every $r\in(r_1,r_2)$, and
\begin{equation}
\label{biro 5}
\Big|\int_{\S_K}\!\!\!(v_r)^2-\int_{\S_H}\!\!\!(u_r)^2\Big|\le C(n)\,\Big\{|\nu_H-\nu_K|^2+\int_{\S_H}\!\!\!(u_r)^2\Big\}\,.
\end{equation}
Since $u\in\mathcal{X}_{\sigma}(\Sigma_H,r_1,r_2)$, and $T_r$ and $v_r$ depend smoothly on $u_r$, setting $v(\omega,r):=v_r(\omega)$ we have $\S_H(u,r_1,r_2)=\S_K(v,r_1,r_2)$ (by $\S_H(u_r)=\S_K(v_r)$ for every $r\in(r_1,r_2)$), and $v\in\mathcal{X}_{C_0\,(\sigma+\varepsilon)}(\Sigma_K,r_1,r_2)$ ($|r\,\partial_rv_r|\le C_0(\sigma+\varepsilon)$ is deduced by differentiation in \eqref{tr} and \eqref{vrtr}, and by $|u_r|,|r\,\partial_ru_r|<\sigma$).
\noindent {\bf Step two:} We prove (ii). Let
$\gamma=\min_{\rho\in[a,b]}\int_{\Sigma_H}\,\big(E_{\Sigma_H}^0[u_\rho]\big)^2$,
and let $r_*\in[a,b]$ be such that the minimum $\gamma$ is achieved at $r=r_*$. If $\gamma=0$, then we set $K=H$ and $v=u$. If $\gamma>0$, then we apply Lemma \ref{lemma step one sigma}-(ii) to $u_{r_*}\in\mathcal{X}_{\sigma_0}(\S_H)$, and find $K\in\H$ with $|\nu_K-\nu_H|<\varepsilon_0$ and $v_{r_*}\in\mathcal{X}_{C_0\,\sigma_0}(\S_K)$ such that $\Sigma_H(u_{r_*})=\Sigma_K(v_{r_*})$ and
\begin{eqnarray}
\label{biro 6}
&&E_{\Sigma_K}^0[v_{r_*}]=0\,,
\\\label{biro 7}
&&|\nu_K-\nu_H|^2\le C_0(n)\,\int_{\Sigma_H}\,\big(E_{\Sigma_H}^0[u_{r_*}]\big)^2=C_0(n)\,\gamma\,,
\\\label{biro 8}
&&\Big|\int_{\Sigma_K}(v_{r_*})^2-\int_{\Sigma_H}(u_{r_*})^2\Big|\le C_0(n)\,\int_{\S_H}(u_{r_*})^2\,.
\end{eqnarray}
Since $v_{r_*}=v(\cdot,r_*)$ for $v$ constructed in step one starting from $u$, $H$ and $K$, we deduce \eqref{biro 4} by \eqref{biro 5} and \eqref{biro 7}, and \eqref{biro 3} by \eqref{biro 7}, while \eqref{biro 6} is \eqref{biro 2}.
\end{proof}
We will use two basic ``energy estimates'' for spherical graphs over annuli. To streamline the application of these estimates to dyadic families of annuli we say that two intervals $(r_1,r_2)$ and $(r_3,r_4)$ are {\bf $(\eta,\eta_0)$-related}, meaning that
\begin{equation}
\label{r1r2r3r4}
r_2=r_0(1+\eta_0)\,,\quad r_1=r_0(1-\eta_0)\,,\quad r_4=r_0(1+\eta)\,,\quad r_3=r_0(1-\eta)\,,
\end{equation}
for some $\eta_0>\eta>0$, and with $r_0=(r_1+r_2)/2=(r_3+r_4)/2$; in particular, $(r_3,r_4)$ is contained in, and concentric to, $(r_1,r_2)$. The case $\Lambda=0$ of the following statement is the codimension one, equatorial spheres case of \cite[Lemma 7.14, Theorem 7.15]{allardalmgrenRADIAL}.
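For instance (an illustrative choice of parameters, not tied to any specific step of the proofs below): taking $\eta_0=1/2$ and $\eta=1/4$ in \eqref{r1r2r3r4} gives
\[
(r_1,r_2)=\Big(\frac{r_0}2,\frac{3\,r_0}2\Big)\,,\qquad (r_3,r_4)=\Big(\frac{3\,r_0}4,\frac{5\,r_0}4\Big)\,,
\]
so that $(r_3,r_4)$ is the concentric subinterval of $(r_1,r_2)$ obtained by halving the relative width; applying the estimates with $r_0$ ranging over a dyadic family $\{2^j\,s\}_j$ then produces estimates on overlapping annuli.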
\begin{theorem}[Energy estimates for spherical graphs]\label{theorem 7.15AA main estimate lambda}
If $n\ge 2$ and $\eta_0>\eta>0$, then there are $\sigma_0=\sigma_0(n,\eta_0,\eta)$ and $C_0=C_0(n,\eta_0,\eta)$ positive, with the following property. If $H\in\H$, $\Lambda\ge0$, and $u\in\mathcal{X}_{\sigma}(\Sigma_H,r_1,r_2)$ is such that $\max\{1,\Lambda\,r_2\}\,\sigma\le\sigma_0$ and $\Sigma_H(u,r_1,r_2)$ has mean curvature bounded by $\Lambda$ in $A_{r_1}^{r_2}$, then, whenever $(r_1,r_2)$ and $(r_3,r_4)$ are $(\eta,\eta_0)$-related as in \eqref{r1r2r3r4},
\[
\Big|\H^n(\Sigma_H(u,r_3,r_4))-\H^n(\Sigma_H(0,r_3,r_4))\Big|\le C_0\,\int_{\Sigma_H\times(r_1,r_2)}\!\!\!\!\!\!\!\!\!\!r^{n-1}\,\big(u^2+\Lambda\,r\,|u|\big)\,.
\]
Moreover, if there is $r\in(r_1,r_2)$ s.t. $E_{\Sigma_H}^0 u_r=0$ on $\Sigma_H$, then we also have
\[
\int_{\Sigma_H\times(r_3,r_4)} r^{n-1}\,u^2
\le C(n)\,(\Lambda\,r_2)^2\,(r_2^n-r_1^n)+ C_0\,\int_{\Sigma_H\times(r_1,r_2)}\,r^{n-1}\,(r\,\partial_ru)^2\,.
\]
\end{theorem}
\begin{proof}
See appendix \ref{appendix proof of expansion mean curvature}.
\end{proof}
\subsection{Monotonicity for exterior varifolds with bounded mean curvature}\label{subsection monotonicity of exterior varifolds}
The following theorem states the monotonicity of $\Theta_{V,R,\Lambda}$ for $V\in\mathcal{V}_n(\Lambda,R,S)$, and provides, when $V$ corresponds to a spherical graph, a quantitative lower bound for the gap in the associated monotonicity formula; the case $\Lambda=0$, $R=0$ is contained in \cite[Lemma 7.16, Theorem 7.17]{allardalmgrenRADIAL}.
\begin{theorem}\label{theorem 7.17AA exterior lambda} {\bf (i):} If $V\in\mathcal{V}_n(\Lambda,R,S)$, then
\begin{equation}
\label{lambdamonotonicity of theta ext}
\mbox{$\Theta_{V,R,\Lambda}$ is increasing on $(R,S)$}\,.
\end{equation}
\noindent {\bf (ii):} There is $\sigma_0(n)$ such that, if $V\in\mathcal{V}_n(\Lambda,R,S)$ and, for some $H\in\H$, $u\in\mathcal{X}_\sigma(\Sigma,r_1,r_2)$ with $\sigma\le\sigma_0(n)$, and $(r_1,r_2)\subset(R,S)$, we have
\begin{eqnarray}
\label{V corresponds to u monoto lambda}
\mbox{$V$ corresponds to $\Sigma_H(u,r_1,r_2)$ in $A_{r_1}^{r_2}$}\,,
\end{eqnarray}
then
\begin{eqnarray}
\label{lemma 7.16AA monoto lambda}
\int_{\S_H\times(r_1,r_2)}\!\!\!\!\!\!\!\!r^{n-1} (r\,\partial_ru)^2\le C(n)\,r_2^n\,
\Big\{\Theta_{V,R,\Lambda}(r_2)-\Theta_{V,R,\Lambda}(r_1)\Big\}\,.
\end{eqnarray}
{\bf (iii):} Finally, given $\eta_0>\eta>0$, there exist $\sigma_0$ and $C_0$ depending on $n$, $\eta_0$, and $\eta$ only, such that if the assumptions of part (i) and part (ii) hold and, in addition to that, we also have $\max\{1,\Lambda\,r_2\}\,\sigma\le\sigma_0$ and
\begin{equation}
\label{lambdaAA 7.15(4) for 7.17 hole}
\mbox{$\exists\,r\in(r_1,r_2)$ s.t. $E_{\Sigma_H}^0 u_r=0$ on $\Sigma_H$}\,,
\end{equation}
then, whenever $(r_1,r_2)$ and $(r_3,r_4)$ are $(\eta,\eta_0)$-related as in \eqref{r1r2r3r4}, we have
\begin{eqnarray}\label{lambdaAA tesi 7.17 hole}
&&\Big|\H^n(\Sigma_H(u,r_3,r_4))-\H^n(\Sigma_H(0,r_3,r_4))\Big|
\\\nonumber
&&\hspace{2cm}\le C_0\,r_2^n\,\Big\{\Theta_{V,R,\Lambda}(r_2)-\Theta_{V,R,\Lambda}(r_1)+(\Lambda\,r_2)^2\Big\}\,.
\end{eqnarray}
\end{theorem}
\begin{proof}
We give details of the proof of (i) when $V\in\mathcal{M}_n(\Lambda,R,S)$ (whereas the general case is addressed as in \cite[Section 17]{SimonLN}). By the coarea formula, the divergence theorem and $|\vec{H}|\le\Lambda$, for a.e. $\rho>R$,
\begin{eqnarray}\nonumber
\frac{d}{d\rho}\,\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}&=&
\frac1{\rho^n}\,\int_{M\cap\partial B_\rho}\frac{|x|\,d\H^{n-1}}{|x^{TM}|}
-\frac{n\,\H^n(M\cap (B_\rho\setminus B_R))}{\rho^{n+1}}
\\\nonumber
&=&
\frac1{\rho^n}\,\int_{M\cap\partial B_\rho}\frac{|x|\,d\H^{n-1}}{|x^{TM}|}-\frac1{\rho^n}\int_{M\cap(B_\rho\setminus B_R)}\,\frac{x}{\rho}\cdot\vec{H}\,d\H^n
\\\nonumber
&&
\!\!\!\!-\frac{1}{\rho^{n+1}}\Big\{\int_{M\cap\partial B_\rho}\!\!\!\!\!\nu^{{\rm co}}_M\cdot x\,d\H^{n-1}+\int_{M\cap\partial B_{R}}\!\!\!\!\!\nu^{{\rm co}}_M\cdot x\,d\H^{n-1}\Big\}
\\\nonumber
&\ge& \frac1{\rho^n}\,\int_{M\cap\partial B_\rho}\Big(\frac{|x|}{|x^{TM}|}-\frac{|x^{TM}|}{|x|}\Big)\,d\H^{n-1}
\\\nonumber
&& \!\!\!\!-\frac{1}{\rho^{n+1}}\,\int_{M\cap\partial B_R}\!\!\!\!\!\nu^{{\rm co}}_M\cdot x\,d\H^{n-1}
-\Lambda\,\frac{\H^n(M\cap(B_\rho\setminus B_R))}{\rho^n}
\\\label{conelimit}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
={\rm Mon}(V,\rho)+\frac{d}{d\rho}\,\frac1{n\,\rho^n}\,\int x\cdot\nu^{{\rm co}}_V\,d\,{\rm bd}_V-\Lambda\,\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,\,\,\,\,\,\,\,\,\,\,
\end{eqnarray}
where ${\rm Mon}(V,\rho)=(d/d\rho)\int_{B_\rho\setminus B_R}\,|x^\perp|^2\,|x|^{-n-2}\,d\|V\|$. Since ${\rm Mon}(V,\rho)\ge0$, this proves \eqref{lambdamonotonicity of theta ext}. Assuming now \eqref{V corresponds to u monoto lambda}, and using \cite[Lemma 3.5(6)]{allardalmgrenRADIAL} as in the proof of \cite[Lemma 7.16]{allardalmgrenRADIAL}, we see that
\[
C(n)\,r_2^n\,\int_{r_1}^{r_2}{\rm Mon}(V,\rho)\,d\rho\ge \int_{\Sigma_H\times(r_1,r_2)}r^{n-1}\,(r\,\partial_ru)^2\,,
\]
thus proving (ii). To prove (iii), we set $a=r_0\,(1-(\eta+\eta_0)/2)$ and $b=r_0\,(1+(\eta+\eta_0)/2)$, so that $(a,b)$ and $(r_3,r_4)$ are $(\eta,(\eta+\eta_0)/2)$-related, and $(r_1,r_2)$ and $(a,b)$ are $((\eta+\eta_0)/2,\eta_0)$-related (in particular, $(r_3,r_4)\subset (a,b)\subset (r_1,r_2)$). By suitably choosing $\sigma_0$ in terms of $n$, $\eta$ and $\eta_0$, we can apply Theorem \ref{theorem 7.15AA main estimate lambda} with $(r_3,r_4)$ and $(a,b)$, so as to find (with $C=C(n,\eta_0,\eta)$)
\begin{eqnarray*}
&&\Big|\H^n(\S(u,r_3,r_4))-\H^n(\S(0,r_3,r_4))\Big|\le C\,\int_{\S_H\times(a,b)} r^{n-1}\big(u^2+\Lambda\,r\,|u|\big)
\\
&&\hspace{3cm}\le C\,\Big\{(\Lambda\,b)^2\,(b^n-a^n)+\int_{\S_H\times(a,b)} r^{n-1}\,u^2\Big\}\,.
\end{eqnarray*}
Thanks to \eqref{lambdaAA 7.15(4) for 7.17 hole} we can apply Theorem \ref{theorem 7.15AA main estimate lambda} with
$(a,b)$ and $(r_1,r_2)$ to find
\[
\int_{\S_H\times(a,b)} r^{n-1}\,u^2\le
C\,\Big\{(\Lambda\,r_2)^2\,(r_2^n-r_1^n)+\int_{\S_H\times(r_1,r_2)} r^{n-1}\,(r\,\partial_ru)^2\Big\}\,.
\]
We find \eqref{lambdaAA tesi 7.17 hole} by \eqref{lemma 7.16AA monoto lambda} and $(\Lambda\,b)^2\,(b^n-a^n)\le (\Lambda\,r_2)^2\,r_2^n$.
\end{proof}
\subsection{Proof of the mesoscale flatness criterion}\label{subsection mesoscale flatness criterion} As a final preliminary result to the proof of Theorem \ref{theorem mesoscale criterion}, we prove the following lemma, where Allard's regularity theorem is combined with a compactness argument to provide the basic graphicality criterion used throughout the iteration. The statement should be compared to \cite[Lemma 5.7]{allardalmgrenRADIAL}.
\begin{lemma}[Graphicality lemma]\label{lemma graphicality lambda}
Let $n\ge 2$. For every $\sigma>0$, $\Gamma\ge0$, and $(\l_3,\l_4)\subset\subset (\l_1,\l_2)\subset\subset (0,1)$ with $\l_1\ge 1/32$, there are positive constants $\varepsilon_1$ and $M_1$, depending only on $n$, $\sigma$, $\Gamma$, $\l_1$, $\l_2$, $\l_3$, and $\l_4$, with the following property. If $\Lambda\ge0$, $R\in(0,1/\Lambda)$, $V\in\mathcal{V}_n(\Lambda,R,1/\Lambda)$,
\[
\|{\rm bd}_V\|(\partial B_{R})\le\Gamma\,R^{n-1}\,,\qquad \sup_{\rho\in(R,1/\Lambda)}\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\le\Gamma\,,
\]
and there are $r>0$ and $K\in\H$ such that,
\begin{eqnarray}
\label{conditions on r lambda}
&&\hspace{-0.2cm}\max\{M_1,64\}\,R\le r\le \frac{\varepsilon_1}\Lambda\,,
\\
\label{ext hp step two 1 lambda}
&&\hspace{1cm}|\delta_{V,R,\Lambda}(r)|\le\varepsilon_1\,,
\\\label{ext hp step two 3 lambda}
&&\hspace{0.9cm}\|V\|(A_{\l_3\,r}^{\l_4\,r})>0\,,
\\
\label{ext hp step two 2 lambda}
&&\frac1{r^n}\,\int_{A_{\l_1\,r}^{\l_2\,r}}\,\omega_K^2\,d\|V\|\le\varepsilon_1\,,
\end{eqnarray}
then $V$ corresponds to $\Sigma_K(u,r/32,r/2)$ on $A_{r/32}^{r/2}$, for $u\in\mathcal{X}_\sigma(\Sigma_K,r/32,r/2)$.
\end{lemma}
\begin{proof} Should the lemma be false, we could find $\sigma>0$, $\Gamma\ge0$, $(\l_3,\l_4)\subset\subset(\l_1,\l_2)\subset\subset(0,1)$ with $\l_1\ge 1/32$, $K_j\in\H$, positive numbers $R_j$, $\Lambda_j<1/R_j$, $r_j$, and $W_j\in\mathcal{V}_n(\Lambda_j,R_j,1/\Lambda_j)$ such that
$\|W_j\|\big(A_{\l_3\,r_j}^{\l_4\,r_j}\big)>0$, $\|{\rm bd}_{W_j}\|(\partial B_{R_j})\le\Gamma\,R_j^{n-1}$, $\|W_j\|(B_\rho\setminus B_{R_j})\le\Gamma\,\rho^n$ for every $\rho\in(R_j,1/\Lambda_j)$, and $\rho_j=R_j/r_j\to 0$, $r_j\,\Lambda_j\to 0$, $\delta_{W_j,R_j,\Lambda_j}(r_j)\to 0$, and $r_j^{-n}\int_{B_{\l_2\,r_j}\setminus B_{\l_1\,r_j}} \omega_{K_j}^2\,d\|W_j\|\to 0$, but there is no $u\in\mathcal{X}_{\sigma}(\Sigma_{K_j},r_j/32,r_j/2)$ with the property that $W_j$ corresponds to $\Sigma_{K_j}(u,r_j/32,r_j/2)$ on $A_{r_j/32}^{r_j/2}$.
Hence, setting $V_j=W_j/r_j$, no $u\in\mathcal{X}_{\sigma}(\Sigma_{K_j},1/32,1/2)$ can exist such that $V_j$ corresponds to $\Sigma_{K_j}(u,1/32,1/2)$ on $A_{1/32}^{1/2}$, despite the fact that each $V_j\in\mathcal{V}_n(r_j\,\Lambda_j,\rho_j,1/(r_j\,\Lambda_j))$ satisfies
\begin{eqnarray}\nonumber
&&\|V_j\|(A_{\l_3}^{\l_4})>0\,,\,\,
\frac{\|{\rm bd}_{V_j}\|(\partial B_{\rho_j})}{\rho_j^{n-1}}\le\Gamma\,,\,\,
\sup_{\rho\in(\rho_j,1/(\Lambda_j\,r_j))}\!\!\frac{\|V_j\|(B_\rho\setminus B_{\rho_j})}{\rho^n}\le\Gamma\,,
\\
\label{lambdaext contra 1}
&&\hspace{2cm}\lim_{j\to\infty}\max\big\{\delta_{V_j,\rho_j,r_j\,\Lambda_j}(1)\,,\,\,\, \int_{A_{\l_1}^{\l_2}}\,\omega_{K_j}^2\,d\|V_j\|\big\}=0\,.
\end{eqnarray}
Clearly we can find $K\in\H$ such that, up to extracting subsequences, $K_j\cap B_1\to K\cap B_1$ in $L^1(\mathbb{R}^{n+1})$. Similarly, by \eqref{lambdaext contra 1}, we can find an $n$-dimensional integer rectifiable varifold $V$ such that $V_j\rightharpoonup V$ as varifolds in $B_1\setminus\{0\}$. Since the bound on the distributional mean curvature of $V_j$ on $B_{1/(\Lambda_j\,r_j)}\setminus \overline{B}_{\rho_j}$ is $r_j\,\Lambda_j$, and since $\rho_j\to 0^+$ and $r_j\,\Lambda_j\to 0^+$, it also follows that $V$ is stationary in $B_1\setminus \{0\}$, and thus, by a standard argument and since $n\ge 2$, on $B_1$. By $\|V_j\|(A_{\l_3}^{\l_4})>0$, for every $j$ there is $x_j\in A_{\l_3}^{\l_4}\cap{\rm spt}\,V_j$, so that, up to extracting subsequences, $x_j\to x_0$ for some $x_0\in \overline{A}_{\l_3}^{\l_4}\cap{\rm spt}\,V$.
By $(\l_3,\l_4)\subset\subset (\l_1,\l_2)$, there is $\rho>0$ such that $B_\rho(x_0)\subset A_{\l_1}^{\l_2}$, hence
$\|V\|(A_{\l_1}^{\l_2})\ge\|V\|(B_\rho(x_0))\ge \omega_n\,\rho^n>0$,
thus proving $V\,\llcorner\, A_{\l_1}^{\l_2}\ne0$. By this last fact, by $\omega_K=0$ on $({\rm spt}\,V)\cap A_{\l_1}^{\l_2}$, and by the constancy theorem \cite[Theorem 41.1]{SimonLN}, we have
$A_{\l_1}^{\l_2}\cap{\rm spt}\,V=A_{\l_1}^{\l_2}\cap K$.
Since $V$ is stationary in $B_1$, we conclude that $B_1\cap K\subset B_1\cap{\rm spt}\,V$, so that
$\|V\|(B_1)\ge\,\omega_n$. At the same time, since $\|{\rm bd}_{V_j}\|(\partial B_{\rho_j})\le\Gamma\,\rho_j^{n-1}$ and $\|V_j\|(B_\rho\setminus B_{\rho_j})\le\Gamma\,\rho^n$ for every $\rho\in(\rho_j,1/(\Lambda_j\,r_j))\supset (\rho_j,1)$, by \eqref{lambdaext contra 1},
\begin{eqnarray*}
\omega_n&=&\!\!\!\!\lim_{j\to\infty}\|V_j\|(B_1\setminus B_{\rho_j})-
\frac{\rho_j}n\,\|\delta V_j\|(\partial B_{\rho_j})+\Lambda_j\,r_j\,\int_{\rho_j}^1\,\frac{\|V_j\|(B_\rho\setminus B_{\rho_j})}{\rho^n}\,d\rho
\\
&\ge&\|V\|(B_1)-\Gamma\,\varlimsup_{j\to\infty}\big(\rho_j^n+\Lambda_j\,r_j\big)=\|V\|(B_1)\,,
\end{eqnarray*}
so that $\|V\|(B_1)=\omega_n$ and thus $B_1\cap K=B_1\cap({\rm spt} V)$. By Allard's regularity theorem and $V_j\rightharpoonup V$, there are $\sigma_j\to 0$ and $u_j\in\mathcal{X}_{\sigma_j}(\Sigma_K,1/32,1/2)$ such that $V_j$ corresponds to $\Sigma_K(u_j,1/32,1/2)$ in $A_{1/32}^{1/2}$ for $j$ large, which is a contradiction as soon as $\sigma_j<\sigma$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem mesoscale criterion}] We start by imposing some constraints on the constants $\varepsilon_0$ and $M_0$ appearing in the statement. For the finite set
\begin{equation}
\label{def of J lambda}
J=\Big\{\Big(\frac13,\frac16\Big),\Big(\frac23,\frac13\Big)\Big\}\subset\big\{(\eta_0,\eta):\eta_0>\eta>0\big\}\,,
\end{equation}
we let $\sigma_0=\sigma_0(n)$ be such that Lemma \ref{lemma step one}-(ii), Theorem \ref{theorem 7.15AA main estimate lambda}, and Theorem \ref{theorem 7.17AA exterior lambda}-(ii,iii) hold for every $(\eta_0,\eta)\in J$, and such that
\begin{equation}
\label{choice of sigma0}
\sigma_0\le\frac{\sigma_1}{C_0}\qquad\mbox{($\sigma_1(n)$ as in \eqref{equuivalence between u square and omega square};
$C_0(n)$ as in Lemma \ref{lemma step one}-(ii))}\,;
\end{equation}
we shall henceforth assume, without loss of generality, that
\[
\sigma<\sigma_0\,.
\]
Moreover, for $\varepsilon_1$ and $M_1$ as in Lemma \ref{lemma graphicality lambda}, we let
\begin{equation}
\label{choice of M0}
M_0\ge\max\Big\{M_1\Big(n,\sigma,\Gamma,\Big(\frac1{8},\frac12\Big),\Big(\frac16,\frac14\Big)\Big),
M_1\Big(n,\sigma,\Gamma,\Big(\frac1{16},\frac18\Big),\Big(\frac{3}{32},\frac{7}{64}\Big)\Big)\Big\}\,,
\end{equation}
\begin{equation}
\label{choice of eps0}
\varepsilon_0\le
\min\Big\{\varepsilon_1\Big(n,\sigma,\Gamma,\Big(\frac1{8},\frac12\Big),\Big(\frac16,\frac14\Big)\Big),
\varepsilon_1\Big(n,\sigma,\Gamma,\Big(\frac1{16},\frac18\Big),\Big(\frac{3}{32},\frac{7}{64}\Big)\Big)\Big\}\,.
\end{equation}
We also assume that $\varepsilon_0$ is smaller than the $n$-dependent $\varepsilon_0$'s appearing in Lemma \ref{lemma step one sigma} and Lemma \ref{lemma step one}. This said, let us recall that, by assumption, $V\in\mathcal{V}_n(\Lambda,R,1/\Lambda)$ is such that
\begin{equation}
\label{bounds needed for proof meso}
\|{\rm bd}_V\|(\partial B_{R})\le\Gamma\,R^{n-1}\,,\qquad\|V\|(B_\rho\setminus B_R)\le\Gamma\,\rho^n\,,\,\,\forall \rho\in(R,1/\Lambda)\,;
\end{equation}
in particular, by Theorem \ref{theorem 7.17AA exterior lambda}-(i),
\begin{equation}
\label{is decreasing}
\mbox{$\delta_{V,R,\Lambda}$ is decreasing on $(R,1/\Lambda)$}\,.
\end{equation}
Moreover, there is $s$ with $\max\{64,M_0\}\,R<s<\varepsilon_0/4\,\Lambda$ such that
\begin{eqnarray}\label{ext bd easy 1 proof lambda}
&&|\delta_{V,R,\Lambda}(s/8)|\le \varepsilon_0\,,
\\\label{ext bd easy 3 proof lambda}
&&R_*=\sup\Big\{\rho\ge\frac{s}8: \delta_{V,R,\Lambda}(\rho)\ge -\varepsilon_0\Big\}\ge 4\,s\,,
\\\label{lambdaext bd easy 2 proof}
&&\frac1{s^n}\,\int_{A_{s/8}^{s/2}}\,\omega_H(y)^2\,d\|V\|_y\le\varepsilon_0\,,
\\
\label{lambdaext bd not so easy proof}
&&\|V\|(A_{s/6}^{s/4})>0\,.
\end{eqnarray}
By \eqref{is decreasing}, \eqref{ext bd easy 1 proof lambda} and \eqref{ext bd easy 3 proof lambda} we have
\begin{equation}
\label{delta below eps0 in absolute value}
|\delta_{V,R,\Lambda}(r)|\le\varepsilon_0\,,\qquad\forall r\in(s/8,R_*)\,.
\end{equation}
By \eqref{choice of M0}, \eqref{choice of eps0}, \eqref{bounds needed for proof meso}, \eqref{lambdaext bd easy 2 proof}, \eqref{lambdaext bd not so easy proof} and \eqref{delta below eps0 in absolute value} we can apply Lemma \ref{lemma graphicality lambda} with $(\l_1,\l_2)=(1/8,1/2)$, $(\l_3,\l_4)=(1/6,1/4)$, and $r=s$. Setting $H_0=H$, we thus find $u_0\in\mathcal{X}_\sigma(\S_{H_0},s/32,s/2)$ such that
\begin{equation}
\label{ext bd easy corresponds 0 lambda}
\mbox{$V$ corresponds to $\Sigma_{H_0}(u_0,s/32,s/2)$ on $A_{s/32}^{s/2}$}\,,
\end{equation}
and
thanks to \eqref{equuivalence between u square and omega square}, \eqref{choice of sigma0}, \eqref{ext bd easy corresponds 0 lambda}, and \eqref{lambdaext bd easy 2 proof},
\begin{eqnarray}\label{ext bd decay Tj 0 lambda}
T_0:=\frac{1}{(s/4)^n}\,\int_{s/8}^{s/4}\!\!\!\!r^{n-1}\,dr\!\!\int_{\Sigma_{H_0}}\!\![u_0]_r^2
\le\frac{C(n)}{s^n}\,\int_{A_{s/8}^{s/4}}\,\omega_H^2\,d\|V\|\le C(n)\varepsilon_0.
\end{eqnarray}
By \eqref{ext bd easy 3 proof lambda} and by $s<\varepsilon_0/4\,\Lambda$ there is $N\in\{j\in\mathbb{N}:j\ge 2\}\cup\{+\infty\}$ such that
\begin{equation}
\label{def of N}
\{0,1,\dots,N\}=\big\{j\in\mathbb{N}: 8\,s_j\le S_*=\min\big\{R_*,\frac{\varepsilon_0}\Lambda\big\}\big\}\,,\quad s_j=2^{j-3}\,s\,.
\end{equation}
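Spelled out, the dyadic radii in \eqref{def of N} are
\[
s_0=\frac{s}8\,,\qquad s_1=\frac{s}4\,,\qquad s_2=\frac{s}2\,,\qquad s_{j+1}=2\,s_j\,,
\]
and the defining condition $8\,s_j\le S_*$ amounts to $2^j\,s\le S_*$; in particular, when $N<\infty$, $N$ is the largest integer with $2^N\,s\le S_*$, that is, $2^N\le S_*/s<2^{N+1}$.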
Notice that if $\Lambda>0$, then necessarily $N<\infty$. We are now in a position to make the following {\bf claim:} there exist $\tau=\tau(n)\in(0,1)$ and $\{(H_j,u_j)\}_{j=0}^{N-2}$ with $H_j\in\H$ and $u_j\in\mathcal{X}_\sigma(\Sigma_{H_j},s/32,4\,s_j)$, such that, setting
\[
T_j=\frac1{s_{j+1}^n}\,\int_{s_j}^{s_{j+1}}\,r^{n-1}\,dr\,\int_{\Sigma_{H_j}}\,[u_j]_r^2\,,
\]
we have, for every $j=0,...,N-2$,
\begin{eqnarray}
\label{claim is spherical graph}
&&\mbox{$V$ corresponds to $\Sigma_{H_j}(u_j,s/32,4\,s_j)$ on $A_{s/32}^{4\,s_j}$}\,,
\\
\label{claim delta smaller eps0}
&&|\delta_{V,R,\Lambda}(s_j)|\le\varepsilon_0\,,
\\
\label{claim Tj smaller eps0}
&&T_j\le C(n)\,\varepsilon_0\,,
\end{eqnarray}
and, for every $j=1,...,N-2$,
\begin{eqnarray}
\label{claim decay normal vectors}
|\nu_{H_j}-\nu_{H_{j-1}}|^2&\!\!\!\!\le&\!\!\!C(n)\,T_{j-1}\,,
\\
\label{claim deltaj smaller deltaj-1}
\delta_{V,R,\Lambda}(s_j)\!\!\!\!&\le&\!\!\! \tau\,\big\{\delta_{V,R,\Lambda}(s_{j-1})+(1+\Gamma)\,\Lambda\,s_{j-1}\big\}\,,
\\
\label{claim Tj smaller deltaj-1}
T_j\!\!\!\!&\le&\!\! \!C(n)\,\big\{\delta_{V,R,\Lambda}(s_{j-1})-\delta_{V,R,\Lambda}(s_{j+2})+\Lambda\,s_{j-1}\big\}\,.
\end{eqnarray}
\noindent {\bf Proof of the claim:} We argue by induction. Clearly \eqref{claim is spherical graph}$_{j=0}$, \eqref{claim delta smaller eps0}$_{j=0}$ and \eqref{claim Tj smaller eps0}$_{j=0}$ are, respectively, \eqref{ext bd easy corresponds 0 lambda}, \eqref{ext bd easy 1 proof lambda} and \eqref{ext bd decay Tj 0 lambda}. This concludes the proof of the claim if $N=2$, therefore we shall assume $N\ge 3$ for the rest of the argument. To set up the inductive argument, we consider $\ell\in\mathbb{N}$ such that: either $\ell=0$; or $1\le\ell\le N-3$ and \eqref{claim is spherical graph}, \eqref{claim delta smaller eps0}, and \eqref{claim Tj smaller eps0} hold for $j=0,...,\ell$, and \eqref{claim decay normal vectors}, \eqref{claim deltaj smaller deltaj-1} and \eqref{claim Tj smaller deltaj-1} hold for $j=1,...,\ell$; and we prove that all the conclusions of the claim hold for $j=\ell+1$.
The validity of \eqref{claim delta smaller eps0}$_{j=\ell+1}$ is of course immediate from \eqref{delta below eps0 in absolute value} and \eqref{def of N}. Also, after proving \eqref{claim Tj smaller deltaj-1}$_{j=\ell+1}$, we will be able to combine it with \eqref{claim delta smaller eps0}$_{j=\ell+1}$ and \eqref{def of N} to deduce \eqref{claim Tj smaller eps0}$_{j=\ell+1}$. We now prove, in order, \eqref{claim decay normal vectors}, \eqref{claim is spherical graph}, \eqref{claim deltaj smaller deltaj-1}, and \eqref{claim Tj smaller deltaj-1} with $j=\ell+1$. {\bf To prove \eqref{claim decay normal vectors}$_{j=\ell+1}$}: Let $[a,b]\subset\subset (s_\ell,s_{\ell+1})$ with $(b-a)=(s_{\ell+1}-s_\ell)/2$, so that
\begin{equation}
\label{ext sell star lambda}
\frac1{C(n)}\,\min_{r\in[a,b]}\int_{\Sigma_{H_\ell}}[u_\ell]_r^2\le
\frac1{s_{\ell+1}^n}\,\int_{s_{\ell}}^{s_{\ell+1}}\,r^{n-1}\,dr\,\int_{\Sigma_{H_\ell}}[u_\ell]_r^2= T_\ell\,.
\end{equation}
Keeping in mind \eqref{claim is spherical graph}$_{j=\ell}$, we can apply Lemma \ref{lemma step one}-(ii) with $(r_1,r_2)=(s/32,4\,s_\ell)$ and $[a,b]$ to find $H_{\ell+1}\in\H$, $u_{\ell+1}\in\mathcal{X}_{C_0\,\sigma_0}(\Sigma_{H_{\ell+1}},s/32,4\,s_{\ell})$ (with $C_0$ as in Lemma \ref{lemma step one}-(ii)) and
\begin{equation}
\label{where is sell star}
s_\ell^*\in[a,b]\subset(s_\ell,s_{\ell+1})\,,
\end{equation}
such that, thanks also to \eqref{ext sell star lambda},
\begin{eqnarray}\label{ext u ell plus one 1 lambda}
&&\Sigma_{H_\ell}(u_\ell,s/32,4\,s_{\ell})=\Sigma_{H_{\ell+1}}(u_{\ell+1},s/32,4\,s_{\ell})\,,
\\\label{ext u ell plus one 2 lambda}
&&E_{\Sigma_{H_{\ell+1}}}^0\big([u_{\ell+1}]_{s_{\ell}^*}\big)=0\,,
\\\label{ext u ell plus one 3 lambda}
&&|\nu_{H_{\ell}}-\nu_{H_{\ell+1}}|^2\le C(n)\,T_\ell\,,
\\\label{ext u ell plus one 4 lambda}
&&\int_{\Sigma_{H_{\ell+1}}}[u_{\ell+1}]_r^2\le C(n)\,\Big(T_\ell+\int_{\Sigma_{H_\ell}}[u_\ell]_r^2\Big)\,\,,\qquad\forall r\in(s/32,4\,s_{\ell})\,.\hspace{1cm}
\end{eqnarray}
In particular, \eqref{ext u ell plus one 3 lambda} is \eqref{claim decay normal vectors}$_{j=\ell+1}$. {\bf To prove \eqref{claim is spherical graph}$_{j=\ell+1}$}: Notice that \eqref{ext u ell plus one 1 lambda} and \eqref{claim is spherical graph}$_{j=\ell}$ do not imply \eqref{claim is spherical graph}$_{j=\ell+1}$, since, in \eqref{claim is spherical graph}$_{j=\ell+1}$, we are claiming the graphicality of $V$ inside $A_{s/32}^{4\,s_{\ell+1}}$ (which is strictly larger than $A_{s/32}^{4\,s_\ell}$), and we are claiming that $u_{\ell+1}$ has $C^1$-norm bounded by $\sigma$, and not just by $C_0\,\sigma_0$ (with $C_0$ as in Lemma \ref{lemma step one}-(ii)). We want to apply Lemma \ref{lemma graphicality lambda} with
\begin{equation}
\label{ext parameters for step two lambda}
r=8\,s_{\ell+1}\,,\,\,(\l_1,\l_2)=\Big(\frac1{16},\frac18\Big)\,,\,\,(\l_3,\l_4)=\Big(\frac{3}{32},\frac{7}{64}\Big)\,,\,\, K=H_{\ell+1}\,.
\end{equation}
We check the validity of \eqref{conditions on r lambda}, \eqref{ext hp step two 1 lambda}, \eqref{ext hp step two 2 lambda} and \eqref{ext hp step two 3 lambda} for these choices of $r$, $\l_1$, $\l_2$, $\l_3$, $\l_4$ and $K$. Since $r=8\,s_{\ell+1}\ge s\ge\max\{M_0,64\}\,R$, and since \eqref{def of N} and $\ell+1\le N$ give $r=8\,s_{\ell+1}\le \varepsilon_0/\Lambda$, we deduce the validity of \eqref{conditions on r lambda} with $r=8\,s_{\ell+1}$. The validity of \eqref{ext hp step two 1 lambda} with $r=8\,s_{\ell+1}$ is immediate from \eqref{delta below eps0 in absolute value} by our choice \eqref{choice of eps0} of $\varepsilon_0$. Next we notice that
\[
\|V\|(A_{\l_3\,r}^{\l_4\,r})=\|V\|(A_{3\,[8\,s_{\ell+1}]/32}^{7\,[8\,s_{\ell+1}]/64})=\|V\|(A_{3\,s_{\ell}/2}^{7\,s_\ell/4})>0
\]
thanks to \eqref{claim is spherical graph}$_{j=\ell}$, so that \eqref{ext hp step two 3 lambda} holds for $r$, $\l_3$ and $\l_4$ as in \eqref{ext parameters for step two lambda}. Finally, by \eqref{equuivalence between u square and omega square} (which applies to $u_{\ell+1}$ by \eqref{choice of sigma0}), \eqref{ext u ell plus one 1 lambda}, \eqref{claim is spherical graph}$_{j=\ell}$, and \eqref{ext u ell plus one 4 lambda},
\begin{eqnarray*}
&&\frac1{r^n}\,\int_{A_{\l_1\,r}^{\l_2\,r}}\,\omega_{H_{\ell+1}}^2\,d\|V\|\le
\frac{C(n)}{s_{\ell+1}^n}\,\int_{s_\ell}^{s_{\ell+1}}\,r^{n-1}\,dr\,\int_{\Sigma_{H_{\ell+1}}}[u_{\ell+1}]_r^2
\\
&&\le C(n)\,T_\ell+
\frac{C(n)}{s_{\ell+1}^n}\,\int_{s_\ell}^{s_{\ell+1}}\,r^{n-1}\,dr\,\int_{\Sigma_{H_{\ell}}}[u_{\ell}]_r^2
\le C(n)\,T_\ell\le C(n)\,\varepsilon_0\,,
\end{eqnarray*}
where in the last inequality we have used \eqref{claim Tj smaller eps0}$_{j=\ell}$. Again by our choice \eqref{choice of eps0} of $\varepsilon_0$, we deduce that \eqref{ext hp step two 2 lambda} holds with $r$, $\l_1$ and $\l_2$ as in \eqref{ext parameters for step two lambda}. We can thus apply Lemma \ref{lemma graphicality lambda}, and find $v\in\mathcal{X}_\sigma(\Sigma_{H_{\ell+1}},s_{\ell+1}/4,4\,s_{\ell+1})$ such that
\begin{equation}
\label{ext jelly lambda}
\mbox{$V$ corresponds to $\Sigma_{H_{\ell+1}}(v,s_{\ell+1}/4,4\,s_{\ell+1})$ on $A_{s_{\ell+1}/4}^{4\,s_{\ell+1}}$}\,.
\end{equation}
By \eqref{ext u ell plus one 1 lambda}, \eqref{claim is spherical graph}$_{j=\ell}$, and \eqref{ext jelly lambda}, $v=u_{\ell+1}$ on $\S_{H_{\ell+1}}\times(s_{\ell+1}/4, 4\,s_\ell)$. We can thus use $v$ to extend $u_{\ell+1}$ from $\S_{H_{\ell+1}}\times(s/32, 4\,s_\ell)$ to $\S_{H_{\ell+1}}\times(s/32,4\,s_{\ell+1})$, and, thanks to \eqref{ext u ell plus one 1 lambda}, \eqref{claim is spherical graph}$_{j=\ell}$ and \eqref{ext jelly lambda}, the resulting extension is such that \eqref{claim is spherical graph}$_{j=\ell+1}$ holds. {\bf To prove \eqref{claim deltaj smaller deltaj-1}$_{j=\ell+1}$}: We set $r_0=(s_\ell+s_{\ell+1})/2$ and notice that for $\eta_0=1/3$ we have
\begin{equation}
\label{ext choices 1 lambda}
r_1=r_0\,(1-\eta_0)=s_\ell\,,\qquad r_2=r_0\,(1+\eta_0)=s_{\ell+1}\,.
\end{equation}
For $\eta=1/6$ we correspondingly set
\begin{equation}
\label{ext choices 2 lambda}
r_3=r_0\,(1-\eta)=:s_\ell^-\,,\qquad r_4=r_0\,(1+\eta)=:s_\ell^+\,,
\end{equation}
and notice that $(\eta_0,\eta)\in J$, see \eqref{def of J lambda}. With the aim of applying Theorem \ref{theorem 7.17AA exterior lambda}-(iii) to these radii, we notice that \eqref{claim is spherical graph}$_{j=\ell+1}$ implies that assumption \eqref{V corresponds to u monoto lambda} holds with $H=H_{\ell+1}$ and $u=u_{\ell+1}$, while, by \eqref{ext u ell plus one 2 lambda}, $r=s_\ell^*\in(s_\ell,s_{\ell+1})$ is such that \eqref{lambdaAA 7.15(4) for 7.17 hole} holds. By $\Lambda\,s_{\ell+1}\le\varepsilon_0\le 1$, \eqref{def of N}, and \eqref{lambdaAA tesi 7.17 hole}, with $C(n)=C_0(n,1/6,1/3)$ for $C_0$ as in Theorem \ref{theorem 7.17AA exterior lambda}-(iii), we have
\begin{eqnarray}
\nonumber
&&s_{\ell+1}^{-n}\,\,\big|\|V\|\big(B_{s_\ell^+}\setminus B_{s_\ell^-}\big)-\omega_n\,\big((s_\ell^+)^n-(s_\ell^-)^n\big)\big|
\\\nonumber
&&= s_{\ell+1}^{-n}\big|\H^n(\Sigma_{H_{\ell+1}}(u_{\ell+1},s_\ell^-,s_\ell^+))-\H^n(\Sigma_{H_{\ell+1}}(0,s_\ell^-,s_\ell^+))\big|
\\\label{compare to lambda}
&&\le C(n)\,\big\{(\Lambda\,s_{\ell+1})^2\,\,
+\Theta_{V,R,\Lambda}(s_{\ell+1})-\Theta_{V,R,\Lambda}(s_{\ell})\big\}\,.
\end{eqnarray}
Setting for brevity $\delta=\delta_{V,R,\Lambda}$ and $\Theta=\Theta_{V,R,\Lambda}$, and recalling that
\begin{eqnarray*}
&&r^n\,\delta(r)=\omega_n\,r^n-\Theta(r)\,r^n
\\
&&=\omega_n\,r^n-\|V\|(B_r\setminus B_R)
-\Lambda\,r^n\,\int_R^r\,\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,d\rho+\frac{R\,\|\delta V\|(\partial B_R)}{n}
\end{eqnarray*}
we have
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!s_\ell^{-n}\,\big|(s_{\ell}^-)^n\,\delta(s_{\ell}^-)
-(s_{\ell}^+)^n\,\delta(s_{\ell}^+)\big|
\le C(n)\,\big\{(\Lambda\,s_\ell)^2+\Theta(s_{\ell+1})-\Theta(s_{\ell})\big\}
\\
&&\!\!\!\!\!\!\!\!\!\!+C(n)\,\Lambda\,s_\ell^{-n}\Big\{(s_\ell^+)^n\int_{R}^{s_{\ell}^+}\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,d\rho
-(s_\ell^-)^n\int_{R}^{s_{\ell}^-}\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,d\rho\Big\}
\\
&&\!\!\le C(n)\,\Big\{(\Lambda\,s_\ell)^2 +\Theta(s_{\ell+1})-\Theta(s_{\ell})\Big\}
+C(n)\,\Lambda\, \int_{R}^{s_{\ell}^+}\frac{\|V\|(B_\rho\setminus B_R)}{\rho^n}\,d\rho\,.
\end{eqnarray*}
By $\Lambda\,s_\ell\le 1$ and since $s_{\ell}^+\le s_{\ell+1}\le \varepsilon_0/8\,\Lambda$ thanks to $\ell+1\le N$, we can use the upper bound $\|V\|(B_\rho\setminus B_R)\le \Gamma\,\rho^n$ with $\rho\in(R,s_\ell^+)\subset(R,1/\Lambda)$, to find that
\begin{eqnarray*}
\Big|\frac{(s_{\ell}^-)^n}{s_\ell^n}\,\delta(s_{\ell}^-)
-\frac{(s_{\ell}^+)^n}{s_\ell^n}\,\delta(s_{\ell}^+)\Big|
\le\! C_*(n)\,\big\{\delta(s_{\ell})-\delta(s_{\ell+1})\big\}+C_*(n)\,(\Gamma+1)\,\Lambda\,s_\ell\,,
\end{eqnarray*}
for a constant $C_*(n)$. By rearranging terms, and using that $\delta$ is decreasing on $(R,1/\Lambda)$ (recall \eqref{is decreasing}) together with $(s_\ell^-,s_\ell^+)\subset(s_\ell,s_{\ell+1})$, we find that
\begin{eqnarray}\nonumber
&&\big(C_*(n)\,+(s_{\ell}^+)^n/(s_\ell^n)\big)\,\delta(s_{\ell+1})\le C_*(n)\,\delta(s_{\ell+1})+\big((s_{\ell}^+)^n/(s_\ell^n)\big)\,\delta(s_{\ell}^+)
\\\nonumber
&\le&
C_*(n)\,\delta(s_\ell)+\big((s_{\ell}^-)^n/(s_\ell^n)\big)\,\delta(s_\ell^-)+C_*(n)\,(1+\Gamma)\,\Lambda \,s_\ell
\\\label{ext grandi aa lambda}
&\le&
\big(C_*(n)\,+(s_{\ell}^-)^n/(s_\ell^n)\big)\,\delta(s_{\ell})+C_*(n)\,(1+\Gamma)\,\Lambda \,s_\ell\,.
\end{eqnarray}
We finally notice that by \eqref{ext choices 1 lambda}, \eqref{ext choices 2 lambda}, $\eta_0=1/3$, and $\eta=1/6$, we have
\[
\frac{s_\ell^-}{s_\ell}=\frac{r_0\,(1-\eta)}{r_0\,(1-\eta_0)}=\frac{5}4\,,\qquad
\frac{s_\ell^+}{s_\ell}=2\,\frac{s_\ell^+}{s_{\ell+1}}=2\,\frac{1+\eta}{1+\eta_0}=\frac{7}4\,,
\]
so that we find $\delta(s_{\ell+1})\le\tau\{\delta(s_\ell)+(1+\Gamma)\,\Lambda\,s_\ell\}$ (i.e. \eqref{claim deltaj smaller deltaj-1}$_{j=\ell+1}$) with
\[
\tau=\tau(n)=\frac{C_*(n)+(5/4)^n}{C_*(n)+(7/4)^n}\,,\qquad \tau_*=\tau_*(n)=\frac{C_*(n)}{C_*(n)+(7/4)^n}<\tau\,.
\]
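For the reader's convenience, we spell out the elementary algebra behind this last step: dividing \eqref{ext grandi aa lambda} by the positive constant $C_*(n)+(7/4)^n=C_*(n)+(s_\ell^+)^n/s_\ell^n$ gives
\[
\delta(s_{\ell+1})\le\frac{C_*(n)+(5/4)^n}{C_*(n)+(7/4)^n}\,\delta(s_\ell)+\frac{C_*(n)}{C_*(n)+(7/4)^n}\,(1+\Gamma)\,\Lambda\,s_\ell
=\tau\,\delta(s_\ell)+\tau_*\,(1+\Gamma)\,\Lambda\,s_\ell\,,
\]
and \eqref{claim deltaj smaller deltaj-1}$_{j=\ell+1}$ follows since $\tau_*<\tau$ and $(1+\Gamma)\,\Lambda\,s_\ell\ge0$.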
{\bf We finally prove \eqref{claim Tj smaller deltaj-1}$_{j=\ell+1}$}, i.e.
\begin{equation}
\label{ext bd decay Tj j ell plus one lambda} \frac1{s_{\ell+1}^n}\int_{s_{\ell+1}}^{2\,s_{\ell+1}}\!\!\!r^{n-1}\!\!\int_{\Sigma_{H_{\ell+1}}}\!\!\!\!\![u_{\ell+1}]_r^2\le
C(n)\big\{\delta_{V,R,\Lambda}(s_{\ell})-\delta_{V,R,\Lambda}(s_{\ell+3})+\Lambda\,s_{\ell}\big\}.
\end{equation}
By \eqref{claim is spherical graph}$_{j=\ell+1}$ we know that
\begin{equation}
\label{ext yes we know lambda}
\mbox{$V$ corresponds to $\S_{H_{\ell+1}}(u_{\ell+1},s/32,4\,s_{\ell+1})$ on $A_{s/32}^{4\,s_{\ell+1}}$}\,.
\end{equation}
Now, \eqref{r1r2r3r4} holds with $r_0=3\,s_\ell$ and $(\eta_0,\eta)=(2/3,1/3)\in J$, see \eqref{def of J lambda}, if
\begin{eqnarray*}
&&r_1=s_\ell=3\,s_\ell-2\,s_\ell\,,\qquad r_2=5\,s_{\ell}=3\,s_\ell+2\,s_\ell\,,
\\
&&r_3=s_{\ell+1}=3\,s_\ell-s_\ell\,,\qquad\,\,\,\, \,r_4=2\,s_{\ell+1}=3\,s_\ell+s_\ell\,.
\end{eqnarray*}
Since $s_\ell^*\in(s_\ell,s_{\ell+1})\subset (r_1,r_2)$,
by \eqref{ext yes we know lambda}, \eqref{ext u ell plus one 2 lambda} and $(r_1,r_2)\subset (s/32,4\,s_{\ell+1})$ we can apply Theorem \ref{theorem 7.15AA main estimate lambda} to deduce that
\[
\int_{s_{\ell+1}}^{2\,s_{\ell+1}}\!\!\!r^{n-1}\!\int_{\Sigma_{H_{\ell+1}}}\,[u_{\ell+1}]_r^2\le C(n)\,\int_{s_\ell}^{5\,s_\ell}\!\!\!\!r^{n+1}\!\int_{\Sigma_{H_{\ell+1}}}\!\!\!\!\!\!(\partial_ru_{\ell+1})_r^2
+C(n)\,\Lambda\,(s_\ell)^{n+1}\,.
\]
Again by \eqref{ext yes we know lambda}, Theorem \ref{theorem 7.17AA exterior lambda}-(ii) with $(r_1,r_2)=(s_\ell,8\,s_\ell)$ gives
\begin{eqnarray*}
&&s_\ell^{-n}\,\int_{s_\ell}^{5\,s_\ell}\,r^{n+1}\,\int_{\Sigma_{H_{\ell+1}}}\,(\partial_r[u_{\ell+1}])_r^2
\le
s_\ell^{-n}\,\int_{s_\ell}^{8\,s_\ell}\,r^{n+1}\,\int_{\Sigma_{H_{\ell+1}}}\,(\partial_r[u_{\ell+1}])_r^2
\\
&&\le C(n)\,\big\{\Theta_{V,R,\Lambda}(8\,s_\ell)-\Theta_{V,R,\Lambda}(s_{\ell})\big\}
\le C(n)\,\big\{\delta_{V,R,\Lambda}(s_{\ell})-\delta_{V,R,\Lambda}(s_{\ell+3})\big\}\,.
\end{eqnarray*}
The last two estimates combined give \eqref{ext bd decay Tj j ell plus one lambda}. This completes the proof of the {\bf claim}. {\bf Proof of statement (i)}: We assume $S_*<\infty$ (that is, either $\Lambda>0$ or $R_*<\infty$). In this case $N$ (as defined in \eqref{def of N}) is finite, with $2^N\le S_*/s<2^{N+1}$. By \eqref{claim is spherical graph}$_{j=N-2}$, $V$ corresponds to $\Sigma_{H_{N-2}}(u_{N-2},s/32,4\,s_{N-2})$ on $A_{s/32}^{4\,s_{N-2}}$, and since $4\,s_{N-2}=4\,2^{N-2-3}\,s=2^{N+1}\,s/16>S_*/16$,
we deduce \eqref{mesoscale thesis graphicality} with $K=H_{N-2}$ and $u=u_{N-2}$. {\bf Proof of statement (ii)}: We assume $\Lambda=0$ and $R_*=+\infty$ (that is, $\delta_{V,R,0}\ge-\varepsilon_0$ on $(s/8,\infty)$). In this case,
we have $N=+\infty$, and the iteration procedure set up in the claim actually defines a sequence $\{(H_j,u_j)\}_{j=0}^\infty$
with $u_j\in\mathcal{X}_\sigma(\S_{H_j},s/32,4\,s_j)$ and
\begin{equation}
\label{claim is spherical graph jj}
\mbox{$V$ corresponds to $\Sigma_{H_j}(u_j,s/32,4\,s_j)$ on $A_{s/32}^{4\,s_j}$}
\end{equation}
for every $j\ge0$. By compactness of $\SS^n$, we can find $K\in\H$ and a subsequence $j(k)$ such that $\varepsilon_k=|\nu_K-\nu_{H_{j(k)}}|\to 0$ as $k\to\infty$. In particular, for $k$ large enough, we have $\varepsilon_k<\varepsilon_0$, and thus, by Lemma \ref{lemma step one}-(i) and by \eqref{claim is spherical graph jj} we can find $v_k\in\mathcal{X}_{C(n)\,(\sigma+\varepsilon_k)}(\S_K;s/32,4\,s_{j(k)})$ such that
\begin{equation}
\label{claim is spherical graph jjj}
\mbox{$V$ corresponds to $\Sigma_K(v_k,s/32,4\,s_{j(k)})$ on $A_{s/32}^{4\,s_{j(k)}}$}\,.
\end{equation}
By \eqref{claim is spherical graph jjj}, $v_{k+1}=v_k$ on $\Sigma_K\times(s/32,4\,s_{j(k)})$. Since $s_{j(k)}\to\infty$, we have found $u\in\mathcal{X}_{C(n)\,\sigma}(\S_K;s/32,\infty)$ such that $V$ corresponds to $\Sigma_K(u,s/32,\infty)$ on $A_{s/32}^\infty$. This proves statement (ii). {\bf Proof of statement (iii)}: We finally assume that $\Lambda=0$ and, setting for brevity $\delta=\delta_{V,R,0}$,
\begin{equation}
\label{positive}
\delta(r)\ge0\,,\qquad\forall r\ge s/8\,.
\end{equation}
As for statement (ii), $N=+\infty$, and there is $\{(H_j,u_j)\}_{j=0}^\infty$ satisfying
\begin{eqnarray}
\label{claim is spherical graph fine}
&&\mbox{$V$ corresponds to $\Sigma_{H_j}(u_j,s/32,4\,s_j)$ on $A_{s/32}^{4\,s_j}$}\,,\qquad\forall j\ge0\,,
\end{eqnarray}
\vspace{-0.5cm}
\begin{eqnarray}
\label{claim decay normal vectors fine}
|\nu_{H_j}-\nu_{H_{j-1}}|^2&\le& C(n)\,T_{j-1}\,,\qquad\hspace{0.8cm}\mbox{if $j\ge 1$}\,,
\\
\label{claim delta fine}
\delta(s_j)&\le&
\left\{
\begin{split}
&\varepsilon_0\,,\hspace{1.75cm}\qquad\mbox{if $j=0$}\,,
\\
&\tau\,\delta(s_{j-1})\,,\hspace{0.63cm}\qquad\mbox{if $j\ge1$}\,,
\end{split}
\right .
\\
\label{claim Tj fine}
T_j&\le&\left\{
\begin{split}
&C(n)\,\varepsilon_0\,,\hspace{0.85cm}\qquad\mbox{if $j=0$}\,,
\\
&C(n)\,\delta(s_{j-1})\,,\qquad\mbox{if $j\ge1$}\,.
\end{split}
\right .
\end{eqnarray}
Notice that, in asserting the validity of \eqref{claim Tj fine} with $j\ge 1$, we have used \eqref{positive} to estimate $-\delta(s_{j+2})\le0$ in \eqref{claim Tj smaller deltaj-1}$_{j}$. By iterating \eqref{claim delta fine} we find
\begin{equation}
\label{fine deltaj}
\delta(s_j)\le \tau^j\,\delta(s/8)\le \tau^j\,\varepsilon_0\,,\qquad\forall j\ge 1\,,
\end{equation}
which, combined with \eqref{claim Tj fine} and \eqref{claim decay normal vectors fine}, gives, for every $j\ge 1$,
\begin{eqnarray}\label{fine Tj}
T_j\le C(n)\,\min\{1,\tau^{j-1}\}\,\delta(s/8)\le C(n)\,\tau^j\,\delta(s/8)\,,
\\\label{fine nuj}
|\nu_{H_j}-\nu_{H_{j-1}}|^2\le C(n)\,\min\{1,\tau^{j-2}\}\,\delta(s/8)\le C(n)\,\tau^j\,\delta(s/8)\,,
\end{eqnarray}
thanks also to $\tau=\tau(n)$ and, again, to \eqref{positive}. By \eqref{fine nuj}, for every $j\ge 0$, $k\ge 1$, we have
$|\nu_{H_{j+k}}-\nu_{H_j}|\le C(n)\,\sqrt{\delta(s/8)}\,\sum_{h=1}^{k+1}\big(\sqrt\tau\big)^{j-1+h}$,
so that $\{\nu_{H_j}\}_j$ is a Cauchy sequence; hence there is $K\in\H$ with $|\nu_{H_j}-\nu_K|\to 0$ and
\begin{equation}
\label{nuK convergence}
|\nu_K-\nu_{H_j}|^2\le C(n)\,\tau^j\,\delta(s/8)\,,\qquad\forall j\ge 1\,.
\end{equation}
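In more detail, \eqref{nuK convergence} follows from the previous estimate by summing a geometric series: since $\tau=\tau(n)\in(0,1)$,
\[
\sum_{h=1}^{k+1}\big(\sqrt{\tau}\big)^{j-1+h}\le\big(\sqrt{\tau}\big)^{j}\,\sum_{h=0}^{\infty}\big(\sqrt{\tau}\big)^{h}=\frac{\big(\sqrt{\tau}\big)^{j}}{1-\sqrt{\tau}}\,,
\]
so that $|\nu_{H_{j+k}}-\nu_{H_j}|\le C(n)\,\sqrt{\tau^{j}\,\delta(s/8)}$ uniformly in $k\ge 1$; letting $k\to\infty$ then gives \eqref{nuK convergence} for every $j\ge 1$.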
By arguing as in the proof of statement (ii) we find $u\in\mathcal{X}_{\sigma'}(\Sigma_K,s/32,\infty)$ for every $\sigma'>\sigma$ such that, for every $j$ large enough,
$\Sigma_K(u,s/32,4\,s_j)=\Sigma_{H_j}(u_j,s/32,4\,s_j)$, and hence, by \eqref{claim is spherical graph fine},
\begin{equation}
\label{lambdaext bd easy corresponds final}
\mbox{$V$ corresponds to $\Sigma_K(u,s/32,\infty)$ on $\mathbb{R}^{n+1}\setminus B_{s/32}$}\,,
\end{equation}
which is \eqref{mesoscale thesis graphicality} with $S_*=+\infty$. To prove \eqref{decay deficit for exterior minimal surfaces}, we notice that if $r\in(s_j,s_{j+1})$ for some $j\ge1$, then, setting $\tau=(1/2)^\a$ (i.e., $\a=\log_{1/2}(\tau)\in(0,1)$) and noticing that $r/s\le 2^{j+1-3}$, by \eqref{is decreasing} and \eqref{fine deltaj} we have
\begin{eqnarray}\nonumber
\delta(r)\!\!\!&\le&\!\!\!\delta(s_j)\le \tau^j\,\delta(s/8)=2^{-j\,\a}\,\delta(s/8)=
4^{-\a}\,2^{-(j-2)\a}\,\delta(s/8)
\\\label{ex cont}
\!\!\!&\le&\!\!\! C(n)\,(s/r)^\a\,\delta(s/8)\,,\,\,\,\,\,
\end{eqnarray}
where in the last inequality \eqref{positive} was used again; this proves \eqref{decay deficit for exterior minimal surfaces}. To prove \eqref{decay flatness for exterior minimal surfaces}, we recall that $\omega_K(y)={\rm arctan}(|\nu_K\cdot\hat{y}|/|\mathbf{p}_K\,\hat{y}|)$, provided ${\rm arctan}$ is extended to $\mathbb{R}\cup\{\pm\infty\}$, and where $\hat{y}=y/|y|$, $y\ne 0$. Now, by \eqref{lambdaext bd easy corresponds final},
\[
y=|y|\,\frac{\mathbf{p}_K\,\hat{y}+u(\mathbf{p}_K\,\hat{y},|y|)\,\nu_K}{\sqrt{1+u(\mathbf{p}_K\,\hat{y},|y|)^2}}\,,\qquad\forall y\in({\rm spt}\,V)\setminus B_{s/32}\,,
\]
so that $|\mathbf{p}_K\,\hat{y}|\ge 1/2$ for $y\in({\rm spt}\,V)\setminus B_{s/32}$; therefore, by \eqref{nuK convergence}, up to further decreasing the value of $\varepsilon_0$, and recalling $\delta(s/8)\le\varepsilon_0$, we conclude that
\begin{equation}
\label{nuK uniform projections}
|\mathbf{p}_{H_j}\,\hat{y}|\ge 1/3\,,\qquad\forall y\in({\rm spt} V)\setminus B_{s/32}\,,
\end{equation}
for every $j\in\mathbb{N}\cup\{+\infty\}$ (if we set $H_\infty=K$). By \eqref{nuK uniform projections} we easily find
\[
|\omega_K(y)-\omega_{H_j}(y)|\le C\,|\nu_{H_j}-\nu_K|\,,\qquad\forall y\in({\rm spt} V)\setminus B_{s/32}\,,\forall j\ge 1\,,
\]
from which we deduce that, if $j\ge 1$ and $r\in(s_j,s_{j+1})$, then
\begin{eqnarray*}
\frac1{r^n}\,\int_{A_r^{2\,r}}\omega_K^2\,d\|V\|\!\!\!&\le&\!\!\!C(n)\,\Big\{\frac1{s_j^n}\,\int_{A_{s_j}^{s_{j+1}}}\omega_K^2\,d\|V\|+
\frac1{s_{j+1}^n}\,\int_{A_{s_{j+1}}^{s_{j+2}}}\omega_K^2\,d\|V\|\Big\}
\\
\!\!\!&\le&\!\!\!C(n)\,\Big\{\frac1{s_j^n}\,\int_{A_{s_j}^{s_{j+1}}}\omega_{H_j}^2\,d\|V\|+
\frac1{s_{j+1}^n}\,\int_{A_{s_{j+1}}^{s_{j+2}}}\omega_{H_{j+1}}^2\,d\|V\|\Big\}
\\
\!\!\!&&\!\!\!+C(n)\,\Gamma\,\big\{|\nu_K-\nu_{H_j}|^2+|\nu_K-\nu_{H_{j+1}}|^2\big\}\,,
\end{eqnarray*}
where \eqref{bounds needed for proof meso} was used to bound $\|V\|(A_\rho^{2\,\rho})\le \Gamma\,(2\,\rho)^n$ with $\rho=s_j,s_{j+1}\in(R,1/\Lambda)$.
By \eqref{claim is spherical graph fine} we can exploit \eqref{equuivalence between u square and omega square} on the first two integrals, so that taking \eqref{nuK convergence} into account we find that, if $j\ge 1$ and $r\in(s_j,s_{j+1})$, then
$r^{-n}\,\int_{A_r^{2\,r}}\omega_K^2\,d\|V\|\le C(n)\{T_j+T_{j+1}\big\}+C(n)\,\Gamma\,\tau^j\,\delta(s/8)
\le C(n)\,(1+\Gamma)\,\tau^j\,\delta(s/8)$
where in the last inequality we have used \eqref{fine Tj}. Since $\tau^j\le C(n)\,(s/r)^\a$, we conclude the proof of \eqref{decay flatness for exterior minimal surfaces}, and thus of the theorem.
\end{proof}
\section{Application of quantitative isoperimetry}\label{section existence and quantitative isoperimetry} Here we apply quantitative isoperimetry to prove Theorem \ref{thm main psi}-(i) and parts of Theorem \ref{thm main psi}-(iv).
\begin{theorem}\label{thm existence and uniform min}
If $W\subset\mathbb{R}^{n+1}$ is compact, $v>0$, then ${\rm Min}[\psi_W(v)]\ne\emptyset$. Moreover, depending on $n$ and $W$ only, there are $v_0$, $C_0$, $\Lambda_0$ positive, $s_0\in(0,1)$, and $R_0(v)$ with $R_0(v)\to 0^+$ and $R_0(v)\,v^{1/(n+1)}\to\infty$ as $v\to\infty$, such that, if $v> v_0$ and $E_v$ is a minimizer of $\psi_W(v)$, then:
\noindent {\bf (i):} $E_v$ is a {\bf $(\Lambda_0/v^{1/(n+1)},s_0\,v^{1/(n+1)})$-perimeter minimizer with free boundary in $\Omega$}, that is
\begin{equation}
\label{uniform lambda minimality}
P(E_v; \Omega\cap B_r(x))\le P(F;\Omega\cap B_r(x))+\frac{\Lambda_0}{v^{1/(n+1)}}\,\big|E_v\Delta F\big|\,,
\end{equation}
for every $F\subset\Omega=\mathbb{R}^{n+1}\setminus W$ with $E_v\Delta F\subset\subset B_r(x)$ and $r<s_0\,v^{1/(n+1)}$;
\noindent {\bf (ii):} $E_v$ determines $x\in\mathbb{R}^{n+1}$ such that
\begin{equation}\label{isop estimate 2}
|E_v\Delta B^{(v)}(x)| \le C_0\,v^{-1+1/[2(n+1)]}\,;
\end{equation}
if $\mathcal{R}(W)>0$, then $E_v$ also determines $u\in C^\infty(\partial B^{(1)})$ such that
\begin{eqnarray}
\label{x and u of Ev take 2}
&&(\partial E_v)\setminus B_{R_0\,v^{1/(n+1)}}
\\\nonumber
&&=\Big\{y+ v^{1/(n+1)}u\Big(\frac{y-x}{v^{1/(n+1)}}\Big)\,\nu_{B^{(v)}(x)}(y):y\in\partial B^{(v)}(x)\Big\}\setminus B_{R_0\,v^{1/(n+1)}}\,;
\end{eqnarray}
\noindent {\bf (iii):} if $\mathcal{R}(W)>0$ and $x$ and $u$ depend on $E_v$ as in \eqref{isop estimate 2} and \eqref{x and u of Ev take 2}, then
\begin{equation}\label{limsupmax goes to zero take 2}
\lim_{v\to\infty}\sup_{E_v\in{\rm Min}[\psi_W(v)]}\!\!\!\max\big\{\big||x|\,v^{-1/(n+1)}-\omega_{n+1}^{-1/(n+1)}\big|\,,
\|u\|_{C^1(\partial B^{(1)})}\big\}=0\,.
\end{equation}
\end{theorem}
\begin{remark}[Improved convergence]\label{remark improved convergence}
{\rm We will repeatedly use the following fact (see, e.g. \cite{FigalliMaggiARMA,F2M3,CicaleseL.ardi,CiLeMaIC1}): {\it If $\Omega$ is an open set, $\Lambda\ge0$, $s>0$, if $\{F_j\}_j$ are {\bf $(\Lambda,s)$-perimeter minimizers in $\Omega$}, i.e. if it holds that
\begin{equation}
\label{in comparing}
P(F_j;B_r(x))\le P(G_j; B_r(x))+\Lambda\,|F_j\Delta G_j|\,,
\end{equation}
whenever $G_j\Delta F_j\subset\subset B_r(x)\subset\subset \Omega$ and $r<s$, and if $F$ is an open set with smooth boundary in $\Omega$ such that $F_j\to F$ in $L^1_{{\rm loc}}(\Omega)$ as $j\to\infty$, then for every $\Omega'\subset\subset \Omega$ there is $j(\Omega')$ such that
\[
(\partial F_j)\cap\Omega'= \Big\{y+u_j(y)\,\nu_F(y):y\in \Omega\cap\partial F\Big\}\cap\Omega'\,,\qquad\forall j\ge j(\Omega')\,,
\]
for a sequence $\{u_j\}_j\subset C^1(\Omega\cap\partial F)$ with $\|u_j\|_{C^1(\Omega\cap\partial F)}\to 0$.}
Compare the terminology used in \eqref{uniform lambda minimality} and \eqref{in comparing}: when we add ``with free boundary'', the ``localizing balls'' $B_r(x)$ are not required to be compactly contained in $\Omega$, and the perimeters are computed in $B_r(x)\cap\Omega$.
}
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm existence and uniform min}] {\bf Step one:} We prove ${\rm Min}[\psi_W(v)]\ne\emptyset$ for all $v>0$. Since $W$ is compact, $B^{(v)}(x)\subset\subset\Omega$ for $|x|$ large. Hence there is $\{E_j\}_j$ with
\begin{equation}
\label{minimizing sequence}
E_j\subset\Omega\,,\,\,\, |E_j|=v\,,\,\,\,P(E_j;\Omega)\le \min\Big\{P(B^{(v)}),P(F;\Omega)\Big\}+(1/j)\,,
\end{equation}
for every $F\subset\Omega$ with $|F|=v$. Hence, up to extracting subsequences, $E_j\to E$ in $L^1_{\rm loc}(\mathbb{R}^{n+1})$ with $P(E;\Omega) \leq \varliminf_{j\to \infty}P(E_j;\Omega)$, where $E\subset\Omega$ and $|E|\le v$. We now make three remarks concerning $E$:
\noindent {\bf (a):} {\it If $\{\Omega_i\}_{i\in I}$ are the connected components of $\Omega$, then $\Omega\cap\partial^*E=\emptyset$ if and only if $E = \bigcup_{i\in I_0} \Omega_i$ for some $I_0\subset I$}. Indeed, $\Omega \cap \partial^\ast E=\emptyset$ implies $\mathrm{cl}\,(\partial^*E)\cap \Omega =\partial E\cap \Omega$, hence $\partial E\subset\partial\Omega$ and $E = \bigcup_{i\in I_0} \Omega_i$. The converse is immediate.
\noindent {\bf (b):} {\it If $\Omega\cap\partial^*E\ne\emptyset$, then we can construct a system of ``volume-fixing variations'' for $\{E_j\}_j$}. Indeed, if $\Omega\cap\partial^*E\ne\emptyset$, then there are $B_{S_0}(x_0)\subset\subset\Omega$ with $P(E;\partial B_{S_0}(x_0))=0$ and a vector field $X\in C^\infty_c(B_{S_0}(x_0);\mathbb{R}^{n+1})$ such that $\int_E{\rm div}\,\,X=1$. By \cite[Theorem 29.14]{maggiBOOK}, there are constants $C_0,c_0>0$, depending on $E$ itself, with the following property: whenever $|(F\Delta E)\cap B_{S_0}(x_0)|<c_0$, then there is a smooth function $\Phi^F:\mathbb{R}^{n+1}\times(-c_0,c_0)\to\mathbb{R}^{n+1}$ such that, for each $|t|<c_0$, the map $\Phi_t^F=\Phi^F(\cdot,t)$ is a smooth diffeomorphism with $\{\Phi_t^F\ne{\rm id}\,\}\subset\subset B_{S_0}(x_0)$, $|\Phi_t^F(F)|=|F|+t$, and $P(\Phi_t^F(F);B_{S_0}(x_0))\le(1+C_0\,|t|)\,P(F;B_{S_0}(x_0))$. For $j$ large enough, we evidently have $|(E_j\Delta E)\cap B_{S_0}(x_0)|<c_0$, and thus we can construct smooth functions $\Phi^j:\mathbb{R}^{n+1}\times(-c_0,c_0)\to\mathbb{R}^{n+1}$ such that, for each $|t|<c_0$, the map $\Phi_t^j=\Phi^j(\cdot,t)$ is a smooth diffeomorphism with $\{\Phi_t^j\ne{\rm id}\,\}\subset\subset B_{S_0}(x_0)$, $|\Phi_t^j(E_j)|=|E_j|+t$, and $P(\Phi_t^j(E_j);B_{S_0}(x_0))\le(1+C_0\,|t|)\, P(E_j;B_{S_0}(x_0))$.
\noindent {\bf (c):} {\it If $\Omega\cap\partial^*E\ne\emptyset$, then $E$ is bounded}. Since $|E|\le v<\infty$, it is enough to prove that $\Omega\cap\partial^*E$ is bounded. In turn, taking $x_0\in\Omega\cap\partial^*E$, and since $W$ is bounded and $|E|<\infty$, the boundedness of $\Omega\cap\partial^*E$ descends immediately by the following density estimate: there is $r_1>0$ such that
\begin{equation}
\label{lower bound volume of E}
\begin{split}
&|E\cap B_r(x)|\ge c(n)\,r^{n+1}
\\
&\forall\,\,x\in\Omega\cap\partial^*E\,,\,\, r<r_1\,,\,\, B_r(x)\subset\subset \mathbb{R}^{n+1}\setminus \big(I_{r_1}(W)\cup B_{S_0}(x_0)\big)\,.
\end{split}
\end{equation}
To prove \eqref{lower bound volume of E}, let $r_1>0$ be such that $|B_{r_1}|<c_0$, let $x$ and $r$ be as in \eqref{lower bound volume of E}, and set $F_j=(\Phi_t^j(E_j)\cap B_{S_0}(x_0))\cup[E_j\setminus(B_r(x)\cup B_{S_0}(x_0))]$ for $t=|E_j\cap B_r(x)|$ (which is an admissible value of $t$ by $|B_{r_1}|<c_0$). In this way, $|F_j|=|E_j|=v$, and thus we can exploit \eqref{minimizing sequence} with $F=F_j$. A standard argument (see, e.g. \cite[Theorem 21.11]{maggiBOOK}) leads then to \eqref{lower bound volume of E}.
Now, since $\partial\Omega\subset W$ is bounded, every connected component of $\Omega$ with finite volume is bounded. Thus, by (a), (b) and (c) above, there is $R>0$ such that $W\cup E\,\subset\subset \,B_R$. Since $|E\cap [B_{R+1}\setminus B_R]|=0$, we can pick $T\in(R,R+1)$ such that $\H^n(E_j\cap\partial B_T)\to 0$ and $P(E_j\setminus B_T)=\H^n(E_j\cap\partial B_T)+P(E_j;\Omega\setminus B_T)$, and consider the sets $F_j=(E_j\cap B_T)\cup B_{\rho_j}(y)$ corresponding to $\rho_j=(|E_j\setminus B_T|/\omega_{n+1})^{1/(n+1)}$ and to $y\in\mathbb{R}^{n+1}$ independent of $j$ and such that $|y|>\rho_j+T$ (notice that $\sup_j\rho_j\le C(n)\,v^{1/(n+1)}$). Since $|F_j|=|E_j|=v$, \eqref{minimizing sequence} with $F=F_j$ and $P(B_{\rho_j})\le P(E_j\setminus B_T)$ give
\begin{eqnarray*}
P(E_j;\Omega)-(1/j)\!\!&\le& \!\!\!\!P(F_j;\Omega)\le P(E_j;\Omega\cap B_T)+\H^n(E_j\cap\partial B_T)+P(B_{\rho_j})
\\
&\le&\!\!\!\!P(E_j;\Omega)+2\,\H^n(E_j\cap\partial B_T)\,,
\end{eqnarray*}
so that, by the choice of $T$, $\{F_j\}_j$ is a minimizing sequence for $\psi_W(v)$, with $F_j\subset B_{T^*}$ and $T^*$ independent of $j$. We conclude by the Direct Method.
\noindent{\bf Step two:} We prove \eqref{isop estimate 2}. If $E_v$ is a minimizer of $\psi_W(v)$ and $R>0$ is such that $W \subset\subset B_R$, then by $P(E_v;\Omega)\le P(B^{(v)})$ we have, for $v>v_0$, with $v_0$ and $C_0$ depending on $n$ and $W$ only,
\begin{eqnarray}
\label{cookie 1}
P(E_v\setminus B_{R})&\le&P(E_v;\Omega) + (n+1)\,\omega_{n+1}\,R^n\le P(B^{(v)})+C_0
\\\nonumber
&\le&(1+(C_0/v))\,P(B^{(|E_v\setminus B_{R}|)}) + C_0\,,
\end{eqnarray}
where we have used that, if $v>2\,b>0$ and $\a=n/(n+1)$, then
\[
P(B^{(v)})\,P(B^{(v-b)})^{-1}-1=(v/(v-b))^\a-1\le \a\,b/(v-b)\le 2\,\a\,b\,v^{-1}\,.
\]
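The displayed bound is an instance of the concavity inequality $(1+t)^{\a}\le 1+\a\,t$, valid for $t\ge 0$ and $\a\in(0,1)$, applied with $t=b/(v-b)$:
\[
\Big(\frac{v}{v-b}\Big)^{\a}-1=\Big(1+\frac{b}{v-b}\Big)^{\a}-1\le\a\,\frac{b}{v-b}\,,
\]
while the final estimate $\a\,b/(v-b)\le 2\,\a\,b/v$ follows from $v-b\ge v/2$, which holds since $v>2\,b$.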
By combining \eqref{quantitative euclidean isop} and \eqref{cookie 1} we conclude that, for some $x\in\mathbb{R}^{n+1}$,
\[
c(n)\,\Big(\frac{|(E_v\setminus B_{R})\Delta B^{(|E_v\setminus B_{R}|)}(x)|}{|E_v \setminus B_{R}|}\Big)^2
\le \frac{P(E_v\setminus B_{R})}{P(B^{(|E_v\setminus B_{R}|)})}-1\le \frac{C_0}{v^{n/(n+1)}}\,,
\]
provided $v>v_0$. Hence we deduce \eqref{isop estimate 2} from
\begin{eqnarray*}
&&|E_v \Delta B^{(v)}(x)|=2\,|E_v \setminus B^{(v)}(x)|\le C_0+2\,\big|(E_v\setminus B_R) \setminus B^{(v)}(x)\big|
\\
&&\le C_0+2\,\big|(E_v\setminus B_R) \setminus B^{(|E_v\setminus B_{R}|)}(x)\big|\le C_0+|E_v\setminus B_R|\,C_0\,v^{-n/[2(n+1)]}\,.
\end{eqnarray*}
\noindent {\bf Step three:} We prove the existence of $v_0$, $\Lambda_0$, and $s_0$ such that every $E_v\in{\rm Min}[\psi_W(v)]$ with $v>v_0$ satisfies \eqref{uniform lambda minimality}. Arguing by contradiction, we assume the existence of $v_j\to\infty$, $E_j\in{\rm Min}[\psi_W(v_j)]$, $F_j\subset\Omega$ with $|F_j\Delta E_j|>0$ and $F_j\Delta E_j\subset\subset B_{r_j}(x_j)$ for some $x_j\in\mathbb{R}^{n+1}$ and $r_j=v_j^{1/(n+1)}/j$, such that
\[
P(E_j;\Omega\cap B_{r_j}(x_j))\ge P(F_j;\Omega\cap B_{r_j}(x_j))+j\,v_j^{-1/(n+1)}\,\big|E_j\Delta F_j\big|\,.
\]
Denoting by $E_j^*$, $F_j^*$ and $\Omega_j$ the sets obtained by scaling $E_j$, $F_j$ and $\Omega$ by a factor $v_j^{-1/(n+1)}$, we find that $F_j^*\Delta E_j^*\subset\subset B_{1/j}(y_j)$ for some $y_j\in\mathbb{R}^{n+1}$, and
\begin{equation}
\label{unifor min contra scaled}
P(E_j^*;\Omega_j\cap B_{1/j}(y_j))\ge P(F_j^*;\Omega_j\cap B_{1/j}(y_j))+j\,\big|E_j^*\Delta F_j^*\big|\,.
\end{equation}
By \eqref{isop estimate 2} there are $z_j\in\mathbb{R}^{n+1}$ such that $|E_j^*\Delta B^{(1)}(z_j)|\to 0$. We can therefore use the volume-fixing variations of $B^{(1)}$ to find diffeomorphisms $\Phi^j_t:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ and constants $c(n)$ and $C(n)$ such that, for every $|t|<c(n)$, one has $\{\Phi^j_t\ne{\rm id}\,\}\subset\subset U_j$ for some open ball $U_j$ with $U_j\subset\subset \Omega_j\setminus B_{1/j}(y_j)$,
$|\Phi^j_t(E_j^*)\cap U_j|=|E_j^*\cap U_j|+t$, and $P(\Phi^j_t(E_j^*);U_j)\le (1+C(n)\,|t|)\,P(E_j^*;U_j)$. Since $F_j^*\Delta E_j^*\subset\subset B_{1/j}(y_j)$ implies $||F_j^*|-|E_j^*||<c(n)$ for $j$ large, if $t=|E_j^*|-|F_j^*|$, then $G_j^*=\Phi^j_t(F_j^*)$ is such that $|G_j^*|=|E_j^*|$, and by $E_j\in{\rm Min}[\psi_W(v_j)]$,
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!P(E_j^*;\Omega_j)\le P(G_j^*;\Omega_j)
\le P\big(E_j^*;\Omega_j\setminus(U_j\cup B_{1/j}(y_j))\big)
\\
&&+P(F_j^*;\Omega_j\cap B_{1/j}(y_j))+P(E_j^*;U_j)+C(n)\,P(E_j^*;U_j)\,\big|E_j^*\Delta F_j^*\big|\,.
\end{eqnarray*}
Taking into account $P(E^*_j;U_j)\le \psi_W(v_j)/v_j^{n/(n+1)}\le C(n)$, we thus find
\begin{eqnarray*}
P(E_j^*;\Omega_j\cap B_{1/j}(y_j))\le P(F_j^*;\Omega_j\cap B_{1/j}(y_j))+C(n)\,\big|E_j^*\Delta F_j^*\big|\,,
\end{eqnarray*}
which, by \eqref{unifor min contra scaled}, gives $j\,\big|E_j^*\Delta F_j^*\big|\le C(n)\,\big|E_j^*\Delta F_j^*\big|$. Since $|E_j^*\Delta F_j^*|>0$, this is a contradiction for $j$ large enough.
\noindent {\bf Step four:} We now prove that, if $\mathcal{R}(W)>0$, then
\begin{equation}\label{xv equation}
\lim_{v\to\infty}\,\sup_{E_v\in{\rm Min}[\psi_W(v)]}\,\big||x|\,v^{-1/(n+1)}-\omega_{n+1}^{-1/(n+1)}\big|=0\,,
\end{equation}
where $x$ is related to $E_v$ by \eqref{isop estimate 2}. In proving \eqref{xv equation} we will use the assumption $\mathcal{R}(W)>0$ and the energy upper bound
\begin{equation}
\label{bello}
\varlimsup_{v\to \infty} \psi_W(v) - P(B^{(v)})\le-\mathcal{R}(W)\,.
\end{equation}
A proof of \eqref{bello} is given in step one of the proof of Theorem \ref{thm main psi}, see section \ref{section resolution for exterior}; in turn, that proof is solely based on the results from section \ref{section isoperimetric residues}, where no part of Theorem \ref{thm existence and uniform min} (not even the existence of minimizers in $\psi_W(v)$) is ever used. This said, when $|W|>0$, and thus $\mathcal{S}(W)>0$, one can replace \eqref{bello} in the proof of \eqref{xv equation} by the simpler upper bound
\begin{equation}\label{strict}
\varlimsup_{v\to \infty} \psi_W(v) - P(B^{(v)})\le-\mathcal{S}(W)\,,
\end{equation}
where, we recall, $\mathcal{S}(W)=\sup\{\H^n(W\cap\Pi):\mbox{$\Pi$ is a hyperplane in $\mathbb{R}^{n+1}$}\}$. To prove \eqref{strict}, given $\Pi$, we construct competitors for $\psi_W(v)$ by intersecting $\Omega$ with balls $B^{(v')}(x_v)$ with $v'>v$ and $x_v$ such that
$|B^{(v')}(x_v)\setminus W|=v$ and $\H^n(W\cap\partial B^{(v')}(x_v))\to\H^n(W\cap\Pi)$ as $v\to\infty$. Hence,
$\varlimsup_{v\to\infty}\psi_W(v)-P(B^{(v)})\le-\H^n(W\cap\Pi)$,
thus giving \eqref{strict}. The proof of \eqref{bello} is identical in spirit to that of \eqref{strict}, with the difference that, in order to glue a large ball to $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, we will need to establish the decay of $\partial F$ towards a hyperplane parallel to $\nu^\perp$ to the high degree of precision expressed in \eqref{asymptotics of F}. We now prove \eqref{xv equation}. Arguing by contradiction, consider $v_j\to\infty$, $E_j\in{\rm Min}[\psi_W(v_j)]$, and $x_j\in\mathbb{R}^{n+1}$ with $\inf_{x\in\mathbb{R}^{n+1}}|E_j\Delta B^{(v_j)}(x)|=|E_j\Delta B^{(v_j)}(x_j)|$, such that
\begin{equation}\label{bad assumption on xv}
\varliminf_{j\to\infty}\big||x_j|\,v_j^{-1/(n+1)}-\omega_{n+1}^{-1/(n+1)}\big|>0\,,
\end{equation}
and set $\l_j=v_j^{-1/(n+1)}$, $E_j^*=\l_j\,(E_j-x_j)$, $W_j^*=\l_j\,(W-x_j)$, and $\Omega_j^*=\l_j\,(\Omega-x_j)$. By \eqref{uniform lambda minimality}, each $E_j^*$ is a $(\Lambda_0,s_0)$-perimeter minimizer with free boundary in $\Omega_j^*$. By \eqref{isop estimate 2} and the defining property of $x_j$, $E_j^*\to B^{(1)}$ in $L^1(\mathbb{R}^{n+1})$. Moreover, ${\rm diam}\,(W_j^*)\to 0$ and, by \eqref{bad assumption on xv},
\begin{equation}
\label{because}
\varliminf_{j\to\infty}{\rm dist}\big(W_j^*,\partial B^{(1)}\big)>0\,.
\end{equation}
Thus there is $z_0\not\in\partial B^{(1)}$ such that, for every $\rho<{\rm dist}(z_0,\partial B^{(1)})$, there is $j(\rho)$ such that $\{E_j^*\}_{j\ge j(\rho)}$ is a sequence of $(\Lambda_0,s_0)$-perimeter minimizers in $\mathbb{R}^{n+1}\setminus B_{\rho/2}(z_0)$. By Remark \ref{remark improved convergence}, up to increasing $j(\rho)$, $(\partial E_j^*)\setminus B_{\rho}(z_0)$ is contained in the normal graph over $\partial B^{(1)}$ of $u_j$ with $\|u_j\|_{C^1(\partial B^{(1)})}\to 0$; in particular, by \eqref{because}, $(\partial E_j^*)\setminus B_{\rho}(z_0)$ is disjoint from $W_j^*$. By the constant mean curvature condition satisfied by $\Omega\cap\partial E_j^*$, and by Alexandrov's theorem \cite{alexandrov}, $(\partial E_j^*)\setminus B_{\rho}(z_0)$ is a sphere $M_j^*$ for $j\ge j(\rho)$. Let $B_j^*$ be the ball bounded by $M_j^*$. Since $M_j^*\cap W_j^*=\emptyset$, we have either one of the following:
\noindent {\bf Case one:} $W_j^*\subset B_j^*$. We have $\partial[B_j^*\cup E_j^*]\subset M_j^*\cup[(\partial E_j^*)\setminus \mathrm{cl}\,(B_j^*)]\subset (\partial E_j^*)\setminus W_j^*$, so that, by $|B_j^*\cup E_j^*|\ge |E_j^*|+|W_j^*|\ge 1$, we find $P(E_j^*;\Omega_j^*)\ge P(B_j^*\cup E_j^*)\ge P(B^{(1)})$, that is, $\psi_W(v_j)\ge P(B^{(1)})$, against \eqref{bello}.
\noindent {\bf Case two:} $W_j^*\cap B_j^*=\emptyset$. In this case, $E_j^*=B_j^*\cup G_j^*$, where $G_j^*$ is the union of the connected components of $E_j^*$ whose boundaries have non-empty intersection with $W_j^*$: in other words, we are claiming that $B_j^*$ is the only connected component of $E_j^*$ whose closure is disjoint from $W_j^*$. Indeed, if this were not the case, we could recombine all the connected components of $E_j^*$ with closure disjoint from $W_j^*$ into a single ball of same total volume, centered far away from $W_j^*$, in such a way to strictly decrease $P(E_j^*;\Omega_j^*)$, against $E_j\in{\rm Min}[\psi_W(v_j)]$. Let us now set
$G_j=x_j+v_j^{1/(n+1)}\,G_j^*$ and $U_j=x_j+v_j^{1/(n+1)}\,B_j^*$, so that $E_j=G_j\cup U_j$ and ${\rm dist}(G_j,U_j)>0$.
If we start sliding $U_j$ from infinity towards $G_j\cup W$ along arbitrary directions, then at least one of the resulting ``contact points'' $z_j$ belongs to $\Omega\cap\partial G_j$: if this were not the case, then $G_j$ would be contained in the convex envelope of $W$, so that $|U_j|=|E_j|-|G_j|\ge v_j-C(W)$, and thus, by $\psi_W(v_j)=P(E_j;\Omega)\ge P(U_j)$ and by $P(U_j)\ge P(B^{(v_j-C(W))})\ge P(B^{(v_j)})-C(W)\,v_j^{-1/(n+1)}$, we would obtain a contradiction with \eqref{bello} for $j$ large.
By construction, there is a half-space $H_j$ such that $G_j\subset H_j$, $z_j\in(\partial G_j)\cap(\partial H_j)$, and $G_j$ is a perimeter minimizer in $B_r(z_j)$ for some small $r>0$. By the strong maximum principle, see, e.g. \cite[Lemma 2.13]{dephilippismaggiCAP-ARMA}, $G_j$ has $H_j-z_j$ as its unique blowup at $z_j$. By De Giorgi's regularity theorem, see e.g. \cite[Part III]{maggiBOOK}, $G_j$ is an open set with smooth boundary in a neighborhood of $z_j$. Therefore, if we denote by $U_j'$ the translation of $U_j$ constructed in the sliding argument, then $E_j'=G_j\cup U_j'\in{\rm Min}[\psi_W(v_j)]$ and, in a neighborhood of $z_j$, $E_j'$ is the union of two disjoint sets with smooth boundary which touch tangentially at $z_j$. In particular, $|E_j'\cap B_r(z_j)|/|B_r|\to 1$ as $r\to 0^+$, against the volume density estimates implied by \eqref{uniform lambda minimality}, see, e.g. \cite[Theorem 21.11]{maggiBOOK}.
\noindent {\bf Step five:} We finally show the existence of $v_0$ and $R_0(v)$ with $R_0(v)\to 0^+$ and $R_0(v)\,v^{1/(n+1)}\to \infty$, such that each $E_v\in{\rm Min}[\psi_W(v)]$ with $v>v_0$ determines $x$ and $u\in C^{\infty}(\partial B^{(1)})$ such that \eqref{x and u of Ev take 2} holds and $\sup_{E_v} \|u \|_{C^1(\partial B^{(1)})}\to 0$ as $v\to\infty$. To this end, let us consider $v_j\to\infty$, $E_j\in{\rm Min}[\psi_W(v_j)]$, and define $x_j$, $E_j^*$ and $W_j^*$ as in step four. Thanks to \eqref{xv equation}, there is $z_0\in\partial B^{(1)}$ s.t. ${\rm dist}(z_0,W_j^*)\to 0$. In particular, for every $\rho>0$, we can find $j(\rho)\in\mathbb{N}$ such that if $j\ge j(\rho)$, then $E_j^*$ is a $(\Lambda_0,s_0)$-perimeter minimizer in $\mathbb{R}^{n+1}\setminus B_\rho(z_0)$, with $E_j^*\to B^{(1)}$. By Remark \ref{remark improved convergence}, there are $u_j\in C^1(\partial B^{(1)})$ such that
\[
(\partial E_j^*)\setminus B_{2\,\rho}(z_0)=\big\{y+u_j(y)\,\nu_{B^{(1)}}(y):y\in\partial B^{(1)}\big\}\setminus B_{2\,\rho}(z_0)\,,\,\,\forall j\ge j(\rho)\,,
\]
and $\|u_j\|_{C^1(\partial B^{(1)})}\to 0$. By the arbitrariness of $\rho$ and by a contradiction argument, \eqref{x and u of Ev take 2} holds with $R_0(v)\to 0^+$ such that $R_0(v)\,v^{1/(n+1)}\to\infty$ as $v\to\infty$, and with the uniform decay of $\|u \|_{C^1(\partial B^{(1)})}$.
\end{proof}
\section{Properties of isoperimetric residues}\label{section isoperimetric residues} Here we prove Theorem \ref{thm main of residue}. It will be convenient to introduce some notation for cylinders and slabs in $\mathbb{R}^{n+1}$: precisely, given $r>0$, $\nu\in\SS^n$ and $I\subset\mathbb{R}$, and setting $\mathbf{p}_{\nu^\perp}(x)=x-(x\cdot\nu)\,\nu$ ($x\in\mathbb{R}^{n+1}$), we let
\begin{eqnarray}\nonumber
\textbf{D}_r^\nu&=&\big\{x\in\mathbb{R}^{n+1}:|\mathbf{p}_{\nu^\perp}x|<r\,,x\cdot\nu=0\big\}\,,
\\\nonumber
\textbf{C}_r^\nu&=&\big\{x\in\mathbb{R}^{n+1}:|\mathbf{p}_{\nu^\perp}x|<r\big\}\,,
\\\label{cylinders and slabs}
\textbf{C}_{r,I}^\nu&=&\big\{x\in\mathbb{R}^{n+1}:|\mathbf{p}_{\nu^\perp}x|<r\,,x\cdot\nu\in I\big\}\,,
\\\nonumber
\partial_\ell \textbf{C}_{r,I}^\nu&=&\big\{x\in\mathbb{R}^{n+1}:|\mathbf{p}_{\nu^\perp}x|=r\,,x\cdot\nu\in I\big\}\,,
\\\nonumber
\textbf{S}_{I}^\nu &=& \big\{x\in\mathbb{R}^{n+1}: x \cdot \nu \in I \big\}\,.
\end{eqnarray}
In each case, given $x\in\mathbb{R}^{n+1}$, we also set $\textbf{D}_r^\nu(x)=x+\textbf{D}_r^\nu$, $\textbf{C}_r^\nu(x)=x+\textbf{C}_r^\nu$, etc. We begin with the following proposition, used in the proofs of Theorem \ref{thm main of residue} and Theorem \ref{thm main psi}, and based on \cite[Proposition 1 and Proposition 3]{Scho83}.
\begin{proposition}\label{prop schoen}
Let $n\ge 2$, $\nu\in\SS^n$, and let $f$ be a Lipschitz solution to the minimal surface equation on $\nu^\perp\setminus\mathrm{cl}\,(\textbf{\textup{D}}_R^\nu)$.
If $n=2$, assume in addition that $M=\{x+f(x)\,\nu:|x|>R\}$ is stable and has natural area growth, i.e.
\begin{eqnarray}
\label{schoen 1}
\int_M\,|\nabla^M\varphi|^2-|A|^2\,\varphi^2\ge0\,,&&\qquad\forall\varphi\in C^1_c(\mathbb{R}^3\setminus B_R)\,,
\\
\label{schoen 2}
\H^2(M\cap B_r)\le C\,r^2\,,&&\qquad\forall r>R\,.
\end{eqnarray}
Then there are $a,b\in\mathbb{R}$ and $c\in\nu^\perp$ such that, for every $|x|>R$,
\begin{eqnarray}
\label{schoen conclusion 1}
&&\big|f(x)-\big(a+b\,|x|^{2-n}+(c\cdot x)\,|x|^{-n}\big)\big|\le C\,|x|^{-n}\,,\,\,(n\ge 3)
\\
\label{schoen conclusion 2}
&&\big|f(x)-\big(a+b\,\log|x|+(c\cdot x)\,|x|^{-2}\big)\big|\le C\,|x|^{-2}\,,\,\, (n=2)
\\
\label{schoen derivatives}
&&\max\Big\{|x|^{n-1}\,|\nabla f(x)|,|x|^n\,|\nabla^2f(x)|:|x|>R\Big\}\le C\,,\,\,(\mbox{every $n$})\,.
\end{eqnarray}
\end{proposition}
\begin{proof} If $n\ge 3$, the fact that $\nabla f$ is bounded allows one to represent $f$ as the convolution with a singular kernel which, by a classical result of Littman, Stampacchia, and Weinberger \cite{LSWannSNS}, is comparable to the Green's function of $\mathbb{R}^n$; \eqref{schoen conclusion 1} is then deduced starting from that representation formula. For more details, see \cite[Proposition 3]{Scho83}. In the case $n=2$, by \eqref{schoen 1} and \eqref{schoen 2}, we can exploit a classical ``logarithmic cut-off argument'' to see that $M$ has finite total curvature, i.e. $\int_M |K| \,d\H^2< \infty$, where $K$ is the Gaussian curvature of $M$. As a consequence, see, e.g. \cite[Section 1.2]{PerezRos}, the compactification $\overline{M}$ of $M$ is a Riemann surface with boundary, and $M$ is conformally equivalent to $\overline{M}\setminus\{p_1,...,p_m\}$, where $p_i$ are interior points of $\overline{M}$. One can thus conclude by the argument in \cite[Proposition 1]{Scho83} that $M$ has $m$-many ends satisfying the decay \eqref{schoen conclusion 2}, and then that $m=1$ thanks to the fact that $M=\{x+f(x)\,\nu:|x|>R\}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm main of residue}] {\bf Step one:} Given a hyperplane $\Pi$ in $\mathbb{R}^{n+1}$, if $F$ is a half-space with $\partial F=\Pi$ and $\nu$ is a unit normal to $\Pi$, then ${\rm res}_W(F,\nu)=\H^n(W\cap\Pi)$. Therefore the lower bound in \eqref{R larger than S} follows from
\begin{equation}
\label{famous lb}
\mathcal{R}(W)\ge\mathcal{S}(W)=\sup\big\{\H^n(\Pi\cap W):\mbox{$\Pi$ a hyperplane in $\mathbb{R}^{n+1}$}\big\}\,.
\end{equation}
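For completeness, we verify the identity used in step one: if $F$ is such a half-space and $R$ is so large that $W\subset\textbf{\textup{C}}_R^\nu$, then $\partial^*F\cap(\textbf{\textup{C}}_R^\nu\setminus W)=(\Pi\cap\textbf{\textup{C}}_R^\nu)\setminus W$, so that
\[
\omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)=\omega_n\,R^n-\big(\H^n(\Pi\cap\textbf{\textup{C}}_R^\nu)-\H^n(W\cap\Pi)\big)=\H^n(W\cap\Pi)\,,
\]
and letting $R\to\infty$ gives ${\rm res}_W(F,\nu)=\H^n(W\cap\Pi)$.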
\noindent {\bf Step two:} We notice that, if $(F,\nu)\in\mathcal F$, then by \eqref{def Sigma nu 1}, \eqref{def Sigma nu 2}, and the divergence theorem (see, e.g., \cite[Lemma 22.11]{maggiBOOK}), we can define a Radon measure on the open set $\nu^\perp\setminus\mathbf{p}_{\nu^\perp}(W)$ by setting
\begin{equation}
\label{is a radon measure}
\mu(U)=P\big(F;(\mathbf{p}_{\nu^\perp})^{-1}(U)\big)-\H^n(U)\,,\qquad U\subset\nu^\perp\setminus\mathbf{p}_{\nu^\perp}(W)\,.
\end{equation}
In particular, setting $R'=\inf\{\rho: W\subset\textbf{\textup{C}}_\rho^\nu\}$, $\mu(\textbf{\textup{D}}_R^\nu\setminus\mathbf{p}_{\nu^\perp}(W))\ge0$ gives
\[
P(F;\textbf{\textup{C}}_R^\nu\setminus W)\ge\omega_n\,R^n-\H^n(\mathbf{p}_{\nu^\perp}(W))\,,\qquad\forall R>R'\,,
\]
while the identity
\begin{eqnarray*}
\omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)=-\mu(\textbf{\textup{D}}_R^\nu\setminus\textbf{\textup{D}}_{R'}^\nu)+\omega_n\,(R')^n-P(F;\textbf{\textup{C}}_{R'}^\nu\setminus W)
\end{eqnarray*}
(which possibly holds as $-\infty=-\infty$ if $P(F;\textbf{\textup{C}}_{R'}^\nu\setminus W)=+\infty$) gives that
\begin{equation}
\label{perimeter is decreasing}
R\in(R',\infty)\mapsto \omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)\,\,\,
\mbox{is {\it decreasing} on $(R',\infty)$}\,.
\end{equation}
In particular, the limsup defining ${\rm res}_W$ always exists as a limit.
\noindent {\bf Step three:} We prove the existence of $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ and \eqref{local perimeter minimizer}. We first claim that if $\{(F_j,\nu_j)\}_j$ is a maximizing sequence for $\mathcal{R}(W)$, then, in addition to $\mathbf{p}_{\nu_j^\perp}(\partial F_j)=\nu_j^\perp$, one can modify $(F_j,\nu_j)$, preserving optimality in the limit $j\to \infty$, so that (writing $X\stackrel{\L^{n+1}}{\subset}Y$ for $|X\setminus Y|=0$)
\begin{eqnarray}
\label{properties of Fj updated}
&&\partial F_j\subset \textbf{S}_{[A_j,B_j]}^{\nu_j}\,,\,\,\, \textbf{S}_{(-\infty, A_j)}^{\nu_j}\stackrel{\L^{n+1}}{\subset}F_j\,,
\,\,\, \textbf{S}_{(B_j,\infty)}^{\nu_j} \stackrel{\L^{n+1}}{\subset}\mathbb{R}^{n+1}\setminus F_j\,,\hspace{0.5cm}
\\
\label{E containment}
&&\mbox{where}\,\,[A_j,B_j]=\bigcap\big\{(\a,\b):W\subset \textbf{S}_{(\a,\b)}^{\nu_j}\big\}\,.
\end{eqnarray}
Indeed, since $(F_j,\nu_j)\in\mathcal F$, for some $\a_j<\b_j\in\mathbb{R}$ we have
\begin{equation}
\label{properties of Fj}
\partial F_j\subset \textbf{S}_{[\a_j,\beta_j]}^{\nu_j}\,,\qquad \mathbf{p}_{\nu_j^\perp}(\partial F_j)=\nu_j^\perp\,.
\end{equation}
If it were the case that either $\textbf{S}_{(-\infty, \a_j)\cup(\b_j,\infty)}^{\nu_j} \stackrel{\L^{n+1}}{\subset} F_j$ or $\textbf{S}_{(-\infty, \a_j)\cup(\b_j,\infty)}^{\nu_j} \stackrel{\L^{n+1}}{\subset} \mathbb{R}^{n+1}\setminus F_j$, then, by the divergence theorem and by $\mathbf{p}_{\nu_j^\perp}(\partial F_j)=\nu_j^\perp$,
\[
P(F_j;\textbf{\textup{C}}_R^{\nu_j}\cap\Omega)\ge 2\,\big(\omega_n\,R^n-\H^n(\mathbf{p}_{\nu_j^\perp}(W))\big)\,,\qquad\forall R>0\,,
\]
and thus ${\rm res}_{W}(F_j,\nu_j)=-\infty$; in particular, since $\{(F_j,\nu_j)\}_j$ is a maximizing sequence, we would have $\mathcal{R}(W)=-\infty$, against \eqref{famous lb}. This proves the validity (up to switching $F_j$ with $\mathbb{R}^{n+1}\setminus F_j$) of the inclusions
\begin{equation}
\label{properties of Fj updated new}
\textbf{S}_{(-\infty, \a_j)}^{\nu_j} \subset_{\L^{n+1}} F_j\,,
\qquad \textbf{S}_{(\b_j,\infty)}^{\nu_j} \subset_{\L^{n+1}} \mathbb{R}^{n+1}\setminus F_j\,.
\end{equation}
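To spell out how the perimeter lower bound above forces the residue to degenerate (this is just a rewriting of the argument; no new ingredient is used): combining the bound $P(F_j;\textbf{\textup{C}}_R^{\nu_j}\cap\Omega)\ge 2\,(\omega_n\,R^n-\H^n(\mathbf{p}_{\nu_j^\perp}(W)))$ with the definition of the residue gives, for every $R>0$,
\[
\omega_n\,R^n-P(F_j;\textbf{\textup{C}}_R^{\nu_j}\cap\Omega)\le\omega_n\,R^n-2\,\big(\omega_n\,R^n-\H^n(\mathbf{p}_{\nu_j^\perp}(W))\big)=-\omega_n\,R^n+2\,\H^n(\mathbf{p}_{\nu_j^\perp}(W))\,,
\]
and the right-hand side tends to $-\infty$ as $R\to\infty$, whence ${\rm res}_W(F_j,\nu_j)=-\infty$.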
Thanks to \eqref{properties of Fj updated new} (and by exploiting basic set operations on sets of finite perimeter, see, e.g., \cite[Theorem 16.3]{maggiBOOK}), we see that
\begin{eqnarray}\label{truncate up to the obstacle}
&&\mbox{$F_j^*=\big(F_j\cup \textbf{S}_{(-\infty,A_j-1/j)}^{\nu_j}\big)\cap \textbf{S}_{(-\infty,B_j+1/j)}^{\nu_j}$ satisfies}
\\\nonumber
&&(F_j^*,\nu_j)\in\mathcal F\,,\qquad P\big(F_j^*;\textbf{\textup{C}}_R^{\nu_j}\setminus W\big)\le P\big(F_j;\textbf{\textup{C}}_R^{\nu_j}\setminus W\big)\,,\qquad\forall R>0\,;
\end{eqnarray}
in particular, $\{(F_j^*,\nu_j)\}_j$ is also a maximizing sequence for $\mathcal{R}(W)$, and, renaming $F_j^*$ as $F_j$, we may assume that \eqref{properties of Fj updated} holds. By standard compactness theorems there are a set $F$ of locally finite perimeter in $\mathbb{R}^{n+1}$ and $\nu\in\SS^n$ such that, up to subsequences, $F_j\to F$ in $L^1_{{\rm loc}}(\mathbb{R}^{n+1})$ and $\nu_j\to\nu$. If $A\subset\subset\textbf{\textup{C}}_R^\nu\setminus W$ is open, then, for $j$ large enough, $A\subset\subset \textbf{\textup{C}}_R^{\nu_j}\setminus W$, and thus
\begin{equation}
\label{liminf on cylinders}
P(F;\textbf{\textup{C}}_R^\nu\setminus W)=\sup_{A\subset\subset\textbf{\textup{C}}_R^\nu\setminus W}\,P(F;A)\le\varliminf_{j\to\infty}P(F_j;\textbf{\textup{C}}_R^{\nu_j}\setminus W)\,.
\end{equation}
By \eqref{perimeter is decreasing}, $R\mapsto \omega_n\,R^n-P(F_j;\textbf{\textup{C}}_R^{\nu_j}\setminus W)$ is decreasing on $R>R_j=\inf\{\rho:W\subset \textbf{\textup{C}}_\rho^{\nu_j}\}$. By $\sup_jR_j\le C(W)<\infty$ and \eqref{liminf on cylinders} we have
\[
\omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)\ge\varlimsup_{j\to\infty}\omega_n\,R^n-P(F_j;\textbf{\textup{C}}_R^{\nu_j}\setminus W)\ge\varlimsup_{j\to\infty}{\rm res}_W(F_j,\nu_j)\,,
\]
for every $R>C(W)$; in particular, letting $R\to\infty$,
\begin{equation}
\label{max inq}
{\rm res}_W(F,\nu)\ge\varlimsup_{j\to\infty}{\rm res}_W(F_j,\nu_j)=\mathcal{R}(W)\,.
\end{equation}
By $F_j\to F$ in $L^1_{{\rm loc}}(\mathbb{R}^{n+1})$, $\partial F=\mathrm{cl}\,(\partial^*F)$ is contained in the set of accumulation points of sequences $\{x_j\}_j$ with $x_j\in\partial F_j$, so that \eqref{properties of Fj updated} gives
\begin{equation}
\label{properties of F updated}
\partial F\subset \textbf{S}_{[A,B]}^{\nu}\,,\qquad \textbf{S}_{(-\infty, A)}^{\nu} \subset_{\L^{n+1}} F\,,
\qquad \textbf{S}_{(B,\infty)}^{\nu} \subset_{\L^{n+1}}\mathbb{R}^{n+1}\setminus F\,,
\end{equation}
where $[A,B]=\bigcap\{(\a,\b):W\subset\textbf{S}^\nu_{(\a,\b)}\}$. Therefore $(F,\nu)\in\mathcal F$, and thus, by \eqref{max inq}, $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$. We now show that \eqref{max inq} implies \eqref{local perimeter minimizer}, i.e.
\begin{equation}\label{local perimeter minimizer proof}
P(F;\Omega \cap B) \leq P(G;\Omega \cap B)\,,\qquad\mbox{$\forall F\Delta G\subset\subset B$, $B$ a ball}\,.
\end{equation}
Indeed, should \eqref{local perimeter minimizer proof} fail, we could find $\delta>0$ and $G\subset\mathbb{R}^{n+1}$ with $F\Delta G\subset\subset B$ for some ball $B$, such that $P(G;B\setminus W)+\delta\le P(F;B\setminus W)$. For $R$ large enough to entail $B\subset\subset \textbf{\textup{C}}_R^\nu$ we would then find
\[
{\rm res}_W(F,\nu)+\delta\le\omega_n\,R^n-P(F;\textbf{\textup{C}}_R^\nu\setminus W)+\delta\le\omega_n\,R^n-P(G;\textbf{\textup{C}}_R^\nu\setminus W)\,,
\]
which, letting $R\to\infty$, would violate the maximality of $(F,\nu)$ in $\mathcal{R}(W)$.
\noindent {\bf Step four:} We show that if $\mathcal{R}(W)>0$ and $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, then $\partial F\subset\mathbf{S}_{[A,B]}^\nu$ for $A,B$ as in \eqref{properties of F updated}. Otherwise, by the same truncation procedure leading to \eqref{truncate up to the obstacle} and by $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, we would find
\begin{equation*}
\omega_n R^n - P\big(F^*;\textbf{\textup{C}}_R^{\nu}\setminus W\big)\geq \omega_n R^n -P\big(F;\textbf{\textup{C}}_R^{\nu}\setminus W\big)\geq \mathcal{R}(W) \qquad\forall R>0\,,
\end{equation*}
so that $(F^\ast,\nu)\in{\rm Max}[\mathcal{R}(W)]$ too. Now $P\big(F;\textbf{\textup{C}}_R^{\nu}\setminus W\big) - P\big(F^*;\textbf{\textup{C}}_R^{\nu}\setminus W\big) $ is increasing in $R$, and since ${\rm res}_W(F,\nu) = {\rm res}_W(F^\ast,\nu)$, it follows that $P\big(F;\textbf{\textup{C}}_R^{\nu}\setminus W\big) = P\big(F^*;\textbf{\textup{C}}_R^{\nu}\setminus W\big) $ for large $R$. But this can hold only if $\partial F \cap \Omega$ is a hyperplane disjoint from $W$, in which case $\mathcal{R}(W)={\rm res}_W(F,\nu)=0$.
\noindent {\bf Step five:} Still assuming $\mathcal{R}(W)>0$, we complete the proof of statement (ii) by proving \eqref{asymptotics of F}. By \eqref{properties of F updated}, if $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, then $F/R\to H^-=\{x\in\mathbb{R}^{n+1}:x\cdot\nu<0\}$ in $L^1_{{\rm loc}}(\mathbb{R}^{n+1})$ as $R\to\infty$. By \eqref{local perimeter minimizer proof} and by improved convergence (i.e., Remark \ref{remark improved convergence} -- notice carefully that $\partial F$ is bounded in the direction $\nu$ thanks to step four), we find $R_F>0$ and functions $\{f_R\}_{R>R_F}\subset C^1(\textbf{\textup{D}}_2^\nu\setminus\textbf{\textup{D}}_1^\nu)$ such that
\[
\big(\textbf{\textup{C}}_2^\nu\setminus\textbf{\textup{C}}_1^\nu\big)\cap\partial (F/R)=\big\{x+f_R(x)\,\nu:x\in \textbf{\textup{D}}_2^\nu\setminus\textbf{\textup{D}}_1^\nu\big\}\,,\qquad\forall R>R_F\,,
\]
with $\|f_R\|_{C^1(\textbf{\textup{D}}_2^\nu\setminus\textbf{\textup{D}}_1^\nu)}\to 0$ as $R\to\infty$. Scaling back to $F$ we deduce that
\begin{equation}
\label{F represented by u}
(\partial F)\setminus\textbf{\textup{C}}_{R_F}^\nu=\big\{x+f(x)\,\nu:x\in\nu^\perp\setminus\textbf{\textup{D}}_{R_F}^\nu\big\}\,,
\end{equation}
for a (necessarily smooth) solution $f$ to the minimal surface equation with
\begin{equation}
\label{u estimates}
\|f\|_{C^0(\nu^\perp\setminus\textbf{\textup{D}}_{R_F}^\nu)}\le B-A\,,\qquad
\lim_{R\to\infty}\|\nabla f\|_{C^0(\textbf{\textup{D}}_{2\,R}^\nu\setminus \textbf{\textup{D}}_{R}^\nu)}=0\,,
\end{equation}
thanks to the fact that $f(x)=R\,f_R(x/R)$ if $x\in\textbf{\textup{D}}_{2\,R}^\nu\setminus \textbf{\textup{D}}_{R}^\nu$. {\bf When $n\ge 3$}, \eqref{asymptotics of F} follows by \eqref{F represented by u} and Proposition \ref{prop schoen}. {\bf When $n=2$}, \eqref{schoen 1} holds by \eqref{local perimeter minimizer proof}. To check \eqref{schoen 2}, we deduce from ${\rm res}_W(F,\nu)\ge0$ the existence of $R'>R_F$ such that $\omega_n\,R^n\ge P(F;\textbf{\textup{C}}_R^\nu\setminus W)-1$ if $R>R'$. In particular, setting $M=(\partial F)\setminus B_{R_F}$, for $R>R'$ we have
\[
\H^2(M\cap B_R)\le \H^2(M\cap W)+P(F;\textbf{\textup{C}}_R^\nu\setminus W)\le \omega_n\,R^n+1+\H^2(M\cap W)\le C\,R^n\,,
\]
provided $C=\omega_n+[(1+\H^2(M\cap W))/(R')^n]$; while if $R\in(R_F,R')$, then $\H^2(M\cap B_R)\le C\,R^n$ with $C=\H^2(M\cap B_{R'})/R_F^n$. This said, we can apply Proposition \ref{prop schoen} to deduce \eqref{schoen conclusion 2}. Since $\partial F$ is bounded in a slab, the logarithmic term in \eqref{schoen conclusion 2} must vanish (i.e. \eqref{schoen conclusion 2} holds with $b=0$), and thus \eqref{asymptotics of F} is proved. {\bf Finally, when $n=1$}, by \eqref{F represented by u} and \eqref{u estimates} there are $a_1,a_2\in\mathbb{R}$, $x_1<x_2$, $x_1,x_2\in\nu^\perp\equiv\mathbb{R}$ such that $f(x)=a_1$ for $x\in\nu^\perp$, $x<x_1$, and $f(x)=a_2$ for $x\in\nu^\perp$, $x>x_2$. Now, setting $M_1=\{x+a_1\,\nu:x\in\nu^\perp,x<x_1\}$ and $M_2=\{x+a_2\,\nu:x\in\nu^\perp,x>x_2\}$, we have that
\[
P(F;\textbf{\textup{C}}_R^\nu\setminus W)=\H^n\big(\textbf{\textup{C}}_R^\nu\cap(\partial F)\setminus(W\cup M_1\cup M_2)\big)+2\,R-|x_2-x_1|\,;
\]
while, if $L$ denotes the line through $x_1+a_1\,\nu$ and $x_2+a_2\,\nu$, then we can find $\nu_L\in\SS^1$ and a set $F_L$ such that $(F_L,\nu_L)\in\mathcal F$ with $\partial F_L=\big[\big((\partial F)\setminus(M_1\cup M_2)\big)\cup (L_1\cup L_2)\big]$, where $L_1$ and $L_2$ are the two half-lines obtained by removing from $L$ the segment joining $x_1+a_1\,\nu$ and $x_2+a_2\,\nu$. In this way, $P(F_L;\textbf{\textup{C}}_R^{\nu_L}\setminus W)=\H^n\big(\textbf{\textup{C}}_R^\nu\cap(\partial F)\setminus(W\cup M_1\cup M_2)\big)+2\,R-\big|(x_1+a_1\,\nu)-(x_2+a_2\,\nu)\big|$, so that ${\rm res}_W(F_L,\nu_L)-{\rm res}_W(F,\nu)=\big|(x_1+a_1\,\nu)-(x_2+a_2\,\nu)\big|-|x_2-x_1|>0$, against $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ if $a_1\ne a_2$. Hence, $a_1=a_2$.
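The strict inequality invoked in the last step is elementary: since $\nu\perp\nu^\perp$, the Pythagorean theorem gives
\[
\big|(x_2+a_2\,\nu)-(x_1+a_1\,\nu)\big|=\sqrt{|x_2-x_1|^2+(a_2-a_1)^2}>|x_2-x_1|\qquad\mbox{whenever $a_1\ne a_2$}\,.
\]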
It remains to prove that \eqref{F represented by u} holds with $R_2=R_2(W)$ in place of $R_F$, and that the constants $a$, $b$, $c$ and $C_0$ appearing in \eqref{asymptotics of F} can be bounded in terms of $W$ only. To this end, we notice that the argument presented in step one shows that ${\rm Max}[\mathcal{R}(W)]$ is pre-compact in $L^1_{\rm loc}(\mathbb{R}^{n+1})$. Using this fact and a contradiction argument based on improved convergence (Remark \ref{remark improved convergence}), we conclude the proof of statement (ii).
\noindent {\bf Step six:} We complete the proof of statement (i) and begin the proof of statement (iii) by showing that, setting for brevity $d={\rm diam}\,(W)$, it holds
\begin{equation}
\label{RW upper}
\H^n(W\cap\Pi)\le\mathcal{R}(W) \leq \sup_{\nu\in\SS^n}\mathcal{H}^n(\mathbf{p}_{\nu^\perp} (W))\leq\omega_n\,(d/2)^n\,,
\end{equation}
whenever $\Pi$ is a hyperplane in $\mathbb{R}^{n+1}$. We have already proved the first inequality in step one. To prove the others, we notice that, if $(F,\nu)\in\mathcal F$, then $\mathbf{p}_{\nu^\perp}(\partial F)=\nu^\perp$ and \eqref{perimeter is decreasing}
give, for every $R>R'$,
\begin{eqnarray}
\nonumber
&&\!\!\!\!\!\!\!\!\!\!-{\rm res}_W(F,\nu)
\ge
P(F;\textbf{\textup{C}}_R^\nu\setminus W)-\omega_n\,R^n
\ge\H^n\big(\mathbf{p}_{\nu^\perp}(\partial F\setminus W)\cap\textbf{\textup{D}}_R^\nu\big)-\omega_n\,R^n
\\
\label{projection type inequality}
&&\!\!\!\!=-\H^n\big(\textbf{\textup{D}}_R^\nu\setminus\mathbf{p}_{\nu^\perp}(\partial F\setminus W)\big)
\ge-\H^n(\mathbf{p}_{\nu^\perp}(W))\ge-\omega_n\,(d/2)^n\,,
\end{eqnarray}
where in the last step we have used the isodiametric inequality. Maximizing over $(F,\nu)$ in \eqref{projection type inequality} we complete the proof of \eqref{RW upper}. Moreover, if $W=\mathrm{cl}\,(B_{d/2})$, then, since $\mathcal{S}(\mathrm{cl}\,(B_{d/2}))=\H^n(\mathrm{cl}\,(B_{d/2})\cap\Pi)=\omega_n\,(d/2)^n$ for any hyperplane $\Pi$ through the origin, we find that $\mathcal{R}(\mathrm{cl}\,(B_{d/2}))=\omega_n\,(d/2)^n$; in particular, \eqref{RW upper} implies \eqref{optimal RW}.
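For the reader's convenience, the isodiametric step used in \eqref{projection type inequality} can be spelled out as follows: since orthogonal projections do not increase diameters,
\[
\H^n(\mathbf{p}_{\nu^\perp}(W))\le\omega_n\,\Big(\frac{{\rm diam}\,\mathbf{p}_{\nu^\perp}(W)}2\Big)^n\le\omega_n\,\Big(\frac{{\rm diam}\,W}2\Big)^n=\omega_n\,(d/2)^n\,.
\]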
\noindent {\bf Step seven:} We continue the proof of statement (iii) by showing \eqref{characterization 2}. Let $\mathcal{R}(W)=\omega_n\,(d/2)^n$ and let $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$. Since every inequality in \eqref{projection type inequality} holds as an equality, we find in particular that
\begin{eqnarray}\label{dai 1}
&&\sup_{R>R'}P(F;\textbf{\textup{C}}_R^\nu\setminus W)-\H^n\big(\mathbf{p}_{\nu^\perp}(\partial F\setminus W)\cap\textbf{\textup{D}}_R^\nu\big)=0\,,
\\\label{dai 3}
&&\H^n(\mathbf{p}_{\nu^\perp}(W))=\omega_n\,(d/2)^n\,.
\end{eqnarray}
By \eqref{dai 3} and the discussion of the equality cases for the isodiametric inequality (see, e.g. \cite{maggiponsiglionepratelli}), we see that, for some $x_0\in\nu^\perp$,
\begin{equation}
\label{third condition}
\mathbf{p}_{\nu^\perp}(W)=\mathrm{cl}\,(\textbf{\textup{D}}_{d/2}^\nu(x_0))\,,\qquad\mbox{so that $W\subset\textbf{\textup{C}}_{d/2}^\nu(x_0)$}\,.
\end{equation}
Condition \eqref{dai 1} implies that \eqref{asymptotics of F} holds with $u\equiv a$ for some $a\in[A,B]=\bigcap\{(\a,\b):W\subset\mathbf{S}^\nu_{(\a,\b)}\}$; in particular, since $(\partial F)\setminus W$ is a minimal surface and $W\subset\textbf{\textup{C}}_{d/2}^\nu(x_0)$, by analytic continuation we find that
\begin{equation}
\label{two conditions i}
(\partial F)\setminus\textbf{\textup{C}}_{d/2}^\nu(x_0) =\Pi\setminus\textbf{\textup{C}}_{d/2}^\nu(x_0)\,,\qquad\Pi=\big\{x:x\cdot\nu=a\big\}\,.
\end{equation}
By \eqref{two conditions i}, we have that for $R>R'$,
\[
P(F;\textbf{\textup{C}}_R^\nu\setminus W)-\omega_n\,R^n=P(F;\textbf{\textup{C}}_{d/2}^\nu(x_0)\setminus W)-\omega_n\,(d/2)^n\,.
\]
Going back to \eqref{projection type inequality}, this implies $P(F;\textbf{\textup{C}}_{d/2}^\nu(x_0)\setminus W)=0$. However, since $(\partial F)\setminus W$ is (distributionally) a minimal surface, $P(F;B_\rho(x)\setminus W)\ge\omega_n\,\rho^n$ whenever $x\in (\partial F)\setminus W$ and $\rho<{\rm dist}(x,W)$, so that
$P(F;\textbf{\textup{C}}_{d/2}^\nu(x_0)\setminus W)=0$ gives $((\partial F)\setminus W)\cap\textbf{\textup{C}}_{d/2}^\nu(x_0)=\emptyset$. Hence, using also \eqref{two conditions i}, we find $(\partial F)\setminus W=\Pi\setminus\mathrm{cl}\,\big(B_{d/2}(x)\big)$ for some $x\in\Pi$, that is \eqref{characterization 2}.
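The density lower bound just invoked is a standard consequence of the monotonicity formula: for $x\in (\partial F)\setminus W$ and $0<\rho<{\rm dist}(x,W)$ one has $B_\rho(x)\cap W=\emptyset$, and
\[
\rho\mapsto\frac{P(F;B_\rho(x))}{\omega_n\,\rho^n}\quad\mbox{is increasing, with}\quad\lim_{\rho\to 0^+}\frac{P(F;B_\rho(x))}{\omega_n\,\rho^n}\ge 1\,,
\]
so that $P(F;B_\rho(x)\setminus W)=P(F;B_\rho(x))\ge\omega_n\,\rho^n$.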
\noindent {\bf Step eight:} We finally prove that $\mathcal{R}(W)=\omega_n\,(d/2)^n$ if and only if there are a hyperplane $\Pi$ and a point $x\in\Pi$ such that
\begin{eqnarray}
\label{condo 1}
&&\Pi\cap\partial B_{d/2}(x)\subset W\,,
\\
\label{condo 2}
&&\mbox{$\Omega\setminus(\Pi\setminus B_{d/2}(x))$ has two unbounded connected components}\,.
\end{eqnarray}
We first prove that the two conditions are sufficient. Let $\nu$ be a unit normal to $\Pi$ and let $\Pi^+$ and $\Pi^-$ be the two open half-spaces bounded by $\Pi$. The condition $\Pi\cap\partial B_{d/2}(x)\subset W$ implies $W\subset\textbf{\textup{C}}_{d/2}^\nu(x)$, and thus
\[
\Omega\setminus\mathrm{cl}\,\big[ \textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)\big]=(\Pi^+\cup\Pi^-)\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)\big]\,.
\]
In particular, $\Omega \setminus(\Pi\setminus B_{d/2}(x))$ has a connected component $F$ which contains
\[
\Pi^+\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)\big]\,;
\]
and since $\Omega\setminus(\Pi\setminus B_{d/2}(x))$ contains exactly two unbounded connected components, it cannot be that $F$ contains also $\Pi^-\setminus\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]$, therefore
\begin{equation}
\label{pi plus pi minus}
\Pi^+\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)\big]\subset F\,,\qquad \Pi^-\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)\big]\subset\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(F)\,.
\end{equation}
As a consequence $\partial F$ is contained in the slab $\{y:|(y-x)\cdot\nu|<d\}$, and is such that $\mathbf{p}_{\nu^\perp}(\partial F)=\nu^\perp$, that is, $(F,\nu)\in\mathcal F$. Moreover, \eqref{pi plus pi minus} implies
\[
\Pi\setminus \mathrm{cl}\,(B_{d/2}(x))\subset\Omega\cap\partial F\,,
\]
while the fact that $F$ is a connected component of $\Omega \setminus (\Pi\setminus B_{d/2}(x))$ implies $\Omega\cap\partial F\subset \Pi\setminus \mathrm{cl}\,(B_{d/2}(x))$. In conclusion, $\Omega\cap\partial F=\Pi\setminus\mathrm{cl}\,(B_{d/2}(x))$, hence
\begin{equation}
\omega_n\,(d/2)^n=
\lim_{r\to \infty}\big(\omega_n\,r^n - P(F; \textbf{C}_r^{\nu}\setminus W)\big) \leq \mathcal{R}(W) \leq \omega_n\,(d/2)^n\,,
\end{equation}
and $\mathcal{R}(W)=\omega_n\,(d/2)^n$, as claimed. We prove that the two conditions are necessary. Let $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$. As proved in step seven, there are a hyperplane $\Pi$ and a point $x\in\Pi$ such that $\Omega\cap\partial F=\Pi\setminus \mathrm{cl}\,(B_{d/2}(x))$. If $z\in\Pi\cap\partial B_{d/2}(x)$ but $z\in\Omega$, then there is $\rho>0$ such that $B_\rho(z)\subset\Omega$, and since $\partial F$ is a minimal surface in $\Omega$, we would obtain that $\Pi\cap B_\rho(z)\subset\Omega\cap\partial F$, against $\Omega\cap\partial F=\Pi\setminus\mathrm{cl}\,(B_{d/2}(x))$. So it must be that $\Pi\cap\partial B_{d/2}(x)\subset W$, and the necessity of \eqref{condo 1} is proved. To prove the necessity of \eqref{condo 2}, we notice that since $\Pi^+\setminus\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]$ and $\Pi^-\setminus\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]$ are both open, connected, and unbounded subsets of $\Omega \setminus (\Pi\setminus B_{d/2}(x))$, and since the complement in $\Omega \setminus (\Pi\setminus B_{d/2}(x))$ of their union is bounded, it must be that $\Omega \setminus (\Pi\setminus B_{d/2}(x))$ has {\it at most} two unbounded connected components: therefore we just need to exclude that {\it it has only one}. Assuming by contradiction that this is the case, we could then connect any point $x^+\in\Pi^+\setminus\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]$ to any point $x^-\in\Pi^-\setminus\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]$ with a continuous path $\gamma$ entirely contained in $\Omega \setminus (\Pi\setminus B_{d/2}(x))$. Now, recalling that $\Omega\cap\partial F=\Pi\setminus\mathrm{cl}\,(B_{d/2}(x))$, we can pick $x_0\in\Pi\setminus\mathrm{cl}\,(B_{d/2}(x))$ and $r>0$ so that
\begin{equation}
\label{separating F}
B_r(x_0)\cap\Pi^+\subset F\,,\qquad B_r(x_0)\cap\Pi^-\subset\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(F)\,,
\end{equation}
and $B_r(x_0)\cap\mathrm{cl}\,[\textbf{\textup{C}}^\nu_{d/2,(-d,d)}(x)]=\emptyset$. We can then pick $x^+\in B_r(x_0)\cap\Pi^+$, $x^-\in B_r(x_0)\cap\Pi^-$, and then connect them by a path $\gamma$ entirely contained in $\Omega \setminus (\Pi\setminus B_{d/2}(x))$. By \eqref{separating F}, $\gamma$ must intersect $\partial F$, and since $\gamma$ is contained in $\Omega$, we see that $\gamma$ must intersect $\Omega\cap\partial F=\Pi\setminus\mathrm{cl}\,(B_{d/2}(x))$, which of course contradicts the containment of $\gamma$ in $\Omega \setminus (\Pi\setminus B_{d/2}(x))$. We have thus proved that $\Omega \setminus (\Pi\setminus B_{d/2}(x))$ has exactly two unbounded connected components.
\end{proof}
\section{Resolution theorem for exterior isoperimetric sets}\label{section resolution for exterior} The notation set in \eqref{cylinders and slabs} is in use. Given $v_j\to\infty$, we set $\l_j=v_j^{1/(n+1)}$.
\begin{proof}[Proof of Theorem \ref{thm main psi}] We recall that, throughout the proof, $\mathcal{R}(W)>0$. Theorem \ref{thm main psi}-(i) and the estimate for $|v^{-1/(n+1)}\,|x|-\omega_{n+1}^{-1/(n+1)}|$ in Theorem \ref{thm main psi}-(iv) have already been proved in Theorem \ref{thm existence and uniform min}-(ii, iii).
\noindent {\bf Step one:} We prove that
\begin{equation}\label{energy upper bound}
\varlimsup_{v \to \infty} \psi_W(v)- P(B^{(v)})\leq -\mathcal{R}(W)\,.
\end{equation}
To this end, let $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, so that by \eqref{main residue graphicality of F} and \eqref{asymptotics of F}, we have
\begin{equation}
\label{order of F0}
F\setminus\textbf{\textup{C}}_{R_2}^\nu=\big\{x+t\,\nu:x\in\nu^\perp\,,|x|>R_2\,,t<f(x)\big\}\,,
\end{equation}
for a function $f\in C^1(\nu^\perp)$ satisfying
\begin{eqnarray}
\label{order of F}
&&\big|f(x)-\big(a+b\,|x|^{2-n}+(c\cdot x)\,|x|^{-n}\big)\big|\le C_0\,|x|^{-n}\,,
\\
\nonumber
&&\max\big\{|x|^{n-1}\,|\nabla f(x)|,|x|^{n}\,|\nabla^2 f(x)|\big\}\le C_0\,,\qquad\forall x\in\nu^\perp\,,|x|>R_2\,,
\end{eqnarray}
and for some $a,b\in\mathbb{R}$ and $c\in\nu^\perp$ such that $\max\{|a|,|b|,|c|\}\le C(W)<\infty$ (moreover, we can take $b=0$, $c=0$ and $C_0=0$ if $n=1$). We are going to construct competitors for $\psi_W(v)$ with $v$ large by gluing a large sphere $S$ to $\partial F$ along $\partial\textbf{\textup{C}}_r^\nu$ for $r>R_2$. This operation comes at the price of an area error located on the cylinder $\partial\textbf{\textup{C}}_r^\nu$. We can make this error negligible thanks to the fact that \eqref{order of F} determines the distance (inside $\partial\textbf{\textup{C}}_r^\nu$) of $\partial F$ from a hyperplane (namely, $\partial G_r$ for the half-space $G_r$ defined below) up to ${\rm o}(r^{1-n})$ as $r\to\infty$. Thus, the asymptotic expansion \eqref{asymptotics of F} is just as precise as needed in order to perform this construction, i.e., the construction would not be possible with less precise information.
We now discuss the construction in detail. Given $r>R_2$, we consider the half-space $G_r\subset\mathbb{R}^{n+1}$ defined by the condition that
\begin{equation}
\label{def of Gr}
G_r \cap \partial\textbf{\textup{C}}_r^\nu =\big\{x+t\,\nu:x\in\nu^\perp\,,|x|=r\,, t< a + b\,r^{2-n}+(c\cdot x)\,r^{-n}\big\}\,,
\end{equation}
so that $G_r$ is the ``best half-space approximation'' of $F$ on $\partial\textbf{\textup{C}}_r^\nu$ according to \eqref{order of F}. Denoting by $\mathrm{hd}\,(X,Y)$ the Hausdorff distance between $X,Y\subset\mathbb{R}^{n+1}$, for every $r>R_2$ and $v>0$ we can define $x_{r,v}\in\mathbb{R}^{n+1}$ in such a way that $v\mapsto x_{r,v}$ is continuous and
\begin{eqnarray}
\label{hausdorff convergence to halfspace}
\lim_{v\to\infty}\mathrm{hd}\,(B^{(v)}(x_{r,v})\cap K,G_r\cap K)=0\qquad\forall\,K\subset\subset\mathbb{R}^{n+1}\,.
\end{eqnarray}
Thus, the balls $B^{(v)}(x_{r,v})$ have volume $v$ and are locally converging in Hausdorff distance, as $v\to\infty$, to the optimal half-space $G_r$. Finally, we notice that by \eqref{order of F} we can find $\a<\b$ such that
\begin{align}\label{containment in finite cylinder}
\big((\partial F) \cup (\partial G_r) \cup (G_r \Delta F)\big) \cap \textbf{\textup{C}}_r^\nu \,\,\subset\,\, \textbf{\textup{C}}_{r,(\a+1,\b-1)}^\nu\,,
\end{align}
and then define $F_{r,v}$ by setting
\begin{equation}\label{the competitors}
F_{r,v}=\big(F\cap\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)\cup\big(B^{(v)}(x_{r,v})\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big]\big)\,,
\end{equation}
see
\begin{figure}\input{upperbound2.pstex_t}\caption{\small{The competitors $F_{r,v}$ constructed in \eqref{the competitors}. A maximizer $F$ in the isoperimetric residue $\mathcal{R}(W)$ is joined to a ball of volume $v$, whose center $x_{r,v}$ is determined by looking at the best hyperplane $\partial G_r$ approximating $\partial F$ on the ``lateral'' cylinder $\partial\textbf{\textup{C}}_r^\nu$. To ensure that the area error made in joining this large sphere to $\partial F$ is negligible, the distance between $\partial F$ and the sphere inside $\partial\textbf{\textup{C}}_r^\nu$ must be ${\rm o}(r^{1-n})$ as $r\to\infty$. The asymptotic expansion \eqref{order of F} gives a hyperplane $\partial G_r$ which is close to $\partial F$ up to ${\rm O}(r^{-n})$, and is thus just as precise as needed to perform the construction.}}\label{fig upperbound}\end{figure}
Figure \ref{fig upperbound}. We claim that, by using $F_{r,v}$ as competitors for $\psi_W(|F_{r,v}|)$, and then sending first $v\to\infty$ and then $r\to\infty$, one obtains \eqref{energy upper bound}. We first notice that by \eqref{hausdorff convergence to halfspace} and \eqref{containment in finite cylinder} (see, e.g., \cite[Theorem 16.16]{maggiBOOK}), we have
\begin{eqnarray}\nonumber
P(F_{r,v};\Omega)&=&P(F;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\setminus W)+P\big(B^{(v)}(x_{r,v});\mathbb{R}^{n+1}\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big]\big)
\\\label{perimeter Frv}
&&+\H^n\big((F\Delta B^{(v)}(x_{r,v}))\cap\partial_\ell\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)\,,
\end{eqnarray}
where the last term is the ``gluing error'' generated by the mismatch between $\partial F$ and $\partial B^{(v)}(x_{r,v})$ along $\partial_\ell\textbf{\textup{C}}_{r,(\a,\b)}^\nu$. Now, thanks to \eqref{order of F} we have
$\mathrm{hd}\,(G_r\cap\partial\textbf{\textup{C}}_r^\nu,F\cap\partial\textbf{\textup{C}}_r^\nu)\le C_0\,r^{-n}$, so that
\begin{equation}
\label{small error on lateral boundary}
\H^n\big((F\Delta G_r)\cap\partial\textbf{\textup{C}}_r^\nu\big)\le n\,\omega_n\,r^{n-1}\, \mathrm{hd}\,(G_r\cap\partial\textbf{\textup{C}}_r^\nu,F\cap\partial\textbf{\textup{C}}_r^\nu)\le C(n,W)/r\,.
\end{equation}
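In more detail, the first inequality in \eqref{small error on lateral boundary} can be obtained by slicing the lateral cylinder: writing $g_r(x)=a+b\,r^{2-n}+(c\cdot x)\,r^{-n}$ for the affine function describing $G_r\cap\partial\textbf{\textup{C}}_r^\nu$ in \eqref{def of Gr}, both $F$ and $G_r$ have graphical traces on $\partial\textbf{\textup{C}}_r^\nu$ by \eqref{order of F0}, so that
\[
\H^n\big((F\Delta G_r)\cap\partial\textbf{\textup{C}}_r^\nu\big)=\int_{\partial\textbf{\textup{D}}_r^\nu}|f-g_r|\,d\H^{n-1}\le n\,\omega_n\,r^{n-1}\,\sup_{|x|=r}\,|f(x)-g_r(x)|\le C_0\,n\,\omega_n\,r^{-1}\,,
\]
where the last inequality follows from \eqref{order of F}.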
At the same time, by \eqref{hausdorff convergence to halfspace},
\begin{equation}
\label{L1 convergence on cylinder boundaries}
\lim_{v\to\infty}\H^n\big((G_r\Delta B^{(v)}(x_{r,v}))\cap\partial_\ell\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)=0\,,
\end{equation}
and thus we have the following estimate for the gluing error,
\begin{equation}
\label{gin 1}
\varlimsup_{v\to\infty}\H^n\big((F\Delta B^{(v)}(x_{r,v}))\cap\partial_\ell\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)\le\frac{C(n,W)}r\,,\qquad\forall r>R_2\,.
\end{equation}
Again by \eqref{hausdorff convergence to halfspace}, we find
\begin{eqnarray}
\label{ball perimeter converges to halfspace perimeter}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\lim_{v\to\infty}P\big(B^{(v)}(x_{r,v});\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)=P\big(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)
\\
\label{Gr perimeter}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!1\le(\omega_n\,r^n)^{-1}\,P\big(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)=\fint_{\textbf{\textup{D}}_r^\nu}\sqrt{1+|c|^2\,r^{-2\,n}}\le 1+C_0\,r^{-2\,n}\,,
\end{eqnarray}
so that, by \eqref{ball perimeter converges to halfspace perimeter} and by the lower bound in \eqref{Gr perimeter}, for every $r>R_2$,
\begin{equation}\label{gin 2}
\varlimsup_{v\to\infty}P\big(B^{(v)}(x_{r,v});\mathbb{R}^{n+1}\setminus\mathrm{cl}\,\big[\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big]\big)-P(B^{(v)})
\le -\omega_n\,r^n\,.
\end{equation}
Combining \eqref{gin 1} and \eqref{gin 2} with \eqref{perimeter Frv} and the fact that $\textbf{\textup{C}}_{r,(\a,\b)}^\nu\cap\partial F=\textbf{\textup{C}}_r^\nu\cap\partial F$ (see \eqref{containment in finite cylinder}), we find that for every $r>R_2$,
\begin{eqnarray}\nonumber
&&\varlimsup_{v\to\infty}P(F_{r,v};\Omega)-P(B^{(v)})\le P(F;\textbf{\textup{C}}_r^\nu\setminus W)-\omega_n\,r^n+ C(n,W)/r
\\
\label{perimeter Frv 2}
&&\le-{\rm res}_W(F,\nu)+C(n,W)/r=-\mathcal{R}(W)+C(n,W)/r\,,
\end{eqnarray}
where \eqref{perimeter is decreasing} has been used. Now, combining the elementary estimates
\begin{equation}
\label{volum of Frv}
\max\big\{\big||F_{r,v}|-v\big|\,,v^{-1/(n+1)}\,|P(B^{(v)})-P(B^{(|F_{r,v}|)})|\big\}\le C(n)\,r^{n+1}
\end{equation}
with \eqref{perimeter Frv 2}, we see that
\begin{eqnarray}\label{perimeter Frv 3}
\varlimsup_{v\to\infty}\psi_W(|F_{r,v}|)-P(B^{(|F_{r,v}|)})\le-\mathcal{R}(W)+ C(n,W)/r\,,\,\,\forall r>R_2\,.
\end{eqnarray}
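The second estimate in \eqref{volum of Frv} follows from the explicit formula $P(B^{(v)})=(n+1)\,\omega_{n+1}^{1/(n+1)}\,v^{n/(n+1)}$ and the mean value theorem: for every $v,w>0$,
\[
\big|P(B^{(v)})-P(B^{(w)})\big|\le n\,\omega_{n+1}^{1/(n+1)}\,\min\{v,w\}^{-1/(n+1)}\,|v-w|\,,
\]
which, applied with $w=|F_{r,v}|$ and combined with $\big||F_{r,v}|-v\big|\le C(n)\,r^{n+1}$, gives the stated bound for $v$ large (so that $\min\{v,|F_{r,v}|\}\ge v/2$, up to enlarging $C(n)$).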
Again by \eqref{volum of Frv} and since $v\mapsto|F_{r,v}|$ is a continuous function, we see that
$\varlimsup_{v\to\infty}\psi_W(|F_{r,v}|)-P(B^{(|F_{r,v}|)})=\varlimsup_{v\to\infty}\psi_W(v)-P(B^{(v)})$. This last identity combined with \eqref{perimeter Frv 3} implies \eqref{energy upper bound} in the limit $r\to\infty$.
\noindent {\bf Step two:} Now let $E_j\in{\rm Min}[\psi_W(v_j)]$ for $v_j \to \infty$. By \eqref{uniform lambda minimality} and a standard argument (see, e.g. \cite[Theorem 21.14]{maggiBOOK}), there is a local perimeter minimizer with free boundary $F$ in $\Omega$ such that, up to extracting subsequences,
\begin{eqnarray}\nonumber
&&\mbox{$E_j\to F$ in $L^1_{{\rm loc}}(\mathbb{R}^{n+1})$, $\H^n\llcorner\partial E_j\rightharpoonup\H^n\llcorner\partial F$ as Radon measures in $\Omega$}\,,
\\
&&\mathrm{hd}\,(K\cap\partial E_j;K\cap\partial F)\to0\qquad\mbox{for every $K\subset\subset\Omega$}\,.\label{convergence of Ej to F last proof}
\end{eqnarray}
Notice that it is not immediate to conclude from $E_j\in{\rm Min}[\psi_W(v_j)]$ that (for some $\nu\in\SS^n$) $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ (or even that $(F,\nu)\in\mathcal F$), nor that $P(E_j;\Omega)-P(B^{(v_j)})$ is asymptotically bounded from below by $-{\rm res}_W(F,\nu)$. In this step we prove some preliminary properties of $F$, and in particular we exploit the blowdown result for exterior minimal surfaces contained in Theorem \ref{theorem mesoscale criterion}-(ii) to prove that $F$ satisfies \eqref{order of F0} and \eqref{order of F} (see statement (c) below). Then, in step three, we use the decay rates \eqref{order of F} to show that $E_j$ can be ``glued'' to $F$, similarly to the construction of step one, and then derive from the corresponding energy estimates the lower bound matching \eqref{energy upper bound} and the optimality of $F$ in $\mathcal{R}(W)$.
\noindent {\bf (a)} {\it $\Omega\cap\partial F\cap\partial B_\rho\ne\emptyset$ for every $\rho$ such that $W\subset\subset B_\rho$}: If not, there would be $\varepsilon>0$ such that $W\subset\subset B_{\rho-\varepsilon}$ and $\Omega\cap\partial F\cap A_{\rho-\varepsilon}^{\rho+\varepsilon}=\emptyset$ (recall that $A_r^s=\{x:s>|x|>r\}$). By \eqref{convergence of Ej to F last proof} and the constant mean curvature condition satisfied by $\Omega\cap\partial E_j$, we would then find that each $E_j$ (with $j$ large enough) has a connected component of the form $B^{(w_j)}(x_j)$, with $B^{(w_j)}(x_j)\subset\subset \mathbb{R}^{n+1}\setminus B_{\rho+\varepsilon}$ and $w_j\ge v_j-C(n)\,(\rho+\varepsilon)^{n+1}$. In particular, against \eqref{energy upper bound} and $\mathcal{R}(W)>0$,
\[
\psi_W(v_j)=P(E_j;\Omega)\ge P(B^{(v_j-C\,(\rho+\varepsilon)^{n+1})})\ge P(B^{(v_j)})-C\l_j^{-1}(\rho+\varepsilon)^{n+1}\,.
\]
\noindent {\bf (b)} {\it Sharp area bound}: We combine the upper energy bound \eqref{energy upper bound} with the perimeter inequality for spherical symmetrization to prove
\begin{equation}\label{growth bound for F}
P(F;\Omega \cap B_r) \leq \omega_n r^n - \mathcal{R}(W)\,,\qquad\mbox{for every $r$ s.t. $W\subset\subset B_r$}\,.
\end{equation}
(Notice that \eqref{growth bound for F} does not immediately imply the bound for $P(F;\Omega\cap\textbf{\textup{C}}_r^\nu)$ which would be needed to compare $\mathcal{R}(W)$ and ${\rm res}_W(F,\nu)$.) To prove \eqref{growth bound for F} we argue by contradiction, assuming the existence of $\delta>0$ and $r$ with $W\subset\subset B_r$ such that
$P(F;\Omega\cap B_r)\ge \omega_n\,r^n-\mathcal{R}(W)+\delta$. In particular, for $j$ large enough, we would then have
\begin{equation}\label{bad one}
P(E_j;\Omega\cap B_r)\ge\omega_nr^n - \mathcal{R}(W)+\delta\,.
\end{equation}
Again for $j$ large, it must be that $\H^n(\partial E_j\cap\partial B_r)=0$: indeed, by \eqref{uniform lambda minimality}, $\Omega\cap\partial E_j$ has mean curvature of order ${\rm O}(\l_{j}^{-1})$, while of course $\partial B_r$ has constant mean curvature equal to $n/r$. Thanks to $\H^n(\partial E_j\cap\partial B_r)=0$,
\begin{equation}
\label{bad one 1}
P(E_j;\Omega)=P(E_j;\Omega\cap B_r)+P\big(E_j;\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(B_r)\big)\,.
\end{equation}
If $E_j^s$ denotes the spherical symmetral of $E_j$ such that $E_j^s\cap\partial B_\rho$ is a spherical cap in $\partial B_\rho$, centered at $\rho\,e_{n+1}$, with area equal to $\H^n(E_j\cap\partial B_\rho)$, then we have the perimeter inequality
\begin{equation}
\label{bad one 2}
P\big(E_j;\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(B_r)\big)\ge P\big(E_j^s;\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(B_r)\big)\,;
\end{equation}
see \cite{cagnettiperuginistoger}. Now, we can find a half-space $J$ orthogonal to $e_{n+1}$ and such that $\H^n(J\cap\partial B_r)=\H^n(E_j\cap\partial B_r)$. In this way, using that $|E_j^s\setminus B_r|=|E_j\setminus B_r|$ (by Fubini's theorem in spherical coordinates), and that $\H^n(B_r\cap\partial J)\le\omega_n\,r^n$ (by the fact that $\partial J$ is a hyperplane), we find
\begin{eqnarray*}
P\big(E_j^s;\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(B_r)\big)&=&P\big((E_j^s\setminus\mathrm{cl}\,(B_r))\cup (J\cap B_r)\big)-\H^n(B_r\cap\partial J)
\\
&\ge&P\big(B^{(|E_j|-|E_j\cap B_r|+|J\cap B_r|)}\big)-\omega_n\,r^n
\\
&\ge&P(B^{(v_j)})-C(n)\,r^{n+1}\,\l_{j}^{-1}-\omega_n\,r^n
\end{eqnarray*}
which, with \eqref{bad one}, \eqref{bad one 1} and \eqref{bad one 2}, finally gives
$P(E_j;\Omega)-P(B^{(v_j)})> -\mathcal{R}(W)+\delta-C(n)\,r^{n+1}\,\l_{j}^{-1}$
for $j$ large, against \eqref{energy upper bound}.
\noindent {\bf (c)} {\it Asymptotic behavior of $\partial F$}: We prove that there are $\nu\in\SS^n$, $f\in C^\infty(\nu^\perp)$, $a,b\in\mathbb{R}$, $c\in\nu^\perp$, $R'>\inf\{\rho:W\subset\textbf{\textup{C}}_\rho^\nu\}$ and $C$ positive, with
\begin{eqnarray}
\label{main residue graphicality of F end}
&&\partial F \setminus \textbf{\textup{C}}^\nu_{R'}=\big\{x+f(x)\,\nu:x\in\nu^\perp\,,|x|>R'\big\}\,,
\\\nonumber
&&f(x)=a\,,\hspace{6.6cm} (n=1)
\\\label{asymptotics of F end}
&&\big|f(x)-\big(a+b\,|x|^{2-n}+(c\cdot x)\,|x|^{-n}\big)\big|\le C\,|x|^{-n}\,,\,\,\, (n\ge 2)\,,
\\\nonumber
&&
\max\big\{|x|^{n-1}\,|\nabla f(x)|,|x|^n\,|\nabla^2f(x)|\big\}\le C_0\,,\qquad\forall x\in\nu^\perp\,,|x|>R'\,.
\end{eqnarray}
To this end, by a standard argument exploiting the local perimeter minimality of $F$ in $\Omega$, for any $r_j\to\infty$ we have, up to extracting subsequences, $F/r_j\to J$ in $L^1_{\rm loc}(\mathbb{R}^{n+1})$, where $J$ is a perimeter minimizer in $\mathbb{R}^{n+1}\setminus\{0\}$, $0\in\partial J$ (thanks to property (a)), $J$ is a cone with vertex at $0$
(thanks to Theorem \ref{theorem 7.17AA exterior lambda} and, in particular to \eqref{conelimit}), and $P(J;B_1)\le\omega_n$ (by \eqref{growth bound for F}). {\bf If $n\ge 2$}, then $\partial J$ has vanishing distributional mean curvature in $\mathbb{R}^{n+1}$ (as points are removable singularities for the mean curvature operator when $n\ge 2$), thus $P(J;B_1)\ge\omega_n$ by upper semicontinuity of area densities, and, finally, by $P(J;B_1)=\omega_n$ and Allard's regularity theorem, $J$ is a half-space. {\bf If $n=1$}, then $\partial J$ is the union of two half-lines $\ell_1$ and $\ell_2$ meeting at $\{0\}$. If $\ell_1$ and $\ell_2$ are not opposite (i.e., if $J$ is not a half-space), then we can find a half-space $J^*$ such that $(J\cap J^*)\Delta J\subset\subset B\subset\subset \mathbb{R}^2\setminus\{0\}$ for some ball $B$, and $P(J\cap J^*;B)<P(J;B)$, thus violating the fact that $J$ is a perimeter minimizer in $\mathbb{R}^{n+1}\setminus\{0\}$.
If $n=1$ it is immediate from the above information that, for some $R'>0$, $F\setminus B_{R'}=J\setminus B_{R'}$; this proves \eqref{main residue graphicality of F end} and \eqref{asymptotics of F end} in the case $n=1$. To prove \eqref{main residue graphicality of F end} and \eqref{asymptotics of F end} when $n\ge 2$, we let $M_0$ and $\varepsilon_0$ be as in Theorem \ref{theorem mesoscale criterion}-(ii) with $(n,\Gamma=2\,n\,\omega_n,\sigma=1)$. Since $J$ is a half-space, by using Remark \ref{remark improved convergence} and $F/r_j\stackrel{{\rm loc}}{\to} J$ on the annulus $A_{1/2}^{2\,L}$, for some $L>\max\{M_0,64\}$ to be chosen later on depending also on $\varepsilon_0$, we find that
\begin{equation}\label{between one half and L}
(\partial F)\cap A^{4\,L\,r_j}_{r_j/2}
=\big\{x+r_j\,f_j\big(x/r_j\big)\,\nu:x\in\nu^\perp\big\}\cap A^{4\,L\,r_j}_{r_j/2}\,,\qquad \nu^\perp=\partial J\,,
\end{equation}
for $f_j\in C^1(\nu^\perp)$ with $\|f_j\|_{C^1(\nu^\perp)}\to 0$. By \eqref{between one half and L},
$V_j=\mathbf{var}\,\big((\partial F)\setminus B_{r_j},1\big)\in\mathcal{V}_n(0,r_j,\infty)$, with (for ${\rm o}(1)\to 0$ as $j\to\infty$)
\begin{eqnarray}\label{FF1}
&&r_j^{-n}\,\int x\cdot\nu^{\rm co}_{V_j}\,d{\rm bd}_{V_j}=-n\,\omega_n+{\rm o}(1)
\\\label{FF2}
&&r_j^{1-n}\|{\rm bd}_{V_j}\|(\partial B_{r_j})=n\,\omega_n+{\rm o}(1)\,,
\\\label{FF3}
&&\sup_{r\in(r_j,3\,L\,r_j)}\big|(r^n-r_j^n)^{-1}\,\|V_j\|(B_r\setminus B_{r_j})-\omega_n\big|={\rm o}(1)\,.
\end{eqnarray}
By our choice of $\Gamma$, by \eqref{growth bound for F} and \eqref{FF1} we see that, for $j$ large, we have
\begin{equation}
\label{blowdown hp check 1}
\|{\rm bd}_{V_j}\|(\partial B_{r_j})\le \Gamma\,r_j^{n-1}\,,\qquad \|V_j\|(B_\rho\setminus B_{r_j})\le\Gamma\rho^n\,,\,\,\forall\rho>r_j\,.
\end{equation}
Moreover, we claim that setting
\[
s_j=2\,L\,r_j
\]
(so that, in particular, $s_j>\max\{M_0,64\}\,r_j$), then
\begin{eqnarray}
\label{blowdown hp check 2}
|\delta_{V_j,r_j,0}(s_j/8)|\le\varepsilon_0\,,\qquad \inf_{r>s_j/8}\delta_{V_j,r_j,0}(r)\ge-\varepsilon_0\,,
\end{eqnarray}
provided $j$ and $L$ are taken large enough depending on $\varepsilon_0$. To check the first inequality in \eqref{blowdown hp check 2} we notice that, by \eqref{FF1} and \eqref{FF3},
\begin{eqnarray*}
\delta_{V_j,r_j,0}(s_j/8)\!\!&=&\!\!\omega_n-\frac{\|V_j\|(B_{s_j/8}\setminus B_{r_j})}{(s_j/8)^n}+\frac1{n\,(s_j/8)^n}\,\int x\cdot\nu^{\rm co}_{V_j}\,d\,{\rm bd}_{V_j}
\\
&=&\!\!\omega_n-\big(\omega_n+{\rm o}(1)\big)\,\frac{(s_j/8)^n-r_j^n}{(s_j/8)^n}-\frac{\omega_n\,r_j^n}{(s_j/8)^n}\,\big(1+{\rm o}(1)\big)
\\
&=&\!\!{\rm o}(1)\,(1+(r_j/s_j)^n)={\rm o}(1)\,,
\end{eqnarray*}
so that $|\delta_{V_j,r_j,0}(s_j/8)|\le\varepsilon_0$ as soon as $j$ is large with respect to $\varepsilon_0$. Similarly, if $r>s_j/8=(L\,r_j)/4$, then by \eqref{FF1}, \eqref{FF3}, \eqref{growth bound for F}, and $r_j/r\le 4/L$,
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\delta_{V_j,r_j,0}(r)=\omega_n-\frac{\|V_j\|(B_r\setminus B_{2\,r_j})}{r^n}-\frac{\|V_j\|(B_{2\,r_j}\setminus B_{r_j})}{r^n}
-\frac{\omega_n\,r_j^n}{r^n}\,\big(1+{\rm o}(1)\big)
\\
&&\ge\omega_n-\frac{\omega_n\,r^n-\mathcal{R}(W)}{r^n}-\big(\omega_n+{\rm o}(1)\big)\,\frac{(2\,r_j)^n-r_j^n}{r^n}-\frac{\omega_n\,r_j^n}{r^n}\,\big(1+{\rm o}(1)\big)
\\
&&\ge r^{-n}\, \mathcal{R}(W) -2\,(4/L)^n\,\big(\omega_n+{\rm o}(1)\big)-(4/L)^n\,{\rm o}(1)\ge-3\,(4/L)^n\,\omega_n\,,
\end{eqnarray*}
provided $j$ is large; hence the second inequality in \eqref{blowdown hp check 2} holds if $L$ is large in terms of $\varepsilon_0$. Having proved \eqref{blowdown hp check 2}, we now claim that, with $H=\partial J=\nu^\perp$,
\begin{eqnarray}
\label{blowdown hp check 3}
\frac1{s_j^n}\,\int_{A_{s_j/8}^{s_j/2}}\,\omega_H^2\,d\|V_j\|\le\varepsilon_0\,,\qquad \|V_j\|\big(A_{s_j/6}^{s_j/4}\big)\ge c(n)\,.
\end{eqnarray}
The second condition is immediate from \eqref{between one half and L}, which also implies that if $y\in({\rm spt}\,V_j)\cap (B_{s_j/2}\setminus B_{s_j/8})=(\partial F)\cap (B_{L\,r_j}\setminus B_{L\,r_j/4})$, then, taking \eqref{def of omega H} and $y=x+r_j\,f_j(x/r_j)\,\nu$ for some $x\in\nu^\perp\cap (B_{L\,r_j}\setminus B_{L\,r_j/4})$ into account,
\begin{eqnarray*}
|\mathbf{p}_{\nu^\perp}(y)|^{-1}\,|y\cdot\nu|\le |x|^{-1}\,r_j\,\|f_j\|_{C^0(\nu^\perp)}
\le (4\,L)^{-1}\,\|f_j\|_{C^0(\nu^\perp)}\,,
\end{eqnarray*}
so that, by the second inequality in \eqref{blowdown hp check 1} and by \eqref{def of omega H},
\[
s_j^{-n}\,\int_{A_{s_j/8}^{s_j/2}}\,\omega_H^2\,d\|V_j\|\le\arctan\big((4\,L)^{-1}\,\|f_j\|_{C^0(\nu^\perp)}\big)^2\,8^n\,\Gamma\,;
\]
in particular, the first inequality in \eqref{blowdown hp check 3} holds for $j$ large. By \eqref{blowdown hp check 1}, \eqref{blowdown hp check 2} and \eqref{blowdown hp check 3}, Theorem \ref{theorem mesoscale criterion}-(ii) can be applied to $(V,R,\Lambda,s)=(V_j,r_j,0,s_j)$ with $j$ large. As a consequence, passing from spherical graphs to cylindrical graphs with the aid of Lemma \ref{lemma D1}, we find that, for some large $j$,
\begin{equation}\label{between one half and L fine}
(\partial F)\setminus B_{s_j/16}
=\big\{x+f(x)\,\nu:x\in\nu^\perp\big\}\setminus B_{s_j/16}\,,
\end{equation}
where $f:\nu^\perp\to\mathbb{R}$ is a smooth function which solves the minimal surface equation on $\nu^\perp\setminus B_{s_j/16}$. Since $\partial F$ admits at least one sequential blowdown limit hyperplane (namely, $\nu^\perp=\partial J$), by a theorem of Simon \cite[Theorem 2]{SimonAIHP} we find that $\nabla f$ has a limit as $|x|\to\infty$; in particular, $|\nabla f|$ is bounded. Moreover, by \eqref{between one half and L fine} (or by the fact that $F$ is a local perimeter minimizer in $\Omega$), $\partial F$ is a stable minimal surface in $\mathbb{R}^{n+1}\setminus B_{s_j/16}$, which, thanks to \eqref{growth bound for F}, satisfies an area growth bound like \eqref{schoen 2}. We can thus apply Proposition \ref{prop schoen} to deduce the validity of \eqref{asymptotics of F end} when $n\ge 3$, and of $|f(x)-[a+b\,\log\,|x|+(c\cdot x)\,|x|^{-2}]|\le C\,|x|^{-2}$ for all $|x|>R'$ when $n=2$ (with $R'>s_j$). Recalling that $F$ is a local perimeter minimizer with free boundary in $\Omega$ (that is, $P(F;\Omega\cap B)\le P(F';\Omega\cap B)$ whenever $F\Delta F'\subset\subset B\subset\subset\mathbb{R}^3$), it must be that $b=0$, as can be seen by comparing $F$ with the set $F'$ obtained by changing $F$ inside $\textbf{\textup{C}}_r^\nu$ ($r\gg R'$) with the half-space $G_r$ bounded by the plane $\{x+t\,\nu:x\in\nu^\perp,t=a+b\,\log(r)+c\cdot x/r^2\}$ and such that $\H^2((F\Delta G_r)\cap\partial\textbf{\textup{C}}_r^\nu)\le C/r^2$ (we omit the details of this standard comparison argument). Having shown that $b=0$, the proof is complete.
\noindent {\bf (d)} {\it $F\cup W$ defines an element of $\mathcal F$}: With $R>R'$ as in \eqref{main residue graphicality of F end} and \eqref{asymptotics of F end}, $V_R=\mathbf{var}\,((\partial F)\cap(B_R\setminus W))$ is a stationary varifold in $\mathbb{R}^{n+1}\setminus K_R$ for
$K_R=W\cup\big\{x+f(x)\,\nu:x\in\nu^\perp\,,|x|=R\big\}$, and has bounded support. By the convex hull property \cite[Theorem 19.2]{SimonLN}, we deduce that, for every $R>R'$, ${\rm spt}\,V_R$ is contained in the convex hull of $K_R$. Taking into account that $f(x)\to a$ as $|x|\to\infty$, we conclude that $\Omega\cap\partial F$ is contained in the smallest slab $\mathbf{S}_{[\a,\b]}^\nu$ containing both $W$ and $\{x:x\cdot\nu=a\}$. Now set
$F'=F\cup W$. Clearly $F'$ is a set of locally finite perimeter in $\Omega$ (since $P(F';\Omega')=P(F;\Omega')$ for every $\Omega'\subset\subset\Omega$). Second, $\partial F'$ is contained in $\mathbf{S}_{[\a,\b]}^\nu$ (since $\partial F'\subset [(\partial F)\cap\Omega]\cup W$). Third, by \eqref{main residue graphicality of F end} and \eqref{asymptotics of F end},
\begin{eqnarray}
\label{coo1}
&&\big\{x+t\,\nu:x\in\nu^\perp\,,|x|>R'\,,t<\a\big\}\subset F'\,,
\\
\label{coo2}
&&\big\{x+t\,\nu:x\in\nu^\perp\,,|x|>R'\,,t>\b\big\}\subset \mathbb{R}^{n+1}\setminus F'\,,
\\
\label{coo3}
&&\big\{x+t\,\nu:x\in\nu^\perp\,,|x|<R'\,,t\in\mathbb{R}\setminus[\a,\b]\big\}\cap(\partial F')=\emptyset\,.
\end{eqnarray}
By combining \eqref{coo1} and \eqref{coo3} we see that $\{x+t\,\nu:x\in\nu^\perp\,,t<\a\}\subset F'$, and by combining \eqref{coo2} and \eqref{coo3} we see that $\{x+t\,\nu:x\in\nu^\perp\,,t>\b\}\subset \mathbb{R}^{n+1}\setminus F'$: in particular, $\mathbf{p}_{\nu^\perp}(\partial F')=\nu^\perp$, and thus $(F',\nu)\in\mathcal F$.
\noindent {\bf Step three:} We prove that
\begin{equation}\label{lower bound equation}
\varliminf_{v\to \infty} \psi_W(v) - P(B^{(v)}) \geq -\mathcal{R}(W)\,.
\end{equation}
For $v_j\to\infty$ achieving the liminf in \eqref{lower bound equation}, let $E_j\in{\rm Min}[\psi_W(v_j)]$ and let $F$ be a (sub-sequential) limit of $E_j$, so that properties (a), (b), (c) and (d) in step two hold for $F$. In particular, properties \eqref{main residue graphicality of F end} and \eqref{asymptotics of F end} from (c) are entirely analogous to properties \eqref{order of F0} and \eqref{order of F} exploited in step one: therefore, the family of half-spaces $\{G_r\}_{r>R'}$ defined by \eqref{def of Gr} is such that
\begin{eqnarray}
\label{b1}
\big((\partial F) \cup (\partial G_r) \cup (G_r \Delta F)\big) \cap \textbf{\textup{C}}_r^\nu \,\,\subset\,\, \textbf{\textup{C}}_{r,(\a+1,\b-1)}^\nu\,,
\\
\label{b2}
\H^n\big((F\Delta G_r)\cap\partial\textbf{\textup{C}}_r^\nu\big)\le r^{-1}\,C(n,W)\,,
\\
\label{b3}
\big|P\big(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)-\omega_n\,r^n\big|\le r^{-n}\,C(n,W)\,,
\end{eqnarray}
(compare with \eqref{containment in finite cylinder}, \eqref{small error on lateral boundary}, and \eqref{Gr perimeter} in step one). By \eqref{b3} we find
\begin{equation}
\label{coo4}
-{\rm res}_W(F',\nu)
=\lim_{r\to\infty}P(F;\textbf{\textup{C}}_r^\nu\setminus W)-P(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu)\,.
\end{equation}
In order to relate the residue of $(F',\nu)$ to $\psi_W(v_j)-P(B^{(v_j)})$ we consider the sets
$Z_j=(G_r\cap\textbf{\textup{C}}_{r,(\a,\b)}^\nu)\cup(E_j\setminus\textbf{\textup{C}}_{r,(\a,\b)}^\nu)$, which, by isoperimetry, satisfy
\begin{eqnarray}\label{thanks iso}
P(Z_j)\!\ge\! P(B^{(|E_j\setminus\textbf{\textup{C}}_{r,(\a,\b)}^\nu|)})\ge
P(B^{(v_j)})-C(n)\,r^n\,(\b-\a)\,\l_{j}^{-1}\,.
\end{eqnarray}
Since for a.e. $r>R'$ we have
\[
P(Z_j)=P(E_j;\mathbb{R}^{n+1}\!\setminus\!\textbf{\textup{C}}_{r,(\a,\b)}^\nu)+P(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu)+\H^n\big((E_j\Delta G_r)\cap\partial \textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)\,,
\]
we conclude that
\begin{eqnarray*}
\psi_W(v_j)-P(B^{(v_j)})\!\!\!\!&=&\!\!\!P(E_j;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\!\setminus\! W)\!+P(E_j;\mathbb{R}^{n+1}\!\setminus\!\textbf{\textup{C}}_{r,(\a,\b)}^\nu)\!-\!P(B^{(v_j)})
\\
&=&\!\!\!P(E_j;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\setminus W)+P(Z_j)-P(B^{(v_j)})
\\
&&\!\!\!-P(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu)-\H^n\big((E_j\Delta G_r)\cap\partial \textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)
\end{eqnarray*}
so that $E_j\to F$ in $L^1_{\rm loc}(\mathbb{R}^{n+1})$ and \eqref{thanks iso} give, for a.e. $r>R'$,
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\varliminf_{j\to\infty}\psi_W(v_j)-P(B^{(v_j)})\ge P(F;\textbf{\textup{C}}_{r,(\a,\b)}^\nu\!\setminus\! W)-P(G_r;\textbf{\textup{C}}_{r,(\a,\b)}^\nu)
\\
&&-\H^n\big((F\Delta G_r)\cap\partial \textbf{\textup{C}}_{r,(\a,\b)}^\nu\big)\ge P(F;\textbf{\textup{C}}_r^\nu\!\setminus\! W)-P(G_r;\textbf{\textup{C}}_r^\nu)-C(n,W)/r\,,
\end{eqnarray*}
thanks to \eqref{b2} and $(F\Delta G_r)\cap\partial\textbf{\textup{C}}_r^\nu=(F\Delta G_r)\cap\partial\textbf{\textup{C}}_{r,(\a,\b)}^\nu$. Letting $r\to\infty$, recalling \eqref{coo4}, and by $(F',\nu)\in\mathcal F$, we find $\varliminf_{j\to\infty}\psi_W(v_j)-P(B^{(v_j)})\ge-{\rm res}_W(F',\nu)\ge-\mathcal{R}(W)$. This completes the proof of \eqref{lower bound equation}, which in turn, combined with \eqref{energy upper bound}, gives \eqref{main asymptotic expansion}, and also shows that $L^1_{\rm loc}$-subsequential limits $F$ of $E_j\in{\rm Min}[\psi_W(v_j)]$ for $v_j\to\infty$ are such that, for some $\nu\in\SS^n$, $(F\cup W,\nu)\in\mathcal F$ and $F'=F\cup W\in{\rm Max}[\mathcal{R}(W)]$.
\noindent {\bf Step four:} Moving towards the proof of \eqref{f of Ev}, we prove the validity, uniformly among varifolds associated to maximizers of $\mathcal{R}(W)$, of estimates analogous to \eqref{blowdown hp check 1}, \eqref{blowdown hp check 2} and \eqref{blowdown hp check 3}.
For a constant $\Gamma>2\,n\,\omega_n$ to be determined later on (see \eqref{blowdown hp check 1 Ej part two}, \eqref{choose Gamma 1}, and \eqref{choose Gamma 2} below), depending on $n$ and $W$, and for $\sigma>0$, we let $M_0=M_0(n,2\,\Gamma,\sigma)$ and $\varepsilon_0=\varepsilon_0(n,2\,\Gamma,\sigma)$ be determined by Theorem \ref{theorem mesoscale criterion}. If $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, then by Theorem \ref{thm main of residue}-(ii) we can find $R_2=R_2(W)>0$ and $f\in C^\infty(\nu^\perp)$ such that
\begin{equation}
\label{basta}
(\partial F)\setminus\textbf{\textup{C}}_{R_2}^\nu=\big\{x+f(x)\,\nu:x\in\nu^\perp\,,|x|>R_2\big\}\,,
\end{equation}
and such that \eqref{asymptotics of F} holds with $\max\{|a|,|b|,|c|\}\le C(W)$ and $|\nabla f(x)|\le C_0/|x|^{n-1}$ for $|x|>R_2$. Thus $\|\nabla f\|_{C^0(\nu^\perp\setminus\textbf{\textup{D}}_r^\nu)}\to 0$ as $r\to\infty$, uniformly in $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$, and there is $R_3>\max\{2\,R_2,1\}$ (depending on $W$) such that, if
$V_F=\mathbf{var}\,((\partial F)\setminus B_{R_3},1)$, then $V_F\in\mathcal{V}_n(0,R_3,\infty)$, and
\begin{equation}
\label{blowdown hp check 1 four}
\|{\rm bd}_{V_F}\|(\partial B_{R_3})\le \Gamma\,R_3^{n-1}\,,\qquad \|V_F\|(B_\rho\setminus B_{R_3})\le\Gamma\,\rho^n\qquad\forall\rho>R_3\,,
\end{equation}
(compare with \eqref{blowdown hp check 1}). Then, arguing as in step two-(c), or more simply by exploiting \eqref{basta} and the decay estimates \eqref{asymptotics of F}, we see that there is $L>\max\{M_0,64\}$, depending on $n$, $W$ and $\sigma$ only, such that, setting
\begin{equation}
\label{def of sWs}
s_W(\sigma)=2\,L\,R_3
\end{equation}
we have for some $c(n)>0$ (compare with \eqref{blowdown hp check 2} and \eqref{blowdown hp check 3})
\begin{eqnarray}
\label{blowdown hp check 2 four}
|\delta_{V_F,R_3,0}(s_W(\sigma)/8)|\le\varepsilon_0/2\,,\qquad \inf_{r>s_W(\sigma)/8}\delta_{V_F,R_3,0}(r)\ge-\varepsilon_0/2\,,
\\
\label{blowdown hp check 3 four}
\frac1{s_W(\sigma)^n}\,\int_{A_{s_W(\sigma)/8}^{s_W(\sigma)/2}}\,\omega_{\nu^\perp}^2\,d\|V_F\|\le\frac{\varepsilon_0}2\,,\qquad \|V_F\|\big(A_{s_W(\sigma)/6}^{s_W(\sigma)/4}\big)\ge c(n)\,.
\end{eqnarray}
\noindent {\bf Step five:} Given $E_j\in{\rm Min}[\psi_W(v_j)]$ for $v_j\to\infty$, we prove the existence of $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ and $h_j\in C^\infty((\partial F)\setminus B_{R_2})$ such that
\begin{eqnarray}
\label{f of Ev jj}
&&(\partial E_j)\cap A^{R_1\,\l_{j}}_{4\,R_2}
=\Big\{y+h_j(y)\,\nu_F(y):y\in\partial F\Big\}\cap A^{R_1\,\l_{j}}_{4\,R_2}\,,
\\
\label{aspetta}
&&\lim_{j\to\infty}\|h_j\|_{C^1((\partial F)\cap A_{4\,R_2}^M)}=0\,,\qquad\forall M<\infty\,;
\end{eqnarray}
and that if $x_j$ satisfies $|E_j\Delta B^{(v_j)}(x_j)|=\inf_x|E_j\Delta B^{(v_j)}(x)|$, then
\begin{equation}
\label{aspetta ancora}
\lim_{j\to\infty}||x_j|^{-1}\,x_j-\nu|=0\,;
\end{equation}
finally, we prove statement (iii) (i.e., $(\partial E_j)\setminus B_{R_2}$ is diffeomorphic to an $n$-dimensional disk). By step three, there is $(F,\nu)\in{\rm Max}[\mathcal{R}(W)]$ such that, up to extracting subsequences, \eqref{convergence of Ej to F last proof} holds. By \eqref{convergence of Ej to F last proof} and \eqref{basta}, and with $s_W(\sigma)$ defined as in step four (see \eqref{def of sWs}) starting from $F$, we can apply Remark \ref{remark improved convergence} to find $f_j\in C^\infty(\nu^\perp)$ such that
\begin{equation}
\label{basta j}
(\partial E_j)\cap A^{s_W(\sigma)}_{2\,R_2}=\big\{x+f_j(x)\,\nu:x\in\nu^\perp\big\}\cap A^{s_W(\sigma)}_{2\,R_2}\,,
\end{equation}
for $j$ large enough (in terms of $\sigma$, $n$, $W$, and $F$), and such that $f_j\to f$ in $C^1(\textbf{\textup{D}}_{s_W(\sigma)}^\nu\setminus\textbf{\textup{D}}_{2\,R_2}^\nu)$.
With $R_3$ as in step four and with the goal of applying Theorem \ref{theorem mesoscale criterion} to the varifolds
$V_j=\mathbf{var}\,((\partial E_j)\setminus B_{R_3},1)$, we notice that $V_j\in\mathcal{V}_n(\Lambda_j,R_3,\infty)$, for some $\Lambda_j\le \Lambda_0\,\l_{j}^{-1}$ (thanks to \eqref{uniform lambda minimality}). In particular, by \eqref{def of sWs}, $s_W(\sigma)$ satisfies the ``mesoscale bounds'' (compare with \eqref{mesoscale bounds})
\begin{equation}
\label{mesoscale bounds end}
\varepsilon_0\,(4\,\Lambda_j)^{-1}>s_W(\sigma)>\max\{M_0,64\}\,R_3
\end{equation}
provided $j$ is large. Moreover, by $R_3>2\,R_2$ and $s_W(\sigma)/8>2\,R_2$, by \eqref{basta}, \eqref{basta j} and $f_j\to f$ in $C^1$, we exploit \eqref{blowdown hp check 1 four}, \eqref{blowdown hp check 2 four}, and \eqref{blowdown hp check 3 four} to deduce
\begin{eqnarray}
\label{blowdown hp check 1 Ej part one}
\|{\rm bd}_{V_j}\|(\partial B_{R_3})&\le&(2\,\Gamma)\,R_3^{n-1}\,,
\\
\label{blowdown hp check 2 Ej part one}
|\delta_{V_j,R_3,0}(s_W(\sigma)/8)|&\le&(2/3)\,\varepsilon_0\,,
\\
\label{blowdown hp check 3 Ej part one}
\frac1{s_W(\sigma)^n}\,\int_{A_{s_W(\sigma)/8}^{s_W(\sigma)/2}}\,\omega_{\nu^\perp}^2\,d\|V_j\|&\le&\varepsilon_0\,,
\\
\label{blowdown hp check 3 Ej part two}
\|V_j\|\big(A_{s_W(\sigma)/6}^{s_W(\sigma)/4}\big)&\ge& c(n)/2\,,\qquad\qquad\mbox{for $j$ large}\,.
\end{eqnarray}
We claim that, up to increasing $\Gamma$ (depending on $n$ and $W$), we can entail
\begin{equation}
\label{blowdown hp check 1 Ej part two}
\|V_j\|(B_\rho\setminus B_{R_3}) \le\Gamma\,\rho^n\,,\qquad\forall \rho>R_3\,.
\end{equation}
Indeed, by Theorem \ref{thm existence and uniform min}-(i), for some positive $\Lambda_0$ and $s_0$ depending on $W$ only, $E_j$ is a $(\Lambda_0\,\l_{j}^{-1},s_0\,\l_{j})$-perimeter minimizer with free boundary in $\Omega$. Comparing $E_j$ to $E_j\setminus B_r$ by \eqref{uniform lambda minimality}, for every $r<s_0\,\l_{j}$,
\begin{equation}
\label{choose Gamma 1}
P(E_j;\Omega\cap B_r)\le C(n)\,\big(r^n+\Lambda_0\,\l_{j}^{-1}\,r^{n+1}\big)\le C(n,W)\,r^n\,;
\end{equation}
since, at the same time, if $r>s_0\,\l_{j}$, then
\begin{equation}
\label{choose Gamma 2}
P(E_j;\Omega\cap B_r)\le P(E_j;\Omega)=\psi_W(v_j)\le P(B^{(v_j)})\le C(n)\,s_0^{-n}\,r^n\,,
\end{equation}
by combining \eqref{choose Gamma 1} and \eqref{choose Gamma 2} we find \eqref{blowdown hp check 1 Ej part two}. With \eqref{blowdown hp check 1 Ej part one} and \eqref{blowdown hp check 1 Ej part two} at hand, we can also show that
\begin{equation}
\label{blowdown hp check 2 Ej part two}
|\delta_{V_j,R_3,\Lambda_j}(s_W(\sigma)/8)|\le\varepsilon_0\,.
\end{equation}
Indeed, by $s_W(\sigma)=2\,L\,R_3$ and by $\Lambda_j\le \Lambda_0\,\l_{j}^{-1}$,
\begin{eqnarray*}
&&\big|\delta_{V_j,R_3,\Lambda_j}(s_W(\sigma)/8)-\delta_{V_j,R_3,0}(s_W(\sigma)/8)\big|
\\
&&\le(\Lambda_0/\l_{j})\,\int_{R_3}^{s_W(\sigma)/8}\!\!\!\!\rho^{-n}\,\|V_j\|(B_\rho\setminus B_{R_3})\,d\rho
\le\frac{\Lambda_0\,R_3\,\Gamma}{\l_{j}}\,\Big(\frac{L}{4}-1\Big)\le\frac{\varepsilon_0}3\,,
\end{eqnarray*}
provided $j$ is large enough. To complete checking that Theorem \ref{theorem mesoscale criterion} can be applied to every $V_j$ with $j$ large enough, we now consider the quantities
\[
R_{*j}=\sup\big\{\rho>s_W(\sigma)/8:\delta_{V_j,R_3,\Lambda_j}(\rho)\ge-\varepsilon_0\big\}\,,
\]
and prove that, for a constant $\tau_0$ depending on $n$ and $W$ only, we have
\begin{equation}
\label{Rstar true}
R_{*j}\ge \tau_0\,\l_{j}\,;
\end{equation}
in particular, provided $j$ is large enough, \eqref{Rstar true} implies immediately
\begin{equation}
\label{Rstar check}
R_{*j}\ge 4\,s_W(\sigma)\,,
\end{equation}
which was the last assumption in Theorem \ref{theorem mesoscale criterion} that needed to be checked. To prove \eqref{Rstar true}, we pick $\tau_0$ such that
\begin{equation}
\label{geometry of B1}
\big|\tau_0^{-n}\,\H^n\big(B_{\tau_0}(z)\cap\partial B^{(1)}\big)-\omega_n\big|\le \varepsilon_0/2\,,\qquad\forall z\in\partial B^{(1)}\,.
\end{equation}
(Of course this condition only requires $\tau_0$ to depend on $n$.) By definition of $x_j$ and by \eqref{limsupmax goes to zero take 2}, and up to extracting a subsequence, we have $x_j\to z_0$ for some $z_0\in\partial B^{(1)}$. In particular, setting $\rho_j=\tau_0\,\l_{j}$, we find
\begin{eqnarray*}
\rho_j^{-n}\,\|V_j\|(B_{\rho_j}\setminus B_{R_3})\!\!\!&=&\!\!\!\tau_0^{-n}\,P\big((E_j-x_j)/\l_{j}\,;\,B_{\tau_0}(-x_j)\setminus B_{R_3/\rho_j}(-x_j)\big)
\\
&&\to\tau_0^{-n}\,\H^n\big(B_{\tau_0}(-z_0)\cap\partial B^{(1)}\big)\le \omega_n+(\varepsilon_0/2)\,,
\end{eqnarray*}
thus proving that, for $j$ large enough,
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\delta_{V_j,R_3,\Lambda_j}(\rho_j)\!\!\ge\!\!-\frac{\varepsilon_0}2+\frac{1}{n\,\rho_j^n}\,\int\,x\cdot\nu^{\rm co}_{V_j}\,d\,{\rm bd}_{V_j}-\Lambda_j\,\int_{R_3}^{\rho_j}\frac{\|V_j\|(B_{\rho}\setminus B_{R_3})}{\rho^n}\,d\rho
\\
&&\ge\!\!-\frac{\varepsilon_0}2-\frac{2\,\Gamma\,R_3^n}{n\,\tau_0^n\,\l_{j}}
-\Lambda_0\,\Gamma\,\frac{(\rho_j-R_3)}{\l_{j}}
\ge-\frac{\varepsilon_0}2-\frac{C_*(n,W)}{\tau_0^n\,\l_{j}}-C_{**}(n,W)\,\tau_0\,,
\end{eqnarray*}
where we have used \eqref{blowdown hp check 1 Ej part one}, ${\rm spt}\,{\rm bd}_{V_j}\subset\partial B_{R_3}$, and \eqref{blowdown hp check 1 Ej part two}. Therefore, provided we pick $\tau_0$ depending on $n$ and $W$ so that $C_{**}\,\tau_0\le\varepsilon_0/4$, and then we pick $j$ large enough to entail $(C_*(n,W)/\tau_0^n)\l_{j}^{-1}\le\varepsilon_0/4$, we conclude that if $r\in(R_3,\rho_j]$, then
$\delta_{V_j,R_3,\Lambda_j}(r)\ge \delta_{V_j,R_3,\Lambda_j}(\rho_j)\ge-\varepsilon_0$,
where in the first inequality we have used Theorem \ref{theorem 7.17AA exterior lambda}-(i) and the fact that $V_j\in\mathcal{V}_n(\Lambda_j,R_3,\infty)$. In summary, by \eqref{blowdown hp check 1 Ej part one} and \eqref{blowdown hp check 1 Ej part two} (which give \eqref{Gamma bounds}), by \eqref{mesoscale bounds end} (which gives \eqref{mesoscale bounds} with $s=s_W(\sigma)/8$), and by \eqref{blowdown hp check 2 Ej part two}, \eqref{Rstar check}, \eqref{blowdown hp check 3 Ej part one} and \eqref{blowdown hp check 3 Ej part two} (which imply, respectively, \eqref{mesoscale delta small s8}, \eqref{mesoscale Rstar larger s4}, \eqref{mesoscale small angular flatness}, and \eqref{mesoscale positive area in annulus}), we see that Theorem \ref{theorem mesoscale criterion}-(i) can be applied with $V=V_j$ and $s=s_W(\sigma)/8$ provided $j$ is large in terms of $\sigma$, $n$, $W$ and the limit $F$ of the $E_j$'s. Thus, setting
\begin{equation}\label{def of Starj}
S_{*j}=\min\big\{R_{*j},\varepsilon_0/\Lambda_j\big\}\,,
\end{equation}
and noticing that by \eqref{Rstar true} and $\Lambda_j\le \Lambda_0\,\l_{j}^{-1}$ we have
\begin{equation}
\label{Starj big}
S_{*j}\ge 16\,R_1\,\l_{j}\,,
\end{equation}
(for $R_1$ depending on $n$ and $W$ only) we conclude that, for $j$ large, there are $K_j\in\H$ and $u_j\in\mathcal{X}_\sigma(\S_{K_j},s_W(\sigma)/32,R_1\,\l_{j})$, such that
\begin{equation}
\label{par Ej}
(\partial E_j)\cap A_{s_W(\sigma)/32}^{R_1\,\l_{j}}=\S_{K_j}\big(u_j,s_W(\sigma)/32,R_1\,\l_{j}\big)\,.
\end{equation}
Similarly, by \eqref{blowdown hp check 1 four}, \eqref{blowdown hp check 2 four}, and \eqref{blowdown hp check 3 four}, thanks to Theorem \ref{theorem mesoscale criterion}-(ii) we have
\begin{equation}
\label{par F}
(\partial F)\cap \big(\mathbb{R}^{n+1}\setminus B_{s_W(\sigma)/32}\big)=\S_{\nu^\perp}\big(u,s_W(\sigma)/32,\infty\big)\,,
\end{equation}
for $u\in\mathcal{X}_{\sigma'}(\S_{\nu^\perp},s_W(\sigma)/32,\infty)$ for every $\sigma'>\sigma$. Now, by $E_j\to F$ in $L^1_{\rm loc}(\mathbb{R}^{n+1})$, \eqref{par Ej} and \eqref{par F} can hold only if $|\nu_{K_j}-\nu|\le \zeta(\sigma)$ for a function $\zeta$, depending on $n$ and $W$ only, such that $\zeta(\sigma)\to 0$ as $\sigma\to 0^+$. In particular (denoting by $\sigma_0^*$, $\varepsilon_0^*$ and $C_0^*$ the dimension dependent constants originally introduced in Lemma \ref{lemma step one} as $\sigma_0$, $\varepsilon_0$ and $C_0$) we can find $\sigma_1=\sigma_1(n,W)\le\sigma_0^*$ such that if $\sigma<\sigma_1$, then $\varepsilon_0^*\ge \zeta(\sigma)\ge |\nu_{K_j}-\nu|$, and correspondingly, Lemma \ref{lemma step one}-(i) can be used to infer the existence of
$u_j^*\in\mathcal{X}_{C_0^*\,(\sigma+\zeta(\sigma))}(\S_{\nu^\perp},s_W(\sigma)/32,2\,R_1\,\l_{j})$ such that, for $j$ large,
\begin{eqnarray}
\nonumber
\S_{\nu^\perp}\big(u_j^*,s_W(\sigma)/32,2\,R_1\,\l_{j}\big)&=&\S_{K_j}\big(u_j,s_W(\sigma)/32,2\,R_1\,\l_{j}\big)
\\
&=&(\partial E_j)\cap A_{s_W(\sigma)/32}^{2\,R_1\,\l_{j}}\,.
\label{star 2 jj}
\end{eqnarray}
By \eqref{basta j} and Lemma \ref{lemma D1}, \eqref{star 2 jj} implies cylindrical graphicality: more precisely, provided $\sigma_1$ is small enough, there are $g_j\in C^1(\nu^\perp)$ such that
\begin{eqnarray}
\label{what about gj 1}
&&\sup_{x\in\nu^\perp}\{|g_j(x)|\,|x|^{-1},|\nabla g_j(x)|\}\le C\,\big(\sigma+\zeta(\sigma)\big)\,,
\\
\label{what about gj 2}
&&(\partial E_j)\cap A^{R_1\,\l_{j}}_{2\,R_2}
=\big\{x+g_j(x)\,\nu:x\in\nu^\perp\big\}\cap A^{R_1\,\l_{j}}_{2\,R_2}\,.
\end{eqnarray}
At the same time, by \eqref{basta}, \eqref{asymptotics of F}, and up to further increasing $R_2$ and decreasing $\sigma_1$, we can exploit Lemma \ref{lemma D2} in the appendix to find $h_j\in C^1(G(f))$, $G(f)=\{x+f(x)\,\nu:x\in\nu^\perp\}$, such that
\[
\big\{x+g_j(x)\,\nu:x\in\nu^\perp\big\}\setminus B_{4\,R_2}
=\big\{z+h_j(z)\,\nu_F(z):z\in G(f)\big\}\setminus B_{4\,R_2}
\]
which, combined with \eqref{basta} and \eqref{what about gj 2} shows that
\begin{equation}\nonumber
(\partial E_j)\cap A^{R_1\,\l_{j}}_{4\,R_2}
=\big\{z+h_j(z)\,\nu_F(z):z\in\partial F\big\}\cap A^{R_1\,\l_{j}}_{4\,R_2}
\end{equation}
that is \eqref{f of Ev jj}. By $E_j\to F$ in $L^1_{\rm loc}(\mathbb{R}^{n+1})$, we find $h_j\to 0$ in $L^1((\partial F)\cap A_{4\,R_2}^M)$ for every $M<\infty$, so that, by elliptic regularity, \eqref{aspetta} follows. We now recall that, by Theorem \ref{thm existence and uniform min}-(ii), $(\partial E_j)\setminus B_{R_0(v_j)\,\l_{j}}$ coincides with
\begin{eqnarray}\nonumber
&&\!\!\!\!\!\!\!\!\big\{y+ \l_{j}w_j\big((y-x_j)/\l_{j}\big)\,\nu_{B^{(v_j)}(x_j)}(y):y\in\partial B^{(v_j)}(x_j)\big\}\setminus B_{R_0(v_j)\,\l_{j}}
\\\label{x and u of Ev take 2 jjj}
&&\!\!\!\!\!\!\!\!\mbox{for $\|w_j\|_{C^1(\partial B^{(1)})}\to 0$ and $R_0(v_j)\to 0$}\,.
\end{eqnarray}
The overlap of \eqref{what about gj 2} and \eqref{x and u of Ev take 2 jjj} (i.e., the fact that $R_0(v_j)<R_1$ if $j$ is large enough) implies statement (iii). Finally, combining \eqref{what about gj 1} and \eqref{what about gj 2} with \eqref{x and u of Ev take 2 jjj} and $\|w_j\|_{C^1(\partial B^{(1)})}\to 0$, we deduce the validity of \eqref{aspetta ancora}. More precisely, rescaling by $\l_j$ in \eqref{what about gj 1} and \eqref{what about gj 2} and setting $E_j^*=E_j/\l_j$, we find $g_j^*\in C^1(\nu^\perp)$ such that, for every $j\ge j_0(\sigma)$ and $\sigma<\sigma_1$,
\begin{eqnarray}
\label{what about gj 1 star}
&&\sup_{x\in\nu^\perp}\{|g_j^*(x)||x|^{-1},|\nabla g_j^*(x)|\}\le C\,\big(\sigma+\zeta(\sigma)\big)\,,
\\
\label{what about gj 2 star}
&&(\partial E_j^*)\cap A^{R_1}_{2\,R_2/\l_j}=\big\{x+g_j^*(x)\,\nu:x\in\nu^\perp\big\}\cap A^{R_1}_{2\,R_2/\l_j}\,,
\end{eqnarray}
while rescaling by $\l_j$ in \eqref{x and u of Ev take 2 jjj} and setting $z_j=x_j/\l_j$ we find
\begin{eqnarray}
\label{x and u of Ev take 2 jjjjj}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!(\partial E_j^*)\setminus B_{R_0(v_j)}
\!\!=\!\big\{z_j+z+w_j(z)\,\nu_{B^{(1)}}(z):z\in\partial B^{(1)}\big\}\!\setminus \!B_{R_0(v_j)}
\end{eqnarray}
where $||z_j|-\omega_{n+1}^{1/(n+1)}|\to 0$ thanks to \eqref{limsupmax goes to zero take 2}. Up to subsequences, $z_j\to z_0$, where $|z_0|=\omega_{n+1}^{1/(n+1)}$. If $z_0\ne|z_0|\,\nu$, then, picking $\sigma$ small enough in terms of $|\nu-(z_0/|z_0|)|>0$ and $j$ large enough, we could exploit \eqref{what about gj 1 star} to obtain a contradiction with $\|w_j\|_{C^1(\partial B^{(1)})}\to 0$.
\noindent {\bf Conclusion}: Theorem \ref{thm existence and uniform min} implies Theorem \ref{thm main psi}-(i), and \eqref{main asymptotic expansion} was proved in step three. Should Theorem \ref{thm main psi}-(ii), (iii), or (iv) fail, then we could find a sequence $\{(E_j,v_j)\}_j$ contradicting the conclusions of either step five or Theorem \ref{thm existence and uniform min}. We have thus completed the proof of Theorem \ref{thm main psi}.
\end{proof}
\subsection{Traditional PINNs}
\label{sec:pinn}
A \gls{pinn} architecture is composed of a densely connected \gls{ann} that minimizes a loss comprising the residual of \cref{eqn:E-HFM} evaluated at the collocation points, together with data terms and terms enforcing the initial and boundary conditions~\cite{Raissi_JCP_2019}.
The output of the network is the state parameter, $\bm{\stateparnull}$,
\begin{equation} \label{eqn:NN}
\bm{\stateparnull} =
\nntheta{\ensuremath{ \bm{u} }} =
\activationi{\ensuremath{m}}{
\weight{\ensuremath{m}}
\activationi{\ensuremath{m}-1}{
\weight{\ensuremath{m}-1}
\cdots
\activationi{1}{
\weight{1} \ensuremath{ \bm{u} }
+\biasi{1}
}
\cdots+\biasi{\ensuremath{m}-1}
}
+\biasi{\ensuremath{m}}
} ,
\end{equation}
where the input vector is the concatenation of the spatial and temporal location, i.e., $\ensuremath{ \bm{u} } = \transpose{ \left[ \bm{x}_i, t_i \right] } \in \realsetO{\ensuremath{ {d} }+1}$, where $\ensuremath{ {d} }$ is the dimension of physical space, $\activationi{i}{.}$ is the activation function at the $i^{\textit{th}}$ layer, $\weight{1} \in \realset{\ensuremath{w}}{\left(\ensuremath{ {d} }+1\right)}$, $\weight{i} \in \realset{\ensuremath{w}}{\ensuremath{w}}, \forall i\in \left\{2,\cdots,\ensuremath{m}-1 \right\}$, and $\weight{\ensuremath{m}} \in \realset{\ensuremath{ {d} }}{\ensuremath{w}}$ are the weights, and $\biasi{i} \in \realsetO{\ensuremath{w}}, \forall i\in \left\{1,\cdots,\ensuremath{m}-1 \right\}$, and $\biasi{\ensuremath{m}} \in \realsetO{\ensuremath{ {d} }}$ are the biases.
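As a concrete illustration, the forward pass of \cref{eqn:NN} can be sketched in a few lines of NumPy. The layer sizes, the choice of $\tanh$ as the hidden activation, and taking the output layer linear are assumptions of this sketch (a common convention for \gls{pinn} outputs), not prescriptions of the method.

```python
import numpy as np

def nn_forward(u, weights, biases, sigma=np.tanh):
    """Evaluate q = N_theta(u): m affine layers with activations.

    Hidden layers apply sigma_i(W_i a + b_i); the last layer is kept
    linear here (an assumption of this sketch).
    """
    a = np.asarray(u, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigma(W @ a + b)              # a <- sigma_i(W_i a + b_i)
    return weights[-1] @ a + biases[-1]   # output layer: W_m a + b_m

# Toy sizes: d = 1, so the input (x, t) lies in R^2; width w = 8; m = 3 layers.
rng = np.random.default_rng(0)
d, w = 1, 8
weights = [rng.standard_normal((w, d + 1)),   # W_1 in R^{w x (d+1)}
           rng.standard_normal((w, w)),       # W_i in R^{w x w}
           rng.standard_normal((d, w))]       # W_m in R^{d x w}
biases = [rng.standard_normal(w), rng.standard_normal(w), rng.standard_normal(d)]
q = nn_forward(np.array([0.5, 0.1]), weights, biases)  # state parameter in R^d
```

In practice the weights and biases would be trainable arrays in an automatic-differentiation framework rather than fixed random draws.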
The weights and biases are learned by minimizing the so-called physics-informed loss, which penalizes the residual equation together with the appropriate boundary and initial conditions, i.e.,
\begin{equation} \label{eqn:cost}
\ensuremath{\mathcal{\bm{L}}} = \ensuremath{ \loss_{r} } + \ensuremath{ \loss_{bc} } + \ensuremath{ \loss_{ic} },
\end{equation}
where
\begin{subequations}\label{eqn:costdefine}
\begin{align}
\ensuremath{ \loss_{r} } &= \ensuremath{ \lambda_r } \fracsum{\ensuremath{ N_{r} }} \NormvecT{ \residual{\bm{x}_i,t_i} }, \label{eqn:lossr} \\
\ensuremath{ \loss_{bc} } &= \ensuremath{ \lambda_{bc} } \fracsum{\ensuremath{ N_{bc} }} \NormvecT{ \ensuremath{ \mathcal{B} } \left[\bm{\ensuremath{ \bm{\stateparc} }}\right] \left(\bm{x}_{bc}^i, t_{bc}^i\right) },
\label{eqn:lossbc} \\
\ensuremath{ \loss_{ic} } &= \ensuremath{ \lambda_{ic} } \fracsum{\ensuremath{ N_{ic} }} \NormvecT{\ensuremath{ \bm{\stateparc} }\left(\bm{x}, 0\right) -
\ensuremath{ \textsl{g} } \left(\bm{x}_{ic}^i\right) },
\label{eqn:lossic}
\end{align}
\end{subequations}
and $\scriptstyle \left\{t_{r}^i, \bm{x}_{r}^i\right\}_{i=1}^{\ensuremath{ N_{r} }}$
is the set of temporal and spatial coordinates of the collocation points where the residual is evaluated,
$\scriptstyle \left\{\bm{x}_{ic}^i\right\}_{i=1}^{\ensuremath{ N_{ic} }}$ is the set of coordinates where the initial condition, $\ensuremath{ \textsl{g} } \left(\bm{x}_{ic}^i\right)$ at $t=0$, is known, and
$\scriptstyle \left\{t_{bc}^i, \bm{x}_{bc}^i\right\}_{i=1}^{\ensuremath{ N_{bc} }}$ is the set of temporal and spatial coordinates of the boundary points, where $ \ensuremath{ \mathcal{B} } \left[\bm{\ensuremath{ \bm{\stateparc} }}\right] \left(\bm{x}_{bc}^i, t_{bc}^i\right)$ is evaluated.
The hyperparameters $\ensuremath{ \lambda_r }$, $\ensuremath{ \lambda_{bc} }$, and $\ensuremath{ \lambda_{ic} }$ are scalars tuned to enhance the convergence.
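The weighted combination of the three terms in \cref{eqn:cost,eqn:costdefine} can be sketched as follows (illustrative only; the residual, boundary, and initial errors are supplied here as analytic callables, evaluated on the exact solution $w(x,t)=\sin(x-ct)$ of a pure convection problem so that every term vanishes):

```python
import numpy as np

def pinn_loss(res, bc_err, ic_err, pts_r, pts_bc, pts_ic,
              lam_r=1.0, lam_bc=1.0, lam_ic=1.0):
    """Physics-informed loss: weighted mean-squared residual,
    boundary, and initial-condition errors."""
    return (lam_r * np.mean(res(pts_r) ** 2)
            + lam_bc * np.mean(bc_err(pts_bc) ** 2)
            + lam_ic * np.mean(ic_err(pts_ic) ** 2))

# Exact solution w(x, t) = sin(x - c t) of w_t + c w_x = 0 with
# periodic boundaries on [0, 2*pi): all loss terms are (numerically) zero.
c = 2.0
w = lambda x, t: np.sin(x - c * t)
res = lambda p: (-c * np.cos(p[:, 0] - c * p[:, 1])        # w_t
                 + c * np.cos(p[:, 0] - c * p[:, 1]))      # + c w_x
bc_err = lambda p: w(0.0, p[:, 1]) - w(2 * np.pi, p[:, 1]) # periodicity gap
ic_err = lambda p: w(p[:, 0], 0.0) - np.sin(p[:, 0])       # w(x, 0) - g(x)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 2 * np.pi, size=(64, 2))            # (x, t) samples
loss = pinn_loss(res, bc_err, ic_err, pts, pts, pts)
```

In an actual \gls{pinn} the residual would be obtained by automatic differentiation of the network output rather than from closed-form derivatives.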
Augmenting the loss using more data points leads to faster convergence, especially in convection--dominated problems~\cite{Abreu_ICCS_2021}. However, we intentionally refrain from using any data points to evaluate the convergence of \glspl{pinn} as a solver (no--data regime).
To solve the convection--diffusion equation, \cref{eqn:E-HFM} is used as the residual term in~\cref{eqn:lossr}.
In this paper, without loss of generality, we limit the spatial domain to \gls{1d} space.
The periodic boundary condition is strictly enforced using a custom layer~\cite{Bihlo_JCP_2021}, and therefore $\ensuremath{ \lambda_{bc} }=0$.
The custom layer transforms the domain $x \in \left[0, 2 \pi \right]$ to polar coordinates, i.e.,
\begin{equation}\label{eqn:layer0}
\transformlayer{x} = \transpose{
\left[ \cos\left(x \right), \sin\left(x \right) \right]
},
\end{equation}
and the output of this layer is fed into the traditional \gls{nn} as described in~\cref{eqn:NN}, with an appropriate adjustment of the dimension of the weights of the first layer, i.e., $\weight{1} \in \realset{\ensuremath{w}}{\left(2\ensuremath{ {d} }+1\right)}$, increasing the number of network variables by only $\ensuremath{w}\times\ensuremath{ {d} }$.
Further discussion of strictly enforcing the boundary conditions can be found in~\cite{Dong_JCP_2021}.
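The transformation in \cref{eqn:layer0} can be sketched as below (a minimal version; passing the temporal coordinate through unchanged is our assumption about how $t$ enters the embedded input):

```python
import numpy as np

def periodic_embed(x, t):
    """Map (x, t), x in [0, 2*pi], to (cos x, sin x, t).  Any network fed
    this embedding is exactly 2*pi-periodic in x, so no boundary-condition
    loss term is needed."""
    return np.stack([np.cos(x), np.sin(x), np.asarray(t, dtype=float)],
                    axis=-1)

u0 = periodic_embed(0.0, 0.3)
u1 = periodic_embed(2 * np.pi, 0.3)   # same point on the circle as x = 0
```

Because $x=0$ and $x=2\pi$ map to the same embedded input, periodicity holds by construction rather than by penalization.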
\begin{figure}[!t]
\centering
\includegraphics[scale=1.3]{data/PINN}
\caption{The traditional \glspl{pinn} architecture with periodic boundary condition.}
\label{fig:archPINN}
\end{figure}
\subsection{Proposed Lagrangian PINNs}
\label{sec:lpinn}
In this section, we describe the changes to the architecture of the traditional \glspl{pinn} needed to conform to the Lagrangian formulation and satisfy \cref{eqn:L_HFM}.
We propose a parallel architecture comprised of two branches.
The first branch solves for the characteristic curves and minimizes \cref{eqn:L_HFM_con_lines}, i.e.,
\begin{equation} \label{eqn:LNNx}
\bm{x} =
\nnthetax{\ensuremath{ \bm{u} }} =
\activationi{\ensuremath{m}}{
\weight{\ensuremath{m}}
\activationi{\ensuremath{m}-1}{
\weight{\ensuremath{m}-1}
\cdots
\activationi{1}{
\weight{1} \ensuremath{ \bm{u} }
+\biasi{1}
}
\cdots+\biasi{\ensuremath{m}-1}
}
+\biasi{\ensuremath{m}}
},
\end{equation}
and the second branch solves for the state parameter on the characteristic curves, and minimizes \cref{eqn:L_HFM_con_state}, i.e.,
\begin{equation} \label{eqn:LNNu}
\bm{\stateparnull} =
\nnthetau{\ensuremath{ \bm{u} }} =
\activationi{\ensuremath{m}}{
\weight{\ensuremath{m}}
\activationi{\ensuremath{m}-1}{
\weight{\ensuremath{m}-1}
\cdots
\activationi{1}{
\weight{1} \ensuremath{ \bm{u} }
+\biasi{1}
}
\cdots+\biasi{\ensuremath{m}-1}
}
+\biasi{\ensuremath{m}}
},
\end{equation}
where all the parameters are defined similarly to the network in \cref{sec:pinn}.
The two branches can have different widths and depths.
The output of the network is the state parameter on the characteristic curves, minimizing the loss, i.e.,
\begin{equation} \label{eqn:cost_Lag}
\ensuremath{\mathcal{\bm{L}}} = \ensuremath{ \loss_{{r}_{x}} } + \ensuremath{ \loss_{{r}_{w}} } + \ensuremath{ \loss_{ic} },
\end{equation}
where $\ensuremath{ \loss_{{r}_{x}} }$ and $\ensuremath{ \loss_{{r}_{w}} }$ are the residuals associated with~\cref{eqn:L_HFM}, and $\ensuremath{ \loss_{ic} }$ is the loss associated with the initial conditions of both the state and the grid.
Finally, one can interpolate the states from the Lagrangian to the Eulerian frame of reference, $\ensuremath{\bm{\stateparnull}^{*}}$, if necessary.
The proposed architecture is depicted in \cref{fig:archLPINN}.
We recognize that the residual equations in \cref{eqn:L_HFM} can also be minimized in an architecture similar to that of the traditional \glspl{pinn}.
However, the proposed two--branch architecture leverages the inherent low--dimensionality of the characteristics~\cite{Mojgani_2017} to build a shallow and efficient network to solve \cref{eqn:L_HFM_con_lines}.
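A sketch of the two parallel branches follows (illustrative; the specific widths, depths, and \texttt{tanh} activations are our choices, meant only to show a shallow characteristics branch next to a wider state branch):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_branch(widths):
    """Random dense branch with layer sizes `widths` (input size first)."""
    Ws = [rng.standard_normal((m, n)) for n, m in zip(widths[:-1], widths[1:])]
    bs = [rng.standard_normal(m) for m in widths[1:]]
    return Ws, bs

def branch_forward(u, Ws, bs):
    a = u
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = np.tanh(W @ a + b)
    return Ws[-1] @ a + bs[-1]

# Shallow, narrow branch for the low-dimensional characteristic curves,
# and a wider, deeper branch for the state on those curves.
Wx, bx = make_branch([2, 4, 1])        # (x0, t) -> x
Ww, bw = make_branch([2, 16, 16, 1])   # (x0, t) -> w
u = np.array([0.5, 0.1])
x_pred = branch_forward(u, Wx, bx)
w_pred = branch_forward(u, Ww, bw)
```

Both branches share the same input $\ensuremath{ \bm{u} }$ and are trained jointly against the combined loss.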
\begin{figure}[!t]
\centering
\includegraphics[scale=1.3]{data/LPINN}
\caption{The proposed \glspl{lpinn} architecture with periodic boundary condition.}
\label{fig:archLPINN}
\end{figure}
\section{Introduction}
\label{sec:intro}
This file is documentation for the SIAM \LaTeX\ style, including how
to typeset the main document, the {\scshape Bib}\TeX\xspace\ file, and any supplementary
material. More information
about SIAM's editorial style can be found in the style manual, available
at \url{https://www.siam.org/journals/pdf/stylemanual.pdf}.
The major changes in the SIAM standard class are summarized in \cref{sec:changes}.
The SIAM \LaTeX\@ files can be found at
\url{https://www.siam.org/journals/auth-info.php}. The files that
are distributed for the standard macros are given below.
\begin{itemize}
\item \texttt{siamart190516.cls} (required): Main SIAM standard \LaTeX\ class file.
\item \texttt{siamplain.bst} (required): Bibliographic style file for
{\scshape Bib}\TeX\xspace.
\item \texttt{docsiamart.tex}: Produces this documentation.
\item \texttt{references.bib}: {\scshape Bib}\TeX\xspace\ database for this
documentation and examples.
\item \texttt{ex\_article.tex}: Template for article.
\item \texttt{ex\_supplement.tex}: Template for supplement.
\item \texttt{ex\_shared.tex}: Template for shared information for
article and supplement.
\end{itemize}
To use these files, put \texttt{siamart190516.cls} and
\texttt{siamplain.bst} in the directory with your
paper or, alternatively, into your \LaTeX\@ and {\scshape Bib}\TeX\xspace\@ paths,
respectively.
The outline of a SIAM \LaTeX\ article is shown in
\cref{ex:outline}. Templates are provided and discussed in more detail
in \cref{sec:template}.
\begin{example}[label={ex:outline},listing only,%
listing options={style=siamlatex,{morekeywords=[1]{maketitle},
morekeywords=[2]{siamart190516}},}]%
{Document outline}
\documentclass{siamart190516}
\begin{document}
\maketitle
\end{document}
\end{example}
\section{Class options}
\label{sec:class-options}
Class options can be included in the bracketed argument of the
command, separated by commas. The possible class options are:
\begin{itemize}
\item \code{review} --- Recommended for submitting your manuscript to
a SIAM journal. Adds line numbers as well as the statement ``This
manuscript is for review purposes only'' to the bottom of each page.
\item \code{final} --- Turns off the black boxes that help authors
identify lines that are too long. The final published version will
have this option on.
\item \code{supplement} --- Specifies that the file is a supplement
and not the main document, causing changes in the appearance of the
title and numbering; see \cref{sec:supplement} for details.
\item \code{hidelinks} --- Turns off colors on hyperlinks; see
\cref{sec:cr+hyp}. The hyperlinks still exist, but there is no color
to differentiate them.
The final published version will have this option on.
\end{itemize}
\section{Front matter}
\label{sec:front}
The title and author parts are formatted using the standard
\code{\title}, \code{\author}, and \code{\maketitle} commands as
described in Lamport \cite{La86}. The title and author should be
declared in the preamble. The title and author names are automatically
converted to uppercase in the document.
If there is more than one author, each additional author should be preceded by
the \code{\and} command.
The addresses and support acknowledgments are added via
\code{\thanks}. Each author's thanks should specify their address.
The support acknowledgment should be put in the title thanks,
unless specific support needs to be specified for individual authors,
in which case it should follow the author address.
The header for this file was produced by the code in \cref{ex:header}, including an
example of a shared footnote. Each thanks produces a footnote, so the
footnote of the second author is \#3.
The \code{\headers{title}{authors}} command, with
the title (possibly shortened to fit) and the authors' names, creates
the page headers, automatically converted to uppercase.
\examplefile[label={ex:header},listing only,%
listing options={style=siamlatex,%
deletetexcs={and,thanks,title,author},%
{moretexcs=[2]{and,thanks,title,author,maketitle,headers,email}}}
]{Title and authors in preamble}{tmp_\jobname_header.tex}
\newpage
Following the author and title is the abstract, key words listing, and AMS subject
classifications, designated using the \code{abstract}, \code{keywords}, and \code{AMS}
environments.
Authors are responsible for providing AMS numbers which can be found
on the AMS web site \cite{AMSMSC2010}. The abstract, keywords, and AMS subject classifications
for this document are specified in \cref{ex:abstract}.
\examplefile[label={ex:abstract},%
before upper={\preamble{\bs newcommand\{\bs BibTeX\}\{\{\bs scshape Bib\}\bs TeX\bs xspace\}}},
listing only,%
listing options={style=siamlatex,%
{morekeywords=[2]{abstract,keywords,AMS}}}
]{Abstract, keywords, and AMS classifications}{tmp_\jobname_abstract.tex}
A more complete example, including a PDF supplement, that uses the
included files \texttt{ex\_article.tex}, \texttt{ex\_supplement.tex},
and \texttt{ex\_shared.tex} is discussed in
\cref{sec:template}. The example files can be used as a starting point
for producing a document.
\section{Cross references and hyperlinks}
\label{sec:cr+hyp}
SIAM now supports cross references and hyperlinks via the
\texttt{cleveref} and \texttt{hyperref} packages, which are loaded by
the class file.
\subsection{Cleveref}
\label{sec:cleveref}
SIAM strongly recommends using the commands provided by
the \texttt{cleveref} package for cross referencing.
The package is automatically loaded and already customized
to adhere to SIAM's style guidelines.
To create a cross reference, use the command \code{\cref} (inside
sentence) or \code{\Cref} (beginning of a sentence) in place of the
object name and \code{\ref}.
The \texttt{cleveref}
package enhances \LaTeX's cross-referencing features, allowing
the format of cross references to be determined automatically
according to the ``type" of cross reference (equation, section, etc.)
and the context in which the cross reference is used.
So, the package
\emph{automatically} inserts the object name as well as the
appropriate hyperlink; see \cref{ex:cref}.
It may require two \LaTeX\@ compilations for the references to show up
correctly.
Additional examples are shown in the sections below
for equations, tables, figures, sections, etc.
\begin{example}[label=ex:cref,bicolor,listing options={style=siamlatex,%
{morekeywords=[2]{cref,ref}}}]{Advantage of using cleveref}
The normal way to get a cross reference with a hyperlink requires a
lot of typing: \hyperref[thm:mvt]{Theorem~\ref*{thm:mvt}}.
The \texttt{cleveref} package gets both the name and hyperlink
automatically using a single macro: \cref{thm:mvt}.
It also handles multiple references with the same macro, such as
\cref{thm:mvt,fig:pgfplots,fig:testfig}.
\end{example}
\subsection{Hyperref}
\label{sec:hyperef}
Hyperlinks are created with the \code{\href} and \code{\url} commands,
as shown in \cref{ex:href}.
SIAM has also defined the \code{\email} command, as shown in
\cref{ex:header}.
You can hide links (i.e., turn off link colors) with the \code{hidelinks} option.
\begin{example}[label={ex:href},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{href,url}}}]{Creating hyperlinks}
The \href{https://www.siam.org}{SIAM homepage} has general information.
Note that the colored text will \emph{not} appear in the print version
nor will the hyperlink be active, so the writer may want to specify
the location explicitly instead by using \url{https://www.siam.org}.
\end{example}
Note that homepage links via \code{\url} in the \code{\thanks} environment require
special formatting for the tilde (\string~) character. The formatting
is used in the template and shown in \cref{ex:shared}.
\section{Math and equations}
\label{sec:math}
Here we show some example equations, with numbering, and examples of
referencing the equations. SIAM now includes the package
\texttt{amsmath} by default, and we include some of its features as
well, although the reader should consult the package user manual for
further guidance \cite{amsmath,shortmath}.
Several of the examples are adapted
from Mittelbach and Goossens's guide to \LaTeX~\cite{MiGo04}.
\Cref{ex:textmath} is a straightforward example of inline mathematics
equations that does not use any special packages or features.
\begin{example}[label={ex:textmath},bicolor]{Inline math}
The following shows an example of math in text:
Let $S=[s_{ij}]$ ($1\leq i,j\leq n$) be a $(0,1,-1)$-matrix of order $n$.
\end{example}
In \cref{ex:bbm}, we show the recommended method for getting
blackboard fonts using the \texttt{amsfonts} package. This is not
loaded by default and must be included in the preamble.
\begin{example}[label={ex:bbm},bicolor,before upper={\preamble{\bs
usepackage\{amsfonts\}}},%
listing options={style=siamlatex,%
{morekeywords=[2]{mathbb}}}]{Blackboard math}
Blackboard bold characters, such as $\mathbb{C}$ and $\mathbb{R}$,
should be created with the \texttt{amsfonts} package, although this
is not included by default.
\end{example}
\Cref{ex:smallmatrix} shows the \code{smallmatrix} environment for an
inline matrix from the \texttt{amsmath} package, which is included by
default.
\begin{example}[label={ex:smallmatrix},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{smallmatrix}}}]{Inline matrix}
Matrices of no more than two rows appearing in text can be created
as shown in the next example:
$B = \bigl[ \begin{smallmatrix} B_{11} & B_{12} \\
B_{21} & B_{22} \end{smallmatrix} \bigr]$.
\end{example}
Bigger matrices can be rendered with environments from
the \texttt{amsmath} package, such as \code{bmatrix} and
\code{pmatrix} used in
\cref{ex:matrices}.
\begin{example}[label={ex:matrices},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{bmatrix,pmatrix}}}]{Creating matrices}
Display matrices can be rendered using environments from \texttt{amsmath}:
\begin{equation}\label{eq:matrices}
S=\begin{bmatrix}1&0\\0&0\end{bmatrix}
\quad\text{and}\quad
C=\begin{pmatrix}1&1&0\\1&1&0\\0&0&0\end{pmatrix}.
\end{equation}
\Cref{eq:matrices} shows some example matrices.
\end{example}
\newpage
\Cref{ex:dmo} shows how to use the \code{\DeclareMathOperator} command
from the \texttt{amsopn} package to declare the \code{\Range} macro.
(This example also uses the \texttt{braket} package for the
\code{\set} macro, but this is not necessarily recommended by SIAM.)
\begin{example}[label={ex:dmo},%
before upper={\preamble{\bs usepackage\{braket,amsfonts,amsopn\}}\\
\noindent\preamble{\bs DeclareMathOperator\{\bs Range\}\{Range\}}},%
bicolor,%
listing options={style=siamlatex,%
{moretexcs=[2]{Range}}}
]{Declaring math operators}
An example of a math operator:
\begin{equation}\label{eq:range}
\Range(A) = \set{ y \in \mathbb{R}^n | y = Ax }.
\end{equation}
\end{example}
\Cref{ex:foo} shows how to use the \code{align} environment from
\texttt{amsmath} to easily align multiple equations.
\begin{example}[label={ex:foo},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{align}}}]{Aligned equations}
\Cref{eq:a,eq:b,eq:c} show three aligned equations.
\begin{align}
f &= g, \label{eq:a} \\
f' &= g', \quad\text{and} \label{eq:b} \\
\mathcal{L}f &= \mathcal{L}g \label{eq:c}.
\end{align}
\end{example}
Another way to number a set of equations is the
\code{subequations} environment from \texttt{amsmath}, as shown in \cref{ex:aligned}.
\begin{example}[label={ex:aligned},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{subequations}}}]{Subequations}
We calculate the Fr\'{e}chet derivative of $F$ as follows:
\begin{subequations}
\begin{align}
F'(U,V)(H,K)
&= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} -
P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \label{eq:aa} \\
&= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle
\nonumber \\
&= \langle R(U,V)V\Sigma^{T},H\rangle +
\langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \label{eq:bb}
\end{align}
\end{subequations}
\Cref{eq:aa} is the first line, and \cref{eq:bb} is the last line.
\end{example}
~
For an equation split over multiple lines, \cref{ex:ml} shows the
usage of the \code{multline} environment provided by \texttt{amsmath}.
~
\begin{example}[label={ex:ml},bicolor,%
listing options={style=siamlatex,%
{morekeywords=[2]{multline}}}]{Equation split across lines}
We claim that the projection $g(U,V)$ is given by the pair of matrices:
\begin{multline} \label{eq:ml}
g(U,V) = \biggl( \frac{R(U,V)V\Sigma^{T}U^{T}
- U\Sigma V^{T}R(U,V)^{T}}{2}U,\\
\frac{R(U,V)^{T}U\Sigma V^{T}-V \Sigma^{T}U^{T}R(U,V)}{2}V \biggr).
\end{multline}
\end{example}
\section{Theorem-like environments}
\label{sec:thm}
SIAM loads the \texttt{ntheorem} package and uses it to define the following
theorem-like environments:
\code{theorem},
\code{lemma},
\code{corollary},
\code{definition}, and
\code{proposition}.
SIAM also defines a \code{proof} environment that automatically
inserts the symbol ``$\,\proofbox\,$'' at the end of any proof, even if it ends in an
equation environment. \emph{Note that the document may need to be
compiled twice for the mark to appear.}
Some of the calculus examples were adapted from \cite{CalcI}.
\Cref{ex:theorem} shows usage of the \code{theorem} environment.
An optional argument can be used to name the theorem.
\Cref{ex:cor} illustrates a corollary, without a name, and the
proof environment.
~
\begin{example}[label=ex:theorem,bicolor,parbox=false,%
listing options={style=siamlatex,%
{morekeywords=[2]{theorem}}}]{Theorem}
\begin{theorem}[Mean Value Theorem]\label{thm:mvt}
Suppose $f$ is a function that is continuous on the closed interval
$[a,b]$ and differentiable on the open interval $(a,b)$.
Then there exists a number $c$ such that $a < c < b$ and
\begin{displaymath}
f'(c) = \frac{f(b)-f(a)}{b-a}.
\end{displaymath}
In other words, $f(b)-f(a) = f'(c)(b-a)$.
\end{theorem}
\end{example}
\begin{example}[label=ex:cor,bicolor,parbox=false,%
listing options={style=siamlatex,%
{morekeywords=[2]{corollary,proof}}}]%
{Corollary and proof}
\begin{corollary}
Let $f(x)$ be continuous and differentiable everywhere. If $f(x)$
has at least two roots, then $f'(x)$ must have at least one root.
\end{corollary}
\begin{proof}
Let $a$ and $b$ be two distinct roots of $f$.
By \cref{thm:mvt}, there exists a number $c$ such that
\begin{displaymath}
f'(c) = \frac{f(b)-f(a)}{b-a} = \frac{0-0}{b-a} = 0.
\end{displaymath}
\end{proof}
\end{example}
SIAM also defines commands to create your own theorem- and
remark-like environments:
\begin{itemize}
\item \code{newsiamthm} --- Small caps header, italicized body.
\item \code{newsiamremark} --- Italics header, roman body.
\end{itemize}
Each command takes two arguments. The first is the environment name,
and the second is the name to show in the document. These commands should
be used instead of \code{\newtheorem}.
\Cref{ex:claim,ex:ref} shows how to use the commands above, including how to
specify the plural version for \texttt{cleveref} if it is unusual.
\begin{example}[label=ex:claim,bicolor,%
before upper={\preamble{\bs newsiamthm\{claim\}\{Claim\}}\\
\noindent\preamble{\bs newsiamremark\{hypothesis\}\{Hypothesis\}}\\
\noindent\preamble{\bs crefname\{hypothesis\}\{Hypothesis\}\{Hypotheses\}}},%
parbox=false,%
listing options={style=siamlatex,%
{morekeywords=[2]{claim,proof,hypothesis}}}]{New theorem-like environment}
\begin{claim}\label{cl:constant}
If $f'(x) = 0$ for all $x \in (a,b)$ then $f(x)$ is constant on $(a,b)$.
\end{claim}
\begin{hypothesis}\label{hyp1}
The function $f$ is continuously differentiable.
\end{hypothesis}
\begin{hypothesis}\label{hyp2}
The random variable is normally distributed.
\end{hypothesis}
\end{example}
\begin{example}[label=ex:ref,bicolor,listing options={style=siamlatex,%
{morekeywords=[2]{cref}}}]{References}
We can reference multiple types of objects with a single reference:
\cref{cl:constant,thm:mvt,hyp1,hyp2}.
\end{example}
\section{Tables}
\label{sec:tab}
Table captions should go above the tables.
\Cref{ex:simpletable} shows the code to generate
\cref{tab:simpletable}.
A more complicated example is shown in
\cref{ex:table}, which generates \cref{tab:KoMa14}. This
example uses subfloats via the \texttt{subfig} package, as well as
special column options from the \texttt{array} package.
\begin{tcbverbatimwrite}{tmp_\jobname_simpletable.tex}
\begin{table}[tbhp]
{\footnotesize
\caption{Example table}\label{tab:simpletable}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Species & \bf Mean & \bf Std.~Dev. \\ \hline
1 & 3.4 & 1.2 \\
2 & 5.4 & 0.6 \\ \hline
\end{tabular}
\end{center}
}
\end{table}
\end{tcbverbatimwrite}
\examplefile[label={ex:simpletable},%
listing only, listing options={style=siamlatex}]%
{Example table.}{tmp_\jobname_simpletable.tex}
\input{tmp_\jobname_simpletable.tex}
\begin{tcbverbatimwrite}{tmp_\jobname_table.tex}
\newcolumntype{R}{>{$}r<{$}} %
\newcolumntype{V}[1]{>{[\;}*{#1}{R@{\;\;}}R<{\;]}} %
\begin{table}[tbhp]
{\footnotesize
\captionsetup{position=top}
\caption{Example table adapted from Kolda and Mayo \rm{\cite{KoMa14}}.}\label{tab:KoMa14}
\begin{center}
\subfloat[$\beta=1$]{
\begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline
occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} &
fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline
718 & 11.3476 & 0.5544 & 0.3155 & 1.2018 & 0.0977 & 45 & 0.17 & 0.06 \\ \hline
134 & 3.7394 & 0.2642 & -1.1056 & 0.2657 & -0.3160 & 31 & 0.12 & 0.05 \\ \hline
4 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.21 & 0.10 \\ \hline
\end{tabular}}
\subfloat[$\beta=-1$]{
\begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline
occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} &
fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline
72 & -1.1507 & 0.2291 & 0.6444 & 0.3540 & -0.8990 & 34 & 0.14 & 0.06 \\ \hline
624 & -6.3985 & 0.1003 & 0.1840 & 0.5305 & 1.2438 & 48 & 0.19 & 0.08 \\ \hline
2 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.23 & 0.02 \\ \hline
\end{tabular}}
\end{center}
}
\end{table}
\end{tcbverbatimwrite}
\examplefile[label={ex:table},%
before upper={\preamble[\scriptsize]{\bs usepackage\{array\}}\\[-0.4em]
\noindent\preamble[\scriptsize]{\bs usepackage[caption=false]\{subfig\}}},%
listing only, listing options={%
style=siamlatex,basicstyle=\ttfamily\scriptsize}]%
{Example table with subtables.}{tmp_\jobname_table.tex}
\input{tmp_\jobname_table.tex}
\section{Figures}
\label{sec:fig}
It is recommended that all figures be generated in high resolution.
In the past, SIAM has required encapsulated
postscript (EPS) format for final production. This is still an
acceptable format, but SIAM also now allows high-resolution
PDF, JPEG, and PNG figures.
If working with EPS images and using \texttt{pdflatex}, we
recommend the package \texttt{epstopdf} to automatically convert EPS
images to PDF for inclusion in PDF documents created by
\texttt{pdflatex}.
\Cref{ex:fig} shows the code to generate \cref{fig:testfig}. This
example uses the \texttt{graphicx} package for the
\code{\includegraphics} command.
\begin{tcbverbatimwrite}{tmp_\jobname_fig.tex}
\begin{figure}[tbhp]
\centering
\subfloat[$\epsilon_{\max}=5$]{\label{fig:a}\includegraphics{lexample_fig1}}
\subfloat[$\epsilon_{\max}=0.5$]{\label{fig:b}\includegraphics{lexample_fig2}}
\caption{Example figure using external image files.}
\label{fig:testfig}
\end{figure}
\end{tcbverbatimwrite}
\examplefile[label={ex:fig},%
before upper={\preamble[\scriptsize]{\bs usepackage\{graphicx,epstopdf\}}\\[-0.4em]
\noindent\preamble[\scriptsize]{\bs usepackage[caption=false]\{subfig\}}},%
listing only, listing options={%
style=siamlatex,basicstyle=\ttfamily\scriptsize}]%
{Example figure with subfigures and external files}{tmp_\jobname_fig.tex}
\input{tmp_\jobname_fig.tex}
Another option for figures is a graphics generator that is platform- and
format-independent.
PGF is a TeX macro package for generating such graphics and works together with the most important TeX backend drivers, including pdftex and dvips.
Its user-friendly syntax layer is called TikZ.
Here we show an example using \texttt{PGFPLOTS}, useful for drawing high-quality
plots directly in \LaTeX.
\Cref{ex:data} and \cref{ex:pgfplots} show the data and code, respectively, to generate \cref{fig:pgfplots}, adapted from \cite{pgfplots}.
\examplefile[label={ex:data},listing only,
listing options={style=siamlatex,basicstyle=\ttfamily\scriptsize}]%
{Example data file (data.dat)}{data.dat}
\begin{tcbverbatimwrite}{tmp_\jobname_tikz.tex}
\begin{figure}[tbhp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[height=2.75in, grid=major,
xlabel={Degrees of Freedom}, ylabel={$L_2$ Error},
legend entries={$d=2$,$d=3$}]
\addplot table [x=d2_dof,y=d2_l2_err] {data.dat};
\addplot table [x=d3_dof,y=d3_l2_err] {data.dat};
\end{loglogaxis}
\end{tikzpicture}
\caption{Example \texttt{PGFPLOTS} figure.}
\label{fig:pgfplots}
\end{figure}
\end{tcbverbatimwrite}
\examplefile[label={ex:pgfplots},%
before upper={\preamble[\scriptsize]{\bs usepackage\{pgfplots\}}},%
listing only, listing options={%
style=siamlatex}]%
{Example TikZ/PGF for platform-independent graphics.}{tmp_\jobname_tikz.tex}
\input{tmp_\jobname_tikz.tex}
\section{Algorithms}
\label{sec:algs}
SIAM automatically includes the \texttt{algorithm} package in the
class definition. This provides the float environment.
Users have the choice of \texttt{algpseudocode},
\texttt{algorithmic}, and other packages for actually formatting the algorithm.
For example, \cref{alg:buildtree} is produced by the code in \cref{ex:alg}.
In order to reference lines within the algorithm, we need to tell the
\texttt{cleveref} package how to do the referencing, which is the
second line of \cref{ex:alg}. Then we can use the code
\code{\cref{line3}} to produce \cref{line3}.
\begin{tcbverbatimwrite}{tmp_\jobname_alg.tex}
\begin{algorithm}
\caption{Build tree}
\label{alg:buildtree}
\begin{algorithmic}[1]
\STATE{Define $P:=T:=\{ \{1\},\ldots,\{d\}$\}}
\WHILE{$\#P > 1$}
\STATE\label{line3}{Choose $C^\prime\in\mathcal{C}_p(P)$ with $C^\prime := \operatorname{argmin}_{C\in\mathcal{C}_p(P)} \varrho(C)$}
\STATE{Find an optimal partition tree $T_{C^\prime}$ }
\STATE{Update $P := (P{\setminus} C^\prime) \cup \{ \bigcup_{t\in C^\prime} t \}$}
\STATE{Update $T := T \cup \{ \bigcup_{t\in\tau} t : \tau\in T_{C^\prime}{\setminus} \mathcal{L}(T_{C^\prime})\}$}
\ENDWHILE
\RETURN $T$
\end{algorithmic}
\end{algorithm}
\end{tcbverbatimwrite}
\examplefile[float=htpb,label={ex:alg},%
before upper={\preamble[\scriptsize]{\bs usepackage\{algorithmic\}}\\[-0.4em]
\preamble[\scriptsize]{\bs Crefname\{ALC@unique\}\{Line\}\{Lines\}}},%
listing only, listing options={%
style=siamlatex,basicstyle=\ttfamily\scriptsize}]%
{Example algorithm}{tmp_\jobname_alg.tex}
\input{tmp_\jobname_alg.tex}
\section{Sections}
\label{sec:sec}
Sections are denoted using standard \LaTeX\ section commands, i.e.,
\code{\section}, \code{\subsection}, etc.
If you wish to end the section title with something other than a period (the default), you have to add the command \code{\nopunct} at the end of the title.
Appendices are created with the normal sectioning commands, following
the command \code{\appendix}.
\subsection{Convection}
\label{sec:conv}
Consider the inviscid convection equation,
\begin{equation} \label{eqn:conv}
\frac{\partial w(x,t)}{\partial t}
+ \convspeed \frac{\partial w(x,t)}{\partial x} = 0,
\end{equation}
and its reformulation in the Lagrangian frame of reference,
\begin{subequations}\label{eqn:conv_Lag}
\begin{align}
\frac{d x}{d t} &= \convspeed, \label{eqn:conv_Lag_lines}\\
\frac{\partial w}{\partial t} &= 0 \label{eqn:conv_Lag_state}.
\end{align}
\end{subequations}
The solution to \cref{eqn:conv_Lag} is straightforward.
\Cref{eqn:conv_Lag_lines} dictates that the grid points move with the constant convection velocity, $\convspeed$,
while the state variable remains constant along the moving points, \cref{eqn:conv_Lag_state}.
The accuracies of \gls{pinn} and \gls{lpinn} are compared for different convection velocities in \cref{fig:convection}.
Similar to \cite{Krishnapriyan_NIPS_2021}, the error of \gls{pinn} increases for larger values of $\convspeed$, such that for $\convspeed \ge 20$, the \gls{pinn} cannot be trained.
In the case of the proposed \gls{lpinn}, where the problem is simply reformulated in the Lagrangian frame of reference, the error remains below $5\%$ in all cases.
Note that the reported error also includes the error originating from interpolating the predicted state from the moving grid of the Lagrangian frame to the stationary grid of the Eulerian frame.
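As a sanity check of this solve-then-interpolate pipeline, the convection problem can be worked end to end with exact characteristics in place of a trained network (a sketch; the grid size and the use of \texttt{numpy.interp} for the Lagrangian-to-Eulerian interpolation are our choices):

```python
import numpy as np

c, t = 30.0, 0.1
g = np.sin                                  # initial condition w(x, 0) = g(x)

# Lagrangian frame: grid points travel with speed c, the state stays frozen.
x0 = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
x_moved = (x0 + c * t) % (2 * np.pi)        # characteristics, wrapped
w_lag = g(x0)                               # state is constant along them

# Interpolate from the moving (Lagrangian) grid back to the stationary
# (Eulerian) grid; `period` handles the wrap-around on [0, 2*pi).
x_eul = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
w_eul = np.interp(x_eul, x_moved, w_lag, period=2 * np.pi)

# Compare with the exact Eulerian solution g(x - c t).
err = np.max(np.abs(w_eul - g(x_eul - c * t)))
```

Even at $\convspeed=30$, the only error in this idealized setting is the linear-interpolation error of the Lagrangian-to-Eulerian transfer.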
To evaluate the optimality of the trained network, the loss landscape of the network at the end of the training phase is often used as a descriptive measure \cite{Fuks_JMLMC_2020, Krishnapriyan_NIPS_2021, Rohrhofer_arxiv_2022, Basir_SCITECH_2022}.
To compute the loss landscape, the two dominant eigenvectors of the Hessian of the loss with respect to the trainable parameters of the networks, $\bm{\delta}$ and $\bm{\eta}$, are computed using the code provided in \cite{Yao_IEEE_2020}.
Subsequently, the network is perturbed along the eigenvectors and its loss, $\ensuremath{\mathcal{\bm{L}}}'$, is evaluated, i.e.,
\begin{equation} \label{eqn:loss_landscape}
\ensuremath{\mathcal{\bm{L}}}'\left( \alpha , \beta \right) = \ensuremath{\mathcal{\bm{L}}} \left(\ensuremath{ \bm{\theta} } + \alpha \bm{\delta} + \beta \bm{\eta} \right),
\end{equation}
where $\left(\alpha, \beta \right) \in \left[-\alpha_0,\alpha_0\right] \times \left[-\beta_0,\beta_0\right]$.
Finally, $\log{\left( \ensuremath{\mathcal{\bm{L}}}'\left( \alpha , \beta \right) \right)}$ is visualized in \cref{fig:landscape_conv}, for $\convspeed \in \left\{0,30,50\right\}$ and for both \gls{pinn} and \gls{lpinn} architectures.
In \cref{fig:landscape_conv_PINN_0}, we recover the saddle shape of the loss landscape for small convection speed as reported for \gls{pinn} in \cite{Krishnapriyan_NIPS_2021}.
Similarly, by increasing $\convspeed$, the landscape becomes less smooth (sharper, or more rugged), implying that the trained network is not at a minimizer (\crefrange{fig:landscape_conv_PINN_30}{fig:landscape_conv_PINN_50}).
In the case of \gls{lpinn} (\crefrange{fig:landscape_conv_LPINN_0}{fig:landscape_conv_LPINN_50}), the loss landscapes are significantly smoother compared to their \gls{pinn} counterparts (\crefrange{fig:landscape_conv_PINN_0}{fig:landscape_conv_PINN_50}).
Moreover, the landscape is smooth (flat), even at high $c$, increasing the confidence that the obtained minimizer is a global one.
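To make the perturbation in \cref{eqn:loss_landscape} concrete, the following sketch (our own illustration with a toy quadratic loss; in the actual study, $\bm{\delta}$ and $\bm{\eta}$ are the dominant Hessian eigenvectors computed with \cite{Yao_IEEE_2020}) evaluates the perturbed loss on the $(\alpha,\beta)$ grid:

```python
import numpy as np

def loss_landscape(loss_fn, theta, delta, eta, alpha0=1.0, beta0=1.0, res=21):
    """Evaluate L'(alpha, beta) = L(theta + alpha*delta + beta*eta) on a grid,
    following the perturbation formula in the text."""
    alphas = np.linspace(-alpha0, alpha0, res)
    betas = np.linspace(-beta0, beta0, res)
    grid = np.empty((res, res))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            grid[i, j] = loss_fn(theta + a * delta + b * eta)
    return alphas, betas, grid

# Toy example: quadratic loss ||theta||^2, perturbed along two coordinate axes.
theta = np.zeros(3)
delta, eta = np.eye(3)[0], np.eye(3)[1]
_, _, grid = loss_landscape(lambda p: float(p @ p), theta, delta, eta)
```

Plotting the $\log$ of the returned grid reproduces landscape views of the kind discussed above.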
\begin{figure}[t!]
\centering
\includegraphics[scale=1.00]{data/convergence/main-figure26}
\caption{Comparison of the error in \gls{pinn} (dashed blue) vs. the proposed \glspl{lpinn} (solid black) for the convection equation \cref{eqn:conv}.}
\label{fig:convection}
\end{figure}
\begin{figure}[t!]
\centering
\defdata/landscape/convection/dense{data/landscape/convection/dense}
\def1.00{0.6}
\subfloat[\gls{pinn}, $\convspeed = 0$]
{
\label{fig:landscape_conv_PINN_0}
\includegraphics[trim={0 0 0 2cm},clip, scale=1.00]{data/landscape/convection/dense/0PINN.png}
}
\subfloat[\gls{pinn}, $\convspeed = 30$]
{
\label{fig:landscape_conv_PINN_30}
\includegraphics[trim={0 0 0 2cm},clip,scale=1.00]{data/landscape/convection/dense/30PINN.png}
}
\subfloat[\gls{pinn}, $\convspeed = 50$]
{
\label{fig:landscape_conv_PINN_50}
\includegraphics[trim={0 0 0 2cm},clip,scale=1.00]{data/landscape/convection/dense/50PINN.png}
}
\\
\subfloat[\gls{lpinn}, $\convspeed = 0$]
{
\label{fig:landscape_conv_LPINN_0}
\includegraphics[trim={0 0 0 2cm},clip,scale=1.00]{data/landscape/convection/dense/0LPINN.png}
}
\subfloat[\gls{lpinn}, $\convspeed = 30$]
{
\label{fig:landscape_conv_LPINN_30}
\includegraphics[trim={0 0 0 2cm},clip,scale=1.00]{data/landscape/convection/dense/30LPINN.png}
}
\subfloat[\gls{lpinn}, $\convspeed = 50$]
{
\label{fig:landscape_conv_LPINN_50}
\includegraphics[trim={0 0 0 2cm},clip,scale=1.00]{data/landscape/convection/dense/50LPINN.png}
}
\caption{
The ($\log$ of the) loss landscape of the convection equation for different convection speeds, $\convspeed \in \left\{0,30,50\right\}$.
a--c. \gls{pinn},
d--f. \gls{lpinn}.
}
\label{fig:landscape_conv}
\end{figure}
\subsection{Convection-diffusion}
\label{sec:conv-diff}
\newcommand{\ensuremath{ \convspeed }}{\ensuremath{ \convspeed }}
\newcommand{\ensuremath{ \nu }}{\ensuremath{ \nu }}
Consider the viscous convection--diffusion equation,
\begin{equation} \label{eqn:convdiff}
\frac{\partial w(x,t)}{\partial t}
+ \ensuremath{ \convspeed } \frac{\partial w(x,t)}{\partial x} = \ensuremath{ \nu } \frac{\partial^2 w(x,t)}{\partial x^2} ,
\end{equation}
and its reformulation in the Lagrangian frame of reference,
\begin{subequations}\label{convdiff_Lag}
\begin{align}
\frac{d x}{d t} &= \ensuremath{ \convspeed },\label{eqn:convdiff_Lag_lines}\\
\frac{\partial w}{\partial t} &= \ensuremath{ \nu } \frac{\partial^2 w}{\partial x^2}\label{eqn:convdiff_Lag_state}.
\end{align}
\end{subequations}
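As a sanity check of this reformulation (our own sketch, not part of the paper's \gls{lpinn} training), one can advect a uniform grid with \cref{eqn:convdiff_Lag_lines} while stepping \cref{eqn:convdiff_Lag_state} with explicit finite differences; the result matches the exact Gaussian solution of the Eulerian equation:

```python
import numpy as np

def solve_lagrangian(c=3.0, nu=0.05, sigma=0.5, half_width=10.0, n=401, T=1.0):
    """Convection--diffusion via the Lagrangian split: grid points follow
    dx/dt = c, while the state obeys the heat equation on the moving grid."""
    x = np.linspace(-half_width, half_width, n)   # material grid at t = 0
    dx = x[1] - x[0]
    w = np.exp(-x**2 / (2 * sigma**2))            # Gaussian initial condition
    steps = int(np.ceil(T / (0.4 * dx**2 / nu)))  # explicit-diffusion stability
    dt = T / steps
    for _ in range(steps):
        # Heat equation on the co-moving grid (far-field values stay ~0).
        w[1:-1] += nu * dt / dx**2 * (w[2:] - 2 * w[1:-1] + w[:-2])
    return x + c * T, w                           # characteristics: x(t) = x0 + c t

x_T, w_T = solve_lagrangian()
```

Since all grid points move with the same speed $c$, the grid stays uniform and the diffusion stencil needs no modification.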
\Cref{fig:condiff} compares the accuracy of \gls{pinn} and the proposed \glspl{lpinn}.
Similar to the inviscid case discussed in \cref{sec:conv}, the error in \glspl{pinn} increases with $\convspeed$, and they fail to train beyond a critical $\convspeed$.
The error of the \glspl{lpinn} also increases with $\convspeed$; however, it remains around $10\%$, even in the most challenging cases.
\begin{figure}[tb!]
\centering
\subfloat[$\nu=1.0$]
{
\centering
\label{fig:condiff_nu=1d00}
\includegraphics[scale=1.00]{data/convergence/main-figure25}
}
\subfloat[$\nu=0.1$]
{
\centering
\label{fig:condiff_nu=0d10}
\includegraphics[scale=1.00]{data/convergence/main-figure24}
}
\subfloat[$\nu=0.01$]
{
\centering
\label{fig:condiff_nu=0d01}
\includegraphics[scale=1.00]{data/convergence/main-figure23}
}
\caption{Comparison of the error in \gls{pinn} (dashed blue) vs. the proposed \glspl{lpinn} (solid black) for the convection--diffusion equation \cref{eqn:convdiff} with different $\nu$.}
\label{fig:condiff}
\end{figure}
\subsection{Burgers' equation}
\label{sec:burgers}
Consider the viscous Burgers' equation,
\begin{equation} \label{eqn:burgers}
\frac{\partial w(x,t)}{\partial t}
+ w(x,t) \frac{\partial w(x,t)}{\partial x} = \nu \frac{\partial^2 w(x,t)}{\partial x^2} ,
\end{equation}
and its representation on the Lagrangian frame,
\begin{subequations}\label{burgers_Lag}
\begin{align}
\frac{d x}{d t} &= w(x,t),\label{eqn:burgers_Lag_lines}\\
\frac{\partial w}{\partial t} &= \ensuremath{ \nu } \frac{\partial^2 w}{\partial x^2}\label{eqn:burgers_Lag_state}.
\end{align}
\end{subequations}
While the traditional formulation of \gls{pinn} has been successfully demonstrated for Burgers' equation~\cite{Raissi_JCP_2019}, the examined problem lacks the main property of cases that are challenging to train, i.e., the large Kolmogorov n--width\ associated with the traveling shock, as in \cref{fig:burgers_svd}.
In \cref{fig:burgers}, and in a trend similar to that of the convection--diffusion equation, the \gls{pinn} fails to train for $\convspeed\ge10$, while the \gls{lpinn} is trained successfully in all cases.
The higher error in this case compared to the convection--diffusion equation is due to the higher interpolation error close to the shock.
Note that in these cases the viscosity, $\nu$, is small enough to form the high-gradient shock and large enough to avoid intersecting characteristics. \Cref{fig:burgers_space_time} shows the accuracy of the proposed \gls{lpinn} compared to the numerical solver at different simulation time steps.
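The condition on intersecting characteristics can be quantified: in the inviscid limit of \cref{eqn:burgers}, characteristics $x(t) = x_0 + w_0(x_0)\,t$ first cross at $t^{*} = -1/\min_x w_0'(x)$, so the viscosity must regularize the solution before that time. A small sketch (our own illustration):

```python
import numpy as np

def shock_formation_time(w0, x):
    """First crossing time of inviscid Burgers characteristics
    x(t) = x0 + w0(x0) t:  t* = -1 / min_x w0'(x),
    infinite when w0 is non-decreasing (no crossing)."""
    slope = np.gradient(w0(x), x)   # numerical w0'(x)
    m = slope.min()
    return np.inf if m >= 0 else -1.0 / m

x = np.linspace(0.0, 2.0 * np.pi, 2001)
t_star = shock_formation_time(np.sin, x)   # min of cos is -1, so t* = 1
```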
\begin{figure}[tb!]
\centering
\subfloat[$\nu=1.0$]
{
\centering
\label{fig:burgers_nu=1d00}
\includegraphics[scale=1.00]{data/convergence/main-figure7}
}
\subfloat[$\nu=0.1$]
{
\centering
\label{fig:burgers_nu=0d10}
\includegraphics[scale=1.00]{data/convergence/main-figure6}
}
\subfloat[$\nu=0.01$]
{
\centering
\label{fig:burgers_nu=0d01}
\includegraphics[scale=1.00]{data/convergence/main-figure5}
}
\caption{
Comparison of the error in \gls{pinn} (dashed blue) vs. the proposed \glspl{lpinn} (solid black) for the viscous Burgers' equation \cref{eqn:burgers} with different $\nu$.
}
\label{fig:burgers}
\end{figure}
\begin{figure}[tb!]
\centering
\subfloat[$\convspeed=30$]
{
\centering
\label{fig:burgers_nu=0d01_C30}
\includegraphics[scale=1.00]{data/time_space/main-figure29}
}
\subfloat[$\convspeed=50$]
{
\centering
\label{fig:burgers_nu=0d01_C50}
\includegraphics[scale=1.00]{data/time_space/main-figure30}
}
\caption{
Comparison of the proposed \gls{lpinn} with pseudo--spectral solver for the viscous Burgers' equation \cref{eqn:burgers} ($\nu=0.01$) at $t\in\left\{0,\ensuremath{ T }/3, 2\ensuremath{ T }/3, \ensuremath{ T }\right\}$ (black circle, blue square, red triangle, blue diamond) for $\convspeed=\left\{30, 50\right\}$. \Gls{pinn} cannot be trained in both regimes.
}
\label{fig:burgers_space_time}
\end{figure}
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{Kolmogorov n--width\ of the Failure Modes of \glspl{pinn}}
\label{sec:problem}
\input{nwidth.tex}
\section{PINNs for Non--linear Convection--Diffusion}
\label{sec:convdiff}
\input{convdiff.tex}
\section{Experimental Results}
\label{sec:experiments}
\input{experiments.tex}
\section{Conclusions}
\label{sec:conclusions}
\input{conclusions.tex}
\section*{Acknowledgements}
This work was supported by an award from the ONR Young Investigator Program (N00014-20-1-2722), a grant from the NSF CSSI program (OAC-2005123).
Computational resources were provided by NSF XSEDE (allocation ATM170020) and NCAR's CISL (allocation URIC0004).
Our codes and data are available at~\url{https://github.com/rmojgani/LPINNs}.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
\label{sec:1}
In recent years, recommender systems have become one of the essential areas of machine learning. In our daily life, recommender systems affect us in one way or another, as they exist in almost all current websites, such as social media, shopping, and entertainment platforms. Buying clothes online has become a widespread trend; websites such as Zalando and Amazon are getting bigger every day. In this regard, item images play a crucial role: no one wants to buy a new shirt without seeing how it looks. Many users also share the same taste in what they buy; for example, some prefer dark colors most of the time, others like sportswear, and so on.
In fashion recommendation, adding item images to the model has been shown to significantly lift recommendation performance when the model is trained not only on the relational data describing the interactions between users and items but also on how the items look. Image features can massively help the model recommend other items that look similar or compatible. Several works have tackled this area in recent years, incorporating item images into the model with different techniques.
Following this research direction, we propose a hybrid attribute-aware model that relies on adding the items' image features into the recommendation model.
The contributions of this work can be summarized as follows:
\begin{itemize}
\item We propose a simple image-aware model for item recommendation that can leverage items' image features extracted by a fine-tuned ResNet50 component \cite{he2016deep}.
\item We conduct extensive experiments on two benchmark datasets; the results show that the proposed model outperforms more complex state-of-the-art methods.
\item We conduct an ablation study to analyze and compare three different approaches for including the items' image features extracted by the ResNet50 in the recommender model.
\end{itemize}
\section{Related work}
\label{sec:2}
Many image-based recommender systems were proposed in the last few years. They are becoming more popular, and their applications are getting wider, especially in fashion e-commerce websites. Many of the proposed models in the literature relied on using \textbf{pre-trained networks} for items images features extraction. In 2016 an essential state-of-the-art model was proposed, VBPR \cite{he2015vbpr}. It uses the BPR ranking model \cite{rendle2012bpr} for prediction and takes the visual item features into account. They use a pre-trained CNN network to extract the item features; these features pass through a fully connected layer to obtain the latent embedding.
Another model which uses more than one type of external information is JRL \cite{zhang2017joint}. It incorporates three different information sources (user reviews, item images, and ratings) using a pre-trained CaffeNet and PV-DBOW model \cite{le2014distributed}.
While in 2017, Qiang et al. \cite{liu2017deepstyle} proposed the DeepStyle model, where their primary assumption is that the item representation consists of style and item category representations extracting the item image features via a CaffeNet model. To get the style features, they subtract latent factors representing the category. Additionally, a region-guided approach (SAERS) \cite{hou2019explainable} introduced the items' visual features using AlexNet to get general features and utilizing a ResNet-50 architecture for extracting semantic features representing the region of interest in the item image. Before semantic features are added to the global features, an attention mechanism using the users' preferences is applied. The final item embedding becomes the semantic features combined with the global features.
The image networks used for item image feature extraction can also be \textbf{trained end-to-end} with the recommender model; the most popular model applying this technique is DVBPR \cite{kang2017visually}, a powerful model proposed in 2017 that incorporates visual item features. It performs two tasks: the first trains the BPR recommender model jointly with a CNN structure to extract the items' pixel-level image features; the second uses Generative Adversarial Networks (GANs) to generate new item images based on user preferences.
Attribute-aware recommender system models are a family of hybrid models that can incorporate external user and item attributes. Theoretically, some of these models can be extended to image-based settings by carefully converting the raw image features into real-valued latent features, which can be used as item attributes. Recently, the (CoNCARS) \cite{costa2019collective} model was proposed, which takes the user and item one-hot-encoded vectors as well as the user/item timestamps. The model utilizes a convolutional neural network (CNN) on top of the interaction matrix to generate the latent embeddings. Parallel work by Rashed et al. proposed an attribute-aware model (GraphRec) \cite{rashed2019attribute} that appends all the users' and items' attributes to their one-hot-encoded vectors. It extracts the embeddings through neural network layers that can capture the non-linearity in the user-item relation.
In the literature, attribute-aware models have largely been set aside for image-based item ranking problems. Hence, in this paper, we propose a simple image-aware model that utilizes the latent image features as item attributes. The experimental results show that the proposed model outperforms current, more complex image-based state-of-the-art models.
\section{Methodology}
\label{sec:3}
\subsection{Problem Definition}
In image-based item recommendation tasks, there exist a set of $M$ users $\mathcal{U}:=\lbrace{u_1, \cdots, u_M}\rbrace$, a set of $N$ items $\mathcal{I}:=\lbrace{i_1, \cdots, i_N}\rbrace$ with their images $X_i \in \mathbb{R}^{N \times (L \times H \times C)}$ of dimensions $L \times H$ and $C$ channels, and a sparse binary interaction matrix $R \in \mathbb{R}^{M \times N}$ that indicate user's implicit preferences on items based on historical interactions.
The recommendation task's primary goal is to generate a ranked personalized short-list of items to users by estimating the missing likelihood scores in $R$ while considering the visual information that exists in the item images.
\subsection{Proposed model}
The proposed model consists of an image features extraction component and an attribute-aware recommender system component that are jointly optimized.
\subsubsection{Recommender System Component}
Inspired by the GraphRec model \cite{rashed2019attribute}, the recommender system component utilizes the user's one-hot encoded input vector and concatenates the external features of the items directly to the items' one-hot input vectors. These vectors are then fed to their independent embedding functions $\psi_u$ : $\mathbb{R}^{M} \rightarrow \mathbb{R}^{K}$ and $\psi_i$ : $\mathbb{R}^{(N + F)} \rightarrow \mathbb{R}^{K}$ as follows:
\begin{equation}
z_{u}= \psi_u(v_u)= v_u W^{\psi_u} + b^{\psi_u}
\end{equation}
\begin{equation}
z_{i}= \psi_i(v_i)= concat(v_i, \phi(x_i)) W^{\psi_i} + b^{\psi_i}
\end{equation}
\noindent where $W^{\psi_u}$, $W^{\psi_i}$ are the weight matrices of the embedding functions, and $b^{\psi_u}$, $b^{\psi_i}$ are the bias vectors. $v_u$, $v_i$ represents the user and item one-hot encoded vectors. Additionally, $\phi(x_i)$ represents the features extraction component that embeds an item's raw image $x_i$ to a latent feature vector of size $F$.
After obtaining the user and item embeddings, the final score is calculated as the dot-product of the two embedding vectors, $\hat{y}_{u i}= z_u \cdot z_i$, representing how much user $u$ will tend to like item $i$. The score is passed through a sigmoid function, $\sigma(\hat{y}_{u i})=1/(1+e^{-\hat{y}_{u i}})$, to limit its value to the range $0\rightarrow 1$. The model target is defined as $y_{u i}$, which for implicit feedback is either $0$ or $1$;
\begin{equation}
y_{u i}=
\begin{cases}
1, & \text{observed item}; \\
0, & \text{otherwise}
\end{cases}
\end{equation}
Given the users' positive and negative item interactions $D_s^+$ and $D_s^-$, the output score $\hat{y}_{u i}$ of the model, and the target $y_{ui}$, we use negative sampling to generate unobserved instances and optimize the negative log-likelihood objective function $\ell(\hat{y}; D_s)$ using the ADAM optimizer, which can be defined as;
\begin{equation}
\label{eq:5}
- \sum_{(u,i)\in {D_s^+ \bigcup D_s^- }} y_{u i} \log{(\hat{y}_{u i})} + (1-y_{u i}) \log{(1-\hat{y}_{u i})}
\end{equation}
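The scoring and objective above can be sketched in NumPy as follows (shapes and names are our own; the actual model is trained with ADAM on sampled positive/negative pairs):

```python
import numpy as np

def scores(v_u, v_i_aug, W_u, b_u, W_i, b_i):
    """z_u = v_u W_u + b_u;  z_i = [v_i, phi(x_i)] W_i + b_i;
    y_hat = sigmoid(z_u . z_i), per row of a batch."""
    z_u = v_u @ W_u + b_u
    z_i = v_i_aug @ W_i + b_i
    return 1.0 / (1.0 + np.exp(-np.sum(z_u * z_i, axis=-1)))

def nll(y_hat, y, eps=1e-9):
    """Negative log-likelihood over positive and sampled negative pairs."""
    return -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
```

Here `v_i_aug` stands for the item one-hot vector already concatenated with its latent image features $\phi(x_i)$.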
\subsubsection{Extraction of Image Features}
To extract the items' latent image features, we propose using a ResNet50 component. To refine the image features further and obtain a better representation, one can jointly train the whole image network simultaneously with the recommender model. However, this requires a considerable amount of memory and computational power to load and update the parameters; in this case, ResNet50 consists of 176 layers with around 23 million parameters. To mitigate this problem, we propose ImgRec End-to-End (ImgRec-EtE), where we utilize a ResNet50 \cite{he2016deep} pre-trained on the ImageNet dataset \cite{imagenet_cvpr09} and jointly train only part of the image network with the recommender model, while at the same time benefiting from starting with the pre-trained weights. As shown in Figure \ref{fig:3}, we select the last 50 layers to be updated and fix the first 126 layers.
Furthermore, we add a separate fully connected layer to fine-tune the image features extracted by the ResNet50. This layer is trained simultaneously with the recommender model. Moreover, this additional layer makes the image features more compact and further decreases their dimensionality to match the user latent embedding. Thus, the feature extraction function $\phi(x_i)$ for ImgRec-EtE can be defined as follows;
\begin{equation}
\phi(x_i) := ReLU(ResNet50(x_i) W^{\phi} + b^{\phi})
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{ImgRec_model.pdf}
\caption{End-to-End ImgRec Architecture}
\label{fig:3}
\end{figure}
\subsection{Training Strategy}
To increase the speed of the training process, we used a two-stage training protocol. Firstly, we train the model by fixing the image network's pre-trained parameters and updating the recommender model parameters until convergence. After obtaining the best model performance in the first stage, we jointly learn and further fine-tune the last 50 layers of the image network with the recommender model. This methodology allowed us to fine-tune the model after reaching the best performance given the pre-trained image network weights; it also saves time and computational power while achieving superior prediction performance compared to using only the fixed pre-trained parameters of the image network.
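The two-stage schedule can be sketched as follows (a simplified illustration using the layer counts from the text; in a real implementation, the entries would be framework parameter groups rather than dictionaries):

```python
def set_trainable(layers, n_frozen):
    """Stage control: freeze the first n_frozen layers, train the rest."""
    for k, layer in enumerate(layers):
        layer["trainable"] = k >= n_frozen

# The ResNet50 backbone has 176 layers in total.
resnet = [{"name": f"layer_{k}", "trainable": False} for k in range(176)]

# Stage 1: whole image network fixed; only recommender parameters update.
set_trainable(resnet, n_frozen=176)

# Stage 2: fine-tune the last 50 layers jointly with the recommender model.
set_trainable(resnet, n_frozen=126)
trainable = [l["name"] for l in resnet if l["trainable"]]
```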
\section{Experiments}
Through the experimental section, we aim to answer the following research questions:\\
\textbf{RQ1} How does the proposed model fair against state-of-the-art image-based models?\\
\textbf{RQ2} What is the best method for adding images features to the model?
\subsection{Datasets}
We chose two widely used image-based recommendation datasets, \textit{Amazon fashion} and \textit{Amazon men}, introduced by McAuley et al. \cite{mcauley2015image}. \textit{Amazon fashion} was collected from six different categories, "men/women's tops, bottoms, and shoes," while \textit{Amazon men} contains all subcategories (gloves, scarves, sunglasses, etc.). The numbers of users and items we report for the Fashion dataset differ from those stated in the original paper. However, we contacted the authors\footnote{https://github.com/kang205/DVBPR/issues/6}, and the numbers in Table \ref{tab:2} were found to be the correct ones.
\begin{table}[!t]
\caption{Datasets statistics}
\label{tab:2}
\begin{tabular}{p{2.3cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}}
\hline\noalign{\smallskip}
Dataset& Users & Items & Categories& Interactions\\
\noalign{\smallskip}\svhline\noalign{\smallskip}
Amazon fashion& 45184 & 166270 & 6 & 267635 \\
Amazon men& 34244 & 110636 & 50 & 186382 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\subsection{Evaluation Protocol}
To evaluate our proposed models, the data is split into training, validation, and test sets using the leave-one-out evaluation protocol, as in \cite{kang2017visually, hou2019explainable,he2015vbpr}. However, we used two different numbers of negative samples in the evaluation protocol for the direct comparison against the published results of the state-of-the-art models DVBPR \cite{kang2017visually} and SAERS \cite{hou2019explainable}, because the two models originally used different numbers of samples and the source code of SAERS is not available.
For the direct comparison against DVBPR, we sample 100 negative items ($\mathcal{I}^t$) and one positive item $i$ for each user. For the direct comparison against our second baseline, SAERS \cite{hou2019explainable}, we sample 500 negative items ($\mathcal{I}^t$) and one positive item $i$. To ensure the consistency of our results, we report the mean over five different trials for each experiment.
For evaluation, we report the \textbf{Area Under the Curve (AUC)}, as it is the primary metric in all of our baselines' papers; furthermore, it is robust to the number of negative samples used \cite{krichene2020sampled}:
\begin{equation}
\label{eq:9}
AUC= \frac{1}{\abs{\mathcal{U}}} \sum_{u\in \mathcal{U} } \frac{1}{\abs{\mathcal{I}^t}} \sum_{j\in \mathcal{I}^t} \left(\hat{y}_{u i} > \hat{y}_{u j}\right)
\end{equation}
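Equivalently, in code (our sketch; `pos` holds the positive score $\hat{y}_{ui}$ per user, and each row of `neg` holds that user's sampled negative scores $\hat{y}_{uj}$):

```python
import numpy as np

def sampled_auc(pos, neg):
    """AUC over sampled negatives: fraction of negatives ranked below the
    positive item, averaged per user and then over users."""
    wins = (pos[:, None] > neg).mean(axis=1)   # per-user fraction of wins
    return float(wins.mean())
```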
\begin{table}[!t]
\caption{Comparison of AUC scores with 100 negative samples per user, the bold results represent the best performing model and we underline the second best result.}
\label{tab:3}
\begin{tabular}{p{2.3cm}p{1.45cm}p{1.4cm}p{1.45cm}p{1.4cm}}
\hline\noalign{\smallskip}
Datasets &\multicolumn{4}{c}{\textit{Interactions}}\\
&PopRank&WARP&BPR-MF&FM\\
\noalign{\smallskip}\svhline\noalign{\smallskip}
\textit{Amazon Fashion}&0.5849&0.6065&0.6278&0.7093\\
\textit{Amazon Men} &0.6060&0.6081&0.6450&0.6654\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\begin{tabular}{p{2.3cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.5cm}}
\hline\noalign{\smallskip}
Datasets &\multicolumn{4}{c}{\textit{Interactions+Image Features}}\\
&VisRank&VBPR&DVBPR& ImgRec-EtE\\
\noalign{\smallskip}\svhline\noalign{\smallskip}
\textit{Amazon Fashion}&0.6839&0.7479&\underline{0.7964}&\textbf{0.8250}\\
\textit{Amazon Men} &0.6589&0.7089&\underline{0.7410}&\textbf{0.7899}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Comparison of AUC scores with 500 negative samples per user.}
\label{tab:4}
\begin{tabular}{p{2.3cm}p{1.2cm}p{1.2cm}|p{1.1cm}p{1.3cm}p{1.1cm}p{1.2cm}p{1.5cm}}
\hline\noalign{\smallskip}
Datasets &\multicolumn{2}{c}{\textit{Interactions}}&\multicolumn{5}{c}{\textit{Interactions+Image Features}}\\
&PopRank&BPR-MF&VBPR&DeepStyle&JRL&SAERS&ImgRec-EtE\\
\noalign{\smallskip}\svhline\noalign{\smallskip}
\textit{Amazon Fashion}&0.5910&0.6300&0.7710&0.7600&0.7710&\underline{0.8161}&\textbf{0.8250}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\subsection{Baselines}
We compared our proposed methods to the published results of the state-of-the-art image-based models DVBPR and SAERS. We also compared our results against a set of well-known item recommendation models that were used in \cite{kang2017visually,hou2019explainable}.
\begin{itemize}
\item \textbf{PopRank}: A naive popularity-based ranking model.
\item \textbf{WARP \cite{weston2011wsabie}}: A matrix factorization model that uses Weighted Approximate-Rank Pairwise (WARP) loss.
\item \textbf{BPR-MF \cite{rendle2012bpr}}: A matrix factorization model that uses the BPR loss to get the ranking of the items.
\item \textbf{VisRank}: A content-based model that utilizes the similarity between CNN features of the items bought by the user.
\item \textbf{Factorization Machines (FM) \cite{rendle2010factorization}}: A generic method that combines the benefits of both SVM and factorization techniques using pair-wise BPR loss.
\item \textbf{VBPR \cite{he2015vbpr}}: A model that utilizes items' visual features, using a pre-trained CaffeNet and a BPR ranking model.
\item \textbf{DeepStyle \cite{liu2017deepstyle}}: A model that uses the BPR framework and incorporates style features extracted by subtracting category information from CaffeNet visual features of the items.
\item \textbf{JRL\cite{zhang2017joint}}: A model that incorporates three different types of item attributes. In this case, we considered only the visual features for comparison purposes.
\item \textbf{DVBPR \cite{kang2017visually}}: State-of-the-art image-based model that adds the visual features extracted from a dedicated CNN network trained along with a BPR recommender model.
\item \textbf{SAERS \cite{hou2019explainable}}: State-of-the-art image-based model that utilizes the region of interests in the visual images of the items while also considering the global extracted features from a dedicated CNN to get the final items representations.
\end{itemize}
\subsection{Comparative study against state-of-the-art image-based models (RQ1)}
Since the DVBPR baseline reported results on both the Amazon men and Amazon fashion datasets, we compare directly against the published results on both. On the other hand, the SAERS model used only the Amazon fashion dataset, so we report results for this dataset using 500 negative samples per user only. Table \ref{tab:3} shows the results obtained by ImgRec-EtE against the VBPR and DVBPR results. The proposed ImgRec-EtE model achieves the best performance on both the men and fashion datasets: a 2.5\% improvement over DVBPR's reported performance on the fashion dataset and a 4.8\% improvement on the men dataset. The results show consistent AUC values regardless of the number of negative samples, in line with the recent study by Krichene et al. \cite{krichene2020sampled}.
Table \ref{tab:4} demonstrates the comparison against the DeepStyle, JRL, and SAERS models. The proposed ImgRec-EtE model achieves the best performance on the fashion dataset. Despite its simplicity, the model reaches an AUC of 0.825, an improvement of 0.9\% over the complex state-of-the-art SAERS model.
\subsection{Ablation Study (RQ2) }
Besides obtaining the items' features in an end-to-end fashion, we tried other methods of incorporating the image features. Firstly, in (ImgRec-Dir), we directly concatenate the image features, extracted from the next-to-last fully connected layer of a pre-trained ResNet50, to the one-hot encoded vector representing the item.
On the other hand, (ImgRec-FT) passes the features extracted by the pre-trained network through a fine-tuning layer that is trained with the recommender model to obtain a better item representation. Subsequently, the item's latent image features are concatenated to the item one-hot encoded vector to form one input vector representing the item. As shown in Table \ref{tab:5}, the image features had a varying effect depending on how they were added to the model; ImgRec-Dir achieved an AUC of 0.77 on the Amazon fashion dataset and 0.736 on the Amazon men dataset. Looking at the ImgRec-FT performance after adding the fine-tuning layer, we see an improvement of 3.2\% on the Amazon fashion dataset and 1.6\% on the Amazon men dataset, which is highly competitive with the state-of-the-art models at a much lower computational complexity. Finally, ImgRec-EtE, which jointly trains part of the ResNet50 simultaneously with the model, further improved the results by 1.6\% over ImgRec-FT on both datasets.
\begin{table}[!t]
\caption{Comparison of AUC scores with 100 negative samples per user, between the three ways of incorporating the image features.}
\label{tab:5}
\begin{tabular}{p{2.3cm}p{1.6cm}p{1.6cm}p{1.6cm}}
\hline\noalign{\smallskip}
Datasets & ImgRec-Dir & ImgRec-FT & ImgRec-EtE\\
\noalign{\smallskip}\svhline\noalign{\smallskip}
\textit{Amazon Fashion}&0.7770&0.8090&\textbf{0.8250}\\
\textit{Amazon Men} &0.7363&0.7735&\textbf{0.7899}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\subsection{Hyperparameters}
We ran our experiments on an RTX 2070 Super GPU and a Xeon Gold 6230 CPU with 256 GB RAM. We used user and item embedding sizes of 10 and 20 with a \textit{linear} activation function for both datasets. We applied a grid search over the learning rate in [0.00005, 0.0003] and the L2-regularization lambda in [0.000001, 0.2]. The best parameters are 0.0001 and 0.1 for ImgRec-Dir and ImgRec-FT, while for ImgRec-EtE, the best L2-regularization lambda is 0.000001 for phase 1 (fixed weights) and 0.00005 for phase 2 (joint training). For the feature fine-tuning layer, the best embedding size is 150 with a \textit{ReLU} activation function. ImgRec codes and datasets are available at https://github.com/Shereen-Elsayed/ImgRec.
\section{Conclusion}
In this work, we propose an image-based attribute-aware model for personalized item ranking that jointly trains a ResNet50 component with the recommender model to incorporate image features. Adding the image features showed a significant improvement in the model's performance, and ImgRec-EtE shows superior performance to all image-based recommendation approaches. Furthermore, we conducted an ablation study to compare different approaches of adding the features to the model: direct feature concatenation, adding a fine-tuning fully connected layer, and jointly training part of the image network.
\section{Acknowledgements}
This work is co-funded by the industry Project “IIP-Ecosphere: Next Level Ecosphere for Intelligent Industrial Production”.
\bibliographystyle{acm}
\section{Introduction}
\label{sec:intro}
As the demand for computing continues to grow year by year, so are operating expenses and the associated carbon emissions caused by consuming energy from the public power grid~\cite{Freitag_ICTClimateImpact_2021}.
So far, negative effects have been partially mitigated through advances in hardware efficiency, cooling, and the continuous shift of cloud computing towards highly energy-optimized hyperscale data centers, which already host about 50\,\% of all compute instances~\cite{Masanet_RecalibratingGlobalDCEnergyEstimates_2020}.
Still, data centers already account for more than 1\,\% of global energy consumption and this number is expected to rise further~\cite{Masanet_RecalibratingGlobalDCEnergyEstimates_2020} -- especially when considering the additional demand of novel domains like the internet of things (IoT), edge and fog computing~\cite{Wiesner_LEAF_2021}.
To reduce its carbon footprint, the IT industry is pushing to integrate more and more low-carbon energy sources into data centers~\cite{acun2022holistic}, not least because carbon pricing mechanisms, such as emission trading systems or carbon taxes, are starting to be implemented around the globe \cite{WorldBank_CarbonPricing_2020}.
For example, Google plans to operate their data centers solely on carbon-free energy by 2030 \cite{Google_CarbonFreeBy2030_2020}.
One approach towards more sustainable and cost-effective computing systems in cloud as well as edge environments is directly equipping IT infrastructure with on-site renewable energy sources like solar or wind~\cite{SDIA_OnsitePowerDataCenters_2021,LiYDPTKZ18}. %
However, especially smaller compute nodes, such as on-premise installations or edge data centers, are not always able to consume all generated power directly, as depicted in \autoref{fig:problem_overview}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/problem_overview.pdf}
\caption{Problem setting: Renewable excess energy can occur at compute nodes when local demand temporarily does not consume all produced energy. %
}
\label{fig:problem_overview}
\end{figure}
Energy storage can mitigate this problem to some extent, but is expensive, therefore often not available in sufficient capacity, and may be reserved to ensure operation during power outages.
Moreover, storing energy involves power conversion loss, and frequent charging cycles accelerate battery aging~\cite{Liu_BatteryAging_2017}.
On the other hand, feeding excess energy back to the power grid is often unattractive in practice due to statutory regulations and low compensation.
Modern microgrid solutions integrate renewables and energy storage to efficiently manage locally occurring excess energy, in a modular and flexible manner~\cite{Hirsch_Microgrids_2018}.
It is precisely such systems, however, that benefit greatly from participants who are also flexible and able to adapt their energy consumption to the expected supply.
To make better use of renewable excess energy (REE), delay-tolerant workloads originating locally or within the surrounding distributed system of a compute node should be computed if there is free capacity available.
Delay-tolerant workloads are common in cloud environments, ranging from machine learning jobs and certain Function-as-a-Service (FaaS) executions to nightly backups, CI/CD runs, and other periodic jobs like generating daily reports~\cite{Wiesner_LetsWaitAwhile_2021}.
However, they may also occur in otherwise time-critical edge computing environments, such as cache and index updates as well as federated and/or iterative machine learning trainings on locally available data at edge nodes.
We propose Cucumber, an admission control policy for delay-tolerant workloads in resource-constrained compute nodes that have access to renewable energy sources but no access to energy storage.
We assume that this infrastructure usually runs high-priority, time-critical workloads with quality of service (QoS) constraints, like user-facing services, but is not always fully utilized.
Cucumber admits delay-tolerant workloads to the local system only if they can be computed within their deadlines on free capacity and without the use of grid energy.
This leads to increased use of renewable energy sources, hence reducing associated carbon emissions and electricity costs, and contributes to stabilizing the power grid.
We furthermore expect Cucumber to be an integral building block of decentralized systems that exploit the varying spatio-temporal availability of renewable energy.
Towards this, we make the following contributions:
\begin{itemize}
\item we define a method for forecasting free computational capacity that can be powered using REE only. The prediction can be tuned towards conservative or optimistic results using probabilistic forecasts of load, energy consumption and energy production
\item based on these forecasts, we propose an admission control policy that decides whether incoming delay-tolerant workloads with known size and deadline can be scheduled on free capacity using REE only
\item we evaluate our approach on two scenarios using real solar production forecasts for Berlin, Mexico City, and Cape Town in a simulation environment
\item we make all datasets and code used for this experimental evaluation publicly available for future research to build on our results\footnote{Github: \url{https://github.com/dos-group/cucumber}}
\end{itemize}
The remainder of this paper is structured as follows:
\autoref{sec:related_work} reviews related work.
\autoref{sec:apporach} proposes the admission control policy and explains how we generate forecasts on free computational capacity that can be powered by REE.
\autoref{sec:evaluation} evaluates our approach.
\autoref{sec:conclusion} concludes the paper.
\input{sections/relatedwork}
\section{Admission Control}\label{sec:apporach}
Cucumber accepts delay-tolerant workloads based on forecasts of load, power consumption, and power production.
A high-level overview and outline of the approach are presented in \autoref{fig:approach_overview}.
This section describes all steps in detail. %
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/approach_overview.pdf}
\caption{Cucumber periodically forecasts computational load, power consumption, and power production to compute the \emph{freep} capacity forecast. It determines how much computational capacity will be available in the future that can be powered using REE only. Based on this forecast and the amount, size, and deadlines of already queued workloads, Cucumber accepts or rejects new workload requests.}
\label{fig:approach_overview}
\end{figure}
\subsection{Forecasting Load, Power Consumption, and Power Production}
Cucumber uses probabilistic multistep-ahead forecasts to predict time series of probability distributions, which capture the uncertainty of each observation, in order to later infer the available REE at different confidence levels.
If no probabilistic forecasts are available, Cucumber can still be operated in its default configuration based on the expected/median forecast.
\paragraph{Forecasting Computational Load.}
Load prediction is a widely researched field covering forecasts related to application metrics, such as the number of messages in a stream processing system~\cite{gontarska2021evaluation}, as well as the utilization of (virtualized) hardware resources like CPU, GPU or RAM~\cite{7724266}.
Although load prediction systems are usually formulated as time series forecasting problems based on historical data, they can also take information from other contexts into account.
For example, in edge computing use cases like traffic monitoring, additional information on weather, holidays, events, etc. can improve the forecast quality. %
Whatever type of forecast is most suitable in a concrete use case, Cucumber uses it to identify future time windows with free capacity.
Furthermore, these load predictions are used as a factor in the power consumption forecast.
In the following, we denote the load of a node as $U$ and any load forecasts as $U_\text{pred}$.
\paragraph{Forecasting Power Consumption.}
The power demand of IT infrastructures can be influenced by many factors like CPU or GPU usage, memory, I/O, and storage access, temperature, etc.
While perfect modeling without precise knowledge of workload and infrastructure characteristics is not possible~\cite{Koller_WattApp_2010}, it has been shown that power usage can often be modeled with sufficient accuracy based only on the node's utilization~\cite{Barroso_EnergyProportionalComputing_2007} -- which usually refers to its CPU usage.
In fact, power modeling based on CPU usage only is being used in production at modern hyper-scale data centers~\cite{Radovanovic_PowerModeling_2021}.
For simplicity, we here assume a linear power model to convert a certain load $U$ to the node's power usage $P$:
\begin{equation}
P = P_\text{static} + U \cdot (P_\text{max} - P_\text{static})
\label{eq:power}
\end{equation}
where $P_\text{static}$ is the amount of power a node consumes in idle state and $P_\text{max}$ is the amount of power the node consumes under full load.
Besides the energy used for computing, the power forecast should also take into account the expected demand of other co-located consumers powered by the renewable energy source, like cooling or lighting, to correctly derive the actually available REE.
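For illustration, the linear power model can be sketched in a few lines of Python. The function name is ours, and the default parameter values ($P_\text{static} = 30\,W$, $P_\text{max} = 180\,W$) are the ones used later in the evaluation:

```python
def power_usage(u, p_static=30.0, p_max=180.0):
    """Linear power model: map utilization u in [0, 1] to power draw in watts."""
    return p_static + u * (p_max - p_static)

power_usage(0.5)  # 105 W for a half-utilized node
```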
\paragraph{Forecasting Power Production.}
Since information on future power production is useful in many domains ranging from high-level application design to low-level grid control, the prediction of variable renewable energy sources like solar panels~\cite{BRIGHT2018118satellitederived,Khalyasmaa2019PredictionOS} and wind turbines~\cite{Alencar2017DifferentMFwindpowergenerationcasestudy,LI2021121075powerpredictionwindturbinesgaussian} is an active field of research. %
Such models are usually based on weather models for mid- and long-term forecasts as well as, in the case of solar, satellite data for short-term forecasts, which enables observing the position and movement of clouds~\cite{KALLIOMYERS202068irradianceforecast}.
Very short-term models with one-minute resolution can even be based on live video data using sky cameras\footnote{\url{https://solcast.com/utility-scale/solar-data-api/super-rapid-solar-farm-forecasts}}.
As wind and solar power production are known for their high variability, probabilistic prediction methods are especially common in this domain~\cite{VERBOIS2018313probabilisticforecastingsolarboosting,8982039probabilisticforecastingwindmixture}.
\subsection{Deriving the \emph{freep} capacity forecast}
Based on the previously generated forecasts, we now determine the main input to Cucumber's admission control: the \emph{freep} (\underline{f}ree \underline{REE}-\underline{p}owered) capacity forecast.
For this, we first calculate the REE forecast $P_\text{ree}$.
If no probabilistic forecasting was used to generate the power production $P_\text{prod}$ and consumption $P_\text{cons}$ forecasts, we can directly define $P_\text{ree} = \max(0,\ P_\text{prod} - P_\text{cons})$.
If probabilistic forecasting was applied, we now have the possibility to decide that $P_\text{ree}$ should describe a more optimistic or more conservative view of the future and hence manipulate the behavior of the admission control policy.
However, we need to differentiate between two kinds of probabilistic forecasts.
The first contains actual probability distributions for each forecasted observation, which, in practice, is mostly implemented as ensembles of non-deterministic single-value predictions.
In this case, the simplest way to build a joint distribution $P_\text{ree}$ is by randomly sampling from both distributions and subtracting the returned values for power production and consumption.
We can then use the quantile function $Q$
to determine a concrete single-valued time series.
\begin{equation}
P_\text{ree}^{\,\alpha} = \max(0,\ Q(\alpha,\,P_\text{ree}))
\end{equation}
where $\alpha \in [0,1]$ determines how \emph{optimistic} (big $\alpha$) or \emph{conservative} (small $\alpha$) our forecasts are.
For example, $P_\text{ree}^{\,0.95}$ returns the 95\textsuperscript{th} percentile of $P_\text{ree}$.
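As a sketch of this ensemble case, the joint distribution can be built by randomly pairing production and consumption ensemble members before taking the $\alpha$-quantile. The function name and the array layout (one row per ensemble member, one column per time step) are our assumptions:

```python
import numpy as np

def ree_quantile(prod_ensemble, cons_ensemble, alpha, n_samples=10_000, seed=None):
    """Sample a joint P_ree distribution from two forecast ensembles of shape
    (n_members, n_steps) and return its alpha-quantile, floored at zero."""
    rng = np.random.default_rng(seed)
    prod = prod_ensemble[rng.integers(0, len(prod_ensemble), n_samples)]
    cons = cons_ensemble[rng.integers(0, len(cons_ensemble), n_samples)]
    ree = prod - cons  # sampled joint distribution, per time step
    return np.maximum(0.0, np.quantile(ree, alpha, axis=0))
```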
In the second case, one or both forecasts do not contain the actual distributions but only values for a number of pre-initialized quantiles, usually the median and an upper and lower bound like the 10\textsuperscript{th} and 90\textsuperscript{th} percentile.
In this case, we propose a fall-back method as we cannot simply join the distributions:
\begin{equation}
P_\text{ree}^{\,\alpha'} = \max(0,\ Q(\alpha,\,P_\text{prod}) - Q(1-\alpha,\,P_\text{cons}))
\end{equation}
where $\alpha'$ can only take certain values determined by the pre-initialized quantiles. Note that, using this equation, $\alpha'$ carries the same semantics as $\alpha$ (e.g. a big $\alpha'$ represents optimistic forecasts) but provides no guarantees of actual probability.
In the following, we use $\alpha$ and $\alpha'$ interchangeably.
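The fall-back case, where only pre-initialized quantiles (e.g. the 10\textsuperscript{th}, 50\textsuperscript{th}, and 90\textsuperscript{th} percentiles) are available, reduces to a lookup. The dict-based representation and the function name are our assumptions:

```python
import numpy as np

def ree_quantile_fallback(prod_q, cons_q, alpha):
    """prod_q / cons_q map quantile levels (e.g. 0.1, 0.5, 0.9) to time series
    in watts; optimistic production is paired with conservative consumption
    and vice versa, as we cannot join the underlying distributions."""
    prod = np.asarray(prod_q[alpha])
    cons = np.asarray(cons_q[round(1 - alpha, 10)])  # round guards float keys
    return np.maximum(0.0, prod - cons)
```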
Using the forecasts for computational load $U_\text{pred}$ and available REE $P_\text{ree}^{\,\alpha}$, we can now compute the \emph{freep} capacity forecast $U_\emph{freep}$, which determines how much of the free capacity in the future can be powered using only REE:
\begin{equation}
\label{eq:uflex}
U_\emph{freep} = \min(
\overbrace{\rule{0pt}{3ex}{1 - U_\text{pred}}}^{\textstyle U_\text{free}},\
\overbrace{\rule{0pt}{3ex}{\frac{P_\text{max}^{\,\alpha} - P_\text{static}}{P_\text{max} - P_\text{static}}}}^{\textstyle U_\text{reep}}
)
\end{equation}
The \emph{freep} capacity forecast is defined as the minimum of $U_\text{free}$, the expected free capacity of the node, and $U_\text{reep}$, the expected fraction of capacity that could be REE-powered.
If $U_\text{pred}$ is a probabilistic forecast, it first has to be converted to a single-valued time series, for example using $Q(0.5, U_\text{pred})$.
The equation for $U_\text{reep}$ depends on the power model used; here, it was derived by rearranging the linear power model from~\autoref{eq:power}.
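Reading $P_\text{max}^{\,\alpha}$ as the REE-derived power budget $P_\text{static} + P_\text{ree}^{\,\alpha}$ (our interpretation), the \emph{freep} capacity forecast is a small element-wise computation. The function name and default parameters are our assumptions:

```python
import numpy as np

def freep_capacity(u_pred, p_ree_alpha, p_static=30.0, p_max=180.0):
    """Per time step: min of free capacity and REE-powerable capacity.

    u_pred: predicted load in [0, 1]; p_ree_alpha: REE forecast in watts."""
    u_free = 1.0 - np.asarray(u_pred)
    u_reep = np.asarray(p_ree_alpha) / (p_max - p_static)  # inverted power model
    return np.minimum(u_free, u_reep)
```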
\subsection{Admission Control Policy}
\begin{wrapfigure}{r}{0.52\textwidth}
\vspace{-1.3cm}
\begin{center}
\includegraphics[width=0.50\textwidth]{figures/admission_controll_overview.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Cucumber rejects workloads if it expects any future deadline violations using the \emph{freep} capacity forecast.}
\vspace{-0.5cm}
\end{wrapfigure}
Cucumber admits workload requests based on the above derived \emph{freep} capacity forecast and the amount, size, and deadlines of already queued workloads.
For this, all workload requests are expected to provide a job size estimate and a deadline.
In practice, deadlines are often provided directly by users or services or can be derived from, for example, application graphs.
Estimating the size of jobs is a common problem in scheduling and is usually performed based on previous executions of the same or similar workloads.
In the current approach, we do not consider uncertainty in job size estimates, parallelism, or additional resource constraints besides computational load, like memory.
However, Cucumber can be extended to consider such factors.
The approach is agnostic to the applied scheduling mechanism, including multiple levels of priority or preemptive workloads, as long as it can be reliably modeled with the available information.
For every incoming request, Cucumber models the expected processing of the queue if the workload was accepted and evaluates if any deadlines are being violated.
In particular, for every workload in the new queue, it progresses the time on the \emph{freep} capacity forecast until the expected (remaining) workload size is covered and then checks if the workload's deadline was violated.
If any violation occurs, the request gets rejected, otherwise accepted.
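A minimal sketch of this re-evaluation, assuming sequential non-preemptive earliest-deadline-first processing (as in our evaluation), jobs as hypothetical (size, deadline) tuples with sizes in capacity-seconds and deadlines in seconds from now, and a \emph{freep} forecast with one utilization value per interval of length \texttt{dt}:

```python
def admit(new_job, queue, freep_forecast, dt=600):
    """Reject the request if simulating the extended queue on the freep
    capacity forecast would violate any deadline."""
    jobs = sorted(queue + [new_job], key=lambda j: j[1])  # earliest deadline first
    steps, t = list(freep_forecast), 0.0
    for size, deadline in jobs:
        remaining = size
        while remaining > 1e-9:
            if t >= len(steps) * dt:
                return False  # forecast horizon exhausted
            i = int(t // dt)
            cap, step_end = steps[i], (i + 1) * dt
            if cap <= 0:
                t = step_end  # no freep capacity in this interval
                continue
            work = min(remaining, cap * (step_end - t))
            t += work / cap
            remaining -= work
        if t > deadline:
            return False  # this job would miss its deadline
    return True
```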
Depending on the number of workload requests and the average queue length, this basic algorithm can become computationally inefficient, since a re-evaluation has to take place for each request.
However, performance issues can be mitigated in many ways, for example by grouping jobs with the same or similar deadlines and only evaluating violations per group.
Moreover, different heuristics can be applied to decrease the number of re-evaluations, like caching the remaining time and capacity of each group until their deadline, and only performing a full re-evaluation once violations become likely.
Concrete performance adjustments depend on the nature of the underlying system, such as the level of parallelization as well as frequency, distribution, and kind of incoming workloads.
\subsection{Limiting Power Consumption at Runtime}
To ensure that accepted workloads run on REE only, their resource usage needs to be limited at runtime.
In practice, there are several ways to approach this, including adjustments of hardware power and speed settings like dynamic voltage and frequency scaling (DVFS).
Nevertheless, as a simple approach, modern high-level tools and resource orchestration solutions allow for conveniently controlling the usage of resources such as CPU or GPU.
For instance, the CPU usage of a process can be limited using tools like \emph{cpulimit}.
Likewise, frameworks like Docker and Kubernetes have built-in flags for limiting CPU usage by adapting the settings of a container’s \emph{cgroup}.
As load $U$ and available REE $P_\text{ree}$ can be measured periodically at runtime to derive the currently REE-powerable capacity, such tools can be used to adjust the node's power consumption to the correct level without interfering with the time-critical baseload.
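Under the linear power model, the runtime limit follows directly from the measured baseload utilization and the currently available REE. The sketch below (assumed parameter values, hypothetical function name) yields a percentage that could, for instance, be passed to \emph{cpulimit}'s \texttt{-l} flag:

```python
def runtime_cpu_limit(u_baseload, p_ree, p_static=30.0, p_max=180.0):
    """CPU percentage that delay-tolerant workloads may use on top of the
    baseload while drawing at most the currently measured REE (watts)."""
    u_extra = max(0.0, p_ree) / (p_max - p_static)
    return round(min(1.0 - u_baseload, u_extra) * 100)

runtime_cpu_limit(0.4, 75.0)  # 50 % CPU limit
```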
However, the suitability of this simple approach depends highly on the concrete environment and more sophisticated measures might be needed in certain scenarios.
Even when performing admission control at a low $\alpha$ (meaning in conservative mode), conditions at runtime might still be worse than expected.
If less REE is available than forecasted, the previously described power limiting could lead to deadline violations of accepted jobs, although there is free computational capacity available.
While this behavior might be acceptable in some environments, usually it is more important to meet promised deadlines than ensuring that no grid energy is used at all.
To mitigate violations, Cucumber uses the \emph{freep} capacity forecasts at runtime to periodically evaluate whether the currently active jobs can still meet their deadlines.
If a running job is expected to violate its deadline, we temporarily stop power limiting and finish it using all free capacity $U_\text{free}$.
Since load forecasts are also uncertain, deadline violations still cannot be completely ruled out, but they will be mitigated as effectively as possible based on the current state of knowledge.
\section{Evaluation}\label{sec:evaluation}
We evaluate Cucumber on real datasets over the course of two weeks (January 18-31) using the discrete-event simulation framework SimPy.
In total, 36 experiments were conducted: Six admission control policies (three baselines and Cucumber at $\alpha \in \{0.1, 0.5, 0.9\}$) in two scenarios at three solar sites each.
All data and simulation code are publicly available as mentioned in \autoref{sec:intro}.
\subsection{Experimental Setup}
We want to explain upfront some simplifications made in our simulation-based evaluation.
First, we assume that the reported size of workload requests is always correct, while in practice runtime estimates are often noisy.
Yet, we consider this a problem not addressed by Cucumber.
Second, we do not explicitly model parallelism but process the workload queue next to the time-critical baseload in sequential order using non-preemptive earliest deadline first (EDF) scheduling.
Third, we do not model the energy demand of Cucumber itself.
However, we expect its overhead to be very small as forecasts are only updated every 10 minutes and the admission control itself can be implemented efficiently.
\subsubsection{Admission Control Policies}
We evaluate six admission control policies for each of the below-described scenarios and solar sites.
If deadlines are violated, jobs are not canceled but continue to run until they are completed.
\begin{itemize}
\item \emph{Optimal w/o REE} accepts workloads using perfect forecasts for $U_{pred}$ but without considering the availability of REE. It declares the upper bound for accepted jobs without deadline misses but accepts high grid power usage.
\item \emph{Optimal REE-Aware} accepts workloads using perfect load and renewable energy production forecasts. It declares the upper bound for accepted jobs without deadline misses and without any grid power usage.
\item \emph{Naive} accepts workloads only if there is currently REE available and there is no other workload in process. This approach does not rely on forecasts.
\item \emph{Conservative}, \emph{Expected}, and \emph{Optimistic} describe Cucumber admission control using realistic forecasts at $\alpha \in \{0.1, 0.5, 0.9\}$, respectively.
\end{itemize}
\subsubsection{Scenarios}
We define two scenarios where each consists of a high-priority baseload and a number of workload requests.
Exemplary baseload patterns are depicted in \autoref{fig:datasets}.
Since, to the best of our knowledge, trace datasets with information on the delay-tolerance of workloads do not exist yet, we modeled both scenarios based on related real-world datasets:
\begin{enumerate}
\item \emph{ML Training} is based on the \emph{cluster-trace-gpu-v2020} dataset from the Alibaba Cluster Trace Program\footnote{\url{https://github.com/alibaba/clusterdata}}, which contains two months of traces from a GPU production cluster~\cite{Alibaba_Data_2022}.
Baseload is modeled using tasks labeled as \emph{worker}, which are highly variable and hard to predict.
Each of the 5477 delay-tolerant workload requests corresponds to an \emph{xComputeWorker} task in the dataset.
The size of workloads is determined by the \emph{plan\_gpu} property and each workload has to be finished by midnight the day it was issued, meaning deadlines can be anywhere from 0 to 24 hours.
\item \emph{Edge Computing} is based on the NYC Taxi Trip dataset\footnote{\url{https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page}} from Dec 2020 and Jan 2021. Baseload is modeled on the number of yellow taxi rides, which is highly seasonal. The 2967 workload requests correspond to long-distance green taxi rides: Every green taxi ride over 10\,km in length emits a job at \emph{lpep\_pickup\_datetime} which has to be completed by \emph{lpep\_dropoff\_datetime}. The median deadline is 41 minutes. All jobs have the same size.
\end{enumerate}
We generated baseload forecasts by training a DeepAR~\cite{salinas2020deepar} probabilistic forecasting model\footnote{DeepAR parameters: GRU, 3 Layers, 64 nodes, 0.1 Dropout; 20-30 minutes training time on commodity hardware} on the first 1.5 months of data to then generate 24-hour forecasts with a 10-minute resolution for every 10-minute step in the last two weeks of the datasets.
Note, that the arrival rate of workload requests is not forecasted by Cucumber.
Power consumption forecasts are derived using \autoref{eq:power} with $P_\text{max} = 180\,W$ and $P_\text{static} = 30\,W$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{figures/baseload.pdf}
\includegraphics[width=0.9\textwidth]{figures/solar_sites.pdf}
\caption{In red: actual and forecasted baseload power consumption in both scenarios at an exemplary day. In green: exemplary power production at the three solar sites.}
\label{fig:datasets}
\end{figure}
\vspace{-0.7cm}
\subsubsection{Solar Sites}
We assume every compute node has access to a solar panel with 400\,W peak production.
We collected real solar power production forecasts using the Solcast\footnote{\url{https://solcast.com}} utility-scale API during the second half of January 2022.
Like load forecasts, the solar forecasts cover 24 hours in 10-minute resolution each and were generated in 10-minute intervals.
Each forecast contains the median as well as the 10\textsuperscript{th} and 90\textsuperscript{th} percentile of expected energy production for each time step.
To evaluate the effectiveness of our approach at different geographical locations and during different seasons, we gathered forecasts at three different sites located at different continents and latitudes:
\begin{enumerate}
\item \emph{Berlin} during winter (8 hours of daylight; 2 hours of sunshine)
\item \emph{Mexico City} during the dry season (11 hours of daylight; 7 hours of sunshine)
\item \emph{Cape Town} during summer (14 hours of daylight; 11 hours of sunshine)
\end{enumerate}
For orientation, the roughly expected hours of daylight and sunshine in January at each site are listed in parentheses.
Exemplary values for each site are displayed in \autoref{fig:datasets}.
\subsection{Results}
For each experiment, we report the admission control acceptance rate and the fraction of REE that was used to actually power the workloads.
\autoref{fig:evaluation} illustrates the results.
\vspace{-0.3cm}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/evaluation.pdf}
\caption{Acceptance rate of workload requests and the fraction of these workloads that was actually powered via REE during execution (green).}
\label{fig:evaluation}
\end{figure}
\vspace{-0.3cm}
As expected, \emph{Optimal w/o REE} accepts almost all workload requests at the cost of requiring a substantial amount of grid energy.
Worth mentioning is the constant acceptance rate of 100\,\% across all experiments of the ML Training scenario, which is a result of the rather relaxed deadlines.
The stricter deadlines in the Edge Computing scenario lead to a slight decrease in acceptance rates.
Both baselines utilize perfect forecasts but only \emph{Optimal REE-Aware} considers available REE, which is why it does not use any grid energy across all experiments.
We observe that there was barely any REE available at the Berlin solar site during the observed period.
Even \emph{Optimal REE-Aware} accepts only a maximum of 2\,\% of all workloads.
Since the uncertainty and error rate of solar forecasts is extremely high at the Berlin site, only \emph{Conservative} forecasts achieved comparably low grid power usage.
Admission control based on \emph{Optimistic} and \emph{Expected} forecasts resulted in very low REE usage of 5.7\,-\,10.7\,\% and 34.8\,-\,45.7\,\%, respectively.
Under such conditions, the usage of a forecast-based admission control policy such as Cucumber can hardly be justified, as it does not show improved performance compared to a \emph{Naive} approach.
However, in Mexico City and Cape Town, which had considerably longer days and better weather during January than Berlin, Cucumber clearly outperforms the \emph{Naive} admission control, which achieves a 31.1\,\% acceptance rate at 97.3\,\% REE usage on average.
Cucumber's \emph{Expected} case configuration maintains almost the same REE usage (97.0\,\%) but increases the acceptance rate to 37.8\,\%, while the \emph{Conservative} configuration manages 99.9\,\% REE usage at an acceptance rate of 31.9\,\%.
The trade-off when tuning the forecasts is clearly visible:
While \emph{Conservative} admission control results in almost perfect REE coverage, the acceptance rate was on average 18.5\,\% lower.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/boxplots.pdf}
\caption{Aggregated number of accepted workloads per hour for all admission control policies during the ML Training scenario in Mexico City. The orange line indicates the average solar production during a day.}
\label{fig:boxplots}
\end{figure}
\autoref{fig:boxplots} depicts the aggregated number of jobs per hour for an exemplary solar site on the ML Training scenario (all deadlines are midnight).
We observe that the acceptance rate over time differs strongly between the different approaches:
While \emph{Optimal w/o REE} describes all workloads that can be accepted without deadline violations, \emph{Optimal REE-Aware} describes the optimal subset that can be computed using only REE.
The \emph{Naive} approach cannot exploit this potential, as it only accepts workloads once there is REE available.
The Cucumber admission control, on the other hand, is based on forecasts of REE and hence already accepts workloads before the sun is rising.
We observe that the \emph{Expected} case's behaviour is close to the optimal case and almost all jobs before 11\,am get accepted.
After that, the number of accepted jobs per hour falls drastically since the forecasted solar energy until midnight is already reserved by queued workloads and forecasts in Mexico City are comparably precise.
In \emph{Conservative} mode, Cucumber is more cautious and accepts fewer jobs during early morning hours.
However, it accepts additional jobs throughout the day as uncertainty decreases when progressing in time.
We note that \emph{Optimistic} forecasts barely increase REE usage compared to Expected forecasts in most experiments.
For example, the acceptance rate for the Edge Computing scenario in Mexico City went up by 16.3\,\%, but the REE usage by only 0.5\,\%, meaning that almost all additionally accepted jobs were powered by grid energy.
Furthermore, we note that the \emph{Optimistic} experiments resulted in 1, 5, and 7 deadline misses in the Edge Computing scenario (which has tight deadlines), while none of the other configurations caused any deadline misses.
We conclude that users should pick $\alpha > 0.5$ with caution.
\section{Conclusion}\label{sec:conclusion}
This paper presents Cucumber, a configurable admission control policy for re\-source-con\-strained compute nodes with on-site renewable energy sources.
Cucumber accepts delay-tolerant workloads to increase REE utilization through probabilistic multistep-ahead forecasts of computational load, energy consumption, and energy production.
Our simulation-based evaluation uses real solar production forecasts for Berlin, Mexico City, and Cape Town and compares different configurations of our approach with baseline policies on two exemplary scenarios.
The results show that Cucumber's default configuration reaches acceptance rates similar to the optimal case baseline while achieving an REE coverage of 97.0\,\% on average in Mexico City and Cape Town.
Conservative admission results in almost perfect REE coverage at an 18.5\,\% reduced acceptance rate.
For future work, we plan to implement Cucumber in a hardware testbed to study its behavior and computational overhead under realistic conditions.
Furthermore, we want to extend the approach to also consider available energy storage and make Cucumber part of a decentralized architecture that exploits the spatio-temporal availability of REE in a distributed system via local decisions.
\section*{Acknowledgments}
We sincerely thank Solcast for the uncomplicated and free access to their solar forecast APIs.
This research was supported by the German Academic Exchange Service (DAAD) as ide3a and the German Ministry for~Education and Research (BMBF) as \mbox{BIFOLD} (grant 01IS18025A) and Software Campus (01IS17050).
\bibliographystyle{splncs04}
\section{Related Work}
\label{sec:related_work}
\paragraph{Carbon-Aware and Renewable-Aware Computing.}
Incorporating the availability of renewable or low-carbon power into scheduling decisions
has been increasingly researched over the last decade.
However, many works in this context focus on load migration in geo-distributed settings or optimize for low carbon signals in the public power grid.
For example, Google employs a suite of analytics pipelines %
to defer delay-tolerant workloads if power from the public grid is associated with high carbon intensity~\cite{Radovanovic_Google_2021}.
While their work is targeted at large-scale data centers, Cucumber is meant to be deployed in resource-constrained environments with direct access to renewable energy sources.
Toosi et al.~\cite{ToosiQAB17noaprioriknowledge} proposed a load balancer for web applications that increases on-site renewable energy usage at data centers.
However, unlike Cucumber, their approach is reactive and does not make use of forecasting for better decisions.
GreenSlot~\cite{Goiri_MatchingRenewableEnergyGreenDatacenters_2015} is a batch job scheduler for data centers with on-site renewable energy sources using hourly predictions of solar energy and optimizes for a low price if grid power usage is unavoidable.
Cucumber, on the other hand, aims at using REE only and tries to avoid using any grid power.
Aksanli et al.~\cite{Aksanli_GreenEnergyPredictionScheduleBatchServiceJobs_2011} proposed a scheduler using short-term predictions of wind and solar power to reduce grid power usage and the number of canceled jobs.
In contrast, Cucumber rejects workloads in danger of violating their deadlines upfront so they can be scheduled elsewhere.
The Zero-Carbon Cloud~\cite{chien2019zero} is the only project of which we are aware that aims at exploiting REE by placing data centers close to renewable energy sources.
Our approach complements these efforts and opens up a way to distribute workloads in a decentralized manner across their proposed infrastructure by making local decisions about whether or not to accept a job.
\vspace{-0.1cm}
\paragraph{Admission Control}
is a validation process in communication systems to check if sufficient resources are available to, for example, establish a network connection or process a request.
Unlike most publications on admission control, which operate at the network packet level, we consider workloads that can be several minutes or even hours long.
Because of this, most related work is in the context of web-based applications or cloud computing where certain requests are prioritized to improve quality of service (QoS) or maximize revenue~\cite{ChenMC01,YuanBTL16}.
An admission control policy for green computing was proposed in~\cite{Hazemi13}, where a PID controller, as used in industrial control applications, is extended with a hybrid green policy to reduce grid power usage.
Eco-IDC~\cite{LuoRL14} targets energy-aware admission control on a data center level by exploiting electricity price changes while dropping excessive workload if required. %
Unlike these approaches, Cucumber optimizes for utilizing locally available REE while prioritizing time-critical workloads.
Furthermore, our approach utilizes probabilistic forecasting methods to be configurable towards more optimistic or conservative admission.
\section{Introduction}
In recent years, interest in applying deep learning to robotic manipulation has increased. However, the lack of cheap data has proven to be a significant limitation \cite{DisbandOpenAI2021}. To enable applications such as smart and flexible manufacturing, logistics, and care-giving robots \cite{WEFRobots}, we must develop methods that learn from smaller datasets, especially if the learning is done online on real robots.
One of the simplest and most effective ways to mitigate the problem of small datasets is to use data augmentation. While data augmentation has been shown to significantly improve generalization performance in tasks like image classification, it is not straightforward to extend existing data augmentation methods to the types of data used in robotic manipulation. Furthermore, most existing augmentation methods fall into one of two categories, and both have severe limitations:
In the first category, augmentations are defined by a set of transformations, sampled independently for each example. Most image augmentation methods fall into this category, where rotations or crops are sampled randomly for each example \cite{Augerino2020,AutoAugment,BestPractice2003}. By making augmentations independent of the example being augmented, we are restricted to operations which are valid on all examples. In the second category, there are methods which learn a generative model (VAE, GAN, etc.) of the data, and then sample new training examples from that model \cite{BayesianDATran2017,MaterialsAEOhno2020,PriceForecastingAE2021}. This approach assumes a useful generative model can be learned from the given dataset, but we found these methods did not perform well when the dataset is small.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/title_figure.png}
\caption{A mock-up of a car engine bay. The robot must move the rope and place it under the engine without snagging it to set up for lifting the engine. We use data augmentation to improve task success rate during online learning for this task.}
\label{fig:real_robot_setup}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{images/rope_aug_examples.png}
\caption{Examples of augmentations of rope generated by our method. On the left is a picture of the scene in simulation from a zoomed out viewpoint. The simplified engine block model is in the center. The rope start (dark blue) and end (light blue) states are shown, with the grippers shown at the start state. The static environment geometry is shown in brown. The first row shows a transition in free space, where the resulting augmentations are particularly diverse. The final augmentation shows how our method found a transformation to move the rope underneath the hook while remaining in free space. The second row shows a transition which involves contact between the rope and the environment. The augmentations preserve this contact.}
\label{fig:rope_aug_examples}
\end{figure*}
It is not trivial to define a coherent framework for data augmentation that encompasses many domains and many types of learning problems (e.g. classification and regression). Thus, the first contribution of this paper is a formalization of the data augmentation problem. In our problem statement (Section \ref{sec:problem}), we formalize data augmentation as an optimization problem based on three key criteria: \textit{validity}, \textit{relevance}, and \textit{diversity}. We define an augmented example as \textit{valid} if it obeys the laws of physics and is possible in the real world. Augmentations are \textit{relevant} if they are similar to data that would be seen when performing the target task. \textit{Diversity} encourages the augmentations to be as varied as possible, i.e. the transformations applied to the data should be uniformly distributed to maximize diversity. Producing diverse augmentations for each original example is key to improving the generalization of the trained network.
The general definitions of validity, relevance, and diversity we propose depend on information that is intractable to compute for many manipulation problems, and therefore we also present approximations to these definitions. We do not claim that this formulation is useful for all manipulation problems, and clearly define the physical assumptions behind this formulation in Section \ref{sec:problem}.
Our second contribution is a method for solving this approximated optimization problem. Our method operates on trajectories of object poses and velocities, and searches for rigid-body transformations to apply to the moving objects in the scene to produce augmentations. Our method encourages validity by preserving contacts and the influence of gravity. Additionally, we encourage relevance by initializing the augmentations nearby the original examples and preserving near-contacts. Finally, we encourage diversity by pushing the augmentations towards randomly sampled targets.
Our results demonstrate that training on our augmentations improves downstream task performance for a simulated cluttered planar-pushing task and a simulated bimanual rope manipulation task. The learning problems in these tasks include classification and regression, and have high-dimensional inputs and outputs. Lastly, we demonstrate our augmentation in an online learning scenario on real-robot bimanual rope manipulation using noisy real-world perception data (Figure \ref{fig:real_robot_setup}). In this scenario, augmentation increased the success rate from 27\% to 50\% in only 30 trials. Additional materials such as code and video can be found on our \href{https://sites.google.com/view/data-augmentation4manipulation}{project website}.
\section{Related Work}
This paper studies the problem of learning better models from limited data, which is important for robotics applications and has received significant attention as researchers have tried to apply deep learning to robotics \cite{DisbandOpenAI2021,ReviewKroemer2021}.
Data augmentation has been applied to many machine learning problems, from material science \cite{MaterialsAEOhno2020} to financial modeling \cite{PriceForecastingAE2021} (see \cite{TimeSeriesSurveyIwana2020,NLPSurveyFeng2021,ImageAugSurvey2019} for several surveys). It is especially common in computer vision \cite{ImageAugSurvey2019,BestPractice2003,AutoAugment,RLAugLaskin2020,Augerino2020}, and is also popular in natural language processing \cite{NLPSurveyFeng2021,NLPMa2019}. In these fields the data is often in standardized data types---images, text, or vectors of non-physical features (e.g. prices). Each of the data types can be used for a wide variety of tasks, and various data augmentations have been developed for various pairings of data type and task.
However, problems in robotic manipulation use other formats of data, such as point clouds, or object poses, and may consist of time-series data mixed with time-invariant data. To the best of our knowledge, there are currently no augmentation methods designed specifically for data of the types above. Our proposed method is intended to fill this gap.
In contrast to engineering augmentations based on prior knowledge, another body of work uses unsupervised generative models to generate augmentations \cite{BayesianDATran2017,MaterialsAEOhno2020,PriceForecastingAE2021}. Typically, these methods train a model like an Auto-Encoder or Generative Adversarial Network (GAN)~\cite{GANGoodfellow14} on the data, encode the input data into the latent space, perturb the data in the latent space, then decode to produce the augmented examples. These methods can be applied to any data type, and handle both regression and classification problems. However, they do not incorporate prior knowledge, and only add small but sophisticated noise. In contrast, we embed prior knowledge about the physical and spatial nature of manipulation, and as a result can produce large and meaningful augmentations, at the cost of being less generally-applicable.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{images/cylinders_aug_examples.png}
\caption{Examples of augmentations generated for learning the dynamics of planar pushing of 9 cylinders. The pink cylinder is the robot. Time is indicated by transparency. Augmentation transforms the positions and velocities of the cylinders that moved, including the robot. All moved objects are transformed together, rigidly. Despite the clutter, we are able to find relatively large transformations that still preserve existing contacts but do not create any new ones.}
\label{fig:cylinders_aug_examples}
\end{figure*}
Although data augmentation is a popular approach, there are many other methods that have been proposed for data-efficient learning, both in general and specifically for robotic manipulation. We highlight a few important ones here, but note that these are all complementary to data augmentation. The most common technique is simply to pick a low-capacity model class, such as linear models or very small neural networks \cite{LinearAlarcon2013,TAMPC2021}. Alternatively, prior work has also developed various sets of priors specific to robotics \cite{RoboticPriors2015} which can be used as objectives during training. Another extremely useful technique is to engineer the state or action representations to include certain known invariances. For instance, a standard technique in dynamics learning methods is to represent the input positions in a local frame as opposed to a world frame, to encode translation invariance \cite{Propnet}. There are also methods for learning these kinds of invariances \cite{TAMPC2021}.
Finally, there are methods which automatically tune the parameters of a space of manually-defined augmentations, such as rotations and crops \cite{Augerino2020,AutoAugment}. These techniques are also compatible with our proposed method, and could be used to tune the hyperparameters of our algorithm.
\section{Problem Statement}
\label{sec:problem}
In this section, we formally define the form of data augmentation studied in this paper. We define a dataset $\mathcal{D}$ as a list of examples $x\in\mathcal{X}$ and, optionally, labels $\ell(x)=w$, where $\ell$ is a task-specific labeling function. We assume the space $\mathcal{X}$ is a metric space with a distance function $\mathrm{dist}$. Augmentation is a stochastic function $\augf:\exampleSpace\rightarrow\exampleSpace$ which takes in an example $x$ and produces the augmented example $\aug{\example}$. The general form is shown in Algorithm \ref{alg:aug_prototype}. Internally, augmentation will call \texttt{sample}{} to generate a vector of parameters, which we call $T$. We also define $\exampleAug_{1:\nAug}$ as a set of $k$ augmented examples produced by calling \texttt{sample}{} then \texttt{apply}{} $k$ times. The parameters $T$ describe the transformation which will be applied to the example in the \texttt{apply}{} procedure. We focus on augmentation functions that are stochastic, thus $\phi$ will sample new augmented examples each time it is called. If the dataset contains labels $w$, we assume that the labels should not change when the example is augmented.
\begin{algorithm}[t]
\caption{$\phi(x)$}\label{alg:aug_prototype}
$T = \texttt{sample}{}(x) $\\
$\aug{x} = \texttt{apply}(x,T)$\\
return $\aug{x}$\\
\end{algorithm}
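For concreteness, the generic interface of Algorithm \ref{alg:aug_prototype} can be sketched in Python as below. The translation-only \texttt{apply} and the default parameter bounds are illustrative placeholders, not part of the method:

```python
import numpy as np

def sample_transform(rng, lo, hi):
    # Draw the parameter vector T uniformly within [lo, hi].
    return rng.uniform(lo, hi)

def apply_transform(points, T):
    # Toy apply(): treat T as a planar translation of a point set.
    return points + T

def augment(points, rng, lo=(-1.0, -1.0), hi=(1.0, 1.0)):
    # phi(x): sample T, then apply it; stochastic by construction.
    T = sample_transform(rng, np.asarray(lo), np.asarray(hi))
    return apply_transform(points, T), T
```

Calling \texttt{augment} repeatedly on the same example yields different augmented examples, as required of a stochastic $\phi$.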
We propose that useful augmentations should be \textit{valid}, \textit{relevant}, and \textit{diverse}. Let the valid set $\mathcal{X}_v$ be the set of examples which are physically possible. Let the relevant set $\mathcal{X}_r$ be the set of examples likely to occur when collecting data for or executing a specific set of tasks in a specific domain. We define $\mathrm{validity}(\aug{\example})=1$ if $\aug{\example} \in \mathcal{X}_v$ and $\mathrm{validity}(\aug{\example})=0$ otherwise. We also define $\mathrm{relevance}(\aug{\example}):=e^{-\distf{\aug{\example}}{\mathcal{X}_r}}$ and $\mathrm{diversity}(\exampleAug_{1:\nAug}):=e^{-\kl{\transformDist}{\transformUniform}}$, where $D_{KL}$ is the Kullback–Leibler divergence and $p_{\exampleAugSet}(\transform)$ is the distribution of the parameters for a set of augmented examples $\exampleAug_{1:\nAug}$. Diversity is maximized when the augmentation transformations are uniformly distributed in the range $[\transform^-,\transform^+]$. With these concepts defined, we define data augmentation as the following optimization problem, the solution to which is a set of augmentations $\exampleAug_{1:\nAug}$:
\begin{equation}
\label{eq:most_general}
\begin{array}{cc}
\underset{\exampleAug_{1:\nAug}}{\mathrm{max}} & \mathrm{diversity}(\exampleAug_{1:\nAug}) + \beta \sum_{\aug{\example}_i}\mathrm{relevance}(\aug{\example}_i) \\
\mathrm{subject\: to} & \mathrm{validity}(\aug{\example}_i)\:\:\quad \forall\,\aug{\example}_i \in \exampleAug_{1:\nAug}\\
& \ell(\aug{x})=\ell(x) \\
\end{array}
\end{equation}
\noindent where $\beta$ is a positive scalar.
This optimization problem can be solved directly if $\mathcal{X}_v$, $\mathcal{X}_r$, and $\ell$ are known. However, in manipulation tasks, that is rarely the case. Instead, we will formulate an approximation to this problem using measures of relevance, diversity, and validity that are derived from physics and useful for a variety of robotic manipulation tasks and domains.
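As a concrete illustration of the diversity term, the KL divergence between the empirical distribution of transform parameters and the uniform reference can be approximated with a histogram. This sketch handles a single scalar parameter and is illustrative only:

```python
import numpy as np

def diversity_score(T_samples, lo, hi, bins=10):
    # Histogram approximation of D_KL(p_T || Uniform[lo, hi]) for one
    # scalar transform parameter, mapped to exp(-KL) as in the text.
    hist, _ = np.histogram(T_samples, bins=bins, range=(lo, hi))
    p = hist / hist.sum()
    q = np.full(bins, 1.0 / bins)  # uniform reference distribution
    mask = p > 0                   # 0 * log 0 = 0 by convention
    kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return np.exp(-kl)
```

Uniformly distributed parameters give a score near 1, while parameters concentrated in a narrow range score lower.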
\begin{figure}
\centering
\includegraphics[height=3.85cm]{images/car_sim_env-cropped.png}
\includegraphics[height=3.85cm]{images/cylinders_env.png}
\caption{(left) The environment for bimanual rope manipulation, in simulation. (right) The environment for cluttered planar pushing of cylinders, in simulation.}
\label{fig:sim_envs}
\end{figure}
\subsection{Assumptions}
\label{sec:assumptions}
Most augmentation algorithms rely on some expert knowledge or heuristics to define what is a valid augmentation. For instance, rotating an image for image classification makes an assumption that rotation does not change the label, and this is not always true. Similarly, the efficacy or correctness of our algorithm is also subject to certain assumptions. Here, we define the key assumptions:
\begin{itemize}
\item The geometry of the robot and all objects is known.
\item The scene can be decomposed into objects which can be assigned or detected as either moving or stationary.
\item Examples are time-series, consisting of at least two states.
\item All contacts between stationary and moving objects share the same friction coefficient.
\item Contacts between the robot and objects/environment (e.g. grasps) can be determined from the data.
\item A rigid-body transformation of an object preserves internal forces arising from its material properties.
\item Objects only move due to contact or under the force of gravity. We do not handle movement due to magnetism or wind, for example.
\end{itemize}
Notably, the assumption that a rigid-body transformation preserves internal, material forces is what allows us to handle cluttered scenes with many moving objects, as well as deformable or articulated objects. While it could be valuable to augment the deformation or relative motion of the objects, doing so in a way that is valid would be challenging. Instead, we transform them all rigidly (See examples in Figures \ref{fig:rope_aug_examples},\ref{fig:cylinders_aug_examples}).
The assumption of having a common friction coefficient between all moving versus stationary objects is in-line with much manipulation research. For example, work on planar pushing assumes friction is uniform across the plane \cite{PushSim2021,PushYu2016}. Note that we make no assumption on the coefficients of friction between two moving objects.
Naturally, there are scenarios where these assumptions do not hold and thus where our algorithm may not perform well. However, our experiments demonstrate significant improvement on two very different manipulation scenarios, and we expect these assumptions extend to other scenarios as well.
\section{Methods}
We first describe an approximation to the augmentation problem \eqref{eq:most_general}, which is specialized for manipulation. Next, we decompose this problem and describe each component in detail.
\subsection{Algorithm Overview}
Since robotic manipulation is interested specifically in moving objects, we focus on augmenting trajectories of poses and velocities of moving objects. A key insight is that objects in the scene can be categorized as either robots, moved objects, or stationary objects, and that these should be considered differently in augmentation. We denote the moved objects state as $s$, the robot state as $r$, the robot action as $a$, and the stationary objects as $e$ (also called environment). Our method augments the moved object states, the robot state, and the actions, but not the stationary objects. We do not assume any specific representation for states or actions, and examples of possible representations include sets of points, joint angles, poses, or joint velocities. Since we operate on trajectories, we bold the state ($\bm{s},\bm{r}$) and action ($\bm{a}$) variables to indicate time-series (e.g., $s_{1:T}=\bm{s}$). With this categorization, we can write $x=\{\bm{s},\bm{r},\bm{a},e\}$ and $\aug{x}=\{\bm{\aug{s}},\bm{\aug{r}},\bm{\aug{a}},e\}$.
We choose the parameters $T$ to be rigid body transformations, i.e. either $SE(2)$ or $SE(3)$. We parameterize $T$ as a vector with translation and rotation components, where the rotation component is represented as Euler angles bounded between $-\pi/2$ and $\pi/2$, which gives uniqueness and a valid distance metric. These rigid body transforms are applied to the moved objects in the scene, and the augmented robot state and action are computed to match. We choose rigid body transforms because we can reasonably assume that even for articulated or deformable objects, augmenting with rigid body transforms preserves the internal forces, and therefore the augmentations are likely to be valid.
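A minimal sketch of applying such a parameterization to a set of object points is shown below. The intrinsic X-Y-Z Euler convention is an assumption for illustration; the text does not fix one:

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    # Rotation matrix from X-Y-Z Euler angles (an assumed convention).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_se3(points, T):
    # T = [tx, ty, tz, rx, ry, rz]; rotate then translate each point.
    R = euler_to_matrix(*T[3:])
    return points @ R.T + T[:3]
```

All moved-object points are transformed by the same $T$, which is what preserves internal forces between them.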
It may seem that an effective method to generate augmentations would be to randomly sample transforms independent of the data. However, this is not an effective strategy, because it is highly unlikely to randomly sample valid and relevant transformations. We confirm this in our ablation studies (included in the Appendix 1.A). Instead of sampling transforms randomly, we formulate an approximation to Problem \eqref{eq:most_general}:
\begin{equation}
\label{eq:method}
\begin{array}{cc}
\underset{T}{\mathrm{min}} &
\loss_\mathbb{U}(T,\target{T}) +
\beta_1\loss_\text{bbox}(\bm{\aug{s}}) + \beta_2\loss_\text{valid}(T) + \\
& \beta_3\loss_\text{occ}(\bm{\aug{s}},e) + \beta_4\loss_{\Delta\minDist}(\bm{\aug{s}},e) + \\
& \loss_\text{robot}(\bm{\aug{s}},\bm{\aug{r}},\bm{\aug{a}},e) \\[1em]
\text{subject to} & \{\bm{\aug{s}},\bm{\aug{r}},\bm{\aug{a}},e\} = \texttt{apply}(\bm{s},\bm{r},\bm{a},e,T) \\
& \target{T}\sim\mathbb{U}\transformUniformRange \\
\end{array}
\end{equation}
The decision variable is now the parameters $T$, and the validity constraint is moved into the objective. We propose that diversity should be maximized by the transforms being uniformly distributed, and therefore $\loss_\mathbb{U}$ penalizes the distance to a target transform $\target{T}$ sampled uniformly within $[T^-,T^+]$. The relevance and validity terms (which are intractable to compute) are replaced with four objective functions, which are specialized to manipulation. The magnitudes of different terms are balanced by ${\beta_1,\beta_2,\beta_3,\beta_4}$, which are defined manually. We define each objective function below:
\subsubsection{Bounding Box Objective}
First is the bounding-box objective $\loss_\text{bbox}$, which keeps the augmented states $\bm{\aug{s}}$ within the workspace/scene bounds defined by $[s^-,s^+]$. The bounding box objective encourages relevance, since states outside the workspace are unlikely to be relevant for the task.
\begin{equation}
\loss_\text{bbox} = \sum_{i=1}^{|s|}{\max(0,\bm{\aug{s}}_i - s^+_i) + \max(0,s^-_i-\bm{\aug{s}}_i)}
\end{equation}
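A direct implementation of $\loss_\text{bbox}$ as an elementwise hinge penalty might look as follows:

```python
import numpy as np

def bbox_loss(s_aug, s_lo, s_hi):
    # Hinge penalty for each state coordinate outside [s_lo, s_hi];
    # zero whenever the augmented state stays inside the workspace.
    over = np.maximum(0.0, s_aug - s_hi)
    under = np.maximum(0.0, s_lo - s_aug)
    return float(np.sum(over + under))
```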
\subsubsection{Transformation Validity Objective}
The transformation validity objective $\loss_\text{valid}$ assigns high cost to transformations that are always invalid or irrelevant for the particular task or domain. It is defined by function $f_\text{valid}$, which takes in only the transformation. For example, in our rope manipulation case, it is nearly always invalid to rotate the rope so that it floats sideways. In our cluttered pushing task, in contrast, this term has no effect. This term can be chosen manually on a per-task basis, but we also describe how a transformation validity objective can be learned from data in section \ref{sec:learnValid}.
\begin{equation}
\loss_\text{valid} = f_\text{valid}(T)
\end{equation}
\subsubsection{Occupancy Objective}
The occupancy objective $\loss_\text{occ}$ is designed to ensure validity by preventing objects that were separate in the original example from penetrating each other and ensuring that any existing penetrations are preserved. In other words, we ensure that the occupancy $O(p)$ of each point $\aug{\points}_{\state,i}\in\aug{\points}_\state$ in the augmented object state matches the occupancy of the corresponding original point $\points_{\state,i}\in\points_\state$. For this term, we directly define the gradient, which moves $\aug{\points}_{\state,i}$ in the correct direction when the occupancies do not match. This involves converting the environment $e$ into a signed-distance field (SDF) and the moved objects' states $\bm{s}$ into points $\points_\state$. This objective assumes that the environment has uniform friction, so that a contact/penetration in one region of the environment can be moved to another region.
\begin{equation}
\loss_\text{occ}=\sum_{\substack{\points_{\state,i}\in\points_\state \\ \aug{\points}_{\state,i}\in\aug{\points}_\state}} \mathrm{SDF}(\aug{\points}_{\state,i})\big(O(\points_{\state,i})-O(\aug{\points}_{\state,i})\big)
\end{equation}
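The value of this objective can be sketched as below, assuming a callable SDF (negative inside geometry) and defining occupancy as $\mathrm{SDF}(p)<0$. This computes the objective value only; the method itself uses a directly defined gradient:

```python
import numpy as np

def occupancy_loss(points, points_aug, sdf):
    # Penalize occupancy mismatches between each original point and its
    # augmented counterpart, scaled by how far the augmented point sits
    # from the environment surface.
    d_aug = np.array([sdf(p) for p in points_aug])
    occ = np.array([sdf(p) < 0 for p in points], dtype=float)
    occ_aug = (d_aug < 0).astype(float)
    return float(np.sum(d_aug * (occ - occ_aug)))
```

Both creating a new penetration and losing an existing one yield a positive penalty; matched occupancies contribute zero.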
\subsubsection{Delta Minimum Distance Objective}
The delta minimum distance objective is designed to increase relevance by preserving near-contact events in the data. We preserve near-contact events because they may signify important parts of the task, such as being near a goal object or avoiding an obstacle. We define the point among the moved object points $\points_\state$ which has the minimum distance to the environment $p_\minDist=\mathrm{argmin}_{\points_{\state,i}}\mathrm{SDF}(\points_{\state,i})$. The corresponding point in the augmented example we call $\aug{p}_\minDist$.
\begin{equation}
\loss_{\Delta\minDist} = ||\mathrm{SDF}(p_\minDist)-\mathrm{SDF}(\aug{p}_\minDist)||^2_2
\end{equation}
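$\loss_{\Delta\minDist}$ admits a short sketch; the \texttt{sdf} callable and the point representation are placeholders:

```python
import numpy as np

def delta_min_dist_loss(points, points_aug, sdf):
    # Preserve the near-contact event: penalize change in distance to the
    # environment for the original closest point and its augmented copy.
    d = np.array([sdf(p) for p in points])
    d_aug = np.array([sdf(p) for p in points_aug])
    i = int(np.argmin(d))  # index of the original closest point
    return float((d[i] - d_aug[i]) ** 2)
```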
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/step_project.png}
\caption{Illustration of \texttt{aug\_state}{} within Algorithm \ref{alg:aug}. All points and sets are in the space of $T$. The path of $T$ is shown in red with black arrows. The pink set, State Valid, is the set where \textit{state\_valid} is true. $T$ begins at the origin, and alternates between moving towards $\target{T}$ and projecting back into the set \textit{state\_valid} (by solving Equation \eqref{eq:project}).}
\label{fig:step_project}
\end{figure}
\subsubsection{Robot Contact Objective}
The robot contact objective $\loss_\text{robot}$ ensures validity of the robot's state and the action. This means that contacts involving the moved objects which existed in the original example must also exist in the augmented example. Let the contact points on the robot be $p^c_{r}$ and the contact points on the moved objects' state be $p^c_{s}$.
\begin{equation}
\loss_\text{robot}=\sum_i(||{p^c_{r,i} - p^c_{s,i}}||^2_2)
\end{equation}
Finally, we note that other objective functions can be added for the purpose of preserving task-specific labels, i.e. so that $\ell(x)=\ell(\aug{x})$. However, for our experiments, no additional functions were necessary.
\subsection{Solving the Augmentation Optimization Problem}
This section describes how we solve Problem \eqref{eq:method}, and the procedure is detailed in Algorithm \ref{alg:aug}. First, we split the problem into two parts, \texttt{aug\_state}{} and \texttt{aug\_robot}{}.
\begin{algorithm}[t]
\caption{$\phi(\bm{s},\bm{r},\bm{a},e)$}\label{alg:aug}
\SetKwComment{Comment}{// }{}
\Comment{\texttt{aug\_state}{}}
$ \target{T} \sim \mathbb{U}\transformUniformRange $\\
$T = \mathbf{0}$ \text{ (identity)} \\
\For{$i \in N_p$}{
$T_\text{old}=T$ \\
$T = \texttt{step\_towards}(T, \target{T})$ \\
$T = $ solve Equation \eqref{eq:project} \\
\If{$\distf{T} {T_\text{old}}<\delta_p$}{
break\\
}
}
\Comment{\texttt{aug\_robot}{}}
$\bm{\aug{s}}, \textit{state\_valid} = \texttt{apply\_state}(\bm{s},\bm{a},T)$\\
$\bm{\aug{r}},\bm{\aug{a}}$, \textit{ik\_valid} $\leftarrow \mathrm{IK}(\bm{r},\bm{a},\bm{\aug{s}},e)$\\
\uIf{$!$state\_valid or $!$ik\_valid}{
return $\bm{s},\bm{r},\bm{a},e$\\
}
\Else {
return $\bm{\aug{s}},\bm{\aug{r}},\bm{\aug{a}},e$\\
}
\end{algorithm}
In \texttt{aug\_state}{}, we optimize the transform $T$ to produce the moved objects' state $\bm{\aug{s}}$ while considering environment $e$. To achieve diversity, we uniformly sample a target transform $\target{T}$ and step towards it iteratively. This stepping alternates with optimizing for validity and relevance. We visualize this procedure in Figure \ref{fig:step_project}, as well as in the supplementary video. The innermost optimization problem is
\begin{equation}
\label{eq:project}
\begin{array}{cc}
\underset{T}{\mathrm{argmin}} & \beta_1\loss_\text{bbox}+\beta_2\loss_\text{valid}+\beta_3\loss_\text{occ}+\beta_4\loss_{\Delta\minDist} \\
\end{array}
\end{equation}
We solve Problem \eqref{eq:project} using gradient descent, terminating after either $M_p$ steps or until the gradient is smaller than some threshold $\epsilon_p$.
Note that we start \texttt{aug\_state}{} in Algorithm \ref{alg:aug} with $T$ at the identity transformation, rather than initially sampling uniformly. This has two benefits. First, the identity transform gives the original example, which is always in the relevant set. Second, it is unlikely that a uniformly sampled transformation is valid or relevant, so starting at a random transformation would make solving Problem \eqref{eq:project} more difficult.
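The alternation in \texttt{aug\_state}{} can be sketched numerically as below. The penalty gradient stands in for the combined objective of Equation \eqref{eq:project}, and the step size and iteration limits are illustrative, not the actual hyperparameters:

```python
import numpy as np

def step_towards(T, T_target, step=0.1):
    # Move T a bounded distance toward the sampled target transform.
    d = T_target - T
    n = np.linalg.norm(d)
    return T_target if n < step else T + step * d / n

def aug_state(T_target, penalty_grad, n_outer=50, n_inner=10,
              lr=0.05, tol=1e-4):
    # Alternate stepping toward T_target with gradient-descent projection
    # back toward the low-penalty ("state valid") set; stop once T stalls.
    T = np.zeros_like(T_target)  # start at the identity transform
    for _ in range(n_outer):
        T_old = T
        T = step_towards(T, T_target)
        for _ in range(n_inner):  # approximate projection
            T = T - lr * penalty_grad(T)
        if np.linalg.norm(T - T_old) < tol:
            break
    return T
```

With a penalty that grows outside some feasible region, the iterate settles near the boundary of that region in the direction of the target, mirroring Figure \ref{fig:step_project}.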
In \texttt{aug\_robot}{}, we are optimizing $\loss_\text{robot}$. This corresponds to computing the augmented robot states $\bm{\aug{r}}$ and actions $\bm{\aug{a}}$ given the augmented states $\bm{\aug{s}}$ and the environment $e$. Minimizing $\loss_\text{robot}$ means preserving the contacts the robot makes with the scene, which we do with inverse kinematics (Line 10 in Algorithm \ref{alg:aug}).
\subsection{Learning the Valid Transforms Objective}
\label{sec:learnValid}
As discussed above, we include a term $\loss_\text{valid}$ based only on the transformation $T$. In some cases, such as our rope manipulation example, it may not be obvious how to define this objective manually. Our rope is very flexible, and therefore rotating the rope so that it floats in a sideways arc is invalid, but it may be valid for a stiff rope or cable. To address this, we offer a simple and data efficient algorithm for learning the transformation validity function $f_\text{valid}$.
Our method for learning $f_\text{valid}$ is given in Algorithm \ref{alg:valid_transformations_data}. This algorithm repeatedly samples augmentations of increasing magnitude, and tests them on the system (lines 6 and 8). This generates ground truth states starting from an input state and action. The result is a dataset $\mathcal{D}_\text{valid}$ of examples ($T$, $y_\text{valid}$). We then train a small neural network $f_{\text{valid},\theta}(\transform)$ to predict the error $y_\text{valid}$ and use the trained model as our transformation validity objective. We collect $n_\text{valid} = \sqrt{10^d}$ examples, where $d$ is the dimensionality of the space of the transformation $T$.
\begin{algorithm}[t]
\caption{Data Collection for Learning Valid Transformations}\label{alg:valid_transformations_data}
\SetKwComment{Comment}{/* }{ */}
\KwIn{$Q_\text{valid},n_\text{valid}$}
\KwOut{$\mathcal{D}_\text{valid}$}
$\learnValidError^- = \infty$ \\
\For{$i \in [1, n_\text{valid}]$}{
\For{$(s_t,r_t,a_t,e) \in Q_\text{valid}$}{
$ \alpha_\text{valid} = i / n_\text{valid} $ \\
$ T \sim \mathbb{U}[\learnValidScalingT^-,\learnValidScalingT^+] $\\
$ s_{t+1}, r_{t+1} = \texttt{simulate}(s_t, r_t, a_t, e)$ \\
$ \aug{s}_{t,t+1}, \aug{r}_{t,t+1}, \aug{a}_{t} = \texttt{apply}(s_{t,t+1},r_{t,t+1},a_{t},T)$ \\
$ \test{\aug{s}_{t+1}}, \test{\aug{r}_{t+1}} = \texttt{simulate}(\aug{s_t}, \aug{r_t}, \aug{a}_t, e)$ \\
$y_\text{valid} = || \aug{s}_{t+1} - \test{\aug{s}_{t+1}} ||$ \\
\If {$y_\text{valid} < \learnValidError^-$}{
$\learnValidError^- = y_\text{valid},\,T_\text{min} = T$ \\
}
}
add $(T_\text{min}, \learnValidError^-)$ to $\mathcal{D}_\text{valid}$ \\
}
return $\mathcal{D}_\text{valid}$
\end{algorithm}
This method owes its efficiency and simplicity to a few key assumptions about the system/data. First, we assume that we can collect a few ($<1000$) examples from the system and test various transformations. This could be performed in a simulator, as we do in our experiments. Because the transformation validity objective is not a function of state, action, or environment, we can make simplifications to this simulation by picking states and environments which are easy to simulate. We denote this set of states and actions as $Q_\text{valid}$. Second, because the transformation parameters are low-dimensional (3 and 6 in our experiments) the trained model generalizes well with relatively few examples.
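Once $\mathcal{D}_\text{valid}$ is collected, any small regressor can serve as $f_{\text{valid},\theta}$. As a toy stand-in for the small neural network used in our method, a k-nearest-neighbor regressor over the low-dimensional transform parameters:

```python
import numpy as np

def fit_knn_valid(transforms, errors, k=3):
    # Toy stand-in for f_valid: predict the simulation error of a
    # transform as the mean error of its k nearest neighbors in D_valid.
    X = np.asarray(transforms)
    y = np.asarray(errors)
    def f_valid(T):
        d = np.linalg.norm(X - np.asarray(T), axis=1)
        nearest = np.argsort(d)[:k]
        return float(np.mean(y[nearest]))
    return f_valid
```

Transforms near low-error training examples receive low predicted error, so the learned objective penalizes regions of transform space where augmented trajectories diverged from simulation.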
\subsection{Application to Cluttered Planar Pushing}
In this section, we describe how we apply the proposed method to learning the dynamics of pushing of 9 cylinders on a table (Figure \ref{fig:sim_envs}). The moved object state $s$ consists of the 2D positions and velocities of the cylinders. The robot state $r$ is a list of joint positions, and the actions $a$ are desired end effector positions in 2D. There is no $w$ in this problem. The parameters $T$ used are $SE(2)$ transforms. In this problem, any individual trajectory may include some moved cylinders and some stationary ones. In our formulation, the stationary cylinders are part of $e$ and are not augmented, whereas the moved ones are part of $s$ and are augmented. The robot's end effector (also a cylinder) is also augmented, and IK can be used to solve for joint configurations which match the augmented cylinders' state and preserve the contacts between the robot and the moved cylinders.
\subsection{Application to Bimanual Rope Manipulation}
In this section, we describe how we apply the proposed method to a bimanual rope manipulation problem (Figure \ref{fig:sim_envs}). In this problem, there is a binary class label, so $w\in\{0,1\}$, which is preserved under our augmentation (last constraint in Problem \eqref{eq:most_general}). The rope is the moved object, and its state $s$ is a set of 25 points in 3D. The robot state $r$ is a list of the 18 joint positions, and the actions $a$ are desired end effector positions in the robot frame. In this problem, we know that the only contacts the robot makes with the objects or environment are its grasps on the rope. Therefore, we preserve these contacts by solving for a robot state and action that match the augmented points on the rope. The parameters $T$ used are $SE(3)$ transforms.
\section{Results}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/cylinders_results1.png}
\caption{Mean position error (meters) for learning the dynamics of cluttered planar pushing.}
\label{fig:cylinders_results}
\end{figure}
We start by describing the tasks and our experimental methodology, then we present our results. These experiments are designed to show that training on augmentations generated by our method improves performance on a downstream task. We perform two simulated experiments, where we run thousands of evaluations, including several ablations (see Appendix 1.A). We also perform a real robot experiment (Figure \ref{fig:real_robot_setup}) where we run 30 iterations of online validity classifier learning, with augmentation and without. In all experiments, we train until convergence or for a fixed number of training steps. This ensures a fair comparison to training without augmentation, despite the differing number of unique training examples. In all experiments we generate 25 augmentations per original example (See Appendix 1.B). We define key hyperparameters of our method in Appendix 1.C. A link to our code is available on the \href{https://sites.google.com/view/data-augmentation4manipulation}{project website}.
\subsection{Cluttered Planar Pushing}
The cluttered planar pushing environment consists of a single robotic arm pushing 9 cylinders around on a table. The task is to learn the dynamics, so that the motion of the cylinders can be predicted given initial conditions and a sequence of robot actions. For this, we use PropNet \cite{Propnet}, and our task is inspired by the application of PropNet to planar pushing in \cite{DBRP2020}. The inputs to PropNet are an initial state $s_0$ and a sequence of actions $\bm{a}$, and the targets are the future states $(s_1,\dots,s_T)$. All trajectories are of length 50. We evaluate the learned dynamics by computing the mean and maximum errors for position and velocity on a held-out test set. Example augmentations for this scenario are shown in Figure \ref{fig:cylinders_aug_examples}.
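The evaluation metrics can be sketched as follows, assuming predicted and ground-truth trajectories are stacked as `(T, N, 2)` arrays of object positions (an illustrative implementation, not the paper's code):

```python
import numpy as np

def position_errors(pred, gt):
    """pred, gt: (T, N, 2) trajectories of N object positions over T steps.

    Returns the mean and maximum per-object position error, as used to
    evaluate the learned dynamics on a held-out test set.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # (T, N) Euclidean errors
    return dists.mean(), dists.max()

gt = np.zeros((2, 3, 2))
pred = gt.copy()
pred[1, 0] = [0.3, 0.4]  # one object drifts 0.5 m at the second step
mean_err, max_err = position_errors(pred, gt)
```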
This is an interesting application of our augmentation for several reasons. First, it is a regression task, which few augmentation methods allow. Second, the output of the dynamics network is high-dimensional (900 for a prediction of length 50), which normally means large datasets are needed and engineering invariances into the data or network is difficult. Finally, the trajectories contain non-negligible velocities and are not quasi-static.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/cylinders_rollouts.png}
\caption{Predictions (blue) vs. ground truth (red) for planar pushing. The robot is in pink. Trajectories are visualized with lines. The left column shows predictions from a model trained with augmentation, the right column without.}
\label{fig:cylinders_rollouts}
\end{figure}
The original dataset contained 60 trajectories of length 50, or \num{3000} time steps in total. For comparison, previous work on the same dynamics learning problem used over \num{100000} time steps in their datasets \cite{Propnet,DBRP2020}. This is similar to the number of training examples we have \textit{after} augmentation, which is \num{75000} time steps. Finally, we measured the performance of our implementation and found that for the planar pushing scenario we generate 4.5 augmentations per second on average.
The primary results are shown in Figure \ref{fig:cylinders_results}. Augmentation reduces the average position error from \SI{0.00154}{\meter} to \SI{0.00133}{\meter}, a decrease of 14\%. Additionally, we include two baselines, one which adds Gaussian noise to the state, robot, action, and environment data, and one which uses a VAE to generate augmentations as in \cite{MaterialsAEOhno2020}. The magnitude of the Gaussian noise was chosen manually to be small but visually noticeable. Our proposed augmentation method is statistically significantly better than the baseline without augmentation ($p<0.0362$), the Gaussian noise baseline ($p<0.0001$), and the VAE baseline ($p<0.0002$). This difference in error may seem small, but note that the error is averaged over objects, and most objects are stationary. Two roll-outs, one from the model trained with augmentation and one from the model trained without, are shown in Figure \ref{fig:cylinders_rollouts}. In particular, we found that augmentation reduces ``drift,'' where the model predicts small movements for objects that should be stationary. Finally, we note that the Gaussian noise and VAE baselines perform worse than no augmentation, suggesting that data augmentation can hurt performance if the augmentations are done poorly.
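The Gaussian noise baseline amounts to jittering every array in an example. A minimal sketch is below; the noise magnitude `sigma` is a placeholder, since the paper only says the noise was chosen to be small but visually noticeable.

```python
import numpy as np

def gaussian_noise_augment(example, sigma=0.005, rng=None):
    """Baseline augmentation: add i.i.d. Gaussian noise to each array
    (state, robot, action, environment) in the example. The value of
    sigma here is an assumption for illustration."""
    if rng is None:
        rng = np.random.default_rng(0)
    return {k: v + rng.normal(0.0, sigma, v.shape) for k, v in example.items()}

# A toy example with 9 cylinder positions and one 2D action.
example = {"state": np.zeros((9, 2)), "action": np.zeros(2)}
noisy = gaussian_noise_augment(example)
```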
\subsection{Bimanual Rope Manipulation}
In this task, the end points of a rope are held by the robot in its grippers in a scene resembling the engine bay of a car, similar to \cite{UnreliableMitrano2021}, and shown in Figure \ref{fig:sim_envs}. The robot has two 7-dof arms attached to a 3-dof torso with parallel-jaw grippers. The tasks the robot performs in this scene mimic putting on or taking off lifting straps from the car engine, installing fluid hoses, or cable harnesses. These tasks require moving the strap/hose/cable through narrow passages and around protrusions to various specified goal positions without getting caught. One iteration consists of planning to the goal, executing open-loop, then repeating planning and execution until a timeout or the goal is reached. The goal is defined as a spherical region with \SI{4.5}{\centi\meter} radius, and is satisfied when any of the points on the rope are inside this region.
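The goal test described above reduces to a point-in-sphere check over the rope points; a minimal sketch (array shapes and names are our assumptions):

```python
import numpy as np

def goal_satisfied(rope_points, goal_center, radius=0.045):
    """The goal is met when any rope point lies inside the spherical
    goal region of 4.5 cm radius around goal_center."""
    dists = np.linalg.norm(rope_points - goal_center, axis=-1)
    return bool((dists <= radius).any())

# Toy 2-point rope: the second point is 1 cm from the goal center.
rope = np.array([[0.00, 0.0, 0.0], [0.04, 0.0, 0.0]])
goal = np.array([0.05, 0.0, 0.0])
reached = goal_satisfied(rope, goal)
```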
The planner is an RRT with a learned constraint checker for edge validity (validity classifier), and more details are given in \cite{UnreliableMitrano2021}. We want to learn a classifier that takes in a single transition $x = (s_t,a_t,s_{t+1},e_t)$ and predicts whether the transition is valid. Without a good constraint checker, the robot will plan trajectories that result in the rope being caught on obstacles or not reaching the goal. We apply our augmentation algorithm to the data for training this constraint checker. After an execution has completed, the newly-collected data along with all previously collected data are used to train the classifier until convergence. Example augmentations for this scenario are shown in Figure \ref{fig:rope_aug_examples}. The objective is to learn the constraint checker in as few iterations as possible, achieving a higher success rate with fewer data.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/success_rate_rolling.png}
\caption{The success rate on simulated bimanual rope manipulation, using a moving window average of 10.}
\label{fig:rope_results}
\end{figure}
In this experiment, a total of \num{3038} examples were gathered (before augmentation, averaged over the 10 repetitions). Since the purpose of our augmentations is to improve performance using small datasets, it is important that this number is small. In contrast, prior work learning a similar classifier used over \num{100000} examples in their datasets \cite{UnreliableMitrano2021,UnreliableDale2019}. This is similar to the number of training examples we have \textit{after} augmentation, which is \num{75950} on average. Finally, we measured the performance of our implementation and found that for the rope scenario we generate 27 augmentations per second on average.
The primary results are shown in Figure \ref{fig:rope_results}. Over the course of 100 iterations, the success rate of our method using augmentation is higher than the baseline of not using augmentation, as well as the Gaussian noise baseline. We omit the VAE baseline, since it performed poorly in the planar pushing experiment. Furthermore, it is computationally prohibitive to retrain the VAE at each iteration, and fine-tuning the VAE online tends to get stuck in bad local minima. The shaded regions show the 95th percentile over 10 runs. If we analyze the success rates averaged over the final 10 iterations, we find that without augmentation the success rate is 48\%, but with augmentation the success rate is 70\%. The Gaussian noise baseline has a final success rate of 31\%. A one-sided T-test confirms statistical significance ($p<0.001$ for both).
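The significance claim can be checked with a one-sided two-sample test. The sketch below computes Welch's t statistic with a normal-approximation upper-tail p-value; the paper does not specify the exact test variant, and the per-run success rates used here are hypothetical.

```python
import math
import numpy as np

def one_sided_t(a, b):
    """Welch's t statistic and a normal-approximation p-value for the
    one-sided hypothesis mean(a) > mean(b)."""
    na, nb = len(a), len(b)
    va, vb = np.var(a, ddof=1) / na, np.var(b, ddof=1) / nb
    t = (np.mean(a) - np.mean(b)) / math.sqrt(va + vb)
    p = 0.5 * math.erfc(t / math.sqrt(2))  # upper-tail normal approximation
    return t, p

aug = np.array([0.70, 0.72, 0.68, 0.71, 0.69])     # hypothetical per-run
no_aug = np.array([0.48, 0.50, 0.46, 0.49, 0.47])  # success rates
t, p = one_sided_t(aug, no_aug)
```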
\subsection{Real Robot Results}
\label{sec:real_robot_results}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/real_robot_task_error_success.png}
\caption{The success rate and task error distribution of bimanual rope manipulation on the real robot. Task error is the distance between the goal and the final observed state of the rope.}
\label{fig:real_robot_results}
\end{figure}
In this section, we perform a similar experiment to the simulated bimanual rope manipulation experiment, but on real robot hardware. This demonstrates that our method is also effective on noisy sensor data. More importantly, it demonstrates how augmentation enables a robot to quickly learn a task in the real world. We use CDCPD2 \cite{CDCPD2} to track the rope state. The geometry of the car scene is approximated with primitive geometric shapes, like in the simulated car environment.
We ran the validity classifier learning procedure with a single start configuration and a single goal region, both with and without augmentation. After 30 iterations of learning, we stop and evaluate the learned classifiers several times. With augmentation, the robot successfully placed the rope under the engine 13/26 times. Without augmentation, it succeeded 7/26 times. The Gaussian noise and VAE baselines performed poorly in the simulated experiments; therefore, we omit them from the real robot experiments.
\section{Limitations}
There are problems and applications where the proposed objective functions do not ensure validity, relevance, and diversity. In these cases, the structure of our augmentation and projection procedures can remain, while new objective functions are developed. Another limitation is that our method is not compatible with image data. Much recent research in robotics has moved away from engineered state representations like poses with geometric information, and so there are many learning methods which operate directly on images. Although this is a limitation of the proposed method, many of the augmentations developed for images are also not applicable to problems in manipulation, even when images are used. For instance, pose detection, 3D reconstruction, semantic segmentation, and many other tasks may not be invariant to operations like cropping, flipping, or rotating. Creating an augmentation method for manipulation that is applicable to images is an open area for future research.
\section{Conclusion}
\label{sec:conclusion}
This paper proposes a novel data augmentation method designed for trajectories of geometric state and action robot data. We introduce the idea that augmentations should be valid, relevant, and diverse, and use these to formalize data augmentation as an optimization problem. By leveraging optimization, our augmentations are not limited to simple operations like rotation and jitter. Instead, our method can find complex and precise transformations to the data that are valid, relevant, and diverse. Our results show that this method enables significantly better downstream task performance when training on small datasets. In simulated planar pushing, augmentation decreases the prediction error by 14\%. In simulated bimanual rope manipulation, the success rate with augmentation is 70\% compared to 48\% without augmentation. We also perform the bimanual rope manipulation task in the real world, which demonstrates the effectiveness of our algorithm despite noisy sensor data. In the real world experiment, the success rate improves from 27\% to 50\% with the addition of augmentation.
\section*{Acknowledgments}
The authors would like to acknowledge Andrea Sipos for her ingenious design of a reset mechanism, allowing us to run robot experiments unattended. This work was supported in part by NSF grants IIS-1750489 and IIS-2113401, ONR grant N00014-21-1-2118, and the Toyota Research Institute. This article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.
\bibliographystyle{plainnat}
\section{Appendix}
\subsection{Ablation of Objective Terms}
\label{sec:ablations}
The objective terms we propose are designed to be useful in many cases, but to understand this better, we ablate several of them. We evaluate the importance of the transformation validity objective, the occupancy objective, and the delta minimum distance objective by repeating our experiments. In each ablation, we omit one objective term. Each condition was run 10 times with different random seeds. The results are shown in Table \ref{tab:ablations} and Figure \ref{fig:rope_ablations}.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{images/rope_ablations.png}
\caption{Success vs Iterations for ablations. Bimanual rope manipulation, in simulation.}
\label{fig:rope_ablations}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{l|l|l}
Method & \thead{Task Success \% ($\uparrow$) \\ (rope)} & \thead{Position Error (m) ($\downarrow$) \\ (planar pushing)} \\
\hline
Full Method & 0.700 (0.118) & 0.0010 (0.0001) \\
No transf. valid. & 0.700 (0.185) & 0.0011 (0.0001) \\
No delta min dist & 0.675 (0.185) & 0.0012 (0.0002) \\
No occupancy & 0.240 (0.136) & 0.0028 (0.0011) \\
\end{tabular}
\caption{Ablations: Metrics on both tasks, with various objective terms removed. The standard deviation is shown in parentheses.}
\label{tab:ablations}
\end{table}
We find that across both experiments, the most important objective was the occupancy objective. Without this objective, our method produced augmentations where contacts and penetrations are not preserved. These augmentations are invalid, and training on them produces a model even worse than one trained with no augmentations.
The next most important objective is the delta minimum distance objective. In our simulated rope experiment, the method without this objective performs slightly worse. By visualizing the different examples (Figure \ref{fig:rope_dmd_comparison}), we found that without this objective, the augmentation transforms examples which were near the hooks and protrusions far into free space. In contrast, with the delta minimum distance objective, these examples stay near the hooks. Since the planning tasks involve moving in narrow passages around these hooks, it's more relevant to produce more examples in this area. Hence, by preserving the distance to nearby obstacles, we also tend to keep examples in areas of interest. In the planar pushing task, however, omitting the delta minimum distance objective did not have a notable effect. This could be in part due to the cluttered nature of the scene, which means that most transformations are small.
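A sketch of what a delta minimum distance term could look like, assuming point-set states and a point-sampled environment; the paper's exact objective may differ, and the function names here are illustrative.

```python
import numpy as np

def min_dist(points, obstacle_points):
    """Minimum Euclidean distance from any state point to the environment."""
    d = np.linalg.norm(points[:, None, :] - obstacle_points[None, :, :], axis=-1)
    return d.min()

def delta_min_dist_loss(orig_points, aug_points, obstacle_points):
    """Penalize changes in the distance to the nearest obstacle, so that
    augmented examples stay near hooks and protrusions when the original
    examples were near them."""
    return abs(min_dist(aug_points, obstacle_points)
               - min_dist(orig_points, obstacle_points))

obs = np.array([[0.0, 0.0, 0.0]])       # a one-point "hook"
orig = np.array([[0.1, 0.0, 0.0]])      # 10 cm from the hook
near = np.array([[0.0, 0.1, 0.0]])      # still 10 cm away: zero loss
far = np.array([[0.5, 0.0, 0.0]])       # drifted into free space
loss_near = delta_min_dist_loss(orig, near, obs)
loss_far = delta_min_dist_loss(orig, far, obs)
```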
\begin{figure}
\centering
\includegraphics[width=0.492\linewidth]{images/original_dmd_ex3_aug1.png}
\includegraphics[width=0.492\linewidth]{images/output_dmd_ex3_aug1.png}
\caption{Augmented rope data generated without the delta minimum distance objective. On the left is the original transition, under the hook. On the right is the augmented transition, which is far from the hook. Without the delta minimum distance objective, data can be augmented away from interesting regions, becoming less relevant. This is based on the heuristic that contact and near-contact events are relevant for manipulation.}
\label{fig:rope_dmd_comparison}
\end{figure}
Finally, we find that the valid transformation objective can be omitted without affecting performance in these two tasks. In the planar pushing case this is completely expected, because there are no SE(2) transformations which are always invalid or irrelevant. However, in the rope experiment, this result is less intuitive. We found that omitting this objective occasionally produced augmentations where the rope is floating sideways in physically impossible states. This result suggests that even if a few augmentations are invalid or irrelevant, training on the augmentations still significantly outperforms using no augmentations. Even if omitting the objective does not degrade performance in these experiments, it is a natural term to include and may be beneficial in other applications.
\subsection{Choosing the Number of Augmentations}
Our method produces multiple distinct augmentations for each original example. More augmentations should improve generalization after training, but at the cost of additional computation time. In this experiment, we explore how the number of augmentations per example affects generalization. We tested this in the rope domain on the task of learning the constraint checker. The original un-augmented dataset came from the first three iterations of learning. With these examples, we then generated varying numbers of augmentations for each example and evaluated on a held-out test example drawn from the fourth iteration. Figure \ref{fig:num_augs_examples} shows three rope transitions labeled 0 from the original three training examples, followed by the held-out test example. Intuitively, we would expect that, with enough augmentations, we could generate augmented examples from these original training examples that are very similar to the held-out test example.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{images/num_augs_examples.png}
\caption{(top) These three original training examples show rope transitions with inaccurate predictions, where the rope is predicted to move inside the hook on the engine. (bottom) The test transition is similar but not identical. The proposed augmentation method can move the rope transition while ensuring it still intersects the hook in a similar way, which allows it to generate augmentations like the test example.}
\label{fig:num_augs_examples}
\end{figure}
For each number of augmentations, we generated 3 augmented datasets (using different random seeds) and trained 3 models on each dataset (again using different random seeds), for a total of 9 data points for each number of augmentations. The results are shown in Figure \ref{fig:num_augs}. The y-axis is the classifier output on the test example, which has a true label of 0 (lower is better). Without any augmentations, the classifier has an output near 1, which is incorrect. Improvement began around 10 augmentations, and plateaued by 20. Since computation cost was not significant, we chose 25 augmentations per-example for all experiments in the main text.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{images/classifier_accuracy_vs_num_augmentations.png}
\caption{Number of augmentations versus classifier output on a test example. Lower is better. Performance plateaus at around 25 augmentations, in this example.}
\label{fig:num_augs}
\end{figure}
\subsection{Hyperparameters}
The maximum number of iterations for stepping and projecting are $N_p=5$ and $M_p=25$, and we stop the outer loop if the change in transform is ever less than $\delta_p=0.001$. When solving Problem 3, we stop if the gradient is smaller than $\epsilon_p=0.0003$. There are also learning rate, learning rate decay, and weighting parameters used when solving Problem 3. For the objective function weighting terms, we use $\beta_1=0.05,\beta_2=1,\beta_3=1,\beta_4=0.1$. The weighting terms were tuned so that the magnitude of the different weighted losses terms were comparable, and learning rate and number of iterations were tuned to maximize convergence. All values used are documented in our code, which can be found on our \href{https://sites.google.com/view/data-augmentation4manipulation}{project website}.
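The stepping-and-projecting loop that these hyperparameters control can be sketched as follows, here on a toy 1-D transform with placeholder `step` and `project` callables standing in for the paper's optimization and projection subroutines:

```python
def step_and_project(T0, step, project, N_p=5, M_p=25, delta_p=0.001):
    """Alternate an objective step with repeated projection onto the
    valid set, stopping the outer loop early once the transform stops
    changing by more than delta_p. Illustrative sketch of the loop
    structure only, using the hyperparameters N_p, M_p, delta_p."""
    T = T0
    for _ in range(N_p):
        T_new = step(T)
        for _ in range(M_p):
            T_new = project(T_new)
        if abs(T_new - T) < delta_p:  # outer-loop convergence test
            return T_new
        T = T_new
    return T

# Toy 1-D example: step toward larger T, projection clips to [0, 0.5].
result = step_and_project(0.0,
                          step=lambda T: T + 0.3,
                          project=lambda T: min(max(T, 0.0), 0.5))
```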
\end{document}
\section{Introduction}
\label{sec:intro}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=6.5in]{motivation}
\vspace{-4mm}
\caption{Feature encoding visualization of one sampled convolutional layer across ten clients during FL.
The color of each neuron is determined by its top response class to indicate its learned feature.
\textbf{(a)} The original FedAvg with chaotic feature encoding can suffer from feature averaging conflicts among different nodes.
\textbf{(b) \& (c)} In contrast, our framework enforces structurally-aligned feature encoding by adopting group convolution and alleviates the averaging conflicts in both IID and non-IID cases.
Experiments are conducted with ten collaborative nodes (VGG9 on CIFAR10). IID: Each node has local data of 10 classes. Non-IID: Each node has local data of only 5 classes.
}
\label{fig:motivation}
\vspace{-3mm}
\end{center}
\end{figure*}
Federated Learning (FL) has achieved great popularity among various distributed deep learning frameworks due to its superior collaboration flexibility, communication efficiency, and performance robustness in vision and language learning scenarios~\cite{fl, fl1, fl2, fl3}.
It is commonly achieved by multiple FL nodes' collaboration through Federated Averaging (FedAvg), which generates a global model by periodically averaging local models' parameters.
Specifically, FedAvg follows a coordinate-based weight averaging manner~\cite{fl, fedma}.
Different local models' weights in the same layer and same index (i.e., coordinates) are averaged to be the global model's weight.
Although widely adopted, FedAvg still suffers from accuracy drop due to a common issue called weight divergence~\cite{non-iid0, pfnm, iclr}.
Especially in non-IID scenarios, highly skewed data distribution across nodes can cause distinct weight values at the same coordinates, thus hurting the model averaging performance.
Recent works suggest that one potential reason for such weight divergence is the DNN ``permutation invariance'' property.
Specifically, given a DNN model, the set of parameters in its convolutional and fully-connected layers could be arbitrarily permuted with different permutation sequences, while still yielding the same computational results~\cite{fedma}.
Due to the permutation invariance property, weight matrices of different local FL models may not be fully-aligned by coordinates.
Thus, coordinate-based FedAvg will incur weight averaging conflicts and lead to sub-optimal FL accuracy, which is commonly observed as the weight divergence issue.
Many optimization methods are proposed to alleviate the weight divergence issue by parameter-oriented weight matching, such as representation matching~\cite{rmatching}, Bayesian matching~\cite{bayesian}, FedMA~\cite{fedma}, etc.
Although these works have different designs, such as weight aligning by minimizing MSE distance~\cite{fedma}, or activation aligning~\cite{rmatching}, they share the similar methodology: After each local model training epoch, they first evaluate the parameter similarity across local models, and then re-permute the weight matrices so that approximate weights could be averaged together.
Although outperforming native FedAvg, these methods still have certain limitations, such as inaccurate parameter similarity, extra computation/communication overhead, and compromised data privacy.
Specifically, the matching accuracy of current methods depends heavily on the selected similarity metric and the operation targets. For example, [11, 18] use MSE loss with Euclidean distance on weight or activation matrices.
But a small distance between two weight matrices does not necessarily mean that they carry the same information and features.
Therefore, common parameter matching methods can still suffer from the feature-level misalignment.
To tackle these limitations, we propose {{\it $Fed^2$}\xspace}, a feature-aligned federated learning framework.
Fig.~\ref{fig:motivation} demonstrates a set of feature visualization and illustrates the different feature alignment effect between FedAvg and our proposed {\it $Fed^2$}\xspace framework.
As shown in Fig.~\ref{fig:motivation} (a), FedAvg’s local models suffer from significant feature-level mismatching.
The coordinate-based FedAvg in such case will thus incur dramatic feature conflicts and cause convergence performance degradation.
By contrast, with the proposed {\it $Fed^2$}\xspace, our models' learned feature distribution conform to strict structural alignment without any averaging conflicts.
Even in extreme non-IID scenarios, {\it $Fed^2$}\xspace still maintains consistent feature alignment among local models, thus providing superior federated learning performance to prior works, including a higher convergence rate and accuracy.
Specifically, we make the following contributions:
\begin{itemize}
\item First, we promote the previous weight-level matching methods to a feature-level alignment method by defining a feature interpretation method. Such a method analyzes and qualitatively shows the feature-level misalignment issue in the current coordinate-based FedAvg algorithm;
\vspace{1mm}
\item We then propose a controllable feature allocation methodology by combining feature isolation and gradient redirection techniques. Such controllable feature allocation is achieved by a group-wise convolution and fully-connected structure adaptation, which can pre-align the feature and model structure even before the training process;
\vspace{1mm}
\item Eventually, we design a feature-aligned FL framework --- {\it $Fed^2$}\xspace, which is composed of feature-oriented structure adaptation and model fusion algorithm. By maintaining consistent feature alignment throughout FL training, {\it $Fed^2$}\xspace could achieve superior performance than FedAvg and other weight-matching methods.
\end{itemize}
We conduct extensive experiments on general datasets (CIFAR10, 100) with varied architectures (VGG9, VGG16 and MobileNet).
The experimental results of {\it $Fed^2$}\xspace demonstrate significant improvement in both convergence speed and accuracy, outperforming previous state-of-the-art works by large margins (+2.5\%$\sim$4.6\%) while having smaller computation and communication cost.
Even under highly-skewed non-IID scenarios, our work still performs effective and robust feature alignment and ensures the near-optimal FL convergence accuracy when most previous methods fail to do so.
\begin{comment}
Specifically, we design a feature-oriented regulation method (\textit{$\Psi$-Net}), which can effectively identify local models' structural hierarchies and grouped components that are associated with particular classes and corresponding features.
Based on such explicit feature information allocation, matchable structures with similar feature information can be initialized at the very early training stage, and further guarantees ordered information distribution with definite structure matching along the whole training process.
Fig.~\ref{fig:motivation} provides a set of intuitive comparisons for parameter matching across collaborative models.
With the proposed feature allocated regulation, our model's encoded feature distribution conform to structural alignments with corresponding classes, while regular FedAvg's model fusion suffers from significant parameter mismatching and therefore expected performance degradation.
\end{comment}
\section{Background and Related Work}
\label{sec:2}
\vspace{1mm}
\subsection{Federated Learning with FedAvg}
Conventional FL frameworks usually adopt the Federated Averaging algorithm (FedAvg)~\cite{fl} to collect distributed weight parameters from local nodes and fuse for the global DNN model:
\begin{equation}
\Omega^{e+1}_{(l, i)} = \sum\nolimits_{n=1}^{N} \frac{1}{N}~\omega_{(n,l,i)}^{e},
\label{eq:fedavg}
\end{equation}
where $\Omega$ and $\omega$ denote the global weights and the local weights from the $n^{th}$ node ($N$ nodes in total), and $l$ and $i$ denote the layer and in-layer weight indices for parameter coordination.
$e$ denotes the epoch-wise weight averaging cycle, which FL leverages to improve communication efficiency compared to iteration-wise averaging.
The FedAvg formulation in Eq.~\ref{eq:fedavg} implicitly defines a \textbf{\textit{coordinate-based parameter averaging}} for distributed local DNN models, \textit{i.e.}, weights of the same coordinates $(l, i)$ in these models are strictly designated to be averaged across the collaborative training process.
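As a concrete illustration, the coordinate-based averaging of Eq.~\ref{eq:fedavg} can be sketched in a few lines of numpy; this is a minimal sketch with toy weights, and the function name \texttt{fedavg} is our own:

```python
import numpy as np

def fedavg(local_weights):
    """Coordinate-based parameter averaging (Eq. fedavg): weights at the
    same coordinates (l, i) are averaged element-wise across all N nodes."""
    return np.mean(np.stack(local_weights, axis=0), axis=0)

# Toy example: one layer's weight matrix from N = 2 local nodes.
w_node0 = np.array([[1.0, 2.0], [3.0, 4.0]])
w_node1 = np.array([[3.0, 4.0], [5.0, 6.0]])
global_w = fedavg([w_node0, w_node1])  # each coordinate averaged independently
```

Note that the averaging is purely positional: whatever information happens to sit at coordinate $(l, i)$ on each node is fused, which is exactly the assumption the permutation-invariance discussion below calls into question.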
\subsection{Neural Network Permutation Invariance}
However, recent research has shown that parameter coordinates provide inaccurate guidance for DNN model fusion, by revealing a particular DNN property -- \textbf{\textit{weight permutation invariance}}~\cite{bayesian, pfnm, fedma}.
Such a property shows that the DNN weight matrix can be structurally shuffled while maintaining lossless calculation results.
Specifically, a weight matrix $\omega$ can be decomposed as $\omega\Pi$, where $\omega$ indicates the parameter value only and $\Pi$ defines the coordinate permutation matrix.
When $\Pi = \textbf{1}$ (identity matrix), no coordinate permutation is applied to the weight matrix, \textit{i.e.}, $\omega\textbf{1} = \omega$.
Since $\textbf{1}$ can be further decomposed into a pair of permutation matrices ($\Pi~\Pi^T$)\footnote{We can construct any permutation matrix $\Pi$ by shuffling the \textbf{1} elements to be non-diagonal while maintaining full rank.}, the lossless permutation can be formulated as:
\begin{equation}
\omega~\textbf{1} = \omega~\Pi~\Pi^T = \omega.
\label{eq:invar}
\end{equation}
Without loss of generality, considering a DNN model composed of two consecutive layers (layer weights as $\omega_{l}$ and $\omega_{l+1}$) as an example, the output $F(X)$ could be formulated as\footnote{We use fully connected layers as an example. Convolutional layers could also be transformed to similar matrix multiplication calculation by \textit{im2col}.}:
\begin{equation}
F(X) = \omega_{l+1}~(\omega_{l}~X).
\end{equation}
According to Eq.~\ref{eq:invar}, applying any permutation matrix together with its transpose onto $\omega_{l+1}$ incurs no output influence:
\begin{equation}
F(X) = (\omega_{l+1}~\Pi_{l+1}~\Pi_{l+1}^T)~(\omega_{l}~X) = (\omega_{l+1}~\Pi_{l+1})~(\Pi_{l+1}^T~\omega_{l})~X.
\end{equation}
Therefore, the original DNN weights of two layers $\omega_{l+1}, \omega_{l}$ can be losslessly re-permuted as $(\omega_{l+1}~\Pi_{l+1})$ and $(\Pi_{l+1}^T~\omega_{l})$, changing the in-layer allocation of each individual weight $\omega_i$.
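The lossless re-permutation above can be verified numerically. The following minimal numpy sketch (toy shapes and variable names of our own choosing) checks that inserting a $\Pi\,\Pi^T$ pair between two layers leaves the output unchanged, as in Eq.~\ref{eq:invar}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer model F(X) = w_{l+1} (w_l X).
w_l = rng.normal(size=(4, 3))
w_l1 = rng.normal(size=(2, 4))
X = rng.normal(size=(3, 5))

# Build a permutation matrix Pi by shuffling the rows of the identity;
# it keeps full rank and satisfies Pi @ Pi.T = I (Eq. invar).
Pi = np.eye(4)[rng.permutation(4)]

out_original = w_l1 @ (w_l @ X)
# Re-permuted weights (w_{l+1} Pi) and (Pi^T w_l) give the same output.
out_permuted = (w_l1 @ Pi) @ ((Pi.T @ w_l) @ X)
```

Both outputs agree to machine precision, even though the individual neurons of $\omega_l$ now occupy different in-layer coordinates.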
\subsection{FedAvg \textit{vs.} Permutation Invariance}
The permutation invariance property implies that a weight parameter could be permuted with arbitrary coordinates in a layer, which conflicts with the coordinate-based parameter averaging in FedAvg.
Specifically, suppose the DNN models of two consecutive layers ($l, l+1$) from $N$ local nodes all learn the same function $F(\cdot)$:
\begin{equation}
\begin{split}
F(\cdot) = & (\omega_{(1,l+1)} \Pi_1)~(\Pi_1^T \omega_{(1,l)}) = \cdots = (\omega_{(n,l+1)} \Pi_n)~(\Pi_n^T \omega_{(n,l)})\\
= & \cdots = (\omega_{(N,l+1)} \Pi_N)~(\Pi_N^T \omega_{(N,l)}),~\hspace{4mm}~n \in [1, N].
\label{eq:1}
\end{split}
\end{equation}
Even if we assume the $N$ models have the same composition of weight values (\textit{i.e.}, $\omega_{(n,l)}$ = $\omega_l$), their permutation matrices $\Pi_n$ can be highly different, as these models are trained separately during the local training epochs:
\begin{equation}
\Pi_1 \neq~\dots~\neq \Pi_n \neq~\dots~\neq \Pi_N,~n \in [1, N].
\end{equation}
Therefore, the weight parameter learning a particular piece of information may have different in-layer coordinates $i$ across different local models.
As FedAvg still conducts rigid coordinate-based averaging:
\begin{equation}
\Omega_{(l+1, i)} = \sum\nolimits_{n=1}^{N} \frac{1}{N}~\omega_{(n,l+1,i)}\Pi_n;
\Omega_{(l, i)} = \sum\nolimits_{n=1}^{N} \frac{1}{N}~\Pi_n^T\omega_{(n,l,i)},
\end{equation}
the averaged weights can hardly match each other, nor can the corresponding information be matched across the $N$ models.
The permutation invariance thus offers a new perspective on well-known FL issues, such as weight divergence and accuracy degradation, especially under non-IID data distributions, where local models learn non-uniform information~\cite{non-iid0,fedprox,iclr}.
\subsection{Weight-Level Alignment (WLA)}
The permutation invariance not only explains the FedAvg issues, but also serves as a DNN model configuration tool to motivate many FL optimization works~\cite{fedma, bayesian, pfnm, rmatching}.
These works identify the parameters with corresponding information across local DNN models with certain similarity metrics (\textit{e.g.}, \textit{MSE}) and leverage a lossless permutation matrix to \textbf{\textit{structurally align the parameters' allocation for matched information fusion}}.
Taking the two-layer DNN model as an example, this process can be formulated as re-permuting the weight matrices $\omega_{(n,l)}$, $\omega_{(n,l+1)}$ on the $n^{th}$ node with a re-permutation matrix $\Pi_{trans}$ that minimizes a selected distance metric $D$ to the global weights:
\begin{equation}
\begin{split}
\omega_{(n, l+1)}^{aligned} = & \omega_{n, l+1}\Pi_{trans}, s.t.~D~( \omega_{(n,l+1)}^{aligned}, \Omega_{l+1} ) \rightarrow 0. \\
\omega_{(n, l)}^{aligned} = & \Pi_{trans}^T\omega_{(n,l)}, s.t.~D~( \omega_{(n,l)}^{aligned}, \Omega_{l} ) \rightarrow 0.
\end{split}
\label{wosa}
\end{equation}
With the minimum layer-wise matrix similarity distance, the distributed weights $\omega_{(n,l,i)}$ with corresponding information are expected to be generally aligned by identifying the re-permutation matrix $\Pi_{trans}$ before each global averaging operation.
This can be translated into an optimization problem and solved by algorithms such as bipartite matching, Wasserstein barycenter, and Hungarian matching~\cite{opt_trans, fedma}.
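As an illustration of such weight-level alignment, the following numpy sketch recovers the re-permutation of a shuffled layer by minimizing the MSE distance of Eq.~\ref{wosa}. A simple greedy assignment stands in for the Hungarian algorithm here, and exact recovery assumes distinct neuron weights; this is a sketch of the general idea, not any specific prior method:

```python
import numpy as np

def align_layer(w_local, w_global):
    """Weight-level alignment sketch: find a permutation matrix Pi such that
    re-permuting w_local minimizes its MSE distance to w_global (Eq. wosa).
    A greedy assignment stands in for Hungarian/bipartite matching."""
    n = w_local.shape[0]
    # Pairwise squared distances between local and global neurons (rows).
    cost = ((w_local[:, None, :] - w_global[None, :, :]) ** 2).sum(-1)
    Pi = np.zeros((n, n))
    free = set(range(n))
    for i in np.argsort(cost.min(axis=1)):       # most confident rows first
        j = min(free, key=lambda c: cost[i, c])  # cheapest free column
        Pi[i, j] = 1.0
        free.remove(j)
    return Pi

# Sanity check: a row-shuffled copy of a layer is matched back exactly.
rng = np.random.default_rng(1)
w_global = rng.normal(size=(5, 3))
w_local = np.eye(5)[rng.permutation(5)] @ w_global  # shuffled neurons
Pi = align_layer(w_local, w_global)
w_aligned = Pi.T @ w_local  # re-permuted local layer
```

Even in this idealized setting the cost matrix is quadratic in the layer width, which hints at the per-round matching overhead discussed next.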
\subsection{Limitations of WLA}
Although the significant performance gains of current WLA works demonstrate the necessity of parameter alignment for FL, an essential question remains:
\textbf{\textit{Does the weight matrix distance really reflect the information mismatching across distributed DNN models?}}
The alignment accuracy of current methods highly depends on the selected similarity metrics and their operation targets, \textit{e.g.}, \textit{MSE} and Euclidean distance on weight or activation matrices~\cite{fedma, rmatching}.
However, these quantitative alignment criteria may fail to match the weights that actually carry the same learned feature information.
Many recent works have demonstrated the necessity of qualitative parameter interpretation and feature visualization for DNN design and optimization~\cite{bmvc,vis1,vis3}.
Furthermore, the practical FL may involve non-IID local data and even non-IID learning classes across nodes.
In such cases, parameters encode non-uniform information and may not be fully matched at all~\cite{non-iid0}; a forced value-based matching can then cause catastrophic performance degradation.
Besides the alignment criteria, we would like to ask another question:
\textit{\textbf{How to effectively and efficiently guarantee the alignment across the federated learning process?}}
Most prior works adopt a post-alignment method~\cite{bayesian, pfnm, fedma, rmatching}, which analyzes and matches parameters across models before every global averaging operation, resulting in heavy computation workloads.
Moreover, the parameter similarity analysis requires sharing activation data, which can compromise input data privacy.
\section{Feature-Level Alignment (FLA): \\ A New Perspective}
We expect to address the above problems through a series of technical contributions:
We first answer the ``\textit{what-to-align}'' question by promoting the previous weight-level similarity-based parameter alignment to a feature level;
We then answer the ``\textit{how-to-align}'' question by proposing a feasible feature allocation scheme to establish firm correlations between DNN structures and their designated learning features;
Eventually, we design a feature-aligned FL framework --- {\it $Fed^2$}\xspace, which enables accurate feature alignment and thus achieves superior performance to FedAvg as well as other WLA methods.
Specifically, in this section, we interpret the feature information learned by DNN parameters and propose a novel feature-aligned learning objective for FL frameworks.
\vspace{-2mm}
\subsection{Feature Definition and Interpretation}
Many prior works have characterized neural networks' feature information from various perspectives\footnote{Here we consider image classification as our major deep learning task.}.
For example, \cite{am} utilizes activation maximization to visualize each neuron's preferred input pattern, \cite{vis4} introduces the activation-based attention mechanism to illustrate a neuron's region of interest in the input, and \cite{bmvc,vis1,vis2,vis4} use the learning class preference to illustrate neuron functionality.
Unlike conventional quantitative approaches, these feature-oriented analysis methods provide qualitative and explicit interpretations of the DNN learning process's intrinsic mechanism.
In this work, we adopt neurons as the basic feature learning units\footnote{Here a \textit{neuron} is defined as one convolutional filter in a convolutional layer, or one neuron in a fully connected layer.}, and practice the feature interpretation as follows:
As shown in Fig.~\ref{fig:act_grad}, one individual neuron's learning preference can be measured by observing the neuron's activation response $A(x_c)$ on inputs $x$ from different $C$ classes, as well as its gradients {\small $\partial Z_c / \partial A(x_c)$} towards a class $c$'s prediction confidence $Z_c$.
Combining these two factors and further generalizing to a multi-layer convolutional neural network, a neuron's learned feature information can be formulated as \textit{\textbf{a class preference vector}}:
\begin{equation}
\small
P = [p_1,\ldots, p_c,\ldots,p_C], \hspace{2mm}\text{where}~p_{c} = \sum_b^B A(x_{c,b}) * \frac{\partial Z_c}{\partial A(x_{c,b})},
\label{eq:feature}
\vspace{-1mm}
\end{equation}
\normalsize
where $A(x_{c,b})$ denotes activations and {\small $\partial Z_c/\partial A(x_{c,b})$} denotes gradients from class $c$'s confidence, both accumulated over $B$ batch trials.
For each neuron, the index of the largest entry of the feature vector $P$, $\arg\max_c~p_c$, indicates its primary learning class.
Assembling all neurons' top preferred classes together, a layer's feature encoding vector can then be obtained.
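The class preference vector of Eq.~\ref{eq:feature} can be computed directly from recorded activations and gradients. The sketch below uses hypothetical toy numbers for one neuron:

```python
import numpy as np

def class_preference(acts, grads):
    """Eq. feature: p_c = sum_b A(x_{c,b}) * dZ_c/dA(x_{c,b}) for one neuron.
    acts and grads have shape (C, B): the neuron's activation on class-c
    inputs and the gradient of class-c confidence w.r.t. it, per trial b."""
    return (acts * grads).sum(axis=1)  # preference vector P of length C

# Hypothetical numbers for one neuron, C = 3 classes, B = 2 batch trials.
acts = np.array([[0.1, 0.2], [2.0, 1.5], [0.3, 0.1]])
grads = np.array([[0.5, 0.5], [1.0, 1.0], [0.2, 0.2]])
P = class_preference(acts, grads)
top_class = int(np.argmax(P))  # the neuron's primary preferred class
```

Repeating this for every neuron in a layer and taking each argmax yields the layer's feature encoding vector visualized in the figures.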
As an example, we visualize two convolutional layers' learned feature information in Fig.~\ref{fig:act_grad_example}.
Similar to Fig.~\ref{fig:motivation}, each neuron is represented by one vertical color bar, where the color denotes its primary preferred class.
In practice, we find such a feature interpretation method aligns well with previous AM visualization~\cite{am}, which demonstrates the effectiveness of our feature interpretation.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{act_grad}
\vspace{-3mm}
\caption{Activation and Gradient based Feature Analysis.}
\vspace{-3mm}
\label{fig:act_grad}
\end{figure}
\begin{figure}
\vspace{3mm}
\centering
\includegraphics[width=1\linewidth]{act_grad_example}
\vspace{-7mm}
\caption{Class Preference Vector Visualization (VGG16 on CIFAR10).}
\vspace{-3mm}
\label{fig:act_grad_example}
\end{figure}
\subsection{Feature-Aligned FL Objective}
Based on such a feature interpretation perspective, we can re-examine the model fusion process, starting with the coordinate-based FedAvg method:
Across the \textit{y}-dimension of Fig.~\ref{fig:motivation} (a), we sample and visualize the neuron learning class preferences from the same layer's feature encoding across all ten local DNN models.
As shown, neurons at the same coordinates exhibit dramatically different class preferences.
Since FedAvg adopts coordinate-based averaging, massive feature averaging conflicts can occur.
The information encoded on some nodes can thus be lost, leading to a slower convergence rate or lower accuracy.
To alleviate the feature fusion conflicts, we propose to conduct feature-level alignment during the FL model averaging process.
Formally, the \textit{\textbf{feature-aligned federated learning objective}} can be defined as minimizing the total feature-level parameter variance among the $N$ collaborative nodes:
\begin{equation}
\text{Minimize} ~~\sum D~(p_{(n_1,l,i)}, ~~p_{(n_2,l,i)}), ~~\forall n_1, n_2 \in [1, ~N], ~~n_1 \neq n_2,
\label{eq:objective}
\end{equation}
where $p_{(n,l,i)}$ is the feature vector learned by the $i^{th}$ neuron of the $l^{th}$ layer on the $n^{th}$ node, and $D$ is an appropriate distance metric for feature vector similarity evaluation.
Naively, to solve Eq.~\ref{eq:objective}, we could calculate the feature vector $p_{(n,l,i)}$ of each neuron and then conduct post-alignment by neuron re-permutation, just like previous weight-matching methods.
However, such an approach suffers from the same limitations, such as inaccurate feature similarity metrics and heavy post-matching computation overhead.
Therefore, we propose a novel feature alignment method that establishes firm correlations between DNN structures and their designated learning features, without tedious feature analysis effort.
\section{Structural Feature Allocation}
\label{sec:4}
In this section, we then answer the \textit{``how-to-align''} question, \textit{i.e.}, to design effective ways for feature-level alignment.
Specifically, we propose a ``structural feature allocation'' scheme to establish firm correlations between DNN structures and their designated learning features:
Given a DNN model, we adapt the model structure by constructing separated parameter groups with the group convolution method;
Different learning classes are then assigned to individual parameter groups through a gradient redirection method;
Therefore, each parameter group is enforced to learn a set of designated classes, achieving structural feature allocation from the early stage of model training.
When deployed in an FL scenario, such structural feature allocation can guide explicit feature-level alignment through parameter structure group matching and provides the methodological foundation for the later feature-aligned federated model fusion.
\subsection{Feature Isolation by Group Convolution}
Guiding particular features to be learned by designated parameters is the key to structural feature allocation.
The primary motivation of our approach is the observation that the group convolution structure can isolate feature distributions through separated activation forwarding and gradient backpropagation~\cite{alexnet,shufflenet}.
Fig.~\ref{fig:group_conv} illustrates the model structure difference between common convolution and group convolution.
Regular convolution (Fig.~\ref{fig:group_conv} (a)) follows a densely-connected computational graph.
By contrast, the group convolution structure (Fig.~\ref{fig:group_conv} (b)) separates the convolution operations into groups.
The input/output feature maps are therefore mapped only within their current group.
Group convolution was first adopted in AlexNet~\cite{alexnet}, which used two groups to relieve the computational burden on a single GPU.
After training, however, the model learns distinct features in the two convolution groups (\textit{i.e.}, shape-oriented and color-oriented features)~\cite{alexnet}.
A similar phenomenon is observed in ShuffleNet~\cite{shufflenet}: without channel shuffling, features become biased within each convolutional group.
Although initially designed for computational benefits, the grouped structure naturally demonstrates a feature regulation and isolation effect in both models, which has rarely been explored in prior works.
We hypothesize that this feature isolation arises from the gradient isolation effect incurred by the separable computational graph of group convolution.
As different groups forward separately, the backward gradients carrying feature information also flow only within their own groups, gradually leading to feature isolation.
Formally, in the regular densely-connected convolution, each output feature map $OF_i~ (i \in [1:d_o])$ is calculated by convolving over all input feature maps $IF_j~ (j \in [1:d_i])$:
\begin{equation}
OF_{1:d_o} = \{w_1 * IF_{1:d_i}, ~~w_2 * IF_{1:d_i}, ~~..., ~~w_{d_o} * IF_{1:d_i}\},
\end{equation}
where $d_o$ and $d_i$ are the output/input feature map depths, $w_i$ is the weight of the $i^{th}$ convolution filter, and $*$ is the convolution operation.
The gradients of input feature $IF_i$ can be formulated as:
\begin{equation}
\nabla IF_{j} = \sum\nolimits_i \frac{\partial OF_i} {\partial IF_j}, ~~i \in (1, ~d_o).
\end{equation}
That is, the gradient of $IF_{j}$ fuses the information from all output features ($OF_i$).
Due to such interleaved and fused gradients, the input layer's feature encoding can be highly unpredictable.
In distributed FL, such random encoding incurs feature mismatches and averaging conflicts, leading to sub-optimal convergence.
By contrast, the group convolution separates the computational graph, as well as the convolutional inputs \& outputs into different groups $G$. The grouped output feature maps ($OF$) are:
\begin{equation}
\begin{split}
& OF_{1:G_1} = \{w_1 * IF_{1~:~G_1}, ~~..., ~~w_{G_1} * IF_{1~:~G_1} \}, ~... \\
& OF_{G_i:G_{i+1}} = \{w_{G_i} * IF_{G_i~:~G_{i+1}}, ~~..., ~~w_{G_{i+1}} * IF_{G_i~:~G_{i+1}} \}, ~...\\
& OF_{G_{g-1}:d_o} = \{w_{G_{g-1}} * IF_{G_{g-1}~:~d_i}, ~~..., ~~w_{d_o} * IF_{G_{g-1}~:~d_i}\}.
\end{split}
\end{equation}
As each input feature map contributes only to the within-group output feature maps, \textit{i.e.}, $OF_{G_i:G_{i+1}}$, the backward gradient for $IF_j$ only fuses the information within its current group $G_{g_j}$:
\begin{equation}
\nabla IF_{j} = \sum\nolimits_i \frac{\partial OF_i} {\partial IF_j}, ~~i \in (G_{g_j}, ~G_{g_j+1}).
\end{equation}
In this case, the group convolution structure builds implicit boundaries between groups, and achieves the feature isolation effect.
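This isolation effect can be checked numerically by viewing a 2-group layer as a block-diagonal weight matrix (the \textit{im2col} view mentioned earlier); this is a toy linear sketch of our own, in which cross-group entries, and hence cross-group gradients, are exactly zero:

```python
import numpy as np

# A 2-group layer in matrix (im2col) form: block-diagonal weights mean each
# output group reads only its own input group.
W = np.zeros((4, 4))
W[:2, :2] = [[1.0, 2.0], [3.0, 4.0]]   # group 1 filters
W[2:, 2:] = [[5.0, 6.0], [7.0, 8.0]]   # group 2 filters

x = np.ones(4)
y = W @ x  # forward pass: y[:2] depends only on x[:2], y[2:] only on x[2:]

# For this linear view, d(OF_i)/d(IF_j) = W[i, j], so the gradient reaching
# group-1 inputs from group-2 outputs is identically zero:
grad_g2_to_g1 = W[2:, :2]
```

The zero off-diagonal blocks are the implicit boundaries: no forward activation or backward gradient ever crosses between groups.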
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{grad_red}
\vspace{-7mm}
\caption{We achieve structural feature allocation by adopting group convolution and decoupled logit (fully-connected) layers. Specifically, we use group convolution for feature isolation and combine it with decoupled logit layers for feature allocation.}
\vspace{-4mm}
\label{fig:group_conv}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=6.5in]{1_system}
\vspace{-3mm}
\caption{Our proposed Fed$^2$ framework includes two major steps: \textbf{(i)} We utilize group-convolution based structure to conduct feature-to-structure allocation; \textbf{(ii)} We then propose a feature paired averaging policy to enforce the feature alignment during federated model averaging.}
\label{fig:system}
\vspace{-3mm}
\end{figure*}
\subsection{Feature Allocation by Gradient Redirection}
Building upon this feature isolation effect, we then propose a gradient redirection method to control the features allocated to each convolution group.
Our main idea is to first separate the gradient components carrying different classes' features, and then redirect them into different groups.
Specifically, this is done by a decoupled fully-connected layer, in which different class logits are connected only to their corresponding groups.
Combining both feature isolation and allocation, we are able to achieve structural feature allocation and facilitate our feature-aligned federated learning framework.
As shown in Fig.~\ref{fig:group_conv} (a), traditional logit layers (\textit{i.e.}, the last fully-connected layer) usually fully connect all input feature maps (convolutional filters) with the logits.
The gradients carrying features from each logit thus flow through all output feature maps ($OF_i$):
\begin{equation}
OF_{1:d_o} \leftarrow \nabla Logit_c, ~~c \in (1, ~C).
\end{equation}
$C$ is the number of logits. Due to the fully-connected computational graph, the feature encoding in each layer becomes unpredictable.
Different from that, we decouple the original logit layer into groups as well.
An example is shown in Fig.~\ref{fig:group_conv} (b).
Each sub-layer maps its class logit(s) to one corresponding convolutional group only, enforcing the gradients to flow backward to the mapped structure group without any leakage:
\begin{equation}
OF_{G_i~:~G_{i+1}} \leftarrow \nabla Logit_g, ~~\text{$g$ is a subset of $\{1, \dots, C\}$.}
\label{eq:assign}
\end{equation}
Here $OF_{G_i~:~G_{i+1}}$ is one group of output feature maps, and $Logit_g$ is the group of class logits that are assigned to this structure group.
Through gradient redirection, each structure group acts as an anchor for its allocated features.
During the FL training process, the features of these classes are continuously contained within the group, thus enforcing the structural feature allocation.
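The decoupled logit layer of Eq.~\ref{eq:assign} can likewise be viewed as a block-structured classifier matrix. In the hypothetical sketch below (two feature maps per group, one class per group, our own sizes), a class's gradient touches only its own group's feature maps:

```python
import numpy as np

# Decoupled logit layer as a block-structured classifier matrix: logit c is
# connected only to its own group of output feature maps.
n_classes, maps_per_group = 4, 2
W_logit = np.zeros((n_classes, n_classes * maps_per_group))
for c in range(n_classes):
    W_logit[c, c * maps_per_group:(c + 1) * maps_per_group] = 1.0

# The gradient of logit c w.r.t. the pooled feature maps is row c of W_logit,
# so class-1 gradients flow back only into feature maps 2 and 3:
grad_class1 = W_logit[1]
touched_maps = np.nonzero(grad_class1)[0]
```

Each convolution group thereby receives gradients from a fixed, known set of class logits, which is what makes later logit-based group matching possible.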
\section{Fed2 Framework}
Based upon the structural feature allocation methodology, we propose {\it $Fed^2$}\xspace, a feature-aligned federated learning framework, which further enhances feature-level alignment with a dedicated DNN structure design and model fusion scheme for FL.
\subsection{FLA-Enhanced DNN Structure Design}
Fig.~\ref{fig:system} (a) shows the overview of our model structure adaptation.
For common DNN models like VGG~\cite{vgg} and MobileNets~\cite{mobilenet}, our structural adaptation splits the model into two parts:
For the lower convolutional layers, we maintain the densely-connected structures as shared layers.
For the higher convolutional and fully connected layers, we transform them into the group-wise structure.
\vspace{1mm}
\noindent \textbf{Shared Layers for Feature Sharing}:
We keep the lower convolutional layers shared because shallow layers learn mostly basic shared features, which exhibit few feature averaging conflicts~\cite{bmvc, vis1, vis3}.
In such cases, blindly separating these layers into groups would prevent low-level neurons from receiving gradients from all groups, degrading learning performance.
Our empirical study also verifies this conclusion (as we will show later).
Therefore, we reserve the shallow layers as shared layers.
\vspace{1mm}
\noindent \textbf{Decoupled Layers for Feature Isolation}:
By contrast, in the deeper convolutional layers, the encoded features diverge more and are more likely to conflict with each other during averaging~\cite{critical_path, rmatching}.
Therefore, we adopt group convolution and construct separable structure groups in these layers for further feature alignment.
To determine an appropriate number of decoupled layers, we evaluate the feature divergence of layer $l$ by the total variance ($TV$) of all neurons' feature vectors $P_{l,i}$ (defined in Eq.~\ref{eq:feature}) in this layer:
\begin{equation}
\begin{split}
TV_l = \frac{1}{I}\sum\nolimits_{i=1}^{I} || P_{l,i} - \bar{P}_{l}||_{2},
\label{eq:share_depth}
\end{split}
\end{equation}
where $I$ is the number of neurons in layer $l$ and $\bar{P}_{l} = E_i(P_{l,i})$ is the layer's mean feature vector.
Such layer-wise feature total variance usually stays low in the lower layers and surges in the later layers.
We therefore determine an appropriate decoupling depth by thresholding the $TV$ for the group-wise transformation.
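The $TV$ computation of Eq.~\ref{eq:share_depth} is straightforward to implement; the sketch below (our own function name and toy preference vectors) contrasts a low-$TV$ "shallow" layer with a high-$TV$ "deep" layer:

```python
import numpy as np

def layer_total_variance(P):
    """Eq. share_depth: mean L2 deviation of a layer's neuron feature
    vectors from their layer-wise mean. P has shape (I, C)."""
    return np.linalg.norm(P - P.mean(axis=0), axis=1).mean()

# Hypothetical layers: a "shallow" layer whose neurons share one feature
# profile (low TV) vs. a "deep" layer with diverse profiles (high TV).
shallow = np.tile([1.0, 0.0, 0.0], (6, 1))    # all neurons prefer class 0
deep = np.eye(3).repeat(2, axis=0)            # two neurons per class
tv_shallow = layer_total_variance(shallow)
tv_deep = layer_total_variance(deep)
```

Thresholding this per-layer scalar gives the decoupling depth: layers below the threshold stay shared, layers above it are transformed into groups.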
\vspace{1mm}
\noindent \textbf{Feature-to-Structure Allocation Enhancement}:
After determining the decoupled layers, we construct convolution groups and conduct gradient redirection by logit allocation as in Eq.~\ref{eq:assign}.
Such a step accomplishes an explicit feature-to-structure allocation.
One illustrative example is shown in Fig.~\ref{fig:system} (a)\footnote{For simplicity, Fig.~\ref{fig:system} (a) shows a one-class to one-group mapping example. For large datasets with more classes (\textit{e.g.}, 100 classes), multi-classes to one-group is achievable and also yields similar feature alignment benefits, as we will evaluate later.}.
For future feature alignment, we thus can easily match different convolution group structures by the learning tasks (\textit{i.e.}, logits) mapped to them.
Another optimization is that we replace batch normalization (BN) with group normalization (GN)~\cite{group_norm}.
Previous works have shown that BN can hurt distributed training performance because different local models tend to collect inconsistent batch means and variances (especially in non-IID cases)~\cite{iclr}.
Our structure design, by enforcing feature allocation, alleviates the feature statistics divergence within each group.
Therefore, we incorporate the GN layer and further improve the model convergence performance.
We will demonstrate the effectiveness of GN layers in later experiments.
By the proposed structure adaptation, {\it $Fed^2$}\xspace enables structure-feature alignment before the training process.
Such structure-feature pre-alignment also greatly simplifies the subsequent matching process, avoiding the heavy distance-based optimization computation and achieving a better feature alignment effect.
\subsection{Feature-Aligned Federated Model Fusion}
With the feature-to-structure pre-alignment, {\it $Fed^2$}\xspace can then promote previous weight-level matching methods to the feature-level.
Specifically, we propose the feature paired averaging algorithm.
First, the shared layers are averaged among the $N$ collaborative nodes.
As they extract fundamental shared features with few feature conflicts, coordinate-based FedAvg can be directly applied:
\begin{equation}
\Omega_{shared} = E(\omega_{shared}^{n}), ~~n \in (1, ~N),
\label{eq:sharelayer}
\end{equation}
where $\omega_{shared}^n$ denotes the weights of the shared layers from the $n$-{th} local model, and $\Omega_{shared}$ is the averaged global model weight.
For decoupled layers, as different groups are assigned with different class logits, weight averaging should be conducted within the groups that share the same learning tasks.
That is, only the groups that have the paired learning class are averaged together:
\begin{equation}
\begin{split}
\Omega^{g} = E(\omega^g_{i,j}), ~~\text{iff} ~~Logits(\omega^g_i) = Logits(\omega^g_j), \\
~~\forall i, j \in (1, N), ~~i \neq j.
\label{eq:pair_avg}
\end{split}
\end{equation}
Here $\Omega^{g}$ denotes the global weights of the $g$-th group structure, and $\omega^{g}_{i,j}$ are the local model weights of group $g$ on nodes $i$ and $j$.
The $g^{th}$ group's weights from two local models are averaged if and only if they are paired, \textit{i.e.}, $Logits(\omega^g_i) = Logits(\omega^g_j)$.
The proposed feature paired averaging method accomplishes the last step for feature aligned averaging in {\it $Fed^2$}\xspace.
Benefiting from the explicit feature-to-structure pre-alignment, our group pairing process only needs to match the learning logits (a one-hot class vector).
This greatly reduces the matching complexity compared to previous parameter matching on weights and activations~\cite{fedma, rmatching, pfnm}.
Therefore, {\it $Fed^2$}\xspace also alleviates the heavy computation and communication overhead of traditional post-alignment methods.
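The feature paired averaging of Eqs.~\ref{eq:sharelayer} and \ref{eq:pair_avg} reduces to averaging weights whose assigned logit sets match. A minimal sketch (our own data layout, with logit sets as dictionary keys):

```python
import numpy as np

def paired_average(node_groups):
    """Feature paired averaging sketch (Eq. pair_avg): group weights from
    different nodes are averaged iff their assigned logit sets are paired.
    node_groups: one dict per node, mapping frozenset(logits) -> weights."""
    fused = {}
    for logits in node_groups[0]:
        matched = [g[logits] for g in node_groups if logits in g]
        fused[logits] = np.mean(np.stack(matched), axis=0)
    return fused

# Two nodes; groups are paired by their assigned class logits, not by
# their coordinates inside the layer.
node_a = {frozenset({0, 1}): np.array([1.0, 1.0]),
          frozenset({2, 3}): np.array([4.0, 4.0])}
node_b = {frozenset({0, 1}): np.array([3.0, 3.0]),
          frozenset({2, 3}): np.array([6.0, 6.0])}
fused = paired_average([node_a, node_b])
```

Because pairing is a simple key lookup on the assigned logit sets, no distance computation or activation sharing is needed at fusion time.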
\section{Experiment}
We evaluate {\it $Fed^2$}\xspace with image classification tasks on CIFAR10 and CIFAR100.
Three DNN models (VGG9~\cite{fedma}, VGG16~\cite{vgg}, and MobileNetv1~\cite{mobilenet}) are adopted to evaluate the generality of our structure adaptation method.
Unless otherwise specified, all baselines use the original networks, while {\it $Fed^2$}\xspace adopts a general decoupling step, \textit{i.e.}, decoupling the last 6 layers into 10 convolution groups for all three networks.
For local data distributions, we consider both IID and non-IID scenarios.
State-of-the-art (SOTA) works including FedAvg~\cite{fl}, FedMA~\cite{fedma} and FedProx~\cite{fedprox} are compared to demonstrate the training efficiency and convergence benefits of our framework.
\begin{figure}[!b]
\centering
\vspace{-4mm}
\includegraphics[width=0.9\linewidth]{communication}
\vspace{-3mm}
\caption{Communication Efficiency Comparison.}
\vspace{-3mm}
\label{fig:communication}
\end{figure}
\begin{figure}[!tb]
\includegraphics[width=0.9\linewidth]{computation}
\vspace{-3mm}
\caption{Computational Efficiency Comparison.}
\label{fig:computation}
\end{figure}
\subsection{FL Convergence Performance}
We first compare the convergence performance of {\it $Fed^2$}\xspace with other SOTA methods.
The experimental settings are kept the same as in~\cite{fedma}, using VGG9 on the CIFAR10 dataset.
The heterogeneous data partition over $J~(J=16)$ clients is adopted by sampling $p_c \sim Dir_J (0.5)$ and allocating a $p_{c,j}$ proportion of the training data of class $c$ to local client $j$, where $Dir_J (0.5)$ is the Dirichlet distribution.
We evaluate the FL convergence performance from two aspects:
(1) convergence rate: accuracy w.r.t. communication rounds; and
(2) computation efficiency: accuracy w.r.t. computational efforts.
\vspace{1mm}
\noindent \textbf{Convergence Rate.}
Fig.~\ref{fig:communication} compares the test accuracy curves w.r.t communication rounds between {\it $Fed^2$}\xspace (red line) and other methods.
As we can see, our {\it $Fed^2$}\xspace shows superior convergence rate compared to the other three methods.
Within roughly 40 rounds, our method achieves the best accuracy of 88.29\%.
In contrast, other methods take 100 rounds but still achieve lower accuracy, \textit{e.g.}, FedMA (87.53\%, -0.76\% vs. ours) and FedAvg (86.29\%, -2.0\% vs. ours).
\vspace{1mm}
\noindent \textbf{Computation Efficiency.}
We further demonstrate the computation efficiency of {\it $Fed^2$}\xspace by comparing the model accuracy w.r.t the overall computational workloads.
Here the computational workloads are measured by the overall local training epochs on all nodes.
The results are shown in Fig.~\ref{fig:computation}.
As other methods' accuracies reported in~\cite{fedma} use varied local epoch settings ($E$), we run {\it $Fed^2$}\xspace under different settings for fair comparison.
{\it $Fed^2$}\xspace achieves better accuracy (89.1\%) than FedAvg and FedProx under the E=20 setting while using less computation (1200 vs. 2000 epochs).
Compared to FedMA under the E=150 setting, {\it $Fed^2$}\xspace finally achieves 88.29\% accuracy, {+0.76\%} better than FedMA (87.53\%) with slightly higher training effort (2000 vs. 1500 epochs).
Furthermore, {\it $Fed^2$}\xspace reaches its optimal accuracy of {89.96\%} under the E=1 setting, surpassing all other methods by large margins (\textit{e.g.}, {+2.32\%} over FedMA) while consuming the least training workload (1000 epochs).
\begin{figure*}[]
\centering
\includegraphics[width=6.4in]{conv_speed}
\vspace{-2mm}
\caption{Convergence Speed Comparison between FedAvg and our proposed framework ({VGG16 on CIFAR100}).}
\label{fig:conv_speed}
\vspace{-3mm}
\end{figure*}
\begin{table}[]
\caption{Data Heterogeneity (N: \# of nodes. C: \# of classes).}
\vspace{-2mm}
\renewcommand\arraystretch{0.9}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{cccccc}
\toprule
CIFAR10 & N * C & 10$\times$3 & 10$\times$4 & 10$\times$5 & 10$\times$10 \\ \midrule
\multirow{2}{*}{VGG9} & FedAvg~\cite{fl} & 82\% & 84\% & 85\% & 88\% \\
& \textbf{Ours} & 83\% & 88\% & 88\% & 90\% \\ \midrule
\multirow{2}{*}{MbNet} & FedAvg~\cite{fl} & 67\% & 71\% & 79\% & 85\% \\
& \textbf{Ours} & 86\% & 88\% & 90\% & 91\% \\ \bottomrule \toprule
CIFAR100 & N * C & 10$\times$30 & 10$\times$40 & 10$\times$50 & 10$\times$100 \\ \midrule
\multirow{2}{*}{VGG16} & FedAvg~\cite{fl} & 61\% & 64\% & 65\% & 66\% \\
& \textbf{Ours} & 64\% & 67\% & 68\% & 70\% \\
\bottomrule
\end{tabular}
\label{table:data_heter}}
\vspace{-6mm}
\end{table}
\vspace{-2mm}
\subsection{Scalability Evaluation}
We then conduct scalability evaluation to demonstrate the generality of {\it $Fed^2$}\xspace in varied experimental settings.
Specifically, we consider four scalability dimensions:
(i) data heterogeneity scaling from IID to Non-IID;
(ii) learning-task complexity with different numbers of learning classes;
(iii) FL system complexity with different number of nodes; and
(iv) low to high FL communication frequencies.
\vspace{1mm}
\noindent \textbf{Data Heterogeneity (IID to Non-IID).}
We first show that {\it $Fed^2$}\xspace provides consistent accuracy improvement under full-spectrum data heterogeneity in Table~\ref{table:data_heter}.
The experimental setting $N * C$ indicates there are $N$ nodes, and each node has only $C$ classes present in the local data.
A smaller $C$ means the data distribution on local nodes is more skewed, which usually leads to lower accuracy in FL.
Table~\ref{table:data_heter} shows the FL performance of VGG9 and MobileNet on CIFAR10.
Our {\it $Fed^2$}\xspace framework consistently outperforms FedAvg by large margins.
Specifically on VGG9, {\it $Fed^2$}\xspace achieves \textbf{+1\%$\sim$+4\%} accuracy improvement across all heterogeneity settings.
Meanwhile, we notice that MobileNet suffers more from the highly-skewed non-IID data.
Under the $10\times3$ setting, FedAvg on MobileNet only achieves 67\% accuracy.
By contrast, {\it $Fed^2$}\xspace achieves 86\% accuracy, \textbf{+19\%} over FedAvg (67\%).
The underlying reason for the accuracy improvement is the structurally aligned feature distribution across different local models, as demonstrated in Fig.~\ref{fig:motivation} (c).
Such feature alignment alleviates the feature-level averaging conflicts and thus provides models higher convergence accuracy.
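As a concrete reference for how such an $N \times C$ split can be generated, the sketch below is our own minimal illustration (the paper does not specify its partitioning code): each node draws $C$ distinct classes, and the samples of every class are divided evenly among the nodes holding it.

```python
import numpy as np

def partition_noniid(labels, n_nodes, classes_per_node, n_classes, seed=0):
    """N x C setting: each node draws `classes_per_node` distinct classes,
    and the samples of every class are split evenly among its holders."""
    rng = np.random.default_rng(seed)
    # Each node draws its own class subset without replacement.
    node_classes = [rng.choice(n_classes, classes_per_node, replace=False)
                    for _ in range(n_nodes)]
    # Invert: for each class, which nodes hold it?
    holders = {c: [n for n in range(n_nodes) if c in node_classes[n]]
               for c in range(n_classes)}
    node_idx = [[] for _ in range(n_nodes)]
    for c, hs in holders.items():
        if not hs:               # a class may end up with no holder
            continue
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        for share, n in zip(np.array_split(idx, len(hs)), hs):
            node_idx[n].extend(share.tolist())
    return node_idx
```

Each returned node then sees at most $C$ classes locally, reproducing the skew described above.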
\vspace{1mm}
\noindent \textbf{Classification Complexity.}
We then evaluate {\it $Fed^2$}\xspace using VGG16 on CIFAR100, which has more classification classes.
Full-spectrum data heterogeneity settings from $10\times30$ to $10\times100$ are used.
As we can see from Table~\ref{table:data_heter}, a similar conclusion can be drawn: {\it $Fed^2$}\xspace consistently outperforms FedAvg by \textbf{+3\%$\sim$+4\%} accuracy.
Besides that, Fig.~\ref{fig:conv_speed} shows the test accuracy curves in the training process for both methods.
In all non-IID settings (a-c), {\it $Fed^2$}\xspace consistently converges faster, needing only 50-80 rounds, while FedAvg usually needs at least 100 rounds.
One exception is the $10\times100$ IID setting (d), where the initial convergence rate of FedAvg is slightly faster, potentially because the IID data distribution leads to less feature divergence in the early stage of FL.
Nevertheless, our method soon exceeds FedAvg after 50 rounds and finally achieves \textbf{+4\%} higher accuracy, showing the necessity of feature alignment in achieving optimal model convergence accuracy.
\begin{table}
\caption{Node Scalability (N: \# of nodes. C: \# of classes).}
\vspace{-2mm}
\renewcommand\arraystretch{0.9}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{cccccc}
\toprule
& N * C & 10$\times$5 & 20$\times$5 & 50$\times$5 & 100$\times$5 \\ \midrule
\multirow{2}{*}{VGG9} & FedAvg~\cite{fl} & 85\% & 86\% & 83\% & 83\% \\
& \textbf{Ours} & 88\% & 88\% & 86\% & 87\% \\ \midrule
\multirow{2}{*}{MbNet} & FedAvg~\cite{fl} & 79\% & 85\% & 81\% & 78\% \\
& \textbf{Ours} & 90\% & 90\% & 89\% & 88\% \\ \bottomrule
\end{tabular}}
\vspace{-5mm}
\label{table:nodes}
\end{table}
\vspace{1mm}
\noindent \textbf{Node Scalability.}
We then evaluate the scalability of {\it $Fed^2$}\xspace with an increasing number of FL nodes.
Specifically, we scale the number of collaborative nodes from 10 up to 100 under one medium data-heterogeneity setting (each node has only 5 classes in its local data distribution).
The results are shown in Table~\ref{table:nodes}.
Across all node scales, {\it $Fed^2$}\xspace provides consistently better performance, ranging from \textbf{+2\%$\sim$4\%} on VGG9 to \textbf{+5\%$\sim$11\%} on MobileNetV1.
\vspace{1mm}
\noindent \textbf{Communication Frequency.}
We finally evaluate the performance of {\it $Fed^2$}\xspace under different communication frequencies.
Here we use the number of local epochs per communication round ($E$) to indicate the frequency.
A larger $E$ indicates a lower frequency.
In such cases, FL performance usually degrades since model collaboration is less frequent, which can incur more severe feature divergence.
Fig.~\ref{fig:frequency} compares {\it $Fed^2$}\xspace with other methods under different communication frequencies.
All models are trained with 54 rounds as per settings in~\cite{fedma}.
As we can see, FedAvg (blue bar) shows lower accuracy (85.7\%$\rightarrow$78.5\%) when the frequency decreases from once per 20 epochs to once per 100 epochs.
In contrast, {\it $Fed^2$}\xspace with feature-aligned averaging is not affected by the lower communication frequency, consistently showing the best performance (\textbf{88\%$\sim$90\%}) and improving over FedMA by \textbf{+3.4\%$\sim$5.1\%} accuracy under all settings.
\begin{figure}[!tb]
\includegraphics[width=1\linewidth]{comm_freq}
\vspace{-6mm}
\caption{Communication Frequency Comparison.}
\label{fig:frequency}
\vspace{-5mm}
\end{figure}
\subsection{Sensitivity Analysis}
We finally conduct sensitivity analysis on three design components of {\it $Fed^2$}\xspace: the sharing-layer depth, the number of groups, and the group-normalization optimization.
\vspace{1mm}
\noindent \textbf{Sharing Depth Analysis.}
We first demonstrate that our framework's performance is robust to the sharing depth hyper-parameter selection in Fig.~\ref{fig:depth}.
As we can observe, the total variance of the layers (from a model pre-trained for 50 epochs) offers a good indication of the layer-wise feature divergence.
The results also show that it is necessary to keep enough layers (4$\sim$6) shared so that the fundamental features can be learned through all nodes' collaboration.
By retaining enough shared layers in our design, {\it $Fed^2$}\xspace's performance is highly robust to the sharing-depth hyper-parameter selection, achieving consistently better accuracy than the original non-grouped model over a wide range (\textit{e.g.}, 6$\sim$13).
\vspace{1mm}
\noindent \textbf{Grouping Number Analysis.}
A similar analysis shows {\it $Fed^2$}\xspace's robustness with respect to the number of groups.
The results are shown in Fig.~\ref{fig:groups} (VGG16 on CIFAR100, N*C:10$\times$50).
We evaluate three group settings (G=10, 20, 100).
Overall, three settings all show better accuracy than FedAvg, demonstrating the effectiveness of using group convolution for feature alignment.
Among them, G=10 and G=20 achieve the optimal accuracy at $\sim$68\%, +2.7\% over FedAvg with the non-grouped structure (65.3\%).
The G=100 setting, though achieving a sub-optimal accuracy improvement (+1.9\% over FedAvg), shows the best convergence speed in the early stage (the green curve).
We hypothesize this is due to its most fine-grained feature allocation and alignment effect (the G=100 setting enables a one-class-to-one-group mapping, while the others use many-class-to-one-group mappings).
However, with too many groups, the per-group capacity (\textit{e.g.}, the number of neurons in each group) becomes limited, which slightly hinders the final convergence accuracy.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.85\linewidth]{depth}
\vspace{-3mm}
\caption{Sharing Depth Analysis.}
\label{fig:depth}
\vspace{-5mm}
\end{figure}
\vspace{1mm}
\noindent \textbf{Normalization Strategy Analysis.}
We finally conduct analysis on our normalization strategies (VGG9, CIFAR10, N*C:10x4) in Fig.~\ref{fig:batchnorm}.
The baseline FedAvg without norm achieves 84.13\% accuracy.
FedAvg+GN hurts the model performance, degrading the accuracy to 83.34\%.
By contrast, ours+GN achieves the best accuracy of 88.26\%, +2.8\% over ours+BN (85.46\%).
This implies that our grouped model structure indeed incurs less statistical divergence within each group, so GN can boost FL performance.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.8\linewidth]{granularity}
\vspace{-3mm}
\caption{Number of Groups Analysis.}
\vspace{-5mm}
\label{fig:groups}
\end{figure}
\begin{figure}[!tb]
\centering
\includegraphics[width=0.8\linewidth]{batchnorm}
\vspace{-3mm}
\caption{Normalization Strategy Analysis.}
\vspace{-5mm}
\label{fig:batchnorm}
\end{figure}
\section{Experiment}
\vspace{-1.5mm}
\subsection{\textbf{Experiment Setup}}
\vspace{-1.5mm}
We evaluated the proposed CEL method under simulated non-IID data settings in PyTorch~\cite{?}.
We assume that all the clients will participate in the model averaging process.
The CNN models we used include VGG9, VGG16~\cite{Simo:2014:arXiv}, and MobileNetV1~\cite{?}, while the datasets we used are CIFAR10 and CIFAR100~\cite{krizhevsky2009CIFAR}.
For the CIFAR10 and CIFAR100 datasets, we use data augmentation (random crops and flips) and normalization.
We evaluated the performance margin of our framework versus the baseline, FedAvg, and a state-of-the-art heterogeneity-resistant method, FedMA.
In this part, the \textit{$\Psi$-Net} structures were kept constant for each CNN model: we decouple the last two convolutional layers of VGG9 and the last 6 convolutional layers of VGG16 and MobileNetV1, and we choose a group size of 10 classes for the one-to-many convolutional group mapping when training on CIFAR100.
As a first step, we compared our method and the baseline under comprehensive non-IID data settings.
In each trial, the global training data for each task is divided into several balanced portions, which are allocated to different nodes.
In the Non-IID Spectrum section (Table~\ref{?}), we tested the resistance to distributions with skewed local tasks.
In the Node Scalability part (Table~\ref{?}), where the number of local tasks is constant, we analyzed the feasibility of working with massive numbers of nodes and small local datasets.
We also evaluated the convergence speed across the non-IID spectrum for VGG16 on CIFAR100 and provide the results in Fig.~\ref{?}.
To compare with FedMA, we implemented our framework under the Dirichlet non-IID data-partitioning condition used by the authors of~\cite{?}.
Meanwhile, we tested the sensitivity of our framework to the number of local training epochs and analyzed its impact on the local training effort.
Finally, to gain insight into our \textit{$\Psi$-Net} structural configurations, we studied the impact of two major hyper-parameters: Grouping Depth and Path Granularity.
For grouping-depth sensitivity, we jointly analyzed the test accuracy at different decoupling depths and the activation divergence of different layers.
For path granularity, we evaluated the trade-off between fine-grained convolutional groups and the model-capacity loss caused by decoupling.
\begin{table}[]
\parbox{.49\linewidth}{
\caption{Non-iid Spectrum (CIFAR10).}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{cccccc}
\toprule
& N * C & 10x3 & 10x4 & 10x5 & 10x10 \\ \midrule
\multirow{2}{*}{VGG9} & FedAvg & 82\% & 84\% & 85\% & 88\% \\
& Ours & 83\% & 88\% & 88\% & 90\% \\ \midrule
\multirow{2}{*}{MbNet} & FedAvg & 67\% & 71\% & 79\% & 85\% \\
& Ours & 86\% & 88\% & 90\% & 91\% \\ \bottomrule
\end{tabular}}}
\hfill
\parbox{.49\linewidth}{
\caption{Node Scalability (CIFAR10).}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{cccccc}
\toprule
& N * C & 10x5 & 20x5 & 50x5 & 100x5 \\ \midrule
\multirow{2}{*}{VGG9} & FedAvg & 85\% & 86\% & 83\% & 83\% \\
& Ours & 88\% & 88\% & 86\% & 87\% \\ \midrule
\multirow{2}{*}{MbNet} & FedAvg & 79\% & 85\% & 81\% & 78\% \\
& Ours & 90\% & 90\% & 89\% & 88\% \\ \bottomrule
\end{tabular}}}
\end{table}
\begin{figure*}[]
\centering
\vspace{-3mm}
\includegraphics[width=5.5in]{conv_speed}
\vspace{-5mm}
\caption{Convergence Speed Comparison between FedAvg and Proposed Method (\textbf{CIFAR100}).}
\label{fig:conv_speed}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[]
\centering
\vspace{-3mm}
\includegraphics[width=5.5in]{epoch_efforts}
\vspace{-5mm}
\caption{Local Epoch Influence Evaluation and Training Overhead Comparison with SOTA.}
\label{fig:local_epoch}
\vspace{-5mm}
\end{figure*}
\begin{figure}[]
\centering
\captionlistentry[table]{}
\captionsetup{labelformat=andtable}
\caption{Grouping Depth Sensitivity. \textbf{Left:} Total Variance of each layer. \textbf{Right:} Performance under different grouping depth, showing the grouped models' performance is robust to depth selection in a large range.}
\parbox{.32\linewidth}{
\includegraphics[width=1.8in]{4_gradient}
\label{fig:depth}
}
\qquad
\parbox{.6\linewidth}{
\centering
\vspace{-4mm}
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{c|cccccc|c}
\toprule
VGG16 & C1-2 & C2-2 & C3-2 & C4-2 & C5-2 & FC & None \\ \midrule
CIFAR10 & 82\% & 87\% & \textbf{89\%} & \textbf{90\%} & \textbf{89\%} & 87\% & 86\% \\ \hline
CIFAR100 & 53\% & 62\% & 67\% & \textbf{68\%} & \textbf{69\%} & \textbf{68\%} & 65\% \\ \bottomrule
\end{tabular}}}
\vspace{-5mm}
\end{figure}
\begin{figure}[]
\centering
\vspace{-2mm}
\includegraphics[width=2.5in]{granularity}
\vspace{-2mm}
\caption{Path Granularity: one-to-one and one-to-many mapping.}
\label{fig:path}
\vspace{-5mm}
\end{figure}
\section{\textbf{Distributed Community Structure}}
The choice of communication topology can greatly affect the performance of our learning system. To balance training accuracy against communication consumption, we discuss the properties of the classic Ring topology~\cite{x,x}, the Mesh topology~\cite{x,lee2006emerging}, and the generally connected Mesh topology we implemented.
In this section, we discuss the influence of these network topologies on our decentralized collaboration system.
\subsection{\textbf{Topology Definition}}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.3in]{3_topology}
\caption{Shuffled Decentralized Communication Topology.}
\label{fig:topology}
\vspace{-8mm}
\end{center}
\end{figure}
Formally, a network topology can be described by an undirected graph $(V, E)$, where $V$ denotes the vertex set, \textit{i.e.}, the set of $n$ participating nodes ($V := \{1, 2, \dots, n\}$), and $E$ denotes the set of edges in the graph.
As is standard, we use the adjacency matrix $W \in \mathbb{R}^{n\times n}$ to represent the graph $(V, E)$.
The adjacency matrix is weighted, since each node pair $(i, j)$ can share a different amount of parameters, and thus the communication overhead of edge $E(i,j)$ also differs.
Therefore, the value of each entry $W_{i,j}$ describes how much communication resource nodes $i$ and $j$ need for mutual collaboration.
\vspace{-3mm}
\paragraph{\textbf{Ring and Mesh Topology}}
Based on the above definition, the Ring and Mesh topologies can be represented as symmetric matrices in which an entry of $0$ denotes that there is no edge between nodes $i$ and $j$, while an entry of $C_{i,j}$ denotes that the nodes are connected with communication overhead $C_{i,j}$.
\scriptsize
\begin{equation}
\label{eq:adjm}
\begin{split}
W_{Ring} = \left\{
\begin{aligned}
C_{i,j}, &~~~~\text{If} ~~|i-j| = 1;\\
0, &~~~~\text{Otherwise}.
\end{aligned}
\right. \hspace{0.1cm}
W_{Mesh} = \left\{
\begin{aligned}
C_{i,j}, &~~~~\text{If} ~~i \neq j;\\
0, &~~~~\text{Otherwise}.
\end{aligned}
\right.
\end{split}
\end{equation}
\normalsize
As we can see from Figure~\ref{fig:topology}, networks with the Ring topology have the fewest communication edges, while the Mesh network has the most. Consequently, the Ring topology incurs the least communication overhead, and the Mesh topology the most.
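The two adjacency matrices of Eq.~(\ref{eq:adjm}) can be materialized directly from a pairwise cost matrix; the following sketch is our own illustration (function names are ours, not from the paper):

```python
import numpy as np

def ring_adjacency(cost):
    """W_Ring from Eq. (adjm): an edge iff |i - j| = 1 (note that, as
    written, the equation omits a wrap-around edge between nodes 0 and n-1)."""
    n = cost.shape[0]
    W = np.zeros_like(cost)
    for i in range(n):
        for j in range(n):
            if abs(i - j) == 1:
                W[i, j] = cost[i, j]
    return W

def mesh_adjacency(cost):
    """W_Mesh from Eq. (adjm): every distinct pair of nodes is connected."""
    W = cost.copy()
    np.fill_diagonal(W, 0)      # no self-edges
    return W
```

With a symmetric cost matrix $C$, both results are symmetric, and the Ring matrix has far fewer nonzero entries than the Mesh one, mirroring the overhead comparison above.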
\vspace{-3mm}
\paragraph{\textbf{Our Implemented Topology}}
To reach a better parameter consensus in our training, we expect to implement a more interconnected system while keeping the communication load affordable.
Instead of Ring or Mesh, we implemented a generally connected ``Mesh'' topology.
Its structure differs from that of the conventional Mesh: its complexity lies between those of the Ring and Mesh topologies.
To control the communication complexity of our topology, we set a hyper-parameter $R$ that controls the total number of times the global dataset is replicated across the whole learning system.
The tasks for each node are drawn randomly and without replacement from a training-data library that contains $R$ replicas of the training data for each task.
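The task-drawing procedure above can be sketched as follows; the rejection loop that prevents a node from drawing two replicas of the same task is our own assumption, not necessarily the paper's exact procedure:

```python
import random

def assign_tasks(n_nodes, n_tasks, R, seed=0):
    """Deal a library containing R replicas of each task to the nodes
    without replacement, so each task is trained by exactly R nodes."""
    rng = random.Random(seed)
    per_node, rem = divmod(n_tasks * R, n_nodes)
    assert rem == 0, "library size must divide evenly among nodes"
    while True:
        library = [t for t in range(n_tasks) for _ in range(R)]
        rng.shuffle(library)
        nodes = [library[i * per_node:(i + 1) * per_node]
                 for i in range(n_nodes)]
        # Rejection step (assumption): redraw if any node got a duplicate task.
        if all(len(set(ts)) == len(ts) for ts in nodes):
            return nodes
```

Nodes that end up holding replicas of the same task are exactly the ones that need to collaborate, which is what makes the resulting topology ``generally connected''.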
\subsection{Communication Modeling}
We implemented the Ring and our generally connected Mesh topology in the experiments.
To compare our communication cost with that of federated learning, we measure the communication cost of our collaborative learning system as the amount of data downloaded by each single node under a point-to-point protocol.
For centralized learning systems, the communication cost of our system can be formulated as:
\vspace{-1.5mm}
\begin{small}
\begin{equation}
\scriptsize
\medmuskip=-1mu
\begin{aligned}
Q = \sum_{i=1}^{L-D} B_w f_{i+1} (k_i)^{2} + P\sum_{i=D}^{L} B_w f_{i+1}F_{r}(k_i)^{2}
\end{aligned}
\label{eq:comcent}
\vspace{-0.5mm}
\end{equation}
\end{small}
where $L$ denotes the total number of convolutional layers of our DNN model; $D$ denotes the number of decoupled layers; $B_w$ denotes the precision of the model weights; $k_i$ denotes the kernel width; $P$ denotes the number of functionality structures on each node; $f_i$ denotes the number of input channels of each filter in the $i^{th}$ convolutional layer, so that $f_{i+1}$ equals the number of filters in the $i^{th}$ layer; and $F_r$ denotes the pruning ratio of the decoupled layers.
For learning systems with our ``Mesh'' topology, the communication cost can be formulated as:
\vspace{-1.5mm}
\begin{small}
\begin{equation}
\scriptsize
\medmuskip=-1mu
\begin{aligned}
Q = N\sum_{i=1}^{L-D} B_w f_{i+1} (k_i)^{2} + P(R-1)\sum_{i=D}^{L} B_w f_{i+1}F_{r}(k_i)^{2}
\end{aligned}
\label{eq:commesh}
\vspace{-0.5mm}
\end{equation}
\end{small}
where $R$ denotes the number of times each task is repeated across the learning system.
From these formulations of communication consumption, we can see that the shared-layer cost grows linearly with the number of participating nodes, while the decoupled-layer cost depends on the replication factor $R$ rather than on the number of nodes.
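As a sanity check of Eqs.~(\ref{eq:comcent}) and (\ref{eq:commesh}), the sketch below evaluates both costs. We take the first $L-D$ layers as shared and the last $D$ as decoupled (one reading of the index conventions, which are slightly ambiguous in the equations), and all names are ours:

```python
def comm_cost_centralized(f_out, k, D, Bw, P, Fr):
    """Eq. (comcent): f_out[i] = number of filters of layer i (i.e. f_{i+1}),
    k[i] = kernel width; shared layers send full kernels, the P decoupled
    functionality structures send pruned kernels."""
    L = len(k)
    shared = sum(Bw * f_out[i] * k[i] ** 2 for i in range(L - D))
    decoupled = P * sum(Bw * f_out[i] * Fr * k[i] ** 2 for i in range(L - D, L))
    return shared + decoupled

def comm_cost_mesh(f_out, k, D, Bw, P, Fr, N, R):
    """Eq. (commesh): shared-layer traffic scales with the N nodes, and the
    decoupled-layer traffic with the R - 1 extra task replicas."""
    L = len(k)
    shared = N * sum(Bw * f_out[i] * k[i] ** 2 for i in range(L - D))
    decoupled = P * (R - 1) * sum(Bw * f_out[i] * Fr * k[i] ** 2
                                  for i in range(L - D, L))
    return shared + decoupled
```

Note that for $N = 1$ and $R = 2$ the two formulas agree, which is a quick consistency check between the two equations.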
\section{Conclusion}
\label{sec:conc}
In this paper, we proposed {\it $Fed^2$}\xspace, a feature-aligned federated learning framework to resolve the feature fusion conflicts problem in FedAvg and enhance the FL performance.
Specifically, a feature interpretation method is first proposed to analyze the feature fusion conflicts.
To alleviate that, we propose a structural feature allocation methodology by combining feature isolation and gradient redirection.
The {\it $Fed^2$}\xspace framework is then proposed, which is composed of (i) model structure adaptation and (ii) feature paired averaging, to achieve firm feature alignment throughout the FL process.
Experiments demonstrate significant improvements in convergence speed, accuracy, and computation/communication efficiency over state-of-the-art works.
\section{Broader Impact}
\label{sec:impact}
\vspace{-3mm}
By comprehensively integrating the proposed techniques into an innovative ``Feature-Allocated Model Regulation'' federated learning framework, broad impacts will be brought into other scientific areas: (1) The enabled heterogeneous collaboration capability may support a much wider range of deep learning applications with local tasks and non-IID data constraints, such as spectrum monitoring in 5G wireless networks and automatic public transportation systems. (2) The optimized edge computation and communication may support more practical deployment on divergent edge devices with limited resources, such as smartphones and wearable devices. (3) Moreover, the scalable decentralized framework topology can also inspire further optimization and application exploration on intelligent IoT and edge computing systems. These research outcomes and application potentials will build a solid technical foundation for next-generation deep learning to support ubiquitous artificial intelligence applications.
\subsection{Introduction}
In this paper we show the validity, in the case of algebraic fiber spaces over abelian varieties (and more generally over varieties of maximal Albanese dimension), of some conjectures on the behavior of Kodaira dimension proposed by the second author; see \cite[Conjectures 3.1 and 3.4]{Popa}.
\begin{intro-theorem}\label{main}
Let $f \colon X \to A$ be an algebraic fiber space, with $X$ a smooth projective variety and $A$ an abelian variety. Assume that $f$ is smooth over an open set $V \subseteq A$, and denote its general fiber by $F$. Then
\begin{enumerate}
\item $\kappa (V) + \kappa (F) \ge \kappa (X)$.
\medskip
\item If $A \smallsetminus V$ has codimension at least $2$ in $A$ (e.g. if $f$ is smooth), then $\kappa (X) = \kappa (F)$. More precisely, there exists an \'etale cover $X'$ of $X$ such that
$$P_m (X') = P_m (F) \ge P_m (X) \,\,\,\,\,\,{\rm for ~all}\,\,\,\, m \ge 0.$$
\end{enumerate}
\end{intro-theorem}
Here, for a quasi-projective variety $V$, $\kappa(V)$ denotes the log Kodaira dimension, defined as follows: if we
take any smooth projective compactification $Y$ of $V$ such that $D = Y \smallsetminus V$ is a divisor with simple normal crossings, then $\kappa (V) = \kappa (Y, K_Y + D)$.
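For instance, for $V = \mathbb{G}_m = \mathbb{P}^1 \smallsetminus \{0, \infty\}$ one has
$$\deg \left( K_{\mathbb{P}^1} + 0 + \infty \right) = -2 + 2 = 0, \,\,\,\,\,\,{\rm hence}\,\,\,\, \kappa (V) = 0,$$
while for $V = \mathbb{A}^1 = \mathbb{P}^1 \smallsetminus \{\infty\}$ one has $\deg (K_{\mathbb{P}^1} + \infty) = -1$, hence $\kappa (V) = -\infty$.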
Item (2) recovers (and strengthens) in particular \cite[Corollary 3.1]{PS2}, stating that if $f$ is smooth, then
$\dim X - \kappa (X) \ge \dim A$. As for the proof, thanks to the structure of effective divisors on abelian varieties, item (1) is in fact a consequence of (2). On the other hand, Iitaka's $C_{n,m}$ conjecture on the subadditivity of the Kodaira dimension is known for algebraic fiber spaces over abelian varieties by \cite{CP} (see also \cite{HPS}), giving $\kappa (X) \ge \kappa (F)$ for any such. Thus the key point is the last statement in (2), for which we employ techniques from \cite{LPS} regarding the Chen-Jiang decomposition, as well as a hyperbolicity-type result from \cite{PS3}. We prove in fact a stronger statement:
\begin{intro-theorem}\label{trivial}
Let $f \colon X \to A$ be a surjective morphism from a smooth projective variety to an abelian variety.
Assume that $f$ is smooth away from a closed set of codimension at least $2$ in $A$, and denote its general fiber by $F$.
Then for every $m\ge 1$ we have
$$f_* \omega_X^{\otimes m} \simeq \bigoplus_{i =1}^{P_m (F)} \alpha_i,$$
where $\alpha_i \in {\rm Pic}^0 (A)$ are (possibly repeated) torsion line bundles. In particular, if $f_* \omega_X^{\otimes m}$
is globally generated for some $m$, then
$$f_* \omega_X^{\otimes m} \simeq \shf{O}_A^{\oplus P_m(F)}.$$
\end{intro-theorem}
Note that assuming Viehweg's $C_{n,m}^+$ conjecture, the result in Theorem \ref{main}(2) implies that a morphism $f \colon X \to A$ which is smooth in codimension $1$ satisfies
${\rm Var}(f) = 0$, i.e. $f$ is birationally isotrivial (cf. also \cite[Corollary 3.2]{PS2}). If $f$ is smooth and one assumes that $F$ has a good minimal model, this is also implied by the main result in \cite{Taji}, namely the proof of the Kebekus-Kov\'acs conjecture bounding ${\rm Var}(f)$ from above by the Kodaira dimension of the base. By analogy with Ueno's Conjecture $K$, we propose the following strengthening:
\begin{intro-conjecture}
Let $f \colon X \to A$ be an algebraic fiber space, with $X$ a smooth projective variety and $A$ an abelian variety, and with general fiber $F$. If $f$ is smooth away from a closed set of codimension at least $2$ in $A$, then there exists an isogeny $A' \to A$ such that
$$X\times_{A} A' \sim A' \times F,$$
i.e. $X$ becomes birational to a product after an \'etale base change.
\end{intro-conjecture}
When $f$ is smooth with canonically polarized fibers, i.e. $K_F$ is ample, this is proved (with isomorphism) in \cite[\S2]{Kovacs}.
It turns out that the result in Theorem \ref{main} holds more generally over any variety of maximal Albanese dimension, i.e. endowed with a (not necessarily surjective) generically finite morphism to an abelian variety. We state this separately, since
the proof is a rather involved reduction to the case of abelian varieties, combined with Theorem \ref{main}.
\begin{intro-theorem}\label{mad}
Let $f \colon X \to Y$ be an algebraic fiber space, with $X$ and $Y$ smooth and projective, and $Y$ of maximal Albanese dimension. Assume that $f$ is smooth over an open set $V \subseteq Y$, and denote its general fiber by $F$. Then
\begin{enumerate}
\item $\kappa (V) + \kappa (F) \ge \kappa (X)$.
\item If $Y \smallsetminus V$ has codimension at least $2$ in $Y$, then $\kappa (X) = \kappa (F) + \kappa (Y)$.
\end{enumerate}
\end{intro-theorem}
Note that the subadditivity $\kappa (X) \ge \kappa (F) + \kappa (Y)$ is known to hold for any algebraic fiber space over $Y$ of maximal Albanese dimension, by \cite[Theorem 1.1]{HPS}.
In a different but related direction, we remark that using the klt version provided in \cite{Meng} (see also \cite{Jiang} for related results) of the global generation result for direct images in \cite{LPS} used for the results above, one can go beyond the results of \cite{CP} and \cite{HPS}. Concretely, we show the validity of the \emph{log} Iitaka conjecture over an abelian variety $A$, in the case of divisors that do not dominate $A$ (though the statement can be phrased slightly more generally). For a divisor $D$ on a smooth projective variety $X$, we use the notation $P_m (X, D) : = h^0 (X, m (K_X + D))$.
\begin{intro-theorem}\label{subadditivity}
Let $f\colon X \to A$ be an algebraic fiber space, with $X$ a smooth projective variety, and $A$ an abelian variety. Denote by $F$ the
general fiber of $f$. Let $D$ be a reduced effective divisor on $A$ and $E$ a reduced effective divisor on $X$ such that
${\rm Supp}(f^*D)\subseteq E$. Then
$$\kappa (X,K_X + E) \ge \kappa (F) + \kappa (A,D).$$
More precisely, there exists an \'etale cover $\psi \colon X' \to X$ and a fixed integer $k \ge 1$ such that
$$P_{mk} (X', \psi^* E) \ge P_{mk} (F) \cdot P_m (A, D) \,\,\,\,\,\,{\rm for ~all}\,\,\,\, m \ge 0.$$
\end{intro-theorem}
The most general case of the log Iitaka conjecture allows for divisors $E$ that dominate $A$, in other words replacing
$\kappa(F)$ by $\kappa (F, K_F + E|_{F})$. It would be solved with
similar methods if extensions of the results in \cite{LPS} were available in the setting of log canonical pairs; this is however
beyond our reach at the moment. Note also that if $E = (f^*D)_{\rm red}$, it is predicted in \cite[Conjecture 4.1]{Popa} that the inequality in Theorem \ref{subadditivity} becomes an equality if $f$ is smooth away from $D$, which would extend Theorem \ref{main}(2).
One final word: in the results of this paper, morphisms that are smooth away from a closed subset of codimension at least $2$ behave just like morphisms that are smooth everywhere. However, working under this weaker hypothesis leads to
additional technical arguments. In order to isolate the main ideas, it may be helpful to assume global smoothness at a first reading.
\noindent
{\bf Acknowledgements.} The first author would like to thank the Department of Mathematics at Harvard University for
its hospitality during the preparation of this paper.
\subsection{Background}
We will make use of the following two results from \cite{LPS}, which are shown in \emph{loc. cit.} to be equivalent.
\begin{theorem}[{\cite[Theorem B]{LPS}}]\label{thm:LPS1}
Let $f \colon X \to A$ be a morphism from a smooth projective variety to an abelian
variety. Then there exists an isogeny $\varphi \colon A' \to A$ such that if
\[
\begin{tikzcd}
X' \dar{f'} \rar & X \dar{f} \\
A' \rar{\varphi} & A
\end{tikzcd}
\]
is the fiber product, then $\varphi^* f_* \omega_X^{\otimes m} \simeq f'_* \omega_{X'}^{\otimes m}$ is globally generated for every $m \geq 1$.
\end{theorem}
Recall that an (algebraic) fiber space is a surjective projective morphism with connected fibers.
We remark, as could have been done already in \cite{LPS}, that Theorem \ref{thm:LPS1} implies a strengthening of the subadditivity of the Kodaira dimension for algebraic fiber spaces over abelian varieties proved in \cite{CP} and later also in \cite{HPS}.
\begin{corollary}\label{old}
Let $f \colon X \to A$ be an algebraic fiber space over an abelian variety, with $X$ smooth and projective, and with general fiber $F$. Then there exists an \'etale cover $X' \to X$ such that
$P_m (X') \ge P_m (F)$ for all $m \ge 1$.
\end{corollary}
\begin{proof}
We consider the construction in Theorem \ref{thm:LPS1}, so that $f'_* \omega_{X'}^{\otimes m}$ is globally generated for every $m \geq 1$. The number of independent global sections of this sheaf
is therefore at least equal to its rank, which is equivalent to the inequality
$$P_m (X') \ge P_m (F).$$
\end{proof}
For the next statement, and for later use in the paper, recall that for a coherent sheaf $\shf{F}$ on an abelian variety $A$, and for $i \ge 0$, we consider the $i$-th cohomological support locus of $\shf{F}$ given by
$$V^i (A, \shf{F}) = \{\alpha \in {\rm Pic}^0(A) ~|~ H^i(A, \shf{F} \otimes \alpha) \neq 0 \}.$$
We will use the following standard terminology:
\begin{itemize}
\item $\shf{F}$ is a \emph{GV-sheaf} if $\codim_{{\rm Pic}^0 (A)} V^i (A, \shf{F}) \ge i$ for all $i > 0$.
\item $\shf{F}$ is an \emph{M-regular sheaf} if $\codim_{{\rm Pic}^0 (A)} V^i (A, \shf{F}) > i$ for all $i > 0$.
\end{itemize}
\begin{theorem}[{\cite[Theorem C]{LPS}}]\label{thm:LPS2}
In the setting of \theoremref{thm:LPS1}, there exists a finite direct sum decomposition
\[
f_* \omega_X^{\otimes m} \simeq \bigoplus_{i \in I} \bigl( \alpha_i \otimes p^{\ast}_i \shf{F}_i \bigr),
\]
where each $p_i \colon A \to A_i$ is a quotient morphism with connected fibers to an
abelian variety, each $\shf{F}_i$ is an M-regular coherent sheaf on $A_i$, and each
$\alpha_i \in {\rm Pic}^0(A)$ is a line bundle that becomes trivial when pulled back by
the isogeny in \theoremref{thm:LPS1}.
\end{theorem}
The direct sum decomposition in this last theorem goes under the name of a \emph{Chen-Jiang decomposition}.
The only special fact about $M$-regular sheaves that we will use here is the fact that they are ample, as shown in \cite{Debarre}; similarly, $GV$-sheaves are nef, as shown in \cite{PP2}.
Finally, we will need a hyperbolicity-type result, proved in this generality in \cite{PS3}; it relies on important
ideas and results of Viehweg-Zuo and Campana-P\u aun described in \emph{loc. cit.}, as well as on the theory of Hodge modules. For simplicity, in the following statement we combine two results, and only state the special consequence needed in this paper.\footnote{The statement works in a more general setting; it is comparatively much simpler however, avoiding semistable reduction tricks, when the base is a priori known not to be uniruled.}
\begin{theorem}[{\cite[Theorem 4.1 and Theorem 3.5]{PS3}}]\label{thm:PS3}
Let $f \colon X \to Y$ be an algebraic fiber space between smooth projective varieties, such that $Y$ is not uniruled.
Assume that $f$ is smooth over the complement of a closed subset $Z \subset Y$, and that there exists $m\ge 1$ such that ${\rm det}\, f_* \omega_{X/Y}^{\otimes m}$ is big. Denote by $D$ the union of the divisorial components of $Z$. Then the pair $(Y,D)$ is of log general type, i.e. the line bundle $\omega_Y (D)$ is big.
\end{theorem}
\begin{remark}
The theorem above is stated in \emph{loc. cit.} only when $Z = D$, but the proof shows more generally the statement above, as all the objects it involves can be constructed from $Y$ with any codimension $2$ subset removed.
\end{remark}
\subsection{Proof of Theorem \ref{trivial}}
Let $f\colon X \to A$ be a surjective morphism of smooth projective varieties, with $A$ an abelian variety, and let $F$ be
the general fiber of $f$. Assume that $f$ is smooth over an open subset $V\subseteq A$ whose complement has codimension at least $2$. We divide the proof of Theorem \ref{trivial} into two steps:
\noindent
\emph{Step 1.}
We first prove the last assertion; namely if $f_* \omega_X^{\otimes m}$
is globally generated for some $m\ge 1$, then
\begin{equation}\label{eq1}
f_* \omega_X^{\otimes m} \simeq \shf{O}_A^{\oplus P_m(F)}.
\end{equation}
We first prove this result when $f$ is a fiber space. We use the statement and notation of Theorem \ref{thm:LPS2}, and
we claim in fact that $\dim A_i=0$ for all $i\in I$ appearing in the decomposition in that theorem.
Since $f_*\omega_X^{\otimes m}$ is globally generated, and a globally generated torsion line bundle is trivial, the claim immediately implies ($\ref{eq1}$).
Assume on the contrary that we have $\dim A_k>0$ for one of the quotients $p_k \colon A \to A_k$. (This includes the case when $A_k = A$.) Denote the kernel of $p_k$ by $C$; this is an abelian subvariety of $A$. By Poincar\'e's complete reducibility theorem, there exists an abelian variety $B\subseteq A$ such that $B+C=A$ and $B\cap C$ is finite, so that the natural morphism
$\varphi\colon B\times C\to A$ is an isogeny. We consider the following commutative diagram, where $q$ is the projection onto $C$, $c\in C$ is a general point, and $f'$ and $f'_{c}$ are obtained by base change from $f$ via $\varphi$ and the inclusion
$i_c$ of the fiber $B_c$ of $q$ over $c$ respectively:
\[
\begin{tikzcd}
X'_c\rar \dar{f'_c} & X' \rar{\varphi'} \dar{f'} & X \dar{f} \\
B_c \dar \rar{i_c} & B\times C \dar{q} \rar{\varphi} & A \dar{p_k} \\
\{c\}\rar& C & A_k
\end{tikzcd}
\]
Note that by construction the composition
$$\psi_c = p_k\circ\varphi\circ i_c \colon B_c \to A_k$$
is an isogeny. Furthermore, $X'$ is smooth, since $\varphi$ is \'etale. We denote by $Z \subset A$ the closed subset of
codimension $\ge 2$ such that $f$ is smooth over $A \smallsetminus Z$. If $Z' := \varphi^{-1} (Z)$, then $Z'$ has codimension $\ge 2$ in $B \times C$ as well, and $f'$ is smooth over its complement. Moreover, $c$ can be chosen sufficiently general so that $X'_{c}$ is smooth (by generic smoothness applied to $q \circ f'$) and $\codim_{B_c} i_c^{-1}(Z')\ge 2$ as well, hence $f'_c$ inherits the same property as $f$.
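For the reader's convenience, here is one way to see that $\psi_c$ is an isogeny: up to translation by $p_k(c)$, it coincides with the restriction $p_k|_B \colon B \to A_k$, which is surjective since $B+C=A$, has finite kernel $B\cap C$, and satisfies
$$\dim B=\dim A-\dim C=\dim A_k,$$
using that $C={\rm Ker}(p_k)$ and that $B\cap C$ is finite.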
\smallskip
\noindent
{\bf Claim.} The line bundle $\det \big( (f'_c)_*\omega_{X'_c}^{\otimes m}\big)$ is big (hence ample, as $B_c$ is an abelian variety).
Assuming the Claim, we can immediately conclude the proof. Indeed, by Theorem \ref{thm:PS3} this would imply
that $\omega_{B_c}$ is big, which is a contradiction and shows that we cannot have $\dim A_k > 0$.
We are left with proving the Claim. To this end, applying the base change theorem as in \cite[Proposition 4.1]{LPS}, since $c$ is general we have
\begin{equation}\label{eq4}
(f'_c)_*\omega_{X'_c}^{\otimes m} \simeq (f'_c)_*(\omega_{X'}^{\otimes m}|_{X'_c})\simeq i_c^*(f'_*\omega_{X'}^{\otimes m})\simeq i_c^*\varphi^* ( f_* \omega_X^{\otimes m}).
\end{equation}
To analyze this, we need to look more carefully at the decomposition of $f_*\omega_X^{\otimes m}$ from Theorem \ref{thm:LPS2}.
Since $f_*\omega_X^{\otimes m}$ is globally generated, we deduce that $\alpha_k\otimes p_k^*\shf{F}_k$ is globally generated, and in particular $h^0(A, \alpha_k\otimes p_k^*\shf{F}_k)>0$. It follows that $\alpha_k$ is trivial on the general, hence every, fiber of $p_k$, and so
there exists a torsion line bundle $\beta_k\in{\rm Pic}^0(A_k)$ such that $\alpha_k\simeq p_k^*\beta_k$. Moreover
$${p_k}_*(\alpha_k\otimes p_k^*\shf{F}_k)\simeq \beta_k\otimes\shf{F}_k$$
by the projection formula (applied to the $0$-th cohomology of $\mathbf{R} {p_k}_*(\alpha_k\otimes p_k^*\shf{F}_k)$).
If we denote $\shf{G} : = \bigoplus_{i\neq k}(\alpha_i\otimes p_i^*\shf{F}_i)$, we then have
\begin{equation}\label{eq3}
f_*\omega_X^{\otimes m}\simeq p_k^* (\beta_k\otimes\shf{F}_k) \oplus \shf{G}.
\end{equation}
These summands have various positivity properties. Since $\beta_k\otimes\shf{F}_k$ is an $M$-regular sheaf on $A_k$, it is ample. On the other hand, since $f_*\omega_X^{\otimes m}$ is a $GV$-sheaf by \cite[Theorem 1.10]{PS1}, it is nef, hence so is $\shf{G}$. (For all this, see the comments after Theorem \ref{thm:LPS2}.) Since a priori they might not be locally free, it is also useful to record that $\shf{G}$ is a weakly positive sheaf, since $f_*\omega_X^{\otimes m}$ is so
by \cite[Theorem III]{Viehweg}.
Using ($\ref{eq4}$) and ($\ref{eq3}$), we deduce that
$$(f'_c)_*\omega_{X'_c}^{\otimes m}\simeq i_c^*\varphi^* ( f_* \omega_X^{\otimes m})\simeq \psi_c^*(\beta_k\otimes\mathcal{F}_k) \oplus i_c^* \varphi^*\shf{G}.$$
Note in passing that if $f$ is smooth, and so also $f'$ and $f'_c$ are smooth to begin with, then all of the sheaves above are locally free (by the deformation invariance of plurigenera), so by the previous paragraph this is a sum of an ample and a nef vector bundle, hence its determinant is clearly ample. In general we have to be slightly more careful in order to draw the same conclusion. Since $\psi_c$ is an isogeny, we deduce that $\psi_c^*(\beta_k\otimes\mathcal{F}_k)$ is an ample sheaf as well, while $i_c^* \varphi^*\shf{G}$ is weakly positive just as $\shf{G}$ (being a summand of $(f'_c)_*\omega_{X'_c}^{\otimes m}$). In other words, we have that
$$(f'_c)_*\omega_{X'_c}^{\otimes m}\simeq \mathcal{H}_1 \oplus \mathcal{H}_2,$$
with $\mathcal{H}_1$ ample and $\mathcal{H}_2$ weakly positive. But an ample sheaf is big in the sense of Viehweg
(see e.g. \cite[\S2 a)]{Debarre} and \cite[\S5 p.293]{Mori}), and so its determinant\footnote{Recall that the determinant of a torsion-free sheaf $\shf{F}$ of generic rank $r$ is the line bundle $(\wedge^r \shf{F})^{\vee \vee}$, i.e. the unique extension of the determinant line bundle from the big open set on which $\shf{F}$ is locally free.} $\det \mathcal{H}_1$ is a big line bundle by \cite[Lemma 3.2(iii)]{Viehweg2} (see also \cite[5.1.1]{Mori}). On the other hand,
$\det \mathcal{H}_2$ is weakly positive, e.g. also by \cite[Lemma 3.2(iii)]{Viehweg2}; for a line bundle this is the same as being pseudoeffective. Their tensor product is therefore also big, hence finally the line bundle $\det \big( (f'_c)_*\omega_{X'_c}^{\otimes m}\big)$ is big. This concludes the proof in the case of fiber spaces.
If $f$ is not a fiber space, we consider its Stein factorization $f = g\circ h$.
Here $B$ is a normal projective variety, $h\colon X\to B$ is a fiber space, and $g\colon B\to A$ is a finite surjective morphism. Note that $h$ is smooth over $B\smallsetminus g^{-1}(Z)$ and $g$ is \'etale over $A\smallsetminus Z$; see e.g.
\cite[Lemma 2.4]{FG}. By the purity of branch locus, it follows that $g$ is actually \'etale over $A$, and thus $B$ is also an abelian variety. Moreover the canonical morphism
$$g^*g_*h_*\omega_X^{\otimes m}\to h_*\omega_X^{\otimes m}$$
is surjective, which implies that $h_*\omega_X^{\otimes m}$ is globally generated as well. Using what we showed above
for fiber spaces that are smooth in codimension $1$, we deduce that
$$h_*\omega_X^{\otimes m}\simeq \shf{O}_B^{\oplus P_m(H)},$$
where $H$ denotes the general fiber of $h$. Furthermore, we have
$$g_*\shf{O}_{B}\simeq\bigoplus_{\alpha\in{\rm Ker} (\hat{g})}\alpha,$$
hence $f_*\omega_X^{\otimes m}$ is a direct sum of torsion line bundles on $A$. Since $f_*\omega_X^{\otimes m}$ is globally generated, we obtain the same conclusion ($\ref{eq1}$).
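Spelled out, this last step combines the previous two isomorphisms via $f = g\circ h$, with $g$ finite:
$$f_*\omega_X^{\otimes m}\simeq g_*h_*\omega_X^{\otimes m}\simeq\big(g_*\shf{O}_{B}\big)^{\oplus P_m(H)}\simeq\bigoplus_{\alpha\in{\rm Ker} (\hat{g})}\alpha^{\oplus P_m(H)},$$
and a globally generated torsion line bundle is trivial. Note also that the rank $\deg(g)\cdot P_m(H)$ of the right-hand side agrees with $P_m(F)$, since the general fiber of $f$ consists of $\deg(g)$ disjoint copies of $H$.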
\noindent
\emph{Step 2.}
We deduce next the general case from the statement for globally generated sheaves proved in Step 1. By the same Stein factorization argument as above, we may assume that $f$ is a fiber space.
We now use the statement and notation of Theorem \ref{thm:LPS1}. By Step 1, we have that
$$\varphi^* f_* \omega_X^{\otimes m} \simeq f'_*\omega_{X'}^{\otimes m}\simeq \shf{O}_{A'}^{\oplus P_m(F)},$$
which implies that $f_* \omega_X^{\otimes m}$ is a direct summand of $(\varphi_*\shf{O}_{A'})^{\oplus P_m(F)}$. Since
$$\varphi_*\shf{O}_{A'}\simeq\bigoplus_{\beta\in{\rm Ker} (\hat{\varphi})}\beta,$$
we deduce that
$$V^0(A, f_* \omega_X^{\otimes m})\subseteq{\rm Ker} (\hat{\varphi})$$
and in particular $\dim V^0(A, f_* \omega_X^{\otimes m})=0$.
We consider again the Chen-Jiang decomposition of $f_* \omega_X^{\otimes m}$ provided by
Theorem \ref{thm:LPS2}. By \cite[Lemma 3.3]{LPS}, we have
$$V^0(A, f_* \omega_X^{\otimes m})=\bigcup_{i\in I}\alpha_i^{-1}\otimes p_i^*{\rm Pic}^0(A_i).$$
Since $\dim V^0(A, f_* \omega_X^{\otimes m})=0$, we deduce that $\dim A_i=0$ for all $i\in I$, which in turn leads to a
decomposition
$$f_* \omega_X^{\otimes m} \simeq \bigoplus_{i =1}^{P_m (F)} \alpha_i$$
with $\alpha_i$ torsion line bundles on $A$, as desired.
\subsection{Proof of Theorem \ref{main}}
We start with part (2), which follows immediately from Theorem \ref{trivial}. Indeed, we consider the \'etale base change
$X' \to X$ as in Theorem \ref{thm:LPS1}, so that $f'_* \omega_{X'}^{\otimes m}$ is globally generated for $m\ge 1$. According to Theorem \ref{trivial}, it follows that
$$f'_* \omega_{X'}^{\otimes m} \simeq \shf{O}_{A'}^{\oplus P_m (F)},$$
which gives $P_m (X') = P_m (F)$. We obviously have $P_m (X') \ge P_m (X)$, and in particular we deduce that
$\kappa (F) \ge \kappa (X)$. The opposite inequality is the subadditivity of the Kodaira dimension over abelian varieties,
proved in \cite{CP} (see also \cite{HPS}).
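For clarity, the inequality from \cite{CP} used here states that for an algebraic fiber space $f\colon X\to A$ over an abelian variety, with general fiber $F$,
$$\kappa(X)\ge\kappa(F)+\kappa(A)=\kappa(F),$$
since $\kappa(A)=0$ for an abelian variety.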
The rest of the section is devoted to proving (1), essentially by reducing it to (2).
We consider the closed subset $Z:=A\smallsetminus V$, and decompose it as $Z = D\cup W$, where each irreducible component of $D$ has codimension $1$, while $W$ has codimension at least $2$. We first show that
$$\kappa(V)\ge\kappa(A, D).$$
To this end, we take a log resolution $\mu\colon Y\to A$ of the pair $(A, Z)$; thus $\mu^{-1}( Z) \cup{\rm Exc}(\mu)$ is a divisor with simple normal crossing support. Denote $C = (\mu^{-1} (Z))_{{\rm red}}$. Since $K_A\sim 0$, we have $K_Y\sim E$, where $E$ is an effective and $\mu$-exceptional divisor which has the same support as ${\rm Exc}(\mu)$.
Then, by definition,
$$\kappa(V)=\kappa(Y, K_Y+C)=\kappa(Y, E+C).$$
On the other hand $\Supp(\mu^*D)\subseteq C$, hence we deduce that
$$\kappa(V)=\kappa(Y, E+C)\ge\kappa(Y, \mu^*D)=\kappa(A, D).$$
We conclude that in order to finish the proof, we only need to show that
$$\kappa(A, D) + \kappa (F) \ge \kappa (X).$$
If $D = \emptyset$, then we are done by (2), so we assume that $D$ is non-trivial. There are two cases, according to whether $D$ is an ample divisor or not.
If $D$ is ample, then $\kappa (A, D) = \dim A$, and the inequality
$$\dim A + \kappa (F) \ge \kappa (X)$$
is simply the Easy Addition formula (see e.g. \cite[Corollary 2.3]{Mori}).
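Recall that Easy Addition asserts that for any surjective morphism $f\colon X\to Y$ of smooth projective varieties with general fiber $F$ one has
$$\kappa(X)\le\kappa(F)+\dim Y,$$
which, combined with $\kappa(A,D)=\dim A$, gives precisely the inequality above.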
If $D$ is not ample, a well-known structural fact says that there exist a fibration $q\colon A \to B$ of abelian varieties and an ample effective divisor $H$ on $B$ such that $\dim A > \dim B$ and $D=q^*H$.
\[
\begin{tikzcd}
X \rar{f} \arrow[bend right=40]{rr}{g} & A \rar{q} & B
\end{tikzcd}
\]
Denote the general fiber of $g$ by $G$, and the general fiber of $q$ by $Q$, which is again an abelian variety.
Since $\kappa (B, H) = \dim B$, again by Easy Addition we have
$$\kappa (B, H) + \kappa (G) \ge \kappa (X).$$
Since $q$ is a fiber space, we also have
$$\kappa (A, D)=\kappa (B, H).$$
Let us now assume more precisely that $G$ and $Q$ are fibers over a point $b \in B \smallsetminus H$. We can choose $b$ sufficiently general such that $\codim_Q (Q\cap W) \ge 2$. Since $Q \cap D = \emptyset$, we then also have
$$\codim_Q (Q \cap Z)\ge 2.$$
We obtain a fiber space $f_b\colon G\to Q$ induced by $f$ by restriction, which is smooth over the complement of a closed subset of $Q$ whose codimension is at least $2$. By part (2) we deduce that $\kappa (G)=\kappa(F)$, and thus
finally
$$\kappa (A, D) + \kappa (F) \ge \kappa (X).$$
\subsection{Proof of Theorem \ref{mad}}
We start with a general set-up. Since $Y$ is of maximal Albanese dimension, we can take a Stein factorization of the Albanese morphism $a_Y\colon Y\to\Alb(Y)$ such that $a_Y=g\circ h$, where $h\colon Y\to Z$ is birational, $g\colon Z\to \Alb(Y)$ is finite onto its image, and $Z$ is a normal projective variety.
By \cite[Theorem 13]{Kawamata}, there exists an \'etale cover $\varphi\colon Z'\to Z$, an abelian variety $A$, and a normal projective variety $W$ such that
$$Z'\simeq W\times A \,\,\,\,\,\,{\rm and} \,\,\,\,\,\, \kappa(W)=\dim W=\kappa(Z) = \kappa (Y).$$
Denote the projection of $Z'$ onto $W$ by $p$. We consider the following commutative diagram, where the morphisms in the middle column are obtained by base change from $h$ and $f$ via $\varphi$ and then $\varphi'$, while on the left we have the respective fibers over a general point $w \in W$. (The horizontal maps between the two left columns are all inclusions.)
\begin{equation}\label{eq2}
\begin{tikzcd}
X'_w\rar \dar{f'_w} & X' \dar{f'} \rar{} & X \dar{f} \\
Y'_w\rar{i_w} \dar{h'_w} & Y' \dar{h'} \rar{\varphi'} & Y \dar{h} \\
A= A_w \dar \rar{} & Z' \dar{p} \rar{\varphi} & Z\\
\{w\}\rar& W
\end{tikzcd}
\end{equation}
The projective varieties $X'$ and $Y'$ are both smooth, since $\varphi$ is \'etale. We can choose $w$ sufficiently general so that $X'_w$ and $Y'_w$ are smooth, and $h'_w$ is birational. Moreover $f'_w$ is a fiber space.
After this preparation, we are ready to prove Theorem \ref{mad}. We start again with part (2), which is somewhat less involved.
Recall that in (2) we are assuming that $f$ is smooth over an open set $V \subseteq Y$, and denote $T = Y \smallsetminus V$. The morphism $f'$ is smooth over $Y'\smallsetminus \varphi'^{-1}(T)$, and $\codim\varphi'^{-1}(T)\ge 2$. If
$T'_w:=(\varphi'\circ i_w)^{-1}(T)$, then $f'_w$ is smooth over $Y'_w\smallsetminus T'_w$ as well, while choosing
$w$ carefully we can also ensure that $\codim_{Y'_w} T'_w \ge 2$. Since $h'_w$ is a birational morphism, it is immediate to check that $h'_w\circ f'_w$ is also smooth over the complement in $A$ of a closed subset of codimension at least $2$,
and its general fiber is $F$.
At this stage, by Theorem \ref{main}(2) we have $\kappa(X'_w)=\kappa(F)$. On the other hand, since $W$ is of general type, the additivity of the Kodaira dimension holds for the fiber space $X' \to W$ with fiber $X'_w$ (see
\cite[Corollary IV]{Viehweg}),\footnote{Note that $W$ being normal suffices, since for this statement one can pass to
a base change by a resolution of $W$.} to finally give
$$\kappa(X)=\kappa(X')=\kappa(X'_w)+\kappa(W)=\kappa(F)+\kappa(Y).$$
\medskip
We now prove (1). We note first that we can assume $Y \smallsetminus V$ to be a simple
normal crossing divisor. Let $\mu\colon \widetilde{Y}\to Y$ be a log resolution of $Y \smallsetminus V$ which is an isomorphism over $V$. In particular $\mu^{-1}(Y\smallsetminus V)$ is a divisor with simple normal crossing support, and $\widetilde{Y}$ is still of maximal Albanese dimension. We consider the commutative diagram
\[
\begin{tikzcd}
\widetilde{X}\rar \drar{\widetilde{f}} & (X\times_Y \widetilde{Y})_{\rm{main}} \dar{m} \rar{} & X \dar{f} \\
& \widetilde{Y} \rar{\mu} & Y
\end{tikzcd}
\]
where $(X\times_Y \widetilde{Y})_{\rm{main}}$ is the main component of $X\times_Y \widetilde{Y}$, and $\widetilde{X}$ is a resolution which is an isomorphism over $(\mu\circ m)^{-1}(V)$. The algebraic fiber space $\widetilde{f}\colon \widetilde{X}\to \widetilde{Y}$ is smooth over $\mu^{-1}(V) \simeq V$, with general fiber $F$. Thus we can assume from the start that $D:=Y\smallsetminus V$ is a simple normal crossing divisor, and
$$\kappa (V)=\kappa(Y, K_Y+D).$$
We next show that we can also assume that there exists a birational morphism $h\colon Y\to A$, where $A$ is an abelian variety. We consider again the diagram ($\ref{eq2}$) in our set-up, but we now keep track of the divisor $D$ and its pullbacks as well. Since $\varphi'$ is \'etale, $D':=\varphi'^*D$ is also a simple normal crossing divisor.
For $w$ sufficiently general, in addition to all the properties described in the set-up, we can also assume that
$D'_w:=D'|_{Y'_w}$ is a simple normal crossing divisor. Since $W$ is of general type, as above we have
$$\kappa(X)=\kappa(X')=\kappa(X'_w)+\dim W,$$
but we also have the additivity of log Kodaira dimension
$$\kappa(Y, K_Y+D)=\kappa(Y', K_{Y'}+D')=\kappa(Y'_w, K_{Y'_w}+D'_w)+\dim W.$$
(See \cite[Theorem 1.7]{Fujino} and the references therein.)
Now the general fiber of $f'_w$ is $F$. Thus to obtain the conclusion $\kappa (V) + \kappa (F) \ge \kappa (X)$, we only need to prove the inequality
$$\kappa(Y'_w, K_{Y'_w}+D'_w)+ \kappa (F)\ge \kappa(X'_w).$$
Noting that the morphism $f'_w$ is smooth over $V'_w:=(\varphi'\circ i_w)^{-1}(V) = Y'_w\smallsetminus D'_w$, and
$$\kappa(V'_w)=\kappa(Y'_w, K_{Y'_w}+D'_w),$$
this allows us indeed to assume from the start (replacing $f\colon X \to Y$ by $f'_w\colon X'_w \to Y'_w$) that there exists a birational morphism $h\colon Y\to A$, with $A$ an abelian variety.
The picture is now
\[
\begin{tikzcd}
X \rar{f} \arrow[bend right=40]{rr}{g} & Y \rar{h} & A
\end{tikzcd}
\]
At this stage we can forget about the previous steps in the proof, and reuse some of the notation symbols.
The birational morphism $h$ is an isomorphism away from $Z:=h\big(\Supp (D)\cup{\rm Exc}(h)\big)$
and thus $g$ is smooth over $A\smallsetminus Z$. By Theorem \ref{main}(1), we then have
$$\kappa (A\smallsetminus Z) + \kappa (F) \ge \kappa (X).$$
We can take a log resolution $\mu\colon Y'\to Y$ which is an isomorphism over $Y\smallsetminus \big(\Supp (D)\cup{\rm Exc} (h)\big)$, such that the support of $\mu^{-1}\big(\Supp (D)\cup{\rm Exc} (h)\big)\cup{\rm Exc}(\mu)$ is a simple normal crossing divisor $D'$. Since $h^{-1}(Z)=\Supp (D) \cup{\rm Exc} (h)$, we deduce that
$$\kappa (A\smallsetminus Z)=\kappa(Y', K_{Y'}+D').$$
Now $K_{Y'}\sim\mu^*K_Y+E'$, where $E'$ is an effective and $\mu$-exceptional divisor. Since $A$ is an abelian variety, we also have $K_Y\sim E$, where $E$ is an effective and $h$-exceptional divisor supported on ${\rm Exc} (h)$. We deduce that
$$K_{Y'}+D'\sim\mu^*E+E'+D'\ge 0,$$
and since $\Supp(\mu^*E+E'+D')=\Supp(\mu^*E+E'+\mu^*D)$, we finally have
$$\kappa (A\smallsetminus Z)=\kappa(Y', K_{Y'}+D')=\kappa(Y', \mu^*E+E'+D')$$
$$=\kappa(Y', \mu^*E+E'+\mu^*D) = \kappa(Y', \mu^*(K_Y +D) + E') =\kappa(Y, K_Y+D)=\kappa (V).$$
Putting everything together, we obtain
$$\kappa (V) + \kappa (F) \ge \kappa (X).$$
\subsection{Proof of Theorem \ref{subadditivity}}
It is clear that
$$\kappa (X, K_X+ E)\ge \kappa (X, K_X+(f^*D)_{\rm red}),$$
hence to prove the theorem it suffices to treat the case $E = (f^*D)_{\rm red}$.
We first show that we can reduce to the case when $E$ has simple normal crossings. Take a log resolution $\mu\colon Y\to X$ of the pair $(X, E)$, so that $\mu^* E $ has simple normal crossing support, and
let $K_Y=\mu^*K_X+G$ with $G$ an effective exceptional divisor on $Y$. We may assume that $\mu$ is an isomorphism over $X\smallsetminus E$. Thus the general fiber of $g = f\circ \mu$ is isomorphic to $F$.
\[
\begin{tikzcd}
Y \rar{\mu} \arrow[bend right=40]{rr}{g} & X \rar{f} & A
\end{tikzcd}
\]
We have that $\mu^* E \ge (g^*D)_{\rm red}$, and since $G$ is effective and exceptional, we deduce that
$$\kappa (X, K_X+ E)=\kappa (Y, \mu^*(K_X+ E)+G)$$
$$=\kappa (Y, K_Y+\mu^*E)\geq \kappa (Y, K_Y+(g^*D)_{\rm red})\ge \kappa (F) + \kappa (A, D),$$
where the last inequality holds if we assume the simple normal crossings case.
We proceed therefore assuming that $E$ has simple normal crossings. Let $k \ge 1$ be the maximal coefficient of the irreducible components of $f^*D$. Thus the coefficients of the divisor $E-\frac{1}{k}f^*D$ are nonnegative and strictly smaller than $1$. Since it has simple normal crossing support, we deduce that the log pair $(X, E-\frac{1}{k}f^*D)$ is klt. We consider the divisor class
$$T:=kK_X+k E-f^*D\sim_{\mathbb{Q}} k(K_X+ E-\frac{1}{k}f^*D).$$
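The role of this divisor class is explained by the elementary identity, valid for every $m\ge 1$,
$$mT+f^*(mD)=mk\,K_X+mk\,E-mf^*D+mf^*D=mk(K_X+E),$$
which is used below in combination with the projection formula.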
By \cite[Theorem 1.1]{Meng}, generalizing Theorem \ref{thm:LPS1}, there exists an isogeny $\varphi \colon A' \to A$ and
an induced fiber diagram
\[
\begin{tikzcd}
X' \dar{f'} \rar{\varphi'} & X \dar{f} \\
A' \rar{\varphi} & A
\end{tikzcd}
\]
such that $ f'_* \shf{O}_{X'}(\varphi'^*(mT))\simeq \varphi^*f_*\shf{O}_X(mT)$ is globally generated for all $m\ge 1$.
Note that by the base change theorem
$${\rm rk} \big( f_*\shf{O}_X(mT) \big)=P_{mk}(F).$$
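Concretely, over a general point of $A$ both $E=(f^*D)_{\rm red}$ and $f^*D$ restrict to zero on the fiber $F$, while $\omega_X|_F\simeq\omega_F$ by adjunction, so that
$$\shf{O}_X(mT)|_F\simeq\omega_F^{\otimes mk} \,\,\,\,\,\,{\rm and} \,\,\,\,\,\,{\rm rk} \big( f_*\shf{O}_X(mT) \big)=h^0\big(F,\omega_F^{\otimes mk}\big)=P_{mk}(F).$$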
We conclude that for each $m\ge 1$ there exist injective coherent sheaf homomorphisms
$$\shf{O}_{A'}^{\oplus P_{mk}(F)}\hookrightarrow f'_* \shf{O}_{X'}(\varphi'^*(mT)).$$
Tensoring with $\shf{O}_{A'}(\varphi^*(mD))$, and noting that
$$f'_* \shf{O}_{X'}(\varphi'^*(mT))\otimes\shf{O}_{A'}(\varphi^*(mD))\simeq
f'_*\shf{O}_{X'}\big(\varphi'^*(mk(K_X+E))\big),$$
this leads to injections
$$\shf{O}_{A'}(\varphi^*(mD))^{\oplus P_{mk}(F)}\hookrightarrow f'_* \shf{O}_{X'}\big(mk (K_{X'}+ \varphi'^*E)\big).$$
Thus we deduce that
$$P_{mk} (X', \varphi'^*E) \ge P_{mk} (F) \cdot P_m (A', \varphi^*D) \ge P_{mk} (F) \cdot P_m (A, D)$$
for all $m$.\footnote{This is analogous to Corollary \ref{old}.} Since $k$ is fixed and $\varphi'^*\omega_X \simeq \omega_{X'}$, it is immediate to see that this implies
$$\kappa(X', K_{X'} +\varphi'^*E)\ge \kappa(F)+\kappa(A, D).$$
Now $\varphi'$ is surjective, hence $\kappa\big(X', \varphi'^*(K_X+E)\big)=\kappa(X, K_X+ E)$, and so
finally
$$\kappa(X, K_X+ E) \ge \kappa(F)+\kappa(A, D).$$
\section*{References}
\begin{biblist}
\bib{CP}{article}{
author={Cao, J.},
author={P\u{a}un, M.},
title={Kodaira dimension of algebraic fiber spaces over abelian varieties},
journal={Invent. Math.},
volume={207},
date={2017},
number={1},
pages={345--387},
}
\bib{Debarre}{article}{
author={Debarre, Olivier},
title={On coverings of simple abelian varieties},
journal={Bull. Soc. Math. France},
volume={134},
date={2006},
number={2},
pages={253--260},
}
\bib{Fujino}{article}{
author={Fujino, Osamu},
title={Notes on the weak positivity theorems},
journal={in \emph{Algebraic varieties and automorphism groups}, Adv. Stud. Pure Math., Math. Soc. Japan},
volume={75},
date={2017},
pages={73-118},
}
\bib{FG}{article}{
author={Fujino, O.},
author={Gongyo, Y.},
title={On images of weak Fano manifolds},
journal={Math. Z.},
volume={270},
date={2012},
pages={531--544},
}
\bib{HPS}{article}{
author={Hacon, C.},
author={Popa, M.},
author={Schnell, C.},
title={Algebraic
fiber spaces over abelian varieties: around a recent theorem by {C}ao and {P}{\u{a}}un},
journal={Contemporary Mathematics},
volume={712},
date={2018},
series={Local and global methods in Algebraic Geometry: volume in honor of L. Ein's 60th birthday},
pages={143--195},
}
\bib{Jiang}{article}{
author={Jiang, Zhi},
title={$M$-regular decomposition of pushforwards of pluricanonical bundles of pairs to abelian varieties},
journal={arXiv:2006.02393, to appear in IMRN},
date={2021},
}
\bib{Kawamata}{article}{
author={Kawamata, Yujiro},
title={Characterization of abelian varieties},
journal={Compositio Math.},
volume={43},
date={1981},
number={2},
pages={253--276},
}
\bib{Kovacs}{article}{
author={Kov\'acs, Sandor},
title = {Families over a base with a birationally nef tangent bundle},
journal={Math. Ann.},
volume = {308},
date= {1997},
pages = {347--359},
}
\bib{LPS}{article}{
author={Lombardi, L.},
author={Popa, M.},
author={Schnell, C.},
title={Pushforwards of pluricanonical bundles under morphisms to abelian varieties},
journal={J. Eur. Math. Soc.},
volume={22},
date={2020},
number={8},
pages={2511--2536},
}
\bib{Meng}{article}{
author={Meng, Fanjun},
title={Pushforwards of klt pairs under morphisms to abelian varieties},
journal={Math. Ann.},
volume={380},
date={2021},
number={3},
pages={1655--1685},
}
\bib{Mori}{article}{
author={Mori, S.},
title={Classification of higher-dimensional varieties},
journal={Algebraic geometry, {B}owdoin, 1985 ({B}runswick, {M}aine, 1985), Proc.
Sympos. Pure Math.},
volume={46},
publisher={Amer. Math. Soc., Providence, RI},
date={1987},
pages={269--331},
}
\bib{Popa}{article}{
author={Popa, Mihnea},
title={Conjectures on the Kodaira dimension},
journal={preprint arXiv:2111.10900},
date={2021},
}
\bib{PP1}{article}{
author={Pareschi, Giuseppe},
author = {Popa, Mihnea},
title = {Regularity on abelian varieties {I}},
journal={J. Amer. Math. Soc.},
volume = {16},
date= {2003},
number={2},
pages = {285--302},
}
\bib{PP2}{article}{
author={Pareschi, G.},
author = {Popa, M.},
title = {Regularity on abelian varieties {III}: relationship with generic vanishing and applications},
journal={Grassmannians, moduli spaces and vector bundles, Amer. Math. Soc., Providence, RI, Clay Math. Proc.},
volume = {14},
date= {2011},
pages = {141--167},
}
\bib{PS1}{article}{
author={Popa, Mihnea},
author={Schnell, Christian},
title={On direct images of pluricanonical bundles},
journal={Algebra Number Theory},
volume={8},
date={2014},
pages={2273--2295},
}
\bib{PS2}{article}{
author={Popa, Mihnea},
author={Schnell, C.},
title={Kodaira dimension and zeros of holomorphic one-forms},
journal={Ann. of Math.},
volume={179},
date={2014},
number={3},
pages={1109--1120},
}
\bib{PS3}{article}{
author={Popa, Mihnea},
author={Schnell, Christian},
title={Viehweg's hyperbolicity conjecture for families with maximal variation},
journal={Invent. Math.},
volume={208},
date={2017},
number={3},
pages={677--713},
}
\bib{Taji}{article}{
author={Taji, Behrouz},
title={Birational geometry of smooth families of varieties admitting good minimal models},
journal={preprint arXiv:2005.01025},
date={2020},
}
\bib{Viehweg}{article}{
author={Viehweg, Eckart},
title={Weak positivity and the additivity of the Kodaira dimension of certain fiber spaces},
journal={Adv. Studies Pure Math.},
volume={1},
date={1983},
pages={329--353},
}
\bib{Viehweg2}{article}{
author={Viehweg, E.},
title={Weak positivity and the additivity of the Kodaira dimension. II. The local Torelli map},
journal={Classification of algebraic and analytic manifolds (Katata, 1982), Progr. Math., Birkh\"auser Boston},
volume={39},
date={1983},
pages={567--589},
}
\bib{ViehwegZuo}{article}{
author={Viehweg, Eckart},
author={Zuo, Kang},
title={On the isotriviality of families of projective manifolds over curves},
journal={J. Algebraic Geom.},
volume={10},
date={2001},
pages={781--799},
}
\end{biblist}
\end{document}
\section{Introduction}\label{sec:intro}
A communication problem naturally arises when traveling in a foreign country where you do not speak the native language, which necessitates exploring non-linguistic means of communication, such as drawings. Due to their \textit{iconic} nature (\ie, perceptual resemblance to or natural association with the referent), drawings serve as a powerful tool for communicating concepts across language barriers~\cite{fay2014iconicity}. In fact, we humans started using drawings to convey messages as far back as 40,000--60,000 years ago~\cite{hoffmann2018u,hawkins2019disentangling}. Some studies from cognitive science hypothesize a stage of sketch-based communication before the formation of sign systems and provide evidence that iconic signs can gradually become \textit{symbolic} through repeated communication~\cite{fay2014iconicity}. In contrast to \textit{icons}, \textit{symbols} are special forms bearing arbitrary relations to their referents. \cref{fig:concept} describes a typical scenario of such phenomena: Alice (in green) uses a sketch to communicate the concept ``rooster'' to Bob (in yellow). Initially, they need to ground the sketch to the referent. Later, details of the visual concept, such as the strokes of the head and body, are gradually abstracted away, leaving only the most salient part, the crown. The iconicity of the communicated sketch drops while its symbolicity rises.
While models of emergent communication protocols have attracted attention~\cite{lazaridou2017multi,cao2018emergent,evtimova2018emergent,havrylov2017emergence,lazaridou2018emergence,lazaridou2020emergent,mordatch2018emergence,ren2020Compositional,eccles2019biases,graesser2019emergent}, the initial and primary communication medium is presumed and limited to be symbolic rather than iconic. By simulating a multi-agent \textit{referential game}, prior work seeks the environmental driving forces behind the emergence of effective communication. In a typical setup of referential games, two agents play similar roles as in the above Alice-Bob example but share a primitive set of arbitrary tokens (\ie, the vocabulary). Using these tokens, an agent (the sender) attempts to communicate a message to another agent (the receiver). A communication convention emerges when the two agents successfully communicate by associating visual concepts in the images with tokens in the pre-selected vocabulary. Though this line of work has probed into some linguistic properties of the established communication conventions~\cite{lazaridou2017multi,lazaridou2018emergence,ren2020Compositional}, an intriguing question remains open: How do agents make a trade-off between iconicity and symbolicity to emerge symbolic sign systems?
In this work, we present the very first step of modeling and simulating the evolution process of \textit{graphical conventions}~\cite{hawkins2019disentangling}, a two-participant communication convention whose medium is drawings in an abstract form. Specifically, we consider our contributions in three folds:
First, we model a multi-agent visual communication game and propose a learning framework, wherein the sender and the receiver evolve jointly. This visual communication game is an alternating sequential decision-making process, in which the sender generates a sequence of strokes step by step, terminated by the receiver. In contrast to discretized tokens in prior work, strokes can be naturally parametrized in a continuous space~\cite{ha2018neural,huang2019learning} such that the derivatives of learning objectives can be more effectively back-propagated through communication channels~\cite{foerster2016learning}. We further derive a novel training surrogate for multi-agent reinforcement learning based on a joint value function and the eligibility trace method~\cite{sutton2018reinforcement}. In experiments, we empirically demonstrate that such an integration of function approximation and Monte Carlo sampling in sequential communication facilitates the agents to be aware of the correlation between complex and simple sketches, thereby enabling a smooth abstraction process.
Second, we define essential properties for studying evolved sketches. Specifically, we define \textit{iconicity}~\cite{fay2014iconicity} as the drawings exhibiting high visual resemblance to the corresponding images, such that they are proximal to the latter when measured on the high-level embedding of a general-purpose visual system; we define \textit{symbolicity}~\cite{fay2018create} as the drawings being consistently separable in the high-level visual embedding, which enables new communication participants to easily distinguish between categories without grounding them to referents; and we define \textit{semanticity}~\cite{harispe2015semantic} as the topography of the high-level embedding space of the drawings being strongly correlated with that of the images, such that semantically similar instances and categories lie close to each other in the embedding space. Of note, this is not the only way to define these cognitive concepts; our intention is to align readers on the critical concepts in our work.
Third, we present a suite of quantitative and qualitative methods to evaluate the emergent graphical conventions based on the carefully defined \textit{iconicity}, \textit{symbolicity}, and \textit{semanticity}. This is necessary because a high communication rate does not imply good representations~\cite{bouchacourt2018agents}. The graphical nature of the communication medium mandates us to repurpose representation learning metrics rather than adopt the linguistic metrics of emergent symbolic communication. We evaluate the contributions of different environmental drivers (early decision, sender's update, and sequential communication) to the three properties of the emergent conventions. Critically, the empirical results assessed with our metrics align well with our predictions based on the findings of human graphical conventions~\cite{fay2010interactive,hawkins2019disentangling}, justifying our environment, model, and evaluation. One of these setups emerges conventions in which the three properties are consistent with our expectations of a sign system. In particular, we find two inspiring phenomena: (i) Evolved sketches from semantically similar classes are perceptually more similar to each other than those falling into different superclasses. (ii) To communicate concepts not included in their established conventions, evolved agents can return to more iconic communication, as we humans do. We hope our work can invoke the investigation of emergent communication in the unexplored modality of sketches and facilitate the study of cognitive evolutionary theories of pictographic sign systems.
\section{Related work}\label{sec:related}
\paragraph{Learning to sketch}
Ha and Eck~\cite{ha2018neural} begin the endeavor of teaching modern neural models to sketch stroke by stroke. However, generating meaningful stroke sequences directly from various categories of natural images is still in its early phase~\cite{song2018learning,wang2021sketchembednet,zou2018sketchyscene}. To keep category-level sketch communication interesting, we design a stage-wise agent that first transfers a natural image into a pixel-level sketch through a CNN-based model~\cite{kampelmuhler2020synthesizing} and then draws the sketch on the canvas stroke by stroke with a policy~\cite{huang2019learning}. To pretrain our neural agents to sketch, we select Sketchy~\cite{sangkloy2016sketchy} from many candidate datasets~\cite{yu2016sketch,ha2018neural,eitz2012humans,sangkloy2016sketchy} for its fine-grained photo-sketch correspondence, rich stroke-level annotations, and well-organized categorization structure.
\paragraph{Communication games}
While learning to sketch is formulated as a single-agent task with explicit supervision, our focus is on \textbf{how sketches would evolve} when utilized as the communication medium between two cooperative agents. The tasks the two agents cooperate on are always formulated as communication games, recently adopted to study phenomena in natural languages, such as symbolic language acquisition~\cite{graesser2019emergent} and the emergence of compositionality~\cite{ren2020Compositional}. Some notable works~\cite{lazaridou2017multi,lazaridou2018emergence,evtimova2018emergent,havrylov2017emergence} have devised interesting metrics, such as \textit{purity}~\cite{lazaridou2017multi} and \textit{topographic similarity}~\cite{lazaridou2018emergence}. In comparison, our work is unique due to its distinctive communication medium: continuously parametrized sketches. Although a concurrent work~\cite{mihai2021learning} also enables the agents to sketch in a communication game, it focuses only on drawing interpretable sketches without abstracting them into graphical symbols along the communication process. We position our work as an alternative to emergent symbolic communication, since the emergent graphical symbols may better represent the continuum of semantics, as encoded in the vector representation of tokens~\cite{mikolov2013efficient}. Methodologically, we devise new evaluation metrics for this unexplored modality, assessing the \textit{iconicity}, \textit{symbolicity}, and \textit{semanticity} of the evolved sketches.
\paragraph{Emergent graphical conventions}
Evolving from sketches to graphical conventions/symbols is an active field in cognitive science under the banner of ``emergent sign systems.'' Fay \emph{et al}\onedot~\cite{fay2010interactive} show that interacting pairs can form local conventions when playing the Pictionary game. Using natural images instead of texts as the prompt, Hawkins \emph{et al}\onedot~\cite{hawkins2019disentangling} show that, besides the partners' shared interaction history, visual properties of the images also influence the formed graphical conventions; that is, the evolved sketches highlight visually salient parts. Nevertheless, only a few computational models exist apart from these human behavioral studies. Fan \emph{et al}\onedot~\cite{fan2020pragmatic} describe a model for selecting complex or simple sketches depending on the communication context. Bhunia \emph{et al}\onedot~\cite{bhunia2020pixelor} and Muhammad \emph{et al}\onedot~\cite{muhammad2018learning} consider stroke selection and reordering to simplify the sketch. In contrast to sketch or stroke selection, we model \textbf{embodied} agents who can draw and recognize sketches, a more natural setting if we are to fulfill the goal of modeling the transition from iconic sketches to graphical symbols.
\section{The visual communication game}\label{sec:game}
Our visual communication game is formulated as a tuple
\begin{equation*}
(\mathcal{I}, C, \mathcal{A}_S, \mathcal{A}_R, G, r, \gamma),
\end{equation*}
where $\mathcal{I}$ is the image set presented to the sender $S$ and the receiver $R$. These images contain a single object in the foreground; hence, the image space $\mathcal{I}$ can be partitioned into $N$ classes according to the category of the objects. In each round of the game, the sender is presented with one image $I_S$, and the receiver is presented with $M$ images $\{I_R^1, ..., I_R^M\}$. Inspired by the category-level game introduced by Lazaridou \emph{et al}\onedot~\cite{lazaridou2017multi}, we make the observations of $S$ and $R$ disjoint (\ie, $I_S\notin\{I_R^1, ..., I_R^M\}$), but with a target image ${I}_R^{*}$ in the same class as $I_S$. We refer to the $M$ images that the receiver can see as the \textit{context}. Neither the receiver nor the sender sees the image(s) presented to their partner; they can only communicate this information by drawing sketches on the canvas $C$, which is observable to both players. As shown in \cref{fig:model}, at the beginning of each round, $C_0$ is initialized to be blank. Only the sender can draw on the canvas, with actions chosen from $\mathcal{A}_S$. The action at each time step consists of $5$ strokes, each parametrized by a continuous vector in $\mathbb{R}^6$. We constrain each dimension to be in $(0, 1)$ due to the limited space on the canvas. The canvas is updated from $C_t$ to $C_{t+1}$ by the renderer $G$ after each step of the sender's sketching. The receiver, after observing the updated canvas, must either choose among the $M$ images or wait for the next step from the sender; these $M+1$ possible actions constitute $\mathcal{A}_R$. A game round terminates when the receiver gives up waiting and chooses one of the images. After the termination, the sender and the receiver receive a shared reward or penalty, depending on whether the receiver makes the right choice:
\begin{equation*}
r:\mathcal{I} \times \mathcal{I}\to\{-1, 1\}.
\end{equation*}
This reward/penalty is temporally decayed with a decay factor $\gamma$. That is, if the receiver decides to choose from the images at step $t$, this cooperating pair will receive either $\gamma^t$ or $-\gamma^t$. Hence, even though the players do not receive an explicit penalty for long conversations, there is an implicit penalty/reward for delayed positive/negative answers. No reward will be assigned if the receiver chooses to wait. The next round starts after the reward/penalty assignment.
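The temporally decayed reward assignment above can be sketched in a few lines (a minimal illustration; the function name and Python form are ours, using the paper's default $\gamma=0.85$):

```python
def decayed_reward(correct: bool, t: int, gamma: float = 0.85) -> float:
    """Shared reward when the receiver terminates the round at step t.

    The base reward r in {-1, 1} is scaled by gamma**t: a delayed correct
    answer earns less, and a delayed mistake costs less.
    """
    base = 1.0 if correct else -1.0
    return base * gamma ** t
```

For instance, a correct choice at step 2 yields $0.85^2 \approx 0.72$, while an immediate correct choice yields the full reward of $1$.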
\section{Agents}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{model}
\caption{\textbf{Communication process}. In our visual communication game, a sender $S$ and a receiver $R$ only share the observation of the canvas $C$. The sender first converts the natural image $I_S$ to a pixel-level sketch $\hat I_S$. At each step, the sender first draws five strokes $a_S$ through the renderer $G$, which updates the canvas to $C_{t+1}$. Next, the receiver uses the updated canvas $C_{t+1}$ to query from the context images $\{I_R^1, ..., I_R^M\}$ and the last canvas $C_{t}$, deciding the action $a_R$ at this step. The game continues if the receiver chooses to wait.}
\label{fig:model}
\end{figure*}
The two agents involved in the visual communication game are modeled with two decision policies, $\pi_S$ and $\pi_R$, for the sender and the receiver, respectively. These policies are stochastic mappings from the agents' observation spaces to their action spaces:
\begin{equation}
\pi_S: \mathcal{I} \times C \to \mathcal{P}(\mathcal{A}_S), \quad \pi_R: \mathcal{I}^M \times C \to \mathcal{P}(\mathcal{A}_R),
\end{equation}
where $\mathcal{P}(\mathcal{A})$ is a distribution over the support set $\mathcal{A}$. As shown in \cref{fig:model}, at each time step $t\in\{0, \ldots, T\}$, the sender first emits the stroke parameters for the next five strokes, $a_{St}\sim\pi_S(I_S, C_{t})$. These strokes are applied to the canvas by a differentiable renderer, $C_{t+1}=G(C_t, a_{St})$. Next, the updated canvas is transmitted to the receiver. The receiver decides whether to terminate the game and make its prediction (\ie, $a_{Rt}\in \{1, ..., M\}$) or wait for another step (\ie, $a_{Rt}=M+1$); its decision is sampled from $\pi_R$. If a prediction is made, it is used to select the image ${I}_R^{a_R}$ from $\{I_R^1, ... I_R^M\}$ and score this game with $r(I_S, {I}_R^{a_R})$. Otherwise, this routine repeats at the next step $t \leftarrow t+1$.
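The per-step routine can be summarized as the following schematic loop. The three callables stand in for the learned modules $\pi_S$, $\pi_R$, and $G$; the function name, the string-based canvas, and the identity-based correctness check are illustrative simplifications (in the real game, correctness means choosing an image in the same class as $I_S$):

```python
def play_round(sender_policy, receiver_policy, renderer, I_S, context,
               max_T=7, gamma=0.85):
    """One round of the visual communication game (schematic).

    sender_policy(I_S, canvas)            -> stroke action a_S
    receiver_policy(context, C_prev, C)   -> int in {0..M}; M means "wait"
    renderer(canvas, a_S)                 -> updated canvas
    Returns (decayed reward or None, termination step).
    """
    M = len(context)
    canvas_prev, canvas = None, "blank"
    for t in range(max_T):
        a_S = sender_policy(I_S, canvas)          # 5 strokes in the real game
        canvas_prev, canvas = canvas, renderer(canvas, a_S)
        a_R = receiver_policy(context, canvas_prev, canvas)
        if a_R < M:                               # receiver commits to image a_R
            correct = (context[a_R] == I_S)       # real game: same class as I_S
            return (1.0 if correct else -1.0) * gamma ** t, t
    return None, max_T                            # never chose (max-step variant)
```

Plugging in trivial stub policies is enough to trace the control flow: a receiver that waits until the canvas is detailed enough terminates the round with a decayed reward.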
\subsection{Sender}
Prior to playing the visual communication game, the sender should be able to (i) extract edges from natural images~\cite{xie2015holistically} and (ii) draw sketches that closely resemble the configurations of salient edges~\cite{li2019photo}, just as humans~\cite{sayim2011line} do. To endow the sender with these capabilities, we design a stage-wise architecture $h_S = g_S \circ f_S$. Specifically, $I_S$ is first converted to a target sketch $\hat I_S$ using a visual module $f_S$~\cite{kampelmuhler2020synthesizing}, capturing the salient edge information in the natural image; we directly adopt the pretrained model from the referred work. Next, $\hat I_S$ is concatenated with the current canvas $C_t$ and fed to the sketching module $g_S$, whose architecture is built upon Huang \emph{et al}\onedot~\cite{huang2019learning}. This sketching module outputs five vectors of the form $(x_0, y_0, x_1, y_1, x_2, y_2)$, each parametrizing the curve of one stroke. The policy is parametrized as a Gaussian distribution during training,
\begin{equation}
\pi_S = \mathcal{N}( \mu_t, \sigma^2), \quad \mu_t = h_S(I_S, C_t), \quad \sigma^2 = c\cdot\mathbf{I},
\end{equation}
where $\mathbf{I}$ is the identity matrix, and $c$ is a constant hyperparameter. During testing, we set $c=0$.
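A minimal sketch of the sender's stochastic action sampling follows. Clipping to the open unit interval is our assumption for enforcing the $(0,1)$ constraint on each dimension (the text does not specify the mechanism), and the value of $c$ is illustrative:

```python
import numpy as np

def sample_strokes(mu, c=0.05, rng=None):
    """Sample an action of 5 stroke vectors from N(mu, c*I).

    mu : (5, 6) array, the network output h_S(I_S, C_t).
    c  : constant variance hyperparameter; c = 0 recovers the
         deterministic test-time behavior (returns mu).
    """
    if c == 0:
        return np.clip(mu, 1e-6, 1 - 1e-6)
    rng = rng or np.random.default_rng()
    a = rng.normal(loc=mu, scale=np.sqrt(c))
    return np.clip(a, 1e-6, 1 - 1e-6)  # keep strokes inside the canvas
```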
These stroke parameters $a_{St}$ are fed into a pretrained renderer $G$~\cite{huang2019learning} to update the canvas, $C_{t+1}=G(C_t, a_{St})$. This renderer is fully differentiable, enabling end-to-end model-based training~\cite{hafner2019dream} of the sketching module $g_S$. We pretrain $g_S$ on Sketchy~\cite{sangkloy2016sketchy}; see Supp. for results.
\subsection{Receiver}
The receiver, similar to the sender, should also carry some rudimentary visual capability into this game. Unlike the low-level vision needed by the sender, the requirement for the receiver is high-level visual recognition. Therefore, we adopt a pretrained VGG16~\cite{simonyan2015very} as the visual module $f_R: \mathcal{I}\to\mathbb{R}^{4096}$ of the receiver, following a similar practice in recent literature~\cite{lazaridou2017multi,havrylov2017emergence}. The output of this visual module is a vector, which is further transformed by two separate linear layers, $g_R^K$ and $g_R^Q$, into visual embeddings, $h_R^K(I)$ and $h_R^Q(I)$. That is, $h_R^K = g_R^K \circ f_R$ and $h_R^Q = g_R^Q \circ f_R$.
When observing both the context $\{I_R^1, ..., I_R^M\}$ and the canvas $C_t$, the receiver first embeds the canvas with $h_R^Q$ and each candidate with $h_R^K$. Next, it makes its decision based on the similarity between the current canvas and each option in the context. The decision module is thus realized by a Boltzmann distribution based on resemblance:
\begin{equation}
\resizebox{0.9\linewidth}{!}{$
\displaystyle
\pi_R(a_{Rt}|I_R^1, ... I_R^M, C_{t-1}, C_t) = \frac{\exp(h^Q_R(C_t) \cdot h^K_R(I_R^{a_{Rt}}) )}{\sum_{m=1}^{M+1} \exp(h^Q_R(C_t) \cdot h^K_R(I_R^m))}$,
}
\end{equation}
where $I_R^{M+1} = C_{t-1}$. Although a similar policy was proposed before~\cite{lazaridou2018emergence,havrylov2017emergence}, our $\pi_R$ is distinct as it is endowed with an external memory of $C_{t-1}$. Intuitively, if the receiver finds the current canvas $C_t$ closer to the last canvas $C_{t-1}$ in the embedding space than all $M$ options in the context, it will choose to emit $a_{Rt}=M+1$ and delay the decision to the next step; a prediction can only be made when the receiver finds the current canvas is informative enough. As a result, the sender would draw informative strokes as early as possible to avoid the implicit penalty in the decayed reward.
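In code, the receiver's decision rule amounts to a softmax over dot products between the canvas query and the $M+1$ keys (a schematic NumPy version; the embeddings $h_R^Q(C_t)$, $h_R^K(I_R^m)$, and $h_R^K(C_{t-1})$ are assumed precomputed):

```python
import numpy as np

def receiver_policy(q_canvas, k_context, k_prev_canvas):
    """Boltzmann policy over M images plus a 'wait' option.

    q_canvas      : (d,)   query embedding h_R^Q(C_t)
    k_context     : (M, d) key embeddings h_R^K(I_R^m)
    k_prev_canvas : (d,)   key embedding of the previous canvas C_{t-1},
                    appended as the (M+1)-th option meaning 'wait'.
    Returns a probability vector of length M + 1.
    """
    keys = np.vstack([k_context, k_prev_canvas])
    logits = keys @ q_canvas
    logits -= logits.max()        # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

If the current canvas resembles the previous canvas more than any context image, the last entry dominates and the receiver waits.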
\subsection{Learning to communicate}
Policies of the sender and the receiver are trained jointly to maximize the objective of the stochastic game in \cref{sec:game}:
\begin{equation}
\label{eq:obj}
\pi_S^{*}, \pi_R^{*} = \argmax_{\pi_S, \pi_R}\mathbb{E}_{\tau\thicksim(\pi_S, \pi_R)}[\sum\nolimits_t \gamma^t r_{t}],
\end{equation}
where $\tau = \{C_0, a_{S0}, C_1, a_{R1}, a_{S1}, ...\}$ is the simulated episodic trajectory. As is well known in reinforcement learning, the expectation in \cref{eq:obj} is analytically intractable along the trajectory $\tau$. We devise value functions $\mathcal{V}(X_t)$ and $V_\lambda(X_{t})$ as an optimization surrogate:
\begin{equation}
\resizebox{0.9\linewidth}{!}{$
\displaystyle
\label{eq:complte_v}
\mathcal{V}(X_t) = \mathbb{E}_{\pi_{S}(a_{St}|I_S, C_{t-1}),\pi_{R}(a_{Rt}|\hat{X}_t)}[r_t+\gamma\delta(a_{Rt}) V_\lambda(X_{t+1})]$,
}
\end{equation}
where $\hat{X}_t=[I_R^1, ... I_R^M, C_{t-1}, C_t]$, $X_t=\text{cat}([I_S],\hat{X}_t)$, and $\delta(a_{Rt})$ is the indicator function that returns $1$ when the action is \textit{wait} and $0$ otherwise. The expectation $\mathbb{E}_{\pi_{S}(a_{St}|I_S, C_{t-1})}[\cdot]$ is approximated with a point estimate, as in the reparametrization trick in VAE~\cite{kingma2013auto}. The expectation $\mathbb{E}_{\pi_{R}(a_{Rt}|\hat{X}_t)}[\cdot]$ can be calculated analytically because $\pi_{Rt}$ is a categorical distribution. The connection between these two expectations is one of our contributions. Of note, $C_t$ in $\hat{X}_t$ in $\pi_{Rt}(a_{Rt}|\hat{X}_t)$ is generated by the differentiable renderer $G$ from the actions $a_{St}$ of the sender policy $\pi_{S}(a_{St}|I_S, C_{t-1})$. Hence, we can compute both $\partial\mathcal{V}/\partial\pi_{Rt}$ and $\partial\mathcal{V}/\partial\pi_{St}$ based on \cref{eq:complte_v}. This results in a novel multi-agent variant of the general policy gradient~\cite{sutton2000policy,silver2014deterministic}.
$V_\lambda(X_{t})$ in \cref{eq:complte_v} is an eligibility-trace approximation~\cite{sutton2018reinforcement} of the ground-truth value function. Intuitively, a value estimate with eligibility trace $V_\lambda$ mixes the bootstrapped Monte Carlo estimates $V_N^k(X_t)=\mathbb{E}_{\pi_S, \pi_R}[\sum_{n=t}^{h-1}\gamma^{n-t}r_n + \gamma^{h-t}\delta(a_{Rh})v_\phi(X_h)]$ at different roll-out lengths $k$, with $h=\min(t+k, T_{choice})$ being the maximal timestep. $T_{choice}$ is the timestep at which the receiver stops waiting. The articulation of such termination also makes our eligibility trace deviate from the standard infinite-horizon derivation. We derive an episodic version as
\begin{equation}
\resizebox{0.9\linewidth}{!}{$
\displaystyle
\label{eq:lambda_v}
V_\lambda(X_t) =
\begin{cases}
(1-\lambda)\sum_{n=1}^{H-1}\lambda^{n-1}V_N^n(X_t)+\lambda^{H-1}V_N^H(X_t)\\\quad\quad\quad\quad\quad\text{if }t\leq T_{choice}\\
v_\phi(X_t)\quad\quad\text{otherwise}
\end{cases}$
}
\end{equation}
where $H=T_{choice}-t+1$. Please refer to the supplementary material for a detailed derivation and the algorithm. Finally, $v_\phi(X_t)$ is trained by regressing the value returns:
\begin{equation}
\phi^{*} = \argmin_{\phi}\mathbb{E}_{\pi_S, \pi_R}[\sum\nolimits_t \frac{1}{2}||v_\phi(X_t)-V_\lambda(X_t)||^2].
\end{equation}
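A schematic implementation of the episodic $\lambda$-return in \cref{eq:lambda_v} follows. We assume the terminal reward is stored at index $T_{choice}-1$, so that the full-horizon target reduces to the plain Monte Carlo return; the $\lambda$ value here is illustrative:

```python
def lambda_return(rewards, values, t, T_choice, lam=0.95, gamma=0.85):
    """Episodic lambda-return V_lambda(X_t) for t <= T_choice.

    rewards[n] : reward after the transition at step n; the terminal
                 reward sits at index T_choice - 1 (assumption).
    values[h]  : bootstrap estimate v_phi(X_h), used only while the
                 receiver is still waiting (h < T_choice).
    """
    H = T_choice - t + 1

    def v_k(k):  # k-step bootstrapped target V_N^k
        h = min(t + k, T_choice)
        ret = sum(gamma ** (n - t) * rewards[n] for n in range(t, h))
        if h < T_choice:  # delta(a_Rh) = 1: the receiver waited at step h
            ret += gamma ** (h - t) * values[h]
        return ret

    # (1 - lam) * sum_{n=1}^{H-1} lam^{n-1} V_N^n  +  lam^{H-1} V_N^H
    mix = (1 - lam) * sum(lam ** (n - 1) * v_k(n) for n in range(1, H))
    return mix + lam ** (H - 1) * v_k(H)
```

Note that the roll-out lengths beyond $T_{choice}$ all collapse onto the full-horizon target, which is why the residual mass $\lambda^{H-1}$ is placed on $V_N^H$.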
\section{Experiments}
\subsection{Settings}\label{sec:settings}
\paragraph{Images}
We use the Sketchy dataset~\cite{sangkloy2016sketchy} as the source of images. Due to the limited performance of the sketching module on open-domain image-to-sketch sequential generation, we select 40 categories (10 images per category) that enable satisfactory sketching behaviors.
\paragraph{Environmental drivers}
With the visual communication game and the learning agents at hand, we investigate the causal factors in emergent graphical conventions with controlled experiments. \cref{tab:setting} lists all designed settings. Specifically, we consider the following factors:
\begin{itemize}[leftmargin=*,noitemsep,nolistsep]
\item \textit{Can the receiver make early decisions?} The hypothesis is that the receiver's decision before exhausting the allowed communication steps may inform the sender of the marginal benefit of each stroke and incentivize it to prioritize the most informative strokes. The corresponding control setting is \textit{max-step}, where the receiver can only make its choice after the sender finishes drawing at the maximum step. This factor is on in other settings.
\item \textit{Can the sender change its way of drawing?} The hypothesis is that the mutual adaptation of the sender and the receiver may lead to better abstraction in the evolved sketches. In particular, the sender may develop new ways of drawing during the evolution. The corresponding control setting is \textit{sender-fixed}, wherein we freeze the parameters of the sender, such that the receiver has to adapt to its partner. This factor is on in other settings.
\item \textit{Is the game sequential, and can the receiver observe more complex drawings?} The hypothesis is that the modeling of a sequential decision-making game and the evolution from more complex sketches may regularize the arbitrariness, which is unique for graphical conventions. The corresponding control setting is \textit{one-step}: There only exists one step of sketching, thus no sequential decision-making at all. This factor is on in other settings.
\end{itemize}
\begin{table*}[ht!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccclcccc}
\toprule
\rowcolor{mygray}
\multicolumn{5}{c}{Game Settings}&
\multicolumn{3}{c}{Communication Accuracy (\%) $\pm$ SD (avg. step)}\\
\midrule
\thead{early\\decision} & \thead{update\\sender} & \thead{max/one\\step} & description& setting names & seen & unseen instance & unseen class \\
\midrule
\midrule
yes & yes & max & our experimental setting & complete & 98.07 $\pm$ 0.01(1.03) & 70.37 $\pm$ 0.04(2.36) & 39.40 $\pm$ 0.05(3.76)\\
no & yes & max & control setting for early decision & max-step & 86.27 $\pm$ 0.03(7.00) & 67.93 $\pm$ 0.02(7.00) & 38.40 $\pm$ 0.04(7.00)\\
yes & no & max & control setting for evolving sender & sender-fixed & 99.60 $\pm$ 0.01(2.41) & 71.80 $\pm$ 0.02(3.83) & 45.40 $\pm$ 0.02(4.75) \\
yes & yes & one & control setting for sequential game & one-step & 22.87 $\pm$ 0.23(1.00) & 14.07 $\pm$ 0.15(1.00) & 9.60 $\pm$ 0.09(1.00)\\
no & no & max & baseline for all settings above & retrieve & 99.47 $\pm$ 0.01(7.00) & 76.80 $\pm$ 0.02(7.00) & 48.00 $\pm$ 0.02(7.00)\\
\bottomrule
\end{tabular}%
}%
\caption{\textbf{Game settings and results.} The first three columns represent the configurations of the environmental drivers. Setting names and descriptions specify our purposes for intervention. The last three columns show success rates and conversation lengths in testing games. ``seen'' denotes validation games with the same image set as in training; ``unseen'' denotes testing games with unseen images.}
\label{tab:setting}
\end{table*}
\paragraph{Training, validation and generalization test}
We train the sender and the receiver with batched forward and back-propagation, with a batch size of 64 and a maximum roll-out step of $T=7$. We update using Adam~\cite{kingma2015adam} with a learning rate of 0.0001 for a total of 30k iterations. Except in the \textit{sender-fixed} setting, there is a warm-up phase for the sender in the first 2000 iterations, during which we linearly increase its learning rate to 0.0001. After the warm-up phase, the learning rate of both agents is decayed exponentially by $0.99^{\frac{i-2000}{1000}}$, where $i$ is the number of training iterations. In all settings, we set $M=3$ and $\gamma=0.85$. Between iterations, we randomly sample another 10 batches for validation. We hold out 10 images per category for the unseen-instance test and 10 categories for the unseen-class test. Each image is communicated as the target, resulting in 300$+$100 pairs in the test set. Results below are statistics over 5 random seeds.
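The learning-rate schedule described above can be written compactly (a sketch; the warm-up applies to the sender, while the exponential decay is shared by both agents):

```python
def learning_rate(i, base_lr=1e-4, warmup=2000):
    """Linear warm-up to base_lr over the first 2000 iterations,
    then exponential decay by 0.99 ** ((i - 2000) / 1000)."""
    if i < warmup:
        return base_lr * i / warmup
    return base_lr * 0.99 ** ((i - warmup) / 1000)
```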
\subsection{Results}\label{sec:results}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.333\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/precision.pdf}
\caption{validation accuracy}
\label{fig:acc}
\end{subfigure}%
\begin{subfigure}[t]{0.333\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cum_average_step.pdf}
\caption{average communication steps}
\label{fig:cum_avg_step}
\end{subfigure}%
\begin{subfigure}[t]{0.333\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cum_precision.pdf}
\caption{prediction accuracy}
\label{fig:cum_acc}
\end{subfigure}%
\caption{\textbf{Statistics aggregated over all random seeds.} (a) The validation accuracy of different game settings and the ablation baseline. (b) The average communication steps under different settings and ablation baselines. $\gamma$ is 0.85 by default, 0.95 in complete95 and cumulative95. (c) The prediction accuracy when receivers are presented with sketches drawn by corresponding senders at 1, 3, 5, and 7 time steps, respectively. These sketches are collected in a standalone roll-out after each iteration, where early decision is disabled; agents are trained in the \textit{complete} setting, where early decision is enabled. }
\label{fig:cum_step_and_acc}
\end{figure*}
\subsubsection{Communication efficacy and sketch abstraction}\label{sec:rate_duration}
We record both the success rate and the communication steps over the training iterations; see \cref{fig:cum_step_and_acc}. In \cref{fig:acc}, agents in all settings except \textit{one-step} converge to a success rate above $80\%$. Among them, the communicating pairs in the \textit{complete} and \textit{sender-fixed} settings evolve to achieve a success rate comparable with the \textit{retrieve} baseline. Interestingly, these two pairs also exhibit a phenomenon resembling a natural observation in human studies, named \textit{systematic reduction}~\cite{lewis1969convention}: The average steps first increase and then gradually decrease, as in \cref{fig:cum_avg_step}. Contrasting \textit{complete} and \textit{sender-fixed}, we can see: (i) The emergent conventions in the former are much simpler than those in the latter (fewer steps in \cref{fig:cum_avg_step}), which implies the contribution of mutual adaptation to sketch abstraction. (ii) The success rate in \cref{fig:acc} of the former converges a bit more slowly, which is reasonable since its senders can explore different ways of drawing. In comparison, if the receiver cannot make early decisions, it has no incentive to relate sketches (\ie, $C_{t-1}$ and $C_t$) at consecutive timesteps. The sender is thus \textit{unaware} of each stroke's marginal information gain, which in turn makes learning harder. This might explain the relatively low success rate in the \textit{max-step} setting. The failure of the \textit{one-step} pairs reveals the irreplaceable roles of sequential decision-making and observing complex sketches in emergent graphical communication.
To further inspect how our proposed modeling and learning on sequential decision-making facilitate the desired evolution in the sketches, we conduct an ablation study by comparing our proposed learning surrogate (\cref{eq:complte_v}) and a vanilla policy gradient baseline, REINFORCE~\cite{williams1992simple} with Monte Carlo cumulative rewards $ \mathbb{E}_{\pi_S, \pi_R}[\sum_t[\nabla\log \pi_R(a_{Rt}|\hat{X}_t) \sum_{n=t}^{T}\gamma^{n-t} r_{n}]]$.
Our comparison spans three axes. First, the REINFORCE baseline converges much more slowly than the proposed surrogate; see \textit{cumulative} vs.\ \textit{complete} in \cref{fig:acc}. Second, we check the robustness under variation of the decay factor $\gamma$. As shown in \cref{fig:cum_avg_step}, while the proposed method shows stable convergence in the communication steps under both $\gamma=0.85$ and $\gamma=0.95$, the REINFORCE baseline exhibits high-variance behavior under certain setups ($\gamma=0.95$). Finally, we check whether agents' early terminations are \textit{caused} by their \textit{awareness} of the indistinguishable performance of longer and shorter episodes. Given the \textit{precondition} that the longer the episodes are, the earlier the success rate increases, it should be the increase in the average performance of shorter episodes that \textit{causes} the average timesteps to decrease. Taking 1-step and 3-step communication as an example, in the \textit{complete} setting we should see the success rate of 3-step communication rise earlier and then be caught up with, but not exceeded, by that of 1-step communication. The not-exceeding condition is a crucial cue to validate that the communicating partners were \textit{actively} pursuing the Pareto front~\cite{kim2005adaptive} / efficiency bound~\cite{zaslavsky2018efficient} of accuracy and complexity. This is exactly the behavior that emerges with our proposed method, as shown in \cref{fig:cum_acc}. In contrast, for the REINFORCE baseline under the same decay factor, the performance of 1-step communication surpasses that of 3-step communication. It seems as if the incapability of \textit{learning} long episodes \textit{caused} the agents to \textit{avoid} them.
All our results on success rate and communication steps are consistent with predictions based on our hypotheses, which justifies our environments and models. However, a high success rate does not necessarily imply high convention quality. Another essential property for conventions is \textit{stability}~\cite{lewis1969convention}: There should exist common patterns in the repeated usage of conventions to convey similar concepts. We take the viewpoint of representation learning and concretize the vague \textit{stability} with three formally defined properties: \textit{iconicity}, \textit{symbolicity} and \textit{semanticity}. In the following, we introduce our experiments to measure these properties.
\subsubsection{Iconicity: generalizing to unseen images}
We start from \textit{iconicity} since it is the most distinctive property in visual communication. We define \textit{iconicity} as the drawings exhibiting high visual resemblance with the corresponding images, such that they are proximal to the latter when measured on the high-level embedding of a general-purpose visual system. Based on this definition, a more \textit{iconic} drawing should facilitate more \textit{generalizable} graphical conventions. Namely, in more naturalistic open-domain communication, agents would always see novel scenes containing known or unknown concepts. They should still be able to communicate with established conventions or with unconventionalized iconic drawings. Such \textit{generalizability} can be measured in two test cases: unseen instances of seen classes and unseen classes. A successful generalization to unseen instances implies senders' ways of drawing preserve \textit{iconicity} in the evolution. A successful generalization to unseen classes (\ie, zero-shot generalization) is more difficult than unseen instances; hence, partners may increase the conversation length and communicate with more complex sketches. This requires both the senders to preserve \textit{iconicity} in drawings and the receivers to be sensitive to the information change in the communicated sketches.
\cref{tab:setting} reports the success rates and average timesteps in our generalization tests. The \textit{retrieve} setting is the baseline: since there is no evolution at all, its sketches should resemble the original images the most (\ie, possess the highest \textit{iconicity}). Unsurprisingly, its generalization performance upper-bounds all other settings. Among the experimental and controlled settings, the \textit{complete}, \textit{max-step}, and \textit{sender-fixed} agents generalize relatively well to unseen instances ($70.37 \pm 0.04$, $67.93 \pm 0.02$, $71.80 \pm 0.02$) and generalize above chance ($39.40 \pm 0.05, 38.40 \pm 0.04, 45.40 \pm 0.02>$25.00) to unseen classes. Interestingly, the \textit{complete} and \textit{sender-fixed} communicating partners intelligently turn to longer episodes for better generalization, outperforming the \textit{max-step} agents. This finding implies the partners may turn to more iconic communication when there are no established conventions/symbols, just as we humans do. Strikingly, the \textit{max-step} conventions seem to lose more \textit{iconicity}, possibly due to confusion about marginal information gains. The \textit{one-step} drawings seem to lack \textit{iconicity} altogether.
\subsubsection{Symbolicity: separating evolved sketches}\label{sec:classification}
Next, we measure \textit{symbolicity} to evaluate the graphical conventionalization. We define \textit{symbolicity} as the drawings being consistently separable in the high-level visual embedding, which facilitates new communication partners to easily distinguish between categories without grounding them to referents. Based on this definition, a more \textit{symbolic} drawing should be more \textit{easily separable} into their corresponding categories. To measure such \textit{separability}, we use a pretrained VGG16 as the new learner and finetune the last fully connected layer to classify evolved sketches into the 30 categories. Technically, we first get the 300 final canvases from the communication game, 10 for each category. Among them, we use $70\%$ for training and $30\%$ for testing.
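The separability probe is equivalent to training a single softmax layer on frozen embeddings. Below is a self-contained NumPy stand-in for finetuning VGG16's last FC layer (the paper uses the actual network; treat this as a simplified illustration of the protocol, with the optimizer settings ours):

```python
import numpy as np

def linear_probe_accuracy(train_X, train_y, test_X, test_y, n_classes,
                          lr=0.5, epochs=300, seed=0):
    """Train a softmax layer on frozen embeddings via gradient descent
    on the cross-entropy loss, then report test accuracy."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(train_X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[train_y]                 # one-hot targets
    for _ in range(epochs):
        logits = train_X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / len(train_y)                 # cross-entropy gradient
        W -= lr * train_X.T @ G
        b -= lr * G.sum(axis=0)
    pred = (test_X @ W + b).argmax(axis=1)
    return float((pred == test_y).mean())
```

In the actual experiment, `train_X`/`test_X` would be VGG16 features of the 70\%/30\% split of final canvases, with `n_classes = 30`.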
The bar plot in \cref{fig:class} shows the classification results. Since agents in the \textit{one-step} setting do not play the game successfully, they may not form a consistent way to communicate. Agents in the \textit{complete} setting achieve the highest accuracy, even higher than that obtained on the original images. This finding indicates that agents in the \textit{complete} setting agree on a graphical convention that consistently highlights features shared across all training instances within each category yet distinguishable between categories. Comparing the \textit{max-step} setting with the \textit{complete} setting shows that early decision is an important factor for more \textit{symbolic} conventions. Comparing the \textit{sender-fixed} setting with the \textit{complete} setting suggests that the sender's evolution also contributes to high \textit{symbolicity}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{classification}
\caption{\textbf{Testing results of classifiers trained with sketches from different settings.} \textit{img} denotes images, and \textit{retrieve} denotes unevolved sketches.}
\label{fig:class}
\end{figure}
\subsubsection{Semanticity: correlating category embedding}
\begin{figure*}
\centering
\vspace{-12pt}
\includegraphics[width=\linewidth]{tsne_3setting}
\caption{\textbf{t-SNE of visual embedding of the original images (left), unevolved sketches in the \textit{retrieve} setting (middle), and evolved sketches in the \textit{complete} setting (right).} These embeddings are from the finetuned VGGNets in \cref{sec:classification}. The evolved sketches have clearer boundaries due to the discrimination game. But they still maintain the topography that similar concepts are close to each other.}
\label{fig:category_embedding}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{evolve}
\caption{\textbf{Sketch evolution of rabbit and giraffe through game iterations.} For each example, sketches from the left to the right show the change of the final-step canvas from iteration 0 to 30,000. Please refer to Supp for more results.}
\label{fig:temporal evolve}
\end{figure*}
The last desired property of graphical conventions is that the evolved sketches can preserve the perceptual metric in images~\cite{zhang2018unreasonable}. We define \textit{semanticity} as the topography of the high-level visual embedding space of drawings being strongly correlated to that of images, such that semantically similar instances and categories lie close to each other in the embedding space. To examine such \textit{topographic correlation}, we project the embeddings obtained in \cref{sec:classification} to a 2D space via t-SNE~\cite{van2008visualizing}. \cref{fig:category_embedding} shows the visualization of the original images and the sketches in the \textit{retrieve} and \textit{complete} settings; please refer to the supplementary material for results of other settings. Images/drawings from the same category are marked in the same color. As shown, boundaries between categories are clearer in the evolved drawings in the \textit{complete} setting than in the unevolved sketches in \textit{retrieve} or the original images; but semantically similar categories are still close to each other. For example, cow, deer, horse, and camel are proximal to each other, while burger and apple are far from them. These results highlight another uniqueness of visual communication over its symbolic counterpart: The similarity in the visual cues of the conventions may hint at the \textit{semantic} correlation between the referents.
\subsubsection{Visualizing sketch evolution}
To better understand the nature of the emerged conventions, we inspect the intermediate sketches in the evolution processes. Specifically, we choose to visualize the process under the \textit{complete} setting. \cref{fig:temporal evolve} shows three instances in two categories. For each example, drawings from the left to the right show the change of the final-step canvas from iteration 0 to 30,000. Sketches' complexity gradually decreases after an initial increase, echoing the trend of reduction described in \cref{sec:rate_duration}. For rabbit, at the beginning, the strokes may depict instances from different perspectives; through iterations, they converge to highlight the rabbit's long ear. As for giraffe, the agents gradually learn to emphasize the long neck. Particularly, in the third example, although the giraffe lowers its head, we can still see an exaggerated vertical stroke for the neck, similar to the first example where the giraffe's head is raised. These examples show how the sender gradually unifies drawings of the same category. It can also be seen that after evolution, the sender is inclined to use the first several strokes to depict the most salient parts of visual concepts.
\section{Conclusion}\label{sec:conclusion}
In this work, we present the first step of modeling and simulating the evolution of graphical conventions between two agents in a visual communication game. Agents modeled in the proposed framework can successfully communicate visual concepts using sketches as the medium. We measure the emergent graphical conventions over three carefully defined properties: \textit{iconicity}, \textit{symbolicity}, and \textit{semanticity}. The experimental results under different controls suggest that early decision, mutual adaptation, and sequential decision-making can encourage \textit{symbolicity} while preserving \textit{iconicity} and \textit{semanticity}. However, there are some limitations regarding the two-stage pretrained senders. An ideal sender would not need to convert images to sketches before it starts sketching. The limitations in the pretrained sketching module also constrain the discriminative need among the selected classes in our game. We will investigate and resolve these limitations in future research. With the uniqueness of visual conventions demonstrated, we hope our work can invoke the study of emergent communication in the unexplored modality of sketches.
{\small
\bibliographystyle{ieee_fullname}
}
\section{INTRODUCTION} \label{sec:introduction}
Parity-violating physics in the early universe may cause an effect
known as cosmic birefringence, in which photons with different
polarizations travel differently along their propagation paths,
resulting in a net rotation on the polarization directions of cosmic
microwave background (CMB) photons. Such an effect can arise from many
types of beyond-the-Standard-Model physics, such as from the coupling
between axion-like particles and photons through a
Chern-Simons interaction (see, e.g., \cite{Li:2008}), from pseudoscalar
fields introduced in early dark energy models to resolve the Hubble
tension \cite{Capparelli:2020}, or from primordial magnetic fields
through Faraday rotation (see, e.g., \cite{Kosowsky:1996:FR}).
Cosmic birefringence can cause both isotropic and anisotropic
rotation of the microwave background polarization. Since
the polarization field is dominated by an E-mode signal from primordial
density perturbations, small rotations of polarization effectively turn
E-mode into B-mode polarization, leaving observable imprints in the
polarization power spectra. Isotropic birefringence, in particular, leads to non-zero
parity-odd power spectra in the CMB including TB and EB (see, e.g.,
\cite{Li:2008, Zhai:2020a}). Various experiments have placed
constraints on isotropic rotation angle, such as Planck \cite{Planck:2016soo},
WMAP \cite{2011}, and ACT \cite{ACT:2020frw}.
The observational challenge in constraining
isotropic birefringence is that its effect is highly degenerate
with that of a calibration error in the orientation of polarized detectors
(see, e.g., \cite{Keating:2013,Kaufman:2014}).
Anisotropic birefringence, on the other hand, leads
only to parity-even spectra and contributes non-negligibly
to the B-mode power spectrum. Anisotropic rotation also induces
off-diagonal correlations in the microwave background multipoles, which allows
reconstruction of the anisotropic rotation field using a quadratic estimator
approach similar to lensing reconstruction of the deflection field (see, e.g.,
\cite{Gluscevic:2009,Yadav:2012a,Namikawa:2017}). Such an effect has been used
to derive observational constraints on anisotropic rotation; for example,
Planck \cite{PlanckCollaboration:2016}, BICEP2 / Keck \cite{BICEP2Collaboration:2017},
ACT \cite{Namikawa:2020}, and SPT \cite{Bianchini:2020} have all derived upper bounds on
an anisotropic rotation field with a scale-invariant power spectrum.
Despite the physical importance of a possible rotation field, to our knowledge
no publicly available codes exist that compute CMB power spectra from cosmic
birefringence. Here we present a modified
version of \texttt{class}
\cite{software:class}\footnote{\url{https://github.com/lesgourg/class_public}},
named
\texttt{class\_rot}\footnote{\url{https://github.com/catketchup/class_rot}},
which implements this calculation and allows for fast computation of
the rotated EB, TB, EE, and BB power spectra due to both
isotropic and anisotropic rotation from cosmic birefringence. In particular, we
implement a non-perturbative calculation based on the angular
correlation function of the rotation field \cite{Li:2008,Li:2013}.
Our code has an accuracy better than 1\% at all multipoles from
$l=2$ to $l=4000$, which we verify through comparison with power
spectra of simulated sky maps including random rotation fields.
This paper is structured as follows. In Sec.~\ref{sec:rotation}, we
describe the basics of cosmic birefringence. In Sec.~\ref{sec:rotated
ps} we show the non-perturbative calculation method that is implemented in
\texttt{class\_rot}, focusing on the effect of cosmic birefringence on
the CMB power spectra. In Sec.~\ref{sec:code}, we demonstrate the code
implementation and give usage examples, and we present
comparisons between the results from \texttt{class\_rot} and numerical
simulations.
Sec.~\ref{sec:conclusion} provides a brief concluding discussion about the uses
of this code in the context of current and upcoming experiments.
\section{COSMIC ROTATION FIELD}
\label{sec:rotation}
The rotation effect from cosmic birefringence can be effectively
expressed as a rotation field $\alpha(\hat{\bm{n}})$, which can have
both an isotropic part and an anisotropic part \cite{Zhai:2020a},
given by
\begin{equation}
\label{eq:alpha}
\alpha(\hat{\bm{n}})=\bar{\alpha}+\delta \alpha(\hat{\bm{n}}),
\end{equation}
with $\bar{\alpha}$ the isotropic part, and
$\delta \alpha(\hat{\bm{n}})$ the anisotropic part with a zero mean,
\begin{equation}
\label{eq:rotation parts}
\expect{\delta \alpha(\hat{\bm{n}})}=0.
\end{equation}
As a result of rotation, the Stokes parameters $Q$ and $U$ transform as
\begin{equation}
\label{eq:rotation}
(\tilde{Q} \pm i \tilde{U})(\hat{\bm{n}})=\exp (\pm i 2 \alpha(\hat{\bm{n}}))(Q \pm i U)(\hat{\bm{n}}),
\end{equation}
where we have used tildes to denote rotated quantities.
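Written out in real components, Eq.~\eqref{eq:rotation} amounts to a 2D rotation of the Stokes vector by twice the rotation angle; a minimal sketch (function name ours):

```python
import math

def rotate_qu(q, u, alpha):
    # (Q + iU) -> exp(2i*alpha) * (Q + iU), expanded into real components:
    # Q~ = Q cos(2a) - U sin(2a),  U~ = Q sin(2a) + U cos(2a)
    c, s = math.cos(2.0 * alpha), math.sin(2.0 * alpha)
    return q * c - u * s, q * s + u * c
```

For example, a rotation by $\alpha = \pi/4$ maps pure $Q$ polarization entirely into $U$.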
To illustrate how such a rotation field can arise from parity-violating
physics in the early universe, consider for example a
Chern-Simons-type interaction of photons and axions
with a Lagrangian given by
\begin{equation}
\label{eq:cs term}
\mathcal{L}_{c s}=\frac{\beta \phi}{2 M} F^{\mu \nu} \tilde{F}_{\mu \nu},
\end{equation}
where $\beta$ is a dimensionless coupling constant, $\phi$ is the
axion field, $M$ is its mass scale, and $F^{\mu \nu}$ is the
electromagnetic tensor with $\tilde{F}_{\mu \nu}$ being its dual. This term
modifies the Euler-Lagrange equations for the electromagnetic field and induces a
rotation in the polarization direction of a photon if $\phi$
varies along its propagation path \cite{1997PhRvD..55.6760C, 1998PhRvD..58k6002C,Leon:2017}, with the rotation
angle given by
\begin{equation}
\label{eq:alpha and phi}
\alpha=\frac{\beta}{M} \Delta \phi,
\end{equation}
where $\Delta \phi$ is the change of $\phi$ along the photon path.
In the case that the axion field $\phi$ is spatially
homogeneous, Eq.~\eqref{eq:alpha and phi} introduces an
isotropic rotation field to the CMB; an inhomogeneous axion field
gives an anisotropic rotation field in the CMB.
A convenient way to express an anisotropic rotation field,
$\alpha(\hat{\bm{n}})$, is to expand it in the basis of spherical
harmonics as
\begin{equation}
\label{eq:alpha alm}
\delta \alpha(\hat{\bm{n}})=\sum_{L M} \alpha_{L M} Y_{L M}(\hat{\bm{n}}).
\end{equation}
We assume that $\alpha(\hat{\bm{n}})$ follows Gaussian random
statistics, in which case the statistical information of the rotation
field $\alpha(\hat{\bm{n}})$ can be completely specified by its power
spectrum $C_L^{\alpha\alpha}$, given by
\begin{equation}
\label{eq:alpha ps}
\expect{\alpha_{L M} \alpha^{*}_{L' M'}} = \delta_{L L'}\delta_{M M'}C_{L}^{\alpha\alpha}.
\end{equation}
In this paper we only consider a scale-invariant power spectrum of
the anisotropic rotation field, which is physically well-motivated
\cite{2011PhRvD..84d3504C}, though the formalism presented here is broadly
applicable to an arbitrary rotation field power spectrum. Following the convention in \cite{Abazajian:2019eic}, we parametrize a scale-invariant power spectrum as
\begin{equation}
\label{eq:cl_aa}
\frac{L(L+1)}{2 \pi} C_{L}^{\alpha \alpha}=A_{C B},
\end{equation}
with $A_{CB}$ the amplitude of the cosmic birefringence power
spectrum\footnote{Note that $A_{CB}$ defined in this paper is $10^{-4}$ times that in \cite{Namikawa:2020} and $10^{-5}$ times that in \cite{Namikawa:2017}.}.
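Solving Eq.~\eqref{eq:cl_aa} for the spectrum gives $C_L^{\alpha\alpha} = 2\pi A_{CB}/[L(L+1)]$; as a quick sketch (function name ours):

```python
import math

def cl_alpha_alpha(L, A_cb):
    # Scale-invariant rotation power spectrum:
    # L(L+1) C_L^{aa} / (2 pi) = A_CB  =>  C_L^{aa} = 2 pi A_CB / (L (L+1))
    return 2.0 * math.pi * A_cb / (L * (L + 1))
```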
\section{Impacts on Microwave Background Polarization Power Spectra}
\label{sec:rotated ps}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{./figs/ps.pdf}
\caption{Microwave background polarization BB power spectrum contributions from a scale-invariant tensor mode ($r=0.004$), gravitational lensing, isotropic rotation ($\bar{\alpha}=0.1^{\circ}$) and scale-invariant anisotropic rotation ($A_{CB}=10^{-5}$) are given in the upper panel. The absolute TB and EB power spectra from isotropic rotation ($\bar{\alpha}=0.1^{\circ}$) are shown in the lower panel.}
\label{fig:ps.pdf}
\centering
\end{figure}
In this section, we briefly review the rotated CMB power spectra calculation implemented in \texttt{class\_rot}. We consider a rotation field with both an isotropic contribution and a Gaussian random anisotropic contribution as described in Eq.~\eqref{eq:alpha}. We adopt the non-perturbative method introduced in \cite{Li:2008,Li:2013}, which is similar to the calculation method of lensed CMB power spectra in \cite{Challinor:2005}. Here we briefly review the non-perturbative calculations relevant to the implementation of \texttt{class\_rot}; we refer interested readers to \cite{Li:2008,Li:2013} for further details.
In this method, the starting point is to connect the real-space correlation functions of rotated quantities, such as $\tilde{T}(\hat{\bm{n}})$, $\tilde{Q}(\hat{\bm{n}})$, and $\tilde{U}(\hat{\bm{n}})$, to the rotated power spectra, e.g., $\tilde{C}_{\ell'}^{E E}$, $\tilde{C}_{\ell'}^{B B}$, with
\begin{equation}
\label{eq:xi spherical}
\begin{aligned}
\tilde{\xi}_{+}(\beta) &\equiv\left\langle(\tilde{Q}+i \tilde{U})^{*}(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= \sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{E E}+\tilde{C}_{\ell'}^{B B}\right) d_{22}^{\ell'}(\beta),\\
\tilde{\xi}_{-}(\beta) &\equiv\left\langle(\tilde{Q}+i \tilde{U})(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= \sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{E E}-\tilde{C}_{\ell'}^{B B}+2 i \tilde{C}_{\ell'}^{E B}\right) d_{-22}^{\ell'}(\beta), \\
\tilde{\xi}_{X}(\beta) &\equiv \left\langle T(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= -\sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{T E}+i \tilde{C}_{\ell'}^{T B}\right) d_{02}^{\ell'}(\beta),
\end{aligned}
\end{equation}
where $\hat{\bm{n}}$ and $\hat{\bm{n}}^{\prime}$ are two directions in the spherical coordinate system, $\cos\beta = \hat{\bm{n}} \cdot \hat{\bm{n}}^{\prime}$, and $d_{mm'}^{\ell}$ is the Wigner d-function. Taking advantage of the orthogonality relations of Wigner d-functions,
\begin{equation}
\label{eq:w-d orthogonality}
\int_{-1}^{1} d \cos \beta\: d_{mk}^{\ell}(\beta) d_{m'k'}^{\ell'}(\beta) = \frac{2}{2\ell+1}\delta_{mm'}\delta_{kk'}\delta_{\ell \ell'},
\end{equation}
one can invert Eq.~\eqref{eq:xi spherical} to express rotated power spectra in terms of correlation functions, such as
\begin{equation}
\label{eq:xi reverse}
\tilde{C}_{\ell}^{E E}+\tilde{C}_{\ell}^{B B}=2 \pi \int_{-1}^{1} d \cos \beta\:\tilde{\xi}_{+}(\beta) d_{22}^{\ell}(\beta) .
\end{equation}
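For $m=k=0$ the Wigner d-function reduces to a Legendre polynomial, $d_{00}^{L}(\beta) = P_L(\cos\beta)$, so the orthogonality relation Eq.~\eqref{eq:w-d orthogonality} can be checked numerically in that special case (a sketch with our own helper names):

```python
import math

def legendre(L, x):
    # P_L(x) via the Bonnet recurrence; note d^L_{00}(beta) = P_L(cos beta).
    p0, p1 = 1.0, x
    if L == 0:
        return p0
    for n in range(1, L):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def overlap(l1, l2, n=4000):
    # Trapezoidal estimate of the integral of P_{l1}(x) P_{l2}(x) over [-1, 1].
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * legendre(l1, x) * legendre(l2, x)
    return total * h
```

The result approaches $2\,\delta_{\ell\ell'}/(2\ell+1)$, as in Eq.~\eqref{eq:w-d orthogonality} with $m=m'=k=k'=0$.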
Applying Eq.~\eqref{eq:rotation}, $\tilde{\xi}_{+}(\beta)$ can be expressed by un-rotated quantities as
\begin{equation}
\label{eq:xi}
\tilde{\xi}_{+}(\beta) =e^{-4C^{\alpha}(0)+4C^{\alpha}(\beta)}\sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}+C_{\ell'}^{BB})d_{22}^{\ell'}(\beta).
\end{equation}
Here $C^{\alpha}(\beta)$ is the correlation function of rotation angles in the two directions separated by $\beta$ and can be expressed as
\begin{equation}
\label{eq:cla}
\begin{aligned}
C^{\alpha}(\beta)=\left\langle\delta \alpha\left(\hat{\bm{n}}_{1}\right) \delta \alpha\left(\hat{\bm{n}}_{2}\right)\right\rangle=&\ \sum_{L} \frac{2 L+1}{4 \pi} C_{L}^{\alpha \alpha} P_{L}(\cos \beta)\\
=&\ \sum_{L} \frac{2 L+1}{4 \pi} C_{L}^{\alpha \alpha} d_{00}^{L}(\beta),
\end{aligned}
\end{equation}
where $C_{L}^{\alpha \alpha}$ is a generic rotation field power spectrum introduced in Eq.~\eqref{eq:alpha ps}, $P_{L}(\cos \beta)$ is the Legendre polynomial, and we have used $P_{L}(\cos \beta) = d_{00}^{L}(\beta)$.
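Eq.~\eqref{eq:cla} can be evaluated directly with a Legendre recurrence; a pure-Python sketch (names ours), specialized here to the scale-invariant spectrum of Eq.~\eqref{eq:cl_aa}:

```python
import math

def legendre(L, x):
    # Bonnet recurrence for the Legendre polynomial P_L(x).
    p0, p1 = 1.0, x
    if L == 0:
        return p0
    for n in range(1, L):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def c_alpha(beta, A_cb, L_min=2, L_max=1000):
    # C^alpha(beta) = sum_L (2L+1)/(4 pi) C_L^{aa} P_L(cos beta),
    # with the scale-invariant C_L^{aa} = 2 pi A_cb / (L (L+1)).
    x = math.cos(beta)
    total = 0.0
    for L in range(L_min, L_max + 1):
        cl = 2.0 * math.pi * A_cb / (L * (L + 1))
        total += (2 * L + 1) / (4.0 * math.pi) * cl * legendre(L, x)
    return total
```

Since all terms have positive coefficients and $|P_L| \le 1$, the variance $C^{\alpha}(0)$ bounds $|C^{\alpha}(\beta)|$ for all separations.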
Equipped with Eq.~\eqref{eq:xi}, Eq.~\eqref{eq:xi reverse} can be written as
\begin{equation}
\label{eq:rotated ps EE BB}
\begin{aligned}
\tilde{C}_{\ell}^{E E}+\tilde{C}_{\ell}^{B B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)} \int d\cos \beta\: e^{4C^{\alpha}(\beta)} d_{22}^{\ell}(\beta) \\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}+C_{\ell'}^{BB})d_{22}^{\ell'}(\beta)\right].
\end{aligned}
\end{equation}
Similarly, one can also obtain
\begin{equation}
\label{eq:rotated ps}
\begin{aligned}
\tilde{C}_{\ell}^{T E} &=C_{\ell}^{T E} \cos (2 \bar{\alpha}) e^{-2 C^{\alpha}(0)},\\
\tilde{C}_{\ell}^{T B} &=C_{\ell}^{T E} \sin (2 \bar{\alpha}) e^{-2 C^{\alpha}(0)},\\
\tilde{C}_{\ell}^{E E}-\tilde{C}_{\ell}^{B B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)}\cos 4\bar{\alpha} \int d\cos \beta\: e^{-4C^{\alpha}(\beta)} d_{-22}^{\ell}(\beta)\\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}-C_{\ell'}^{BB})d_{-22}^{\ell'}(\beta)\right],\\
\tilde{C}_{\ell}^{E B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)} \sin 4\bar{\alpha} \int d\cos \beta\: e^{-4C^{\alpha}(\beta)} d_{-22}^{\ell}(\beta)\\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}-C_{\ell'}^{BB})d_{-22}^{\ell'}(\beta)\right].
\end{aligned}
\end{equation}
Note that the rotated CMB EE, BB, and EB power spectra in Eq.~\eqref{eq:rotated ps EE BB} and Eq.~\eqref{eq:rotated ps} are given by real-space integrals, which avoids the computationally expensive convolution in $\ell m$ space. A similar strategy that uses real-space integrals instead of convolution in $\ell m$ space can be found in the delensing calculation of \cite{Smith:2012}, where it significantly reduces computational cost. Also note that we have ignored the correlations between the rotation field and both the CMB temperature and (unrotated) E-polarization fields, which may arise in certain axion-like models, such as models with a nonzero potential under adiabatic initial conditions \cite{2011PhRvD..84d3504C}. A similar calculation that takes these correlations into account can be found in \cite{Zhai:2020a}.
We can see from Eq.~\eqref{eq:rotated ps EE BB} and Eq.~\eqref{eq:rotated ps} that both isotropic and anisotropic rotations contribute to the BB power spectrum. In the upper panel of Fig.~\ref{fig:ps.pdf}, we show the BB power spectrum contributed by an isotropic rotation field with $\bar{\alpha}=0.1^{\circ}$ and a scale-invariant anisotropic rotation field with $A_{CB}=10^{-5}$, respectively. As a comparison, we also show the contributions from a primordial tensor mode with $r=0.004$, where $r$ is the tensor-to-scalar ratio, and the contribution from CMB lensing. One can see that the B-mode signal from rotation fields can be larger than that from the primordial tensor mode at $\ell \gtrsim 150$, which suggests that, apart from signaling parity-violating physics, the rotation field is also an important systematic when searching for the primordial tensor mode. We also note that the rotation field generally contributes less than CMB lensing to B-mode polarization; this suggests that the ability to ``de-lens" the CMB will help tighten the constraints on cosmic birefringence.
From Eq.~\eqref{eq:rotated ps} we can also see that both $\tilde{C}_{\ell}^{T B}$ and $\tilde{C}_{\ell}^{E B}$ become non-zero when $\bar{\alpha}$ is non-zero; this is consistent with the fact that an isotropic rotation field violates parity symmetry and induces odd-parity CMB power spectra (see the lower panel of Fig.~\ref{fig:ps.pdf} for example).
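The first two lines of Eq.~\eqref{eq:rotated ps} involve no integral and can be sketched directly (function name ours; $\bar{\alpha}$ in radians, $C^{\alpha}(0)$ the rotation-field variance):

```python
import math

def rotated_te_tb(cl_te, alpha_bar, c_alpha_0):
    # TE~ = TE cos(2 abar) exp(-2 C^alpha(0))
    # TB~ = TE sin(2 abar) exp(-2 C^alpha(0))
    damp = math.exp(-2.0 * c_alpha_0)
    return (cl_te * math.cos(2.0 * alpha_bar) * damp,
            cl_te * math.sin(2.0 * alpha_bar) * damp)
```

With $\bar{\alpha}=0$ the TB spectrum vanishes, recovering the parity-even standard result; anisotropic rotation alone only damps TE.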
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{./figs/ps_sims.pdf}
\caption{Rotated CMB BB, TB, and EB power spectra from simulations and theory. The theory curves are calculated by \texttt{class\_rot}. The parameters are chosen as: $r=0.004$, $\bar{\alpha}=0.1^{\circ}$ and $A_{CB}=10^{-5}$.}
\label{fig:ps_sims.pdf}
\centering
\end{figure}
\section{The Software Package}
\label{sec:code}
In this section, we describe briefly the implementation of \texttt{class\_rot}, give usage examples of its Python interface, and show comparisons to numerical simulations.
\vspace{0.2cm}
\textbf{Code implementation:}
In \texttt{class\_rot}, the calculations described in Sec.~\ref{sec:rotated ps} are implemented as a new module to \texttt{class}, contained in \texttt{rotation.c} \source{rotation.c}. Internally, this \texttt{rotation} module takes the power spectra calculated by the \texttt{harmonic} module as inputs; by doing so, we have implicitly neglected the effect of CMB lensing when calculating the rotated power spectra. This assumption significantly simplifies our code implementation and only leads to sub-percent to percent-level errors due to the smallness of $C_\ell^{BB}$ relative to $C_\ell^{EE}$; incorporating the effect of CMB lensing into the \texttt{rotation} module will be the subject of future work.
The \texttt{rotation} module can be turned on by specifying \texttt{rotation = yes} in the parameter file, and it takes two additional parameters that specify the rotation field, \texttt{alpha} and \texttt{A\_cb}, which correspond to $\bar{\alpha}$, in units of degrees, and $A_{CB}$, in radians as defined in Eq.~\eqref{eq:cl_aa}, respectively. The rest of the parameters are identical to those in \texttt{class}. Note that by using $A_{CB}$ we implicitly assume that the rotation field follows a scale-invariant power spectrum -- a choice of preference rather than necessity; other rotation power spectra can be implemented by changing the \texttt{rotation\_cl\_aa\_at\_l} function defined in \texttt{rotation.c} \source{rotation.c}. We leave support for taking a generic rotation power spectrum as input for future work.
The parameters can be specified in a parameter file and passed to the compiled \texttt{class} binary executable, in the same way as the original \texttt{class}. An example parameter file, \texttt{explanatory\_ROT.ini} \href{https://github.com/catketchup/class_rot/blob/main/explanatory_ROT.ini}{{\codeicon}}\, is also provided as part of \texttt{class\_rot} to illustrate the use of parameters. Note that this parameter file is only needed when calling \texttt{class\_rot} from the command-line interface using its compiled binary executable. We have also provided Python bindings to the functions in the rotation module allowing them to be called in the Python interface, and we show some usage example below.
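For instance, a minimal parameter file enabling the module might look like the following (values illustrative, matching the Python example below; parameter names as described above):

```
output = tCl,pCl,rCl
l_max_scalars = 4000
rotation = yes
alpha = 0.1
A_cb = 1e-5
```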
\vspace{0.2cm}
\textbf{Usage example:}
Here we give an example of how to calculate the rotated CMB power spectra using the Python interface of \texttt{class\_rot}:
\begin{lstlisting}[language=Python]
from classy import Class
params = {
"output": "tCl,pCl,rCl",
"l_max_scalars": 4000,
"rotation": "yes",
"alpha": 0.1,
"A_cb": 1E-5,
}
cosmo = Class()
cosmo.set(params)
cosmo.compute(level=["rotation"])
cosmo.rotated_cl()
\end{lstlisting}
One can see that \texttt{class\_rot} is meant to be used as a drop-in replacement for the original \texttt{class}, as it is imported the same way and follows the same usage pattern. The parameters are specified in a Python dictionary, \texttt{params}, and passed to the \texttt{cosmo} object. Note that it is important to include \texttt{rCl} in the \texttt{output} option as it is required for computing the rotated power spectra. The option \texttt{rotation} turns on the rotation module when its value is \texttt{yes}; \texttt{alpha} and \texttt{A\_cb} specify the rotation parameters as they would be used in a parameter file. Also note that when computing the cosmological model with the function \texttt{cosmo.compute()}, one needs to include \texttt{level=["rotation"]} so that the rotation module and its dependencies are initialized properly. After running \texttt{cosmo.compute()}, the rotated power spectra can be obtained by the function call \texttt{cosmo.rotated\_cl()}, in the form of a Python dictionary following the convention from \texttt{class}. This illustrates a basic usage of \texttt{class\_rot}; we refer interested readers to the examples provided in the bundled Jupyter notebook in \texttt{class\_rot} to find more detailed examples and explanations \href{https://github.com/catketchup/class_rot/blob/main/notebooks_rot}{{\codeicon}}.
\vspace{0.2cm}
\textbf{Comparison with simulations:}
To demonstrate the accuracy of \texttt{class\_rot}, we compare the rotated CMB power spectra from \texttt{class\_rot} with those from full-sky simulations. In particular, we first generate 100 realizations of un-rotated CMB maps in T, Q, and U based on a fiducial model given by the best-fit cosmology from Planck 2018 \cite{Planck2018:VI:CP} with $l_{\rm max} = 6000$. Additionally, we set a non-zero tensor-to-scalar ratio $r=0.004$. Next, we generate 100 realizations of a full-sky rotation map with $\bar{\alpha}=0.1^{\circ}$ and $A_{CB}=10^{-5}$, which are then used to rotate each realization of the unrotated CMB maps. These full-sky simulations are generated using \texttt{pixell} \cite{2021ascl.soft02003N} in rectangular pixelization and CAR projection with a resolution of 1 arcminute. We apply each rotation field to rotate one realization of simulated CMB maps in pixel space using Eq.~\eqref{eq:rotation} and then calculate its power spectra after the rotation. We repeat this procedure for each realization to get 100 sets of rotated CMB power spectra.
In Fig.~\ref{fig:ps_sims.pdf}, we show the average of the 100 realizations of rotated power spectra in comparison to the corresponding theory spectra obtained from \texttt{class\_rot}. One can clearly see that the output of \texttt{class\_rot} is in excellent agreement with simulations.
For $C_\ell^{BB}$ we estimate an error of $\lesssim 1\%$ at $\ell\lesssim 4000$; the accuracy noticeably degrades at larger $\ell$, likely due to a combination of pixelization effects, numerical precision, and the smallness of the signals of interest. Both $C_\ell^{TB}$ and $C_\ell^{EB}$ from \texttt{class\_rot} agree with the simulations within the expected cosmic variance of the averaged power spectra up to $\ell = 6000$, which is the highest multipole we have tested.
\section{Discussion and Conclusion}
\label{sec:conclusion}
In this paper we present \texttt{class\_rot}, a new publicly available
modified \texttt{class} code, which calculates rotated CMB power
spectra from cosmic birefringence using a non-perturbative
method. \texttt{class\_rot} supports both isotropic and anisotropic
rotations, as can be specified by the isotropic rotation angle,
$\bar{\alpha}$, and the amplitude of scale-invariant rotation power
spectrum, $A_{CB}$, respectively. Hence, \texttt{class\_rot} can be
effectively used to search for cosmic birefringence signals in CMB
polarization that feature a scale-invariant rotation power spectrum or
an isotropic rotation, such as that from the coupling between
axion-like particles and photons via a Chern-Simons interaction.
We leave the implementation of a more generic (i.e., not scale-invariant)
rotation power spectrum in \texttt{class\_rot} to future work; this
will allow us to search for a broader range of rotation signals, such
as that caused by Faraday rotation from primordial magnetic fields, which,
depending on their generation mechanism, may induce a rotation field that is
not scale-invariant (see \cite{2013A&ARv..21...62D} for a review).
In this paper we have also briefly reviewed the non-perturbative calculation
implemented in \texttt{class\_rot}, which makes use of the angular correlation
function of the rotation field and does not require the rotation to be perturbatively
small. Hence the calculation in \texttt{class\_rot} offers a broader range of
applicability. We leave the implementation of a perturbative calculation as
well as a detailed comparison between the non-perturbative and perturbative methods,
in terms of both speed and accuracy, to a future work.
We have briefly described the coding implementation and given an example of how
to use \texttt{class\_rot} with its Python interface. To demonstrate its accuracy we
have compared the rotated CMB power spectra such as BB, TB, and EB obtained
from \texttt{class\_rot} to full-sky simulations and shown that they are in
good agreement with $\lesssim 1\%$ error. Upcoming experiments are expected to
constrain cosmic birefringence with much higher precision. For example, while the current best limits lie around $\mathcal{O}(10')$ for isotropic rotation \cite{Planck:2016soo,ACT:2020frw} and around $\mathcal{O}(10^{-6})$ for $A_{CB}$ \cite{Namikawa:2020,Bianchini:2020}, it has been forecasted that Simons Observatory \cite{SO:2019:SciGoal} can improve the current limits by nearly an order of magnitude, achieving an uncertainty level of around 0.7$'$ for isotropic rotation and around $10^{-7}$ for $A_{CB}$ \cite{Pogosian:2019}. These limits will be further improved by the CMB-S4 experiment \cite{S4:2016:SciBook}, reaching an uncertainty level of around $0.2'$ for isotropic rotation \cite{Abazajian:2019eic} and around $10^{-8}$ for $A_{CB}$ \cite{Pogosian:2019}; this will allow for percent-level determinations of $\bar{\alpha}$ and $A_{CB}$ should there be a cosmic birefringence signal at our current observational limit. In light of these future prospects, it is important to have a robust code that computes the effect of cosmic birefringence in power spectra with better than percent-level accuracy. Hence, \texttt{class\_rot} can be a powerful tool for searches of cosmic birefringence signal in the future.
\section*{Acknowledgments}
We thank Arthur Kosowsky for very helpful comments. We thank Toshiya Namikawa, J. Colin Hill, Mathew S. Madhavacheril, and
Lisong Chen for useful discussion. This work uses resources of the
National Energy Research Scientific Computing Center, and open source
softwares including \texttt{healpy} \cite{2019JOSS....4.1298Z} and
\texttt{pixell} \cite{2021ascl.soft02003N}.
\section{Introduction}
Over recent years, there has been remarkable progress in fashion-related research including attribute/object recognition \cite{Dong,Liu,simo2016fashion,li2019two}, attribute discovery \cite{han2017automatic,hsiao2017learning,vittayakorn2016automatic}, recommendation \cite{al2017fashion,han2017learning,jing2019low,hsiao2018creating,kang2019complete,sattar2019fashion,singhal2020towards,zhan2021a3}, human/fashion parsing \cite{liang2015human,Simo-serra,yamaguchi2013paper}, and retrieval \cite{corbiere2017leveraging,hadi2015buy,vo2019composing,ak2018shirt,lang2020plagiarism}. Furthermore, with the success of generative networks \cite{goodfellow2014generative,salimans2017pixelcnn++,karras2020analyzing,zhu2017unpaired,choi2017stargan,arjovsky2017wasserstein,ak2020incorporating,zou2020edge}, extensive research is being conducted on fashion image synthesis \cite{han2019finet,han2018viton,ak2019attribute,kenan_ICIP,ak2020semantically,zhu2017your,cui2021dressing}. Among this wide range of problems, the focus of this paper is on image retrieval. While the key issue addressed in mainstream image retrieval research is cross-domain retrieval, a major challenge is managing situations where a large number of attributes can be used for attribute manipulation of the query image, which requires flexible \& discriminative representations in the retrieval system.
Conducting an image search by changing a certain attribute of the query image is tricky, since it may be hard to find the right balance between ``maintaining the current attributes" and ``adding/replacing an attribute". In \cite{zhaobo_atman}, this problem is defined as image retrieval with attribute manipulation and Attribute Manipulation Network (AMNet) is proposed to address the problem.
AMNet, trained with a triplet ranking loss, fuses features of the query image with the desired attribute representation to find the desired image. Another approach lets users indicate which image they prefer through ``relevance feedback", but it can be computationally intensive \cite{Kovashka2012, Li2016, Tong2000}. In any case, these methods do not exploit attribute localization when estimating feature representations, which could help discard artifacts of the unwanted attributes.
In this paper, we introduce FashionSearchNet-v2, an image retrieval network that takes a query image together with an attribute manipulation condition. Figure \ref{fig:first_fig} illustrates two attribute manipulation scenarios for given query images. In the first scenario, the color attribute is changed to ``black" while the other attributes of the query image are maintained. In the second example, we show that the proposed idea can be applied to find similar people with a beard attribute.
In image retrieval with attribute manipulation, there may be many attributes that the user may want to change where each attribute is located in a different region of the image. Focusing on regions of interest is more feasible for images with multiple attributes since some attributes may only be present at certain locations. In line with this objective, the proposed FashionSearchNet-v2 initially aims to learn attribute similarities by leveraging a weakly supervised localization method: class activation mapping (CAM) \cite{zhou2016learning}, i.e., no information is given about the location of the attributes. The localization module is based on the global average pooling (GAP) layer and used to generate attribute activation maps (AAMs), i.e., spatial attention. Consequently, each attribute can be represented by its most relevant region. Next, the region of interest (ROI) pooling is performed to feed these attribute-relevant features into a set of fully connected layers, which serve as local attribute representations. The classification and triplet losses are used to train FashionSearchNet-v2 and learn attribute representations/similarities.
Traditionally, employing the triplet loss requires anchor, positive and negative images. When the information about the anchor and positive images is not available, one can choose to select them based on their attribute similarities. However, this may lead to having very few examples in the training set. Therefore, we propose to use ``triplets of regions", which can be computed independently for each individual attribute representation. By disconnecting attribute representations into different layers, the triplet loss for computing similarity between query and candidate images can now be computed on triplets of regions where each region is estimated from the AAMs. Consequently, this process increases the number of possible triplets immensely as the learning is now independent of the attribute quantity.
\begin{figure}[t]
\centering
\includegraphics[width=8.9cm]{images/Fig1.pdf}
\caption{For each given query image, the user is able to manipulate specific attributes. Each attribute manipulation (shown with the \textcolor{red}{red font}) redefines the representation of the query image based on the desired attribute, which is then used by FashionSearchNet-v2 to retrieve the most similar images by using the ``redefined" query image.
}
\label{fig:first_fig}
\end{figure}
After the training of attribute representations is completed, the undesired attribute representation of the query image can be directly replaced for a given attribute manipulation by utilizing the proposed attribute memory block. Next, the network combines attribute representations of the query image into a single global representation and the image retrieval can be performed. In order to allow this, we use the global ranking loss function to teach FashionSearchNet-v2 which representation should have more importance depending on the attribute manipulation.
Compared to the earlier version: FashionSearchNet \cite{ak2018learning}, the proposed FashionSearchNet-v2 includes an updatable attribute memory block to estimate more precise attribute representation prototypes. Additionally, we include a feature fusion method that combines features with and without the localization method. This feature fusion method is especially important when AAMs are inaccurate or there is a dependency among several attributes. We further test our proposed idea on the CelebA dataset \cite{liu2015faceattributes} and show our method can be generalized for different domains for image retrieval with attribute manipulation. We also present more details on the algorithm while providing numerous experiments that include different ablation studies.
The main contributions of the paper can be summarized as follows:
\begin{itemize}
\item We introduce a novel image retrieval with attribute manipulation network: FashionSearchNet-v2 that is able to conduct attribute representation learning by localizing towards attributes and acquire better attribute representations for attribute manipulations.
\item In addition to using attribute activation maps to discover attribute relevant regions, we utilize feature fusion and an updatable memory block to store prototype attribute representations.
\item Our experiments show that each module significantly contributes to the performance of the proposed network while being able to handle different types of domains including fashion and face images \& attributes.
\end{itemize}
\section{Related Work}
\subsection{Attribute Recognition}
Attributes are tools that can be used to describe and categorize objects \cite{sudowe2015person,sarafianos2018deep,siddiquie2011image}. Preliminary works \cite{Bossard2013,Chen,kiapour2014hipster} relied on combining hand-crafted features such as SIFT \cite{Lowe2004} and HoG \cite{Dalal2005} with support vector machines \cite{cortes1995support}. Along with the introduction of deep neural networks, more powerful methods have been proposed for attribute recognition \cite{wang2019pedestrian,tang2019improving,han2019attribute}.
In terms of fashion products, attributes also provide a useful tool to assess and process clothing items. Mix and match \cite{Yamaguchi} combined a deep network with conditional random fields to explore the compatibility of clothing items and attributes. Chen et al. \cite{Chen2015} focused on describing people based on fine-grained clothing attributes. Several works utilized weakly labeled image-text pairs to discover attributes \cite{vittayakorn2016automatic, yashima2016learning}. Abdulnabi et al. \cite{abdulnabi2015multi} proposed a multi-task approach to predict multiple attributes. Li et al. \cite{li2019two} proposed a two-stream network for fashion recognition. More recently, a technique for hard-aware attribute classification \cite{ye2019hard} was proposed to address the issue of imbalanced data distribution, and a two-stream network \cite{Zhang_2020_CVPR} was proposed to enhance the extraction of shape and texture features. Attribute recognition plays an important part in this work as it is utilized in both the localization and retrieval abilities of FashionSearchNet-v2.
\subsection{Attribute Localization}
Being able to localize objects has been proven to be important in fine-grained recognition \cite{huang2016part,jaderberg2015spatial,huynh2020fine} and image retrieval \cite{bell2015learning,gordo2016deep}. More specifically, for fashion images, DARN \cite{Huang2015a} utilized a module to detect clothing items, which subsequently improved the performance of the image retrieval network. Similar to DARN \cite{Huang2015a}, Song et al. \cite{song2017learning} proposed a unified method to localize and classify apparel. More interestingly, in FashionNet \cite{Liu}, joint prediction of clothing attributes and landmarks was shown to be highly effective.
However, most of the aforementioned methods require annotation of bounding boxes or key points to correctly detect the object of interest. In \cite{singh2016end}, an end-to-end method is proposed to simultaneously localize and rank relative attributes in a weakly supervised manner with the help of Spatial Transformer Networks (STN) \cite{jaderberg2015spatial}. Nevertheless, the fact that a new model must be trained for each attribute makes the method proposed in \cite{singh2016end} hard to apply to images with multiple attributes. In \cite{tang2019improving}, an attribute localization module based on a spatial transformer is shown to improve pedestrian attribute recognition. Another method that provides weakly supervised localization information is class activation mapping (CAM) \cite{zhou2016learning}, which has been shown to be highly efficient in localizing the most representative regions of attributes. Class activation mapping can be used for many problems such as localizing attributes of pedestrians \cite{liu2018localization}, semantic segmentation \cite{wei2017object} and person re-identification \cite{yang2019towards}. As it is not feasible to annotate bounding boxes for every attribute, we were motivated to incorporate a weakly supervised attention mechanism to conduct multi-attribute similarity learning.
\subsection{Image Retrieval}
The success of deep learning based approaches provided better performance for content-based image retrieval (CBIR) \cite{babenko2014neural, wan2014deep, krizhevsky2011using, Huang2015a} compared to traditional image feature extraction methods. The most popular CBIR systems focus on searching for the same or similar items from query images \cite{hadi2015buy,Liu,shankar2017deep,simo2016fashion} or videos \cite{cheng2017video2shop,garcia2017dress}, while another set of works investigates the problem of recommendation \cite{al2017fashion, Liu2012, tangseng2017recommending, veit2015learning, hou2019explainable}, which is also closely related to image retrieval. More recently, the retrieval of fashion images has been investigated via various methods \cite{Kuang_2019_ICCV,Lin_2020_CVPR,lang2020plagiarism}.
Different from retrieval by the query alone, several works focus on the problem of adjusting certain aspects of the query image before performing the image search. The user feedback can be incorporated as attributes \cite{han2017automatic,zhaobo_atman,ak2018learning} or text \cite{vo2019composing,Chen_CVPR20,Chen_ECCV20}. Another work \cite{shin2019semi} explores fashion instance-level image retrieval. Han et al. \cite{han2017automatic} focused on spatially aware concept discovery from image/text pairs and conducted attribute-feedback product retrieval with the word2vec \cite{mikolov2013efficient} method. On the other hand, Zhao et al. \cite{zhaobo_atman} proposed a method to manipulate clothing products using a memory gate, where the problem is defined as ``fashion search with attribute manipulation". In contrast to \cite{zhaobo_atman}, our proposed method follows a different approach by including the localization of attributes and focusing on attribute representation learning.
\begin{figure*}
\centering
\includegraphics[width=15cm]{images/System_diagram.pdf}
\caption{Overview of the FashionSearchNet-v2. With the input image fed through the network, the global average pooling (GAP) layer is used to generate attribute activation maps (AAMs) for each attribute which are used to estimate several regions of interests (ROIs). Next, attribute-specific features are pooled from the $conv5$ layer by using the estimated ROIs. The pooled features are linked to a set of fully connected layers where the similarity learning between attributes is conducted and attribute representations are produced ($fc_{10\_1}, fc_{10\_2}, ...$). Finally, attribute representations are combined into a global representation to be used in the fashion search.}
\label{fig:FashionSearchNet-v2}
\end{figure*}
\section{FashionSearchNet-v2}
This section presents an overview of the proposed network, FashionSearchNet-v2. First, the network is trained with the classification loss to generate AAMs \cite{zhou2016learning}. Our motivation for using AAMs is to discover the most informative regions for each attribute concurrently, thus ignoring unrelated features. The network is then trained once more with a combination of triplet and classification losses to learn attribute representations with the help of estimated AAMs. Finally, a weighted combination of the attribute representations into a global representation is performed through the global ranking loss. The global representation is computed from the fusion of attribute representations with and without the localization module. The attribute memory block is also updated throughout the training to estimate prototype attribute descriptions to be used in the attribute manipulation step.
\subsection{Architecture Overview}\label{sect:architecture}
The proposed FashionSearchNet-v2 shown in Figure \ref{fig:FashionSearchNet-v2} is based on the AlexNet \cite{Krizhevsky} architecture with the following modifications applied to the baseline network: all fully connected layers are removed, and two convolutional layers are added after the $conv5$ layer to compensate for the effect of removing the fully connected layers. In the first stage, the network is trained with the classification loss to learn accurate AAMs, which represent the most activated regions of attributes. Next, regions of interest (ROIs) are extracted by using the AAMs. In order to learn local attribute representations, feature maps from the $conv5$ layer are passed into a set of fully connected layers with and without ROI pooling.
In the second stage, the network is trained with the joint combination of classification and ranking losses. These learned attribute representations are combined into a global representation to represent the input image and target attributes. In the final stage, the global ranking loss is applied to estimate the importance of attribute representations depending on the attribute manipulation and learn attribute prototypes.
\subsection{Learning Attribute Representations}\label{sect:LAR}
\textbf{Attribute Activation Maps.} The classification loss is used to discover the most relevant regions of attributes. Initially, the GAP layer is applied to the last convolutional layer which corresponds to $conv7$ as follows:
\begin{equation}
x_{I, k} = \sum_{i,j}conv7_k(I,i,j) \text{ for } k \in 1, 2, ..., K
\end{equation}
where $x_{I, k}$ is the feature extracted from image $I$ for channel $k$ and $conv7_k(I,i,j)$ is the $k$'th feature map of the $conv7$ layer at spatial location $(i,j)$. The multi-attribute classification network is trained using the following classification loss:
\begin{equation}\label{eq:L_C}
L_{C} = -{\sum_{I=1}^{N}}{\sum_{a=1}^{A}} \text{log}(p(g_{Ia}|x_{I}w_{a}))
\end{equation}
where $g_{Ia}$ represents the ground truth of the $a$'th attribute for image $I$. $x_I w_a$\footnote{The dimensions of $w_a$ are [number of feature maps, number of classes associated with $a$].} computes a weighted linear combination of $x_I$ for attribute $a$, $N$ is the number of training examples, and $A$ is the number of attributes. The posterior probability estimates the probability of $x_{I}w_{a}$ being classified as $g_{Ia}$. We next define $M_{a_c}(I,i,j)$ as the AAM for class $c$ of attribute $a$ as follows:
\begin{equation}
M_{a_c}(I,i,j) = \sum_{k}{w_{a_{(k,c)}}conv7_k(I,i,j)}
\end{equation}
where $w_{a_{(k,c)}}$ is the weight of attribute $a$ associated with the $k$'th feature map of class $c$, and $c$ is the class that maximizes the classification confidence. Using $M_{a_c}$, attribute localization can be added to the network. To do so, $A$ maps are estimated with a simple hard-threshold technique. Following the implementation in \cite{zhou2016learning}, the pixel values above 20$\%$ of the maximum value in the generated map are segmented. This is followed by estimating a bounding box that encloses the largest connected region in the AAM. This step is repeated for each attribute. \\
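As a minimal NumPy sketch of these steps (all shapes illustrative), the AAM is a weighted sum of the final feature maps, thresholded at 20\% of its maximum; for simplicity the sketch boxes all above-threshold pixels rather than only the largest connected region:

```python
import numpy as np

def gap(conv7):
    """Global average pooling as in Eq. (1): a spatial sum per feature map.

    conv7 : (H, W, K) feature maps of the last convolutional layer.
    """
    return conv7.sum(axis=(0, 1))                      # (K,)

def attribute_activation_map(conv7, w_a, cls):
    """M_{a_c}(i, j) = sum_k w_{a(k,c)} * conv7_k(i, j).

    w_a : (K, C) classifier weights for one attribute; cls is the
    most confident class c.
    """
    return conv7 @ w_a[:, cls]                         # (H, W)

def aam_bounding_box(aam, frac=0.2):
    """Threshold at `frac` of the maximum and return the
    (row_min, row_max, col_min, col_max) box of surviving pixels.
    (Simplification: boxes all above-threshold pixels, not only the
    largest connected region as in the paper.)"""
    rows, cols = np.where(aam >= frac * aam.max())
    return rows.min(), rows.max(), cols.min(), cols.max()
```

In practice this box, rescaled to $conv5$ coordinates, is what drives the ROI pooling described next.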
\textbf{Ranking with Triplets of Regions.} FashionSearchNet-v2's ability to identify ROIs enables it to ignore regions with unrelated features which may confuse the attribute similarity learning capability of the network. A structure similar to ROI pooling layer \cite{girshick2015fast} is used to pass features from the $conv5$ layer to a set of fully connected layers.
The example in Figure \ref{fig:triplets} examines the collar attribute to show the intuition behind the triplets-of-regions ranking loss function. At first glance, the anchor image $(\hat{I})$ may look similar to the negative image $(I^-)$ due to color attribute correlation. In fact, the collar attribute of $\hat{I}$ is the same as that of $I^+$. If the output of the network's $h$'th layer without triplet ranking training is used to compare Euclidean distances, $d(h(\hat{I}),h(I^+)) > d(h(\hat{I}),h(I^-))$ would be the case, meaning $\hat{I}$ would be closer to $I^-$ than $I^+$ in the feature space, which is undesired.
\begin{figure}
\centering
\includegraphics[width=8cm]{images/Triplet.pdf}
\caption{Examples for the triplets of regions of the collar attribute: Anchor $(\hat{I})$, Positive $(I^+)$ and Negative $(I^-)$. The generated collar attribute activation maps tend to be near the collar region and irrelevant regions are eliminated; thus, enabling a better attribute similarity learning.}
\label{fig:triplets}
\end{figure}
The first step of the proposed method involves estimating the corresponding AAMs as shown in Figure \ref{fig:triplets}. Note that the heatmaps of $\hat{I}$ and $I^+$ cover a smaller area compared to $I^-$ thus confirming the localization ability since the collar attribute of $I^-$ covers a wider region. It is evident that as the AAMs localize towards the collar attribute, the unrelated regions such as sleeve and torso are ignored without any intervention. Thus, FashionSearchNet-v2 is able to differentiate the collar attribute while ignoring irrelevant attributes (e.g., color, pattern, etc.).
When the triplet ranking loss function defined in \cite{Huang2015a,shankar2017deep} was used in FashionSearchNet-v2, the observed loss was extremely large unless a very small learning rate was used. Inspired by \cite{hoffer2015deep}, the soft-triplet ranking function is utilized, which normalizes the distances to the range of (0, 1) with the softmax function and is formulated as follows:
\begin{equation}
\scriptstyle
d^+(h(\hat{I}),h(I^+),h(I^-)) = \dfrac{\scriptstyle \text{exp}(||h(\hat{I}) - h(I^+)||_2)} {\scriptstyle \text{exp}(||h(\hat{I}) - h(I^+)||_2) + \text{exp}(||h(\hat{I}) - h(I^-)||_2)}
\end{equation}
\begin{equation}
\scriptstyle
d^-(h(\hat{I}),h(I^+),h(I^-)) = \dfrac{\scriptstyle \text{exp}(||h(\hat{I}) - h(I^-)||_2)} {\scriptstyle \text{exp}(||h(\hat{I}) - h(I^+)||_2) + \text{exp}(||h(\hat{I}) - h(I^-)||_2)}
\end{equation}
Given that $||(d^+, d^- - 1)||_2^2 = \mathrm{const}\cdot(d^+)^2$, and setting $h=fc_{10\_a}$, the ranking loss function can be written as:
\begin{equation} \label{eq:L_t}
L_{T} = {\sum_{I=1}^{N}}{\sum_{a=1}^{A}}d^+(fc_{10\_a}(\hat{I}),fc_{10\_a} (I^+),fc_{10\_a} (I^-))
\end{equation}
where $A$ is the number of fully connected layers, which is also equal to the number of attributes. The role of Eq. \ref{eq:L_t} is to learn a representation for each attribute using the final set of fully connected layers: $fc_{10\_a}$. We minimize $||fc_{10\_a}(\hat{I}) - fc_{10\_a}(I^+)||_2$ and maximize $||fc_{10\_a}(\hat{I}) - fc_{10\_a}(I^-)||_2$. The rule for picking triplets is as follows: $\hat{I}$ and $I^+$ must share the same label, while $I^-$ is chosen randomly from a different label. For instance, if the anchor has the ``blue" color label, the positive image can be any image with the ``blue" color label.
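The soft-triplet term $d^+$ above can be sketched in pure Python (the list-based vectors and function name are illustrative; a real implementation would operate on batched network outputs):

```python
import math

def soft_triplet_loss(h_anchor, h_pos, h_neg):
    """Soft-triplet term d+: softmax over the two anchor distances,
    so the per-triplet loss lies in (0, 1)."""
    def dist(u, v):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    dp = dist(h_anchor, h_pos)   # ||h(I^) - h(I+)||_2
    dn = dist(h_anchor, h_neg)   # ||h(I^) - h(I-)||_2
    ep, en = math.exp(dp), math.exp(dn)
    return ep / (ep + en)        # d+; the complementary d- is en / (ep + en)
```

When the anchor is closer to the positive than to the negative, $d^+$ falls below 0.5 and approaches 0 as the gap widens.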
Both ranking and classification losses are used in the optimization processes leading to the attribute representations. It is necessary to use the classification loss as it was observed from experiments that using only the ranking loss significantly diminishes the discriminative ability of the network. This classification loss denoted as $L_{TC}$ is formulated as in Eq. \ref{eq:L_C}, except $x_{I}w_{a}$ is replaced with the output of $fc_{10\_a}$ layers.
\\
\textbf{Feature Fusion.} Compared to the previous version, i.e., FashionSearchNet \cite{ak2018learning}, we also include global image features from $conv5$ layer when computing feature representations by concatenating features from localized \& global feature maps before the first fully connected layer.
\subsection{Attribute Manipulation \& Learning Global Representation} \label{sect:learn_global}
In the previous subsection, we showed how FashionSearchNet-v2 is taught to localize and learn attribute representations. Combining all these learned features would achieve good results when conducting image searches. However, such combinations may allocate too much memory and thus slow down the search process. Incorporating additional training also helps the network to learn how to merge feature representations for attribute manipulation.
By associating each attribute with a different fully connected layer ($fc_{10\_1}, ..., fc_{10\_A}$), the fashion search with attribute manipulation becomes straightforward. After the training, features with the same attribute value are extracted from the training images and stored in a local representation block $M$ $\in$ $\mathbb{R}^{C \times D}$ where $C$ is the total number of attribute values and $D$ is the feature dimension.
Given an attribute manipulation $t \in \mathbb{R}^{1 \times C}$, the corresponding attribute representation can be retrieved via $g=tM$ as visualized in Figure \ref{fig:attribute_retrieval_module}. After retrieving the new representation $g$, it is combined with the feature representation of the query image $f$, and the undesired representation of the query image is discarded. This formulation also enables FashionSearchNet-v2 to update the features in $M$, which improves the retrieval performance.
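A minimal sketch of the memory lookup $g=tM$, with hypothetical sizes $C$ and $D$ and random stand-ins for the learned prototypes:

```python
import numpy as np

C, D = 5, 8                            # hypothetical: C attribute values, D-dim features
M = np.random.randn(C, D)              # one prototype row per attribute value
f = np.random.randn(D)                 # query's current (unwanted) attribute repr.

t = np.zeros((1, C))
t[0, 2] = 1.0                          # one-hot manipulation: "replace with value #2"
g = (t @ M).ravel()                    # retrieved prototype, g = tM
# g now stands in for f when building the global representation
```

Because the lookup is a differentiable matrix product, gradients from the global ranking loss can flow back into $M$ and refine the prototypes.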
In order to reduce the dimension of the concatenated feature representation of $(f, g)$, a weight parameter $w_{r}$ is applied to reduce the concatenated feature length to the dimension $r$. Moreover, we use an additional ranking function and let weight variables learn ``which features are more important", which boosts the performance considerably. This is because some attribute representations, such as ``color", can be more important than others, say, ``fabric". $A$ weight parameters denoted by $\lambda _{a}$ are learned. The training is conducted with the following global ranking loss function $L_G$ for a given attribute manipulation ${t}$, which replaces attribute $a$:
\begin{equation}\label{eq:F_IT}
F_{I,t} = [fc_{10\_1}(I)\lambda_{1},..., g\lambda_a, ...,fc_{10\_A}(I)\lambda_A]w_{{a^*}}
\end{equation}
\begin{equation}\label{eq:L_G}
L_{G} = {\sum_{I=1}^{N}}d^+(F_{I, t} ,F_{I^+} ,F_{I^-})
\end{equation}
where $F_{I^+}$ and $F_{I^-}$ are features of the positive and negative triplet samples, respectively. For the training of global representations, the unwanted attribute representation of the query image is replaced with $g$, yielding $F_{I, t}$ with the weight variables applied as shown in Eq. \ref{eq:F_IT}. The training is conducted with the loss function given in Eq. \ref{eq:L_G}. The same procedure is applied to all possible attribute manipulation options. The rule for picking triplets for the global ranking loss is that $F_{I,t}$ and $F_{I^+}$ must be identical in terms of attributes after the attribute manipulation $t$, while $F_{I^-}$ is chosen randomly.
Another advantage of global representation learning is that not only the weights but also the attribute representations in $M$ are updated, as all operations are differentiable. Such updates on the memory block allow the network to optimize its parameters to find the most representative features for an attribute manipulation.
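Eq. \ref{eq:F_IT} can be sketched as below; the per-attribute representations, $\lambda$ values, and projection matrix are random stand-ins for the learned quantities:

```python
import numpy as np

A, D, r = 3, 4, 6                      # hypothetical: attributes, per-attr dim, reduced dim
lam = np.array([0.5, 1.0, 2.0])        # learned importance weights lambda_a

def global_repr(reps, lam, W, manipulated=None, g=None):
    """Concatenate the lambda-weighted attribute representations,
    substituting prototype g for the manipulated attribute, then
    project with w_{a*} to dimension r (Eq. F_{I,t})."""
    parts = list(reps)
    if manipulated is not None:
        parts[manipulated] = g         # replace the unwanted representation
    stacked = np.concatenate([lam[a] * parts[a] for a in range(len(parts))])
    return stacked @ W

reps = [np.full(D, i + 1.0) for i in range(A)]     # stand-ins for fc_{10_a}(I)
W = np.random.randn(A * D, r)                      # stand-in for w_{a*}
F = global_repr(reps, lam, W, manipulated=1, g=np.zeros(D))
```

Training the $\lambda_a$ (and $M$) with the global ranking loss then decides, per manipulation, how much each attribute slot contributes to $F_{I,t}$.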
\begin{figure}
\centering
\includegraphics[width=8.5cm]{images/System_diagram_cont.pdf}
\caption{Attribute retrieval module example. Given a query image and an attribute manipulation (red color), the network is trained with a triplet loss where the positive image is the one that matches all attributes after the manipulation and the negative image is chosen randomly.}
\label{fig:attribute_retrieval_module}
\end{figure}
\subsection{Optimization}
FashionSearchNet-v2 utilizes different loss functions in its optimization steps. The joint loss can be written as follows:
\begin{equation}\label{eq:joint}
L_{joint} = \lambda_{C}L_{C} + \lambda_{T}L_{T} + \lambda_{TC}L_{TC} + \lambda_{G}L_{G}
\end{equation}
The network is first trained only using $L_{C}$ as the other processes depend on how reliable the AAMs are. In the second stage, we assign the following weights: $\lambda_{C}=1, \lambda_{T}=1.5, \lambda_{TC}=1, \lambda_{G}=0$ to further train the network. After the training is finished, memory block $M$ is constructed and another training is performed for global representation learning using only $L_{G}$ where local attribute representations are kept fixed. Note that, it is possible to perform another joint training but due to memory efficiency we used fixed features for the global ranking loss.
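The staged schedule can be summarized in code; the loss weights follow Eq. \ref{eq:joint} and the epoch counts (12/12/2) follow the implementation details reported for all datasets, while the structure and helper names are illustrative:

```python
# Loss weights (lambda_C, lambda_T, lambda_TC, lambda_G) per training stage.
STAGES = [
    {"epochs": 12, "weights": (1.0, 0.0, 0.0, 0.0)},   # stage 1: L_C only (learn AAMs)
    {"epochs": 12, "weights": (1.0, 1.5, 1.0, 0.0)},   # stage 2: joint local losses
    {"epochs": 2,  "weights": (0.0, 0.0, 0.0, 1.0)},   # stage 3: global ranking L_G
]

def joint_loss(L_C, L_T, L_TC, L_G, weights):
    """Weighted sum of the four losses (Eq. 9)."""
    lc, lt, ltc, lg = weights
    return lc * L_C + lt * L_T + ltc * L_TC + lg * L_G
```

A training driver would simply loop over `STAGES`, passing the active weight tuple to `joint_loss` at every step.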
\section{Experiments}
\subsection{Implementation Details}
We use pre-trained ImageNet weights for AlexNet \cite{Krizhevsky} up to the $conv4$ layer and reinitialize the other layers. For the $conv5$, $conv6$, $conv7$ layers, the sizes of the feature maps are 384, 512, 512, respectively. As we use the triplets-of-regions ranking constraint, the selection of $(\hat{I}, I^+, I^-)$ becomes easy. For each mini-batch, images with the same attribute value are chosen to be $\hat{I}$ and $I^+$, and $I^-$ is picked such that it has a different attribute value. Gradients are calculated for each loss function and backpropagated through the network.
The network is trained with the stochastic gradient descent algorithm using a learning rate of $0.01$. No pre-processing steps other than removing the means from each channel of the images were conducted. For the ROI pooling, Tensorflow's \cite{abadi2016tensorflow} ``tf.image.crop\_and\_resize'' function is used to feed the $conv5$ features, cropped with the estimated bounding boxes, into a set of fully connected layers. Dropout was used on all fully connected layers with $p = 0.5$. For all datasets, networks are trained for 12 epochs in the first stage, 12 epochs in the second stage (joint loss), and 2 epochs for the global ranking loss.
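To convey what the crop-and-resize step does to the $conv5$ features, here is a NumPy stand-in (nearest-neighbour sampling rather than TensorFlow's bilinear interpolation; shapes are illustrative):

```python
import numpy as np

def crop_and_resize(fmap, box, out_size):
    """Sample an out_size x out_size grid of features from a normalized
    ROI box (y1, x1, y2, x2) in [0, 1], nearest-neighbour style.

    fmap : (H, W, C) feature maps, e.g. conv5 output.
    """
    H, W, _ = fmap.shape
    y1, x1, y2, x2 = box
    # map the normalized box corners onto feature-map pixel coordinates
    ys = (y1 + (y2 - y1) * np.linspace(0.0, 1.0, out_size)) * (H - 1)
    xs = (x1 + (x2 - x1) * np.linspace(0.0, 1.0, out_size)) * (W - 1)
    rows = np.clip(np.round(ys).astype(int), 0, H - 1)
    cols = np.clip(np.round(xs).astype(int), 0, W - 1)
    return fmap[rows][:, cols]          # (out_size, out_size, C)
```

The fixed-size output is what gets flattened and fed into the attribute-specific fully connected layers, regardless of how large the estimated bounding box was.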
\begin{table*}[ht]
\caption{Top-20 retrieval accuracy for each available attribute in their respective datasets.}
\centering
\begin{subtable}[h]{\textwidth}
\centering
\begin{tabular}{c c c c c c c c c c c c c | c}
\hline
Approach & Category & Color & Collar & Fabric & Fasten. & Fit & Gender & Neckline & Pocket & Pattern & Sleeve & Sport & \textbf{Avg.} \\ \hline
AMNet & 0.135 & 0.278 & 0.326 & 0.351 & 0.232 & 0.393 & 0.092 & 0.204 & 0.278 & 0.304 & 0.096 & 0.227 & 0.243 \\ \hline
w/o Rank & 0.085 & 0.271 & 0.223 & 0.317 & 0.199 & 0.343 & 0.118 & 0.139 & 0.222 & 0.422 & 0.101 & 0.199 & 0.220
\\
Rank & 0.141 & 0.306 & 0.379 & 0.357 & 0.258 & 0.396 & 0.200 & 0.177 & 0.292 & 0.490 & 0.240 & 0.215 & 0.288 \\
Rank-L & 0.168 & 0.410 & 0.431 & 0.390 & 0.265 & 0.398 & 0.249 & 0.222 & 0.335 & 0.461 & 0.334 & 0.247 & 0.326
\\
Rank-LG & 0.350 & \textbf{0.611} & 0.605 & 0.403 & 0.344 & 0.489 & 0.426 & 0.476 & 0.519 & 0.510 & 0.563 & 0.374 & 0.473
\\
Full & 0.336 & 0.569 & \textbf{0.613} & 0.395 & \textbf{0.364} & \textbf{0.510} & \textbf{0.446} & \textbf{0.491} & \textbf{0.521} & \textbf{0.549} & \textbf{0.590} & \textbf{0.410} & \textbf{0.483}
\\
Full w/ FF & \textbf{0.369} & 0.549 & 0.599 & \textbf{0.405} & \textbf{0.364} & 0.470 & 0.429 & 0.447 & 0.504 & 0.500 & 0.565 & 0.275 & 0.456
\\ \hline
\end{tabular}
\caption{Shopping100k dataset.}
\hfill
\end{subtable}
\begin{subtable}[h]{\textwidth}
\centering
\begin{tabular}{c c c c c c c c c c | c}
\hline
Approach & Button & Category & Collar & Color & Length & Pattern & Shape & Sleeve Len. & Sleeve Shp. & \textbf{Avg.} \\ \hline
AMNet & 0.253 & 0.183 & 0.191 & 0.202 & 0.185 & 0.168 & 0.205 & 0.100 & 0.173 & 0.184
\\ \hline
w/o Rank & 0.192 & 0.097 & 0.156 & 0.110 & 0.115 & 0.123 & 0.124 & 0.127 & 0.099 & 0.127
\\
Rank & 0.262 & 0.162 & 0.337 & 0.227 & 0.152 & 0.202 & 0.198 & 0.159 & 0.148 & 0.205
\\
Rank-L & 0.353 & 0.330 & 0.412 & 0.354 & 0.258 & 0.276 & 0.308 & 0.245 & 0.249 & 0.309
\\
Rank-LG & \textbf{0.401} & 0.353 & \textbf{0.414} & 0.391 & 0.363 & \textbf{0.298} & \textbf{0.377} & 0.359 & 0.247 & 0.356
\\
Full & 0.398 & \textbf{0.388} & 0.387 & 0.398 & \textbf{0.369} & 0.280 & 0.355 & 0.327 & 0.252 & 0.350
\\
Full w/ FF & 0.383 & 0.350 & 0.408 & \textbf{0.406} & 0.365 & 0.287 & 0.374 & \textbf{0.432} & \textbf{0.252} & \textbf{0.362}
\\ \hline
\end{tabular}
\caption{DARN dataset}
\hfill
\end{subtable}
\begin{subtable}[h]{\textwidth}
\centering
\begin{tabular}{c c c c c c c c c | c}
\hline
Approach & Category & Color & Gender & Material & Neckline & Pattern & Sleeve & Style & \textbf{Avg.} \\ \hline
AMNet & 0.039 & 0.067 & 0.111 & 0.050 & 0.046 & 0.071 & 0.062 & 0.038 & 0.061
\\ \hline
w/o Rank & 0.024 & 0.037 & 0.095 & 0.038 & 0.049 & 0.028 & 0.017 & 0.025 & 0.039
\\
Rank & 0.043 & 0.065 & 0.124 & 0.055 & 0.065 & 0.057 & 0.047 & 0.056 & 0.064
\\
Rank-L & 0.048 & 0.098 & 0.111 & 0.066 & 0.073 & 0.092 & 0.074 & 0.060 & 0.078
\\
Rank-LG & 0.076 & 0.138 & \textbf{0.160} & 0.080 & 0.091 & 0.114 & 0.082 & 0.059 & 0.100
\\
Full & \textbf{0.093} & \textbf{0.144} & 0.137 & \textbf{0.102} & \textbf{0.104} & \textbf{0.131} & 0.091 & \textbf{0.070} & \textbf{0.109} \\
Full w/ FF & 0.088 & 0.134 & 0.154 & 0.094 & 0.096 & 0.129 & \textbf{0.092} & \textbf{0.070} & 0.107
\\ \hline
\hline
\end{tabular}
\caption{iMaterialist dataset}
\hfill
\end{subtable}
\begin{subtable}[h]{\textwidth}
\centering
\begin{tabular}{c c c c c c c c c | c}
\hline
Approach & Hair Color & Beard & Hair Type & Smiling & Eyeglass & Gender & Hat & Age & \textbf{Avg.} \\ \hline
AMNet & 0.604 & 0.439 & 0.556 & 0.496 & 0.275 & 0.293 & 0.155 & 0.370 & 0.399
\\ \hline
w/o Rank & 0.468 & 0.290 & 0.530 & 0.563 & 0.163 & 0.315 & 0.127 & 0.396 & 0.357
\\
Rank & 0.816 & 0.500 & 0.710 & 0.805 & 0.747 & 0.782 & 0.642 & 0.515 & 0.690
\\
Rank-L & 0.818 & 0.556 & 0.752 & 0.806 & 0.779 & 0.812 & 0.748 & 0.537 & 0.726
\\
Rank-LG & 0.836 & 0.598 & \textbf{0.788} & 0.809 & 0.785 & 0.818 & \textbf{0.769} & 0.759 & 0.770
\\
Full & \textbf{0.841} & 0.604 & 0.787 & \textbf{0.815} & 0.785 & \textbf{0.829} & 0.766 & \textbf{0.772} & \textbf{0.775}
\\
Full w/ FF & 0.827 & \textbf{0.613} & 0.783 & 0.805 & \textbf{0.789} & 0.813 & \textbf{0.771} & 0.766 & 0.771
\\ \hline
\end{tabular}
\caption{CelebA dataset}
\end{subtable}
\label{tab:top_20_table}
\end{table*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.495\textwidth}
\includegraphics[width=\textwidth]{images/graph_shopping100k.pdf}
\caption{Shopping100k}
\end{subfigure}
\begin{subfigure}[b]{0.495\textwidth}
\includegraphics[width=\textwidth]{images/graph_DARN.pdf}
\caption{DARN}
\end{subfigure}
\begin{subfigure}[b]{0.495\textwidth}
\includegraphics[width=\textwidth]{images/graph_iMaterialist.pdf}
\caption{iMaterialist}
\end{subfigure}
\begin{subfigure}[b]{0.495\textwidth}
\includegraphics[width=\textwidth]{images/graph_CelebA.pdf}
\caption{CelebA}
\end{subfigure}
\caption{Top-K retrieval accuracy with varying K for attribute manipulation experiments on the (a) Shopping100k, (b) DARN, (c) iMaterialist and (d) CelebA datasets.}
\label{fig:all_plots}
\end{figure*}
\subsection{Datasets}
Of the many datasets available that include fashion images and attributes \cite{ak2018efficient, hadi2015buy,Huang2015a,kiapour2014hipster,Liu,jia2020fashionpedia}, we decided to use the Shopping100k \cite{ak2018efficient}, DARN \cite{Huang2015a} and iMaterialist \cite{guo2019imaterialist} datasets in our experiments. These fashion datasets cover a large variety of attributes and are rich in the number of images. Additionally, we use the CelebA dataset \cite{liu2015faceattributes} to show that the proposed method generalizes to non-fashion images too. In all our experiments, 2,000 images serve as queries and 18,000 images as the retrieval gallery; the rest are used for training the networks. Some details of these datasets are listed as follows:
\begin{itemize}
\item \textbf{Shopping100k} dataset contains 101,021 images with 12 attributes and in total 151 unique attribute values. Unlike the others, this dataset does not include any person in the images; only the clothing product on a simple background is shown.
\item \textbf{DARN} dataset includes 272,711 fashion images with 9 clothing attributes and the total number of attribute values is 179.
\item \textbf{iMaterialist} dataset has around 1 million fashion images. After removing images with noisy labels, we select a subset of 250,000 images to be used in our experiments. We also group some similar category labels together reducing the number of unique category labels from 228 to 24. In total there are 8 attributes with 147 attribute values.
\item \textbf{CelebFaces Attributes (CelebA)} dataset is a large-scale face attributes dataset with 202,599 face images, where each image has 40 binary attributes. We group these attributes as hair color, beard, hair type, smiling, eyeglasses, gender, hat, and age for our experiments, leading to 8 attributes with 21 attribute values.
\end{itemize}
\subsection{Competing Methods}
We investigated several state-of-the-art approaches and found that FashionNet \cite{Liu} and StyleNet \cite{simo2016fashion} solve different fashion retrieval problems and are not suitable for attribute manipulation. We compare the performance of FashionSearchNet-v2 with its earlier version \cite{ak2018learning} and AMNet \cite{zhaobo_atman}.
We additionally use different variations of the proposed method in ablation experiments to investigate the effect of each novel component, such as attribute localization, global representation learning, the updatable memory block and feature fusion.
\subsection{Evaluation Metric}
For the quantitative experiments, we define the retrieval accuracy as follows. Given a query image and an attribute manipulation, the search algorithm finds the ``best K'' image matches, i.e., the ``Top-K'' matches. If at least one retrieved image has the same attributes as the query image after the attribute manipulation, it counts as a hit ($1$); otherwise it is a miss ($0$).
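As a minimal sketch of the hit-or-miss Top-K accuracy just described (all function and variable names are our own, not taken from the paper's implementation):

```python
import numpy as np

def top_k_accuracy(query_target_attrs, gallery_attrs, distances, k=20):
    """Hit-or-miss Top-K retrieval accuracy.

    query_target_attrs: (Q, A) desired attribute labels after manipulation
    gallery_attrs:      (G, A) attribute labels of the gallery images
    distances:          (Q, G) distance from each query to each gallery image
    """
    hits = 0
    for q in range(len(query_target_attrs)):
        top_k = np.argsort(distances[q])[:k]  # K nearest gallery images
        # a hit iff at least one retrieved image matches every target attribute
        if np.any(np.all(gallery_attrs[top_k] == query_target_attrs[q], axis=1)):
            hits += 1
    return hits / len(query_target_attrs)
```

Averaging this quantity over all queries and manipulations gives the accuracies reported in Table \ref{tab:top_20_table}.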
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/Examples_all.pdf}
\caption{The Top-4 retrieval results of fashion search with attribute manipulation for a given query image on all datasets. Green bounding boxes denote images retrieved by FashionSearchNet-v2 that match all desired attributes when the search is conducted after the attribute manipulations.}
\label{fig:all_examples}
\end{figure*}
\\
\textbf{Search Strategy. }As discussed in \cite{Datta2008}, fashion search typically involves very precise terms, resulting in short search sessions that lead to an end result. We likewise adopt simple fashion queries, changing only a single attribute of the query image.
\subsection{Attribute Manipulation Experiments}
Our experiments involve replacing certain feature representations of the query image with the desired ones and comparing the Euclidean distances of the resulting global representations against those of the retrieval gallery. In these experiments, every possible attribute manipulation that is available in the retrieval gallery is applied to the query images.
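The manipulate-then-retrieve step can be sketched as follows; the memory layout (one fixed-size slice per attribute) and all names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def manipulate_and_retrieve(query_repr, memory, attr_idx, target_value,
                            gallery_reprs, attr_dim, top_k=20):
    """Swap one attribute slice of the query's global representation and
    rank the gallery by Euclidean distance.

    query_repr:    (A * attr_dim,) concatenated per-attribute representation
    memory:        dict (attr_idx, value) -> (attr_dim,) prototype vector
    gallery_reprs: (G, A * attr_dim) gallery representations
    """
    manipulated = query_repr.copy()
    s = attr_idx * attr_dim
    # replace the slice of the manipulated attribute with the memory prototype
    manipulated[s:s + attr_dim] = memory[(attr_idx, target_value)]
    d = np.linalg.norm(gallery_reprs - manipulated, axis=1)
    return np.argsort(d)[:top_k]   # indices of the Top-K gallery matches
```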
Top-K retrieval accuracy results are presented in Figure \ref{fig:all_plots} and Table \ref{tab:top_20_table} for the (a) Shopping100k, (b) DARN, (c) iMaterialist and (d) CelebA datasets, respectively. FashionSearchNet-v2 achieves the best performance throughout all experiments. As can be seen in Table \ref{tab:top_20_table} and Figure \ref{fig:all_plots}, there is a large discrepancy between the only other available baseline, \textit{AMNet}, and the variations of our proposed method, \textit{Full} and \textit{Full w/ FF}. The main reason for this performance gap is the lack of attribute-specific learning and localization of attributes in \textit{AMNet}. Next, we perform an ablation study to investigate different aspects of the proposed method.
\textbf{Ranking Loss. }To obtain insights about the proposed architecture, we start our experiments from simpler baselines and build on top of each preceding model. The \textit{w/o Rank} model is based on AlexNet with additional final fully connected layers matching the number of attributes; it does not incorporate any ranking loss and is trained with the classification loss alone. For attribute manipulation, we directly replace the matching representation with the one from the learned attribute memory $M$. Compared to the \textit{w/o Rank} model, the inclusion of the ranking loss (\textit{Rank}) brings a significant performance boost in all experiments, as shown in Table \ref{tab:top_20_table} and Figure \ref{fig:all_plots}.
\textbf{Localization. }Next, we extend the \textit{Rank} model and incorporate the AAMs into the framework: \textit{Rank-L}, which provides the localization mechanism. We observe that \textit{Rank-L} significantly outperforms \textit{Rank} across all datasets in Table \ref{tab:top_20_table}, indicating the importance of localization. Looking at Figure \ref{fig:all_plots}, the contribution of the localization module is much more significant on the DARN dataset than on Shopping100k. This is mostly due to the more complicated images of the DARN dataset, as seen in Figure \ref{fig:all_examples}, which make it harder to estimate good attribute representations.
\textbf{Global Representation Learning. }In order to learn the importance of attribute representations, we introduced a technique in Sect. \ref{sect:learn_global}. By incorporating it into \textit{Rank-L}, we construct \textit{Rank-LG}, which yields a great improvement in Top-K retrieval accuracy for all datasets, as shown in Table \ref{tab:top_20_table} and Figure \ref{fig:all_plots}. This technique is important for determining which features matter most during attribute manipulation. \textit{Rank-LG} corresponds to the earlier version of FashionSearchNet-v2 \cite{ak2018learning}.
\textbf{Updatable Memory Block. }In \textit{Rank-LG}, the memory $M$ is kept fixed during the global representation learning. Since the retrieval of attribute representations from the memory block is a differentiable operation, we can also update the features in $M$. We denote this model as \textit{Full}; it achieves improvements over \textit{Rank-LG}, but the margin is mostly not as significant as for the other proposed techniques. Improvements are observed in Table \ref{tab:top_20_table} for all datasets except DARN.
\textbf{Feature Fusion. }In our final experiments, we also add Feature Fusion (FF) to \textit{Full}, which results in \textit{Full w/ FF}. Feature fusion matters when the attribute localization method makes mistakes: in that case, adding the whole feature map can help recover from the incorrect localization. Feature fusion achieves some improvements, but the overall performance is close to that of the \textit{Full} model. This study also shows that the localized attribute representations produced by the localization mechanism are generally correct, so feature fusion does not yield a significant improvement.
With regard to the fashion datasets, the overall performance on Shopping100k is greater than on the other datasets. This is mostly due to its simple, clean images of fashion items, which enable the models to learn faster. On the iMaterialist dataset, the results are much lower than on DARN, even though both datasets depict people wearing fashion items; this is mostly due to the noisy street pictures in iMaterialist.
Lastly, on the CelebA dataset, the overall performance is much higher, as the face pictures are taken frontally and the number of attributes is smaller than in the fashion datasets. Additionally, the facial attributes are not as complicated as those of the fashion datasets.
\subsection{Qualitative Results}
Figure \ref{fig:all_examples} shows several examples of the fashion search experiments for all datasets used in the experiments. A green bounding box indicates that all attributes of the retrieved image match the query image after the attribute manipulation. In most cases, the proposed algorithm can retrieve several relevant images under various attribute manipulations.
For the Shopping100k dataset, images do not have any wearer in them, which makes it easier than the other two fashion datasets thanks to its cleaner product shots. We provide several examples in Figure \ref{fig:all_examples}, where, in each row, the proposed method retrieves images with the desired attribute manipulation. In unsuccessful cases, although the desired attribute is present in the retrieved images, the other attributes of the query image are affected by the attribute manipulation, which leads to the retrieval of wrong images.
For the DARN dataset, images exhibit more variation in pose and lighting than in the Shopping100k dataset. The proposed method can handle both global (+army green) and local (+long sleeve) attribute manipulations. The final row shows a more challenging query formulation (dress to skirt), which the proposed method can also handle.
The iMaterialist dataset is more difficult, as its overall Top-K retrieval accuracy is much lower than that of the other datasets according to Table \ref{tab:top_20_table} and Figure \ref{fig:all_plots}. This is mostly due to the larger number of attributes and to images being taken from different perspectives. We show several successful examples in Figure \ref{fig:all_examples}.
Lastly, on the CelebA dataset, the proposed method is mostly successful and can retrieve many relevant images given the query image and attribute manipulation. In the final row, the method retrieves only one image that matches the conditions; however, the other images also seem visually correct. This may be due to the labeling of the dataset, where two similar people may be labeled slightly differently.
\subsection{Run Time Performance}
Our FashionSearchNet-v2 is trained on an Intel i7-5820K CPU with 64 GB of RAM and a GeForce GTX TITAN X GPU. FashionSearchNet-v2 can extract features from 10,000 images in around 60 seconds, which is very close to the attribute-based method. Compared to the AlexNet implementation \cite{Krizhevsky}, the proposed FashionSearchNet-v2 has several additional layers to learn attribute representations. However, by using smaller fully connected layers, the run-time performance of FashionSearchNet-v2 remains efficient compared to the attribute-based method. Moreover, thanks to the ROI pooling layer, all images need to be fed into the network only once, which saves a lot of computation time. The extraction of ROIs for all attributes is efficient, taking only about 0.002 seconds per image.
\section{Conclusion}
This paper presents a new approach for conducting fashion searches using query images and attribute manipulations. The proposed FashionSearchNet-v2 is able to estimate efficient feature representations for fashion search and its good localization ability enables it to identify the most relevant regions for attributes of interest. In addition to being able to combine attribute representations into a single global representation for attribute manipulations, FashionSearchNet-v2 incorporates different techniques such as memory update and feature fusion. The proposed FashionSearchNet-v2 is shown to outperform the baseline fashion search methods including AMNet \cite{zhaobo_atman}. An interesting problem for future work would be to extend FashionSearchNet-v2 for more flexible attribute queries and comparative operations.
\section{Introduction}\label{sec:intro}
\paragraph{Context and literature review.} The study of dynamical systems of interacting agents has gained vigorous momentum in the last years, since their flexibility and tractability have allowed them to faithfully model the main emerging patterns of several biological, social and economic phenomena: from the collective motion of animals \cite{parisi08} and pedestrians \cite{bellomo2011SR, helbing2001RMP}, to the behavior of market participants \cite{cordier2005kinetic} and the emergence of price volatility \cite{ahn13}, passing through the study of optimization heuristics \cite{dorigo2005ant,kennedy2011particle}. For recent developments on their application in the modeling of complex systems, we point to the surveys \cite{bak2013nature,bonginichapsparse,carrillo2017review,viza12} and references therein.
Beside the literature on this topic, another complementary literature has grown in parallel at the same pace: the one on the control of such systems. This literature can be roughly divided into two main streams. The one on \textit{centralized controls} focuses on the modeling approach of an external control agent that exerts its force to steer the system towards a desired optimal configuration. Many papers have studied this framework in the discrete \cite{CFPT12,bonginioriginsparse} as well as in the continuous setting \cite{bongini2016pontryagin, MFOC, piccoli2015control, fornasier2019mean, burger2020instantaneous}, where the framework has come to be known as \textit{mean-field optimal control}.
Alongside this centralized approach to control, there is the stream dedicated to decentralized control strategies: these consist in assuming that each agent, besides being subjected to forces induced in a feedback manner by the rest of the population, follows an individual strategy to coordinate with the other agents based on their position. \textit{Mean-field games} (in short MFG) stem from this control setting when the number of agents is very large, and were first introduced in \cite{lasry2007mean} and independently in the engineering community by \cite{huang2003individual}. For a comparison of the MFG setting and the mean-field optimal control one, we refer the interested reader to the book \cite{bensoussan2013mean}.
MFG have provided a powerful framework for the description of large systems of mutually competing agents, such as pedestrian dynamics (for which we point the reader to the interesting papers \cite{burger2013mean,santambrogio2011modest}) and market dynamics (a topic on which a wealth of literature has flourished since the seminal paper \cite{gueant2011mean}). For these modeling scenarios, the study of MFG with Dirichlet boundary conditions is of particular relevance, since such conditions arise naturally in situations where agents can leave the domain, as in evacuation problems, in the case of financial default, or in an exhaustible resources framework \cite{graber2015existence}. We mention that, in the paper \cite{ferreira2019existence}, the authors have proven the existence and uniqueness of solutions to a stationary MFG with Dirichlet boundary conditions.
However, the MFG framework may not be enough to fully capture the behavior of competing agents since in real-life situations, when choosing a strategy, they usually take into account not only the positions of their competitors but also their strategies. For this reason, a more general class of MFG systems was introduced in \cite{gomes2014existence} under the terminology of \textit{extended MFG models} or \textit{mean-field of controls}, to allow for stochastic dynamics depending upon the joint distribution of the controlled state and the control process. The mean-field of controls setting allowed the authors of \cite{cardaliaguet2016mean} to analyze optimal liquidation strategies in a context where traders' choices affect the public price of a good, and has since been employed by several authors \cite{acciaio2019extended,alasseur2020extended, fu2021mean}. Existence and uniqueness results for mean-field of controls on $\mathbb{T}^d$ were discussed in the recent work \cite{kobeissi2019classical}.
Beside the recent advances on the theoretical side, a rich literature on numerical methods for MFG has flourished. The first results go back to the pioneering times of MFG theory, such as \cite{MR2679575}; many other developments have since been published (such as the articles \cite{MR3097034, MR2888257, MR3575615}). We refer to \cite{MR4214777} for an up-to-date and more complete picture of the state of the art on the numerics of MFG equations.
However, for mean-field games systems in a mean-field of controls setting, the literature is much less developed. In particular, \cite{MR4146720} has proposed a finite difference scheme coupled with an iterative method for solving the systems of nonlinear equations that arise in the discrete setting. Further details on this topic can be found in \cite{lauriere2021numerical}.
\paragraph{Aim of the paper and main contributions.} In this paper we study the following family of mean-field of control models with Dirichlet boundary conditions that naturally arise in the context of pedestrian and investor dynamics:
\begin{align}\label{eq:mfg}
\left\{\begin{aligned}
&\partial_t u(t,x) + \sigma \Delta u(t,x) - H(t,x,D u(t,x);\mu_t) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&\partial_t m_t(x) - \sigma \Delta m_t(x) - \textup{div}(D_p H(t,x,D u(t,x);\mu_t)m_t(x)) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&\mu_t = (\textup{Id},\alpha^*(t,\cdot, Du(t,\cdot);\mu_t))_{\sharp}m_t, &\text{ for } t \in[0,T]\\
&m_0(x) = \overline{m}(x), u(T,x) = \psi(x,m_T), & \text{ for } x\in\overline{\Omega} \\
&m_t(x) = 0, u(t,x) = \psi(x,m_t), & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
Here, $m_t$ denotes the distribution of the agents' states, $u$ is the value function of the optimal control problem (with optimal control $\alpha^*$), and $\mu_t$ stands for the joint distribution of the state and the control. As we shall see in a moment, the above system is obtained from a standard MFG setting as soon as we require that agents need to optimize their strategy by taking into account the strategy of the others, which leads to the fixed-point relationship
$$\mu_t = (\textup{Id},\alpha^*(t,\cdot, Du(t,\cdot);\mu_t))_{\sharp}m_t$$
between the joint distribution $\mu_t$ and its marginals $m_t$ and $\alpha^*$.
A primary contribution of this paper is that we give sufficient conditions for the existence of Nash equilibria for such systems (that is, their well-posedness). To prove the result we employ a double fixed-point method, in accordance with the formulation of the system. To this end we need to establish several a priori regularity estimates on the dynamics of our system. The Dirichlet boundary condition prevents us from leveraging the conservation of mass of $m_t$ for such estimates, hence we need to rely on weaker results to work around the issue, and to resort to techniques that, to the best of our knowledge, are novel in the mean-field of controls framework. The importance of Dirichlet boundary conditions in applications can be appreciated in \cite{MR4146720}, which provides novel numerical schemes to simulate solutions of such problems, although without providing existence results. Moreover, apart from the technical leap of dealing with Dirichlet boundary conditions, we remark that our setting is more general than that of \cite{kobeissi2019classical}, since we also prove our existence result on $\mathbb{R}^d$ (rather than on $\mathbb{T}^d$). Furthermore, our framework allows for richer agents' dynamics, such as the Cucker-Smale and the Hegselmann-Krause models in crowd motion. All these differences introduce several specific technical complications that set the two works apart, as far as technical details and modeling applications are concerned. The uniqueness problem for our modeling framework shall be the subject of a dedicated forthcoming work.
A second contribution of the paper is that we provide two explicit models fitting into our framework belonging to very different paradigms: one in the context of evacuation problems, the other formulated in a debt refinancing setting. This shows the flexibility of our framework and its potential to be exported to different modeling scenarios.
A third and final contribution is the implementation, through an iterative procedure based on the particle method, of mean-field games system with Dirichlet boundary conditions in a mean-field of control setting.
This method draws its theoretical justification from the results on the convergence of large population games to mean-field games (see, for example, \cite{lauriere2020convergence}).
To the best of our knowledge, this is one of the first uses of particle methods to numerically solve mean-field games systems with Dirichlet boundary conditions in a mean-field of controls setting, in which an iterative procedure is implemented to reduce the control space.
\paragraph{Derivation of the model.} To help the reader interpret the mean-field of controls system \eqref{eq:mfg}, we now give a formal derivation from the particle dynamics of an individual agent optimizing its strategy.
We denote by $X_t\in\overline{\Omega}$ the state of an \textit{infinitesimal} agent of the system in the \textit{state space} $\overline{\Omega}$, and by $\alpha_t$ its control, which takes values in the \textit{control space} $A \subset \mathbb{R}^d$. The distribution density of the state-control pair $(X_t,\alpha_t)$ shall be denoted by $\mu_t$, which is a probability measure on the product space $\overline{\Omega}\times A$ whose first marginal $\pi_{1\sharp}\mu_t$ is the distribution of the agents' states $m_t$.
We assume the dynamics of an agent to be
\begin{align*}
\left\{\begin{aligned}
d X_t &= b(t,X_t,\alpha^*_t;\mu_t)dt + 2\sqrt{\sigma}dW_t,\\
X_{t_0} &= x_0,
\end{aligned}\right.
\end{align*}
where $W_t$ is a $d$-dimensional Brownian motion. Due to the presence of a boundary, the integral form of the dynamics is given by
\begin{align*}
X_t = x_0 + \int_{t_0}^{t\wedge\tau} b(s,X_s,\alpha^*_s;\mu_s) ds + 2\sqrt{\sigma}\int^{t\wedge\tau}_{t_0} dW_s
\end{align*}
for any $t \in [t_0,T]$, where $\tau$ denotes the \textit{exit time of the agent from $\Omega$}
\begin{align}\label{eq:exitime}
\tau := \inf \{t\in[0,T]\mid X_t \in \partial \Omega \text{ or } t = T \}.
\end{align}
The optimal control $\alpha^*_t$ is chosen to solve
\begin{align}\label{eq:costfunmfg}
\min_{\alpha\in \mathcal{U}} J(t_0,x_0,\alpha;\mu) = \mathbb{E}\left[\int^{T\wedge\tau}_{t_0} \mathcal{L}(t,X_t,\alpha_t;\mu_t)dt + \psi(X_{T\wedge\tau},m_{T\wedge\tau})\right].
\end{align}
The control set $\mathcal{U}$ is the set of progressively measurable functions $\alpha:[0,T]\rightarrow A$ with respect to the filtration generated by the process $\{W_t\}_{t\in[0,T]}$. Notice that, since we are in a \textit{mean-field of controls} setting, the dynamics of each agent is affected not only by the average position of the rest of the agents, given by the distribution $m_t$, but also by their average optimal choice $\alpha^*_t$, whence the dependence of $\mathcal{L}$ on $\mu_t$, instead of just its first marginal as in the usual MFG framework.
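The controlled dynamics with absorption at the exit time $\tau$ can be sketched by an Euler-Maruyama Monte Carlo scheme. The sketch below is purely illustrative: it assumes $\Omega = (0,1)\subset\mathbb{R}$, a fixed open-loop control folded into a generic drift, and an $m$-independent terminal cost, none of which is imposed by the model above:

```python
import numpy as np

def simulate_cost(x0, T, sigma, drift, running_cost, terminal_cost,
                  dt=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama estimate of J(t_0=0, x0, alpha; mu) for a fixed control
    folded into `drift`, with absorption on exiting Omega = (0, 1)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)   # False once a path hits the boundary
    cost = np.zeros(n_paths)
    for step in range(n_steps):
        t = step * dt
        cost[alive] += running_cost(t, x[alive]) * dt
        noise = 2.0 * np.sqrt(sigma) * np.sqrt(dt) * rng.standard_normal(alive.sum())
        x[alive] = x[alive] + drift(t, x[alive]) * dt + noise
        exited = alive & ((x <= 0.0) | (x >= 1.0))
        x[exited] = np.clip(x[exited], 0.0, 1.0)   # freeze at the exit point X_tau
        alive &= ~exited
    cost += terminal_cost(x)                        # psi evaluated at X_{T ^ tau}
    return cost.mean()
```

Note the diffusion coefficient $2\sqrt{\sigma}$, matching the stochastic differential equation above.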
For a given $\mu:[0,T]\rightarrow\mathcal{M}_1(\overline{\Omega}\times A)$, the value function
\begin{align*}
u(t_0,x_0) = \inf_{\alpha\in\mathcal{U}} J(t_0,x_0,\alpha;\mu)
\end{align*}
satisfies the following Hamilton-Jacobi-Bellman equation with Dirichlet boundary conditions
\begin{align}\label{eq:mfghjb}
\left\{\begin{aligned}
&\partial_t u(t,x) + \sigma \Delta u(t,x) - H(t,x,D u(t,x);\mu_t) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&u(T,x) = \psi(x,m_T) & \text{ for } x\in\overline{\Omega}\\
&u(t,x) = \psi(x,m_t) & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
where $H$ stands for the Hamiltonian of the system
\begin{align}\label{def:hamiltonian}
H(t,x,p;\nu) := \sup_{\alpha \in A}\left\{-p\cdot b(t,x,\alpha;\nu) - \mathcal{L}(t,x,\alpha;\nu) \right\}.
\end{align}
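As a concrete illustration, consider the purely illustrative choice $b(t,x,\alpha;\nu) = \alpha$ and $\mathcal{L}(t,x,\alpha;\nu) = \frac{1}{2}|\alpha|^2$ (not one of the models studied below): then
\begin{align*}
H(t,x,p;\nu) = \sup_{\alpha \in A}\left\{-p\cdot\alpha - \tfrac{1}{2}|\alpha|^2\right\} = \tfrac{1}{2}|p|^2,
\end{align*}
provided $A$ is large enough to contain the unique maximizer $\alpha^*(t,x,p;\nu) = -p$. In this case $D_p H(t,x,p;\nu) = p$, so that $b(t,x,\alpha^*;\nu) = -D_p H(t,x,p;\nu)$, in agreement with the drift of the Fokker-Planck equation derived below.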
If we assume that for every $(t,x,p,\nu)$ there exists a unique $\alpha^*(t,x,p,\nu)$ for which
$$H(t,x,p;\nu) = -p\cdot b(t,x,\alpha^*(t,x,p,\nu);\nu) - \mathcal{L}(t,x,\alpha^*(t,x,p,\nu);\nu),$$
then the optimal control of an infinitesimal agent at time $t$ is given by $\alpha^*(t,X_t, Du(t,X_t);\mu_t)$. This means that, in an equilibrium configuration, the measure $\mu_t$ is the pushforward of $m_t$ by the map $x\mapsto (x,\alpha^*(t,x, Du(t,x);\mu_t))$, i.e., it holds the fixed-point relationship
\begin{align}\label{eq:fixpointmu}
\mu_t = (\textup{Id},\alpha^*(t,\cdot, Du(t,\cdot);\mu_t))_{\sharp}m_t.
\end{align}
This, in particular, means that $\mu_t$ is uniquely determined by $Du_t$ and $m_t$, hence for every $t \in [0,T]$ it holds
\begin{align}\label{eq:mufunction}
\mu_t = \Gamma(Du_t,m_t)
\end{align}
for some function $\Gamma$ (which will be investigated thoroughly in Section \ref{sec:gammasection}).
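To make the fixed-point relation \eqref{eq:fixpointmu} concrete, consider the purely illustrative linear mean-field coupling $\alpha^*(t,x,p;\mu) = -(p + \lambda\,\bar{\alpha}(\mu))$, where $\bar{\alpha}(\mu)$ denotes the mean of the control marginal of $\mu$ (this coupling is an assumption for illustration, not one of the models below). For $|\lambda|<1$ the Picard iteration contracts to the unique fixed point $\bar{\alpha} = -\overline{Du}/(1+\lambda)$:

```python
import numpy as np

def gamma_mean_control(du_particles, lam, n_iter=100):
    """Picard iteration for the fixed point mu_t = Gamma(Du_t, m_t) in the toy
    case alpha*(t, x, p; mu) = -(p + lam * abar(mu)), with abar(mu) the mean of
    the control marginal of mu.  du_particles holds Du(t, X_i) sampled at
    particle positions X_i ~ m_t, so means over particles approximate means
    against m_t."""
    abar = 0.0
    for _ in range(n_iter):
        # pushforward step: evaluate alpha* at the particles, then average
        abar = (-(du_particles + lam * abar)).mean(axis=0)
    controls = -(du_particles + lam * abar)   # control marginal at the fixed point
    return controls, abar
```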
At the same time, the evolution of the mass $m_t$ satisfies a Fokker-Planck type equation with drift given by
$$b(t,x,\alpha^*(t,x,Du(t,x);\mu_t)) = -D_p H(t,x,Du(t,x);\mu_t),$$
which yields the system
\begin{align}\label{eq:fokkerplanck}
\left\{\begin{aligned}
&\partial_t m_t(x) - \sigma \Delta m_t(x) - \textup{div}(D_p H(t,x,D u(t,x);\mu_t)m_t(x)) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&m_0(x) = \overline{m}(x), & \text{ for } x\in\overline{\Omega} \\
&m_t(x) = 0, & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
Plugging together the dynamics \eqref{eq:mfghjb} of $u$ and \eqref{eq:fokkerplanck} of $m$, as well as the fixed-point relationship \eqref{eq:fixpointmu} for $\mu_t$, we get the \textit{mean-field games system} \eqref{eq:mfg}.
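The forward-backward structure of \eqref{eq:mfg} can be illustrated by a minimal explicit finite-difference Picard iteration in 1D, with the toy data $H(x,p;m)=\tfrac12 p^2 - c\,m(x)$, an $m$-independent $\psi$, and homogeneous Dirichlet (absorbing) conditions for $m$. All choices below are assumptions made for illustration only; a serious solver would use implicit schemes as in \cite{MR4146720}:

```python
import numpy as np

def solve_mfg_1d(sigma=0.1, T=0.2, N=41, n_picard=5, c=0.5):
    """Picard iteration for a toy 1D MFG system with Dirichlet conditions:
    H(x, p; m) = 0.5 p^2 - c m(x), psi(x) = (x - 0.5)^2 (m-independent)."""
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / sigma             # explicit scheme: dt well below dx^2/(2 sigma)
    n_t = int(round(T / dt))
    psi = (x - 0.5)**2
    m0 = np.sin(np.pi * x)
    m0 /= m0.sum() * dx                  # unit mass, zero on the boundary
    m_hist = np.tile(m0, (n_t + 1, 1))   # initial guess for the flow t -> m_t
    u_hist = [psi.copy()] * (n_t + 1)
    for _ in range(n_picard):
        # backward HJB sweep given the current guess of m
        u = psi.copy()
        u_hist = [u.copy()]
        for n in range(n_t, 0, -1):
            ux = np.gradient(u, dx)
            uxx = np.zeros_like(u)
            uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            H = 0.5 * ux**2 - c * m_hist[n]       # congestion-type coupling
            u = u + dt * (sigma * uxx - H)
            u[0], u[-1] = psi[0], psi[-1]         # Dirichlet data u = psi on the boundary
            u_hist.append(u.copy())
        u_hist = u_hist[::-1]                      # u_hist[n] ~ u(t_n, .)
        # forward Fokker-Planck sweep with drift -D_p H = -u_x
        m = m0.copy()
        m_hist[0] = m
        for n in range(n_t):
            ux = np.gradient(u_hist[n], dx)
            mxx = np.zeros_like(m)
            mxx[1:-1] = (m[2:] - 2.0 * m[1:-1] + m[:-2]) / dx**2
            m = m + dt * (sigma * mxx + np.gradient(ux * m, dx))
            m[0] = m[-1] = 0.0                     # absorbing boundary: m = 0
            m = np.maximum(m, 0.0)
            m_hist[n + 1] = m
    return x, np.array(u_hist), m_hist
```

The absorbing condition $m=0$ on $\partial\Omega$ makes the total mass of $m_t$ decay over time, which is precisely why the a priori estimates of the next sections cannot rely on conservation of mass.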
\paragraph{Structure of the paper.} The paper is organized as follows: in Section \ref{sec:assumption} we shall state the main assumptions under which we will establish the existence of Nash equilibria of system \eqref{eq:mfg}. Section \ref{sec:estimates} contains the main result of the paper, that is the well-posedness of system \eqref{eq:mfg} under the set of assumptions stated in Section \ref{sec:assumption} through a priori estimates and regularity techniques. In Section \ref{sec:examples} we shall provide examples of systems satisfying our set of hypotheses. We conclude the paper with Section \ref{sec:numerics} which reports a numerical implementation of our modeling scenario.
\section{Preliminaries and main assumptions}\label{sec:assumption}
In this section we shall first give some preliminary notations and results that will be instrumental in the following sections. We then move on to list the assumptions under which we shall prove the well-posedness of the mean-field of controls system \eqref{eq:mfg}.
Let $\Omega\subseteq\mathbb{R}^d$ be an open, bounded set with smooth boundary $\partial \Omega$, fix $T > 0$ and define $\overline{Q}_T := \overline{\Omega}\times[0,T]$. We shall denote by $|\cdot|$ any fixed $\mathbb{R}^d$-norm. For any domains $X, Y$, we denote by
\begin{itemize}
\item $\mathcal{C}^n(X;Y)$ the space of all functions from $X$ to $Y$ which are continuous together with all their derivatives up to order $n$, equipped with the norm $\|\cdot\|_{\mathcal{C}^n(X;Y)}$ (or simply $\|\cdot\|_{\mathcal{C}^n})$ whenever clear from the context);
\item $\mathcal{C}^\alpha(X;Y)$ for any $\alpha \in (0,1)$ the space of all H\"older
continuous functions from $X$ to $Y$ with exponent $\alpha$;
\item $\mathcal{C}^{n+\alpha}(X;Y)$ the set of all functions whose
$n$ derivatives are all in $\mathcal{C}^\alpha(X;Y)$;
\item for any subset $X \subseteq Q_T$, $\Hoelder{n +\alpha}{(n+\alpha)/2}{X;Y}$ the set of functions from $X$ to $Y \subseteq \mathbb{R}^n$ which
are continuous together with all the derivatives $D^r_t D^s_x$ for $2r + s \leq n$ and whose derivatives of order $n$ are H\"older continuous with exponent $\alpha$ in space and exponent $\alpha/2$ in time;
\end{itemize}
As a shorthand notation, we shall write $\Hoelder{\gamma}{\gamma/2}{X;Y}$ in place of $\Hoelder{n +\alpha}{(n+\alpha)/2}{X;Y}$ with the convention that $n = \lfloor\gamma\rfloor$ and $\alpha = \gamma - n$. We denote the $\gamma$-H\"older norm of $\Hoelder{\gamma}{\gamma/2}{X;Y}$ by $\|\cdot\|_{\Hoelder{\gamma}{\gamma/2}{X;Y}}$. We recall that the $\gamma$-H\"older norm is the sum of the supremum norms of the derivatives up to order $\lfloor\gamma\rfloor$, plus the sum of the $(\gamma - n)$-H\"older coefficients of the $\lfloor\gamma\rfloor$-order derivatives; for a precise definition we refer the reader to \cite[pages 7-8]{solonnikov}.
Given a compact set $B\subset\mathbb{R}^d$, we shall denote by $\mathcal{M}_1(B)$ the set of all positive measures $m$ on $B$ such that $m(B) \leq 1.$ It is well-known that
the weak$^*$ convergence in $\mathcal{M}_1(B)$ can be metrized by any of the following metrics (parametrized by $a > 0$):
\begin{align*}
d_{a}(\mu,\nu):= \sum^{\infty}_{k = 1} \frac{1}{a^{k}}\frac{\left|\int_{B}f_k(x)d(\mu - \nu)(x)\right|}{1 + \left|\int_{B}f_k(x)d(\mu - \nu)(x)\right|},
\end{align*}
where, by the Stone-Weierstrass theorem, the sequence $(f_k)_{k\in\mathbb{N}^+}$ can be obtained by collecting, for every $n\in\mathbb{N}$, a basis of the vector space of polynomials of degree at most $n$ in $\mathbb{R}[x_1,\ldots,x_d]$.
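For intuition, a truncated numerical approximation of $d_a$ for discrete measures on $[0,1]$ can be sketched as follows (using monomials as a stand-in for the polynomial basis $(f_k)$ and keeping only the first $K$ terms; both are simplifying assumptions, so the result only approximates the metric defined above):

```python
import numpy as np

def d_a_discrete(xs1, w1, xs2, w2, a=2.0, K=20):
    """Truncated approximation of d_a between two discrete measures
    sum_i w1_i delta_{xs1_i} and sum_j w2_j delta_{xs2_j} on [0, 1],
    using the monomials f_k(x) = x^(k-1) as test functions.
    The weights are assumed to satisfy sum(w) <= 1, as in M_1(B)."""
    total = 0.0
    for k in range(1, K + 1):
        # integral of f_k against the signed measure nu_1 - nu_2
        I = np.sum(w1 * xs1**(k - 1)) - np.sum(w2 * xs2**(k - 1))
        total += (1.0 / a**k) * abs(I) / (1.0 + abs(I))
    return total
```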
The following technical result shall be helpful in the following.
\begin{lemma}\label{le:techweakconv}
Let $B\subset \mathbb{R}^d$ be a compact set with diameter $\delta(B)$ and let $h:B\rightarrow\mathbb{R}^d$ be a bounded function. Then for every $a>0$ the inequality
$$\left|\int_{B}h(x) d(\nu_1-\nu_2)(x)\right| \leq a(1+2\delta(B))\|h\|_{\infty} d_a(\nu_1,\nu_2)$$
holds for every $\nu_1,\nu_2\in\mathcal{M}_1(B)$, where $\|\cdot\|_{\infty}$ denotes the supremum norm.
\end{lemma}
\begin{proof}
Recall that the diameter of $B$ is defined as $\delta(B) := \sup \{|x| \mid x \in B\}$ (notice that $\delta(B) <+\infty$ since $B$ is compact). If $(f_k)_{k\in\mathbb{N}^+}$ is a basis of polynomials then each $f_k$ satisfies
$$\left|\int_{B}f_k(x) d(\nu_1-\nu_2)(x)\right| \leq \sup_{x\in B}|f_k(x)| \left(\nu_1(B) + \nu_2(B)\right) \leq 2 \delta(B)^{\deg(f_k)},$$
where we used the fact that $\nu_1,\nu_2\in\mathcal{M}_1(B)$ and we denote by $\deg(f_k)$ the degree of $f_k$. But then, using that $f_1 \equiv 1$ we have
\begin{align*}
\left|\int_{B}h(x) d(\nu_1-\nu_2)(x)\right| &\leq \|h\|_{\infty}\left|\int_{B} d(\nu_1-\nu_2)(x)\right|\\
&= \|h\|_{\infty}a\left(1+\left|\int_{B}f_1(x) d(\nu_1-\nu_2)(x)\right|\right)\frac{1}{a}\frac{\left|\int_{B}f_1(x) d(\nu_1-\nu_2)(x)\right|}{1+\left|\int_{B} f_1(x)d(\nu_1-\nu_2)(x)\right|}\\
&\leq \|h\|_{\infty}a(1+2\delta(B))\sum^\infty_{k=1}\frac{1}{a^k}\frac{\left|\int_{B}f_k(x) d(\nu_1-\nu_2)(x)\right|}{1+\left|\int_{B}f_k(x) d(\nu_1-\nu_2)(x)\right|}\\
&\leq a(1+2\delta(B))\|h\|_{\infty} d_a(\nu_1,\nu_2).
\end{align*}
This concludes the proof.
\end{proof}
We now state the necessary hypotheses that the various functions of our extended MFG system need to fulfil.
\begin{framed}[1.1\textwidth]
\begin{center}
{\bf Assumptions (A)}
\end{center}
In the following, fix $\beta > 0$ and a bounded function $\xi:\overline{\Omega}\times A\rightarrow\mathbb{R}^d$. We shall assume that
\begin{enumerate}[label=(A\arabic*)]
\item\label{item:compactset} The sets $\overline{\Omega}$ and $A$ are compact subsets of $\mathbb{R}^d$, and we denote by $\delta$ the diameter of $\overline{\Omega}\times A$.
\item\label{item:positivem} The initial datum $\overline{m}\in \mathcal{C}^{2+\beta}(\overline{\Omega};\mathbb{R})$ is such that $\overline{m}(x)\geq0$ for every $x\in\overline{\Omega}$ and
$$\int_{\overline{\Omega}}\overline{m}(x)dx=1.$$
\item\label{item:hoelderpsi} The map $\psi:[0,T]\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}} \rightarrow \mathbb{R}$ satisfies the following conditions: if $m \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ then
\begin{enumerate}[label=$(\roman*)$]
\item\label{item:hoelderpsi1} $\psi(\cdot,m) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$,
\item\label{item:hoelderpsi2} $\|\psi(\cdot,m)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq \mathcal{K}_{\psi}\left(\|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right)$ for some continuous function $\mathcal{K}_{\psi}:\mathbb{R}\rightarrow\mathbb{R}$.
\end{enumerate}
\item\label{item:inverseb} The map $b:[0,T]\times\overline{\Omega}\times A\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathbb{R}^d$
satisfies the following conditions:
\begin{enumerate}[label=$(\roman*)$]
\item\label{item:inverseb1} the map $\alpha \mapsto b(t,x,\alpha;\nu)$ is bijective with smooth inverse,
\item\label{item:inverseb2} if $\alpha \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$, then it holds
$$b(t,x,\alpha(t,x);\nu_t) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}.$$
\end{enumerate}
\item\label{item:monotoneL} The map $\mathcal{L}:[0,T]\times\overline{\Omega}\times A\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathbb{R}$ satisfies
the following conditions:
\begin{enumerate}[label=$(\roman*)$]
\item\label{item:monotoneL1} for every $\nu^1,\nu^2 \in \mathcal{M}_1(\overline{\Omega}\times A)$ and for every $t \in [0,T]$ the monotonicity condition holds:
\begin{align*}
\int_{\overline{\Omega}\times A} (\mathcal{L}(t,x,\alpha;\nu^1) - \mathcal{L}(t,x,\alpha;\nu^2))d(\nu^1 - \nu^2)(x,\alpha) \geq 0,
\end{align*}
\item\label{item:monotoneL2} if $\alpha \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$, then it holds
$$ \mathcal{L}(t,x,\alpha(t,x);\nu_t) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}.$$
\end{enumerate}
\item\label{item:contalpha} For each $(t,x,p,\nu)\in[0,T]\times\overline{\Omega}\times\mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)$ there exists a unique maximum point $\alpha^*(t,x,p;\nu) \in A$ of the map
\begin{align}\label{eq:alphamaximum}
\alpha\mapsto - p\cdot b(t,x,\alpha;\nu) - \mathcal{L}(t,x,\alpha;\nu),
\end{align}
and the corresponding function $\alpha^*:[0,T]\times\overline{\Omega}\times\mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow A$ satisfies
\begin{enumerate}[label=$(\roman*)$]
\item\label{item:contalpha1} if $p \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$, then it holds
$$\alpha^*(t,x,p(t,x);\nu_t) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;A},$$
\item \label{item:contalpha4} for every $p^1, p^2 \in \mathbb{R}^d$ and $\nu^1, \nu^2 \in \mathcal{M}_1(\overline{\Omega}\times A)$ it holds
\begin{align*}
\int_{\overline{\Omega}}|\alpha^*(t,x,p^1;\nu^1)-\alpha^*(t,x,p^2;\nu^2)| dx \leq
L\left(|p^1 - p^2| + \left|\int_{\overline{\Omega}\times A}\xi(x,\alpha) d(\nu^1-\nu^2)(x,\alpha)\right|\right),
\end{align*}
where the constant $L$ satisfies
\begin{align}\label{eq:alphalipbound}
L < \frac{1}{(3+3\delta)(1+2\delta)\|\xi\|_{\infty}}.
\end{align}
\end{enumerate}
\item\label{item:hoelderH} The Hamiltonian $H:[0,T]\times \overline{\Omega} \times \mathbb{R}^d \times \mathcal{M}_1(\overline{\Omega}\times A) \rightarrow \mathbb{R}$ defined in \eqref{def:hamiltonian} satisfies the following statement: there exist two continuous functions $\mathcal{H}_1, \mathcal{H}_2:\mathbb{R}\rightarrow\mathbb{R}$ such that, for every
$(p,m)\in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ it holds
\begin{align*}
\|H(t,x,p(t,x);\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}
\leq \mathcal{H}_1\left( \|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \right),
\end{align*}
as well as
\begin{align*}
\|D_pH(t,x,p(t,x);\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}}
\leq \mathcal{H}_2\left( \|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \right),
\end{align*}
where $\Gamma$ is the function defined in \eqref{eq:mufunction}.
\end{enumerate}
\end{framed}
Solutions of system \eqref{eq:mfg} will be interpreted in the classical sense.
\begin{definition} \label{def:solution}
Fix $\beta > 0$. We call a $\beta$-\textit{classical solution of system} \eqref{eq:mfg} any pair $(u,m) \in \Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}} $ which satisfies system \eqref{eq:mfg} in the classical sense.
\end{definition}
What follows is the main theoretical result of the paper, the well-posedness of system \eqref{eq:mfg}.
\begin{theorem}[Existence of Nash equilibria for system \eqref{eq:mfg}] \label{th:mainresult}
Let Assumptions (A) hold for some constant $\beta > 0$ and some bounded function $\xi:\overline{\Omega}\times A\rightarrow\mathbb{R}^d$. Then there exists a $\beta$-classical solution of system \eqref{eq:mfg}.
\end{theorem}
\section{Existence of Nash equilibria}\label{sec:estimates}
In this section we shall prove that, under the Assumptions (A) stated in the previous section, there exists a $\beta$-classical solution of system \eqref{eq:mfg}.
Throughout this section we fix $\beta > 0$ and a bounded function $\xi:\overline{\Omega}\times A\rightarrow\mathbb{R}^d$, and assume that Assumptions (A) hold. We start with some preliminary results on the Fokker-Planck equation and the fixed-point relationship \eqref{eq:fixpointmu}, and then derive sharper a priori regularity estimates for $u$, $m$ and $\mu$, which will be instrumental in obtaining the well-posedness of system \eqref{eq:mfg}.
\subsection{Existence and stability of the map $\Gamma$}\label{sec:gammasection}
To establish the existence of the fixed point \eqref{eq:fixpointmu} we will need that $m_t \in \mathcal{M}_1(\overline{\Omega})$, so that we may rely on the weak$^*$ compactness of this space. However, since we consider smooth solutions of the Fokker-Planck equation \eqref{eq:fokkerplanck}, there is no guarantee that this holds true. We shall prove this fact by using a similar technique to the one employed in \cite{graber2015existence}.
\begin{proposition}\label{prop:measfp}
Assume that $u:\overline{Q}_T\rightarrow\mathbb{R}$ and $\mu:[0,T]\rightarrow\mathcal{M}_1(\overline{\Omega}\times A)$ are such that
$$D_p H(t,x,D u(t,x);\mu_t) \in L^{\infty}(\overline{Q}_T)$$
and let $m$ be a smooth solution of \eqref{eq:fokkerplanck} with $u$ and $\mu$. Then for every $t \in [0,T]$ and $x \in \overline{\Omega}$ it holds
\begin{align*}
m_t(x) \geq 0 \quad \text{ and } \quad \|m_t\|_{L^1(\Omega)} \leq \|\overline{m}\|_{L^1(\Omega)}.
\end{align*}
Therefore, under assumption \ref{item:positivem}, we have $m_t \in \mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$.
\end{proposition}
\begin{proof}
Let $\delta > 0$ and $\phi_{\delta}:\mathbb{R}\rightarrow\mathbb{R}$ be defined as
\begin{align*}
\phi_{\delta}(s) := (s^-)^{2+\delta},
\end{align*}
where $s^-:= (|s|-s)/2$ denotes the negative part of $s$. Notice that $\phi_\delta \in \mathcal{C}^2(\mathbb{R})$ (since $\delta > 0$) and
\begin{align*}
\phi_{\delta}(s) =
\begin{cases}
0 & \text{ if } s \geq 0,\\
(-s)^{2+\delta} & \text{ if } s < 0.
\end{cases}
\end{align*}
Furthermore
\begin{align*}
\phi'_{\delta}(s) =
\begin{cases}
0 & \text{ if } s \geq 0,\\
-(2+\delta)(-s)^{1+\delta} & \text{ if } s < 0,
\end{cases}\quad\text{ and }\quad
\phi''_{\delta}(s) =
\begin{cases}
0 & \text{ if } s \geq 0,\\
(2+\delta)(1+\delta)(-s)^{\delta} & \text{ if } s < 0,
\end{cases}
\end{align*}
which implies that for $\delta \rightarrow 0^+$ we have
\begin{align*}
\phi_\delta(s)\rightarrow (s^-)^2, \quad \phi'_{\delta}(s) \rightarrow -2s^- \quad \text{ and } \quad \phi''_{\delta}(s) \rightarrow 2\cdot\mathbbm{1}_{\left\{s < 0\right\}}(s)
\end{align*}
pointwise. Finally, notice that $\phi'_{\delta}(0) = 0$.
We now multiply \eqref{eq:fokkerplanck} by the function $\phi'_{\delta}(m_s(x))$ and integrate in time and space to get
\begin{align}\begin{split}\label{eq:fokkerplanckweak}
\int^t_0\int_{\Omega}\partial_t m_s(x)&\phi'_{\delta}(m_s(x))dxds \\
&= \int^t_0\int_{\Omega}\big(\sigma\Delta m_s(x)+\textup{div}(D_p H(s,x,D u(s,x);\mu_s)m_s(x)) \big)\phi'_{\delta}(m_s(x)) dxds.
\end{split}\end{align}
We use integration by parts on both sides of the equation. On the left, we have the identity
\begin{align*}
\int^t_0\int_{\Omega}\partial_t m_s(x)\phi'_{\delta}(m_s(x))dxds = \int_{\Omega}\phi_{\delta}(m_t(x))dx - \int_{\Omega}\phi_{\delta}(\overline{m}(x))dx;
\end{align*}
so that now it holds
\begin{align*}
\int_{\Omega}&\phi_{\delta}(m_t(x))dx - \int_{\Omega}\phi_{\delta}(\overline{m}(x))dx \\
&= - \int^t_0\int_{\Omega}\big(\sigma|D m_s(x)|^2 +D_p H(s,x,D u(s,x);\mu_s)m_s(x)\cdot D m_s(x) \big) \phi''_{\delta}(m_s(x)) dxds.
\end{align*}
We can now invoke the dominated convergence theorem to pass the limit $\delta\rightarrow 0^+$ inside the integrals in order to get
\begin{align*}
\int_{\Omega}&|m^-_t(x)|^2dx - \int_{\Omega}|\overline{m}^-(x)|^2dx \\
&= - 2\int^t_0\int_{\Omega}\big(\sigma |D m^-_s(x)|^2 +D_p H(s,x,D u(s,x);\mu_s)m^-_s(x)\cdot D m^-_s(x) \big) dxds.
\end{align*}
By invoking assumption \ref{item:positivem} (yielding $\overline{m}^- = 0$) and Young's inequality we get
\begin{align*}
\int_{\Omega}|m^-_t(x)|^2dx &\leq - 2\sigma\int^t_0\int_{\Omega}|D m^-_s(x)|^2dxds + \sigma\int^t_0\int_{\Omega}|D m^-_s(x)|^2dxds \\
&\quad\quad+ \frac{1}{\sigma}\int^t_0\int_{\Omega}|D_p H(s,x,D u(s,x);\mu_s)m^-_s(x)|^2 dxds.
\end{align*}
As a shorthand notation, set $K(t,x) := D_p H(t,x,D u(t,x);\mu_t)$. By the assumptions on $u$ and $\mu$ we have $\|K\|_{L^{\infty}(\overline{Q}_T)}<+\infty$, therefore the Poincar\'{e} inequality yields the estimate
\begin{align*}
\int_{\Omega}|m^-_t(x)|^2dx &\leq - \sigma\int^t_0\int_{\Omega}|D m^-_s(x)|^2dxds +\frac{1}{\sigma}\| K\|^2_{L^{\infty}(\overline{Q}_T)}\int^t_0\int_{\Omega}|m^-_s(x)|^2 dxds\\
&\leq \left(-\sigma C_{\Omega,2}+\frac{1}{\sigma}\|K\|^2_{L^{\infty}(\overline{Q}_T)}\right)\int^t_0\int_{\Omega}|m^-_s(x)|^2 dxds,
\end{align*}
where $C_{\Omega,2}$ denotes the Poincar\'{e} constant. Gronwall's lemma then gives $m^-_t(x) = 0$ for every $t \in [0,T]$ and $x\in\overline{\Omega}$, as desired.
To get the inequality $\|m_t\|_{L^1(\Omega)} \leq \|\overline{m}\|_{L^1(\Omega)}$ it suffices to repeat the argument with the function $\phi_\delta(s) := s^{1+\delta}$ (which is legitimate since we have just shown that $m \geq 0$). Indeed, using this new version of $\phi'_{\delta}(m(t,x))$ in \eqref{eq:fokkerplanckweak} we obtain
\begin{align*}
\int_{\Omega}m_t(x)^{1+\delta}dx - \int_{\Omega}\overline{m}(x)^{1+\delta}dx
&= - \int^t_0\int_{\Omega}\frac{(1+\delta)\delta}{m_s(x)^{1-\delta}}\sigma|D m_s(x)|^2dxds \\
&\quad \quad -\int^t_0\int_{\Omega}(1+\delta)\delta m_s(x)^{\delta}D_p H(s,x,D u(s,x);\mu_s)\cdot D m_s(x) dxds.
\end{align*}
Therefore, passing to the limit $\delta\rightarrow 0^+$ we get
\begin{align*}
\int_{\Omega}m_t(x)dx - \int_{\Omega}\overline{m}(x)dx & = \limsup_{\delta \rightarrow 0^+}\left[\int_{\Omega}m_t(x)^{1+\delta}dx - \int_{\Omega}\overline{m}(x)^{1+\delta}dx\right]\\
&= -\liminf_{\delta\rightarrow 0^+} \int^t_0\int_{\Omega}\frac{(1+\delta)\delta}{m_s(x)^{1-\delta}}\sigma|D m_s(x)|^2dxds \\
&\quad \quad -\liminf_{\delta\rightarrow 0^+}\int^t_0\int_{\Omega}(1+\delta)\delta m_s(x)^{\delta}D_p H(s,x,D u(s,x);\mu_s)\cdot D m_s(x) dxds\\
&\leq 0,
\end{align*}
having invoked Fatou's lemma for the first term and the dominated convergence theorem for the second term, which vanishes in the limit since $(1+\delta)\delta \rightarrow 0$.
\end{proof}
As anticipated, the well-posedness of the fixed-point map \eqref{eq:fixpointmu} shall follow from the compactness properties of the measure space $\mathcal{M}_1$. The strategy of this proof follows \cite{cardaliaguet2016mean}.
\begin{lemma}\label{lem:fixpointmeas}
Assume that $m \in \mathcal{M}_1(\overline{\Omega})$, $p \in L^{\infty}(\overline{\Omega};\mathbb{R}^d)$ and let $\alpha^*$ be the unique maximizer in \eqref{eq:alphamaximum}. Then, for every $t \in [0,T]$, the map $\Phi[p,m]:\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathcal{M}_1(\overline{\Omega}\times A)$ defined as
\begin{align*}
\Phi[p,m](\mu) := (\textup{Id},\alpha^*(t,\cdot,p(\cdot);\mu))_{\sharp}m
\end{align*}
admits a unique fixed point $\mu^* = \Gamma(p,m)$.
\end{lemma}
\begin{proof}
Let us first show that the map $\Phi$ is well-defined, that is $\Phi[p,m](\mu) \in \mathcal{M}_1(\overline{\Omega}\times A)$. We have
\begin{align*}
\int_{\overline{\Omega}\times A} d\Phi[p,m](\mu)(x,\alpha) = \int_{\mathbb{R}^d} \mathbbm{1}_{\overline{\Omega}\times A}(x,\alpha^*(t,x,p(x);\mu))m(x)dx = \int_{\overline{\Omega}}m(x)dx \leq 1.
\end{align*}
Since $\mathcal{M}_1(\overline{\Omega}\times A)$ is trivially convex and compact with respect to the weak$^*$ convergence of measures (by the Banach-Alaoglu theorem), to show that $\Phi[p,m]$ has a fixed point via the Schauder fixed point theorem we just need to prove that $\Phi[p,m]$ is continuous. To this end, let $(\mu_n)_{n\in\mathbb{N}}$ be a sequence in $\mathcal{M}_1(\overline{\Omega}\times A)$ weak$^*$ converging to $\mu \in \mathcal{M}_1(\overline{\Omega}\times A)$. Then, by using Assumption \ref{item:contalpha}-\ref{item:contalpha4}, for every $\eta\in\mathcal{C}_b(\overline{\Omega}\times A;\mathbb{R})$ we get
\begin{align*}
\lim_{n\rightarrow\infty}\int_{\overline{\Omega}\times A} &\eta(x,\alpha)d\Phi[p,m](\mu_n)(x,\alpha)\\
& = \lim_{n\rightarrow\infty}\int_{\overline{\Omega}} \eta(x,\alpha^*(t,x,p(x);\mu_n))m(x)dx\\
& = \int_{\overline{\Omega}} \eta(x,\alpha^*(t,x,p(x);\mu))m(x)dx\\
& = \int_{\overline{\Omega}\times A} \eta(x,\alpha)d\Phi[p,m](\mu)(x,\alpha).
\end{align*}
This shows existence. To prove uniqueness, assume that $\mu_1$ and $\mu_2$ are two fixed points of $\Phi[p,m]$. Then by assumption \ref{item:monotoneL}-\ref{item:monotoneL1} we get
\begin{align*}
0 & \leq\int_{\overline{\Omega}\times A} (\mathcal{L}(t,x,\alpha;\mu_1) - \mathcal{L}(t,x,\alpha;\mu_2))d(\mu_1 - \mu_2)(x,\alpha)\\
& = \int_{\overline{\Omega}} (\mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_1);\mu_1) - \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_1);\mu_2))m(x)dx \\
& \quad \quad - \int_{\overline{\Omega}} (\mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_2);\mu_1) - \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_2);\mu_2))m(x)dx.
\end{align*}
If we add and subtract the terms $p(x)\cdot b(t,x,\alpha^*(t,x,p(x);\mu_i))$ for $i = 1,2$ and rearrange the expression, we get
\begin{align*}
0 & \leq \int_{\overline{\Omega}} \Big(p(x)\cdot b(t,x,\alpha^*(t,x,p(x);\mu_1)) + \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_1);\mu_1) \\
& \quad \quad \quad \quad- p(x)\cdot b(t,x,\alpha^*(t,x,p(x);\mu_1))- \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_1);\mu_2)\Big)m(x)dx \\
& \quad \quad + \int_{\overline{\Omega}} \Big(p(x)\cdot b(t,x,\alpha^*(t,x,p(x);\mu_2)) + \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_2);\mu_2) \\
&\quad \quad \quad \quad- p(x)\cdot b(t,x,\alpha^*(t,x,p(x);\mu_2)) - \mathcal{L}(t,x,\alpha^*(t,x,p(x);\mu_2);\mu_1)\Big)m(x)dx.
\end{align*}
However, since $\alpha^*(t,x,p(x);\mu_i)$ is the unique maximizer of $- p(x)\cdot b(t,x,\alpha;\mu_i) - \mathcal{L}(t,x,\alpha;\mu_i)$ by Assumption \ref{item:contalpha}, all the integrands are nonpositive; since their sum has nonnegative integral, they must vanish $m$-almost everywhere, and the uniqueness of the maximizer then gives $\alpha^*(t,x,p(x);\mu_1) = \alpha^*(t,x,p(x);\mu_2)$ $m$-almost everywhere. This in turn implies $\Phi[p,m](\mu_1) = \Phi[p,m](\mu_2)$, and hence $\mu_1 = \mu_2$.
\end{proof}
Next we show the stability of the fixed-point map $\Gamma$.
\begin{lemma}\label{lem:stability}
Let $(m_n)_{n\in\mathbb{N}}$ be a sequence weakly$^*$ converging in $\mathcal{M}_1(\overline{\Omega})$ to $m$, and let $(p_n)_{n\in\mathbb{N}}$ be a sequence in $L^{\infty}(\overline{\Omega};\mathbb{R}^d)$ converging almost everywhere to $p$. Then $\Gamma(p_n,m_n)$ converges weakly$^*$ in $\mathcal{M}_1(\overline{\Omega}\times A)$ to $\Gamma(p,m)$.
\end{lemma}
\begin{proof}
Set $\mu_n := \Gamma(p_n,m_n)$. By definition, $\mu_n = \Phi[p_n,m_n](\mu_n)$ and, since $\mathcal{M}_1(\overline{\Omega}\times A)$ is weakly$^*$ compact, up to subsequences $(\mu_n)_{n\in\mathbb{N}}$ weak$^*$ converges to some $\mu$. We need to show that
$\mu = \Phi[p,m](\mu)$.
Notice that, by Assumption \ref{item:contalpha}-\ref{item:contalpha4}, for every $\eta \in \mathcal{C}_b(\overline{\Omega}\times A;\mathbb{R})$ we have
\begin{align*}
\lim_{n\rightarrow\infty}\eta(x,\alpha^*(t,x,p_n(x);\mu_n)) = \eta(x,\alpha^*(t,x,p(x);\mu)).
\end{align*}
Therefore, if we invoke the dominated convergence theorem we get
\begin{align*}
\lim_{n\rightarrow\infty}\int_{\overline{\Omega}\times A}\eta(x,\alpha)d\Phi[p_n,m_n](\Gamma(p_n,m_n))(x,\alpha)&=\lim_{n\rightarrow\infty}\int_{\overline{\Omega}}\eta(x,\alpha^*(t,x,p_n(x);\Gamma(p_n,m_n)))m_n(x)dx \\
&=\int_{\overline{\Omega}}\eta(x,\alpha^*(t,x,p(x);\mu))m(x)dx\\
& = \int_{\overline{\Omega}\times A}\eta(x,\alpha)d\Phi[p,m](\mu)(x,\alpha),
\end{align*}
which implies that $(\Phi[p_n,m_n](\mu_n))_{n\in\mathbb{N}}$ weak$^*$ converges to $\Phi[p,m](\mu)$. But the fixed-point relationship $\mu_n = \Phi[p_n,m_n](\mu_n)$ and the uniqueness of the weak$^*$ limit then imply $\mu = \Phi[p,m](\mu)$, as desired.
\end{proof}
\subsection{A priori regularity estimates}
We shall now extend Lemma \ref{lem:fixpointmeas} to H\"older continuous measure-valued curves.
\begin{lemma}\label{lem:hoelderfixpointmeas}
Let $P_0,M_0>0$ and let $p \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $m \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ be such that
$$\|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq M_0 \quad \text{ and } \quad \|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} \leq P_0,$$
with $m_t\in\mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$.
Then there exist a constant $C_0 := C_0(M_0,P_0) > 0$ and a metric on $\mathcal{M}_1(\overline{\Omega}\times A)$ that metrizes the weak$^*$ convergence such that the map $\Psi$ from the set
\begin{align}\label{eq:compactsetmu}
\left\{\mu \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A)) \mid \|\mu\|_{\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))}\leq C_0\right\}
\end{align}
into itself defined as
\begin{align*}
\Psi[p,m](\mu) := (\Phi[p_t,m_t](\mu_t))_{t \in [0,T]}
\end{align*}
has the measure-valued curve $(\Gamma(p_t,m_t))_{t\in[0,T]}$ as unique fixed point. In particular, $(\Gamma(p_t,m_t))_{t\in[0,T]}$ is H\"older continuous in time with exponent $\beta/2$ with respect to such metric.
\end{lemma}
\begin{proof}
As already argued in Section \ref{sec:assumption}, the metric
\begin{align*}
\lambda_{\omega}(\mu,\nu) := \sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}\frac{\left|\int_{\overline{\Omega}\times A}f_k(x,\alpha)d(\mu - \nu)(x,\alpha)\right|}{1 + \left|\int_{\overline{\Omega}\times A}f_k(x,\alpha)d(\mu - \nu)(x,\alpha)\right|}
\end{align*}
metrizes the weak$^*$ convergence over $\mathcal{M}_1(\overline{\Omega}\times A)$, where $(f_k)_{k\in\mathbb{N}^+}$ is a family of polynomials such that the degree of $f_k$ is precisely
$$\deg(k) = \min \left\{n\in\mathbb{N}\;\middle|\;\sum^n_{i = 0}\binom{i+d-1}{d-1} \geq k \right\}.$$
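For instance, in dimension $d = 1$ there is exactly one monomial of each degree, so that $\deg(k) = k-1$ for every $k\in\mathbb{N}^+$: the first basis polynomial $f_1 \equiv 1$ is constant (consistently with the proof of Lemma \ref{le:techweakconv}), $f_2$ has degree one, and so on.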
Notice the trivial estimate $\deg(k)\leq k$ for any $k\in\mathbb{N}^+$. We shall prove that, for a suitable choice of $C_0$ and $\omega$, the map $\Psi$ maps the set \eqref{eq:compactsetmu} (where each $\mu$ is H\"older continuous with respect to the metric $\lambda_\omega$) into itself, by using the fact that
\begin{align*}
\lambda_{\omega}(\mu,\nu) \leq \sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}\left|\int_{\overline{\Omega}\times A}f_k(x,\alpha)d(\mu - \nu)(x,\alpha)\right|,
\end{align*}
and obtaining a bound from above for the right-hand side term.
To choose $C_0$ and $\omega$, we start by first choosing $a > 0$ such that
$$3(\delta+1)e < a < \frac{e}{L(1+2\delta)\|\xi\|_{\infty}},$$
(such an $a$ exists by the bound \eqref{eq:alphalipbound}), which implies that $a$ also satisfies
$$\log(a) > \log(3(\delta+1)) + \frac{La(1+2\delta)\|\xi\|_{\infty}}{e}.$$
Indeed, $a > 3(\delta+1)e$ gives $\log(a) > \log(3(\delta+1)) + 1$, while $a < e/(L(1+2\delta)\|\xi\|_{\infty})$ gives $La(1+2\delta)\|\xi\|_{\infty}/e < 1$.
Now denote by $L_1$ the H\"older constant in \ref{item:contalpha}-\ref{item:contalpha1}.
Since for every $M_0, P_0 > 0$ it holds
$$\lim_{C\rightarrow+\infty} \frac{L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C}{e C} = \frac{La(1+2\delta)\|\xi\|_{\infty}}{e},$$
choose $C_0 := C_0(M_0,P_0)$ large enough such that
$$\log(a) > \log(3(\delta+1)) + \frac{L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0}{e C_0}$$
still holds true. It is easy to check that
$$\frac{L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0}{e C_0} \geq \sup_{k\in\mathbb{N}^+}\frac{\log\left(\frac{k}{C_0}(L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0)\right)}{k}$$
(for every $c > 0$ the function $t\mapsto \log(tc)/t$ attains its maximum value $c/e$ at $t = e/c$), so that it actually holds
$$\log(a) > \log(3(\delta+1)) + \sup_{k\in\mathbb{N}^+}\frac{\log\left(\frac{k}{C_0}(L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0)\right)}{k}.$$
We can therefore fix $\omega := \omega(M_0,P_0) > 0$ satisfying
\begin{align}\label{eq:omegachoice}
\frac{\log(a)}{\log(\delta+1)} > \omega > \sup_{k\in\mathbb{N}^+}\frac{k\log(3(\delta+1)) +\log\left(\frac{k}{C_0}(L_1 + LP_0 + M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0)\right)}{k\log(\delta+1)}.
\end{align}
We now bound the distance $\lambda_\omega$ from above. Set $\overline{\mu}_t := \Phi[p_t,m_t](\mu_t)$, whose existence and uniqueness for every $t$ are guaranteed by the hypotheses on $p$ and $m$ and by Lemma \ref{lem:fixpointmeas}. We have
\begin{align}\begin{split}\label{eq:metricest}
\lambda_{\omega}(\overline{\mu}_t,\overline{\mu}_s) &\leq \sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}\left|\int_{\overline{\Omega}}(f_k(x,\alpha^*(t,x,p_t(x);\mu_t))m_t(x) - f_k(x,\alpha^*(s,x,p_s(x);\mu_s))m_s(x))dx\right|\\
&\leq \sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}\int_{\overline{\Omega}}\left|f_k(x,\alpha^*(t,x,p_t(x);\mu_t)) - f_k(x,\alpha^*(s,x,p_s(x);\mu_s))\right||m_t(x)|dx \\
&\quad \quad + \sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}\int_{\overline{\Omega}}\left|f_k(x,\alpha^*(s,x,p_s(x);\mu_s))\right|\left|m_t(x) - m_s(x)\right|dx.
\end{split}
\end{align}
We now focus on the first term of the right-hand side of \eqref{eq:metricest}. Notice that, by Assumption \ref{item:contalpha}-\ref{item:contalpha4} and Lemma \ref{le:techweakconv} with $h = \xi$, $B = \overline{\Omega}\times A$ and our choice of $a$, we have the estimate
\begin{align*}
\int_{\overline{\Omega}}|\alpha^*(t,x,p_t(x);\mu_t) - \alpha^*(t,x,p_t(x);\mu_s)|dx
& \leq L\Big|\int_{\overline{\Omega}\times A} \xi(x,\alpha)d(\mu_t -\mu_s)(x,\alpha)\Big|\\
& \leq La(1+2\delta)\|\xi\|_{\infty}d_a(\mu_t,\mu_s)\\
& \leq La(1+2\delta)\|\xi\|_{\infty}\lambda_\omega(\mu_t,\mu_s),
\end{align*}
where $d_a \leq \lambda_\omega$ comes from $a > (\delta+1)^\omega$ (as implied by \eqref{eq:omegachoice}).
Moreover, the family $(f_k)_{k\in\mathbb{N}}$ is a basis of Lipschitz continuous functions with
$$\textup{Lip}(f_k;\overline{\Omega}\times A) \leq \deg(k)\delta^{\deg(k)-1} < k{(\delta+1)}^{k}.$$
Taking this information into account we may write
\begin{align*}
\sum^{\infty}_{k = 1} \frac{1}{{(\delta+1)}^{\omega k}}&\int_{\overline{\Omega}}\left|f_k(x,\alpha^*(t,x,p_t(x);\mu_t)) - f_k(x,\alpha^*(s,x,p_s(x);\mu_s))\right||m_t(x)|dx \\
& \leq \sum^{\infty}_{k = 1} k {(\delta+1)}^{k(1-\omega)}\int_{\overline{\Omega}}\Big|\alpha^*(t,x,p_t(x);\mu_t) - \alpha^*(s,x,p_s(x);\mu_s)\Big||m_t(x)|dx\\
&\leq \sum^{\infty}_{k = 1} k {(\delta+1)}^{k(1-\omega)}\int_{\overline{\Omega}}\Big(L_1|t-s|^{\beta/2} + L|p(t,x) - p(s,x)| +\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+ La(1+2\delta)\|\xi\|_{\infty}\lambda_\omega(\mu_t,\mu_s)\Big)|m_t(x)|dx\\
&\leq \sum^{\infty}_{k = 1} k {(\delta+1)}^{k(1-\omega)}\left(L_1 + L\|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}}+La(1+2\delta)\|\xi\|_{\infty}\|\mu\|_\beta\right)|t-s|^{\beta/2},
\end{align*}
where, as a shorthand notation, we denoted by $\|\mu\|_\beta := \|\mu\|_{\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))}$. Notice that we used the H\"older continuity in time of the functions $p$ and $\mu$.
Concerning the second term on the right-hand side of \eqref{eq:metricest}, since it holds that
$$\sup_{(x,\alpha) \in \overline{\Omega}\times A} |f_k(x,\alpha)| \leq {\delta}^{\deg(k)}< k{(\delta+1)}^{k},$$
then we also have
\begin{align*}
\sum^{\infty}_{k = 1} & \frac{1}{{(\delta+1)}^{\omega k}}\int_{\overline{\Omega}}\left|f_k(x,\alpha^*(s,x,p_s(x);\mu_s))\right|\left|m_t(x) - m_s(x)\right|dx\\
&\leq\sum^{\infty}_{k = 1} k{(\delta+1)}^{k(1-\omega)}\|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}|t-s|^{\beta/2}.
\end{align*}
In conclusion, using $\|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq M_0$, $\|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} \leq P_0$ and $\|\mu\|_\beta \leq C_0$, we arrive at
\begin{align*}
\lambda_{\omega}(\overline{\mu}_t,\overline{\mu}_s) &\leq \sum^{\infty}_{k = 1}k{(\delta+1)}^{k(1-\omega)}\left(L_1 + LP_0+ M_0 + La(1+2\delta)\|\xi\|_{\infty}C_0\right)|t-s|^{\beta/2}.
\end{align*}
The choice of $\omega$ in \eqref{eq:omegachoice} then yields
\begin{align*}
\lambda_{\omega}(\overline{\mu}_t,\overline{\mu}_s) \leq \sum^{\infty}_{k = 1}\frac{C_0}{3^k}|t-s|^{\beta/2}= \frac{C_0}{2}|t-s|^{\beta/2}<C_0|t-s|^{\beta/2}.
\end{align*}
This, in turn, implies that $\Psi[p,m]$ maps the set
\begin{align*}
\left\{\mu \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A)) \mid \|\mu\|_\beta\leq C_0\right\}
\end{align*}
into itself. Notice that the above set is convex and compact in $\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$ (by the Ascoli-Arzel\'a theorem). Moreover $\Psi[p,m]$ is a continuous map, since $\Phi[p_t,m_t]$ is continuous with respect to the weak$^*$ convergence of measures by Lemma \ref{lem:fixpointmeas}. We can therefore invoke the Schauder fixed point theorem to infer the existence of a fixed point for $\Psi[p,m]$. Uniqueness follows from the uniqueness of the fixed point of $\Phi[p_t,m_t]$ for every $t\in[0,T]$.
\end{proof}
Next, we extend the stability result of Lemma \ref{lem:stability} to the map $\Psi$.
\begin{lemma}\label{lem:hoelderstability}
Assume that the sequence
$$(p_n,m_n)_{n\in\mathbb{N}}\subset\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}} \quad \text{ with } m_{n,t} \in \mathcal{M}_1(\overline{\Omega}) \text{ for every } t\in[0,T]$$
converges to $(p,m)\in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ in the product topology, and consider the functions $\vartheta_n(t) := \Gamma(p_{n,t},m_{n,t})$, as well as $\vartheta(t):= \Gamma(p_{t},m_{t})$. Then $(\vartheta_n)_{n\in\mathbb{N}}$ converges uniformly to $\vartheta$.
\end{lemma}
\begin{proof}
Since the sequences $(p_n)_{n\in\mathbb{N}}$ and $(m_n)_{n\in\mathbb{N}}$ are convergent, they are bounded in their respective parabolic H\"older norms by two positive constants $P_0$ and $M_0$. We can then take a uniform $C_0$ and $\omega$ for the entire sequence by using such $P_0$ and $M_0$ in Lemma \ref{lem:hoelderfixpointmeas}. This implies that, with respect to the metric $\lambda_\omega$ on $\mathcal{M}_1(\overline{\Omega}\times A)$ of the previous result, it holds
\begin{align*}
(\vartheta_n)_{n\in\mathbb{N}}\subset Z:=\left\{\mu \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A)) \mid \|\mu\|_{\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))}\leq C_0\right\}.
\end{align*}
Since the set $Z$ is compact by the Ascoli-Arzel\'a theorem, we can extract from $(\vartheta_n)_{n\in\mathbb{N}}$ a subsequence converging to some $\mu \in Z$, and along this subsequence the convergence is uniform in time, that is
$$\lambda_\omega(\vartheta_{n,t},\mu_t) \rightarrow 0 \quad \text{ as } \quad n\rightarrow\infty \quad \text{ uniformly in } t.$$
%
On the other hand, we know from Lemma \ref{lem:stability} that the sequence $(\Gamma(p_{n,t},m_{n,t}))_{t\in[0,T]}$ converges pointwise to $(\Gamma(p_t,m_t))_{t\in[0,T]} \in Z$, therefore by the uniqueness of the pointwise limit we have
$$\mu_t = \Gamma(p_t,m_t) \quad \text{ for every } t\in[0,T],$$
which in turn implies that $(\Gamma(p_{n,t},m_{n,t}))_{t\in[0,T]}$ converges to $(\Gamma(p_t,m_t))_{t\in[0,T]}$ uniformly, since the limit does not depend on the chosen subsequence.
\end{proof}
Notice that our smoothness assumptions on $b$ and $\mathcal{L}$ transfer easily to the Hamiltonian of the system, as the following technical result shows.
\begin{proposition}\label{prop:holderHamilton}
The Hamiltonian $H$ satisfies the following statement: let $p \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and let $m \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ with $m_t \in \mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$; then it holds
$$H(t,x,p(t,x);\Gamma(p_t,m_t)) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}},$$
as well as
$$D_p H(t,x,p(t,x);\Gamma(p_t,m_t)) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}.$$
\end{proposition}
\begin{proof}
By Assumption \ref{item:contalpha}, we know that there exists a function $\alpha^*:[0,T]\times\overline{\Omega}\times\mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow A$ for which
\begin{align}\label{eq:Hrep}
H(t,x,p;\nu) = -p\cdot b(t,x,\alpha^*(t,x,p;\nu);\nu) - \mathcal{L}(t,x,\alpha^*(t,x,p;\nu);\nu)
\end{align}
holds, which in turn implies
\begin{align}\label{eq:DpHrep}
D_p H(t,x,p;\nu) = -b(t,x,\alpha^*(t,x,p;\nu);\nu).
\end{align}
By Lemma \ref{lem:hoelderfixpointmeas} we have that $\Gamma(p_t,m_t) \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$. The thesis then follows from Assumptions \ref{item:inverseb}-\ref{item:inverseb2}, \ref{item:monotoneL}-\ref{item:monotoneL2} and \ref{item:contalpha}-\ref{item:contalpha1}.
\end{proof}
We can now show that the stability of the map $\Gamma$ easily translates into stability for the functions $H$ and $D_p H$.
\begin{lemma}\label{le:Hstability}
Assume that the sequence
$$(p_n,m_n)_{n\in\mathbb{N}}\subset\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}} \quad \text{ with } m_{n,t} \in \mathcal{M}_1(\overline{\Omega}) \text{ for every } t\in[0,T]$$
converges to $(p,m)\in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ in the product topology. Then for $n\rightarrow\infty$ it holds
\begin{align*}
H(t,x,p_{n,t}(x);\Gamma(p_{n,t},m_{n,t})) &\rightarrow H(t,x,p_{t}(x);\Gamma(p_{t},m_{t})) \quad \text{ uniformly,}
\end{align*}
as well as
\begin{align*}
D_p H(t,x,p_{n,t}(x);\Gamma(p_{n,t},m_{n,t})) &\rightarrow D_p H(t,x,p_{t}(x);\Gamma(p_{t},m_{t})) \quad \text{ uniformly.}
\end{align*}
\end{lemma}
\begin{proof}
We shall first show that
\begin{align*}
\alpha^*(t,x,p_{n,t}(x);\Gamma(p_{n,t},m_{n,t})) \rightarrow \alpha^*(t,x,p_{t}(x);\Gamma(p_{t},m_{t})) \quad \text{ uniformly.}
\end{align*}
The thesis will then follow by the regularity assumptions on $b$ and $\mathcal{L}$, namely \ref{item:inverseb}-\ref{item:inverseb2} and \ref{item:monotoneL}-\ref{item:monotoneL2}, and the fact that $H$ satisfies \eqref{eq:Hrep} and \eqref{eq:DpHrep} by Proposition \ref{prop:holderHamilton}.
Let $\omega$ be determined by \eqref{eq:omegachoice} (for $P_0$ and $M_0$ chosen according to the bounds of the sequences $(p_n)_{n \in \mathbb{N}}$ and $(m_n)_{n \in \mathbb{N}}$). Then the following estimate follows easily from Assumption \ref{item:contalpha}-\ref{item:contalpha4} together with Lemma \ref{le:techweakconv}:
\begin{align*}
|\alpha^*(t,x,p_{n,t}(x);\Gamma(p_{n,t},m_{n,t}))& - \alpha^*(t,x,p_{t}(x);\Gamma(p_{t},m_{t})) | \\
& \leq L|p_{n,t}(x)-p_{t}(x)| \\
& \quad \quad+ L\left|\int_{\overline{\Omega}\times A}\xi(x,\alpha)d\Gamma(p_{n,t},m_{n,t})(x,\alpha)-\int_{\overline{\Omega}\times A}\xi(x,\alpha)d\Gamma(p_{t},m_{t})(x,\alpha)\right|\\
& \leq L|p_{n,t}(x)-p_{t}(x)| + L\|\xi\|_{\infty}(\delta+1)^\omega(1+2\delta)\lambda_{\omega}(\Gamma(p_{n,t},m_{n,t}),\Gamma(p_{t},m_{t})).
\end{align*}
By taking the supremum over $(t,x)\in \overline{Q}_T$, the statement is proven by invoking Lemma \ref{lem:hoelderstability}.
\end{proof}
The following a priori H\"older bound for $u$ and $m$ follows easily from the regularity assumptions on the data of the problem (i.e., the sets, constants and functions listed in the assumptions) and from the previous lemmas.
\begin{lemma}\label{lem:unifboundsol}
There exists a constant $\overline{C}$ depending only on the data of the problem such that, if $(u,m) \in \Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times \Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$ with $m_t \in \mathcal{M}_1(\overline{\Omega})$ is a $\beta$-classical solution of \eqref{eq:mfg} then
\begin{align*}
\|u\|_{\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} + \|m\|_{\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq \overline{C}.
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \ref{lem:hoelderfixpointmeas}, if $(u,m) \in \Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times \Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$ then the measure-valued curve $\mu_t = \Gamma(Du(t,\cdot),m_t)$, $t\in[0,T]$, is H\"older continuous in time with exponent $\beta/2$. Moreover, by Proposition \ref{prop:holderHamilton} it follows that the coefficients of the PDEs \eqref{eq:mfghjb} and \eqref{eq:fokkerplanck} are bounded in the parabolic H\"older spaces $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ and $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$, respectively.
Therefore, since $u\in\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$, $m$ is a solution to a nondegenerate parabolic equation with coefficients bounded in $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$. We can thus invoke \cite[Theorem V.6.1, Equation (6.12)-(6.12')]{solonnikov} to get the uniform bound
\begin{align}\label{eq:unifboundm}
\|m\|_{\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq C_1,
\end{align}
for some constant $C_1$ depending only on the data. For the same reasons, since $m\in\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$,
$u$ also solves a nondegenerate parabolic equation with coefficients in $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ and it satisfies
\begin{align}\label{eq:unifboundu}
\|u\|_{\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq C_2,
\end{align}
where the constant $C_2$ depends only on the data. Putting together the two estimates we get the thesis for $\overline{C}:= C_1+C_2$.
\end{proof}
\subsection{Well-posedness of system \eqref{eq:mfg}}
We are now ready to prove the main result of the paper.
\begin{proof}[Proof of Theorem \ref{th:mainresult}]
Denote by $X$ the set of pairs $(u,m)\in\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$ such that $m_t\in\mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$, and equip it with the product norm (which we denote by $\|\cdot\|_X$).
\medskip
\begin{claim}
$X$ is a closed subset of $\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$.
\end{claim}
\begin{claimproof}
We just need to check that if $(u_n,m_n)_{n\in\mathbb{N}}$ is a sequence in $X$ converging to $(u,m)$, then $m_t\in\mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$. Indeed, since $(m_n)_{n\in\mathbb{N}}$ converges to $m$ uniformly, it holds
$$\lim_{n\rightarrow\infty}\int_{\overline{\Omega}}m_n(x)dx = \int_{\overline{\Omega}}m(x)dx.$$
Therefore, since $m_n \geq 0$ and $m_n(\overline{\Omega}) \leq 1$ for every $n\in\mathbb{N}$, we have the desired statement.
\end{claimproof}
\medskip
We now introduce the map $T:X\times [0,1]\rightarrow X$ that associates to every $(u,m) \in X$ and every $\varepsilon \in [0,1]$ the solution $(v,\rho)$ of the PDE system
\begin{align}\label{eq:mfgfixedpoint}
\left\{\begin{aligned}
&\partial_t v(t,x) + \sigma \Delta v(t,x) - \varepsilon H(t,x,D u(t,x);\mu_t) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&\partial_t \rho(t,x) - \sigma \Delta \rho(t,x) - \varepsilon\textup{div}(D_p H(t,x,D u(t,x);\mu_t)\rho(t,x)) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&\mu_t = (\textup{Id},\alpha^*(t,\cdot,D u(t,\cdot);\mu_t))_{\sharp}m_t&\text{ for } t \in[0,T]\\
&\rho(0,x) = \varepsilon\overline{m}(x), v(T,x) = \varepsilon\psi(x,m_T) & \text{ for } x\in\overline{\Omega} \\
&\rho(t,x) = 0, v(t,x) = \varepsilon\psi(x,m_t) & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
\medskip
\begin{claim}
$T(\cdot,\varepsilon)$ is a well-defined operator for every $\varepsilon \in [0,1]$.
\end{claim}
\begin{claimproof}
Set $(v,\rho) := T(u,m,\varepsilon)$. Since the couple $(Du,m)$ satisfies the assumptions of Lemma \ref{lem:hoelderfixpointmeas}, the measure-valued curve $\mu_t = \Gamma(Du_t,m_t)$ satisfies the fixed-point relationship
$$\mu_t = (\textup{Id},\alpha^*(t,\cdot,D u(t,\cdot);\mu_t))_{\sharp}m_t,$$
for every $t\in[0,T]$. Moreover, the curve $(\mu_t)_{t\in[0,T]}$ belongs to $\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$.
Now, by Proposition \ref{prop:holderHamilton}, if $(u,m) \in X$ then the term $H(t,x,D u(t,x);\mu_t)$ satisfies the hypotheses of \cite[Theorem IV.5.2]{solonnikov}, hence there exists a unique solution $v\in \Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T}$ of
\begin{align}\label{eq:hjbfixedpoint}
\left\{\begin{aligned}
&\partial_t v(t,x) + \sigma \Delta v(t,x) - \varepsilon H(t,x,D u(t,x);\mu_t) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&v(T,x) = \varepsilon \psi(x,m_T) & \text{ for } x\in\overline{\Omega} \\
&v(t,x) = \varepsilon \psi(x,m_t) & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
such that
\begin{align}\begin{split}\label{eq:boundv}
\|v\|_{\Hoelder{2+\beta}{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} &\leq C_1 \left(\|H(t,x,D u(t,x);\mu_t)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} + \|\psi(x,m_t)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right)\\
&\leq C_2 \left(\mathcal{H}_1\left(\|D u(t,x)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right) + \mathcal{K}_{\psi}\left(\|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right)\right)\\
&\leq \mathcal{K}_1\left(\|(u,m)\|_X\right),
\end{split}
\end{align}
for some continuous function $\mathcal{K}_1:\mathbb{R}\rightarrow\mathbb{R}$ and constants $C_1,C_2>0$ depending only on the data. Here we used Assumption \ref{item:hoelderpsi}-\ref{item:hoelderpsi1}, as well as Assumption \ref{item:hoelderH}.
Furthermore, since $D_p H(t,x,D u(t,x);\mu_t)$ is H\"older continuous as well, for the same reasons there also exists a unique solution $\rho \in \Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T}$ of
\begin{align}\label{eq:fpfixedpoint}
\left\{\begin{aligned}
&\partial_t \rho(t,x) - \sigma \Delta \rho(t,x) - \varepsilon\textup{div}(D_p H(t,x,D u(t,x);\mu_t)\rho(t,x)) = 0, & \text{ for } x\in\Omega \text{ and } t \in[0,T]\\
&\rho(0,x) = \varepsilon\overline{m}(x) & \text{ for } x\in\overline{\Omega} \\
&\rho(t,x) = 0 & \text{ for } x\in\partial\Omega\text{ and } t \in[0,T]
\end{aligned}\right.
\end{align}
which satisfies
\begin{align}\begin{split}
\label{eq:boundrho}
\|\rho\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} &\leq C_3 \left(\|D_p H(t,x,D u(t,x);\mu_t)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|\overline{m}\|_{\mathcal{C}^{\beta}(\overline{\Omega};\mathbb{R})}\right)\\
&\leq C_4 \left(\mathcal{H}_2\left(\|D u(t,x)\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right) + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right)\\
&\leq \mathcal{K}_2\left(\|(u,m)\|_X\right),
\end{split}
\end{align}
again for some continuous function $\mathcal{K}_2:\mathbb{R}\rightarrow\mathbb{R}$ and constants $C_3,C_4>0$ depending only on the data, having used Assumption \ref{item:hoelderH}.
Moreover, by Proposition \ref{prop:measfp}, it holds that $\rho_t\in\mathcal{M}_1(\overline{\Omega})$ for every $t\in[0,T]$.
Summing \eqref{eq:boundv} and \eqref{eq:boundrho} together we arrive at
\begin{align}\begin{split}\label{eq:contboundT}
\|T(u,m,\varepsilon)&\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} \\
& = \|(v,\rho)\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}}\\
& = \|v\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} + \|\rho\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}}\\
& \leq \mathcal{K}\left(\|(u,m)\|_X\right),
\end{split}
\end{align}
for some continuous function $\mathcal{K}:\mathbb{R}\rightarrow\mathbb{R}$ depending only on the data. It then follows that $(v,\rho) = T(u,m,\varepsilon) \in X$, and hence $T(\cdot,\varepsilon)$ is a well-defined map.
\end{claimproof}
\medskip
\begin{claim}
$T(\cdot,\varepsilon)$ is a continuous map for every $\varepsilon \in [0,1]$.
\end{claim}
\begin{claimproof}
Let $(u_n,m_n)_{n\in\mathbb{N}}\subset X$ be a sequence converging to $(u,m)$ in the norm of $X$, and set $(v_n,\rho_n) := T(u_n,m_n,\varepsilon)$ for every $n\in\mathbb{N}$. Then $(u_n,m_n)_{n\in\mathbb{N}}$ is bounded, and so is $(\mathcal{K}(\|(u_n,m_n)\|_X))_{n\in\mathbb{N}}$ by the continuity of the function $\mathcal{K}$ in \eqref{eq:contboundT}. Therefore, by the estimate \eqref{eq:contboundT} we infer that $(v_n,\rho_n)_{n\in\mathbb{N}}$ is bounded in $\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$ as well. Hence, by the Ascoli-Arzel\`a theorem and the fact that $X$ is closed, we may infer the existence of $(v,\rho)\in X$ such that $(v_n,\rho_n)_{n\in\mathbb{N}}$ converges to $(v,\rho)$ in H\"older norm (up to subsequences). We now have to show that $T(u,m,\varepsilon) = (v,\rho)$.
To do so, notice that, since $(D u_n)_{n\in\mathbb{N}}$ converges in $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $(m_n)_{n\in\mathbb{N}}$ in $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ with $m_n \in \mathcal{M}_1(\overline{\Omega})$, we can invoke Lemma \ref{lem:hoelderstability} to infer the convergence (up to subsequences) in $\mathcal{C}^{\beta/2}$ of the sequence $(\mu_n)_{n\in\mathbb{N}} \subset \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$ defined as
$$\mu_n := (\Gamma(D u_{n}(t,\cdot),m_{n,t}))_{t\in[0,T]}$$
to $\mu = (\Gamma(D u(t,\cdot),m_{t}))_{t\in[0,T]}$. Since gradients converge uniformly, by Lemma \ref{le:Hstability} it follows that
\begin{align*}
H(t,x,D u_n(t,x);\mu_{n,t}) &\rightarrow H(t,x,D u(t,x);\mu_{t}) \quad \text{ uniformly.}
\end{align*}
Therefore, using well-known stability results for viscosity solutions of second-order equations, we deduce that $v$ is a classical solution of \eqref{eq:hjbfixedpoint}. Similarly, since by Lemma \ref{le:Hstability} we also get
\begin{align*}
D_p H(t,x,D u_n(t,x);\mu_{n,t}) &\rightarrow D_p H(t,x,D u(t,x);\mu_{t}) \quad \text{ uniformly,}
\end{align*}
then we have that $\rho$ solves \eqref{eq:fpfixedpoint}, which in turn implies $T(u,m,\varepsilon) = (v,\rho)$.
\end{claimproof}
\medskip
\begin{claim}
$T(\cdot,\varepsilon)$ is a compact operator for every $\varepsilon \in [0,1]$.
\end{claim}
\begin{claimproof}
To show compactness we need to prove that $T(\cdot,\varepsilon)$ maps bounded sets into precompact sets. To this end, fix $M>0$ and consider the bounded set
$$X_M:= \{(u,m)\in X\mid \|(u,m)\|_X\leq M \}.$$
By \eqref{eq:contboundT} it holds that $T(\cdot,\varepsilon)$ maps $X_M$ into the subset of $\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}$ with H\"older norm bounded by the constant $\sup_{[0,M]}\mathcal{K}$. Since $X$ is closed by a previous claim, the image of $X_M$ under $T(\cdot,\varepsilon)$ is precompact by the Ascoli-Arzel\`a theorem, as desired.
\end{claimproof}
\medskip
\begin{claim}
$T(\cdot,\varepsilon)$ has a fixed point for every $\varepsilon \in [0,1]$.
\end{claim}
\begin{claimproof}
We shall apply the Leray-Schauder fixed point theorem. We already showed that for every $\varepsilon\in[0,1]$ the map $T(\cdot,\varepsilon)$ is a continuous compact mapping, and it trivially holds that $T(u,m,0) = (0,0)$ by the maximum principle for the heat equation. Therefore, assume that $(u,m)$ is a fixed point of $T(\cdot,\varepsilon)$, i.e., it is a classical solution of \eqref{eq:mfg} with $\varepsilon H$, $\varepsilon D_p H$, $\varepsilon \psi$ and $\varepsilon \overline{m}$ in place of $H$, $D_pH$, $\psi$ and $\overline{m}$. Then from \eqref{eq:unifboundm} and \eqref{eq:unifboundu} in the proof of Lemma \ref{lem:unifboundsol} we see that we can select a constant $\overline{C}>0$, independent of $\varepsilon \in [0,1]$, such that
$$\|(u,m)\|_{\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}\times\Hoelder{2+\beta }{1+\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq \overline{C}.$$
Therefore, by the Leray-Schauder fixed point theorem we have that the map $T(\cdot,\varepsilon)$ has a fixed point for every $\varepsilon\in[0,1]$.
\end{claimproof}
To conclude the proof, we simply notice that a fixed point $(u,m)$ of $T(\cdot,1)$ is also a classical solution of \eqref{eq:mfg}.
\end{proof}
\section{Modeling applications}\label{sec:examples}
We now present a set of sufficient conditions for some of the assumptions reported in Section \ref{sec:assumption}, and then provide two models that satisfy them, for which the existence of Nash equilibria then follows from Theorem \ref{th:mainresult}.
\subsection{Sufficient conditions for Assumption \ref{item:monotoneL}}
We first focus on the Lagrangian and exhibit a functional form, commonly found in multi-agent models, that satisfies Assumption \ref{item:monotoneL}. In such contexts, one of the most employed modeling tools is the convolution of the mass of agents with an \textit{interaction kernel}, which models the interaction of the mass with itself. We recall that the \textit{convolution} between a function $f:Y_1\rightarrow Y_2$ and a measure $m \in \mathcal{M}_1(Y_1)$ is the function $f* m:Y_1\rightarrow Y_2$ defined as
$$f* m(x) := \int_{Y_1} f(x - y) dm(y),$$
whenever this quantity is well-defined (for instance, when $m$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$ and $f \in L_1(\mathbb{R}^d)$). Convolutions with sufficiently regular interaction kernels play a crucial role, both in the dynamics as well as inside the cost functional, in several pedestrian and financial markets models (see for instance \cite{carrillo1501.07054} for the former and \cite{ahn13} for the latter).
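As a concrete illustration (purely ours, not part of the model specification), the convolution $f*m$ can be evaluated directly when $m$ is an empirical probability measure $m = \frac{1}{N}\sum_{i=1}^N \delta_{y_i}$; the kernel and the support points below are illustrative choices.

```python
import math

def convolve_with_empirical(f, points, x):
    """(f * m)(x) = (1/N) * sum_i f(x - y_i) for the empirical
    probability measure m = (1/N) * sum_i delta_{y_i} (d = 1)."""
    return sum(f(x - y) for y in points) / len(points)

# Illustrative interaction kernel Q and support points.
Q = lambda z: math.exp(-z * z)
pts = [0.0, 1.0, 2.0]
val = convolve_with_empirical(Q, pts, 1.0)  # (Q(1) + Q(0) + Q(-1)) / 3
```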
\begin{definition}[Multi-agent interaction Lagrangian]
Fix $\ell:[0,T]\times\overline{\Omega}\rightarrow\mathbb{R}$, $h:\mathbb{R}\rightarrow\mathbb{R}$, $Q:\mathbb{R}^d\rightarrow\mathbb{R}$, $g:A\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathbb{R}$ and $\varepsilon \geq 0$. We define the \textit{multi-agent interaction Lagrangian} to be
\begin{align}\label{eq:multiagentL}
\mathcal{L}(t,x,\alpha;\nu) := \ell(t,x) + h((Q*\pi_{1\sharp}\nu)(x)) + g(\alpha,\nu) + \varepsilon|\alpha|^2.
\end{align}
\end{definition}
For this kind of cost function, a rather easy sufficient condition can be formulated for Assumption \ref{item:monotoneL}-\ref{item:monotoneL1}.
\begin{proposition}\label{prop:monotoneLprop}
If there exists $F:\overline{\Omega}\times A\rightarrow\mathbb{R}$ such that for every $(t,x,\alpha)\in[0,T]\times\overline{\Omega}\times A$ and $\nu^1,\nu^2 \in\mathcal{M}_1(\overline{\Omega}\times A)$ it holds
\begin{align}\begin{split}\label{eq:suffmonotone}
h((Q*\pi_{1\sharp}\nu^1)(x)) +g(\alpha,\nu^1) - h((Q*\pi_{1\sharp}\nu^2)(x))& -g(\alpha,\nu^2) \\
& \geq F(x,\alpha)\int_{\overline{\Omega}\times A}F(\tilde{x},\tilde{\alpha})d(\nu^1-\nu^2)(\tilde{x},\tilde{\alpha}),
\end{split}
\end{align}
then the Lagrangian \eqref{eq:multiagentL} satisfies Assumption \ref{item:monotoneL}-\ref{item:monotoneL1}.
\end{proposition}
\begin{proof}
This follows easily from the fact that
\begin{align*}
\int_{\overline{\Omega}\times A}& (\mathcal{L}(t,x,\alpha;\nu^1) - \mathcal{L}(t,x,\alpha;\nu^2))d(\nu^1 - \nu^2)(x,\alpha)\\
& = \int_{\overline{\Omega}\times A}\left(h((Q*\pi_{1\sharp}\nu^1)(x)) +g(\alpha,\nu^1) - h((Q*\pi_{1\sharp}\nu^2)(x)) -g(\alpha,\nu^2)\right)d(\nu^1 - \nu^2)(x,\alpha) \\
& \geq \left[\int_{\overline{\Omega}\times A}F(x,\alpha)d(\nu^1-\nu^2)(x,\alpha)\right]^2 \geq 0.
\end{align*}
This concludes the proof.
\end{proof}
The regularity condition of Assumption \ref{item:monotoneL}-\ref{item:monotoneL2} requires us to be more precise in the choice of the function $g$. As we want to model situations where agents optimize their choices taking into account the average strategy of the other agents, we first define the operator
\begin{align}\label{eq:intmu}
\Theta(\nu) := \int_{\overline{\Omega}\times A}\alpha d\nu(x,\alpha).
\end{align}
The operator $\Theta$ measures the average control strategy of the mass of players. This quantity enters the Lagrangian through the following cost
\begin{align}\label{eq:gintmu}
g(\alpha,\nu) := \varphi(\alpha \cdot \Theta(\nu)),
\end{align}
where $\varphi$ is a sufficiently smooth, convex and monotone function. Minimizing this cost leads each infinitesimal agent to align its strategy $\alpha^*$ with the direction of the average strategy $\Theta(\nu)$, provided that $\varphi$ is decreasing. If instead $\varphi$ is increasing, the optimal strategy $\alpha^*$ points in the direction opposite to $\Theta(\nu)$.
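For intuition, $\Theta(\nu)$ and the cost \eqref{eq:gintmu} can be computed explicitly for an empirical measure $\nu = \frac{1}{N}\sum_i \delta_{(x_i,\alpha_i)}$; the following one-dimensional sketch (the data and the choice $\varphi(z) = z^2$ are illustrative) does exactly this.

```python
def theta(samples):
    """Theta(nu) = integral of alpha d nu(x, alpha) for the empirical
    measure supported on `samples`, each a pair (x, alpha), with d = 1."""
    return sum(a for _, a in samples) / len(samples)

def g(alpha, samples, phi):
    """The cost g(alpha, nu) = phi(alpha * Theta(nu)) (scalar case)."""
    return phi(alpha * theta(samples))

nu = [(0.0, 1.0), (1.0, 3.0)]       # two agents with strategies 1.0 and 3.0
avg = theta(nu)                      # average strategy
cost = g(0.5, nu, lambda z: z * z)   # phi(0.5 * avg)
```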
\begin{remark}
In several applications, the function $g$ is chosen as
\begin{align}\label{eq:gintmuclassic}
g(\alpha,\nu) := \varphi(\alpha - \Theta(\nu)),
\end{align}
for some smooth, convex and monotone function $\varphi$ (with $\varphi(z) = z^2$ being the favourite choice, see e.g. \cite{MR4146720,kobeissi2019classical}). Our unconventional choice of $g$ is motivated by the fact that all the results proven for \eqref{eq:gintmu} extend trivially to \eqref{eq:gintmuclassic} as well, and that our cost leads to nice closed-loop solutions (see, for instance, Remark \ref{rem:expcontrol}) and to interesting agent dynamics, as shown in Section \ref{sec:numerics}.
\end{remark}
We first show that the H\"older regularity in time of a measure $\nu$ passes naturally to $\Theta(\nu)$.
\begin{proposition}\label{prop:holderintegral}
Assume that $\mu\in\mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$ with respect to the metric $\lambda_{\omega}$ of Lemma \ref{lem:hoelderfixpointmeas}. Then the map $t\mapsto\Theta(\mu_t)$
belongs to $\mathcal{C}^{\beta/2}([0,T];\mathbb{R}^d)$.
\end{proposition}
\begin{proof}
By Lemma \ref{le:techweakconv} for the choice $h = \alpha$, $a = (\delta+1)^\omega$ and $B = \overline{\Omega}\times A$, for every $t,s\in[0,T]$ we have
\begin{align*}
|\Theta(\mu_t) - \Theta(\mu_s)| & \leq \left|\int_{\overline{\Omega}\times A}\alpha d\mu_t(x,\alpha) - \int_{\overline{\Omega}\times A}\alpha d\mu_s(x,\alpha)\right| \\
& \leq \delta(\delta+1)^{\omega}(1+2\delta)\lambda_\omega(\mu_t,\mu_s)\\
& \leq \delta(\delta+1)^{\omega}(1+2\delta) |\mu|_{\mathcal{C}^{\beta/2}([0,T])}|t-s|^{\beta/2}.
\end{align*}
This concludes the proof.
\end{proof}
This result allows us to extend the regularity of $\alpha$ and $\nu$ to the multi-agent interaction Lagrangian, as the following result shows.
\begin{proposition}\label{prop:A5prop}
Assume that $\ell$, $h$, $Q$ and $\varphi$ are smooth functions and that $g$ satisfies \eqref{eq:gintmu}. Then the multi-agent interaction Lagrangian \eqref{eq:multiagentL} satisfies Assumption \ref{item:monotoneL}-\ref{item:monotoneL2}.
\end{proposition}
\begin{proof}
Assume that $\alpha \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;A}$ and that $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$. By Proposition \ref{prop:holderintegral}, $\Theta(\nu_t)$ is $\beta/2$-H\"older continuous in time, so that Lemma \ref{le:lipbasic} implies that $g(\alpha(t,x),\nu_t)$ belongs to $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ as well.
Concerning the convolution term, notice that for every $(x_1,t_1), (x_2,t_2) \in \overline{Q}_T$ it holds
\begin{align*}
|h((Q*\pi_{1\sharp}&\nu_{t_1})(x_1)) - h((Q*\pi_{1\sharp}\nu_{t_2})(x_2))| \\
& \leq \textup{Lip}(h)|(Q*\pi_{1\sharp}\nu_{t_1})(x_1) - (Q*\pi_{1\sharp}\nu_{t_2})(x_2)|\\
& = \textup{Lip}(h)\left|(Q*\pi_{1\sharp}\nu_{t_1})(x_1) - (Q*\pi_{1\sharp}\nu_{t_1})(x_2) + (Q*\pi_{1\sharp}\nu_{t_1})(x_2) - (Q*\pi_{1\sharp}\nu_{t_2})(x_2)\right|\\
& \leq \textup{Lip}(h)\int_{\overline{\Omega}}\left|Q(x_1 - y) - Q(x_2 - y)\right||\pi_{1\sharp}\nu_{t_1}(y)|dy \\
& \qquad + \textup{Lip}(h)\int_{\overline{\Omega}}\left|Q(x_2 - y)\right||\pi_{1\sharp}\nu_{t_1}(y) - \pi_{1\sharp}\nu_{t_2}(y)|dy,
\end{align*}
hence the desired regularity comes from the smoothness of $h$, $Q$ and the $\mathcal{C}^{\beta/2}$ regularity in time for $\nu_t$ (which passes to $\pi_{1\sharp}\nu_t$).
Since the sum of H\"older continuous functions is still H\"older continuous, we get $\mathcal{L}(t,x,\alpha(t,x);\nu_t) \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$, which is the desired statement.
\end{proof}
\subsection{Sufficient conditions for Assumption \ref{item:contalpha}}
We now give sufficient conditions for Assumption \ref{item:contalpha}. The framework we consider here is coherent with the one of the previous subsection, so that the sufficient conditions for Assumption \ref{item:monotoneL} are satisfied at essentially no extra cost whenever the following conditions hold.
\begin{proposition}\label{prop:A6prop}
Fix $M > 0$ and $\varepsilon > 0$ such that $0 < 2\varepsilon - 6M^3 - 9M^2 - 3M \leq 2\varepsilon \leq 1$. Assume that
\begin{enumerate}
\item $\delta(\overline{\Omega}\times A)\leq M$ and $A \subseteq B_M(0)$,
\item $b(t,x,\alpha;\nu) = b_1(t,x,\nu) + \alpha$ for some $b_1:[0,T]\times\overline{\Omega}\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow \mathbb{R}^d$,
\item $\mathcal{L}(t,x,\alpha;\nu) = \ell(t,x,\nu) + \varphi\left(\alpha\cdot\Theta(\nu)\right) + \varepsilon|\alpha|^2$ where $\ell:[0,T]\times\overline{\Omega}\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow \mathbb{R}$ and $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ satisfies
\begin{enumerate}
\item $\varphi$ is convex,
\item\label{item:A6lipprime} $\textup{Lip}(\varphi^\prime) < \frac{2\varepsilon - 6M^3 - 9M^2 - 3M}{M^{2}}$,
\item\label{item:A6supprime} $\|\varphi^\prime\|_{\infty} \leq 2\varepsilon - \textup{Lip}(\varphi^\prime)M^2$.
\end{enumerate}
\end{enumerate}
Then there exists a unique function $\alpha^*:[0,T]\times\overline{\Omega}\times\mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow A$ satisfying \eqref{eq:alphamaximum} and Assumption \ref{item:contalpha}-\ref{item:contalpha4} for the choice $\xi(x,\alpha) := \alpha$.
\end{proposition}
\begin{proof}
If $\varphi$ is convex, with the above choice of $b$ and $\mathcal{L}$, the Lagrangian function $\mathcal{J}$ is strictly convex in $\alpha$, and thus admits a unique minimizer $\alpha^*$ for every $(t,x,p,\nu) \in [0,T]\times\overline{\Omega}\times\mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)$. Since, under the above hypotheses, the Hamiltonian $H$ reads
\begin{align*}
H(t,x,p,\nu) = \sup_{\alpha}\left\{-p\cdot b_1(t,x,\nu) - p\cdot \alpha - \ell(t,x,\nu) - \varphi\left(\alpha\cdot\Theta(\nu)\right) - \varepsilon|\alpha|^2\right\},
\end{align*}
then the optimal control $\alpha^*(t,x,p,\nu)$ satisfies the identity
\begin{align}\label{eq:firstoder}
p + 2\varepsilon\alpha + \varphi^\prime\left(\alpha\cdot\Theta(\nu)\right)\Theta(\nu) = 0.
\end{align}
In order to prove that Assumption \ref{item:contalpha}-\ref{item:contalpha4} holds, let $p_1, p_2 \in \mathbb{R}^d$ and $\nu_1, \nu_2 \in \mathcal{M}_1(\overline{\Omega}\times A)$, and set $\alpha_i := \alpha^*(t,x,p_i,\nu_i)$ for $i = 1,2$. Then \eqref{eq:firstoder} yields
\begin{align*}
2\varepsilon|\alpha_1 - \alpha_2| & = |p_1 + \varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)\Theta(\nu_1) -p_2 - \varphi^\prime\left(\alpha_2\cdot\Theta(\nu_2)\right)\Theta(\nu_2)| \\
& \leq |p_1 - p_2| + |\varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)\Theta(\nu_1) - \varphi^\prime\left(\alpha_2\cdot\Theta(\nu_2)\right)\Theta(\nu_2)| \\
& \leq |p_1 - p_2| + |\varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)\Theta(\nu_1) - \varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)\Theta(\nu_2)| \\
& \qquad + |\varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)\Theta(\nu_2) - \varphi^\prime\left(\alpha_2\cdot\Theta(\nu_2)\right)\Theta(\nu_2)|\\
& \leq |p_1 - p_2| + |\varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right)||\Theta(\nu_1) - \Theta(\nu_2)| \\
& \qquad + |\Theta(\nu_2)||\varphi^\prime\left(\alpha_1\cdot\Theta(\nu_1)\right) - \varphi^\prime\left(\alpha_2\cdot\Theta(\nu_2)\right)|\\
& \leq |p_1 - p_2| + \|\varphi^\prime\|_{\infty}|\Theta(\nu_1) - \Theta(\nu_2)| \\
& \qquad + \textup{Lip}(\varphi^\prime)|\Theta(\nu_2)||\alpha_1\cdot\Theta(\nu_1)-\alpha_2\cdot\Theta(\nu_2)|\\
& \leq |p_1 - p_2| + \|\varphi^\prime\|_{\infty}|\Theta(\nu_1) - \Theta(\nu_2)| \\
& \qquad + \textup{Lip}(\varphi^\prime)|\Theta(\nu_2)||\Theta(\nu_1)||\alpha_1-\alpha_2| + \textup{Lip}(\varphi^\prime)|\Theta(\nu_2)||\alpha_2||\Theta(\nu_1) - \Theta(\nu_2)|\\
& = |p_1 - p_2| + \left(\|\varphi^\prime\|_{\infty} +\textup{Lip}(\varphi^\prime)|\Theta(\nu_2)||\alpha_2| \right)|\Theta(\nu_1) - \Theta(\nu_2)| \\
& \qquad + \textup{Lip}(\varphi^\prime)|\Theta(\nu_2)||\Theta(\nu_1)||\alpha_1-\alpha_2|.
\end{align*}
Since $A \subseteq B_M(0)$, it follows that $|\alpha_2| \leq M$ as well as
$$|\Theta(\nu)| = \left|\int_{\overline{\Omega}\times A}\alpha d\nu(x,\alpha)\right| \leq \int_{\overline{\Omega}\times A}\left|\alpha\right| d\nu(x,\alpha) \leq M\nu(\overline{\Omega}\times A) \leq M.$$
We can therefore conclude the above chain of inequalities with
\begin{align*}
2\varepsilon|\alpha_1 - \alpha_2| & \leq |p_1 - p_2| + \left(\|\varphi^\prime\|_{\infty} +\textup{Lip}(\varphi^\prime)M^2 \right)|\Theta(\nu_1) - \Theta(\nu_2)| \\
& \qquad + \textup{Lip}(\varphi^\prime)M^2|\alpha_1-\alpha_2|,
\end{align*}
which, after rearranging, yields
\begin{align*}
\left(2\varepsilon - \textup{Lip}(\varphi^\prime)M^2\right)|\alpha_1 - \alpha_2| & \leq |p_1 - p_2| + \left(\|\varphi^\prime\|_{\infty} +\textup{Lip}(\varphi^\prime)M^2 \right)\left|\int_{\overline{\Omega}\times A}\alpha d(\nu_1-\nu_2)(x,\alpha)\right|,
\end{align*}
and since $\textup{Lip}(\varphi^\prime) < 2\varepsilon M^{-2}$ by assumption \eqref{item:A6lipprime} and the choice of $M$ and $\varepsilon$, we have
\begin{align*}
|\alpha_1 - \alpha_2| & \leq \frac{1}{2\varepsilon - \textup{Lip}(\varphi^\prime)M^2}|p_1 - p_2| + \frac{\|\varphi^\prime\|_{\infty} +\textup{Lip}(\varphi^\prime)M^2 }{2\varepsilon - \textup{Lip}(\varphi^\prime)M^2}\left|\int_{\overline{\Omega}\times A}\alpha d(\nu_1-\nu_2)(x,\alpha)\right|\\
&\leq \frac{1}{2\varepsilon - \textup{Lip}(\varphi^\prime)M^2}\left(|p_1 - p_2| + \left|\int_{\overline{\Omega}\times A}\alpha d(\nu_1-\nu_2)(x,\alpha)\right|\right),
\end{align*}
where we used the fact that, by assumption \eqref{item:A6supprime}, it holds $\|\varphi^\prime\|_{\infty} \leq 2\varepsilon- \textup{Lip}(\varphi^\prime)M^2$, whence $\|\varphi^\prime\|_{\infty} + \textup{Lip}(\varphi^\prime)M^2 \leq 2\varepsilon \leq 1$. It remains to show that the Lipschitz coefficient satisfies condition \eqref{eq:alphalipbound}, that is
\begin{align*}
\frac{1}{2\varepsilon - \textup{Lip}(\varphi^\prime)M^2} < \frac{1}{(3+3\delta)(1+2\delta)\|\xi\|_{\infty}},
\end{align*}
which is equivalent to
\begin{align*}
\textup{Lip}(\varphi^\prime)M^2 < 2\varepsilon - (3+3\delta)(1+2\delta)\|\xi\|_{\infty}.
\end{align*}
However, since $\delta \leq M$ by assumption and $\|\xi\|_{\infty} \leq M$ due to the choice $\xi(x,\alpha) = \alpha$, it is enough to show that
\begin{align*}
\textup{Lip}(\varphi^\prime)M^2 < 2\varepsilon - (3+3M)(1+2M)M,
\end{align*}
which, since $(3+3M)(1+2M)M = 6M^3 + 9M^2 + 3M$, is equivalent to assumption \eqref{item:A6lipprime}, and whose right-hand side is positive by the choice of $M$ and $\varepsilon$. This concludes the proof.
\end{proof}
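As a numerical sanity check (not part of the proof), one can solve the first-order condition \eqref{eq:firstoder} in dimension one by fixed-point iteration and verify the Lipschitz-type estimate of the proposition on a sample pair; here the values of $\Theta(\nu_1)$ and $\Theta(\nu_2)$ are prescribed directly, and all parameter choices are illustrative.

```python
def alpha_star(p, th, eps, dphi, iters=200):
    """Solve p + 2*eps*a + dphi(a*th)*th = 0 (d = 1) by fixed-point iteration;
    this converges since Lip(dphi) * th**2 / (2 * eps) < 1 for our parameters."""
    a = 0.0
    for _ in range(iters):
        a = -(p + dphi(a * th) * th) / (2 * eps)
    return a

eps, lip, M = 0.5, 0.05, 1.0
dphi = lambda z: lip * z              # phi(z) = lip * z**2 / 2, so Lip(phi') = lip
a1 = alpha_star(0.3, 0.2, eps, dphi)  # alpha* at (p1, Theta(nu_1)) = (0.3, 0.2)
a2 = alpha_star(0.1, 0.4, eps, dphi)  # alpha* at (p2, Theta(nu_2)) = (0.1, 0.4)
# Estimate of the proposition, with |p1 - p2| and |Theta(nu_1) - Theta(nu_2)|:
bound = (abs(0.3 - 0.1) + abs(0.2 - 0.4)) / (2 * eps - lip * M * M)
```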
We now provide two explicit optimal control strategies satisfying both Proposition \ref{prop:A6prop} and Assumption \ref{item:contalpha}-\ref{item:contalpha1}, so that these controls satisfy Assumption \ref{item:contalpha} in full.
\begin{remark}[Linear-cost optimal control]\label{rem:linearcontrol}
In the special case $\varphi(z) := M_1 z$ (taking $\varepsilon = 1/2$, so that the coefficient $2\varepsilon$ in \eqref{eq:firstoder} equals one), we can easily compute $\alpha^*$ from the first-order condition \eqref{eq:firstoder} to get
\begin{align}\label{eq:linearcontrol}
\alpha^*(t,x,p;\nu) = -p - M_1\Theta(\nu),
\end{align}
which satisfies the assumptions of Proposition \ref{prop:A6prop} for $M_1$ small enough. Furthermore, if $p \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$, then by Proposition \ref{prop:holderintegral}, the control $\alpha^*$ satisfies Assumption \ref{item:contalpha}-\ref{item:contalpha1}.
In this particular case we can actually resolve the self-reference hidden in \eqref{eq:linearcontrol} by means of a simple computation. Indeed, for given $p$ and $m$ we have
\begin{align*}
\alpha^*&(t,x,p_t(x);\Gamma(p_t,m_t)) = -p_t(x) - M_1\Theta(\Gamma(p_t,m_t)) \\
& \Longleftrightarrow \alpha^*(t,x,p_t;\Gamma(p_t,m_t))m_t(x) = -p_t(x)m_t(x) - M_1\Theta(\Gamma(p_t,m_t))m_t(x) \\
& \Longleftrightarrow \int_{\overline{\Omega}}\alpha^*(t,x,p_t;\Gamma(p_t,m_t))m_t(x)dx = -\int_{\overline{\Omega}}p_t(x)m_t(x)dx -M_1\Theta(\Gamma(p_t,m_t))\int_{\overline{\Omega}}m_t(x)dx \\
& \Longleftrightarrow \Theta(\Gamma(p_t,m_t)) = -\int_{\overline{\Omega}}p_t(x)m_t(x)dx -M_1m_t(\overline{\Omega})\Theta(\Gamma(p_t,m_t)) \\
& \Longleftrightarrow \Theta(\Gamma(p_t,m_t)) = -\frac{1}{1 +M_1m_t(\overline{\Omega}) }\int_{\overline{\Omega}}p_t(x)m_t(x)dx,
\end{align*}
hence we get the closed-loop form
\begin{align}\label{eq:linearcontrolexplicit}
\alpha^*(t,x,p_t(x);\Gamma(p_t,m_t)) = -p_t(x) + \frac{M_1}{1 +M_1m_t(\overline{\Omega}) }\int_{\overline{\Omega}}p_t(x)m_t(x)dx.
\end{align}
This representation clearly shows that, alongside the typical adjoint term $-p_t$, in this modeling scenario the average of the adjoint with respect to the mass of players contributes to determining the optimal choice. Notice that this extra term vanishes as soon as $m_t$ does.
\end{remark}
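The identity \eqref{eq:linearcontrolexplicit} can be checked numerically: discretizing $\Omega = (0,1)$ with a midpoint grid (the profiles $p$, $m$ and the constant $M_1$ below are illustrative choices of ours), the resulting control reproduces its own average, i.e. $\int_{\overline{\Omega}}\alpha^*\, m_t\, dx = \Theta(\Gamma(p_t,m_t))$ up to quadrature error.

```python
n, M1 = 1000, 0.7
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]     # midpoint grid on (0, 1)
p = [x * x for x in xs]                    # illustrative adjoint profile p_t
m = [0.8] * n                              # sub-probability density, total mass 0.8
mass = h * sum(m)
int_pm = h * sum(pi * mi for pi, mi in zip(p, m))
theta = -int_pm / (1 + M1 * mass)          # closed-form average strategy Theta
alpha = [-pi - M1 * theta for pi in p]     # alpha* = -p - M1 * Theta
check = h * sum(ai * mi for ai, mi in zip(alpha, m))  # should equal theta
```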
\begin{remark}[Exponential-cost optimal control]\label{rem:expcontrol}
If we assume that $\varphi(z) := M_1 \exp(M_2 z)$, the first-order equation \eqref{eq:firstoder} reads
\begin{align}\label{eq:productlog}
p + \alpha + M_1M_2\exp\left(M_2\alpha\cdot\Theta(\nu)\right)\Theta(\nu) = 0,
\end{align}
whose solution by Proposition \ref{prop:productlog} is given by
\begin{align*}
\alpha^*(t,x,p;\nu) = -p - M_1M_2 \exp\left(M_2 W\left(M_1M_2^2|\Theta(\nu)|^2 p\cdot \Theta(\nu) \exp\left(-M_1M_2p\cdot \Theta(\nu)\right) \right) \right) \Theta(\nu).
\end{align*}
Notice how this functional form resembles \eqref{eq:linearcontrol}, except for the factor multiplying $\Theta(\nu)$. This solution satisfies Assumption \ref{item:contalpha}-\ref{item:contalpha4} for appropriate choices of $M_1$ and $M_2$, thanks to Proposition \ref{prop:A6prop}. In addition, by invoking Lemma \ref{le:lipbasic} and Proposition \ref{prop:holderintegral}, if $p \in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}$ and $\nu_t \in \mathcal{C}^{\beta/2}([0,T];\mathcal{M}_1(\overline{\Omega}\times A))$ then $\alpha^*$ satisfies the H\"older regularity conditions of Assumption \ref{item:contalpha}-\ref{item:contalpha1}.
\end{remark}
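In the scalar case, the first-order condition \eqref{eq:productlog} is strictly increasing in $\alpha$ (its derivative is $1 + M_1M_2^2\Theta(\nu)^2\exp(M_2\alpha\Theta(\nu)) \geq 1$), so the closed form above can be cross-checked by plain bisection; in the following sketch the values of $p$, $\Theta(\nu)$, $M_1$, $M_2$ are illustrative assumptions.

```python
import math

# Illustrative sketch (all values are our assumptions): the scalar first-order
# condition p + a + M1*M2*exp(M2*a*theta)*theta = 0 is strictly increasing in a,
# so bisection finds its unique root.
M1, M2 = 0.2, 0.5

def foc(a, p, theta):
    return p + a + M1 * M2 * math.exp(M2 * a * theta) * theta

def solve_foc(p, theta, lo=-50.0, hi=50.0, iters=200):
    # for moderate p and theta the bracket [lo, hi] contains the sign change
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if foc(mid, p, theta) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a_star = solve_foc(p=0.3, theta=-0.4)
```

For $\Theta(\nu)=0$ the correction term disappears and the root reduces to $-p$, consistently with the structure of \eqref{eq:linearcontrol}.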
\subsection{Sufficient conditions for Assumption \ref{item:hoelderH}}\label{sec:assumption7alt}
Before moving to actual applications of our modeling framework, we show how Assumption \ref{item:hoelderH} can be satisfied. We first introduce the following definition.
\begin{definition}
Let $Y \subseteq \mathbb{R}^n$ for some $n \in \mathbb{N}$. A function $f:[0,T]\times\overline{\Omega}\times \mathbb{R}^d\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow Y$ is \textit{bounded along fixed points} iff there exists a continuous function $\mathcal{H}_f:\mathbb{R}\rightarrow\mathbb{R}$ such that for every $(p,m)\in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ it holds
\begin{align*}
\|f(t,x,p(t,x);\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;Y}}\leq \mathcal{H}_f\left( \|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \right).
\end{align*}
\end{definition}
In order to give concreteness to the above definition, we immediately give an example of an optimal control strategy which is bounded along fixed points.
\begin{lemma}\label{le:linearcontrolbounded}
The linear-cost optimal control of Remark \ref{rem:linearcontrol} is bounded along fixed points.
\end{lemma}
\begin{proof}
We start from the equivalent formulation of the linear-cost optimal control given by \eqref{eq:linearcontrolexplicit}. Since all the quantities inside the integrals are bounded, for every $r,s$ such that $2r+s\leq\beta$, by the Leibniz integral rule and Fa\`a di Bruno's formula we get the identity
\begin{align*}
D^r_tD^s_x &\alpha^*(t,x,p_t(x),\Gamma(p_t,m_t)) = -D^r_tD^s_xp_t(x) + D^r_t\left(\frac{M_1}{1 +M_1m_t(\overline{\Omega}) }\int_{\overline{\Omega}}p_t(x)m_t(x)dx\right) \\
& = -D^r_tD^s_xp_t(x) \\
& \qquad+ \sum_{k = 0}^r \binom{r}{k}\left(D^k_t\left(\frac{M_1}{1 +M_1m_t(\overline{\Omega})}\right)\int_{\overline{\Omega}}p_t(x)m_t(x)dx + \frac{M_1}{1 +M_1m_t(\overline{\Omega})}D^k_t\int_{\overline{\Omega}}p_t(x)m_t(x)dx \right)\\
& = -D^r_tD^s_xp_t(x) \\
& \qquad+ \sum_{k = 0}^r \binom{r}{k}\Bigg(\sum^k_{j=0}\frac{(-1)^j j! M_1^{j+1}}{(1 +M_1m_t(\overline{\Omega}))^{j+1}}B_{k,j}\left(D_tm_t(\overline{\Omega}), \ldots, D^{k-j+1}_tm_t(\overline{\Omega})\right)\int_{\overline{\Omega}}p_t(x)m_t(x)dx \\
&\qquad + \frac{M_1}{1 +M_1m_t(\overline{\Omega})}\sum^k_{j=0}\binom{k}{j}\int_{\overline{\Omega}}D^j_tp_t(x)D^{k-j}_tm_t(x)dx \Bigg),
\end{align*}
where $B_{k,j}$ denotes the partial exponential Bell polynomial, i.e. the $j$-th summand of the $k$-th complete exponential Bell polynomial.
Since every term on the right-hand side involves integrals of $p_t$ and $m_t$, it can easily be shown that
\begin{align*}
\|D^r_tD^s_x \alpha^*(t,x,p_t(x),\Gamma(p_t,m_t))\|_{\infty} \leq \mathcal{H}_{r,s}\left(\|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right),
\end{align*}
for some continuous function $\mathcal{H}_{r,s}:\mathbb{R}\rightarrow\mathbb{R}$. In a similar fashion, the above identity allows us to bound the H\"older norm of $D^r_tD^s_x \alpha^*$ by a continuous function of the H\"older norms of $p$ and $m$. Summing the contributions over all $r, s$ such that $2r+s\leq\beta$, we get the result.
\end{proof}
Assumption \ref{item:hoelderH} can be simply reformulated by saying that the Hamiltonian $H$ as well as its derivative $D_pH$ are bounded along fixed points.
The following result shows that if the optimal control is bounded along fixed points and if the functionals of the system preserve boundedness along fixed points, then the Hamiltonian satisfies Assumption \ref{item:hoelderH}.
\begin{proposition} \label{eq:prophoeldeH}
Assume that
\begin{enumerate}
\item The function $\alpha^*:[0,T]\times\overline{\Omega}\times A \times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow A$ is bounded along fixed points,
\item The function $b:[0,T]\times\overline{\Omega}\times A\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathbb{R}^d$ is such that, if $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, then also $\overline{b}(t,x,p;\nu) := b(t,x,\alpha^*(t,x,p;\nu);\nu)$ is,
\item The function $\mathcal{L}:[0,T]\times\overline{\Omega}\times A \times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow\mathbb{R}$ is such that, if $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, then also $\overline{\mathcal{L}}(t,x,p;\nu) := \mathcal{L}(t,x,\alpha^*(t,x,p;\nu);\nu)$ is.
\end{enumerate}
Then the Hamiltonian $H$ satisfies Assumption \ref{item:hoelderH}.
\end{proposition}
\begin{proof}
Let $\mathcal{H}_{b}, \mathcal{H}_{\mathcal{L}}:\mathbb{R}\rightarrow\mathbb{R}$ be the continuous functions associated with the functions $b$ and $\mathcal{L}$, respectively. From the definition \eqref{def:hamiltonian} we get
\begin{align*}
\|H&(t,x,p_t(x);\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}} \leq \\
& \leq \|p \|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}}\| b(t,x,\alpha^*(t,x,p_t(x);\Gamma(p_t,m_t));\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} \\
& \qquad\qquad\qquad+\| \mathcal{L}(t,x,\alpha^*(t,x,p_t(x);\Gamma(p_t,m_t));\Gamma(p_t,m_t))\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\\
&\leq \|p \|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} \mathcal{H}_{b}\left(\|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right) \\
& \qquad\qquad\qquad+\mathcal{H}_{\mathcal{L}}\left(\|p\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}} + \|m\|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}}\right),
\end{align*}
which shows that $H$ is bounded along fixed points by means of the function $\mathcal{H}_1(x):= x\mathcal{H}_b(x) + \mathcal{H}_{\mathcal{L}}(x)$. Since $D_pH$ is trivially bounded along fixed points by the function $\mathcal{H}_2(x):= \mathcal{H}_b(x)$, the statement is proven.
\end{proof}
We now show examples of functions that preserve boundedness along fixed points (and thus can be employed in the definition of $b$ and $\mathcal{L}$ to give rise to a Hamiltonian for which Proposition \ref{eq:prophoeldeH} holds).
\begin{lemma} \label{le:thetalip}
If $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, then also $\Theta(\nu)$ is.
\end{lemma}
\begin{proof}
We start by noticing that, since $\alpha^*(t,x,p(t,x);\Gamma(p_t,m_t))$ is a bounded $\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$-function for any $(p,m)\in\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}\times \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$, by the Leibniz integral rule it holds
\begin{align*}
D^r_t\Theta(\Gamma(p_t, m_t)) & = D^r_t\int_{\overline{\Omega}\times A}\alpha d\Gamma(p_t, m_t)(y,\alpha)\\
& = D^r_t\int_{\overline{\Omega}}\alpha^*(t,y,p(t,y);\Gamma(p_t, m_t)) m_t(y) dy\\
& = \int_{\overline{\Omega}}D^r_t\left(\alpha^*(t,y,p(t,y);\Gamma(p_t, m_t)) m_t(y)\right)dy\\
& = \sum_{k = 0}^r \binom{r}{k}\int_{\overline{\Omega}} D^k_t\alpha^*(t,y,p(t,y);\Gamma(p_t, m_t)) D^{r-k}_t m_t(y)dy.
\end{align*}
We can therefore argue as in the proof of Lemma \ref{le:linearcontrolbounded} to conclude that $\Theta(\nu)$ is bounded along fixed points.
\end{proof}
The following statements are easy corollaries of the above result.
\begin{corollary}\label{cor:fixpointscost}
Let $\phi:A\rightarrow\mathbb{R}^d$ be a smooth function. Then if $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, also $\overline{f}(t,x,p;\nu) := \phi(\alpha^*(t,x,p;\nu)\cdot \Theta(\nu))$ is.
\end{corollary}
\begin{corollary}\label{cor:fixpointsdynrates}
Let $\phi:\mathbb{R}\rightarrow\mathbb{R}^d$ be a smooth function. Then if $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, also $\overline{f}(x;\nu) := (1 + \phi(\Theta(\nu)))x$ is.
\end{corollary}
Another corollary is the following result which shows that the convolution dynamics preserve boundedness along fixed points.
\begin{corollary}
Let $K:\mathbb{R}^d\rightarrow\mathbb{R}^d$ be a smooth bounded function and set
$$f(t,x;\nu) := (K*\pi_{1\sharp}\nu)(x).$$
Then if $\alpha^*(t,x,p;\nu)$ is bounded along fixed points, also $f(t,x;\nu)$ is.
\end{corollary}
\begin{proof}
By $\pi_{1\sharp}\Gamma(p_t, m_t) = m_t$ and the properties of the convolution operator, it follows that if $m \in \Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}}$ then
\begin{align*}
D^r_tD^s_x (K*\pi_{1\sharp}\Gamma(p_t, m_t) )(x) & = D^r_tD^s_x (K*m_t )(x)\\
& = D^r_tD^s_x\int_{\overline{\Omega}}K(x-y) m_t(y) dy\\
& = \int_{\overline{\Omega}}D^s_xK(x-y)\, D^r_tm_t(y) dy\\
& = (D^s_x K)*\left(D^r_tm_t\right) (x),
\end{align*}
where we moved the derivatives inside the integral by the Leibniz integral rule. This easily implies that
\begin{align*}
\|K*\pi_{1\sharp}\Gamma(p_t, m_t) \|_{\Hoelder{\beta}{\beta/2}{\overline{Q}_T;\mathbb{R}^d}}\leq \|K\|_{\mathcal{C}^\infty}|\overline{\Omega}| \|m_t \|_{\mathcal{C}^{\beta/2}{([0,T];\mathbb{R})}},
\end{align*}
which in turn yields the statement.
\end{proof}
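The convolution appearing above can be approximated numerically in a straightforward way; in the following sketch (kernel and density are arbitrary smooth stand-ins of ours) $(K*m_t)(x)$ is computed by a Riemann sum and obeys the sup-bound $\|K*m_t\|_\infty \leq \|K\|_\infty\, m_t(\overline{\Omega})$.

```python
import numpy as np

# Sketch (stand-in kernel and density): Riemann-sum approximation of the
# convolution (K*m_t)(x) = int_Omega K(x-y) m_t(y) dy on a uniform grid.
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
m = np.exp(-8.0 * x ** 2)              # stand-in density m_t
K = lambda z: np.exp(-z ** 2)          # smooth bounded kernel, |K| <= K(0) = 1

conv = np.array([np.sum(K(xi - x) * m) * dx for xi in x])
```

Since both $K$ and $m$ are even here, the resulting profile is symmetric about the origin, and differentiating $K$ instead of the discrete data reproduces the smoothing effect used in the proof above.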
\subsection{Refinancing dynamics}\label{sec:refinancing}
We now pass to show the first application of our modeling framework.
Assume that $d=1$ and $\Omega = (-1,1)$, and let us consider a continuum of indistinguishable firms with $X_t \in [-1,1]$ denoting the face value of single-period debt of each firm. Notice that being indistinguishable implies that the impact of the individual choice on debt value must be infinitesimal. To simplify the setting, we assume that the level $X_t = 1$ corresponds to the default of the firm (e.g., for the lack of collateral to pledge in return), while $X_t = -1$ means that the firm quits the debt market since it has enough savings for the rest of its lifetime.
At each $t$, the firm decides the adjustment $\alpha^*$ of its debt level according to the following minimization problem
\begin{align*}
\min_{\alpha\in \mathcal{U}} J(t_0,x_0,\alpha;\nu) = \mathbb{E}\left[\int^{T\wedge\tau}_0 \left(\ell(t,X_t) + g(\alpha_t,\nu_t) + \varepsilon|\alpha_t|^2 \right)dt + \psi(X_{T\wedge\tau},m_{T\wedge \tau})\right],
\end{align*}
where the term $|\alpha_t|^2$ should be considered as a \textit{deadweight adjustment cost}. The stopping time $\tau$ can be interpreted as the exit time from the debt market (if agents could exit only when $X_t = 1$, then $\tau$ could be interpreted as a default time, but in the present case we also allow exit when $X_t = -1$). Furthermore, the infinitesimal firm rolls over its entire stock of debt by paying a rate of return $R(\nu_t)$ which depends on the current demand for debt in the market, i.e., for some sufficiently smooth $\rho:\mathbb{R}\rightarrow\mathbb{R}$ we have
\begin{align*}
R(\nu) := \rho\left(\int_{\overline{\Omega}\times A} \alpha d\nu(x,\alpha)\right) = \rho(\Theta(\nu)).
\end{align*}
Similarly, we shall prescribe that the function $g:A\times\mathcal{M}_1(\overline{\Omega}\times A)\rightarrow \mathbb{R}$ is given by
\begin{align*}
g(\alpha,\nu) := \varphi\left(\alpha\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu(\tilde{x},\tilde{\alpha})\right) = \varphi(\alpha\Theta(\nu)),
\end{align*}
with $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ smooth, increasing and convex with $\varphi'\geq B$ for some $B \geq 0$. With this cost, it is advantageous for the individual firm to \say{go against} the average demand for debt. The motivation is that it is better for a firm to increase its stock of debt (i.e., $\alpha > 0$) while there is an excess supply of funds, corresponding to the condition
$$\Theta(\nu) = \int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu(\tilde{x},\tilde{\alpha}) < 0,$$
than while there is excess demand.
This setting yields the minimization problem
\begin{align*}
\min_{\alpha\in \mathcal{U}} J(t_0,x_0,\alpha;\mu) = \mathbb{E}\left[\int^{T\wedge\tau}_0 \left(\ell(t,X_t) + g(\alpha_t,\mu_t) + \varepsilon|\alpha_t|^2 \right)dt + \psi(X_{T\wedge\tau},m_{T\wedge \tau})\right]
\end{align*}
subject to the stopped SDE
\begin{align*}
\left\{\begin{aligned}
d X_t &= \left((1+R(\mu_t))X_t + \alpha^*(t,X_t,Du(t,X_t);\mu_t)\right)dt + 2\sqrt{\sigma}dW_t & \quad \text{ for } t \in [0,T\wedge\tau],\\
X_t & = X_{\tau} & \quad \text{ for } t \in [T\wedge\tau,T]
\end{aligned}\right.
\end{align*}
and the fixed point relationship for $\mu_t$ given by \eqref{eq:fixpointmu}.
It is quite easy to show that we fall under the general framework outlined in Section \ref{sec:assumption} as soon as we choose $\varphi(z) = M_1z$ for the appropriate value of $M_1>0$. As Assumptions \ref{item:compactset}--\ref{item:inverseb} are trivially satisfied, let us see how we can employ the sufficient conditions of the previous sections to show that also Assumptions \ref{item:monotoneL}-\ref{item:hoelderH} hold true.
Firstly, Assumption \ref{item:monotoneL} holds as soon as we show that Proposition \ref{prop:monotoneLprop} applies to our choice of $g$: since
\begin{align*}
g(\alpha,\nu_1) - g(\alpha,\nu_2) & = \varphi\left(\alpha\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_1(\tilde{x},\tilde{\alpha})\right) - \varphi\left(\alpha\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_2(\tilde{x},\tilde{\alpha})\right)\\
& = M_1\left(\alpha\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_1(\tilde{x},\tilde{\alpha}) - \alpha\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_2(\tilde{x},\tilde{\alpha})\right)\\
& = M_1\alpha\int_{\overline{\Omega}\times A} \tilde{\alpha} d(\nu_1 - \nu_2)(\tilde{x},\tilde{\alpha}).
\end{align*}
we then have \eqref{eq:suffmonotone} for $F(x,\alpha) = \sqrt{M_1}\,\alpha$.
Secondly, Proposition \ref{prop:A6prop} holds true as soon as we choose $M_1 \leq 2\varepsilon$, so that in this case also Assumption \ref{item:contalpha} holds.
Finally, as we are in the linear-cost optimal control case, we can employ all the results of Section \ref{sec:assumption7alt} to establish the validity of Assumption \ref{item:hoelderH}, in particular Corollaries \ref{cor:fixpointscost} and \ref{cor:fixpointsdynrates}.
\subsection{Evacuation of pedestrians}\label{ss:evac_ped}
Let $d = 2$, let $\Omega\subset\mathbb{R}^2$ be an open subset and let $A = B_M(0)$ for some $M>0$. We consider a continuum of infinitesimal pedestrians whose dynamics are given by
$$dX_t = K*m_t(X_t)dt + \alpha^*dt + 2\sqrt{\sigma}dW_t,$$
for some smooth convolution kernel $K:\mathbb{R}^d\rightarrow\mathbb{R}^d$ (like the smoothed Hegselmann-Krause one). The optimal velocity $\alpha^*$ should be chosen according to two main principles: firstly, pedestrians should try to avoid congestion, which can be done by minimizing the functional
\begin{align*}
\int_0^T Q*m_t(X_t) dt = \int_0^T\int_{\overline{\Omega}}Q(X_t-\tilde{x})dm_t(\tilde{x})dt,
\end{align*}
where $Q:\mathbb{R}^d\rightarrow\mathbb{R}$ is a decreasing radial function (see \cite{bongini2016optimal} for an analysis of this cost term in the context of pedestrian dynamics) satisfying $\inf_{\overline{\Omega}} Q\geq B$ for some $B>0$. Secondly, as argued in \cite{albi2015invisible}, pedestrians have a natural tendency to follow their mates, a behaviour that can be reproduced by minimizing the quantity
\begin{align*}
\int_0^T \varphi(\alpha\cdot\Theta(\nu))dt = \int_0^T\varphi\left(\alpha\cdot\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu(\tilde{y},\tilde{\alpha})\right)dt,
\end{align*}
where $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ is a strictly convex, decreasing function with $\varphi' \geq -B/M^2$. Indeed, if this quantity is minimized, then the angle between the optimal velocity $\alpha^*$ and the average velocity chosen by the rest of the agents is going to be \say{small}. As a result, this model has the advantage of being able to treat alignment of agents even though it is a first-order model.
Putting things together, we obtain the minimization problem
\begin{align*}
\min_{\alpha \in \mathcal{U}} J(t_0,x_0,\alpha;\mu) = \mathbb{E}\left[\int^{T\wedge\tau}_0 \left(Q*\pi_{1\sharp}\mu_t(X_t) +\varphi(\alpha\cdot\Theta(\nu)) + \varepsilon|\alpha_t|^2 \right)dt + \psi(X_{T\wedge\tau},m_{T\wedge\tau})\right]
\end{align*}
subject to the stopped SDE
\begin{align*}
\left\{\begin{aligned}
d X_t & = K*m_t(X_t)dt + \alpha^*(t,X_t,Du(t,X_t);\mu_t)dt + 2\sqrt{\sigma}dW_t & \quad \text{ for } t \in [0,T\wedge\tau],\\
X_t & = X_{\tau} & \quad \text{ for } t \in [T\wedge\tau,T],
\end{aligned}\right.
\end{align*}
and the fixed point relationship for $\mu_t$ given by \eqref{eq:fixpointmu}.
Arguing similarly to Section \ref{sec:refinancing}, we can easily show that the general framework presented above applies to this model as well, as soon as we choose $\varphi(z) = -M_1z$ with $M_1 \leq B/M^2$. We shall only show that Proposition \ref{prop:monotoneLprop} holds true, since we trivially have
\begin{align*}
M_1\alpha&\cdot\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_1(\tilde{x},\tilde{\alpha})+\int_{\overline{\Omega}}Q(x-\tilde{x})d\pi_{1\sharp}\nu_1(\tilde{x})- M_1\alpha\cdot\int_{\overline{\Omega}\times A}\tilde{\alpha}d\nu_2(x,\tilde{\alpha}) -\int_{\overline{\Omega}}Q(x-\tilde{x})d\pi_{1\sharp}\nu_2(\tilde{x})\\
& = M_1\left(\alpha\cdot\int_{\overline{\Omega}\times A}\tilde{\alpha}d(\nu_1-\nu_2)(x,\tilde{\alpha})\right)+\int_{\overline{\Omega}}Q(x-\tilde{x})d\pi_{1\sharp}(\nu_1-\nu_2)(\tilde{x})\\
&\geq -M_1M^2\frac{|\alpha|}{M}\int_{\overline{\Omega}\times A}\frac{|\tilde{\alpha}|}{M}d(\nu_1-\nu_2)(\tilde{x},\tilde{\alpha}) + B\int_{\overline{\Omega}\times A}d(\nu_1-\nu_2)(\tilde{x},\tilde{\alpha})\\
& \geq -M_1M^2\int_{\overline{\Omega}\times A}d(\nu_1-\nu_2)(\tilde{x},\tilde{\alpha})+B\int_{\overline{\Omega}\times A}d(\nu_1-\nu_2)(\tilde{x},\tilde{\alpha}) \geq 0,
\end{align*}
where we used the fact that $M \geq |\alpha|$. Notice that the above chain of inequalities reduces to \eqref{eq:suffmonotone} for the trivial choice $F \equiv 0$.
\section{Numerical experiments} \label{sec:numerics}
\subsection{The numerical method}
In this article, we have chosen a particle method for simulating system \eqref{eq:mfg}.
The strategy consists in describing a population, composed of a great number of individuals,
by means of a reduced set of numerical particles, which obey the action criterion of the problem.
Let $\varphi_{\varepsilon}$ be an $\varepsilon$-dependent smooth function, such that $\varphi_{\varepsilon}(x):=\varphi( \varepsilon^{-1} x)/\varepsilon^d$.
It is often called a \textit{shape function} because it is used to reconstruct the particle density from the numerical particles. A careful choice of the shape function is crucial for eliminating (or, at least, greatly reducing) the noise of the density profile and for respecting the boundary conditions of the continuous problem.
The first step of the particle method we are going to employ consists in discretizing the initial density $f^{\rm in}$ by means of a set of smooth shape functions centred on the individual positions and controls, in such a way that
\begin{equation} \label{finieps}
f^{\rm in}_{\varepsilon,N_m}(x,\alpha) = \sum_{k=1}^{N_m} \omega_k \, \varphi_{\varepsilon}(x - x_k^{0})\, \varphi_{\varepsilon}(\alpha - \alpha_k^0),
\end{equation}
where $N_m$ represents the number of numerical particles, $(x_k^{0})_{1\leq k \leq N_m}$ their initial positions and $(\alpha_k^{0})_{1\leq k \leq N_m}$ their initial velocities (i.e. their controls). We underline that each numerical particle is weighted by means of a set of parameters $\omega_k \in \mathbb{R}_+$ (which in our application shall be uniform in $k$). Once the number $N_m$ of numerical particles has been chosen, the initial positions $(x_k^{0})_{1\leq k \leq N_m}$ are sampled according to the initial density $f^{\rm in}$, either in a deterministic way or thanks to a Monte-Carlo procedure. In the first iteration (since the exit cost of our cost functional is going to be set to zero), we choose $(\alpha_k^{0})_{1\leq k \leq N_m}$ such that the particles move in the direction of the boundary point closest to their initial position with velocities of maximum norm.
We introduce a time discretization of step $\Delta t$ so that we set $t^n := n \Delta t$. The density of the continuous problem at time $t^n$ is hence
\begin{equation} \label{fneps}
f^{n}_{\varepsilon,N_m}(x,\alpha) = \sum_{k=1}^{N_m} \omega_k \, \varphi_{\varepsilon}(x - x_k^n)\, \varphi_{\varepsilon}(\alpha - \alpha_k^n),
\end{equation}
where $(x_k^{n})_{1\leq k \leq N_m}$ and $(\alpha_k^{n})_{1\leq k \leq N_m}$ are the positions and the velocities of the numerical particles at time $t^n$. The positions of the numerical particles evolve in time by minimizing the individual cost functional specific to the problem, in a time interval $[n \Delta t,(n+1) \Delta t]$. In the next section, we will provide an example of such a cost. Once we identify the velocity (that is, the control strategy) which minimizes the individual cost, the numerical particles move according to such velocity.
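The reconstruction \eqref{fneps} can be sketched as follows; the restriction to one space dimension, the Gaussian shape function, and all numerical values are our illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Sketch of the density reconstruction (illustrative choices: 1-D, Gaussian
# shape function, uniform weights omega_k = 1/N_m).
rng = np.random.default_rng(0)
N_m, eps = 2000, 0.05
omega = np.full(N_m, 1.0 / N_m)
xk = rng.uniform(0.5, 1.0, N_m)        # particle positions sampled like f^in

def phi_eps(z):
    # shape function phi_eps(z) = phi(z/eps)/eps with a Gaussian phi
    return np.exp(-0.5 * (z / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

grid = np.linspace(0.0, 1.0, 101)
density = (omega[None, :] * phi_eps(grid[:, None] - xk[None, :])).sum(axis=1)
```

The reconstructed spatial marginal integrates to roughly one (up to the mass leaking through the boundary), and larger $\varepsilon$ yields smoother but more diffused profiles.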
In order to find the Nash equilibrium solution of the problem, an iterative numerical scheme has been implemented.
We begin the iteration procedure by considering a simplified problem, and use its solution as the starting point of the iteration scheme.
First, the admissible velocities are discretized in an appropriate way in order to reduce them to a finite set.
Then, the next steps are implemented as follows.
In what follows, we denote by $j$ the index labelling the iteration step, by $(x_{k,j}^{n})_{1\leq k \leq N_m}$ and $(\alpha_{k,j}^{n})_{1\leq k \leq N_m}$ the positions and the velocities of the particle labelled with the index $k$ at time $t^n$ at the $j$-th iteration.
At the beginning of the $j$-th iteration, we randomly order the numerical particles, thanks to a sample from the uniform distribution. We then consider the first particle $x^{1}_{1,j}$ (with respect to the random order) at the numerical time $t^1$ and choose $\alpha^1_{1,j}$ by testing the possible costs between $t^1$ and $t^2$ of this particle with respect to the set of discretized admissible velocities, by assuming that the other particles have the positions and the velocities computed at the $(j-1)$-th iteration (i.e., using $(\alpha_{k,j-1}^{0})_{2\leq k \leq N_m}$).
Once we find the minimal cost for the first particle, we update the position and the velocity of such particle, which will be used for the computations of the next step. At the numerical time $t^2$, we consider the second particle $x^2_{2,j}$ (with respect to the random order). We choose $\alpha^2_{2,j}$ by testing the possible costs between $t^2$ and $t^3$ of this particle with respect to the set of discretized admissible velocities, by assuming that the other particles have the positions and the velocities computed at the $(j-1)$-th iteration, except for the first particle, whose position and velocity have been updated in the previous step (i.e., using $\alpha_{1,j}^1$ and $(\alpha_{k,j-1}^{0})_{3\leq k \leq N_m}$). We then end the procedure either once all the particles have been tested, or when the time horizon of the problem has been reached.
The updated positions and velocities are then the starting point for the $(j+1)$-th iteration.
The procedure ends when the difference between the distribution obtained at the $(j+1)$-th iteration and the distribution obtained at the $j$-th iteration
is below a given threshold.
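A heavily simplified toy version of this best-reply loop (our illustration, not the scheme actually implemented: all particles share one cost $\alpha\Theta + \alpha^2$ depending only on the mean control $\Theta$ of the previous iterate, and all values are arbitrary) already exhibits the stopping criterion "no strategy changes".

```python
import numpy as np

# Toy best-reply iteration (a deliberate simplification of the scheme in the
# text): each particle best-responds on the discretized control set to the
# mean control of the previous iterate, until no strategy changes.
A_alpha = np.linspace(-0.2, 0.2, 11)     # discretized control set
rng = np.random.default_rng(1)
alpha = rng.choice(A_alpha, size=500)    # arbitrary initial strategies
n_iter = 0

for _ in range(100):
    n_iter += 1
    theta = alpha.mean()                        # Theta of the previous iterate
    costs = A_alpha * theta + A_alpha ** 2      # common toy cost per control
    new_alpha = np.full(alpha.size, A_alpha[np.argmin(costs)])
    if np.array_equal(new_alpha, alpha):
        break                                   # no modifications: equilibrium
    alpha = new_alpha
```

In this toy setting the iteration stabilizes at the null control after very few sweeps, mirroring the rapid convergence reported for the full scheme below.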
A peculiar feature of the method is the absence of a stability condition. This could be an advantage for simulating the long-time behaviour of a MFG system, especially in space dimensions higher than one.
Moreover, this technique is very well-suited to treat not only second-order systems of PDEs in space, but also first-order systems. In that case, it makes it possible to handle non-conventional boundary conditions, such as absorbing boundary conditions or specular reflection boundary conditions. We finally underline that particle methods suffer from artificial numerical diffusion much less than finite-difference methods.
On the other hand, in order to have accurate simulations, the number of required numerical particles should be very high and the simulations can be
time-consuming.
In some sense, our procedure, based on successive approximations, mimics a real-life repeated game and is linked to the concepts of
\say{best reply} \cite{MR3268055} and \say{fictitious play} \cite{MR3608094}. We underline that iterative procedures for the numerical implementation of MFG, not based on finite difference methods, have been proposed in the context of non-compulsory vaccination \cite{MR3810807}.
\subsection{Numerical simulations}
In this section, we discuss some numerical results obtained from the implementation of the method described in the previous subsection to the evacuation model of Section \ref{ss:evac_ped}.
We have considered the space domain $\Omega:=(0,1)$, the control space $A:=[-.2,.2]$, the convolution kernel $K = 0$, the initial measure $\overline{m}$ as a smoothed version of the step function
\begin{equation*}
f^{\rm in}:=\left\{
\begin{array}{ll}
0 & x\in(0, 0.5) \\[10pt]
2 & x\in(0.5,1),
\end{array}
\right.
\end{equation*}
and the cost functional given by
\begin{equation}
\begin{aligned}
\label{f:int}
\mathcal{J}(t,x,\alpha;\nu)& := \mathbb{E}\Bigg[\eta \int_0^{T\wedge\tau} \left (\left \vert \int_{\Omega} y\, d\pi_{1\sharp}\nu_t(y) -x \right\vert +.2\right)^{-1}dt \\
& \qquad \qquad +\beta \int_0^{T\wedge\tau} \alpha_t\int_{\Omega\times A}{\tilde{\alpha}}\, d\nu_t({y},\tilde{\alpha}) \, dt + \varepsilon \int_0^{T\wedge\tau}|\alpha_t|^2dt\Bigg],
\end{aligned}
\end{equation}
with $\eta=4$, $\beta=1$, $\varepsilon = 1/2$, and where $\tau$ is the exit time from the domain. Notice that we assume that there is no exit cost, i.e. $\psi=0$. It is easy to show that this functional satisfies the hypotheses of Proposition \ref{prop:A5prop}, Proposition \ref{prop:A6prop} and Proposition \ref{eq:prophoeldeH}. Hence, Assumptions (A) listed in Section \ref{sec:assumption} are satisfied and the pedestrian evacuation model admits a Nash equilibrium.
Notice that the first integral in \eqref{f:int} takes into account the tendency of pedestrians to avoid congestion, and the second one their tendency to follow their mates (see Subsection \ref{ss:evac_ped} for more details concerning the meaning of this functional).
In the numerical experiments, we discretized the above quantities in the following way.
First, the initial density used in the numerical experiments is $f^{\rm in}$. We underline that $f^{\rm in}$ has total mass equal to one and its center of mass is located at $x=0.75$.
Then, the simulation has been obtained by using $N_m=6\times 10^3$ numerical particles.
To discretize the control space $A$, we choose $N_\alpha\in \mathbb{N}$, with $N_\alpha >2$, and set $\Delta\alpha := 0.4/(N_\alpha-1)$. In our simulations we have considered the control set
$$
A_\alpha := \{-0.2, -0.2+\Delta\alpha, \dots, -0.2 + (N_\alpha-1)\Delta\alpha = 0.2\}
$$
obtained by discretizing the control space $A$. At each instant, the numerical particles choose their optimal velocity $\alpha^*\in A_\alpha$
in order to minimize the functional \eqref{f:int}.
We have moreover assumed that $N_\alpha=11$ and that $\sigma=2.5\times 10^{-9}$.
The time history of the density is plotted in Figure \ref{fig:1}. We see that the support of the density is split into two disjoint subsets. Then, when the position of the center of mass is modified by the exits from the domain, the new position of the center of mass induces some numerical particles to modify their velocity in order to reduce their global cost.
Moreover, we observe that the numerical boundary conditions are consistent with the problem (i.e., the density vanishes at $x=0$ and at $x=1$).
\begin{figure}[h!]
\begin{center}
a)\ \includegraphics[width=0.45\textwidth]{1-eps-converted-to.pdf}\
b)\ \includegraphics[width=0.45\textwidth]{2-eps-converted-to.pdf}\\[1.2ex]
c)\ \includegraphics[width=0.45\textwidth]{3-eps-converted-to.pdf}\
d)\ \includegraphics[width=0.45\textwidth]{4-eps-converted-to.pdf}\\ [1.2ex]
e)\ \includegraphics[width=0.45\textwidth]{5-eps-converted-to.pdf}
f)\ \includegraphics[width=0.45\textwidth]{6-eps-converted-to.pdf}
\caption{Profiles of the spatial distribution for six different time instants: a) $t=0.012$, b) $t=0.2$, c) $t=0.8$,
d) $t=1$, e) $t=1.4$ and f) $t=2.4$.} \label{fig:1}
\end{center}
\end{figure}
Moreover, we have plotted, in Figure \ref{fig:2} (left), the time evolution of the total mass with the data of our simulation and, in Figure \ref{fig:2} (right), the time evolution of the center of mass of the population.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{mass-eps-converted-to.pdf}
\includegraphics[width=0.45\textwidth]{center_of_mass-eps-converted-to.pdf}
\caption{Time evolution of the total mass (left) and time evolution of the center of mass (right).}
\label{fig:2}
\end{figure}
The profile of the center of mass is non-monotone, and its oscillations are consistent with the loss of mass during the time evolution of the population.
It is clear that, in principle, an iterative procedure may be time-consuming.
In particular, the computational time depends heavily both on the number of points of the discretized control set and on the number of numerical particles.
A strategy used in our article has been to work in two steps. In the first step, we have computed the numerical solution by using the full discretized control set and
a reduced number of numerical particles. Then, this solution, obtained with a relatively low number of numerical particles, has been the basis for reducing the control set, by eliminating the elements of the discretized control set which are not relevant for the minimization of the cost function.
We have observed, in our numerical experiments, that the fixed point is reached quite rapidly once the set of admissible control velocities is reduced:
as shown in Figure
\ref{fig:3} (left), 3 iterations have been enough to reach the Nash equilibrium in the studied case. In the first iteration, by summing all the modifications over each time step, 1,064,896 individual strategy modifications have been observed; in the second iteration, 8,382 individual modifications have been carried out; and in the third one, 1,005 individual strategy modifications have been reported. Finally, in the fourth iteration step no individual modifications have been observed, and hence the Nash equilibrium has been reached.
We underline that, in the time span of the simulation, by multiplying the number of time steps by the number of numerical particles,
the total number of possible strategy modifications is equal to $6\times 10^6$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{istogramma-eps-converted-to.pdf}
\includegraphics[width=0.45\textwidth]{value-eps-converted-to.pdf}
\caption{Individual strategy modifications before reaching the Nash equilibrium (left) and time evolution of the value function at $x=0.6$, $x=0.7$
and $x=0.8$ (right).}
\label{fig:3}
\end{figure}
We conclude our analysis of the numerical experiment by underlining that our method allows us to reconstruct the value function not only as a function of space and time, but also as a function of time for a particle starting at a given point $x\in\Omega$ at $t=0$. In Figure \ref{fig:3} (right), we have plotted the time evolution of the value function for particles starting at $x=0.6$, $x=0.7$ and $x=0.8$. All of these curves are strictly decreasing functions of time and are identically equal to zero for times greater than the exit time of the corresponding particle (for example,
$t\geq 3.684$ for a particle located at $x=0.7$ when $t=0$).
\section{Appendix} \label{sec:appendix}
We here recall some basic facts concerning H\"older continuous functions and provide a proof of the explicit solution to the product log equation \eqref{eq:productlog}.
\begin{lemma}\label{le:lipbasic}
Assume that $k:X\rightarrow Y$ is H\"older continuous with exponent $\alpha$ and coefficient $\|k\|_{\alpha}$. Then the following hold:
\begin{enumerate}
\item if $h:X\rightarrow Y$ is H\"older continuous with exponent $\alpha$ and coefficient $\|h\|_{\alpha}$, then $h + k$ is H\"older continuous with exponent $\alpha$ and coefficient $\|h\|_{\alpha}+\|k\|_{\alpha}$,
\item if $X$ is bounded and $h:X\rightarrow Y$ is H\"older continuous with exponent $\alpha$ and coefficient $\|h\|_{\alpha}$, then $hk$ is H\"older continuous with exponent $\alpha$ and coefficient $\|k\|_{\infty}\|h\|_{\alpha}+\|h\|_{\infty}\|k\|_{\alpha}$,
\item if $h:X\rightarrow Y$ is H\"older continuous with exponent $\beta$ and coefficient $\|h\|_{\beta}$, then $h \circ k$ is H\"older continuous with exponent $\alpha\beta$ and coefficient $\|h\|_{\beta}\|k\|^{\beta}_{\alpha}$,
\item if $X$ is bounded and $h:X\rightarrow Y$ is bounded, then $h * k$ is H\"older continuous with exponent $\alpha$ and coefficient $|X|\|h\|_{\infty}\|k\|_{\alpha}$, where $|X|$ denotes the measure of $X$.
\end{enumerate}
\end{lemma}
\begin{proof}
The statement about $h + k$ is trivial.
Assume that $h$ is H\"older continuous with exponent $\beta$ and coefficient $\|h\|_{\beta}$ and that $k$ is H\"older continuous with exponent $\alpha$ and coefficient $\|k\|_{\alpha}$. Then for every $x_1, x_2 \in X$ we have
\begin{align}\begin{split}\label{eq:holdprod}
|h(x_1)k(x_1) - h(x_2)k(x_2)| &\leq |h(x_1)k(x_1) -h(x_1)k(x_2) +h(x_1)k(x_2) - h(x_2)k(x_2)|\\
&\leq |h(x_1)||k(x_1) -k(x_2)| +|k(x_2)||h(x_1) - h(x_2)|\\
&\leq \|h\|_{\infty}\|k\|_{\alpha}|x_1 -x_2|^{\alpha} +\|k\|_{\infty}\|h\|_{\beta}|x_1 - x_2|^{\beta},
\end{split}
\end{align}
which proves the statement about $hk$. Concerning $h \circ k$ we have
\begin{align}\begin{split}\label{eq:holdcirc}
|h(k(x_1)) - h(k(x_2))| &\leq \|h\|_{\beta}|k(x_1) - k(x_2)|^{\beta}\\
&\leq \|h\|_{\beta}\|k\|_{\alpha}^{\beta}|x_1 - x_2|^{\alpha\beta}.
\end{split}
\end{align}
Finally,
\begin{align}\begin{split}\label{eq:convcirc}
|h*k(x_1) - h*k(x_2)| & = \left|\int_{X} h(y)k(x_1 - y)dy - \int_{X} h(y)k(x_2 - y)dy\right|\\
&\leq \int_{X} |h(y)|\left|k(x_1 - y) - k(x_2 - y)\right|dy\\
&\leq |X|\|h\|_{\infty}\|k\|_{\alpha}|x_1 - x_2|^{\alpha}.
\end{split}
\end{align}
This concludes the proof.
\end{proof}
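The composition estimate in the lemma can be illustrated numerically (a Python sketch; the choice of $h(t)=\sqrt{t}$ and $k(t)=t^{1/3}$ on $[0,1]$, i.e. $\beta=1/2$, $\alpha=1/3$ with unit coefficients, is ours):

```python
import numpy as np

# h is Hölder with beta = 1/2, k with alpha = 1/3, both with coefficient 1 on [0,1];
# the lemma then gives ||h∘k||_{alpha*beta} <= ||h||_beta * ||k||_alpha^beta = 1.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
h = lambda t: np.sqrt(t)
k = lambda t: np.cbrt(t)

X1, X2 = np.meshgrid(x, x)
mask = X1 != X2
ratio = np.abs(h(k(X1)) - h(k(X2)))[mask] / np.abs(X1 - X2)[mask] ** (1.0 / 6.0)
print(ratio.max() <= 1.0)  # True: the empirical Hölder quotient never exceeds the bound
```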
\begin{proposition}\label{prop:productlog}
Let $p,q\in\mathbb{R}^d$ and $a,b\in\mathbb{R}$. Then the solution $\alpha\in\mathbb{R}^d$ of
\begin{align}\label{eq:productlogeq}
p + \alpha + a \exp\left(b \alpha \cdot q\right) q = 0
\end{align}
is given by
\begin{align}\label{eq:productlogsol}
\alpha = -p - a \exp\left(-bp\cdot q - W\left(ab|q|^2 \exp\left(-bp\cdot q\right) \right) \right)q,
\end{align}
where $W$ denotes the principal branch of the Lambert $W$ function \cite{corless1996lambertw}.
\end{proposition}
\begin{proof}
First notice that if $q = 0$ then the statement is straightforward. Assume therefore that w.l.o.g. $q_1 \ne 0$.
Notice that any $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{R}^d$ which solves \eqref{eq:productlogeq} must also satisfy the system
\begin{align*}
\begin{cases}
p_1 + \alpha_1 + a q_1 \exp\left(b \alpha \cdot q\right) = & 0 \\
\qquad\quad\vdots \notag & \\
p_d + \alpha_d + a q_d \exp\left(b \alpha \cdot q\right) = & 0
\end{cases}
\end{align*}
which implies that for every $i = 2, \ldots, d$ it holds
\begin{align}\label{eq:linearalpha}
\alpha_i = \frac{q_i}{q_1}\alpha_1 + \frac{q_i}{q_1}p_1 - p_i.
\end{align}
This in turn implies that
\begin{align*}
\alpha \cdot q & = \alpha_1q_1 + \sum^d_{i = 2}\left(\frac{q_i}{q_1}\alpha_1 + \frac{q_i}{q_1}p_1 - p_i\right)q_i\\
& = \alpha_1\frac{1}{q_1}\sum^d_{i = 1}q^2_i + p_1\frac{1}{q_1}\sum^d_{i = 1}q^2_i - \sum^d_{i = 1}p_iq_i\\
& = \alpha_1 \frac{|q|^2}{q_1} + \frac{p_1}{q_1}|q|^2 -p\cdot q.
\end{align*}
If we plug this identity into the equation for $\alpha_1$ we get
\begin{align*}
p_1 + \alpha_1 + & a q_1 \exp\left(\alpha_1 \frac{b|q|^2}{q_1} + \frac{bp_1}{q_1}|q|^2 -bp\cdot q\right) = 0 \\
& \Longleftrightarrow \alpha_1 = -p_1 - a q_1\exp\left(\frac{bp_1}{q_1}|q|^2 -bp\cdot q\right) \exp\left(\alpha_1 \frac{b|q|^2}{q_1}\right)
\end{align*}
It is well known that the solution of the above equation in $\alpha_1$ is given by
\begin{align*}
\alpha_1 &= -p_1 - \frac{q_1}{b |q|^2}W\left(ab|q|^2 \exp\left(\frac{bp_1}{q_1}|q|^2 -bp\cdot q\right) \exp\left(-\frac{bp_1}{q_1}|q|^2\right) \right) \\
&= -p_1 - \frac{q_1}{b |q|^2}W\left(ab|q|^2 \exp\left(-bp\cdot q\right) \right)\\
&= -p_1 - \frac{q_1}{b |q|^2}\, ab|q|^2 \exp\left(-bp\cdot q\right) \exp\left(-W\left(ab|q|^2 \exp\left(-bp\cdot q\right)\right) \right)\\
&= -p_1 - a \exp\left(-bp\cdot q - W\left(ab|q|^2 \exp\left(-bp\cdot q\right) \right) \right)q_1,
\end{align*}
where we used the Lambert W identity $W(z) = z\exp\left(-W(z)\right)$. If we plug the above identity into equation \eqref{eq:linearalpha} we get the statement.
\end{proof}
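The closed form can be checked numerically (a sketch assuming NumPy and SciPy are available; it uses the equivalent expression $\alpha = -p - \tfrac{1}{b|q|^2}W\left(ab|q|^2 e^{-bp\cdot q}\right)q$, read off componentwise from the proof):

```python
import numpy as np
from scipy.special import lambertw

def solve_productlog(p, q, a, b):
    """Solve p + alpha + a*exp(b*(alpha . q))*q = 0 via the principal
    branch of the Lambert W function."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if not q.any():
        return -p  # the equation reduces to p + alpha = 0
    z = a * b * q.dot(q) * np.exp(-b * p.dot(q))
    w = lambertw(z).real  # principal branch; assumes z >= -1/e
    return -p - (w / (b * q.dot(q))) * q

p = np.array([0.4, -0.2, 1.0])
q = np.array([0.7, 0.1, -0.5])
a, b = 0.5, 0.3
alpha = solve_productlog(p, q, a, b)
residual = p + alpha + a * np.exp(b * alpha.dot(q)) * q
print(np.abs(residual).max())  # ~1e-16: alpha solves the equation
```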
\bigskip
\noindent{\bf Acknowledgements.}
This work has been carried out in the framework of the project \textsl{Kimega} (ANR-14-ACHN-0030-01).
This research was moreover supported by the Italian Ministry of Education, University and Research (MIUR), \textsl{Dipartimenti di Eccellenza} Program - Department of Mathematics \say{F. Casorati}, University of Pavia.
The authors wish to thank Prof. Pierre Cardaliaguet for pointing out the problem studied in the article as well as for useful comments and suggestions.
\section{Introduction}
Source separation is an important issue in many fields such as audio processing, image processing \cite{melnik2021critic}, EEG \cite{melnik2017systems, melnik2017eeg}, etc. Music source separation from mixed audio is a challenging problem, especially if the source itself should be learned from a dataset of examples. Additionally, such models are expensive to train from scratch. We tested our model on the MUSDB18-HQ \cite{MUSDB18HQ} dataset, which supplies full songs with ground-truth stems of 'bass', 'drums', 'vocals' and 'other', the last of which includes instruments such as guitars, synths, etc. The task is to separate a mixed audio channel into the separately recorded instruments, called stems here. Most baseline models in the Music Demixing Challenge 2021 \cite{mitsufuji2021music} used masking of input transformed to the frequency domain by short-time Fourier transformation. \textit{Demucs} \cite{DBLP:journals/corr/abs-1909-01174} showed a successful approach that works in the waveform domain. \textit{Demucs} is an autoencoder, based on a bidirectional long short-term memory network, with an architecture inspired by generative approaches. This encouraged us to adapt \textit{Jukebox} \cite{dhariwal2020jukebox}, a powerful generative model using multiple deep Vector Quantized-Variational Autoencoders (VQ-VAEs) \cite{DBLP:journals/corr/abs-1711-00937} to automatically generate real-sounding music, and to use its publicly available pre-trained weights for the task.
\section{Related Work}
Transfer learning helped deep learning models reach new heights in many domains, such as natural language processing \cite{DBLP:journals/corr/abs-1810-04805,DBLP:journals/corr/abs-1910-10683} and computer vision \cite{HAN201843,https://doi.org/10.1111/mice.12363}. Although transfer learning is relatively unexplored in the audio domain, \cite{7472128} showed that feature representations learned on speech data could be used to classify sound events. Their results verify that cross-acoustic transfer learning performs significantly better than a baseline trained from scratch. TRILL \cite{Shor_2020} showed great results of pre-training deep learning models with an unsupervised task on a big dataset of speech samples. Its learned representations exceeded SOTA performance on several downstream tasks with datasets of limited size.
We take a similar approach that is heavily based on \textit{Jukebox} \cite{dhariwal2020jukebox}. It uses multiple VQ-VAEs to compress raw audio into discrete codes. They are trained self-supervised, on a large dataset of about 1.2 million songs, requiring the compute power of 256 V100 GPUs to train in an acceptable time. Our experiments show that \textit{Jukebox's} learned representations can be used for the task of source separation.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\textwidth]{architecture_updated.jpg}
\caption{Visualization of the proposed transfer learning model architecture.}
\label{fig:fig1}
\end{figure*}
\section{Method}
\subsection{Architecture}
Our architecture utilizes \textit{Jukebox's} \cite{dhariwal2020jukebox} standard variant of the publicly available pre-trained VQ-VAE model. \textit{Jukebox} uses three separated VQ-VAEs. We use only the smallest one with the strongest compression. It employs dilated 1-D convolutions in multiple residual blocks to find a less complex sequence representation of music. An audio sequence $x_t$ gets mapped by an encoder $E_1$ to a latent space $e_t=E_1(x_t)$ of 64 dimensions so that it can be mapped to the closest prototype vector in a collection $C$ of vectors called \textit{codebook}. These 2048 prototype vectors, denoted $c_{st}$, are learned in training and help to form a high-quality representation.
The rate of compression for a sequence is called the hop length, for which a value of 8 is used. It depends on the stride values of the convolutional layers. We set the stride value to 2 as well as the downsampling to 3. All other values remain as defined in \cite{dhariwal2020jukebox}. After mapping to the codebook, a decoder $D$ aims to reconstruct the original sequence. In summary, equation (\ref{eq:1})
\begin{equation}
\label{eq:1}
y_t=D\left(\operatorname{argmin}_{c \in C}\left\|E_1(x_t)- c\right\|\right)
\end{equation}
describes a full forward pass through the VQ-VAE, where $ y_t $ is the prediction for an input sequence $x_t$ and $\|.\|$ is the euclidean norm. For further technical details on the used VQ-VAE architecture, refer to the paper of Dhariwal et al. \cite{dhariwal2020jukebox}. The model is fine-tuned on data for one stem, learning good representations for a single instrument. In addition, we train a second encoder $E_2$, identical to the one already mentioned, to project an input sequence of the mixture to the space already known by the codebook and decoder. For deployment, the encoder of the VQ-VAE is switched with the second one, effectively mapping from the full mixture to one stem.
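Equation (\ref{eq:1}) amounts to a nearest-neighbour lookup in the codebook. A minimal NumPy sketch of this quantization step (the random codebook and toy latents are illustrative; only the dimensions, 64 and 2048, follow the text):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 64, 2048                     # latent dimension and codebook size, as in the text
codebook = rng.normal(size=(K, d))  # prototype vectors c, learned during training

def quantize(e):
    """Map each latent vector e_t to its closest codebook prototype
    (the argmin over c in C of ||e_t - c||, cf. equation (1))."""
    # squared Euclidean distances of each latent to all K prototypes
    d2 = ((e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d2.argmin(axis=1)       # discrete codes, one per time step
    return codebook[codes], codes

e = rng.normal(size=(16, d))        # a toy batch of encoder outputs E1(x_t)
e_q, codes = quantize(e)
print(e_q.shape, codes.shape)       # (16, 64) (16,)
```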
\subsection{Data}
Our models are trained on the MUSDB18-HQ \cite{musdb18-hq} dataset, also used in the Music Demixing Challenge \cite{mitsufuji2021music}. It consists of 150 full-length songs, sampled at 44.1 kHz, providing the full audio mixture and four stems, 'vocals', 'bass', 'drums', and 'other' for each sample, which can be regarded as ground truth in the context of source separation. We train on the full train set composed of 100 songs; testing is done on the remaining 50.
\subsection{Training}
For each stem $i=1,\dots,4$, we train a model in two phases (see Fig.~\ref{fig:fig1}). In
the first phase, the model is trained on data that present the chosen
stem in isolation (i.e. not embedded in a mixture). This produces a
VQ-VAE with a "single stem encoder" ($\text{SE}_i$) that can map a single
stem into a good latent representation, followed by a "stem decoder"
($\text{SD}_i$) tuned to reconstruct the input after the "discretization
bottleneck" as faithfully as possible. Training of each VQ-VAE is based
on the same three losses as chosen in the original Jukebox paper
\cite{dhariwal2020jukebox}: $L = L_{recons} + L_{codebook} + \beta L_{commit}$. However, our
final goal is to process each stem when it is part of a mixture of all
four stems. Such embedding will introduce distortion of each stem,
requiring us to replace each single stem encoder $\text{SE}_i$ from phase 1 by a
corresponding "mixture stem encoder" ($\text{ME}_i$) that is trained in phase
2 to map its changed (mixture-embedded) stem-$i$ input onto the
representation (stem-$i$ codebook prototypes) created in phase 1 by the
$\text{SE}_{i}$-$\text{SD}_i$ encoder-decoder pair. So, for each stem $i$
(now omitting index $i$ in the following) and for each training sample ($x_{mt}$:
the sequence of the mixed audio, $x_{st}$: the sequence of stem audio),
we feed $x_{st}$ to the already trained encoder $\text{SE}$, producing $e_{st}$.
Separately, the full mixture $x_{mt}$ is passed through the new encoder
$\text{ME}$, yielding $e_{mt}$. Now, we can backpropagate through $\text{ME}$ using the MSE
loss $\|e_{st}-e_{mt}\|^2$ (keeping $\text{SE}$ fixed throughout phase 2). Finally, we
obtain our mixture-adapted final VQ-VAE by concatenating the
trained mixture stem encoder $\text{ME}$ with the stem decoder $\text{SD}$. Note
that this procedure is carried out for each of the four stems,
yielding four correspondingly optimized "stem mixture encoder-decoders" that together provide our decomposition of the mixture input into its stem constituents. On a more technical note, in both training stages and deployment, the data is processed chunk-wise, with a size of about 9 seconds.
For a clear overview of the content of this section, refer to Figure~\ref{fig:fig1}.
For all conducted experiments that will be defined in the next section, two Tesla GPUs with 16 GB of VRAM each are used. The length of each input sequence is equal to 393216 data points, as used by \textit{Jukebox}. The batch size is equal to 4.
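The second training phase can be sketched as follows (a simplified NumPy sketch in which linear maps stand in for the convolutional encoders SE and ME; all shapes and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the encoders (the real ones are convolutional VQ-VAEs).
W_se = rng.normal(size=(64, 128)) / np.sqrt(128)  # stem encoder SE, frozen after phase 1
W_me = rng.normal(size=(64, 128)) / np.sqrt(128)  # mixture encoder ME, trained in phase 2

x_stem = rng.normal(size=(4, 128))          # x_st: isolated stem chunks
x_mix = x_stem + rng.normal(size=(4, 128))  # x_mt: the same chunks inside a mixture

lr = 0.05
for step in range(200):
    e_st = x_stem @ W_se.T                  # target latents from the frozen SE
    e_mt = x_mix @ W_me.T                   # latents of the mixture from ME
    diff = e_mt - e_st
    loss = (diff ** 2).mean()               # MSE loss ||e_st - e_mt||^2
    W_me -= lr * (2 / diff.size) * diff.T @ x_mix  # gradient step on ME only
print(loss)  # ~0: ME now maps the mixture onto SE's stem latents
```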
\begin{strip}
\begin{equation}
\label{eq:2}
SDR_{\text{stem}} = 10\log_{10}\left( \cfrac{\sum_n(s_{\text{stem,leftchannel}}(n))^2+\sum_n(s_{\text{stem,rightchannel}}(n))^2}{\sum_n(s_{\text{stem,leftchannel}}(n)-\hat{s}_{\text{stem,leftchannel}}(n))^2+\sum_n(s_{\text{stem,rightchannel}}(n)-\hat{s}_{\text{stem,rightchannel}}(n))^2}\right)
\end{equation}
\end{strip}
To benchmark the conducted experiments, the signal-to-distortion ratio (SDR) metric is used, which is a common metric in other SOTA papers \cite{DBLP:journals/corr/abs-1909-01174,Stoeter2019,Hennequin2020,sawata2021all,stoller2018waveunet}.
It is computed by equation (\ref{eq:2}), as stated in \cite{mitsufuji2021music}, where $s_{\text{stem}}(n)$ are the values of the ground truth and $\hat{s}_{\text{stem}}(n)$ are the values of the prediction. 'Total' SDR is the mean SDR for all stems.
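Equation (\ref{eq:2}) is straightforward to compute; a small NumPy sketch (the toy stereo signals are illustrative):

```python
import numpy as np

def sdr(ref, est, eps=1e-12):
    """Stereo signal-to-distortion ratio as in equation (2).

    ref, est: arrays of shape (2, n) holding the left/right channels of the
    ground-truth stem s and the prediction s-hat."""
    num = np.sum(ref ** 2)
    den = np.sum((ref - est) ** 2) + eps  # eps guards against division by zero
    return 10 * np.log10(num / den + eps)

rng = np.random.default_rng(0)
truth = rng.normal(size=(2, 44100))
noisy = truth + 0.1 * rng.normal(size=(2, 44100))
print(sdr(truth, noisy))   # ~20 dB: error power is ~1% of the signal power
print(sdr(truth, truth))   # very large: near-perfect reconstruction
```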
\section{Experiments and Results}
\begin{table}[htb]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{ |c||c|c|c|c|c| }
\hline
\multicolumn{6}{|c|}{SDR Values} \\
\hline
Method& Drum & Bass & Other & Vocal & Total\\
\hline
DEMUCS & 6.509 & 6.470 & 4.018 & 6.496 & 5.873 \\
Our Approach & 4.925 & 4.073 & 2.695 & 5.060 & 4.188 \\
Wave-U-Net & 4.22 & 3.21 & 2.25 & 3.25 & 3.23 \\
ScaledMixturePredictor & 0.578 & 0.745 & 1.136 & 1.090 & 0.887 \\
\hline
\end{tabular}
}
\caption{Comparison of SDR values per stem and in total. Our approach outperforms both the \textit{ScaledMixturePredictor}, the basic baseline in the Music Demixing Challenge \cite{mitsufuji2021music}, and \textit{Wave-U-Net} \cite{stoller2018waveunet}, a classic approach to source separation in the waveform domain, while \textit{Demucs}~\cite{DBLP:journals/corr/abs-1909-01174} achieves current SOTA performance on the dataset.}
\label{tab:sdr_tab}
\end{table}
The main point of this paper is to demonstrate that it is
possible to get decent audio quality by using transfer learning. For
this, we carried out two different experiments (i) and (ii) on the four audio stems.
In experiment (i) we trained each audio stem's stem encoder SE from
scratch without using any pre-trained weights. In a second experiment
(ii) we trained the SEs with initial weights chosen as the pre-trained
weights of \textit{Jukebox}. For all these VQ-VAEs we pick the checkpoint
80K and train in phase 2 the corresponding mixture encoders ME. For
these, and in both experiments (i) and (ii), we initialized the weights
randomly. For the first experiment, we found that all the results
are low, and no good audio quality is reached for the extracted
stems; the SDR values are equal to or near 0 for all four stems. For
the second experiment, the model converges after 32 hours of
training in total on two Tesla GPU units with 16 GB of VRAM each.
Figure~\ref{fig:res_1} demonstrates decent SDR values for networks trained with pre-trained weights in comparison to others trained with randomly initialized weights from scratch. It can also be deduced that, in order to get fairly good SDR values, it is enough to train until early checkpoint values, such as 20K. The checkpoint 20K is reached after 16 hours for each of the two models on two Tesla GPUs.\newline
Table \ref{tab:sdr_tab} gives a comparison of different approaches for audio signal separation. Our approach achieves comparable results here when benchmarked against other state-of-the-art networks.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{fig_1.pdf}
\caption{SDR results of the 4 audio signal stems for the second experiment.}
\label{fig:res_1}
\end{figure}
\section{Conclusion}
In this work, we demonstrate how to use transfer learning for the problem of audio signal processing and in particular for demixing an audio signal from a single mixed audio channel into four different stems: 'drums', 'bass', 'vocals' and 'other'. We show that it is possible to succeed with a small-sized dataset and relatively short training time on only two GPUs, using pre-trained weights from \textit{Jukebox}~\cite{dhariwal2020jukebox}. Similar results were impossible to obtain in comparable time when training from scratch, showing potential to reduce training times and improve results by utilizing transfer learning in the audio domain.
\bibliographystyle{IEEEbib}
\section{Introduction}
The central aim of scientific inquiry has been to deduce new concepts from existing knowledge or generalized observations. The biological sciences offer numerous such challenges. The rise of deep learning has spurred major interest in using machine learning to explore uncharted molecular and functional spaces in biology and medicine, ranging from `deorphanizing' G-protein coupled receptors\cite{DISAE} and translating cell-line screens to patient drug responses\cite{few-shot-repurposing}\cite{he2021robust}, to predicting novel protein structures\cite{alphafold1}\cite{alphafold2}\cite{rosetta-fold}, to identifying new cell types from single-cell omics data\cite{cellline}. Illuminating the dark space of human knowledge is a fundamental problem that one can attempt to address via deep learning---that is, to generalize a ``well-trained'' model to unseen data that lies out-of-the-distribution (OOD) of the training data, in order to successfully predict outcomes from conditions that the model has never before encountered. While deep learning is capable, in theory, of simulating any functional mapping, its generalization power is notoriously limited in the case of distribution shifts\cite{causal-repr}.
The training of a deep learning model starts with a domain-specific model architecture. The final model instance that is selected, and its performance, are determined by a series of data-dependent design choices, including model initialization, data used for training, validation, and testing, optimization of loss function, and evaluation metrics. Each of these design choices impacts the generalization power of a trained model. The development of several recent deep learning-based approaches---notably transfer learning\cite{transfer-OOD-1}, self-supervised representation learning\cite{albert},{\linebreak[0]} and meta-learning{\linebreak[0]}\cite{maml}\cite{DBLP:meta-survey}{\linebreak[0]}---has been motivated by the OOD challenge. However, each of these methods focuses on only one aspect in the training pipeline of a deep model. Causal learning and mechanism-based modeling could be a more effective solution to the OOD problem \cite{causal-repr}, but at present these approaches can be applied only on modest scales because of data scarcity and limited domain knowledge. Solving large-scale OOD problems in biomedicine, via machine learning, would benefit from a systematic framework for integrative, beginning-to-end model development, training, and testing.
Here, we propose a new deep learning framework, called {\textit{Portal Learning}}, that systematically addresses the three OOD vulnerabilities in a training pipeline: specifically, we employ biology-inspired model initialization, optimization on an OOD loss, and model selection methods. We define `{\textit{portal}}' as a model with an initialized instance that is (preferably) close to the global optimum in some learning `{\textit{universe}}'. The {\textit{universe}} includes a specific input data-set, specific tasks, and a model architecture that provides a functional mapping from the data-set (and associated distributions) to the tasks. Note that, even with the same model architecture, changes in a pipeline's associated data-set correspond to changes in the universe. Portal Learning takes a global view to design training schemes that are task-specific and use domain knowledge as constraints to guide the exploration of the learning space.
To assess the utility of Portal Learning, we implemented this concept as a concrete framework, termed {\textit{PortalCG}}, for predicting small-molecule binding to dark gene families (i.e., those with no annotated ligands). Despite tremendous progress in high-throughput screening, the majority of chemical genomics space remains unexplored or `dark' \cite{dark2019} (more details in results). Elucidating dark gene families can illuminate many fundamental but only poorly characterized biological pathways, such as microbiome-host interactions mediated by metabolite-protein interactions. Such efforts could also provide novel approaches for identifying new druggable targets and discovering effective therapeutic strategies for currently incurable diseases; for instance, in Alzheimer's disease (AD) many disease-associated genes have been identified from multiple omics studies, but are currently considered un-druggable \cite{disgenet}. Accurately predicting chemical-protein interactions (CPIs) on a genome-wide scale is a challenging OOD problem\cite{DISAE}. If one considers only the reported area under the receiver operating characteristic curve (AUROC), which has achieved 0.9 in many state-of-the-art methods\cite{deepaffinity}\cite{DeepDTA}, it may seem the problem has been solved. However, the performance has been primarily measured in scenarios where the data distribution in the test set does not differ significantly from that in the training set, in terms of identities of proteins or types of chemicals. Few sequence-based methods have been developed and evaluated for an out-of-gene family scenario, where proteins in the test set belong to different (non-homologous) gene families than in the training set; this sampling bias is even more severe in considering cases where the new gene family does not have any reliable three-dimensional (3D) structural information. 
Therefore, one can fairly claim that all existing work has been confined to just narrow regions of chemical genomics space, without validated generalizability into the dark genome.
Rigorous benchmarking studies, reported herein, show that PortalCG significantly outperforms the leading methods that are available for predicting ligand binding to (dark) proteins. We applied PortalCG to predict candidate drug compounds for undrugged disease genes in the dark human genome, and we prioritized hundreds of undrugged genes that can be efficaciously targeted by existing drugs (notably, many of which involve alternative splicing and transcription factors). These novel genes and their lead compounds provide new opportunities for drug discovery. Furthermore, using PortalCG, we identified polypharmacological agents that might leverage novel drug targets in order to disrupt interactions between SARS-CoV-2 and human proteins. The rapid emergence of SARS-CoV-2 variants has posed a significant challenge to existing vaccine and anti-viral development paradigms. Gordon et al. experimentally identified 332 human proteins that interact with the SARS-CoV-2 virus\cite{sars2-interactor}. This PPI map provides unique opportunities for anti-SARS-CoV-2 drug discovery: targeting the host proteins involved in PPIs can disrupt human SARS-CoV-2 interactions, thereby thwarting the onset of COVID-19. By not aiming to directly kill virions, this indirect strategy should lessen the selection pressure on viral genome evolution. A polypharmacological agent that interacts moderately strongly with multiple human proteins could be a potentially quite effective and safe anti-COVID-19 therapeutic: on the one hand, the normal functions of human proteins should not be significantly perturbed while, on the other hand, the interactions required for successful SARS-CoV-2 infection would be inhibited. Here, we virtually screened compounds in the Drug Repurposing Hub\cite{drug-hub} against the 332 human SARS-CoV-2 interactors. Two drugs, Fenebrutinib and NMS-P715, ranked highly; interestingly, both of these anti-tumorigenic compounds inhibit kinases.
Their interactions with putative human targets were supported by further (structure-based) analyses of protein-ligand binding poses.
In summary, the contributions of this work are three-fold:
\begin{enumerate}
\item A novel, generalized training scheme, {\textit{Portal Learning}}, is proposed as a way to guide biology-inspired systematic design in order to improve the generalization power of machine learning on OOD problems, such as is found in the dark regions of molecular/functional space.
\item To concretely illustrate the Portal Learning approach, a specific algorithm, PortalCG, is proposed and implemented. Comprehensive benchmark studies demonstrate the promise of PortalCG when applied to OOD problems, specifically for exploring the dark regions of chemical genomics space.
\item Using PortalCG, we shed new light on unknown protein functions in dark genomes (viz. small molecule-binding properties), and open new avenues in polypharmacology and drug repurposing; as demonstrated by identifying novel drug targets and lead compounds for AD and anti-SARS-CoV-2 polypharmacology.
\end{enumerate}
\section{Conceptual basis of Portal Learning}
\begin{figure}[ht]
\centering
\includegraphics[width=0.90\linewidth]{figs/OOD-ML.png}
\caption{Illustration of two of the three major Portal Learning components for OOD problems, End-to-end step-wise transfer learning (STL) and out-of-cluster meta-learning (OOC-ML), using the prediction of out-of-gene family chemical-protein interactions (CPIs) as an example:
\textbf{A. STL}: 3D structure of protein ligand binding site is in the center connecting protein sequences to CPIs. There are two portals, the first traveling from the protein sequence universe to the binding site structure universe by pre-training a protein language model that is optimal in the protein sequence universe and leads to a model initialization instance closer to the global optimum in the binding site structure universe. The optimization based on this initialized instance leads to the discovery of the second portal through which protein function universe gets a model initialization instance closer to its own global optimum.
\textbf{B. Problem formulation of OOC-ML in comparison with MAML}: Different from MAML where training data is grouped based on the task, the training data in OOC-ML is clustered in the instance space. Instead of decomposing the data in all clusters into support and query set like MAML, there is only a query set in certain training clusters and all testing clusters in OOC-ML to simulate OOD scenario.
\textbf{C. Optimization of OOC-ML in comparison with MAML}: Intuitively, OOC-ML first performs local optimizations on each cluster of training data with the support/query decomposition, then meta optimizations on the training set that has only query sets by ensembling the knowledge learned from the local optimization. The optimized model is applied to the test data in a zero-shot learning setting. In contrast, the meta-optimization in MAML requires query sets in the setting of few-shot learning.
}
\label{fig:wholepipeline}
\end{figure}
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|l|}
\hline
\textbf{data split} &
\textbf{Common practice} &
\textbf{\begin{tabular}[c]{@{}c@{}}classic scheme\\ applied in OOD\end{tabular}} &
\textbf{Portal learning} &
\multicolumn{1}{c|}{\textbf{specification}} \\ \hline
\multirow{2}{*}{train} & IID train & IID train & / & each batch is from the same distribution \\ \cline{2-5}
& / & / & OOD train & differentiate sub-distributions in each batch \\ \hline
\multirow{2}{*}{dev} & IID-dev & IID-dev & / & from the same distribution as the train set \\ \cline{2-5}
& / & / & OOD-dev & from a different distribution from the training set \\ \hline
\multirow{2}{*}{test} & IID-test & / & / & from the same distribution as the training set \\ \cline{2-5}
& / & OOD-test & OOD-test & from a different distribution from both OOD-dev and training set \\ \hline
\end{tabular}%
}
\caption{Data split for stress model instance selection}
\label{tab:4split}
\end{table}
To enable the exploration of dark regions of chemical and biological space, Portal Learning rests upon a systematic, well-principled training strategy, the underpinnings of which are shown in Figure 1. In Portal Learning, a model architecture together with a data set and a task defines a \textbf{universe}. Each universe has some global optimum with respect to the task based on a pre-defined loss function. The model-initialized instance in a universe---which could be a local optimum in the current universe, but which facilitates moving the model to the global optimum in the ultimately targeted universe---is called a \textbf{portal}. The portal is similar to a catalyst that lowers the energy barrier via a transition state for a chemical reaction to occur. The dark chemical genomics space cannot be explored effectively if the learning process is confined only to the observed universe of protein sequences that have known ligands, as the known data are highly sparse and biased (details in the Results section). Hence, it is critical to successfully identify portals into the dark chemical genomics universe starting from the observed protein sequence and structure universe. For clarity and ease of reference, key terms related to Portal Learning are given in the Supplemental Materials.
The remainder of this section describes the three key components of the Portal Learning approach---namely, end-to-end step-wise transfer learning (STL), out-of-cluster meta-learning (OOC-ML), and stress model selection.
\textbf{End-to-end step-wise transfer learning (STL)}. Information flow in biological systems generally involves multiple intermediate steps, from a source instance to a target. For example, a discrete genotype (source) ultimately yields a downstream phenotype (target) via many steps of gene expression, in some environmental context. For predicting genotype-phenotype associations, explicit machine learning models that represent information transmission from DNA to RNA to cellular phenotype are more powerful than those that ignore the intermediate steps \cite{di-cleit}. In Portal Learning, transcriptomics profiles can be used as a portal to link the source genetic variation (e.g., variants, SNPs, homologs, etc.) and target cellular phenotype (e.g., drug sensitivity). Using deep neural networks, this process can be modeled in an end-to-end fashion.
\textbf{Out-of-cluster meta-learning (OOC-ML)}. Even if we can successfully transfer the information needed for the target through intermediate portals from the source universe, we still need additional portals to reach those many sparsely-populated regions of the dark universe that lack labeled data in the target. Inspired by Model Agnostic Meta-Learning (MAML)\cite{maml}, we designed a new OOC-ML approach to explore the dark biological space. MAML cannot be directly applied to Portal Learning in the context of the OOD problem because it is designed for few-shot learning under a multi-task formulation. Few-shot learning expects to have a few labeled samples from the test data set to update the trained model during inference for a new task. This approach cannot be directly applied to predicting gene functions of dark gene families where the task (e.g., binary classification of ligand binding) is unchanged, but rather there are no labeled data for an unseen distribution that may differ significantly from the training data. In a sense, rather than MAML's "few-shot/multi-task" problem context, mapping dark chemical/biological space is more of a "zero-shot/single-task" learning problem. A key insight of OOC-ML is to define sub-distributions (clusters) for the labeled data in the source instance universe. An example demonstrated in this paper is to define sub-distributions using Pfam families when the source instance is a protein sequence. Intuitively, OOC-ML involves a two-stage learning process. In the first stage, a model is trained using each individual labeled cluster (e.g., a given Pfam ID), thereby learning whatever knowledge is (implicitly) specific to each cluster. In the second stage, all trained models from the first stage are combined and a new ensemble model is trained, using labeled clusters that were not used in the first stage.
In this way, we may extract common intrinsic patterns shared by all clusters and apply the learned essential knowledge to dark ones.
\textbf{Stress model selection}. Finally, training should be stopped at a suitable point in order to avoid overfitting. This was achieved by stress model selection. Stress model selection is designed to recapitulate an OOD scenario by splitting the data into OOD train, OOD development, and OOD test sets as listed in Table \ref{tab:4split}; in this procedure, the data distribution for the development set differs from that of the training data, and the distribution of the test data set differs from both the training and development data.
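The split described above assigns whole clusters, not individual samples, to each partition, so the development and test distributions differ from training by construction. A minimal sketch (the function name and fractions are illustrative assumptions, not taken from the paper):

```python
import random

def stress_split(cluster_ids, frac_dev=0.1, frac_test=0.1, seed=0):
    """Split whole clusters (e.g. Pfam IDs) into disjoint OOD train/dev/test
    sets; no cluster ever appears in more than one partition."""
    ids = sorted(set(cluster_ids))
    random.Random(seed).shuffle(ids)
    n_dev = max(1, int(frac_dev * len(ids)))
    n_test = max(1, int(frac_test * len(ids)))
    dev = set(ids[:n_dev])
    test = set(ids[n_dev:n_dev + n_test])
    train = set(ids[n_dev + n_test:])
    return train, dev, test

train, dev, test = stress_split([f"PF{i:05d}" for i in range(100)])
```

Model selection then monitors the OOD-dev clusters, and the OOD-test clusters are touched only once, for the final report.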
For additional details and perspective, the conceptual and theoretical basis of Portal Learning is further described in the Methods section of the Supplemental Materials.
\section{Results and Discussion}
\subsection{Overview of PortalCG}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/in-a-nut-shell.png}
\caption{Scheme of PortalCG. PortalCG enables the prediction of chemical-protein interactions (CPIs) for dark genes across gene families. It includes three key components: end-to-end transfer learning following the sequence-structure-function paradigm, out-of-cluster (OOC) meta-learning, and stress model selection.}
\label{fig:in-a-nutshell}
\end{figure}
We implemented the Portal Learning concept as a concrete model, PortalCG, for exploring the dark chemical genomics space. In terms of Portal Learning's three key components (STL, OOC-ML, and stress model selection), PortalCG makes the following design choices (see also Figure \ref{fig:in-a-nutshell}).
\textbf{End-to-end sequence-structure-function STL}. The function of a protein---e.g., serving as a target receptor for ligand binding---stems from its three-dimensional (3D) shape and dynamics which, in turn, are ultimately encoded in its primary amino acid sequence. In general, information about a protein's structure is more powerful than purely sequence-based information for predicting its molecular function because sequences drift/diverge far more rapidly than do 3D structures on evolutionary timescales. Although the number of experimentally-determined structures continues to increase exponentially, and AlphaFold2 can now reliably predict 3D structures of most single-domain proteins, it nevertheless remains quite challenging to directly use protein structures as input for predicting ligand-binding properties of dark proteins. In PortalCG, protein structure information is used as a portal to connect a source protein sequence and a corresponding target protein function (Figure \ref{fig:wholepipeline}A). We begin by performing self-supervised training to map tens of millions of sequences into a universal embedding space, using our recent {\textit{distilled sequence alignment embedding}} (DISAE) algorithm \cite{DISAE}. Then, 3D structural information about the ligand-binding site is used to fine-tune the sequence embedding. Finally, this structure-regularized protein embedding is used as a hidden layer for supervised learning of cross-gene family CPIs, following an end-to-end sequence-structure-function training process. By encapsulating the role of structure in this way, inaccuracies and uncertainties in structure prediction are `insulated' and will not propagate to the function prediction.
\textbf{Out-of-cluster meta-learning}. In the OOC-ML framework, Pfam gene families provide natural clusters as sub-distributions. In each Pfam family, the data is split into a support set and a query set, as shown in Figure \ref{fig:wholepipeline}(B). Specifically, a model is trained for a single Pfam family independently to reach a local minimum, using the support set of that family, as shown in the inner-loop IID optimization in Figure \ref{fig:wholepipeline}(C.1). Then a query set from the same Pfam family is evaluated on the locally optimized model to obtain a loss on the local loss landscape, i.e., the outer-loop IID meta-optimization in Figure \ref{fig:wholepipeline}(C.1). Local losses from the query sets of multiple Pfam families are aggregated to compute the loss on a global loss landscape, i.e., the meta-optimization in Figure \ref{fig:wholepipeline}(C.1). Clusters with very few data points have no support set and hence participate only in the optimization on the global loss landscape. Many aggregation schemes are possible; a simple choice is the average loss. The aggregated loss is used to optimize the model on the global loss landscape. Note that weights learned on each local loss landscape are memorized during the global optimization; in our implementation, this is realized by creating a copy of the model trained in each family's local optimization. In this way, the locally learned knowledge is passed to the global loss landscape only through the query-set loss.
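The inner/outer loop structure above can be sketched on a deliberately tiny 1-D model (a first-order, MAML-style approximation on synthetic data; this is an assumption-laden illustration, not the PortalCG training code):

```python
import numpy as np

def ooc_ml_round(theta, clusters, inner_lr=0.1, outer_lr=0.1):
    """One meta-optimization round for a toy model y = theta * x with squared
    loss. Each cluster supplies a (support, query) pair of sets."""
    meta_grad = 0.0
    for (xs, ys), (xq, yq) in clusters:
        # Inner loop: adapt a *copy* of the global weights on the support set.
        grad_support = np.mean(2 * (theta * xs - ys) * xs)
        theta_local = theta - inner_lr * grad_support
        # Outer loop: only the query-set loss of the locally adapted copy
        # flows back to the global parameters (first-order approximation).
        meta_grad += np.mean(2 * (theta_local * xq - yq) * xq)
    # Aggregate local losses (here: average) and update the global model.
    return theta - outer_lr * meta_grad / len(clusters)

rng = np.random.default_rng(1)
theta_star = 2.0  # ground-truth parameter shared by all clusters

def make_cluster():
    xs, xq = rng.normal(size=8), rng.normal(size=8)
    return (xs, theta_star * xs), (xq, theta_star * xq)

theta = 0.0
for _ in range(50):
    theta = ooc_ml_round(theta, [make_cluster() for _ in range(4)])
```

Because each local adaptation is made on a throwaway copy, the global parameters only absorb what generalizes across clusters' query sets, which is the intuition behind applying the learned knowledge to dark clusters.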
\textbf{Stress model selection}. The final model was selected using Pfam families that were not used in the training stage (Figure \ref{fig:in-a-nutshell}, right panel).
The Supplemental Materials provide further methodological details, covering data pre-processing, the core algorithm, model configuration, and implementation details.
\subsection{There are significant unexplored dark spaces in chemical genomics}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/dark-space-bubble.png}
\caption{\textbf{Chemical genomics space in statistics: the ratio of proteins that have at least one known ligand in each Pfam family.} Each colored bubble represents a Pfam family. The size of a bubble is proportional to the total number of proteins in the Pfam family. The y-axis is the ratio of proteins with known ligand(s) in a Pfam family. Around $2,000$ Pfam families have at least one known small-molecule ligand. Most of these Pfam families have fewer than $1\%$ of proteins with known ligands. Furthermore, around 90\% of the total $19,179$ Pfam families lie in the dark chemical genomics space, without any known ligand information.}
\label{fig:darkspace}
\end{figure}
We inspected the known CPIs between (i) molecules in the manually-curated ChEMBL database, which covers only a small portion of all chemical space, and (ii) proteins annotated in Pfam-A \cite{Pfam}, which represents only a narrow slice of the whole protein sequence universe. The ChEMBL26\cite{chembl26} database supplies $1,950,765$ chemicals paired to $13,377$ protein targets, constituting $15,996,368$ known interaction pairs. Even for just this small portion of chemical genomics space, the number of unexplored CPIs is enormous, as can be seen from the dark region in Figure \ref{fig:darkspace}. Approximately 90\% of Pfam-A families do not have any known small-molecule binder. Even in Pfam families with annotated CPIs (e.g., GPCRs), there exists a significant number of `orphan' receptors with unknown cognate ligands (Figure \ref{fig:darkspace}). Fewer than $1\%$ of chemicals bind to more than two proteins, and $<0.4\%$ of chemicals bind to more than five proteins, as shown in Supplemental Figures S1, S2 and S3. Because protein sequences and chemical structures in the dark chemical genomics space could be significantly different from those of the known CPIs, predicting CPIs in the dark space is an archetypal, unaddressed OOD problem.
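The per-family coverage statistic plotted in Figure \ref{fig:darkspace} amounts to a simple aggregation; a sketch with hypothetical identifiers (the function and example IDs are illustrative, not the paper's pipeline):

```python
from collections import defaultdict

def family_coverage(protein_to_family, proteins_with_ligand):
    """Fraction of proteins in each family (e.g. Pfam) with >= 1 known ligand."""
    total = defaultdict(int)
    lit = defaultdict(int)
    for prot, fam in protein_to_family.items():
        total[fam] += 1
        if prot in proteins_with_ligand:
            lit[fam] += 1
    return {fam: lit[fam] / n for fam, n in total.items()}

# Toy example: two proteins in PF1 (one with a known ligand), one in PF2 (none).
cov = family_coverage({"p1": "PF1", "p2": "PF1", "p3": "PF2"}, {"p1"})
```

Families whose coverage is zero correspond to the "dark" families without any known ligand information.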
\subsection{Portal Learning significantly outperforms state-of-the-art approaches to predicting dark CPIs}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[scale=0.55]{figs/AUC-curves.PNG}{(a)}
\end{subfigure}
\vfill
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[scale=0.7]{figs/deployment-gap.PNG}{(b)}
\end{subfigure}
\caption{Comparison of PortalCG with the state-of-the-art method DISAE as baseline using the shifted evaluation test. (a) ROC and Precision-Recall curves for the ``best'' model instance selected by the stress test; (b) Deployment gaps: the deployment gap of PortalCG remains steadily around zero as training steps increase, while the deployment performance of DISAE deteriorates.}
\label{fig:auc-curve}
\end{figure}
When compared with the state-of-the-art method DISAE\cite{DISAE}, which was already shown to outperform other leading methods for predicting CPIs of orphan receptors, PortalCG demonstrates superior performance in terms of both Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves, as shown in Figure \ref{fig:auc-curve}(a). Because the ratio of positive and negative cases is imbalanced, the PR curve is more informative than the ROC curve. The PR-AUC of PortalCG and DISAE is 0.714 and 0.603, respectively. In this regard, the performance gain of Portal Learning (18.4\%) is significant (p-value < $1e-40$). Performance breakdowns for binding and non-binding classes can be found in Supplemental Figure S4.
PortalCG exhibits much higher recall and precision scores for positive cases (i.e., a chemical-protein pair that is predicted to bind) versus negative, as shown in Supplemental Figure S4; this is a highly encouraging result, given that there are many more negative (non-binding) than positive cases. The deployment gap, shown in Figure \ref{fig:auc-curve}(b), is steadily around zero for PortalCG; this promising finding means that we can expect that, when applied to the dark genomics space, the performance will be similar to that measured using the development data set.
With the advent of high-accuracy protein structural models, predicted by AlphaFold2 \cite{alphafold2}, it now becomes feasible to use reversed protein-ligand docking (RPLD)\cite{huang2018reverse} to predict ligand-binding sites and poses on dark proteins, on a genome-wide scale. In order to compare our method with the RPLD approach, blind docking to putative targets was performed via Autodock Vina\cite{autodock}. After removing proteins that failed in the RPLD experiments (mainly due to extended structural loops), docking scores for 28,909 chemical-protein pairs were obtained. The performance of RPLD was compared with that of PortalCG and DISAE. As shown in Figure \ref{fig:auc-curve}(a), both ROC and PR for RPLD are significantly worse than for PortalCG and DISAE. It is well known that PLD suffers from a high false-positive rate due to poor modeling of protein dynamics, solvation effects, crystallized waters, and other challenges \cite{DTI-challenge-structure}; often, small-molecule ligands will indiscriminately `stick' to concave, pocket-like patches on protein surfaces. For these reasons, although AlphaFold2 can accurately predict many protein structures, the relatively low reliability of PLD still poses a significant limitation, even with a limitless supply of predicted structures \cite{virtual-screen-performance}. Thus, the direct application of RPLD remains a challenge for predicting ligand binding to dark proteins. PortalCG's end-to-end sequence-structure-function learning could be a more effective strategy: protein structure information is not used as a fixed input, but rather as an intermediate layer that can be tuned using various structural and functional information. From this perspective, again the role of protein structure in PortalCG can be seen as that of a portal (sequence$\rightarrow$function; Figure \ref{fig:wholepipeline}) and a regularizer (Figure \ref{fig:in-a-nutshell}).
\subsection{Both the STL and OOC-ML stages contribute to the improved performance of PortalCG}
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{c|}{\textbf{models}} &
\textbf{\begin{tabular}[c]{@{}c@{}}PR-AUC\\ (OOD-test set)\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}ROC-AUC\\ (OOD-test set)\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}ROC-AUC\\ Deployment Gap\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}PR-AUC\\ Deployment Gap\end{tabular}} \\ \hline
DISAE &
PortalCG w/o STL \& OOC-ML &
0.603±0.005 &
0.636±0.004 &
-0.275±0.016 &
-0.345±0.012 \\ \hline
variant 1 &
\begin{tabular}[c]{@{}l@{}}PortalCG w/o OOC-ML\end{tabular} &
0.629±0.005 &
0.661±0.004 &
--- &
--- \\ \hline
variant 2 &
\begin{tabular}[c]{@{}l@{}}PortalCG w/o STL\end{tabular} &
0.698±0.015 &
0.654±0.062 &
--- &
--- \\ \hline
PortalCG &
\begin{tabular}[c]{@{}l@{}}Portal learning\end{tabular} &
0.714±0.010 &
0.677±0.010 &
0.010±0.009 &
0.005±0.010 \\ \hline
\end{tabular}%
}
\caption{Ablation study of PortalCG.}
\label{tab:PRAUC}
\end{table}
To gauge the potential contribution of each component of PortalCG to the overall system effectiveness in predicting dark CPIs, we systematically compared the four models shown in Table \ref{tab:PRAUC}. Details of the exact model configurations for these experiments can be found in Supplemental Materials Table S10 and Figure S13. As shown in Table \ref{tab:PRAUC}, the higher PR-AUC of Variant 1 compared to the DISAE baseline is the direct gain from transfer learning through 3D binding-site information, all else being equal; yet this gain from transfer learning alone, without OOC-ML as an optimization algorithm in the target universe, is minor: Variant 1 achieves only a 4\% improvement, while Variant 2 yields a 15\% improvement. PortalCG (i.e., full Portal Learning), in comparison, has the best PR-AUC score. With all other factors held constant, the advantage of PortalCG appears to be the synergistic effect of both STL and OOC-ML. The performance gain measured by PR-AUC under a shifted evaluation setting is significant (p-value < 1e-40), as shown in Supplemental Figure S5.
We find that stress model selection is able to mitigate potential overfitting problems, as expected. Training curves for the stress model selection are shown in Supplemental Figures S4 and S6. As shown in Supplemental Figure S6, the baseline DISAE approach tends to over-fit during training: its IID-dev performance is higher than PortalCG's but deteriorates on the OOD test set. Hence, the deployment gap for the baseline is -0.275 and -0.345 on ROC-AUC and PR-AUC, respectively, while PortalCG's deployment gap is around 0.01 and 0.005, respectively.
\subsection{Application of PortalCG to explore dark chemical genomics space}
A production-level model using PortalCG was trained with ensemble methods for deployment. Details are in the Supplemental Methods section. The trained PortalCG model was applied to two case studies in order to assess its utility in the exploration of dark space. Given any protein-chemical pair, represented by its sequence and SMILES string respectively, the model makes a prediction along with a corresponding prediction score. To select high-confidence predictions, a histogram of prediction scores was built based on known pairs (Supplemental Figure S7). A threshold of $0.67$, corresponding to a false positive rate of 2.18e-05, was identified for selecting high-confidence positive predictions. Around 6,000 drugs from the Drug Repurposing Hub\cite{clue} were used in the screening. The remainder of this section describes the two case studies that were examined with PortalCG, namely (i) COVID-19 polypharmacology and (ii) the `undruggable' portion of the human genome.
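Deriving a score cutoff from a target false-positive rate can be sketched as follows: given scores of known negative pairs, pick the smallest cutoff whose empirical FPR stays within budget (the function and toy scores are illustrative assumptions, not the paper's calibration code):

```python
def threshold_for_fpr(negative_scores, target_fpr):
    """Smallest score cutoff whose empirical false-positive rate on known
    negative pairs does not exceed target_fpr."""
    s = sorted(negative_scores)
    # Allow at most a target_fpr fraction of negatives above the cutoff.
    k = int(len(s) * (1.0 - target_fpr))
    return s[min(k, len(s) - 1)]

neg = [i / 1000 for i in range(1000)]   # toy histogram of negative-pair scores
t = threshold_for_fpr(neg, 0.02)
fpr = sum(x > t for x in neg) / len(neg)
```

With real score histograms the same procedure yields the kind of stringent cutoff used here (FPR on the order of 1e-05).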
\subsubsection{COVID-19 polypharmacology}
In order to identify lead compounds that may disrupt SARS-CoV-2-Human interactions, we screened 5,886 approved and investigational drugs against the 332 human proteins known to interact with SARS-CoV-2. We considered a drug-protein pair as a positive hit and selected it for further analysis only when all models in the ensemble vote positive and the false positive rate does not exceed 2.18e-05. Drugs involved in these positive pairs were ranked according to the number of proteins to which they are predicted to bind. Detailed information is given in Supplemental Table S1. Most of these drugs are protein kinase inhibitors and are already in Phase 2 clinical trials. Among them, Fenebrutinib and NMS-P715 are predicted to bind to seven human SARS-CoV-2 interactors, as shown in Table \ref{tab:docking}.
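The unanimous-vote filter and the ranking by predicted target count can be sketched in a few lines (the scores and the assignment of pairs below are made-up toy values, not the screening results):

```python
from collections import Counter

def positive_hits(predictions, threshold):
    """Keep a (drug, protein) pair only if every ensemble member scores it
    at or above the cutoff, i.e., a unanimous positive vote."""
    return [(drug, prot) for drug, prot, scores in predictions
            if all(s >= threshold for s in scores)]

preds = [("fenebrutinib", "RRP43", [0.90, 0.80, 0.95]),
         ("fenebrutinib", "RRP46", [0.90, 0.50, 0.95]),  # one dissenting model
         ("nms-p715", "RRP43", [0.70, 0.75, 0.80])]
hits = positive_hits(preds, 0.67)

# Rank drugs by how many proteins they are predicted to bind.
ranking = Counter(drug for drug, _ in hits).most_common()
```

Requiring unanimity across the ensemble is what keeps the effective false-positive rate far below that of any single model.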
In order to elucidate how these drug molecules might associate with a SARS-CoV-2 interactor partner, we performed molecular docking for Fenebrutinib and NMS-P715. Structures of two SARS-CoV-2 interactors were obtained from the Protein Data Bank; the remaining five proteins do not have experimentally solved structures so their predicted structures (via AlphaFold2) were used for docking. For most of these structures, the binding pockets are unknown. Therefore, blind docking was employed, using Autodock Vina\cite{autodock} to search the full surfaces (the accessible molecular envelope) and identify putative binding sites of Fenebrutinib and NMS-P715 on these interactors. Docking conformations with the best (lowest) predicted binding energies were selected for each protein; the respective binding energies are listed in Table \ref{tab:docking}.
Components of the exosome complex are predicted targets for both Fenebrutinib and NMS-P715. The exosome complex is a multi-protein, intracellular complex which is involved in degradation of many types of RNA molecules (e.g., via 3'$\rightarrow$5' exonuclease activities). As shown in Figure \ref{fig:docking conformation}, the subunits of the exosomal assembly form a central channel; RNA passes through this region as part of the degradation/processing. Intriguingly, SARS-CoV-2's genomic RNA has been found to be localized in the exosomal cargo, suggesting a key mechanistic role for the channel region in SARS-CoV-2 virion infectivity pathways \cite{Exosomes}. Fenebrutinib and NMS-P715 were also predicted to bind to a specific exonuclease, RRP43, of the exosome complex, while NMS-P715 was also predicted to bind yet another exonuclease, RRP46.
The predicted binding poses for Fenebrutinib and NMS-P715 with the exosomal complex components are shown in Figure \ref{fig:docking conformation}. The physicochemical/interatomic interactions between these two drugs and the exosome complex components are also schematized as a 2D layout in this figure. The favorable hydrogen bond, pi-alkyl, pi-cation and Van der Waals interactions provide additional support that Fenebrutinib and NMS-P715 do indeed bind to these components of the exosome complex. The predicted binding poses and 2D interactions maps for Fenebrutinib and NMS-P715 with other targeted proteins are shown in Supplementary Figures S8, S9, and S10.
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Docking scores of \textbf{Fenebrutinib} binding to predicted targets} \\ \hline
Uniprot ID & Protein name & PDB ID & \begin{tabular}[c]{@{}c@{}}Docking score \\ (kcal/mol)\end{tabular} \\ \hline
Q96B26 & Exosome complex component RRP43 & 2NN6\_C & -7.9 \\ \hline
Q5JRX3 & Presequence protease, mitochondrial & 4L3T\_A & -10.8 \\ \hline
Q99720 & Sigma non-opioid intracellular receptor 1 & 5HK1\_A & -9.6 \\ \hline
Q5VT66 & Mitochondrial amidoxime-reducing component 1 & 6FW2\_A & -10.4 \\ \hline
P29122 & Proprotein convertase subtilisin/kexin type 6 & \multicolumn{1}{l|}{AF-P29122-F1 (157-622)} & -8.5 \\ \hline
Q96K12 & Fatty acyl-CoA reductase 2 & \multicolumn{1}{l|}{AF-Q96K12-F1 (1-478)} & -10.1 \\ \hline
O94973 & AP-2 complex subunit alpha-2 & \multicolumn{1}{l|}{AF-O94973-F1 (3-622)} & -8.6 \\ \hline
\multicolumn{4}{|c|}{Docking scores of \textbf{NMS-P715} binding to predicted targets} \\ \hline
Uniprot ID & Protein name & PDB ID & \begin{tabular}[c]{@{}c@{}}Docking score\\ (kcal/mol)\end{tabular} \\ \hline
Q9UN86 & Ras GTPase-activating protein-binding protein 2 & 5DRV\_A & -9.5 \\ \hline
P67870 & Casein kinase II subunit beta & 1QF8\_A & -8.6 \\ \hline
Q96B26 & Exosome complex component RRP43 & 2NN6\_C & -9.3 \\ \hline
P62877 & E3 ubiquitin-protein ligase RBX1 & 2HYE\_D & -7.9 \\ \hline
P61962 & DDB1- and CUL4-associated factor 7 & \multicolumn{1}{l|}{AF-P61962-F1 (9-341)} & -8.7 \\ \hline
Q9NXH9 & tRNA (guanine(26)-N(2))-dimethyltransferase & \multicolumn{1}{l|}{AF-Q9NXH9-F1 (53-556)} & -9.0 \\ \hline
Q9NQT4 & Exosome complex component RRP46 & 2NN6\_D & -8.6 \\ \hline
\end{tabular}%
}
\caption{Docking scores for Fenebrutinib and NMS-P715}
\label{tab:docking}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figs/combined.png}
\caption{The 3D structure of the exosome complex and the binding conformations of Fenebrutinib and NMS-P715 on the complex components, predicted using Autodock: (a) The exosome complex structure; left: yellow circles show the binding pockets of NMS-P715 on RRP43 and RRP46, and the purple hexagon shows the gate; right: the red circle shows the binding pocket of Fenebrutinib on RRP43. (b) Fenebrutinib on RRP43. (c) NMS-P715 on RRP43. (d) NMS-P715 on RRP46.}
\label{fig:docking conformation}
\end{figure}
\subsubsection{Illuminating the undruggable human genome}
It is well known that only a small subset of the human genome is considered druggable \cite{finan2017druggable}. Many proteins are deemed ``undruggable'' because there is no information on their ligand-binding properties or other interactions with small-molecule compounds (be they endogenous or exogenous ligands). Here, we built an ``undruggable'' human disease protein database by removing the druggable proteins in Pharos \cite{pharos} and Casas's druggable proteins \cite{cesas} from the human disease-associated genes \cite{disgenet}, and applied PortalCG to predict the probability of these ``undruggable'' proteins binding to drug-like molecules. A total of 12,475 proteins were included in our disease-associated undruggable human protein list. These proteins were ranked according to their probability scores, and 267 of them have a false positive rate lower than 2.18e-05, as listed in Supplemental Table S2. Table \ref{tab:enrichment} shows the statistically significantly enriched functions of these top-ranked proteins, as determined by DAVID \cite{David}. The most enriched proteins are involved in alternative splicing of mRNA transcripts. Malfunctions in alternative splicing are linked to many diseases, including several cancers \cite{alternative-splicing}\cite{alternative-splicing2} and Alzheimer's disease \cite{love2015alternative}. However, pharmaceutical modulation of the alternative splicing process is a challenging task. Identifying new drug targets and their lead compounds for targeting alternative splicing pathways may open new doors to developing novel therapeutics for complex diseases with few treatment options. Diseases associated with these 267 human proteins are also listed in Table \ref{tab:disease}. Because one protein is often related to multiple diseases, the diseases are ranked by the number of their associated proteins. Most of the top-ranked diseases are related to cancer development.
Twenty-one drugs that are approved or in clinical development are predicted to interact with these proteins, as shown in Table S3. Several of these drugs are highly promiscuous. For example, AI-10-49, a molecule that disrupts the protein-protein interaction between CBFb-SMMHC and the tumor suppressor RUNX1, may bind to more than 60 other proteins. The off-target binding profiles of these proteins may provide invaluable information on potential side effects and opportunities for drug repurposing and polypharmacology. The drug-target interaction network built for predicted positive proteins associated with Alzheimer's disease is shown in Figure \ref{fig:AD-drug-target}. Functional enrichment, disease associations, and top-ranked drugs for the undruggable proteins with well-studied biology (classified as Tbio in Pharos), and for those excluding Tbio, are listed in Supplemental Tables S4-S9.
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{DAVID functional annotation enrichment analysis}
\\ \hline
\begin{tabular}[c]{@{}c@{}}Enriched terms in \\ UniProtKB keywords\end{tabular} & \begin{tabular}[c]{@{}c@{}}Number of \\ proteins involved\end{tabular} & \begin{tabular}[c]{@{}c@{}}Percentage of \\ proteins involved\end{tabular} & P-value & \begin{tabular}[c]{@{}c@{}}Modified \\ Benjamini p-value\end{tabular} \\ \hline
{\color[HTML]{080808} Alternative splicing} & 171 & 66.5 & 7.70E-07 & 2.00E-04 \\ \hline
{\color[HTML]{080808} Phosphoprotein} & 140 & 54.5 & 2.60E-06 & 3.40E-04 \\ \hline
{\color[HTML]{080808} Cytoplasm} & 91 & 35.4 & 1.30E-05 & 1.10E-03 \\ \hline
{\color[HTML]{080808} Nucleus} & 93 & 36.2 & 1.20E-04 & 8.10E-03 \\ \hline
{\color[HTML]{080808} Metal-binding} & 68 & 26.5 & 4.20E-04 & 2.20E-02 \\ \hline
{\color[HTML]{080808} Zinc} & 48 & 18.7 & 6.60E-04 & 2.90E-02 \\ \hline
\end{tabular}%
}
\caption{Functional Annotation enrichment for undruggable human disease proteins selected by PortalCG}
\label{tab:enrichment}
\end{table}
\begin{table}[ht]
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|}
\hline
DiseaseName & \# of undruggable proteins associated with disease \\ \hline
Breast Carcinoma & 90 \\ \hline
Tumor Cell Invasion & 86 \\ \hline
Carcinogenesis & 83 \\ \hline
Neoplasm Metastasis & 75 \\ \hline
Colorectal Carcinoma & 73 \\ \hline
Liver carcinoma & 66 \\ \hline
Malignant neoplasm of lung & 56 \\ \hline
Non-Small Cell Lung Carcinoma & 56 \\ \hline
Carcinoma of lung & 54 \\ \hline
Alzheimer's Disease & 54 \\ \hline
\end{tabular}%
}
\caption{Top ranked diseases associated with the undruggable human disease proteins selected by PortalCG}
\label{tab:disease}
\end{table}
\begin{figure}[ht]
\includegraphics[width=1.0\linewidth]{figs/AD-drug-target.jpeg}
\caption {Drug-target interaction network for proteins associated with Alzheimer's disease. Green represents drugs and pink represents targets. }
\label{fig:AD-drug-target}
\end{figure}
\section{Conclusion }
This paper confronts the challenge of exploring dark chemical genomics space by recognizing it as an OOD generalization problem in machine learning, and by developing a new learning framework to treat this type of problem. We propose Portal Learning as a general framework that enables systematic control of the OOD generalization risk. As a concrete algorithmic example and use-case, PortalCG was implemented under the Portal Learning framework. Systematic examination of the PortalCG method revealed its superior performance compared to (i) a state-of-the-art deep learning model (DISAE), and (ii) an AlphaFold2-enabled, structure-based reverse docking approach. PortalCG showed significant improvements in terms of both sensitivity and specificity, as well as a deployment performance gap close to zero. With this approach, we were able to explore the dark regions of the druggable genome. Applications of PortalCG to COVID-19 polypharmacology and to the targeting of hitherto undruggable human proteins afford novel directions in drug discovery.
\section{Methods}
\subsection{Full algorithm details }
Portal Learning is a system-level framework that involves coordinated new design choices spanning data preprocessing, data splitting, model architecture, model initialization, optimization, and evaluation. The main illustrations are Figure \ref{fig:wholepipeline} and Figure \ref{fig:in-a-nutshell}. An extensive explanation of each component and its motivation is available in the Methods section of the Supplemental Materials, with Figure S11 and Algorithm 1.
\subsection{Data}
PortalCG uses three databases: Pfam\cite{Pfam}, the Protein Data Bank (PDB)\cite{pdb}, and ChEMBL\cite{chembl26}. Two applications are demonstrated, COVID-19 polypharmacology and undruggable human proteins. For these, known approved drugs were collected from CLUE\cite{clue}; the 332 human proteins interacting with SARS-CoV-2 are listed in a recent publication\cite{nature-covid-target}; and the 12,475 undruggable proteins were collected by removing the druggable proteins in Pharos \cite{pharos} and Casas's druggable proteins \cite{cesas} from the human disease-associated genes \cite{disgenet}. A detailed explanation of how each data set is used can be found in the Supplemental Materials Methods section.
Major data statistics are summarized in Figure \ref{fig:darkspace} and Supplemental Materials Figures S1, S2, and S3.
\subsection{Experiment implementation}
Experiments were first organized to test PortalCG's performance against the baseline models DISAE\cite{DISAE} and AlphaFold2\cite{alphafold2}. DISAE is a protein language model that predicts protein function from protein sequence information alone. AlphaFold2 uses protein sequence information to predict protein structure and, combined with docking methods, can be used to predict protein function. The main results are shown in Table \ref{tab:PRAUC} and Figure \ref{fig:auc-curve}. Ablation studies were also performed, mainly to test variants of PortalCG components such as binding-site distance prediction, as shown in Supplemental Figure S12. Since Portal Learning is a general framework, there could be many interesting variants to pursue in future studies. To enhance application accuracy, a production-level model was built with ensemble learning, and high-confidence predictions were selected as demonstrated in Supplemental Materials Figure S7. The evaluation metrics used are F1, ROC-AUC, and PR-AUC.
Extensive details can be found in Supplemental Materials Methods section.
\subsection{Related works}
A literature review of related works can be found in the Supplemental Materials section Related Works.
\clearpage
\begin{comment}
\tableofcontents
\clearpage
\end{comment}
\addcontentsline{toc}{section}{Author Contributions}
\section*{Author Contributions}
TC conceived the concept of Portal Learning, implemented the algorithms, performed the experiments, and wrote the manuscript; Li Xie prepared data, performed the experiments, and wrote the manuscript; MC implemented algorithms; YL implemented algorithms; SZ prepared data; CM and PEB refined the concepts and wrote the manuscript; Lei Xie conceived and planned the experiments, wrote the manuscript.
\addcontentsline{toc}{section}{Data and software availability}
\section*{Data and software availability}
Data, a pre-trained PortalCG model, and PortalCG code can be found at the following link: \url{https://github.com/XieResearchGroup/PortalLearning}
\addcontentsline{toc}{section}{Acknowledgement}
\section*{Acknowledgement}
This project has been funded with federal funds from the National Institute of General Medical Sciences of the National Institutes of Health (R01GM122845) and the National Institute on Aging of the National Institutes of Health (R01AD057555).
We thank Hansaim Lim for proofreading and for constructive suggestions.
\clearpage
\addcontentsline{toc}{section}{References}
\section{Introduction}
\label{S-Introduction}
The development of solar active regions until the maximum state and their decay afterwards are governed by different mechanisms. The detailed description of these events is important for the understanding of the complex system of interactions between solar magnetic and velocity fields. The investigation of sunspot decay started with the study of single spots. \citet{1946MNRAS.106..218C} published the theoretical consideration that the decay of a spot cannot be caused merely by diffusion; the surrounding velocity fields have to play a definitive role in it. This has been worked out in detail in the paper by \citet{1997SoPh..176..249P}. \citet{1946MNRAS.106..218C} also compared the evolutionary curve of two groups with their magnetic-life histories and found that the magnetic field increases very rapidly and reaches its maximum at almost the same time as the area. \citet{2005ApJ...635..659H} presented details of the moving magnetic features (MMFs) by tracking eight sunspots. MMFs are small knots around the sunspots with roughly 10$^3$ km diameter which transport magnetic flux away from the spots. \citet{2009SoPh..258...13K} studied the convective motions and the sunspot decay on a sample of 788 active regions and found that the strong upflow changes to downflow at certain depths during the decay. By studying eight sunspots, \citet{2005ApJ...635..659H} also pointed out that there are more MMFs around the larger spots.
The decay of sunspot groups is a more complex series of events and interactions.
In the model of \citet{1975Ap&SS..34..347P} sunspot groups start to decay when their flux ropes lose their twist; the unwinding of a flux rope frays the rope itself. Large, long-lived sunspots are surrounded by an annular moat in which an outward velocity is measured. When the spots start to decay, small magnetic knots can be observed moving outward across the moat; these knots carry the flux away from the spot. This model describes strong plasma control of the flux tube. \citet{2017ApJ...842....3N} studied the decay in several cases, found a higher decay rate in the following part, and obtained a relation between the rate of the MMFs and the leading/following decay rates; the average value of the decaying flux is in agreement with the rate obtained by \citet{2005ApJ...635..659H}. \citet{2007ApJ...671.1013D} observed a decaying follower sunspot over six days and pointed out that the umbra/total area ratio increased from 15.9\,\% to 19\,\% during the decay, showing that the umbra decays more slowly than the penumbra. Although the decay rate was found to be almost constant, the decay process is not uniform; they described the decay as a three-step process: first the fragmentation of the sunspot, then the flux cancellation of MMFs that encounter the opposite-polarity network at the edge of the moat region, and finally the flux transport by MMFs. \citet{2014SoPh..289.1531G} followed the evolution of four ARs by using intensitygrams obtained by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). They found that the largest contributor to the total area decay rate of spots is the decay rate of the penumbra, while the umbral decay rates are much lower.
These results are in agreement with the observations of \citet{2010AN....331..563S}.
\citet{2014ApJ...785...90R} obtained in their theoretical simulations that the lifetimes of sunspots are too short in the absence of a penumbra. They concluded that the penumbra may either stabilize the surface layers of sunspots or delay the fragmentation of their subsurface layers.
There are some works that analyze different asymmetries during the evolution of active regions. \citet{2012Ap&SS.338..217J} examined the rates of growth and decay of sunspot groups and pointed out differences between the hemispheric values. Some other works studied the tilt angles of sunspot groups and their variations during the decay and observed a difference between the northern and southern values \citep{2012ApJ...758..115L, 2013SoPh..287..215M}, while \citet{1993SoPh..145...95H} investigated the dependence of the decay of sunspot groups on axial tilt angles. \citet{2014SoPh..289..563M} studied the area and number asymmetries of sunspot groups at their maximum states and pointed out an asymmetry in the compactness of groups, i.e., the number of spots tends to be smaller, while their mean area is larger in the leading part at the maximum phase.
\citet{1993A&A...274..521M} obtained that the total area to umbra area ratio is about 4--6, and noted that this parameter is independent of the evolutionary phase of the spot except in the very last stage, when the leading umbra is the only remnant of the sunspot. \citet{2007ApJ...671.1013D} also analyzed the UP/U ratio during the decay of the NOAA 10773 AR and found that it varies between 5 and 6. \citet{2018ApJ...865...88C} studied the decay and the U/P ratio of sunspots during the Maunder Minimum and revealed that the value of the U/P ratio varies between 0.15 and 0.25, and that the higher the U/P ratio, the faster the decay of the sunspot.
\citet{2019SoPh..294...72J} obtained 5.5--6 for the P/U value and pointed out that this ratio is independent of cycle strength, latitude, and cycle phase. The results of \citet{2013SoPh..286..347H} are in agreement with this; however, he found that the ratio increases with increasing total sunspot group area. \citet{1990SoPh..129..191B} studied 126 sunspots observed around 1980 and pointed out that the U/P value is 0.24 for small and 0.32 for large spots. \citet{2018SoPh..293..104C} found different behavior in the variation of the U/P ratio of small and large groups by using the Royal Greenwich Observatory (RGO) series: the larger groups do not show significant changes from year to year, while the smaller groups do. \citet{1997rscc.book.....H} noted that the rate of sunspot decay is proportional to the convective velocity, which means that the higher the convective velocity, the higher the U/P value and the faster the sunspot decay.
The aforementioned investigations dealt with the decay process of sunspot groups, and some of them focused on the internal processes as well. Following the calculation of the distinct decay rates \citep{2021ApJ...908..133M}, this study aims to describe the variation of the asymmetries within sunspot groups during their decay.
\section{Data and methods}
\subsection{Observational data}
The present study has been made by using the SoHO/MDI--Debrecen Database (SDD) \citep{2016SoPh..291.3081B}, one of the sunspot catalogs compiled in the Debrecen Observatory, which contains sunspot data from 1996 until 2010. The database has a temporal resolution of about 1.5 hours, made possible by the observations of the Solar and Heliospheric Observatory (SoHO) spacecraft, and besides the area and position data it also contains magnetic data for each observable sunspot group as well as for each sunspot within them. Thus, the leading and following parts of the sunspot groups can be distinguished, and the temporal resolution makes it possible to track the evolution of the groups and their parts.
In order to select from this huge database those sunspot groups that are definitely in the decay phase, the following strict criteria were set. The sunspot groups should have two opposite-polarity parts at the time of the maximum area. The developing phase has to be observed for at least two days, the decaying phase has to be tracked during at least four consecutive days after the maximum, and the first and last observed areas can be at most 40\,\% of the maximum area. All three areas (total, leading, and following) have to decrease during the decay phase, and these criteria were visually inspected in each case as well. The selected sample comprises 142 sunspot groups.
\begin{figure}[H]
\includegraphics[width=\textwidth]{1.eps}
\caption{Variation of the area asymmetry indices of AR 8086 (on the left) and 9037 (on the right) after their maximum areas during the decay phase. The red dots mark those data which are in Table~\ref{table:noaas}. }
\label{fig:samplears}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{ |p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}||p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| }
\hline
\multicolumn{1}{|c}{}&\multicolumn{3}{c}{NOAA 8086; A$_{up}$=432 MSH}& \multicolumn{1}{c||}{}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{NOAA 9037; A$_{up}$=340 MSH}& \multicolumn{1}{c|}{}\\
\hline
\hline
$a_{up}$ & ADP$_{up} $ & lead. $a_{up}$ & foll. $a_{up}$ & $AIa_{up}$ & $a_{up}$ & ADP$_{up}$ & lead. $a_{up}$ & foll. $a_{up}$ & $AIa_{up}$ \\
\hline
432 & 0 & 243 & 189 & 0.125 & 340 & 0 & 133 & 207 & -0.218\\
400 & 7.407 & 239 & 161 & 0.195 & 315 & 7.353 & 126 & 189 & -0.200\\
299 & 30.787 & 199 & 100 & 0.331 & 231 & 32.059 & 94 & 137 & -0.186\\
225 & 47.917 & 180 & 45 & 0.6 & 174 & 48.824 & 60 & 114 & -0.310\\
132 & 69.444 & 129 & 3 & 0.955 & 102 & 70 & 32 & 70 & -0.373\\
47 & 89.120 & 47 & NO & 1 & 12 & 96.471 & NO & 12 & -1\\
\hline
\end{tabular}
\caption{Detailed data of the ARs of Fig.~\ref{fig:samplears}, i.e. AR 8086 (first set of five columns) and AR 9037 (last set of five columns), respectively. $A_{up}$ is the maximum area of the group in millionths of the solar hemisphere (MSH) at ADP 0. The columns of the sets are as follows: $a_{up}$ is the observed umbra+penumbra area measured in MSH; ADP is calculated by using Eq.~\ref{eq:adp} and measured in \%; the next two columns give the areas of the leading and following parts in MSH; the last column gives the area asymmetry index calculated by using Eq.~\ref{eq:asymmetry}. NO means that the leading or following sunspot can no longer be observed at all.}
\label{table:noaas}
\end{table}
\subsection{Method}
In this study the normalized asymmetry index (AI) is used.
The asymmetry index between the leading and following part of sunspot groups is calculated as
\begin{equation}
AIx=\frac{x_{l}-x_{f}}{x_{l}+x_{f}},
\label{eq:asymmetry}
\end{equation}
where $x_{l}$ and $x_{f}$ denote the parameter of the leading and following parts, respectively. This parameter may be the area of the sunspot groups (a$_{up}$) or that of the umbrae (a$_u$).
The decay phase of the sunspot groups will be characterized by the area decay phase (hereafter ADP) which is determined for each observational time of the groups
\begin{equation}
ADP=\Big(1-\frac{a}{A}\Big)\cdot 100
\label{eq:adp}
\end{equation}
where $A$ is the maximum area of the group and $a$ is the observed area. If the value of the asymmetry index is 0, the parameters of the leading and following parts are equal, while 1 and -1 mean that the following or leading part, respectively, is missing and the relevant parameter refers only to the existing part. ADP=0 marks the maximum value of the sunspot group's area (see Table~\ref{table:noaas}).
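For concreteness, the two indices defined in Eqs.~\ref{eq:asymmetry} and \ref{eq:adp} can be reproduced with a few lines of code (a minimal sketch of ours, not part of the SDD software; the function names are our own):

```python
def asymmetry_index(x_lead, x_foll):
    """Normalized asymmetry index AIx = (x_l - x_f) / (x_l + x_f)."""
    return (x_lead - x_foll) / (x_lead + x_foll)

def area_decay_phase(a, a_max):
    """Area decay phase ADP = (1 - a/A) * 100, in percent."""
    return (1.0 - a / a_max) * 100.0

# Second row of AR 8086 in Table 1 (areas in MSH):
print(asymmetry_index(239, 161))             # 0.195
print(round(area_decay_phase(400, 432), 3))  # 7.407
```

Both values match the corresponding entries of Table~\ref{table:noaas}.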
These asymmetry indices are calculated for each observational time, i.e.\ for each area decay phase, and then averaged over 10 percent bins of the ADP. Although in typical cases the mean size of the follower spots is smaller than that of the leader ones (left panel of Fig.~\ref{fig:samplears}), there are exceptions where the mean size of the following spots is larger than that of the leading ones (right panel of Fig.~\ref{fig:samplears}); these are characterized by a negative asymmetry index. These data, i.e., the ADP and the AIa$_{up}$, can be found in Table~\ref{table:noaas} for the ARs NOAA 8086 and NOAA 9037. The table contains only six rows for each active region after the maximum area of the whole group.
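The averaging over 10 percent ADP bins can be sketched as follows (our plain-Python illustration, not the authors' code; the helper name and the treatment of the top-bin edge are our assumptions):

```python
def bin_average(adp_values, ai_values, width=10.0):
    """Average asymmetry indices over width-percent bins of the ADP."""
    bins = {}
    for adp, ai in zip(adp_values, ai_values):
        key = min(int(adp // width), 9)  # 10 bins covering the 0-100 range
        bins.setdefault(key, []).append(ai)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

# AR 8086 from Table 1: ADP and AIa_up values after the maximum.
adp = [0.0, 7.407, 30.787, 47.917, 69.444, 89.120]
ai = [0.125, 0.195, 0.331, 0.600, 0.955, 1.000]
print(bin_average(adp, ai))
# Bin 0 averages the first two rows (0.125 and 0.195); empty bins are omitted.
```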
\section{Results and Discussion}
In a previous paper \citep{2021ApJ...908..133M} it has been shown that the leading and following parts of sunspot groups decay at different rates. After the determination of their decay rates, the dynamics of the variation of the two parts are calculated here. First, the area asymmetry index is studied both for the whole sunspot groups and for the umbrae alone.
\begin{figure}[H]
\includegraphics[scale=0.9, angle=0]{21.eps}
\includegraphics[scale=0.9, angle=0]{22.eps}
\caption{Left: Dependence of the normalized asymmetry index on the area decay phase, calculated for the total (umbra+penumbra) areas (top panel) and for the umbral areas (bottom panel). The numbers of the positive/negative cases are marked in the left corners of the panels; the numbers of positive cases are significantly higher. Right: The same as in the left panels, but the asymmetry indices are calculated for three area ranges. \label{fig:aiacucup}}
\end{figure}
The panels of Fig.~\ref{fig:aiacucup} show the combined history of the decay and of the asymmetry variation of sunspot groups after their maximum state; the area asymmetry indices are averaged over 10 percent bins of the area decay phase. The sunspot groups of positive and negative asymmetries (according to Eq.~\ref{eq:asymmetry}) are plotted separately. The left column shows all sunspot group sizes together. It is a common property of the diagrams that the increase of the asymmetry begins at about 35\% of the area decay, but it is conspicuous that after this the variation is steepest for the umbral areas of sunspot groups of positive asymmetry (upper half of the lower frame in the left column). This can be considered the most typical case: the process starts at the maximum state with a very small asymmetry that reaches the value of +1 at the end, with a single leading spot. In contrast, the sunspot groups of negative area asymmetry (lower half of the lower frame) do not end with an asymmetry of -1, i.e. with only a follower polarity.
Overall, it can be said that the leading/following area asymmetry index at the maximum is close to 0, which means the areas of both parts are almost equal, and this balance is preserved during the first phase of the decay. This is followed by a steeper variation when the smaller part of the group starts to disappear; in the last phase of the decay it almost or totally disappears, and the total umbral area is dominated by the part with the larger area.
During the first phase, the area of the leading/following part is about 50 percent larger than the area of the following/leading part. The asymmetry increases during the decomposition and reaches about 0.7 in the case of the total area, while this value is higher, about 0.9, in the case of the umbrae. In an earlier work \citep{2014SoPh..289..563M} a similar variation has been observed in the asymmetric emergence of the leading-following parts.
\citet{2007ApJ...671.1013D} also described the decay as a three-step process, but only for the decay of one following sunspot. They identified the three steps as fragmentation, flux cancellation, and flux transport by MMFs, respectively.
The right column shows the same variations in three area ranges. For umbra+penumbra areas these are indicated in the upper panel; the umbral area ranges are defined as one fifth of the total area ranges, since in a previous paper \citep{2021ApJ...908..133M} this ratio was found at the time of maximum. It is conspicuous that the most typical decay pattern is exhibited by the largest sunspot groups, where the asymmetry is close to zero at the maximum area, its absolute value starts rising around one third of the ADP of the group, and at the end it reaches the final values of about +1 or -1. The time profiles of the smaller groups are more flattened, especially those of negative asymmetry.
The two hemispheres have been examined separately. Fig.~\ref{fig:NS} shows the variation of the umbral area asymmetry index during the decay phase by distinguishing the types of asymmetries and the hemispheres; thus the diagram is a more detailed version of the lower left frame of Fig.~\ref{fig:aiacucup}. The additional information can be read from the lower frame of the diagram, showing the data of groups of negative asymmetry. Here the data of the southern hemisphere follow the standard time profile: an unambiguous strengthening of the asymmetry starting at one third of the decay time interval and ending close to -1, while the data set of the northern hemisphere exhibits a weaker variation. This explains the similarly weak variation of the combined North-South data in the negative domain of the lower left frame of Fig.~\ref{fig:aiacucup}.
\begin{figure}[H]
\includegraphics[scale=1.00, angle=0]{3.eps}
\caption{Hemispheric variation of the normalized umbral area asymmetry index calculated between the leading and following parts (AIa$_u$) during the decay of the sunspot groups. The northern hemispheric values are marked by full rhombuses, the southern values are depicted by empty rhombuses. The top and bottom panels depict the cases of positive/negative asymmetries, the two hemispheres are distinguished in both subsets. \label{fig:NS}}
\end{figure}
These distinctions permit a conjecture. The parallel course of asymmetry variation and decay takes the above formulated standard form typically in large sunspot groups, even in the less frequent cases of negative asymmetry. This can be seen in the positive and negative halves of the lower right frame of Fig.~\ref{fig:aiacucup}. On the other hand, Fig.~\ref{fig:NS} also presents indirect evidence for this: the time profile of negative asymmetries in the southern hemisphere corresponds to this pattern. The southern hemisphere is more active in most of solar cycle 23, which is covered by the applied data \citep{2013ApJ...768..188C}; it also predominates in the applied sample shown in Fig.~\ref{fig:NS}, and furthermore the umbral area asymmetry index is always higher in the southern hemisphere. This may imply that the strong flux ropes emerging from the strong toroidal magnetic fields, presumably from deeper layers, are subject to a different set of impacts than the smaller sunspot groups. This may be a consequence of the higher sunspot activity in cycle 23. Several hemispheric asymmetries have been pointed out, e.g. in the decay rate \citep{2021ApJ...908..133M} or in the tilt angles of ARs \citep{2012ApJ...758..115L, 2013SoPh..287..215M}.
\begin{figure}[H]
\includegraphics[scale=1.0, angle=0]{4.eps}
\caption{ Mean sunspot area of the leading part (dots) and following part (circle) averaged over 10 percent bins of the decay phase of the whole group. 0\% means the maximum area of the groups. The three panels show the variations of three different area ranges. \label{fig:avgspotsize}}
\end{figure}
The average area of the sunspots of the groups is also studied (Fig.~\ref{fig:avgspotsize}). The three panels concern different sizes of sunspot groups, from the smallest ones (A$_{up} <$100 MSH) to the biggest groups (A$_{up} >$300 MSH). The shapes of the courses are similar in the cases of the smallest and medium-size sunspot groups: the average sizes of sunspots of the leading and following parts decrease simultaneously until 45\,\% of the ADP$_{up}$, where they reach their plateaus, and after 75\,\% of the ADP$_{up}$ they decrease again. The average sizes of the leading spots are larger during the whole decay in all three cases.
The course of the decay is somewhat different in the case of the biggest groups. Here the average sizes of the spots show an increasing trend after the short first phase of the decay. This is caused by a sudden drop in the number of spots: the smallest spots disappear around 70\% of the ADP$_{up}$, while the largest spots survive. This is more pronounced in the case of the leading spots. At the end of the ADP$_{up}$ the average sizes of sunspots are nearly the same in each case.
The area ratio of the leading/following spots is different in these three area ranges. The ratio decreases toward the end of decay in the cases of the smallest and the middle groups, but in the case of the biggest groups this ratio increases in the middle of ADP.
It can also be seen that the larger the area of the groups, the smaller the ratio between the leading and following spots around the maxima. The total area of the leading spots is about twice as large as that of the following ones in the smallest groups, while this ratio is about 1.5 in the medium and large groups.
The penumbrae (P) are formed by the strongly inclined field lines of sunspots at the surface layers \citep{2021ApJ...907..102P} so their decay may be controlled by processes different from those of the umbrae (U) whereby influencing the variation of their area ratio.
\begin{figure}[H]
\includegraphics[scale=0.9, angle=0]{5x.eps}
\caption{The a$_u$/a$_{p}$ value at the time of maximum umbra, averaged over 10 MSH bins, as a function of the maximum umbral area (black dots). The umbral decay rate \citep{2021ApJ...908..133M}, also averaged over 10 MSH bins, as a function of the maximum umbral area (empty triangles).
\label{fig:uperp}}
\end{figure}
\noindent
Fig.~\ref{fig:uperp} shows the U/P area ratios of spots in the states of maximum umbral area, plotted with black dots. Their distribution exhibits a clear inverse relationship with the maximum area: larger spots have relatively smaller umbrae with respect to their penumbrae. The other diagram, the decay rate vs. maximum area, is taken from an earlier paper \citep{2021ApJ...908..133M}; these data are plotted with empty triangles. Both datasets are averaged over 10 MSH bins of the maximum umbral area. The opposite trends of the two diagrams are conspicuous: larger umbrae decay faster than smaller ones, and their areas with respect to their penumbrae are smaller than in small sunspots. This is in agreement with the theoretical result of \citet{2014ApJ...785...90R} that a larger penumbra stabilizes the sunspot, but it contradicts the results of \citet{1990SoPh..129..191B}, \citet{2018SoPh..293..104C} and \citet{2013SoPh..286..347H}. This dependence is also in contrast to \citet{1997rscc.book.....H}, who found a linear relationship between the U/P values and the decay rates.
\begin{figure}[H]
\includegraphics[scale=0.8, angle=0]{6_1x.eps}
\includegraphics[scale=0.8, angle=0]{6_2x.eps}
\includegraphics[scale=0.8, angle=0]{6_3x.eps}
\caption{Umbra and penumbra ratios of the whole groups (crosses) and their leading (dots) and following (empty circles) parts as a function of the umbral area decay phase. The values of a$_u$/a$_{p}$ are averaged over 10 percent bins of the umbral ADP. 0\% marks the maximum umbral area. Left top panel: A$_u\ge$ 20 MSH, right top panel: 20 MSH $\le$ A$_u\le$ 60 MSH, left bottom panel: A$_u\ge$ 60 MSH.
\label{fig:uperpdepa}}
\end{figure}
\noindent
The decay process also exerts an impact on the variation of the U/P ratio, as shown in Fig.~\ref{fig:uperpdepa}. The sample is divided into three groups of sizes as in Fig.~\ref{fig:avgspotsize}; the data of the leading and following parts as well as of the entire groups are plotted separately, and the decay is represented again in a standard time interval normalized to the length of the decay, starting with the maximum area (at zero Area Decay Phase, ADP). The most striking feature of these diagrams is the definite decrease of the U/P ratio during the decay in the following parts of the sunspot groups (indicated with empty circles), which means that in the trailing regions the umbrae disappear more quickly than the penumbrae. In the leading parts the decreasing trend is also present, but with some temporary strengthening. This may be due to the typically larger leading umbrae, which may be more resistant to disintegration by external impacts than those of the trailing part. Anyhow, the overall trend is that the deeper rooted umbrae are more intensively exposed to the decomposing impact of the external processes than the penumbrae close to the surface layers.
The courses of the decays of the leading part and of the whole group are similar in each case. As a result of this U/P study one can conclude that the smaller the sunspot group area, the higher the U/P ratio and the larger the difference between the values of the leading and following parts. Moreover, the smaller the sunspot group, the larger the variation of this ratio during the decay.
\section{Summary and conclusion}
During the decay process of sunspot groups several characteristic variations happen in their internal structures.
The results can be summarized as follows.
(i) The sunspot group’s decay can be divided into three parts in which the leading/following asymmetries vary at different rates (left panel of Fig.~\ref{fig:aiacucup}). The asymmetry is almost constant in the first phase of the decay, varying only slightly from its value at the maximum of the group; it then varies faster during the middle phase of the decay, and after this steeper variation the area asymmetry seems to stabilize.
(ii) The variation of the leading-following umbral area asymmetry depends on the sunspot group’s maximum size. It rises earlier in small groups, which typically contain small spots that disappear more quickly. The asymmetry variation of the total (U+P) area is less sensitive to the disintegrating impacts; the curves of the penumbrae are more flattened than those of the umbrae (right panel of Fig.~\ref{fig:aiacucup}).
(iii) The leading/following area asymmetry also exhibits hemispheric difference (Fig.~\ref{fig:NS}). During the sunspot group’s decay the area asymmetry index has higher values in the southern hemisphere.
(iv) The umbra–penumbra ratio at the time of maximum umbra exhibits an anticorrelation with the area (Fig.~\ref{fig:uperp}). The decay rate and the U/P ratio are inversely related.
(v) The variation of the umbra-penumbra ratio during the decay depends on the maximum area of the group and also on the leading or following position of the spots (Fig.~\ref{fig:uperpdepa}). The variation is typically a decrease, which is strongest in the following parts of small groups, but in the leading parts some temporary strengthening may occur. The presented processes imply that the umbrae are more exposed to disintegrating effects than the penumbrae, which are only affected by surface velocity fields.
The behavior of the larger groups differs from that of the medium and small groups in many ways, e.g. in the average area of sunspots within the groups, in the U/P ratio, as well as in the variation of the area asymmetry index. The physical conditions affecting them are different. The leading--following area asymmetry changes rapidly and the following spots vanish earlier, but the umbra--penumbra ratio hardly changes, mainly in the case of the leading spots. This means that the decay of the larger groups is a smooth process, while the small groups behave more chaotically.
\section*{Acknowledgements}
This research has received funding from National Research, Development and Innovation Office -- NKFIH, 129137. Thanks are due to Dr. Andr\'as Ludm\'any for reading and discussing the manuscript and the anonymous referee whose comments made this article easier to understand.
\section{Introduction}\label{S1}
Let $\{\xi_i\}_{i\in\ZZ_+}$ be a sequence of independent and bounded random variables (defined on a probability space $(\Omega,\mathcal{F}, \Prob)$), and let $S_t\coloneqq\xi_0+\cdots+\xi_{t}$ for $t\in\ZZ_+$. Assume that $\Prob(a_i\le\xi_i\le b_i)=1$ for some $\{a_i\}_{i\in\ZZ_+},\{b_i\}_{i\in\ZZ_+}\subset\R$.
The classical Chebyshev inequality then implies that for every $\varepsilon>0$, $$ \Prob\bigl(\lvert S_{t-1}-\mathbb{E}[S_{t-1}]\rvert>\varepsilon t\bigr)\le \frac{\sum_{i=0}^{t-1}(b_i-a_i)^2}{\varepsilon^2t^2}.$$
In his seminal work, W. Hoeffding \cite{Hoeffding-1963} improved this result and showed that
$$\Prob\bigl(\lvert S_{t-1}-\mathbb{E}[S_{t-1}]\rvert>\varepsilon t\bigr)\le 2\exp\left\{-2\varepsilon^2t^2/\sum_{i=0}^{t-1}(b_i-a_i)^2\right\}.$$
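To see the quantitative difference between the two bounds, one can evaluate them side by side (a small numerical sketch of ours, not taken from \cite{Hoeffding-1963}):

```python
import math

def chebyshev_bound(eps, widths):
    """Right-hand side of the Chebyshev-type bound above."""
    t = len(widths)
    return sum(w * w for w in widths) / (eps ** 2 * t ** 2)

def hoeffding_bound(eps, widths):
    """Right-hand side of Hoeffding's bound above."""
    t = len(widths)
    return 2.0 * math.exp(-2.0 * eps ** 2 * t ** 2 / sum(w * w for w in widths))

widths = [1.0] * 100  # t = 100 variables, each supported in an interval of length 1
print(chebyshev_bound(0.1, widths))  # ~1, i.e. vacuous
print(hoeffding_bound(0.1, widths))  # 2*exp(-2) ~ 0.27, and exponentially small in t
```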
Hoeffding's inequality has been widely applied in many problems arising in probability and statistics.
However, the independence assumption limits its applicability in many situations. This, for instance, includes problems characterized by Markovian dependence, such as Markov chain Monte Carlo methods, time series analysis and reinforcement learning problems, see e.g.\ \cite{Fan-Jiang-Sun-2021}, \cite{Ormoneit-Glynn-2002} and \cite{Tang-2007}. Motivated by this, Hoeffding's inequality has been extended to bounded measurable functions of a class of Markov models.
However, to the best of our knowledge, in all the works on this topic (see the literature review part below) a common assumption is that the underlying Markov model is irreducible. In this article, we complement these results and discuss Hoeffding's inequality for bounded Lipschitz functions of a class of not necessarily irreducible Markov models.
\section{Main result} \label{S2}
Let $\mathsf{S}$ be a Polish space, i.e.\ a separable completely metrizable topological space.
Denote the corresponding metric by $\mathsf{d}$. We endow $(\mathsf{S},\mathsf{d})$ with its Borel $\sigma$-algebra
$\mathfrak{B}(\mathsf{S})$.
Further, let $\mathbb{T}=\R_+$ or $\ZZ_+$ be
the time parameter set, and let
$(\Omega,\mathcal{F}, \{\mathcal{F}_t\}_{t\in\mathbb{T}},\{\theta_t\}_{t\in\mathbb{T}},
\{X_t\}_{t\in\mathbb{T}},$ $\{\Prob_x\}_{x\in\mathsf{S}})$, denoted by $\{X_t\}_{t\in\mathbb{T}}$
in the sequel, be a time-homogeneous conservative strong Markov model with
state space
$(\mathsf{S},\mathfrak{B}(\mathsf{S}))$, in the sense of \cite{Blumenthal-Getoor-1968}. Recall that in the case when
$\mathbb{T}=\ZZ_+$, $\{X_t\}_{t\in\mathbb{T}}$ is usually called a Markov chain, and in the case when
$\mathbb{T}=\R_+$, $\{X_t\}_{t\in\mathbb{T}}$ is called a Markov process.
In the latter case we also assume that $\{X_t\}_{t\in\mathbb{T}}$ is progressively measurable (with respect to $\{\mathcal{F}_t\}_{t\in\mathbb{T}}$), i.e.\ the map $(s,\omega)\mapsto X_s(\omega)$ from $[0,t]\times\Omega$ to $\mathsf{S}$ is $\mathfrak{B}([0,t])\times\mathcal{F}_t/\mathfrak{B}(\mathsf{S})$ measurable for all $t\geq0$. This will be in particular satisfied if $t\mapsto X_t(\omega)$ is right continuous for all $\omega\in\Omega$ (see \cite[Exercise I.6.13]{Blumenthal-Getoor-1968}).
Further, denote by
$\mathcal{P}^{t}(x,\mathrm{d} y)\coloneqq\mathbb{P}_{x}(X_t\in \mathrm{d} y)$ the transition function of $\{X_t\}_{t\in\mathbb{T}}$, and let
$\mathscr{P}_1(\mathsf{S})$ be the class of all probability measures on $\mathfrak{B}(\mathsf{S})$ having finite first moment.
The
$\mathrm{L}^1$-Wasserstein distance on $\mathscr{P}_1(\mathsf{S})$ is defined by
\begin{equation*}
\mathscr{W}(\upmu_1,\upmu_2)\coloneqq\inf_{\Pi\in\mathcal{C}(\upmu_1,\upmu_2)}
\int_{\mathsf{S}\times\mathsf{S}}\mathsf{d}(x,y)
\Pi(\mathrm{d}{x},\mathrm{d}{y}),
\end{equation*}
where $\mathcal{C}(\upmu_1,\upmu_2)$ is the family of couplings of
$\upmu_1(\mathrm{d} x)$ and $\upmu_2(\mathrm{d} y)$,
i.e.\ $\Pi\in\mathcal{C}(\upmu_1,\upmu_2)$ if, and only if, $\Pi(\mathrm{d} x,\mathrm{d} y)$
is a probability
measure on $\mathfrak{B}(\mathsf{S})\times\mathfrak{B}(\mathsf{S})$ having $\upmu_1(\mathrm{d} x)$ and $\upmu_2(\mathrm{d} y)$ as its marginals. By the Kantorovich--Rubinstein theorem it holds that
$$
\mathscr{W}(\upmu_1,\upmu_2) = \sup_{\{f\colon\mathrm{Lip}(f)\le1\}}\,
| \upmu_1(f)-\upmu_2(f)|,
$$
where the supremum is taken over all Lipschitz continuous functions
$f\colon\mathsf{S}\to\R$ with Lipschitz constant $\mathrm{Lip}(f)\le1$ and, for a probability measure $\upmu$ on $\mathfrak{B}(\mathsf{S})$ and a measurable function $f:\mathsf{S}\to\R$, the symbol $\upmu(f)$ stands for $\int_\mathsf{S} f(x)\upmu(\mathrm{d} x)$, whenever the integral is well defined.
We now state the main result of this article.
\begin{theorem}\label{TM} Let $f:\mathsf{S}\to\R$ be bounded and Lipschitz continuous, and let $S_{t-1}\coloneqq\int_{[0,t)} f(X_s)\uptau(\mathrm{d} s)$ for $t\in\mathbb{T}$. Here, $\uptau(\mathrm{d} s)$ stands for the counting measure when $\mathbb{T}=\ZZ_+$ and the Lebesgue measure when $\mathbb{T}=\R_+$. Assume that
$\{X_t\}_{t\in\mathbb{T}}$ admits an invariant probability measure $\uppi(\mathrm{d} x)$ (i.e. a measure satisfying $\int_{\mathsf{S}}\mathcal{P}^{t}(x,\mathrm{d} y)\uppi(\mathrm{d} x)=\uppi(\mathrm{d} y)$ for all $t\in\mathbb{T}$) such that \begin{equation}\label{eq:TM}\gamma\coloneqq\sup_{x\in \mathsf{S}}\int_{\mathbb{T}} \mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\uppi(\cdot)\bigr)\uptau(\mathrm{d} t)<\infty.\end{equation} Then
for any $\varepsilon>0$, \begin{equation}\label{eq:TM1}
\Prob_x\bigl(|S_{t-1}-\uppi(f)t|>t\varepsilon\bigr)\le
\begin{cases}
2\exp\left\{\frac{-(\varepsilon t-2\mathrm{Lip}(f)\gamma)^2}{8(\mathrm{Lip}(f)\gamma+\lVert f\rVert_\infty)t}\right\}, & \mathbb{T}=\ZZ_+,\\[10pt]
2 \exp\left\{\frac{-(\varepsilon t-2\mathrm{Lip}(f)\gamma)^2}{8(\mathrm{Lip}(f)\gamma+\lVert f\rVert_\infty)(t+1)}\right\}, & \mathbb{T}=\R_+.
\end{cases}\end{equation}
\end{theorem}
\bigskip
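To get a feel for the discrete-time bound in \cref{eq:TM1}, one can evaluate its right-hand side numerically (a sketch of ours; the parameter values are arbitrary, and the bound is informative only once $\varepsilon t>2\,\mathrm{Lip}(f)\gamma$):

```python
import math

def markov_hoeffding_bound(eps, t, lip_f, sup_f, gamma):
    """Right-hand side of the discrete-time (T = Z_+) bound:
    2 * exp(-(eps*t - 2*Lip(f)*gamma)^2 / (8*(Lip(f)*gamma + ||f||_inf)*t))."""
    num = (eps * t - 2.0 * lip_f * gamma) ** 2
    den = 8.0 * (lip_f * gamma + sup_f) * t
    return 2.0 * math.exp(-num / den)

# Example with Lip(f) = ||f||_inf = 1, gamma = 2 and eps = 0.5:
for t in (100, 1000, 10000):
    print(t, markov_hoeffding_bound(0.5, t, 1.0, 1.0, 2.0))
# As in the classical i.i.d. case, the bound decays exponentially in t.
```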
According to \cite[Theorems 2.1 and 2.4]{Butkovsky-2014} the relation in \cref{eq:TM} will hold if
\begin{itemize}
\item [(i)] the metric $\mathsf{d}$ is bounded (without loss of generality, by $1$);
\item[(ii)] there is $\rho\in(0,1)$ such that for all $x,y\in\mathsf{S}$ and all $t$ large enough, $$\mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\mathcal{P}^{t}(y,\cdot)\bigr)\le(1-\rho)\mathsf{d}(x,y);$$
\item[(iii)] there are $\kappa\in\R$, a measurable and bounded function $\mathcal{V}\colon\mathsf{S}\to\R_+$, and a concave, differentiable function $\phi:\R_+\to\R_+$ increasing to infinity and satisfying $\phi(0)=0$, such that \begin{equation}\label{eq:MR}\mathbb{E}_x\bigl[\mathcal{V}(X_t)\bigr]-\mathcal{V}(x)\le \kappa t-\int_{[0,t)}\mathbb{E}_x\bigl[\phi\circ\mathcal{V}(X_s)\bigr]\uptau(\mathrm{d} s);\end{equation}
\item[(iv)] there is $\epsilon\in(0,1)$ such that $$\int_{[1,\infty)}\bigl(\phi\circ\Phi^{-1}(t)\bigr)^{\epsilon-1} \uptau(\mathrm{d} t)<\infty,$$ where $\Phi(u)\coloneqq\int_1^u \phi(v)^{-1}\,\mathrm{d} v$.
\end{itemize}
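As a quick illustration (this is our own remark, using the linear choice that reappears in the examples of \Cref{S5}), condition (iv) is easily verified for $\phi(v)=v$:

```latex
$$\Phi(u)=\int_1^u\frac{\mathrm{d} v}{v}=\log u,\qquad
\Phi^{-1}(t)=\mathrm{e}^{t},\qquad
\bigl(\phi\circ\Phi^{-1}(t)\bigr)^{\epsilon-1}=\mathrm{e}^{(\epsilon-1)t},$$
```

and $\int_{[1,\infty)}\mathrm{e}^{(\epsilon-1)t}\uptau(\mathrm{d} t)<\infty$ for every $\epsilon\in(0,1)$, both for the counting and for the Lebesgue measure.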
Examples satisfying conditions (i)-(iv) are given in \Cref{S5}.
\section{Literature review}\label{S3}
Hoeffding's inequality is a key tool in the analysis of many problems arising in both probability and statistics.
As already mentioned above, it was originally proved by W. Hoeffding \cite{Hoeffding-1963} in the context of independent and bounded random variables.
However, many applied problems require an extension of the result to the case where a certain dependence between the components is involved, in particular Markovian dependence (see e.g.\ \cite{Fan-Jiang-Sun-2021}, \cite{Ormoneit-Glynn-2002} and \cite{Tang-2007}).
Therefore, variants of Hoeffding's inequality in the context of different types of Markov models have been studied recently.
There are two main approaches to this problem: (i) one based on spectral methods (see \cite{Chung-Lam-Liu-Mitzenmacher-2012}, \cite{Fan-Jiang-Sun-2021}, \cite{Leon-Perron-2004}, \cite{Lezaud-1998}, \cite{Miasojedow-2014} and \cite{Rao-2019}) and (ii) one based on the Foster-Lyapunov inequality (see \cite{Adamczak-Bednorz-2015}, \cite{Boucher-2009}, \cite{Choi-Li-2019}, \cite{Douc-Moulines-Olsson-vanHandel-2011}, \cite{Glynn-Ormoneit-2002} and \cite{Liu-Liu-2021}).
A common assumption in all these works is that the underlying Markov model is irreducible. Recall that a Markov model $\{X_t\}_{t\in\mathbb{T}}$ is said to be irreducible if there is a non-trivial measure $\upphi(\mathrm{d} x)$ on $\mathfrak{B}(\mathsf{S})$ such that $\int_\mathbb{T} \mathcal{P}^t(x,B)\uptau(\mathrm{d} t)>0$ for all $x\in\mathsf{S}$, whenever $\upphi(B)>0$. In this article, we complement these results and obtain Hoeffding's inequality in the case when the underlying Markov model is not necessarily irreducible. Our result should be compared to the results obtained in \cite{Liu-Liu-2021} (see also \cite{Boucher-2009}), where Hoeffding's inequality has been obtained for an irreducible Markov model $\{X_t\}_{t\in\mathbb{T}}$ satisfying the following Foster-Lyapunov inequality:
\begin{equation}\label{eq:LR}\mathbb{E}_x\bigl[\mathcal{V}(X_t)\bigr]-\mathcal{V}(x)\le t-\kappa \int_{[0,t)}\mathbb{E}_x\bigl[\mathbb{1}_\mathcal{C}(X_s)\bigr]\uptau(\mathrm{d} s).\end{equation} Compare this inequality to \cref{eq:MR}.
Here,
$\mathcal{V}\colon\mathsf{S}\to\R_+$ is measurable and bounded, $\kappa\in\R$ and $\mathcal{C}\in\mathfrak{B}(\mathsf{S})$ is such that there are an atom $\alpha$ for $\{X_t\}_{t\in\mathbb{T}}$, $t_0\in\mathbb{T}$ and a non-trivial measure $\upnu(\mathrm{d} x)$ on $\mathfrak{B}(\mathsf{S})$, such that $\alpha\subseteq \mathcal{C}$, $\upnu(\alpha)>0$ and $\mathcal{P}^{t_0}(x,B)\ge\upnu(B)$ for all $x\in \mathcal{C}$ and $B\in\mathfrak{B}(\mathsf{S})$.
Recall that a set $\alpha\in\mathfrak{B}(\mathsf{S})$ is called an atom for $\{X_t\}_{t\in\mathbb{T}}$ if $\mathcal{P}^t(x,B)=\mathcal{P}^t(y,B)$ for all $x,y\in\alpha$, $t\in\mathbb{T}$ and $B\in\mathfrak{B}(\mathsf{S})$. Let us remark here that this result can be slightly generalized, i.e.\ the conclusions of \cite[Proposition 1 and Theorem 3]{Liu-Liu-2021}, and then also the main results (Hoeffding's inequality) given in \cite[Theorems 1 and 2]{Liu-Liu-2021}, remain valid by assuming \cref{eq:LR} with $\mathcal{C}$ being a petite set for $\{X_t\}_{t\in\mathbb{T}}$. Namely, under these assumptions it has been shown in \cite[Theorems 2.3 and 3.2]{Glynn-Meyn-1996} that the solution to the corresponding stochastic Poisson equation (see \cref{eq:PE}) is uniformly bounded, which is the main step in the proof of \cite[Theorems 1 and 2]{Liu-Liu-2021}.
Recall that a set $C\in\mathfrak{B}(\mathsf{S})$ is said to be petite for $\{X_t\}_{t\in\mathbb{T}}$ if there are a probability measure $\upchi(\mathrm{d} t)$ on $\mathbb{T}$ and a non-trivial measure $\upmu(\mathrm{d} x)$ on $\mathfrak{B}(\mathsf{S})$, such that $\int_\mathbb{T}\mathcal{P}^t(x,B)\upchi(\mathrm{d} t)\ge\upmu(B)$ for all $x\in C$ and $B\in\mathfrak{B}(\mathsf{S})$. It is evident that the set $\mathcal{C}$ used in \cref{eq:LR} is petite for $\{X_t\}_{t\in\mathbb{T}}$.
\section{Proof of \Cref{TM}}\label{S4}
In this section, we prove \Cref{TM}. We follow and adapt the approach from \cite{Glynn-Ormoneit-2002}. By the Kantorovich--Rubinstein theorem we have that $$\left|\mathbb{E}_x\bigl[f(X_t)-\uppi(f)\bigr]\right|\le \mathrm{Lip}(f) \mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\uppi(\cdot)\bigr).$$ Hence, according to \cref{eq:TM} it follows that $$\hat{f}(x)\coloneqq\int_{\mathbb{T}} \mathbb{E}_x\bigl[f(X_t)-\uppi(f)\bigr]\uptau(\mathrm{d} t)$$ is well defined and bounded. Furthermore, it clearly solves the stochastic Poisson equation
\begin{equation}\label{eq:PE}\mathbb{E}_x\bigl[\hat{f}(X_t)\bigr]-\hat{f}(x)=-\int_{[0,t)}\mathbb{E}_x\bigl[f(X_s)-\uppi(f)\bigr]\uptau(\mathrm{d} s),\end{equation} which in turn implies that $$M_t\coloneqq \hat{f}(X_t)-\hat{f}(X_0)+\int_{[0,t)}\bigl(f(X_s)-\uppi(f)\bigr)\uptau(\mathrm{d} s),\qquad t\in\mathbb{T},$$ is a bounded martingale.
Namely, for $s<t$ it follows that $$
\mathbb{E}_x\bigl[M_t|\mathcal{F}_s\bigr]=M_s+\mathbb{E}_{X_s}\bigl[\hat f(X_{t-s})\bigr]-\hat f(X_s)+\int_{[0,t-s)}\left(\mathbb{E}_{X_s}\bigl[f(X_u)\bigr]-\uppi(f)\right)\uptau(\mathrm{d} u)=M_s.
$$
By employing the Markov inequality, for any $\varepsilon>0$ and $\theta\ge0$ it follows that
\begin{align*}
\Prob_x\bigl( S_{t-1}-\uppi(f)t>t\varepsilon\bigr)&\le \mathrm{e}^{-\theta\varepsilon t}\mathbb{E}_x\bigl[\mathrm{e}^{\theta(S_{t-1}-\uppi(f)t)}\bigr]\\&=\mathrm{e}^{-\theta\varepsilon t}\mathbb{E}_x\bigl[\mathrm{e}^{\theta(M_t-\hat{f}(X_t)+\hat{f}(X_0))}\bigr]\\&\le\mathrm{e}^{-\theta\varepsilon t+2\theta\lVert \hat f\rVert_\infty}\mathbb{E}_x\left[\exp\left\{\theta\left(\sum_{s=1}^{\lfloor t\rfloor}(M_s-M_{s-1})+M_t-M_{\lfloor t\rfloor}\right)\right\}\right].
\end{align*}
Observe that when $\mathbb{T}=\ZZ_+$, then $t=\lfloor t\rfloor$. Further, it clearly holds that $$|M_s-M_{s-1}|\le 2\lVert \hat f\rVert_\infty+2\lVert f\rVert_\infty\qquad \textrm{and}\qquad |M_t-M_{\lfloor t\rfloor}|\le 2\lVert \hat f\rVert_\infty+2\lVert f\rVert_\infty,$$ and
from the proof of \cite[Lemma 8.1]{Devroye-Gyorfi-Lugosi-Book-1996} it then follows that
$$\mathbb{E}_x\left[\mathrm{e}^{\theta(M_s-M_{s-1})}\,\middle|\,\mathcal{F}_{s-1}\right]\le \mathrm{e}^{2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)}\qquad \text{and} \qquad \mathbb{E}_x\left[\mathrm{e}^{\theta(M_t-M_{\lfloor t\rfloor})}\,\middle|\,\mathcal{F}_{\lfloor t\rfloor}\right]\le \mathrm{e}^{2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)}.$$
Thus,
$$\mathbb{E}_x\left[\exp\left\{\theta\left(\sum_{s=1}^{\lfloor t\rfloor}(M_s-M_{s-1})+M_t-M_{\lfloor t\rfloor}\right)\right\}\right]\le \begin{cases}
\mathrm{e}^{2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)t}, & \mathbb{T}=\ZZ_+,\\
\mathrm{e}^{2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)(t+1)}, & \mathbb{T}=\R_+.
\end{cases}$$
We then have
$$
\Prob_x\bigl(S_{t-1}-\uppi(f)t>t\varepsilon\bigr)\le
\begin{cases}
\mathrm{e}^{-\theta\varepsilon t+2\theta\lVert \hat f\rVert_\infty+2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)t}, & \mathbb{T}=\ZZ_+,\\
\mathrm{e}^{-\theta\varepsilon t+2\theta\lVert \hat f\rVert_\infty+2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)(t+1)}, & \mathbb{T}=\R_+.
\end{cases}$$ Analogously we conclude that $$
\Prob_x\bigl(S_{t-1}-\uppi(f)t<-t\varepsilon\bigr)\le
\begin{cases}
\mathrm{e}^{-\theta\varepsilon t+2\theta\lVert \hat f\rVert_\infty+2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)t}, & \mathbb{T}=\ZZ_+,\\
\mathrm{e}^{-\theta\varepsilon t+2\theta\lVert \hat f\rVert_\infty+2\theta^2(\lVert \hat f\rVert_\infty+\lVert f\rVert_\infty)(t+1)}, & \mathbb{T}=\R_+.
\end{cases}$$
Finally, using $\lVert\hat f\rVert_{\infty}\le\mathrm{Lip}(f)\gamma$ (recall that $\gamma=\sup_{x\in \mathsf{S}}\int_{\mathbb{T}}\mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\uppi(\cdot)\bigr)\uptau(\mathrm{d} t)$) and optimizing over $\theta$ we obtain \cref{eq:TM1}.
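For the reader's convenience, we record the elementary optimization in the discrete-time case (continuous time is identical with $t$ replaced by $t+1$). For $\varepsilon t>2\lVert\hat f\rVert_\infty$, the exponent is minimized at

```latex
$$\theta^{*}=\frac{\varepsilon t-2\lVert\hat f\rVert_\infty}{4(\lVert\hat f\rVert_\infty+\lVert f\rVert_\infty)t},
\qquad\text{giving the exponent}\qquad
-\frac{(\varepsilon t-2\lVert\hat f\rVert_\infty)^2}{8(\lVert\hat f\rVert_\infty+\lVert f\rVert_\infty)t},$$
```

which is non-decreasing in $\lVert\hat f\rVert_\infty$ on this range; hence, provided also $\varepsilon t>2\,\mathrm{Lip}(f)\gamma$, it may be bounded from above by substituting $\lVert\hat f\rVert_\infty\le\mathrm{Lip}(f)\gamma$.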
\section{Examples}\label{S5}
In this section, we discuss several examples of non-irreducible Markov models satisfying conditions of \Cref{TM}.
\begin{example}[Deterministic SDE] \label{EX1}{\rm Consider the following one-dimensional (deterministic) SDE:
\begin{align*}\mathrm{d} X_t&=-|X_t|^\alpha\mathrm{d} t\\
X_0&=x\in[-1,1]
\end{align*} with $\alpha\in[1,2)$.
The SDE is well-posed and admits a unique strong solution, which is a conservative strong Markov process with continuous sample paths on $\mathsf{S}=[-1,1]$ (endowed with the standard Euclidean metric $\mathsf{d}(x,y)=|x-y|$ and Borel $\sigma$-algebra $\mathfrak{B}([-1,1])$). When $\alpha=1$ the solution is given by $X_t=x\mathrm{e}^{-t}$, and when $\alpha\in(1,2)$ it is given by
$$X_t=
\frac{x}{((\alpha-1)|x|^{\alpha-1}t+1)^{1/(\alpha-1)}}.$$
Furthermore, it clearly holds that $\mathcal{P}^t(x,\mathrm{d} y)=\updelta_{X_t}(\mathrm{d} y)$, and the unique invariant probability measure of $\{X_t\}_{t\in\R_+}$ is $\updelta_{0}(\mathrm{d} y)$. Here, $\updelta_{x}(\mathrm{d} y)$ stands for the Dirac delta measure at $x\in\mathsf{S}$. We now have that
\begin{align*} \mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\updelta_0(\cdot)\bigr)=|X_t|\le\begin{cases}
\mathrm{e}^{-t}, & \alpha=1,\\
\frac{1}{((\alpha-1)t+1)^{1/(\alpha-1)}}, & \alpha\in(1,2).
\end{cases} \end{align*}
Thus,
$$\sup_{x\in \mathsf{S}}\int_{0}^\infty \mathscr{W}\bigl(\mathcal{P}^{t}(x,\cdot),\updelta_0(\cdot)\bigr)\mathrm{d} t<\infty,$$ which shows that the condition in \cref{eq:TM} is satisfied and we can apply \Cref{TM} with any Lipschitz function $f:[-1,1]\to\R$. Observe also that $\{X_t\}_{t\in\R_+}$ is not irreducible and $\mathcal{P}^{t}(x,\mathrm{d} y)$ cannot converge to $\updelta_0(\mathrm{d} y)$, as $t\to\infty$, in the total variation distance. \qed
}
\end{example}
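The explicit formulas above are easy to check numerically. The following sketch (our own illustration, not part of the argument) verifies that $|X_t|$ stays below the uniform-in-$x$ envelope used for \cref{eq:TM}:

```python
import math

def X(t, x, alpha):
    """Explicit solution of dX_t = -|X_t|^alpha dt, X_0 = x in [-1, 1]."""
    if alpha == 1.0:
        return x * math.exp(-t)
    return x / ((alpha - 1.0) * abs(x) ** (alpha - 1.0) * t + 1.0) ** (1.0 / (alpha - 1.0))

def envelope(t, alpha):
    """Uniform-in-x bound on W(P^t(x, .), delta_0) = |X_t| quoted in the text."""
    if alpha == 1.0:
        return math.exp(-t)
    return ((alpha - 1.0) * t + 1.0) ** (-1.0 / (alpha - 1.0))

# the envelope dominates |X_t| for every initial point |x| <= 1
for alpha in (1.0, 1.5, 1.9):
    for x0 in (-1.0, -0.3, 0.7, 1.0):
        for t in (0.0, 1.0, 10.0, 100.0):
            assert abs(X(t, x0, alpha)) <= envelope(t, alpha) + 1e-12
```

The domination reduces to $|x|^{\alpha-1}\le1$, which is exactly why the supremum over $x\in[-1,1]$ in \cref{eq:TM} stays finite.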
We now give two examples of discrete-time Markov models satisfying conditions (i)-(iv) from \Cref{S2}. We first consider an autoregressive model of order one (see e.g.\ \cite{Meyn-Tweedie-Book-2009}).
\begin{example}[Autoregressive model]\label{EX2}{\rm Let $X_0$ and $\{\xi_i\}_{i\geq1}$ be random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, such that $X_0$ is independent of $\{\xi_i\}_{i\geq1}$, $\{\xi_i\}_{i\geq1}$ is an i.i.d.\ sequence, $\mathbb{P}(X_0\in[0,1])=1$ and $\mathbb{P}(\xi_i=0)=\mathbb{P}(\xi_i=1/2)=1/2$. Define $$X_{t+1}\coloneqq\frac{1}{2}X_t+\xi_{t+1}.$$
Clearly, $\{X_t\}_{t\in\ZZ_+}$ is a Markov chain on $\mathsf{S}=[0,1]$ (endowed with the standard Euclidean metric $\mathsf{d}(x,y)=|x-y|$ and Borel $\sigma$-algebra $\mathfrak{B}([0,1])$) with transition function $\mathcal{P}(x,\mathrm{d} y)=\mathbb{P}(\xi_i+x/2\in \mathrm{d} y)$.
Observe that $\{X_t\}_{t\in\ZZ_+}$ is not irreducible. Namely, for $x\in[0,1]\cap\mathbb{Q}$ it holds that $\mathcal{P}^t(x,[0,1]\cap\mathbb{Q}^c)=0$ for all $t\ge1$, and analogously for $x\in[0,1]\cap\mathbb{Q}^c$ it holds that $\mathcal{P}^t(x,[0,1]\cap\mathbb{Q})=0$ for all $t\ge1$.
Next, a straightforward computation shows that $$\mathscr{W}\bigl(\mathcal{P}(x,\cdot),\mathcal{P}(y,\cdot)\bigr)\le\frac{1}{2}\mathsf{d}(x,y),$$ and since
\begin{equation}\label{CONT} \begin{aligned}\mathscr{W}\bigl(\mathcal{P}^{2}(x,\cdot),\mathcal{P}^{2}(y,\cdot)\bigr)&\le \inf_{\Pi\in\mathcal{C}(\mathcal{P}(x,\cdot),\mathcal{P}(y,\cdot))}
\int_{\mathsf{S}\times\mathsf{S}}\mathscr{W}\bigl(\mathcal{P}(u,\cdot),\mathcal{P}(v,\cdot)\bigr)
\Pi(\mathrm{d}{u},\mathrm{d}{v})\\&\le\frac{1}{2}\inf_{\Pi\in\mathcal{C}(\mathcal{P}(x,\cdot),\mathcal{P}(y,\cdot))}
\int_{\mathsf{S}\times\mathsf{S}}\mathsf{d}(u,v)
\Pi(\mathrm{d}{u},\mathrm{d}{v})\\
&=\frac{1}{2} \mathscr{W}\bigl(\mathcal{P}(x,\cdot),\mathcal{P}(y,\cdot)\bigr),\end{aligned}
\end{equation}
we conclude that $$\mathscr{W}\bigl(\mathcal{P}^t(x,\cdot),\mathcal{P}^t(y,\cdot)\bigr)\le\frac{1}{2^t}\mathsf{d}(x,y).$$
Thus,
condition (ii) from \Cref{S2} holds with $\rho\le1/2$. Conditions (iii) and (iv) trivially hold by taking $\kappa=1$, $\mathcal{V}(x)\equiv1$ and $\phi(t)=t$. Hence, we can apply \Cref{TM} to $\{X_t\}_{t\in\ZZ_+}$ and any Lipschitz function $f:[0,1]\to\R$. Observe also that $\mathrm{Leb}(\mathrm{d} y)$ (on $\mathfrak{B}([0,1])$) is the (unique) invariant probability measure for $\{X_t\}_{t\in\ZZ_+}$, which is singular with respect to $\mathcal{P}^{t}(x,\mathrm{d} y)$ for any $t\in\ZZ_+$ and $x\in[0,1]$. Hence, $\mathcal{P}^{t}(x,\mathrm{d} y)$ cannot converge to $\mathrm{Leb}(\mathrm{d} y)$, as $t\to\infty$, in the total variation distance.
Let us remark here that from \cite[Theorem 2.1]{Butkovsky-2014} it follows that for any $\epsilon\in(0,1)$ there are $c_1(\epsilon),c_2(\epsilon)>0$, such that $$\mathscr{W}\bigl(\mathcal{P}^t(x,\cdot),\mathrm{Leb}(\cdot)\bigr)\le c_1(\epsilon) \mathrm{e}^{-c_2(\epsilon) t}.$$
\qed }
\end{example}
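To illustrate \Cref{TM} in this setting, one can simulate the chain and check that the ergodic average of the Lipschitz function $f(x)=x$ concentrates around $\uppi(f)=1/2$. The following sketch is purely illustrative (the seed and run length are arbitrary choices of ours):

```python
import random

random.seed(42)
x, n, total = 0.0, 200_000, 0.0
for _ in range(n):
    total += x                        # accumulate f(X_t) with f(x) = x, Lip(f) = 1
    x = 0.5 * x + random.choice((0.0, 0.5))
avg = total / n
# the invariant measure is Leb on [0,1], so pi(f) = 1/2
assert abs(avg - 0.5) < 0.01
```

Note that the chain never leaves $[0,1]$, and the Wasserstein contraction by $2^{-t}$ makes the transient from $X_0=0$ negligible after a few steps.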
We now discuss a simple symmetric random walk on the torus.
\begin{example}[Random walk on the torus]\label{EX3}{\rm Let $Y_0$ and $\{\xi_i\}_{i\geq1}$ be random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, such that $Y_0$ is independent of $\{\xi_i\}_{i\geq1}$, $\{\xi_i\}_{i\geq1}$ is an i.i.d.\ sequence and $\mathbb{P}(\xi_i=-1)=\mathbb{P}(\xi_i=1)=1/2$. Define $$Y_{t+1}\coloneqq Y_t+\xi_{t+1}.$$ Clearly, $\{Y_t\}_{t\in\ZZ_+}$ is a Markov model on $(\R,\mathfrak{B}(\R))$. Denote the corresponding transition function by $\mathcal{P}^t_Y(x,\mathrm{d} y)$. Next, for $x\in\R$ let
$$[x]\coloneqq\bigl\{y\in\R\colon x-y\in2\pi\ZZ\bigr\},\qquad\textrm{and}\qquad
\mathbb{S}^1\coloneqq\bigl\{[x]\colon x\in\R\bigr\}.$$
Clearly, $\mathbb{S}^1$ is obtained by identifying the endpoints of the interval $[0,2\pi]$.
The corresponding Borel $\sigma$-algebra is denoted by $\mathfrak{B}(\mathbb{S}^1)$, which can be identified with the sub-$\sigma$-algebra of $\mathfrak{B}(\R)$ of sets of the form $\bigcup_{k\in2\pi\ZZ}\{x+k\colon x\in B\}$ for $B\in\mathfrak{B}([0,2\pi])$.
The covering map $\R\ni x\mapsto [x]\in\mathbb{S}^1$ is denoted by
$\Pi(x)$. The projection of $\{Y_t\}_{t\in\ZZ_+}$, with respect to $\Pi(x)$, on the torus $\mathbb{S}^1$, denoted by $\{X_t\}_{t\in\ZZ_+}$, is a Markov model on $(\mathbb{S}^1,\mathfrak{B}(\mathbb{S}^1))$ with transition kernel given by
\begin{equation*}
\mathcal{P}^t(x,B)= \mathcal{P}_Y^t\bigl(z_x,\Pi^{-1}(B)\bigr)\end{equation*} for $x\in \mathbb{S}^1$, $B \in \mathfrak{B}(\mathbb{S}^1)$ and $z_x \in \Pi^{-1}(\{x\})$. Denote by
$\mathsf{d}(x,y)$ the arc-length metric on $\mathbb{S}^1$. It is evident that $\mathfrak{B}(\mathbb{S}^1)$ is generated by this metric.
It is also clear that $\{X_t\}_{t\in\ZZ_+}$ is not irreducible. For example, for $x=[1]$ it holds that $\mathcal{P}^t(x,\Pi(\{k+2\ell\pi\colon k,\ell\in\ZZ\}^c))=0$ for all $t\ge1$, and analogously for $x=[\sqrt{2}]$ it holds that $\mathcal{P}^t(x,\Pi(\{k+2\ell\pi\colon k,\ell\in\ZZ\}))=0$ for all $t\ge1$.
A straightforward computation shows that $$\mathscr{W}\bigl(\mathcal{P}(x,\cdot),\mathcal{P}(y,\cdot)\bigr)\le\frac{1}{2}\mathsf{d}(x,y),$$ and similarly as in \cref{CONT}
we conclude that $$\mathscr{W}\bigl(\mathcal{P}^t(x,\cdot),\mathcal{P}^t(y,\cdot)\bigr)\le\frac{1}{2^t}\mathsf{d}(x,y),$$
which is exactly
condition (ii) from \Cref{S2} (with $\rho\le1/2$). As in \Cref{EX2}, conditions (iii) and (iv) trivially hold by taking $\kappa=1$, $\mathcal{V}(x)\equiv1$ and $\phi(t)=t$. Hence, we can apply \Cref{TM} to $\{X_t\}_{t\in\ZZ_+}$ and any Lipschitz function $f:\mathbb{S}^1\to\R$.
Similarly as in the previous example, $\mathrm{Leb}(\mathrm{d} y)$ (on $\mathfrak{B}(\mathbb{S}^1)$) is the (unique) invariant probability measure for $\{X_t\}_{t\in\ZZ_+}$, which is singular with respect to $\mathcal{P}^{t}(x,\mathrm{d} y)$ for any $t\in\ZZ_+$ and $x\in\mathbb{S}^1$. Hence, $\mathcal{P}^{t}(x,\mathrm{d} y)$ cannot converge to $\mathrm{Leb}(\mathrm{d} y)$, as $t\to\infty$, in the total variation distance.
From \cite[Theorem 2.1]{Butkovsky-2014} it follows that for any $\epsilon\in(0,1)$ there are $c_1(\epsilon),c_2(\epsilon)>0$, such that $$\mathscr{W}\bigl(\mathcal{P}^t(x,\cdot),\mathrm{Leb}(\cdot)\bigr)\le c_1(\epsilon) \mathrm{e}^{-c_2(\epsilon) t}.$$\qed
}
\end{example}
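As in the previous example, the conclusion of \Cref{TM} can be probed numerically. Since $\cos$ is well defined on $\mathbb{S}^1$ and Lipschitz with respect to the arc-length metric, the ergodic average of $f([y])=\cos(y)$ should concentrate around $\uppi(f)=0$. The sketch below (seed and run length are our own arbitrary choices) simulates the walk before projection, which leaves $\cos$ unchanged:

```python
import math, random

random.seed(7)
y, n, acc = 0.0, 400_000, 0.0
for _ in range(n):
    acc += math.cos(y)                 # f([y]) = cos(y): a Lipschitz function on S^1
    y += random.choice((-1.0, 1.0))    # one step of Y_t; projection leaves cos unchanged
avg = acc / n
# the invariant measure is normalized Lebesgue measure on S^1, so pi(f) = 0
assert abs(avg) < 0.05
```

Since $1$ and $2\pi$ are incommensurable, the projected walk spreads over the whole circle, even though it is supported on a countable set at every finite time.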
\bigskip
Finally, we remark that a typical way of obtaining new Markov models from a given one is through a random time-change method.
Recall that a subordinator $\{S_t\}_{t\in\mathbb{T}_S}$ is a non-decreasing stochastic process
on $\R_+$ with stationary and independent increments, which is right-continuous in the case $\mathbb{T}_S=\R_+$. If $\mathbb{T}_S=\ZZ_+$, $\{S_t\}_{t\in\mathbb{T}_S}$ is a random walk; and if $\mathbb{T}_S=\R_+$, it is a L\'evy process.
Let now $\{X_t\}_{t\in\mathbb{T}}$ be a Markov model with
transition kernel $\mathcal{P}^t(x,\mathrm{d} y)$, and let
$\{S_t\}_{t\in\mathbb{T}_S}$ be a subordinator
independent of $\{X_t\}_{t\in\mathbb{T}}$. If $\mathbb{T}=\ZZ_+$, we assume that $\{S_t\}_{t\in\mathbb{T}_S}$ takes values in $\ZZ_+$.
The process $X^{S}_t\coloneqq X_{S_t}$, obtained from $\{X_t\}_{t\in\mathbb{T}}$ by
a random time change through $\{S_t\}_{t\in\mathbb{T}_S}$, is referred to as the process subordinate
to $\{X_t\}_{t\in\mathbb{T}}$ with subordinator $\{S_t\}_{t\in\mathbb{T}_S}$.
It is easy to see that $\{X^S_t\}_{t\in\mathbb{T}_S}$ is again a Markov model with
transition kernel
\begin{equation*}\mathcal{P}_S^t(x,\mathrm{d} y)=\int_{\mathbb{T}_S} \mathcal{P}^s(x,\mathrm{d} y)\,\upmu_t(\mathrm{d} s),\end{equation*}
where $\upmu_t(\mathrm{d} s)=\mathbb{P}(S_t\in\mathrm{d} s)$.
It is also elementary to check that if $\uppi(\mathrm{d} x)$ is an invariant probability measure for
$\{X_t\}_{t\in\mathbb{T}}$, then it is also invariant for the subordinate process $\{X^S_t\}_{t\in\mathbb{T}_S}$.
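Indeed, invariance passes to the subordinate process by Fubini's theorem:

```latex
$$\int_{\mathsf{S}}\mathcal{P}_S^{t}(x,B)\,\uppi(\mathrm{d} x)
=\int_{\mathbb{T}_S}\int_{\mathsf{S}}\mathcal{P}^{s}(x,B)\,\uppi(\mathrm{d} x)\,\upmu_t(\mathrm{d} s)
=\int_{\mathbb{T}_S}\uppi(B)\,\upmu_t(\mathrm{d} s)
=\uppi(B),\qquad B\in\mathfrak{B}(\mathsf{S}).$$
```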
Furthermore, in \cite[Proposition 1.1]{Arapostathis-Pang-Sandric-2020} it has been shown that if
$\mathscr{W}(\mathcal{P}^t(x,\cdot),\uppi(\cdot))\le c(x)r(t)$ for some Borel measurable $c\colon\mathsf{S}\to\R_+$ and $r\colon\mathbb{T}\to\R_+$,
then
\begin{equation*}
\mathscr{W}\bigl(\mathcal{P}_S^t(x,\cdot),\uppi(\cdot)\bigr)\le c(x) \mathbb{E}\bigl[r(S_t)\bigr].
\end{equation*}
Let us now apply this method to Markov models from \Cref{EX1,EX2,EX3}. Assume first that $\mathbb{T}_S=\ZZ_+$. In particular, this means that $\{S_t\}_{t\in\ZZ_+}$ is given as $S_t=S_{t-1}+\xi_t$, where $S_0=0$ and $\{\xi_i\}_{i\ge1}$ is a sequence of i.i.d.\ non-negative integer-valued random variables. Assume additionally that $\Prob(\xi_i=0)=0$. This procedure is sometimes referred to as discrete subordination and it was introduced in \cite{Bendikov-Saloff-Coste-2012}. Then, in order to apply \Cref{TM} to $\{X^S_t\}_{t\in\ZZ_+}$, it suffices to show that $\sum_{t\in\ZZ_+} \mathbb{E}[r(S_t)]<\infty.$
Observe that in the case of \Cref{EX1} we have that $c(x)=1$ and $$r(t)=\begin{cases}
\mathrm{e}^{-t}, & \alpha=1,\\
\frac{1}{((\alpha-1)t+1)^{1/(\alpha-1)}}, & \alpha\in(1,2),
\end{cases}$$ while in \Cref{EX2,EX3}, for fixed $\epsilon\in(0,1)$, $c(x)=c_1(\epsilon)$ and $r(t)=\mathrm{e}^{-c_2(\epsilon)t}.$
Since $\xi_i\ge1$ almost surely, and hence $S_t\ge t$, we now have
\begin{align*}\sum_{t\in\ZZ_+} \mathbb{E}\bigl[r(S_t)\bigr]&=
\begin{cases}
\sum_{t\in\ZZ_+}\left(\mathbb{E}\bigl[\mathrm{e}^{-\xi_1}\bigr]\right)^t, & \text{\Cref{EX1} with }\alpha=1,\\
\sum_{t\in\ZZ_+}\mathbb{E}\bigl[\frac{1}{((\alpha-1)S_t+1)^{1/(\alpha-1)}}\bigr], & \text{\Cref{EX1} with }\alpha\in(1,2),\\
\sum_{t\in\ZZ_+}\left(\mathbb{E}\bigl[\mathrm{e}^{-c_2(\epsilon)\xi_1}\bigr]\right)^t,& \text{\Cref{EX2,EX3}},
\end{cases}\\
&\le\begin{cases}
\sum_{t\in\ZZ_+}\mathrm{e}^{-t}, & \text{\Cref{EX1} with }\alpha=1,\\
\sum_{t\in\ZZ_+}\frac{1}{((\alpha-1)t+1)^{1/(\alpha-1)}}, & \text{\Cref{EX1} with }\alpha\in(1,2),\\
\sum_{t\in\ZZ_+}\mathrm{e}^{-c_2(\epsilon)t} ,& \text{\Cref{EX2,EX3}}.
\end{cases}\end{align*}
Thus, we can apply \Cref{TM} to $\{X^S_t\}_{t\in\ZZ_+}$ and any Lipschitz function $f:[0,1]\to\R$, $f:[-1,1]\to\R$ and, respectively, $f:\mathbb{S}^1\to\R.$ Observe also that in all three cases $\{X^S_t\}_{t\in\ZZ_+}$ is not irreducible and the corresponding transition function cannot converge to the invariant probability measure in the total variation distance.
Let now $\mathbb{T}_S=\R_+$. In this case, the
Laplace transform of $\{S_t\}_{t\in\R_+}$ takes the form
$\mathbb{E}[\mathrm{e}^{-uS_t}] = \mathrm{e}^{-t\psi(u)}$.
The characteristic (Laplace) exponent $\psi\colon(0,\infty)\to(0,\infty)$
is a Bernstein function, i.e.\ it is of class $C^\infty$ and
$(-1)^n\psi^{(n)}(u)\ge0$ for all $n\in\ZZ_+$.
It is well known that every Bernstein function admits a unique
(L\'{e}vy-Khintchine) representation
\begin{equation*}\psi(u)=bu+\int_{(0,\infty)}(1-\mathrm{e}^{-uy})\,\upnu(\mathrm{d} y),\end{equation*}
where $b\geq0$ is the drift parameter and $\upnu(\mathrm{d} y)$ is a L\'{e}vy measure,
i.e.\ a Borel measure on $\mathfrak{B}\bigl((0,\infty)\bigr)$ satisfying
$\int_{(0,\infty)}(1\wedge y)\,\upnu(\mathrm{d} y)<\infty$.
For additional reading on Bernstein functions we refer the reader to the
monograph \cite{Schilling-Song-Vondracek-Book-2012}.
Let now $\{S_t\}_{t\in\R_+}$ be the Poisson process (with parameter $\lambda>0$) as the simplest (non-trivial) continuous-time subordinator. Observe that in this case $b=0$ and $\upnu(\mathrm{d} y)=\lambda \updelta_1(\mathrm{d} y).$ We then have
\begin{align*}\int_0^\infty \mathbb{E}\bigl[r(S_t)\bigr]\mathrm{d} t&=
\begin{cases}
\int_0^\infty\sum_{n\in\ZZ_+}\mathrm{e}^{-n}\frac{(\lambda t)^n}{n!}\mathrm{e}^{-\lambda t}\mathrm{d} t, & \text{\Cref{EX1} with }\alpha=1,\\
\int_0^\infty\sum_{n\in\ZZ_+}\frac{1}{((\alpha-1)n+1)^{1/(\alpha-1)}}\frac{(\lambda t)^n}{n!}\mathrm{e}^{-\lambda t}\mathrm{d} t, & \text{\Cref{EX1} with }\alpha\in(1,2),\\
\int_0^\infty\sum_{n\in\ZZ_+}\mathrm{e}^{-c_2(\epsilon)n}\frac{(\lambda t)^n}{n!}\mathrm{e}^{-\lambda t}\mathrm{d} t,& \text{\Cref{EX2,EX3}},
\end{cases}\\
&=\begin{cases}
\frac{\mathrm{e}}{\lambda(\mathrm{e}-1)}, & \text{\Cref{EX1} with }\alpha=1,\\
\frac{1}{\lambda}\sum_{n\in\ZZ_+}\frac{1}{((\alpha-1)n+1)^{1/(\alpha-1)}}, & \text{\Cref{EX1} with }\alpha\in(1,2),\\
\frac{\mathrm{e}^{c_2(\epsilon)}}{\lambda(\mathrm{e}^{c_2(\epsilon)}-1)} ,& \text{\Cref{EX2,EX3}}.
\end{cases}\end{align*}
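The closed forms above rest on the identity $\int_0^\infty \frac{(\lambda t)^n}{n!}\mathrm{e}^{-\lambda t}\mathrm{d} t=1/\lambda$ for every $n$, which collapses the $\alpha=1$ case to a geometric series. A quick numerical sanity check (the rate $\lambda=2$ is an arbitrary choice of ours):

```python
import math

lam = 2.0  # an arbitrary Poisson rate, for illustration

def gamma_weight(n, lam, T=60.0, steps=60_000):
    """Midpoint rule for the integral of (lam*t)^n/n! * exp(-lam*t) over [0, T]."""
    h = T / steps
    return h * sum((lam * (k + 0.5) * h) ** n / math.factorial(n)
                   * math.exp(-lam * (k + 0.5) * h) for k in range(steps))

# each Poisson weight integrates (over t) to 1/lam, independently of n ...
for n in range(4):
    assert abs(gamma_weight(n, lam) - 1.0 / lam) < 1e-3

# ... so the alpha = 1 case collapses to a geometric series with sum e/(lam*(e-1))
series = sum(math.exp(-n) for n in range(200)) / lam
assert abs(series - math.e / (lam * (math.e - 1.0))) < 1e-12
```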
Thus, we can again apply \Cref{TM} to $\{X^S_t\}_{t\in\R_+}$ and any Lipschitz function $f:[-1,1]\to\R$, $f:[0,1]\to\R$ and, respectively, $f:\mathbb{S}^1\to\R.$ In all three cases $\{X^S_t\}_{t\in\R_+}$ is not irreducible and the corresponding transition function cannot converge to the invariant probability measure in the total variation distance.
So far we have considered subordinators taking values in $\ZZ_+$ only. However, the Markov model from \Cref{EX1} can be subordinated by more general subordinators. In order to apply \Cref{TM} to such processes we again need to guarantee that \begin{equation}\label{R}\int_0^\infty\mathbb{E}\bigl[r(S_t)\bigr]\mathrm{d} t<\infty.\end{equation}
Note that for $\alpha\in[1,2)$ we have that $r(t)\le c(1+t)^{-\beta}$ for some $c>0$ and $\beta>1$ (possibly depending on $\alpha$). From \cite[Theorem 1.1 and Lemma 3.1]{Deng-Schilling-Song-2017} we know that if \begin{equation}\label{R2}\liminf_{u\to\infty}\frac{\psi(u)}{\log u}>0\qquad\text{and}\qquad \liminf_{u\to0}\frac{\psi(\rho u)}{\psi(u)}>1\end{equation} for some $\rho>1$, then $$\mathbb{E}\bigl[r(S_t)\bigr]\le c\left(1\wedge\psi^{-1}(1/t)\right)^{1/(\alpha-1)}.$$
Hence, \cref{R} holds if \cref{R2} and $$\int_0^\infty\left(1\wedge\psi^{-1}(1/t)\right)^{1/(\alpha-1)}\mathrm{d} t<\infty$$ hold true. Typical examples of such characteristic exponents (subordinators) are given by $\psi(u)=u^\gamma$ for $\gamma\in(0,1)$ (the $\gamma$-stable subordinator) and $\psi(u)=\log (1+u)$ (the geometric $1$-stable subordinator).
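To illustrate (this computation is ours), take the $\gamma$-stable subordinator $\psi(u)=u^{\gamma}$ with $\gamma\in(0,1)$ and $\alpha\in(1,2)$: both conditions in \cref{R2} hold, $\psi^{-1}(1/t)=t^{-1/\gamma}$, and

```latex
$$\int_0^\infty\left(1\wedge\psi^{-1}(1/t)\right)^{1/(\alpha-1)}\mathrm{d} t
=1+\int_1^\infty t^{-\frac{1}{\gamma(\alpha-1)}}\,\mathrm{d} t<\infty,$$
```

since $1\wedge\psi^{-1}(1/t)=1$ for $t\le1$, while $\gamma(\alpha-1)<1$ makes the exponent on $(1,\infty)$ strictly smaller than $-1$.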
\section*{Acknowledgements}
Financial support through \textit{Alexander von Humboldt Foundation} (No. HRV 1151902 HFST-E) and \textit{Croatian Science Foundation} under project 8958 (for N.\ Sandri\'c), and the \textit{Croatian Science Foundation} under project 4197 (for S.\ \v Sebek) is gratefully acknowledged.
\bibliographystyle{alpha}
\section{Introduction}
How do the excitations of a quantum field above a given vacuum, the {\it quasiparticles},
propagate in a system
which has nonlocal and anisotropic particle interactions? This seemingly simple question
can be connected in fact to a plethora of related issues
in different branches of physics and mathematics because
of the very nature of the particle interactions.
Indeed, in early quantum field theory, nonlocal alternatives for field theories were sought by the scientific community within the program of ``handling divergences'' \cite{Efimov,Cornish}, and foundations of such nonlocal quantum field theories were extensively studied, with particular emphasis on S-matrix properties cf., e.g., Refs.~\cite{Yukawa1,*Yukawa2,Yennie,Efimov2}. However, the success and simplicity in studying low-energy phenomena afforded by the renormalization program of local quantum field theories
eventually won as the primary paradigm. Nevertheless, instances of such nonlocal field theories appear, e.g., as necessary tools in investigating electromagnetic phenomena in material media \cite{Huttner,Matloob1,Matloob2,Lang}, for studying trans-Planckian physics \cite{Briscese} and the universality of Hawking radiation \cite{Unruh2005}, and to assess the influence of boundaries \cite{Amooghorban,Saravani_2018,Saravani_2019}.
To study such nonlocal field theories in the lab, dipolar Bose-Einstein condensates (BECs)
\cite{Goral}, realized in the quantum optical context of
ultracold gases, cf., e.g., \cite{Chromium,PhysRevLett.107.190401,PhysRevLett.108.210401},
offer a rich environment \cite{Baranov}.
For example, quantum fluctuations in dipolar condensates lead to a peculiar
Lee-Huang-Yang equation of state \cite{Ralf,Pelster}, and the associated behavior of the thermodynamic
pressure can lead to droplet stabilization, as observed in \cite{Ferrier},
and to supersolid behavior \cite{Hertkorn}; see \cite{Pfau} for a review. Droplet
stabilization becomes particularly intricate in the case of quasi-one-dimensional (quasi-1D)
dipolar gases; cf., e.g., Refs.~\cite{Sinha,Santos,Edmonds}.
Among the signatures of the dipolar interaction, of particular importance for our analysis is the existence of rotonic excitations \cite{Santos2003},
which are caused by the anisotropy of the interaction, being partly positive and partly negative.
Roton modes, in particular, occur when the dipolar interaction dominates
at high enough densities of the atoms or molecules.
They play a pivotal role in the description of the dynamical instability emerging in the dimensional crossover from
dynamically stable quasi-1D \cite{Giovanazzi2004} or quasi-2D \cite{Fischer2006} condensates
to 3D dipole-dominated BECs, which are always dynamically unstable.
We focus in what follows on quasi-1D trapping.
The appearance of a roton minimum in the dimensional crossover
signals a marked departure of the standard Bogoliubov dispersion relation from its contact interaction form,
which is what is obtained in a local field theory. Together with the associated maxon maximum, it corresponds to
a {\it non-monotonic} dispersion.
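For orientation, recall the generic Bogoliubov dispersion with a momentum-dependent coupling (our notation; the precise quasi-1D form depends on the transverse trapping):

```latex
$$E(k)=\sqrt{\epsilon_k\bigl[\epsilon_k+2n\,\tilde U(k)\bigr]},\qquad
\epsilon_k=\frac{\hbar^2k^2}{2m},$$
```

where $n$ is the condensate density and $\tilde U(k)$ is the Fourier transform of the two-body interaction. For contact interactions $\tilde U(k)$ is constant and $E(k)$ is monotonic, whereas a dipolar $\tilde U(k)$ that decreases at intermediate $k$ can produce the maxon-maximum/roton-minimum structure.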
Indeed, examples of intriguing effects related to rotonic excitations include the enhancement of many-body entanglement \cite{Fischer2018}, of density oscillations \cite{Wilson}, and the occurrence of roton confinement \cite{Jona}. On the experimental
side, rotons in elongated BECs have been observed, e.g., in \cite{Chomaz,Petter,HertkornRoton}.
We are, however, not aware
of a solution of the Bogoliubov--de Gennes (BdG) equation describing quasiparticle propagation in dipolar
Bose-Einstein condensates in an inhomogeneous setup, e.g., one presenting an interface between regions of distinct quasiparticle spectra.
Finding such solutions is key to providing a general answer to the apparently basic
question posed at the beginning of this Introduction, and we provide below
an {\it ab initio} answer, in which we put the inhomogeneous dipolar BdG equation of a quasi-1D gas
into the form of a singular integral
equation, and solve this equation.
To the best of our knowledge, this singular integral equation is {\it novel}, in that it provides an extension of the well-known Cauchy-type singular kernels \cite{Beyrami}. Specifically, the integral kernel we obtain is a combination of Cauchy-type kernels almost everywhere, with the exception of two {\em isolated} points where the singularity is stronger and the kernel becomes hypersingular.
Our case is however different from established textbook examples of hypersingular kernels \cite{Lifanov},
where the set of singular points has nonzero measure.
We also provide a discretized version of the singular integral equation
for the inhomogeneous dipolar BdG equation, and demonstrate its excellent performance for already a moderate number of discretization steps.
To give some intuition why the solution of this problem is nontrivial, note that
within the instantaneous approximation for the dipolar interactions, a signal sent towards the barrier will
interact with it before and after the signal has reached it. This is in striking contrast to the standard contact interaction
case, where the signal
interacts with the barrier only locally.
Therefore, we expect nontrivial scattering phenomena to emerge. As we will show, these nontrivial phenomena are even more pronounced when roton
excitations are involved due to the then increased number of the types of elementary excitations present in the system.
Indeed, to assume an inhomogeneous configuration (e.g., a gas containing a sound barrier), as we will show, greatly increases the mathematical complexity of the perturbations in such systems, which in general forbids a fully analytical treatment.
Below, we reveal in detail how quasiparticles propagate in systems containing a sound barrier.
Our results represent a major step for constructing a complete nonlocal field theory of dipolar BECs.
The particular model we study, which encapsulates all required features, is a
trapped BEC at rest with aligned magnetic or electric
dipoles, containing a sound barrier constructed by locally tuning the contact interaction between its particles, which then separates the system into two regions with distinct sound velocities. Our goal is to make solutions to the nonlocal dipolar BdG equations as analytically amenable as possible.
We show how the solutions we find can be used to build the S-matrix,
which in the context of wave scattering comprises the reflection and transmission coefficients in such a way
that unitarity is manifest.
We shall see that when the dipolar interactions are present and the roton minimum exists, the increased number of the types of
elementary excitations present in the system
implies that the dimension of this matrix is larger than the $2\times2$ S-matrix for the case of contact-only interactions.
We shall discuss two methods of solving the BdG equation, one based on an approximate model, and the second one given in terms of special functions solutions to the novel class of singular integral equations we put forth. The method based on approximating the model has the advantage of allowing for analytic solutions, while the singular integral equation is treated numerically. Our results show that whenever the barrier exists, the dipolar interactions give rise to a continuum of evanescent channels bound to the barrier, which potentially play a role in near-boundary physics like recently explored in the context of the
analogue gravity of sonic black holes \cite{Curtis}.
Furthermore, novel characteristic features include a decrease in the barrier's transmittance when the roton minimum is about to form, and the barrier's complete transmittance/reflectance for particular signals when the roton minimum exists, even
in the limit of ``weak'' barriers. These findings represent a remarkable departure from the homogeneous (barrier-free) system: whereas one may naively expect sound propagation to depend continuously on barrier height, complete transmittance/reflectance is observed near the roton and maxon frequencies even for vanishing barriers, in marked disagreement with the continuous dependence obtained in contact-interaction condensates.
Previous studies have dealt with phonon scattering and the associated S-matrix for contact interactions,
e.g. in the context of acoustic Hawking radiation \cite{PhysRevA.80.043603}. Yet,
to the best of our knowledge, we present the first complete {\it ab initio} nonlocal field theory of quasiparticle scattering
at an inhomogeneity in the presence of a Bose-Einstein condensate on top of which
the quasiparticles reside.
We reveal, in particular, the impact of the anisotropy of interactions and the existence of a
roton minimum on the scattering matrix. While a recent study
explored the scattering properties of quasiparticles in polar dielectrics \cite{Simone}, our results are more general, do not assume
any a priori knowledge of the boundary conditions imposed by the dipolar metamaterial geometry and constitution,
and incorporate, in distinction to \cite{Simone}, the existence of a condensate.
For a dipolar BEC
with a stepwise discontinuous contact interaction,
we derive the structure of the S-matrix from first principles.
Therefore, our study has, as a further application, the potential to describe metamaterials built from dipolar BECs,
by establishing a clear recipe of how to predict scattering phenomena
in such systems. It thus paves the way towards a plethora of applications obtained by generalizations of our model. For instance, our results can readily be applied to an inhomogeneous extension of the recent experiment reported in \cite{Petter2021}. In that work, the crossover of a dipolar condensate to the supersolid and isolated-droplet regimes was induced by tuning the contact interaction and studied using Bragg scattering of high-energy excitations. By tuning the contact interaction locally, our model predicts the system response at any energy scale.
\section{Dipolar interactions in quasi-1D condensates}
\label{secgtilde}
\subsection{Interaction kernel after dimensional reduction}
We start with an elongated dipolar condensate with its dipoles oriented along a given
direction $\bm{d}$ ($|\bm{d}|=1$), such that its particles interact via the long-range instantaneous interaction energy
\begin{equation}
H_{\rm d}=\frac{C_{\rm dd}}{8\pi}\int\mathrm{d}^3x\mathrm{d}^3x'|\Phi(t,\bm{x})|^2U_{\rm d}(\bm{x}-\bm{x}')|\Phi(t,\bm{x}')|^2,\label{nlenergy}
\end{equation}
in terms of the order parameter $\Phi$ and dipolar interaction strength $C_{\rm dd}$ \footnote{$C_{\rm dd}=\mu_0 d_{\rm m}^2$
for magnetic and $C_{\rm dd}=d_{\rm e}^2/\epsilon_0$ for electric dipoles, with dipole moments $d_{\rm m}$ and $d_{\rm e}$, and where
$\mu_0$ and $\epsilon_0$ are permeability and permittivity of the vacuum, respectively}.
The interaction kernel $U_{\rm d}$ is given by
\begin{equation}
U_{\rm d}(\bm{x})=\frac{\bm{x}^2-3(\bm{x}\cdot\bm{d})^2}{|\bm{x}|^5}.
\end{equation}
Let us assume the system is subjected
to a strong radially symmetric trapping potential in such a way that the order parameter separation ansatz
$\Phi(t,\bm{x})=\phi_{\bot}(|\bm{x}_{\bot}|)\phi(t,x)$ holds, where
$\phi_{\bot}$ is normalized as $\int\mathrm{d}^2x_{\bot}|\phi_{\bot}|^2=1$,
assuming the geometry presented in Fig.~\ref{condensate}.
\begin{figure}[t]
\includegraphics[scale=0.9]{condensatev1.pdf}
\caption{Schematics of the elongated dipolar condensate under consideration. The system symmetry axis is here taken to be the $x$ axis, and the dipoles are oriented by an external field along the direction $\bm{d}$, which defines the angle $\theta$ as shown.}
\label{condensate}
\end{figure}
For the particular case of dipolar interactions, {\it under the assumed radially symmetric trapping}, it was shown \cite{Shinn}
that the only contribution from the interaction kernel
in Eq.~\eqref{nlenergy}
is given by the Fourier transform
\begin{equation}
U_{\rm d}(\bm{x})=\frac{4\pi}{3(2\pi)^3}\left(1-\frac{3}{2}\sin^2\theta\right)\int\mathrm{d}^3k\mbox{e}^{i\bm{k}\cdot\bm{x}}\left(\frac{3k_{x}^2}{\bm{k}^2}-1\right),\label{3Dker}
\end{equation}
where $\theta$ is the angle between $\bm{d}$ and the $x$ axis; see Fig.~\ref{condensate} for an illustration.
It then follows from the order parameter separation ansatz that ($\Delta x = x-x'$)
\begin{align}
H_{\rm d}=\frac{g_{\rm d}}{2}\Big[\int \mathrm{d} x |\phi|^4-3\int\mathrm{d} x\mathrm{d} x'|\phi(x)|^2G(\Delta x)|\phi(x')|^2\Big],\label{nlenergy2}
\end{align}
where $G$ is defined via its Fourier transform $\tilde{G}$ as \footnote{We note here that Ref.~\cite{Shinn} derives an exact expression for the quasi-1D dipolar interaction kernel in real space (for harmonic transverse trapping),
whereas \cite{Giovanazzi2004} presents an approximation.}
\begin{eqnarray}
\tilde{G}(\ell_{\bot}k_x)=\frac{\ell_{\bot}^2k_x^2}{2}\int_{-\infty}^{\infty}\mathrm{d} k_{\bot}\frac{\mbox{sgn}(k_{\bot})\mathfrak{X}(\ell_{\bot}^2k_{\bot}^2)}{k_{\bot}+ik_x},\nonumber\\
\mbox{where} \qquad \mathfrak{X}(\ell_{\bot}^2\bm{k}_{\bot}^2)\coloneqq \frac{|\tilde{n}_{\bot}(\bm{k}_{\bot})|^2}{2\pi\ell_{\bot}^2\int\mathrm{d}^2x_{\bot}|\phi_{\bot}|^4}, \label{gfourier}
\end{eqnarray}
and with
$\tilde{n}_{\bot}(\bm{k}_{\bot})=\int\mathrm{d}^2x_{\bot}\exp(-i\bm{k}_{\bot}\cdot\bm{x}_{\bot})|\phi_{\bot}(|\bm{x}_{\bot}|)|^2$. Here, $\ell_{\bot}$ denotes the typical length scale of the transverse trapping. Moreover, we have set as the effective quasi-1D dipole coupling
\begin{equation}
g_{\rm d}=g_{\rm d}(\theta,\ell_\perp) =-\frac{C_{\rm dd}}{3}\left(1-\frac{3}{2}\sin^2\theta\right)\int\mathrm{d}^2x_{\bot}|\phi_{\bot}|^4.
\label{gddef}
\end{equation}
We note at this point that $g_{\rm d}>0$ is required for the system to be stable in the thermodynamic limit and for vanishing
contact interaction.
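Since $1-\tfrac{3}{2}\sin^2\theta=\tfrac{1}{2}(3\cos^2\theta-1)$, the condition $g_{\rm d}>0$ amounts to tilting the dipoles beyond the magic angle $\theta_{\rm m}=\arccos(1/\sqrt{3})\approx54.7^\circ$. A minimal numerical sketch of Eq.~\eqref{gddef}, assuming the Gaussian transverse profile introduced below (for which $\int\mathrm{d}^2x_\perp|\phi_\perp|^4=1/(2\pi\ell_\perp^2)$) and setting $C_{\rm dd}=\ell_\perp=1$ purely for illustration:

```python
import math

def g_d(theta, C_dd=1.0, ell_perp=1.0):
    """Effective quasi-1D dipolar coupling of Eq. (gddef), assuming the
    Gaussian transverse profile: int d^2x |phi_perp|^4 = 1/(2 pi ell^2)."""
    profile = 1.0 / (2.0 * math.pi * ell_perp ** 2)
    return -(C_dd / 3.0) * (1.0 - 1.5 * math.sin(theta) ** 2) * profile

theta_magic = math.acos(1.0 / math.sqrt(3.0))  # ~54.74 degrees

# g_d changes sign at the magic angle: negative below (unstable for g_c = 0),
# positive above (the regime studied in the text).
assert g_d(0.0) < 0
assert abs(g_d(theta_magic)) < 1e-12
assert g_d(math.pi / 2) > 0
```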
For the (commonly employed) particular case of a strong harmonic trapping, one has the Gaussian approximation $|\phi_{\bot}(|\bm{x}_{\bot}|)|^2=\exp(-\bm{x}_{\bot}^2/\ell_{\bot}^2)/(\pi\ell_{\bot}^2)$, where
$\ell_{\bot}$ is then the harmonic oscillator length.
For harmonic trapping, $\tilde{G}(\eta)=
(\eta^2/2)\exp(\eta^2/2)E_{1}(\eta^2/2)$, $E_{1}$ being the first exponential integral function \cite{Shinn}.
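As a sanity check of this closed form, the following sketch (pure Python; the grid sizes are arbitrary illustrative choices) evaluates $\tilde{G}$ both from the convergent series for $E_1$ and by folding the $k_\bot$ integral of Eq.~\eqref{gfourier} for the Gaussian profile into $\tilde{G}(\eta)=\eta^2\int_0^\infty\mathrm{d} q\, q\,e^{-q^2/2}/(q^2+\eta^2)$ and applying quadrature:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def E1(x):
    """Exponential integral E1 from its (everywhere convergent) series
    E1(x) = -gamma - ln x - sum_{n>=1} (-x)^n / (n * n!), for x > 0."""
    s, term = 0.0, 1.0
    for n in range(1, 120):
        term *= -x / n          # term now equals (-x)^n / n!
        s += term / n
    return -GAMMA - math.log(x) - s

def G_closed(eta):
    """Closed form G(eta) = (eta^2/2) exp(eta^2/2) E1(eta^2/2)."""
    if eta == 0.0:
        return 0.0
    x = 0.5 * eta * eta
    return x * math.exp(x) * E1(x)

def G_quad(eta, qmax=12.0, n=100_000):
    """Quadrature of the folded kernel integral for the Gaussian profile:
    G(eta) = eta^2 * int_0^inf dq q exp(-q^2/2) / (q^2 + eta^2)."""
    h = qmax / n
    total = 0.0
    for i in range(1, n):       # trapezoid rule; both endpoints vanish
        q = i * h
        total += q * math.exp(-0.5 * q * q) / (q * q + eta * eta)
    return eta * eta * h * total

for eta in (0.5, 1.0, 2.0):
    assert abs(G_closed(eta) - G_quad(eta)) < 1e-4

# G rises monotonically from 0 toward its asymptote 1
assert 0.0 < G_closed(0.5) < G_closed(1.0) < G_closed(2.0) < 1.0
```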
In our work one particular property of this function plays an important role: it has a discontinuity branch on the imaginary axis of the complex $k$ plane, which (for the Gaussian transverse profile)
comes from the function $E_{1}$ \cite{Gradshteyn2007}. This feature increases the mathematical complexity of the condensate perturbations when some form of sound barrier exists, in comparison to the case of contact-only interactions. Before we proceed to the model, let us therefore pinpoint the origin of this discontinuity and how it is related to the reduction to the quasi-1D regime. It is, in particular, not a feature of the transverse harmonic trapping per se, but occurs generically for any radial trapping, e.g., also for cylindrical box traps.
\subsection{Analyticity of the kernel}
Assuming an analytic interaction kernel (in Fourier space) is a
simplifying hypothesis in nonlocal field theories \cite{Unruh2005}, which leads us to question whether this property is fulfilled by
our $\tilde{G}$.
For the particular case of a dipolar interaction, inspection of Eq.~\eqref{gfourier} reveals the analytical structure of $\tilde{G}$ in the complex plane.
It has a discontinuity branch on the imaginary axis as can be seen from the application of the Sokhotski-Plemelj identity $1/(q\pm i\epsilon)=1/q\mp i\pi\delta(q)$ \cite{Galapon} as $k_x$ approaches the imaginary axis. Indeed, if $iq$ for real $q$ is any point on the imaginary axis, then straightforward manipulations lead to the jump magnitude measured by $\Delta \tilde{G}(iq)\coloneqq \lim_{\epsilon\rightarrow0}[\tilde{G}(iq+\epsilon)-\tilde{G}(iq-\epsilon)]$, which reads
\begin{equation}
\Delta \tilde{G}(iq)=i\pi q^2\mbox{sgn}(q)\mathfrak{X}(q^2).\label{disc}
\end{equation}
The above equation highlights the advantage
of writing the interaction kernel in the integral form of Eq.~\eqref{gfourier}, as it shows that the branch of $\tilde{G}$ exists for any shape of radial trapping, which in turn determines the discontinuity branch jump through the form factor $\mathfrak{X}$ in Eq.~\eqref{disc}. For the Gaussian profile, the latter reads $\mathfrak{X}(q^2)=\exp(-q^2/2)$, while for a cylindrical box trap, for which
$|\phi_{\bot}(|\bm{x}_{\bot}|)|^2=1/(\pi\ell_\bot^2)$ for $|\bm{x}_{\bot}|<\ell_\bot$ and zero otherwise, we find $\mathfrak{X}(q^2)=2J_1^2(|q|)/|q|^2$, where $J_1$ is a Bessel function \cite{Gradshteyn2007}.
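The jump formula can be checked directly: evaluating the harmonic-trap closed form $\tilde{G}(\eta)=(\eta^2/2)e^{\eta^2/2}E_1(\eta^2/2)$ just to the left and right of the imaginary axis reproduces $i\pi q^2\,\mbox{sgn}(q)\,e^{-q^2/2}$, since the principal branch of the complex logarithm inside $E_1$ carries the cut. A sketch, with $q=1$ and the offset $\epsilon=10^{-7}$ chosen for illustration:

```python
import cmath, math

GAMMA = 0.5772156649015329

def E1(z):
    """Series E1(z) = -gamma - log z - sum_{n>=1} (-z)^n / (n * n!); the
    principal branch of the complex log places the cut of E1 on the
    negative real axis, i.e. the cut of G on the imaginary k axis."""
    s, term = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(1, 120):
        term *= -z / n
        s += term / n
    return -GAMMA - cmath.log(z) - s

def G(k):
    """Harmonic-trap closed form continued to complex k."""
    x = 0.5 * k * k
    return x * cmath.exp(x) * E1(x)

q, eps = 1.0, 1e-7
jump = G(1j * q + eps) - G(1j * q - eps)                   # Delta G(iq)
predicted = 1j * math.pi * q ** 2 * math.exp(-q ** 2 / 2)  # Eq. (disc), Gaussian form factor

assert abs(jump - predicted) < 1e-4
```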
By tracing back from Eq.~\eqref{gfourier} to Eq.~\eqref{3Dker}, we see that the existence of this discontinuity branch comes from the poles
of the dipolar interaction in Fourier space, and Eq.~\eqref{disc} reflects the fact that different radial wavevectors add up to form the quasi-1D system. This is manifest in Eq.~\eqref{3Dker}, where the pole at $\bm{k}^2= k_{x}^2+\bm{k}_{\bot}^2=0$ (in Fourier space) is evident.
This gives rise to two first order poles at $k_{x}=\pm i k_{\bot}$ (manifest in Eq.~\eqref{gfourier}), which upon integration produces the discontinuity branch.
In conclusion, both the dipolar interaction and the dimensional reduction combine to give rise to the
discontinuity branch.
The relevance of Eq.~\eqref{disc}
to our model is that it greatly modifies the structure of the perturbations in the system when a sound barrier exists. Moreover, while we are ultimately interested in the case where the kernel describes dipolar interactions, the same conclusions also hold for a gas of charged bosons (e.g., a Cooper pair gas in a superconductor) whose pairwise interaction follows
the Coulomb law. The latter, however, being isotropic, does not give rise to a roton minimum.
Finally, the particular details of the trapping mechanism enter the analysis only through $|\phi_{\bot}(|\bm{x}_{\bot}|)|^2$, which we assume henceforth to be given by the Gaussian approximation.
We shall also omit the subscript $x$ from the momentum $k_x$ along the weakly confining direction
and denote it as $k$ in what follows.
\section{Quasiparticles in the presence of a sound barrier}
\subsection{Formulation of the Bogoliubov de Gennes problem}
We consider a background condensate of homogeneous density, $\phi=\sqrt{n}\exp(-i\mu t)$ ($\hbar=1$), with particle density $n$ and chemical potential $\mu$. In order to model a sound barrier for the phonons in this system, we also allow the particles to interact via the Feshbach-tunable contact term $H_{\rm c}=g_{\rm c}\int \mathrm{d} x |\phi|^4/2$, for an almost everywhere constant $g_{\rm c}$ with a steplike discontinuity at $x=0$. Then, as the local sound velocity is defined as $c=\sqrt{n(g_{\rm c}+g_{\rm d})}$
(setting the mass of the dipolar atoms or molecules
$m=1$), we see that this setup corresponds to a system in which the sound velocity has a sudden jump --- the sound barrier --- at $x=0$. Bearing in mind that this simplified physical system already requires a complex mathematical treatment, we shall assume at once that the region $x<0$ is dipole-dipole dominated, $g_{\rm c}/g_{\rm d}\sim0$, while for $x>0$ we have $g_{\rm c}/g_{\rm d}>0$. Moreover, we shall assume for the sake of simplicity that in the region $x>0$, $g_{\rm c}$ prevents the formation of rotonic excitations. The discussion that follows can be easily extended to include the case in which rotonic excitations exist on both sides of the barrier
and also to inhomogeneous condensates in which the sound barrier is modeled by a variable density instead of a variable contact coupling $g_{\rm c}$.
Small disturbances in a stationary condensate are modeled by the Bogoliubov expansion $\phi=\exp(-i\mu t)(\sqrt{n}+\psi)$, where $|\psi|^2\ll n$, and $\psi$ is a solution of the Bogoliubov-de Gennes (BdG) equation
\begin{align}
i\partial_{t}\psi=&-\frac{\partial^2_{x}}2\psi+n(g_{\rm c}+g_{\rm d})(\psi+\psi^{*})
-3 ng_{\rm d}
G*(\psi+\psi^{*}),\label{bogo}
\end{align}
where $(G*\psi)(x)=\int\mathrm{d} x'G(\Delta x)\psi(x')$ denotes the convolution.
We scale from now on lengths with $\xi_{\rm d}=\sqrt{1/ng_{\rm d}}$, wavevectors
with $1/\xi_{\rm d}$, and frequencies with $1/\xi_{\rm d}^2$, assuming thereby that $g_{\rm d}$ is always rendered finite and positive.
When thus fixing the scale $\xi_{\rm d}$, it should be kept in mind that $g_{\rm d}$ depends on both the dipole orientation angle $\theta$ and the transverse trapping scale $\ell_\bot$ via Eq.~\eqref{gddef}.
Our goal in this work is to study the solutions of Eq.~\eqref{bogo}. They
are more easily found in terms of the Nambu field $\Psi=(\psi,\psi^*)^{\rm t}$, as demonstrated in detail for our type
of system in \cite{Curtis}. Note that because of stationarity, the field modes still assume the general form $\Psi(t,x)=\exp(-i\omega t)\Psi_{\omega}(x)$, and $\Psi_{\omega}$ satisfies
\begin{align}
\omega \sigma_3\Psi_{\omega}=\left[-\frac{\partial^2_{x}}2+\left(1+\frac{g_{\rm c}}{g_{\rm d}}\right)\sigma_{4}\right]\Psi_{\omega}-3\sigma_{4}G*\Psi_{\omega},\label{bogonambu}
\end{align}
where $\sigma_{i}$, $i=1,2,3$, are the usual Pauli matrices, and $\sigma_{4}=\mathbb{1}+\sigma_1$. As usual, if $\Psi_{\omega}$ is a solution of Eq.~\eqref{bogonambu} with frequency $\omega$, then $\sigma_1\Psi_{\omega}^*$ is a solution with $-\omega^*$. Accordingly, we may focus on the field modes with $\omega>0$.
However, the analytical properties of $\tilde{G}$ prevent $\Psi_{\omega}$ from being a finite combination of exponential functions when $g_{\rm c}$ is discontinuous (see for details Appendix \ref{analytic}).
Far from the interface the solutions simplify to (combinations of) plane waves of the form $\Phi_{k}\exp(ikx)$
for constant $\Phi_{k}$, where
\begin{equation}
\left\{\omega\sigma_{3}-\frac{k^2}{2}-\left[1+\frac{g_{\rm c}}{g_{\rm d}}-3\tilde{G}(\beta k)\right]\sigma_4\right\}\Phi_{k}=0.\label{eqphi}
\end{equation}
Here, the dimensionless parameter
$\beta=\ell_{\bot}/\xi_{\rm d}=\sqrt{\ell_{\bot}^2ng_{\rm d}(\theta,\ell_{\bot})}$ (reinstating here $\xi_d$ for clarity)
measures the extent to which we are in the quasi-1D regime (in the dipole-dominated case).
The proper quasi-1D limit, with all perpendicular motion frozen out, is achieved when $\beta \rightarrow 0$.
The corresponding Bogoliubov dispersion relation is conveniently written as $f_{\omega}(k)=0$, where
\begin{equation}
f_{\omega}(k)=\omega^2-k^2\left[1+\frac{g_{\rm c}}{g_{\rm d}}-3\tilde{G}(\beta k)+\frac{k^2}{4}\right]=0,\label{dis}
\end{equation}
as we show in Appendices \ref{analytic} and \ref{exact}.
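For the dipole-dominated side ($g_{\rm c}=0$) and the harmonic-trap kernel, the propagating branch of Eq.~\eqref{dis} can be scanned numerically. The sketch below (with the illustrative value $\beta=0.5$, not one quoted in the text) confirms a dynamically stable spectrum with the phonon limit $\omega/k\rightarrow c=1$ in our units:

```python
import math

GAMMA = 0.5772156649015329

def E1(x):
    # convergent series E1(x) = -gamma - ln x - sum_{n>=1} (-x)^n / (n * n!)
    s, term = 0.0, 1.0
    for n in range(1, 120):
        term *= -x / n
        s += term / n
    return -GAMMA - math.log(x) - s

def G(eta):
    """Harmonic-trap kernel in Fourier space, G(eta) = (eta^2/2) e^{eta^2/2} E1(eta^2/2)."""
    if eta == 0.0:
        return 0.0
    x = 0.5 * eta * eta
    return x * math.exp(x) * E1(x)

def omega(k, beta, gc_over_gd=0.0):
    """Positive root of f_omega(k) = 0, Eq. (dis):
    omega^2 = k^2 [1 + g_c/g_d - 3 G(beta k) + k^2/4]."""
    return math.sqrt(k * k * (1.0 + gc_over_gd - 3.0 * G(beta * k) + 0.25 * k * k))

beta = 0.5                            # illustrative quasi-1D parameter
ks = [0.01 * i for i in range(1, 501)]
ws = [omega(k, beta) for k in ks]

assert all(w > 0.0 for w in ws)            # dynamically stable spectrum
assert abs(ws[0] / ks[0] - 1.0) < 0.05     # phonon limit: omega ~ c k with c = 1
```

For larger $\beta$ the dip produced by $-3\tilde{G}(\beta k)$ deepens, which is the mechanism behind roton formation.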
We shall present below two routes
for obtaining the solutions of Eq.~\eqref{bogonambu}.
The solution of this equation is presented in subsection \ref{subsecC},
and a model approximation, in which we discretize the integral of Eq.~\eqref{gfourier}, is put forth in subsection \ref{subsecD}.
\subsection{Classification of field modes}
We can enumerate all the possible field modes as presented in Fig.~\ref{figmain2}: each plane wave propagating towards the barrier corresponds to a field mode.
\begin{figure}[t]
\includegraphics[scale=0.535]{figmain2.pdf}
\vspace*{-1em}
\includegraphics[scale=0.9]{figmain4.pdf}
\caption{Upper panel: Bogoliubov dispersion relation. Left: solutions in the region $x\ll0$, characterized by a fully dipole dominated interaction; $\Omega^{\rm (r)}$ and $\Omega^{\rm (m)}$
are the roton and maxon frequencies, respectively. Right: solutions for the region $x\gg0$, where a contribution from contact interaction is present.
Lower panel: Asymptotics of all possible field modes for the system under study. Zigzagged (respectively~curved) lines represent propagating (respectively~evanescent) channels. We note that $k_{\rm in}$ in the right panel represents the three possible choices $k_{\rm in1}$, $k_{\rm in2}$, and $k_{\rm in3}$. Also, arrows pointing to the right (respectively left) have $V_{\rm g}>0$ (respectively $V_{\rm g}<0$), where $V_{\rm g}=\mathrm{d}\omega/\mathrm{d} k$ is the group velocity. Lower panel top part (respectively~bottom part): schematics of the modes initiating at $x<0$ (respectively~$x>0$).}
\label{figmain2}
\end{figure}
From Fig.~\ref{figmain2}, we see the characteristic roton minimum formation in the dispersion relation caused by the dipole interactions \cite{Santos2003,Giovanazzi2004,Fischer2006,Seok,Shinn}, which singles out the spectrum subset defined by $\Omega^{\rm (r)}<\omega<\Omega^{\rm (m)}$.
For the sake of organization, we shall denote by $k$ (respectively~$p$) the wavevector solutions at $x<0$ (respectively~$x>0$). Note that, in contrast to the contact-only interaction case, in each region the rotonic
dispersion relation always admits 6 possible wavevector solutions, and for the propagating
ones, the sign of the group velocity (the slope of the graph) indicates whether the solution represents plane waves traveling towards or away from the interface at $x=0$. Accordingly, if $\omega\notin (\Omega^{\rm (r)},\Omega^{\rm (m)})$, we have only one solution at $x\ll0$ (respectively $x\gg0$) propagating towards the interface, denoted by $k_{\rm in}$ (respectively $p_{\rm in}$), and one solution propagating away from it, $k_{1}$ (respectively $p_{1}$). Moreover, we also find at each side of the boundary evanescent channels,
i.e., channels that are exponentially suppressed far from the barrier,
denoted by $k_2$, $k_3$, $p_2$, and $p_3$. These channels have $\mbox{Im}\ k_i<0$ and $\mbox{Im}\ p_i>0$, $i=2,3$.
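For comparison, in the contact-only limit ($\tilde{G}\rightarrow0$, with $c^2=1+g_{\rm c}/g_{\rm d}$ in our units) the dispersion relation \eqref{dis} reduces to a quartic in $k$, $\omega^2=k^2(c^2+k^2/4)$, yielding exactly the four channels described above: two propagating ($\pm k$) and two evanescent ($\pm i\kappa$). A sketch, with $\omega=0.4$ and $c^2=1$ chosen for illustration:

```python
import math

def contact_channels(omega, c2):
    """Channels of the contact-only quartic omega^2 = k^2 (c^2 + k^2/4):
    k^2 = 2(-c^2 +/- sqrt(c^4 + omega^2)) gives one positive root (two
    propagating channels +/-k) and one negative root (two evanescent
    channels +/-i kappa)."""
    root = math.sqrt(c2 * c2 + omega * omega)
    k = math.sqrt(2.0 * (root - c2))      # propagating wavevector
    kappa = math.sqrt(2.0 * (root + c2))  # evanescent decay rate
    return k, kappa

k, kappa = contact_channels(omega=0.4, c2=1.0)

# both channel types satisfy the dispersion relation f_omega(k) = 0
for kk in (k, 1j * kappa):
    assert abs(0.4 ** 2 - kk ** 2 * (1.0 + kk ** 2 / 4.0)) < 1e-12
```

The two extra wavevector solutions per region in the dipolar case are precisely the contribution of $\tilde{G}$.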
The complementary case --- $\omega\in(\Omega^{\rm (r)},\Omega^{\rm (m)})$ --- is the physically richer one. We notice from Fig.~\ref{figmain2} that all six solutions on the left-hand side of the barrier represent propagating waves, with three of them, $k_{\rm in1}<k_{\rm in2}<k_{\rm in3}$, propagating towards the barrier. We shall label the channels propagating away from the barrier as $k_{1}<k_{2}<k_{3}$. To summarize the above discussion, we depict in the lower panel of Fig.~\ref{figmain2} the schematics of all possible field modes in our model.
The distinct behavior for dipolar interactions when a roton minimum is present comes from the shaded region
on the left-hand side of the top panel of Fig.~\ref{figmain2}, where three modes
$k_{\rm in1}$, $k_{\rm in2}$, and $k_{\rm in3}$ in the band between
$\Omega^{\rm (r)}$ and $\Omega^{\rm (m)}$, the roton and maxon frequencies, respectively, can
propagate towards the barrier (cf.~the lower left panel). This is in marked distinction to the contact-dominated side ($x>0$) of the barrier.
\subsection{Singular integral equation for the dipolar Bogoliubov de Gennes problem}
\label{subsecC}
The details of how to solve Eq.~\eqref{bogonambu} are presented thoroughly in Appendix \ref{exact}, and here we synthesize the main points. Following the asymptotic behavior just presented, we know that the field modes are labeled by each incoming signal ($k_{\rm in}$ or $p_{\rm in}$), and far from the barrier they reduce to linear combinations of the channels shown in Fig.~\ref{figmain2} lower panel. Our strategy here can be understood in terms of the latter channels
as follows. For local, contact-only condensates, field modes are built by combining the plane waves given by the Bogoliubov dispersion relation, i.e., local solutions to the BdG equation, and imposing matching conditions at the barrier. This procedure fails in the dipolar case, as the plane waves from the dispersion relation are not local solutions to the BdG equation. Accordingly, we shall develop a procedure to build, from the plane waves of Fig.~\ref{figmain2}, local solutions which can then be combined in the same fashion as for contact-interaction
condensates. This ``completion'' procedure, performed via the BdG equation, gives rise to singular integral equations and associated special functions, but has the benefit of expressing the field modes in a manner that leaves manifest their asymptotic properties while allowing for an easier numerical treatment than the direct solution of the BdG equation.
Because each field mode has a continuum of evanescent channels as imposed by the convolution in Eq.~\eqref{bogonambu} (Appendix \ref{analytic}), solutions can be found with the aid of the ansatz
\begin{align}
\Psi_{\omega}=\sum_{k}S_{k}\zeta_{k}(x)\Phi_{k}+\sum_{p}S_{p}\zeta_{p}(x)\Phi_{p},
\label{gensolexact}
\end{align}
where the $k$'s, $p$'s, $\Phi_{k}$, and $\Phi_{p}$ are given by Eqs.~\eqref{eqphi} and \eqref{dis}. The quantities $\zeta_{k}$ and $\zeta_{p}$ are matrix-valued functions, given by
\begin{align}
\zeta_{k}(x)=\left\{
\begin{array}{c}
i\int_{0}^{\infty}\mathrm{d} q\Lambda_{k,q}e^{-qx}\Pi(q),\ x>0,\\
e^{ikx}-i\int_{-\infty}^{0}\mathrm{d} q\Lambda_{k,q}e^{-qx}\Pi(q),\ x<0,
\end{array}\right.\label{zetak}
\end{align}
and
\begin{align}
\zeta_{p}(x)=\left\{
\begin{array}{c}
-e^{ipx}+i\int_{0}^{\infty}\mathrm{d} q\Lambda_{p,q}e^{-qx}\Pi(q),\ x>0,\\
-i\int_{-\infty}^{0}\mathrm{d} q\Lambda_{p,q}e^{-qx}\Pi(q),\ x<0.
\end{array}\right.\label{zetap}
\end{align}
with $\Pi(q)=\left(q^2/2-\omega\sigma_3\right)\sigma_4/[(q-q_{-})(q-q_{+})]$. Furthermore, the functions $\Lambda_{k,q}$ and $\Lambda_{p,q}$ are solutions of the novel singular integral equation
\begin{widetext}
\begin{align}
&\frac{h(q)\Lambda_{k,q}}{(q-q_{-})(q-q_{+})}
+\frac{3i\Delta\tilde{G}(i\beta q)}{2\pi(q_{-}-q_{+})}\Bigg[\frac{1}{q-q_{-}}\int_{-\infty}^{\infty}\mathrm{d} q'q'^{2}\Lambda_{k,q'}\left(\frac{1}{q_{-}-q'}-\frac{1}{q-q'}\right)
- \{q_- \leftrightarrow q_+\}
\Bigg]
=-\frac{3i\Delta\tilde{G}(i\beta q)}{2\pi(i q-k)},\label{inteq}
\end{align}
\end{widetext}
where the integrals are Cauchy principal values, $\Delta\tilde{G}(iq)$ was defined in Eq.~\eqref{disc}, and the function $h(q)$ is defined by
\begin{equation}
h(q)=\frac{q^4}{4}-\omega^2-q^2\left[1+\frac{g_{\rm c}(q)}{g_{\rm d}}-3\overline{G}(i\beta q)\right],\label{hfunction}
\end{equation}
with $\overline{G}(iq)=\lim_{\epsilon\rightarrow0^+}[\tilde{G}(iq+\epsilon)+\tilde{G}(iq-\epsilon)]/2$, i.e., the average of $\tilde{G}$ along the discontinuity branch. Finally, the real parameters $q_{-}<0<q_{+}$ are the two simple zeros of $h$: $h(q_{\pm})=0$.
We stress that the ansatz \eqref{gensolexact} was constructed in such a way that each $\zeta_{k}(x)\Phi_{k}$ and $\zeta_{p}(x)\Phi_{p}$ is a local solution to the BdG equation, i.e., they are built to satisfy Eq.~\eqref{bogonambu} at all points except at the barrier ($x=0$). Thus, the continuum of evanescent channels in Eqs.~\eqref{zetak} and \eqref{zetap} gives a succinct representation of the fact that the Bogoliubov channels of Fig.~\ref{figmain2} fail to be local solutions. Moreover, a few features of the ansatz \eqref{gensolexact} are revealed by direct inspection of Eq.~\eqref{inteq}. In general, for analytic interaction kernels, one has $\Delta\tilde{G}(iq)\equiv 0$, which implies $\Lambda_{k,q},\Lambda_{p,q}\equiv 0$ for all $k$'s and $p$'s, and the ansatz \eqref{gensolexact}, through Eqs.~\eqref{zetak} and \eqref{zetap}, reduces to a finite combination of the exponentials given by the Bogoliubov dispersion relation \eqref{dis}. This is, in particular, the case for contact-type interactions. Furthermore, the decay of $\Lambda_{k,q},\Lambda_{p,q}$ as functions of $q$ depends on the radial trap through $\Delta\tilde{G}(iq)$. For the Gaussian approximation adopted here, Eq.~\eqref{inteq} shows that $\Lambda_{k,q},\Lambda_{p,q}$ are exponentially suppressed for large $q$, whereas for the box trap profile, $\Lambda_{k,q},\Lambda_{p,q}$ decay with a power law [cf.~Eq.~\eqref{disc} and the discussion after it].
In order to find a solution in the form \eqref{gensolexact}, the scattering coefficients $S_{k}$ and $S_{p}$ must be uniquely fixed
up to an overall phase and a normalization constant.
We note first that substitution of this ansatz into the BdG equation implies (after a lengthy calculation) that a solution in this form exists only if
\begin{align}
&\sum_{k}S_{k}\Lambda_{k,q_{-}}\sigma_4\Phi_{k}+\sum_{p}S_{p}\Lambda_{p,q_{-}}\sigma_4\Phi_{p}=0,\label{cond1}\\
&\sum_{k}S_{k}\Lambda_{k,q_{+}}\sigma_4\Phi_{k}+\sum_{p}S_{p}\Lambda_{p,q_{+}}\sigma_4\Phi_{p}=0,\label{cond2}
\end{align}
are satisfied, which thus fixes two of the six scattering coefficients. These equations are necessary conditions for Eq.~\eqref{inteq} to hold (details in Appendix \ref{exact}). The remaining boundary conditions are fixed by standard wave mechanics techniques applied to Eq.~\eqref{bogonambu}: $\Psi_{\omega}$ and $\partial_x\Psi_{\omega}$ are continuous at the barrier.
A few more general comments about the ansatz \eqref{gensolexact} are in order.
First of all, the solution just found is built from the solutions of the singular integral equation \eqref{inteq}, which in fact can be even more difficult to solve than the BdG equation itself.
However, this equation can be solved numerically to any precision by means of cubic splines \cite{Eric}. {Although this
requires considerable numerical effort, the latter method has an advantage over, e.g., collocation schemes \cite{Polyanin},
as it does not rely on prior assumptions about the form of the solutions
(explicit examples are contained in Appendix \ref{examples}).
The sensitivity towards choosing the appropriate numerical scheme additionally serves to illustrate
the mathematical challenge posed by nonlocal field theories.}
Also, we see from the definitions in Eqs.~\eqref{zetak} and \eqref{zetap} that in addition to the evanescent channels coming from the dispersion relation \eqref{dis},
the nonlocal dipolar interactions give rise to a continuum of evanescent channels {\it when a sound barrier exists}.
This can be traced back to the analytical properties of $\tilde{G}$ discussed in Sec.~\ref{secgtilde}, as we see from there that if $\Delta\tilde{G}\neq0$ (see Eq.~\eqref{disc}), then $\Lambda_{k,q}$ given by Eq.~\eqref{inteq} is nonvanishing. Furthermore, we know that when the barrier is absent ($g_{\rm c}=0$), the solutions to the BdG equation are single propagating exponentials \cite{Fischer2006,Fischer2018,Shinn}. The numerical implementation of the general solution \eqref{gensolexact} for this particular case recovers this fact, as we verified when building the numerical solutions presented in Appendix \ref{examples}.
We conclude this presentation of
solutions to the BdG equation with a further remark regarding the level of mathematical complexity of this system compared with sound propagation in contact-only interacting condensates. In the latter, the analogous situation of a sound barrier modeled by a sudden $g_{\rm c}$ jump over a homogeneous condensate
results in field modes that are combinations of only a few simple exponentials \cite{Curtis}.
This observation, gleaned from contact interactions,
motivates us to seek a different strategy for solving for the field modes, which consists in an approximation to the kernel \eqref{gfourier} such that the corresponding field modes are also expressed as a finite combination of exponentials.
\subsection{Approximate solution}
\label{subsecD}
We shall now build approximate field modes by substituting for the kernel of Eq.~\eqref{gfourier}
a family of approximating models which recover
the original field modes in the continuum limit. A natural way of doing so is to replace the integral in Eq.~\eqref{gfourier} by a finite Riemann sum, which produces a family of approximating meromorphic functions. Specifically, we
have $\tilde{G}\rightarrow \tilde{\mathcal{G}}$, where
\begin{equation}
\tilde{\mathcal{G}}(k)= \frac{k^2}{\sum_{j=0}^{\mathcal{N}}j\Delta q^2e^{-j^2\Delta q^2/2}}\sum_{j=0}^{\mathcal{N}}\frac{j\Delta q^2e^{-j^2\Delta q^2/2}}{j^2\Delta q^2+k^2},\label{approxG}
\end{equation}
where the two parameters $\Delta q>0$ and the integer $\mathcal{N}$ are free. We note that $\mathcal{N}\rightarrow \infty$ and $\Delta q\rightarrow 0$ reproduces exactly Eq.~\eqref{gfourier}.
Furthermore, $2\mathcal{N}$ is the number of (simple) poles of this function, which are located at $k=ij\Delta q$ for $1\leq|j|\leq \mathcal{N}$, and, naturally, $\Delta q$ is the distance between two consecutive poles. In Fig.~\ref{figmain1} we show some examples of the accuracy of our approximation.
In particular, we see from Fig.~\ref{figmain1} that for $\mathcal{N}=10$ and $\Delta q=1/3$, i.e., the approximating $\tilde{\mathcal{G}}$ containing only 20 poles, a reasonable agreement is already obtained. Let us now build the solutions for $\Psi_{\omega}$ for the model of Eq.~\eqref{approxG}.
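The quality of the discretization is easy to probe numerically. The sketch below compares Eq.~\eqref{approxG} against the exact Gaussian-profile kernel, using the coarse grid quoted in the text ($\mathcal{N}=10$, $\Delta q=1/3$) and a finer, purely illustrative grid:

```python
import math

GAMMA = 0.5772156649015329

def E1(x):
    # convergent series E1(x) = -gamma - ln x - sum_{n>=1} (-x)^n / (n * n!)
    s, term = 0.0, 1.0
    for n in range(1, 120):
        term *= -x / n
        s += term / n
    return -GAMMA - math.log(x) - s

def G_exact(k):
    """Exact Gaussian-profile kernel (harmonic transverse trapping)."""
    if k == 0.0:
        return 0.0
    x = 0.5 * k * k
    return x * math.exp(x) * E1(x)

def G_approx(k, N, dq):
    """Riemann-sum approximation of Eq. (approxG): a meromorphic function
    with 2N simple poles at k = +/- i j dq, j = 1..N (the j = 0 term
    vanishes), normalized so that G_approx -> 1 for k -> infinity."""
    weights = [j * dq * dq * math.exp(-0.5 * (j * dq) ** 2) for j in range(1, N + 1)]
    norm = sum(weights)
    s = sum(w / ((j * dq) ** 2 + k * k) for j, w in enumerate(weights, start=1))
    return k * k * s / norm

for k in (0.5, 1.0, 2.0):
    coarse = abs(G_approx(k, 10, 1 / 3) - G_exact(k))  # parameters of the text
    fine = abs(G_approx(k, 200, 0.05) - G_exact(k))    # finer, illustrative grid
    assert coarse < 0.05 and fine < coarse             # refining the grid converges
```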
\begin{figure}[t]
\includegraphics[scale=0.5]{figmain1.pdf}
\caption{Approximations for the Fourier space kernel
$\tilde{G}$ in \eqref{gfourier} for two sets of $\{\mathcal{N},\Delta q\}$.
For $\mathcal{N}=100$ the approximation is indistinguishable on the scale of the figure from the exact result.
}
\label{figmain1}
\end{figure}
The general asymptotic properties of the possible field modes are exactly the same as presented in Fig.~\ref{figmain2}. In the absence of $\tilde{\mathcal{G}}$, Eq.~\eqref{dis} is a quartic polynomial equation in $k$, whose solutions are found exactly for each $\omega>0$, whereas for $\tilde{\mathcal{G}}$ given in Eq.~\eqref{approxG}, in addition to these four solutions, another $2\mathcal{N}$ solutions are present, one for each pole of $\tilde{\mathcal{G}}$.
Furthermore, in view of the numerator of Eq.~\eqref{approxG}, it is easy to notice that no solution coincides with the poles of $\tilde{\mathcal{G}}$ in the complex $k$ plane.
For the approximate model under study, the solutions for the dispersion relation can be grouped in the single zero level set $f_{\omega}(k)=0$, as done in Fig.~\ref{figmain3}.
\begin{figure}[b]
\includegraphics[scale=0.64]{figmain3.pdf}
\caption{Zero level sets of the real and imaginary parts of Eq.~\eqref{dis}. Blue circles are solutions to $f_{\omega}=0$. We have
chosen $\mathcal{N}=10$ and $\Delta q=1/3$ in the discrete representation \eqref{approxG}, with $\omega=0.4$. $k_{r}$ and $k_i$ denote the real and imaginary parts of $k$, respectively. Left (right): dispersion relation at $x<0$ ($x>0$). The solutions on the imaginary axis tend to a continuum of evanescent channels when $\mathcal{N}\rightarrow\infty$ and $\Delta q\rightarrow0$.}
\label{figmain3}
\end{figure}
As explained, in each region of the condensate and for each $\omega>0$, Eq.~\eqref{dis} has $4+2\mathcal{N}$ solutions, with the real solutions depicted in Fig.~\ref{figmain2} and the remaining ones necessarily complex. We recall that when $\tilde{G}$ is the exact kernel given in Eq.~\eqref{gfourier}, the wave function presents a continuum of evanescent modes on the imaginary axis. We thus readily see that the approximation implemented here also discretizes this continuum, as indicated by Fig.~\ref{figmain3}, where, in addition to the six channels existing when $\tilde{G}$ is used, we have the extra blue points on the imaginary axis. Furthermore, as $\mathcal{N}\rightarrow\infty$ and $\Delta q\rightarrow 0$, these solutions clump together, thus recovering the continuum of the
solutions in Eq.~\eqref{gensolexact} and reinforcing the validity of our approximation. We label the evanescent channels at $x<0$ by $k_{j}$ and at $x>0$ by $p_j$. Clearly, $\mbox{Im}\ k_j\leq0$ and $\mbox{Im}\ p_j\geq0$.
It turns out that these exponentials, the solutions to the dispersion relation Eq.~\eqref{dis} of the approximated model,
can be used to construct solutions to the wave equation. Indeed, let us look for a solution with the ansatz
\begin{align}
\Psi_{\omega}=\left\{
\begin{array}{c}
\sum_{p}S_{p}e^{ipx}\Phi_{p},\ x>0,\\
\sum_{k}S_{k}e^{ikx}\Phi_{k},\ x<0,
\end{array}\right.\label{gensol}
\end{align}
whose relation to Eq.~\eqref{gensolexact} is manifest: the integrals in Eqs.~\eqref{zetak} and \eqref{zetap} are substituted by finite sums of evanescent channels (see Fig.~\ref{figmain3}).
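As a generic numerical illustration of this substitution (using a toy exponential kernel and weight, not the model's actual $\tilde{\mathcal{G}}$), a finite sum of evanescent channels on a grid of spacing $\Delta q$ converges to its continuum counterpart as $\mathcal{N}\rightarrow\infty$ and $\Delta q\rightarrow0$:

```python
import numpy as np

def continuum(x):
    # Continuum of evanescent channels with a toy weight e^{-q}:
    # \int_0^\infty e^{-q x} e^{-q} dq = 1/(x + 1)
    return 1.0 / (x + 1.0)

def discretized(x, N, dq):
    # Finite sum over channels on the grid q_j = j*dq
    j = np.arange(1, N + 1)
    return dq * np.sum(np.exp(-j * dq * x) * np.exp(-j * dq))

x = 0.7
coarse = abs(discretized(x, 10, 1/3) - continuum(x))    # N=10, dq=1/3 as above
fine = abs(discretized(x, 1000, 1/300) - continuum(x))  # denser grid, same range
assert fine < coarse
```

Refining the grid while keeping $\mathcal{N}\Delta q$ fixed shrinks the error, in line with the recovery of the continuum of solutions \eqref{gensolexact}.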
Each incoming signal at the barrier gives rise to a distinct solution, which is associated with reflected and evanescent channels. By carefully counting the number of unknown coefficients $S_k$ and $S_p$, we find that for a given $\mathcal{N}$, we have $2+\mathcal{N}$ $S_{k}$'s and $2+\mathcal{N}$ $S_{p}$'s, leading to a total of $4+2\mathcal{N}$ unknown coefficients for each field mode. If a solution exists in the form \eqref{gensol}, then the coefficients are fixed by Eq.~\eqref{bogonambu}. We note, however, that the application of standard wave mechanics techniques to the field equation
only produces 4 boundary conditions, namely, $\Psi_{\omega}$ and $\partial_{x}\Psi_{\omega}$ are continuous at the barrier. This is because the convolution in Eq.~\eqref{bogonambu} is always continuous (see Appendix \ref{analytic}).
We conclude, therefore, that $2\mathcal{N}$ boundary conditions appear to be ``missing,'' and the solution to this puzzle is found from the form of Eq.~\eqref{gensol}. Indeed, the convolution $\sigma_4 G*\Psi_{\omega}$ of Eq.~\eqref{bogonambu} is conveniently written as
\begin{equation}
\frac{1}{2\pi}\int\mathrm{d} k e^{ikx}\tilde{\mathcal{G}}(\beta k)\sigma_4\tilde{\Psi}_{\omega}(k),\label{conf}
\end{equation}
where $\tilde{\Psi}_{\omega}(k)$ is the Fourier transform of $\Psi_{\omega}$ (cf.~Appendix \ref{analytic}), and the (simple) poles of $\tilde{\Psi}_{\omega}$ are precisely the $k$'s and $p$'s appearing in Eq.~\eqref{gensol}. Thus, as the poles of $\tilde{\mathcal{G}}(\beta k)$ cannot coincide with the poles of $\tilde{\Psi}_{\omega}$ (which are given by the dispersion relation \eqref{dis}), the convolution \eqref{conf} does not have evanescent channels at the poles of $\tilde{\mathcal{G}}(\beta k)$
if and only if $\sigma_4\tilde{\Psi}_{\omega}$ vanishes at these poles. Therefore, as $\tilde{\mathcal{G}}(\beta k)$ has $2\mathcal{N}$ poles at the imaginary axis, a solution of the form \eqref{gensol} exists if and only if
\begin{equation}
\sigma_4\tilde{\Psi}_{\omega}(ij\Delta q)=0,\label{boundaryfactorization}
\end{equation}
for $-\mathcal{N}\leq j\leq \mathcal{N}$, $j\neq0$. This gives us an additional set of $2\mathcal{N}$ boundary conditions, as required.
This concludes the construction of the field modes for the system under study. We emphasize at this point
that the approximate solutions described above greatly reduce the numerical effort necessary to
simulate any observable quantity in the inhomogeneous dipolar BEC.
\section{S-matrix and unitarity}
All the field modes thus constructed admit the in/out state interpretation:
For each field mode, the incoming channel, labeled by $k_{\rm in}$ or $p_{\rm in}$, represents a signal sent towards the boundary in the asymptotic past, which emerges in the asymptotic future propagating away from the barrier through the various reflected/transmitted channels.
This can be read off directly from the solutions in Eqs.~\eqref{gensolexact} and \eqref{gensol}, as far away from the boundary each elementary excitation reduces to a sum of plane waves.
The scattering coefficients involved in all of these processes can be conveniently grouped into a unitary matrix --- the S-matrix --- which expresses the conservation of field mode normalization throughout the system's causal development. When only contact interactions are present, the BdG equation implies this conservation in the form $\partial_{x}J_{\omega}=0$, where $J_{\omega}=\mbox{Im}\ (\Psi_{\omega}^{\dagger}\partial_{x}\Psi_{\omega})$. The important consequence of this equation is that the total flux in the system is conserved, $J_{\omega}(\infty)-J_{\omega}(-\infty)=0$, giving rise to the S-matrix, conveniently defined in terms of the norm-preserving condition
\begin{equation}
\sum_{k\ {\rm prop}}k\Phi_{k}^{\dagger}\Phi_{k}|S_{k}|^2\stackrel{\mbox{contact}}{=}\sum_{p\ {\rm prop}}p\Phi_{p}^{\dagger}\Phi_{p}|S_{p}|^2,\label{fluxcontact}
\end{equation}
where the sums are performed over the propagating channels only.
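The balance \eqref{fluxcontact} is the BdG analogue of elementary flux conservation at a step. As a minimal single-particle sketch (a textbook Schr\"odinger step barrier, not the BdG problem itself), the matching coefficients obey the same kind of velocity-weighted balance over propagating channels:

```python
import numpy as np

def step_coefficients(k1, k2):
    # Plane wave incident on a potential step, with wavenumbers k1 (left)
    # and k2 (right); r and t follow from continuity of psi and psi'.
    r = (k1 - k2) / (k1 + k2)
    t = 2 * k1 / (k1 + k2)
    R = abs(r) ** 2               # reflected flux / incident flux
    T = (k2 / k1) * abs(t) ** 2   # transmitted flux weighted by velocity k2
    return R, T

R, T = step_coefficients(1.3, 0.7)
assert np.isclose(R + T, 1.0)     # total flux conserved across the step
```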
However, when dipolar interactions are present, $J_{\omega}$ is no longer conserved (see Appendix \ref{Aflux} for an extended discussion):
\begin{equation}
\partial_{x}J_{\omega}=6
\mbox{Im}[(G*\Psi_{\omega}^{\dagger})\sigma_4 \Psi_{\omega}].
\end{equation}
This should not be read as implying that there is no flux conservation, but rather that the quantity $J_{\omega}$ is no longer a bona fide representation of the system's total flux. We show in detail in Appendix \ref{Aflux} that for both families of solutions found in the previous section the dipolar analogue of Eq.~\eqref{fluxcontact} acquires the form
\begin{align}
\sum_{k\ {\rm prop}}\Phi_{k}^{\dagger}&\left(k-3\frac{\mathrm{d}\tilde{\mathcal{G}}}{\mathrm{d} k}\sigma_4\right)\Phi_{k}|S_{k}|^2=\nonumber\\
&\sum_{p\ {\rm prop}}\Phi_{p}^{\dagger}\left(p-3\frac{\mathrm{d}\tilde{\mathcal{G}}}{\mathrm{d} p}\sigma_4\right)\Phi_{p}|S_{p}|^2.\label{flux}
\end{align}
The meaning of this relation is more readily grasped by constructing the S-matrix. To that end, we
take the solution for $\Phi_{k}$ to be normalized as
\begin{align}
\Phi_{k}=&\left|\frac{k^2}{4\pi V_{\rm g}\omega(\omega-k^2/2)^{2}}\right|^{1/2}\nonumber\\&\times
\left(\begin{array}{c}
1+g_{\rm c}/g_{\rm d}-3\tilde{\mathcal{G}}\\
\omega-k^2/2-1-g_{\rm c}/g_{\rm d}+3\tilde{\mathcal{G}}
\end{array}\right),\label{normchannel}
\end{align}
where $V_{\rm g}=
\mathrm{d}\omega/\mathrm{d} k$ is the group velocity.
We
have presented here the flux conservation in terms of the approximated model, which in turn implies its validity for the
continuum as we take the limit of an infinite number of discretization steps. Also, we note that the particular form of the normalization in Eq.~\eqref{normchannel} is not relevant for the physics of the problem. However, this form is convenient for us because it implies
\begin{equation}
\Phi_{k}^{\dagger}\left(k-3\frac{\mathrm{d}\tilde{\mathcal{G}}}{\mathrm{d} k}\sigma_4\right)\Phi_{k}=\frac{1}{2\pi }\mbox{sgn}(V_{\rm g}),\label{normflux}
\end{equation}
for propagating channels. This form of normalization is one possible choice that bounds the absolute values of the scattering coefficients by unity, as we shall see now. Let us label the field modes according to their incoming channel by adding a superscript to the scattering coefficients. For instance, for $p_{\rm in}$, we shall set $S_{k}\rightarrow S_{k}^{p_{\rm in}}$ in the general solution \eqref{gensol}. Moreover, we shall set to unity the intensity of the incoming channels, i.e., $S_{p_{\rm in}}^{p_{\rm in}}=S_{k_{\rm in}}^{k_{\rm in}}=1$.
In view of the schematics displayed in the lower panel of Fig.~\ref{figmain2}, Eqs.~\eqref{flux} and \eqref{normflux} simply mean that the matrix $\bm{S}_{\omega}$ defined by
\begin{equation}
\bm{S}_{\omega}=\left(
\begin{array}{cc}
S^{k_{\rm in}}_{k_1} & S^{k_{\rm in}}_{p_1}\\
S^{p_{\rm in}}_{k_1} & S^{p_{\rm in}}_{p_1}
\end{array}
\right),
\label{smatrix1}
\end{equation}
for $\omega\notin(\Omega^{\rm (r)},\Omega^{\rm (m)})$ and
\begin{equation}
\bm{S}_{\omega}=\left(
\begin{array}{cccc}
S^{k_{\rm in1}}_{k_1} & S^{k_{\rm in1}}_{k_2} & S^{k_{\rm in1}}_{k_3} & S^{k_{\rm in1}}_{p_1}\\
S^{k_{\rm in2}}_{k_1} & S^{k_{\rm in2}}_{k_2} & S^{k_{\rm in2}}_{k_3} & S^{k_{\rm in2}}_{p_1}\\
S^{k_{\rm in3}}_{k_1} & S^{k_{\rm in3}}_{k_2} & S^{k_{\rm in3}}_{k_3} & S^{k_{\rm in3}}_{p_1}\\
S^{p_{\rm in}}_{k_1} & S^{p_{\rm in}}_{k_2} & S^{p_{\rm in}}_{k_3} & S^{p_{\rm in}}_{p_1}
\end{array}
\right),
\label{smatrix2}
\end{equation}
in case that $\omega\in(\Omega^{\rm (r)},\Omega^{\rm (m)})$, is unitary, i.e., $\bm{S}_{\omega}^\dagger\bm{S}_{\omega}=\bm{S}_{\omega}\bm{S}_{\omega}^\dagger=\mathbb{1}$. This ``unitarity'' enables us to study quasiparticle scattering by the sound barrier as considered here, because it ensures that no signal amplitude is lost during quasiparticle propagation and scattering. Finally, a different choice of normalization for $\Phi_{k}$ clearly does not spoil the S-matrix as it is given by Eq.~\eqref{flux}, but the simplified form of $\bm{S}_{\omega}$ and its unitarity condition change.
We finish this section with a disclaimer regarding the S-matrix nomenclature. Naturally, scattering matrices appear generically in distinct branches of physics, and as such it is important that the meaning of the notion of S-matrix be completely clear in each context. We cite, for instance, the two works \cite{Jackel1992,Barnett1996} in which S-matrices appear within the same wave mechanics context as in our work. Yet, in \cite{Barnett1996} the very same mathematical object is not referred to as an S-matrix.
We should thus stress that the S-matrix in our context is not the same object as the one occurring in particle scattering in quantum field theory, where it represents the Schr\"odinger equation written in terms of the evolution operator \cite{Schwartz2014}.
Note that a proof of the unitarity of the S-matrix in such a quantum field theory context for a nonlocal family of scalar quantum field theories was provided in \cite{Efimov2}.
\section{Transmittance and reflectance: Impact of dipolar interaction}
As an application of the solutions constructed above, we now investigate how the sound barrier transmittance/reflectance is affected by the roton minimum.
We shall use in this section {\it only} the approximate model, as it allows for an easier numerical simulation,
and leave for the Appendix \ref{examples} some worked out examples of the
singular integral BdG equation \eqref{inteq}. We recall from the dispersion relation \eqref{dis} that the influence of the dipolar interactions is measured by the coefficient $\beta=\ell_{\bot}$: as $\beta$ increases, the roton minimum emerges and the system eventually becomes dynamically unstable; when $\beta \rightarrow 0$, the system is stable provided that $g_d>0$.
Therefore, $\beta$ measures how deep into the quasi-1D regime the system has penetrated.
Furthermore, we note from Eqs.~\eqref{gfourier} and \eqref{approxG} that, because of the global factor $k^2$, the long-range part of the dipolar interactions is suppressed in the quasi-1D limit $\beta\rightarrow0$, and the condensate behaves no differently from one with only local contact interactions, which is modeled by $g=g_{\rm d}$ for $x<0$ and $g=g_{\rm d}+g_{\rm c}$ for $x>0$. Thus our model enables us to compare results with the non-dipolar case operating near or within the quasi-1D limit.
For the sake of a clear representation, we now treat separately the cases with roton and no roton minimum.
\subsection{No roton minimum}
When rotonic excitations are not present in the system, for each frequency $\omega$ in the system spectrum there are two elementary excitations, corresponding to the signals sent towards the barrier at each of its sides, i.e., signals $k_{\rm in}$ and $p_{\rm in}$. Accordingly, the S-matrix has the form \eqref{smatrix1} for each frequency subspace.
As $\bm{S}_{\omega}$ is unitary, this means that both its row and column vectors form orthonormal bases, which in turn implies
that the absolute value of one of its components fixes all the others. Therefore, by calculating $|S^{k_{\rm in}}_{p_1}|^2$ --- the transmitted intensity through the barrier from left to right --- we also determine $|S^{k_{\rm in}}_{k_1}|^2$, $|S^{p_{\rm in}}_{p_1}|^2$, and $|S^{p_{\rm in}}_{k_1}|^2$.
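This property of $2\times2$ unitary matrices is easy to verify numerically; the sketch below uses a randomly generated unitary matrix and does not depend on the model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random 2x2 unitary matrix via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
S, _ = np.linalg.qr(A)

assert np.allclose(S.conj().T @ S, np.eye(2))          # unitarity
# One squared modulus fixes the other three:
assert np.isclose(abs(S[0, 0])**2, abs(S[1, 1])**2)
assert np.isclose(abs(S[0, 1])**2, abs(S[1, 0])**2)
assert np.isclose(abs(S[0, 0])**2 + abs(S[0, 1])**2, 1.0)
```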
We show in Fig.~\ref{figmain5} how the barrier transmittance varies with frequency for several values of $\beta$.
\begin{figure}[t]
\includegraphics[scale=0.5]{figmain5.pdf}
\caption{Sound barrier transmittance measured by the coefficient $|S^{k_{\rm in}}_{p_1}|^2$ for several values of $\beta$. Inset: positive branch of the dispersion relation for each curve, showing how the roton minimum threshold is approached.
The brown-dotted curve corresponds to the contact-only regime ($\beta=0$), showing that this barrier is almost transparent for contact-only interactions. Yet, the bending of the dispersion relation
leads to a noticeable decay of the barrier transparency even in the absence of a roton minimum, as shown by the continuous, dashed, and dot-dashed curves.
Here, discretization parameters are $\mathcal{N}=10$ and $\Delta q=1/3.4$. For these parameters, the roton minimum formation threshold is approximately $\beta\sim0.689$.}
\label{figmain5}
\end{figure}
The brown-dotted curve corresponds to $\beta=0$, and it shows that the barrier with ``height'' modeled by $g_{\rm c}/g_{\rm d}=0.2$ is almost transparent for contact-only interactions. Yet, we note that a noticeable decay in the transmittance is predicted to occur for an extended range of signal frequencies even in the absence of a roton minimum when $\beta$ is increased, which thus reveals how wave propagation in this system is sensitive to inhomogeneities.
\subsection{With roton minimum}
\label{subsecroton}
We recall that when the roton minimum exists, and for frequencies not in the grey band shown
in the left upper panel of Fig.~\ref{figmain2},
$\omega\notin (\Omega^{(\rm r)},\Omega^{(\rm m)})$, there are only two elementary excitations for each frequency, and the same reasoning used to interpret the above case with no roton minimum present can be repeated.
We present in Fig.~\ref{figmain8} several simulations for this regime, which supplement our findings from Fig.~\ref{figmain5} beyond the roton minimum formation: the high-energy sector of the theory is sensitive to the bending of the dispersion relation, a feature not present when only contact interactions exist for condensates at rest.
\begin{figure}[t]
\includegraphics[scale=0.5]{figmain8.pdf}
\caption{Sound barrier transmittance measured by the coefficient $|S^{k_{\rm in}}_{p_1}|^2$ for several barrier heights $g_{\rm c}/g_{\rm d}$. The shaded region corresponds to the band $\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)})$, which cannot be characterized by the coefficient $|S^{k_{\rm in}}_{p_1}|^2$ alone. Here, $\mathcal{N}=10$, $\Delta q=1/3.4$, and $\beta=0.76$. This implies $\Omega^{(\rm r)}\sim0.24$ and $\Omega^{(\rm m)}\sim0.45$ for the roton and maxon frequencies, respectively. These curves represent supplementary data to the ones presented in Fig.~\ref{figmain5}: although those sound barriers have weak influence on the phonon sector of the theory, the high-energy sector is sensitive to the bending of the dispersion relation, a feature not present in the contact-only case.}
\label{figmain8}
\end{figure}
Furthermore, within the band $\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)})$ (the shaded area in Fig.~\ref{figmain8}) the degeneracy of each frequency subspace is 4 for the parameter choice we are investigating, corresponding to the four distinct quasiparticles that can be excited: $k_{\rm in1}$, $k_{\rm in2}$, $k_{\rm in3}$, $p_{\rm in}$. We stress that this increased degeneracy has no counterpart in condensates whose particles interact only locally. It is instructive to analyze each quasiparticle separately.
\subsubsection{$\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)}), k_{\rm in1}$ excitation}
\label{paragraph1}
We show in Fig.~\ref{figmain9} the reflectance and transmittance coefficients of the quasiparticle branch indexed by $k_{\rm in 1}$,
cf.~Fig.~\ref{figmain2}.
Reflectance and transmittance measure the fractions of the plane wave signal $\exp(-i\omega t+ik_{\rm in1}x)$ coming towards the barrier from the left that get reflected and transmitted through the various available channels: $k_{1}$, $k_2$, $k_3$, and $p_1$.
\begin{figure}[t]
\includegraphics[scale=0.5]{figmain9.pdf}
\caption{Scattering coefficients for the quasiparticle branch indexed by $k_{\rm in1}$ in the frequency band $\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)})$. Here, $\mathcal{N}=10$ and $\Delta q=1/3.4$. This implies $\Omega^{(\rm r)}\sim0.24$ and $\Omega^{(\rm m)}\sim0.45$ for the roton and maxon frequencies, respectively. We note that the sum $|S^{k_{\rm in1}}_{k_1}|^2+|S^{k_{\rm in1}}_{k_2}|^2+|S^{k_{\rm in1}}_{k_3}|^2+|S^{k_{\rm in1}}_{p_1}|^2=1$ for all $\omega$, one of the properties of the S-matrix. For the quasiparticles labeled by $k_{\rm in1}$, the barrier is mostly opaque, with the incoming signal being reflected exclusively through the channel $k_1$ for $\omega\rightarrow\Omega^{(\rm r)}$ and through the channel $k_{2}$ as $\omega\rightarrow\Omega^{(\rm m)}$.}
\label{figmain9}
\end{figure}
Therefore, the data in Fig.~\ref{figmain9} show us that the barrier is mostly opaque for these waves, as the transmittance in this case satisfies $|S^{k_{\rm in 1}}_{p_1}|^2\ll1$. Furthermore, the signal is entirely reflected through the channel $k_{1}$ (respectively~$k_2$) for frequencies close to the roton (respectively~maxon) frequency, and this feature survives even for very small $g_{\rm c}/g_{\rm d}=0.001$. It is noteworthy that in the barrier's absence, i.e., $g_{\rm c}=0$, any signal of this form is clearly entirely transmitted, and {\it thus the very existence of the barrier leads to the total reflection of $k_{\rm in1}$ quasiparticles with frequencies near $\Omega^{(\rm r)}$ and $\Omega^{(\rm m)}$.}
\subsubsection{$\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)}), k_{\rm in2}$ excitation}
The analysis of the remaining quasiparticle modes follows the same line of reasoning just presented. Figure \ref{figmain10} presents the scattering coefficients for the $k_{\rm in2}$ quasiparticle branch.
We see from Fig.~\ref{figmain10} that, in distinction to the $k_{\rm in1}$ quasiparticles, plane waves of the form $\exp(-i\omega t +ik_{\rm in2}x)$ sent towards the barrier from the left are entirely transmitted for frequencies close to the roton minimum, and a transition to a completely opaque barrier is observed as the frequency tends to $\Omega^{(\rm m)}$. We also observe that this opaqueness near maxon frequencies survives even for very small $g_{\rm c}/g_{\rm d}=0.001$, in the same way as it happens for the $k_{\rm in1}$ excitation discussed in the preceding paragraph.
\begin{figure}[t]
\includegraphics[scale=0.5]{figmain10.pdf}
\caption{Scattering coefficients for the quasiparticle branch indexed by $k_{\rm in2}$ in the frequency band $\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)})$. The parameters are the same as in Fig.~\ref{figmain9}. We observe that these quasiparticles experience a transition from a completely transparent barrier for frequencies near $\Omega^{(\rm r)}$ to a completely opaque one for frequencies near $\Omega^{(\rm m)}$, with the reflection occurring exclusively through $k_3$. The channels $k_1$ and $k_2$ have negligible participation in the scattering process.}
\label{figmain10}
\end{figure}
\begin{figure}[t]
\hspace*{-1.25em}
\includegraphics[scale=0.48]{figmain11.pdf}
\caption{Scattering coefficients in the frequency band $\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)})$. Left panel: quasiparticle branch indexed by $k_{\rm in3}$. Right panel: quasiparticle branch indexed by $p_{\rm in}$. The parameters are the same as in Figs.~\ref{figmain9}
and \ref{figmain10}. Both families of quasiparticles experience a partially transmitting barrier for frequencies near $\Omega^{(\rm m)}$, whereas for frequencies near the roton minimum, the quasiparticles $k_{\rm in3}$ (respectively~$p_{\rm in}$) experience a completely opaque (respectively transmitting) barrier.}
\label{figmain11}
\end{figure}
\subsubsection{$\omega\in (\Omega^{(\rm r)},\Omega^{(\rm m)}), k_{\rm in3}$ and $p_{\rm in}$ excitations}
The remaining quasiparticle branches, namely, $k_{\rm in3}$ and $p_{\rm in}$, are analyzed in the same fashion. They correspond to the plane waves $\exp(-i\omega t +ik_{\rm in3}x)$ and $\exp(-i\omega t +ip_{\rm in}x)$ propagating toward the barrier from the left and right, respectively. We see from Fig.~\ref{figmain11} that both waves experience a partially transmitting barrier for frequencies near $\Omega^{(\rm m)}$, in distinction to the $k_{\rm in1}$ and $k_{\rm in2}$ excitations
discussed above that experience a completely opaque barrier at these frequencies.
Furthermore, for frequencies near the roton minimum, the barrier becomes completely opaque for the $k_{\rm in3}$ excitations and completely transparent for the $p_{\rm in}$ branch.
\section{Final remarks}
The present study provides a systematic route to describe the scattering of quasiparticles in inhomogeneous
dipolar Bose-Einstein condensates.
To this end, we have studied perturbations in a quasi-1D dipolar condensate in which a sound interface exists, separating the condensate into two regions possessing different sound velocities. The perturbations were built via two distinct methods, one based on the direct simulation of the solutions and one based on a family of approximated models. Both methods exploit the fact that the long-range dipolar interaction in Fourier space is not modeled by an analytic kernel, which gives rise to a distinct set of evanescent channels bound to the sound barrier.
As a particular application, we have shown how sound scattering occurs
as a function of the perturbation frequency. An intricate pattern of reflectance/transmittance was shown to emerge due to the existence of rotonic excitations, which originate from the interaction anisotropy and lead to a strong dependence of the barrier transmittance on the mode frequency and ``polarization.'' In fact, if rotonic excitations exist, for frequencies within the roton/maxon band the number of elementary excitations that can be scattered by the barrier is larger than two, in contrast to the case where only contact interactions are present. Each of these corresponds to a distinct system response to external excitations, and the barrier behaves as fully or partially transparent or opaque depending on the excitation.
We started from a homogeneous quasi-1D dipolar condensate at rest, and asked how sound propagates in this system if a sound barrier is constructed at which the sound speed experiences a steplike change.
As the system also admits rotonic (and maxonic) excitations when dipole-interaction-dominated,
we demonstrated that the S-matrix dimension is enlarged. It should be emphasized that, starting from the two basic ingredients of our setup (a dipolar condensate at rest and a sound barrier), only the exact knowledge of the field modes we derived makes it possible to unveil the peculiar results presented for the reflectance and transmittance of the increased number of modes available to the system.
The properties of the S-matrix we obtain clearly distinguish dipole-interaction-dominated BECs from their contact-interaction-dominated counterparts.
The importance of the present work thus consists in providing a recipe to build the quasiparticle spectrum in
dipolar condensates with inhomogeneous sound velocity. We should also stress that our method is not restricted to condensates with homogeneous densities, as only a few modifications are necessary to study configurations in which the sound interface is caused by density jumps.
Indeed, once the knowledge of how to build field modes for this type of inhomogeneous dipolar condensate is established, it is straightforward to apply the same technique to a number of systems of interest. In particular, the application we have explored here is far from exhausting all the features inherent in this type of system, with examples including a roton minimum almost touching the wavevector axis, which could hybridize with low-energy phonons, and the existence of rotonic excitations on both sides of the barrier.
As a natural extension of our results, nontrivial effects are expected if the sound barrier is taken to have a finite size.
Furthermore, we expect that the selective nature of quasiparticle scattering in dipolar BECs, and its impact on transmission and reflection coefficients, will significantly affect the properties of droplets, which represent naturally emerging interfaces in the condensate due to quantum fluctuations \cite{Ferrier,Pfau,Sinha,Santos,Edmonds}.
Although we have not explored
in our study
any quantum features of the system, the field modes we
constructed are already properly normalized according to the Bogoliubov scalar product.
Therefore, quantization of the present nonlocal field theory is a formal step which follows straightforwardly
from expanding the bosonic field operator into this complete set of modes.
We finally note that
infinitely extended (quasi-)1D Bose-Einstein condensates,
according to the Hohenberg theorem \cite{Hohenberg}, do not exist \footnote{The Hohenberg theorem holds whenever the $f$-sum rule can be applied.},
due to the divergence of nonlocal phase fluctuations in the system. This imposes certain restrictions on the applicability of infinitely extended systems to model experimental realizations, as we expect the phonon part of the spectrum to be sensitive to finite size effects. This, however, should not interfere with the general properties of the scattering processes here investigated --- which occur far from the long-wavelength phonon regime --- as long as the condensate is still sufficiently large along the weakly confining direction, where the achievable length is subject to a position space version of the Hohenberg theorem \cite{Fischer2002}.
\section{Acknowledgments}
We thank Dr.~Luana Silva Ribeiro for her valuable comments regarding singular integral equations. This work has been supported by the National Research Foundation of Korea under
Grants No.~2017R1A2A2A05001422 and No.~2020R1A2C2008103.
\section{Introduction and main results}
Let us imagine that we want to know if a Fibonacci number, a pentagonal number, one of a pair of amicable numbers or a Pythagorean triple, the terms in a sequence in the On-line Encyclopedia of Integer Sequences (OEIS, \cite{OEIS}), or any other positive integer can be written as
\begin{equation}
\label{eq:n=k+pk}
n = k + p_k,
\end{equation}
where $(p_k)_{k \ge 1}$ is the sequence of the prime numbers. (Let us note that none of Mathematica, Maple, or SageMath is able to solve the equation~\eqref{eq:n=k+pk} even in simple cases with small numbers, for instance for $n=10+p_{10}=39$.)
Of course, one can look at the sequence $(k+p_k)_{k \ge 1}$ (which is the sequence \seqnum{A014688} in the OEIS) and check if $n$ occurs in the sequence; but this can be a hard task. Instead, we give here an iterative process which, given $n$, tells whether the equation \eqref{eq:n=k+pk} has a solution or not, and, in the affirmative case, provides the solution~$k$.
It is worth noting that there exist a large number of iterative methods for solving equations of the form $f(x) = 0$ where $f$ is a function defined on $\mathbb{R}$, $\mathbb{C}$, $\mathbb{R}^d$, $\mathbb{C}^d$ or a Banach space. However, there are very few iterative methods for solving equations in the theory of numbers.
Perhaps the best example of an iterative method for solving an equation in this area is \cite{KnXe}, where the authors show how classical rootfinding methods from numerical analysis can be used to calculate inverses of units modulo prime powers.
Throughout the paper, we will use some well known properties of prime numbers, which can be found in many texts; see, for instance, \cite{Apo, BatDia, DKoLu, FiRo, New}.
As usual, $\pi(x)$ denotes the quantity of prime numbers $\le x$, which is a nondecreasing function. Moreover, we will often use the basic properties $\pi(p_k) = k$ and $p_{\pi(n)} \le n$ (with $p_{\pi(n)} = n$ if and only if $n$ is prime). Let us also note that $k + p_k$ is strictly increasing with~$k$. Thus, for a fixed $n$ the solution of~\eqref{eq:n=k+pk}, if it exists, is unique.
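These basic properties are easy to spot-check computationally; a small sketch using SymPy's prime utilities (the variable names are ours):

```python
from sympy import prime, primepi, isprime

# pi(p_k) = k for the first few indices
for k in range(1, 50):
    assert primepi(prime(k)) == k

# p_{pi(n)} <= n, with equality exactly when n is prime
for n in range(2, 200):
    p = prime(primepi(n))
    assert p <= n
    assert (p == n) == bool(isprime(n))

# k + p_k is strictly increasing, so a solution of n = k + p_k is unique
vals = [k + prime(k) for k in range(1, 50)]
assert all(a < b for a, b in zip(vals, vals[1:]))
```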
A possible method to try to find $k$ such that $n = k + p_k$ is to take $f(k) = k + p_k - n$ and solve the equation $f(k)=0$ by means of a bisection method starting at the initial points $k_0 = 1$ and $k_1=n$. The function $f(k)$ is increasing and satisfies $f(k_0)<0$ and $f(k_1)>0$ so, if the solution exists, the bisection method always finds it. Moreover, the number of iterations needed to reach the solution is $\lfloor\log_2(n-1)\rfloor$ (of course, this number can be a bit different due to the rounding of the bisection to an integer in every step). Instead, we propose another method which is faster, as we can see in the examples, and has a nice dynamics with its own interest.
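A direct transcription of this bisection search, using SymPy's prime for $p_k$ (the function name and return convention are ours):

```python
from sympy import prime

def solve_bisection(n):
    """Return k with k + p_k == n, or None; assumes n >= 4."""
    f = lambda k: k + prime(k) - n     # increasing in k
    lo, hi = 1, n                      # f(lo) < 0 and f(hi) > 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return lo if f(lo) == 0 else None

assert solve_bisection(39) == 10       # 10 + p_10 = 10 + 29 = 39
assert solve_bisection(51) is None     # 51 = k + p_k has no solution
```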
The iterative method that we are going to use is simple and, as far as we know, has not been proposed before in this context; it is a kind of regula falsi method to solve an equation $f(k) = 0$ with $f(k) = \pi(n-k) - k$, which is not equivalent to~\eqref{eq:n=k+pk}, but closely related.
Since $p_k \approx k \log k$, it follows that $p_k$ gives the main contribution to the sum $k+p_k$.
The idea is then to approximate the equation \eqref{eq:n=k+pk} by the equation $n = p_k$, and so to take a first guess $k = \pi(n)$; then we adjust the initial guess by successive corrections.
Let us note that any $k$ which solves $n=k+p_k$ satisfies $n-k=p_k$, a prime number, so $\pi(n-k) = k$.
Thus, the procedure for solving~\eqref{eq:n=k+pk} is based on the iterative scheme $k_{j+1} = \pi(n-k_j)$, and the goal is to look for a fixed point.
The following result shows the dynamics of the iterative method:
\begin{theorem}
\label{theo:iter}
Let $n \ge 4$ be a given integer, and let us define the iterative process
\begin{equation}
\label{eq:iter}
k_{j+1} = \pi(n-k_j), \qquad k_0 = \pi(n).
\end{equation}
After a finite number of steps, one of these cases occurs:
\begin{enumerate}
\item[(i)] We get a fixed point $k^{*}$.
\item[(ii)] We get a cycle $\{k',k''\}$ with $k'' = k'+1$ and $n-k'$ a prime number; the subsequence $(k_{2j})_{j\ge0}$ is decreasing and converges to $k''$, while $(k_{2j+1})_{j\ge0}$ is increasing and converges to~$k'$.
\end{enumerate}
\end{theorem}
We postpone the proof of this theorem to Section~\ref{sec:proofs}.
For the moment, let us see the consequences of this result on the solutions of the equation~\eqref{eq:n=k+pk}.
\begin{theorem}
\label{theo:sol}
Let $n \ge 4$ be a given integer, and consider the equation $n = k + p_k$
together with the iterative process
\begin{equation*}
k_{j+1} = \pi(n-k_j), \qquad k_0 = \pi(n).
\end{equation*}
Under these circumstances:
\begin{enumerate}
\item[(i)] If we get a fixed point $k^{*}$ (case (i) in Theorem~\ref{theo:iter}), then $k^{*}$ is the solution of the equation
\[
n' = k + p_k
\]
where $n' = \max\{\, j + p_j : j \ge 1,\ j + p_j \le n \,\}$.
In particular, the equation $n = k + p_k$ has a solution if and only if $n'=n$,
and then $k^{*}$ is the solution.
(In practice it is enough to check whether $k^{*} + p_{k^{*}} = n$ or not.)
\item[(ii)] If we get a cycle $\{k',k''\}$ (with $k''=k'+1$, case (ii) in Theorem~\ref{theo:iter}), the equation $n = k + p_k$ has no solution.
\end{enumerate}
\end{theorem}
As in the case of Theorem~\ref{theo:iter}, we postpone the proof of Theorem~\ref{theo:sol} to Section~\ref{sec:proofs}.
Both cases (i) and (ii) can really occur, as the following illustrative examples with small numbers show. The case (i) occurs, for instance, with $n=51$ or $n=76$.
For $n=51$, the successive $k_j$ in \eqref{eq:iter} are
\begin{gather*}
k_0 = \pi(51) = 15,
\quad
k_1 = \pi(51-15) = 11,
\\
k_2 = \pi(51-11) = 12,
\quad
k_3 = \pi(51-12) = 12,
\end{gather*}
so we have reached the fixed point $k^{*} = 12$; but $k^{*} + p_{k^{*}} = 12 + 37 = 49 \ne 51$,
so the equation $51 = k + p_{k}$ has no solution.
For $n=76$, the successive $k_j$ in \eqref{eq:iter} are
\begin{gather*}
k_0 = \pi(76) = 21,
\quad
k_1 = \pi(76-21) = 16,
\\
k_2 = \pi(76-16) = 17,
\quad
k_3 = \pi(76-17) = 17,
\end{gather*}
so $k^{*} = 17$ is a fixed point; this time $k^{*} + p_{k^{*}} = 17+59 = 76$,
so we have found the solution of $76 = k + p_{k}$.
To illustrate the case (ii), let us take, for instance, $n=41$.
The successive $k_j$ in \eqref{eq:iter} are
\begin{gather*}
k_0 = \pi(41) = 13,
\quad
k_1 = \pi(41-13) = 9,
\quad
k_2 = \pi(41-9) = 11,
\\
k_3 = \pi(41-11) = 10,
\quad
k_4 = \pi(41-10) = 11,
\\
k' = k_5 = k_7 = k_9 = \cdots = 10,
\quad
k'' = k_6 = k_8 = k_{10} = \cdots = 11.
\end{gather*}
Then, $41 = k + p_{k}$ has no solution.
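The computations above are easy to reproduce. The following Python sketch (the helper names are ours, not from the paper) runs the iteration $k_{j+1}=\pi(n-k_j)$ until a value repeats, and then decides whether $n = k + p_k$ has a solution:

```python
from bisect import bisect_right

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p, flag in enumerate(sieve) if flag]

def solve(n):
    """Return k with n = k + p_k, or None if no solution exists (n >= 4)."""
    primes = primes_up_to(n)                 # p_k <= n - k < n, so this suffices
    pi = lambda x: bisect_right(primes, x)   # prime-counting function
    k, seen = pi(n), set()
    while k not in seen:                     # iterate k_{j+1} = pi(n - k_j)
        seen.add(k)
        k = pi(n - k)
    if pi(n - k) == k:                       # reached the fixed point k*
        return k if k + primes[k - 1] == n else None   # primes[k-1] is p_k
    return None                              # reached the 2-cycle: no solution
```

Here `solve(76)` returns `17`, while `solve(51)` and `solve(41)` return `None`, in agreement with the three examples above.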
Some examples with bigger numbers are given in Table~\ref{tab:Fib}.
The iterative algorithm is applied to some Fibonacci numbers (\seqnum{A000045} in the OEIS), recording whether the method converges to a fixed point $k^{*}$ or to a cycle $\{k',k''\}$, together with the number of iterations needed to reach it. Observe that the number of iterations for $n \le F_{70} \approx 1.9\cdot 10^{14}$ is always less than ten; for each equation we also see that our method is faster than the bisection method, which requires a considerably larger number of iterations.
\begin{table}
\centering
\begin{tabular}{c@{}c@{}c@{}cc}
\hline
$F_m$ & Iterations & $k^{*}$ or $\{k',k''\}$
& Is $k^{*}$ a solution? & $\lfloor\log_2{F_{m}}\rfloor$
\\
\hline
$F_{12} = 144$ & 3 & $30$ & no & 7
\\
$F_{13} = 233$ & 3 & $\{42, 43\}$ & --- & 7
\\
$F_{14} = 377$ & 4 & $\{64, 65\}$ & --- & 8
\\
$F_{15} = 610$ & 3 & $97$ & no & 9
\\
$F_{27} = 196\,418$ & 5 & $\{16\,347, 16\,348\}$ & --- & 17
\\
$F_{42} = 267\,914\,296$ & 7 & $13\,887\,473$ & yes & 27
\\
$F_{50} = 12\,586\,269\,025$ & 8 & $\{543\,402\,114, 543\,402\,115\}$ & --- & 33
\\
$F_{53} = 53\,316\,291\,173$ & 7 & $2\,166\,313\,972$ & yes & 35
\\
$F_{66} = 27\,777\,890\,035\,288$ & 9 & $899\,358\,426\,281$ & no & 44
\\
$F_{67} = 44\,945\,570\,212\,853$ & 9 & $1\,432\,816\,693\,546$ & no & 45
\\
$F_{68} = 72\,723\,460\,248\,141$ & 9 & $2\,283\,240\,409\,254$ & no & 46
\\
$F_{69} = 117\,669\,030\,460\,994$ & 9 & $3\,639\,256\,692\,076$ & no & 46
\\
$F_{70} = 190\,392\,490\,709\,135$ & 9 & $5\,801\,907\,791\,391$ & no & 47
\\
\hline
\end{tabular}%
\caption{The iterative method applied to solve $F_m = k + p_k$ where $F_m$ are the Fibonacci numbers with $12 \le m \le 70$.
We omit the $F_{m}$ with $15 < m \le 65$ whose corresponding iterative methods converge to a fixed point $k^{*}$ which is not a solution of $F_m = k + p_k$. In the last column, $\lfloor\log_2{F_{m}}\rfloor$ is a rather precise estimate of the number of iterations required to solve the equation with the bisection method.}
\label{tab:Fib}
\end{table}
Instead of~\eqref{eq:n=k+pk} we can consider the more general equation
\begin{equation*}
n = ak + bp_k,
\end{equation*}
with $b \ge 1$ and $a \in \mathbb{Z} \setminus \{0\}$
(the trivial case $a=0$ is excluded).
In this setting we can consider the iterative process which starts with $k_0 = \pi(n/b)$ and continues with
\[
k_{j+1} = \pi((n-ak_j)/b),
\qquad j \ge 0.
\]
Now, the behavior of the iterative method and its relation with the solution of $n = ak + bp_k$ is a bit more complicated than in the case $a=b=1$. We analyze it in Section~\ref{sec:gen}.
\section{Proof of Theorems~\ref{theo:iter} and~\ref{theo:sol}}
\label{sec:proofs}
\begin{proof}[Proof of Theorem~\ref{theo:iter}]
It is clear that $k_1 = \pi(n-k_0) \le \pi(n) = k_0$. From the inequality $k_1 \le k_0$, it follows that $k_2 = \pi(n-k_1) \ge \pi(n-k_0) = k_1$. Now, from $k_2 \ge k_1$ we get $k_3 = \pi(n-k_2) \le \pi(n-k_1) = k_2$, and so on. That is,
\begin{equation}
\label{eq:alterna}
k_{2j+1} \le k_{2j}
\quad\text{and}\quad
k_{2j+2} \ge k_{2j+1},
\qquad \text{for }j = 0,1,2,\dots.
\end{equation}
Moreover, $k_2 = \pi(n-k_1) \le \pi(n) = k_0$, and from the inequality $k_2 \le k_0$ it follows that $k_3 = \pi(n-k_2) \ge \pi(n-k_0) = k_1$. Now, from $k_3 \ge k_1$ we get $k_4 = \pi(n-k_3) \le \pi(n-k_1) = k_2$, and so on. Thus, $(k_{2j})_{j\ge0}$ is a decreasing sequence, while $(k_{2j+1})_{j\ge0}$ is an increasing sequence. They are both bounded sequences of positive integers, so eventually constant; that is, there exist $k'$ and $k''$ (with $k' \le k''$) and a certain $J$ such that
\[
k_{2j+1} = k'
\quad\text{and}\quad
k_{2j} = k''
\qquad \text{for }j \ge J.
\]
If $k' = k''$, then $k^{*} = k'=k''$ is the fixed point in~\eqref{eq:iter}, the possibility (i) in the theorem.
Otherwise, let us suppose that $k' < k''$, so
\[
k' = \pi(n-k'') < \pi(n-k') = k''.
\]
Then $\pi(n-k')-\pi(n-k'')=k''-k'$, so there are $k''-k'$ prime numbers $q_j$ such that
\begin{equation}
\label{eq:qj}
n-k'' < q_1 < q_2 < \cdots < q_{k''-k'-1} < q_{k''-k'} \le n-k'.
\end{equation}
Since the interval $(n-k'',\,n-k']$ contains exactly $k''-k'$ integers, all of them must be prime; except for $2$ and $3$, prime numbers are never consecutive integers, so \eqref{eq:qj} is only possible in two cases:
\begin{itemize}
\item Case $k''-k'=2$, with $n-k''=1$, $n-k'=3$, $q_1 = 2$ and $q_2=3$.
In this case, we would have $k' = \pi(n-k'') = \pi(1) = 0$, which cannot happen because $k'$ is the limit of the increasing sequence $(k_{2j+1})_{j\ge0}$.
\item Case $k''-k'=1$, with a unique prime $q_1 = n-k'$ in~\eqref{eq:qj}. Then $k''=k'+1$ and
\[
\pi(n-k') = k'+1, \quad \pi(n-k'-1) = k'.
\]
This is the possibility (ii) in the theorem.
\qedhere
\end{itemize}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:sol}]
Let us first analyze the case (i).
We have $k^{*} = \pi(n-k^{*})$ and $p_{k^{*}} \le p_{\pi(n-k^{*})} \le n-k^{*}$, so $k^{*} + p_{k^{*}} \le n$. Thus, $k^{*} \in A$ where $A$ is the set
\[
A = \{ j \ge 1 : j + p_j \le n \}.
\]
To conclude the proof of case (i), it is enough to check that $\max A = k^{*}$ (in particular, this implies $k^{*} + p_{k^{*}} = n'$).
Let us suppose that $\max A \ne k^{*}$. In this case, $k^{*}+1 \in A$, so $p_{k^{*}+1} + k^{*}+1 \le n$, and therefore $p_{k^{*}+1} \le n - k^{*} - 1 \le n - k^{*}$. This implies that $\pi(n-k^{*}) \ge k^{*} + 1$, which is false because $\pi(n-k^{*}) = k^{*}$.
In case (ii), we have
\[
k' = \pi(n-k'') < \pi(n-k') = k'',
\]
with $k'' = k'+1$.
Using that $n-k'$ is a prime number, we have
$p_{k'+1} = p_{k''} = p_{\pi(n-k')} = n-k'$, so $k' + p_{k'+1} = n$ and then
\[
k' + p_{k'} < n < k'+1 + p_{k'+1}.
\]
This clearly implies that $n = k + p_k$ has no solution.
Finally, let us check that there cannot exist any solution of \eqref{eq:n=k+pk} which is
not detected by the iterative method~\eqref{eq:iter}.
Let us suppose on the contrary that $k$ is a solution not detected by the method;
then, $n = k + p_k$ and $\pi(n-k) = k$, so $k$ is a fixed point of
$k_{j+1} = \pi(n-k_j)$, that is, a fixed point of \eqref{eq:iter} except for the starting step $k_0 = \pi(n)$.
But we have proved that the iterative method~\eqref{eq:iter}, starting
in~$k_0 = \pi(n)$, converges to $k^{*}$ or to the 2-cycle $\{k',k''\}$
with $k''=k'+1$. If \eqref{eq:iter} converges to $k^{*}$, this $k^{*}$ is the unique solution of
\eqref{eq:n=k+pk}, so $k=k^{*}$.
If \eqref{eq:iter} converges to $\{k',k''\}$, then $k<k'$ or $k>k''$.
However, both cases are impossible:
\begin{equation}
\label{eq:kk'k''}
\begin{gathered}
k < k' \;\Rightarrow\; k = \pi(n-k) \ge \pi(n-k') = k'' \;\Rightarrow\; k \ge k'', \text{ a contradiction};
\\
k > k'' \;\Rightarrow\; k = \pi(n-k) \le \pi(n-k'') = k' \;\Rightarrow\; k \le k', \text{ a contradiction}.
\end{gathered}
\end{equation}
Then, there cannot exist such a fixed point $k$.
\end{proof}
\section{Generalization to the equation $n = a k + b p_k$}
\label{sec:gen}
A more general equation than \eqref{eq:n=k+pk} is
\begin{equation}
\label{eq:n=ak+bpk}
n = a k + b p_k
\end{equation}
with $b \in \mathbb{Z}^{+}$ and $a \in \mathbb{Z} \setminus\{0\}$.
Given $n$, we want to know if there exists some $k$ satisfying~\eqref{eq:n=ak+bpk}
and how to find it by the iterative process
\begin{equation}
\label{eq:iterab}
k_{j+1} = \pi((n-ak_j)/b),
\qquad j \ge 0,
\end{equation}
starting with $k_0 = \pi(n/b)$.
Of course, the reason to choose the iterative process \eqref{eq:iterab} is that,
if $k$ is a solution of \eqref{eq:n=ak+bpk}, then $(n-ak)/b = p_k$, a prime number,
so $\pi((n-ak)/b) = \pi(p_k) = k$ and $k$ is a fixed point of~\eqref{eq:iterab}.
Depending on whether $a>0$ or $a<0$, the beginning of the iterative process \eqref{eq:iterab} varies.
If $a>0$ we have $k_1 \le k_0$; however, if $a<0$ we have $k_1 \ge k_0$.
Moreover, if $a<0$, the sequence $ak+bp_k$ is not always increasing with $k$, so we cannot ensure that the solution of $n = ak+bp_k$, if it exists, is unique. These differences motivate considering the cases $a>0$ and $a<0$ separately.
In what follows, we present this study in a more informal way, without stating the properties as theorems. Actually, the behavior of the iterative method and its relation with the solutions of \eqref{eq:n=ak+bpk} is similar to what happens in Theorems~\ref{theo:iter} and~\ref{theo:sol} in the case $a>0$, but rather different in the case $a<0$.
\subsection{Case $a>0$}
\label{sec:gena>0}
Let us start by noticing that we want the $k_j$ that arise in the iterative method~\eqref{eq:iterab} to be positive integers (otherwise, $p_{k_j}$ does not exist).
Due to the equivalence $\pi(x) \sim x/\log(x)$ for $x\to\infty$ (prime number theorem),
we can guarantee that $k_j > 0$ for all $j$ if $n$ is large enough
(the exact required size depends on $a$ and~$b$). Indeed, for $n\to\infty$ we have
\[
k_0 = \pi(n/b) \sim \frac{n/b}{\log(n/b)} \sim \frac{n}{b\log(n)}
\]
and
\[
k_1 = \pi((n-ak_0)/b) \sim \frac{(n-ak_0)/b}{\log((n-ak_0)/b)}
\sim \frac{n}{b\log(n)},
\]
so $k_0, k_1 \ge 1$ if $n$ is big enough.
Then, given the equation $n = a k + b p_k$, we can assume that $n$ satisfies
\begin{equation}
\label{eq:admis}
k_1 = \pi\big((n-ak_0)/b\big) = \pi\big((n-a\pi(n/b))/b\big) \ge 1
\end{equation}
(this holds except for a finite set of $n$'s). Note that, for $a=b=1$, \eqref{eq:admis} becomes $\pi(n-\pi(n)) \ge 1$, which holds for every $n \ge 4$.
After imposing this technical restriction for small values of $n$, let us analyze the iterative method.
The assumption $a>0$ gives $k_1 = \pi((n-ak_0)/b) \le \pi(n/b) = k_0$. From $k_1 \le k_0$, it follows that $k_2 = \pi((n-ak_1)/b) \ge \pi((n-ak_0)/b) = k_1$, and from $k_2 \ge k_1$, we get $k_3 = \pi((n-ak_2)/b) \le \pi((n-ak_1)/b) = k_2$, and so on.
Moreover, $k_2 = \pi((n-ak_1)/b) \le \pi(n/b) = k_0$
and $k_3 = \pi((n-ak_2)/b) \ge \pi((n-ak_0)/b) = k_1$, and so on.
In particular, if $k_1 \ge 1$ then $k_j \ge 1$ for every $j$,
so assuming \eqref{eq:admis} is enough to ensure that all the $k_j$ will be positive integers.
Following as in the proof of Theorem~\ref{theo:iter} we get that $(k_{2j})_{j\ge0}$ is a decreasing sequence, $(k_{2j+1})_{j\ge0}$ is an increasing sequence, and there exist $k'$ and~$k''$ (with $k' \le k''$) and a certain $J$ such that
\[
k_{2j+1} = k'
\quad\text{and}\quad
k_{2j} = k''
\qquad \text{for }j \ge J.
\]
If $k' = k''$, then $k^{*} = k'=k''$ is the fixed point in~\eqref{eq:iterab}.
Otherwise, if $k' < k''$ we have
\[
k' = \pi((n-ak'')/b) < \pi((n-ak')/b) = k''
\]
so $\pi((n-ak')/b)-\pi((n-ak'')/b)=k''-k'$, and there are $k''-k'$ prime numbers $q_j$ such that
\begin{equation}
\label{eq:qjab}
\frac{n-ak''}{b} < q_1 < q_2 < \cdots < q_{k''-k'-1} < q_{k''-k'} \le \frac{n-ak'}{b}.
\end{equation}
{}From this point on, the treatment of the case $k' < k''$ is somewhat different.
Let us suppose that two of these primes are $q_1 = 2$ and $q_2 = 3$. Then $(n-ak'')/b = 1$ and $k' = \pi((n-ak'')/b) = \pi(1) = 0$, which cannot happen. Therefore, we can assume that $q_{j+1}-q_j \ge 2$ for all these primes~$q_j$. Clearly, any interval $(x,y]$ containing $m$ of these primes must satisfy $y-x > 2(m-1)$. In the case \eqref{eq:qjab}, this means that
\[
\frac{n-ak'}{b} - \frac{n-ak''}{b} > 2(k''-k')-2,
\]
that is, $a(k''-k')/b > 2(k''-k')-2$, or
\begin{equation}
\label{eq:2b-a2b}
(2b-a)(k''-k') < 2b.
\end{equation}
Then, two situations can occur:
\begin{itemize}
\item If $a \ge 2b$, \eqref{eq:2b-a2b} is trivial and does not imply any restriction on $k'$ and~$k''$.
\item If $a < 2b$, \eqref{eq:2b-a2b} can be written as
\begin{equation}
\label{eq:2b/2b-a}
k''-k' < \frac{2b}{2b-a}.
\end{equation}
In the particular case $a \le b$ we have $1 < 2b/(2b-a) \le 2$, so \eqref{eq:2b/2b-a} is equivalent to
$k'' = k'+1$ (recall that we are analyzing the case $k'<k''$).
\end{itemize}
In the case $a \le b$, the remaining arguments in the proofs of Theorems~\ref{theo:iter} and~\ref{theo:sol} are valid. In particular \eqref{eq:kk'k''}, which guarantees that the iterative method \eqref{eq:iterab}, starting at $k_0 = \pi(n/b)$, detects as fixed points all the solutions of $n = ak + bp_k$.
If $a \ge 2b$ or $b < a < 2b$, we cannot ensure that $k''=k'+1$. There might be an integer $k$ with $k' < k < k''$ which is a fixed point
\[
\pi((n-ak)/b) = k
\]
and a solution of $n = ak + bp_k$ not detected by the iterative method \eqref{eq:iterab} starting at $k_0=\pi(n/b)$; that is, the method converges to the 2-cycle $\{k',k''\}$ instead of to $k$. In practice, if we are looking for a solution of $n = ak + bp_k$, it then suffices to check, for every $k$ satisfying $k' < k < k''$, whether $k$ is a solution.
Let us give some examples to illustrate that these situations can occur.
For the case $a \ge 2b$, let us see what happens with the equation
\[
n = 7k+2p_k
\]
for several values of~$n$, with convergence to a fixed point $k^{*}$ or to cycles $\{k',k''\}$ with different values of $k''-k'$.
For $n=10\,040$, we have $k_0 = 672$ and the iterative process converges to $k^{*} = 474$, which is a solution of the equation;
instead, for $n = 10\,041$, again $k_0 = 672$ and the iterative process converges to $k^{*} = 474$, which is not a solution of the equation.
For $n=10\,073$, we have $k_0 = 674$ and the iterative process converges to the cycle $\{474,476\}$.
For $n=10\,300$, we have $k_0 = 686$ and the iterative process converges to the cycle $\{482,485\}$.
For $n=10\,325$, we have $k_0 = 687$ and the iterative process converges to the cycle $\{483,487\}$.
For $n=10\,532$, we have $k_0 = 698$ and the iterative process converges to the cycle $\{491,497\}$.
In all these cases, less than $10$ iterations are enough to reach $k^{*}$ or $\{k',k''\}$.
For the case $b < a < 2b$, in the equation $12\,660 = 3k+2p_k$ we have $k_0 = 824$ and the iterative method converges to the cycle $\{699,701\}$, where $k''-k'=2$.
Let us also show some instances of the equation $n = ak+bp_k$ with a solution $k$ such that the iterative method \eqref{eq:iterab} starting with $k_0=\pi(n/b)$ converges to a cycle $\{k',k''\}$.
For $n = 2\cdot 33 + p_{33} = 203$, where $k=33$ is clearly a solution of $n = 2k + p_{k}$, the iterative method converges to the cycle $\{32, 34\}$, with $k''-k'=2$.
For $n = 6\cdot 100 + p_{100} = 1141$, where $k=100$ is the solution of $n = 6k + p_{k}$, the iterative method converges to the cycle $\{80, 121\}$, with $k''-k'=41$.
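These examples can be reproduced with the following sketch (ours, not from the paper; it uses a sieve-based prime-counting helper) of the iteration \eqref{eq:iterab} with cycle detection:

```python
from bisect import bisect_right

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p, flag in enumerate(sieve) if flag]

def iterate(n, a, b):
    """k_{j+1} = pi((n - a*k_j)/b) starting from k_0 = pi(n/b), for a > 0.

    Returns ('fixed', k*) or ('cycle', (k', k'')) once the orbit repeats.
    """
    primes = primes_up_to(n)            # arguments of pi stay below n when a > 0
    pi = lambda x: bisect_right(primes, x)
    k, trace = pi(n // b), []
    while k not in trace:
        trace.append(k)
        k = pi((n - a * k) // b)        # pi(floor(x)) equals pi(x)
    cycle = sorted(set(trace[trace.index(k):]))   # orbit is periodic from here
    return ('fixed', k) if len(cycle) == 1 else ('cycle', (cycle[0], cycle[-1]))
```

For $n=203$, $a=2$, $b=1$ the function returns the cycle $(32,34)$, although $k=33$ is a fixed point and a solution (since $p_{33}=137$); the remaining examples above can be checked in the same way.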
\subsection{Case $a<0$}
\label{sec:gena<0}
Let us start by noticing that, in general, $ak + bp_k$ is not increasing with $k$ when $a<0$.
Therefore, we cannot ensure the uniqueness of the solution of $n = ak + bp_k$ for fixed $a$, $b$ and~$n$.
For instance, $n=105$ has two solutions $k$ in the equation $n = -2k+p_k$:
\[
105 = -2 \cdot 43 + p_{43} = -2 \cdot 44 + p_{44}.
\]
With $n = -3k+p_k$ we can also find two solutions that are not consecutive integers:
\[
100 = -3 \cdot 59 + p_{59} = -3 \cdot 61 + p_{61}.
\]
There can exist more than two solutions, as is the case of the equation $n = -4k+p_k$:
\[
99 = -4 \cdot 83 + p_{83} = -4 \cdot 85 + p_{85} = -4 \cdot 86 + p_{86}.
\]
However, $p_k$ always grows faster than $k$, so the sequence $ak + bp_k$ is increasing with $k$ if $0 < -a < b$.
In this case, again the solution of $n = ak + bp_k$, if it exists, is unique.
Anyway, taking into account that $p_k \sim k \log k$ when $k \to \infty$, we have
$ak + bp_k \sim ak + bk\log k = (a+b\log k)k$.
Thus, $ak + bp_k$ is increasing with $k$ for $k$ big enough.
In particular, this ensures that, for fixed $a$, $b$ and~$n$, the number of solutions of $n = ak + bp_k$ is always finite.
Precise estimates of the form
\[
C_1 \, \frac{x}{\log x} \le \pi(x) \le C_2 \, \frac{x}{\log x}
\]
with $C_1, C_2 > 0$, which yield an upper bound (depending on $a$, $b$, $n$, $C_1$ and $C_2$) of the number of solutions, can be found in many texts of number theory (see, for instance, \cite[Theorem~8.8.1]{BaSh}).
Now, let us analyze the behavior of the iterative method \eqref{eq:iterab} starting in $k_0 = \pi(n/b)$.
As shown below, nothing similar to \eqref{eq:alterna} appears in the case $a<0$;
the property \eqref{eq:alterna} is very important in the analysis of the case $a>0$,
and the lack of a suitable alternative is a great handicap.
Although the analysis of the case $a<0$ is presented here for completeness, it must be concluded that the iterative method described in this paper is not as useful for solving~\eqref{eq:n=ak+bpk} as it is when $a>0$.
Since $a<0$, we have $k_1 = \pi((n-a k_0)/b) \ge \pi(n/b) = k_0$;
and, if we assume that $k_j \ge k_{j-1}$, also
\[
k_{j+1} = \pi((n-a k_j)/b) \ge \pi((n-a k_{j-1})/b) = k_j.
\]
Then $(k_j)_{j\ge0}$ is an increasing sequence, so it cannot tend to a cycle.
Let $s$ be any number such that
\begin{equation}
\label{eq:cotaa<0}
\pi((n-as)/b) \le s;
\end{equation}
the existence of such an $s$ follows from the estimate $\pi(x) \sim x/\log x$ when $x \to \infty$.
{}Then (recall that $a<0$)
\[
k_0 = \pi(n/b) \le \pi((n-as)/b) \le s;
\]
and assuming that $k_j \le s$ gives
\[
k_{j+1} = \pi((n-ak_j)/b) \le \pi((n-as)/b) \le s.
\]
This proves that the increasing sequence of integers $k_j$ produced by the iterative method \eqref{eq:iterab} starting with $k_0 = \pi(n/b)$ is bounded.
So there exists some~$k^{*}$ (and an index $J$) such that
\[
k_j = k^{*} \quad\text{for every } j \ge J,
\]
and thus $k^{*}$ is a fixed point of~\eqref{eq:iterab}.
We cannot ensure that the fixed point $k^{*}$ is a solution of \eqref{eq:n=ak+bpk}.
A solution of \eqref{eq:n=ak+bpk}, if it exists, satisfies $\pi((n-as)/b) = s$;
in particular, it also satisfies \eqref{eq:cotaa<0}.
Then $k^{*} \le s$ for any possible solution $s$ of \eqref{eq:n=ak+bpk} such that $s \ge \pi(n/b) = k_0$.
In practice, we can continue checking whether $k^{*}$, $k^{*}+1$, $k^{*}+2$,\dots\ are solutions or not
of \eqref{eq:n=ak+bpk} until, when substituting in $ak+bp_k$, we get a value greater than~$n$;
actually, an additional precaution is necessary:
we must continue the checking until reaching values of $k$ for which the sequence $ak+bp_k$ is already increasing.
For an example of this behavior, let us take $n = -7 \cdot 2000 + p_{2000} = 3389$ (that is, $a=-7$ and $b=1$),
so $s = 2000$ is a solution of the equation $3389 = -7k+p_k$. However, the iterative process $k_{j+1} = \pi(3389+7k_j)$
starting in $k_0 = \pi(3389) = 477$ converges to $k^{*} = 1989$, which is not a solution of the equation. Observe that $s-k^{*} = 11$.
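This example can be verified numerically; the sketch below is ours (the sieve bound $20\,000$ just needs to exceed $p_{2000}=17\,389$) and runs the increasing iteration until it stabilizes:

```python
from bisect import bisect_right

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p, flag in enumerate(sieve) if flag]

primes = primes_up_to(20000)               # p_2000 = 17389, so this bound suffices
pi = lambda x: bisect_right(primes, x)

n, a, b = 3389, -7, 1
assert n == a * 2000 + b * primes[1999]    # s = 2000 solves 3389 = -7k + p_k

k = pi(n // b)                             # k_0 = pi(3389) = 477
while pi((n - a * k) // b) != k:           # increasing and bounded, so it stops
    k = pi((n - a * k) // b)
# k is now the fixed point k* (= 1989, as computed above); it undershoots the
# solution s = 2000 and is not itself a solution of the equation
```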
Finally, let us observe an extra surprise: in the case $a<0$, the fixed point $k^{*}$ of the iterative method
is never a solution of $n = ak+bp_k$. If the equation has no solutions, there is nothing to prove.
If the equation has a solution $s$, let us prove that the fixed point satisfies $k^{*} < s$; in particular, $k^{*} \ne s$.
Indeed, since $n/b < (n-as)/b = p_s$, we have $k_0 = \pi(n/b) < s$; thus it is enough to check that, if $k_j < s$, then also $k_{j+1} < s$.
We have $(n-ak_j)/b < (n-as)/b = p_s$, a prime, so $\pi((n-ak_j)/b) < s$.
But $\pi((n-ak_j)/b) = k_{j+1}$, so $k_{j+1} < s$, as desired.
\section{Acknowledgments}
I am grateful to Samuel Stigliano, an Uruguayan student (of medicine!),
for the motivation to study this problem.
In March 2021, he told me by email that both numbers of the amicable pair
$(220, 284)$ can be written as $k + p_k$,
namely $220 = 41 + p_{41}$ and $284 = 51 + p_{51}$, and he asked if
a similar property happens with other pairs of amicable numbers
(\seqnum{A259180} in the OEIS).
Trying to answer his question, I wanted to decide if a number
$n$ can be written as $n = k + p_{k}$ with a fast algorithm,
and then I arrived at the iterative method presented in this paper.
In particular, this allowed me to find some pairs of amicable numbers
that satisfy the requested property; the smaller is
\[
1\,392\,368 = 99\,525 + p_{99\,525},
\quad
1\,464\,592 = 104\,283 + p_{104\,283}.
\]
This pair of amicable numbers was discovered by Euler in 1747.
\section{Introduction}
In many cases, completeness properties of various objects of General Topology or Topologi\-cal Algebra can be characterized externally as closedness in ambient objects. For example, a metric space $X$ is complete if and only if $X$ is closed in any metric space containing $X$ as a subspace. A uniform space $X$ is complete if and only if $X$ is closed in any uniform space containing $X$ as a uniform subspace. A topological group $G$ is Ra\u\i kov complete if and only if it is closed in any topological group containing $G$ as a subgroup.
Thus, even though there is no reasonable intrinsic notion of completeness for topological semigroups, many completeness properties of semigroups can be defined via their closedness in ambient topological semigroups.
Let us recall that a {\em topological semigroup} is a topological space $X$ endowed with a continuous associative binary operation.
\begin{definition} Let $\C$ be a class of topological semigroups. A topological semigroup $X$ is defined to be
\begin{itemize}
\item {\em $\C$-closed} if for any isomorphic topological embedding $e:X\to Y$ into a topological semigroup $Y\in\C$ the image $e[X]$ is closed in $Y$;
\item {\em injectively $\C$-closed} if for any injective continuous homomorphism $i:X\to Y$ to a topological semigroup $Y\in\C$ the image $i[X]$ is closed in $Y$;
\item {\em absolutely $\C$-closed} if for any continuous homomorphism $h:X\to Y$ to a topological semigroup $Y\in\C$ the image $h[X]$ is closed in $Y$.
\end{itemize}
\end{definition}
For any topological semigroup we have the implications
$$\mbox{absolutely $\C$-closed $\Ra$ injectively $\C$-closed $\Ra$ $\C$-closed}.$$
\begin{definition} A semigroup $X$ is defined to be {\em $\C$-closed} (resp. {\em injectively $\C$-closed}, {\em absolutely $\C$-closed\/}) if so is $X$ endowed with the discrete topology.
\end{definition}
An equivalence relation $\approx$ on a semigroup $X$ is called a {\em congruence} on $X$ if for any elements $x,y,z\in X$ with $x\approx y$ we have $xz\approx yz$ and $zx\approx zy$. For any congruence $\approx$ on a semigroup $X$, the quotient set $X/_\approx$ has a unique structure of a semigroup such that the quotient map $X\to X/_\approx$ is a homomorphism. The semigroup $X/_\approx$ is called a {\em quotient semigroup} of $X$.
A subset $I$ of a semigroup $X$ is called an {\em ideal} if for any $x\in I$ and $y\in X$ the elements $xy$ and $yx$ belong to $I$. Every ideal $I$ determines the congruence $(I\times I)\cup\{(x,y)\in X\times X:x=y\}$ on $X$. The quotient semigroup of $X$ by this congruence is denoted by $X/I$. The semigroup $X$ can be identified with the quotient semigroup $X/\emptyset$.
\begin{definition}
A semigroup $X$ is defined to be
\begin{itemize}
\item {\em projectively $\C$-closed} if for every congruence $\approx$ on $X$ the quotient semigroup $X/_\approx$ is $\C$-closed;
\item {\em ideally $\C$-closed} if for every ideal $I\subseteq X$ the quotient semigroup $X/I$ is $\C$-closed.
\end{itemize}
\end{definition}
For any semigroup we have the implications:
$$
\xymatrix{
\mbox{absolutely $\C$-closed}\ar@{=>}[d]\ar@{=>}[r]&\mbox{projectively $\C$-closed}\ar@{=>}[r]&\mbox{ideally $\C$-closed}\ar@{=>}[d]\\
\mbox{injectively $\C$-closed}\ar@{=>}[rr]&&\mbox{$\C$-closed}.
}
$$
$\C$-closed topological groups for various classes $\C$ were investigated by many authors
in~\cite{AC,AC1,Ban,DU,G,L}. In particular, the closedness of commutative topological groups in the class of Hausdorff topological semigroups was investigated
in~\cite{Z1,Z2}; $\mathcal{C}$-closed topological semilattices were studied
in~\cite{BBm, BBR, GutikPagonRepovs2010, GutikRepovs2008, Stepp75}.
For more information about complete topological semilattices and pospaces we refer to the recent survey of the authors~\cite{BBc}.
This paper is a continuation of~\cite{BB}, where the authors investigated $\C$-closed commutative semigroups.
For a semigroup $X$, its
\begin{itemize}
\item {\em $0$-extension} is the semigroup $X^0=\{0\}\cup X$ where $0\notin X$ is any element such that $0x=0=x0$ for all $x\in X^0$;
\item {\em $1$-extension} is the semigroup $X^1=\{1\}\cup X$ where $1\notin X$ is any element such that $1x=x=x1$ for all $x\in X^1$.
\end{itemize}
A topology $\tau$ on a semigroup $X$ is called a {\em semigroup topology} if the binary operation of $X$ is continuous with respect to the topology $\tau$.
\begin{definition} A semigroup $X$ is called {\em zero-closed} if $X$ is closed in $(X^0,\tau)$ for any $T_1$ semigroup topology $\tau$ on $X^0$.
\end{definition}
For $i\in\{0,1,2,3,3\frac12\}$ by $\mathsf T_{\!i}\mathsf S$ we denote the class of topological semigroups satisfying the separation axiom $T_i$ (see~\cite{Eng} for more details). The class of all zero-dimensional topological semigroups (i.e., $T_1$ topological semigroups possessing a base consisting of clopen sets) is denoted by $\mathsf{T_{\!z}S}$.
Note that any zero-dimensional space is Tychonoff and any $T_1$ space with a unique non-isolated point is zero-dimensional.
This yields the following chain of implications:
$$
\mbox{$\mathsf{T_{\!1}S}$-closed}\Ra\mbox{$\mathsf{T_{\!2}S}$-closed}\Ra\mbox{$\mathsf{T_{\!3}S}$-closed}\Ra\mbox{$\mathsf{T_{\!3\frac12}S}$-closed}\Ra\mbox{$\mathsf{T_zS}$-closed}\Ra\mbox{zero-closed}.$$
By a {\em semigroup polynomial} on a semigroup $X$ we understand a function $f:X\to X$ of the form $f(x)=a_0xa_1\cdots xa_n$ for some elements $a_0,\dots,a_n\in X^1$, where the number $n\ge 1$ is called the {\em degree} of $f$.
Note that every semigroup polynomial $f$ on a topological semigroup $X$ is continuous.
\begin{definition} A subset $A$ of a semigroup $X$ is called {\em polybounded in} $X$ if $$A\subseteq \bigcup_{i=1}^n\{x\in X:f_i(x)=b_i\}$$for some elements $b_1,\dots,b_n\in X$ and some semigroup polynomials $f_1,\dots,f_n$ on $X$.
A semigroup $X$ is called {\em polybounded} if $X$ is polybounded in $X$.
\end{definition}
\begin{definition} A semigroup $X$ is defined to have {\em finite-to-one shifts} if for any $a,b\in X$ the sets $\{x\in X:ax=b\}$ and $\{x\in X:xa=b\}$ are finite.
\end{definition}
The class of semigroups with finite-to-one shifts includes all groups and more generally, all cancellative semigroups. Let us recall that a semigroup $X$ is {\em cancellative} if for any $a,b\in X^1$ the two-sided shift $s_{a,b}:X\to X$, $s_{a,b}:x\mapsto axb$, is injective.
One of the main results of this paper is the following characterization of $\C$-closed countable semigroups with finite-to-one shifts.
\begin{theorem}\label{char} For a countable semigroup $X$ with finite-to-one shifts, the following conditions are equivalent:
\begin{enumerate}
\item $X$ is $\mathsf{T_{\!1}S}$-closed;
\item $X$ is $\mathsf{T_{\!z}S}$-closed;
\item $X$ is zero-closed;
\item $X$ is polybounded.
\end{enumerate}
Moreover, if the semigroup $X$ is cancellative, then the equivalent conditions \textup{(1)--(4)} imply that $X$ is a group.
\end{theorem}
By Proposition~\ref{qd}, a homomorphic image of a polybounded semigroup is polybounded. Since any homomorphic image of a group is a group, Theorem~\ref{char} implies the following characterization.
\begin{corollary}
For a countable cancellative semigroup $X$ the following conditions are equivalent:
\begin{enumerate}
\item $X$ is polybounded;
\item $X$ is zero-closed;
\item $X$ is $\mathsf{T_{\!1}S}$-closed;
\item $X$ is projectively $\mathsf{T_{\!1}S}$-closed.
\end{enumerate}
\end{corollary}
\begin{definition} Let $\C$ be a class of topological semigroups.
A semigroup $X$ is called
\begin{itemize}
\item
{\em $\C$-nontopologizable} if for every injective homomorphism $i:X\to Y$ to a topological semigroup $Y\in\C$ the image $i[X]$ is a discrete subspace of $Y$;
\item {\em projectively $\C$-nontopologizable} if for every homomorphism $h:X\to Y$ to a topological semigroup $Y\in\C$ the image $h[X]$ is a discrete subspace of $Y$.
\end{itemize}
\end{definition}
Topologizability and nontopologizability of groups and semigroups were studied in many papers, see, e.g.,
\cite{BPS, DI, KOO, Kotov, Malcev, Mar, Taimanov78, Tro}.
The $\mathsf T_{\!i}\mathsf S$-nontopologizability of countable semigroups can be characterized in terms of Zariski topologies, which were studied in~\cite{BDT, DS1, DS2, DT1, DT2, EJM}.
For a semigroup $X$ its
\begin{itemize}
\item {\em Zariski topology} $\Zeta_X$ is the topology on $X$ generated by the subbase consisting of the sets $\{x\in X:f(x)\ne b\}$ and $\{x\in X:f(x)\ne g(x)\}$ where $b\in X$ and $f,g$ are semigroup polynomials on $X$;
\item {\em Zariski $T_1$-topology} $\Zeta'_X$ is the topology on $X$ generated by the subbase consisting of the sets $\{x\in X:f(x)\ne b\}$ where $b\in X$ and $f$ is a semigroup polynomial on $X$.
\end{itemize}
The following theorem was proved independently by Podewski \cite{Podewski} and Taimanov \cite{Taimanov78}.
\begin{theorem}[Podewski--Taimanov]\label{t:Podewski}A countable semigroup $X$ is $\mathsf{T_{\!1}S}$-nontopologizable if and only if the Zariski $T_1$ topology $\Zeta'_X$ is discrete.
\end{theorem}
The next theorem was announced by Taimanov \cite{Taimanov78} and proved by Kotov~\cite{Kotov}.
\begin{theorem}[Taimanov--Kotov]
A countable semigroup $X$ is $\mathsf{T_{\!2}S}$-nontopologizable if and only if $X$ is $\mathsf{T_{\!z}S}$-nontopologizable if and only if the Zariski topology $\Zeta_X$ is discrete.
\end{theorem}
Now we are able to formulate two more principal results of the paper, establishing a connection between nontopologizability and closedness properties of countable semigroups.
\begin{theorem}\label{main2} For a countable semigroup $X$ with finite-to-one shifts the following conditions are equivalent:
\begin{enumerate}
\item $X$ is injectively $\mathsf{T_{\!1}S}$-closed;
\item $X$ is $\mathsf{T_{\!1}S}$-nontopologizable;
\item $X$ is discrete in the Zariski $T_1$ topology $\mathfrak Z_X'$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{tag}
For a countable cancellative semigroup $X$ the following conditions are equiva\-lent:
\begin{enumerate}
\item $X$ is absolutely $\mathsf{T_{\!1}S}$-closed;
\item $X$ is projectively $\mathsf{T_{\!1}S}$-nontopologizable.
\end{enumerate}
\end{theorem}
For commutative groups the characterizations given in Theorems~\ref{main2} and \ref{tag} can be unified in a single characterization, which follows from the main result of \cite{Ban2} (see also \cite[Theorem 3]{Ban}).
In this characterization $\mathsf{TG}$ is the class of Hausdorff topological groups.
\begin{theorem}\label{t:aC-cgrp} Let $\C$ be a class of topological semigroups such that $\mathsf{T_{\!z}S}\cap\mathsf{TG}\subseteq\C\subseteq\mathsf{T_{\!1}S}$. For a commutative group $X$ the following conditions are equivalent:
\begin{enumerate}
\item $X$ is injectively $\C$-closed;
\item $X$ is absolutely $\C$-closed;
\item $X$ is finite.
\end{enumerate}
\end{theorem}
\begin{example} By \cite{KOO} there exists a finitely generated simple infinite group $X$ such that every subgroup of $X$ is projectively $\mathsf{T_{\!1}S}$-nontopologizable. By Theorem~\ref{tag}, every subgroup of $X$ is absolutely $\mathsf{T_{\!1}S}$-closed. This example shows that Theorem~\ref{t:aC-cgrp} does not generalize to non-commutative groups.
\end{example}
\section{Auxiliary results}
The following technical lemma yields a sufficient condition for non-zero-closedness and will be used in the proof of Theorem~\ref{t:semibound}.
\begin{lemma}\label{l0} Let $X$ be a semigroup and $\mathcal K$ be a countable family of infinite subsets of $X$ such that
\begin{enumerate}
\item for any $K,L\in\mathcal K$ there exists $M\in\mathcal K$ such that $KL\subseteq M$;
\item for any $a,b\in X^1$ and $K\in\mathcal K$ there exists $L\in\mathcal K$ such that $aKb\subseteq L$;
\item for any $K\in\mathcal K$ and $a,b,c\in X^1$ the set $\{x\in K:axb=c\}$ is finite;
\item for any $K,L\in\mathcal K$ and $c\in X$ the set $\{(x,y)\in K\times L:xy=c\}$ is finite.
\end{enumerate}
Then the topology $$\tau^0=\{V\subseteq X^0:0\in V\Ra \forall K\in\mathcal K\;(|K\setminus V|<\w)\}$$turns $X^0$ into a Hausdorff topological semigroup with a unique non-isolated point $0$.
\end{lemma}
\begin{proof}
Let $\{K_n\}_{n\in\w}$ be an enumeration of the countable family $\K$. For every $n\in\w$ let $U_n=\bigcup_{i<n}K_i$. Note that $U_0=\emptyset$ and $(U_n)_{n\in\IN}$ is an increasing sequence of infinite subsets of $X$. The conditions (1)--(4) imply the conditions:
\begin{itemize}
\item[$(1')$] for any $n\in\w$ there exists $m\in\w$ such that $U_nU_n\subseteq U_m$;
\item[$(2')$] for any $a,b\in X^1$ and $n\in\w$ there exists $m\in\w$ such that $aU_nb\subseteq U_m$;
\item[$(3')$] for any $n\in\w$ and $a,b,c\in X^1$ the set $\{x\in U_n:axb=c\}$ is finite;
\item[$(4')$] for any $n\in\w$ and $c\in X$ the set $\{(x,y)\in U_n\times U_n:xy=c\}$ is finite.
\end{itemize}
Observe that the topology $\tau^0$ coincides with the Hausdorff topology
$$\{V\subseteq X^0:0\in V\;\Ra\;\forall n\in\w\;(|U_n\setminus V|<\w)\}.$$
The definition of the topology $\tau^0$ guarantees that $0$ is the unique non-isolated point of the topological space $(X^0,\tau^0)$. So, it remains to prove that $(X^0,\tau^0)$ is a topological semigroup.
First we show that for every $a,b\in X^1$ the shift $s_{a,b}:X^0\to X^0$, $x\mapsto axb$, is continuous. Since $0$ is a unique non-isolated point of $(X^0,\tau^0)$, it suffices to check the continuity of the shift $s_{a,b}$ at $0$. Given any neighborhood $V\in\tau^0$ of $0$, we need to show that the set $s_{a,b}^{-1}(V)=\{x\in X^0:axb\in V\}$ belongs to the topology $\tau^0$. This will follow as soon as we check that for every $n\in\w$ the set $U_n\setminus s_{a,b}^{-1}(V)$ is finite. Fix any $n\in\omega$. By condition $(2')$, there exists a number $m\in\w$ such that $aU_nb\subseteq U_m$. Then
$$U_n\setminus s_{a,b}^{-1}(V)=\{x\in U_n:axb\notin V\}=\{x\in U_n:axb\in U_m\setminus V\}.$$Since the set $V$ is open, the definition of the topology $\tau^0$ guarantees that the difference $U_m\setminus V$ is finite. Applying condition $(3')$, we conclude that the set $U_n\setminus s_{a,b}^{-1}(V)$ is finite. Therefore, the shift $s_{a,b}$ of the semigroup $X^0$ is continuous with respect to the topology $\tau^0$ and the semigroup operation is continuous on the subset $(X^0\times X)\cup (X\times X^0)$.
So, it remains to check the continuity of the semigroup operation at $(0,0)$. Fix any neighborhood $V\in\tau^0$ of $0=00$. By condition $(1')$, for every $n\in\w$ there exists $m_n\in\w$ such that $U_nU_n\subseteq U_{m_n}$. The definition of the topology $\tau^0$ ensures that for every $n\in\w$ the set $U_{m_n}\setminus V$ is finite. Applying condition $(4')$, we conclude that for every $n\in\w$ the set
$$\Pi_n=\{(x,y)\in U_n\times U_n:xy\notin V\}=\{(x,y)\in U_n\times U_n:xy\in U_{m_n}\setminus V\}$$is finite. Then we can find a finite set $P_n\subseteq U_n$ such that $\Pi_n\subseteq P_n\times P_n$.
Consider the set $$W=\{0\}\cup\bigcup_{n\in\IN}\big((U_n\setminus U_{n-1})\setminus P_n\big)=\{0\}\cup\bigcup_{n\in\IN}(K_{n-1}\setminus P_n)$$and observe that for every $n\in\IN$ the complement
$$U_n\setminus W\subseteq\bigcup_{1\leq k\le n}P_k$$is finite. Then $W\in\tau^0$ by the definition of the topology $\tau^0$.
We claim that $WW\subseteq V$. Assuming the opposite, find $x,y\in W$ such that $xy\notin V$. Let $n,k\in\IN$ be unique numbers such that $x\in (U_n\setminus U_{n-1})\setminus P_n$ and $y\in (U_k\setminus U_{k-1})\setminus P_k$. If $n\le k$, then $(x,y)\in\Pi_k$ and $x,y\in P_k$, which contradicts the choice of $y$. If $k\le n$, then $(x,y)\in \Pi_n$ and $x,y\in P_n$, which contradicts the choice of $x$. In both cases we obtain a contradiction, which completes the proof of the continuity of the semigroup operation on $(X^0,\tau^0)$.
\end{proof}
A nonempty family $\F$ of nonempty subsets of a set $X$ is called a {\em filter} on $X$ if $\F$ is closed under intersections and taking supersets in $X$. A subfamily $\mathcal B\subseteq\F$ is called a {\em base} of a filter $\F$ if each set $F\in\F$ contains some set $B\in\mathcal B$. In this case we say that the filter $\F$ is generated by the family $\mathcal B$.
A filter $\F$ on $X$ is
\begin{itemize}
\item {\em free} if $\bigcap\F=\emptyset$;
\item {\em principal} if $\{x\}\in\F$ for some $x\in X$;
\item an {\em ultrafilter} if for any $F\subseteq X$ either $F\in \F$ or $X\setminus F\in \F$.
\end{itemize}
The set $\Fil(X)$ of filters on a semigroup $X$ has a natural structure of a semigroup: for any filters $\mathcal E,\F$ on $X$ their product $\mathcal E\F$ is the filter generated by the base $\{EF:E\in\mathcal E,\;\;F\in\F\}$ where $EF=\{xy:x\in E,\;y\in F\}$. Identifying each element $x\in X$ with the principal filter $\{F\subseteq X:x\in F\}$, we identify the semigroup $X$ with a subsemigroup of the semigroup $\Fil(X)$.
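The identification of elements with principal filters can be checked mechanically on a finite semigroup. The following sketch (all helper names such as \verb|filter_product| are ours, not the paper's) represents filters on a finite ground set as upward-closed families of subsets and verifies that principal filters multiply exactly like the underlying elements.

```python
from itertools import product

# A minimal sketch: we take a *finite* ground semigroup so that filters
# can be listed explicitly as upward-closed families of subsets.

Z3 = {0, 1, 2}                      # the cyclic semigroup (Z/3, +)
op = lambda x, y: (x + y) % 3

def powerset(ground):
    items = sorted(ground)
    for mask in range(1 << len(items)):
        yield {items[i] for i in range(len(items)) if mask >> i & 1}

def upward_closure(base, ground):
    """The filter generated by a base: all supersets (inside ground) of base sets."""
    return {S for S in map(frozenset, powerset(ground))
            if any(B <= S for B in base)}

def principal(x, ground):
    return upward_closure([frozenset({x})], ground)

def set_product(E, F):
    return frozenset(op(x, y) for x in E for y in F)

def filter_product(E_filt, F_filt, ground):
    base = [set_product(E, F) for E in E_filt for F in F_filt]
    return upward_closure(base, ground)

# The principal filters form a copy of the semigroup inside Fil(X):
for x, y in product(Z3, repeat=2):
    assert filter_product(principal(x, Z3), principal(y, Z3), Z3) \
           == principal(op(x, y), Z3)
```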
\begin{proposition}\label{t:T1} A semigroup $X$ is $\mathsf{T_{\!1}S}$-closed if for any free ultrafilter $\F$ on $X$ there are elements $a_0,\ldots,a_n\in X^1$ such that the filter $a_0\F a_1\F\cdots \F a_n$ is neither free nor principal.
\end{proposition}
\begin{proof} To derive a contradiction, assume that a semigroup $X$ is not $\mathsf{T_{\!1}S}$-closed, but for any free ultrafilter $\F$ on $X$ there are elements $a_0,\dots,a_n\in X^1$ such that the filter $a_0\F a_1\F\cdots \F a_n$ is neither free nor principal. By our assumption, $X$ is a non-closed subsemigroup of some $T_1$ topological semigroup $Y$ whose topology is denoted by $\tau_Y$.
Take any element $y\in \overline{X}\setminus X$ and consider the filter $\mathcal H$ on $X$ generated by the family $\{X\cap U:y\in U\in\tau_Y\}$. Since the space $Y$ is $T_1$, the filter $\mathcal H$ is free. Let $\F$ be any ultrafilter on $X$ which contains $\mathcal H$. By our assumption, there exist elements $a_0,\dots,a_n\in X^1\subset Y^1$ such that the filter $a_0\F\cdots \F a_n$ is neither free nor principal.
Since this filter is not free, there exists an element $z\in \bigcap_{F\in \F}a_0F\cdots Fa_n$.
We claim that $a_0ya_1\cdots ya_n=z$. In the opposite case, we can find a neighborhood $U\subseteq Y$ of $y$ such that $a_0Ua_1\cdots Ua_n\subseteq Y\setminus\{z\}$. Then for the set $F=U\cap X\in\mathcal H\subseteq \mathcal{F}$ we obtain $z\notin a_0Fa_1\cdots Fa_n$, which contradicts the choice of $z$. Hence $a_0ya_1\cdots ya_n=z$. Since $X$ is a discrete subspace of $Y$, there exists an open set $V\subseteq Y$ such that $V\cap X=\{z\}$. By the continuity of the semigroup operation on $Y$, the point $y$ has a neighborhood $W\subseteq Y$ such that $a_0Wa_1\cdots Wa_n\subseteq V$. Then the set
$F=X\cap W\in\F$ has the property $a_0Fa_1\cdots Fa_n\subseteq X\cap V=\{z\}$, implying that the filter $a_0\F a_1\cdots \F a_n$ is principal. But this contradicts the choice of the points $a_0,\dots,a_n$.
\end{proof}
\begin{proposition}\label{t:iT1} A semigroup $X$ is injectively $\mathsf{T_{\!1}S}$-closed if for any free ultrafilter $\F$ on $X$ there are elements $a_0,\ldots,a_n\in X^1$ and distinct elements $u,v\in X$ such that $$\{u,v\}\subseteq \bigcap_{F\in\F}a_0F a_1F\cdots F a_n.$$
\end{proposition}
\begin{proof}
Assuming that $X$ is not injectively $\mathsf{T_{\!1}S}$-closed, we can find a continuous injective homomorphism $h:X\rightarrow Y$ into a topological semigroup $(Y,\tau_Y)\in\mathsf{T_{\!1}S}$ such that $h[X]$ is not closed in $Y$. Fix any element $y_0\in \overline{h[X]}\setminus h[X]$. Since the space $Y$ is $T_1$, the filter $\mathcal H$ on $X$ generated by the base $\{h^{-1}[U]:y_0\in U\in\tau_Y\}$ is free. Let $\F$ be any ultrafilter on $X$ which contains $\mathcal H$. By the assumption, there exist elements $a_0,\ldots,a_n\in X^1$ and distinct elements $u,v\in X$ such that $\{u,v\}\subseteq a_0F a_1\cdots Fa_n$ for any $F\in \F$.
Since $(Y,\tau_Y)$ is a $T_1$-space, the point $h(a_0)y_0h(a_1)\cdots y_0h(a_n)\in Y$ has a neighborhood $W\in\tau_Y$ such that $|W\cap\{h(u),h(v)\}|\le 1$ and hence $|\{u,v\}\cap h^{-1}[W]|\le 1$. By the continuity of semigroup operation on $Y$, there exists an open neighborhood $U$ of $y_0$ such that $h(a_0)Uh(a_1)\cdots Uh(a_n)\subseteq W$. Note that $h^{-1}[U]\in\mathcal H\subseteq\F$ and hence $$\{u,v\}\subseteq a_0h^{-1}[U]a_1\cdots h^{-1}[U]a_n=h^{-1}[h(a_0)Uh(a_1)\cdots Uh(a_n)]\subseteq h^{-1}[W],$$ which contradicts the choice of $W$.
\end{proof}
The following example constructed by Taimanov~\cite{Taimanov73} shows that there exists an injectively $\mathsf{T_{\!1}S}$-closed semigroup which is not ideally $\mathsf{T_{\!z}S}$-closed.
\begin{example} Given an infinite cardinal $\kappa$, consider the semigroup $X=(\kappa,*)$ endowed with the binary operation
$$x*y=\begin{cases}1,&\mbox{if $x=y>1$};\\
0,&\mbox{otherwise.}
\end{cases}
$$
Observe that for any free filter $\mathcal F$ on $X$ we have $\{0,1\}\subseteq\bigcap_{F\in\F}FF$. By Proposition~\ref{t:iT1}, the Taimanov semigroup is injectively $\mathsf{T_{\!1}S}$-closed. We claim that for the ideal $J=\{0,1\}\subset X$, the quotient semigroup $X/J$ is not $\mathsf{T_{\!z}S}$-closed. Observe that $X/J$ is a semigroup with trivial multiplication, i.e., $ab=J\in X/J$ for any $a,b\in X/J$. Take any Hausdorff zero-dimensional space $Y$ containing $X/J$ as a non-closed discrete subspace. Endow $Y$ with the continuous semigroup operation defined by $xy=J\in X/J\subset Y$ for all $x,y\in Y$. Since $X/J$ is a non-closed discrete subsemigroup of the zero-dimensional topological semigroup $Y$, the semigroup $X$ is not ideally $\mathsf{T_{\!z}S}$-closed.
\end{example}
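Since the Taimanov operation is given by an explicit formula, its basic properties can be sampled computationally. A minimal sketch, truncating the semigroup to $\{0,\dots,9\}$ (all helper names are ours):

```python
from itertools import product

# Finite sanity check of the Taimanov operation on a sample of the semigroup.

def star(x, y):
    return 1 if x == y > 1 else 0

sample = range(10)

# associativity: both (x*y)*z and x*(y*z) are always 0, since x*y <= 1
for x, y, z in product(sample, repeat=3):
    assert star(star(x, y), z) == star(x, star(y, z)) == 0

# every member of a free filter is infinite, hence contains two elements > 1,
# so {0,1} is contained in F*F -- the key point behind applying Prop. t:iT1
F = {3, 7, 8}
FF = {star(x, y) for x in F for y in F}
assert {0, 1} <= FF
```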
The following trivial characterization of $\mathsf{T_{\!0}S}$-closed semigroups explains why we restrict our attention to topological semigroups satisfying the separation axioms $T_i$ for $i\ge 1$.
\begin{proposition} For a topological semigroup $X\in\mathsf{T_{\!0}S}$ the following conditions are equivalent:
\begin{enumerate}
\item $X$ is $\mathsf{T_{\!0}S}$-closed;
\item $X$ is absolutely $\mathsf{T_{\!0}S}$-closed;
\item $X=\emptyset$.
\end{enumerate}
\end{proposition}
\begin{proof} The implications $(3)\Ra(2)\Ra(1)$ are trivial. To see that $(1)\Ra(3)$, assume that the semigroup $X$ is not empty. On the $0$-extension $X^0$ of $X$ consider the topology $\tau^0=\{X^0\}\cup\tau$ where $\tau$ is the topology of $X$. It is easy to see that $(X^0,\tau^0)$ is a $T_0$ topological semigroup containing $X$ as a non-closed subsemigroup. Consequently, $X$ is not $\mathsf{T_{\!0}S}$-closed.
\end{proof}
\section{$\C$-closedness and $\C$-nontopologizability}
In this section we outline a connection between $\C$-nontopologizable and (injectively) $\C$-closed semigroups.
\begin{proposition}\label{discr}
Every $\mathsf{T_{\!1}S}$-closed topological semigroup $X\in \mathsf{T_{\!1}S}$ is discrete.
\end{proposition}
\begin{proof}
To derive a contradiction, assume that the topological semigroup $X$ contains a non-isolated point $a$. Take any point $b\notin X$ and consider the set $Y=X\cup\{b\}$ endowed with the semigroup operation defined as follows:
\begin{itemize}
\item $X$ is a subsemigroup of $Y$;
\item $bb=aa$;
\item for every $x\in X$, $bx=ax$ and $xb=xa$.
\end{itemize}
By $\tau_X$ we denote the topology of $X$. Let $\tau_Y$ be the topology on $Y$ generated by the base $\tau_X\cup\{(U\setminus\{a\})\cup\{b\}:a\in U\in\tau_X\}$. It is straightforward to check that $(Y,\tau_Y)$ is a $T_1$ topological semigroup containing $X$ as a non-closed subsemigroup, which contradicts the $\mathsf{T_{\!1}S}$-closedness of $X$.
\end{proof}
\begin{proposition}\label{p:iT1}
For a $T_1$ topological semigroup $X$ the following conditions are equivalent:
\begin{enumerate}
\item $X$ is injectively $\mathsf{T_{\!1}S}$-closed;
\item $X$ is $\mathsf{T_{\!1}S}$-closed and $\mathsf{T_{\!1}S}$-nontopologizable.
\end{enumerate}
\end{proposition}
\begin{proof}
Assume that a $T_1$ topological semigroup $X$ is injectively $\mathsf{T_{\!1}S}$-closed. Then $X$ is $\mathsf{T_{\!1}S}$-closed and, by Proposition~\ref{discr}, the topology of $X$ is discrete. To see that $X$ is $\mathsf{T_{\!1}S}$-nontopologizable, consider any injective homomorphism $i:X\to Y$ to a $T_1$ topological semigroup $Y$. Since $X$ is discrete, the homomorphism $i$ is continuous. Then the injective $\mathsf{T_{\!1}S}$-closedness of $X$ implies the (injective) $\mathsf{T_{\!1}S}$-closedness of the topological semigroup $i[X]$. By Proposition~\ref{discr}, the topology of $i[X]$ is discrete, which means that the semigroup $X$ is $\mathsf{T_{\!1}S}$-nontopologizable.
\smallskip
Assume that a $T_1$ topological semigroup $X$ is $\mathsf{T_{\!1}S}$-closed and $\mathsf{T_{\!1}S}$-nontopolo\-gizable. Consider any continuous injective homomorphism $i:X\to Y$ into a $T_1$ topological semigroup $Y$. Since $X$ is $\mathsf{T_{\!1}S}$-nontopologizable, the topological spaces $X$ and $i[X]$ are discrete and the map $i:X\to Y$ is a topological embedding. Since $X$ is $\mathsf{T_{\!1}S}$-closed, the image $i[X]$ is closed in $Y$.
\end{proof}
\begin{proposition}\label{p:aT1S} For a $T_1$ topological semigroup $X$ the following conditions are equivalent:
\begin{enumerate}
\item $X$ is absolutely $\mathsf{T_{\!1}S}$-closed;
\item $X$ is projectively $\mathsf{T_{\!1}S}$-closed and projectively $\mathsf{T_{\!1}S}$-nontopologizable.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $X$ be a $T_1$ absolutely $\mathsf{T_{\!1}S}$-closed topological semigroup. Proposition~\ref{discr} implies that $X$ is discrete.
Then $X$ is projectively $\mathsf{T_{\!1}S}$-closed and every homomorphic image of $X$ is injectively $\mathsf{T_{\!1}S}$-closed. Proposition~\ref{p:iT1} implies that $X$ is projectively $\mathsf{T_{\!1}S}$-nontopologizable.
\smallskip
Assume that a $T_1$ topological semigroup $X$ is projectively $\mathsf{T_{\!1}S}$-closed and projectively $\mathsf{T_{\!1}S}$-nontopologizable.
Fix any $T_1$ topological semigroup $Y$ and homomorphism $f:X\rightarrow Y$. Since $X$ is projectively $\mathsf{T_{\!1}S}$-nontopologizable, the spaces $X$ and $f[X]\subseteq Y$ are discrete. Since $X$ is projectively $\mathsf{T_{\!1}S}$-closed, the image $f[X]$ is closed in $Y$.
\end{proof}
The following example shows that there is a projectively $\mathsf{T_{\!1}S}$-closed and absolutely $\mathsf{T_{\!2}S}$-closed semilattice which is not injectively $\mathsf{T_{\!1}S}$-closed. We recall that a {\em semilattice} is a commutative semigroup of idempotents.
\begin{example} Given an infinite cardinal $\kappa$, consider the semilattice $X_\kappa=(\kappa,*)$ endowed with the binary operation
$$x*y=
\begin{cases}
x,&\mbox{if $x= y$};\\
0,&\mbox{otherwise}.
\end{cases}
$$Observe that for every free filter $\F$ on $X_\kappa$ the filter $\F\F$ is neither principal nor free. Applying Proposition~\ref{t:T1}, we conclude that the semilattice $X_\kappa$ is $\mathsf{T_{\!1}S}$-closed. Since any homomorphic image of $X_\kappa$ is isomorphic to the semilattice $X_\lambda$ for some nonzero cardinal $\lambda\le\kappa$, the semilattice $X_\kappa$ is projectively $\mathsf{T_{\!1}S}$-closed. By \cite{BBm}, the semilattice $X_\kappa$ is absolutely $\mathsf{T_{\!2}S}$-closed. Since the semilattice $X_\kappa$ admits a non-discrete $T_1$ semigroup topology $\tau=\{U\subseteq \kappa:0\in U\Ra ( |\kappa\setminus U|<\w)\}$, the semigroup $X_\kappa$ is $\mathsf{T_{\!1}S}$-topologizable. Then Proposition~\ref{p:iT1} implies that $X_\kappa$ is not injectively $\mathsf{T_{\!1}S}$-closed.
\end{example}
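The semilattice axioms for the operation $x*y$ above, together with the equality $FF=F\cup\{0\}$ behind the application of Proposition~\ref{t:T1}, admit a quick finite check (an illustrative sketch on a sample of $X_\kappa$; helper names are ours):

```python
from itertools import product

# Finite check of the semilattice operation, sampled on {0,...,9}.

def meet(x, y):
    return x if x == y else 0

sample = range(10)
for x, y, z in product(sample, repeat=3):
    assert meet(x, y) == meet(y, x)                     # commutativity
    assert meet(meet(x, y), z) == meet(x, meet(y, z))   # associativity
    assert meet(x, x) == x                              # idempotency

# for any F with at least two elements, F*F = F | {0}
F = {2, 5, 9}
assert {meet(x, y) for x in F for y in F} == F | {0}
```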
\section{$\C$-closedness and polyboundedness}
In this section we study a relationship between polybounded and $\C$-closed semigroups.
\begin{lemma}\label{t:cancel1} Let $X$ be a semigroup with finite-to-one shifts and $A$ be a polybounded subset in $X$. Then $A$ is closed in any $T_1$ topological semigroup $Y$ that contains $X$ as a discrete subsemigroup.
\end{lemma}
\begin{proof}
Being polybounded in $X$, the set $A$ is contained in the union $\bigcup_{i=1}^m f_i^{-1}(b_i)$ for some semigroup polynomials $f_1,\dots,f_m$ on $X$ and some elements $b_1,\dots,b_m\in X$. Each semigroup polynomial $f_i$ is of the form $f_i(x)=a_{i,0}xa_{i,1}\cdots xa_{i,n_i}$ for some $n_i\in\IN$ and $a_{i,0},\dots, a_{i,n_i}\in X^1$. Let $\bar f_i:Y\to Y$ be the semigroup polynomial on $Y$ defined by $\bar f_i(y)=a_{i,0}ya_{i,1}\cdots ya_{i,n_i}$ for $y\in Y$. Since the space $Y$ is $T_1$ and the polynomials $\bar f_i$, $i\leq m$, are continuous, the set $\bigcup_{i=1}^m\bar f_i^{-1}(b_i)$ is closed and hence contains the closure of $A$ in $Y$.
Assuming that the set $A$ is not closed in $Y$,
take any element $y\in\overline{A}\setminus A$ and find $i\in\{1,\dots,m\}$ such that $\bar f_i(y)=b_i$. Since the subspace $X$ of $Y$ is discrete, we conclude that $y\in Y\setminus X$. By the discreteness of $X$, there exists an open set $V\subseteq Y$ such that $V\cap X=\{b_i\}$. By the continuity of the semigroup operation on $Y$, there exists an open neighborhood $U\subseteq Y$ of $y$ such that $a_{i,0}Ua_{i,1}\cdots Ua_{i,n_i}\subseteq V$. Since $y\in \overline{X}\setminus X$, the set $U\cap X$ is infinite. Fix any element $u\in U\cap X$ and observe that the set
$$\{x\in X:a_{i,0}x(a_{i,1}u\cdots ua_{i,n_i})=b_i\}\supseteq U\cap X$$is infinite. But this is impossible, because $X$ has finite-to-one shifts.
\end{proof}
Lemma~\ref{t:cancel1} implies the following:
\begin{corollary}\label{t:cancel}
Each polybounded semigroup with finite-to-one shifts is $\mathsf{T_{\!1}S}$-closed.
\end{corollary}
\begin{lemma}\label{new}
Let $X$ be a semigroup such that the space $(X,\Zeta'_X)$ contains an isolated point. Then $X$ is polybounded.
\end{lemma}
\begin{proof}
Let $a$ be an isolated point in $(X,\Zeta'_X)$.
By the definition of the topology $\Zeta'_X$, there exist semigroup polynomials $f_1,\ldots, f_n$ on $X$ and elements $b_1,\ldots, b_n\in X$ such that $$\{a\}=X\setminus \bigcup_{i=1}^n\{x\in X: f_i(x)=b_i\}.$$Consider the semigroup polynomial $f_0:X\to X$, $f_0:x\mapsto x$, and let $b_0=a$. Then $X=\bigcup_{i=0}^n\{x\in X: f_i(x)=b_i\}$, witnessing that $X$ is polybounded.
\end{proof}
The following example provides a simple method for constructing polybounded groups.
\begin{example}\label{ex} For any Abelian group $X$ with neutral element $0$, consider the group $X\rtimes \{-1,1\}$ endowed with the group operation $$\langle x,i\rangle*\langle y,j\rangle=\langle xy^i,ij\rangle.$$ The group $X\rtimes\{-1,1\}$ is polybounded since
$$X\rtimes \{-1,1\}=\{x\in X\rtimes \{-1,1\}:ax^2ax^2=e\},$$
where $e=\langle 0,1\rangle$ and $a=\langle 0,-1\rangle$.
\end{example}
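The identity $ax^2ax^2=e$ can be verified computationally for $X=\mathbb Z$. The sketch below writes $\mathbb Z$ additively, so the group operation $\langle x,i\rangle*\langle y,j\rangle=\langle xy^i,ij\rangle$ becomes $\langle x+iy,ij\rangle$ (all helper names are ours):

```python
from itertools import product

# Checking a*g^2*a*g^2 = e in Z x {-1,1} with the semidirect-product operation.

def mul(g, h):
    (x, i), (y, j) = g, h
    return (x + i * y, i * j)       # <x,i>*<y,j> = <x + i*y, i*j>

e = (0, 1)
a = (0, -1)

for x, i in product(range(-20, 21), (-1, 1)):
    g = (x, i)
    g2 = mul(g, g)
    assert mul(mul(a, g2), mul(a, g2)) == e
```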
\begin{remark} For every semigroup $S$ its $0$-extension $S^0$ is polybounded since $S^0=\{x\in S^0:0x=0\}$. This trivial example shows that the finite-to-one shift property is essential in Lemma~\ref{t:cancel1} and Corollary~\ref{t:cancel}.
It is straightforward to check that the group $G=\mathbb Z\rtimes \{-1,1\}$ (see Example~\ref{ex}) is polybounded, but the space $(G,\Zeta'_G)$ is homeomorphic to the topological sum of two countable spaces endowed with the cofinite topology, implying that the space $(G,\Zeta'_G)$ has no isolated points. Hence the implication of Lemma~\ref{new} cannot be reversed even for countable groups.
\end{remark}
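The witnessing equality $S^0=\{x\in S^0:0x=0\}$ from the remark above is easy to sample. A minimal check with $S=(\mathbb N,+)$ truncated to $\{1,\dots,9\}$ (names are ours):

```python
# The 0-extension S^0 is polybounded via the single polynomial f(x) = 0*x.

ZERO = "0"                          # the adjoined absorbing element

def ext_mul(x, y):
    if x == ZERO or y == ZERO:
        return ZERO
    return x + y                    # the original operation of S = (N, +)

sample = [ZERO] + list(range(1, 10))
assert all(ext_mul(ZERO, x) == ZERO for x in sample)    # S^0 = {x : 0x = 0}
```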
\begin{theorem}\label{t:semibound}
Every zero-closed countable semigroup is polybounded.
\end{theorem}
\begin{proof}
Assume that a countable semigroup $X$ is not polybounded. Let $(b_n)_{n\in\IN}$ be an enumeration of elements of $X$.
Next we inductively construct a sequence $A=\{x_n\}_{n\in\omega}\subseteq X$ such that for each $n\in\omega$ the point $x_n$ satisfies the following condition:
\begin{itemize}
\item[$(*_n)$] for every $k\le n$ and elements $a_0,\dots,a_{k},a_{k+1}\in(\{1\}\cup\{b_i\}_{1\le i\le n}\cup\{x_i\}_{i<n})^{n}\subseteq X^1$ we have $f(x_n)\ne a_{k+1}$ where $f(x)=a_0xa_1\cdots xa_k$.
\end{itemize}
To start the inductive construction, choose any point $x_0\in X$ and observe that the condition $(*_0)$ is satisfied, as $1\notin X$.
Assume that for some $n\in\w$ we have already chosen pairwise distinct elements $x_0,\dots,x_n$ such that for each $i\leq n$ the condition $(*_i)$ is satisfied. The finiteness of the set $F_{n+1}=\big(\{1\}\cup\{b_k:1\le k\le n+1\}\cup\{x_k:k\leq n\}\big)^{n+1}$ implies that there exist only finitely many semigroup polynomials of the form $f(x)=a_0xa_1\dots xa_k$, where $k\leq n+1$ and $\{a_0,\ldots, a_{k}\}\subseteq F_{n+1}$.
Since the semigroup $X$ is not polybounded, there
exists an element $x_{n+1}\in X\setminus\{x_i:i\leq n\}$ which satisfies condition $(*_{n+1})$. After completing the inductive construction we obtain
the desired set $A=\{x_n\}_{n\in\omega}$.
Consider the countable family
$$\mathcal K=\bigcup_{n=1}^\infty\{a_0Aa_1\cdots Aa_n:a_0,\dots,a_n\in X^1\}$$of subsets of $X$. Since
$$(a_0Aa_1\cdots Aa_n)\cdot(b_0Ab_1\cdots Ab_m)=a_0Aa_1\cdots A(a_nb_0)Ab_1\cdots Ab_m$$ and $$b(a_0Aa_1\cdots Aa_n)c=(ba_0)Aa_1\cdots A(a_nc),$$the family $\mathcal K$ satisfies conditions (1) and (2) of Lemma~\ref{l0}.
\smallskip
To show that $\mathcal K$ satisfies condition (3) of Lemma~\ref{l0}, fix any set $K\in\mathcal K$ and elements $a,b,c\in X^1$. Find $n\in\IN$ and elements $a_0,\dots,a_n\in X^1$ such that $K=a_0Aa_1\cdots Aa_n$. Next, find $m\ge 2n$ such that $$\{aa_0,a_nb,c\}\cup\{a_0,a_1,\dots,a_n\}\subseteq\{b_0,\dots,b_m\}.$$ We claim that $\{z\in K:azb=c\}\subseteq\{a_0x_{i_1}a_1\cdots x_{i_n}a_n:i_1,\dots,i_n<m\}$. In the opposite case there exists an element $z\in K$ such that $azb=c$ and $z=a_0x_{j_1}a_1\cdots x_{j_n}a_n$ for some numbers $j_1,\dots,j_n\in\w$ with $j:=\max\{j_1,\dots,j_n\}\ge m$. Let $P=\{p\leq n: j_p=j\}$ and write $P$ as $P=\{p(1),\dots,p(t)\}$ for some numbers $p(1)<\ldots<p(t)$. Let
$$a_0'=aa_0x_{j_1}a_1\cdots x_{j_{p(1)-1}}a_{p(1)-1},\qquad a_t'=a_{p(t)}x_{j_{p(t)+1}}a_{p(t)+1}\cdots x_{j_n}a_{n}b,$$and for every $0<i<t$ put
$$a_i'=a_{p(i)}x_{j_{p(i)+1}}a_{p(i)+1}\cdots x_{j_{p(i+1)-1}}a_{p(i+1)-1}.$$
It follows that $$azb=aa_0x_{j_1}a_1\cdots x_{j_n}a_nb=f(x_j)$$for the semigroup polynomial $f(x)=a'_0xa_1'\cdots xa_t'$. Observe that
$$\{c,a_0',\dots,a'_t\}\subset (\{1\}\cup\{aa_0,a_nb,c\}\cup\{a_i\}_{i\le n}\cup\{x_i\}_{i<j})^{2n-1}\subseteq(\{1\}\cup\{b_i\}_{1\leq i\le j}\cup\{x_i\}_{i<j})^j.$$
Then $c=azb=f(x_j)$ which is not possible, since $x_j$ satisfies condition $(*_j)$. This contradiction shows that the set $\{z\in K:azb=c\}\subseteq\{a_0x_{i_1}a_1\cdots x_{i_n}a_n:i_1,\dots,i_n< m\}$ is finite and hence the family $\mathcal K$ satisfies condition (3) of Lemma~\ref{l0}.
\smallskip
To show that $\mathcal K$ satisfies the condition (4) of Lemma~\ref{l0}, fix any $c\in X$ and sets $K,L\in\mathcal K$. Find elements $a'_0,\dots,a'_k,a''_0,\dots,a''_l\in X^1$ such that $K=a'_0Aa_1'\cdots Aa_k'$ and $L=a''_0Aa_1''\cdots Aa''_l$. Find a number $m\ge 2(k+l)$ such that $$\{a'_ka''_0,c\}\cup\{a'_0,\dots,a'_k\}\cup\{a''_0,\dots,a''_l\}\subseteq\{b_0,\dots,b_m\}.$$ Repeating the argument of the proof of condition (3), we can show that the set $$\{(x,y)\in K\times L:xy=c\}\subseteq\{(a'_0x_{i_1}\cdots x_{i_k}a'_k,a''_0x_{j_1}\cdots x_{j_l}a''_l):\max\{i_1,\dots,i_k,j_1,\dots,j_l\}<m\}$$ is finite.
Applying Lemma~\ref{l0}, we conclude that the semigroup $X$ is not zero-closed.
\end{proof}
A semigroup polynomial $f:X\to X$, $f:x\mapsto a_0xa_1\cdots xa_n$, on a semigroup $X$ is said to be {\em pruned} if $a_0=1=a_n$. Obviously, for every semigroup polynomial $f$ on $X$ there exist a pruned semigroup polynomial $g$ on $X$ and elements $a,b\in X^1$ such that $f(x)=ag(x)b$ for every $x\in X$. The following two lemmas will be useful in the proof of Proposition~\ref{p:group}.
\begin{lemma}\label{l:pruned} For every polybounded semigroup $X$ with finite-to-one shifts there exist a finite set $B\subseteq X$ and a finite set $F$ of pruned semigroup polynomials on $X$ such that $X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$.
\end{lemma}
\begin{proof} By the polyboundedness of $X$, there exist a finite set $A\subseteq X$ and a finite set $P$ of semigroup polynomials on $X$ such that $X=\bigcup_{p\in P}\bigcup_{a\in A}p^{-1}(a)$. For every semigroup polynomial $p\in P$, find a pruned semigroup polynomial $f_p:X\to X$ and elements $c_p,d_p\in X^1$ such that $p(x)=c_pf_p(x)d_p$ for all $x\in X$. Since the semigroup $X$ has finite-to-one shifts, the set $B=\bigcup_{p\in P}\bigcup_{a\in A}\{x\in X:c_pxd_p=a\}$ is finite. Let $F=\{f_p:p\in P\}$. It is easy to see that $X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$.
\end{proof}
An element $x$ of a semigroup $X$ is called {\em regular} if $x=xx^{-1}x$ for some element $x^{-1}\in X$. A semigroup $X$ is {\em regular} if every element of $X$ is regular.
\begin{lemma}\label{l:regular} For every polybounded semigroup $X$ with finite-to-one shifts there exist a finite set $F$ of pruned semigroup polynomials on $X$ and a finite set $B$ of regular elements of $X$ such that $X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$.
\end{lemma}
\begin{proof} By Lemma~\ref{l:pruned}, $X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$ for some finite set $B\subseteq X$ and some finite set $F$ of pruned semigroup polynomials on $X$. We can assume that the cardinality of the set $B$ is the smallest possible.
Consider the semigroup polynomial $s:X\to X$, $s:x\mapsto xx$.
Since $BB\subseteq X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$, for every $b\in B$ there exists $\varphi_b\in F$ such that $\varphi_b(b^2)\in B$. We claim that $\varphi_b(b^2)=b$ for every $b\in B$. Assuming that $\varphi_b(b^2)\ne b$ for some $b\in B$, we can consider the set $$F'=F\cup\{\varphi_b\circ s\circ \varphi_b\}$$of pruned semigroup polynomials on $X$. We claim that $X=\bigcup_{f\in F'}\bigcup_{c\in B\setminus\{b\}}f^{-1}(c)$. Take any element $x\in X$. If $f(x)=c$ for some $f\in F$ and $c\in B\setminus\{b\}$, then $x\in \bigcup_{f\in F'}\bigcup_{c\in B\setminus\{b\}}f^{-1}(c)$. Otherwise, $f(x)=b$ for each $f\in F$. In particular, $\varphi_b(x)=b$ and, consequently, $\varphi_b\circ s\circ \varphi_b(x)=\varphi_b(b^2)\in B\setminus\{b\}$. It follows that $X=\bigcup_{f\in F'}\bigcup_{c\in B\setminus\{b\}}f^{-1}(c)$ which contradicts the minimality of $B$. This contradiction shows that $\varphi_b(b^2)=b$ for all $b\in B$. Since the polynomial $\varphi_b$ is pruned, there exist elements $a_1,\ldots, a_{n-1}\in X$ such that
$$b=\varphi_b(b^2)=b^2a_1b^2\cdots a_{n-1}b^2=bb^{-1}b$$for the element $b^{-1}=ba_1b^2\cdots a_{n-1}b\in X$, which means that the elements of the set $B$ are regular.
\end{proof}
Lemma~\ref{l:regular} has the following corollary.
\begin{corollary}
Every polybounded semigroup with finite-to-one shifts contains an idempotent.
\end{corollary}
\begin{proposition}\label{p:group}
Every polybounded cancellative semigroup $X$ is a group.
\end{proposition}
\begin{proof}
By Lemma~\ref{l:regular}, $X=\bigcup_{f\in F}\bigcup_{b\in B}f^{-1}(b)$ for some finite set $F$ of pruned semigroup polynomials on $X$ and some finite set $B$ of regular elements of $X$. Then for every $b\in B$ we can find an element $b^{-1}\in X$ such that $bb^{-1}b=b$. It follows that the elements $bb^{-1}$ and $b^{-1}b$ are idempotents.
By the cancellativity, $X$ contains a unique idempotent. Indeed, for any idempotents $e,f$ in $X$, the equalities $e(ef)=(ee)f=ef=e(ff)=(ef)f$ imply $f=ef=e$. Now we see that $X$ contains a unique idempotent $e$ and every element $b\in B$ is invertible in the sense that $bb^{-1}=e=b^{-1}b$ for some element $b^{-1}\in X$. By the cancellativity, for every $x\in X$ the equalities $xe=x(ee)=(xe)e$ and $ex=(ee)x=e(ex)$ imply $xe=x=ex$, which means that $e$ is the unit of the semigroup $X$.
For every $x\in X$ we can find $f\in F$ and $b\in B$ such that $f(x)=b$. Since the semigroup polynomial $f$ is pruned, $f(x)=xy$ for some $y\in X$. Then $$x(yb^{-1})x=(xy)b^{-1}x=f(x)b^{-1}x=bb^{-1}x=ex=x,$$ which means that the semigroup $X$ is regular. By \cite[Ex.11]{Howie}, the regular cancellative semigroup $X$ is a group.
\end{proof}
\section{Proofs of main results}
{\em Proof of Theorem~\ref{char}:} Let $X$ be a countable semigroup with finite-to-one shifts. The implications $(1)\Ra (2)\Ra (3)$ are trivial. The implications $(3)\Ra (4)$ and $(4)\Ra (1)$ follow from Theorem~\ref{t:semibound} and Corollary~\ref{t:cancel}, respectively. If $X$ is a cancellative semigroup, then by Proposition~\ref{p:group}, the equivalent conditions (1)--(4) imply that $X$ is a group.
\smallskip
{\em Proof of Theorem~\ref{main2}:} The implication $(1)\Ra (2)$ follows from Proposition~\ref{p:iT1}. Assume that $X$ is a countable $\mathsf{T_{\!1}S}$-nontopologizable semigroup with finite-to-one shifts. By the Podewski--Taimanov Theorem~\ref{t:Podewski}, the space $(X,\Zeta'_X)$ is discrete. Then Lemma~\ref{new} implies that the semigroup $X$ is polybounded, Corollary~\ref{t:cancel} implies that $X$ is $\mathsf{T_{\!1}S}$-closed, and Proposition~\ref{p:iT1} implies that $X$ is injectively $\mathsf{T_{\!1}S}$-closed, which proves the implication $(2)\Ra (1)$.
The equivalence of $(2)$ and $(3)$ is provided by the Podewski--Taimanov Theorem~\ref{t:Podewski}.
\smallskip
{\em Proof of Theorem~\ref{tag}:} By Proposition~\ref{p:aT1S}, every absolutely $\mathsf{T_{\!1}S}$-closed semigroup is projectively $\mathsf{T_{\!1}S}$-nontopologizable. Assume that $X$ is a countable cancellative projectively $\mathsf{T_{\!1}S}$-nontopologizable semigroup. By the Podewski--Taimanov Theorem~\ref{t:Podewski}, the space $(X,\Zeta'_X)$ is discrete. By Lemma~\ref{new}, the semigroup $X$ is polybounded.
Proposition~\ref{p:group} implies that $X$ is a group. Consider any homomorphism $h:X\rightarrow Y$ into a $T_1$ topological semigroup $Y$. Then the subgroup $h[X]\subseteq Y$ is $\mathsf{T_{\!1}S}$-nontopologizable. It follows that $h[X]$ is a discrete subgroup of $Y$. The Podewski--Taimanov Theorem~\ref{t:Podewski} implies the discreteness of the space $(h[X],\Zeta'_{h[X]})$. By Lemma~\ref{new}, the group $h[X]$ is polybounded. Corollary~\ref{t:cancel} ensures that $h[X]$ is closed in $Y$.
\section{Polyboundedness and its applications}
In this section we establish some additional properties of polybounded semigroups and find applications of polyboundedness to paratopological groups.
A semigroup $S$ is defined to be {\em bounded} if there exists $n\in\IN$ such that for every $x\in S$ the power $x^n$ is an idempotent. Boundedness plays a crucial role in the characterization of $\mathcal C$-closed commutative semigroups. In particular, every subgroup of a $\mathsf{T_{\!z}S}$-closed commutative semigroup is bounded~\cite{BB}. By \cite{Ban}, a commutative group $X$ is $\mathsf{T_{\!z}S}$-closed if and only if $X$ is $\mathsf{T_{\!1}S}$-closed if and only if $X$ is bounded.
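For a concrete instance of boundedness: in the multiplicative semigroup $\mathbb Z/6$ every square is an idempotent, so the definition holds with $n=2$. A quick check (illustration only):

```python
# (Z/6, *) is bounded with n = 2: x^2 is an idempotent for every x.

idempotents = {x for x in range(6) if (x * x) % 6 == x}
assert idempotents == {0, 1, 3, 4}
assert all((x * x) % 6 in idempotents for x in range(6))
```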
The next proposition shows that polyboundedness is equivalent to boundedness for commutative semigroups with finite-to-one shifts.
For a semigroup $X$ let
$$Z(X)=\{z\in X:\forall x\in X\;(xz=zx)\}$$be the {\em center} of $X$.
\begin{proposition}\label{p:bound-semibound} Let $X$ be a semigroup.
If $X$ has finite-to-one shifts and $Z(X)$ is polybounded in $X$, then $Z(X)$ is bounded.
\end{proposition}
\begin{proof}
Assume that the semigroup $X$ has finite-to-one shifts and $Z(X)$ is polybounded in $X$. Then $Z(X)\subseteq \bigcup_{i=1}^nf_i^{-1}(b_i)$ for some semigroup polynomials $f_1,\dots,f_n$ on $X$ and some elements $b_1,\dots,b_n\in X$. For every $i\leq n$ we can find a number $p_i\in\IN$ and an element $a_i\in X^1$ such that $f_i(x)=a_ix^{p_i}$ for all $x\in Z(X)$. Let $p=\max_{i\le n}p_i$. Since $X$ has finite-to-one shifts, the set $F=\bigcup_{i\leq n}\{x\in Z(X):a_ix=b_i\}$ is finite.
Assuming that the semigroup $Z(X)$ is not bounded, we can find an element $z\in Z(X)$ such that for any distinct numbers $i,j\in\{1,\dots,p(1+n|F|)\}$ the powers $z^i,z^j$ are distinct. Since $$Z(X)=\bigcup_{i=1}^n\{x\in Z(X):f_i(x)=b_i\}=\bigcup_{i=1}^n\{x\in Z(X):a_ix^{p_i}=b_i\},$$ for every $j\in \IN$ we can find $i_j\in\{1,\dots,n\}$ such that $a_{i_j}(z^j)^{p_{i_j}}=b_{i_j}$.
By the Pigeonhole principle, for some $i\in\{1,\dots,n\}$ the set $J_i=\{j\in\{1,\dots,1+n\cdot|F|\}:i =i_j\}$ has cardinality $|J_i|>|F|$. Then for every $j\in J_i$ we have $a_iz^{jp_i}=b_i$ and hence $z^{jp_i}\in F$. Since $|J_i|>|F|$, there are two numbers $j<j'$ in $J_i$ such that $z^{jp_i}=z^{j'p_i}$. Since $\max\{jp_{i},j'p_i\}\leq (1+n|F|)p$, we obtain a contradiction with the choice of $z$.
\end{proof}
\begin{proposition}\label{qd}
Quotients and finite products of polybounded semigroups are polybounded.
\end{proposition}
\begin{proof} Assume that $X$ is a polybounded semigroup, that is $X=\bigcup_{i=1}^nf_i^{-1}(b_i)$ for semigroup polynomials $f_1,\dots,f_n$ on $X$ and elements $b_1,\dots,b_n\in X$. For every $i\le n$ find a number $n_i\in \IN$ and elements $a_{i,0},\dots,a_{i,n_i}\in X^1$ such that $f_i(x)=a_{i,0}xa_{i,1}x\cdots xa_{i,n_i}$ for all $x\in X$.
Let $Y=X/_\approx$ be a quotient semigroup and $q:X\to Y$ be the quotient homomorphism. For every $i\in\{1,\dots,n\}$ and $j\in\{0,\dots,n_i\}$, let $\tilde a_{i,j}=q(a_{i,j})$ and $\tilde b_i=q(b_i)$. Let $\tilde f_i$ be the semigroup polynomial on $Y$ defined by $\tilde f_i(y)=\tilde a_{i,0}y\tilde a_{i,1}y\cdots y\tilde a_{i,n_i}$. Since $q$ is a homomorphism, for every $i\in\{1,\dots,n\}$ and $x\in f_i^{-1}(b_i)$ we have $\tilde f_i(q(x))=q(f_i(x))=q(b_i)=\tilde b_i$. Then $X=\bigcup_{i=1}^nf_i^{-1}(b_i)$ implies $Y=\bigcup_{i=1}^n\tilde f_i^{-1}(\tilde b_i)$, which means that the quotient semigroup $Y=X/_\approx$ is polybounded.
To show that finite products of polybounded semigroups are polybounded, it suffices to prove that for any polybounded semigroups $X,Y$ their product $X\times Y$ is polybounded. Since $X$ is polybounded, there exist a finite set $F_X\subseteq X$ and a finite set $P_X$ of semigroup polynomials on $X$ such that $X=\bigcup_{f\in P_X}\bigcup_{b\in F_X}f^{-1}(b)$. By the polyboundedness of $Y$, there exist a finite set $F_Y\subseteq Y$ and a finite set $P_Y$ of semigroup polynomials on $Y$ such that $Y=\bigcup_{g\in P_Y}\bigcup_{b\in F_Y}g^{-1}(b)$.
For every polynomial $f\in P_X$, find $\deg(f)\in \IN$ and elements $a_{0,f},\dots,a_{\deg(f),f}\in X^1$ such that $f(x)=a_{0,f}x a_{1,f}\cdots xa_{\deg(f),f}$ for all $x\in X$. Also for every polynomial $g\in P_Y$ find $\deg(g)\in \IN$ and elements $a_{0,g},\dots,a_{\deg(g),g}\in Y^1$ such that $g(y)=a_{0,g}y a_{1,g}\cdots ya_{\deg(g),g}$ for all $y\in Y$.
For any semigroup polynomials $f\in P_X$ and $g\in P_Y$, consider the function $$p_{f,g}:X\times Y\to X\times Y,\quad p_{f,g}:(x,y)\mapsto (f(x)^{\deg(g)},g(y)^{\deg(f)}),$$ and observe that $p_{f,g}$ is a semigroup polynomial (of degree $\deg(f)\cdot \deg(g)$) on $X\times Y$.
Since $$X\times Y=\bigcup_{f\in P_X}\bigcup_{g\in P_Y}\bigcup_{b_X\in F_X}\bigcup_{b_Y\in F_Y}p_{f,g}^{-1}(b_X^{\deg(g)},b_Y^{\deg(f)}),$$
the semigroup $X\times Y$ is polybounded.
\end{proof}
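The product construction in the proof above can be verified mechanically on small examples. The following Python sketch (ours, for illustration; the Cayley-table encoding and helper names are assumptions) checks that $p_{f,g}(x,y)=(f(x)^{\deg(g)},g(y)^{\deg(f)})$ sends all of $X\times Y$ to the expected value when $X=Y=\mathbb{Z}_2$ is covered by the single degree-2 polynomial $f(x)=xx$.

```python
# Hypothetical sketch: finite semigroups as Cayley tables.  If
# X = f^{-1}(b_X) and Y = g^{-1}(b_Y), the proof covers X x Y by
# p_{f,g}(x, y) = (f(x)^deg(g), g(y)^deg(f)), target (b_X^deg(g), b_Y^deg(f)).

def s_pow(table, x, n):
    """Compute x^n (n >= 1) in the semigroup with operation table."""
    result = x
    for _ in range(n - 1):
        result = table[result][x]
    return result

def covers_product(table_x, table_y, f, deg_f, g, deg_g, b_x, b_y):
    """Check that p_{f,g} sends every pair of X x Y to the target value."""
    target = (s_pow(table_x, b_x, deg_g), s_pow(table_y, b_y, deg_f))
    return all(
        (s_pow(table_x, f(x), deg_g), s_pow(table_y, g(y), deg_f)) == target
        for x in range(len(table_x))
        for y in range(len(table_y))
    )

# Z_2 written additively: f(x) = x*x = x + x = 0, so Z_2 = f^{-1}(0).
z2 = [[(a + b) % 2 for b in range(2)] for a in range(2)]
square = lambda x: z2[x][x]  # the degree-2 polynomial f(x) = xx
ident = lambda x: x          # degree-1 polynomial; does NOT cover Z_2 at 0
```

Here a single polynomial/value pair suffices; in general the proof takes the union over all pairs $(f,g)\in P_X\times P_Y$ and all pairs of target values.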
\begin{remark}
By Proposition~\ref{p:bound-semibound}, the unbounded infinite product of finite cyclic groups $\prod_{n\in\mathbb N}\mathbb Z_n$ is not polybounded.
Also, Example~\ref{ex} shows that polyboundedness is not inherited by subgroups, whereas boundedness is hereditary.
\end{remark}
A {\em paratopological group} is a group endowed with a semigroup topology. A paratopological group $G$ is a {\em topological group} if the operation of taking the inverse $G\to G$, $x\mapsto x^{-1}$, is continuous. In this case the topology of $G$ is called a {\em group topology}.
The problem of automatic continuity of the inversion in paratopological groups was investigated by many authors (see surveys~\cite{BR} and~\cite{Tka} for more information). A typical result says that the continuity of the inversion in paratopological groups follows from a suitable property of compactness type. For example, a paratopological group $G$ is topological if $G$ possesses one of the following properties: locally compact, sequentially compact, totally countably compact, regular feebly compact or quasiregular 2-pseudocompact~\cite{Tka}.
The next proposition shows that each polybounded $T_1$ paratopological group is topological.
\begin{proposition}\label{p:topg} Let $G$ be a $T_1$ paratopological group and $H\subseteq G$ be a subgroup which is polybounded in $G$. Then $H$ is a topological group.
\end{proposition}
\begin{proof} Being polybounded, the subgroup $H$ is contained in the union $\bigcup_{i=1}^nf_i^{-1}(b_i)$ for some semigroup polynomials $f_1,\dots,f_n:G\to G$ and some elements $b_1,\dots,b_n\in G$.
Observe that if $f_i(x)=a_0xa_1\cdots xa_n$ for some $a_0,\dots,a_n\in G$, then $f_i^{-1}(b_i)=\tilde f_i^{-1}(e)$ for the semigroup polynomial $\tilde f_i(x)=xa_1\cdots xa_nb_i^{-1}a_0$. So, we can assume that for every $i\le n$ there exists a semigroup polynomial or a constant self-map $g_i$ of $G$ such that $f_i(x)=xg_i(x)$ for all $x\in G$, and $b_i=e$. It follows from $$H\subseteq \bigcup_{i=1}^n f_i^{-1}(e)=\bigcup_{i=1}^n\{x\in G:xg_i(x)=e\}$$ that
$x^{-1}\in\{g_i(x)\}_{i\le n}$ for every $x\in H$.
To prove that $H$ is a topological group we should prove that for any open neighborhood $U\subseteq G$ of $e$ there exists a neighborhood $V\subseteq G$ of $e$ such that $(H\cap V)^{-1}\subseteq U$. Fix any neighborhood $U$ of $e$ in $G$.
Since $G$ satisfies the separation axiom $T_1$, we can replace $U$ by a smaller neighborhood and assume that for every $i\leq n$, $g_i(e)^{-1}\notin U$, whenever $g_i(e)\ne e$. By the continuity of the semigroup operation in $G$, there exists a neighborhood $W\subseteq U$ of $e$ such that $WW\subseteq U$.
By the continuity of the functions $g_1,\dots, g_n$, there exists a neighborhood $V\subseteq W$ of $e$ such that for any $i\in\{1,\dots,n\}$ we have $g_i[V]\subseteq Wg_i(e)$. We claim that $(H\cap V)^{-1}\subseteq U$. Indeed, fix any $x\in H\cap V$. By the choice of the functions $g_1,\dots,g_n$, there exists $i\in\{1,\dots,n\}$ such that $xg_i(x)=e$. Then
$$e=xg_i(x)\in Vg_i[V]\subseteq VW g_i(e)\subseteq WW g_i(e)\subseteq Ug_i(e).$$
Recall that if $g_i(e)\neq e$, then, by the choice of $U$, $g_i(e)^{-1}\notin U$. Consequently, $e\notin Ug_i(e)$, which contradicts the above inclusion. Hence
$g_i(e)=e$. Finally, $$x^{-1}=g_i(x)\in g_i[V]\subseteq Wg_i(e)=We=W\subseteq U.$$
\end{proof}
\begin{proposition}\label{p:group}
If a Hausdorff topological semigroup $Y$ contains a dense polybounded subgroup $X$, then $Y$ is a topological group.
\end{proposition}
\begin{proof} Let $e$ be the identity of the group $X$. The density of $X$ in $Y$ and the Hausdorff property of $Y$ imply that the (closed) set $\{y\in Y:ye=y=ey\}$ coincides with $Y=\overline{X}$, which means that $e$ is the identity of the semigroup $Y$.
Since the group $X$ is polybounded, there are elements $b_1,\dots,b_n\in X$ and semigroup polynomials $f_1,\dots,f_n$ on $X$ such that $$X=\bigcup_{i=1}^n f_i^{-1}(b_i).$$ Using the same trick as in the proof of Proposition~\ref{p:topg}, we can assume that each polynomial $f_i(x)$ is of the form $xg_i(x)$ where $g_i$ is a semigroup polynomial or a constant self-map of $X$ and $b_i=e$ for every $i\leq n$. Then for every $x\in X$ we can find $i\in\{1,\dots,n\}$ such that $x g_i(x)=e$, which means that $g_i(x)=x^{-1}$ and $xg_i(x)=e=g_i(x)x$.
The function $g_i:X\to X$ is of the form $g_i(x)=a_{i,0}x\dots x a_{i,n_i}$ for some $n_i\in\w$ and $a_{i,0},\dots, a_{i,n_i}\in X$. Let $\bar g_i:Y\to Y$ be the continuous function on $Y$ defined by $\bar g_i(y)=a_{i,0}y\dots ya_{i,n_i}$ for $y\in Y$. It follows that the set $\bigcup_{i=1}^n\{y\in Y:y\bar g_i(y)=e=\bar g_i(y)y\}$ coincides with $Y$, because it is closed and contains the dense subset $X$ of $Y$. Consequently, the semigroup $Y$ is polybounded and for every $y\in Y$ there exists an element $z\in \{\bar g_i(y):i\leq n\}\subset Y$ such that $yz=e=zy$, which means that $Y$ is a group. By Proposition~\ref{p:topg}, $Y$ is a topological group.
\end{proof}
Finding conditions under which a cancellative topological semigroup is a topological group is another widely studied topic in Topological Algebra (see~\cite{BG, HH, RS}). The following corollary of Propositions~\ref{p:group} and \ref{p:topg} contributes to this field.
\begin{corollary}
Every polybounded cancellative $T_1$ topological semigroup is a topological group.
\end{corollary}
By Theorem~\ref{char}, a countable group is $\mathsf{T_{\!1}S}$-closed if and only if it is polybounded. We do not know whether this characterization remains true for uncountable groups.
\begin{problem} Is each $\mathsf{T_{\!1}S}$-closed group polybounded?
\end{problem}
\section*{Acknowledgements}
The authors express their sincere thanks to Olga Sipach\"eva for valuable discussions on the Podewski--Taimanov Theorem~\ref{t:Podewski} and for sharing a copy of Podewski's paper~\cite{Podewski}.
\titlespacing{\section}{0pt}{2pt plus 1pt minus 1pt}{2pt plus 1pt minus 1pt}
\usepackage{listings}
\lstset{escapeinside={(*@}{@*)}}
\newcommand{\noind}[0]{\par \noindent}
\newcommand{\noindpar}[1]{\noind {\bf #1}}
\usepackage[scale=.9,light]{sourcecodepro}
\usepackage[T1]{fontenc}
\newcommand{\code}[1]{{\ttfamily #1}}
\newcommand{\purevm}{PureVM\xspace}
\newcommand{\purelang}{PureLANG\xspace}
\newcommand{\rewindingvm}{RewindingVM\xspace}
\newcommand{\jitvm}{JustInTimeVM\xspace} %\texttt{JIT\_VM}
\newcommand{\testvm}{TestVM\xspace}
\newcommand{\computingloop}{computing loop\xspace}
\newcommand{\routine}{routine\xspace}
\newcommand{\Routine}{Routine\xspace}
\newcommand{\sinan}[1]{\textcolor{blue}{#1}}
\newcommand{\caglar}[1]{\textcolor{red}{[c:] #1}}
\newcommand{\geylani}[1]{\textcolor{orange}{g: #1}}
\usepackage{comment}
\usepackage{lipsum}
\usepackage{paralist}
\begin{document}
\title{Virtualizing Intermittent Computing}
\author{\c{C}a\u{g}lar~Durmaz,~Kas{\i}m~Sinan~Y{\i}ld{\i}r{\i}m,~\IEEEmembership{IEEE Member,}~and~Geylani~Kardas
\thanks{\c{C}a\u{g}lar Durmaz and Geylani Kardas are with the International Computer Institute, Ege University, Turkey. e-mail: \{caglar.durmaz, geylani.kardas\}@ege.edu.tr. Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m is with the Department of Information Engineering and Computer Science, University of Trento, Italy. e-mail: [email protected].}
}
\maketitle
\input{0-abstract.tex}
\IEEEpeerreviewmaketitle
\input{1-introduction.tex}
\input{2-related.tex}
\input{3-purelang.tex}
\input{4-purevm.tex}
\input{5-evaluation.tex}
\input{6-conclusion.tex}
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{I}{n} the past decade, the progress in energy harvesting circuits and the decrease in power requirements of processing, sensing, and communication hardware promised the potential of freeing the Internet of Things (IoT) devices from their batteries. Recent works demonstrated several microcontroller-based devices that can work without the need for batteries by harvesting energy from ambient sources, such as solar and radiofrequency~\cite{hester2017flicker,yildirim2018safe,nardello2019camaroptera}. Batteryless devices store the harvested ambient energy into a tiny capacitor that powers the microcontroller and peripherals. A batteryless device can compute, sense, and communicate when the energy stored in its capacitor is above an operating threshold. It turns off and loses its volatile state (e.g., the contents of the CPU, peripheral registers, the volatile memory) when the energy level drops below this threshold. The device can turn on only after charging its capacitor again. This phenomenon, i.e., the intermittent execution due to power failures, led to the emergence of a new computing paradigm, the so-called \emph{intermittent computing}~\cite{yildirim2018ink,kortbeek2020TICS}.
During intermittent execution, batteryless devices use the harvested energy to perform a short burst of computation. To recover their computation state and progress computation forward after a power failure, they need to save the computation state in non-volatile memory before a power failure. Recent studies proposed programming models for intermittent computing to support these state logging and recovery operations. The proposed programming models provide language constructs (i.e., either \emph{checkpoints}~\cite{kortbeek2020TICS} or \emph{tasks}~\cite{colin2016chain}) to (i) maintain the forward progress of computation and (ii) keep the memory (i.e., computation state) consistent. However, with these models, programmers need to deal with the low-level details of the intermittent execution~\cite{durmaz2019puremem}. In particular, existing models pose the following deficiencies.
\noindpar{Explicit Burst Management.} Programmers need to design their programs as a set of computation bursts, each of which must complete within the energy budget of the capacitor. Thus, they explicitly identify the boundaries of these bursts via checkpoint placement or task decomposition. This situation increases the programming effort considerably.
\noindpar{Hardware Dependency.} The active time of a batteryless device depends on its capacitor size and its power consumption, i.e., its hardware configuration~\cite{colin2018reconfigurable}. Programmers might need to identify different burst boundaries to execute their programs on a new device with a different hardware configuration. Therefore, existing intermittent programs are not portable, and in turn, not reusable.
\noindpar{Explicit I/O Management.} Power failures that occur during I/O or interrupt handling might leave the memory in an inconsistent state. With existing models, programmers need to manually ensure the atomic execution of interrupt handlers or I/O operations~\cite{ruppel2019Coati}. This situation increases the programming burden and makes programs error-prone.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{pics/purevm.pdf}
\caption{Using PureLANG\xspace, a continuation-passing-style programming language, and PureVM\xspace, whose specification enables the execution of programs written in PureLANG\xspace, programmers do not reason about intermittent execution and develop programs that are portable across hardware platforms.}
\label{fig:intro}
\end{figure}
In this paper, we aim at virtualizing intermittent computing to remedy the deficiencies mentioned above. We introduce PureVM\xspace virtual machine and its software interface PureLANG\xspace language that abstract away the complicated aspects of intermittent execution. Thanks to PureVM\xspace and PureLANG\xspace, programmers focus only on their application logic, forget about power failures as if they are programming continuously powered systems, and develop portable intermittent programs. In short, this paper introduces the following contributions:
\begin{compactenum}
\item \textbf{PureVM\xspace Virtual Machine.} We introduce PureVM\xspace, the first \emph{virtual machine} for intermittent systems, which abstracts a transiently powered computer. This abstraction hides the details of intermittent execution and enables platform-independent program development via its implementations targeting different hardware (see Figure~\ref{fig:intro}).
\item \textbf{PureLANG\xspace Language.} We introduce PureLANG\xspace, a \emph{continuation-passing-style} programming language, intended to prevent programmers from reasoning about intermittent execution.
PureLANG\xspace programs are translated into a set of re-composable and atomically-executed PureVM\xspace instructions.
\end{compactenum}
To the best of our knowledge, our work is the first \emph{virtualization} attempt for intermittent computing. PureVM\xspace and PureLANG\xspace provide the fundamental building blocks of intermittent computing, paving the way for portable transiently-powered applications.
\section{Background and Related Work}
\label{sec:background}
Frequent power failures lead to intermittent execution that poses several challenges to developing batteryless applications. We classify these challenges into two classes: computation and I/O related challenges (A1--A3) and programming challenges (B1--B2). We describe them as follows.
\noindpar{A1- Non-termination and Memory Consistency.} Power failures hinder the \emph{forward progress} of the computation, which leads to non-terminating programs~\cite{maeng2018Chinchilla}. Non-termination occurs since the device loses the intermediate results and restarts the computation from scratch upon each reboot. Power failures might also lead to \emph{memory inconsistencies}~\cite{ransford2014nonvolatile}. To give an example, assume that a program is modifying the persistent variable \code{var} (i.e., a variable kept in non-volatile memory) by executing the code block \code{\{vector[var++]=10;\}}. Since the variable \code{var} is updated after it is read, there is a \emph{Write-After-Read (W.A.R.)} dependency. If a power failure occurs at this point, the re-execution of this code will increment the variable \code{var} again. Therefore, another element of the \code{vector} will be set to 10 in this case. Due to the W.A.R. dependency, there is a violation of \emph{idempotency} since repeated computation produces different results. To prevent these issues, programmers should divide their programs into a set of idempotent code blocks (e.g., \emph{tasks}~\cite{colin2016chain}) that can execute \emph{atomically} in a power cycle and can be safely restarted despite W.A.R. dependencies. A runtime library (e.g., \cite{alpaca,yildirim2018ink}) is required to persist the program state in non-volatile memory, to manage power failures, and to re-execute the code blocks that could not complete in the previous power cycle.
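The W.A.R. hazard can be reproduced in a few lines. The sketch below is a simplified model (ours, not an actual intermittent runtime): a dictionary stands in for non-volatile memory, and re-running the interrupted block plays the role of the post-failure restart.

```python
# Simplified model of a W.A.R. idempotency violation: persistent state
# survives the power failure, so re-executing {vector[var++] = 10;}
# writes a second slot instead of repeating the first write.

nonvolatile = {"var": 0, "vector": [0, 0, 0, 0]}

def block():
    # read var, write vector[var], then update var -> W.A.R. on var
    nonvolatile["vector"][nonvolatile["var"]] = 10
    nonvolatile["var"] += 1

block()   # first attempt; power fails before completion is recorded
block()   # restart re-executes the block from scratch

# Both vector[0] and vector[1] are now 10, although the statement
# logically executed only once -- the computation is not idempotent.
```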
\noindpar{A2- Control-flow Inconsistencies.} If the control flow depends on external inputs such as sensor readings, power failures might lead to erratic program behavior~\cite{surbatovich2019dependent}. In particular, programmers need to pay special attention to implementing conditional statements that check persistent variables, whose values might be updated during I/O operations. For example, consider the case that a program reads a temperature value (\code{temp = read\_sensor();}) and sets the variable \code{alarm} based on the temperature reading (\code{if(temp > limit) then alarm = true; else tempOk = true;}). If the temperature is less than a pre-defined limit, the variable \code{tempOk} will be set to true. If there is a power failure right after this operation and the program re-executes, the program might read another temperature value higher than the limit. In this case, the program will set the variable \code{alarm} to true. At this point, both of the variables \code{alarm} and \code{tempOk} are true, which is logically incorrect~\cite{surbatovich2019dependent}.
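This control-flow hazard is also easy to simulate (hypothetical sketch; the sensor values are made up for illustration):

```python
# Simplified model: persistent flags survive the power failure, and the
# re-executed branch reads a fresh sensor sample.

persistent = {"alarm": False, "tempOk": False}
LIMIT = 30

def check_temperature(read_sensor):
    temp = read_sensor()            # re-executed after a power failure
    if temp > LIMIT:
        persistent["alarm"] = True
    else:
        persistent["tempOk"] = True

check_temperature(lambda: 25)       # sets tempOk; power fails right after
check_temperature(lambda: 35)       # restart reads a hotter sample: sets alarm

# Both flags are now True -- a state unreachable on continuous power.
```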
\noindpar{A3- Handling Interrupts.} Interrupts cause dynamic branches, which move the control from the main thread of execution to the interrupt service routine. The main program and interrupt service routines might share the persistent state. If an interrupt service routine leaves the shared persistent variables partially updated due to a power failure, this situation might lead to memory inconsistencies~\cite{ruppel2019Coati}.
\noindpar{B1- Platform Dependencies.} The execution time of an intermittent program depends on several factors, such as available harvestable energy in the environment, capacitor size (i.e., energy storage capacity), and the energy consumption profile of hardware and software. Intermittent programs need to be modified and restructured regarding these factors to eliminate non-termination and ensure computational progress~\cite{maeng2018Chinchilla}.
\noindpar{B2- Reuse and Maintaining Difficulties.} Platform and runtime dependencies make implementing reusable intermittent programs difficult~\cite{durmaz2019puremem}. For example, programmers using task-based models need to deal with task decomposition and task-based control flow~\cite{colin2016chain}. Handling these issues is complicated and leads to programs that are difficult to maintain.
\subsection{The State of the Art}
We classify the prior art based on how it addresses the aforementioned challenges.
\noindpar{Checkpoint-based Systems.}
Checkpointing runtime environments (e.g.,~\cite{ransford2012mementos,lucia2015DINO,jayakumar2014quickrecall,balsamo2016hibernus++}) persist the registers, global variables, and the call-stack of programs into non-volatile memory via programmer-inserted checkpoints to preserve the forward progress of computation (A1). Due to the dynamic nature of the call stack (it grows and shrinks during execution), programmers need to place checkpoints carefully to eliminate non-termination: the energy required to execute the instructions between two checkpoints should not exceed the maximum energy stored in the capacitor. Therefore, checkpoint placement is platform-dependent and checkpointed programs are not reusable. There are studies (e.g.,~\cite{woude2016ratchet,mottola2017harvos,kortbeek2020TICS,maeng2020CatNap,maeng2019Samoyed,maeng2018Chinchilla}) that provide compilers to translate C programs into intermittent programs without programmer intervention. However, the C language does not provide abstractions for interrupt handling (A3) and atomic I/O (A2) operations on intermittent systems. The absence of these abstractions might lead to memory inconsistencies and non-termination.
\noindpar{Task-based Systems.} Task-based models (e.g.,~\cite{colin2016chain,alpaca,yildirim2018ink,hester2017mayfly,majid2020Coala,ruppel2019Coati}) require programmers to structure their programs as a collection of idempotent and atomic tasks. They eliminate the need for the call-stack and checkpoints by employing GOTO-style transitions among tasks, i.e., task-based control-flow. However, this is an unstructured programming style that leads to programs that are not reusable and that are prone to bugs~\cite{dijkstra1968letters}. Task-based programming also leads to platform-dependent code since task sizes depend on the capacitor size of the platform. Recent work~\cite{durmaz2019puremem} provides tasks with parameters and continuation-passing~\cite{sussman1998scheme} via closures, which enables reusable code by delivering the control flow in a structured way, similar to function calls. However, it also leads to platform-dependent code because of static task sizes.
\input{table}
\subsection{Our Differences}
Table \ref{table:runtime_comparison} provides a comparison of our work with the state-of-the-art. We propose an intermittent computing solution composed of a virtual machine (PureVM\xspace), a programming language (PureLANG\xspace), and a compiler. We design PureVM\xspace to abstract the intermittent execution details and give the programmer a continuously powered view of the target device. This abstraction provides platform-independent code via its multiple compilers for multiple hardware. PureLANG\xspace is the software interface of PureVM\xspace. PureLANG\xspace programs are translated into a sequence of \emph{primitive functions}, which are the smallest computation blocks that PureVM\xspace executes atomically despite power failures. Thanks to this two-layered abstraction, our work overcomes all mentioned challenges of intermittent computing (i.e., A1--A3 and B1--B2).
\section{PureLANG\xspace Language}
\label{sec:language}
PureLANG\xspace is a statically-typed and event-driven programming language. Programmers develop PureLANG\xspace applications via objects and functions that operate on them, and do not reason about intermittent execution. PureLANG\xspace employs continuation-passing style~\cite{sussman1998scheme} where the program control flow is passed explicitly via \emph{continuations}. Each function (or expression) takes one \emph{flow-in object} in addition to its parameters and passes one \emph{flow-out object} to the next function (or expression) that handles the rest of the computation. Therefore, each continuation contains a reference to a flow-in object, a set of object references as parameters, and a function reference to be applied. Events are continuations that are created by interrupt handlers.
PureVM\xspace persists continuations in non-volatile memory to ensure forward progress. The overhead of persisting a continuation is static (since it contains only a function reference and a certain number of object references) compared to persisting the call-stack whose size is dynamic in procedural languages. During program execution, PureVM\xspace applies the function on the flow-in object by using the information stored in the current continuation. Since PureLANG\xspace functions always operate on objects (kept in non-volatile memory), PureVM\xspace can track the updates on these objects to preserve data consistency despite power failures.
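A minimal sketch of this execution model follows (ours, with hypothetical names; the paper excerpt does not show PureVM's actual instruction encoding). The runtime loop repeatedly loads the persisted continuation, applies its function to the flow-in object, and commits the returned continuation, so a power failure at any point merely re-runs the current primitive.

```python
# Hypothetical sketch of continuation-based execution.  A continuation is
# (function, flow_in_object, params); committing it to `nonvolatile`
# is the only state needed to resume after a power failure.

nonvolatile = {"continuation": None, "result": None}

def run():
    while nonvolatile["continuation"] is not None:
        func, flow_in, params = nonvolatile["continuation"]
        # the primitive runs atomically and returns the next continuation;
        # persisting it marks this computation burst as complete
        nonvolatile["continuation"] = func(flow_in, *params)

def add(flow_in, amount, next_func):
    # pass the flow-out object to the rest of the computation
    return (next_func, flow_in + amount, ())

def halt(flow_in):
    nonvolatile["result"] = flow_in
    return None

# seed the computation: add 2 to the flow-in object 40, then halt
nonvolatile["continuation"] = (add, 40, (2, halt))
run()
```

Because each continuation holds one function reference and a fixed number of object references, the recovery point has constant size, unlike a dynamically growing call stack.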
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{pics/basics.pdf}
\caption{PureLANG\xspace example primitive and control-flow functions. The control-flow only depends on two primitive functions \textit{apply} and \textit{select} in PureLANG\xspace.}
\label{fig:control_flow}
\end{figure}
\subsection{PureLANG\xspace Types and Primitive Functions}
\label{sec:io:primitive}
Primitive functions are high-level instructions that execute atomically and form the interface between PureLANG\xspace and PureVM\xspace. As an analogy with task-based systems~\cite{colin2016chain,yildirim2018ink,alpaca,ruppel2019Coati,hester2017mayfly,bakar2021rehash}, primitive functions are atomic tasks that serve as reusable building blocks. PureLANG\xspace programs are compositions of these atomic blocks.
\noindpar{Parametric and Arrow Types.} PureLANG\xspace has built-in object types \code{Int}, \code{Float}, \code{Bool} and \code{Void}, which are reference types that point to an address in memory. Moreover, PureLANG\xspace offers \emph{parametric types}, whose concrete type is specified later. As an example, the \code{select} primitive function (Fig.~\ref{fig:control_flow}, Lines 1--3) returns one of the two objects \code{t} or \code{f}, both of the same parametric type \code{\%a}, as the flow-out object. The objects referenced by \code{t} and \code{f} can be of any type, as long as their types match. The primitive function \code{select} makes the decision based on the value of the boolean flow-in object \code{b}. Functions with parametric types (also known as \emph{parametric polymorphism}) eliminate code duplication: without parametric polymorphism, \code{select} would need a separate implementation for each type. As another example, the primitive function \code{apply} (Fig.~\ref{fig:control_flow}, Line 5) applies the given function to the flow-in object of parametric type \code{\%a} and returns a flow-out object of parametric type \code{\%b}. It also takes a function reference \code{func} as a parameter, which takes an object of parametric type \code{\%a} and returns an object of parametric type \code{\%b}. This is indicated by the \emph{arrow type} declaration written as \code{\%a->\%b}. In the body of the primitive function (Fig.~\ref{fig:control_flow}, Line 6), \code{func} is called with the flow-in object \code{a}. Note that \code{func} returns an object of type \code{\%b}, which is compatible with the flow-out object type of \code{apply}.
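The behavior of the parametric \code{select} and \code{apply} primitives can be mimicked in C, with \code{void} pointers standing in for the parametric types \code{\%a} and \code{\%b}; this is only an illustrative analogue, not code generated by the PureLANG\xspace compiler.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative C analogue of the parametric select/apply primitives:
 * void* stands in for the parametric types %a and %b. */
typedef void *(*fn_a_to_b)(void *);

/* select: return t or f based on the boolean flow-in object b. */
void *select_prim(const bool *b, void *t, void *f) {
    return *b ? t : f;
}

/* apply: call func on the flow-in object a; its result is the
 * flow-out object. */
void *apply_prim(void *a, fn_a_to_b func) {
    return func(a);
}

/* Example %a -> %b function: Int -> Bool ("is the value positive?"). */
bool flag;
void *is_positive(void *a) {
    flag = *(int *)a > 0;
    return &flag;
}
```

The `void*` idiom loses the compile-time type safety that PureLANG\xspace's type checker provides, which is precisely the gap the arrow-type metadata closes.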
\noindpar{IO Primitives.} PureLANG\xspace introduces IO primitive functions to eliminate control-flow inconsistencies during I/O operations (A2). \code{IO} metadata (e.g., see the \code{getTemp} function in Fig.~\ref{fig:control_flow}, Line 9) helps the PureLANG\xspace compiler handle these operations differently. The compiler splits PureLANG\xspace code blocks containing IO primitives into three sections: the pre-IO section, the IO primitive itself, and the post-IO section. After each section executes, PureVM\xspace takes control and persists the computational state, which ensures the atomic execution of the IO primitive.
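This three-way split can be sketched as follows; \code{persist\_state} and \code{read\_sensor} are stand-ins, not real PureVM\xspace APIs. The compiler emits a persist point after the pre-IO section, after the IO primitive, and after the post-IO section, so a power failure can never leave the I/O operation half-observed.

```c
#include <assert.h>

/* Hypothetical sketch of a code block split around an IO primitive
 * into three atomically-persisted sections. */
int nv_checkpoints;     /* counts persisted sections (stand-in)   */
int sensor_value = 220; /* fake sensor register                   */

void persist_state(void) { nv_checkpoints++; }
int  read_sensor(void)   { return sensor_value; }

int sense_block(int x) {
    x = x + 1;             /* pre-IO section      */
    persist_state();
    int t = read_sensor(); /* IO primitive itself */
    persist_state();
    x = x + t;             /* post-IO section     */
    persist_state();
    return x;
}
```

After a power failure, execution resumes from the last persisted section, so the sensor is never read twice for a single logical sample.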
\noindpar{Type Checking.} Arrow and parametric type declarations help the PureLANG\xspace compiler perform type inference and type checking. While decomposing the program into its primitive functions, the PureLANG\xspace compiler performs type checking using the input and output type metadata to eliminate (B2)-type bugs. The compiler also infers variable types automatically when the programmer does not declare them explicitly.
\noindpar{Resolving W.A.R. Dependencies.} Primitive functions also specify metadata concerning write operations on objects. As an example, the \textit{write} metadata in the definition of the \code{getTemp} function tells the compiler that this function modifies the flow-in object \code{x}. While decomposing the program into its primitive functions, the compiler resolves W.A.R. dependencies using this metadata, which helps PureVM\xspace execute the intermittent program correctly by preserving memory consistency (to ensure A1). Depending on the target PureVM\xspace implementation, the PureLANG\xspace compiler instruments the function bodies by inserting the necessary undo-logging or redo-logging code explained in Section~\ref{sec:vm}.
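The effect of the \textit{write} metadata can be sketched as compiler-inserted undo logging. The sketch below simplifies the real VM's page granularity to a single variable, and all names are illustrative assumptions.

```c
#include <assert.h>

/* Sketch of compiler-inserted undo logging for a primitive marked
 * with `write` metadata: the original value is saved before the first
 * modification, so a power failure can roll the object back. */
float nv_temp;  /* object in "non-volatile" memory */
float undo_log; /* undo copy                       */
int   log_valid;

void log_before_write(void) { /* inserted by the compiler */
    if (!log_valid) { undo_log = nv_temp; log_valid = 1; }
}

void get_temp(float sensed) { /* body of a `write` primitive */
    log_before_write();
    nv_temp = sensed;         /* modifies the flow-in object */
}

void rollback(void) {         /* run on reboot after a failure */
    if (log_valid) { nv_temp = undo_log; log_valid = 0; }
}
```

If power fails mid-update, the rollback restores the pre-write value, breaking the W.A.R. hazard between the interrupted run and the re-execution.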
\subsection{PureLANG\xspace Statements and Control Flow}
Since PureLANG\xspace employs structured programming, complex expressions are formed by composing other expressions (and primitive functions). The dot operator (\code{.}) enables expression composition as long as the output type of a sub-expression is compatible with the input type of the following sub-expression. The last statement in a function body determines the output object, which must be compatible with the output type of the function. Thanks to the continuation-passing style of the language, all statements forming the complex behavior of a function execute in order. Therefore, PureVM\xspace does not need to check for branches or early exits.
\noindpar{Control Flow.} In PureLANG\xspace, every function related to control flow is a composition of the \code{select} and \code{apply} primitive functions. For example, the \code{ifElse} function (Fig.~\ref{fig:control_flow}, Lines 14--16) implements a conditional branch by invoking the \code{apply} and \code{select} primitive functions in order. The first parameter \code{s} is a function that takes an object of parametric type \code{\%a} and returns a boolean object. First, the function \code{s} is applied to the flow-in object \code{p}, which yields a boolean object (i.e., \code{p.apply(s)} in Line 11). Then, the returned object becomes the flow-in object for the \code{select} primitive, which returns either \code{t} or \code{f} based on the boolean flow-in object. The returned function object is assigned to the variable \code{func}. Finally, \code{func} is applied to the flow-in object \code{p}, and an object of type \code{\%b} is returned (Line 12).
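The same composition can be sketched in C: \code{select} picks one of the two branch function references and \code{apply} invokes the selected one. The names are stand-ins, not generated code.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative composition of ifElse from apply and select, mirroring
 * the PureLANG definition: apply the predicate s to the flow-in
 * object, select a branch function, then apply it. */
typedef void *(*branch_fn)(void *);

branch_fn select_fn(bool b, branch_fn t, branch_fn f) {
    return b ? t : f;
}

void *if_else(void *p, bool (*s)(void *), branch_fn t, branch_fn f) {
    bool cond = s(p);                       /* p.apply(s)    */
    branch_fn func = select_fn(cond, t, f); /* select(t, f)  */
    return func(p);                         /* p.apply(func) */
}

/* Example: a thermostat-style branch on a temperature object. */
bool too_cold(void *p) { return *(int *)p < 22; }
int heater;
void *turn_on(void *p)  { (void)p; heater = 1; return &heater; }
void *turn_off(void *p) { (void)p; heater = 0; return &heater; }
```

Because branching reduces to these two primitives, the VM only ever needs to persist and resume \code{select} and \code{apply} steps, never an arbitrary branch target.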
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{pics/application.pdf}
\caption{Sample monitoring application in PureLANG\xspace. The application code contains the semantics of the computation; the PureVM\xspace settings define the events of the program and platform-specific attributes; the interrupt code receives values from the environment (e.g., sensors).
}
\label{fig:event_handling_configuration}
\end{figure}
\subsection{Putting Things Together: A Sample Sensing Application}
Figure~\ref{fig:event_handling_configuration} presents an event-driven air monitoring application, which includes the application source code and a configuration file. The PureLANG\xspace compiler (implemented using Xtext~\cite{eysholdt2010xtext}) produces C code from the given PureLANG\xspace program. The generated code includes a single C source file and its header. The source file also contains the implementation of the target PureVM\xspace. The PureLANG\xspace compiler requires a configuration file, which mainly contains the list of event handlers, the name of the target hardware platform, and parameters specific to the selected PureVM\xspace implementation (such as the non-volatile memory size and the event queue size).
The application code contains the objects, methods, and interrupt handlers. The event handlers \code{boot}, \code{reboot}, and \code{sleep} (the last not shown in the figure) are mandatory. The \code{boot} event occurs the first time the computer boots after being deployed. The \code{reboot} event occurs after recovery from a power failure and triggers the reboot handler, which restores the state of the computation. PureVM\xspace triggers the \code{sleep} handler when there is no event to be processed, which puts the processor into a low-power mode to save energy. The timer interrupt handler (Lines 19--21) adds an event to the event queue of PureVM\xspace by calling the \code{addEventQ} method with the sensed temperature value (via \code{readTemp}) and the corresponding event handler (\code{control} in this case) as parameters. PureVM\xspace processes this event by calling the \code{control} event handler (Lines 9--11), which handles the events generated by the timer interrupt service routine. Inside this handler, the heater is turned on or off based on the received temperature value (Line 10).
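The interaction between the timer interrupt, the event queue, and the control handler can be sketched in C as follows. The queue layout and names such as \code{add\_event\_q} are assumptions modeled on the figure, not the actual generated code.

```c
#include <assert.h>

/* Minimal sketch of the event protocol: an interrupt handler enqueues
 * (handler, value) pairs and the VM drains them in order. */
#define QCAP 8

typedef void (*handler_fn)(float);

handler_fn q_fn[QCAP];
float      q_val[QCAP];
int q_head, q_tail;

void add_event_q(handler_fn h, float v) { /* called from the ISR */
    q_fn[q_tail] = h;
    q_val[q_tail] = v;
    q_tail = (q_tail + 1) % QCAP;
}

int heater_on;
void control(float temp) { heater_on = (temp < 22.0f); }

void timer_isr(float sensed) { add_event_q(control, sensed); }

/* VM main loop: consume events until the queue is empty. */
void vm_drain(void) {
    while (q_head != q_tail) {
        handler_fn h = q_fn[q_head];
        float v = q_val[q_head];
        q_head = (q_head + 1) % QCAP;
        h(v);
    }
}
```

In the real system the queue lives in non-volatile memory and draining is interleaved with persistence; this sketch only shows the producer/consumer shape.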
\section{PureVM\xspace Intermittent Virtual Machine}
\label{sec:vm}
PureVM\xspace is a single-input/single-output system that specifies a runtime environment for event-driven intermittent programming. PureLANG\xspace programs execute, without any modification, on different hardware and runtime environments conforming to the PureVM\xspace specification.
The PureVM\xspace specification comprises an event queue, a non-volatile object memory, and a continuation executed by its runtime engine. PureVM\xspace pushes the events generated by interrupt service routines to the event queue. It removes the event at the head of the event queue and creates a continuation in object memory from that event. As mentioned, a continuation represents the control state of the program and consists of a set of objects and the methods to be applied. Running a continuation may create and return another continuation. PureVM\xspace runs continuations until no further continuation is returned. When there is no event to consume in the queue, PureVM\xspace sleeps until an interrupt generates an event.
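The run loop described above is essentially a trampoline: each continuation may return the next one, and the engine loops until none is returned. A minimal C sketch, with illustrative structures, looks like this.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the PureVM run loop as a trampoline: running a
 * continuation may return another continuation; the VM loops
 * until none is returned. */
typedef struct cont cont_t;
typedef cont_t *(*step_fn)(cont_t *);

struct cont {
    step_fn fn;
    int *obj; /* flow object (illustrative) */
};

void vm_run(cont_t *c) {
    while (c != NULL)
        c = c->fn(c); /* may create/return the next continuation */
}

/* Example: two chained steps that each increment the flow object. */
cont_t second;
cont_t *step2(cont_t *c) { (*c->obj)++; return NULL; }
cont_t *step1(cont_t *c) {
    (*c->obj)++;
    second.fn = step2;
    second.obj = c->obj;
    return &second;
}
```

A power failure between iterations of this loop is harmless as long as the current continuation and the flow object are persisted, which is exactly what the PureVM\xspace state comprises.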
The PureVM\xspace state (which represents the computational state) is composed of the events in the queue, the continuation of the running event, and the global objects. The object memory region in non-volatile memory maintains the global objects and the running continuation. Before calling the next function (i.e., running the subsequent continuation), PureVM\xspace can decide to persist the state in object memory to preserve forward progress and avoid losing intermediate results in case of power failures.
PureVM\xspace specifies the artifacts (event buffer, object memory, runtime engine, and running program) and their relationships abstractly. For example, the event buffer can be implemented as a regular first-in-first-out (FIFO) queue or a priority queue, as long as the system's single-input behavior is not violated. Different design choices can lead to different PureVM\xspace implementations. In the following section, we describe RewindingVM\xspace, which is our main PureVM\xspace implementation.
\subsection{RewindingVM\xspace: An Undo-Logging PureVM\xspace}
\label{sec:vm1}
RewindingVM\xspace stores the execution context in the continuation stack and keeps the object memory consistent across power failures via the undo-logging mechanism. Events in the event queue, global objects, and continuation stack in the object memory represent the computational state. The metadata about the continuation stack is stored in the \emph{runtime data} region of the object memory.
\noindpar{Event Handling.} RewindingVM\xspace provides a queue for event buffering, which holds the event objects and the methods to be applied to these objects (\emph{Event Queue} in Fig.~\ref{fig:rewinding-in-action}). The RewindingVM\xspace runtime engine stores the continuations in a stack in object memory (\emph{Continuation Stack} in Fig.~\ref{fig:rewinding-in-action}). The runtime engine starts event execution by copying the event object (event data) and the event method (event handler) to the continuation stack. It is worth mentioning that RewindingVM\xspace reduces event handling to a single-producer-single-consumer problem, which eliminates race conditions on the event buffer. During execution, the runtime engine pops a method from the continuation stack and runs it on the flow object. Control passes to the method, which can modify the objects.
\noindpar{Undo-Log and Memory Consistency.} The undo-log mechanism is activated when modifying objects to preserve memory consistency. The modifications on the objects are done only by calling primitive functions. Every primitive function calls the runtime engine's log function before modifying an object. The object memory comprises blocks called pages. The programmer, for efficiency, may configure the page sizes of the object memory. The log function copies the original page to undo-log memory before any modification on that page. When RewindingVM\xspace reboots after a power failure, it copies all the logged pages (including the pages of runtime data) into their corresponding pages in the object memory and continues the program. This mechanism ensures the consistency of the object memory by eliminating partial modifications.
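A minimal sketch of this page-granular undo log follows, with illustrative sizes and a plain array standing in for non-volatile memory.

```c
#include <assert.h>
#include <string.h>

/* Sketch of a page-granular undo log: before the first modification
 * of a page, the original page is copied to log memory; on reboot,
 * logged pages are copied back. Sizes and layout are illustrative. */
#define PAGE_SIZE 32
#define NUM_PAGES 4

unsigned char object_mem[NUM_PAGES][PAGE_SIZE]; /* "non-volatile" */
unsigned char log_mem[NUM_PAGES][PAGE_SIZE];
int logged[NUM_PAGES];

void log_page(int p) { /* called before modifying page p */
    if (!logged[p]) {
        memcpy(log_mem[p], object_mem[p], PAGE_SIZE);
        logged[p] = 1;
    }
}

void commit(void) { /* atomically clear the log */
    memset(logged, 0, sizeof logged);
}

void recover(void) { /* on reboot after a power failure */
    for (int p = 0; p < NUM_PAGES; p++)
        if (logged[p])
            memcpy(object_mem[p], log_mem[p], PAGE_SIZE);
    commit();
}
```

A page logged once is not copied again until the next commit, so the per-page cost is paid at most once per burst of computation.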
\noindpar{Forward Progress and Execution Flow.} The method being executed can push other methods to the continuation stack. The method execution finally returns an object and gives control back to the runtime engine. The runtime engine saves the returned object as the flow object. As the last operation, the runtime atomically clears the undo-log memory. In this way, the runtime keeps memory consistent and guarantees the forward progress of the program.
\noindpar{I/O Handling.} The PureLANG\xspace compiler splits the program code blocks containing I/O primitive functions into three sections, as described in Section~\ref{sec:io:primitive}. This strategy already ensures the atomic execution of I/O operations; therefore, the RewindingVM\xspace runtime engine does not treat I/O operations in a special way.
\noindpar{RewindingVM\xspace Compiler Optimizations.} We implemented two compiler optimizations for RewindingVM\xspace: (i) executing several primitive functions as a block, and (ii) loop optimizations that reduce PureVM\xspace overheads such as repetitive undo-logging (i.e., page copy) operations. Programmers can enable these optimizations via the PureVM\xspace application configuration file.
\subsubsection{An Example RewindingVM\xspace Execution.}
Fig.~\ref{fig:rewinding-in-action} shows how RewindingVM\xspace handles the \code{control} event described in Fig.~\ref{fig:event_handling_configuration}. In the first step, the interrupt service routine adds the address of the \code{control} method to the event queue along with the sensed temperature value (a floating-point value of 22.0). In the second step, the VM copies the event from the event queue to the object memory (depicted as \code{control} and \code{22.0} in the continuation stack). The lower right corner of Fig.~\ref{fig:rewinding-in-action} presents a part of the compiler-generated code: the compiler splits the \code{control} method into primitive functions and then recomposes them into basic blocks. As can be seen from the figure, the recomposed version of the \code{ifElse} method (Line 10 in Fig.~\ref{fig:event_handling_configuration}) is fragmented into two continuations that represent the \code{apply} and \code{select} primitives. In the third step, the VM removes the first method from the continuation stack (the \code{control} function in the lower right corner of Fig.~\ref{fig:rewinding-in-action}) and runs it. Since this step modifies the runtime data and the stack in non-volatile memory, the VM copies the affected pages to undo memory before any update. The fourth step in the figure shows the computer restarting after a power failure. In the fifth step, the VM calls the undo function because it detects that the undo memory is not empty, restoring the non-volatile object memory to the last consistent state. Then, in step six, the method on the stack runs, as described in step three. In the seventh step, the undo memory is cleared by committing it. Forward progress of the computation is ensured by executing the operations \code{vm\_consume}, \code{vm\_commit} and \code{undo} atomically. In the next steps (not shown in the figure), the remaining methods in the continuation stack execute, passing the returned flow objects to the next methods until the continuation stack becomes empty.
\subsection{Other PureVM\xspace Implementations}
We also implemented two different PureVM\xspace versions, named JustInTimeVM\xspace and TestVM\xspace. JustInTimeVM\xspace does not contain an undo-log memory and requires hardware support to capture an interrupt when the voltage on the capacitor crosses the lower-threshold voltage. This interrupt persists the computational state in non-volatile memory and puts the runtime into sleep mode to stop computation. This strategy prevents memory inconsistencies without the need for an undo-log. JustInTimeVM\xspace's overhead is lower than RewindingVM\xspace's, since JustInTimeVM\xspace does not include page copy operations between log memory and object memory. We implemented TestVM\xspace to test any PureLANG\xspace program on a personal computer with continuous power. This implementation allows us to test the correctness of the program logic without loading the code on a microcontroller.
\section{Evaluation}
\label{sec:eval}
We evaluated our PureVM\xspace implementations on three compute-intensive applications: Cuckoo Filter (CF), Activity Recognition (AR), and Bit Count (BC). These applications are used as de facto benchmarks by most earlier studies on intermittently powered devices~\cite{kortbeek2020TICS,alpaca,colin2016chain,yildirim2018ink}. CF stores and reads an input data stream using a cuckoo filter with 128 entries and searches for the inserted values. AR classifies accelerometer data, computing the mean and standard deviation of a window of accelerometer readings to train a nearest-neighbor model that detects different movements. We used a window size of three and read 128 samples from each class (shaking or stationary) in the training phase. BC counts the number of 1s in a bitstream using seven different methods, each executed 100 times.
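For reference, two of the classic bit-counting methods such a benchmark compares are the naive shift-and-test loop and Kernighan's clear-lowest-set-bit method (which of the seven methods BC actually uses is not specified here, so these are representative sketches).

```c
#include <assert.h>
#include <stdint.h>

/* Two representative bit-counting methods. */

int popcount_loop(uint32_t v) { /* shift-and-test, 32 iterations */
    int c = 0;
    for (; v; v >>= 1)
        c += v & 1;
    return c;
}

int popcount_kernighan(uint32_t v) { /* one iteration per set bit */
    int c = 0;
    for (; v; c++)
        v &= v - 1; /* clears the lowest set bit */
    return c;
}
```

The two methods differ in iteration count and branching behavior, which is why benchmarks like BC stress the runtime's continuation-stack overhead differently per method.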
We compiled these applications using the RewindingVM\xspace and JustInTimeVM\xspace compilers and executed them on the MSP430FR5994 evaluation board~\cite{msp430fr5904}. To power the MSP430 evaluation boards intermittently, we used a Powercast TX91501-3W transmitter~\cite{powercast}, which emits radio frequency (RF) signals at a 915 MHz center frequency. A P2110-EVB receiver, connected to our MSP430FR5994 evaluation board, harvests energy from these RF signals. We placed the P2110-EVB receiver at a distance of 60 cm from the RF transmitter.
\subsection{Execution Time Overhead}
We evaluated the performance of the PureVM\xspace implementations of the benchmarks on harvested energy and on continuous power by measuring their execution times. For comparison, we used their InK~\cite{yildirim2018ink} implementations, since InK is one of the de facto task-based systems for intermittent computing. We used the InK-based implementations of the benchmarks (CF, AR, and BC) directly from the InK repository~\cite{tudssl_2019}. Since tasks have fixed sizes in InK, they may exceed the energy buffer of the device, or they may perform inefficiently due to frequent task transitions (causing redo logging of all shared variables to preserve memory consistency). To re-calibrate the size of the tasks, the programmer must recompose all tasks for the new device. This way of programming limits code portability.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_continuous.pdf}
\caption{Normalized execution times of AR, CF and BC benchmarks with InK, RewindingVM\xspace and JustInTimeVM\xspace under continuous power.}
\label{fig:execution_time_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_intermittent.pdf}
\caption{Normalized execution times of AR, CF and BC benchmarks with InK, RewindingVM\xspace and JustInTimeVM\xspace under the RF energy harvesting scenario.}
\label{fig:execution_time_2}
\end{figure}
Fig.~\ref{fig:execution_time_1} and Fig.~\ref{fig:execution_time_2} show the normalized execution times of the benchmarks with InK and PureVM\xspace (RewindingVM\xspace, RewindingVM\xspace optimized, JustInTimeVM\xspace, JustInTimeVM\xspace optimized) under continuous power and under intermittent RF power. The compiled code of RewindingVM\xspace and JustInTimeVM\xspace can run on devices with a minimal energy buffer because the compiler recomposes primitive functions into basic blocks. Recomposing the code into basic blocks leads to small continuations and consequently more (continuation) stack operations in RewindingVM\xspace and JustInTimeVM\xspace, and more undo-logging operations in RewindingVM\xspace.
The optimized versions of the applications are composed of continuations that contain more operations. Therefore, these applications run more efficiently than their InK counterparts. We applied the loop optimizations to all loops and the branch optimizations to some branch method invocations in these applications.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_useful_rew.pdf}
\caption{RewindingVM overhead, split per operation.}
\label{fig:rew_portions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_useful_jit.pdf}
\caption{JustInTimeVM overhead, split per operation.}
\label{fig:jit_portions}
\end{figure}
\subsection{PureVM\xspace Point-to-Point Overheads}
Fig.~\ref{fig:rew_portions} and Fig.~\ref{fig:jit_portions} show the useful work and the overheads (i.e., undo-logging and stack operations) of the corresponding runtimes under continuous power. RewindingVM\xspace and JustInTimeVM\xspace use a stack to run the continuations. When there is more branching in the program, continuation stack operations create more overhead, because RewindingVM\xspace and JustInTimeVM\xspace run the basic blocks in one cycle, which means every branch needs a continuation stack operation. In BC, the overhead of stack manipulation is higher due to the many branch operations, which also makes BC the worst-performing benchmark. On the other hand, the loop optimizations had the greatest impact on BC compared to the other benchmarks, improving its performance by a factor of 10.
The undo-logging mechanism is triggered before modifying a page that has not been logged before, which introduces page search and page copy overheads. Since the log memory is small for these benchmarks, we chose a sequential page search in the log memory. The page size naturally affects undo-logging performance. The virtual machine configuration files of RewindingVM\xspace and JustInTimeVM\xspace contain a page size setting; page sizes of 32 and 64 bytes gave the best performance in these applications.
\begin{table}
\centering
\caption{Memory consumption for three benchmark applications written in InK and PureVM\xspace.}
\label{table:memory_comparison}
\begin{tabular}{ccccccc}
\toprule
\textbf{App.}& \textbf{Memory(B)}& \textbf{InK} & \textbf{Rew.} & \textbf{Rew.Opt.} & \textbf{JIT} & \textbf{JIT.Opt.}\\
\toprule
\multirow{2}{1em}{AR} & .text & 3822 & 8624 & 6810 & 7950 & 6136\\
& .data & 4980 & 694 & 694 & 420 & 420\\
\multirow{2}{1em}{BC} & .text & 3518 & 11258 & 9928 & 10714 & 9378\\
& .data & 4980 & 882 & 882 & 676 & 548\\
\multirow{2}{1em}{CF} & .text & 2708 & 11980 & 10014 & 11302 & 9350\\
& .data & 5213 & 886 & 694 & 804 & 420\\
\bottomrule
\end{tabular}
\end{table}
\subsection{PureVM\xspace Memory Overheads}
Table~\ref{table:memory_comparison} shows the memory overheads of the InK and PureVM\xspace implementations. Since primitive functions are translated into C code, PureVM\xspace programs have larger code sizes than InK programs. InK uses global shared variables for communication among functions and hence has a larger data memory footprint than PureVM\xspace. The programmer configures the data memory of RewindingVM\xspace and JustInTimeVM\xspace via the configuration file. The stack size required by the virtual machines contributes to their data memory requirements. The increased code size is the cost that PureVM\xspace implementations pay for their performance and platform independence. Overall, the memory requirement of PureVM\xspace is comparable to that of InK.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{pics/eval_photo.pdf}
\caption{Our testbed deployment used to evaluate our HVAC application.}
\label{fig:AirConditioner}
\end{figure}
\subsection{Case Study: Heating, ventilation, and air conditioning (HVAC) controller}
As a case study to demonstrate the applicability of PureVM\xspace, we developed an air conditioning controller for home automation (Figure~\ref{fig:AirConditioner}). The goal is to sample the room temperature frequently enough and send messages to the home automation system to keep the room temperature within the ideal range. The application uses the PureVM\xspace event mechanism to reduce energy consumption as much as possible: after checking the room temperature, the application switches to sleep mode. A reboot from a power failure or a timer interrupt (with 30-second intervals) triggers the program, which measures the room temperature using an analog-to-digital converter configured for the internal temperature sensor of the microcontroller. If the room temperature is not ideal, the application starts asynchronous communication via a Nordic nRF52832~\cite{nRF52832} (which supports BLE) using interrupts over the serial peripheral interface. We ran this application for 3 hours. The temperature of the environment was measured 294 times, and 22 BLE advertisement messages were sent to the HVAC system. During the entire run, there were 18 power failures and recoveries.
\section{Conclusion and Future Work}
\label{sec:conc}
In this work, we introduced a new virtual machine (PureVM\xspace) that abstracts a transiently powered computer and a new continuation-passing-style programming language (PureLANG\xspace) for developing programs that run on PureVM\xspace. This two-layer structure provides a loosely coupled architecture that facilitates the development of platform-independent and reusable event-driven sensor applications. We believe this is a significant step toward virtualizing intermittent computing.
As follow-up work, we plan to add new language constructs to PureLANG\xspace to handle the expiration of sensor readings. Due to long charging times after power failures, sensed data might lose its validity; in this case, the sensor value becomes useless and can be discarded. While such a requirement is absent in continuous computing, it exists in intermittent computing~\cite{kortbeek2020TICS,hester2017mayfly}. We also plan to port PureVM\xspace to different ultra-low-power microcontrollers and introduce more sophisticated compiler optimizations. As of now, PureVM\xspace does not implement any task scheduling mechanism; we leave integrating scheduling mechanisms, e.g., real-time scheduling of tasks~\cite{karimi2021real,maeng2020CatNap}, into PureVM\xspace as future work.
\section{Introduction}
\IEEEPARstart{I}{n} the past decade, the progress in energy harvesting circuits and the decrease in the power requirements of processing, sensing, and communication hardware created the potential of freeing Internet of Things (IoT) devices from their batteries. Recent works demonstrated several microcontroller-based devices that can work without batteries by harvesting energy from ambient sources, such as solar and radio frequency~\cite{hester2017flicker,yildirim2018safe,nardello2019camaroptera}. Batteryless devices store the harvested ambient energy in a tiny capacitor that powers the microcontroller and peripherals. A batteryless device can compute, sense, and communicate when the energy stored in its capacitor is above an operating threshold. It turns off and loses its volatile state (e.g., the contents of the CPU, peripheral registers, and volatile memory) when the energy level drops below this threshold. The device can turn on only after charging its capacitor again. This phenomenon, i.e., intermittent execution due to power failures, led to the emergence of a new computing paradigm, the so-called \emph{intermittent computing}~\cite{yildirim2018ink,kortbeek2020TICS}.
During intermittent execution, batteryless devices use the harvested energy to perform a short burst of computation. To recover their state and progress computation forward after a power failure, they need to save the computation state in non-volatile memory beforehand. Recent studies proposed programming models for intermittent computing to support these state logging and recovery operations. The proposed models provide language constructs (i.e., either \emph{checkpoints}~\cite{kortbeek2020TICS} or \emph{tasks}~\cite{colin2016chain}) to (i) maintain the forward progress of computation and (ii) keep the memory (i.e., the computation state) consistent. However, with these models, programmers need to deal with the low-level details of intermittent execution~\cite{durmaz2019puremem}. In particular, existing models pose the following deficiencies.
\noindpar{Explicit Burst Management.} Programmers need to design their programs as a set of computation bursts, each of which should fit within the energy stored in the capacitor. Thus, they explicitly identify the boundaries of these bursts via checkpoint placement or task decomposition. This increases the programming effort considerably.
\noindpar{Hardware Dependency.} The active time of a batteryless device depends on its capacitor size and its power consumption, i.e., its hardware configuration~\cite{colin2018reconfigurable}. Programmers might need to identify different burst boundaries to execute their programs on a new device with a different hardware configuration. Therefore, existing intermittent programs are not portable, and in turn, not reusable.
\noindpar{Explicit I/O Management.} Power failures that occur during I/O or interrupt handling might leave the memory in an inconsistent state. With existing models, programmers need to manually ensure the atomic execution of interrupt handlers or I/O operations~\cite{ruppel2019Coati}. This situation increases the programming burden and makes programs error-prone.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{pics/purevm.pdf}
\caption{Using PureLANG\xspace, a continuation-passing-style programming language, and PureVM\xspace, whose specification enables the execution of programs written in PureLANG\xspace, programmers do not reason about intermittent execution and develop programs that are portable across several hardware platforms.}
\label{fig:intro}
\end{figure}
In this paper, we aim to virtualize intermittent computing to remedy the deficiencies mentioned above. We introduce the PureVM\xspace virtual machine and its software interface, the PureLANG\xspace language, which abstract away the complicated aspects of intermittent execution. Thanks to PureVM\xspace and PureLANG\xspace, programmers focus only on their application logic, forget about power failures as if they were programming continuously powered systems, and develop portable intermittent programs. In short, this paper makes the following contributions:
\begin{compactenum}
\item \textbf{PureVM\xspace Virtual Machine.} We introduce PureVM\xspace, the first \emph{virtual machine} for intermittent systems, which abstracts a transiently powered computer. This abstraction hides the details of intermittent execution and enables platform-independent program development via its implementations targeting different hardware (see Figure~\ref{fig:intro}).
\item \textbf{PureLANG\xspace Language.} We introduce PureLANG\xspace, a \emph{continuation-passing-style} programming language, intended to prevent programmers from reasoning about intermittent execution.
PureLANG\xspace programs are translated into a set of re-composable and atomically-executed PureVM\xspace instructions.
\end{compactenum}
To the best of our knowledge, our work is the first \emph{virtualization} attempt for intermittent computing. PureVM\xspace and PureLANG\xspace build the fundamental building blocks of intermittent computing from scratch, paving the way for portable transiently-powered applications.
\section{Conclusion and Future Work}
\label{sec:conc}
In this work, we introduced a new virtual machine (PureVM\xspace) that abstracts a transiently powered computer and a new continuation-passing-style programming language (PureLANG\xspace) used to develop programs that run on PureVM\xspace. This two-layer structure provides a loosely coupled architecture that facilitates the development of platform-independent and reusable event-driven sensor applications. We believe that this is a significant step toward virtualizing intermittent computing.
As follow-up work, we plan to add new language constructs to PureLANG\xspace to handle the expiration of sensor readings. Due to long charging times after power failures, sensed data might lose its validity. In this case, the sensor value becomes useless and can be discarded. While such a requirement is absent in continuous computing, it exists in intermittent computing~\cite{kortbeek2020TICS,hester2017mayfly}. We also plan to port PureVM\xspace to different ultra-low-power micro-controllers and introduce more sophisticated compiler optimizations. As of now, PureVM\xspace does not implement any task scheduling mechanism. We leave integrating scheduling mechanisms, e.g., real-time scheduling of tasks~\cite{karimi2021real,maeng2020CatNap}, to PureVM\xspace as future work.
\section{Evaluation}
\label{sec:eval}
We evaluated our PureVM\xspace implementations considering three compute-intensive applications: Cuckoo Filter (CF), Activity Recognition (AR), and Bit Count (BC). These applications are used as the de facto benchmarks by most of the earlier studies on intermittently powered devices (\cite{kortbeek2020TICS,alpaca,colin2016chain,yildirim2018ink}). CF stores and reads an input data stream using a cuckoo filter with 128 entries and searches the inserted values. AR classifies accelerometer data, computing the mean and standard deviation of a window of accelerometer readings to train a nearest-neighbor model that detects different movements. We used a window size of three and read 128 samples from each class (shaking or stationary) in the training phase. BC counts the number of 1s in a bitstream with seven different methods, each executed 100 times.
We compiled these applications using the RewindingVM\xspace and JustInTimeVM\xspace compilers and executed them on the MSP430FR5994 evaluation board \cite{msp430fr5904}. To power the MSP430 evaluation boards intermittently, we used a Powercast TX91501-3W transmitter \cite{powercast}, which emits radio frequency (RF) signals at a 915 MHz center frequency. A P2110-EVB receiver, connected to our MSP430FR5994 evaluation board, harvests energy from these RF signals. We placed the P2110-EVB receiver at a distance of 60 cm from the RF transmitter.
\subsection{Execution Time Overhead}
We evaluated the performance of the PureVM\xspace implementations of the benchmarks on harvested energy and on continuous power by considering their execution times. For comparison, we used their InK \cite{yildirim2018ink} implementations, since InK is one of the de facto task-based systems for intermittent computing. We directly used the InK-based implementations of the benchmarks (CF, AR, and BC) from the InK repository~\cite{tudssl_2019}. Since tasks have fixed sizes in InK, their energy cost may exceed the energy buffer of the device, or they may perform inefficiently due to frequent task transitions (causing redo logging of all shared variables to preserve memory consistency). To re-calibrate the size of the tasks, the programmer must recompose all tasks for the new device. This way of programming limits code portability.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_continuous.pdf}
\caption{Normalized execution times of AR, CF and BC benchmarks with InK, RewindingVM\xspace and JustInTimeVM\xspace under continuous power.}
\label{fig:execution_time_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_intermittent.pdf}
\caption{Normalized execution times of AR, CF and BC benchmarks with InK, RewindingVM\xspace and JustInTimeVM\xspace under the RF energy harvesting scenario.}
\label{fig:execution_time_2}
\end{figure}
Fig.~\ref{fig:execution_time_1} and Fig.~\ref{fig:execution_time_2} show the normalized execution times of the benchmarks with InK and PureVM\xspace (RewindingVM\xspace, RewindingVM\xspace optimized, JustInTimeVM\xspace, JustInTimeVM\xspace optimized) under continuous power and under intermittent execution on RF power. The compiled code of RewindingVM\xspace and JustInTimeVM\xspace can run on devices with a minimal energy buffer because the compiler recomposes primitive functions into basic blocks. Recomposing the code into its basic blocks leads to small continuations and, consequently, more (continuation) stack operations in RewindingVM\xspace and JustInTimeVM\xspace, and more undo-logging operations in RewindingVM\xspace.
The optimized versions of the applications are composed of continuations that contain more operations; therefore, they run more efficiently than their InK counterparts. We applied loop optimizations to all loops and branch optimizations to some branch method invocations in these applications.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_useful_rew.pdf}
\caption{RewindingVM overhead, split per operation.}
\label{fig:rew_portions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pics/eval_useful_jit.pdf}
\caption{JustInTimeVM overhead, split per operation.}
\label{fig:jit_portions}
\end{figure}
\subsection{PureVM\xspace Point-to-Point overheads}
Fig.~\ref{fig:rew_portions} and Fig.~\ref{fig:jit_portions} show the useful work and overheads (i.e., undo-logging and stack operations) of the corresponding runtimes under continuous power. RewindingVM\xspace and JustInTimeVM\xspace use a stack to run the continuations. When there is more branching in the program, continuation stack operations create more overhead because RewindingVM\xspace and JustInTimeVM\xspace run the basic blocks in one cycle, which means every branch requires a continuation stack operation. In BC, the overhead of stack manipulation is higher due to the many branch operations, which also causes the worst performance among the benchmarks. On the other hand, the loop optimizations had the most impact on BC, increasing its performance by a factor of 10.
Before a page that has not been logged before is modified, the undo-logging mechanism is triggered, which introduces page-search and page-copy overheads. Since the log memory is small for these benchmarks, we chose a sequential page search in the log memory. The page size affects the undo-logging performance: the virtual machine configuration file of RewindingVM\xspace and JustInTimeVM\xspace contains a page-size setting, and page sizes of 32 and 64 bytes gave the best performance in these applications.
\begin{table}
\centering
\caption{Memory consumption for three benchmark applications written in InK and PureVM\xspace.}
\label{table:memory_comparison}
\begin{tabular}{ccccccc}
\toprule
\textbf{App.}& \textbf{Memory(B)}& \textbf{InK} & \textbf{Rew.} & \textbf{Rew.Opt.} & \textbf{JIT} & \textbf{JIT.Opt.}\\
\toprule
\multirow{2}{1em}{AR} & .text & 3822 & 8624 & 6810 & 7950 & 6136\\
& .data & 4980 & 694 & 694 & 420 & 420\\
\multirow{2}{1em}{BC} & .text & 3518 & 11258 & 9928 & 10714 & 9378\\
& .data & 4980 & 882 & 882 & 676 & 548\\
\multirow{2}{1em}{CF} & .text & 2708 & 11980 & 10014 & 11302 & 9350\\
& .data & 5213 & 886 & 694 & 804 & 420\\
\bottomrule
\end{tabular}
\end{table}
\subsection{PureVM\xspace Memory overheads}
Table \ref{table:memory_comparison} shows the memory overheads of the InK and PureVM\xspace implementations. Since primitive functions are translated into C code, PureVM\xspace programs have larger code sizes than InK programs. InK uses global shared variables for communication among functions and hence has a larger data memory than PureVM\xspace. The programmer configures the data memory of RewindingVM\xspace and JustInTimeVM\xspace via the configuration file. The stack size required by the virtual machines contributes to their data memory requirements. The increase in code size is the cost that PureVM\xspace implementations pay for their performance and platform independence. However, the overall results show that the memory requirement of PureVM\xspace is comparable with InK's.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{pics/eval_photo.pdf}
\caption{Our testbed deployment used to evaluate our HVAC application.}
\label{fig:AirConditioner}
\end{figure}
\subsection{Case Study: Heating, ventilation, and air conditioning (HVAC) controller}
As a case study to demonstrate the applicability of PureVM\xspace, we developed an air-conditioning controller for home automation (Figure~\ref{fig:AirConditioner}). The goal is to sample the room temperature frequently enough and send a message to the home automation system to keep the room temperature in the ideal range. The application uses the PureVM\xspace event mechanism to reduce energy consumption as much as possible: after checking the room temperature, the application switches to sleep mode. A reboot after a power failure or a timer interrupt (with 30-second intervals) triggers the program, which measures the room temperature using an analog-to-digital converter configured for the internal temperature sensor of the microcontroller. If the room temperature is not ideal, the application starts asynchronous communication via a Nordic nRF52832 \cite{nRF52832} (which supports BLE) using interrupts over the serial peripheral interface. We ran this application for 3 hours, during which the environment temperature was measured 294 times and 22 BLE advertisement messages were sent to the HVAC system. During the entire run, there were 18 power failures and recoveries.
\section{Background and Related Work}
\label{sec:background}
Frequent power failures lead to intermittent execution, which poses several challenges to developing batteryless applications. We classify these challenges into two groups: computation- and I/O-related challenges (A1--A3) and programming challenges (B1--B2). We describe them as follows.
\noindpar{A1- Non-termination and Memory Consistency.} Power failures hinder the \emph{forward progress} of the computation, which leads to non-terminating programs~\cite{maeng2018Chinchilla}. Non-termination occurs because the device loses the intermediate results and restarts the computation from scratch upon each reboot. Power failures might also lead to \emph{memory inconsistencies}~\cite{ransford2014nonvolatile}. As an example, assume that a program modifies the persistent variable \code{var} (i.e., a variable kept in non-volatile memory) by executing the code block \code{\{vector[var++]=10;\}}. Since the variable \code{var} is updated after it is read, there is a \emph{Write-After-Read (W.A.R.)} dependency. If a power failure occurs at this point, the re-execution of this code will increment the variable \code{var} again, and another element of \code{vector} will be set to 10. Due to the W.A.R. dependency, \emph{idempotency} is violated, since repeated computation produces different results. To prevent these issues, programmers should divide their programs into a set of idempotent code blocks (e.g., \emph{tasks}~\cite{colin2016chain}) that execute \emph{atomically} in a power cycle and can be safely restarted despite W.A.R. dependencies. A runtime library (e.g., \cite{alpaca,yildirim2018ink}) is required to persist the program state in non-volatile memory, to manage power failures, and to re-execute the code blocks that could not complete in the previous power cycle.
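The W.A.R. hazard above can be simulated on a continuously powered machine. The following sketch (all names are ours, not PureVM's; the "power failure" is simulated by simply re-running the task) shows how re-executing \code{vector[var++]=10;} violates idempotency, and how undo-logging the persistent state restores it:

```c
#include <assert.h>

/* `nv_` variables stand for state kept in non-volatile memory,
 * so they survive the simulated power failure below.               */
static int nv_var = 0;          /* persistent index: W.A.R. hazard  */
static int nv_vector[4] = {0};  /* persistent array                 */

/* The code block from the text: vector[var++] = 10;                */
static void task_body(void) { nv_vector[nv_var++] = 10; }

/* Non-idempotent: a power failure right after the task completes
 * makes re-execution start from already-updated persistent state.  */
static void run_without_logging(void) {
    task_body();                /* attempt 1, then power failure    */
    task_body();                /* reboot: re-execution from scratch*/
}

/* Idempotent variant: undo-log the persistent state before the
 * task and roll it back on reboot, so re-execution sees the same
 * initial state and produces the same result.                      */
static void run_with_undo_logging(void) {
    int log_var = nv_var;       /* undo-log entry                   */
    task_body();                /* attempt 1, then power failure    */
    nv_var = log_var;           /* reboot: restore consistent state */
    task_body();                /* safe re-execution                */
}
```

Without logging, the re-execution writes \code{10} to a second array element; with the rollback, the outcome matches a single failure-free execution.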
\noindpar{A2- Control-flow Inconsistencies.} If the control flow depends on external inputs such as sensor readings, power failures might lead to erratic program behavior~\cite{surbatovich2019dependent}. In particular, programmers need to pay special attention to implementing conditional statements that check persistent variables, whose values might be updated during I/O operations. For example, consider the case that a program reads a temperature value (\code{temp = read\_sensor();}) and sets the variable \code{alarm} based on the temperature reading (\code{if(temp > limit) then alarm = true; else tempOk = true;}). If the temperature is less than a pre-defined limit, the variable \code{tempOK} will be set to true. If there is a power failure right after this operation and the program re-executes, the program might read another temperature value higher than the limit. In this case, the program will set the variable \code{alarm} to true. At this point, both of the variables \code{alarm} and \code{tempOK} are true, which is logically incorrect~\cite{surbatovich2019dependent}.
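The temperature scenario above can likewise be reproduced in a small C sketch (hypothetical names; the "sensor" replays a scripted trace of one reading below the limit before the power failure and one above it after the reboot):

```c
#include <assert.h>

/* Scripted sensor trace: 20 before the failure, 30 after reboot.   */
static const int trace[] = {20, 30};
static int trace_idx = 0;
static int read_sensor(void) { return trace[trace_idx++]; }

static const int limit = 25;
static int alarm_on = 0, temp_ok = 0;   /* persistent flags         */

/* The conditional from the text.                                   */
static void control(void) {
    int temp = read_sensor();
    if (temp > limit) alarm_on = 1;     /* too hot                  */
    else              temp_ok  = 1;     /* temperature is fine      */
}

/* First execution reads 20 and sets temp_ok; a power failure
 * follows, and the re-execution reads 30 and sets alarm_on.        */
static void run_with_failure(void) {
    control();                          /* then power failure       */
    control();                          /* reboot and re-execution  */
}
```

After `run_with_failure()`, both flags are true, which is exactly the logically impossible state described above.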
\noindpar{A3- Handling Interrupts.} Interrupts cause dynamic branches, which move the control from the main thread of execution to the interrupt service routine. The main program and interrupt service routines might share the persistent state. If an interrupt service routine leaves the shared persistent variables partially updated due to a power failure, this situation might lead to memory inconsistencies~\cite{ruppel2019Coati}.
\noindpar{B1- Platform Dependencies.} The execution time of an intermittent program depends on several factors, such as available harvestable energy in the environment, capacitor size (i.e., energy storage capacity), and the energy consumption profile of hardware and software. Intermittent programs need to be modified and restructured regarding these factors to eliminate non-termination and ensure computational progress~\cite{maeng2018Chinchilla}.
\noindpar{B2- Reuse and Maintaining Difficulties.} Platform and runtime dependencies make implementing reusable intermittent programs difficult~\cite{durmaz2019puremem}. For example, programmers using task-based models need to deal with task decomposition and task-based control flow~\cite{colin2016chain}. Handling these issues is complicated and leads to programs that are difficult to maintain.
\subsection{The State of the Art}
We classify the prior art based on how they addressed the aforementioned challenges.
\noindpar{Checkpoint-based Systems.}
Checkpointing runtime environments (e.g.,~\cite{ransford2012mementos,lucia2015DINO,jayakumar2014quickrecall,balsamo2016hibernus++}) persist the registers, global variables, and the call stack of programs into non-volatile memory via programmer-inserted checkpoints to preserve the forward progress of computation (A1). Due to the call stack's dynamic nature (i.e., it grows and shrinks dynamically), programmers need to place checkpoints carefully to eliminate non-termination: the energy required to execute the instructions between two checkpoints should not exceed the maximum energy stored in the capacitor. Therefore, checkpoint placement is platform-dependent, and checkpointed programs are not reusable. There are studies (e.g.,~\cite{woude2016ratchet,mottola2017harvos,kortbeek2020TICS,maeng2020CatNap,maeng2019Samoyed,maeng2018Chinchilla}) that provide compilers to translate C programs into intermittent programs without programmer intervention. However, the C language does not provide abstractions for interrupt handling (A3) and atomic I/O (A2) operations on intermittent systems. The absence of these abstractions might lead to memory inconsistencies and non-termination.
\noindpar{Task-based Systems.} Task-based models (e.g.,~\cite{colin2016chain,alpaca,yildirim2018ink,hester2017mayfly,majid2020Coala,ruppel2019Coati}) require programmers to structure their programs as a collection of idempotent and atomic tasks. They eliminate the need for the call-stack and checkpoints by employing GOTO-style transitions among tasks, i.e., task-based control-flow. However, this is an unstructured programming style that leads to programs that are not reusable and that are prone to bugs~\cite{dijkstra1968letters}. Task-based programming also leads to platform-dependent code since task sizes depend on the capacitor size of the platform. Recent work~\cite{durmaz2019puremem} provides tasks with parameters and continuation-passing~\cite{sussman1998scheme} via closures, which enables reusable code by delivering the control flow in a structured way, similar to function calls. However, it also leads to platform-dependent code because of static task sizes.
\input{table}
\subsection{Our Differences}
Table \ref{table:runtime_comparison} provides a comparison of our work with the state-of-the-art. We propose an intermittent computing solution composed of a virtual machine (PureVM\xspace), a programming language (PureLANG\xspace), and a compiler. We design PureVM\xspace to abstract the intermittent execution details and give the programmer a continuously powered view of the target device. This abstraction provides platform-independent code via its multiple compilers for multiple hardware. PureLANG\xspace is the software interface of PureVM\xspace. PureLANG\xspace programs are translated into a sequence of \emph{primitive functions}, which are the smallest computation blocks that PureVM\xspace executes atomically despite power failures. Thanks to this two-layered abstraction, our work overcomes all mentioned challenges of intermittent computing (i.e., A1--A3 and B1--B2).
\titlespacing{\section}{0pt}{2pt plus 1pt minus 1pt}{2pt plus 1pt minus 1pt}
\usepackage{listings}
\lstset{escapeinside={(*@}{@*)}}
\newcommand{\noind}[0]{\par \noindent}
\newcommand{\noindpar}[1]{\noind {\bf #1}}
\usepackage[scale=.9,light]{sourcecodepro}
\usepackage[T1]{fontenc}
\newcommand{\code}[1]{{\ttfamily #1}}
\newcommand{PureVM\xspace}{PureVM\xspace}
\newcommand{PureLANG\xspace}{PureLANG\xspace}
\newcommand{RewindingVM\xspace}{RewindingVM\xspace}
\newcommand{JustInTimeVM\xspace}{JustInTimeVM\xspace}
\newcommand{TestVM\xspace}{TestVM\xspace}
\newcommand{computing loop\xspace}{computing loop\xspace}
\newcommand{routine\xspace}{routine\xspace}
\newcommand{Routine\xspace}{Routine\xspace}
\newcommand{\sinan}[1]{\textcolor{blue}{#1}}
\newcommand{\caglar}[1]{\textcolor{red}{[c:] #1}}
\newcommand{\geylani}[1]{\textcolor{orange}{g: #1}}
\usepackage{comment}
\usepackage{lipsum}
\usepackage{paralist}
\begin{document}
\title{Virtualizing Intermittent Computing}
\author{\c{C}a\u{g}lar~Durmaz,~Kas{\i}m~Sinan~Y{\i}ld{\i}r{\i}m,~\IEEEmembership{IEEE Member,}~and~Geylani~Karda\c{s}
\thanks{\c{C}a\u{g}lar Durmaz and Geylani Karda\c{s} are with the International Computer Institute, Ege University, Turkey. e-mail: \{caglar.durmaz, geylani.kardas\}@ege.edu.tr. Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m is with the Department of Information Engineering and Computer Science, University of Trento, Italy. e-mail: [email protected].}
}
\maketitle
\input{0-abstract.tex}
\IEEEpeerreviewmaketitle
\input{1-introduction.tex}
\input{2-related.tex}
\input{3-purelang.tex}
\input{4-purevm.tex}
\input{5-evaluation.tex}
\input{6-conclusion.tex}
\bibliographystyle{IEEEtran}
\section{PureVM\xspace Intermittent Virtual Machine}
\label{sec:vm}
PureVM\xspace is a single-input/single-output system that specifies a runtime environment for event-driven intermittent programming. PureLANG\xspace programs execute, without any modification, on different hardware and runtime environments conforming to the PureVM\xspace specification.
The PureVM\xspace specification comprises an event queue, a non-volatile object memory, and a continuation executed by its runtime engine. PureVM\xspace pushes the events generated by the interrupt service routines to the event queue. It then removes the event at the head of the event queue and creates a continuation in object memory from that event. As mentioned, a continuation represents the control state of the program, and it consists of a set of objects and the methods to be applied to them. Running a continuation may create and return another continuation. PureVM\xspace runs continuations until no continuation is returned. When there is no event to consume in the queue, PureVM\xspace sleeps until an interrupt generates an event.
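The event loop described above can be sketched in a few lines of C. This is a minimal illustration, not the PureVM\xspace specification: every type and name here is our assumption, and persistence and sleeping are elided into comments.

```c
#include <stddef.h>

/* A continuation: a method to apply plus the payload it carries.   */
typedef struct cont cont;
struct cont {
    cont *(*method)(cont *self);  /* step to apply                  */
    int   value;                  /* payload carried by the event   */
};

/* A fixed-size FIFO event queue (a priority queue would also       */
/* satisfy the spec, as noted below).                               */
#define QSIZE 8
static cont *event_q[QSIZE];
static int q_head = 0, q_tail = 0;

static int  q_empty(void)   { return q_head == q_tail; }
static void q_push(cont *e) { event_q[q_tail] = e;
                              q_tail = (q_tail + 1) % QSIZE; }
static cont *q_pop(void)    { cont *e = event_q[q_head];
                              q_head = (q_head + 1) % QSIZE;
                              return e; }

/* Run continuations until none is returned, then take the next
 * event. A real VM would persist state between steps and sleep
 * when the queue is empty.                                         */
static void vm_run(void) {
    while (!q_empty()) {
        cont *c = q_pop();        /* event -> initial continuation  */
        while (c != NULL)
            c = c->method(c);     /* may return the next step       */
    }
}

/* Demo continuation: adds the event's payload to a global result.  */
static int result = 0;
static cont *add_step(cont *self) { result += self->value; return NULL; }
```

Pushing two `add_step` events with payloads 3 and 4 and calling `vm_run()` drains the queue and leaves `result == 7`.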
PureVM\xspace state (which represents the computational state) is composed of the events in the queue, the continuation of the running event, and the global objects. The object memory region in non-volatile memory maintains the global objects and the running continuation. Before calling the next function (i.e., running the subsequent continuation), PureVM\xspace can decide to persist the state in object memory to preserve forward computation and not lose the intermediate results in case of power failures.
PureVM\xspace specifies the artifacts (event buffer, object memory, runtime engine, and running program) and their relationships abstractly. For example, the event buffer can be implemented as a regular first-in-first-out (FIFO) queue or a priority queue, as long as the system's single-input behavior is not violated. Different design choices can lead to different PureVM\xspace implementations. In the following section, we describe RewindingVM\xspace, which is our main PureVM\xspace implementation.
\subsection{RewindingVM\xspace: An Undo-Logging PureVM\xspace}
\label{sec:vm1}
RewindingVM\xspace stores the execution context in the continuation stack and keeps the object memory consistent across power failures via the undo-logging mechanism. Events in the event queue, global objects, and continuation stack in the object memory represent the computational state. The metadata about the continuation stack is stored in the \emph{runtime data} region of the object memory.
\noindpar{Event Handling.} RewindingVM\xspace provides a queue for event buffering, which holds the event objects and the methods that need to be applied to these objects (\emph{Event Queue} in Fig.~\ref{fig:rewinding-in-action}). The RewindingVM\xspace runtime engine stores the continuations in a stack in object memory (\emph{Continuation Stack} in Fig.~\ref{fig:rewinding-in-action}). The runtime engine starts event execution by copying the event object (event data) and the event method (event handler) to the continuation stack. It is worth mentioning that RewindingVM\xspace reduces event handling to a single-producer/single-consumer problem, which eliminates race conditions on the event buffer. During execution, the runtime engine pops a method from the continuation stack and runs it on the flow object. Control passes to the method, which can modify the objects.
\noindpar{Undo-Log and Memory Consistency.} The undo-log mechanism is activated when modifying objects to preserve memory consistency. The modifications on the objects are done only by calling primitive functions. Every primitive function calls the runtime engine's log function before modifying an object. The object memory comprises blocks called pages. The programmer, for efficiency, may configure the page sizes of the object memory. The log function copies the original page to undo-log memory before any modification on that page. When RewindingVM\xspace reboots after a power failure, it copies all the logged pages (including the pages of runtime data) into their corresponding pages in the object memory and continues the program. This mechanism ensures the consistency of the object memory by eliminating partial modifications.
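The page-granular undo-logging scheme above can be sketched as follows. Sizes and names are our assumptions (the text notes that page size is configurable and that log search is sequential); a real implementation would keep both memories in non-volatile storage.

```c
#include <string.h>

#define PAGE_SZ  32   /* configurable page size (32 B here)         */
#define N_PAGES   4

static unsigned char obj_mem[N_PAGES][PAGE_SZ];  /* object memory   */
static unsigned char log_mem[N_PAGES][PAGE_SZ];  /* undo-log copies */
static int log_page_id[N_PAGES];
static int log_count = 0;

/* Called by every primitive function before it modifies a page.    */
static void vm_log(int page) {
    for (int i = 0; i < log_count; i++)          /* sequential scan */
        if (log_page_id[i] == page) return;      /* already logged  */
    memcpy(log_mem[log_count], obj_mem[page], PAGE_SZ);
    log_page_id[log_count++] = page;
}

/* On reboot: copy logged pages back, restoring the last            */
/* consistent state, then empty the log.                            */
static void vm_undo(void) {
    for (int i = 0; i < log_count; i++)
        memcpy(obj_mem[log_page_id[i]], log_mem[i], PAGE_SZ);
    log_count = 0;
}

/* After a step completes: atomically discard the log (commit).     */
static void vm_commit(void) { log_count = 0; }
```

A modification followed by `vm_undo()` rolls the page back to its logged contents, while a modification followed by `vm_commit()` makes a later `vm_undo()` a no-op, which is exactly the "restore on reboot, clear on success" behavior described above.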
\noindpar{Forward Progress and Execution Flow.} The method being executed can push other methods to the continuation stack. The method execution finally returns an object and gives the control back to the runtime engine. The runtime engine saves the returned object as a flow object. The runtime atomically clears the undo-log memory as the last operation. In this way, the runtime keeps memory consistent and guarantees the forward progress of the program.
\noindpar{I/O Handling.} The PureLANG\xspace compiler already splits the program code blocks with I/O primitive functions into three sections, as described in Section \ref{sec:io:primitive}. This strategy already ensures the atomic execution of I/O operations. Therefore, the RewindingVM\xspace runtime engine does not treat I/O operations in a specific way.
\noindpar{RewindingVM\xspace Compiler Optimizations.} We implemented two compiler optimizations for RewindingVM\xspace: (i) executing several primitive functions as a block, and (ii) some loop optimizations to reduce PureVM\xspace overheads such as repetitive undo logging (i.e., page copy) operations. Programmers can modify PureVM\xspace application configuration file to enable optimizations.
\subsubsection{An Example RewindingVM\xspace Execution.}
Fig. \ref{fig:rewinding-in-action} shows how RewindingVM\xspace handles the \code{control} event described in Fig. \ref{fig:event_handling_configuration}. In the first step, the interrupt service routine adds the address of the control method to the event queue along with the sensed temperature value (which is a floating-point value of 22.0). In the second step, the VM copies the event from the event queue to the object memory (depicted as \code{control} and \code{22.0} in the continuation stack). In the lower right corner of Fig. \ref{fig:rewinding-in-action}, a part of the compiler-generated code is presented. The compiler splits the \code{control} method into primitive functions and then recomposes them into basic blocks. As can be seen from the figure, the recomposed version of the \code{ifElse} method (Line 10 in Fig. \ref{fig:event_handling_configuration}) is fragmented into two continuations that represent the \code{apply} and \code{select} primitives. In the third step, the VM removes the first method from the continuation stack (\code{control} function in the lower right corner of Fig. \ref{fig:rewinding-in-action}) and runs it. Since this process changes the runtime data and the stack in non-volatile memory, it copies the affected pages to undo memory before any update. The fourth step in the figure shows that the computer restarts after a power failure. In the fifth step, the VM calls the undo function because it detects that the undo memory is not empty. It brings the non-volatile object memory to the last consistent state. Then, in step 6, the method on the stack runs, as described in step three. In the seventh step, the undo memory is cleared by committing it. Forward progress of the computation is ensured by executing the operations \code{vm\_consume}, \code{vm\_commit} and \code{undo} atomically. In the next steps (not shown in the figure), the remaining methods in the continuation stack execute. 
They pass the returned flow objects to the next methods till the continuation stack becomes empty.
\subsection{Other PureVM\xspace Implementations}
We also implemented two different PureVM\xspace versions, named JustInTimeVM\xspace and TestVM\xspace. JustInTimeVM\xspace does not contain an undo log memory and requires hardware support to capture an interrupt when the voltage on the capacitor crosses the lower-threshold voltage. This interrupt persists the computational state in non-volatile memory and puts the runtime into sleep mode to stop computation. This strategy prevents memory inconsistencies without the need for an undo-log. JustInTimeVM\xspace's overhead is lower than RewindingVM\xspace's since JustInTimeVM\xspace does not include page copy operations between log memory and object memory. We implemented TestVM\xspace to test any PureLANG\xspace program on a personal computer with continuous power. This implementation allows us to test the correctness of the program logic without loading the code on a microcontroller.
\section{PureLANG\xspace Language}
\label{sec:language}
PureLANG\xspace is a statically-typed and event-driven programming language. Programmers develop PureLANG\xspace applications via objects and functions that operate on them, and do not reason about intermittent execution. PureLANG\xspace employs continuation-passing style~\cite{sussman1998scheme} where the program control flow is passed explicitly via \emph{continuations}. Each function (or expression) takes one \emph{flow-in object} in addition to its parameters and passes one \emph{flow-out object} to the next function (or expression) that handles the rest of the computation. Therefore, each continuation contains a reference to a flow-in object, a set of object references as parameters, and a function reference to be applied. Events are continuations that are created by interrupt handlers.
PureVM\xspace persists continuations in non-volatile memory to ensure forward progress. The overhead of persisting a continuation is static (since it contains only a function reference and a certain number of object references) compared to persisting the call-stack whose size is dynamic in procedural languages. During program execution, PureVM\xspace applies the function on the flow-in object by using the information stored in the current continuation. Since PureLANG\xspace functions always operate on objects (kept in non-volatile memory), PureVM\xspace can track the updates on these objects to preserve data consistency despite power failures.
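Because a continuation holds only a function reference, a flow-in reference, and a fixed number of parameter references, its persisted size is constant. This can be sketched as a constant-size C struct; the following is a hypothetical illustration with names of our choosing, not the actual PureVM\xspace data layout:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical continuation layout: one function reference, one flow-in
 * object reference, and a fixed number of parameter references, so the
 * size to persist is static (sizeof(continuation_t)). */
typedef struct object { double value; } object_t;

#define MAX_PARAMS 2
typedef struct {
    object_t *(*fn)(object_t *flow_in, object_t **params);
    object_t  *flow_in;
    object_t  *params[MAX_PARAMS];
} continuation_t;

static object_t *run(continuation_t *c) {  /* VM applies fn to flow-in */
    return c->fn(c->flow_in, c->params);
}

/* example function: add the first parameter to the flow-in object */
static object_t *add_param(object_t *flow_in, object_t **params) {
    flow_in->value += params[0]->value;
    return flow_in;                        /* becomes the next flow object */
}
```

The returned object reference becomes the flow-in object of the next continuation.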
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{pics/basics.pdf}
\caption{PureLANG\xspace example primitive and control-flow functions. The control-flow only depends on two primitive functions \textit{apply} and \textit{select} in PureLANG\xspace.}
\label{fig:control_flow}
\end{figure}
\subsection{PureLANG\xspace Types and Primitive Functions}
\label{sec:io:primitive}
Primitive functions are the high-level instructions, which execute atomically and form the interface between the PureLANG\xspace and PureVM\xspace. As an analogy with task-based systems~\cite{colin2016chain,yildirim2018ink,alpaca,ruppel2019Coati,hester2017mayfly,bakar2021rehash}, primitive functions are the atomic tasks that are reusable building blocks. PureLANG\xspace programs are a composition of these atomic blocks.
\noindpar{Parametric and Arrow Types.} PureLANG\xspace has built-in object types \code{Int}, \code{Float}, \code{Bool} and \code{Void}, which are reference types that point to an address in memory. Moreover, PureLANG\xspace offers \emph{parametric types} that allow the concrete type to be specified later. As an example, the \code{select} primitive function (Fig.~\ref{fig:control_flow}, Lines 1--3) returns one of two objects \code{t} or \code{f} of the same parametric type \code{\%a} as a flow-out object. The objects referenced by \code{t} and \code{f} can be of any type, as long as their types are the same. The primitive function \code{select} makes the decision based on the value of the boolean flow-in object \code{b}. Functions with parametric types (also known as \emph{parametric polymorphism}) eliminate code duplication. Without parametric polymorphism, the \code{select} method would need different implementations for different types. As another example, the primitive function \code{apply} (Fig.~\ref{fig:control_flow}, Line 5) applies the given function to the flow-in object of parametric type \code{\%a} and returns a flow-out object of parametric type \code{\%b}. It also takes a function reference \code{func} as a parameter, which takes an object of parametric type \code{\%a} and returns an object of parametric type \code{\%b}. This is indicated by the \emph{arrow type} declaration depicted as \code{\%a->\%b}. In the body of the primitive function (Fig.~\ref{fig:control_flow}, Line 6), \code{func} is called by passing the flow-in object \code{a}. Note that \code{func} returns an object of type \code{\%b}, which is compatible with the flow-out object type of \code{apply}.
\noindpar{IO Primitives.} PureLANG\xspace introduces IO primitive functions to eliminate control-flow inconsistencies during I/O operations (A2). \code{IO} metadata (e.g., see the \code{getTemp} function in Fig.~\ref{fig:control_flow}, Line 9) helps the PureLANG\xspace compiler to handle these operations differently. The compiler splits PureLANG\xspace code blocks with IO primitives into three sections: the code before the IO primitive, the IO primitive itself, and the code after it. After each section executes, PureVM\xspace takes control to persist computational state, which ensures the atomic execution of the IO primitive.
\noindpar{Type Checking.} Arrow and parametric type declarations help the PureLANG\xspace compiler for type inference and type checking. While decomposing the program into its primitive functions, the PureLANG\xspace compiler performs type checking by using input and output type metadata to eliminate (B2)-type bugs. The compiler also infers the variable types automatically when the programmer does not declare them explicitly.
\noindpar{Resolving W.A.R. Dependencies.} Primitive functions also specify a meta-data concerning write operations on objects. As an example, the \textit{write} in the definition of \code{getTemp} function tells the compiler that this function modifies the flow-in object \code{x}. While decomposing the program into its primitive functions, the compiler can resolve W.A.R. dependencies using this metadata. This situation helps PureVM\xspace to execute the intermittent program correctly by preserving the memory consistency (to ensure A1). Considering the target PureVM\xspace implementation, PureLANG\xspace compiler instruments the bodies of the functions by inserting the necessary undo logging or redo logging code explained in Section~\ref{sec:vm}.
\subsection{PureLANG\xspace Statements and Control Flow}
Since PureLANG\xspace employs structured programming, complex expressions are formed by composing other expressions (and primitive functions). The dot operator (\code{.}) enables expression composition as long as the output type of sub-expression is compatible with the input type of the following sub-expression. The last statement in a function body determines the output object that should be compatible with the output type of the function. Thanks to the continuation-passing style of the language, all statements forming the complex behavior of a function execute in order. Therefore, there is no need for the PureVM\xspace to check branches and early exits.
\noindpar{Control Flow.} In PureLANG\xspace, every function related to the control flow is a composition of \code{select} and \code{apply} primitive functions. For example, \code{ifElse} function (Fig.~\ref{fig:control_flow}, Lines 14--16) enables a conditional branch by invoking the \code{apply} and \code{select} primitive functions in order. The first parameter \code{s} is a function that takes an object of parametric type \code{\%a} and returns a boolean object. Firstly, the function \code{s} is applied on the flow-in object \code{p}, which pipes a boolean object (i.e., \code{p.apply(s)} in Line 11 which returns boolean). Then, the returned object becomes a flow-in object for the \code{select} primitive, which returns one of the functions from \code{t} and \code{f} by considering the flow-in boolean object. The returned function object is assigned to variable \code{func}. Then, \code{func} is applied to the flow-in object \code{p}, and an object of \code{\%b} type is returned (Line 12).
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{pics/application.pdf}
\caption{Sample monitoring application in PureLANG\xspace. The application code contains all semantics of the computation, the PureVM\xspace settings define the events of the program and platform-specific attributes, and the interrupt code receives values from the environment (e.g., sensors).
}
\label{fig:event_handling_configuration}
\end{figure}
\subsection{Putting Things Together: A Sample Sensing Application}
Figure~\ref{fig:event_handling_configuration} presents an event-driven air monitoring application, which includes an application source code and a configuration file. PureLANG\xspace compiler (implemented using Xtext~\cite{eysholdt2010xtext}) produces C code from the given PureLANG\xspace program. The generated code includes a single C source file and its header. The source file also contains the implementation of the target PureVM\xspace. The PureLANG\xspace compiler requires a configuration file, which mainly contains the list of event handlers, the name of the target hardware platform, and some specific parameters of the selected PureVM\xspace implementation (such as non-volatile memory size, the size of the event queue).
The application code contains the objects, methods, and interrupt handlers. The event handlers \code{boot}, \code{reboot}, and \code{sleep} (which is not shown in the figure), are mandatory. The \code{boot} event occurs the first time the computer boots after being deployed. The \code{reboot} event occurs after recovery from a power failure, which triggers the reboot handler that restores the state of the computation. PureVM\xspace triggers the \code{sleep} handler when there is no event to be processed, which puts the processor in a low power mode to save energy. The timer interrupt handler (Lines 19--21) adds an event to the event queue of PureVM\xspace by calling \code{addEventQ} method with the sensed temperature value (via \code{readTemp}) and the corresponding event handler (which is \code{control} in this case) as parameters. PureVM\xspace processes this event by calling the \code{control} event handler (Lines 9--11), which processes the events generated from the timer interrupt service routine. Inside this routine, the heater is turned on or off based on the received temperature value (Line 10).
\section{Introduction}
Response functions and their inverse, link functions, are central parts of many modern regression approaches.
They create a one-to-one mapping between predictors defined on the real line and the distributional parameters featuring restricted support (e.g., positivity restrictions).
The choice of the response function can affect model quality in two ways.
First, without a suitable response function the model may not fit the data given.
Second, the interpretability of covariate effects depends crucially on the response function.
For example, the identity function implies additive effects while the exponential function facilitates a multiplicative interpretation.
Complex response functions may even hinder any useful interpretation except the marginal one, that is, assessing the effect of one covariate on the distributional parameter visually while keeping all other covariates constant.
In this paper, we suggest to construct novel types of response functions for strictly positive parameters based on the softplus function.
These response functions can be used when an additive interpretation of effects is desired since they allow for a quasi-additive interpretation over a certain part of the range of predictor values. Consider as a motivating example heteroscedastic data from the Munich rent index plotted in Figure~\ref{fig:munich-rent}.
We relate rents in Euro of flats in Munich to the size of living area in square meters,
dealing with the heteroscedasticity by using a linear quantile regression model \citep{koenkerQuantileRegression2005} and two versions of location-scale models within the framework of generalized additive models for location, scale and shape \citep{rigbyGeneralizedAdditiveModels2005} assuming normally distributed responses with regression effects on both the mean and the standard deviation.
The location-scale models differ only in the response function employed for the scale parameter (viz., the exponential and softplus response function).
In every model, the predictors consist of an intercept and the linear effect of living area.
Figure~\ref{fig:munich-rent} shows estimated quantiles and, thereby, illustrates how the non-linearity of the exponential function translates to the estimated quantiles as a function of the living area.
In contrast, the softplus response function maintains the linear relationship in the predictor providing a parametric counterpart to the non-parametric quantile regression model.
The estimated quantiles are more similar to the results from the quantile regression model.
We explain the reasoning behind the quasi-additive interpretation in Section~\ref{sec:meth_softplus} and provide details on this application in Section~\ref{sec:munich-rents}.
\begin{figure}[bth]
\centering
\includegraphics[width=.75\textwidth]{arxiv-files/munich}
\caption{Scatter plot of the rents versus living area. Quantiles estimated for $q = 0.01, 0.1, 0.25,0.5,\protect\linebreak 0.75, 0.9, 0.99$ via quantile regression (qr) as well as location-scale regression with exponential (exp), and with softplus response function (sp) are shown as lines.}
\label{fig:munich-rent}
\end{figure}
For the choice of the response function, most researchers rely on default choices such as the logistic response function for parameters restricted to the unit interval (e.g., probabilities), or the exponential response function for strictly positive parameters.
In generalized linear models \citep[GLMs,][]{mccullaghGeneralizedLinearModels1989}, these defaults can often be justified by their characterization as natural link functions arising in the context of exponential families.
In other cases, the default response functions are chosen to entail specific modes of interpretation, e.g., multiplicative effects on odds in case of the logistic response function or multiplicative effects on the parameter of interest in case of the exponential response function.
The interpretability is also the reason why \cite{fahrmeirRegression2013} recommend to use the exponential response function for gamma distributed responses in the GLM framework instead of the canonical link function.
However, the use of the exponential response function implies a multiplicative model, contrasting with the assumption of additivity of effects often desired in statistics.
Moreover, domain-specific knowledge about the application can invalidate a multiplicative model entirely and e.g., suggest an additive model.
As shown above, an additive model for the standard deviation of a normally distributed response variable leads to additive effects on the quantiles of the response distribution, providing a parametric counterpart to non-parametric quantile regression specifications.
So far, researchers have had to fall back on using no response function at all (i.e., the identity) if additivity of effects is desired.
This implies the aforementioned issues when the distribution parameter modelled must comply with a positivity restriction.
With this paper, we introduce the softplus response function to regression modeling to overcome these issues.
The softplus response function allows for a quasi-additive interpretation of regression effects for the majority of the relevant predictor space.
Nonetheless, it is a strictly increasing bijective function mapping the real values to its positive subset.
Therefore, it is an eligible response function for positively restricted distribution parameters and can be used instead of the exponential response function (guaranteeing a quasi-additive model) or the identity response function (avoiding the restriction of regression coefficients).
In addition to the quasi-additive interpretation, the softplus function enables the design of response functions with interesting properties.
(i) It can be augmented with an additional parameter that yields further flexibility to model the data given, (ii) it avoids exponential growth, which can be an issue under certain covariate combinations, and (iii) it enables the construction of an exponential-like function that avoids potential numerical overflow when evaluated for large positive predictor values.
An alternative to pre-chosen response functions is to estimate the response function flexibly from the data.
The most well-known example for this approach is the single-index model introduced by \cite{ichimuraSemiparametricLeastSquares1993}.
The kernel-based single-index models share the disadvantage that the estimated response function is often too flexible.
To counter this characteristic, \cite{yuPenalizedSplineEstimation2002,yuPenalisedSplineEstimation2017} introduced penalization to single-index models.
Recently, \cite{spiegelGeneralizedAdditiveModels2019} presented an approach that combines the single-index models based on penalized splines with the flexibility of generalized additive models \citep{hastieGeneralizedAdditiveModels1986}.
One practical challenge when employing flexible link functions is the interpretation of the resulting model since restrictions have to be assigned to the regression predictor to render the response function estimate identifiable.
In contrast, simple, fixed response functions considerably facilitate interpretation.
Having easily interpretable effects may be the reason that for positively bounded parameters the exponential response function is still the most common approach.
Regardless of how well default choices can be justified in general, there is no a priori reason to assume that the defaults fit well on any given data set.
Therefore, the investigation of alternative response functions is a worthwhile and relevant endeavour.
The remainder of this paper is structured as follows: Section~\ref{sec:meth_softplus} introduces the softplus response function, justifies the quasi-additive interpretation and gives a guideline for its proper use.
Furthermore, Section~2 describes statistical inference when employing the softplus response function.
Section~\ref{sec:simulation} investigates the softplus response function in simulation studies.
The practical applicability of softplus-based regression specifications is demonstrated in Section~\ref{sec:applications}.
The final Section~\ref{sec:sumconc} summarizes our findings and discusses directions of future research.
\section{The Softplus Function}\label{sec:meth_softplus}
So far, the softplus function \citep{dugasIncorporatingSecondOrderFunctional2001} is mainly used as a continuously differentiable approximation of the rectifier function (i.e., $\rect(x) = \max(0, x)$) in deep neural networks \citep{haozhengImprovingDeepNeural2015}.
The function maps the elements of $\mathbb{R}$ to elements of its positive subset $\mathbb{R}_+$. We use a generalized version which can be defined by the equation
\begin{align}
\softplus_a(x) = \frac{\log\left(1 + \exp(ax)\right)}{a}\label{eq:def-softplus}
\end{align}
featuring an additional parameter $a > 0$.
Setting $a = 1$ reduces the function to its simple form.
Figure~\ref{fig:softplus} shows the softplus response function for different values of $a$.
Introducing the softplus parameter $a$ allows us to control the approximation error with respect to the rectifier function, as it can be shown that for every $\varepsilon > 0$ there exists some $a > 0$ such that
\begin{align*}
0 < \operatorname{softplus}_a(x) - \max(0, x) \leq \log(2)/a < \varepsilon
\end{align*}
holds for all $x \in \mathbb{R}$.
The largest approximation error is at $x = 0$ as visually indicated by Figure~\ref{fig:softplus} and follows from the reformulation for numerical stability subsequently discussed (see Equation~\eqref{eq:sp_stable}).
Besides, one can observe in Figure~\ref{fig:softplus} that the softplus function follows the identity function very closely in the positive domain and rapidly approaches zero in the negative domain for $x\rightarrow-\infty$.
This behavior can be further accentuated by increasing the parameter $a$.
Therefore, the softplus parameter $a$ can be used to control how long the quasi-linear relationship should be maintained when approaching zero and consequently, how fast the boundary of the linked distribution parameter is approached.
\begin{figure}[tb]\centering
\includegraphics[width=.75\textwidth]{arxiv-files/plot_sp.pdf}
\caption{\label{fig:softplus} Plot of the softplus function (left) for different values of softplus parameters $a$. The approximated rectifier function is shown as the dotted line.}
\end{figure}
The adoption of the softplus function as a response function features two major advantages:
\begin{itemize}
\item It translates the additivity of effects on the predictor level to the parameter space for a majority of the relevant distribution parameter space while always guaranteeing the positivity of the distribution parameter.
This is achieved by being quasi-linear in its argument as long as the predictor is large enough for a given value of $a$.
\item The softplus function allows for a straightforward interpretation of the covariate effects. When the predictor value is large enough, the effects can be interpreted directly on the parameter.
Consider the linear effect of a covariate $x$ with the corresponding regression coefficient $\beta$. An increase of $x$ by one unit is associated with an increase of $\beta$ in the predictor and, when using the softplus function, also an increase of almost $\beta$ in the distribution parameter, or, expressed as a formula, ${\softplus(\beta(x + 1)) \approx \beta + \beta x}$.
\end{itemize}
Clearly, the quasi-additive interpretation is no longer valid once the argument of the softplus function is not within the approximately linear part of the softplus function which we define in detail in Section~\ref{sec:linear-part}.
However, by choosing a sufficiently large $a$, the linear part covers almost the entire positive domain.
In the negative domain and for a sufficiently large $a$, a small change of the covariate usually does not cause a relevant change on the parameter value, since the softplus function outputs values very close to zero.
To ensure the validity of this interpretation, it is necessary to check the range of values of the linear predictor for the observations in the data set.
Most of them should be located within the linear part of the softplus function.
The quasi-additive interpretation is in contrast to the usual multiplicative interpretation for positively constrained parameters that arises from the use of the exponential response function.
For example, $\exp(\beta (x + 1)) = \exp(\beta)\exp(\beta x)$ leads to the interpretation that a change of one unit in the covariate is associated with a multiplicative change of $\exp(\beta)$ units on the parameter.
In contrast to large values of $a$, which enable the quasi-additive interpretation, choosing sufficiently small values of $a$ makes the softplus function resemble the exponential function with a scaled and shifted argument.
This becomes more obvious when taking into account that $\log(x + 1)$ is almost linear in $x$, for $|x| \ll 1$, and thus $\log(1 + \exp(ax)) / a \approx \exp(ax - \log(a))$ for ${\exp(ax) \ll 1}$.
Consequently, the choice of the softplus parameter $a$ allows to continuously vary between an identity-like response function (for $a \rightarrow \infty$) and the exponential response function (with scaled and shifted argument for $a \rightarrow 0$).
This very property facilitates the construction of another response function approximating the exponential function for small arguments but with a limiting gradient (see Section~2 in the Supplementary Material).
This function can be used when an exponential-like response function is desired, but unbounded growth is an (e.g., numerical) issue.
\subsection{Numerical Stability of the Softplus Function}\label{sec:numerical}
A naive implementation of the softplus function derived from Equation~\eqref{eq:def-softplus} can easily lead to numerical issues.
On common computer hardware, the exponential function already evaluates to infinity for moderately sized inputs.
To give some intuition, according to the IEEE~754 standard \citep{zuras2008ieee} the largest 32-bit and 64-bit floating point numbers are roughly $3.4028 \cdot 10^{38}$ and $1.7977 \cdot 10^{308}$, respectively.
Consequently, calculating $\exp(89)$ and $\exp(710)$, respectively, yields infinity.
This is of special concern for the implementation of the softplus function since the argument of the softplus function is multiplied by the softplus parameter $a$ before the exponential function is applied.
Consider a Poisson regression model with softplus response function and $a=10$.
The predictor $\eta = 9$, targeting an expected value of $9$, would already yield infinity on a 32-bit system using the naive implementation although the correct result is between $9$ and $9 + 10^{-40}$.
Although 64-bit CPUs are common nowadays, one should still consider 32-bit floating point arithmetic since it is often used in high-performance computing or when the computation is carried out on graphical processing units (GPUs) or tensor processing units (TPUs).
Despite the difficulties described, the softplus function becomes numerically stable by using the equality
\begin{align}
\softplus_a(x) &= \max(0, x) + \frac{\log(1 + \exp(-|ax|))}{a}\label{eq:sp_stable}
\end{align}
in conjunction with the \texttt{log1p} procedure.
\texttt{log1p} evaluates $\log(1 + x)$ very precisely even for $|x| \ll 1$ \citep[p. 68]{abramowitzHandbookMathematicalFunctions1972} and is available in most programming languages.
In this formulation, the exponential function must be evaluated only for arguments less than 0 which can be done accurately.
Besides numerical stability, Equation~\eqref{eq:sp_stable} also implies that the softplus function attains its largest approximation error with respect to the rectifier function at $x=0$, namely $\log(2)/a$.
The correctness of the numerically stable formulation is easily verified by expressing the softplus function in terms of the log-sum-exp (LSE) function and exploiting its translation property.
The LSE function takes $l$ real valued arguments $x_1, \dots, x_l$. Its value is given by
\begin{align*}
\operatorname{LSE}(x_1, \dotsc, x_l) = \log\left(\sum_{i=1}^l \exp(x_i)\right).
\end{align*}
The translational property \citep{nielsenGuaranteedBoundsInformationTheoretic2016} states that for $c \in \mathbb{R}$
\begin{align*}
\operatorname{LSE}(x_1, \dotsc, x_l) = c + \log\left(\sum_{i=1}^l \exp(x_i - c)\right)
\end{align*}
holds.
Consequently, we have
\begin{align*}
\softplus_a(x) &= \frac{\log(1 + \exp(ax))}{a} = \frac{\operatorname{LSE}(0, ax)}{a}\\
&= \max(0, x) + \frac{\log(\exp(0 - \max(0, ax)) + \exp(ax - \max(0, ax)))}{a}\\
&= \max(0, x) + \frac{\log(1 + \exp(-|ax|))}{a}
\end{align*}
where the second line arises by setting $c = \max(0, ax)$ and the last line follows from the observation that $-|ax| = ax$ for $x < 0$.
We provide a numerically stable reformulation of the inverse of the softplus function in the Supplementary Material.
\subsection{Linear Part of the Softplus Function}\label{sec:linear-part}
Identifying the section of the softplus function that is approximately linear is important when it comes to the interpretation of regression effects.
In its approximately linear part, the softplus function approximates the identity function and thus permits the quasi-additive interpretation.
A change in the predictor within this part can directly be interpreted as a change in the parameter.
However, since the softplus function is not linear over its whole domain, we need to define under which conditions we consider this interpretation to be valid.
Consider a linear function $f_l$ on $\mathbb{R}$ with $\text{d}/\text{d}x\ f_l(x) = \tilde f_l > 0$ for all $x \in \mathbb{R}$.
Then, starting at $x_1 \in \mathbb{R}$ and changing the argument by $\gamma \in \mathbb{R}$ leads to a change under $f_l$ of $\tilde f_l \gamma$, or in other words, $f_l(x_2) - f_l(x_1) = \tilde f_l\gamma$ with $x_2 = x_1 + \gamma$.
See Figure~\ref{fig:deriv-lin-part} for a graphical representation.
This change under $f_l$ is, of course, constant over the whole domain of $f_l$.
When used as a response function with $\tilde f_l \neq 0$, a regression effect of size~$\gamma$ can be interpreted as influencing the parameter by $\tilde f_l\gamma$ for any $x_1 \in \mathbb{R}$.
If $f_l$ is the identity function, we have $\tilde f_l = 1$ and, consequently, a change in the predictor translates into an identical change in the modeled parameter.
More care has to be taken when considering the softplus function, since the same change in the argument leads to different changes under the softplus function depending on the location of $x_1$.
We want to find the region for which we can interpret the change in the predictor as if it would change the parameter under $f_l$.
Therefore, we need to assess the error induced by the softplus function and define an acceptable error threshold.
Figure~\ref{fig:deriv-lin-part} displays the different changes under $f_l$ and $\softplus_a$.
The error induced by using the softplus function instead of $f_l$ is given by
\begin{align}
\operatorname{error}_a(x_1, x_2, \tilde f_l)
= (x_2 - x_1)\tilde f_l - (\softplus_a(x_2) - \softplus_a(x_1)).\label{eq:sp-error}
\end{align}
Considering this error relative to the change under $f_l$, we obtain the relative error
\begin{align*}
\operatorname{rerr}_a(x_1, x_2, \tilde{f}_l)
&= \frac{(x_2 - x_1)\tilde f_l - (\softplus_a(x_2) - \softplus_a(x_1))}{(x_2 - x_1)\tilde f_l}\\
&= 1 - \frac{\softplus_a(x_2) - \softplus_a(x_1)}{(x_2 - x_1)\tilde f_l}.
\end{align*}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\textwidth]{arxiv-files/err_gra_deriv.pdf}
\caption{Graphical derivation of the relative error. The solid and dashed lines represent the softplus function and the linear function $f_l$, respectively. The error induced by using the softplus function when translating a change of $\gamma$ to the parameter space is given by $f_l(x_2) - f_l(x_1) - (\softplus(x_2) - \softplus(x_1))$. The relative error arises from taking this quantity relative to $\gamma$.}
\label{fig:deriv-lin-part}
\end{figure}
Another way to derive the error induced by the softplus function is to consider the definite integral of the first derivative.
The change under $f_l$, i.e. $f_l(x_2) - f_l(x_1)$, equals the definite integral from $x_1$ to $x_2$ of the first derivative.
Consequently, one can assess the error as the integral of the absolute deviations between the first derivatives of the linear function and the softplus function, i.e.,
\[
\int_{x_1}^{x_2} \left| \frac{\text{d}}{\text{d}z} f_l(z) - \frac{\text{d}}{\text{d}z}\softplus_a(z) \right| \text{d}z.
\]
Since the derivative of the softplus function never exceeds one, the integrand has constant sign for $\tilde f_l \geq 1$, and this approach leads to the same measure as defined in Equation~\eqref{eq:sp-error}.
In this paper, we are interested in the interpretation w.r.t. the identity function, thus the relative error is given by
\begin{align*}
\operatorname{rerr}_a(x_1, x_2)
&= 1 - \frac{\softplus_a(x_2) - \softplus_a(x_1)}{x_2 - x_1}.
\end{align*}
We say that interpreting a regression effect $\gamma$ directly on the parameter is valid if, for some pre-specified acceptable relative error $\alpha$, the predictor $\eta$ is in the interval $[T, \infty) \subseteq \mathbb{R}$ for which $\operatorname{rerr}_a(T, T + \gamma) < \alpha$ holds.
The acceptable relative error, of course, depends on the application and should be chosen carefully.
In this paper, we consider a relative error of 5\% acceptable.
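The relative error and the threshold $T$ can be computed numerically. The following Python sketch (with illustrative function names) finds the smallest such $T$ by bisection, exploiting that the relative error is strictly decreasing in $T$:

```python
import math

def softplus(x, a):
    # Numerically stable evaluation of the softplus function.
    return max(0.0, x) + math.log1p(math.exp(-abs(a * x))) / a

def rerr(x1, x2, a):
    # Relative error of the quasi-additive interpretation w.r.t. the
    # identity function.
    return 1.0 - (softplus(x2, a) - softplus(x1, a)) / (x2 - x1)

def linear_threshold(gamma, a, alpha=0.05, lo=-20.0, hi=20.0, tol=1e-8):
    # Smallest T with rerr(T, T + gamma) < alpha, found by bisection;
    # rerr is strictly decreasing in T because the derivative of the
    # softplus function is strictly increasing.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rerr(mid, mid + gamma, a) < alpha:
            hi = mid
        else:
            lo = mid
    return hi
```

Far to the right of the threshold the relative error quickly becomes negligible, reflecting that the softplus function approaches the identity.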
\subsection{Further Properties of the Softplus Function}
The softplus function shares a number of its properties with the exponential function.
Both functions are smooth and bijective mappings from the real numbers to the positive half-axis.
The first derivative of the softplus function is always positive and is given by
\begin{align*}
0 < \frac{\mathrm d}{\mathrm dx}\softplus_a(x) = \frac{1}{1 + \exp(-ax)} < 1.
\end{align*}
The second derivative is likewise strictly positive
\begin{align*}
0 < \frac{\mathrm d^2}{\mathrm dx^2}\softplus_a(x) = \frac{a \exp(ax)}{(1 + \exp(ax))^2}.
\end{align*}
Therefore, the softplus function is strictly monotonically increasing and strictly convex.
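Both derivative formulas can be verified numerically against finite differences (a small Python sketch; the helper names are ours):

```python
import math

def softplus(x, a):
    # Numerically stable softplus evaluation.
    return max(0.0, x) + math.log1p(math.exp(-abs(a * x))) / a

def d_softplus(x, a):
    # First derivative 1 / (1 + exp(-a*x)), the logistic function.
    return 1.0 / (1.0 + math.exp(-a * x))

def d2_softplus(x, a):
    # Second derivative a*exp(a*x) / (1 + exp(a*x))^2, rewritten in the
    # overflow-safe form a * s * (1 - s) with s the first derivative.
    s = d_softplus(x, a)
    return a * s * (1.0 - s)
```

The rewriting of the second derivative follows from $\exp(ax)/(1+\exp(ax))^2 = s(1-s)$ with $s = 1/(1+\exp(-ax))$.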
\subsection{Inference}\label{sec:inference}
Replacing the standard exponential response function with the softplus response functions introduced in this paper does not cause major difficulties as long as the parameter $a$ is fixed.
Since the softplus-based response functions are continuously differentiable, standard maximum likelihood inference can be used in GLM-type settings, where only the derivative of the link function in the definition of the working weights and the working observations of the iteratively weighted least squares (IWLS) optimization has to be replaced \citep[see for example][for details on the IWLS algorithm]{fahrmeirRegression2013}.
In our simulations and applications we rely on the Bayesian paradigm for statistical inference, since this allows us to apply the softplus-based response functions also beyond GLMs, for example in generalized structured additive regression models with complex additive predictor \citep{brezgerGeneralizedStructuredAdditive2006} or in structured additive distributional regression models \citep{kleinBayesianGeneralizedAdditive2015}.
For the case of pre-specified response function with parameter $a$, we rely on an MCMC simulation scheme where we update the parameter vector block-wise with a Metropolis-Hastings (MH) step in conjunction with IWLS proposals \citep{gamermanSamplingPosteriorDistribution1997,kleinBayesianGeneralizedAdditive2015}.
IWLS proposals automatically adapt the proposal distribution to the full conditional distribution and therefore avoid manual tuning which is, for example, required in random walk proposals.
This is achieved by approximating the full conditional distribution with a multivariate normal distribution whose expectation and covariance matrix match the mode and curvature of the full conditional distribution at the current state of the chain. These quantities can be determined based on the IWLS algorithm of frequentist maximum likelihood estimation without requiring the normalizing constant of the full posterior.
More precisely, the parameters of the proposal distribution are determined by executing one Fisher-scoring step and using the new position as the mean of the multivariate normal distribution, while the covariance of the normal distribution is set to the inverted observed Fisher information at the old position.
More formally, let $\thetavec$ be the vector of parameters that should be updated within an MH block and let $\mathcal{L}(\thetavec)$ be the unnormalized full conditional posterior log density with respect to the parameter vector $\thetavec$.
The proposal distribution is normal with mean $\muvec = \thetavec + \Fmat^{-1}\gvec$ and covariance matrix $\bm{\Sigma} = \Fmat^{-1}$, where $\gvec$ denotes the gradient of $\mathcal{L}(\thetavec)$ and $\Fmat$ the Hessian of $-\mathcal{L}(\thetavec)$, each with respect to $\thetavec$.
This sampling scheme has proven to be effective in various regression models \citep{langBayesianPSplines2004, kleinBayesianGeneralizedAdditive2015, kleinSimultaneousInferenceStructured2016}.
Our implementation relies on an extension of the R-package \texttt{bamlss} \citep{umlaufBAMLSSBayesianAdditive2018}, which implements the methodology described above.
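The construction of the proposal moments can be illustrated with a small numerical sketch. The following Python code is illustrative only and not the \texttt{bamlss} implementation: it assumes a Poisson model with softplus response and flat prior, and it uses the expected rather than the observed Fisher information.

```python
import numpy as np

def softplus(eta, a=5.0):
    # Numerically stable softplus, applied element-wise.
    return np.maximum(0.0, eta) + np.log1p(np.exp(-np.abs(a * eta))) / a

def dsoftplus(eta, a=5.0):
    # First derivative of the softplus function (logistic function).
    return 1.0 / (1.0 + np.exp(-a * eta))

def proposal_moments(beta, X, y, a=5.0):
    # One Fisher-scoring step for a Poisson log-likelihood with softplus
    # response and flat prior: proposal mean mu = beta + F^{-1} g and
    # proposal covariance Sigma = F^{-1}, with g the score vector and
    # F the expected Fisher information.
    eta = X @ beta
    lam = softplus(eta, a)
    w = dsoftplus(eta, a)
    g = X.T @ ((y / lam - 1.0) * w)
    F = X.T @ (X * (w ** 2 / lam)[:, None])
    Sigma = np.linalg.inv(F)
    return beta + Sigma @ g, Sigma
```

Iterating the mean update converges to the posterior mode (here the maximum likelihood estimate); in the MH step, the resulting pair parameterizes the normal proposal whose draw is then accepted or rejected in the usual way.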
\section{Simulations}\label{sec:simulation}
With our simulations, we
\begin{itemize}
\item conduct a proof-of-concept evaluation that investigates how reliably models with the softplus response function can be estimated and whether the resulting credible intervals are well calibrated, and
\item study the ability of model selection criteria to distinguish between data generating processes involving either the softplus or the exponential response function.
\end{itemize}
For all simulations, estimation is conducted within the Bayesian paradigm and carried out in \texttt{R} \citep{rcoreteamLanguageEnvironmentStatistical2019} with the package \texttt{bamlss} \citep{umlaufBAMLSSBayesianAdditive2018}.
We use a similar data generating process varying only the sample size and the response function.
In particular, we assume that the data are generated from a Poisson distribution with expectation $\operatorname{E}(y_i) = \lambda_i = \operatorname{h}(\eta_i)$ where $\operatorname{h}$ denotes the response function.
For a single observation, we choose the predictor structure $\eta = 1.0 + 0.5 x_1 + 1.0 x_2 + 2.0 x_3$ with $x_1, x_2, x_3$ being independent and identically uniformly distributed on the interval from $-1$ to $1$. All observations are simulated as stochastically independent.
Throughout this section, we assume flat priors for all regression coefficients.
\subsection{Point Estimates and Credible Intervals}\label{simsec:cis}
In the first part of the simulation studies, we show that the softplus function can be reliably used as a response function and that posterior means and credible intervals are well-calibrated. The simulation scenarios feature the sample sizes $n \in \{50, 100, 200, 500, 1000,\allowbreak 5000\}$ and the softplus parameter $a$ was set to a value from $\{1, 5, 10\}$. Within each scenario, we simulated 6150~replications.
The number~6150 results from treating the coverage of the true parameter as a Bernoulli experiment and requiring that the width of the normal-approximation 95\% confidence interval for a coverage rate of $0.8$ is smaller than $0.02$.
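This calculation can be reproduced in one line (assuming the full width, not the half-width, of the interval is bounded by $0.02$; the minimal count of 6147 is then presumably rounded up to the 6150 used here):

```python
import math

# Width of the normal-approximation 95% confidence interval for a coverage
# rate p estimated from R Bernoulli replications: 2 * 1.96 * sqrt(p(1-p)/R).
# Solving width < 0.02 at p = 0.8 for R yields the minimal replication count.
p, z, width = 0.8, 1.96, 0.02
R_min = math.ceil((2.0 * z) ** 2 * p * (1.0 - p) / width ** 2)
```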
We run one MCMC chain with 12000 iterations, of which the first 2000 iterations are considered as the burn-in phase.
To visualize the results, we show box plots of the posterior mean estimates in Figure~\ref{fig:consistency}, and plot coverage rates of 80\% and 95\% credible intervals in Figure~\ref{fig:cis}.
In summary, we draw the following conclusions:
\begin{figure}[tb]\centering
\includegraphics[width=\textwidth,keepaspectratio]{arxiv-files/plot_cons.pdf}
\caption{
\label{fig:consistency}
Box plots of deviations of posterior mean estimates of regression coefficients from the true value for different sample sizes and different softplus parameters $a$. Replications that include an absolute deviation larger than five for one coefficient have been excluded from plotting for better visualization. This applies to one replication with $a = 5$ and to ten replications with $a = 10$, each with a sample size of $50$.
}
\end{figure}
\begin{figure}[tb]\centering
\includegraphics[width=\textwidth,keepaspectratio]{arxiv-files/plot_cis.pdf}
\caption{
\label{fig:cis}
Coverage probability for 80\% (solid line) and 95\% (dotted line) credible intervals for different sample sizes and different softplus parameters $a$.
}
\end{figure}
\begin{description}
\item[Bias] For most simulation settings, the bias is negligibly small. The only exception is a small sample size in conjunction with a rather large softplus parameter $a$ where we can observe a small bias, especially for the intercept. However, one has to keep in mind that a large parameter $a$ implies an almost linear link function such that there is considerably less variability (and therefore information) in data sets with softplus response function as compared to the exponential response function with the same linear predictor. Furthermore, the softplus function maps even small negative values to a positive value that is close to zero and thus close to the boundary of the parameter space. The bias quickly diminishes as the sample size increases.
\item[Coverage rates] Figure~\ref{fig:cis} supports that our Bayesian approach provides valid credible intervals when the number of observations is large enough. For smaller sample sizes, the coverage rates suffer from the bias introduced by using larger values for the softplus parameter.
\end{description}
In short, the results obtained with the softplus response function are reliable.
Especially for larger sample sizes, no biases are observed and the coverage rates behave as expected.
Results of this simulation exercise obtained with maximum likelihood inference are virtually identical and are omitted for the sake of brevity.
\subsection{Model Selection Based on DIC}\label{sec:model_sel_dic}
In this simulation setting, we study how successfully the well established deviance information criterion \citep[DIC,][]{spiegelhalterBayesianMeasuresModel2002} can be used to discriminate between data generated by either the softplus or the exponential response function.
As in the last subsection, we vary the sample size and use $a=1$~or $a=5$~for the softplus parameter.
Each scenario is replicated 500~times and, as before, we run one MCMC chain with a burn-in phase of 2000~iterations and 10000~sampling iterations.
In Figure~\ref{fig:selection_dic}, we present the results summarized as percentages of correct model selection.
In addition, we consider a more conservative model decision rule where a minimum difference in DIC has to be achieved and use 1, 10, 100 as threshold values.
In all settings, a larger sample size leads to the correct model being recognized more frequently.
Furthermore, for a given sample size, the correct model is identified more easily if the data are generated with the exponential response function.
As described above, this can be attributed to the fact that the information per observation (quantified by the expected Fisher information) is larger when generated with the exponential response function than with the $\softplus$ response function with $a=5$.
Thus, a larger sample size is needed to have the same probability to select the correct model.
Yet, our simulations show that the DIC is a reliable metric to differentiate between the softplus response function and the exponential response function.
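The DIC itself is straightforward to compute from MCMC output. The following generic sketch (function names are ours; additive constants in the deviance cancel when comparing models with the same response distribution) uses the definition $\operatorname{DIC} = \bar{D} + p_D$ with $p_D = \bar{D} - D(\bar{\thetavec})$:

```python
import numpy as np

def dic(loglik_samples, loglik_at_posterior_mean):
    # Deviance D(theta) = -2 * loglik(theta); DIC = Dbar + pD, where Dbar
    # is the posterior mean deviance and pD = Dbar - D(posterior mean)
    # is the effective number of parameters.
    d_bar = float(np.mean(-2.0 * np.asarray(loglik_samples)))
    p_d = d_bar + 2.0 * loglik_at_posterior_mean
    return d_bar + p_d, p_d
```

For a log-likelihood that is concave in the parameters, Jensen's inequality guarantees $p_D \geq 0$.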
\begin{figure}[tb]\centering
\includegraphics[width=\textwidth,keepaspectratio]{arxiv-files/plot_selection_dic.pdf}
\caption{
\label{fig:selection_dic}
Percentages of correct model selections based on DIC differences with thresholds 0, 1, 10 and 100 when data are generated with the exponential response function (top row) and the softplus response function (second and third row).
}
\end{figure}
\section{Applications}\label{sec:applications}
We present four applications to demonstrate how the softplus response function can be used in practice.
We contrast our novel approach to the commonly used exponential response function.
First, we employ a well-known data set from ethology about horseshoe crab mating behavior as an illustrative example for count data regression with the softplus response function (Section~\ref{sec:horseshoe_crabs}).
Then, we illustrate the usefulness of the softplus response function in a distributional regression model with smooth effects (Section~\ref{sec:capital_bikes}).
For that, we fit a model to data from a bike-sharing service in Washington D.C., where the softplus function can be used as a response function for the variance parameter of a normally distributed outcome.
Next, we provide details on the motivating example shown in the introduction. Using data from the Munich rent index, we demonstrate similarities between results obtained from quantile regression and the softplus response function (Section~\ref{sec:munich-rents}).
In an application to operational loss data (Section~\ref{sec:jh-application}), we demonstrate the usefulness of the softplus response function apart from the quasi-additive interpretation.
In the supplementary material, we revisit the horseshoe crab data, estimating the limiting gradient of the softplus function, which suggests the use of a linear response function.
Furthermore, we analyze data from the Australian Health Survey 1977--1987 focusing on the number of physician consultations within two weeks in relation to explanatory variables as another illustrative example.
\subsection{Horseshoe Crabs}\label{sec:horseshoe_crabs}
\cite{brockmannSatelliteMaleGroups1996} investigates horseshoe crab mating behavior.
Horseshoe crabs have a strongly male-biased sex ratio which becomes particularly apparent in spring when male and female horseshoe crabs arrive in pairs at the beach ready to spawn.
Unattached males also come to the beach, gather around females and try to gain fertilization at the expense of the attached males.
\cite{brockmannSatelliteMaleGroups1996} shows that the number of unattached males, so-called satellites, gathering around a couple depends mainly on properties of the female crab, and to a lesser extent on environmental factors.
\cite{agrestiCategoricalDataAnalysis2013} and \cite{kleiberVisualizingCountData2016} reanalyze these data using count data regression techniques to model the number of satellite males.
\cite{agrestiCategoricalDataAnalysis2013} assumes the response to be Poisson or negative binomial distributed and for each response distribution he compares the exponential response function and the identity response function.
Among these four models, he finds that the negative binomial regression model with identity response function fits the data best.
\cite{kleiberVisualizingCountData2016} extend this approach by using hurdle models to allow excess zeros.
The authors favor the negative binomial hurdle model with exponential response function.
However, they omit results for the identity response function since they claim that the negative binomial hurdle model is superior with respect to predictions compared to the negative binomial model with identity response function favored by \cite{agrestiCategoricalDataAnalysis2013}.
They argue that, in contrast to the identity response function, the exponential response function avoids negative predictions for small carapace widths.
The softplus function prevents negative predictions as well.
To illustrate how the softplus function can be used to model the bounded expectation of a count data model, we extend the analyses mentioned.
For that, we use the softplus function with $a=5$ as a response function in negative binomial regression models with and without accounting for excess zeros.
Following \cite{kleiberVisualizingCountData2016}, the carapace width and a numeric coding of the color variable are used as regressors in all models.
All models are fitted with \texttt{bamlss} using uninformative priors on all coefficients.
To compare the relative performances of the eight models, we use the DIC (see Table~\ref{tab:hs-dic}).
Similar to \cite{kleiberVisualizingCountData2016}, we find that the negative binomial hurdle models fit best and the DIC slightly favours the softplus response function.
The small difference in fit between the response functions is not surprising since \cite{kleiberVisualizingCountData2016} already point out that, given at least one satellite, neither carapace width nor color seem to have a significant contribution.
Note that an intercept-only model does not depend on the response function used, since the intercept parameter can adapt to the response function, yielding the same distribution parameter.
Consequently, the limited impact of the response function in the zero-adjusted model is expected.
\input{arxiv-files/hs_dic_tab}
Nonetheless, the application gives insight to the usefulness of the softplus function.
In Figure~\ref{fig:hs-expectations}, we display the expected number of satellites predicted as a function of carapace width with color set to the mean value.
When considering the negative binomial regression, one can clearly observe the differently shaped curves reflecting the response function employed.
A visual examination suggests that the exponential response function might not decay fast enough for small values of carapace width while increasing too fast for large values.
In contrast, the softplus response function seems to fit better when compared to the pattern arising from the model with zero-adjusted negative binomial response distribution (i.e., the hurdle model).
In particular, when considering the probabilities of observing zero satellites ($\operatorname{P}(y = 0)$; represented as dashed lines in Figure~\ref{fig:hs-expectations}), the model based on the softplus function is closer to the output from the zero-adjusted response distribution.
This is especially true for small width values of the carapace.
This is due to the fact that the softplus function with $a = 5$ approaches zero much faster than the exponential response function does.
Furthermore, quantile-quantile plots (QQ-plots) of the randomized quantile residuals \citep[RQRs,][]{dunnRandomizedQuantileResiduals1996} indicate a decent fit to the data for all models with a preference for the hurdle model (see Figure~\ref{fig:hs-qq} for one realization).
\begin{figure}[hbt]
\centering
\includegraphics[width=1\textwidth]{arxiv-files/hs_fig4.pdf}
\caption{Plots of the expected response given carapace width and mean color for different response distributions broken down by response function. ZA indicates the zero-adjusted response distribution. In addition, the dashed line indicates the probability of 0 satellites. The points show the observed data.}\label{fig:hs-expectations}
\end{figure}
When removing the zero-adjusted model from our considerations, the DIC suggests that the model with softplus response function has an advantage.
This finding is in line with \cite{agrestiCategoricalDataAnalysis2013} and his claim of a better fit using the identity response function compared to the exponential response function.
By fitting Poisson models with softplus response function and exponential response function, we can confirm the results from \citet{agrestiCategoricalDataAnalysis2013}, i.e., the quasi-linear response function fits the data better in terms of DIC.
However, we omit the results here because \citet{kleiberVisualizingCountData2016} have already pointed out that the Poisson response distribution can not appropriately model the data.
\input{arxiv-files/hs_coef_tab}
To illustrate the difference in the interpretation of softplus and exponential response function, we focus on the model assuming a negative binomial distributed response without adjusting for zeros, since the impact of the different response functions becomes almost indistinguishable when adjusting for zeros.
Posterior means of the parameters are displayed in Table~\ref{tab:hs-coef} together with the corresponding 95\% credible interval (equal-tailed).
For a change of $0.53$, the linear threshold, as defined in Section~\ref{sec:linear-part}, is $0.37$, while for a change of $-0.54$, its value is $0.91$.
Notice that more than $98\%$ and $94\%$ of the posterior means of the linear predictor are larger than these linear thresholds, respectively.
Thus, we consider the linear interpretation of the covariate effects of width and color as valid for almost all observations.
In particular, a change by one unit in carapace size or color would change the expected number of satellites by $0.53$ or $-0.54$, respectively.
This is in contrast to the interpretation of the exponential response function where the same changes would lead to a multiplicative change of $1.20$ and $0.77$, respectively.
The 95\% credible interval of the effect of color includes $0$ for the softplus response function but not for the exponential response function.
In both cases, however, the null effect is very close to the credible interval's boundary.
\begin{figure}[bt]
\centering
\includegraphics[width=\textwidth]{arxiv-files/hs_qq.pdf}
\caption{QQ-plots of one realization of RQRs for negative binomial distributed responses without and with zero-adjustment (indicated as ZA) employing the softplus or the exponential response function.}
\label{fig:hs-qq}
\end{figure}
\subsection{CapitalBikeshare}\label{sec:capital_bikes}
In this section, we demonstrate the applicability of the softplus function as a response function in a Bayesian distributional regression model with flexible covariate effects.
We employ data from CapitalBikeshare, a bicycle-sharing service located in Washington D.C., to analyze the mean rental duration in minutes within each hour in the years 2016--2017\footnote{The raw data can be found at https://www.capitalbikeshare.com/system-data}.
The operator might want to predict the number of trips and their expected duration in order to know how many bikes have to be stocked.
However, the variance of the average journey time is also important, as it can prevent bottlenecks caused by unforeseen fluctuations.
The data have been preprocessed by the following rules:
\begin{itemize}
\item Trips taken by staff for service and inspection of the system have been removed as well as trips towards test stations.
\item Trips taken by non-members have been removed.
\item All trips with a duration of fewer than 60~seconds have been removed since they most likely indicate a false start or users ensuring that the bike is secure by redocking it.
\item Trips longer or equal to 60 minutes have been removed. This amounts to roughly 0.5\%~of the eligible trips. We consider them as outliers since the financial incentive system of CapitalBikeshare strongly encourages users to return bikes within the first hour.
\end{itemize}
The mean rental duration per hour is on average based on $308.24$~trips.
A raw descriptive analysis of this quantity gives an average of $10.9$~minutes with a standard deviation of $1.88$~minutes.
The framework of structured additive distributional regression models \citep{rigbyGeneralizedAdditiveModels2005,umlaufBAMLSSBayesianAdditive2018} extends generalized additive models such that multiple parameters of a response distribution can be modeled with structured additive predictors and suitable response functions.
For our analysis, we assume the mean rental duration to be conditionally independent and normally distributed.
We model both distributional parameters (mean and standard deviation) with structured additive predictors.
In particular, the mean rental duration within each hour $y_i$ is assumed to be independently and normally distributed with mean $\mu_i$ and standard deviation $\sigma_i$.
The parameters are linked to predictors ($\eta^{\mu}_i$, $\eta^{\sigma}_i$) via response functions $\h^{\mu}$ and $\h^{\sigma}$.
We use the same structure for both predictors and drop the superscript index in the following.
The predictor is specified as $$\eta_i = f_1(\texttt{yday}_i) + f_2(\texttt{dhour}_i) + \xvec_i'\betavec,$$ where \texttt{yday} denotes the day of the year, \texttt{dhour} denotes the hour of the day and the last term contains the intercept and additional linear effects.
As linear effects, we consider a dummy variable for the year 2017 and a binary variable that encodes if the trip took place on a weekend.
The smooth functions are represented by cyclic P-splines \citep{eilersFlexibleSmoothingBsplines1996,hofnerUnifiedFrameworkConstrained2016} with second-order random walk penalty \citep{langBayesianPSplines2004}.
To illustrate the difference in interpretation between the softplus response function and the popular exponential response function, we estimate the model for both response functions, i.e.\ $\h^{\sigma} = \exp$ or $\h^{\sigma} = \softplus_{10}$.
The DIC favors the softplus response function (exponential: $58152$, softplus: $57943$).
The softplus parameter was not chosen on the basis of an information criterion, but rather to enable the quasi-additive interpretation.
Detailed results concerning the mean predictor and its components are omitted since both models employ the same response functions and the results are very similar (see the Supplementary Material for a full description).
We focus on the effect of the response function with respect to the standard deviation $\sigma_i$
and, in particular, on the smooth effect of \texttt{dhour} and the linear effect of \texttt{weekend}.
Figure~\ref{fig:cbs-sigma-dhour} shows the estimated effect of the time of the day on the predictor of the standard deviation.
We find that both models yield similar patterns.
The standard deviation is much larger in the early hours of the day with a peak around 3~am, then drops steeply, crosses the zero line shortly after 5~am and is comparatively low in the morning.
Over the course of the morning, the standard deviation increases slightly until lunchtime, then decreases over the early afternoon before starting to increase again around 4~pm, at first slightly and then very steeply, until it reaches its peak again in the early morning hours.
The direct interpretation of these effects is difficult, especially when using the exponential response function.
For the softplus model, the estimated values of the linear predictor are larger than $0.42$ and in conjunction with a softplus parameter of 10, covariate effects can be interpreted as quasi-additive effects on the parameter (the relative error for a change of $0.0001$ at predictor value $0.42$ is smaller than $2\%$).
In the following, consider the difference between the initial peak at 2.5~am and the second peak at lunch time.
In the model with the softplus response function, we observe that the predictor decreases by about 2.7~units and, consequently, so does the standard deviation.
In contrast, for the competing model, the exponential function must be applied to the predictor, and the outcome can subsequently be interpreted multiplicatively.
The exponential model outputs an additive change of $-1.25$ on the predictor $\eta^{\sigma}$, which is reflected in a multiplicative change of the standard deviation by $3.5^{-1}$.
When considering the variable \texttt{weekend} (Table~\ref{tab:cbs-sigma-le}), its effect is similar in both models: the mean rental duration exhibits more variance during the weekend, and both 95\% credible intervals exclude zero.
However, the interpretation of the exponential model is not straightforward: the posterior mean of the regression coefficient related to \texttt{weekend} is 0.39.
In order to assess the multiplicative effect of weekend on the standard deviation, one needs to consider the posterior mean of the transformed parameter, that is $\overline{\exp(\beta_\text{weekend})} = 1.48$.
We conclude that on a weekend the standard deviation is 1.48~times larger than on weekdays.
The softplus model directly outputs the additive effect of weekend.
We expect that the standard deviation of the mean rental duration is $0.56$~minutes larger on weekends than on working days.
\input{arxiv-files/cbs-sigma-le}
\begin{figure}[t]\centering
\includegraphics[width=\textwidth]{arxiv-files/cbs_sigma_dhour.pdf}
\caption{\label{fig:cbs-sigma-dhour} Posterior mean estimates on the predictor of the standard deviation together with 95\% point-wise credible intervals (equal-tailed) for both response functions.
%
}
\end{figure}
In both models, the interpretation of regression effects w.r.t.\ the predictor $\eta^{\sigma}$ is straightforward and the nature of the effects w.r.t.\ the standard deviation is known (i.e., additive or multiplicative).
Despite this, the combination of effects and their interaction with the response function makes an assessment of the absolute effects on the distribution parameter difficult.
However, it becomes more apparent when we consider plots of the predicted parameter values.
In Figure~\ref{fig:cbs-pred-sigma}, we show the predicted values (using the posterior mean of the estimated parameters) for $\sigma$ over the course of two selected days of the year 2016 (these are 1st of January and 1st of July).
We further add the effect of \texttt{weekend} and display the predicted values for both models.
We observe that both models output similar values for a weekday on the first of July. Even the spike in the early morning appears similar.
In the winter or on a weekend, the standard deviation is larger in both models (exponential model: 1.48~times larger on weekends, 1.31~times larger on the 1st of January; softplus model: 0.56~minutes larger on weekends, 0.23~minutes larger on the 1st of January).
The difference between the models is most apparent at the 3am peak where it is now about one minute.
The almost explosive behaviour of the exponential function becomes apparent when now considering the combined effect (left panel in Figure~\ref{fig:cbs-pred-sigma}~B).
The difference between the peak at noon and in the morning is almost 5 minutes with the exponential function (that is a 3.5~fold increase) and just 2.75 minutes with the softplus function.
Again, for the remaining time of the day both models output relatively small differences.
Compared to the right panel in~A, we find a $6.75$-fold increase ($\exp(0.27 + 0.39 + 1.25) = 6.75$) between noon and the morning peak with the exponential function. Due to the additive nature of the softplus function, the effects are not multiplied and the increase is just $4.53$-fold (4.49~minutes compared to 0.99~minutes).
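The difference between the two links can be checked numerically. The following Python sketch (the predictor value \texttt{eta} and the softplus parameter \texttt{a} are made-up illustration values, not taken from the fitted model) shows that effects multiply under the exponential response function, while under the softplus function with a moderately large $a$ they combine quasi-additively on positive predictors:

```python
import math

def softplus(x, a=1.0):
    # softplus response function: log(1 + exp(a*x)) / a,
    # which approaches max(x, 0) as a grows
    return math.log1p(math.exp(a * x)) / a

effects = [0.27, 0.39, 1.25]  # coefficients reported for the exponential model

# Exponential link: the combined effect is the product of the individual ones.
exp_combined = math.exp(sum(effects))                     # ~6.75-fold increase
exp_product = math.prod(math.exp(b) for b in effects)     # same number

# Softplus link (a = 5, eta = 2 chosen for illustration): for positive
# predictors the shift on the parameter scale is close to the sum of effects.
eta = 2.0
sp_shift = softplus(eta + sum(effects), a=5.0) - softplus(eta, a=5.0)
# sp_shift is approximately sum(effects) = 1.91, i.e. additive, not 6.75-fold
```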
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{arxiv-files/cbs-pred-sigma}
\caption{Predicted standard deviation over the course of a day. Exemplary for the first day of the year~2016 and July 1st on a working day (Panel~A) and on a weekend (Panel~B). The solid line refers to predictions for the model with exponential response function while the dashed line refers to the model with softplus response function.}
\label{fig:cbs-pred-sigma}
\end{figure}
From this application, we conclude that the softplus function can be used as an alternative response function in distributional regression for a bounded parameter.
Using the softplus function can lead to a better model fit.
In addition, due to the limited growth rate of the softplus function, it can avoid the explosive behaviour of the exponential function.
Besides, it can offer the quasi-additive interpretation of regression effects.
Even if both functions fit equally well, the softplus function remains a viable alternative, and it is then up to the practitioner to decide which one to prefer.
\subsection{Munich Rent Index}\label{sec:munich-rents}
In Munich, tenants have to pay some of the highest prices per square meter for housing across Germany.
To provide tenants and landlords with reference prices, the city has established a rent index. %
Using data with 3082~observations from the Munich rent index available on the website accompanying \citet{fahrmeirRegression2013},
we relate the rent in Euros to the size in square meters of the living area.
The data shows significant heteroskedasticity, indicated by increasing variance in the response as the size of the flat increases (see Figure~\ref{fig:munich-rent}).
Therefore, a simple linear model with constant variance is inappropriate.
Besides linear quantile regression \citep{koenkerQuantileRegression2005}, we additionally use two versions of location-scale models within the framework of distributional regression relating the standard deviation to the covariate.
Since the standard deviation is strictly positive, a response function enforcing positivity of the transformed predictor should be used.
We compare the performance of the location-scale models equipped with the softplus response function (with $a$ fixed to 30) and exponential response function assuming normally distributed responses.
The linear predictors for the expectation, $\beta_0 + \beta_1 \texttt{area}_i$, and the standard deviation, $\gamma_0 + \gamma_1 \texttt{area}_i$, feature only an intercept besides the linear effect of area.
We use flat priors on the parameters and fit the model with \texttt{bamlss}.
In terms of DIC, we find the model with softplus response function fits the data better than the competing model (39281 vs.\ 39301).
A possible explanation seems to be the overestimated variance of the response for flats larger than approximately 100~square meters.
We draw this conclusion by visually comparing the estimated quantiles of the response to the data shown in Figure~\ref{fig:munich-rent}.
Additionally, the figure highlights the explosive behavior of the exponential function for larger arguments which the softplus function avoids.
In particular, the softplus function translates the linear effects in the predictor to the estimated quantiles since the quantile function of a normal random variable with mean $\mu$ and standard deviation $\sigma$ is linear in the standard deviation
\[
F^{-1}(p) = \mu + \sqrt{2}\, \sigma \operatorname{erf}^{-1}(2p - 1)
\] where $\text{erf}^{-1}$ denotes the inverse error function.
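This linearity in $\sigma$ can be verified directly. A minimal Python check (using the standard library's \texttt{statistics.NormalDist}, whose \texttt{inv\_cdf} computes $F^{-1}$; the values of $\mu$, $\sigma$, and $p$ are arbitrary):

```python
from statistics import NormalDist

# Quantile of N(mu, sigma) is linear in sigma:
# F^{-1}(p) = mu + sigma * z_p, where z_p = sqrt(2) * erfinv(2p - 1)
# is the standard normal quantile.
mu, sigma, p = 10.0, 3.0, 0.95
z_p = NormalDist().inv_cdf(p)            # standard normal quantile
q = NormalDist(mu, sigma).inv_cdf(p)
assert abs(q - (mu + sigma * z_p)) < 1e-9
```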
We further observe that quantiles estimated using the softplus response function are roughly comparable to the results obtained via quantile regression \citep{koenkerQuantregQuantileRegression2021}.
However, the estimated quantiles depend on the assumed response distribution.
Here, this is the normal distribution and, therefore, the quantiles are symmetric w.r.t.\ the median.
The non-parametric quantile regression model avoids this restriction since it requires no distribution assumption.
However, using a parametric regression model has the advantage over non-parametric quantile regression of preventing undesirable artifacts such as quantile crossing (although we do not observe this in our application due to the relatively simple model structure that we employ).
\subsection{Operational Losses at UniCredit}\label{sec:jh-application}
In this section, we demonstrate the usefulness of the softplus function in the context of a distributional regression model.
To do so, we employ the data used in \cite{hambuckers2018} where the authors model the size distribution of extreme operational losses in a large Italian bank (i.e., losses stemming from fraud, computer crashes or human errors) given a set of economic factors (e.g., interest rates, market volatility or unemployment rate).
This conditional distribution is then used to estimate a quantile at the 99.9\% level, a quantity needed to establish the regulatory capital held by the bank, with large quantile values requiring more capital, and to monitor operational risk exposure in various economic situations, such as a financial crisis or economic expansion periods.
Since operational loss data are heavy-tailed and the focus is on extreme value dynamics, distributional regression techniques are needed to properly reflect the effect of the covariates on extreme quantiles.
Following \cite{chavez2016}, an approach based on extreme value theory is traditionally used: a high threshold~$\tau$ is defined by the statistician, and only losses larger than this threshold are kept for the analysis.
Then, we assume that the distribution of the exceedances above the threshold is well approximated by a Generalized Pareto distribution (GPD).
In the context of extreme value regression, the parameters of the GPD are additionally modeled as functions of covariates, defining a Generalized Pareto (GP) regression model.
Estimated parameters of this model are used to derive the quantile of interest given values of the covariates.
For mathematical and conceptual reasons, both parameters of the GPD are restricted to strictly positive values: the scale parameter $\sigma(x)$ is strictly larger than 0, whereas the shape parameter $\gamma(x)$ is restricted to positive values to guarantee the consistency of the maximum likelihood estimator and to reflect the tail-heaviness of the loss distribution.
Thus, an exponential response function is commonly used for computational simplicity, although no theoretical support for a multiplicative model exists (see, e.g., \cite{umlaufPrimerBayesianDistributional2018, hambuckersQF, bee2019} and \cite{groll2019}).
However, this choice for $\gamma(x)$ might quickly generate explosive quantile estimates for some combinations of the covariates, making the model economically unexploitable to derive capital requirements.
In addition, it can have a similar undesired effect on uncertainty quantification: the width of the confidence interval on the quantile increases exponentially with the estimated quantile itself.
Consequently, it is in times of high estimated risk exposure (i.e., large values of the 99.9\% quantiles) that risk managers face the highest model uncertainty to take decisions.
To illustrate how the softplus function helps mitigating these issues, we reanalyze the UniCredit loss data for three categories of operational losses, namely the categories \textit{execution, delivery and process management} (\texttt{EDPM}), \textit{clients, products, and business practices} (\texttt{CPBP}) and \textit{external fraud} (\texttt{EFRAUD}).
The data were collected over the period January 2004 - June 2014.
As in \cite{hambuckers2018}, we work with the 25\% largest losses in each category. Descriptive statistics and histograms of the data are provided in Table~\ref{tab:ol-desc} and Figure~\ref{fig:gpd_histo}.
They both highlight the presence of extreme values that need to be accounted for. For each loss registered during a given month, we associate the values taken by a set of economic covariates observed the month before and found in \cite{hambuckers2018} to influence the loss distribution (the complete list can be found in the Supplementary Material).
Denoting by $y_{i}=z_{i}-\tau$ the exceedance of a loss $z_{i}$ above the threshold $\tau$, and by $\mathbf{x}_{\gamma,i}$ and $\mathbf{x}_{\sigma,i}$ the corresponding vectors of covariates for both $\gamma$ and $\sigma$, our model can be written in a generic form as
\begin{align*}
y_{i} &\sim G(\gamma(\mathbf{x}_{i}),\sigma(\mathbf{x}_{i})),\\
\gamma(\mathbf{x}_{i})&=h^{\gamma}(\mathbf{x}_{\gamma,i}^{'}\pmb{\beta}_{\gamma}),\\
\sigma(\mathbf{x}_{i})&=h^{\sigma}(\mathbf{x}_{\sigma,i}^{'}\pmb{\beta}_{\sigma}),
\end{align*}
with $G(\cdot)$ denoting the cumulative distribution function of the GPD, and $\pmb{\beta}_{\gamma}$ and $\pmb{\beta}_{\sigma}$ being the vectors of regression parameters for $\gamma$ and $\sigma$, respectively.
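To make the role of the response functions concrete, the following Python sketch evaluates the two links and the resulting GPD quantile for one observation. This is a minimal numerical illustration, not the \texttt{bamlss} implementation; the predictor values \texttt{eta\_gamma}, \texttt{eta\_sigma}, and the softplus parameter $a=5$ are made-up:

```python
import math

def softplus(x, a=1.0):
    # response function h guaranteeing a strictly positive parameter
    return math.log1p(math.exp(a * x)) / a

def gpd_quantile(p, gamma, sigma):
    # quantile of the Generalized Pareto distribution (gamma > 0):
    # G^{-1}(p) = (sigma / gamma) * ((1 - p)^(-gamma) - 1)
    return sigma / gamma * ((1.0 - p) ** (-gamma) - 1.0)

# hypothetical linear predictors x' beta for one observation
eta_gamma, eta_sigma = -1.2, 0.8
gamma = softplus(eta_gamma, a=5.0)   # stays positive even for negative eta
sigma = math.exp(eta_sigma)          # exponential response kept for sigma
q999 = gpd_quantile(0.999, gamma, sigma)   # quantile used for capital requirements
```

The bounded growth of \texttt{softplus} for large predictors is what prevents the explosive quantile estimates discussed below.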
We fit separate GP regression models to each sample with \texttt{bamlss} using 24000~MCMC iterations, treating the first 4000~iterations as burn-in and applying a thinning factor of~20.
We compare the results obtained with various response functions $h^{\gamma}$ (we keep the exponential function for $h^{\sigma}$). Estimated regression parameters can be found in the Supplementary Material. We report the DIC in Table~\ref{tab:gpd_infocrit}, whereas Figure~\ref{fig:gpd_qqplot} displays the QQ-plots of the RQRs.
They both indicate that the overall goodness-of-fit is satisfactory and similar across models, with a slight preference for the softplus models for \texttt{EFRAUD}, and an advantage of the exponential model for the other categories.
\input{arxiv-files/ol_desc_stats}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{arxiv-files/ol_hist.pdf}
\caption{Histograms of the log-losses larger than the 75\% quantile per category, for the three event types.}\label{fig:gpd_histo}
\end{figure}
\begin{table}[htbp]
\centering
\begin{small}
\begin{tabular}{l|cccccc}
\toprule
Category & Exp. & softplus $1$ & softplus $5$ & softplus $10$ & Null model\\
\midrule
\texttt{CPBP}
& \underline{23,593.24} & 23,594.99 & 23,598.48 & 23,603.37 & 23,626.18 \\
\texttt{EDPM}
%
& \underline{16,532.26} & 16,532.77 & 16,535.42 & 16,540.51 & 16,542.68\\
\texttt{EFRAUD}
%
& 6,547.11 & \underline{6,545.59} & 6,545.89 & 6,546.31 & 6,567.46\\
\bottomrule
\end{tabular}
\end{small}
\caption{Deviance information criterion for the different models. \textit{Null model} refers to the exponential model with no covariates. %
}\label{tab:gpd_infocrit}
\end{table}
\begin{figure}[tb]
\includegraphics[width=\textwidth]{arxiv-files/ol_qq.pdf}
\caption{QQ-plots of the RQRs for the different models. The number in parentheses refers to the employed goodness of approximation parameter in the softplus function.}\label{fig:gpd_qqplot}
\end{figure}
However, looking first at the predicted values of the 99.9\% quantile of the conditional distributions, models based on the softplus functions generate fewer outliers than the exponential function (Figure~\ref{fig:gpd_boxplot}): the largest predicted quantiles are between 1.5~and 3471~times smaller with the softplus models than with the exponential model.
Whereas UniCredit is exposed to extremely high (and unrealistic) capital requirements if it uses the exponential model, this issue is well mitigated with the softplus model.
This effect is particularly strong for \texttt{EFRAUD}.
Second, looking at the size of the confidence intervals for the 99.9\% quantiles, we observe a clear trend: in Figure~\ref{fig:gpd_CI}, we show the ratio between the size of the confidence intervals obtained with the softplus functions and those obtained with the exponential function, with values smaller than 1 indicating an advantage for the softplus function.
For large values of the estimated quantile, we obtain much narrower confidence intervals with the softplus functions, with most ratios below 1.
This result implies that, in times of financial stress characterized by high values of the quantiles, the softplus models deliver more informative estimates.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{arxiv-files/ol_pred_99.pdf}
\caption{Box plots of the estimated 99.9\% quantiles of the conditional loss size distribution (x-axis is in log-scale).}\label{fig:gpd_boxplot}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{arxiv-files/ol_ci_ratios.pdf}
\caption{Ratio of the sizes of the confidence intervals (sp / exp). Left panel: ratio expressed as a function of the softplus posterior mean estimate. Right panel: ratio expressed as a function of the exponential posterior mean estimate. Each row displays the results with the softplus parameter set to the value indicated on the right ($a = 1, 5, 10$).}\label{fig:gpd_CI}
\end{figure}
Finally, we investigate whether these results also imply a better fit of the models based on the softplus function for the observations far in the tail. To do so, we report the Anderson-Darling~(AD) statistics~\citep[][]{stephensEDFStatisticsGoodness1974} computed on the RQRs (Figure \ref{fig:ad_stat}). Compared to AIC or DIC, the AD statistic gives more weight to extreme residuals and is therefore routinely used to assess the goodness-of-fit of extreme value regression models \citep{choulakian2001,bader2018}. We observe a better fit of the softplus functions for the CPBP and EFRAUD categories. The fit is rather similar for EDPM, although slightly better for the exponential model.
Overall, this application demonstrates the usefulness of the softplus function to prevent outliers among estimated quantities of interest (in the present case, a quantile far in the tail) when there is no justification for a multiplicative model.
In addition, it shows that the softplus models provide similar global goodness-of-fit levels but dramatically reduce the uncertainty around large estimated quantities of interest, a desirable feature for end-users.
\begin{figure}[tbp]
\includegraphics[width=\textwidth]{arxiv-files/oladstat.pdf}
\caption{Anderson-Darling statistics obtained from the RQRs for the three categories.}\label{fig:ad_stat}
\end{figure}
\section{Summary and Conclusion}\label{sec:sumconc}
In this paper, we introduce the softplus response function and showcase its applicability in a broad range of statistical models.
The novel response function ensures the positivity of the associated distribution parameter while allowing for a quasi-additive interpretation of regression effects for the majority of the relevant predictor space.
We highlight the interesting theoretical properties of the softplus response function, justify the quasi-additive interpretation and give a guideline to assess the validity of this interpretation.
Particular emphasis is placed on demonstrating the straightforward quasi-linear and quasi-additive interpretation of covariate effects with several applications.
Furthermore, we highlight that the limited growth rate of the softplus response function can prevent outliers in predictions and, thus, can reduce prediction uncertainty.
Thereby, we show that the new response function is applicable to a great variety of model classes and data situations.
Our simulation studies demonstrate that the softplus function behaves well as a response function with no noticeable shortcomings.
Estimates are consistent and our Bayesian approach yields reliable credible intervals.
Furthermore, we show that information criteria can be used to distinguish between data generated by the exponential and the softplus response function.
We do not claim that the softplus function is, in general, a better response function than the exponential function.
However, we feel that having a quasi-additive alternative available is useful at least for empirical verification.
Since the implementation of the softplus response function is easy and straightforward,
we are optimistic that it will become available in more software for regression modeling (it is already included in the R-packages \texttt{brms} \citep{JSSv080i01} and \texttt{bamlss} \citep{umlaufBAMLSSBayesianAdditive2018}).
Thus, researchers can benefit from employing the softplus response function in the future.
The work of \citet{weissSoftplusINGARCHModel2021}, evaluating the softplus response function in the context of INGARCH models, is a first indication of the statistical community's interest in the novel response function.
\section*{Acknowledgements}
Financial support from the German Research Foundation (DFG) within the research project KN 922/9-1 is gratefully acknowledged.
Julien Hambuckers acknowledges the financial support of the National Bank of Belgium.
\bibliographystyle{foo}
\section{Further}
There are several variants on the constructions considered here. For
example, we could consider the cylinder $A$, equipped with $m$
vertical lines separated by $O$-markings. The endomorphism algebra, in
$\Sym^k(A)$ of the corresponding Lagrangians can be thought of as a
more symmetric, circular analogue of the pong algebra (i.e. one
without left and right walls), discovered by Manion and
Rouqier~\cite{ManionRouquier}.
In our notation, to each integer $m\geq 1$ and $1\leq k\leq m$, one
can consider a differential graded algebra $A(m,k)$ over
$\Field[v_1,\dots,v_m]$, which could be called the {\em asteroids
algebra}, defined as follows.\footnote{The choice of terminology for
our algebras can be taken as evidence for misspent youth.} (This
algebra was first considered in unpublished joint work of the first author with
Robert Lipshitz and Dylan Thurston, when considering knot Floer homology
for toroidal grid diagrams; compare~\cite{Gentle, PetkovaVertesi}. It is also closely related to the
``differential graded nil Hecke algebras associated to the extended
affine symmetric groups'' of~\cite{ManionRouquier}.) Consider the
circle $\R/m\Z$, equipped with basepoints
$\{\OneHalf,\dots,\OneHalf+m-1\}$ corresponding to $v_1,\dots,v_m$. An
idempotent state corresponds to a $k$-element subset of
$\{1,\dots,m\}\subset \R/m\Z$; or, equivalently, an $m\Z$-invariant
subset ${\widetilde S}\subset \Z$. A ($m\Z$-lifted) partial
permutation, now, consists of such a subset ${\widetilde S}$, and a
map $f\colon {\widetilde S}\to \Z$ satisfying $f(x+m)=f(x)+m$. A {\em
crossing} consists of a pair of integers $i<j$ so that
$f(i)>f(j)$. Once again, there is a Maslov grading that counts the
number of crossings. There is a multiplication map induced from
composition of $m\Z$-lifted partial permutations, which is set to $0$
if the Maslov grading of the product is smaller than the sum of the
Maslov gradings of the factors. Similarly, there is a differential
that resolves crossings, containing only those terms whose resolutions
have exactly one fewer crossing. Verifying that the result is a
differential graded algebra is straightforward, and slightly simpler
than the corresponding verification for the pong
algebra. (Compare~\cite[Section~4]{Pong}.) Weights are at points in
$\frac{\OneHalf+\Z}{m\Z}$, with the weight of $a\in \OneHalf+\Z$ given
by
\[ \OneHalf (\#\{i\big| i<a<f(i)\} + \#\{i\big| f(i)<a<i\}).\]
For example, consider the $3\Z$-lifted partial permutation $f$ with
domain $\{1+3\Z,2+3\Z\}$, which is determined by
\begin{equation}
f(1)=6 \qquad{\text{and}}\qquad
f(2)=1.
\label{eq:Deff}
\end{equation}
This has two crossings: the equivalence class of the pair of strands
starting at $1$ and $2$; and the pair of strands starting at $-2$ and
$3$. The weight vector is given by $(1/2,3/2,1)$. See
Figure~\ref{fig:Asteroids} for a picture.
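The crossing count and weight vector in this example can be checked by brute force. The following Python sketch (ours, not part of the construction; the window of translates is an ad-hoc truncation, wide enough that no crossing or strand near the relevant levels is missed) recovers the two crossing classes and the weight vector $(1/2,3/2,1)$:

```python
# Brute-force check of the asteroids-algebra example: m = 3, f(1) = 6, f(2) = 1,
# extended to a 3Z-lifted partial permutation by f(x + 3) = f(x) + 3.
m = 3
f0 = {1: 6, 2: 1}

def f(i):
    rep = 1 if i % m == 1 else 2      # the domain residues are 1 and 2 mod 3
    return f0[rep] + (i - rep)

window = [i for i in range(-30, 31) if i % m in (1, 2)]

# Crossing classes: pairs i < j with f(i) > f(j), taking the representative
# whose left strand i lies in the fundamental domain {1, 2}.
crossings = sum(1 for i in (1, 2) for j in window if j > i and f(i) > f(j))

# Weight at a in {1/2, 3/2, 5/2}:
# (1/2) * (#{i : i < a < f(i)} + #{i : f(i) < a < i})
weights = [0.5 * sum(1 for i in window if i < a < f(i) or f(i) < a < i)
           for a in (0.5, 1.5, 2.5)]

assert crossings == 2
assert weights == [0.5, 1.5, 1.0]   # the weight vector (1/2, 3/2, 1)
```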
Note also that when $m=4$ and $k=1$, this is the ``peculiar algebra''
of Zibrowius~\cite{Zibrowius}.
\begin{figure}[ht]
\input{Asteroids.pstex_t}
\caption{\label{fig:Asteroids} {\bf{Asteroids diagram.}}
At the left, an asteroids diagram for $f$ from Equation~\eqref{eq:Deff}.
The other two pictures represent the terms in the differential of the first term,
both taken with multiplicity $v_2$.}
\end{figure}
The Heegaard diagram for the wrapped Fukaya category (analogous
to the diagram from Section~\ref{sec:HeegPong})
is a quotient of $\R^2$ by a group of translations (rather
than the group of motions ${\mathbb G}_m$ considered above).
Proposition~\ref{prop:NoNGons} holds also in this case.
One can look at other configurations of Lagrangians in punctured
spheres. For instance, for a linear chain of spheres, the
endomorphism algebra in the wrapped Fukaya category does have a higher
multiplication (and indeed it can be computed, for example, with the
methods of~\cite{Pong}).
Finally, building on~\cite{Pong}, the pong algebra can be thought of
as governing the bordered invariants for certain types of Heegaard
diagrams associated to tangles; see~\cite{NextPong}; compare
also~\cite{InvPair, HolKnot}. In light of this, the present work can
be thought of as analogous to Auroux's interpretation of bordered
Floer homology~\cite{Auroux}.
\section{The wrapped diagram}
\label{sec:HeegPong}
Our aim here is to give a particularly convenient description of the wrapped
diagram for $\Sym^k(M)$, equipped with $\Lambda_\mathbf x$.
Consider the plane $\R^2$, decorated with the following data:
\begin{itemize}
\item an infinite grid of
vertical lines
${\widetilde\alphas}=\{{\widetilde \alpha}_i=i\times \R\}_{i\in\Z}$;
\item horizontal lines
${\widetilde\betas}=\{{\widetilde \beta}_i=\R\times i\}_{i\in\Z}$;
\item an infinite set of punctures at the points $\{(\OneHalf +
i,\OneHalf +i)\}_{i\in \Z}$, so that $(\OneHalf+i,\OneHalf+i)$ is
labeled by $O_j$, where $j=Q_2(\OneHalf+i)$, with $Q_2$ as in Equation~\eqref{eq:DefQ2}.
\end{itemize}
The symmetry group of this picture is generated by the two
$180^\circ$-rotations with fixed point at $(\OneHalf,\OneHalf)$,
and the one with fixed point at $(m-\OneHalf,m-\OneHalf)$. Let
${\mathbb G}_m$ denote this group of rigid motions. Note that ${\mathbb G}_m\cong G_m$, via the isomorphism induced by the
restriction of ${\mathbb G}_m$ to the diagonal line in $\R^2$.
The quotient space $\R^2/{\mathbb G}_m$ is homeomorphic to the disk $\HD$ with
two order $2$ orbifold points, which are the points marked $O_1$ and
$O_m$. There are an additional $m-2$ marked points, labeled $O_i$ for
$i=2,\dots,m-1$.
The vertical lines $\Z\times \R=\{{\widetilde \alpha}_i\}_{i\in\Z}$
project to $m-1$ embedded lines
$\{\alpha_i\}_{i=1}^{m-1}$ in $\HD$. Similarly, the horizontal lines
$\{{\widetilde\beta}_i\}_{i\in\Z}$, project to $m-1$
embedded lines $\{\beta_i\}_{i=1}^{m-1}$ in $\HD$. We label the lines in
$\HD$ so that $\alpha_i$ is the image of ${\widetilde \alpha}_j$, for
any $j\in\Z$ with $Q_1(j)=i$, with $Q_1$ as in Equation~\eqref{eq:DefQ1};
similarly, $\beta_i$ is the image of ${\widetilde \beta}_j$.
\begin{lemma}
\label{lem:WrapDiagram}
Consider $\HD=\R^2/{\mathbb G}_m$, equipped with $\{\alpha_i\}_{i=1}^{m-1}$,
$\{\beta_i\}_{i=1}^{m-1}$, and the markings $\{O_i\}_{i=1}^m$.
This is a diagram for the wrapping of $\{\alpha_i\}_{i=1}^{m-1}$.
\end{lemma}
\begin{proof}
What we mean is the following. Consider $\HD$ as above, equipped
with the vertical circles ${\widetilde\alphas}$. ${\mathbb G}_m$
contains an index two subgroup of translations, generated by
$(x,y)\mapsto(x+2m,y+2m)$. Consider the cylinder $A$ obtained as
the quotient
\begin{align}
A=\frac{\R\times \R}{(2m,2m)\cdot \Z}.
\end{align}
Moreover, there is a branched covering map from $A$ to $\HD$,
with branching at $O_1$ and $O_m$.
Equip $\Sym^k(\HD)$ with the symplectic structure from
Proposition~\ref{prop:IdentifyLagrangians}, chosen so that the images
in $\Sym^k(\HD)$ of the
manifolds $\alpha_{x_1}\times\dots\times \alpha_{x_k}\subset
\HD^{\times k}$ for all subsequences $\mathbf x\subset \{1,\dots,m-1\}$ are
Lagrangian. Note that our explicit parametrizations here differ
from the ones described around
Proposition~\ref{prop:IdentifyLagrangians} by a linear
transformation. In particular, these manifolds are equivalent to the
submanifolds $\Lambda_\mathbf x$ from that proposition. Moreover,
the Liouville flow carries
$\alpha_{x_1}\times\dots\times\alpha_{x_k}$ to
$\beta_{x_1}\times\dots\times\beta_{x_k}$
after some (positive) time.
\end{proof}
See Figures~\ref{fig:QuotDiag} and~\ref{fig:QuotDiag2} for examples.
\begin{figure}[ht]
\input{QuotDiag.pstex_t}
\caption{\label{fig:QuotDiag} {\bf{Heegaard diagrams.}}
The quotient of the infinite grid diagram on the left is the diagram (with $m=2$) on the right. On the left, we have shaded a fundamental domain for the
${\mathbb G}_2$ action.}
\end{figure}
\begin{figure}[ht]
\input{QuotDiag2.pstex_t}
\caption{\label{fig:QuotDiag2} {\bf{Heegaard diagrams.}}
The case where $m=3$.}
\end{figure}
As usual, a {\em $k$-fold Heegaard state} is a $k$-tuple of points (for some $0<k<m$), with the property that each point lies on $\alpha_i\cap \beta_j$,
no two points lie on the same $\alpha_i$, and no two points lie on the
same $\beta_j$.
Given a lifted partial permutation ${\widetilde f}$, we can form its
graph
\[ \Gamma_{\widetilde f}=\{(i,{\widetilde f}(i))\mid i\in {\widetilde S}\}\subset \R^2.\]
Clearly $\Gamma_{\widetilde f}$ is invariant under ${\mathbb G}_m$;
as such we can form the associated subset $\mathbf x({\widetilde f})\subset \HD$.
\begin{lemma}
The above map sets up a one-to-one correspondence between ($k$-element) lifted partial permutations and ($k$-fold) Heegaard states for $\HD$.
\end{lemma}
\begin{proof}
The proof is straightforward.
\end{proof}
\begin{defn}
\label{def:doms2}
Suppose that $\HD$ is a surface, equipped with two sets of curves
$\alphas=\{\alpha_i\}$ and $\betas=\{\beta_i\}$. We think of the curves as
giving $\HD$ a $CW$-complex structure, with $0$-cells the
intersection points between $\alpha_i$ and $\beta_j$, $1$-cells the arcs
in $\alpha_i$ and $\beta_j$, and $2$-cells the components of
$\HD\setminus (\alphas\cup\betas)$. Thus, a two-chain can be thought of
as an assignment of integers to each component of
$\HD\setminus(\alphas\cup\betas)$. Fix some intersection point $x$ of
$\alpha_i$ with $\beta_j$. We say that $x$ is a {\em{corner}} if
the local multiplicities $A$, $B$, $C$, and $D$ around $x$, as
pictured in Figure~\ref{fig:Cornerless}, satisfy
$A+D\neq B+C$. In fact, we
say that $x$ is an {\em initial $(\alpha,\beta)$ corner} if $B+C=A+D+1$;
if $B+C=A+D-1$, we say $x$ is a {\em terminal $(\alpha,\beta)$-corner}.
A domain is a {\em cornerless domain} if it has no corner.
\end{defn}
\begin{figure}[ht]
\input{Cornerless.pstex_t}
\caption{\label{fig:Cornerless} {\bf{Corner conventions.}}
}
\end{figure}
The space of cornerless domains is an abelian group. If $\mathbf x$ and $\mathbf y$
are Heegaard states, let $\doms(\mathbf x,\mathbf y)$ denote the space of domains
with initial corners at the components of $\mathbf x\setminus(\mathbf x\cap\mathbf y)$ and terminal corners at
the components of $\mathbf y\setminus(\mathbf x\cap\mathbf y)$.
(Algebraically, $\doms(\mathbf x,\mathbf y)$ is an affine
space for the space of cornerless domains.) As explained
in~\cite{HolDisk}, for $k\geq 3$,
$\doms(\mathbf x,\mathbf y)$ is identified with a space of relative
homotopy classes of Whitney disks connecting $\mathbf x$ to $\mathbf y$, denoted
there $\pi_2(\mathbf x,\mathbf y)$.
\begin{lemma}
\label{lem:UniqueHomotopyClass}
Given Heegaard states $\mathbf x$ and $\mathbf y$, corresponding to lifted partial permutations
$({\widetilde f},{\widetilde S})$ and
$({\widetilde g},{\widetilde T})$,
there is a $\phi\in\doms(\mathbf x,\mathbf y)$
with compact support if and only if
${\widetilde S}={\widetilde T}$ and
${\widetilde f}({\widetilde S})={\widetilde g}({\widetilde T})$.
Moreover, if $\phi$ exists, then it is unique.
\end{lemma}
\begin{proof}
Given $\mathbf x$ and $\mathbf y$, consider the corresponding lift to $\R^2$. The
hypothesis that ${\widetilde S}={\widetilde T}$ is equivalent to the
condition that we can connect $\mathbf x$ to $\mathbf y$ by a path $A$ in
$\alphas=\{\alpha_i\}_{i=1}^{m-1}$. (When the path exists,
its uniqueness, as a relative one-chain in $\HD$, is obvious.)
Similarly, the condition that ${\widetilde
f}({\widetilde S})={\widetilde g}({\widetilde T})$ is equivalent
to the condition that we can connect ${\widetilde y}$ to
${\widetilde \mathbf x}$ (uniquely) by a path $B$ inside $\betas$. By
construction, $\partial A = \partial B$, so $A-B=\partial D$, for
some two-chain. Uniqueness follows from contractibility of $\HD$.
\end{proof}
Suppose that $({\widetilde f},{\widetilde S})$ is a lifted partial
permutation with graph ${\widetilde \mathbf x}$ and Heegaard state $\mathbf x$. Let
$A$ be the set of vertical segments in $\R^2$ that connect ${\widetilde \mathbf x}$
to the diagonal line. Given $1\leq i\leq m$, the weight of
${\widetilde S}$ at $i$ can be
interpreted as the number of times $A$ crosses the horizontal line
$\R\times (i-\OneHalf)$.
\begin{lemma}
\label{lem:IdentifyWeights}
Suppose that $({\widetilde f},{\widetilde S})$ and
$({\widetilde g},{\widetilde T})$ are lifted
partial permutations with corresponding Heegaard states $\mathbf x$ and
$\mathbf y$, which can be connected by some $\phi\in \doms(\mathbf x,\mathbf y)$. Then,
the local multiplicity of $\phi$ at $O_i$ coincides with
$\weight(\widetilde f)-\weight(\widetilde g)$.
\end{lemma}
\begin{proof}
Let $\mathbf x_0$ denote the Heegaard state corresponding to the identity
map on ${\widetilde S}$. Let $A$ be the (oriented) vertical path from
$\mathbf x$ to $\mathbf x_0$, so that $\weight(\mathbf x)$ counts half the number of
times $A$ crosses $\R\times (i-\OneHalf)$. We can think of $\R\times
(i-\OneHalf)$ as the union of two rays ${\widetilde r}_i$ and
${\widetilde r}_i'$ starting at $(i-\OneHalf,i-\OneHalf)$. Then,
$\weight(\mathbf x)$ is one half the oriented intersection number of ${\widetilde r}_i$
with $A$ plus the oriented intersection number of ${\widetilde r}_i'$ with $A$.
Let $r_i$ resp. $r_i'$ denote the image in ${\mathbb H}$ of
${\widetilde r}_i$ resp. ${\widetilde r}'_i$. Observe that $r_i$ and
$r_i'$ are paths in ${\mathbb H}$ from $O_i$ to infinity that avoid
$\betas$. (Indeed, for $i=1$ and $m$, $r_i=r_i'$.) Thus,
$\weight(\mathbf x)-\weight(\mathbf y)$ is the algebraic intersection number of
$r_i$ with $\partial \phi$, which in turn coincides with the winding
number of $\partial \phi$ around $O_i$, and hence the local
multiplicity of $\phi$ at $O_i$. Since the same remarks apply for
$r_i'$, the result follows.
\end{proof}
\begin{figure}[ht]
\input{Bigon.pstex_t}
\caption{\label{fig:Bigon} {\bf{A lifted bigon.}} Consider $m=3$ and
$k=1$. The black dot corresponds to the lifted partial permutation
sending $2$ to $-1$, while the white dots corresponds to the map
sending $2$ to $2$. There is a (shaded) bigon from the black to the white
dot with multiplicity $1$ at $O_1$ and $O_2$. The ray $r_2$ is indicated in the picture.}
\end{figure}
Under this correspondence, $\Cross({\widetilde f})$ corresponds to
${\mathbb G}_m$-orbits of pairs of points in ${\widetilde\mathbf x}$ of the form
$(x_1,y_1),(x_2,y_2)$ so that $x_1<x_2$ and $y_1>y_2$.
Thus, for each crossing, there is a unique ${\mathbb G}_m$-orbit of an
embedded rectangle $r$ in $\R^2$, whose upper left corner is at $(x_1,y_1)$,
and whose lower right corner is at $(x_2,y_2)$. The graph of
${\widetilde f}_{\langle i,j\rangle}$ is the subset of
$\Z\times \Z$ obtained from ${\widetilde\mathbf x}$ (the graph of ${\widetilde f}$)
by removing the ${\mathbb
G}_m$-orbits of $(x_1,y_1)$ and $(x_2,y_2)$, and replacing them with
the ${\mathbb G}_m$ orbits of $(x_1,y_2)$ and $(x_2,y_1)$.
The condition that $\cross({\widetilde f}_{\langle
i,j\rangle})=\cross({\widetilde f})-1$ is equivalent to the
condition that $r$ is {\em empty}: i.e. it does not contain any points
in $\mathbf x$ in its interior. (This is equivalent to the condition that the
${\mathbb G}_m$ translates of $r$ are a collection of disjoint rectangles,
none of which contains a component of ${\widetilde \mathbf x}$ in its interior.)
\begin{lemma}
\label{lem:IdentifyDifferential}
Given a lifted partial permutation ${\widetilde f}$ and graph
${\widetilde\mathbf x}$, there is a one-to-one
correspondence between the resolutions of the crossings in
${\widetilde f}$ and ${\mathbb G}_m$-orbits of embedded rectangles
in $\R^2$ whose upper left and lower right corners are on ${\widetilde \mathbf x}$.
Moreover, the following conditions are equivalent:
\begin{enumerate}
\item Each (or any) rectangle in $\R^2$ in the ${\mathbb G}_m$ orbit
is empty.
\item The image of the rectangle in $\HD$ is an empty
bigon (in the case where ${\mathbb G}_m$-orbit of the rectangle has isotropy group ${\mathbb Z}/2{\mathbb Z}$) or an empty rectangle
(when the isotropy group is trivial).
\item
The number of crossings in the resolution is one less than the number of
crossings in ${\widetilde f}$.
\end{enumerate}
\end{lemma}
\begin{proof}
It was already noted that the crossings in ${\widetilde f}$
correspond to embedded rectangles in ${\mathbb R}^2$, connecting
${\widetilde f}$ to the corresponding resolution
${\widetilde g}={\widetilde f}_{\langle i,j\rangle}$. The rectangle is empty
precisely when $\cross({\widetilde g})=\cross({\widetilde f})-1$.
The isotropy group of the ${\mathbb G}_m$-orbit of any rectangle
is either trivial or $\Zmod{2}$. The orbits of empty rectangles project
to either empty rectangles or empty bigons in $\HD$.
Note that $\partial r$ is embedded precisely when the interior of $r$
does not contain any points that are equivalent to the corners of $r$;
see Figure~\ref{fig:Nonemb} for an example where $\partial r$ is not embedded.
\end{proof}
\begin{figure}[ht]
\input{Nonemb.pstex_t}
\caption{\label{fig:Nonemb} {\bf{Projection of a nonempty rectangle.}}
We have a non-empty rectangle on the left
which projects to the domain in $\HD$ pictured on the right.}
\end{figure}
The Heegaard diagram $\HD_{m,k}$ is {\em nice} in the sense of Sarkar;
thus, by~\cite{SarkarWang}, the differentials in ${\rm {CF}} ^-(\HD_{m,k})$
count empty bigons and rectangles. Thus, we could use this and
Lemma~\ref{lem:IdentifyDifferential} to deduce an identification of
chain complexes ${\rm {CF}} ^-(\HD_{m,k})\cong\Pong{m}{k}$. Instead, we invest
a little more work in understanding the combinatorics of $\HD$ to
give an alternative argument.
There is a partial ordering on Heegaard states: we write $\mathbf x\geq \mathbf y$
if there is a $\phi\in\pi_2(\mathbf x,\mathbf y)$ all of whose local multiplicities
are non-negative; with strict inequality $\mathbf x>\mathbf y$ if $\phi$ has positive
local multiplicity somewhere.
\begin{lemma}
\label{lem:PositiveDomains}
Let $({\widetilde f},{\widetilde S})$ and $({\widetilde
g},{\widetilde T})$ be two lifted partial permutations, and let $\mathbf x$
and $\mathbf y$ be their corresponding Heegaard states.
The following conditions are equivalent:
\begin{enumerate}[label=(P-\arabic*),ref=(P-\arabic*)]
\item \label{P:Greater}
$\mathbf x\geq \mathbf y$
\item
\label{P:CountLines}
For each $(i,j)$,
\[ \#\{a\in \Z\mid a<i, {\widetilde g}(a) <j<{\widetilde f}(a)\}
\geq \#\{a\in \Z\mid a<i, {\widetilde g}(a) >j>{\widetilde f}(a)\}.\]
\item
\label{P:ResolveCrossings}
There is a sequence of crossings in ${\widetilde f}$ with
the property that ${\widetilde g}$ is obtained from ${\widetilde
f}$ by resolving those crossings.
\end{enumerate}
Moreover, if $\mathbf x\geq \mathbf y$, then
\begin{equation}
\Mas(\mathbf x)-\Mas(\mathbf y)=\cross({\widetilde f})-\cross({\widetilde g});
\label{eq:MaslovDifference}
\end{equation}
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref{lem:IdentifyWeights}, we can interpret
\[ \#\{a\in \Z\mid a<i, {\widetilde g}(a) <j<{\widetilde f}(a)\}-\#\{a\in
\Z\mid a<i, {\widetilde g}(a) >j>{\widetilde f}(a)\} \]
as the local
multiplicity of $\phi$ at the point $(i+\OneHalf,j+\OneHalf)/{\mathbb G}_m$.
It follows at once that Properties~\ref{P:Greater} and~\ref{P:CountLines} are equivalent.
Property~\ref{P:ResolveCrossings} clearly implies Property~\ref{P:CountLines}.
To see that~\ref{P:CountLines}$\Rightarrow$\ref{P:ResolveCrossings}
we argue as follows.
When $\mathbf x=\mathbf y$, the result is obvious. Suppose $\mathbf x> \mathbf y$;
then there are some $i_1<i_2$ and some $j$ so that:
\[ {\widetilde
f}(i_1)>j>{\widetilde g}(i_2)\qquad{\text{and}}\qquad
{\widetilde f}(i_2)<j.\]
(In particular, $\langle i_1,i_2\rangle$ is a
crossing in ${\widetilde f}$.) Choose $i_2>i_1$ minimal with this
property, and
let $\mathbf x'$ correspond to ${\widetilde
f}_{\langle i_1,i_2\rangle}$. Minimality of $i_2$ ensures that
$\mathbf x>\mathbf x'$ and $\mathbf x'\geq \mathbf y$.
Iterating this construction gives the desired sequence of
resolutions from ${\widetilde f}$ to ${\widetilde g}$.
Since $\cross(\mathbf x')=\cross(\mathbf x)-1$, this process must terminate after at
most $\cross(\mathbf x)$ steps.
If $\mathbf x'$ is obtained from $\mathbf x$ by resolving a single crossing, and
$\phi\in\pi_2(\mathbf x,\mathbf x')$ is the corresponding domain in $\HD$, then
$\Mas(\phi)=1$. This is true because $\phi$ is a bigon or a
rectangle (Lemma~\ref{lem:IdentifyDifferential});
both of these are easily seen to have Maslov index one.
The Maslov index is additive under juxtaposition,
and the algorithm
establishing~\ref{P:CountLines}$\Rightarrow$\ref{P:ResolveCrossings}
gave a sequence $\{{\widetilde f}_i\}_{i=1}^n$ with ${\widetilde
f}_1={\widetilde f}$, ${\widetilde f}_n={\widetilde g}$, and
$\Mas(\mathbf x_i)-\Mas(\mathbf x_{i+1})=1$, $\cross({\widetilde
f}_{i})-\cross({\widetilde f}_{i+1})=1$.
Equation~\eqref{eq:MaslovDifference} follows.
\end{proof}
\begin{prop}
\label{prop:IdentifyComplexes}
There is an isomorphism of chain complexes
${\rm {CF}} ^-(\HD_{m,k})\cong \Pong{m}{k}$.
\end{prop}
\begin{proof}
By definition, the differential on ${\rm {CF}} ^-(\HD_{m,k})$ counts
$\phi\in\pi_2(\mathbf x,\mathbf y)$ with $\phi\geq 0$ and $\Mas(\phi)=1$.
Equation~\eqref{eq:MaslovDifference} identifies these with the
${\mathbb G}_m$-orbits of empty rectangles in $\R^2$, which in turn,
by Lemma~\ref{lem:IdentifyDifferential}, correspond to
crossings in the diagram for $\mathbf x$ whose resolution drops the
crossing number by exactly one. Lemma~\ref{lem:IdentifyWeights}
identifies the coefficients in $\partial^-$ for ${\rm {CF}} ^-$ with the coefficients
in $\partial$ for $\Pong{m}{k}$.
\end{proof}
\section{Introduction}
In~\cite{Pong}, we introduced a differential graded algebra, the {\em
pong algebra}, which is an enrichment of the strands algebra
from~\cite{InvPair}. The aim of this note is to identify this
algebra with the endomorphism algebra in a wrapped, relative Fukaya
category, of a distinguished set of objects.
Consider $\C$, equipped with punctures at the points ${\mathbf
P}=\{j+\OneHalf\}_{j=0}^{m-1}$, and let $M=\C\setminus {\mathbf P}$.
Consider $m-1$ disjoint vertical lines $e_j=j\times \R $ for
$j=1,\dots,m-1$.
Given any $k$-element subsequence $1\leq x_1<\dots<x_k\leq m-1$, which
we call a {\em $k$-idempotent state}, there is a corresponding
Lagrangian ${\Lambda}_{\mathbf x}=e_{x_1}\times\dots\times e_{x_k}\subset
\Sym^k(M)\subset\Sym^k(\C)$.
The manifold $\Sym^k(M)$ is a Liouville manifold, and the Lagrangians
$\Lambda_\mathbf x$ are exact and conical at infinity. One can then consider
the {\em wrapped Fukaya category} introduced by Abouzaid and
Seidel~\cite{AbouzaidSeidel} relative to the divisor ${\mathbf
P}\times \Sym^{k-1}(\C)$; see also~\cite{AbouzaidCriterion,AurouxBeginner}. We
will consider the endomorphism algebra of the set of objects
$\{\Lambda_\mathbf x\}_{\mathbf x}$ indexed by $k$-idempotent states.
This is an $A_\infty$-algebra $A$ over $\Field[v_1,\dots,v_m]$,
equipped with idempotents $\Idemp{\mathbf x}$ corresponding to the idempotent states $\mathbf x$,
and $\Idemp{\mathbf x}\cdot A \cdot \Idemp{\mathbf y}$ is given by the chain complex
$\CF(\phi_H(\Lambda_\mathbf x),\Lambda_\mathbf y)$ (again, relative to the divisor
${\mathbf P}$). The composition is given by the chain map
\[ \circ\colon \mathrm{Mor}(\Lambda_{\mathbf x_2},\Lambda_{\mathbf x_3}) \otimes \mathrm{Mor}(\Lambda_{\mathbf x_1},\Lambda_{\mathbf x_2})\to \mathrm{Mor}(\Lambda_{\mathbf x_1},\Lambda_{\mathbf x_3}),\]
specified by the following diagram:
\begin{equation}
\label{lem:TriangleMap}
\begin{CD}
\CF(\phi_H(\Lambda_{\mathbf x_1}),\Lambda_{\mathbf x_2})\otimes \CF(\phi_H(\Lambda_{\mathbf x_2}),\Lambda_{\mathbf x_3}) @>{\circ}>>\CF(\phi_H(\Lambda_{\mathbf x_1}),\Lambda_{\mathbf x_3}) \\
@V{(\phi_H)_*\otimes \Id}VV @A{\sigma}AA \\
\CF(\phi^2_H(\Lambda_{\mathbf x_1}),\phi_H(\Lambda_{\mathbf x_2}))\otimes \CF(\phi_H(\Lambda_{\mathbf x_2}),\Lambda_{\mathbf x_3}) @>>>\CF(\phi^2_H(\Lambda_{\mathbf x_1}),\Lambda_{\mathbf x_3}), \\
\end{CD}
\end{equation}
where $\sigma$ is a map induced from the Liouville flow, and the
bottom arrow is induced by counting pseudo-holomorphic triangles. The
base ring is $\Field[v_1,\dots,v_m]$; and for the map in the bottom arrow,
each holomorphic triangle
$u$ is counted as a monomial $v_1^{n_{p_1}(u)}\dots v_m^{n_{p_m}(u)}$.
We can take the homology to obtain an ordinary category, $H({\mathcal
C})$; or alternatively, we can consider the $A_{\infty}$ category,
where the higher compositions are defined by counting
pseudo-holomorphic polygons.
Our aim is to prove the following:
\begin{thm}
\label{thm:IdentifyPong}
The pong algebra is isomorphic to the endomorphism algebra, in the
wrapped relative Fukaya category, of the objects
$\{\Lambda_\mathbf x\}_{\mathbf x}$.
\end{thm}
This proof is in the spirit of Auroux~\cite{Auroux}; see
also~\cite{TorusAlg}. Indeed, the proof is a fairly straightforward
application of a suitable Heegaard diagram, which can be thought of as
the analogue of the ``Auroux-Zarev piece'' for the pong algebra;
see~\cite{Auroux,Zarev,HomPairing}.
It is interesting to compare the results herein with the results
of~\cite{LaudaLicataManion,ManionRouquier}, and the constructions
of~\cite{PetkovaVertesi,EllisPetkovaVertesi,Zibrowius,KotelskiyWatsonZibrowius}.
Our interest in the pong algebra stems from our
goal of understanding knot Floer homology, a topic which we do not
discuss in the present paper, but hope to return to in future
work~\cite{NextPong}.
{\bf{Acknowledgements:}} The authors wish to thank Denis Auroux,
Robert Lipshitz, Dylan Thurston, and Andy Manion for interesting
conversations.
\section{Polygons with $n>3$ sides}
We now generalize the Heegaard diagram and the Heegaard triple as
follows. Consider $\R^2$, with the $O_i$ marking the diagonal
line with half-integer coordinates as before. Choose $n$ sets
$\Las^1,\dots,\Las^n$ of lines, as follows. Choose an increasing set
of angles $\frac{\pi}{4}<\theta_1<\dots<\theta_n<\frac{5\pi}{4}$, and
let $\Las^j$ be parallel lines in $\R^2$ forming angle $\theta_j$ with
respect to the real axis $\R\times 0$, so that two consecutive lines
are separated by some $O$ marking, and so that the intersections
between the various lines $\La^j_\ell$ within the set $\Las^j$ are in general position. Explicitly,
\[ \La^j_\ell =
\{(\ell+\epsilon^j(\ell)+t\cos\theta_j,\ell+\epsilon^j(\ell)+t\sin\theta_j)\}_{t\in
\R},\] where $\{\epsilon^j\colon \Z\to \R\}_{j=1}^n$ are
$G_m$-invariant functions with $\epsilon^j(\ell)<\OneHalf$, so that
$\epsilon^s(\ell)\neq \epsilon^t(\ell)$ for all $s\neq t$. See
Figure~\ref{fig:HeegaardQuad} for a non-generic picture
(i.e. $\epsilon^j\equiv 0$ in the above formulas) in $\R^2$, when
$n=4$. Note that the picture is also rotated $90^\circ$: the line
through the $O$ markings should have slope $1$.
\begin{figure}[ht]
\input{HeegaardQuad5.pstex_t}
\caption{\label{fig:HeegaardQuad} {\bf{Heegaard quadruple.}}
This is the (unperturbed) multi-diagram with $n=4$ lifted to $\R^2$.
Two elements of ${\mathcal A}$ are shaded.}
\end{figure}
When $n=2$, a linear transformation carries this to the Heegaard
diagram from Section~\ref{sec:HeegPong}; and when $n=3$, a linear
transformation carries this to the Heegaard triple from
Section~\ref{sec:Triples}. As in Lemma~\ref{lem:WrapDiagram},
this is the Heegaard diagram for the $A_\infty$ actions on the wrapped Fukaya category.
Lemma~\ref{lem:EulerMeasure}, which was stated earlier for triangles,
actually holds for arbitrary $n$-gons:
\begin{lemma}
\label{lem:EulerMeasureNGon}
Let $\psi_n\in\pi_2(\mathbf x_1,\dots,\mathbf x_n)$ be a positive domain.
Then
\[ e(\psi_n)=\frac{k(n-2)}{4}+\frac{O_1(\psi_n)}{2}+\frac{O_m(\psi_n)}{2}.\]
\end{lemma}
\begin{proof}
The argument used in the proof of Lemma~\ref{lem:EulerMeasure}
when $n=3$ can be adapted to $n>2$, as follows.
Let ${\mathcal A}$ denote the Abelian group of compactly supported
$2$-chains which are required to be cornerless at all the intersections of
$\La^j_\ell$ with $\La^{j'}_{\ell'}$, provided that $j,j'\neq 2$.
Fix Heegaard states $\mathbf x^{i,i+1}$ for $(\HD,\alphas^i,\alphas^{i+1})$
and $\mathbf x^{n,1}$ for $(\HD,\alphas^{n},\alphas^{1})$. Let ${\mathcal
B}$ be the affine space for ${\mathcal A}$ with initial corners at
the lifts of $\mathbf x^{3,4},\mathbf x^{4,5},\dots,\mathbf x^{n-1,n},\mathbf x^{n,1}$;
i.e. there are no constraints placed on the intersections with
$\alphas^2$. There are, once again, fundamental regions, which are
rectangles (possibly with a self-intersection) formed now by two
segments in $\Las^2$ and two other segments in $\Las^{j}$ and
$\Las^{j'}$. The regions are called {\em fundamental bigons} if the
four segments are permuted by the ${\mathbb G}_m$ action, otherwise
they are called {\em fundamental rectangles}.
As in the case of triangles, the fundamental rectangles and
bigons generate ${\mathcal A}$; the function ${\widetilde
e}=e-\frac{O_1}{2}-\frac{O_m}{2}$, which is defined on all
(finite) $2$-chains, vanishes on ${\mathcal A}$. Consider
next ${\mathcal B}$, which is the set of (finite)
$2$-chains with initial corners at
$\mathbf x^{3,4},\mathbf x^{4,5},\dots,\mathbf x^{n-1,n}$, and terminal corner at
$\mathbf x^{n,1}$. We can find representatives of ${\mathcal B}/{\mathcal
A}$, which are a union of $k(n-2)$ triangles that miss $O_1$ and
$O_m$. It is an easy computation to see that this representative has
${\widetilde e}=\frac{k(n-2)}{4}$.
The result is also true when $n=2$. We do not explicitly need it here,
and we leave it to the reader to supply the details.
\end{proof}
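As a consistency check (this remark is ours), specializing the formula gives
\[
e(\psi_3)=\frac{k}{4}+\frac{O_1(\psi_3)}{2}+\frac{O_m(\psi_3)}{2},
\qquad
e(\psi_2)=\frac{O_1(\psi_2)}{2}+\frac{O_m(\psi_2)}{2}.
\]
The first is Equation~\eqref{eq:EulerMeasure}; the second can be
verified directly on the generators of the differential, namely empty
rectangles (Euler measure $0$, meeting neither $O_1$ nor $O_m$) and
empty bigons through $O_1$ or $O_m$ (Euler measure $\OneHalf$).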
Recall that a rigid holomorphic $n$-gon has $\Mas(\phi_n)=3-n$.
\begin{prop}
\label{prop:NoNGons}
There are no rigid, holomorphic Whitney $n$-gons with $n>3$;
indeed, for any pseudo-holomorphic $n$-gon with $n>3$,
$\Mas(\psi_n)\geq 0$.
\end{prop}
\begin{proof}
Combining Lemma~\ref{lem:EulerMeasureNGon} with
Equation~\eqref{eq:IndexOfNgon}, we find that
\[ \Mas(\psi_n)=\frac{O_1(\psi_n)}{2}+
\frac{O_m(\psi_n)}{2}+\#(\psi_n\cap \Delta).\] For
pseudo-holomorphic $\psi_n$, all three terms on the right-hand-side
are non-negative.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:IdentifyPong}]
The identification from Theorem~\ref{thm:IdentifyPong}
consists of three statements:
\begin{itemize}
\item The endomorphism algebra is isomorphic with the pong algebra,
as a chain complex. This is Proposition~\ref{prop:IdentifyComplexes}.
\item The composition law on the endomorphism algebra is identified
with the multiplication on the pong algebra.
This is Proposition~\ref{prop:TrianglePong}.
\item Like the pong algebra, the endomorphism algebra has vanishing
$\mu_n$ with $n>3$. This is Proposition~\ref{prop:NoNGons}.
\end{itemize}
\end{proof}
\section{Lifted partial permutations and the pong algebra}
\label{sec:LiftPerm}
We recall the construction of the pong algebra from~\cite[Section~4]{Pong};
we refer the reader to that reference for a more leisurely account.
Let $r_t\colon \R\to\R$ be the reflection $r_t(x)=2t-x$; and consider
the subgroup $G_m$ of the reflection group of the real line generated
by $r_{\OneHalf}$ and $r_{m-\OneHalf}$. The quotient of the integral lattice by
this group of rigid motions is naturally an $(m-1)$-point set,
represented by $\{1,\dots,m-1\}$. Let
\begin{equation}
\label{eq:DefQ1}
Q_1 \colon \Z \to \{1,\dots,m-1\}
\end{equation} denote this quotient map.
Note that $G_m$ also acts on the set $\OneHalf+ \Z$. The quotient of $\OneHalf
+ \Z$ by $G_m$ is naturally the $m$-point set, $\{\OneHalf,\dots,
m-\OneHalf\}$. We think of these points as being in one-to-one
correspondence with the underlying variables in the pong algebra,
where the point $j\in \{\OneHalf,\dots,m-\OneHalf\}$ corresponds to the
variable $v_{\OneHalf+j}$.
Explicitly, we have the map
\begin{equation}
\label{eq:DefQ2} Q_2\colon \Z+\OneHalf\to \{1,\dots,m\},
\end{equation}
defined so that $Q_2(j-\OneHalf)$ is the element $i\in\{1,\dots,m\}$
with
$i\equiv j\pmod{2m-2}$ or $i\equiv 2-j\pmod{2m-2}$.
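As an illustration (ours, not from the paper), the quotient maps $Q_1$ and $Q_2$ are readily computable: $G_m$ contains the translation by $2m-2$, so one reduces modulo $2m-2$ and, if necessary, applies the reflection $r_{\OneHalf}$. A minimal Python sketch:

```python
# Sketch (our illustration): the quotient maps Q_1 and Q_2 for the group
# G_m generated by the reflections r_{1/2}(x) = 1 - x and
# r_{m-1/2}(x) = 2m - 1 - x; their composite is translation by 2m - 2.

def Q1(a: int, m: int) -> int:
    """Quotient map Z -> {1, ..., m-1} of Equation (eq:DefQ1)."""
    per = 2 * m - 2
    r = a % per                      # translate into {0, ..., 2m-3}
    return r if 1 <= r <= m - 1 else (1 - r) % per   # else reflect by r_{1/2}

def Q2(x: float, m: int) -> int:
    """Quotient map (Z + 1/2) -> {1, ..., m} of Equation (eq:DefQ2):
    Q2(j - 1/2) is the i in {1, ..., m} with i = j or i = 2 - j mod 2m-2."""
    per = 2 * m - 2
    j = round(x + 0.5)               # recover the integer j from x = j - 1/2
    for i in range(1, m + 1):
        if (i - j) % per == 0 or (i - (2 - j)) % per == 0:
            return i
    raise ValueError("x is not a half-integer")
```

For instance, with $m=3$, `Q1(-1, 3)` returns $2$, since $r_{\OneHalf}(2)=-1$.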
A $G_m$ invariant subset $\LiftS$ of $\Z$ has a natural quotient $\LiftS/G_m$,
which is a subset of $\{1,\dots,m-1\}$.
\begin{defn}
A {\em lifted partial permutation on $k$ letters}
is a pair $({\widetilde S}, {\widetilde f})$ where:
\begin{itemize}
\item ${\widetilde S}\subset \Z$
is a $G_m$-invariant subset
\item ${\widetilde f}\colon {\widetilde S} \to \Z$
is a $G_m$-equivariant map;
\end{itemize}
subject to the following two conditions:
\begin{itemize}
\item ${\widetilde S}/G_m$ consists of $k$ elements
\item the induced map ${\widetilde f}\colon {\widetilde S}/G_m\to \Z/G_m$
is injective.
\end{itemize}
\end{defn}
\begin{defn}
A lifted partial permutation $({\widetilde S},{\widetilde f})$ has a
{\em weight vector} ${\vec
\weight}=(\weight_1,\dots,\weight_m)\in(\OneHalf \Z)^m$, specified
by
\[ \weight_j({\widetilde f})= \OneHalf
\#\{i\in{\widetilde S}\big| i<j-\OneHalf<{\widetilde f}(i)~\text{or}~
i>j-\OneHalf>{\widetilde f}(i)\}.
\]
\end{defn}
We extend the weight vector to $\Field[v_1,\dots,v_m]$ so that $\weight(v_i)$
is the $i^{th}$ basis vector in $\Z^m$.
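The weight vector is directly computable. The following Python sketch (ours; the encoding of a lifted partial permutation by its values on one representative per $G_m$-orbit is our choice) extends the map equivariantly over a finite window and counts straddling strands. A finite window suffices because $|i-{\widetilde f}(i)|$ is $G_m$-invariant, so only finitely many strands straddle any fixed level $j-\OneHalf$.

```python
# Sketch (our illustration): the weight vector of a lifted partial
# permutation, given by its values on one representative per G_m-orbit.

def weight(f_reps, m, halfwidth=50):
    """Weight vector (w_1, ..., w_m): w_j is half the number of strands
    i -> f(i) with i < j - 1/2 < f(i) or i > j - 1/2 > f(i)."""
    per = 2 * m - 2
    f = {}
    for s, fs in f_reps.items():     # G_m-equivariant extension of f_reps
        for t in range(-halfwidth // per - 2, halfwidth // per + 3):
            for a, fa in ((s + t * per, fs + t * per),           # translates
                          (1 - s + t * per, 1 - fs + t * per)):  # reflections
                if -halfwidth <= a <= halfwidth:
                    f[a] = fa
    return [sum(1 for i, fi in f.items()
                if i < j - 0.5 < fi or i > j - 0.5 > fi) / 2
            for j in range(1, m + 1)]
```

For the black dot of Figure~\ref{fig:Bigon} ($m=3$, ${\widetilde f}(2)=-1$) this gives $(1,1,0)$, while the white dot ${\widetilde f}(2)=2$ has weight $(0,0,0)$, consistent with the bigon there having multiplicity $1$ at $O_1$ and $O_2$.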
\begin{defn}
A {\em crossing} in a lifted partial permutation ${\widetilde f}$ is
an equivalence class of pairs of integers $(i,j)$ with the property
that $i<j$ and ${\widetilde f}(i)>{\widetilde f}(j)$. We say that
$(i,j)$ and $(i',j')$ determine the same crossing if there is some
$g\in G_m$ so that $\{g\cdot i, g\cdot j\}=\{i',j'\}$. We write
$\langle i,j\rangle$ for the equivalence class of the pair of
integers $(i,j)$.
Let $\Cross({\widetilde f})$ denote the set of crossings in ${\widetilde f}$.
\end{defn}
Note that $\langle i,j\rangle \in \Cross({\widetilde f})$ does not exclude
cases where $[i]=[j]$.
Let $\cross({\widetilde f})$ denote the number of
crossings in ${\widetilde f}$.
Let
$({\widetilde f},{\widetilde S})$ and
$({\widetilde g},{\widetilde T})$ be two lifted partial permutations with
${\widetilde T}={\widetilde f}({\widetilde S})$.
Then, the composite $({\widetilde g}\circ {\widetilde f},{\widetilde S})$
is a lifted partial permutation.
It is elementary to verify that
\begin{align*}
\weight({\widetilde g}\circ {\widetilde f})&\leq
\weight({\widetilde g}) + \weight({\widetilde f}) \\
\cross({\widetilde g}\circ {\widetilde f})&\leq
\cross({\widetilde g}) + \cross({\widetilde f})
\end{align*}
The pong algebra $\Pong{m}{k}$ is the algebra over
$\Field[v_1,\dots,v_m]$ freely generated by lifted partial
permutations, with a multiplication map
\[ \mu_2\colon \Pong{m}{k}\otimes_{\Field[v_1,\dots,v_m]} \Pong{m}{k}
\to \Pong{m}{k} \]
characterized by
\[ \mu_2([{\widetilde
f},{\widetilde S}], [{\widetilde g},{\widetilde T}])
=\begin{cases}
0 & \text{if ${\widetilde T}\neq {\widetilde f}({\widetilde S})$} \\
0 & \text{if $\cross({\widetilde g}\circ {\widetilde f})<\cross({\widetilde
g}) + \cross({\widetilde f})$} \\
v\cdot [{\widetilde g}\circ {\widetilde f},{\widetilde S}] &{\text{otherwise,}}
\end{cases}\]
where $v$ is the monomial in $v_1,\dots,v_m$ chosen so that
\[
\weight(\mu_2([{\widetilde f},{\widetilde S}],
[{\widetilde g},{\widetilde T}]))
= \weight[{\widetilde f},{\widetilde S}]+
\weight[{\widetilde g},{\widetilde T}].\]
Given $a,b\in\Pong{m}{k}$, we abbreviate $\mu_2(a,b)$ by $a\cdot b$.
For each $\langle i,j\rangle\in\Cross({\widetilde f})$, there is a new lifted partial permutation ${\widetilde f}_{\langle i,j\rangle}$ characterized as follows:
\[
{\widetilde f}_{\langle i,j\rangle}(k)=
\begin{cases}
{\widetilde f}(k) & {\text{if~$[k]\not\in\{[i],[j]\}$}} \\
g\cdot {\widetilde f}(j) &{\text{if $k=g\cdot i$}} \\
g\cdot {\widetilde f}(i) &{\text{if $k=g\cdot j$}}
\end{cases}.\]
It is elementary to verify that
\begin{align*}
\weight({\widetilde f}_{\langle i,j\rangle})&\leq \weight({\widetilde f}) \\
\cross({\widetilde f}_{\langle i,j\rangle})&\leq \cross({\widetilde f})-1
\end{align*}
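The crossing and resolution combinatorics can be checked by machine on the example of Figure~\ref{fig:Bigon}. The following self-contained Python sketch (ours; the finite-window representation and the choice of one representative pair per crossing orbit are our conventions) verifies that the black dot ${\widetilde f}(2)=-1$ has a single crossing, whose resolution is the white dot ${\widetilde f}(2)=2$:

```python
# Sketch (our illustration): crossings and resolutions for m = 3, k = 1.
# G_3 acts on Z by x -> x + 4t and x -> 1 - x + 4t; maps are stored on
# the finite window [-W, W].

M = 3
PER, W = 2 * M - 2, 20        # the period 2m-2, and the window halfwidth

def extend(f_reps):
    """G_m-equivariant extension of orbit representatives over the window."""
    f = {}
    for s, fs in f_reps.items():
        for t in range(-W // PER - 2, W // PER + 3):
            for a, fa in ((s + t * PER, fs + t * PER),
                          (1 - s + t * PER, 1 - fs + t * PER)):
                if -W <= a <= W:
                    f[a] = fa
    return f

def crossings(f):
    """One representative pair (i, j) per G_m-orbit of crossings of f."""
    found = []
    for i in sorted(f):
        for j in sorted(f):
            if i < j and f[i] > f[j] and not any(
                    ((i - a) % PER == 0 and (j - b) % PER == 0) or
                    ((1 - i - b) % PER == 0 and (1 - j - a) % PER == 0)
                    for a, b in found):
                found.append((i, j))
    return found

def resolve(f, crossing):
    """The resolution f_{<i,j>}: swap the images of i and j equivariantly."""
    i, j = crossing
    g = dict(f)
    for t in range(-W // PER - 2, W // PER + 3):
        for s, c in ((1, 0), (-1, 1)):     # the element x -> s*x + c + t*PER
            a, b = s * i + c + t * PER, s * j + c + t * PER
            if a in g: g[a] = s * f[j] + c + t * PER
            if b in g: g[b] = s * f[i] + c + t * PER
    return g

black, white = extend({2: -1}), extend({2: 2})
cr = crossings(black)
assert len(cr) == 1                       # the black dot has one crossing
assert resolve(black, cr[0]) == white     # resolving it gives the white dot
assert crossings(white) == []             # the crossing number drops by one
```

Note that this crossing has $[i]=[j]$, illustrating the remark above that such crossings are not excluded.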
Given $\langle i,j\rangle\in\Cross({\widetilde f},{\widetilde S})$,
let $\partial_{\langle i,j\rangle}{\widetilde f}\in\Pong{m}{k}$ be the element defined by
\[ \partial_{\langle i,j\rangle}{\widetilde f}=
\begin{cases} 0 & {\text
{if $\cross({\widetilde f}_{\langle i,j\rangle})<
\cross({\widetilde f})-1$}} \\
v \cdot {\widetilde f}_{\langle i,j\rangle} & {\text{otherwise,}}
\end{cases} \]
where now $v$ is the monomial in $v_1,\dots,v_m$ characterized by the property that
\[
\weight[{\widetilde f},{\widetilde S}]
= \weight(v\cdot [{\widetilde f}_{\langle i,j\rangle},{\widetilde S}]).
\]
Define a map
\[ \partial\colon \Pong{m}{k}\to \Pong{m}{k}, \]
characterized by
\[ \partial({\widetilde f},{\widetilde S}) =\sum_{\langle i,j\rangle\in\Cross({\widetilde f},{\widetilde S})} \partial_{\langle i,j\rangle} [{\widetilde f},{\widetilde S}].\]
With the above definitions, $\Pong{m}{k}$ is a differential graded
algebra over $\Field[v_1,\dots,v_m]$;
see~\cite[Proposition~\ref{P:prop:PongIsAlg}]{Pong} for details.
It will be convenient to have the following:
\begin{lemma}
\label{lem:NoOuterUs}
If
$\cross({\widetilde f}\circ{\widetilde g})=
\cross({\widetilde f})+\cross({\widetilde g})$
then
\[ \weight_1({\widetilde f}\circ{\widetilde g})=
\weight_1({\widetilde f})+\weight_1({\widetilde g})
\qquad{\text{and}}\qquad
\weight_m({\widetilde f}\circ{\widetilde g})=
\weight_m({\widetilde f})+\weight_m({\widetilde g}). \]
\end{lemma}
\begin{proof}
If $\weight_1({\widetilde f}\circ {\widetilde g})\neq \weight_1({\widetilde f})+\weight_1({\widetilde g})$, then there exists some $i$ with
\begin{equation}
\label{eq:cr1}
1<i \text{~and~} {\widetilde g}(i)<1;
\end{equation}
and
\begin{equation}
\label{eq:cr2}
{\widetilde f}\circ {\widetilde g}(i)>1.
\end{equation}
On the other hand, Equation~\eqref{eq:cr1} implies that
\[ {\widetilde g}(1-i)=1-{\widetilde g}(i)>1-i;\]
while Equation~\eqref{eq:cr2} implies that
\[ {\widetilde f}\circ{\widetilde g}(1-i)>{\widetilde g}(1-i);\]
i.e. $i$ and $1-i$ have a crossing in ${\widetilde g}$, while
${\widetilde g}(i)$ and ${\widetilde g}(1-i)$ have a crossing in
${\widetilde f}$.
An analogous argument works for $\weight_m$.
\end{proof}
\section{Triples}
\label{sec:Triples}
We construct the Heegaard triple corresponding to wrapping. Once
again, this will be drawn as a quotient of $\R^2$ by ${\mathbb G}$,
with the $O_i$ markings along the diagonal line with half-integer
coordinates labeled as before. Now, we have three sets of lines,
$\Las$, $\Lbs$, and $\Lcs$. As before, $\{\La_i=i\times
\R\}_{i\in\Z}$ and $\{\Lc_i=\R\times i\}_{i\in\Z}$ (i.e. they are the $\Lbs$ from before). We choose the $\Lbs$ with the following properties:
\begin{enumerate}[label=(${\mathcal H}$-\arabic*),ref=(${\mathcal H}$-\arabic*)]
\item The set $\Lbs$ is ${\mathbb G}_m$-invariant.
\item The slope of each line in $\Lbs$ is $-1$.
\item The set $\Lbs$ consists of $m$ ${\mathbb G}_m$-orbits of lines,
  labeled so that $\Lb_i$ and $\Lb_{i+1}$ are separated by $O_i$.
\item
\label{Habc:Generic}
There is no triple-intersection point between $\La_i$, $\Lb_j$, and the diagonal.
\end{enumerate}
\begin{figure}[ht]
\input{HeegaardTriple.pstex_t}
\caption{\label{fig:HeegaardTriple} {\bf{Heegaard triple.}}
At the left, the (wrapped) Heegaard triple. At the right, the lift
of the diagram on the left to $\R^2$. (Note that the $O$ markings are displayed
here as horizontal; this horizontal line is to be viewed as the diagonal.)}
\end{figure}
Analogous to Definition~\ref{def:doms2}, given Heegaard states $\mathbf x$,
$\mathbf y$, and $\mathbf z$ for $\Hab$, $\Hbc$, and $\Hac$ respectively, we can
consider the set of two-chains $\psi\in \doms(\mathbf x,\mathbf y,\mathbf z)$ so that the components
of $\mathbf x$ are initial $\alpha-\beta$ corners, components of $\mathbf y$ are
initial $\beta-\gamma$ corners, and components of $\mathbf z$ are terminal
$\alpha-\gamma$ corners.
\begin{defn}
\label{def:TriangularlyConnected}
Let $\mathbf x$, $\mathbf y$, and $\mathbf z$ be three Heegaard states. We say that the
states are {\em triangularly connected} if their ${\mathbb
G}_m$-equivariant lifts ${\widetilde \mathbf x}$, ${\widetilde \mathbf y}$, and
${\widetilde \mathbf z}$ in $\R^2$ admit $k$ triangles in $\R^2$,
oriented so that they have sides on $\alphas$-$\betas$-$\gammas$
in counterclockwise order, whose ${\mathbb G}_m$-orbits have corners
exactly at $\Lx$, $\Ly$, and $\Lz$. Taking the quotients of the
triangles gives a domain $\psi\in\doms(\mathbf x,\mathbf y,\mathbf z)$.
\end{defn}
Note that there is a weaker notion: one can ask whether three Heegaard
states $\mathbf x$, $\mathbf y$, and $\mathbf z$ can be connected by a {\em Whitney
triangle}: a continuous map $\psi\colon T \to
\Sym^k(\HD)$ from the triangle $T$, which maps the three edges of the triangle to the tori
$\Ta=\alpha_1\times\dots\times \alpha_k\subset \Sym^k(\HD)$, $\Tb$,
and $\Tc$, so that the three vertices are mapped to $\mathbf x$, $\mathbf y$,
and $\mathbf z$. For example, the elementary region containing $O_4$ in
Figure~\ref{fig:HeegaardTriple} is a triangle in $\HD$, and if we
think of its three corners as Heegaard states (with $k=1$), these
three states are connected by a Whitney triangle, which is
double-covered by the elementary hexagon containing $O_4$ on the right
in Figure~\ref{fig:HeegaardTriple}. Thus, those three states are not
triangularly connected in the sense of
Definition~\ref{def:TriangularlyConnected}, though they can be
connected by a Whitney triangle.
Note that each Whitney triangle gives rise to a domain in $\doms(\mathbf x,\mathbf y,\mathbf z)$.
\begin{lemma}
\label{lem:TriangularConnectedMeans}
There is a one-to-one correspondence between:
\begin{itemize}
\item triples of lifted
partial permutations $({\widetilde f},{\widetilde S})$,
$({\widetilde g},{\widetilde T})$ with ${\widetilde T}={\widetilde
f}({\widetilde S})$ and $({\widetilde g}\circ {\widetilde
f},{\widetilde S})$
\item
triangularly connected triples of
Heegaard states $\mathbf x\in\States(\Hab)$, $\mathbf y\in\States(\Hbc)$,
$\mathbf z\in\States(\Hac)$.
\end{itemize}
Moreover,
under this correspondence
\begin{equation}
\label{eq:WeightEquation}
\weight_i({\widetilde f},{\widetilde S})
+ \weight_i({\widetilde g},{\widetilde T})
=
\weight_i({\widetilde g}\circ{\widetilde f},{\widetilde S})+
\# (O_i\cap \psi).
\end{equation}
\end{lemma}
\begin{proof}
Given $\mathbf x$, $\mathbf y$, and $\mathbf z$,
their corresponding lifted partial permutations ${\widetilde f}$,
${\widetilde g}$, and ${\widetilde h}$ are characterized by
\[
{\widetilde \mathbf x}=\bigcup \La_{s}\cap \Lb_{{\widetilde f}(s)},
\qquad {\widetilde \mathbf y}=\bigcup \Lb_{t}\cap \Lc_{{\widetilde
g}(t)}, \qquad {\widetilde \mathbf z}=\bigcup \La_{s}\cap
\Lc_{{\widetilde h}(s)}.\] Thus, the condition that the components
of $\Lx$, $\Ly$, and $\Lz$ can be connected by triangles is
precisely the condition that ${\widetilde h}={\widetilde g}\circ
{\widetilde f}$.
To establish Equation~\eqref{eq:WeightEquation}, we argue as follows.
Consider a coordinate $x\in \La_i\cap\Lb_j$. By
Condition~\ref{Habc:Generic}, the lines $\La_i$ and $\Lb_j$ and the
diagonal divide $\R^2$ into seven regions, exactly one of which is
compact: a triangle $T_x$. Let
$O_i(T_x)$ be $\OneHalf$ times the number of times $O_i$ appears in
$T_x$. (Note that each occurrence of $O_i$ appears on the boundary of
$T_x$, hence the factor of $\OneHalf$.)
Given a Heegaard state ${\widetilde \mathbf x}$, choose any
unordered set of $m$ points
$\{x_1,\dots,x_m\}\subset \R^2$ whose ${\mathbb G}_m$-orbit is $\mathbf x$.
It is elementary to see that
\[ \weight_i(\mathbf x)=\sum_{j=1}^m O_i(T_{x_j}).\]
Equation~\eqref{eq:WeightEquation} is obtained by counting points in
the plane, divided into cases according to how the diagonal line
intersects each triangle. Specifically, after applying an element of
${\mathbb G}_m$ if necessary, we can assume that the
$\alpha$-$\gamma$ corner of the triangle is on the upper right.
There are now four remaining cases, according to the
number of vertices of the triangle which lie above the diagonal
line: this can be any number between $0$ and $3$.
The cases are illustrated in Figure~\ref{fig:CountOs}.
For the case on the left on that figure (where the triangle is
entirely below the diagonal line), the weight of $\mathbf x$ counts the $O$
markings on the diagonal boundary of $B\cup C$ or, equivalently,
$C$; the weight of $\mathbf y$ counts the markings on the diagonal boundary
of $A$; and the weight of $\mathbf z$ counts $O$ markings on the diagonal
boundary of $A\cup B$; more succinctly,
\[
\weight_i(\mathbf x)= O_i (B)+ O_i(C) \qquad
\weight_i(\mathbf y)= O_i(A) \qquad
\weight_i(\mathbf z)= O_i(A)+ O_i(B).
\]
Since in this case the number of $O_i$
in the triangle is given by $ O_i(C)=0$,
Equation~\eqref{eq:WeightEquation} follows. The other three cases
are:
\[
\weight_i(\mathbf x)= O_i (B)+ O_i(C) \qquad
\weight_i(\mathbf y)= O_i(A) \qquad
\weight_i(\mathbf z)= O_i(B).
\]
\[
\weight_i(\mathbf x)= O_i (C) \qquad
\weight_i(\mathbf y)= O_i(A)+ O_i(B) \qquad
\weight_i(\mathbf z)= O_i(B).
\]
\[
\weight_i(\mathbf x)= O_i (B) \qquad
\weight_i(\mathbf y)= O_i(A)+ O_i(C) \qquad
\weight_i(\mathbf z)= O_i(B)+ O_i(C);
\]
and the number of $O_i$ markings in the triangles are
\[ O_i(A)+ O_i(C)\qquad O_i(A)+ O_i(C)\qquad O_i(A)=0 \]
respectively. Thus, in the remaining three cases, Equation~\eqref{eq:WeightEquation} holds.
\begin{figure}[ht]
\input{CountOs.pstex_t}
\caption{\label{fig:CountOs} {\bf{Counting $O$'s in triangles.}}
The diagonal dashed line represents the diagonal.}
\end{figure}
\end{proof}
Consider two-chains for $\HD$ analogous to Definition~\ref{def:doms2},
now using all three sets of curves $\alphas$, $\betas$, and $\gammas$.
These curves divide $\HD$ into {\em elementary domains}, which are
polygons. As in~\cite{HolDisk,RasmussenThesis,LipshitzCyl}, each
elementary polygon has an {\em Euler measure}: if the
number of sides is $n$, the Euler measure of the polygon
is given by $1-\frac{n}{4}$. We extend the Euler measure linearly to
all two-chains, and denote the resulting function by $e$.
\begin{figure}[ht]
\input{Triangles.pstex_t}
\caption{\label{fig:Triangles} {\bf{Triangles.}}
At the left: a triangle with Euler measure $3/4$; at the right, a triangle with Euler measure $1/4$.}
\end{figure}
\begin{lemma}
\label{lem:EulerMeasure}
Let $\psi\in\pi_2(\mathbf x,\mathbf y,\mathbf z)$ be a positive domain. Then, the euler
measure of $\psi$ is computed by
\begin{equation}
\label{eq:EulerMeasure}
e(\psi)= \frac{k}{4}+\frac{ O_1(\psi)}{2} + \frac{ O_m(\psi)}{2}.
\end{equation}
\end{lemma}
\begin{proof}
Loosely speaking, we claim that any positive domain can be cut along
the $\alpha$, $\beta$, and $\gamma$ lines to give $k$ triangles
(each with Euler measure $1/4$), $O_1(\psi)+O_m(\psi)$ bigons
(each with Euler measure $1/2$), and many rectangles (each with
Euler measure $0$). We make this claim precise as follows.
Consider the group of cornerless $2$-chains with compact support,
generalizing Definition~\ref{def:doms2} in a straightforward way.
This group is evidently $0$. But there is a non-trivial group ${\mathcal A}$ of
$2$-chains which are allowed $\beta$-$\gamma$ and $\alpha$-$\gamma$
corners, but no $\alpha$-$\beta$ corners. More precisely, at each intersection
of $\alpha$ with $\beta$, we require that the alternating sum of
the local multiplicities of the four quadrants add up to zero; i.e.
$A+D=B+C$, in the conventions of
Figure~\ref{fig:Cornerless}.
That group is non-trivial: for example, fix a $\beta$- or $\gamma$-segment
that
connects a pair of consecutive $\beta$-lines; and fix another
$\beta$- or $\gamma$-segment
that connects the same pair of $\beta$-lines. The four
segments enclose some region in the plane: when the two chosen segments
are disjoint, that region is a quadrilateral; when
they intersect, it is a difference of two triangles. See
Figure~\ref{fig:Fundamentals}.
When it
happens that these four segments are permuted by some ${\mathbb
G}_m$-action, we call these regions {\em fundamental bigons};
otherwise, we call them {\em fundamental rectangles}.
Fundamental bigons project to actual bigons in $\HD$;
while fundamental rectangles project to rectangles (or differences of two
triangles).
\begin{figure}[ht]
\input{Fundamentals4.pstex_t}
\caption{\label{fig:Fundamentals} {\bf{Fundamental regions.}} At the
left, we have
a fundamental bigon, a fundamental rectangle, and a difference of
two triangles. At the right, we have their lifts.}
\end{figure}
It is elementary to verify that ${\mathcal A}$ is generated by the
fundamental bigons and rectangles. Moreover, the quantity $e(D)-
\frac{O_1(D)}{2}-\frac{O_m(D)}{2}$ vanishes on all fundamental bigons
and rectangles.
Let ${\mathcal B}$ be the space of domains with
$\alpha$-$\gamma$-terminal corner (in the sense of
Definition~\ref{def:doms2}) at some component of $\Lz$. This is clearly an affine
space over ${\mathcal A}$. Moreover, given any $\Lz$, we can draw some
positive union of triangles $T_0(\mathbf z)\in {\mathcal B}$; clearly, its Euler measure
is $k/4$. Now, the map ${\widetilde e}=e-\frac{O_1}{2}-\frac{O_m}{2}\colon \doms(\mathbf x,\mathbf y,\mathbf z)\to \frac{1}{4}\Z$ factors
through the space of ${\mathcal A}$-orbits in ${\mathcal B}$,
${\mathcal B}/{\mathcal A}$, since ${\widetilde e}$ vanishes on
${\mathcal A}$. Since $\psi$ and $T_0(\mathbf z)$ have the same
image in ${\mathcal B}/{\mathcal A}$, and ${\widetilde e}(T_0(\mathbf z))=k/4$, it follows
that ${\widetilde e}(\psi)=k/4$, which is Equation~\eqref{eq:EulerMeasure}.
\end{proof}
The relevance of Lemma~\ref{lem:EulerMeasure} stems from Sarkar's
computation of the Maslov index of a triangle (or more generally, a
Whitney $n$-gon)~\cite{SarkarMaslov}. One of his formulas, generalizing a
theorem of Rasmussen when $n=2$~\cite{RasmussenThesis}, gives
\begin{equation}
\label{eq:IndexOfNgon}
\Mas(\psi)=2e(\psi)-\frac{k(n-2)}{2}+\#(\psi\cap \Delta).
\end{equation}
Here, $\#(\psi\cap\Delta)$ denotes the algebraic intersection number of
$\psi$ with the big diagonal in $\Sym^k(\HD)$.
\begin{lemma}
\label{lem:MasZeroTriangConn}
If $\psi\in\doms(\mathbf x,\mathbf y,\mathbf z)$ has $\Mas(\psi)=0$, and $\psi$ has a
pseudo-holomorphic representative, then $\mathbf x$, $\mathbf y$, and $\mathbf z$ are
triangularly connected, in the sense of
Definition~\ref{def:TriangularlyConnected}.
\end{lemma}
\begin{proof}
We begin with some remarks. Let $\mathbf x=\{x_1,\dots,x_k\}$,
$\mathbf y=\{y_1,\dots,y_k\}$, $\mathbf z=\{z_1,\dots,z_k\}$. Suppose there are $k$
triangles $\psi_i\colon T\to \HD$ with corners at $x_i$, $y_i$, and
$z_i$, and edges mapping to some $\alpha$-curve, some $\beta$-curve,
and some $\gamma$-curve. We can then form
$\psi=\psi_1\times\dots\times \psi_k\colon T \to \Sym^k(\HD)$, to get
a Whitney triangle. We call such
Whitney triangles {\em decomposable}. Not every Whitney triangle is
decomposable, but Whitney triangles which are disjoint from the
diagonal $\Delta$ are.
Now, combining Equation~\eqref{eq:EulerMeasure} with
Equation~\eqref{eq:IndexOfNgon}, we see that for any triangle,
\[ \Mas(\psi)=O_1(\psi) + O_m(\psi) + \#(\psi\cap \Delta).\]
If $\psi$ has a pseudo-holomorphic representative, all three of the
terms on the right are non-negative. Thus, if $\Mas(\psi)=0$,
then $\#(\psi\cap \Delta)=0$; in fact, $\psi$ is disjoint from
the diagonal, and hence it is decomposable. Next, observe that
since $O_1(\psi)=O_m(\psi)=0$, each factor $\psi_i\colon T\to \HD$
in the decomposition of $\psi$ maps to
$\HD\setminus\{O_1,O_m\}$; i.e. the locus where the quotient map
$\R^2\to \HD=\R^2/{\mathbb G}_m$ is a covering space. Thus, we can
lift each factor $\psi_i$ to obtain maps $\{{\widetilde \psi}_i\colon
T \to \R^2\}_{i=1}^k$, showing that $\mathbf x$, $\mathbf y$ and $\mathbf z$ are triangularly connected.
\end{proof}
\begin{lemma}
\label{lem:DiagonalCount}
Suppose that $\mathbf x$, $\mathbf y$, and $\mathbf z$ are triangularly connected Heegaard
states, let $\psi\in\doms(\mathbf x,\mathbf y,\mathbf z)$ be the domain in $\HD$
connecting them, and
choose triangles $\{\widetilde \psi_i\colon T \to \R^2\}_{i=1}^k$
whose projection to $\HD$ gives $\psi\in\doms(\mathbf x,\mathbf y,\mathbf z)$.
Then,
\begin{equation}
\label{eq:DiagonalCount}
\#(\psi\cap \Delta) =
\OneHalf \sum_{i,j=1}^k \sum_{g\in {\mathbb G}_m} \#\left(\left({\widetilde \psi}_i
\times g\cdot {\widetilde \psi}_j\right)\cap \Delta\right),
\end{equation}
where the right-hand-side is to be interpreted as an intersection
with $\Delta\subset \Sym^2(\R^2)$.
\end{lemma}
\begin{proof}
The intersection of $\psi=\bigtimes_{i=1}^k \psi_i$ with $\Delta$ in
$\Sym^k(\HD)$ is identified with ${\mathbb G}_m$-orbits of
data $g_1,
g_2\in{\mathbb G}_m$, $\tau_1,\tau_2\in {\mathbb R}^2$ with
$g_1\cdot {\widetilde \psi}_i(\tau_1)=g_2\cdot {\widetilde
\psi}_j(\tau_2)$, where the pair of triples
$(g_1,\tau_1,{\widetilde \psi}_i)$ and $(g_2,\tau_2,{\widetilde
\psi}_j)$ is thought of as unordered. Breaking symmetry, we can
think of this as half the count of $\tau_1,\tau_2\in \R^2$ and $g\in
{\mathbb G}_m$ and pairs $i$, $j$, so that
${\widetilde\psi}_i(\tau_1)=g\cdot {\widetilde\psi}_j(\tau_2)$. This
is the count on the right-hand-side of
Equation~\eqref{eq:DiagonalCount}.
\end{proof}
\begin{lemma}
\label{lem:MasZeroTriangleMult}
Suppose that $\mathbf x$, $\mathbf y$, and $\mathbf z$ are three triangularly connected
Heegaard states, corresponding to the lifted partial permutations
$({\widetilde f},{\widetilde S})$, $({\widetilde g},{\widetilde
T})$, and $({\widetilde g}\circ {\widetilde f},{\widetilde S})$.
Let $\psi\in \doms(\mathbf x,\mathbf y,\mathbf z)$ be the corresponding domain
in $\Habc$. Suppose moreover that
$O_1(\psi)=O_m(\psi)=0$. Then,
\begin{equation}
\label{eq:TriangleMaslov}
\cross({\widetilde f},{\widetilde S})+
\cross({\widetilde g},{\widetilde T})
-\cross({\widetilde g}\circ {\widetilde f},{\widetilde S})
= \Mas(\psi).
\end{equation}
\end{lemma}
\begin{proof}
Each strand in ${\widetilde f}$ corresponds to a Heegaard state for $\Hab$;
each strand in ${\widetilde g}$ corresponds to a Heegaard state for $\Hbc$,
and each composite strand corresponds to a triangle in $\Habc$,
which in turn corresponds to a ${\mathbb G}_m$-orbit of a triangle in $\R^2$.
Up to the action of ${\mathbb G}_m$, we can assume that the triangle has its
$\alpha$-$\gamma$ vertex above its $\beta$-line, as in Figure~\ref{fig:MaslovComputation}.
\begin{figure}[ht]
\input{MaslovComputation.pstex_t}
\caption{\label{fig:MaslovComputation} {\bf{Maslov index of triangles.}}}
\end{figure}
Extending the lines of each triangle, we see that each triangle splits
the plane into $7$ regions, one of which is the (compact) triangle
itself. The left-hand-side of Equation~\eqref{eq:TriangleMaslov} can
be interpreted as a sum, over each triangle $T$ of a count of all the
other components of type $X$, $Y$, or $Z$, in the seven regions,
counted with multiplicity $\pm 1$ or $0$, as indicated in
Figure~\ref{fig:MaslovComputation}. This count, in turn, can be
organized according to all other triangles $T'$ connecting three
auxiliary generators, taken with given multiplicity.
Note that $T'$ is the image of $T$ under an affine transformation
$L$ of $\R^2$ which is a composition of a real rescaling (which
includes a possible $180^\circ$ rotation) composed with a
translation. Note that $L$ is either a translation or it has a unique
fixed point.
We claim that $T\cup T'$ hits the diagonal precisely when $L$ has a
fixed point, and that fixed point is contained in the interior of $T$
(or $T'$); moreover, in that case $\#((\psi_T\times \psi_{T'})\cap
\Delta)=2$. To see this, note that the map to $\Sym^2(\R^2)$ is
modeled on the map $t\mapsto \{t,L(t)\}$, which in turn corresponds to
the monic polynomial $z^2 -(t+L(t)) z +t L(t)$, whose discriminant is
$(t+L(t))^2-4 tL(t)=(t-L(t))^2$, which vanishes to order $2$ at the fixed
point of $L$.
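This discriminant identity is elementary and can be sanity-checked numerically; in the Python snippet below, the particular affine map $L$ is a made-up example of a real rescaling composed with a translation.

```python
# Hypothetical affine map L(t) = a*t + b: a real rescaling (here a = -0.5,
# which includes a 180-degree rotation) composed with a translation.
def L(t, a=-0.5, b=2.0):
    return a * t + b

for t in [-3.0, 0.0, 1.7, 42.0]:
    lhs = (t + L(t)) ** 2 - 4 * t * L(t)  # discriminant of z^2 - (t+L(t))z + tL(t)
    assert abs(lhs - (t - L(t)) ** 2) < 1e-9

# The discriminant (t - L(t))^2 vanishes to order 2 at the fixed point
# t* = b / (1 - a) of L:
t_star = 2.0 / 1.5
assert abs(t_star - L(t_star)) < 1e-12
```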
We now verify that the contribution of the triangle pair $T,T'$ to the
left-hand-side of Equation~\eqref{eq:TriangleMaslov} coincides with
this intersection number with the diagonal, by looking at the possible
cases for the two triangles. After possibly switching roles of $T$ and
$T'$, we can assume that the number of vertices of $T'$ in $T$ is
greater or equal to the number of vertices of $T$ in $T'$. There are
the following possibilities:
\begin{itemize}
\item $T$ contains no vertices of $T'$.
In this case, either $T'$ and $T$ are disjoint, or
they can overlap as pictured in
the first picture of Figure~\ref{fig:MaslovComputation5},
in which case the local contribution is $2$.
\item $T$ contains exactly one vertex of $T'$.
This can happen in two inequivalent ways: either $T'$ is a translate of $T$,
in which case the contribution is $0$;
or $T'$ is not obtained as a translate of $T$, in which case
the local contribution is $2$, as can be seen by considering the six cases
in the second picture of Figure~\ref{fig:MaslovComputation5}.
\item $T$ contains exactly two vertices of $T'$ in its interior, as
shown in the three cases on the third picture of
Figure~\ref{fig:MaslovComputation5}.
\item $T$ contains all three vertices of $T'$ in its interior,
as shown in the fourth picture of Figure~\ref{fig:MaslovComputation5},
in which case the local contribution is $2$.
\end{itemize}
\begin{figure}[ht]
\input{MaslovComputation5.pstex_t}
\caption{\label{fig:MaslovComputation5} {\bf{Triangle pairs.}} Here
are the various combinatorial ways two triangles can interact, so
that their contributions to (both sides of)
Equation~\eqref{eq:TriangleMaslov} are non-zero. (Think of $T$ as
the large triangle; various choices of the other triangle $T'$ are
indicated by the smaller triangles.)}
\end{figure}
In view of Lemma~\ref{lem:DiagonalCount}, we have verified that
\[
\cross({\widetilde f},{\widetilde S})+ \cross({\widetilde
g},{\widetilde T}) -\cross({\widetilde g}\circ {\widetilde
f},{\widetilde S}) = \#(\Delta\cap \psi).\]
Combining Equations~\eqref{eq:EulerMeasure} and~\eqref{eq:IndexOfNgon},
we see that
\[ \Mas(\psi)=O_1(\psi)+O_m(\psi)+\#(\psi\cap \Delta);\]
since $O_1(\psi)=O_m(\psi)=0$, Equation~\eqref{eq:TriangleMaslov} follows.
\end{proof}
\begin{prop}
\label{prop:TrianglePong}
Under the identification from Proposition~\ref{prop:IdentifyComplexes},
the triangle map corresponds to composition in the pong algebra.
\end{prop}
\begin{proof}
The triangle map counts index zero triangles, which correspond to
triangularly connected lifted partial permutations by
Lemma~\ref{lem:MasZeroTriangConn}.
Lemma~\ref{lem:MasZeroTriangleMult} in turn identifies the counts of
index zero triangles with compositions in the pong algebra,
in view of Lemma~\ref{lem:NoOuterUs}. Moreover,
Equation~\eqref{eq:WeightEquation} identifies the coefficients of
those counts.
Implicit in the above identification is the identification
$(\HD,\alphas,\betas,\{O_1,\dots,O_m\})\cong
(\HD,\alphas,\betas,\{O_1,\dots,O_m\})$, which can be viewed as the
identification coming from the Liouville flow. (Compare
Equation~\eqref{eq:ImageUnderLiouville}.) The corresponding map on
the wrapped complexes was denoted $\sigma$ in
Equation~\eqref{lem:TriangleMap}. To promote this to a chain map, in
general, following~\cite{AbouzaidCriterion}, one must use a
continuation map interpolating between the complex structure used on
$\HD$, and its pull-back under the Liouville flow.
For the Heegaard diagram $(\HD,\alphas,\betas,\{O_i\}_{i=1}^m)$,
though, the continuation map between any two admissible paths of
almost-complex structures is simply the identity map, since the
continuation map is supported on non-negative domains with index zero
and, following Lemma~\ref{lem:PositiveDomains}
(i.e. Property~\ref{P:ResolveCrossings} combined with
Equation~\eqref{eq:MaslovDifference}), the only such domain is the
constant domain (with $\mathbf x=\mathbf y$).
\end{proof}
\section{The wrapped Fukaya category}
\label{sec:Wrap}
\subsection{The symmetric product}
We start with some of the geometric setup, as explained
in~\cite{AbouzaidSeidel,CieliebakEliashberg,AurouxBeginner}.
\begin{defn}
A {\em Liouville domain} is a $2n$-dimensional manifold with
boundary $M$, equipped with a one-form $\lambda$ such that
$\omega=d\lambda$ is symplectic, and the vector field $Z$, called
the {\em Liouville vector field}, characterized by
$i_Z\omega=\lambda$, points strictly outwards along $\partial M$.
\end{defn}
A special case of a Liouville domain is a Stein manifold, which is a
complex manifold $(V,J)$, equipped with a proper, smooth function
$\phi\colon V \to \R$, which is {\em strictly plurisubharmonic};
i.e. for which the two-form $\omega=-dd^{\C}\phi$ is symplectic and
$J$-compatible. (Here, $d^\C\phi=d\phi\circ J$.) In this case,
$\lambda=-d^{\C}\phi$; i.e. $d\phi=\lambda\circ J$.
Our basic example is the following. Let $A$ denote the infinite
cylinder $\R\times S^1\cong \C\setminus 0$. Let $(t,\theta)$ denote
the coordinates with respect to the parameterization $\R\times S^1$;
so that the isomorphism $\R\times S^1\cong \C\setminus 0$ is given by
$(t,\theta)\mapsto e^{t+i\theta}$; sometimes we write $r=e^{t}$. The
function $\log(r)^2$ is strictly pluri-subharmonic, with $\omega=2\,
dt\wedge d\theta$. Let
$H=r^2$. The Hamiltonian flow for $H$, written $\Phi\colon \R\times A \to A$, is
given by
\begin{equation}
\label{eq:HamiltonianFlow}
\Phi(s,t,\theta)=(t,\,\theta+2st).
\end{equation}
The Liouville flow, written $\Psi\colon \R\times A \to A$, is given by
\begin{equation}
\label{eq:LiouvilleFlow}
\Psi(s,t,\theta)=(t e^s,\theta).
\end{equation}
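As a quick sanity check on Equation~\eqref{eq:LiouvilleFlow}: with $\omega=2\,dt\wedge d\theta$ and the primitive $\lambda=2t\,d\theta$ (our own computation of $-d^{\C}\phi$ for $\phi=t^2$), the Liouville vector field is $Z=t\,\partial_t$, and the stated flow integrates it. A finite-difference check in Python, purely illustrative:

```python
import math

def liouville_flow(s, t, theta):
    # Psi(s, t, theta) = (t * e^s, theta), as in the displayed formula.
    return (t * math.exp(s), theta)

# The flow should integrate the Liouville vector field Z = t d/dt:
# the s-derivative of the first coordinate equals that coordinate itself.
t0, theta0, s, h = 1.3, 0.7, 0.5, 1e-6
t1, _ = liouville_flow(s, t0, theta0)
t2, _ = liouville_flow(s + h, t0, theta0)
assert abs((t2 - t1) / h - t1) < 1e-4  # d/ds (t0 * e^s) = t0 * e^s
```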
To a Liouville domain Abouzaid and Seidel associate an $A_\infty$
category, the {\em wrapped Fukaya category}. Objects are Lagrangian
submanifolds $L\subset M$ that intersect $\partial M$ transversely,
with the property that $\lambda|_L\in \Omega^1(L)$ is exact, and
$\lambda|_L$ vanishes to infinite order along the boundary $\partial
L=L\cap \partial M$.
We will be considering Lagrangians in the symmetric product of $\C$,
$\Sym^k(\C)$. There is a quotient map $\pi\colon \C^k\to \Sym^k(\C)$,
and also there is a diffeomorphism $\C^k\cong\Sym^k(\C)$.
The
relationship between Lagrangians in a symmetric product of a curve
and the ${\mathfrak S}_k$-invariant Lagrangians in the $k$-fold
Cartesian product is unclear; but there is a nice bridge offered by
work of Perutz~\cite{Perutz}, building on work of
Varouchas~\cite{Varouchas}, who constructs a new symplectic form on
the symmetric product that agrees with the group-invariant symplectic
structure on the Cartesian product on an open set.
The case at hand is a
particularly simple, local version. (See Proposition~\ref{prop:IdentifyLagrangians} below.)
In the interest of concreteness, we find it convenient to have some explicit
parameterizations. Consider the
map from the infinite cylinder
$A=\C\setminus\{0\}\cong \R \times S^1=\R\times(\R/2\pi\Z)$ to $\C$,
specified by
\[ p(z)=\OneHalf\left(z+\frac{1}{z}\right).\]
This map has the following properties:
\begin{itemize}
\item $p$ is a branched double-cover, with two branch points
at $1$ and $-1$.
\item $p$ is proper.
\item The image under $p$ of the circle $\{0\}\times S^1$ is the interval
$[-1,1]\subset \C$.
\end{itemize}
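These properties are easy to check directly: $p(e^{i\theta})=\cos\theta$, $p(z)=p(1/z)$, and $p'(z)=\OneHalf(1-z^{-2})$ vanishes exactly at $\pm 1$. A small numerical verification in Python:

```python
import cmath
import math

def p(z):
    return 0.5 * (z + 1 / z)

# Double-cover symmetry: p(z) = p(1/z).
assert abs(p(2 + 1j) - p(1 / (2 + 1j))) < 1e-12

# The unit circle maps onto [-1, 1]: p(e^{i theta}) = cos(theta).
for theta in (0.0, 0.9, 2.2, 3.1):
    w = p(cmath.exp(1j * theta))
    assert abs(w.real - math.cos(theta)) < 1e-12 and abs(w.imag) < 1e-12

# p'(z) = (1 - 1/z^2)/2 vanishes exactly at the branch points z = 1 and z = -1.
dp = lambda z: 0.5 * (1 - 1 / z ** 2)
assert abs(dp(1)) < 1e-12 and abs(dp(-1)) < 1e-12
```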
Given a $k$-element subset $\mathbf x\subset S^1\setminus \{\pm
1\}$, we can view $\R\times \mathbf x$
as a subset of $\Sym^k(A)$. This subset is a smooth submanifold,
whose image under $\Sym^k(p)$ is a smooth submanifold
of $\Sym^k(\C)$. We denote this subspace $\Lambda_{\mathbf x}$.
More generally, fix ${\mathbf x}$ as above and an element $\phi\in \R^{\geq 0}$.
There is a submanifold of $\Sym^k(A)$ consisting of elements of
the form $\{(t,e^{i\phi\cdot t/2}\cdot\mathbf x)\}_{t\in\R}$, which induces
a submanifold $\Lambda_{\mathbf x}^{\phi}\subset \Sym^k(\C)$.
Clearly, $\Lambda_{\mathbf x}^{0}=\Lambda_{\mathbf x}$.
Let $t\colon A \to \R$ be projection to the first coordinate or, equivalently,
$z\in \C\setminus\{0\}\mapsto \log|z|$.
Consider the function $\delta\colon \C^k\to \R^{\geq 0}$ defined by
\[ \delta(z_1,\dots,z_k)=\min(\min_i t(z_i),\min_{i\neq j} |z_i-z_j|),\]
which descends to a continuous function $\Sym^k(\C)\to \R$,
so that
\[ \delta^{-1}(0)=\Delta\cup \left(S^1\times \Sym^{k-1}(A)\right).\]
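A direct transcription of $\delta$ into Python (illustrative only; we simply evaluate the displayed formula, leaving aside the question of the domain on which $\delta$ is non-negative):

```python
import cmath
import math
from itertools import combinations

def delta(points):
    """delta(z_1,...,z_k) = min(min_i t(z_i), min_{i != j} |z_i - z_j|),
    with t(z) = log|z|, transcribed from the displayed formula."""
    t_part = min(math.log(abs(z)) for z in points)
    pair_part = min((abs(zi - zj) for zi, zj in combinations(points, 2)),
                    default=math.inf)
    return min(t_part, pair_part)

# delta vanishes when some point lies on the unit circle ...
assert abs(delta([cmath.exp(0.3j), 2 + 0j])) < 1e-12
# ... or when two points collide (the diagonal), and is positive otherwise.
assert delta([2 + 0j, 2 + 0j]) == 0.0
assert delta([3 + 0j, 2j]) > 0
```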
The following is an adaptation of a theorem of Perutz~\cite{Perutz};
see also Varouchas~\cite{Varouchas}.
\begin{prop}
\label{prop:IdentifyLagrangians}
Given any bounded open set $W\subset \Sym^k(\C)$ containing
$\Sym^k[-1,1]$ and any $\eta>0$,
there is a smooth plurisubharmonic
function $\psi\colon W\to \R$ with the following properties:
\begin{itemize}
\item
Given any $k$-element subset $\mathbf x\subset (S^1\setminus\{\pm 1\})$
with $\delta(\mathbf x)\geq\eta$,
the intersection of $W$ with the submanifold
$\Lambda_{\mathbf x}\subset \Sym^k(\C)$
is Lagrangian
with respect to the symplectic structure $d d^{\C}\psi$.
\item Given $s\geq 0$, there is an exact Hamiltonian diffeomorphism
$\Phi^s\colon W\to W$
with the property that
$\Phi^{s}(\Lambda_{\mathbf x}\cap W)=\Lambda^{s}_{\mathbf x}\cap W$.
\end{itemize}
\end{prop}
\begin{proof}
This follows easily from Varouchas's ``Lemme
Principal''~\cite{Varouchas}, which we state in a slightly
simplified form. Given
the data:
\begin{itemize}
\item
Open subsets $U$, $V$, $W$, and $X$ in $\C^n$
so that $U$, $V$, and $W$ are bounded,
with ${\overline
U}\subset V$, ${\overline V}\subset W$, and ${\overline W}\subset X$;
\item a continuous, strictly pluri-subharmonic function $\phi\colon X\to \R$
so that $\phi|_{X\setminus U}$ is smooth.
\end{itemize}
there is a smooth, strictly pluri-subharmonic function $\chi\colon W\to \R$
so that
\[ \phi|_{W\setminus (V\cap W)}=\chi|_{W\setminus (V\cap W)}.\]
The function $t^2\colon A^k\to \R$ given by $t^2(z_1,\dots,z_k)=\sum_{i=1}^k
t(z_i)^2$ is smooth. Let $X=(\C\setminus \{0\})^{\times k}$,
and let $\Pi\colon \C^{\times k}\to \Sym^k(\C)$ be the quotient map.
As
in~\cite{Varouchas}, since $t^2\colon A \to \R$ is a smooth,
strictly pluri-subharmonic function and $\Pi\circ p^{\times k}\colon
A^k\to \Sym^k(\C)$ is a branched cover, the push-forward $\psi=(\Pi\circ p^{\times k})_*(t^2)$ is a
continuous, strictly pluri-subharmonic function on $X$.
Given $W$ as in the statement of the proposition, apply Varouchas' lemma to $\psi$,
$U=\delta^{-1}(0,\frac{\eta}{3})$, and $V=\delta^{-1}(0,\frac{\eta}{2})$. The exact
Hamiltonian diffeomorphism is associated to the Hamiltonian function $\chi$ coming from the lemma. Since
$\chi$ agrees with $t^2$ over the complement of $V$, it is easy to
see that the Hamiltonian flow of $\chi$ preserves $\delta$ over that
set. The second point now follows readily; see Equation~\eqref{eq:HamiltonianFlow}.
\end{proof}
We fix the following data:
\begin{itemize}
\item integers $m$ and $k$ with $0<k<m$
\item $m$ basepoints
$-1=O_1,O_2,\dots,O_{m-1},1=O_m$
so that there is a positively oriented arc in $S^1$
from $O_{i+1}$ to $O_i$ containing no other $O_j$.
\item $m-1$ additional points $p_1,\dots,p_{m-1}$,
so that $p_i$ is on the arc from $O_{i+1}$ to $O_i$.
\item $\delta(p_i,p_{i+1})\geq \frac{2}{m-1}$.
\end{itemize}
Choose $W\subset \Sym^k(\C)$ as in
Proposition~\ref{prop:IdentifyLagrangians}, and let $\psi$ be the
function supplied by that proposition. There are
$\binom{m-1}{k}$ Lagrangians $\Lambda_\mathbf x\cap
W$, associated to the $k$-element subsets of
$\{p_1,\dots,p_{m-1}\}$. We will be considering these as our basic
objects in the wrapped Fukaya category of $W$.
Let $\Psi^c\colon W\to W$ be the time $\log(c)$ flow of the Liouville vector field
induced from $\psi$.
It is an easy consequence of Equation~\eqref{eq:LiouvilleFlow}
that
\begin{equation}
\label{eq:ImageUnderLiouville}
\Psi^c(\Lambda_\mathbf x^\phi)=
\Lambda_\mathbf x^{\phi/c}.
\end{equation}
\subsection{The relative Fukaya category}
In our case, the symplectic manifold $W$ is equipped also with $m$
divisors, of the form $\{O_i\}\times \Sym^{k-1}(\HD)$. Correspondingly,
as in~\cite{HolDisk} (see also~\cite{PerutzSheridan} for a
general construction), the Floer complexes are to be thought of as
modules over a polynomial algebra $\Field[v_1,\dots,v_m]$.
Specifically, for $L_1=\Phi_H(\Lambda_{\mathbf x_1})$, $L_2=\Lambda_{\mathbf x_2}$,
the complex $\CF(L_1,L_2)$ is a module over $\Field[v_1,\dots,v_m]$
freely generated by $L_1\cap L_2$, with differential determined by
\[ \partial \mathbf x= \sum_\mathbf y \sum_{\{\phi\in\ModFlow(\mathbf x,\mathbf y)\mid
\Mas(\phi)=1\}} \#\ModFlow(\phi) \cdot \mathbf y \cdot v_1^{n_{O_1}(\phi)}\cdots
v_m^{n_{O_m}(\phi)}.\] Here, $n_{O_i}(\phi)$ denotes the
(non-negative) algebraic intersection number of $\phi$ with the
divisor $\{O_i\}\times \Sym^{k-1}(\HD)$. The moduli space
$\ModFlow(\mathbf x,\mathbf y)$ is to be taken with respect to a suitable
perturbation of the Floer equation.
\section{Introduction}
\label{introduction}
Stock trading is about buying and selling stocks in the financial market to maximize the profit earned from the investment; profit-making can be achieved by exploiting price fluctuations in the market: a profit is made when an investor sells stocks at a higher price than the price at which they were bought. However, the price fluctuations are usually so high and the environment so dynamic that they make trading optimization hard to achieve. As a result, the situation in the stock market obliges investors to use an intelligent policy that can adapt itself to the dynamic and non-deterministic idiosyncrasies of stock markets.
\par Portfolio management is the decision-making process of continuously reallocating an amount of funds into different financial investment products, aiming to maximize the return while restraining the risk \cite{haugen1988}. Portfolio management systems often use price history fluctuations of one or multiple stocks. However, besides the price history, there is other information in the stock markets to take advantage of, including textual, macroeconomic, knowledge-graph, image, fundamental, and analytics data \cite{jiang2021}. Many models, such as ANN, DNN, PCA, RNN, CNN, LSTM, attention models, and others, have been used in recent studies to address the stock market prediction problem \cite{jiang2021}. These models have been used in multiple markets, including the US, China, Japan, Korea, India, Brazil, and others. Some work has been done on financial time series forecasting using deep models, covering stock price, index, commodity price, volatility, bond price, forex price, cryptocurrency, and trend forecasting \cite{sezer2020}.
\par The advent of reinforcement learning (RL) in financial markets is driven by several advantages inherent to this field of artificial intelligence. In particular, RL allows the combination of the ``prediction'' and the ``portfolio construction'' tasks in one integrated step, thereby closely aligning the machine learning problem with the objectives of the investor \cite{fischer2018reinforcement}. Three main approaches of deep RL in finance have been elaborated and compared, indicating that the actor-only approach is the best-suited one for financial markets \cite{fischer2018reinforcement}. Much research has been done on using RL and deep RL in finance \cite{neuneier1997enhancing, cumming2015investigation, watts2015hedging, du2016algorithm, charpentier2021reinforcement, taghian2022learning}. Some other efforts have been made to address the portfolio management problem using RL \cite{jiang2017, liang2018adversarial, jeong2019improving, filos2019reinforcement, ye2020reinforcement, aboussalah2020continuous}. Solving the portfolio management problem using modern deep reinforcement learning approaches constitutes only a small part of the problems addressed in stock markets; hence, side resources have yet to be fully taken advantage of in these frameworks. An essential source of information in the field of finance is experts' signals. Some research has been made to exploit experts' advice and signals in the field of finance, especially for portfolio management \cite{hill2011, bhattacharya2013, he2021, zhang2017, yang2020, yang2020boosting, wu2020}. We will thoroughly investigate them in the next section.
\par In this paper, alongside the price history, we used signals from experts to feed into our deep RL network. Due to the dynamic and non-deterministic features of the stock markets and the fact that the profit is determined in the long and mid-term, we employed a deep reinforcement learning framework to model the problem. In this framework, using a reward function, we tried to maximize the accumulated profit in the long term; that is, our goal is to achieve a policy that may not necessarily be profitable in the short or mid-term, but is aimed to be optimal in the long term.
\section{Related Works}
\label{Related}
In the field of experts' opinion aggregation, to achieve a fair and accepted decision, it is a good practice to average the opinions of experts who have access to different resources. We can leverage these signals to help get an accurate prediction of stocks' future returns.
\par A crowd opinion aggregation model, namely CrowdIQ, has been proposed, which has a differential weighting mechanism and accounts for individual dependence \cite{du2017}. A decay function was introduced to address the dependency problem in individual judgments and give different weights to judges based on their previous judgments. Those judges who make original judgments were given higher weights than those who follow others' judgments. CrowdIQ has been evaluated compared to four baseline methods using real data collected from StockTwits. The results showed that CrowdIQ significantly outperformed all baseline methods in terms of both a quadratic prediction scoring method and simulated investment returns. The research sample is relatively small, and more stocks and postings are needed to improve the generalizability of the results.
\par A genetic algorithm approach that can be used to identify the appropriate vote weights for users based on their prior individual voting success has been proposed \cite{hill2011}. The data are user-generated stock pick votes from a widely used online financial newsletter. This method allowed them to identify and rank “experts” within the crowd, enabling better stock pick decisions than the S\&P 500. They showed that the online crowd outperformed the S\&P 500 in two test periods, 2008 and 2009. The main disadvantage is that the period covered by the data set was too short to be ideal for testing and training. Also, during the testing period, the market was abnormal, meaning that the result was not indicative of future performance.
\par A persuasion game between a set of experts and a judge is applied to find the efficiency of decision-making \cite{bhattacharya2013}. Each expert is identified by her quality and agenda, meaning they can be biased. This article tries to find the relation between \emph{the conflict of experts} and the quality of decision-making. The finding suggests that, first, employing better quality experts does not necessarily lead to better decision-making; second, it is optimal for the judge to choose experts whose agendas are either extremely opposite or aligned; and third, it may be optimal to employ two experts with the same extreme agenda rather than experts with completely opposite ones.
\par An experiment was made to answer the question of whether the wisdom of crowds helps or hinders knowledge of factual information \cite{he2021}. The results show that policymakers tend to rely heavily on majority rule as the preferred heuristic, and also that there is a limit to the amount of information about the opinions of others that they can effectively process. In addition, information about others’ answers improves performance on easy questions but tends to harm performance on difficult ones, showing the downside of this approach.
\par Conventionally, there are two main approaches to stock portfolio selection: the mean-variance model and capital growth theory (CGT). The mean-variance model focuses on creating a balance between return value and risk in the portfolio of a single period. However, CGT focuses on maximizing the expected growth rate of the portfolio, that is, the expected logarithmic return of the multi-period or sequential portfolio. Online stock selection is a problem in which several selection periods are considered, and no statistical hypothesis on the future prices of assets is assumed. The weak aggregating algorithm (WAA) \cite{kalnishkan2008} is an online sequencing algorithm that makes decisions using the weighted average of all expert recommendations and updates each expert's weight based on their previous performance.
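To make the weighted-average idea concrete, here is a minimal illustrative sketch of expert aggregation with exponential weights; the weighting scheme, learning rate, and data below are our own simplification, not the exact WAA update of \cite{kalnishkan2008}.

```python
import math

def aggregate(expert_preds, cumulative_losses, eta=0.5):
    """Aggregate expert predictions by a weighted average, where experts
    with smaller cumulative loss receive exponentially larger weight."""
    weights = [math.exp(-eta * loss) for loss in cumulative_losses]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_preds)) / total

# Two experts: the historically more accurate one dominates the aggregate.
pred = aggregate([1.0, 0.0], cumulative_losses=[0.1, 5.0])
assert 0.9 < pred <= 1.0
```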
\par Based on the WAA algorithm, a method in which expert opinion is used for learning and decision-making for the online stock portfolio selection problem was introduced \cite{zhang2017}. This method assigns a weight to each expert based on their performance in previous rounds. This algorithm is defined in two ways: WAAS, in which an expert recommends a single stock, and WAAC, in which an expert recommends a stock portfolio containing multiple assets. The two defined algorithms are designed to work without considering information from the financial markets. A new algorithm was then defined in which the WAAC is combined with the ancillary information of the financial markets, yielding a new strategy called WAACS \cite{yang2020}. In other research, using the WAA method to collect expert recommendations, a new online portfolio selection strategy called continuous aggregating exponential gradient (CAEG) was proposed \cite{ yang2020boosting, he2020universal, yang2022}. In this method, first, a set of exponential gradient strategies with different learning rates is considered as experts, and then the stock portfolio for the next period is calculated using the WAA method to aggregate all the recommendations.
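For reference, one update of an exponential-gradient portfolio strategy, the kind of expert that CAEG aggregates, can be sketched as follows (an illustrative Helmbold-style multiplicative update; the learning rate and price relatives are made up):

```python
import math

def eg_update(b, x, eta=0.05):
    """One exponential-gradient portfolio update: b is the current
    portfolio (on the simplex), x the price-relative vector for the period."""
    dot = sum(bi * xi for bi, xi in zip(b, x))
    raw = [bi * math.exp(eta * xi / dot) for bi, xi in zip(b, x)]
    total = sum(raw)
    return [r / total for r in raw]

# The update stays on the simplex and tilts toward the better-performing asset.
b = eg_update([0.5, 0.5], [1.1, 0.9])
assert abs(sum(b) - 1.0) < 1e-12 and b[0] > 0.5 > b[1]
```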
\par A novel arbitrage-based approach in which multiple prediction models are dynamically combined to obtain predictions is presented in \cite{cerqueira2019}. Arbitrage is a meta-learning approach that combines the output of experts based on predictions of the loss they will suffer. It is argued that this meta-learning approach is useful to better capture recurring changes in the environment. This proposal also includes a sequential rebalancing approach to model dependencies between experts. \cite{mcandrew2021} reviewed the current literature on the aggregation of predictions obtained from peers. They have compiled common terminology, aggregation methods, and forecast performance metrics. They identified four key themes that recur in the combined forecasting literature: (a) using human intuition to facilitate statistical forecasting when data is sparse and changing rapidly, (b) involving experts because of their role as decision-makers, (c) using more straightforward aggregation models to combine prediction densities and more complicated models to combine point predictions, and (d) a lack of experimental design and comparative measurements in many manuscripts.
\par Given the positive impact of two sources, stock analysts' opinions and information from collective wisdom (social network users), \cite{eickhoff2016} examine the relationship between them and its impact on predictions. The important issue is that stock analysts, individuals who are paid to provide regularly updated opinions about specific assets, may face restrictions that influence their opinions, whereas such influence would not be problematic at the level of social media users. Results suggest that there is no uniform orientation between these two areas: professional analysts can incorporate information before social media users, and if the reasons for changes are known to the public, social networks can react faster. Different effects of analyst attitude and crowd sentiment on stock prices are compared in \cite{wu2020}. They found that the wisdom of the experts and the crowd are positively related to stock prices in the following days. They adopted LightGBM \cite{ke2017lightgbm} to predict the trend of stocks, suggesting that the wisdom of the crowd is more valuable in investing than the wisdom of experts.
\par A tool named SINVLIO was presented based on semantic technologies and fuzzy logic techniques; it recommends investments based on both the psychological aspects of the investor and the traditional financial parameters of investments \cite{garcia2012sinvlio}. Jörg Gottschlich and Oliver Hinz proposed a decision support system design that allows investors to include crowd recommendations in their investment decisions and use them to manage a portfolio \cite{gottschlich2014}. In \cite{li2010}, fuzzy and genetic algorithms were used to solve the financial portfolio management problem. The planning agent can easily access all the intelligent processing agents, including the financial risk tolerance evaluation, asset allocation, stock portfolio selection, interest prediction, and decision aggregation agents. However, this method does not define a suitable mechanism for selecting specialists. Various sources of collective wisdom can be used in investment management. Kumar et al. used the potential of virtual investing communities (VICs) such as MotleyFools CAPS, Sharewise, Minkabu, and the Hollywood Stock Exchange \cite{kumar2020deriving}. The idea of this research is the automatic construction of a stock portfolio that is dynamically adjusted and relies on VIC information. In order to combine expert domain knowledge with collective wisdom, Kovacevich presented a framework that enables ranking and selection of alternatives and quantifies the quality of crowd votes \cite{kovacevic2020crex}. This framework permits weighing the crowd's votes against the experts' knowledge and modeling the trade-off between crowd and expert satisfaction in the final decisions.
\section{Preliminaries}
\label{Preliminaries}
We intend to design a model for the portfolio management problem based on experts' advice aggregation. In this model, an intelligent agent finds major stock market patterns and takes advantage of them in tandem with signals generated by experts to optimize the portfolio selection problem.
\subsection{Mathematical Prerequisites}
Inspired by \cite{jiang2017}, a mathematical model of an RL-based portfolio management problem is presented in this section. The problem is divided into steps in which the agent makes a decision based on the environment; that is, time is segmented into equal parts. In every episode, the agent distributes the capital across different stocks. These episodes continue until they reach a final point. It is important to mention that in this article, we assume every episode is a day long.
The portfolio can hold $m$ assets. The closing prices of all assets constitute the price vector $v_t$ for period $t$. The relative price vector of the $t$-th trading period, $y_t$, is defined as the element-wise division of $v_t$ by $v_{t-1}$:
\begin{displaymath}
y_t := v_t \oslash v_{t-1} = (1,\dfrac{v_{1,t}}{v_{1,t-1}},
\dfrac{v_{2,t}}{v_{2,t-1}},...,
\dfrac{v_{m,t}}{v_{m,t-1}}
)^\intercal.
\end{displaymath}
The elements of $y_t$ are the quotients of the closing prices for each asset during the period. The relative price vector can be used to calculate the change in total portfolio value over a period. If $p_{t-1}$ is the value of the portfolio at the beginning of period $t$, without taking into account the transaction cost,
$$
p_t=p_{t-1} \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}
$$
where $w_{t-1}$ is the portfolio weight vector (henceforth called the portfolio vector) at the beginning of period $t$, whose $i$-th element, $w_{t-1,i}$, is the proportion of asset $i$ in the portfolio after reallocation of capital. The rate of return for period $t$ is defined as
$$
\rho_t:=\frac{p_t}{p_{t-1}}-1=\boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}-1
$$
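As a quick numerical sketch of these definitions (the prices, weights, and starting value below are illustrative, not taken from our data):

```python
import numpy as np

# Closing prices for m = 3 assets over two consecutive periods.
v_prev = np.array([100.0, 50.0, 20.0])
v_curr = np.array([110.0, 45.0, 22.0])

# Relative price vector y_t, with the leading 1 for the cash position.
y_t = np.concatenate(([1.0], v_curr / v_prev))

# Portfolio vector w_{t-1}: cash plus the three assets, summing to 1.
w_prev = np.array([0.1, 0.3, 0.3, 0.3])

p_prev = 1000.0                      # portfolio value at the start of period t
p_t = p_prev * np.dot(y_t, w_prev)   # value at the end of the period
rho_t = p_t / p_prev - 1             # rate of return for period t
```

Here $y_t = (1, 1.1, 0.9, 1.1)^\intercal$, so the portfolio grows by the weighted average of the price relatives.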
In a real scenario, buying or selling assets in a market is not free; the cost typically amounts to the commission fee. Reallocation to a new portfolio shrinks the portfolio value by a factor of $\mu_t$, i.e., $p_t=\mu_t p_t^{\prime}$, where $p_t^{\prime}=p_{t-1}\,\boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}$ is the portfolio value before transaction costs. Taking the transaction cost into account, the rate of return becomes
$$
\begin{aligned}
&\rho_t=\frac{p_t}{p_{t-1}}-1=\frac{\mu_t p_t^{\prime}}{p_{t-1}}-1=\mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}-1 \\
\end{aligned}
$$
and the logarithmic rate of return
$$
r_t=\ln \frac{p_t}{p_{t-1}}=\ln \left(\mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}\right) \\
$$
Thus, the final portfolio value will be
$$
p_{\mathrm{f}}=p_0 \exp \left(\sum_{t=1}^{t_{\mathrm{f}}+1} r_t\right)=p_0 \prod_{t=1}^{t_{\mathrm{f}}+1} \mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}
$$
where $p_0$ is the initial investment amount. The job of a portfolio manager is to maximize $p_f$ for a given time frame.
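The equivalence of the product form and the exponential of summed logarithmic returns can be checked numerically (the per-period factors $\mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}$ below are hypothetical):

```python
import numpy as np

# Per-period growth factors mu_t * (y_t . w_{t-1}) for a 5-period run.
factors = np.array([1.02, 0.99, 1.01, 1.03, 0.98])

p0 = 1000.0
# Final value via the product form ...
p_f_product = p0 * np.prod(factors)
# ... and via the exponential of the sum of log returns; they must agree.
p_f_logsum = p0 * np.exp(np.sum(np.log(factors)))
```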
The transaction remainder factor $\mu_t$ is calculated as follows:
$$
\mu_t=\frac{1}{1-c_{\mathrm{p}} w_{t, 0}}\left[1-c_{\mathrm{p}} w_{t, 0}^{\prime}-\left(c_{\mathrm{s}}+c_{\mathrm{p}}-c_{\mathrm{s}} c_{\mathrm{p}}\right) \sum_{i=1}^m\left(w_{t, i}^{\prime}-\mu_t w_{t, i}\right)^{+}\right]
$$
where $c_p$ and $c_s$ are the commission rates for purchasing and selling stocks respectively. In this article, we considered the sum of the two commission rates as one-tenth of a percent.
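Since $\mu_t$ appears on both sides of this equation, it can be obtained by fixed-point iteration; a minimal sketch (the commission rates, portfolio vectors, and iteration count are illustrative, and convergence is assumed):

```python
import numpy as np

def transaction_remainder(w_prime, w, c_p=0.0005, c_s=0.0005, iters=50):
    """Iteratively solve mu = f(mu) for the transaction remainder factor.

    w_prime: portfolio vector before reallocation (w'_t)
    w:       target portfolio vector after reallocation (w_t)
    Element 0 of each vector is the cash position.
    """
    mu = c_p + c_s  # a rough starting guess; any value in (0, 1] works here
    for _ in range(iters):
        # Total proportion of assets that must be sold, (w'_i - mu w_i)^+.
        sell = np.maximum(w_prime[1:] - mu * w[1:], 0.0).sum()
        mu = (1.0 - c_p * w_prime[0]
              - (c_s + c_p - c_s * c_p) * sell) / (1.0 - c_p * w[0])
    return mu

mu_t = transaction_remainder(
    np.array([0.2, 0.5, 0.3]), np.array([0.1, 0.3, 0.6])
)
```

With the small commission rates used in this article, $\mu_t$ stays very close to (but below) 1.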
\par
We restricted the total number of episodes during which our model is trained. We defined three rules according to which the model stops the ongoing training episode and starts a new one.
\begin{enumerate}
\item Absence of new data (at the end of the training data samples)
\item A $10\%$ decrease in the stock portfolio's value compared to the starting point. The higher this threshold, the higher the model's risk, but it also enables the model to find more forward-looking strategies. This number can be adjusted during training and is set to zero during testing.
\item A $20\%$ decrease in the stock portfolio's value compared to the maximum point of the ongoing episode. This criterion is helpful when the agent achieves a high profit in the current episode but then keeps losing it, which should be stopped at some point. The higher this threshold, the more promising the strategies the agent can find, at the cost of higher risk.
\end{enumerate}
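The three stopping rules above can be sketched as a single termination check (function and variable names are illustrative; the thresholds follow the text):

```python
def should_stop_episode(t, n_samples, value, start_value, peak_value,
                        min_profit=-0.10, max_drawdown=-0.20):
    """Return True if the ongoing training episode should be terminated."""
    out_of_data = t >= n_samples                         # rule 1: no new data
    below_start = value / start_value - 1 <= min_profit  # rule 2: -10% vs. start
    off_peak = value / peak_value - 1 <= max_drawdown    # rule 3: -20% vs. peak
    return out_of_data or below_start or off_peak
```

For example, a portfolio that fell from 1000 to 850 triggers rule 2, while one at 990 with a peak of 1100 triggers none of the rules.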
\begin{figure}[htbp]
\centerline{\includegraphics[scale=.8]{Fig2}}
\caption{Sample price data of a stock during a week}
\label{fig2}
\end{figure}
\section{Data Treatments}
\label{Data}
In this article, we have leveraged two kinds of information which are elaborated in the following sections.
\subsection{Price History}
We used the price history of stocks in the Iran stock market. The data were gathered by a crawler, implemented by one of the authors, from the Securities and Exchange Organization portal. The crawled data include daily prices from 2017 until 2020. The testing phase covers the last six months, and the rest of the data is used for training. Every row in the dataset consists of the open, high, close, and low prices and the total traded volume of a single stock during a single day. Figure \ref{fig2} shows an example of one week's data for a particular stock. Data for 54 stocks have been collected and are used to train and test the model.
\par
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.8]{Fig3}}
\caption{Example of stock price over time}
\label{fig3}
\end{figure}
A statistical and analytical review of the stock price data shows that the data is non-stationary; as shown in figure \ref{fig3} for one of the stocks, neither the mean nor the variance is stable over time.
\par
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.8]{Fig4}}
\caption{Correlation of a stock's price over time}
\label{fig4}
\end{figure}
In addition, examination of the correlation of the data shows that the autocorrelation drops to approximately zero at a lag of about 200 days, which informs the time frame used for training the model. Figure \ref{fig4} shows the autocorrelation of the stock in figure \ref{fig3}.
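This lag-correlation check can be sketched as follows (synthetic random-walk prices stand in for the real data, and the estimator is a plain sample autocorrelation):

```python
import numpy as np

def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x)) if lag else 1.0

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=500)) + 100  # synthetic random-walk prices
acf_1 = autocorrelation(prices, 1)              # short lags: strong correlation
acf_200 = autocorrelation(prices, 200)          # the lag examined in the text
```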
\subsection{Experts' Signals}
Experts' signals are obtained from the 100tahlil website and contain signals from 85 experts. All the signals of these experts, covering the 54 desired stocks, add up to a total of 6335 signals, each of which comprises the following information.
\begin{table}
\begin{tabular}{||c c c c c c||}
\hline
Start date & Close date & Expected return & Expected risk & Expert & Symbol \\ [0.5ex]
\hline\hline
6/25/2019 & 7/7/2019 & 48.39 & -14.8 & 49 & 0 \\
\hline
\end{tabular}
\caption{Example of a signal}
\label{table1}
\end{table}
\begin{itemize}
\item Expert Id: the identification of the signal issuer
\item Stock Id: the identification of the stock issued
\item Start date: the beginning date when the signal becomes valid
\item End date: the date when the signal expires
\item Expected return: the maximum return that may be achieved between the start and end dates
\item Expected risk: the maximum loss that may occur between the start and end dates
\end{itemize}
Table \ref{table1} presents an example signal. Each signal, from the time it is activated until the time it ends, results in a profit or loss at any moment based on the fluctuations of the relevant stock price. So that these calculations do not reduce efficiency during model training, the instant profit or loss of each signal is calculated and stored in advance.
\subsection{Managing Missing And Overlapping Data}
During certain periods, some stocks do not have any price on some days because their symbol is closed. To solve this problem, we used the last-valid-price method, in which, whenever there is no price, the nearest previous or next valid price is used.
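This rule is essentially a forward fill with a backward fill for leading gaps; a minimal sketch (using `None` to mark missing prices):

```python
def fill_missing_prices(prices):
    """Replace None gaps with the previous valid price,
    falling back to the next valid price for leading gaps."""
    filled = list(prices)
    last = None
    for i, p in enumerate(filled):            # forward pass: carry last price
        if p is None:
            filled[i] = last
        else:
            last = p
    nxt = None
    for i in range(len(filled) - 1, -1, -1):  # backward pass for leading gaps
        if filled[i] is None:
            filled[i] = nxt
        else:
            nxt = filled[i]
    return filled
```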
\par
For a particular stock, some experts may have provided several signals overlapping over a period of time. We considered the average expected profit and loss in the overlapping periods to deal with this problem.
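The day-wise averaging of an expert's overlapping signals can be sketched as follows (the tuple layout and field names are illustrative):

```python
def merge_overlapping(signals):
    """Average the expected return/risk of overlapping signals.

    Each signal is (start_day, end_day, expected_return, expected_risk);
    per day, the mean over all signals covering that day is taken.
    """
    per_day = {}
    for start, end, ret, risk in signals:
        for day in range(start, end + 1):
            per_day.setdefault(day, []).append((ret, risk))
    return {
        day: (sum(r for r, _ in vals) / len(vals),
              sum(k for _, k in vals) / len(vals))
        for day, vals in per_day.items()
    }

merged = merge_overlapping([(1, 3, 10.0, -5.0), (2, 4, 20.0, -15.0)])
```

Days 2 and 3, covered by both signals, get the averaged values, while the non-overlapping days keep the original ones.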
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.5]{Fig5}}
\caption{Aggregation of the stocks' OHLC prices and experts' signals}
\label{fig5}
\end{figure}
\subsection{Data Aggregation}
We need to prepare the data to make it usable in our model. Our approach uses time series in which stocks' signals and price history are sampled daily and fed to the model. For this purpose, we used a time window of 60 days, in which the model uses the stock's historical prices and signals; we chose this length based on preliminary experiments. As shown in figure \ref{fig5}, for every stock, the open, high, low, and close prices and the total traded volume during a single day are gathered. Every active signal suggested by experts is also added to the aggregated data. Experts' signals include the following information:
\begin{enumerate}
\item expected return
\item expected risk
\item instant return defined as:
$$
\text { InstantReturn }=\frac{\text { CurrentSignalPrice }-\text { StartSignalPrice }}{\text { StartSignalPrice }}
$$
\item Status: the final state of a signal. A value of zero indicates an active signal, while 1 and -1 indicate that the signal ended with profit or loss, respectively.
\end{enumerate}
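The instant-return and status fields can be computed per day as in this sketch (function names are illustrative):

```python
def instant_return(current_price, start_price):
    """Instantaneous return of a signal relative to its start price."""
    return (current_price - start_price) / start_price

def signal_status(day, end_day, final_return):
    """0 while the signal is active; 1 (-1) if it ended with profit (loss)."""
    if day < end_day:
        return 0
    return 1 if final_return > 0 else -1
```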
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.4]{Fig6}}
\caption{The architecture of proposed method}
\label{fig6}
\end{figure}
\section{Proposed Method}
\label{Method}
In this article, we have proposed a convolutional network, as it is a widely used and preferable architecture in the context of the portfolio management problem. In this architecture, 60 days of stock price information (opening price, closing price, low price, high price, and trading volume) are combined into a tensor with 60 days of all experts' signal information and given to the neural network. Every stock is processed independently through two consecutive convolutional layers, using $1\times6$ and $1\times5$ kernels, and then fed to a feed-forward neural network, which uses a tanh activation function to determine the proportions of the stocks as the next portfolio weight vector. As stated before, in a real-world scenario, there are fees to be paid for every transaction, which can add up and negatively impact the profit. To prevent constant changes in the portfolio weight vector, the previous portfolio weight vector is appended to the output of the convolutional layers before being fed to the feed-forward network. There are two kinds of signals: current and future ones. For current signals, we know the instant profit or loss made by the signal, while for future signals, only the expected profit or loss is known. Figure \ref{fig6} demonstrates the proposed architecture.
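As a quick shape check for the two convolutional layers over the 60-day window (assuming valid padding and stride 1, which the text does not state explicitly):

```python
def conv_out_width(width, kernel, stride=1, padding=0):
    """Output width of a 1-D convolution along the time axis."""
    return (width + 2 * padding - kernel) // stride + 1

w1 = conv_out_width(60, 6)   # after the 1x6 kernel: 55 time steps
w2 = conv_out_width(w1, 5)   # after the 1x5 kernel: 51 time steps
```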
\begin{figure}[htbp]
\centerline{\includegraphics[scale=.5]{Fig1}}
\caption{Reinforcement Learning Framework Model }
\label{fig1}
\end{figure}
\section{Experimental Results}
\label{Results}
Three frameworks, PyTorch, Ray, and RLlib, are used to implement the proposed model introduced in the previous section. RLlib is a framework for reinforcement learning modeling and can implement various models; it supports distributed execution with the help of Ray. To use the features of RLlib, the environment and the agent we implement must inherit from the abstractions of this framework so that the different algorithms it provides can be used. For this purpose, we created an environment called TradingEnv, which contains all the necessary facilities to simulate the investment environment and manage the stock portfolio. When the environment receives an action, it applies that action to the stock portfolio, determines the profit earned, and provides it to the agent. To simulate the execution of an action in the environment, an ActionSchema, which calculates the change in the stock portfolio's value at every step, is provided. In addition, a RewardSchema specifies how to calculate the reward at each step. A general overview of the different parts of the reinforcement learning environment is illustrated in figure \ref{fig1}.
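A heavily simplified skeleton of such an environment (the real TradingEnv inherits from RLlib's abstractions and includes transaction costs via the ActionSchema; everything below, including using the log return as the reward, is a sketch):

```python
import numpy as np

class TradingEnvSketch:
    """Minimal portfolio environment: observation = relative prices,
    action = portfolio weight vector, reward = logarithmic return."""

    def __init__(self, relative_prices):
        self.y = np.asarray(relative_prices)  # shape (T, m+1), cash included
        self.t = 0
        self.value = 1.0

    def reset(self):
        self.t, self.value = 0, 1.0
        return self.y[0]

    def step(self, weights):
        growth = float(np.dot(self.y[self.t], weights))  # y_t . w_{t-1}
        self.value *= growth
        reward = float(np.log(growth))                   # log rate of return
        self.t += 1
        done = self.t >= len(self.y)
        obs = self.y[self.t] if not done else None
        return obs, reward, done

env = TradingEnvSketch([[1.0, 1.1, 0.9], [1.0, 1.0, 1.05]])
obs = env.reset()
obs, reward, done = env.step(np.array([0.0, 0.5, 0.5]))
```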
\begin{figure}[tp]
\centerline{\includegraphics[scale=.5]{Fig7}}
\caption{The infrastructure architecture of model execution}
\label{fig7}
\end{figure}
\par
As mentioned, the architecture can also run in a distributed manner. Fig \ref{fig7} describes how it runs distributedly on several servers. Each worker can be a server. In addition, according to the Ray framework's capabilities, it can also use any CPU core as a worker. We can have an arbitrary number of workers, each of which has a copy of the current environment and policy, produces one or more episodes, and then sends them to the leader cluster. Next, the cluster leader updates the original policy and sends this new policy to the workers so that new episodes are generated based on the latest policy.
\par
We chose PPO (Proximal Policy Optimization), considered one of the most effective reinforcement learning algorithms in continuous environments, as the base algorithm for updating our policy. PPO is an Actor-Critic algorithm that generates several episodes under a policy in each step and then updates the policy to maximize the average reward. Its main advantage over other Actor-Critic algorithms is that the amount of change is limited in each step, which reduces the variance of updates and, as a result, leads to more stable models.
\begin{table}[h!]
\begin{tabular}{|c | c|}
\hline
Parameter & The tested value \\ [0.5ex]
\hline\hline
LEARNING\textunderscore RATE & [1e-2, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5]\\
\hline
SGD\textunderscore MINIBATCH\textunderscore SIZE & [100, 200, 300, 400]\\
\hline
LAMBDA & uniform(0.85, 1.0)\\
\hline
CLIP\textunderscore PARAM & uniform(0.1, 0.5)\\
\hline
ROLLOUT\textunderscore FRAGMENT\textunderscore LENGTH & [14, 30, 60, 90]\\
\hline
MAX\textunderscore DRAWDOWN & -0.2\\
\hline
MIN\textunderscore PROFIT & -0.1\\
\hline
\end{tabular}
\caption{Examination space for model parameters}
\label{table2}
\end{table}
\begin{table}[h!]
\begin{tabular}{|c | c|}
\hline
Parameter & The selected value \\ [0.5ex]
\hline\hline
LEARNING\textunderscore RATE & 5e-4\\
\hline
SGD\textunderscore MINIBATCH\textunderscore SIZE & 300\\
\hline
LAMBDA & 0.9\\
\hline
CLIP\textunderscore PARAM & 0.2\\
\hline
ROLLOUT\textunderscore FRAGMENT\textunderscore LENGTH & 60\\
\hline
MAX\textunderscore DRAWDOWN & -0.2\\
\hline
MIN\textunderscore PROFIT & -0.1\\
\hline
\end{tabular}
\caption{Selected parameters for model training}
\label{table3}
\end{table}
\par
A number of hyperparameter settings were tested in this paper. Table \ref{table2} shows the parameters and their tested values. After these experiments, we selected the best values, shown in table \ref{table3}.
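For reference, the selected values map naturally onto an RLlib-style PPO configuration dictionary; the snippet below is a sketch of the shape of such a config, not the exact one used, and the `env_config` keys are assumptions:

```python
# Hyperparameters from Table 3 arranged as an RLlib-style PPO config.
# MAX_DRAWDOWN / MIN_PROFIT are environment-level stopping thresholds,
# so they are placed in env_config rather than among PPO parameters.
ppo_config = {
    "lr": 5e-4,
    "sgd_minibatch_size": 300,
    "lambda": 0.9,                    # GAE lambda
    "clip_param": 0.2,                # PPO clipping parameter
    "rollout_fragment_length": 60,
    "env_config": {
        "max_drawdown": -0.2,
        "min_profit": -0.1,
    },
}
```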
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.7]{Fig8}}
\caption{The average profit of each expert's signals during the test period}
\label{fig8}
\end{figure}
We compared the results of our model with the average profit gained by the experts' signals. The average profit is illustrated in figure \ref{fig8}. As can be seen, the best expert averaged 72\% profit, and the worst expert had about 8\% loss during the test period. In addition, the highest profit gained by the best expert was 110\%.
\begin{table}[h!]
\begin{tabular}{|c | c | c|}
\hline
Average gain & Maximum gain & Minimum gain \\ [0.5ex]
\hline\hline
65\% & 85\% & 32\% \\
\hline
\end{tabular}
\caption{Average, maximum and minimum gain during all periods}
\label{table4}
\end{table}
The results suggest that, on average, our model earns 90\% of the profit earned by the best expert (65\% versus 72\%). Similarly, our model's maximum gain reached 77\% of the best expert's maximum profit (85\% versus 110\%).
\section{Conclusion}
\label{Conclusion}
This paper proposes a new deep RL framework for the portfolio management problem using Iran's stock market data. We leveraged experts' signals, historical price data, and a deep RL network, which, as far as we know, is a new approach to this problem. We took advantage of convolutional networks to extract meaning from historical prices and experts' signals and then aggregated them in a feed-forward network to find the next portfolio weight vector. Despite its simplicity, our model performed very well and could compete with the best experts: on average, it harnessed 90\% of the profit made by the best expert.
\section{Introduction}
\label{introduction}
Stock trading is about buying and selling stocks in the financial market to maximize the profit earned from the investment. Profit can be made by exploiting price fluctuations in the market: when an investor sells stocks at a higher price than they were bought, a profit is made. However, price fluctuations are usually so high and the environment so dynamic that trading optimization is hard to achieve. As a result, the stock market obliges investors to use an intelligent policy that can adapt itself to the dynamic and non-deterministic idiosyncrasies of stock markets.
\par Portfolio management is the decision-making process of continuously reallocating an amount of funds into different financial investment products, aiming to maximize the return while restraining the risk \cite{haugen1988}. Portfolio management systems often use the price-history fluctuations of one or multiple stocks. However, besides the price history, there is other information in the stock markets to take advantage of, including textual, macroeconomic, knowledge-graph, image, fundamental, and analytics data \cite{jiang2021}. Many models, such as ANN, DNN, PCA, RNN, CNN, LSTM, attention models, and others, have been used in recent studies to address the stock market prediction problem \cite{jiang2021}. These models have been used in multiple markets, including the US, China, Japan, Korea, India, Brazil, and others. Work on financial time series forecasting, such as stock price, index, commodity price, volatility, bond price, forex price, cryptocurrency, and trend forecasting using deep models, has been reviewed in \cite{sezer2020}.
\par The advent of reinforcement learning (RL) in financial markets is driven by several advantages inherent to this field of artificial intelligence. In particular, RL allows combining the ``prediction'' and ``portfolio construction'' tasks in one integrated step, thereby closely aligning the machine learning problem with the objectives of the investor \cite{fischer2018reinforcement}. Three main approaches of deep RL in finance have been elaborated and compared, indicating that the actor-only approach is the best suited for financial markets \cite{fischer2018reinforcement}. Much research has been conducted on using RL and deep RL in finance \cite{neuneier1997enhancing, cumming2015investigation, watts2015hedging, du2016algorithm, charpentier2021reinforcement, taghian2022learning}, and other efforts address the portfolio management problem using RL \cite{jiang2017, liang2018adversarial, jeong2019improving, filos2019reinforcement, ye2020reinforcement, aboussalah2020continuous}. Solving the portfolio management problem using modern deep reinforcement learning approaches constitutes only a small part of the problems addressed in stock markets, and side resources have yet to be fully taken advantage of in these frameworks. An essential source of information in the field of finance is experts' signals. Some research has exploited experts' advice and signals in finance, especially for portfolio management \cite{hill2011, bhattacharya2013, he2021, zhang2017, yang2020, yang2020boosting, wu2020}. We will thoroughly investigate them in the next section.
\par In this paper, alongside the price history, we used signals from experts to feed into our deep RL network. Due to the dynamic and not-deterministic features of the stock markets and the fact that the profit is determined in the long and mid-term, we enjoyed a deep reinforcement learning framework to model the problem. In this framework, using a reward function, we tried to maximize the accumulated profit in the long term; that is, our goal is to achieve a policy that may not necessarily be profitable in the short or mid-term, but it is aimed to be optimal in the long term.
\section{Related Works}
\label{Related}
In the field of experts' opinion aggregation, to achieve a fair and accepted decision, it is a good practice to average the opinions of experts who have access to different resources. We can leverage these signals to help get an accurate prediction of stocks' future returns.
\par A crowd opinion aggregation model, namely CrowdIQ, has been proposed, which has a differential weighting mechanism and accounts for individual dependence \cite{du2017}. A decay function was introduced to address the dependency problem in individual judgments and give different weights to judges based on their previous judgments. Those judges who make original judgments were given higher weights than those who follow others' judgments. CrowdIQ has been evaluated compared to four baseline methods using real data collected from StockTwits. The results showed that CrowdIQ significantly outperformed all baseline methods in terms of both a quadratic prediction scoring method and simulated investment returns. The research sample is relatively small, and more stocks and postings are needed to improve the generalizability of the results.
\par A genetic algorithm approach that identifies appropriate vote weights for users based on their prior individual voting success has been proposed \cite{hill2011}. The data are user-generated stock-pick votes from a widely used online financial newsletter. This method allowed the authors to identify and rank “experts” within the crowd, enabling stock-pick decisions that beat the S\&P 500. They showed that the online crowd outperformed the S\&P 500 in two test periods, 2008 and 2009. The main disadvantage is that the period covered by the data set was too short to be ideal for testing and training. Also, the market was abnormal during the testing period, meaning that the result is not indicative of future performance.
\par A persuasion game between a set of experts and a judge is applied to find the efficiency of decision-making \cite{bhattacharya2013}. Each expert is identified by her quality and agenda, meaning they can be biased. This article tries to find the relation between \emph{the conflict of experts} and the quality of decision-making. The finding suggests that, first, employing better quality experts does not necessarily lead to better decision-making; second, it is optimal for the judge to choose experts whose agendas are either extremely opposite or aligned; and third, it may be optimal to employ two experts with the same extreme agenda rather than experts with completely opposite ones.
\par An experiment was conducted to answer the question of whether the wisdom of crowds helps or hinders knowledge of factual information \cite{he2021}. The results show that policymakers tend to rely heavily on majority rule as the preferred heuristic, and there is a limit to the amount of information about the opinions of others that they can effectively process. In addition, information about others’ answers improves performance on easy questions but tends to harm performance on difficult ones, showing the downside of this approach.
\par Conventionally, there are two main approaches to stock portfolio selection: the mean-variance model and capital growth theory (CGT). The mean-variance model focuses on creating a balance between return and risk in a single-period portfolio, whereas CGT focuses on maximizing the expected growth rate of the portfolio, i.e., the expected logarithmic return, over multiple or sequential periods. Online stock selection is a problem in which several selection periods are considered and no statistical hypothesis on the future prices of assets is made. The weak aggregation algorithm (WAA) \cite{kalnishkan2008} is an online sequential algorithm that makes decisions using the weighted average of all expert recommendations and updates each expert's weight based on their previous performance.
\par Based on the WAA, a method in which expert opinion is used for learning and decision-making in the online stock portfolio selection problem was introduced \cite{zhang2017}. This method assigns a weight to each expert based on their performance in previous rounds. The algorithm comes in two variants: WAAS, in which an expert recommends a single stock, and WAAC, in which an expert recommends a stock portfolio containing multiple assets. Both are designed to work without considering information from the financial markets. A newer algorithm combines WAAC with ancillary information from the financial markets, yielding a strategy called WAACS \cite{yang2020}. In other research using the WAA method to aggregate expert recommendations, a new online portfolio selection strategy called continuous aggregating exponential gradient (CAEG) was proposed \cite{yang2020boosting, he2020universal, yang2022}. In this method, a set of exponential gradient strategies with different learning rates is first considered as experts, and then the stock portfolio for the next period is calculated using the WAA method to aggregate all the recommendations.
\par A novel arbitrage-based approach in which multiple prediction models are dynamically combined to obtain predictions is presented \cite{cerqueira2019}. Arbitrage is a meta-learning approach that combines the output of experts based on predictions of the loss they will suffer. It is argued that this meta-learning approach is useful to better capture recurring changes in the environment. This proposal also includes a sequential rebalancing approach to model dependencies between experts. \cite{mcandrew2021} reviewed the current literature on the aggregation of predictions obtained from peers. They have compiled common terminology, aggregation methods, and forecast performance metrics. They identified four key themes that are repeated in the combined forecasting literature: (a) using human intuition to facilitate statistical forecasting when data is sparse and changing rapidly, (b) involving experts because of their role as decision-makers, (c) Using more straightforward aggregation of models to combine prediction densities and more complicated models to combine point predictions and (d) lack of experimental design and comparative measurements in many manuscripts.
\par According to the positive impact of two sources of stock analysts' opinions and information from collective wisdom (social network users), \cite{eickhoff2016} examine the relationship and its impact on predictions. The important issue is that stock analysts, individuals who are paid to provide regularly updated opinions about specific assets, may face restrictions that influence their opinions, whereas such influence would not be problematic at the level of social media users. Results suggest that there is no uniform orientation between these two areas, professional analysts can include information before social media in their evaluation, and if the reasons for changes are known to the public, social networks can change faster. Different effects of analyst attitude and crowd sentiment on stock prices are compared \cite{wu2020}. They found that the wisdom of the experts and the crowd are positively related to stock prices in the following days. They adopted LightGBM \cite{ke2017lightgbm} to predict the trend of stocks, suggesting that the wisdom of the crowd is more valuable in investing than the wisdom of experts.
\par A tool, named SINVLIO, was presented based on semantic technologies and fuzzy logic techniques that recommend investments based on both the psychological aspects of the investor and the traditional financial parameters of investments \cite{garcia2012sinvlio}. Jörg Gottschlich and Oliver Hinz proposed a decision support system design that allows investors to include crowd recommendations in their investment decisions and use them to manage a portfolio \cite{gottschlich2014}. In \cite{li2010}, fuzzy and genetic algorithms were used to solve the financial portfolio management problem. The planning agent can easily access all the intelligent processing agents, including the financial risk tolerance evaluation, asset allocation, stock portfolio selection, interest prediction, and decision aggregation agents. However, in this method, a suitable mechanism is not defined for selecting specialists. It is possible to use various sources of collective wisdom in investing management. Kumar et al. used the potential of virtual investing communities(VIC) such as MotleyFools CAPS, Sharewise, Minkabu, Hollywood Stock Exchange, etc \cite{kumar2020deriving}. The idea of this research is the automatic construction of a stock portfolio that is dynamically adjusted and relies on VIC information. In order to combine expert domain knowledge with collective wisdom, Kovacevich presented a framework that enables ranking and selection of alternatives and quantifying the quality of crowd votes \cite{kovacevic2020crex}. This framework permits the consideration of the crowd's votes concerning the knowledge of the experts and the modeling procedures of the compromises between the crowd and the satisfaction of the experts in the final decisions.
\section{Preliminaries}
\label{Preliminaries}
We intend to design a model for the portfolio management problem based on experts' advice aggregation. In this model, an intelligent agent finds major stock market patterns and takes advantage of them in tandem with signals generated by experts to optimize the portfolio selection problem.
\subsection{Mathematical Prerequisites}
Inspired by \cite{jiang2017}, a mathematical model of an RL-based portfolio management problem is represented in this section. This problem is divided into steps in which the agent makes a decision based on the environment. That is, the time is segmented into equal parts. In every episode, the agent distributes the capital into different stocks. These episodes continue until they reach a final point. It is important to mention that in this article, we assumed that every episode is a day long.
The portfolio can hold $m$ assets. The closing prices of all assets comprise the price vector $v_t$ for period $t$. The relative price vector of the $t$-th trading period, $y_t$, is defined as the element-wise division of $v_t$ by $v_{t-1}$:
\begin{displaymath}
y_t := v_t \oslash v_{t-1} = \left(1,\dfrac{v_{1,t}}{v_{1,t-1}},
\dfrac{v_{2,t}}{v_{2,t-1}},\ldots,
\dfrac{v_{m,t}}{v_{m,t-1}}
\right)^\intercal.
\end{displaymath}
The first element of $y_t$ corresponds to cash, whose price is constant; the remaining elements are the quotients of the closing prices of each asset over the period. The relative price vector can be used to calculate the change in total portfolio value over a period. If $p_{t-1}$ is the value of the portfolio at the beginning of period $t$, then, ignoring the transaction cost,
$$
p_t=p_{t-1} \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}
$$
where $w_{t-1}$ is the portfolio weight vector (henceforth called the portfolio vector) at the beginning of period $t$, whose $i$-th element, $w_{t-1,i}$, is the proportion of asset $i$ in the portfolio after capital reallocation. The rate of return for period $t$ is defined as
$$
\rho_t:=\frac{p_t}{p_{t-1}}-1=\boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}-1
$$
In a real scenario, buying or selling assets in a market is not free; the cost typically amounts to the commission fee. Reallocating to a new portfolio shrinks the portfolio value by a factor of $\mu_t$. Taking the transaction cost into account, the rate of return becomes
$$
\begin{aligned}
&\rho_t=\frac{p_t}{p_{t-1}}-1=\frac{\mu_t p_t^{\prime}}{p_{t-1}}-1=\mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}-1 \\
\end{aligned}
$$
and logarithmic rate of return as
$$
r_t=\ln \frac{p_t}{p_{t-1}}=\ln \left(\mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}\right) \\
$$
Finally, the final portfolio value will be
$$
p_{\mathrm{f}}=p_0 \exp \left(\sum_{t=1}^{t_{\mathrm{f}}+1} r_t\right)=p_0 \prod_{t=1}^{t_{\mathrm{f}}+1} \mu_t \boldsymbol{y}_t \cdot \boldsymbol{w}_{t-1}
$$
where $p_0$ is the initial investment amount. The job of a portfolio manager is to maximize $p_f$ for a given time frame.
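The cost-free part of this bookkeeping can be sketched in a few lines of NumPy. The array shapes and the presence of a cash column (column 0) are assumptions made for illustration:

```python
import numpy as np

def portfolio_growth(prices, weights, p0=1.0):
    """Track portfolio value over time from closing prices and weight vectors.

    prices:  (T+1, m+1) array of closing prices; column 0 is cash (constant 1).
    weights: (T, m+1) array; weights[t-1] is the allocation held during period t.
    Transaction costs are ignored here (mu_t = 1).
    """
    y = prices[1:] / prices[:-1]          # relative price vectors y_t
    growth = np.sum(y * weights, axis=1)  # y_t . w_{t-1} for each period
    log_returns = np.log(growth)          # r_t = ln(y_t . w_{t-1})
    p_final = p0 * np.exp(log_returns.sum())
    return p_final, log_returns
```

With constant prices, every relative price vector is all ones and the portfolio value stays at $p_0$, which is a quick sanity check of the formulas above.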
The transaction remainder factor ($\mu_t$) is obtained by solving the following equation, in which $\mu_t$ appears on both sides and must therefore be determined iteratively:
$$
\mu_t=\frac{1}{1-c_{\mathrm{p}} w_{t, 0}}\left[1-c_{\mathrm{p}} w_{t, 0}^{\prime}-\left(c_{\mathrm{s}}+c_{\mathrm{p}}-c_{\mathrm{s}} c_{\mathrm{p}}\right) \sum_{i=1}^m\left(w_{t, i}^{\prime}-\mu_t w_{t, i}\right)^{+}\right]
$$
where $c_p$ and $c_s$ are the commission rates for purchasing and selling stocks, respectively. In this article, we set the sum of the two commission rates to one-tenth of a percent.
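Since $\mu_t$ appears on both sides of the equation, one simple way to solve it is fixed-point iteration, sketched below. The starting guess and the iteration count are illustrative choices:

```python
import numpy as np

def transaction_remainder(w_prev, w_new, cp=0.0005, cs=0.0005, iters=50):
    """Fixed-point iteration for the transaction remainder factor mu_t.

    w_prev: portfolio vector evolved by price changes (w'_t); element 0 is cash.
    w_new:  target portfolio vector after reallocation (w_t).
    cp, cs: purchase/sale commission rates (illustrative values).
    """
    mu = cp + cs  # starting guess; the iteration converges for small rates
    for _ in range(iters):
        mu = (1.0 / (1.0 - cp * w_new[0])) * (
            1.0 - cp * w_prev[0]
            - (cs + cp - cs * cp)
            * np.sum(np.maximum(w_prev[1:] - mu * w_new[1:], 0.0))
        )
    return mu
```

With zero commissions the iteration immediately yields $\mu_t = 1$, and fully rotating one stock into another with nonzero fees yields a value slightly below one, as expected.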
\par
We restricted the duration of the episodes during which our model trains. We defined three rules according to which the model stops the ongoing episode and starts a new one.
\begin{enumerate}
\item Absence of new data (i.e., the end of the training data has been reached)
\item A $10\%$ decrease in the stock portfolio's value compared to the starting point. The higher this threshold is set, the riskier the model becomes, but the better it can find forward-looking strategies. This number can be adjusted during training and is set to zero during testing.
\item A $20\%$ decrease in the stock portfolio's value compared to the maximum point of the ongoing episode. This criterion is helpful when the agent achieves a high profit in the current episode but then keeps losing it; at some point, the episode should be stopped. The higher this threshold, the more promising the strategies the agent can discover, at the cost of higher risk.
\end{enumerate}
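The three termination rules can be combined into a single check, sketched below. The function name and argument layout are illustrative:

```python
def should_stop(values, start_value, max_drawdown=0.20, min_profit=-0.10,
                has_more_data=True):
    """Check the three episode-termination rules on a running value history.

    values: portfolio values observed so far in the current episode.
    Thresholds mirror the text: 10% loss vs. the start, 20% vs. the peak.
    """
    if not has_more_data:                           # rule 1: data exhausted
        return True
    current = values[-1]
    if current / start_value - 1.0 <= min_profit:   # rule 2: -10% vs. start
        return True
    peak = max(values)
    if current / peak - 1.0 <= -max_drawdown:       # rule 3: -20% vs. peak
        return True
    return False
```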
\begin{figure}[htbp]
\centerline{\includegraphics[scale=.8]{Fig2}}
\caption{Sample price data of a stock during a week}
\label{fig2}
\end{figure}
\section{Data Treatments}
\label{Data}
In this article, we have leveraged two kinds of information which are elaborated in the following sections.
\subsection{Price History}
We used the price history of stocks in the Iran stock market. The data were gathered by a crawler, implemented by one of the authors, from the Securities and Exchange Organization's portal. The crawled data include daily prices from 2017 until 2020. The testing phase covers the last six months, and the rest of the data is used for training. Every row in the dataset consists of the open, high, close, and low prices and the total traded volume of a single stock during a single day. Figure \ref{fig2} shows an example of one week's data for a particular stock. Data on 54 stocks have been collected and are used to train and test the model.
\par
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.8]{Fig3}}
\caption{Example of stock price over time}
\label{fig3}
\end{figure}
A statistical and analytical review of the stock price data shows that the data is nonstationary; as shown in figure \ref{fig3} for one of the stocks, neither the mean nor the variance is stable over time.
\par
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.8]{Fig4}}
\caption{Correlation of a stock's price over time}
\label{fig4}
\end{figure}
In addition, examination of the autocorrelation of the data shows that it decays to zero over an interval of about 200 days, which informs the time frame used for training the model. Figure \ref{fig4} shows the autocorrelation of the stock in figure \ref{fig3}.
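The sample autocorrelation used for such an examination can be computed in a few lines (a minimal sketch):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # center the series
    var = np.dot(x, x)                    # sum of squared deviations
    return np.array([np.dot(x[:-k], x[k:]) / var
                     for k in range(1, max_lag + 1)])
```

A trending series, for example, shows high autocorrelation at small lags that decreases as the lag grows, which is the pattern described above.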
\subsection{Experts' Signals}
Experts' signals are obtained from the 100tahlil website and cover 85 experts. The signals of these experts, issued for the 54 stocks under consideration, add up to a total of 6335 signals, each of which comprises the following information.
\begin{table}
\begin{tabular}{||c c c c c c||}
\hline
start date & close date & expected return & expected risk & expert & symbol \\ [0.5ex]
\hline\hline
6/25/2019 & 7/7/2019 & 48.39 & -14.8 & 49 & 0 \\
\hline
\end{tabular}
\caption{Example of a signal}
\label{table1}
\end{table}
\begin{itemize}
\item Expert Id: the identification of the signal issuer
\item Stock Id: the identification of the stock issued
\item Start date: the beginning date when the signal becomes valid
\item End date: the date when the signal expires
\item Expected return: the maximum return that may be achieved between the start and end dates
\item Expected risk: the maximum loss that may occur between the start and end dates
\end{itemize}
Table \ref{table1} presents an example signal. From the moment it is activated until it expires, each signal yields an instantaneous profit or loss that follows the fluctuations of the relevant stock price. So that these calculations do not reduce efficiency during model training, the instant profit or loss of each signal is computed and stored in advance.
\subsection{Managing Missing And Overlapping Data}
During certain periods, some stocks have no price on some days because their symbol is closed. To solve this problem, we used the last-valid-price method: whenever a price is missing, the nearest previous valid price is used, falling back to the nearest next one.
\par
For a particular stock, some experts may have provided several signals that overlap over a period of time. To deal with this, we use the average expected profit and loss in the overlapping periods.
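The last-valid-price rule can be sketched as a forward pass that carries the last seen price, with a backward fallback for gaps at the start of the series:

```python
def fill_missing_prices(series):
    """Replace None entries with the nearest previous valid price,
    falling back to the nearest next one for leading gaps."""
    filled = list(series)
    last = None
    for i, v in enumerate(filled):            # forward pass: last valid price
        if v is None:
            filled[i] = last
        else:
            last = v
    nxt = None
    for i in range(len(filled) - 1, -1, -1):  # backward pass for leading gaps
        if filled[i] is None:
            filled[i] = nxt
        else:
            nxt = filled[i]
    return filled
```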
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.5]{Fig5}}
\caption{Aggregation of the stocks' OHLC prices and experts' signals}
\label{fig5}
\end{figure}
\subsection{Data Aggregation}
We need to prepare the data to make it usable in our model. Our approach treats the data as daily time series: stocks' signals and price history are divided by day and fed to the model. For this purpose, we used a time window of 60 days, in which the model uses the stock's historical prices and signals during the period. We chose this number based on preliminary experiments. As shown in figure \ref{fig5}, for every stock, the open, high, low, and close prices and the total traded volume during a single day are gathered. Every active signal suggested by experts is also added to the aggregated data. Experts' signals include the following information:
\begin{enumerate}
\item expected return
\item expected risk
\item instant return defined as:
$$
\text { InstantReturn }=\frac{\text { CurrentSignalPrice }-\text { StartSignalPrice }}{\text { StartSignalPrice }}
$$
\item Status: it determines the final state of a signal. A value of zero indicates an active signal. Values 1 and -1 are considered if the signal ended with profit and loss, respectively.
\end{enumerate}
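Under these definitions, the per-day features of a single signal can be sketched as follows. The simplified rule that sets the final status from the sign of the realised return is an assumption made for illustration:

```python
def signal_features(prices, start_price):
    """Per-day (instant return, status) pairs for one expert signal.

    prices: closing prices over the signal's active window; the last entry
    is the expiry day. Status is 0 while the signal is active; at expiry it
    becomes +1 or -1 according to the sign of the realised return.
    """
    rows = []
    last = len(prices) - 1
    for i, price in enumerate(prices):
        instant = (price - start_price) / start_price  # InstantReturn
        status = 0 if i < last else (1 if instant > 0 else -1)
        rows.append((instant, status))
    return rows
```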
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.4]{Fig6}}
\caption{The architecture of proposed method}
\label{fig6}
\end{figure}
\section{Proposed Method}
\label{Method}
In this article, we propose a convolutional network, as it is a widely used and well-suited architecture for the portfolio management problem. In this architecture, 60 days of stock price information (opening price, closing price, low price, high price, and trading volume) are combined into a tensor with 60 days of all experts' signal information and given to the neural network. Every stock is processed independently through two consecutive convolutional layers, using $1\times6$ and $1\times5$ kernels, and then fed to a feed-forward neural network, which uses a tanh activation function to determine the proportions of the stocks as the next portfolio weight vector. As stated before, in a real-world scenario, fees must be paid for every transaction, which can add up and erode the profit. To discourage constant changes in the portfolio weight vector, the previous portfolio weight vector is appended to the output of the convolutional layers before being fed to the feed-forward network. There are two kinds of signals: current and future ones. For current signals, we know the instant profit or loss made by the signal, while for future signals, only the expected profit or loss is known. Figure \ref{fig6} demonstrates the proposed architecture.
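A minimal PyTorch sketch of the described architecture follows. The channel counts, the softmax normalisation of the output, and the cash entry in the portfolio vector are illustrative assumptions; the text specifies only the kernel sizes and the tanh head:

```python
import torch
import torch.nn as nn

class PortfolioNet(nn.Module):
    """Two convolutions along the 60-day time axis per stock, then a
    feed-forward head producing the next portfolio weight vector."""

    def __init__(self, n_features=9, n_stocks=54, window=60):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_features, 8, kernel_size=(1, 6)), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(1, 5)), nn.ReLU(),
        )
        t_after = window - 6 + 1       # time length after the 1x6 kernel
        t_after = t_after - 5 + 1      # time length after the 1x5 kernel
        # the previous portfolio vector (cash + stocks) is appended here
        self.head = nn.Sequential(
            nn.Linear(16 * n_stocks * t_after + n_stocks + 1, n_stocks + 1),
            nn.Tanh(),
        )

    def forward(self, x, w_prev):
        # x: (batch, n_features, n_stocks, window); w_prev: (batch, n_stocks+1)
        h = self.conv(x).flatten(1)
        scores = self.head(torch.cat([h, w_prev], dim=1))
        return torch.softmax(scores, dim=1)  # weights sum to one
```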
\begin{figure}[htbp]
\centerline{\includegraphics[scale=.5]{Fig1}}
\caption{Reinforcement Learning Framework Model }
\label{fig1}
\end{figure}
\section{Experimental Results}
\label{Results}
Three frameworks, PyTorch, Ray, and RLlib, are used to implement the model proposed in the previous section. RLlib is a framework for reinforcement learning modeling and can implement various models. This framework supports distributed execution with the help of Ray. To use the features of RLlib, the environment and the agent we implement must inherit from the abstractions of this framework so that the different algorithms it provides can be used. For this purpose, we created an environment called TradingEnv, which contains all the facilities necessary to simulate the investment environment and manage the stock portfolio. When the environment receives an action, it applies that action to the stock portfolio, determines the profit earned, and provides it to the agent. To simulate the execution of an action, an ActionSchema, which calculates the change in the stock portfolio's value at every step, is defined. In addition, a RewardSchema specifies how to calculate the reward at each step. A general overview of the different parts of the reinforcement learning environment is illustrated in figure \ref{fig1}.
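A stripped-down stand-in for such an environment, without the Ray/RLlib dependencies, might look as follows. The observation layout and the log-return reward are simplifying assumptions:

```python
import numpy as np

class TradingEnv:
    """Minimal trading environment: step() applies a portfolio vector to the
    next relative price vector and returns the log rate of return as reward."""

    def __init__(self, relative_prices, max_drawdown=0.20, min_profit=-0.10):
        self.y = np.asarray(relative_prices, dtype=float)  # (T, m+1)
        self.max_drawdown, self.min_profit = max_drawdown, min_profit
        self.reset()

    def reset(self):
        self.t, self.value, self.peak = 0, 1.0, 1.0
        return self.y[self.t]           # placeholder observation

    def step(self, weights):
        growth = float(np.dot(self.y[self.t], weights))
        reward = float(np.log(growth))  # reward: log rate of return
        self.value *= growth            # change in the portfolio's value
        self.peak = max(self.peak, self.value)
        self.t += 1
        done = (self.t >= len(self.y)
                or self.value - 1.0 <= self.min_profit
                or self.value / self.peak - 1.0 <= -self.max_drawdown)
        obs = self.y[self.t] if not done else None
        return obs, reward, done, {}
```

The `step` signature mirrors the common gym-style `(observation, reward, done, info)` convention so that an RLlib-compatible wrapper can be layered on top.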
\begin{figure}[tp]
\centerline{\includegraphics[scale=.5]{Fig7}}
\caption{The infrastructure architecture of model execution}
\label{fig7}
\end{figure}
\par
As mentioned, the architecture can also run in a distributed manner. Figure \ref{fig7} shows how it runs on several servers. Each worker can be a server; in addition, thanks to the Ray framework's capabilities, any CPU core can serve as a worker. We can have an arbitrary number of workers, each of which holds a copy of the current environment and policy, produces one or more episodes, and then sends them to the cluster leader. Next, the cluster leader updates the original policy and sends the new policy to the workers so that new episodes are generated based on the latest policy.
\par
We chose the PPO (Proximal Policy Optimization) algorithm, considered one of the most effective reinforcement learning algorithms in continuous environments, as the base algorithm for updating our policy. PPO is an actor-critic algorithm that generates several episodes under the current policy at each step and then updates the policy to maximize the average reward. Its main advantage over other actor-critic algorithms is that the size of the policy change is limited at each step, which reduces the variance of the updates and, as a result, leads to more stable models.
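The limiting mechanism at the heart of PPO is its clipped surrogate objective, sketched below in generic PyTorch (a sketch of the idea, not the RLlib implementation):

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_param=0.2):
    """PPO clipped surrogate objective: the probability ratio is clipped to
    [1 - eps, 1 + eps], bounding how far one update can move the policy."""
    ratio = torch.exp(logp_new - logp_old)          # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_param, 1 + clip_param) * advantages
    return -torch.min(unclipped, clipped).mean()    # minimised by the optimiser
```

When the new and old policies agree, the loss reduces to the plain policy-gradient surrogate; once the ratio leaves the clip range, the gradient through it vanishes, which is what keeps each update small.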
\begin{table}[h!]
\begin{tabular}{|c | c|}
\hline
Parameter & Tested values \\ [0.5ex]
\hline\hline
LEARNING\textunderscore RATE & [1e-2, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5]\\
\hline
SGD\textunderscore MINIBATCH\textunderscore SIZE & [100, 200, 300, 400]\\
\hline
LAMBDA & uniform(0.85, 1.0)\\
\hline
CLIP\textunderscore PARAM & uniform(0.1, 0.5)\\
\hline
ROLLOUT\textunderscore FRAGMENT\textunderscore LENGTH & [14, 30, 60, 90]\\
\hline
MAX\textunderscore DRAWDOWN & -0.2\\
\hline
MIN\textunderscore PROFIT & -0.1\\
\hline
\end{tabular}
\caption{Examination space for model parameters}
\label{table2}
\end{table}
\begin{table}[h!]
\begin{tabular}{|c | c|}
\hline
Parameter & Selected value \\ [0.5ex]
\hline\hline
LEARNING\textunderscore RATE & 5e-4\\
\hline
SGD\textunderscore MINIBATCH\textunderscore SIZE & 300\\
\hline
LAMBDA & 0.9\\
\hline
CLIP\textunderscore PARAM & 0.2\\
\hline
ROLLOUT\textunderscore FRAGMENT\textunderscore LENGTH & 60\\
\hline
MAX\textunderscore DRAWDOWN & -0.2\\
\hline
MIN\textunderscore PROFIT & -0.1\\
\hline
\end{tabular}
\caption{Selected parameters for model training}
\label{table3}
\end{table}
\par
A number of hyperparameters have been tested in this paper. Table \ref{table2} shows the parameters and their tested values. After experimenting with the testing values, we found the best ones, shown in table \ref{table3}.
\begin{figure}[tp]
\centerline{\includegraphics[scale=0.7]{Fig8}}
\caption{The average profit of each expert's signals during the test period}
\label{fig8}
\end{figure}
We compared the results of our model with the average profit gained by the experts' signals. The average profit is illustrated in figure \ref{fig8}. As can be seen, the best expert averaged 72\% profit, and the worst expert had about 8\% loss during the test period. In addition, the highest profit gained by the best expert was 110\%.
\begin{table}[h!]
\begin{tabular}{|c | c | c|}
\hline
Average gain & Maximum gain & Minimum gain \\ [0.5ex]
\hline\hline
65\% & 85\% & 32\% \\
\hline
\end{tabular}
\caption{Average, maximum and minimum gain during all periods}
\label{table4}
\end{table}
The results suggest that, on average, our model is able to earn 90\% of the profit that the best expert earned. Also, in terms of maximum gain, our model obtained 77\% of the maximum profit achieved by the best expert.
\section{Conclusion}
\label{Conclusion}
This paper proposes a new deep RL framework for the portfolio management problem using Iran's stock market data. We leveraged experts' signals, historical price data, and a deep RL network, which, to the best of our knowledge, is a new approach to this problem. We took advantage of convolutional networks to extract meaning from historical prices and experts' signals and then aggregated them in a feed-forward network to find the next portfolio weight vector. Despite its simplicity, our model performed very well and could compete with the best experts. We showed that, on average, we could harness 90\% of the profit made by the best expert.
\section{Introduction}
With the development of wireless technology and cheap sensors, real-time monitoring systems are widely used. Typically in such systems, a monitor monitors one or more events simultaneously and transmits updates to allow one or more receivers at a distance to have a good knowledge of the events. Therefore, the timeliness of information is often one of the most important performance indicators. The Age of Information (AoI), first introduced in~\cite{b1}, captures the freshness of information by tracking the time elapsed since the generation of the last received update. More precisely, let $V(t)$ be the generation time of the last update received up to time $t$. Then, AoI at time $t$ is defined by $\Delta_{AoI}(t) = t-V(t)$. After the introduction, it has attracted extensive attention and research~\cite{aoi1,aoi2,aoi3}. However, the limitation of AoI is that it ignores the information content of the transmitted updates. Therefore, it falls short in the context of remote estimation. For example, we want to estimate a rapidly changing event remotely. In this case, a small AoI does not necessarily mean that the receiver has accurate information about the event. Likewise, when the event changes slowly, the receiver can estimate relatively accurately without timely information.
Inspired by the above limitation, the Age of Incorrect Information (AoII) is introduced in~\cite{3}, which combines the timeliness and the information content. As presented in~\cite{3}, AoII is dominated by two penalty functions. The first is the time penalty function, which captures the time elapsed since the last time the receiver has perfect information about the remote event. The second is the information penalty function, which captures the information mismatch between the receiver and the remote event. Therefore, AoII captures not only the information mismatch between the event and the receiver but also the aging process of conflicting information. Moreover, by choosing different penalty functions, AoII is adaptable to various systems and communication goals. Hence, AoII is regarded as a semantic metric~\cite{semantic}.
Several works have been done since the introduction. Optimizing AoII under resource constraints is studied in~\cite{3, b2, b3}, with different assumptions on the system and different penalty function choices. AoII in scheduling problems is studied in~\cite{b4,b5}. However, all these papers assume that the transmission time of each update is deterministic and normalized. In this paper, we consider the communication system in which the transmission time of an update is random. A similar system setup is considered in~\cite{b6}, where the problem is studied based on simulation results. However, we provide a theoretical analysis of the system, and the results apply to generic transmission delay. The system with random transmission delay has also been studied in the context of remote estimation and AoI~\cite{b7,b10,b8,b9}. However, the problem considered in this paper is very different, as AoII is a combination of age-based metrics frameworks and error-based metrics frameworks.
The main contributions of this paper are: 1) We investigate the AoII in the system where the communication channel suffers a random delay. 2) We analyze the characteristics of the threshold policy, under which the transmitter initiates transmission only when AoII exceeds the threshold. 3) We calculate the expected AoII achieved by the threshold policy precisely.
The remainder of this paper is organized in the following way. In Section \ref{sec-system}, we introduce the system model. Then, in Section \ref{sec-MDP}, we theoretically analyze and calculate the expected AoII achieved by the threshold policy. Finally, we conclude the paper with numerical results, which are detailed in Section \ref{sec-Numerical}.
\section{System Overview}\label{sec-system}
\subsection{System Model}\label{sec-SystemModel}
We consider a transmitter-receiver pair in a slotted-time system, where the transmitter observes a dynamic source and needs to send status updates to the remote receiver through a communication channel. The dynamic source is modeled by a two-state symmetric Markov chain with state transition probability $p$. An illustration of the Markov chain is shown in Fig.~\ref{fig-MarkovChain}.
\begin{figure}
\centering \includegraphics[width=3in]{Figure/MarkovChain.pdf} \caption{Two-state symmetric Markov chain with state transition probability $p$.}
\label{fig-MarkovChain}
\end{figure}
The transmitter receives an update from the dynamic source at the beginning of each time slot. We denote the update at time slot $k$ as $X_k$. The old update will be discarded upon the arrival of the new one. Then, the transmitter must decide whether to transmit the update based only on the system's current status. More precisely, when the channel is idle (i.e., no transmission in progress), the transmitter chooses between transmitting the update and staying idle. When the channel is busy, the transmitter cannot do anything other than stay idle. The transmission time $T\in\mathbb{N}^*$ of an update is a random variable with distribution denoted by $p_t\triangleq Pr(T=t)$. We assume that the transmission times of successive updates are independent and identically distributed. The channel is error-free, meaning the update will not be corrupted during transmission. When a transmission finishes, the communication channel is immediately available for the subsequent transmission.
The remote receiver will estimate the state of the dynamic source using the received updates. We denote by $\hat{X}_k$ the receiver's estimate at time slot $k$. According to~\cite{b9}, the best estimator when $p\leq\frac{1}{2}$ is the last received update. When $p>\frac{1}{2}$, the optimal estimator depends on the transmission time. In this paper, we consider only the case of $p\leq\frac{1}{2}$. Hence, the receiver uses the last received update as the estimate. For the case of $p>\frac{1}{2}$, the results can be extended using the corresponding best estimator. The receiver uses $ACK/NACK$ packets to inform the transmitter of its reception of the new update. $ACK/NACK$ packets are generally very small~\cite{2a2}. Hence, we assume they are reliably and instantaneously received by the transmitter. Then, when $ACK$ is received, the transmitter knows that the receiver's estimate has changed to the last sent update. When $NACK$ is received, the transmitter knows that the receiver's estimate did not change. In this way, the transmitter always knows the current estimate on the receiver side. An illustration of the system model is shown in Fig.~\ref{fig-SystemModel}.
\begin{figure}
\centering \includegraphics[width=3in]{Figure/SystemModel.pdf}\caption{An illustration of the system model.}
\label{fig-SystemModel}
\end{figure}
\subsection{Age of Incorrect Information}
The system adopts the Age of Incorrect Information (AoII) as the performance metric. We first define $U_k$ as the last time instant before time $k$ (including $k$) that the receiver's estimate is correct. Mathematically,
\[
U_k \triangleq \max\{h:h\leq k, X_h = \hat{X}_h\}.
\]
Then, in a slotted-time system, AoII at time slot $k$ can be written as
\begin{equation}\label{eq-AoII}
\Delta_{AoII}(X_k,\hat{X}_k,k) = \sum_{h=U_k+1}^k\bigg(g(X_h,\hat{X}_h) F(h-U_k)\bigg),
\end{equation}
where $g(X_k,\hat{X}_k)$ is the information penalty function. $F(k) \triangleq f(k) - f(k-1)$ where $f(k)$ is the time penalty function. In this paper, we choose $g(X_k,\hat{X}_k) = |X_k-\hat{X}_k|$ and $f(k) = k$. Hence, $F(k) =1 $ for all $k$ and $g(X_k,\hat{X}_k)\in\{0,1\}$. Then, equation \eqref{eq-AoII} can be simplified as
\[
\Delta_{AoII}(X_k,\hat{X}_k,k) = k-U_k\triangleq\Delta_k.
\]
To characterize the evolution of $\Delta_k$, it is sufficient to characterize the value of $\Delta_{k+1}$ using $\Delta_k$ and the system dynamics. To this end, we distinguish between the following two cases.
\begin{itemize}
\item When the receiver's estimate is correct at time $k + 1$, we have $U_{k+1} = k + 1$. Then, by definition, $\Delta_{k+1} = 0$.
\item When the receiver's estimate is incorrect at time $k + 1$, we have $U_{k+1} = U_k$. Then, by definition, $\Delta_{k+1}=k+1-U_k=\Delta_k +1$.
\end{itemize}
Combining together, we have
\begin{equation}\label{eq-AoIIDynamic}
\Delta_{k+1} = \mathbbm{1}\{U_{k+1}\neq k+1\}(\Delta_k+1),
\end{equation}
where $\mathbbm{1}\{A\}$ is the indicator function, whose value is one when event $A$ occurs and zero otherwise. A sample path of $\Delta_k$ is shown in Fig.~\ref{fig-SamplePath}.
\begin{figure}
\centering \includegraphics[width=3in]{Figure/SamplePath.pdf}
\caption{A sample path of $\Delta_k$, where $T_i$ and $D_i$ are the transmission start time and the delivery time of the $i$-th update, respectively. At $T_1$, the transmitted update is $X_3$. The estimate at time slot $6$ (i.e., $\hat{X}_6$) changes due to the reception of the update transmitted at $T_2$.}
\label{fig-SamplePath}
\end{figure}
When combined with the source dynamics and the receiver's estimate, the evolution of $\Delta_{k}$ can also be characterized by the following cases.
\begin{itemize}
\item When $\Delta_k=0$ and $\hat{X}_k = \hat{X}_{k+1}$, we know $\hat{X}_k=X_k$ and the receiver's estimate remains the same. Then, when the source remains in the same state (i.e., $X_k = X_{k+1}$), we have $X_{k+1} = \hat{X}_{k+1}$. Hence, $\Delta_{k+1}=0$ with probability $1-p$. On the other hand, when the source changes state (i.e., $X_k \neq X_{k+1}$), we have $X_{k+1} \neq \hat{X}_{k+1}$. Hence, $\Delta_{k+1}=1$ with probability $p$. Combining together, we have
\[
\Delta_{k+1} = \begin{dcases}
0 & w.p.\ 1-p,\\
1 & w.p.\ p.
\end{dcases}
\]
The same analysis can be applied to other cases. Hence, the details are omitted for the following cases.
\item When $\Delta_k=0$ and $\hat{X}_k \neq \hat{X}_{k+1}$, we have
\[
\Delta_{k+1} = \begin{dcases}
0 & w.p.\ p,\\
1 & w.p.\ 1-p.
\end{dcases}
\]
\item When $\Delta_k>0$ and $\hat{X}_k = \hat{X}_{k+1}$, we have
\[
\Delta_{k+1} = \begin{dcases}
0 & w.p.\ p,\\
\Delta_k+1 & w.p.\ 1-p.
\end{dcases}
\]
\item When $\Delta_k>0$ and $\hat{X}_k \neq \hat{X}_{k+1}$, we have
\[
\Delta_{k+1} = \begin{dcases}
0 & w.p.\ 1-p,\\
\Delta_k+1 & w.p.\ p.
\end{dcases}
\]
\end{itemize}
Combining together, we obtain \eqref{eq-AoIIWithSource}.
\begin{figure*}[!t]
\normalsize
\begin{equation}\label{eq-AoIIWithSource}
\Delta_{k+1} = \begin{dcases}
\mathbbm{1}\{\hat{X}_k\neq\hat{X}_{k+1}; \Delta_k=0\} + \mathbbm{1}\{\hat{X}_k=\hat{X}_{k+1};\Delta_k>0\}(\Delta_k+1) & w.p.\ 1-p,\\
\mathbbm{1}\{\hat{X}_k=\hat{X}_{k+1}; \Delta_k=0\} + \mathbbm{1}\{\hat{X}_k\neq\hat{X}_{k+1};\Delta_k>0\}(\Delta_k+1) & w.p.\ p.\\
\end{dcases}
\end{equation}
\hrulefill
\vspace*{4pt}
\end{figure*}
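These case-by-case dynamics are easy to check by simulation. The sketch below runs the never-transmit policy ($\tau=\infty$), under which the receiver's estimate is frozen; for the resulting birth-death chain on $\Delta_k$, one can verify that the stationary mean equals $1/(2p)$, which the empirical average should approach:

```python
import random

def average_aoii_no_transmit(p, horizon=200_000, seed=1):
    """Monte-Carlo estimate of the expected AoII when the transmitter never
    sends. With a frozen estimate, Delta resets to 0 w.p. p from Delta > 0
    and grows by one otherwise; from Delta = 0 it jumps to 1 w.p. p."""
    rng = random.Random(seed)
    delta, total = 0, 0
    for _ in range(horizon):
        total += delta
        flip = rng.random() < p          # the source changes state
        if delta == 0:
            delta = 1 if flip else 0
        else:
            delta = 0 if flip else delta + 1
    return total / horizon
```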
In this paper, we use the expected AoII to quantify the performance. To this end, we first define a policy $\phi$ as the one that specifies the transmitter's action in each time slot. Then, the expected AoII achieved by policy $\phi$ is
\[
\bar{\Delta}_{\phi} \triangleq\lim_{K\to\infty} \frac{1}{K}\mathbb{E}_{\phi}\left(\sum_{k=0}^{K-1}\Delta_k\right).
\]
To evaluate $\bar{\Delta}_{\phi}$, we start with characterizing the system dynamics in the following subsection.
\subsection{System Dynamics}\label{sec-SystemDynamic}
The first thing to notice is that the transmission time for each update is unbounded. To simplify the analysis, we consider the following two more practical cases\footnote{All results presented in this paper apply to both cases unless stated otherwise.}.
\begin{itemize}
\item \textbf{Assumption 1}: We assume that the transmission will take at most $t_{max}$ time slots and the update will always be delivered. More precisely, we assume $1\leq T\leq t_{max}$ and
\[
\sum_{t=1}^{t_{max}}p_t = 1,\quad p_t\geq0,\ 1\leq t\leq t_{max}.
\]
In practice, we can always choose a sufficiently large $t_{max}$ so that the probability that the transmission time is longer than $t_{max}$ is negligible.
\item \textbf{Assumption 2}: We assume that the transmission survives a maximum of $t_{max}$ time slots. When the transmission lasts to the $t_{max}$th time slot, the update is either delivered or discarded at the end of the $t_{max}$th time slot. In both cases, the channel will be available for a new transmission at the next time slot. In practice, similar techniques, such as time-to-live (TTL)~\cite{b12}, are used to prevent an update from occupying the channel for too long.
\end{itemize}
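Under Assumption 1, drawing a transmission time from the truncated distribution $\{p_t\}_{t=1}^{t_{max}}$ is a standard inverse-CDF sample (a minimal sketch; the list-based encoding of $p_t$ is an illustrative choice):

```python
import random

def sample_delay(p_t, rng):
    """Draw a transmission time under Assumption 1.

    p_t: list with p_t[t-1] = Pr(T = t) for t = 1..t_max, summing to one.
    """
    u, cum = rng.random(), 0.0
    for t, prob in enumerate(p_t, start=1):
        cum += prob
        if u < cum:
            return t
    return len(p_t)   # guard against floating-point round-off
```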
\begin{remark}
When $t_{max}=1$, the system reduces to the one considered in~\cite{3}. Hence, for the remainder of this paper, we consider the case of $t_{max}>1$.
\end{remark}
We notice that the status of the system at the beginning of the $k$th time slot can be fully captured by the triplet $s_k\triangleq(\Delta_k,t_k,i_k)$ where $t_k\in\{0,1,...,t_{max}-1\}$ indicates the time the current transmission has been in progress. We define $t_k=0$ if there is no transmission in progress. The last element $i_k\in\{-1,0,1\}$ indicates the state of the channel. We define $i_k=-1$ when the channel is idle. $i_k=0$ if the channel is busy and the transmitting update is the same as the receiver's current estimate, and $i_k=1$ when the transmitting update is different from the receiver's current estimate.
\begin{remark}\label{rem-ti}
Note that, according to the definition of $t_k$ and $i_k$, $i_k=-1$ if and only if $t_k=0$. In this case, the channel is idle.
\end{remark}
Then, characterizing the system dynamics is equivalent to characterizing the value of $s_{k+1}$ using $s_k$ and the transmitter's action. We denote by $a_k\in\{0,1\}$ the transmitter's decision. We define $a_k=0$ when the transmitter decides not to initiate a transmission and $a_k=1$ otherwise. Hence, the system dynamics can be fully characterized by $P_{s_k,s_{k+1}}(a_k)$, which is defined as the probability that action $a_k$ at $s_k$ leads to $s_{k+1}$.
\section{AoII Analysis}\label{sec-MDP}
As is proved in~\cite{b2,b3,b4,b5}, the AoII-optimal policy often has a threshold structure. Hence, we consider threshold policy.
\begin{definition}[Threshold policy]\label{def-ThreholdPolicy}
Under threshold policy $\tau$, the transmitter will initiate a transmission only when the current AoII is no less than threshold $\tau\in\mathbbm{N}^0$ and the channel is idle.
\end{definition}
\begin{remark}
We define $\tau\triangleq\infty$ as the policy under which the transmitter never initiates any transmissions.
\end{remark}
We notice that the system dynamics under threshold policy can be characterized by a discrete-time Markov chain (DTMC). Without loss of generality, we assume the DTMC starts at state $s=(0,0,-1)$. Then, the state space $\mathcal{S}$ consists of all the states accessible from state $s=(0,0,-1)$. Since state $s=(0,0,-1)$ is positive recurrent and communicates with each state $s\in\mathcal{S}$, the stationary distribution exists. Let $\pi_{s}$ be the steady-state probability of state $s$. Then, $\pi_s$ satisfies the following balance equation.
\[
\pi_s = \sum_{s'\in\mathcal{S}}P_{s',s}(a)\pi_{s'},\quad s\in\mathcal{S},
\]
where $P_{s',s}(a)$ is the single-step state transition probability, and the action $a$ depends on the threshold policy. Then, the first step in calculating the expected AoII achieved by the threshold policy is to calculate the stationary distribution of the induced DTMC. However, the problem arises as the state space $\mathcal{S}$ is infinite and intertwined. To simplify the state transitions, we recall that the transmitter can only stay idle (i.e., $a=0$) when the channel is busy. Let $\mathcal{S}'=\{s=(\Delta,t,i):i\neq-1\}$ be the set of the state where the channel is busy. Then, for $s'\in\mathcal{S}'$, $P_{s',s}(a) = P_{s',s}(0)$ and is independent of the threshold policy. Hence, for any threshold policy and each $s\in\mathcal{S}\setminus\mathcal{S}'$, we can repeatedly replace $\pi_{s'}$, where $s'\in\mathcal{S}'$, with the corresponding balance equation until we get the following equation.
\begin{equation}\label{eq-CompactBalanceEq}
\pi_{s} = \sum_{s'\in \mathcal{S}\setminus\mathcal{S}'}P_{\Delta',\Delta}(a)\pi_{s'},\quad s\in\mathcal{S}\setminus\mathcal{S}',
\end{equation}
where $P_{\Delta',\Delta}(a)$ is the multi-step state transition probability from state $s'=(\Delta',0,-1)$ to state $s=(\Delta,0,-1)$ under action $a$. For simplicity, we write \eqref{eq-CompactBalanceEq} as
\begin{equation}\label{eq-CompactBalanceEq2}
\pi_{\Delta} = \sum_{\Delta'\geq0}P_{\Delta',\Delta}(a)\pi_{\Delta'},\quad \Delta\geq0.
\end{equation}
As we will see in the following subsections, $\pi_\Delta$ is sufficient to calculate the expected AoII obtained by any threshold policy.
\begin{remark}
The intuition behind the simplification of the balance equations is as follows. We recall that the system dynamics when the channel is busy are independent of the adopted policy. Hence, we can calculate these dynamics in advance so that the balance equations contain only the states in which the transmitter needs to make decisions.
\end{remark}
\noindent In the next subsection, we derive the expression of $P_{\Delta,\Delta'}(a)$.
\subsection{Multi-step State Transition Probability}\label{sec-StateTransProb}
We start with the case of $a=0$. In this case, no update will be transmitted, and $P_{\Delta,\Delta'}(0)$ is independent of the transmission delay. Then, according to \eqref{eq-AoIIWithSource}, we have
\[
P_{0,\Delta'}(0) = \begin{dcases}
1-p & \Delta'=0,\\
p & \Delta'=1,
\end{dcases}
\]
and for $\Delta>0$,
\[
P_{\Delta,\Delta'}(0) = \begin{dcases}
p & \Delta'=0,\\
1-p & \Delta' = \Delta+1.
\end{dcases}
\]
In the sequel, we focus on the case of $a=1$. We define $P^{t}_{\Delta,\Delta'}(a)$ as the probability that action $a$ at state $s=(\Delta,0,-1)$ will lead to state $s'=(\Delta',0,-1)$, given that the transmission takes $t$ time slots. Then, under \textbf{Assumption 1},
\[
P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1).
\]
Hence, it is sufficient to obtain the expressions of $P^t_{\Delta,\Delta'}(1)$. To this end, we define $p^{(t)}$ as the probability that the dynamic source will remain in the same state after $t$ time slots. Since the Markov chain is symmetric, $p^{(t)}$ is independent of the state and can be calculated by
\[
p^{(t)} = \left(\begin{bmatrix}
1-p & p\\
p & 1-p
\end{bmatrix}^t\right)_{11},
\]
where the subscript indicates the row number and the column number of the target probability. For the consistency of notation, we define $p^{(0)}\triangleq1$. Note that, since the eigenvalues of the source's transition matrix are $1$ and $1-2p$, $p^{(t)}$ admits the closed form $p^{(t)} = \frac{1}{2}\left(1+(1-2p)^t\right)$. Then, we have the following lemma.
\begin{lemma}\label{lem-CompactTrans}
Under \textbf{Assumption 1},
\begin{equation}\label{eq-Assumption1Trans}
P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1),
\end{equation}
where
\[
P^{t}_{0,\Delta'}(1) =
\begin{dcases}
p^{(t)} & \Delta'=0,\\
p^{(t-k)}p(1-p)^{k-1} & 1\leq\Delta'= k\leq t,\\
0 & \text{otherwise},
\end{dcases}
\]
and for $\Delta>0$,
\begin{multline*}
P^{t}_{\Delta,\Delta'}(1)=\\
\begin{dcases}
p^{(t)} & \Delta'=0,\\
(1-p^{(t-1)})(1-p) & \Delta'=1,\\
(1-p^{(t-k)})p^{2}(1-p)^{k-2} & 2\leq \Delta'=k\leq t-1,\\
p(1-p)^{t-1} & \Delta'=\Delta+t,\\
0 & \text{otherwise}.
\end{dcases}
\end{multline*}
\end{lemma}
\begin{proof}
The expression of $P^t_{\Delta,\Delta'}(1)$ is obtained by analyzing the system dynamics. The complete proof can be found in Appendix \ref{pf-CompactTrans} of the supplementary material.
\end{proof}
To get more insights, we provide the following corollary.
\begin{corollary}\label{lem-StateTransProb}
Under \textbf{Assumption 1}, equation \eqref{eq-Assumption1Trans} can be written equivalently as \eqref{eq-EquivalentEq1}
\begin{figure*}[!t]
\normalsize
\begin{equation}\label{eq-EquivalentEq1}
P_{\Delta,\Delta'}(1) =\\
\begin{dcases}
\sum_{t=\Delta'}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1) & 0\leq\Delta'\leq t_{max}-1,\Delta\geq\Delta',\\
\sum_{t=\Delta'}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1) + p_{t'}P^{t'}_{\Delta,\Delta'}(1) & 0\leq\Delta'\leq t_{max}-1,\Delta<\Delta',\\
p_{t'}P^{t'}_{\Delta,\Delta'}(1) & \Delta'\geq t_{max}.
\end{dcases}
\end{equation}
\hrulefill
\vspace*{4pt}
\end{figure*}
where $t'\triangleq\Delta'-\Delta$ and $P^{t'}_{\Delta,\Delta'}(1)\triangleq 0$ when $t'\leq0$ or when $t'>t_{max}$. Meanwhile, $P_{\Delta,\Delta'}(1)$ possesses the following properties.
\begin{enumerate}
\item $P_{\Delta,\Delta'}(1)$ is independent of $\Delta$ when $0\leq\Delta'\leq t_{max}-1$ and $\Delta\geq\Delta'$.
\item $P_{\Delta,\Delta'}(1) = P_{\Delta+\delta,\Delta'+\delta}(1)$ when $\Delta'\geq t_{max}$ and $\Delta\geq0$ for any $\delta\geq1$.
\item $P_{\Delta,\Delta'}(1)=0$ when $\Delta'>\Delta+t_{max}$ or when $t_{max}-1<\Delta'<\Delta+1$.
\end{enumerate}
\end{corollary}
\begin{proof}
The equivalent expression and the properties can be obtained by analyzing the equations detailed in Lemma \ref{lem-CompactTrans}. The complete proof can be found in Appendix \ref{pf-StateTransProb} of the supplementary material.
\end{proof}
The state transition probabilities under \textbf{Assumption 2} can be obtained similarly. To this end, we first define $p_{t^+}\triangleq\sum_{t=t_{max}+1}^{\infty}p_t$ as the probability that the transmission will be terminated and $P^{t^+}_{\Delta,\Delta'}(a)$ as the probability that action $a$ at state $s=(\Delta,0,-1)$ will result in state $s'=(\Delta',0,-1)$ when the transmission is terminated. Then, we have the following lemma.
\begin{lemma}\label{lem-Case2TransProb}
Under \textbf{Assumption 2},
\begin{equation}\label{eq-compactTransProbs}
P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}}p_tP^{t}_{\Delta,\Delta'}(1) + p_{t^+}P^{t^+}_{\Delta,\Delta'}(1),
\end{equation}
where
\[
P^{t}_{0,\Delta'}(1) =
\begin{dcases}
p^{(t)} & \Delta'=0,\\
p^{(t-k)}p(1-p)^{k-1} & 1\leq\Delta'= k\leq t,\\
0 & \text{otherwise},
\end{dcases}
\]
\[
P^{t^+}_{0,\Delta'}(1) = P^{t_{max}}_{0,\Delta'}(1),
\]
and for $\Delta>0$,
\begin{multline*}
P^{t}_{\Delta,\Delta'}(1) = \\
\begin{dcases}
p^{(t)} & \Delta'=0,\\
(1-p^{(t-1)})(1-p) & \Delta'=1,\\
(1-p^{(t-k)})p^{2}(1-p)^{k-2} & 2\leq\Delta'=k\leq t-1,\\
p(1-p)^{t-1} & \Delta'=\Delta+t,\\
0 & \text{otherwise},
\end{dcases}
\end{multline*}
\begin{multline*}
P^{t^+}_{\Delta,\Delta'}(1) = \\
\begin{dcases}
1-p^{(t_{max})} & \Delta'=0,\\
(1-p^{(t_{max}-k)})p(1-p)^{k-1} & 1\leq\Delta'= k\leq t_{max}-1,\\
(1-p)^{t_{max}} & \Delta' = \Delta+t_{max},\\
0 & \text{otherwise}.
\end{dcases}
\end{multline*}
Under \textbf{Assumption 2}, equation \eqref{eq-compactTransProbs} can be written equivalently as \eqref{eq-EquivalentEq2}.
\begin{figure*}[!t]
\normalsize
\begin{equation}\label{eq-EquivalentEq2}
P_{\Delta,\Delta'}(1) =
\begin{dcases}
\sum_{t=\Delta'}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1)+ p_{t^+}P^{t^+}_{\Delta,\Delta'}(1) & 0\leq\Delta'\leq t_{max}-1,\Delta\geq\Delta',\\
\sum_{t=\Delta'}^{t_{max}}p_tP^t_{\Delta,\Delta'}(1) + p_{t'}P^{t'}_{\Delta,\Delta'}(1)+ p_{t^+}P^{t^+}_{\Delta,\Delta'}(1) & 0\leq\Delta'\leq t_{max}-1,\Delta<\Delta',\\
p_{t'}P^{t'}_{\Delta,\Delta'}(1)+ p_{t^+}P^{t^+}_{\Delta,\Delta'}(1) & \Delta'\geq t_{max},\\
0 & \text{otherwise}.
\end{dcases}
\end{equation}
\hrulefill
\vspace*{4pt}
\end{figure*}
Meanwhile, $P_{\Delta,\Delta'}(1)$ possesses the following properties.
\begin{enumerate}
\item $P_{\Delta,\Delta'}(1)$ is independent of $\Delta$ when $0\leq\Delta'\leq t_{max}-1$ and $\Delta\geq\max\{1,\Delta'\}$.
\item $P_{\Delta,\Delta'}(1) = P_{\Delta+\delta,\Delta'+\delta}(1)$ when $\Delta'\geq t_{max}$ and $\Delta>0$ for any $\delta\geq1$.
\item $P_{\Delta,\Delta'}(1)=0$ when $\Delta'>\Delta+t_{max}$ or when $t_{max}-1<\Delta'<\Delta+1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof follows similar steps as presented in the proofs of Lemma \ref{lem-CompactTrans} and Corollary \ref{lem-StateTransProb}. The complete proof can be found in Appendix \ref{pf-Case2TransProb} of the supplementary material.
\end{proof}
Now that the expressions of $P_{\Delta,\Delta'}(a)$ under both assumptions have been obtained, we solve for $\pi_{\Delta}$ in the next subsection.
\subsection{Stationary Distribution}\label{sec-SteadyState}
Let $ET$ be the expected transmission time of an update. Since the channel remains idle when no transmission is initiated and each initiated transmission occupies the channel for $ET$ time slots on average, $\pi_{\Delta}$ satisfies the following equation.
\begin{equation}\label{eq-TotalProbs}
\sum_{\Delta=0}^{\tau-1}\pi_\Delta + ET\sum_{\Delta=\tau}^{\infty}\pi_\Delta = 1,
\end{equation}
where $ET= \sum_{t=1}^{t_{max}}tp_t$ under \textbf{Assumption 1} and $ET= \sum_{t=1}^{t_{max}}tp_t+t_{max}p_{t^+}$ under \textbf{Assumption 2}. We notice that there are still infinitely many $\pi_{\Delta}$ to calculate. To overcome this, we recall that, under a threshold policy, the suggested action is $a=1$ for all states $s=(\Delta,0,-1)$ with $\Delta\geq\tau$. Hence, we define $\Pi\triangleq \sum_{\Delta=\omega}^{\infty}\pi_\Delta$, where $\omega \triangleq t_{max} + \tau+1$. As we will see in the following subsections, $\Pi$ and $\pi_{\Delta}$ for $0\leq\Delta<\omega-1$ are sufficient for calculating the expected AoII achieved by the threshold policy. With $\Pi$ in mind, we have the following theorem.
\begin{theorem}\label{prop-StationaryDistribution}
For $0<\tau<\infty$, $\Pi$ and $\pi_{\Delta}$ for $0\leq\Delta<\omega-1$ are the solution to the following system of linear equations.
\[
\pi_0 = (1-p)\pi_0 + p\sum_{i=1}^{\tau-1}\pi_i+ P_{1,0}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i+\Pi\right).
\]
\[
\pi_1= p\pi_0 + P_{1,1}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i+\Pi\right).
\]
\[
\Pi = \sum_{i=\tau+1}^{\omega-1}\left(\sum_{k=\tau+1}^iP_{i,t_{max}+k}(1)\right)\pi_i + \sum_{i=1}^{t_{max}}\bigg(P_{\omega,\omega+i}(1)\bigg)\Pi.
\]
\[
\sum_{i=0}^{\tau-1}\pi_i + ET\left(\sum_{i=\tau}^{\omega-1}\pi_i+\Pi\right) = 1.
\]
For each $2\leq\Delta\leq t_{max}-1$,
\begin{multline*}
\pi_\Delta =\\
\begin{dcases}
(1-p)\pi_{\Delta-1} + P_{\tau,\Delta}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i+\Pi\right) & \Delta-1<\tau,\\
\sum_{i=\tau}^{\Delta-1}P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\sum_{i=\Delta}^{\omega-1}\pi_i+\Pi\right) & \Delta-1\geq\tau.
\end{dcases}
\end{multline*}
For each $t_{max}\leq\Delta\leq\omega-1$,
\[
\pi_{\Delta} = \begin{dcases}
(1-p)\pi_{\Delta-1} & \Delta-1<\tau,\\
\sum_{i=\tau}^{\Delta-1}P_{i,\Delta}(1)\pi_i & \Delta-1\geq\tau.
\end{dcases}
\]
\end{theorem}
\begin{proof}
We delve into the definition of $\Pi$. By leveraging the structural property of the threshold policy and the properties of $P_{\Delta,\Delta'}(a)$, we obtain the above system of linear equations. The complete proof can be found in Appendix \ref{pf-StationaryDistribution} of the supplementary material.
\end{proof}
\begin{remark}
The size of the system of linear equations detailed in Theorem \ref{prop-StationaryDistribution} is $\omega+1$.
\end{remark}
\begin{corollary}\label{cor-AoIISpecialCase1}
When $\tau=0$,
\[
\pi_{0} = \frac{P_{1,0}(1)}{ET[1-P_{0,0}(1)+P_{1,0}(1)]}.
\]
For each $1\leq\Delta\leq t_{max}$,
\[
\pi_{\Delta} = \sum_{i=0}^{\Delta-1}P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1}{ET} - \sum_{i=0}^{\Delta-1}\pi_i\right).
\]
\[
\Pi = \ddfrac{\sum_{i=1}^{t_{max}}\left(\sum_{k=1}^{i}P_{i,t_{max}+k}(1)\right)\pi_i}{1-\sum_{i=1}^{t_{max}}P_{t_{max}+1,t_{max}+1+i}(1)}.
\]
When $\tau=1$,
\[
\pi_0 = \frac{P_{1,0}(1)}{pET+P_{1,0}(1)},\quad \pi_1 = \frac{pP_{1,0}(1)+pP_{1,1}(1)}{pET+P_{1,0}(1)}.
\]
For each $2\leq \Delta\leq t_{max}+1$,
\[
\pi_\Delta = \sum_{i=1}^{\Delta-1}P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1-\pi_0}{ET} - \sum_{i=1}^{\Delta-1}\pi_i\right).
\]
\[
\Pi = \ddfrac{\sum_{i=2}^{t_{max}+1}\left(\sum_{k=2}^iP_{i,t_{max}+k}(1)\right)\pi_i}{1-\sum_{i=1}^{t_{max}}P_{t_{max}+2,t_{max}+2+i}(1)}.
\]
\end{corollary}
\begin{proof}
The calculations follow similar steps as detailed in the proof of Theorem \ref{prop-StationaryDistribution}. The complete proof can be found in Appendix \ref{pf-AoIISpecialCase1} of the supplementary material.
\end{proof}
We will calculate the expected AoII in the next subsection based on the above results.
\subsection{Expected AoII}
Let $\bar{\Delta}_{\tau}$ be the expected AoII achieved by threshold policy $\tau$. Then,
\begin{equation}\label{eq-expectedAoIIDet}
\bar{\Delta}_{\tau} = \sum_{\Delta=0}^{\tau-1}C(\Delta,0)\pi_\Delta + \sum_{\Delta=\tau}^{\infty}C(\Delta,1)\pi_\Delta,
\end{equation}
where $C(\Delta,a)$ is the expected sum of AoII during the transmission of the update caused by taking action $a$ at state $s=(\Delta,0,-1)$. Note that $C(\Delta,a)$ includes the AoII for being at state $s=(\Delta,0,-1)$.
\begin{remark}\label{rem-CandP}
In order to have a more intuitive understanding of the definition of $C(\Delta,a)$, we use $\eta$ to denote a possible path of the state during the transmission of the update and let $H$ be the set of all possible paths. Moreover, we denote by $C_{\eta}$ and $P_{\eta}$ the sum of AoII and the probability associated with path $\eta$, respectively. Then,
\[
C(\Delta,a) = \sum_{\eta\in H}P_{\eta}C_{\eta}.
\]
For example, we consider the case of $p_2=1$, in which every transmission takes $2$ time slots to be delivered, and action $a=1$ is taken at state $(2,0,-1)$. Then, a sample path $\eta$ of the state during the transmission can be the following.
\[
(2,0,-1)\rightarrow(3,1,1)\rightarrow(4,0,-1).
\]
By our definition, $C_{\eta}=2+3=5$ and $P_{\eta} = Pr[(3,1,1)\mid (2,0,-1),a=1]\cdot Pr[(4,0,-1)\mid (3,1,1),a=1]$ for the above sample path.
\end{remark}
In the following, we calculate $C(\Delta,a)$. Similar to Section \ref{sec-StateTransProb}, we define $C^{t}(\Delta,a)$ as the expected sum of AoII during the transmission of the update caused by action $a$ at state $s=(\Delta,0,-1)$, given that the transmission takes $t$ time slots. Then, under \textbf{Assumption 1},
\begin{equation}\label{eq-CompactCostRan}
C(\Delta,a) = \begin{dcases}
\Delta & a=0,\\
\sum_{t=1}^{t_{max}}p_tC^{t}(\Delta,1) & a=1,
\end{dcases}
\end{equation}
and, under \textbf{Assumption 2},
\begin{equation}\label{eq-CompactCostRan2}
C(\Delta,a) = \begin{dcases}
\Delta & a=0,\\
\sum_{t=1}^{t_{max}}p_tC^{t}(\Delta,1) + p_{t^+}C^{t_{max}}(\Delta,1) & a=1.
\end{dcases}
\end{equation}
Hence, obtaining the expressions of $C^{t}(\Delta,1)$ is sufficient. To this end, we define $C^k(\Delta)$ as the expected AoII $k$ time slots after the transmission starts at state $s=(\Delta,0,-1)$, given that the transmission is still in progress. Then, we have the following lemma.
\begin{lemma}\label{lem-CompactCost}
$C^{t}(\Delta,1)$ is given by
\[
C^{t}(\Delta,1) = \sum_{k=0}^{t-1}C^k(\Delta),
\]
where $C^k(\Delta)$ is given by \eqref{eq-Cost}.
\begin{figure*}[!t]
\normalsize
\begin{equation}\label{eq-Cost}
C^k(\Delta) = \begin{dcases}
\sum_{h=1}^{k} hp^{(k-h)}p(1-p)^{h-1} & \Delta=0,\\
\sum_{h=1}^{k-1} h(1-p^{(k-h)})p(1-p)^{h-1} + (\Delta+k)(1-p)^k & \Delta>0.
\end{dcases}
\end{equation}
\hrulefill
\vspace*{4pt}
\end{figure*}
\end{lemma}
\begin{proof}
The expression of $C^k(\Delta)$ is obtained by analyzing the system dynamics. The complete proof can be found in Appendix \ref{pf-ExpectedCost} of the supplementary material.
\end{proof}
Next, we calculate the expected AoII achieved by the threshold policy. We start with the case of $\tau=\infty$.
\begin{theorem}\label{prop-LazyPerformance}
The expected AoII achieved by the threshold policy with $\tau=\infty$ is
\[
\bar{\Delta}_\infty = \frac{1}{2p}.
\]
\end{theorem}
\begin{proof}
In this case, the transmitter will never initiate any transmissions. Hence, the state transitions are straightforward. The complete proof can be found in Appendix \ref{pf-LazyPerformance} of the supplementary material.
\end{proof}
In the following, we focus on the case where $\tau$ is finite. We recall that the expected AoII is given by \eqref{eq-expectedAoIIDet}. The problem arises because of the infinite sum. To overcome this, we adopt a similar approach as proposed in Section \ref{sec-SteadyState}. More precisely, we leverage the structural property of the threshold policy and define $\Sigma\triangleq \sum_{\Delta=\omega}^{\infty}C(\Delta,1)\pi_\Delta$. Then, equation \eqref{eq-expectedAoIIDet} can be written as
\[
\bar{\Delta}_{\tau} = \sum_{i=0}^{\tau-1}C(i,0)\pi_i + \sum_{i=\tau}^{\omega-1}C(i,1)\pi_i + \Sigma.
\]
As we have obtained the expressions of $\pi_\Delta$ and $C(\Delta,a)$ in previous subsections, it is sufficient to obtain the expression of $\Sigma$.
\begin{theorem}\label{prop-Performance}
Under \textbf{Assumption 1} and for $0\leq\tau<\infty$,
\[
\Sigma = \ddfrac{\sum_{t=1}^{t_{max}}\left[p_tP^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1}C(i,1)\pi_i\right) + \Delta_t'\Pi_t\right]}{1-\sum_{t=1}^{t_{max}}\bigg(p_tP^t_{1,1+t}(1)\bigg)},
\]
where
\[
\Pi_t= p_{t}P^{t}_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1}\pi_i + \Pi\right),
\]
\[
\Delta_t' = \sum_{i=1}^{t_{max}}p_i\left(\frac{t-t(1-p)^i}{p}\right).
\]
\end{theorem}
\begin{proof}
We delve into the definition of $\Sigma$ and repeatedly use the properties of $C(\Delta,a)$ and $P_{\Delta,\Delta'}(a)$. The complete proof can be found in Appendix \ref{pf-Performance} of the supplementary material.
\end{proof}
\begin{theorem}\label{prop-ThresholdPerformance}
Under \textbf{Assumption 2} and for $0\leq\tau<\infty$,
\[
\Sigma = \ddfrac{\sum_{t=1}^{t_{max}}\left[\left(\sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)C(i,1)\pi_i\right) + \Delta_t'\Pi_t\right]}{1-\sum_{t=1}^{t_{max}}\Upsilon(\omega+t,t)},
\]
where
\[
\Upsilon(\Delta,t)= p_tP^t_{\Delta-t,\Delta}(1) + p_{t^+}P^{t^+}_{\Delta-t,\Delta}(1),
\]
\[
\Pi_t = \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\pi_i + \Upsilon(\omega+t,t)\Pi,
\]
\[
\Delta_t' = \sum_{i=1}^{t_{max}}p_i\left(\frac{t-t(1-p)^i}{p}\right) + p_{t^+}\left(\frac{t-t(1-p)^{t_{max}}}{p}\right).
\]
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{prop-Performance}. The complete proof can be found in Appendix \ref{pf-ThresholdPerformance} of the supplementary material.
\end{proof}
\section{Numerical Results}\label{sec-Numerical}
We lay out the numerical results in Fig.~\ref{fig-Numerical}.
\begin{figure*}%
\centering
\begin{subfigure}{2in}
\centering
\includegraphics[width=2in]{Figure/Expected_AoII_Ass1_Geo.pdf}%
\caption{Performance under \textbf{Assumption 1} and Geometric distribution.}%
\label{fig-Assumption1Geo}%
\end{subfigure}\hfill%
\begin{subfigure}{2in}
\centering
\includegraphics[width=2in]{Figure/Expected_AoII_Ass1_Zipf.pdf}%
\caption{Performance under \textbf{Assumption 1} and Zipf distribution.}%
\label{fig-Assumption1Zipf}%
\end{subfigure}\hfill%
\begin{subfigure}{2in}
\centering
\includegraphics[width=2in]{Figure/Expected_AoII_Ass2_Geo.pdf}%
\caption{Performance under \textbf{Assumption 2} and Geometric distribution.}%
\label{fig-Assumption2Geo}%
\end{subfigure}%
\caption{Illustrations of the expected AoII as a function of $p$ and $\tau$. In the figure, lines represent the simulation results, and markers represent the calculation results. We set the upper limit on the transmission time to $t_{max}=5$, the success probability of the Geometric distribution to $p_s = 0.7$, and the constant of the Zipf distribution to $a=3$. The simulation results are the average of $15$ runs, each run containing $25000$ epochs.}
\label{fig-Numerical}
\end{figure*}
\begin{itemize}
\item Fig.~\ref{fig-Assumption1Geo} considers the system under \textbf{Assumption 1} with $t_{max}=5$ and a Geometric transmission delay with success probability $p_s=0.7$. Specifically, $p_t = (1-p_s)^{t-1}p_s$. For each considered threshold $\tau$, we vary the value of $p$ and plot the expected AoII obtained through numerical simulation and calculated using the results of Section \ref{sec-MDP}.
\item Fig.~\ref{fig-Assumption1Zipf} considers the same system as the one considered in Fig.~\ref{fig-Assumption1Geo}, except this time, the transmission delay follows a Zipf distribution with the constant $a=3$. More precisely, $p_t = \frac{t^{-a}}{\sum_{i=1}^{t_{max}}i^{-a}},\ 1\leq t\leq t_{max}$. Zipf distribution is commonly used in linguistics and is also studied in the context of AoII~\cite{b6} and AoI~\cite{b11}.
\item Fig.~\ref{fig-Assumption2Geo} considers the same system as the one considered in Fig.~\ref{fig-Assumption1Geo}, except this time, the system adopts \textbf{Assumption 2} with $t_{max}=5$.
\end{itemize}
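For reference, the two delay distributions used in Fig.~\ref{fig-Numerical} can be generated as follows (our sketch; note that the truncated Geometric weights $p_t=(1-p_s)^{t-1}p_s$, $1\leq t\leq t_{max}$, sum to less than one, the residual mass being the probability that an untruncated Geometric delay exceeds $t_{max}$):

```python
def geometric_delay(p_s, t_max):
    # p_t = (1 - p_s)^(t-1) * p_s for t = 1..t_max
    return [(1.0 - p_s) ** (t - 1) * p_s for t in range(1, t_max + 1)]

def zipf_delay(a, t_max):
    # p_t = t^(-a) / sum_i i^(-a) for t = 1..t_max
    z = sum(i ** (-a) for i in range(1, t_max + 1))
    return [t ** (-a) / z for t in range(1, t_max + 1)]

def expected_delay(p_delay):
    # ET = sum_t t * p_t (Assumption 1)
    return sum(t * pt for t, pt in enumerate(p_delay, start=1))
```

With $a=3$ and $t_{max}=5$, the Zipf weights concentrate heavily on short delays, which explains the relatively small expected transmission time in Fig.~\ref{fig-Assumption1Zipf}.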
We can confirm the correctness of our theoretical analysis and calculations based on the results shown in the figure. At the same time, we can conclude from the figure that the threshold policy with $\tau=0$ is not optimal. That is, when AoII is zero, transmitting new updates is harmful to the system. One reason is that updates transmitted when AoII is zero do not provide new information to the receiver. Meanwhile, the transmission will occupy the channel for a few time slots. Therefore, such action deprives the transmitter of the ability to send new updates for the next few time slots without providing the receiver with any new information.
\section{Conclusion}
In this paper, we study the AoII in a communication system in which the communication channel has a random delay. To ease the theoretical analysis, we make two independent assumptions. Under the first assumption, the transmission time is capped, and the update is always delivered. Under the second assumption, the system terminates a transmission when its duration exceeds a specific value. For the remote receiver to track the dynamic source, the transmitter chooses when to send updates. This paper considers the threshold policy, under which the transmitter initiates a transmission only when the AoII is no less than the threshold. Leveraging Markov chain analysis, we characterize the system dynamics and calculate exactly the expected AoII achieved by the threshold policy.
Finally, by analyzing the calculation results, we find that the threshold policy with $\tau=0$ is not optimal. In other words, to achieve the best performance, the transmitter should transmit updates selectively. Aware of this phenomenon, we naturally ask what the optimal policy is. We also recall that one of the reasons for this phenomenon is that the transmitter cannot preempt an ongoing transmission. Consequently, the transmitter can only wait until the current transmission is completed before initiating a new one. Therefore, we also ask what the optimal policy is if the transmitter can preempt. These two questions will be the focus of our subsequent work. We also plan to extend the methodology demonstrated in this paper to more generic system setups, such as more complex source processes, more general AoII metrics, and systems with multiple transmitter-receiver pairs.
\bibliographystyle{IEEEtran}
\section{Introduction}
Autonomous robotic systems have become one of the most popular research topics in recent years due to their huge potential social benefits. In particular, autonomous driving is developing rapidly and requires efficient motion planning in complex and highly dynamic environments, while taking into account the kinodynamic constraints of a non-holonomic autonomous vehicle. Often, the planners that address the first aspect of the problem, i.e., the dynamic environment, like the ones presented in~\cite{Otte2016RRTX, phillips2011sipp}, do not take into account the kinodynamic constraints. On the other hand, kinodynamic planners often do not explicitly reason about the future changes of the environment, even if these changes are foreseen, e.g., predicted by the control system of the robot. In this work we want to enrich kinodynamic planning methods with the ability to take the dynamics of the environment into account as well (at the planning stage).
Two approaches to kinodynamic planning are widespread: lattice-based and sampling-based planning methods. Lattice-based planning methods utilize the so-called motion primitives~\cite{MotionPrimitives} that form a regular lattice. Each motion primitive represents a small segment of a kinodynamically feasible trajectory of the robot, which is pre-computed before planning. At the planning stage, search-based algorithms (e.g. A*~\cite{hart1968formal} or its variants) are used to find the resultant trajectory, represented as a sequence of motion primitives. In contrast, sampling-based planners, e.g. RRT~\cite{RRT} or RRT*~\cite{RRT*}, grow a search tree by sampling states in the robot's configuration space and invoke a local planner to connect two states while respecting the kinematic constraints of the robot. Thus, the motion primitives are constructed online (i.e. while planning).
\begin{figure}[]
\centering
\includegraphics[width=0.49\textwidth]{Images/IntroFigure.png}
\caption{Illustration of the POLAMP algorithm. The \textbf{Red} arrow represents the start state, the \textbf{Blue} arrow -- the goal one. \textbf{Black} and \textbf{Blue} rectangles represent the static and dynamic obstacles, respectively. The \textbf{Cyan} curve is the trajectory generated by A*-POLAMP.}\label{fig:IntroFigure}
\end{figure}
One of the prominent approaches to alleviate the complexity of designing local planners that respect the kinematic constraints of the robot is to use methods based on reinforcement learning, such as the ones proposed in RL-RRT~\cite{Chiang2019RL-RRT} and PRM-RL~\cite{Faust2018PRM-RL}. In this work, we suggest the Policy Optimization to Learn Adaptive Motion Primitives (POLAMP) algorithm to take into account the future changes of the environment at the planning stage, while producing plans that satisfy the kinodynamic constraints of the robot. POLAMP utilizes a reinforcement learning approach to find a policy that generates local segments of the trajectory, which are embedded in the global planning algorithms, RRT and A*, to generate a global motion plan. Our learnable local planner utilizes local observations to avoid both static and dynamic obstacles and, as well, respects the kinodynamic constraints of the robot. As a result, POLAMP is able to generate feasible solutions with a high success rate ($>92\%$) in environments with up to $50$ moving obstacles, thus outperforming competitors.
\section{Related Work}
The problem of kinodynamic planning is well researched, and various approaches such as graph-based, sampling-based, optimization-based, reinforcement learning methods, or combinations of them are used; see~\cite{gonzalez2015review} for a review. Nevertheless, kinodynamic planning in the presence of dynamic obstacles is still a challenging problem.
A widespread approach to kinodynamic planning in robotics is sampling-based planners. The most popular way to account for the robot's dynamics is to sample in the robot's state space and attempt to connect states via different local planners~\cite{hwan2011anytime,webb2013kinodynamic,BIT*SQP,Otte2016RRTX}, including for car-like robots~\cite{vailland2021CubicBezier}. The method described in this work also relies on a local planner, but it is learnable and takes the moving obstacles into account. Unlike the methods presented in~\cite{Otte2016RRTX, Chen2019Horizon}, it assumes that the information on how the obstacles are intended to move in the future is available (e.g. predicted from the sensors' observations) and takes this information into account while planning.
Recently a lattice-based planner for car-like robots in highly dynamic environments was proposed~\cite{lin2021search}. Other variants of lattice-based planners for car-like robots are described in ~\cite{MotionPrimitives, rufli2010design, ziegler2009spatiotemporal}. Contrary to these algorithms the suggested method does not construct a lattice in the high-dimensional space to search for a feasible plan, but uses a local learnable planner to connect states.
There also exist methods that first generate a rough path, often one that does not take the kinodynamic constraints into account, and then generate controls to follow the path while respecting the system's dynamics and avoiding obstacles. Variants of these methods are described in~\cite{perez2021robot, kontoudis2019kinodynamic}. Unlike them, the method proposed in this work builds a feasible trajectory in one planning step. Avoiding the moving obstacles is performed by utilizing the knowledge of their future trajectories.
Finally, the methods most similar to the one presented in this article are RL-RRT~\cite{Chiang2019RL-RRT} and PRM-RL~\cite{Faust2018PRM-RL}. Our method also uses a learnable local planner inside a sampling-based planner. However, unlike these methods, our local planner considers the presence of dynamic obstacles.
\section{Problem Statement}
We are interested in planning a feasible kinodynamic trajectory for a non-holomonic robot, that avoids both static and moving obstacles. In particular we are interested in car-like robots whose dynamics is described as~\cite{surveyMotionPlanning}:
\begin{align}\label{eq:diffEquationsRobot}
&\dot{x} = v\cos(\theta)\nonumber\\
&\dot{y} = v\sin(\theta)\\
&\dot{\theta} = \frac{v}{L} \tan(\gamma), \nonumber
\end{align}
where $x$,$y$ are the coordinates of the robot's reference point (middle of the rear axle), $\theta$ is the orientation, $L$ is the wheel-base, $v$ is the linear velocity, and $\gamma$ is the steering angle. The first three variables comprise the state vector: $\boldsymbol{x}(t)=(x,y,\theta)$.
The latter two variables form the control vector: $\boldsymbol{u}(t) = (v, \gamma)$, which can also be re-written using the acceleration $a$ and the rotation rate $\omega$ as follows: $v = v_0 + a \cdot t, \gamma = \gamma_0 + \omega \cdot t$.
The robot is operating in a 2D workspace populated with static and dynamic obstacles. Their shapes are rectangular (as is the robot's). Let $Obs = \{ Obs_1(t), ..., Obs_n(t)\}$ denote the set of obstacles, where $Obs_i(t)$ maps time moments to the positions of the obstacle's reference point in the workspace. For the static obstacles, it obviously holds that $\forall t:Obs_i(t)=Obs_i(0)$. In our work, we consider the functions $Obs_i(t)$ to be known.
Denote by $\mathcal{X}_{free}(t)$ all the configurations of the robot that are not in collision with any of the obstacles at time moment $t$ (w.r.t. the robot's and the obstacles' shapes). The problem now is to find the controls (as functions of time) that move the robot from its start configuration $s_{start}$ to the goal one $s_{goal}$ such that the kinodynamic constraints~(\ref{eq:diffEquationsRobot}) are met and the resultant trajectory lies in $\mathcal{X}_{free}(t)$.
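To make the model concrete, the dynamics~(\ref{eq:diffEquationsRobot}) can be forward-integrated to roll out candidate motion segments. A minimal Euler-integration sketch in Python (ours; the wheel base $L$ and time step $dt$ are illustrative values, not taken from the paper):

```python
import math

def rollout(state, controls, L=2.5, dt=0.1):
    """Euler-integrate the kinematic car model.

    state = (x, y, theta); each control is (v, gamma).
    Returns the list of visited states, starting with the initial one.
    """
    x, y, theta = state
    traj = [state]
    for v, gamma in controls:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += (v / L) * math.tan(gamma) * dt
        traj.append((x, y, theta))
    return traj
```

With zero steering the model reduces to straight-line motion, which is a convenient unit check before plugging such rollouts into a planner.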
\section{Method}
We rely on a combination of a global and a local planner to solve the described problem. The global planner is aimed at systematically decomposing the problem into a set of sub-problems which are easier to solve, i.e. moving from one configuration to another. The local planner is tailored to solve the latter problem. Any such sub-problem is in essence a two-point boundary value problem with additional constraints (prohibiting the robot from colliding with both static and dynamic obstacles), which is hard to solve directly. The crux of our approach is to cast this problem as a partially observable Markov decision process (POMDP) and to obtain the policy for solving the POMDP via reinforcement learning, more specifically via a custom-tailored Proximal Policy Optimization algorithm. Once the policy is obtained (learned), we plug it into the global planner. As a global planner, we can use an adaptation of the renowned algorithms RRT and A* to get the final solver. We name this type of solvers POLAMP -- Policy Optimization to Learn Adaptive Motion Primitives.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{Images/NewActorCriticScheme.png}
\caption{Actor-Critic architecture that is implemented in POLAMP}
\label{fig:actor-critic-arch}
\end{figure}
\subsection{Learnable Local Planner}
\paragraph{Background}
Formally, a POMDP can be represented as a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \Omega)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}$ is the state-transition model, $\mathcal{R}$ is the reward function, and $\Omega$ is the observation space. During learning, at each time step the agent receives an observation $o_t \in \Omega$, takes an action $a_t \in \mathcal{A}$ and receives a reward $r_t$. The goal is to learn a policy, i.e. the mapping from observations to distributions over actions, $\pi: \Omega\rightarrow P(\mathcal A)$. The policy should maximize the following expected return from the start state $s_t$:
\begin{equation*}
J(\pi) = \mathbb E_{r_{i}, s_{i} \sim \mathcal{P}, a_{i} \sim \pi}[\sum_{i=t}^T \gamma^{i-t}r(s_i, a_i) | s_t, a_t], i>t,
\end{equation*}
where $\gamma$ is the discounting factor.
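For concreteness, the discounted return inside this expectation can be accumulated backwards over a finite reward sequence; the following minimal Python helper (an illustration, not part of our planner) shows this:

```python
def discounted_return(rewards, gamma):
    """Compute sum_i gamma^i * r_i over a finite episode, iterating backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```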
The $Q$-function concisely captures the information the agent needs in order to make an optimal decision:
\begin{equation*}
Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_{i}, s_{i} \sim \mathcal{P}, a_{i} \sim \pi}[R_t | s_t, a_t], i>t,
\end{equation*}
where $R_t = \sum_{i=t}^T \gamma^{i-t}r(s_i, a_i)$ is the discounted return.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Images/learningEnvironment.png}
\caption{The learning environment. The \textbf{Green} rectangle with the \textbf{Red} arrow is the current state of the robot. The \textbf{Green} rectangle with the \textbf{Cyan} arrow is the desired goal state. \textbf{Blue} rectangles are static obstacles, and the \textbf{Blue} rectangle with the \textbf{Pink} arrow is the moving obstacle. \textbf{Orange} lines are the laser beams.}
\label{fig:learningEnvironment}
\end{figure}
In this paper, we consider algorithms of the actor-critic family, which, compared to plain policy-gradient methods, are more stable, have lower variance, and are less prone to converging to a local minimum. The actor updates the policy approximator $\hat \pi_w$ using the following equation:
\begin{equation*}
\nabla_w J(\hat{\pi}_w) = \mathbb{E}_{\hat{\pi}_w} \big[ \nabla_w \log \hat{\pi}_w(s,a) Q^{\hat{\pi}_w}(s,a) \big],
\end{equation*}
where $\hat \pi_w$ is an arbitrary differentiable policy. Critic evaluates the approximation of the $Q^{\hat{\pi}_w}(s,a)$ value for the current policy $\hat \pi_w$. Actor-critic algorithms have two sets of parameters: a critic updates parameters $\phi$ of the $Q$-function, and an actor updates parameters $w$ of the policy according to the critic assumptions.
In this work, we use the Proximal Policy Optimization method~\cite{PPO} (PPO) because it showed the best performance among the considered methods in our preliminary evaluation. The actor part of PPO optimizes the clipped loss function
\begin{multline*}
L(s, a, w_{old}, w) = \min (\frac{\pi_w(a|s)}{\pi_{w_{old}}(a|s)} A^{\pi_{w_{old}}}(s, a), \\
clip(\frac{\pi_w(a|s)}{\pi_{w_{old}}(a|s)}, 1- \epsilon, 1+\epsilon) A^{\pi_{w_{old}}}(s, a)),
\end{multline*}
where $A^{\pi_{w}}$ is an estimate of the advantage function $A(s, a) = Q(s, a) - V(s)$ given by the critic. Clipping is a regularizer that removes the incentive for the policy to change dramatically; the hyperparameter $\epsilon$ controls how far the new policy can move away from the old one while still profiting from the objective. When integrating the PPO algorithm into our method, we treated the state $s_t$ as a function of the observation, $s_t\approx f(o_t)$, where $f$ denotes the lower layers of the neural-network approximators of the actor and critic shown in Fig.~\ref{fig:actor-critic-arch}.
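To make the clipped surrogate concrete, here is a minimal per-sample sketch in Python (scalar form; an actual implementation operates on batches of log-probability ratios):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A),
    where ratio = pi_new(a|s) / pi_old(a|s)."""
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return min(ratio * advantage, clipped * advantage)
```

Note how, for a positive advantage, a ratio above $1+\epsilon$ yields no extra objective value, which is exactly the "no incentive to change dramatically" property described above.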
\paragraph{Observations, actions and rewards\label{localPlannner}}\label{paragraph:observations-actions-reward}
In this paper, we consider actions $a_t = (a, \omega) \in \mathbb{R}^2$ composed of the linear acceleration $a \in (-5, 5) \; m/s^2$ and the rotation rate $\omega \in (-\pi/12, \pi/12) \;rad/s$. These can be converted to the robot's controls using the transformations for Eq.~\ref{eq:diffEquationsRobot}, where we restrict the linear velocity to $v \in (0, 4) \; m/s$ and the steering angle to $\gamma \in (-\pi/6, \pi/6) \;rad$.
The observation $o_t$ is a vector that consists of $N_{beams}=39$ lidar measurements covering the 360$^{\circ}$ surrounding of the robot up to a range of $beam_{max} = 20\;m$ (see Fig.~\ref{fig:learningEnvironment}), concatenated with the features $(\Delta x, \Delta y, \Delta \theta, \Delta v, \Delta \gamma, \theta, v, \gamma, a, \omega)$, where $\Delta(s_i)$ stands for the difference between the respective parameter $s_i$ of the goal state and the current one, $(\theta, v, \gamma)$ are the last three parameters of the current state, and $(a, \omega)$ are the current controls. We consider an ideal environment, i.e. neither the simulation nor the actuation model has errors.
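A sketch of how such an observation vector could be assembled is given below (a hypothetical helper; the function name and the clipping of beams to the maximum range are our assumptions):

```python
N_BEAMS = 39
BEAM_MAX = 20.0  # meters

def build_observation(lidar, state, goal, controls):
    """Assemble the observation: 39 clipped lidar beams followed by
    (dx, dy, dtheta, dv, dgamma, theta, v, gamma, a, omega).
    state/goal are (x, y, theta, v, gamma); controls are (a, omega)."""
    assert len(lidar) == N_BEAMS
    beams = [min(max(b, 0.0), BEAM_MAX) for b in lidar]
    deltas = [g - s for g, s in zip(goal, state)]   # goal-relative features
    return beams + deltas + list(state[2:]) + list(controls)  # length 49
```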
\begin{figure*}[t]
\centering
\includegraphics[width=0.32\textwidth]{Images/new_map2.png}
\includegraphics[width=0.32\textwidth]{Images/new_map0.png}
\includegraphics[width=0.32\textwidth]{Images/new_map1.png}
\caption{Maps (1-3) used in our tests. \textbf{Red} rectangles and arrows show the start coordinates and orientations (with the start velocity $v = 0$ and the steering angle $\gamma = 0$), and \textbf{Cyan} rectangles and arrows show the goal coordinates and orientations.}
\label{fig:environments}
\end{figure*}
The reward function is described by:
\begin{equation*}
\mathcal R = w_r^T [r_{\text{goal}}, r_{\text{col}}, r_{\text{field}}, r_{t}, r_{\text{backward}}, r_{v_\text{max}}, r_{\gamma_\text{max}}],
\end{equation*}
where $w_r$ is a vector of weights; $r_{\text{goal}}$ is 1 if the agent has reached the goal state within the $(\epsilon_{\rho}, \epsilon_{\theta})$ tolerance and 0 otherwise; $r_{\text{col}}$ is $-1$ if the agent collides with an obstacle and 0 otherwise; $ r_{\text{field}} = \rho_{curr} - \rho_{last}$, where $\rho_{last} = \|s_{t - 1} - s_{goal}\|$ and $\rho_{curr} = \|s_t - s_{goal}\|$, penalizes the agent for moving away from the goal; $r_{t}=-1$ is a constant penalty for each time step; $r_{\text{backward}}$ is $-1$ if the agent uses the rear gear (moves backwards) and 0 otherwise; $r_{v_\text{max}}$ is $-1$ for exceeding the maximum speed limit; and $r_{\gamma_\text{max}}$ is $-1$ for exceeding the maximum steering angle threshold. We set the weights to $w_r = [20, 8, 1, 0.1, 0.3, 0.5, 0.5]$ (empirically, these values result in more efficient learning).
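The reward computation can be sketched as a weighted sum of the seven terms listed above (an illustrative Python helper mirroring the description; the function and argument names are our own):

```python
W_R = [20, 8, 1, 0.1, 0.3, 0.5, 0.5]

def step_reward(reached_goal, collided, rho_last, rho_curr,
                backward, v_exceeded, gamma_exceeded):
    """Weighted sum of the seven reward terms, as stated in the text."""
    terms = [
        1.0 if reached_goal else 0.0,      # r_goal
        -1.0 if collided else 0.0,         # r_col
        rho_curr - rho_last,               # r_field
        -1.0,                              # r_t, constant per-step penalty
        -1.0 if backward else 0.0,         # r_backward
        -1.0 if v_exceeded else 0.0,       # r_vmax
        -1.0 if gamma_exceeded else 0.0,   # r_gammamax
    ]
    return sum(w * t for w, t in zip(W_R, terms))
```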
\paragraph{Curriculum policy learning\label{policyLearning}}
To accelerate training we propose a three-stage curriculum (see Fig.~\ref{fig:curriculum-training}). During the first stage, we train the agent in an empty environment; this stage is tailored to learning the kinodynamic constraints of the vehicle. Once the agent achieves an acceptable success rate (80\% of solved tasks), we stop training and proceed to the next stage. In the second stage, we re-train the policy in a new environment populated with static obstacles, so the agent learns to avoid collisions with them. In the last stage, we add an adversarial dynamic obstacle to the static environment, so the agent learns to circumnavigate it or, if needed, to wait in place until the obstacle passes. The latter is an essential skill for planning with dynamic obstacles.
\subsection{Global planners}
\begin{algorithm}[t]
\caption{POLAMP with RRT planner}
\label{alg:RRT-POLAMP}
\begin{algorithmic}[1]
\Require $s_{start}$, $s_{goal}$, $Obs(t)$, $N_{max}$, RL-PI, $D$, $N_{nbs}$, $R_{ext}$
\Ensure $\mathcal{P}$: Motion Plan
\State $s_{start}.t \gets 0$
\State $\mathcal{T} \gets$
\Call{InitializeTree}{$s_{start}$}
\While{$N_{\max}$ was not reached}
\State $s_{rand} \gets$ \Call{RandomSample}{}
\State $neighbors \gets$ \Call{Nearest}{$\mathcal{T}, s_{rand}$, $N_{nbs}$}
\For{$s_i \in neighbors $}
\State $s_j \gets$ \Call{Extend}{$s_i, s_{rand}, R_{ext}$}
\State $s_j \gets$ \Call{RL-Steer}{$s_i, s_j, s_{goal}, Obs(t)$, RL-PI, $D$}
\If{$s_j.tr$ is not empty}
\State $\mathcal{T} \gets$ \Call{APPEND}{$s_j$}
\If{$s_{goal}.tr$ is not empty}
\State $\mathcal{T} \gets$ \Call{APPEND}{$s_{goal}$}
\State \Return $\mathcal{P}$ = \Call{MotionPlan}{$\mathcal{T}$}
\Else
\State \textbf{break}
\EndIf
\EndIf
\EndFor
\EndWhile
\State \Return $\mathcal{P} = \emptyset$
\end{algorithmic}
\end{algorithm}
Although our learnable local planner can generate a trajectory between two nearby states, it is not well-suited for constructing long-term plans. Thus we additionally use a global planner that can consistently explore different regions of the workspace, relying on the global observation, and find ways to reach remotely located goals. In this work, we utilize the classical RRT and A* algorithms as the global planners. For a detailed explanation of these algorithms we refer the reader to the original papers, and now proceed with an overview.
The pseudocode of both algorithms is given in Alg.~\ref{alg:RRT-POLAMP} and Alg.~\ref{alg:ASTAR-POLAMP} respectively. The main difference between the sampling-based (i.e. RRT) and the lattice-based (i.e. A*) algorithms lies in how the state to extend is chosen and how the given state is extended. On the one hand, RRT uses \textit{RandomSample} in the state space to grow the search tree randomly from the \emph{Nearest} state in the tree, using \emph{Extend} to limit the maximum distance between the states that should be connected. On the other hand, A* does not choose a random sample, but rather uses a deterministic priority queue of states, OPEN, to choose which state to expand (extend). The OPEN queue is sorted in order of increasing $f$-values, where $f(s) = g(s) + \epsilon \cdot h(s)$ consists of two terms: $g(s)$, the cost of the shortest path from the start state to the current one, and $h(s)$, the heuristic estimate of the cost from $s$ to the goal. Upon choosing the most promising state, A* generates the next states (\emph{Successors}) using a fixed set of motion primitives through which the robot reaches them.
The major difference between these classical algorithms and POLAMP is that POLAMP explicitly reasons about time in order to take the dynamic obstacles into account while planning. Local planning is implemented with the \textit{RL-Steer} function, which solves a local planning problem defined by the two states $s_i$ and $s_j$. If the distance between $s_i$ and $s_{goal}$ is less than $D$, then the goal is attempted to be reached from $s_i$. To reach the target state, the policy \textbf{RL-PI} is used, which has access to information on how the dynamic obstacles move, i.e. $Obs(t)$. If \textbf{RL-PI} manages to connect the states, it returns the generated trajectory $s_j.tr$ and the time by which the target state is reached, i.e. $s_j.t$. Thus all states in the search tree carry information on their reaching time, which is used while planning.
In this work, we use a modified version of RRT, where at each iteration \emph{Nearest} returns the $N_{nbs}$ nearest neighbors within the maximum extension radius $R_{ext}$, and the planner tries to generate trajectories to them until one of them is built.
Unlike the original A* algorithm, where the search ends when the goal state is expanded, in this work the search ends as soon as a trajectory to the final state is found. To generate the successors we use the online motion primitives technique from~\cite{lin2021search}, i.e. we apply discrete controls $\xi = (a, \gamma)$ for a time horizon $H$ to determine the robot's desired configurations. We then use our learned policy to construct collision-free trajectories to these configurations.
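A minimal sketch of such online primitive generation follows, assuming a kinematic bicycle model with a hypothetical wheelbase $L$ and integration step $dt$ (the actual robot model is given by Eq.~\ref{eq:diffEquationsRobot}; this is an illustration only):

```python
import math

def rollout_primitive(state, a, steer, H=3.0, dt=0.1, L=2.5):
    """Integrate a kinematic bicycle model under constant controls (a, steer)
    for a horizon H; state = (x, y, theta, v). L and dt are assumptions."""
    x, y, theta, v = state
    for _ in range(int(round(H / dt))):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v * math.tan(steer) / L * dt
        v += a * dt
    return (x, y, theta, v)

def successors(state, steering_set, a=0.0, H=3.0):
    """One candidate successor configuration per discrete steering angle."""
    return [rollout_primitive(state, a, g, H) for g in steering_set]
```

The learned policy is then queried to reach each returned configuration, discarding those for which no collision-free trajectory is found.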
\begin{algorithm}[ht!]
\caption{POLAMP with A* planner}
\label{alg:ASTAR-POLAMP}
\begin{algorithmic}[1]
\Require $s_{start}$, $s_{goal}$, $Obs(t)$, $T$, $\xi$, $D$, RL-PI, $N_{max}$
\Ensure $\mathcal{P}$: Motion Plan
\State CLOSED $\gets \emptyset$, OPEN $\gets \emptyset$
\State $s_{start}.t \gets 0$, $ g(s_{start}) \gets 0$, $f(s_{start}) \gets h(s_{start})$
\State OPEN $\gets$ \Call{Insert}{$s_{start}$}
\While{OPEN is not empty or $N_{max}$ was not reached}
\State $s_i \gets $ OPEN.POP(), CLOSED $\gets$ \Call{Insert}{$s_i$}
\State SUCCESSORS $\gets$ \Call{GetNextStates}{$s_i, \xi, T$}
\For{$s_j \in$ SUCCESSORS}
\State $s_j \gets$ \Call{RL-Steer}{$s_i, s_j, s_{goal}, Obs(t)$, RL-PI, $D$}
\If{$s_j.tr$ is empty}
\State \textbf{continue}
\EndIf
\If{$s_{goal}.tr$ is not empty}
\State CLOSED $\gets$ \Call{Insert}{$s_j$}, CLOSED $\gets$ \Call{Insert}{$s_{goal}$}
\State \Return $\mathcal{P}$ = \Call{MotionPlan}{CLOSED}
\EndIf
\State $c(s_i, s_j) \gets $\Call{COST}{$s_j.tr$}
\If{$g(s_j)$ is better than any previous one}
\State OPEN $\gets$ \Call{Insert}{$s_j$}
\EndIf
\EndFor
\EndWhile
\State \Return $\mathcal{P} = \emptyset$
\end{algorithmic}
\end{algorithm}
\section{Experimental Evaluation}
We evaluated POLAMP (and compared it with the competitors) in two types of environments: with static obstacles and with both static and dynamic obstacles.
\subsection{Policy learning}
To train the policy we created a dataset of different tasks (start and goal states) in three types of environments: empty, static, and dynamic. Each of these environments had a size of $40m \times 40m$. Each task was generated randomly in such a way that the distance between the start and goal locations lay in the interval $[15, 30]$m; moreover, the difference in orientations did not exceed $\frac{\pi}{4}$. A task was considered solved if the agent reached the goal state with a Euclidean error $\epsilon_{\rho} \leq 0.3$ m and an orientation error $\epsilon_{\theta} \leq \pi/18$ rad without collisions.
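The random task generation described above can be sketched via rejection sampling (an illustrative helper; the exact sampling procedure in our implementation may differ):

```python
import math
import random

def sample_task(size=40.0, d_min=15.0, d_max=30.0, dtheta_max=math.pi / 4):
    """Sample a (start, goal) pair obeying the distance and
    orientation-difference constraints of the training dataset."""
    while True:  # rejection-sample positions until the distance constraint holds
        sx, sy = random.uniform(0, size), random.uniform(0, size)
        gx, gy = random.uniform(0, size), random.uniform(0, size)
        if d_min <= math.hypot(gx - sx, gy - sy) <= d_max:
            break
    s_theta = random.uniform(-math.pi, math.pi)
    g_theta = s_theta + random.uniform(-dtheta_max, dtheta_max)
    return (sx, sy, s_theta), (gx, gy, g_theta)
```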
To generate tasks in static environments we sampled $12$ fragments of size $40m \times 40m$ from the map depicted in Fig.~\ref{fig:environments} on the left (Map1), which has a size of $100m \times 60m$.
For training in dynamic environments we populated the static environments with one adversarial dynamic obstacle, i.e. the start state of the dynamic obstacle and its trajectory were generated semi-randomly in such a way that with very high probability it would intersect the path of the agent and force the latter to detour or wait. An illustration is given in Fig.~\ref{fig:learningEnvironment}.
Similarly to the training dataset, we created a separate set of validation tasks. We used them to measure the progress of training, i.e. we periodically evaluated the performance of the currently trained policy on the validation tasks. If the success rate (the fraction of solved tasks) was lower than 80\%, we continued learning; otherwise, we stopped.
\textbf{The effect of curriculum learning.} To qualitatively assess the effect of the proposed curriculum learning we trained two policies: the first (baseline), $\pi^{stand}$, was trained immediately in the dynamic environment, while the second one, $\pi^{curr}$, was trained with the proposed three-stage curriculum. The corresponding learning curves are shown in Fig.~\ref{fig:curriculum-training}. Evidently, the curriculum policy $\pi^{curr}$ starts to converge at approximately time step 300M with a reward of almost 30, while by that time the standard policy $\pi^{stand}$ only achieves a reward of 13 (and starts converging later). Thus, we confirm that the suggested curriculum leads to faster convergence, which is especially useful when resources, e.g. training time, are limited.
\begin{figure}[!t]
\includegraphics[width=0.5\textwidth]{Images/PPOCurr-reward.png}
\caption{
A comparison of learning curves between curriculum and standard learning for our policy. The dashed lines represent the intermediate trained policies in the respective environments.}
\label{fig:curriculum-training}
\end{figure}
\begin{table}[ht]
\begin{center}
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{p{0.10\linewidth}|p{0.10\linewidth}p{0.15\linewidth}p{0.10\linewidth}}
Agent & Dynamic & Orientation & SR \%\\
\hline \hline
$\pi^{st}_{w/o-\theta}$ & no & no & 99\\
$\pi^{st}_{w-\theta}$ & no & yes & 32\\
$\pi^{dyn}_{w/o-\theta}$ & yes & no & 28\\
$\pi^{dyn}_{w-\theta}$ & yes & yes & 22\\
\hline
\hline
\end{tabular}}
\caption{The results of the trained DDPG agent in different setups.}
\label{tab:DDPG-agents}
\end{center}
\end{table}
\textbf{Training the learnable baseline.} The learnable baseline with which we primarily aimed to compare was RL-RRT~\cite{Chiang2019RL-RRT}. Similarly to POLAMP, it is a combination of a global planner, RRT, with a learnable local planner based on the DDPG policy. To provide a fair comparison, we trained this policy on our dataset from scratch. However, even after prolonged training, its success rate on the validation tasks did not exceed 22\%.
To understand the reasons for this performance, we conducted additional training of three variants of this policy in simpler setups. The characteristics of those setups and the resultant success rates are shown in Table~\ref{tab:DDPG-agents}. Notably, the policy that ignored the orientation constraints and dynamic obstacles (the same setting as in RL-RRT), $\pi^{st}_{w/o-\theta}$, showed very good performance -- an almost 100\% success rate. This is in line with the original RL-RRT paper, whose authors considered exactly this setting. However, when the setup becomes more complex, the performance of the policy drops significantly. For example, the policy that ignores the dynamic obstacle, $\pi^{st}_{w-\theta}$, showed only a 32\% SR, and the one that ignores the goal orientation, $\pi^{dyn}_{w/o-\theta}$, only 28\%. Thus, we conclude that this type of policy performs acceptably only in basic setups.
The poor performance of RL-RRT under more complex environmental conditions and with a large number of dynamic obstacles is primarily due to the instability of the learning process of the DDPG algorithm in a stochastic environment. DDPG belongs to the class of off-policy methods: it saves experience from different episodes in a replay buffer (including episodes that led to collisions) and derives a deterministic policy from the value function. In POLAMP, we use the on-policy PPO method, where only the latest relevant trajectories are used to improve the policy, and at the later stages of training these are unlikely to contain collision situations. In a number of works~\cite{SAC, MPO, Muesli}, on-policy algorithms showed a significant advantage over off-policy ones in stochastic environments, due to their ability to produce a stochastic policy. The advantage of PPO over DDPG in our task is especially pronounced when using curriculum learning, where the replay buffer prevents DDPG from adapting to the new conditions of the next training stage.
\begin{figure*}[t]
\centering
\includegraphics[width=0.31\linewidth]{Images/DynSuccessRate.png}
\includegraphics[width=0.31\linewidth]{Images/DynTimeToReach.png}
\includegraphics[width=0.31\linewidth]{Images/DynSamples.png}
\caption{Planning results for the maps with dynamic obstacles (success rate, time to reach and number of samples). The legend for all algorithms is shown in the figure on the right.}
\label{fig:MetricsOnDynamicMaps}
\end{figure*}
\begin{table}[t]
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{p{0.05\linewidth}|p{0.25\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.14\linewidth}p{0.11\linewidth}}
Map & Planner & SR,\% & TTR,\% & Samples,\% & Time,\%\\
\hline \hline
\multirow{5}{4em}{1} & POLAMP-RRT & 100 & 100 & 100 & 100\\
& POLAMP-A* & 100 & 93 & 179 & 103\\
& RRT-ES & 90 & 120 & 3851 & 104\\
& RL-RRT & 40 & 96 & 2424 & 578\\
& SST* & 85 & 140 & 4124 & 111\\
\hline
\multirow{5}{4em}{2} & POLAMP-RRT & 100 & 100 & 100 & 100\\
& POLAMP-A* & 100 & 78 & 121 & 85\\
& RRT-ES & 62.5 & 143 & 1322 & 107\\
& RL-RRT & 4.5 & 153 & 677 & 308\\
& SST* & 82.5 & 123 & 1225 & 102\\
\hline
\multirow{5}{4em}{3} & POLAMP-RRT & 100 & 100 & 100 & 100\\
& POLAMP-A* & 100 & 84 & 143 & 89\\
& RRT-ES & 31 & 102 & 3426 & 98\\
& RL-RRT & 8 & 126 & 1532 & 407\\
& SST* & 58.8 & 141 & 3560 & 101\\
\hline
\hline
\end{tabular}
}
\caption{Results of the experiments on the static maps.}
\label{tab:resultsForStaticEnvironment}
\end{center}
\end{table}
\subsection{Evaluation In Static Environments}
We used three different maps, resembling parking lots, for the evaluation -- see Fig.~\ref{fig:environments}. Each map had a size of $100m \times 60m$ and was generated based on the dataset from~\cite{parkingDataSet}. Please note that only several fragments of Map1 were observed by the policy during training, while Map2 and Map3 were not used during training at all. For each map, we generated 20 different planning instances, i.e. start-goal location pairs. We generated them randomly and discarded the instances for which the straight-line distance between start and goal was less than $50$m (to avoid non-challenging tasks). Start/goal orientations were also chosen randomly as multiples of $90^\circ$. Each test was repeated 30 times. A test was counted as a failure if the robot was not able to reach the goal within the following tolerance: $\epsilon_{\rho} \leq 0.5$ m and $\epsilon_{\theta} \leq \pi / 18$.
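The success check with these position and orientation tolerances can be sketched as follows (an illustrative helper; the angle difference is wrapped to $[-\pi, \pi]$, and the tolerance defaults are those of the static-map tests):

```python
import math

def reached_goal(state, goal, eps_rho=0.5, eps_theta=math.pi / 18):
    """Success check: position within eps_rho, orientation within eps_theta.
    state/goal start with (x, y, theta, ...)."""
    d_rho = math.hypot(goal[0] - state[0], goal[1] - state[1])
    d_theta = abs((goal[2] - state[2] + math.pi) % (2 * math.pi) - math.pi)
    return d_rho <= eps_rho and d_theta <= eps_theta
```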
We compared POLAMP to the following algorithms: RRT that utilized a well-known non-learnable steering function based on the exponential stabilization~\cite{Astolfi1999ESPOSQ} (denoted RRT-ES), a kinodynamic motion planner SST*~\cite{li2016asymptotically}, RL-RRT~\cite{Chiang2019RL-RRT} -- a state-of-the-art planning method with a learnable local planner (details on learning this planner were provided above).
For the RRT part of the algorithms, we set the radius of the Extend method to $R_{ext} = 10$ m, the maximum distance from which the goal can be reached to $D = 30$ m, the number of nearest neighbors to $N_{nbs} = 5$, and the maximum number of RRT iterations to $N_{max} = 1500$ for POLAMP-RRT and RL-RRT and $N_{max} = 3000$ for RRT-ES and SST*. For POLAMP-A* we used the same parameters as for RRT. Additionally, we used 7 discrete steering angles spaced uniformly in $[\gamma_{min}, \gamma_{max}]$, a linear velocity $v=2$ m/s, and a time horizon $H = 3$ s to generate the lattice of motion primitives. All these values were chosen following a preliminary evaluation aimed at identifying suitable parameter values.
The metrics we used were: success rate (SR) -- how often the planner produces a path that reaches the goal, time to reach the goal (TTR), total number of samples and the runtime of the algorithm.
The results are presented in Table~\ref{tab:resultsForStaticEnvironment}. Notably, POLAMP has a much higher success rate than the other algorithms, reaching 100\% on every map. This shows that our learnable local planner indeed generalizes well to unseen conditions. Another observable trend is that POLAMP requires far fewer samples than RRT-ES to generate the motion plan. For example, on Map2 POLAMP requires 14x and 12x fewer samples than RRT-ES and SST*, respectively. This is because POLAMP performs collision avoidance during local steering while RRT-ES and SST* do not. In comparison with RL-RRT, POLAMP also requires fewer samples.
Also, it can be noted that RL-RRT has a higher success rate on Map1 than on the other maps, meaning that, unlike our policy, the policy of RL-RRT did not generalize well to the other two maps. We suggest that the main reason RL-RRT was not able to perform well on Map2 and Map3 is that the learnable component of that planner, i.e. DDPG, was not able to learn sufficiently well in our setup, i.e. when provided only with instances taken from Map1. In other words, the DDPG policy did not learn well on our dataset and overfitted to Map1. Meanwhile, PPO, which used the same amount of training data, was able to generalize to local pathfinding queries on Map2 and Map3 (unseen during training). Thus, we infer that PPO is more sample-efficient and, generally, should be preferred over DDPG in similar setups.
\subsection{Evaluation In Dynamic Environments}
For this series of experiments, we used Map2 and Map3, i.e. the maps that were not used for training. These maps were populated with a varying number of dynamic obstacles: from $0$ to $70$. Each dynamic obstacle is a rectangular car-like robot whose trajectory is generated by sampling a random control input $(a, \omega)$ every 10th time step. We generated 5 different trajectories for every dynamic obstacle. Two different start-goal pairs were chosen for each map. Each test was repeated 20 times for the sampling-based planners.
As before, we compared POLAMP to RL-RRT. We also compared to A*-CMP~\cite{lin2021search}, for which we used the same parameters as for POLAMP-A*. Another baseline was the combination of RRT with the seminal Dynamic Window Approach (DWA)~\cite{DWA} as a local planner (RRT-DWA). The latter is capable of avoiding moving obstacles and is widely used in robotics. For RRT-DWA we did not account for the final orientation, as DWA is not tailored to obey orientation constraints. Also, we compared to RRTX~\cite{Otte2016RRTX}, which used the Dubins steering function~\cite{DUBINS}. This algorithm is essentially a plan-execute-re-plan type of algorithm that re-uses the search tree while the robot is moving towards the goal. For better performance of RRTX, at each re-planning iteration we did not take into account moving obstacles located more than 20 m away from the robot. In the case of RRTX, SR means how often the robot can reach the goal without collisions while executing the path. Additionally, because RRTX needs many more samples during re-planning, we do not show this metric for RRTX.
The results are presented in Fig.~\ref{fig:MetricsOnDynamicMaps}. The first clear trend is that POLAMP-RRT, POLAMP-A* and A*-CMP maintain a high success rate ($>92\%$) in all cases until the number of dynamic obstacles goes beyond 50. However, POLAMP-A* and POLAMP-RRT require far fewer samples than A*-CMP to find a trajectory. This is because A*-CMP requires two groups of primitives: one group allows the robot to accelerate and move at a constant speed, while the other decelerates to avoid collisions with dynamic obstacles. Our algorithm requires only one group of primitives, because our policy is able to decelerate to avoid collisions with dynamic obstacles when necessary.
We also note that there is a trade-off between POLAMP-RRT, POLAMP-A* and A*-CMP. On the one hand, POLAMP-RRT is slightly better than the baseline A*-CMP and our POLAMP-A* in terms of success rate: thanks to the randomness of RRT, POLAMP-RRT explores more and can solve complicated tasks, unlike A*, which performs a systematic non-explorative search. On the other hand, A*-CMP produces the shortest-duration trajectories among these algorithms. The latter is because at each iteration A*-CMP uses the minimum and maximum acceleration to generate the neighbors, i.e. the algorithm makes abrupt changes in speed, whereas our learnable local steering changes the speed smoothly in the presence of obstacles. Our algorithm is also better than the other baselines, RL-RRT and RRT-DWA. Due to the poor performance of $\pi^{dyn}_{w-\theta}$, RL-RRT did not show good results, and RRT-DWA works well only when the number of obstacles is small.
POLAMP-RRT and POLAMP-A* are also better than RRTX. RRTX tries to re-plan the path online, but sometimes, when the current path is occluded by dynamic obstacles, the robot is forced to stop and stay in place until another solution is found. In these situations the robot can end up in a deadlock from which it is impossible to escape without a collision because of the moving obstacles. This problem arises because RRTX does not take into account the future trajectories of dynamic obstacles while planning. Besides, the TTR of RRTX is almost double that of the other algorithms, because RRTX makes abrupt path changes when the path is affected by the appearance of dynamic obstacles.
Overall, the conducted experiments show that our policy $\pi^{curr}$ generalizes well to both new environments and an increasing number of dynamic obstacles (recall that it was trained with only one moving obstacle). A combination of that policy with a search-based or sampling-based global planner works well in challenging environments with dozens of simultaneously moving obstacles. Some experimental videos are provided in the Multimedia Materials.
\section{Conclusion}
In this paper, we considered the problem of kinodynamic planning for a non-holonomic robot in environments with dynamic obstacles. We enhanced two classical planning methods, A* and RRT, with a learnable steering function that takes into account kinodynamic constraints and both static and moving obstacles. We designed a reward function and created a specific curriculum for learning the steering behaviors. The resulting algorithm, POLAMP, was evaluated empirically in both static and dynamic environments and was shown to outperform state-of-the-art baselines (both learnable and non-learnable).
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:Introduction}
The AdS/CFT duality has shed light on some important, and hardly accessible otherwise, features of strongly coupled, out-of-equilibrium quantum systems. Particularly, a very relevant field of application has been the Heavy Ion Collision program, where the collision of high energy shockwaves in AdS are used as a proxy for colliding ions in a particle accelerator and collider. Several studies have been carried out in this direction, from the study of particular aspects of the collision dynamics \cite{Nastase:2005rp, Janik:2005zt, Janik:2006gp, Kovchegov:2007pq, Grumiller:2008va, Lin:2009pn, Beuf:2009cx, Kovchegov:2009du, Gubser:2009sx, Romatschke:2013re, Bantilan:2018vjv} to simplified modelling of the full collision \cite{Kajantie:2008rx, Albacete:2009ji}. Moreover, fully numerical simulations of shockwave collisions were done in pure AdS$_5$ \cite{Chesler:2010bi, Casalderrey-Solana:2013aba, Chesler:2015wra, vanderSchee:2015rta, Grozdanov:2016zjj, Waeber:2019nqd, Muller:2020ziz, Waeber:2022tts, Waeber:2022vgf}, and in non-conformal quantum field theories \cite{Attems:2016tby, Attems:2017zam} including those with thermal phase transitions \cite{Attems:2018gou}. Additionally, a non-vanishing baryonic density was included in \cite{Casalderrey-Solana:2016xfq}.
While the numerical simulation of these gravitational phenomena is very much possible, general shockwave collisions are computationally expensive as they imply numerically solving Einstein's equations, ideally with a 4+1 dependence, with no symmetry assumptions. A first simplification to be done is to reduce the number of dynamical dimensions, such as the collision of planar shocks \cite{Chesler:2010bi}, which is justified by the high Lorentz contraction of rapidly moving ions. This reduces the problem to 2+1 dimensions and has proved to be useful to gain insight into the collision dynamics. Going a bit beyond, \cite{Waeber:2022tts} has recently proposed to perform an expansion in gradients transverse to the collision direction. This has revealed that even the first nontrivial order is able to capture a surprisingly high amount of physics involved in a full AdS$_5$ shockwave collision \cite{Chesler:2015wra}. By means of this approach, a collision with a more realistic model for the energy distribution inside the nuclei was first performed in \cite{Waeber:2022vgf}.
Here we propose to tackle the problem with a different kind of approximation by taking the limit of a very large number of spacetime dimensions, which is known as the large $D$ limit of General Relativity \cite{Emparan:2013moa} (see \cite{Emparan:2020inr} for a review). In this limit, the horizon dynamics gets decoupled from the region far from the horizon and, as a result, gravitational waves are decoupled from the dynamics of the horizon \cite{Emparan:2014cia, Emparan:2014aba}. The resulting effective description represents a major simplification \cite{Emparan:2015hwa} with respect to Einstein's equations at finite dimension $D$. Such a limit has been already useful in studying many properties of black holes \cite{Emparan:2013xia, Emparan:2013oza, Emparan:2015rva, Emparan:2019obu} and the effective theory has been widely used to understand the classical dynamics of horizons: instabilities, turbulent behavior and even violation of the Weak Cosmic Censorship conjecture in both asymptotically flat and AdS spacetimes \cite{Emparan:2014jca, Emparan:2015gva, Emparan:2016sjk, Rozali:2017bll, Andrade:2018nsz, Andrade:2018rcx, Andrade:2018yqu, Andrade:2018zeb, Andrade:2019edf, Andrade:2019rpn, Andrade:2020ilm, Emparan:2021ewh}.
In the present work we will focus on 4+1-dimensional holographic collisions of blobs in Einstein-Maxwell theory, which adds a non-vanishing baryonic number density to the boundary theory \cite{Emparan:2016sjk}. Recall that 4+1-dimensional holographic collisions are dual to 3+1-dimensional collisions in the boundary conformal quantum field theory. This means that only $4$ of the infinitely many spatial dimensions will be non-passive. The effect of the spectator dimensions is to dilute the gravitational field, strongly focusing it near the horizon. The resulting effective description is non-relativistic, has different transport coefficients, and the background horizon temperature differs from that of the blobs only by $1/D$ corrections. This last point, possibly the most relevant, implies that the dissipation of the initial blobs cannot be arbitrarily suppressed.
However, the simplicity of the description means that the computational cost of evolving its equations of motion is small. This allows us to scan over parameters in order to obtain a qualitative picture of the differences that can arise during the collisions. Questions about the importance of the baryonic density and the dependence of the results on the impact parameter can therefore be addressed. Additionally, we are able to reach later times during collisions than in the past \cite{Chesler:2015wra}.
In particular, we are interested in studying the production of entropy during the collisions. In the past, linear growth in time of the total entropy was observed in the context of AdS$_5$ shock collisions, see e.g. \cite{Grozdanov:2016zjj,Muller:2020ziz}. We ask whether such behavior is captured by the large $D$ effective theory and whether, as claimed in \cite{Muller:2020ziz}, the growth rate can be linked to Lyapunov exponents. We will see that by scanning over different initial data setups we can establish the sensitivity of the growth rate to such details, in order to determine whether there is a connection with chaotic behavior.
The paper is organized as follows: In Section \ref{sec:Large-D} we introduce the large $D$ effective equations of motion and the setup. We then give an overview of the collisions in Section \ref{sec:Collisions}, including how the evolution changes with the impact parameter, and to what extent the baryonic charge plays a role. We then focus on the entropy growth in Section \ref{sec:Entropy}, for both vanishing and non-vanishing charge density. We finally conclude in Section \ref{sec:Discussion}.
\section{The large $D$ effective equations}
\label{sec:Large-D}
Let us consider a gravitational theory dual to a conformal theory with the addition of a $U(1)$ global symmetry playing the role of baryon number. The action of such a theory reads
\begin{equation}
I = \int d^{D}x \sqrt{-g}\left(R-\frac{1}{4}F^2-2\Lambda\right),
\end{equation}
with $\Lambda=-n(n-1)/2$ the cosmological constant, $n=D-1$, and $F_{\mu\nu}$ the Maxwell field strength tensor. Throughout the paper we will work in units in which $16\pi G =\Omega_{n+1}$, the area of the $(n+1)$-dimensional unit sphere.
We now follow the same steps as in \cite{Emparan:2016sjk} to obtain the large $D$ effective equations. We start by observing that, as $n$ increases, the speed of sound scales as $c_s=1/\sqrt{n-1}$, so the theory becomes non-relativistic. In order to capture the relevant physics we need to focus on velocities and distances of $\mathcal{O}(1/\sqrt{n})$, so we rescale our spatial coordinates and $g_{it}$ by $1/\sqrt{n}$ in order to work with order-one quantities\footnote{We write physical quantities in boldface and rescaled ones in regular typeface, e.g. $\boldsymbol{v^i} = v^i/\sqrt{n}$ and $\boldsymbol{x}^i=x^i/\sqrt{n}$.}. A general AdS black brane geometry in ingoing Eddington-Finkelstein coordinates reads, under these rescalings,
\begin{equation}
ds^2 = r^2\left(-Adt^2-\frac{2}{n}C_idtdx^i+\frac{1}{n}G_{ij}dx^idx^j\right)-2dtdr,
\end{equation}
where $r$ is the holographic coordinate and $x^i$ are the rescaled, order-one coordinates along the horizon, with $i=1,2,\dots, n-1$. The factors of $1/n$ result from the rescalings. Furthermore, if we want the gauge field to backreact on the metric at leading order in $1/n$ we need to take $A_t = \mathcal{O}(1)$ and $A_i = \mathcal{O}(1/n)$. Upon substitution in Einstein's equations, one can solve them as a series expansion in $1/n$. To leading order, the result is,
\begin{equation}
\begin{aligned}
A & = 1- \rho(t,\vec x)\left(\frac{r_0}{r}\right)^n+q(t,\vec x)^2\left(\frac{r_0}{r}\right)^{2n},\\
C_i & = p_i(t,\vec x)\left(\frac{r_0}{r}\right)^n\left(1-q(t,\vec x)^2\left(\frac{r_0}{r}\right)^{2n}\right),\\
G_{ij} & = \delta_{ij}+\frac{1}{n}\left[\frac{C_ip_j(t,\vec x)}{\rho(t,\vec x)}-\log\left(1-\rho_-(t,\vec x)\left(\frac{r_0}{r}\right)^n\right)\partial_{(i}\left(\frac{p_{j)}(t,\vec x)}{\rho(t,\vec x)}\right)\right],\\
\end{aligned}
\end{equation}
where $r_0$ indicates the position of the unperturbed, neutral horizon (corresponding to $\rho=1$ and $q=p_i=0$). The remaining variables, $\rho=\boldsymbol{\rho}/n$, $q=\boldsymbol{q}/n$ and $p^i=\boldsymbol{p}^i/n$ are the mass, charge and momentum densities respectively. From now on, we will set $r_0^n=1$ which fixes the units in which we will work. The quantities $\rho_{\pm}$ are defined as,
\begin{equation}
\rho_{\pm} = \frac{1}{2}\left(\rho\pm\sqrt{\rho^2-2q^2}\right),
\end{equation}
and the horizon is located at $r^n = \rho_+$. To next order in the $1/n$ expansion, one obtains the equations of motion for the new variables,
\begin{equation}
\begin{aligned}
\partial_t\rho-\partial_i\partial^i\rho+\partial_ip^i & = 0,\\
\partial_tq-\partial_i\partial^iq+\partial_i\left(\frac{p^iq}{\rho}\right) & = 0,\\
\partial_tp_i-\partial_j\partial^jp_i+\partial_i\rho+\partial^j\left[\frac{p_ip_j}{\rho}+\rho_-\left(\partial_i\frac{p_j}{\rho}+\partial_j\frac{p_i}{\rho}\right)\right] & = 0.
\end{aligned}
\label{eq:Large-D-effecttive-equations}
\end{equation}
Notice that, in order for $\rho_\pm$ to be real, we need $\rho \geq \sqrt{2}q$, with the extremal limit saturating the inequality. At extremality the temperature vanishes ($T=0$) and the large gradients normal to the horizon, on which the effective description relies, disappear, so we expect the effective theory to break down and the equations to cease being valid there. The ratio $\sqrt{2}q/\rho$, ranging from $0$ to $1$, will later be used as a measure of extremality. We can obtain the thermodynamic quantities in the usual manner, as
\begin{equation}
s = \boldsymbol{s} = 4\pi\rho_+, \quad T = \frac{\boldsymbol{T}}{n} = \frac{\rho_+-\rho_-}{4\pi\rho_+}, \quad \mu = \boldsymbol{\mu}= \frac{q}{\rho_+}.
\label{eq:Thermo}
\end{equation}
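For illustration, the right-hand side of the effective equations \eqref{eq:Large-D-effecttive-equations} can be sketched numerically with pseudospectral (\texttt{FFT}) derivatives. The following is a minimal Python sketch restricted to one horizon direction; the function names and the 1D restriction are our own illustrative choices, not the paper's actual code.

```python
import numpy as np

def fft_deriv(f, L):
    """Spectral derivative of a real periodic field f on a domain of length L."""
    N = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

def rhs(rho, q, p, L):
    """Right-hand side d/dt (rho, q, p) of the effective equations in 1D."""
    d = lambda f: fft_deriv(f, L)
    rho_m = 0.5 * (rho - np.sqrt(rho**2 - 2.0 * q**2))  # rho_-
    drho_dt = d(d(rho)) - d(p)
    dq_dt = d(d(q)) - d(p * q / rho)
    # in 1D the symmetrized gradient term reduces to 2 rho_- (p/rho)'
    dp_dt = d(d(p)) - d(rho) - d(p * p / rho + 2.0 * rho_m * d(p / rho))
    return drho_dt, dq_dt, dp_dt
```

Since every term is a total derivative, the spectral sum of each right-hand side vanishes identically, reflecting the conservation of total mass, charge and momentum on a periodic domain.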
For illustration purposes, a more familiar set of equations can be obtained by performing the change of variables $p_i = \partial_i\rho + \rho v_i$, by which the equations of motion take the form
\begin{equation}
\begin{aligned}
\partial_t\rho+\partial_i\left(\rho v^i\right) &= 0,\\
\partial_tq+\partial_i j^i &= 0,\\
\partial_t(\rho v^i)+\partial_j\left(\rho v^iv^j+\tau^{ij}\right) & = 0,
\label{eq:Continuity}
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
j_i & = qv^i-\rho\partial_i\left(\frac{q}{\rho}\right),\\
\tau_{ij} & = \rho\delta_{ij}-2\rho_+\partial_{(i}v_{j)}-\left(\rho_+-\rho_-\right)\partial_i\partial_j\log\rho.
\end{aligned}
\label{eq:constitutive-relations}
\end{equation}
These are simply continuity equations for the mass, charge and momentum of a compressible fluid up to first order in derivatives, together with the addition of a single second-order term. The transport coefficients that follow from them are,
\begin{equation}
\mathcal{P} = \boldsymbol{\mathcal{P}} = \rho,\quad \eta =\boldsymbol{\eta} = \frac{s}{4\pi},\quad \zeta = 0, \quad \kappa_q = \frac{\boldsymbol{\kappa}_q}{n} = \frac{Ts}{4\pi}\left(\frac{Ts}{\rho}\right)^2.
\end{equation}
These are the shear viscosity $\eta$, bulk viscosity $\zeta$ and heat conductivity $\kappa_q$. In terms of our rescaled coordinates we have $c_s=1$, which explains the relation between the pressure and the (rescaled) mass density.
As opposed to a hydrodynamic theory, these equations capture all the physics in the regime $\mathbf{k}/\mathbf{T}\sim 1/\sqrt{n}$ at once, rather than order by order in a series expansion. In other words, this theory corresponds to a hydrodynamic theory in which all transport coefficients but a handful identically vanish.
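As a quick consistency check of this statement, one can linearize the effective equations around the uniform neutral brane ($\rho=1$, $q=p_i=0$) with perturbations $\propto e^{ikx-i\omega t}$. The momentum and mass equations then give $(-i\omega+k^2)^2=-k^2$, so the sound channel obeys
\begin{equation*}
\omega = \pm k - i k^2 ,
\end{equation*}
while the charge perturbation decouples and is purely diffusive, $\omega = -ik^2$. These modes are exact in $k$ within the effective theory, rather than truncated at some order in gradients.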
Furthermore, we require some notion of out-of-equilibrium entropy in order to study its time evolution during the collisions. In gravity, a direct candidate is simply the apparent horizon area. In the charged case, the entropy in \eqref{eq:Thermo} does indeed satisfy the second law,
\begin{equation}
\partial_t s + \partial_i\left(s v^i+\kappa_q\frac{\mu}{T}\partial^i\left(\frac{\mu}{T}\right)\right)\geq 0,
\end{equation}
which originates purely from the diffusion of charge. Additionally, one can identify a notion of entropy density at first order in $1/n$ which can be written solely in terms of leading-order quantities \cite{Andrade:2020ilm},
\begin{equation}
s_1 = -4\pi\left(\frac{1}{2}\rho v_iv^i+\frac{1}{2\rho}\partial_i\rho\partial^i\rho+\rho\log\rho\right).
\label{eq:entropy_neutral}
\end{equation}
For equilibrium configurations, only the logarithmic term survives and $\rho$ is constant, so this entropy scales with the volume, as it should. The second law that it satisfies is,
\begin{equation}
\partial_t s_1 + \partial_i\left(s_1 v^i-4\pi\left(v_j\tau^{ij}\vert_{q=0}+\partial_j\rho\partial^jv^i\right)\right)\geq 0,
\end{equation}
where $\tau^{ij}\vert_{q=0}$ is the shear stress tensor \eqref{eq:constitutive-relations} with the charge density set to zero. Entropy generation in this case is associated with viscous dissipation. The total entropy is then given by the combination
\begin{equation}
s_\text{tot} = s+\frac{1}{n}s_1+\mathcal{O}\left(\frac{1}{n^2}\right),
\label{eq:total_entropy}
\end{equation}
which implies that viscous dissipation is $1/n$ suppressed with respect to charge diffusion. In the neutral case, $s=4\pi\rho$ becomes a constant and all entropy variations come from viscous dissipation, $s_1$.
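For concreteness, the entropy bookkeeping above can be evaluated on the numerical fields as follows. This is a 1D illustrative sketch with our own function names, using finite-difference gradients.

```python
import numpy as np

def entropy_leading(rho, q):
    """Leading-order entropy density s = 4 pi rho_+, from eq. (Thermo)."""
    rho_p = 0.5 * (rho + np.sqrt(rho**2 - 2.0 * q**2))
    return 4.0 * np.pi * rho_p

def entropy_first_order(rho, v, dx):
    """Neutral first-order entropy density s_1, eq. (entropy_neutral), in 1D."""
    drho = np.gradient(rho, dx)
    return -4.0 * np.pi * (0.5 * rho * v**2
                           + 0.5 * drho**2 / rho
                           + rho * np.log(rho))

def total_entropy(rho, q, v, dx, n):
    """Volume-integrated s_tot = s + s_1/n + O(1/n^2) on a uniform 1D grid."""
    s = entropy_leading(rho, q) + entropy_first_order(rho, v, dx) / n
    return np.sum(s) * dx
```

For a uniform neutral equilibrium state ($\rho=1$, $q=v=0$) the first-order piece vanishes and the total entropy reduces to $4\pi$ times the volume, as expected.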
From now on we will focus on equations \eqref{eq:Large-D-effecttive-equations}, as they turn out to be better behaved numerically. In order to obtain a 4+1-dimensional collision\footnote{Notice that we are actually solving a 3+1-dimensional PDE system, because the radial direction has already been integrated out in deriving equations \eqref{eq:Large-D-effecttive-equations}.}, we choose to work with nontrivial dependence along three of the $n-1$ horizon directions. The whole set of simulations was performed using the code \texttt{Chihuahua} \cite{Chihuahua-2022}, written in the \texttt{Julia} language. It can use both pseudospectral \texttt{FFT} differentiation and finite differences for the spatial derivatives, with a fixed-time-step \texttt{RK4} evolution algorithm. In all simulations in this paper we use single-domain \texttt{FFT} differentiation. We chose $L_x=L_y=L_z=100$, with $N_x=N_y=50$ and $N_z=150$, and the time step is $\Delta t = 0.1$ in all cases. Running each simulation up to $t_{end} = 30$ took about 20 minutes on a single Intel Core i7-10750H CPU at 2.60GHz, a short time compared to typical AdS$_5$ shock collision simulations, allowing for a scan over different scenarios. The reduced computational cost also allowed us to follow the collisions to later times than in previous 4+1-dimensional collisions. Tests of the code are provided in Appendices \ref{app:QNM} and \ref{app:convergence}, using pseudospectral differentiation.
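The fixed-time-step \texttt{RK4} evolution mentioned above is the classical fourth-order Runge-Kutta scheme. A generic driver can be sketched as follows; this is an illustrative Python version, not the actual \texttt{Julia} code of \texttt{Chihuahua}.

```python
import numpy as np

def rk4_step(y, dt, rhs):
    """One classical fixed-step RK4 update of the state array y."""
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def evolve(y0, dt, t_end, rhs):
    """Evolve y0 with fixed time step dt up to t_end."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = rk4_step(y, dt, rhs)
        t += dt
    return y
```

The same stepper works whether `rhs` evaluates derivatives spectrally or by finite differences, since the spatial discretization enters only through the right-hand-side function.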
\section{Collisions}
\label{sec:Collisions}
In this section we present the result of colliding two Gaussian blobs of mass that follow the equations of motion \eqref{eq:Large-D-effecttive-equations}. The initial data for the mass, charge and momentum density profiles is given by,
\begin{equation}
\begin{aligned}
\rho(0,\vec x) = & 1 + \delta\rho \left\{\exp\left[-\frac{(x-\delta x)^2+y^2}{\sigma_{T1}}-\frac{(z-\delta z)^2}{\sigma_{L1}}\right]+\exp\left[-\frac{(x+\delta x)^2+y^2}{\sigma_{T2}}-\frac{(z+\delta z)^2}{\sigma_{L2}}\right]\right\},\\
q(0,\vec x) = & q_0+ \delta q \left\{\exp\left[-\frac{(x-\delta x)^2+y^2}{\sigma_{T1}}-\frac{(z-\delta z)^2}{\sigma_{L1}}\right]+\exp\left[-\frac{(x+\delta x)^2+y^2}{\sigma_{T2}}-\frac{(z+\delta z)^2}{\sigma_{L2}}\right]\right\},\\
p_z(0,\vec x) = & \delta p \left\{-\exp\left[-\frac{(x-\delta x)^2+y^2}{\sigma_{T1}}-\frac{(z-\delta z)^2}{\sigma_{L1}}\right]+\exp\left[-\frac{(x+\delta x)^2+y^2}{\sigma_{T2}}-\frac{(z+\delta z)^2}{\sigma_{L2}}\right]\right\},\\
p_x(0,\vec x) = & p_y(0,\vec x) = 0,
\end{aligned}
\end{equation}
where the axes are oriented such that the centers of the blobs lie in the $(x,z)$-plane and the collision direction is $z$. $\sigma_L$ is the squared width of the Gaussian along the collision direction, while $\sigma_T$ is the squared width in the transverse directions.
All collisions were performed with $\delta \rho = 20$ and $\delta p = 60$; the remaining parameters were varied. We set the background charge density to $q_0=0$ in all cases but one, in which we use $q_0 > 0$ in order to allow for charge propagation via the sound mode. We considered several kinds of initial data, as listed in Table \ref{tab:initial_data}.
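The two-blob initial data above can be assembled as in the following sketch (our own function names; here evaluated on arbitrary coordinate arrays, whereas the actual runs use the 3D grid described in Section \ref{sec:Large-D}).

```python
import numpy as np

def blob(X, Y, Z, x0, z0, sig_T, sig_L):
    """Gaussian blob centered at (x0, 0, z0), squared widths sig_T, sig_L."""
    return np.exp(-((X - x0)**2 + Y**2) / sig_T - (Z - z0)**2 / sig_L)

def initial_data(X, Y, Z, dx, dz, sig1, sig2,
                 drho=20.0, dp=60.0, q0=0.0, dq=0.0):
    """Two-blob initial data; sig1, sig2 are (sigma_L, sigma_T) per blob."""
    b1 = blob(X, Y, Z, +dx, +dz, sig1[1], sig1[0])
    b2 = blob(X, Y, Z, -dx, -dz, sig2[1], sig2[0])
    rho = 1.0 + drho * (b1 + b2)
    q = q0 + dq * (b1 + b2)
    pz = dp * (-b1 + b2)   # blobs move toward each other along z
    return rho, q, pz
```

The sign pattern in `pz` gives the blob at $(+\delta x, +\delta z)$ a momentum toward $-z$ and the blob at $(-\delta x, -\delta z)$ a momentum toward $+z$, so they approach head-on up to the transverse offset $\delta x$.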
\begin{table}[thpb]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{7}{|c|}{Initial data configurations} \\ \hline \hline
& $(\sigma_L, \sigma_T)_1$ & $(\sigma_L, \sigma_T)_2$ & $q_0$ & $\delta q$ & $\delta x$ & $\delta z$ \\ \hline \hline
{\it spherical} & (10, 10) & (10, 10) & 0 & (0, 6, 10) & 0 & 10 \\ \hline
{\it neutral oblate} & (10, 50) & (10, 50) & 0 & 0 & (0, 2, 5, 8, 10) & 10 \\ \hline
{\it charged oblate} & (10, 50) & (10, 50) & 0 & 6 & (0, 2, 5, 8, 10) & 10 \\ \hline
{\it unequal} & (4, 4) & (10, 50) & 0 & 6 & 0 & 10 \\ \hline
{\it quasi-spherical} & (30, 50/$\sqrt{3}$) & (30, 50/$\sqrt{3}$) & 0 & 6 & 2 & 15 \\ \hline
{\it charged background} & (10, 50) & (10, 50) & 0.3 & 6 & 2 & 10 \\ \hline
\end{tabular}
\caption{Summary of parameters of the initial data configurations. All of them share $\delta\rho=20$, $\delta p = 60$. Throughout the text we will refer to the different collisions by the names given in this table. \label{tab:initial_data}}
\end{table}
The values of $\sigma_L$ and $\sigma_T$ in the {\it quasi-spherical} blobs are chosen so that the total mass is the same as in the {\it charged oblate} case, which will be relevant when studying entropy growth.
Contrary to what happens at finite $D$, we cannot parametrically suppress the background horizon with respect to the amplitude of the Gaussian blobs. Therefore, the dissipation of the blobs cannot be reduced arbitrarily. Wider Gaussians (at fixed amplitude) lead to longer lived blobs, but require a larger domain, which increases the computational cost. We found that the values of $(\sigma_L,\sigma_T)$ chosen for {\it oblates} in Table \ref{tab:initial_data} are a good compromise.
\subsection{Overview of collisions}
In Figure~\ref{fig:snapshots_asymmetric} we show snapshots of the collision of {\it charged oblate} blobs with $\delta x = 2$. Initially, the blobs approach each other until they collide at $t = t_{c} \approx 3.11$, forming a blob of mass that is highly compressed along the collision direction, $z$. We define the collision time, $t_c$, as the time at which the mass density reaches its maximum value. After the collision, the resulting blob of mass expands and the mass density at the collision site decreases. The shape of the expanding blob is far from symmetric: as shown in Figure \ref{fig:snapshots_asymmetric}, the mass density flows outwards in a roughly elliptical shape, with the minor axis of the ellipse along the line joining the initial blobs. Up to $t \approx 10$, the mass and charge profiles follow similar evolution patterns. At later times, however, the charge density simply follows a diffusion pattern, while the mass density continues to propagate away from the collision site, leaving a depleted region at its center. The reason for this difference is that the background metric is neutral, so charge propagation modes cannot be excited and only diffusion takes place. The mass density, on the other hand, both diffuses and propagates\footnote{These notions apply strictly in the linear regime of small perturbations, but the nonlinear physics involved in the collisions seems to retain some of these features.}. After $t_c$, a stage of high dissipation takes place: while the mass and charge densities have barely decreased by $t\approx 4$, they both fall by an order of magnitude by $t \approx 9$.
Figure \ref{fig:snapshots_bckg_charge} shows the charge density in a {\it charged background} collision. In this case, charge can both diffuse and propagate on the background horizon, so its evolution closely resembles that of the mass density in Figure \ref{fig:snapshots_asymmetric}. We see that the charge blob resulting from the collision ends up fragmenting in a similar fashion to the mass density.
\begin{figure}[thpb]
\centerline{\includegraphics[width=\textwidth]{images/snapshots_mass_charged_oblate_dx_2.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\centerline{\includegraphics[width=\textwidth]{images/snapshots_charge_charged_oblate_dx_2.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\centerline{\includegraphics[width=\textwidth]{images/velocity_flow__charged_oblate_dx_2.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\caption{Mass density (top row), charge density (middle row) and fluid velocity (bottom row) at the collision plane for a 4+1-dimensional {\it charged oblate} blob collision with $\delta x = 2$ from Table \ref{tab:initial_data}. The snapshots correspond to $t = (0,4,9,15)$.}
\label{fig:snapshots_asymmetric}
\end{figure}
\begin{figure}[thpb]
\centerline{\includegraphics[width=\textwidth]{images/snapshots_charge_charged_background.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\caption{Charge density at the collision plane at $t=(0,4,9,15)$ for the {\it charged background} collision from Table \ref{tab:initial_data}.}
\label{fig:snapshots_bckg_charge}
\end{figure}
In Figure \ref{fig:snapshots_symmetric} we show a head-on ($\delta x = 0$) collision of {\it spherical} blobs. As one might expect, dissipation during the early stage, in which the blobs approach each other, is higher in this case due to their smaller size. In fact, by $t \approx 4$ the mass density has dropped to half of its original value. As in the {\it charged oblate} case, the blob resulting from the collision is elliptical in shape, and it ends up fragmenting into two lumps of mass that travel along the major axis of the ellipse.
\begin{figure}[thpb]
\centerline{\includegraphics[width=\textwidth]{images/snapshots_mass_spherical_dq_0.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\caption{Mass density at the collision plane at $t=(0,4,8,12)$. The collision corresponds to the head-on {\it spherical} blob collision from Table \ref{tab:initial_data}.}
\label{fig:snapshots_symmetric}
\end{figure}
Finally, Figure \ref{fig:snapshots_unequal} shows snapshots of the {\it unequal} blob collision. The result is completely different from the previous ones: it resembles the propagation of a single blob, perturbed by the collision with a smaller one. As time passes, the blob diffuses, leaving the usual depletion of mass behind it, and the shape of its front relaxes to that of a freely moving blob.
\begin{figure}[thpb]
\centerline{\includegraphics[width=\textwidth]{images/snapshots_mass_unequal.pdf}
\put(-435,55){\mbox{{$z$}}}
\put(-376,-7){\mbox{{$x$}}}
\put(-271,-7){\mbox{{$x$}}}
\put(-163,-7){\mbox{{$x$}}}
\put(-55,-7){\mbox{{$x$}}}
}
\caption{Mass density at the collision plane at $t=(0,3,7,13)$. The collision corresponds to the {\it unequal} blob collision from Table \ref{tab:initial_data}.}
\label{fig:snapshots_unequal}
\end{figure}
\subsection{Impact parameter dependence}
In order to better assess the differences that arise from different impact parameters, in Figure \ref{fig:dx_dependence} we show the results of all {\it charged oblate} collisions. In the top panels we observe that, as $\delta x$ grows, the maximum mass and charge densities reached during the collision decrease. This is to be expected, since a bigger separation along the $x$-axis reduces the effective overlap between the blobs. In other words, the collision is less violent as the impact parameter grows. The collision time $t_c$ is almost insensitive to the value of $\delta x$.
It is instructive to compare the mass and charge profiles at the origin, normalized by their maxima, as a function of time. The result is shown in the bottom panels of Figure \ref{fig:dx_dependence}. All the curves with $\delta x\leq 8$ lie very close to each other, while major discrepancies appear for the largest impact parameter, $\delta x = 10$. Therefore, collisions whose impact parameter does not exceed the blob transverse width ($\sqrt{\sigma_T}$) can be approximately seen as rescaled versions of one another.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.5\textwidth]{images/dx_dependence_mass.pdf}
\includegraphics[width=0.5\textwidth]{images/dx_dependence_charge.pdf}
}
\centerline{
\includegraphics[width=0.5\textwidth]{images/dx_dependence_mass_rescaled.pdf}
\includegraphics[width=0.5\textwidth]{images/dx_dependence_charge_rescaled.pdf}
}
\caption{Top: Mass and charge densities at the origin as a function of time for \textit{charged oblate} collisions with different impact parameters. Bottom: Mass and charge densities normalized by their maxima, as a function of the time elapsed since the collision.}
\label{fig:dx_dependence}
\end{figure}
\subsection{Isotropization and hydrodynamization}
Due to the large flow of mass in the direction of collision, large anisotropies are present. As the product of the collision approaches equilibrium, two events take place during the evolution: isotropization and hydrodynamization. The former is simply a consequence of the fact that equilibrium states are isotropic. The latter comes from the expectation that interacting systems whose departure from equilibrium is small can be well characterized by hydrodynamics.
An easy way of testing to what extent a system has isotropized and hydrodynamized is by looking into the evolution of the three different pressures in the system. By three pressures we mean the diagonal components, $\mathcal{P}_x=\tau_{xx}$, $\mathcal{P}_y=\tau_{yy}$ and $\mathcal{P}_z=\tau_{zz}$, of the stress tensor in \eqref{eq:constitutive-relations}. Discrepancies among these pressures are a sign of anisotropy. Also, once the system has hydrodynamized, the hydrodynamic constitutive relations should be a good approximation of $\tau_{ii}$. The way in which we will define hydrodynamization is by comparing the full pressures in the system ($\mathcal{P}_i$) to the viscous hydrodynamic ones ($\mathcal{P}_i^V$). We define $\mathcal{P}_i^V$ as the expressions for $\tau_{ii}$ in \eqref{eq:constitutive-relations} without the inclusion of second order derivative terms,
\begin{equation}
\begin{aligned}
\mathcal{P}_i & = \rho - 2 \rho_+ \, \partial_i v_i - \left(\rho_+ - \rho_-\right)\partial_i^2 \log\rho,\\
\mathcal{P}_i^V & = \rho - 2\rho_+ \, \partial_i v_i.
\end{aligned}
\label{eq:pressures}
\end{equation}
The large $D$ effective theory can be seen as a hydrodynamic theory up to second order in gradients. Hence, the theory is always in a hydrodynamic regime, and our hydrodynamization time is rather a measure of the time required for second-order gradients to become negligible. Let us emphasize that we are not evolving viscous hydrodynamics in time; we instead use the data of the full solutions to evaluate $\mathcal{P}_i^V$ and then compare it point-wise to $\mathcal{P}_i$. The charge current has no second-order derivative terms, so it is always well captured by first-order hydrodynamics.
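Point-wise, this comparison amounts to evaluating both lines of \eqref{eq:pressures} on the evolved fields. Restricted to one direction, it reads as follows (an illustrative sketch with our own naming, using finite-difference gradients).

```python
import numpy as np

def pressures(rho, q, v, dx):
    """Full pressure P_i and first-order hydrodynamic pressure P_i^V
    along one direction, from eq. (pressures)."""
    root = np.sqrt(rho**2 - 2.0 * q**2)
    rho_p = 0.5 * (rho + root)
    rho_m = 0.5 * (rho - root)
    dv = np.gradient(v, dx)
    d2logrho = np.gradient(np.gradient(np.log(rho), dx), dx)
    P_V = rho - 2.0 * rho_p * dv          # first-order (viscous) pressure
    P = P_V - (rho_p - rho_m) * d2logrho  # adds the second-order term
    return P, P_V
```

For a uniform state both pressures reduce to $\rho$, and the hydrodynamization diagnostic is the point-wise difference $P - P_V$, which isolates the second-order gradient contribution.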
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.5\textwidth]{images/hydrodynamization_charged_oblate_dx_2.pdf}
\includegraphics[width=0.5\textwidth]{images/hydrodynamization_charged_oblate_dx_8.pdf}
}
\centerline{
\includegraphics[width=0.5\textwidth]{images/hydrodynamization_unequal.pdf}
}
\caption{Pressures along the three active directions as a function of time at the spatial origin. The solid lines represent the full pressures ($\mathcal{P}_i$), while the dashed lines show the first-order hydrodynamic pressures ($\mathcal{P}_i^V$). Top: \textit{Charged oblate} blob collisions with impact parameters $\delta x = 2$ and $\delta x = 8$ (left and right panels, respectively). Bottom: \textit{Unequal} blob collision.}
\label{fig:pressures}
\end{figure}
In Figure \ref{fig:pressures} we show the pressures as a function of time at the collision spot, for the {\it charged oblate} blob collisions (with $\delta x = 2, 8$) in the top panels and for the {\it unequal} blob collision in the bottom one. Solid lines refer to $\mathcal{P}_i$, while dashed lines correspond to the viscous hydrodynamic pressures, $\mathcal{P}_i^V$. We focus on the collision point, where the largest gradients are expected.
The top panels of Figure \ref{fig:pressures} illustrate the large anisotropies reached during the collision, with a ratio of longitudinal to transverse pressures that can slightly exceed 3. Viscous hydrodynamics also fails to describe the system at times around $t_c$, especially along the collision axis. For smaller impact parameters, a greater anisotropy is produced and viscous hydrodynamics departs further from the actual value of $\mathcal{P}_i$. Furthermore, the bigger $\delta x$ gets, the more the $x \longleftrightarrow y$ symmetry is broken, so the difference between $\mathcal{P}_x$ and $\mathcal{P}_y$ is accentuated. For $\delta x$ larger than the blob width, the anisotropy in the $x$-$y$ plane will presumably decrease.
The results suggest that, after $t \approx 7$, the system becomes isotropic and well described by viscous hydrodynamics at roughly the same time. This differs from the findings in finite $D$ collisions, where hydrodynamization takes place earlier than isotropization \cite{Chesler:2010bi,Casalderrey-Solana:2013aba, Chesler:2015wra}. Since these conclusions extend to the rest of the collisions in Table \ref{tab:initial_data}, we attribute the discrepancy to the large $D$ limit and not to the addition of charge.
The results for the {\it unequal} blob collision, shown in the bottom panel of Figure \ref{fig:pressures}, differ from the rest. Throughout the whole process, viscous hydrodynamics approximates the physics better, and it overestimates the pressure (in contrast to the rest of the collisions). The level of anisotropy reached is smaller than for equal blob collisions.
\subsection{Charge influence}
In this subsection we compare the results of collisions with different values for the charge of the blobs to study its influence on the dynamics. In Figure \ref{fig:charge_influence_mass_charge_extr} we display the mass, charge density, charge density normalized by the blob charge $\delta q$ and the ratio $\sqrt{2}q/\rho$ at the spatial origin for the {\it spherical} blob collisions, with $\delta q = (0, 6, 10)$.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.5\textwidth]{images/charge_influence_mass.pdf}
\includegraphics[width=0.5\textwidth]{images/charge_influence_charge.pdf}
}
\centerline{
\includegraphics[width=0.5\textwidth]{images/charge_influence_charge_dq.pdf}
\includegraphics[width=0.5\textwidth]{images/charge_influence_extremality.pdf}
}
\caption{Mass, charge density, charge density normalized by $\delta q$ and the ratio $\sqrt{2}q/\rho$ at the spatial origin for the head-on \textit{spherical} blob collisions of Table \ref{tab:initial_data}.}
\label{fig:charge_influence_mass_charge_extr}
\end{figure}
The behavior of the charge density is qualitatively similar in all cases, and approximately proportional to $\delta q$; see the bottom-left panel of Figure \ref{fig:charge_influence_mass_charge_extr}. Surprisingly, even though the maximum charge density is around a third of the maximum mass density ($\sqrt{2}q/\rho$ reaches up to $1/2$), the effect of charge on the mass density is small. The biggest difference occurs at the collision time ($t_c$), where larger values of the mass density are reached for larger charge. By $\Delta t\approx 2$ after the collision time, the mass density follows the same evolution as in the neutral collision. The time it takes for the mass density to relax to the neutral collision profile is shorter than the isotropization and hydrodynamization timescales.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.5\textwidth]{images/charge_influence_px.pdf}
\includegraphics[width=0.5\textwidth]{images/charge_influence_py.pdf}
}
\centerline{
\includegraphics[width=0.5\textwidth]{images/charge_influence_pz.pdf}
}
\caption{Pressures along the three active directions ($\tau_{ii}$) at the spatial origin for head-on \textit{spherical} blob collisions from Table \ref{tab:initial_data}.}
\label{fig:charge_influence_pressures}
\end{figure}
Figure \ref{fig:charge_influence_pressures} shows the time evolution of the pressures along all three directions at the collision location. The maximum pressure achieved also increases with the blob charge $\delta q$, and the pressures follow the neutral collision result very closely soon after $t_c$, although $\mathcal{P}_z$ is more sensitive to the presence of charge than any of the previously studied quantities.
Our results therefore suggest that charge does not greatly affect the collision dynamics. Similarly, \cite{Casalderrey-Solana:2016xfq} found that charge does not greatly affect other observables in planar shockwave collisions at finite $D$. Due to technical difficulties, we could not go beyond the maximum charge value presented here, so we do not know whether our conclusions would change for larger $\delta q$.
\section{Entropy growth}
\label{sec:Entropy}
We now present the details of entropy evolution during collisions. As mentioned earlier, the leading entropy production comes from charge diffusion, while viscous dissipation is $1/n$ suppressed. Therefore, for charged collisions we will focus on the entropy defined in \eqref{eq:Thermo}, while for neutral collisions we take the entropy in \eqref{eq:entropy_neutral}. The results that we present here correspond to the {\it charged oblate} and {\it neutral oblate} collisions in Table \ref{tab:initial_data}.
\subsection{Charged collisions}
We begin by analyzing the {\it charged oblate} blob collisions. In Figure \ref{fig:entropy_growth_asymmetric} we show the average entropy density (entropy integrated over the three non-passive directions and divided by the volume) for the collision with $\delta x = 2$. We can clearly distinguish three different stages in the collision dynamics. First, we observe a linear growth at early times, before the collision has happened. This growth corresponds to the diffusion of the moving blobs; more details are given below. At $t_c$, marked by a vertical gray line, the slope of the linear growth decreases. In this second stage, lasting for $\Delta t \approx 15$, the system continues producing entropy linearly, now with a slight oscillation superimposed on the growth. Notice that both the hydrodynamization and isotropization times fall within this post-collision stage. At late times, $t \geq 20$, the entropy clearly departs from the linear growth and its production rate slows down once again. This last stage is likely the longest, since the equilibrium value of the entropy is $s_\text{final} \approx 13.0082$, which means that half of the total entropy jump is still to be produced, but at a lower rate. An analogous linear entropy growth was observed in planar-shockwave collisions at finite $D$ in \cite{Muller:2020ziz}. The end time of our simulations is large enough to observe the eventual departure from linearity that could not be observed there.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.8\textwidth]{images/entropy_charged_oblate_dx_2.pdf}
}
\caption{Average entropy density as a function of time for a \textit{charged oblate} blob collision with $\delta x = 2$. The vertical gray line indicates the collision time $t_c\approx 3.11$. The dashed lines are linear fits, whose slopes are $(8.39\cdot 10^{-4}, 3.29\cdot 10^{-4})$.}
\label{fig:entropy_growth_asymmetric}
\end{figure}
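The stage-wise growth rates quoted in the figure captions follow from least-squares linear fits over each stage. A minimal sketch of such a fit is shown below (our own illustration); the entropy series here is synthetic, built with the quoted slopes, whereas in practice $s(t)$ would be the simulated average entropy density.

```python
import numpy as np

# Synthetic average-entropy series with the quoted rates:
# 8.39e-4 before the collision time t_c, 3.29e-4 after it,
# plus a small amount of noise.
rng = np.random.default_rng(0)
t_c = 3.11                                  # collision time
t = np.linspace(0.0, 18.0, 400)
s0 = 13.0
s = np.where(t < t_c,
             s0 + 8.39e-4 * t,
             s0 + 8.39e-4 * t_c + 3.29e-4 * (t - t_c))
s = s + 1e-7 * rng.standard_normal(t.size)

# Fit each stage separately with a least-squares line.
pre = t < t_c
slope_pre = np.polyfit(t[pre], s[pre], 1)[0]
slope_post = np.polyfit(t[~pre], s[~pre], 1)[0]
print(slope_pre, slope_post)
```

The same procedure, with a third fitting window, yields the additional slope quoted for the neutral collisions below.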
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.8\textwidth]{images/charged_entropy_dx_dependence.pdf}
}
\caption{Average entropy density as a function of time for \textit{charged oblate} blob collisions with several different impact parameters $\delta x$. The vertical gray line indicates the collision time, $t_c\approx 3.11$. As a reference, the entropy for the motion of two non-colliding \textit{charged oblate} blobs is included.}
\label{fig:entropy_growth_asymmetric_dx_dependence}
\end{figure}
The general qualitative features observed in Figure \ref{fig:entropy_growth_asymmetric}, including the linear growths and the presence of three stages, can be identified in most collisions with charge.
A comparison of the entropy time dependence for the {\it charged oblate} collisions is shown in Figure \ref{fig:entropy_growth_asymmetric_dx_dependence}. As a reference, we have included twice the entropy of a single moving {\it charged oblate} blob. At early times, before $t_c \approx 3.11$, all the curves coincide. This fact proves that the pre-collision entropy growth can be understood as coming from the diffusion of two freely moving {\it charged oblate} blobs on the background horizon. For times near $t_c$, differences start to arise. For small values of the impact parameter, $\delta x \leq 5$, we can still identify a second stage of linear entropy growth. The values we measured for the slopes are $3.22\cdot 10^{-4}$, $3.29\cdot 10^{-4}$ and $3.56\cdot 10^{-4}$ for $\delta x = 0, 2$ and $5$ respectively. The similarity among the values suggests a possible insensitivity to the initial data details and a dependence only on the final, equilibrium state. As $\delta x$ is increased, the length of the second stage decreases until it fully disappears. As expected, higher impact parameters exhibit an entropy behavior that is more similar to that of two freely moving blobs.
The results also show that the collision dynamics slows down the entropy generation with respect to the diffusion of freely moving blobs. Contrary to what we expected, the highest entropy generation rate does not happen for head-on collisions. Higher impact parameters imply higher entropy rates during the second stage. In particular, freely moving blobs generate entropy faster than the complicated dynamics in collisions (for the time window presented here).
In order to decide whether the measured rates in Figure \ref{fig:entropy_growth_asymmetric_dx_dependence} are insensitive to all the details of the initial data, in Figure \ref{fig:entropy_growth_asymmetric_vs_symmetric} we compare the entropy of a {\it charged oblate} blob collision with a {\it quasi-spherical} blob collision, both with $\delta x = 2$. Both setups also have identical total mass and charge, which means that the final equilibrium state is the same. The only difference is in the initial blob shapes. The measured rates are $3.29\cdot 10^{-4}$ for the {\it charged oblate} blob collision, and $2.86\cdot 10^{-4}$ for the {\it quasi-spherical} one. We observe a bigger discrepancy between their values than for collisions of identical blobs with different impact parameters.
The linear growth rate in the post-collision stage is not independent of the details of the initial data. Interestingly, at later times both curves behave in a similar way, which seems to indicate that the details of the initial data have been forgotten.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.8\textwidth]{images/charged_entropy_oblate_vs_quasi_spherical.pdf}
}
\caption{Total entropy normalized by the volume as a function of time for the \textit{charged oblate} ($\delta x = 2$) and \textit{quasi-spherical} blob collisions. The widths of the initial blobs are chosen in such a way that the end state is the same. The measured slopes are $(3.29\cdot 10^{-4}, 2.86\cdot 10^{-4})$.}
\label{fig:entropy_growth_asymmetric_vs_symmetric}
\end{figure}
We also studied the time evolution of the entropy for blobs with different values of the charge $\delta q$, and observed that the qualitative behavior of the entropy is maintained, and all three stages can be identified for small enough impact parameters.
\subsection{Neutral collisions}
In Figure \ref{fig:entropy_growth_asymmetric_neutral} we show the full evolution of the entropy in a {\it neutral oblate} blob collision with $\delta x = 2$.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.8\textwidth]{images/entropy_neutral_oblate_dx_2.pdf}
}
\caption{Total entropy normalized by the volume as a function of time for a \textit{neutral oblate} blob collision with $\delta x = 2$. The dashed lines correspond to linear fits, whose slopes are $(0.1541, 0.4945, 0.0660)$. The first change in the slope takes place at $t \approx 1.5$, while the vertical gray line marks the collision time $t_c\approx 3.11$.}
\label{fig:entropy_growth_asymmetric_neutral}
\end{figure}
In this case, we can identify four different stages of entropy growth, one more than in charged collisions. At early times, we find a linear growth of the entropy which is related to the dissipation of the blobs before they collide. At around $t\approx 1.5$, before $t_c\approx 3.11$ (vertical dashed line), the system enters a second stage of faster linear growth. This phase is completely absent in the charged case. By looking into the mass density at $t \approx 1.5$, we can relate this early change in the rate to the instant in which the blobs start to considerably overlap with each other, as shown in Figure \ref{fig:1st_contact}. Indeed, at $t\approx1.5$, the mass density at the origin is about half the maximum of the blobs. This stage lasts from $t \approx 1.5$ to $t_c$, and its entropy growth rate is the fastest of the whole evolution. The fact that this second stage is missing in the evolution for charged blobs indicates that the entropy \eqref{eq:entropy_neutral} is more sensitive to the presence of new dynamical regimes.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.5\textwidth]{images/snapshot_mass_neutral_oblate_dx_2.pdf}
}
\caption{Mass density at $t = 1.5$ at the collision plane. At this instant the two blobs start to considerably overlap each other and the collision process starts.}
\label{fig:1st_contact}
\end{figure}
After $t_c$, the entropy enters yet another regime of linear growth, this time with a smaller slope. This stage is analogous to the second regime of linear growth found in Figure \ref{fig:entropy_growth_asymmetric}, but with about a third of the duration ($\Delta t\approx 5$). At $t \approx 10$, the entropy production stops following a linear trend and its growth rate decreases. Contrary to the charged case, most of the expected entropy increment has already taken place by this point. In fact, by the end of our simulation at $t=30$, the total increase in the entropy has been around $s_\text{final}-s_\text{initial}\approx 1.9$, about 90\% of the expected total jump.
\begin{figure}[thpb]
\centerline{
\includegraphics[width=0.8\textwidth]{images/neutral_entropy_dx_dependence.pdf}
}
\caption{Total entropy normalized by the volume as a function of time for identical initial data with different impact parameters. The vertical gray line indicates the collision time. As a reference, the entropy for the motion of two non-colliding blobs is included.}
\label{fig:entropy_growth_asymmetric_dx_dependence_neutral}
\end{figure}
Finally, in Figure \ref{fig:entropy_growth_asymmetric_dx_dependence_neutral} we show the entropy of {\it neutral oblate} blob collisions. We also add the entropy produced by two freely diffusing blobs. As the impact parameter is increased, the curves increasingly resemble that of the non-colliding blobs, just as expected. More clearly than in the charged setup, the collision accelerates the entropy growth thanks to the second stage, which is absent in charged collisions.
We conclude that the first stage of linear growth is due to the diffusion of the blobs before they come into contact, while the second stage clearly depends on the impact parameter. This is seen in the slope values of $0.5233$, $0.4945$ and $0.3731$ for impact parameters $\delta x = 0$, $2$ and $5$, respectively. Regarding the slope of the linear entropy production after $t_c$, we also measured a clear dependence on the impact parameter, with slopes of $0.0627$, $0.0660$ and $0.0777$ for $\delta x = 0$, $2$ and $5$. The results therefore show that the second and third linear growth rates do depend on the impact parameter.
\section{Discussion}
\label{sec:Discussion}
Although the AdS/CFT correspondence transforms problems that are hard to address within Quantum Field Theory into tractable classical gravity ones, the computational cost of solving Einstein's equations without further assumptions is still large. In the present work we used the large $D$ limit of General Relativity to drastically simplify the problem of shockwave collisions, allowing us to scan over different kinds of collision scenarios with full 4+1-dimensional dynamics. We provided an overview of collisions of different blob sizes, shapes, charges and impact parameters. However, the formalism has its own limitations. The effective description \eqref{eq:Large-D-effecttive-equations} becomes non-relativistic, including the equation of state, and the transport coefficients differ from those in 4+1 dimensions. Presumably, these have little effect at the qualitative level. The most important limitation is that the background horizon temperature cannot be parametrically suppressed while keeping the blob amplitude fixed, which produces undesired dissipative effects.
In a similar way to what is observed in AdS$_5$ collisions, the system produces a large amount of anisotropy as well as deviations from first-order hydrodynamics. In other words, the second-order gradient term in \eqref{eq:constitutive-relations} plays an important role around the collision time. After a few units of time, the system relaxes back to a nearly isotropic state which is well captured by first order hydrodynamic terms. Contrary to what has been observed in AdS$_5$ collisions \cite{Chesler:2010bi, Casalderrey-Solana:2013aba, Chesler:2015wra}, the hydrodynamization and isotropization times approximately coincide at large $D$.
As the impact parameter is increased, the maximum mass, charge density and anisotropy decrease. We found that, for impact parameters below the blobs' width, the evolution of the mass and charge densities is the same when normalized by their maximum value. By colliding blobs of increasing charge values, we conclude that the charge only plays an important role for the entropy, but not for the rest of the variables. A similarly weak effect on the other observables was observed in finite $D$ planar shockwave collisions \cite{Casalderrey-Solana:2016xfq}. It is unknown to us whether larger values of the blob charge would change our conclusions.
We additionally studied the entropy produced during collisions. At large $D$, entropy generation by viscous dissipation is $1/D$ suppressed relative to charge diffusion. Hence, one has to consider two different notions of entropy for charged and neutral collisions. For both kinds of collisions, we observed several regimes of entropy production linear in time.
When the incoming blobs are charged, the entropy grows linearly in time at different rates before and after the collision. Eventually, a departure from linear growth is observed. Interestingly, the growth rate after the collision increases with the impact parameter. In neutral collisions, we find an extra linear growth stage. This stage starts when the blobs start overlapping with each other and ends at the collision time. The highest rate of entropy production occurs at this stage, although such rate decreases with increasing impact parameter.
Similar post-collision linear entropy growths were observed at finite $D$ collisions \cite{Muller:2020ziz, Grozdanov:2016zjj}. In \cite{Muller:2020ziz} a connection between the growth rate and the largest Lyapunov exponent was suggested through the Kolmogorov-Sinai (KS) entropy. Even if certain notions of entropy may exhibit regimes of linear growth whose rate is equal to the KS entropy, e.g. \cite{KS_vs_physical, Kunihiro:2008gv}, the entropy definitions used here may not. In fact, KS entropy is only sensitive to the end state, while the rates we observed depend on the initial data. Given that our boundary theory has a classical gravitational dual, the maximum Lyapunov exponent saturates the Maldacena-Shenker-Stanford bound $\lambda_L\leq2\pi \boldsymbol{T}$ \cite{Maldacena:2015waa}, with $\boldsymbol{T}$ the physical temperature, which diverges as $D\rightarrow\infty$. One can presumably recover the bound by working with appropriately rescaled quantities. We wonder whether the large $D$ limit can simplify the study of chaos too. Progress in this direction will be reported in \cite{DMM}.
Linear entropy growths were also observed in the context of holographic collisions of phase domains in a theory with a first-order thermal phase transition \cite{Bea:2021ieq}. This corresponds to a completely different type of setup, which suggests that stages of linear entropy production are a generic signature of collision dynamics.
It would be interesting to investigate the effects of $1/D$ corrections, which would allow us to observe more precisely the deviations that finite $D$ introduces into the results presented here, even if only perturbatively. The resulting equations would be more difficult, but they would still represent a major simplification compared to AdS$_5$ collisions.
\section*{Acknowledgements}
\label{sec:Acknowledgements}
We thank David Ramirez and Martin Sasieta for useful discussions. We are grateful to David Mateos, Roberto Emparan and Marija Toma{\v s}evi\'c for their very useful comments on the first manuscript. We thank David Licht, Ryotaku Suzuki and Roberto Emparan for their early exploratory work in this line of research.
RL acknowledges financial support provided by Next Generation EU through a University of Barcelona Margarita Salas grant from the Spanish Ministry of Universities under the {\it Plan de Recuperaci\'on, Transformaci\'on y Resiliencia} and by Generalitat Valenciana / Fons Social Europeu through APOSTD 2022 post-doctoral grant CIAPOS/2021/150. Work supported by Spanish Agencia Estatal de Investigaci\'on (Grant PID2021-125485NB-C21) funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe, and the Generalitat Valenciana (Grant PROMETEO/2019/071).
The work of MSG is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No758759).
\section{Introduction} \label{SecI}
Randomness is an essential resource \cite{Pironio2007, Pironio2010, Colbeck2011, Gallego2013, Silleras2014, Bancal2014, Bierhorst2018, Liu2021} for various information processing tasks such as secure communication, key distribution, encryption schemes in cryptographic protocols, statistical sampling and many others. In this regard, the certifiable and provably secure generation of random numbers is crucial. The existing and widely used random number generators (RNGs), such as pseudo-RNGs and true-RNGs, lack certification of the generated randomness \cite{Bera2017}. This is because a pseudo-RNG uses a given algorithm or arithmetic operation to produce a sequence of random numbers, and thus its privacy rests on assumptions about the computational power of the adversary as well as on the assumption that the adversary does not know the underlying arithmetic operations or algorithms used to produce the random numbers, making the generated pseudo-random sequence vulnerable to adversarial guessing attacks. A true-RNG relies on the complexity of the calculations or the lack of complete knowledge about the systems and exploits hard-to-predict physical phenomena for generating randomness. However, in principle, one can always predict the outcomes of such physical phenomena with certainty using classical mechanics and sufficient computational power \cite{Acin2016}.
Now, in quantum theory, the outcomes of a measurement performed on a quantum mechanical system are truly random. This randomness persists even if we have full knowledge of the preparation of the state of the system. Thus, such randomness does not rely on the lack of knowledge about the systems or the complexity of the calculations \cite{Acin2016}. However, the fundamental problem with the certification of such randomness is that we have to rely on the inner workings of the quantum device. Even if we assume that the quantum-RNG device \cite{collantes2017} is provided by a trusted supplier, it may deviate from its intended design because of unavoidable imperfections, aging, or tampering, leading to uncontrollable biases \cite{Pironio2018}. Therefore, to generate provably secure randomness, we have to certify the generated randomness in a device-independent (DI) way.
To this end, it is the empirical violation of a Bell inequality, together with the validity of the condition of signal locality, that guarantees the unpredictability of Bell-violating measurement outcomes, independent of the computational power of any adversary \cite{Masanes2006, Cavalcanti2012, ss2020}. Since the violation of a Bell inequality only requires the input-output statistics of an experiment, the certification of the generated randomness does not rely on the inner workings of the devices used. In \cite{Pironio2010}, it has been shown that the DI guaranteed bound of randomness is monotonic with violations of the Clauser-Horne-Shimony-Holt (CHSH) \cite{CHSH1970} inequality. Further, by invoking the CHSH inequality, Colbeck \emph{et al.} \cite{Colbeck2011} introduced a protocol in which a short and weak random string can be expanded into a longer provably private random string, known as randomness expansion. Acin \emph{et al.} \cite{Acin2012} shed more light on the relationship between randomness, nonlocality, and entanglement by showing that more randomness can be produced from a small amount of nonlocality and from states with arbitrarily little entanglement. Recently, by invoking the Hardy \cite{Hardy1992} and Cabello-Liang \cite{Cabello2002, Liang2005} relations for nonlocality, it has been shown \cite{ss2020} that the quantitative relationship between randomness and the amount of nonlocality is more nuanced than previously thought. While the minimum guaranteed amount of randomness has a monotonic relationship with nonlocality, the maximum bound of randomness is quantitatively incommensurate with nonlocality.
It is important to note here that in the CHSH case, the maximum amount of guaranteed randomness that can be certified is 1.2284 bits \cite{Pironio2010}, corresponding to the maximum violation of the CHSH inequality $(2\sqrt{2})$. The amount of certified randomness in the same experimental setup cannot be increased even if one increases the dimension of the shared maximally entangled state. This is because the value of the CHSH function is already optimized for a two-qubit system \cite{Cirelson1980, Supic2020, Pan2020}. Hence, increasing the dimension of the system does not provide any advantage in the generation of certified randomness. It is the purpose of this manuscript to investigate whether more than one copy of an entangled two-qubit state provides any advantage over a single copy for the generation of certified randomness. While at first glance the answer may seem obvious, a closer look reveals its non-triviality. Of course, one may argue that a pair of maximally entangled two-qubit states always provides a greater amount of randomness than can be obtained from a single copy. Although a pair of maximally entangled states indeed provides a quantitative advantage in certified randomness, the non-trivial part is the certification of such a pair of entangled states. This is because the standard CHSH inequality, as mentioned, does not provide the required certification, since its quantum maximum is attained for a two-qubit entangled state. Thus, the maximum violation will not increase even if one increases the dimension of the system or uses many copies of two-qubit entangled states. For the Elegant Bell inequality (EBI) \cite{Gisin2007}, where one party has four measurement settings and the other has three, the maximum value is also reached for a two-qubit system \cite{Anderson2017}.
Moreover, the $n$-settings chained Bell inequality cannot also certify more than one copy of maximally entangled two-qubit state for the same reason \cite{Supic2016}.
Thus, in a single experimental setup, it is not obvious how to guarantee a greater amount of provably secure certified randomness from many copies of a maximally entangled state than that obtained from a single copy. Against this backdrop, by invoking a family of many-settings Bell inequalities, we demonstrate that increasing the number of maximally entangled two-qubit states does provide an advantage in the generation of certified randomness. Such a family of inequalities was earlier introduced as a dimension witness \cite{Pan2020} in the context of random-access-code communication games.
This paper is organized as follows. In Sec. \ref{SecII}, we briefly recapitulate the essence of the derivation for the optimal quantum bound of the Bell inequality without assuming the dimension of the state as well as measurement (Sec. \ref{sosnop}). Next, we employ the Bell inequality through DI certification of randomness. Then, in Sec. \ref{SecIII}, by suitably quantifying, we evaluate the amount of certified randomness corresponding to the optimal quantum violation of the Bell inequality for an arbitrary number of measurement settings ($n$). In particular, in Secs. \ref{cr2}-\ref{cr4}, we explicitly evaluate the certified randomness when Alice and Bob share a single copy as well as more than one copy of a maximally entangled two-qubit state for $n\in \{2,3,4,5,6\}$. The obtained results are illustrated in Table \ref{table1} and Fig. \ref{figrn}. Finally, we conclude our work and discuss our proposal from two different viewpoints where more than a single copy of a two-qubit maximally entangled state provides an advantage over a single copy (Sec. \ref{SecIV}).
\section{A family of Bell inequalities and corresponding optimal quantum violations}\label{SecII}
\begin{figure}[ht]
\centering
{{\includegraphics[width=0.9\linewidth]{bell.pdf}}}
\caption{A bipartite Bell experiment involving $2^{n-1}$ settings for Alice and $n$ settings for Bob. }
\label{bs}
\end{figure}
For the sake of completeness, we briefly revisit the quantum optimization of a family of Bell inequalities, which was earlier introduced as a consequence of the analysis of the $n$-bit RAC communication game \cite{Ghorai2018}. It was shown that the success probability crucially depends on the quantum violation of such Bell inequality. For our purpose of generation of certified randomness, we independently invoke this inequality. Before we begin, let us first introduce the scenario in the following.
We consider a bipartite Bell experiment in which two space-like separated parties, Alice and Bob, perform measurements on their local subsystems. Alice performs one of $2^{n-1}$ dichotomic measurements denoted by $A_{n,i} \in \{A_{n,1}, A_{n,2}, ..., A_{n,2^{n-1}}\}$, with outcomes $a \in \{0,1\}$. Bob performs one of $n$ dichotomic measurements denoted by $B_{n,y} \in \{B_{n,1}, B_{n,2}, ..., B_{n,n}\}$, with outcomes $b\in \{0,1\}$. For this scenario, the following Bell functional was proposed \cite{Ghorai2018}:
\begin{align}\label{nbell}
\mathcal{B}_{n} = \sum_{y=1}^{n}\qty(\sum_{i=1}^{2^{n-1}} (-1)^{x^i_y} A_{n,i})\otimes B_{n,y}
\end{align}
The term $x^i_{y}\in\{0,1\}$ is fixed by the encoding scheme demonstrated in \cite{Ghorai2018, Pan2020, Pan2021}, which determines the signs $(-1)^{x^{i}_{y}}$ as follows. Consider the $n$-bit strings $x^{\alpha}\in \{0,1\}^{n}$ with $\alpha\in \{1,2,\cdots,2^{n}\}$, whose bits we write as $x^{\alpha}=x^{\alpha}_{y=1} x^{\alpha}_{y=2} x^{\alpha}_{y=3}\cdots x^{\alpha}_{y=n}$. For example, if $x^{\alpha} = 110...10$ then $x^{{\alpha}}_{y=1} =1$, $x^{{\alpha}}_{y=2} =1$, $x^{{\alpha}}_{y=3} =0,\cdots ,x^{{\alpha}}_{y=n-1} =1$ and $x^{{\alpha}}_{y=n} =0$. From each pair of complementary strings, i.e., each pair $x^{i}, x^{j}$ satisfying $x^{i}\oplus_{2} x^{j}=11\cdots1$, we retain only one representative, denoted $x^{i}$ with $i\in \{1,2,\cdots,2^{n-1}\}$. The index $i$ labels Alice's inputs, while the bit position $y\in \{1,2, \cdots ,n\}$ within each string is associated with Bob's inputs.
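The representative-string construction can be made concrete with a short sketch (our own illustration, not from the cited works). For $n=3$ it enumerates one $n$-bit string per complementary pair, fixing the first bit to $0$ as a convenient choice, and prints the resulting signs $(-1)^{x^i_y}$ entering the Bell functional:

```python
from itertools import product

n = 3
# Keep one representative from each pair of complementary n-bit strings;
# fixing the first bit to 0 is one convenient way to do so.
reps = [bits for bits in product((0, 1), repeat=n) if bits[0] == 0]
assert len(reps) == 2 ** (n - 1)

# The complement of every representative is excluded by construction.
for x in reps:
    comp = tuple(1 - b for b in x)
    assert comp not in reps

# Sign matrix (-1)^{x^i_y} appearing in the Bell functional B_n:
signs = [[(-1) ** b for b in x] for x in reps]
for i, row in enumerate(signs, start=1):
    print(f"i={i}:", row)
```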
It has been shown \cite{Pan2020} that if Alice and Bob are space-like separated, the local bound of the Bell functional $\mathcal{B}_{n}$ is given by
\begin{equation}\label{local}
\langle\mathcal{B}_{n}\rangle_{L}= n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}
\end{equation}
where $\binom{m}{r}=\frac{m!}{r! \ (m-r)!}$ and $\lfloor m \rfloor$ is the floor function that takes as input a real number $m$, and gives as output the greatest integer less than or equal to $m$.
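The local bound can be verified for small $n$ by brute force over deterministic strategies. The sketch below (our own check, not part of the original derivation) uses the fact that, for fixed Bob outcomes, the best deterministic Alice aligns each $A_{n,i}$ with the sign of its coefficient:

```python
import itertools
from math import comb, floor

def local_bound(n):
    """Brute-force the deterministic (local) maximum of the Bell functional B_n.

    For fixed Bob outcomes b_y = +/-1, the optimal deterministic Alice aligns
    each A_i with the sign of its coefficient, so the maximum over Alice is
    sum_i |sum_y (-1)^{x^i_y} b_y|.
    """
    # one representative n-bit string per complementary pair (first bit 0)
    reps = [x for x in itertools.product((0, 1), repeat=n) if x[0] == 0]
    return max(
        sum(abs(sum((-1) ** xb * by for xb, by in zip(x, b))) for x in reps)
        for b in itertools.product((-1, 1), repeat=n)
    )

for n in (2, 3, 4, 5):
    formula = n * comb(n - 1, floor((n - 1) / 2))
    print(n, local_bound(n), formula)
```

For $n=2,3,4,5$ this reproduces the values $2$, $6$, $12$ and $30$ given by the formula above.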
Let a source $S$ emit an entangled two-qubit state $\rho_{AB}$. It is important to note here that there exist quantum correlations for which $\langle\mathcal{B}_{n}\rangle_{Q}=Tr[\rho_{AB} \ \mathcal{B}_{n}]>\langle\mathcal{B}_{n}\rangle_{L}$. In the following, by employing an elegant sum-of-squares (SOS) method, the optimal quantum bound is derived.
\subsection{The optimal quantum bound of the Bell functional $\mathcal{B}_{n}$ } \label{sosnop}
Here we derive the optimal quantum bound of the Bell functional $\mathcal{B}_{n}$ independent of the dimension of the state as well as measurement. For this purpose, we utilize the SOS technique introduced in \cite{Pan2020}. We consider a positive semi-definite operator $\gamma_{n}$, which can be expressed as $\gamma_{n}=\beta_{n} \mathbb{I}_{d}-(\mathcal{B}_{n})_{Q}$, where $\beta_{n}$ is the optimal value that can be obtained when $\langle \gamma_{n}\rangle$ is equal to zero. This can be proved by considering a set of operators $M_{n,i}$ which are linear functions of the observables $A_{n,i}$ and $B_{n,y}$ such that
\begin{align}\label{gamma}
\gamma_{n}=\sum\limits_{i=1}^{2^{n-1}} \frac{\omega_{n,i}}{2} \qty(M_{n,i})^{\dagger}\qty(M_{n,i})
\end{align}
where the operators $M_{n,i}$ and the quantities $\omega_{n,i}$ are given as follows
\begin{eqnarray}
\label{mi}
&&M_{n,i}=\frac{1}{\omega_{n,i}}\qty(\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y})-A_{n,i} \nonumber \\
&&\omega_{n,i}=\Big|\Big|\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}\ket{\psi}\Big|\Big|
\end{eqnarray}
where $||\cdot||$ is the Frobenius norm, given by $|| \mathcal{O}\ket{\psi}||=\sqrt{\bra{\psi}\mathcal{O}^{\dagger}\mathcal{O}\ket{\psi}}$.
Now, putting Eq. (\ref{mi}) into Eq. (\ref{gamma}) and by noting that $A_{n,i}^{\dagger} A_{n,i}=B_{n,y}^{\dagger} B_{n,y}=\mathbb{I}_{d} $, we obtain
\begin{eqnarray} \label{opt1}
\gamma_{n}&=&-(\mathcal{B}_{n})_{Q} + \sum\limits_{i=1}^{2^{n-1}}\left[ \frac{1}{2\omega_{n,i}}\left(\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}\right)^2 + \frac{\omega_{n,i}}{2} \mathbb{I}_d\right] \nonumber \\
&=& -(\mathcal{B}_{n})_{Q} + \sum\limits_{i=1}^{2^{n-1}}\omega_{n,i} \ \mathbb{I}_d \ \ \ \ \text{[from Eq.~(\ref{mi})]}
\end{eqnarray}
Therefore, it follows from the above Eq.~(\ref{opt1}) that the quantum optimum value of $(\mathcal{B}_{n})_{Q}$ is attained when $\langle\gamma_{n}\rangle_{Q}= \bra{\psi} \gamma_{n} \ket{\psi} = 0$, which in turn provides
\begin{equation}\label{optbn}
\langle \mathcal{B}_{n}\rangle_{Q}^{opt} = \max\left(\sum\limits_{i=1}^{2^{n-1}}\omega_{n,i}\right)
\end{equation}
Note that such optimal value will be obtained for the following optimization conditions
\begin{equation}
M_{n,i}=0 \implies A_{n,i}= \frac{1}{\omega_{n,i}}\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}
\end{equation}
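As an illustration, consider $n=2$ and take Bob's observables to anticommute, $\{B_{2,1},B_{2,2}\}=0$ (shown below to be the optimal choice). Then $\omega_{2,1}=\omega_{2,2}=\sqrt{2}$ and the optimization conditions read
\begin{equation*}
A_{2,1}= \frac{B_{2,1}+B_{2,2}}{\sqrt{2}}, \qquad A_{2,2}= \frac{B_{2,1}-B_{2,2}}{\sqrt{2}},
\end{equation*}
so that $\langle \mathcal{B}_{2}\rangle_{Q}^{opt} = \omega_{2,1}+\omega_{2,2} = 2\sqrt{2}$, which is the Tsirelson bound of the CHSH inequality.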
Now, in order to achieve the optimal quantum value $\langle \mathcal{B}_{n}\rangle_{Q}^{opt}$ from Eq.~(\ref{optbn}), we first evaluate $(\sum\limits_{i=1}^{2^{n-1}}\omega_{n,i})$ by invoking the convex inequality as follows
\begin{align}
\label{concav}
\sum\limits_{i=1}^{2^{n-1}}\omega_{n,i}\leq \sqrt{2^{n-1} \sum\limits_{i=1}^{2^{n-1}} (\omega_{n,i})^{2}}
\end{align}
It is crucial to remark here that in Eq. (\ref{concav}), the equality holds only when all the $\omega_{n,i}$ are equal.
Since $B_{n,y}$s are dichotomic, the quantity $\omega_{n,i}$ can explicitly be written from Eq.~(\ref{mi}) as
\begin{eqnarray}\label{omega}
\omega_{n,i}&=& \Big[ n+ \langle \{(-1)^{x^i_1} B_{n,1}, \sum\limits_{y=2}^{n}(-1)^{x^i_y} B_{n,y}\} \rangle \nonumber\\
&+& \langle \{(-1)^{x^i_2} B_{n,2}, \sum\limits_{y=3}^{n}(-1)^{x^i_y} B_{n,y}\}\rangle + \cdots \cdots \nonumber\\
&+&\langle \{(-1)^{x^i_{n-1}} B_{n,n-1}, (-1)^{x^i_n} B_{n,n}\}\rangle\Big]^{\frac{1}{2}}
\end{eqnarray}
where $\{ \ , \ \}$ denotes the anti-commutation relation.
As already mentioned, the equality in Eq.~(\ref{concav}) holds when all the $\omega_{n,i}$ are equal. From Eq. (\ref{omega}) it can be shown that all $\omega_{n,i}$ are equal to each other if Bob's observables are mutually anticommuting, i.e., $\{B_{n,y},B_{n,y^{\prime}}\}=0 \ \ \forall y,y^{\prime}\in \{1,2,\cdots,n\}$. This then provides the maximum value $\omega_{n,i}=\sqrt{n}$. Therefore, from Eqs.~(\ref{optbn}) and (\ref{concav}), it is straightforward to obtain
\begin{equation}
\label{opt}
\langle\mathcal{B}_{n}\rangle_Q^{opt}= 2^{n-1}\sqrt{n}
\end{equation}
It is to be noted here that the optimal value is achieved when there exist $n$ mutually anticommuting observables in Bob's local Hilbert space, thereby specifying the dimension of the global Hilbert space, $d=4^{m}$ with $m=\lfloor n/2 \rfloor$, as well as the number $m=\lfloor n/2 \rfloor$ of maximally entangled two-qubit states required for achieving such optimal value in quantum theory.
Thus, for $n\geq 4$, a single copy of a maximally entangled two-qubit state does not suffice and one requires a higher-dimensional system. For example, the optimal value of the Bell expression for $n=4$ requires at least two copies of a bipartite maximally entangled qubit state.
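This dimension count can be cross-checked numerically. The following sketch (our own illustration, using NumPy) assembles the Bell operator of Eq.~(\ref{nbell}) from the optimality conditions, with explicitly anticommuting observables for Bob: single-qubit Paulis for $n=2,3$, and $4\times 4$ observables (i.e., two qubits on Bob's side, requiring two entangled copies) for $n=4$. Its largest eigenvalue in absolute value reproduces $\langle\mathcal{B}_{n}\rangle_Q^{opt}=2^{n-1}\sqrt{n}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Mutually anticommuting choices for Bob's observables. For n = 4 no set of
# single-qubit observables exists, so 4x4 (two-qubit) observables are needed.
bob_sets = {
    2: [sz, sx],
    3: [sx, sy, sz],
    4: [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx), np.kron(I2, sy)],
}

def bell_operator(n):
    B = bob_sets[n]
    d = B[0].shape[0]
    op = np.zeros((d * d, d * d), dtype=complex)
    for i in range(2 ** (n - 1)):
        # sign string x^i: n-bit binary of i; the first bit is automatically 0
        signs = [(-1) ** ((i >> (n - 1 - y)) & 1) for y in range(n)]
        # optimality condition: A_i is the normalized signed sum of the B_y
        A = sum(s * By for s, By in zip(signs, B)) / np.sqrt(n)
        for s, By in zip(signs, B):
            op += s * np.kron(A, By)
    return op

for n in (2, 3, 4):
    val = np.abs(np.linalg.eigvalsh(bell_operator(n))).max()
    print(n, val, 2 ** (n - 1) * np.sqrt(n))
```

For $n=2,3$ the maximum is reached on a two-qubit state ($2\sqrt{2}$ and $4\sqrt{3}$), while for $n=4$ the value $16$ lives in a $16$-dimensional global Hilbert space, consistent with $d=4^{m}$, $m=2$.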
\section{ Generation of certified Randomness from the new family of Bell inequalities} \label{SecIII}
It follows from the argument presented in the preceding Sec. (\ref{SecII}) that in the quantum theory there exist correlations for which it is possible to obtain $\langle\mathcal{B}_{n}\rangle_{L}<\langle\mathcal{B}_{n}\rangle_{Q}\leq 2^{n-1}\sqrt{n}$. This implies that such Bell violating quantum correlations cannot be reproduced by any predetermined local strategy between Alice and Bob. Thus, empirical violations of the proposed Bell inequalities certify the presence of inherent randomness in the observed statistics. It is the purpose of this manuscript to appropriately quantify such inherent certified randomness.
In information theory, among the different measures of entropy, the min.-Entropy ($H_{\infty}$) \cite{Renner2009} is widely used for the quantification of randomness. Here, to compare with those earlier relevant works, we also take the min.-Entropy as the quantifier of certified randomness. Since the min.-Entropy is determined only by the event occurring with the highest probability, independent of the probabilities of the other events, it quantifies the minimum unpredictability involved in the probability distribution \cite{Renner2009}. Thus, the min.-Entropy provides the most secure bound on the generated randomness. From now on, we refer to this bound as the guaranteed bound of randomness.
We quantify the amount of DI certified global randomness $\qty(R_{min})_n$ for a given Bell violation, $\langle\mathcal{B}_n(\vec{\mathbb{P}}_{obs}^{n})\rangle> n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}$, in terms of the min.-Entropy as follows
\begin{eqnarray} \label{randef}
\qty(R_{min})_n &=& \min\limits_{\vec{\mathbb{P}}_{obs}} H_{\infty}(a,b|A_{n,i},B_{n,y}) = -\log_2\qty[\max_{\vec{\mathbb{P}}_{obs}} p(a,b|A_{n,i},B_{n,y})] \nonumber\\
&& \ \ \ \text{subject to:} \nonumber\\
&& \ \ \ \text{(i)} \ \vec{\mathbb{P}}_{obs} \in \{p(a,b|A_{n,i},B_{n,y})\},\nonumber \\
&& \ \ \ \text{(ii)} \ p(a,b|A_{n,i},B_{n,y})=Tr[\rho_{AB} \ (\Pi_{A_{n,i} }^a \otimes \Pi_{B_{n,y}}^b)],\nonumber \\
&& \ \ \ \text{(iii)} \ \langle\mathcal{B}_{n}\rangle_Q=\left\langle\sum_{y=1}^{n}\sum_{i=1}^{2^{n-1}} (-1)^{x^i_y} A_{n,i}\otimes B_{n,y}\right\rangle = \epsilon \nonumber\\
&&\ \ \ \text{with} \ n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}<\epsilon\leq 2^{n-1}\sqrt{n} \ ;
\end{eqnarray}
where $\vec{\mathbb{P}}_{obs}^{n} \in {\mathbb{R}^{n 2^{n+1}}}$ is a vector in the $(n 2^{n+1})$-dimensional real space denoting the set of all observed joint probability distributions, known as the behaviour.
It is important to remark here that, for a given Bell violation, there may exist more than one behaviour. Thus, to ensure the security of the quantified randomness, the min.-Entropy needs to be minimized over all possible behaviours compatible with that Bell violation.
In this regard, note that the amount of certified randomness can be quantified in two different ways \cite{ss2020}. To understand this more clearly, let us first consider the simplest 2 parties-2 measurements per party-2 outcomes per measurement (2-2-2) scenario. In such a scenario, there are four possible combinations of the pairs of measurement settings, given by $\{(A_1,B_1),(A_1,B_2),(A_2,B_1),(A_2,B_2)\}$, and one can evaluate the maximum value of the joint probability corresponding to each combination, denoted by $P^{\ast}_{ij} \ \forall i,j\in\{1,2\}$. The amount of randomness corresponding to each combination is then $R_{ij}=-\log_2\qty[P^{\ast}_{ij}]$. Now, we can evaluate the amount of certified randomness in two ways: (i) by taking the minimum value of $R_{ij}$, which gives the DI guaranteed bound of randomness, $R_{min}=\min\limits_{i,j} R_{ij}$, and (ii) by taking the maximum value of $R_{ij}$, which gives the maximum amount of randomness, $R_{max}=\max\limits_{i,j} R_{ij}$. This distinction in the quantification of certified randomness is essential, since there are works \cite{Acin2012, Law2014, ss2020} that have used $R_{max}$ as the quantifier of certified randomness to show that close to two bits of randomness can be certified in the 2-2-2 scenario. Further, by invoking a suitable generalised measurement (POVM) scheme, it has been shown \cite{Anderson2018, Woodhead2019} that close to four bits of $R_{max}$ can be certified by the Elegant Bell inequality.
Importantly, in our prescription for quantifying DI certified randomness, we take $R_{min}$ as the viable quantifier, as given by Eq.~(\ref{randef}), in line with the bound considered for the estimation of DI certified randomness by Pironio \emph{et al.} \cite{Pironio2010}. Moreover, it is worth noting that for the optimal violation of the concerned Bell functional ($\mathcal{B}_n$), the maximum joint probabilities corresponding to all combinations of pairs of measurement settings are found to be equal, so that in our treatment $R_{min}=R_{max}$. While one could also invoke a suitable POVM measurement scheme and/or employ higher-settings tilted Bell inequalities to certify a greater amount of randomness, such a line of study is beyond the scope of the present manuscript.
Now, since the quantum set of correlations is not a polytope, it is impossible to analytically evaluate the guaranteed bound of randomness by performing the optimization over all possible behaviours. However, for the optimal quantum violation, the observed behaviour is unique, and it then becomes straightforward to evaluate the secure guaranteed bound of randomness. Hence, for the purpose of evaluating certified randomness in quantum theory, we restrict ourselves to the case when the proposed Bell inequality is optimally violated.
Before we proceed to evaluate the amount of certified randomness, we point out the interesting feature under study: whether, by increasing the dimension of the system, it is possible to certify a greater amount of guaranteed randomness than that obtained in the CHSH case in the same experimental setup. Note that such an advantage cannot be revealed by using only the CHSH inequality, the EBI, or the chain Bell inequality. The deep-seated significance of the newly found family of Bell inequalities is that the maximal quantum violation provides a dimension witness \cite{Pan2020}. Thus, many copies of maximally entangled two-qubit states can be certified from the optimal quantum violation of such inequalities by increasing the number of measurement settings. This then provides an advantage in provably secure randomness generation over the CHSH, Elegant, or chain Bell inequalities. We are now in a position to evaluate the certified randomness corresponding to the optimal quantum violations of the Bell inequalities $\mathcal{B}_n$ for different values of $n$.
\subsection{Evaluation of Certified Randomness for $n=2$} \label{cr2}
Note that the $n=2$ case corresponds to the standard CHSH inequality. It has been shown (Sec. \ref{sosnop}) that to ensure the DI maximal violation of the CHSH inequality, both Alice's and Bob's observables need to be anti-commuting. Such a maximum value is attained for the two-qubit maximally entangled state, and we can always construct such anti-commuting observables in the local two-dimensional Hilbert space. Then, straightforward algebra completely characterizes the unique behaviour $\vec{\mathbb{P}}_{obs}^2$ with the maximum joint probability $p_{2}^{\ast}$ given by
\begin{equation}
p_{2}^{\ast}= \frac{1}{4}(1+\frac{1}{\sqrt{2}})
\end{equation}
and subsequently, the amount of randomness is given by
\begin{equation}
\qty(R_{min})_{n=2} = -\log_2\left[\frac{1}{4}\left(1+\frac{1}{\sqrt{2}}\right)\right] \approx 1.2284 \ bits
\end{equation}
Note that the amount of randomness for $n=2$, $\qty(R_{min})_{n=2}=1.2284 \ bits$, corresponding to the optimal Bell violation (here CHSH), is the same as that obtained earlier in \cite{Pironio2010}.
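This CHSH point can be reproduced numerically with the singlet state. The sketch below is illustrative rather than taken from the paper: the specific optimal observables chosen are one standard set (an assumption, since any local-unitary equivalent set gives the same statistics).

```python
import numpy as np

# Pauli matrices and the singlet |psi^-> = (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# One standard CHSH-optimal choice of local observables (assumed here).
A = [Z, X]
B = [-(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)]

def joint_prob(a, b, Aop, Bop):
    """p(a,b) = Tr[rho (Pi_A^a x Pi_B^b)] with Pi^a = (I + a*Aop)/2."""
    proj = np.kron((I2 + a * Aop) / 2, (I2 + b * Bop) / 2)
    return np.real(np.trace(rho @ proj))

# CHSH value <A1B1> + <A1B2> + <A2B1> - <A2B2> = 2*sqrt(2)
chsh = sum(s * np.real(np.trace(rho @ np.kron(Ai, Bj)))
           for (Ai, Bj, s) in [(A[0], B[0], 1), (A[0], B[1], 1),
                               (A[1], B[0], 1), (A[1], B[1], -1)])

# Largest joint probability over all settings and outcomes: (1 + 1/sqrt(2))/4
p_star = max(joint_prob(a, b, Ai, Bj)
             for a in (1, -1) for b in (1, -1) for Ai in A for Bj in B)
R_min = -np.log2(p_star)  # ~ 1.2284 bits
```

All sixteen joint probabilities take only the two values $\frac{1}{4}(1\pm\frac{1}{\sqrt{2}})$, which is the unique-behaviour property used in the text.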
\subsection{Evaluation of Certified Randomness for $n=3$} \label{cr3}
The Bell inequality $\mathcal{B}_n$ for $n=3$ corresponds to the EBI given by
\begin{eqnarray}\label{bell3}
\mathcal{B}_{3} &=& A_{3,1} \otimes \left(B_{3,1}+B_{3,2}+B_{3,3}\right) \nonumber \\
&+& A_{3,2} \otimes \left(B_{3,1}+B_{3,2}-B_{3,3}\right) \nonumber\\
&+&A_{3,3} \otimes \left(B_{3,1}-B_{3,2}+B_{3,3}\right) \nonumber \\
&+& A_{3,4} \otimes \left(-B_{3,1}+B_{3,2}+B_{3,3}\right) \ ; \quad \langle\mathcal{B}_{3}\rangle_{L} \leq \ 6
\end{eqnarray}
It is interesting to note here that while the optimal quantum violation of the EBI cannot be increased by increasing the system's dimension, this Bell inequality nevertheless provides a greater amount of certified randomness than that obtained in the CHSH case.
Now, the optimal quantum violation $\langle\mathcal{B}_3\rangle_{Q}$ of the EBI corresponds to unique statistics, which fixes the minimal requirement on the shared state to be the maximally entangled two-qubit state and constrains the observables of both Alice and Bob. Here we construct the desired observables in the two-dimensional local Hilbert space from the conditions for quantum optimality as follows
\begin{eqnarray}\label{obs3}
&&A_{3,1}= (\sigma_{x} + \sigma_{y}+\sigma_{z})/{\sqrt{3}} \ ; \ A_{3,2}=(\sigma_{x} + \sigma_{y} - \sigma_{z})/{\sqrt{3}}\nonumber\\
&& A_{3,3}= (\sigma_{x} - \sigma_{y} + \sigma_{z})/{\sqrt{3}} \ ; \ A_{3,4}=(-\sigma_{x} + \sigma_{y} + \sigma_{z})/{\sqrt{3}}\nonumber\\
&&B_{3,1}=\sigma_{x} \ \ ; \ \ B_{3,2}=\sigma_{y} \ \ ; \ \ B_{3,3}=\sigma_{z}
\end{eqnarray}
Now, by employing the observables given in Eq. (\ref{obs3}) and the maximally entangled state $\rho_{AB}=\ket{\psi^{-}}\bra{\psi^{-}}$, where $\ket{\psi^{-}}=\frac{1}{\sqrt{2}}(\ket{01}-\ket{10})$, the observed behaviour $\vec{\mathbb{P}}_{obs}^3\equiv \{p(a,b|A_{3,i},B_{3,y},\rho_{AB})\}\in\mathbb{R}^{48}$ can be evaluated. We find that for the EBI, the maximum violation leads to the unique behaviour having joint probabilities of the following form
\begin{eqnarray}\label{maxp3}
p(a,b|A_{3,i},B_{3,y},\rho_{AB})&=& Tr[\rho_{AB} \ (\Pi_{A_{3,i}}^{a}\otimes\Pi_{B_{3,y}}^{b})]\nonumber\\
&=&\frac{1}{4}(1\pm\frac{1}{\sqrt{3}})
\end{eqnarray}
with maximum joint probability $p_{3}^{\ast}=\frac{1}{4}(1+\frac{1}{\sqrt{3}})$ and consequently, the guaranteed amount of randomness is given by
\begin{equation}
\qty(R_{min})_{n=3} = -\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] = 1.3425 \ bits
\end{equation}
Note that this amount of guaranteed randomness is greater than that obtained in the CHSH case. Hence, for the generation of a guaranteed amount of randomness, the use of the EBI is more advantageous than the CHSH inequality. Therefore, it seems that increasing the number of measurement settings may provide an advantage in the generation of certified randomness. In the following, we proceed to evaluate the same for $n=4$.
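The EBI numbers follow directly from Eq.~(\ref{obs3}) and the singlet, and can be checked numerically. The sketch below is illustrative; the overall sign of the Bell value depends on outcome-labelling conventions, so only its magnitude ($4\sqrt{3}=2^{n-1}\sqrt{n}$ for $n=3$) is compared.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # |psi^->
rho = np.outer(psi, psi.conj())

# Observables of Eq. (obs3) and the sign pattern of the EBI (Eq. bell3)
s3 = np.sqrt(3)
A = [(X + Y + Z) / s3, (X + Y - Z) / s3, (X - Y + Z) / s3, (-X + Y + Z) / s3]
B = [X, Y, Z]
signs = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]

# Bell value; its magnitude is 4*sqrt(3) (a global sign is absorbed by
# relabelling Alice's outcomes).
bell = sum(s * np.real(np.trace(rho @ np.kron(Ai, Bj)))
           for Ai, row in zip(A, signs) for Bj, s in zip(B, row))

# All 48 joint probabilities equal (1 +/- 1/sqrt(3))/4; the maximum is p_3^*.
p_star = max(np.real(np.trace(rho @ np.kron((I2 + a * Ai) / 2,
                                            (I2 + b * Bj) / 2)))
             for a, b in product((1, -1), repeat=2) for Ai in A for Bj in B)
```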
\subsection{Evaluation of Certified Randomness for $n=4$} \label{cr4}
The Bell inequality $\mathcal{B}_n$ for $n=4$ is given as follows
\begin{eqnarray}\label{bell4}
\mathcal{B}_{4} &=& A_{4,1} \otimes\left( B_{4,1}+B_{4,2}+B_{4,3}+B_{4,4}\right) \nonumber\\
&+& A_{4,2} \otimes\left( B_{4,1}+B_{4,2}+B_{4,3}-B_{4,4}\right)\nonumber \\
&+& A_{4,3} \otimes\left( B_{4,1}+B_{4,2}-B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,4} \otimes\left( B_{4,1}-B_{4,2}+B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,5} \otimes\left( -B_{4,1}+B_{4,2}+B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,6} \otimes\left( B_{4,1}+B_{4,2}-B_{4,3}-B_{4,4}\right)\nonumber\\
&+& A_{4,7} \otimes\left( B_{4,1}-B_{4,2}+B_{4,3}-B_{4,4}\right)\nonumber\\
&+& A_{4,8} \otimes\left( B_{4,1}-B_{4,2}-B_{4,3}+B_{4,4}\right) \ ; \quad \langle\mathcal{B}_{4}\rangle_{L} \leq \ 12
\end{eqnarray}
It has been shown that if Alice and Bob share a single copy of a maximally entangled state, the Bell inequality given by Eq.~(\ref{bell4}) is optimised if Bob's observables satisfy the conditions $\{B_{4,1},B_{4,2}\}=\{B_{4,1},B_{4,3}\}=\{B_{4,1},B_{4,4}\}=\{B_{4,2},B_{4,3}\}=\{B_{4,2},B_{4,4}\}=0$ and $\{B_{4,3},B_{4,4}\}=\pm2\mathbb{I}$. Then, by using such constraint relations, we construct the following observables in the two-dimensional Hilbert space $(\mathcal{H}^2)$
\begin{eqnarray}\label{nobs4}
&& B_{4,1} = \sigma_{x} \ ; \ \ B_{4,2} = \sigma_{y} \ ; \ \ B_{4,3} = B_{4,4} = \sigma_{z} \nonumber \\
&& A_{4,i}=\frac{1}{\sqrt{N_i}}\sum\limits_{y=1}^{4} (-1)^{x^i_y} B_{4,y}
\end{eqnarray}
where $N_i =2 \ \forall i \in \{2,3,7,8\}$ and $N_i =6 \ \forall i \in \{1,4,5,6\}$.
Then one can evaluate the behaviour $\vec{\mathbb{P}}_{obs}^4\equiv \{p(a,b|A_{4,i},B_{4,y},\rho_{AB})\}\in\mathbb{R}^{128}$ corresponding to the observables given by Eq.~(\ref{nobs4}) and a single copy of a maximally entangled state. The maximum Bell value in this case is $4(\sqrt{2}+\sqrt{6})$. The maximum joint probability $(p^{\prime}_4)^{\ast}$ in this scenario is then given by
\begin{equation}\label{nmaxp4}
(p^{\prime}_4)^{\ast}=\frac{1}{12} \qty(3+\sqrt{6})
\end{equation}
Subsequently, the amount of randomness $\qty(R^{\prime})_{n=4}$ is given by
\begin{equation}\label{nr4}
\qty(R^{\prime})_{n=4}=-\log_2\qty[\frac{1}{12} \qty(3+\sqrt{6})] \approx 1.1388 \ bits
\end{equation}
It is important to note that the randomness $\qty(R^{\prime})_{n=4}$ evaluated from a single copy of a maximally entangled state does not come with the required security of the bound. This is because a single copy of a maximally entangled state provides only a sub-optimal violation of the concerned Bell inequality, and thus the behaviour leading to such a Bell violation is not unique. Hence, in order to evaluate the DI guaranteed bound of randomness, one has to consider all possible convex combinations of local and nonlocal behaviours leading to the same Bell violation of $4(\sqrt{2} + \sqrt{6})$; for such a sub-optimal Bell violation, one has to invoke the numerical method proposed in \cite{Silleras2014}. However, although the amount of randomness evaluated here does not correspond to the DI secure guaranteed bound, it serves the purpose of our work: for higher settings, many copies of the maximally entangled state provide an advantage in the generation of a guaranteed amount of randomness over a single copy.
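The single-copy numbers quoted above can be checked numerically. The sketch below is illustrative, not from the paper: it builds Bob's qubit observables of Eq.~(\ref{nobs4}), Alice's normalised combinations, and the singlet; as before, the overall sign of the Bell value depends on outcome-labelling conventions, so only its magnitude is compared.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet
rho = np.outer(psi, psi.conj())

# Sign patterns (-1)^{x^i_y} read off the n = 4 Bell functional (Eq. bell4)
signs = [(1, 1, 1, 1), (1, 1, 1, -1), (1, 1, -1, 1), (1, -1, 1, 1),
         (-1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1), (1, -1, -1, 1)]
B = [X, Y, Z, Z]  # B_{4,3} = B_{4,4} = sigma_z in dimension two

# A_{4,i} = (1/sqrt(N_i)) sum_y (-1)^{x^i_y} B_{4,y}; the normalisation
# sqrt(Tr[M^2]/2) reproduces N_i = 2 or 6 automatically.
A = []
for row in signs:
    M = sum(s * Bi for s, Bi in zip(row, B))
    A.append(M / np.sqrt(np.trace(M @ M).real / 2))

bell = sum(s * np.real(np.trace(rho @ np.kron(Ai, Bj)))
           for Ai, row in zip(A, signs) for Bj, s in zip(B, row))
p_star = max(np.real(np.trace(rho @ np.kron((I2 + a * Ai) / 2,
                                            (I2 + b * Bj) / 2)))
             for a, b in product((1, -1), repeat=2) for Ai in A for Bj in B)
```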
The randomness obtained for a single copy of the maximally entangled two-qubit state is less than that obtained in both the CHSH and Elegant cases. This is because the optimal quantum value of $\mathcal{B}_4$ is $\langle\mathcal{B}_{4}\rangle_{Q}^{opt}=16$, which is greater than the value attained by a single copy of a maximally entangled state. It is crucial to remark here that although the constraint relations for Alice's observables required for achieving $\langle\mathcal{B}_{4}\rangle_{Q}^{opt}=16$ remain the same, Bob's observables are now required to be mutually anti-commuting, which differs from the single-copy case. Since four mutually anti-commuting observables cannot exist in a two-dimensional Hilbert space, at least two copies of a maximally entangled state must be shared between Alice and Bob, with local Hilbert space dimension $4$. Thus, the Bell functional $\mathcal{B}_4$ is not optimized by a single copy of a maximally entangled two-qubit state.
We now construct the necessary observables in local dimension $4$ as follows.
\begin{eqnarray}\label{obs4}
&& B_{4,1}=\sigma_{x} \otimes \sigma_{x} \ ; \ \ B_{4,2} = \sigma_{x} \otimes \sigma_{y} \nonumber \\
&& B_{4,3} = \sigma_{x} \otimes \sigma_{z} \ ; \ \ B_{4,4} = \sigma_{y} \otimes \mathbb{I}_2 \nonumber \\
&&A_{4,i} = \frac{1}{2} \sum\limits_{y=1}^{4} (-1)^{x^i_y} B_{4,y}
\end{eqnarray}
The state for which $\mathcal{B}_4$ is optimized is given by
\begin{equation}\label{state4}
\rho_{AB}^{\otimes 2}= \rho_{AB} \otimes \rho_{AB}
\end{equation}
In this case, the maximum joint probability ($p^{\ast}_4$) is given by
\begin{equation}\label{pmax4}
p^{\ast}_4 = 3/8
\end{equation}
which in turn gives the amount of DI certified randomness as follows
\begin{equation}
\qty(R_{min})_{n=4} = -\log_2 (3/8) \approx 1.4150 \ bits
\end{equation}
Note that the amount of randomness for two copies of the maximally entangled two-qubit state is significantly greater than that obtained for a single copy. Thus, more than one copy of the bipartite maximally entangled state provides an advantage in the generation of certified randomness by using such Bell inequalities.
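A numerical check of the two-copy optimum is sketched below. For convenience it uses the maximally entangled state of two four-dimensional systems, $\ket{\Phi}=\frac{1}{2}\sum_{i}\ket{ii}$, which equals two Bell pairs up to qubit reordering, and takes Alice's operators as the complex conjugates of the normalised combinations; this conjugation is an implementation choice that absorbs the basis convention of $\ket{\Phi}$ and is equivalent to the construction in the text up to local unitaries.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
I4 = np.eye(4)

# Four mutually anti-commuting observables on Bob's side (Eq. obs4)
B = [np.kron(X, X), np.kron(X, Y), np.kron(X, Z), np.kron(Y, I2)]
signs = [(1, 1, 1, 1), (1, 1, 1, -1), (1, 1, -1, 1), (1, -1, 1, 1),
         (-1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1), (1, -1, -1, 1)]

# Alice: complex conjugate of (1/2) sum_y (-1)^{x^i_y} B_{4,y}
# (conjugation matches the |Phi> convention below; A_i remains Hermitian
# with A_i^2 = I because the B's anti-commute).
A = [np.conj(sum(s * Bi for s, Bi in zip(row, B)) / 2) for row in signs]

# |Phi> = (1/2) sum_i |ii> on C^4 x C^4: two Bell pairs up to reordering.
phi = np.zeros(16, dtype=complex)
for i in range(4):
    phi[4 * i + i] = 0.5
rho = np.outer(phi, phi.conj())

bell = sum(s * np.real(np.trace(rho @ np.kron(Ai, Bj)))
           for Ai, row in zip(A, signs) for Bj, s in zip(B, row))
p_star = max(np.real(np.trace(rho @ np.kron((I4 + a * Ai) / 2,
                                            (I4 + b * Bj) / 2)))
             for a, b in product((1, -1), repeat=2) for Ai in A for Bj in B)
```

Here the Bell value reaches the optimum $16 = 2^{n-1}\sqrt{n}$ for $n=4$, with every joint probability equal to $\frac{1}{4}(1\pm\frac{1}{2})$, so the maximum is $3/8$.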
We further extend our study to the $n=5$ and $n=6$ cases to demonstrate how an increase in the number of shared maximally entangled two-qubit states provides more randomness than a smaller number of them. For $n=5$, the Bell functional $\mathcal{B}_5$ is also optimized for a pair of maximally entangled two-qubit states, as in the $n=4$ case. Here, the amount of randomness for a single copy of the two-qubit entangled state is $(2-\log_2[1+\frac{\sqrt{2}+1}{\sqrt{2 \sqrt{2}+5}}])\approx 1.1025$ bits; note that for a single copy, the amount of randomness is smaller than for $n<5$. However, for the pair of maximally entangled two-qubit states, the amount of DI certified randomness is $(2-\log_2[1+\frac{1}{\sqrt{5}}])\approx 1.4667$ bits, which is greater than that obtained for $n=4$.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[legend pos=south east, legend cell align=left, enlargelimits=false, xlabel={Number of Bob's measurement settings ($n$)}, ylabel style ={align=center}, ylabel= {Amount of certified randomness ($R_n)$ \\ (in bits)}, xticklabel style={
/pgf/number format/fixed, /pgf/number format/fixed zerofill,
/pgf/number format/precision=0
}, scaled ticks=false, xtick={2,4,6,8,10,12,14,16,18,20}, yticklabel style={
/pgf/number format/fixed, /pgf/number format/fixed zerofill,
/pgf/number format/precision=4
}, scaled ticks=false, ytick={ 0.9,1.0374, 1.2284,1.4, 1.6,1.8}, xmin=2, xmax=20, ymin=0.9, ymax=1.8
]
\addplot[mark=*,
mark size=2pt,color=red]
table[meta=R]
{rNqubit.txt};
\addlegendentry{ \textit{a} : single copy }
\addplot[mark=triangle*, mark size = 2.5 pt, color = blue, dashed]
table[meta=R]
{rN.txt};
\addlegendentry{ \textit{b} : $m$-copies}
\node[above] at (20,300) {$a$};
\node[above] at (100,740) {$b$};
\end{axis}
\end{tikzpicture}
\caption{The figure illustrates the variation of the amount of certified randomness $(R_{n})$ corresponding to the optimal quantum Bell violation with the number of Bob's measurement settings. In particular, the red curve `$a$' shows the amount of randomness for different $n$ when a single copy of a maximally entangled two-qubit state is shared between Alice and Bob; the amount of randomness decreases as the number of measurement settings $n$ increases. On the other hand, if $m=\lfloor n/2 \rfloor$ copies of the bipartite maximally entangled state are shared between Alice and Bob, then the amount of randomness increases with $n$, as shown by the dashed blue curve `$b$'. Thus, $\lfloor n/2 \rfloor$ copies of the bipartite maximally entangled state provide an advantage over a single copy in the generation of certified randomness.}
\label{figrn}
\end{figure}
For $n=6$, the Bell inequality $\mathcal{B}_6$ is optimized for three bipartite maximally entangled states. Interestingly, in this case we find that the amount of DI guaranteed randomness that can be certified is $(2-\log_2[1+\frac{1}{\sqrt{6}}])\approx1.5061$ bits when three copies of the maximally entangled two-qubit state are shared between Alice and Bob. However, the amounts of randomness for a single copy and a pair of copies are $(2-\log_2[1+\frac{3}{\sqrt{10}}])\approx1.0375$ and $(2-\log_2[1+\frac{1}{\sqrt{2}}])\approx 1.2284$ bits, respectively. It is crucial to note that for a single copy or a pair of maximally entangled two-qubit states, the amount of randomness decreases significantly compared with the respective lower-$n$ cases. The comparative results for all the cases are illustrated in Table~\ref{table1}.
\begin{widetext}
\begin{table*}[ht]
\centering
\begin{tabular}{|C{0.9in}|C{1.8in}|C{1.6in}|C{1.6in}|}
\hline \cline{1-4}
\multicolumn{1}{|c|}{\multirow{2}{*}{Number of Bob's}}&
\multicolumn{3}{c|}{} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{measurement}}&
\multicolumn{3}{c|}{Amount of certified randomness for $m\leq \lfloor n/2 \rfloor$ copies of the maximally entangled two-qubit state} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{settings}}&
\multicolumn{3}{c|}{corresponding to the optimal quantum violation of the Bell inequality $\mathcal{B}_n$} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{($n$)}}&
\multicolumn{3}{c|}{} \\
\cline{2-4}
& $m=1$ & $m=2$ & $m=3$\\
\hline \cline{1-4}
&&&\\
$2$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$ &$-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$& $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$ \\
&&&\\
\hline
&&&\\
3 & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ \\
&&&\\
\hline
&&&\\
4 & $-\log_2\qty[\frac{1}{12} \qty(3+\sqrt{6})] \approx 1.1388$ & $-\log_2 \qty[\frac{3}{8}] \approx 1.4150$ & $-\log_2 \qty[\frac{3}{8}] \approx 1.4150$ \\
&&&\\
\hline
&&&\\
5 & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{\sqrt{2}+1}{\sqrt{2 \sqrt{2}+5}})]\approx 1.1025$ & $-\log_2\qty[\frac{1}{4}(1+\frac{1}{\sqrt{5}})]\approx 1.4667$& $-\log_2\qty[\frac{1}{4}(1+\frac{1}{\sqrt{5}})]\approx 1.4667$ \\
&&&\\
\hline
&&&\\
6 & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{3}{\sqrt{10}})]\approx1.0375$& $-\log_2\qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})]\approx 1.2284$ & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{6}})]\approx1.5061$ \\
&&&\\
\hline \cline{1-4}
\end{tabular}
\caption{The table shows the amount of certified randomness for different numbers of copies ($m\leq \lfloor n/2 \rfloor$) of the bipartite maximally entangled state, corresponding to different numbers of measurement settings $n \in\{2,3,4,5,6\}$. Clearly, with increasing $n$, the number of shared maximally entangled two-qubit states needs to be increased to obtain a higher amount of certified randomness. The maximum amount of certified randomness for a particular $n$ is achieved for $\lfloor n/2\rfloor$ copies of a maximally entangled two-qubit state, which provide the optimal quantum violation of the Bell inequality $\mathcal{B}_{n}$.}
\label{table1}
\end{table*}
\end{widetext}
Finally, we evaluate the amount of certified randomness for the arbitrary $n$-settings Bell inequality $(\mathcal{B}_n)$. The optimal quantum violation has earlier been derived to be $\langle\mathcal{B}_n\rangle_Q^{opt}=2^{n-1}\sqrt{n}$ \cite{Ghorai2018}, and this optimal bound is achieved for $m=\lfloor \frac{n}{2} \rfloor$ copies of the maximally entangled two-qubit state. The maximum joint probability in this case can be evaluated by the following simple argument. We recall the maximum success probability $p_Q^{\ast}(b=x_y^{\delta})$ of the communication game in \cite{Ghorai2018}, given by $p_Q^{\ast}(b=x_y^{\delta})=\frac{1}{2}(1+\frac{1}{\sqrt{n}})$. Note that $p_Q^{\ast}(b=x_y^{\delta})$ is the conditional probability of Bob's outcome $b$ for the measurement $B_{n,y}$ when Alice obtains the outcome $a$ after measuring $A_{n,i}$. Thus, using the Bayes rule of conditional probability, the maximum joint probability is given by
\begin{eqnarray}\label{pmaxn}
p^{*}(a,b|A_{n,1},B_{n,y},\rho_{AB}^{\otimes m})&=& p^{*}(a|A_{n,1},\rho_{AB}^{\otimes m}) \ p^{*}(b|a,A_{n,1},B_{n,y},\rho_{AB}^{\otimes m}) \nonumber\\
&=&p^{*}(a|A_{n,1},\rho_{AB}^{\otimes m}) \ p_Q^{*}(b=x_y^{\delta}) \nonumber\\
&=& \frac{1}{2} \ \times \ \frac{1}{2} \qty(1+\frac{1}{\sqrt{n}})
\end{eqnarray}
Subsequently, the amount of certified randomness is then given by
\begin{equation}
\qty(R_{min})_n=2-\log_2 \qty[1+\frac{1}{\sqrt{n}}]
\end{equation}
It is interesting to note here that for large $n$, it is possible to certify close to 2 bits of $R_{min}$ using the Bell functional $\mathcal{B}_n$. While close to two bits of the DI guaranteed bound of randomness in terms of $R_{max}$ has already been shown using different measurement contexts \cite{Acin2012, Law2014, Anderson2018, Woodhead2019}, the central motivation of this work is to show that using many copies of the maximally entangled state is more advantageous for generating randomness quantified in terms of $R_{min}$ than using a single copy.
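The closed-form bound can be tabulated directly. The helper below (illustrative) evaluates $(R_{min})_n = 2-\log_2(1+1/\sqrt{n})$ at the optimal violation, which is attained with $\lfloor n/2\rfloor$ shared Bell pairs, and exhibits the monotonic approach to 2 bits:

```python
import math

def r_min(n):
    """Guaranteed randomness (bits) at the optimal violation of B_n:
    (R_min)_n = 2 - log2(1 + 1/sqrt(n))."""
    return 2 - math.log2(1 + 1 / math.sqrt(n))

# R_min grows monotonically with n and approaches 2 bits from below.
values = {n: r_min(n) for n in (2, 3, 4, 5, 6, 100, 10**6)}
```

For $n=2,\dots,6$ this reproduces the entries of Table~\ref{table1} at $m=\lfloor n/2\rfloor$ copies, e.g. $r_{min}(4)\approx 1.4150$ bits.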
\section{Summary and outlook}\label{SecIV}
In the present work, we investigate the possibility of certifying more randomness, quantified in terms of $R_{min}$, from many copies of a maximally entangled two-qubit state than from a single copy. For this purpose, we first revisit the derivation of the optimal quantum value of the new family of Bell inequalities, which was earlier introduced in the context of a random-access-code communication game and later, importantly, also demonstrated to be a dimension witness. Next, by suitably quantifying the amount of randomness in terms of the min.-Entropy, we evaluate the amount of randomness for different values of $n$, specifically the amount corresponding to the optimal quantum violation of the invoked Bell inequality. Evaluating the randomness only at the optimal quantum violation is sufficient for the purpose of the present work.
In particular, we explicitly show that for the Bell inequality $\mathcal{B}_{n}$ with $n=2$ and $n=3$, the guaranteed amount of randomness corresponding to the optimal quantum violation is obtained when a single copy of the maximally entangled state is shared. We then show that the amount of randomness for a single copy of the maximally entangled state continues to decrease with increasing $n$ for $n\geq4$. Next, for both the $n=4$ and $n=5$ cases, the amounts of certified randomness increase if Alice and Bob share a pair of maximally entangled two-qubit states instead of a single copy; this amount again decreases for $n\geq6$. Moreover, for $n=6$, the maximum amount of certified randomness is achieved when Alice and Bob share three copies of the maximally entangled two-qubit state. Hence, it follows from our demonstration that with an increasing number of measurement settings, the certifiable amount of randomness depends on the number of maximally entangled two-qubit states shared between Alice and Bob. Finally, we evaluate the maximum amount of $R_{min}$ corresponding to the optimal quantum value for an arbitrary number of measurement settings $n$; this amount is realized when Alice and Bob share $m=\lfloor n/2 \rfloor$ copies of the maximally entangled two-qubit state. Our findings are illustrated in Table~\ref{table1} and Fig.~\ref{figrn}.
While our current study offers a limited advantage in randomness generation compared with existing protocols, it may serve as a stepping stone towards near-future practical applications involving many copies of a maximally entangled state. In particular, the present study may lead to the DI certification of an unbounded amount of randomness using many copies of the maximally entangled state. Further, it opens up an important avenue for randomness expansion protocols, in which a short and weak random string is expanded into a long and provably secure certified random string. To date, randomness expansion protocols have been explored using either the CHSH inequality or Hardy relations \cite{Ramanathan2018, Colbeck2012, Li2021, Liu2021, Bhavsar2021}, which cannot certify more than a single copy of the bipartite maximally entangled state. Hence, it will be interesting to follow up on expansion as well as amplification protocols by invoking the new family of Bell inequalities $\mathcal{B}_n$, which has the capability of certifying more randomness from many copies of maximally entangled two-qubit states.
\section*{Acknowledgments}
We thank Souradeep Sasmal for his immense help in writing this paper. SSM acknowledges the UGC fellowship [Fellowship No. 16-9(June 2018)/2019(NET/CSIR)]. AKP acknowledges the support from the project DST/ICPS/QuST/Theme 1/2019/4. \linebreak
\section{Introduction} \label{SecI}
Randomness is an essential resource \cite{Pironio2007, Pironio2010, Colbeck2011, Gallego2013, Silleras2014, Bancal2014, Bierhorst2018, Liu2021} for various information processing tasks such as secure communication, key distribution, encryption schemes in cryptographic protocols, statistical sampling, and many others. In this regard, the certifiable as well as provably secure generation of random numbers is crucial. The existing and widely used random number generators (RNGs), such as pseudo-RNGs and true-RNGs, lack certification of the generated randomness \cite{Bera2017}. This is because a pseudo-RNG uses a given algorithm or arithmetic operation to produce a sequence of random numbers; its privacy thus rests on assumptions about the computational power of the adversary, as well as on the assumption that the adversary does not know the underlying arithmetic operations or algorithms used to produce the random numbers, making the generated pseudo-random sequence vulnerable to adversarial guessing attacks. A true-RNG relies on the complexity of the calculations or the lack of complete knowledge about the system, and exploits hard-to-predict physical phenomena for generating randomness. However, in principle, one can always predict the outcomes of such physical phenomena with certainty using classical mechanics and sufficient computational power \cite{Acin2016}.
Now, in quantum theory, the outcomes of a measurement performed on a quantum mechanical system are truly random. This randomness persists even if we have full knowledge of the preparation of the state of the system. Thus, such randomness does not rely on a lack of knowledge about the system or on the complexity of the calculations \cite{Acin2016}. However, the fundamental problem with the certification of such randomness is that we have to rely on the inner workings of the quantum device. Even if we assume that the quantum-RNG device \cite{collantes2017} is provided by a trusted supplier, it may deviate from its intended design because of unavoidable imperfections, aging, or tampering, leading to uncontrollable biases \cite{Pironio2018}. Therefore, to generate provably secure randomness, we have to certify the generated randomness in a device-independent (DI) way.
To this end, it is the empirical violation of a Bell inequality, together with the validity of the condition of signal locality, that guarantees the unpredictability of Bell-violating measurement outcomes, independent of the computational power of any adversary \cite{Masanes2006, Cavalcanti2012, ss2020}. Since the violation of a Bell inequality only requires the input-output statistics of an experiment, the certification of the generated randomness does not rely on the inner workings of the devices used. In \cite{Pironio2010}, it has been shown that the DI guaranteed bound of randomness is monotonic with the violation of the Clauser-Horne-Shimony-Holt (CHSH) \cite{CHSH1970} inequality. Further, by invoking the CHSH inequality, Colbeck \emph{et al.} \cite{Colbeck2011} introduced a protocol in which a short and weak random string can be expanded into a longer provably private random string, known as randomness expansion. Acin \emph{et al.} \cite{Acin2012} shed more light on the relationship between randomness, nonlocality, and entanglement by showing that more randomness can be produced from a small amount of nonlocality and from states with arbitrarily little entanglement. Recently, by invoking the Hardy \cite{Hardy1992} and Cabello-Liang \cite{Cabello2002, Liang2005} relations for nonlocality, it has been shown \cite{ss2020} that the quantitative relationship between randomness and the amount of nonlocality is more nuanced than previously thought: while the minimum guaranteed amount of randomness has a monotonic relationship with nonlocality, the maximum bound of randomness is quantitatively incommensurate with nonlocality.
It is important to note here that in the CHSH case, the maximum amount of guaranteed randomness that can be certified is 1.2284 bits \cite{Pironio2010} corresponding to the maximum violation of the CHSH inequality $(2\sqrt{2})$. The amount of certified randomness in the same experimental setup can not be increased even if one increases the dimension of the shared maximally entangled state. This is because the value of the CHSH function becomes optimized for the two-qubit system \cite{Cirelson1980, Supic2020, Pan2020}. Hence, increasing the dimension of the system does not provide any advantage in the generation of certified randomness. Now, it is the purpose of this manuscript to investigate the question of whether more than one copy of an entangled two-qubit state provides any advantage over a single copy for the generation of certified randomness. While at first glance, the question may seem to be obvious, deeper looks into it provide the deep-seated non-triviality. Of course, one may argue that pair of maximally entangled two-qubit states always provide a greater amount of randomness than that can be obtained from a single copy of it. Although it is trivial that pair of maximally entangled states indeed provide the quantitative advantage in the certified randomness, the non-trivial part is the certification of such a pair of entangled states. This is because, the standard CHSH inequality, as mentioned, does not provide the required certification since it can be shown that the quantum maximum of the CHSH inequality is attained for a two-qubit entangled state. Thus, the maximum violation will not be increased even if one increases the dimension of the system or use many copies of two-qubit entangled states. The Elegant Bell inequality (EBI) \cite{Gisin2007} where one party has four measurement settings and the other has three measurement settings, the maximum value is also reached for a two-qubit system \cite{Anderson2017}. 
Moreover, the $n$-settings chained Bell inequality also cannot certify more than one copy of a maximally entangled two-qubit state, for the same reason \cite{Supic2016}.
Thus, in a single experimental setup, it is not straightforward to guarantee a greater amount of provably secure certified randomness from many copies of a maximally entangled state than is obtained from a single copy. Against this backdrop, by invoking a family of many-settings Bell inequalities, we demonstrate that increasing the number of maximally entangled two-qubit states does provide an advantage in the generation of certified randomness. Such a family of inequalities was earlier introduced as a dimension witness \cite{Pan2020} in the context of random-access-code communication games.
This paper is organized as follows. In Sec.~\ref{SecII}, we briefly recapitulate the derivation of the optimal quantum bound of the Bell inequality without assuming the dimension of the state or of the measurements (Sec.~\ref{sosnop}). We then employ this Bell inequality for the DI certification of randomness. In Sec.~\ref{SecIII}, by suitably quantifying it, we evaluate the amount of certified randomness corresponding to the optimal quantum violation of the Bell inequality for an arbitrary number of measurement settings $n$. In particular, in Secs.~\ref{cr2}-\ref{cr4}, we explicitly evaluate the certified randomness when Alice and Bob share a single copy as well as more than one copy of a maximally entangled two-qubit state for $n\in \{2,3,4,5,6\}$. The obtained results are illustrated in Table~\ref{table1} and Fig.~\ref{figrn}. Finally, we conclude and discuss our proposal from two different viewpoints in which more than a single copy of a two-qubit maximally entangled state provides an advantage over a single copy (Sec.~\ref{SecIV}).
\section{A family of Bell inequalities and corresponding optimal quantum violations}\label{SecII}
\begin{figure}[ht]
\centering
{{\includegraphics[width=0.9\linewidth]{bell.pdf}}}
\caption{A bipartite Bell experiment involving $2^{n-1}$ settings for Alice and $n$ settings for Bob. }
\label{bs}
\end{figure}
For the sake of completeness, we briefly revisit the quantum optimization of a family of Bell inequalities that was earlier introduced in the analysis of the $n$-bit RAC communication game \cite{Ghorai2018}, where it was shown that the success probability crucially depends on the quantum violation of this Bell inequality. For our purpose of generating certified randomness, we invoke this inequality independently. Let us first introduce the scenario.
We consider a bipartite Bell experiment in which two space-like separated parties, Alice and Bob, perform measurements on their local subsystems. Alice performs one of $2^{n-1}$ dichotomic measurements, denoted by $A_{n,i} \in \{A_{n,1}, A_{n,2}, \ldots, A_{n,2^{n-1}}\}$, with outcomes denoted by $a \in \{0,1\}$. Bob performs one of $n$ dichotomic measurements, denoted by $B_{n,y} \in \{B_{n,1}, B_{n,2}, \ldots, B_{n,n}\}$, with outcomes denoted by $b\in \{0,1\}$. In this scenario, the following Bell functional has been proposed \cite{Ghorai2018}:
\begin{align}\label{nbell}
\mathcal{B}_{n} = \sum_{y=1}^{n}\qty(\sum_{i=1}^{2^{n-1}} (-1)^{x^i_y} A_{n,i})\otimes B_{n,y}
\end{align}
The term $x^i_{y}$ takes the value 0 or 1, fixed by the encoding scheme demonstrated in \cite{Ghorai2018, Pan2020, Pan2021}; for a given $i$, the bit $x^{i}_{y}$ fixes the sign $(-1)^{x^{i}_{y}}$ as follows. Consider an $n$-bit string $x^{\alpha}\in \{0,1\}^{n}$ with $\alpha\in \{1,2,\cdots,2^{n}\}$, written bitwise as $x^{\alpha}=x^{\alpha}_{y=1} x^{\alpha}_{y=2} x^{\alpha}_{y=3}\cdots x^{\alpha}_{y=n}$. For example, if $x^{\alpha} = 110\cdots10$ then $x^{\alpha}_{y=1} =1$, $x^{\alpha}_{y=2} =1$, $x^{\alpha}_{y=3} =0,\cdots, x^{\alpha}_{y=n-1} =1$, and $x^{\alpha}_{y=n} =0$. From these $2^{n}$ strings we select $2^{n-1}$ strings, denoted $x^{i}$, such that no two of them are bitwise complements of each other, i.e., $x^{i}\oplus_{2} x^{j}\neq 11\cdots1$ for any $i\neq j$. The index $i\in \{1,2,\cdots,2^{n-1}\}$ then labels Alice's inputs, while the bit position $y\in \{1,2,\cdots,n\}$ labels Bob's inputs.
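The string selection above can be sketched in a few lines. Keeping the strings whose leading bit is $0$ is one valid convention (an assumption for illustration; any set of pairwise non-complementary representatives is equivalent):

```python
from itertools import product

def sign_strings(n):
    """One representative from each complementary pair of n-bit strings:
    here, the 2^(n-1) strings whose first bit is 0 (an illustrative convention)."""
    return [s for s in product((0, 1), repeat=n) if s[0] == 0]

strings = sign_strings(4)
assert len(strings) == 2 ** 3          # 2^(n-1) Alice inputs for n = 4
# no two selected strings are bitwise complements of each other
for s in strings:
    assert tuple(1 - b for b in s) not in strings
```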
It has been shown \cite{Pan2020} that if Alice and Bob are space-like separated, the local bound of the Bell functional $\mathcal{B}_{n}$ is given by
\begin{equation}\label{local}
\langle\mathcal{B}_{n}\rangle_{L}= n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}
\end{equation}
where $\binom{m}{r}=\frac{m!}{r! \ (m-r)!}$ and $\lfloor m \rfloor$ is the floor function that takes as input a real number $m$, and gives as output the greatest integer less than or equal to $m$.
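The local bound in Eq.~(\ref{local}) can be cross-checked by brute force over Bob's deterministic assignments $b_y=\pm1$, since Alice's optimal deterministic reply to each setting is simply the sign of her total coefficient $\sum_y (-1)^{x^i_y} b_y$. A minimal sketch (the leading-bit-$0$ representatives are an assumed convention; the bound is unaffected by this choice):

```python
from itertools import product
from math import comb

def local_bound(n):
    """Brute-force local bound of B_n: max over deterministic b_y = +-1,
    with Alice replying with the sign of her coefficient (hence the abs)."""
    strings = [s for s in product((0, 1), repeat=n) if s[0] == 0]
    best = 0
    for b in product((-1, 1), repeat=n):
        val = sum(abs(sum((-1) ** x * by for x, by in zip(s, b))) for s in strings)
        best = max(best, val)
    return best

# agrees with n * C(n-1, floor((n-1)/2)) from Eq. (local)
for n in range(2, 7):
    assert local_bound(n) == n * comb(n - 1, (n - 1) // 2)
```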
Let a source $S$ emit an entangled two-qubit state $\rho_{AB}$. It is important to note here that there exist quantum correlations for which $\langle\mathcal{B}_{n}\rangle_{Q}=Tr[\rho_{AB} \ \mathcal{B}_{n}]>\langle\mathcal{B}_{n}\rangle_{L}$. In the following, the optimal quantum bound is derived by employing an elegant sum-of-squares (SOS) method.
\subsection{The optimal quantum bound of the Bell functional $\mathcal{B}_{n}$ } \label{sosnop}
Here we derive the optimal quantum bound of the Bell functional $\mathcal{B}_{n}$, independent of the dimension of the state as well as of the measurements. For this purpose, we utilize the SOS technique introduced in \cite{Pan2020}. We consider a positive semi-definite operator $\gamma_{n}$, which can be expressed as $\gamma_{n}=\beta_{n} \mathbb{I}_{d}-(\mathcal{B}_{n})_{Q}$, where $\beta_{n}$ is the optimal quantum value, obtained when $\langle \gamma_{n}\rangle$ equals zero. This can be proved by considering a set of operators $M_{n,i}$ that are linear functions of the observables $A_{n,i}$ and $B_{n,y}$, such that
\begin{align}\label{gamma}
\gamma_{n}=\sum\limits_{i=1}^{2^{n-1}} \frac{\omega_{n,i}}{2} \qty(M_{n,i})^{\dagger}\qty(M_{n,i})
\end{align}
where the operators $M_{n,i}$ and the quantities $\omega_{n,i}$ are given as follows
\begin{eqnarray}
\label{mi}
&&M_{n,i}=\frac{1}{\omega_{n,i}}\qty(\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y})-A_{n,i} \nonumber \\
&&\omega_{n,i}=\Big|\Big|\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}\ket{\psi}\Big|\Big|
\end{eqnarray}
where $||\cdot||$ denotes the Euclidean norm, given by $|| \mathcal{O}\ket{\psi}||=\sqrt{\bra{\psi}\mathcal{O}^{\dagger}\mathcal{O}\ket{\psi}}$.
Now, putting Eq. (\ref{mi}) into Eq. (\ref{gamma}) and by noting that $A_{n,i}^{\dagger} A_{n,i}=B_{n,y}^{\dagger} B_{n,y}=\mathbb{I}_{d} $, we obtain
\begin{eqnarray} \label{opt1}
\gamma_{n}&=&-(\mathcal{B}_{n})_{Q} + \sum\limits_{i=1}^{2^{n-1}}\left[ \frac{1}{2\omega_{n,i}}\left(\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}\right)^2 + \frac{\omega_{n,i}}{2} \mathbb{I}_d\right] \nonumber \\
&=& -(\mathcal{B}_{n})_{Q} + \sum\limits_{i=1}^{2^{n-1}}\omega_{n,i} \ \mathbb{I}_d \ \ \ \ \text{[from Eq.~(\ref{mi})]}
\end{eqnarray}
Therefore, it follows from Eq.~(\ref{opt1}) that the optimal quantum value of $(\mathcal{B}_{n})_{Q}$ is attained when $\langle\gamma_{n}\rangle_{Q}= \bra{\psi} \gamma_{n} \ket{\psi} = 0$, which in turn provides
\begin{equation}\label{optbn}
\langle \mathcal{B}_{n}\rangle_{Q}^{opt} = \max\left(\sum\limits_{i=1}^{2^{n-1}}\omega_{n,i}\right)
\end{equation}
Note that this optimal value is obtained under the following optimization conditions
\begin{equation}
M_{n,i}=0 \implies A_{n,i}= \frac{1}{\omega_{n,i}}\sum\limits_{y=1}^{n} (-1)^{x^i_y} B_{n,y}
\end{equation}
Now, in order to obtain the optimal quantum value $\langle \mathcal{B}_{n}\rangle_{Q}^{opt}$ from Eq.~(\ref{optbn}), we first evaluate $\sum_{i=1}^{2^{n-1}}\omega_{n,i}$ by invoking the following convex inequality
\begin{align}
\label{concav}
\sum\limits_{i=1}^{2^{n-1}}\omega_{n,i}\leq \sqrt{2^{n-1} \sum\limits_{i=1}^{2^{n-1}} (\omega_{n,i})^{2}}
\end{align}
It is crucial to remark here that in Eq.~(\ref{concav}), equality holds only when the $\omega_{n,i}$ are equal for all $i$.
Since the $B_{n,y}$ are dichotomic, the quantity $\omega_{n,i}$ can be written explicitly from Eq.~(\ref{mi}) as
\begin{eqnarray}\label{omega}
\omega_{n,i}&=& \Big[ n+ \langle \{(-1)^{x^i_1} B_{n,1}, \sum\limits_{y=2}^{n}(-1)^{x^i_y} B_{n,y}\} \rangle \nonumber\\
&+& \langle \{(-1)^{x^i_2} B_{n,2}, \sum\limits_{y=3}^{n}(-1)^{x^i_y} B_{n,y}\}\rangle + \cdots \cdots \nonumber\\
&+&\langle \{(-1)^{x^i_{n-1}} B_{n,n-1}, (-1)^{x^i_n} B_{n,n}\}\rangle\Big]^{\frac{1}{2}}
\end{eqnarray}
where $\{ \ , \ \}$ denotes the anti-commutation relation.
As already mentioned, equality in Eq.~(\ref{concav}) holds when all the $\omega_{n,i}$ are equal. From Eq.~(\ref{omega}) it can be shown that all $\omega_{n,i}$ are equal if Bob's observables are mutually anti-commuting, i.e., $\{B_{n,y},B_{n,y^{\prime}}\}=0$ for all $y\neq y^{\prime}\in \{1,2,\cdots,n\}$. This provides the maximum value $\omega_{n,i}=\sqrt{n}$. Therefore, from Eqs.~(\ref{optbn}) and (\ref{concav}), it is straightforward to obtain
\begin{equation}
\label{opt}
\langle\mathcal{B}_{n}\rangle_Q^{opt}= 2^{n-1}\sqrt{n}
\end{equation}
It is to be noted here that the optimal value is achieved when there exist $n$ mutually anti-commuting observables in Bob's local Hilbert space. This specifies the dimension of the global Hilbert space, $d=4^{m}$ with $m=\lfloor n/2 \rfloor$, as well as the number $m=\lfloor n/2 \rfloor$ of maximally entangled states required to achieve the optimal value in quantum theory.
Thus, for $n\geq 4$, a single copy of a maximally entangled two-qubit state does not suffice, and one requires a higher-dimensional system. For example, the optimal value of the Bell expression for $n=4$ requires at least two copies of a bipartite maximally entangled qubit state.
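The optimal value $2^{n-1}\sqrt{n}$ and the dimension requirement can be verified numerically by diagonalizing the Bell operator $\sum_i A_{n,i}\otimes C_{n,i}$ with $A_{n,i}=C_{n,i}/\sqrt{n}$ and $C_{n,i}=\sum_y (-1)^{x^i_y}B_{n,y}$, for concrete anti-commuting sets: single-qubit sets for $n=2,3$ (conventional choices assumed here) and the two-qubit set of Eq.~(\ref{obs4}) for $n=4$:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

# mutually anti-commuting sets for Bob: one qubit for n = 2, 3; two qubits for n = 4
B_sets = {2: [X, Z],
          3: [X, Y, Z],
          4: [kron(X, X), kron(X, Y), kron(X, Z), kron(Y, I2)]}

bell_value = {}
for n, Bs in B_sets.items():
    strings = [s for s in product((0, 1), repeat=n) if s[0] == 0]
    d = Bs[0].shape[0]
    T = np.zeros((d * d, d * d), dtype=complex)
    for s in strings:
        C = sum((-1) ** x * B for x, B in zip(s, Bs))
        T += np.kron(C / np.sqrt(n), C)     # A_i = C_i / sqrt(n) at the optimum
    bell_value[n] = np.max(np.abs(np.linalg.eigvalsh(T)))

for n in (2, 3, 4):
    assert np.isclose(bell_value[n], 2 ** (n - 1) * np.sqrt(n))
```

For $n=4$ the largest magnitude eigenvalue is $16$ on a $d=4$ local Hilbert space, while no single-qubit construction can reach it, since four anti-commuting observables do not fit in dimension two.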
\section{ Generation of certified Randomness from the new family of Bell inequalities} \label{SecIII}
It follows from the argument presented in the preceding Sec.~\ref{SecII} that in quantum theory there exist correlations for which $\langle\mathcal{B}_{n}\rangle_{L}<\langle\mathcal{B}_{n}\rangle_{Q}\leq 2^{n-1}\sqrt{n}$. Such Bell-violating quantum correlations cannot be reproduced by any predetermined local strategy between Alice and Bob. Thus, empirical violations of the proposed Bell inequalities certify the presence of inherent randomness in the observed statistics, and the purpose of this manuscript is to appropriately quantify this certified randomness.
In information theory, among the different measures of entropy, the min.-Entropy ($H_{\infty}$) \cite{Renner2009} is commonly used for the quantification of randomness. Here, to compare with the earlier relevant works, we also take the min.-Entropy as the quantifier of certified randomness. Since the min.-Entropy is determined only by the event occurring with the highest probability, independent of the probabilities of the other events, it quantifies the minimum unpredictability involved in the probability distribution \cite{Renner2009}. Thus, the min.-Entropy provides the most secure bound on the generated randomness. From now on, we refer to this bound as the guaranteed bound of randomness.
We quantify the amount of DI certified global randomness $\qty(R_{min})_n$ for a given Bell violation, $\langle\mathcal{B}_n(\vec{\mathbb{P}}_{obs}^{n})\rangle> n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}$, in terms of the min.-Entropy as follows
\begin{eqnarray} \label{randef}
\qty(R_{min})_n &=& \min\limits_{\vec{\mathbb{P}}_{obs}} H_{\infty}(a,b|A_{n,i},B_{n,y}) = -\log_2\qty[\max_{\vec{\mathbb{P}}_{obs}} p(a,b|A_{n,i},B_{n,y})] \nonumber\\
&& \ \ \ \text{subject to:} \nonumber\\
&& \ \ \ \text{(i)} \ \vec{\mathbb{P}}_{obs} \in \{p(a,b|A_{n,i},B_{n,y})\},\nonumber \\
&& \ \ \ \text{(ii)} \ p(a,b|A_{n,i},B_{n,y})=Tr[\rho_{AB} \ (\Pi_{A_{n,i} }^a \otimes \Pi_{B_{n,y}}^b)],\nonumber \\
&& \ \ \ \text{(iii)} \ \langle\mathcal{B}_{n}\rangle_Q=\left\langle\sum_{y=1}^{n}\sum_{i=1}^{2^{n-1}} (-1)^{x^i_y} A_{n,i}\otimes B_{n,y}\right\rangle = \epsilon \nonumber\\
&&\ \ \ \text{with} \ n \ \binom{n-1}{\lfloor \frac{n-1}{2}\rfloor}<\epsilon\leq 2^{n-1}\sqrt{n} \ ;
\end{eqnarray}
where $\vec{\mathbb{P}}_{obs}^{n} \in {\mathbb{R}^{n 2^{n+1}}}$ is a vector in the $(n 2^{n+1})$-dimensional real space that collects all observed joint probability distributions, known as the behaviour.
It is important to remark here that, for a given Bell violation, there may exist more than one behaviour. Thus, to ensure the security of the quantified randomness, the min.-Entropy needs to be minimized over all behaviours compatible with the given Bell violation.
In this regard, note that the amount of certified randomness can be quantified in two different ways \cite{ss2020}. To see this, consider first the simplest 2 parties-2 measurements per party-2 outcomes per measurement (2-2-2) scenario. In this scenario, there are four possible combinations of the pairs of measurement settings, $\{(A_1,B_1),(A_1,B_2),(A_2,B_1),(A_2,B_2)\}$, and one can evaluate the maximum joint probability corresponding to each combination, denoted by $P^{\ast}_{ij}$ for $i,j\in\{1,2\}$. The amount of randomness corresponding to each combination is then $R_{ij}=-\log_2\qty[P^{\ast}_{ij}]$. One can quantify the certified randomness in two ways: (i) by taking the minimum of the $R_{ij}$, which gives the DI guaranteed bound of randomness, $R_{min}=\min\limits_{i,j} R_{ij}$; and (ii) by taking the maximum, which gives the maximum amount of randomness, $R_{max}=\max\limits_{i,j} R_{ij}$. This distinction in the quantification of certified randomness is essential, since there are works \cite{Acin2012, Law2014, ss2020} that have used $R_{max}$ as the quantifier to show that close to two bits of randomness can be certified in the 2-2-2 scenario. Further, by invoking a suitable generalised measurement (POVM) scheme, it has been shown \cite{Anderson2018, Woodhead2019} that close to four bits of $R_{max}$ can be certified by the Elegant Bell inequality.
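The gap between $R_{min}$ and $R_{max}$ is easy to see numerically. The sketch below uses an illustrative partially entangled state and fixed (non-optimal) settings, which are assumptions for demonstration only:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def projector(obs, outcome):
    """Projector onto the (-1)^outcome eigenspace of a +-1-valued observable."""
    return (np.eye(2) + (-1) ** outcome * obs) / 2

# partially entangled state cos(t)|00> + sin(t)|11>  (illustrative choice)
theta = np.pi / 6
psi = np.cos(theta) * np.kron([1, 0], [1, 0]) + np.sin(theta) * np.kron([0, 1], [0, 1])
A = [Z, X]          # Alice's settings (illustrative)
B = [Z, X]          # Bob's settings (illustrative)

R = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        p_star = max(abs(psi.conj() @ np.kron(projector(A[i], a), projector(B[j], b)) @ psi)
                     for a in (0, 1) for b in (0, 1))
        R[i, j] = -np.log2(p_star)

r_min, r_max = R.min(), R.max()
assert r_max > r_min    # the two quantifiers differ for this behaviour
```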
Importantly, in our prescription for quantifying DI certified randomness, we take $R_{min}$ as the quantifier, as given by Eq.~(\ref{randef}), in line with the bound considered for the estimation of DI certified randomness by Pironio \emph{et al.} \cite{Pironio2010}. Moreover, a significant point is that, at the optimal violation of the Bell functional $\mathcal{B}_n$, the maximum joint probabilities corresponding to all combinations of measurement settings are found to be equal, so that in our treatment $R_{min}=R_{max}$. While one could also introduce a suitable POVM measurement scheme and/or employ higher-settings tilted Bell inequalities to certify a greater amount of randomness, such a line of study is beyond the scope of the present manuscript.
Now, since the quantum set of correlations is not a polytope, it is in general not possible to analytically evaluate the guaranteed bound of randomness by performing the optimization over all possible behaviours. However, for the optimal quantum violation the observed behaviour is unique, and it is then straightforward to evaluate the secure guaranteed bound of randomness. Hence, for evaluating the certified randomness in quantum theory, we restrict ourselves to the case in which the proposed Bell inequality is optimally violated.
Before we proceed to evaluate the amount of certified randomness, we again point out the feature under study: whether, by increasing the dimension of the system, it is possible to certify a greater amount of guaranteed randomness than is obtained in the CHSH case in the same experimental setup. Note that such an advantage cannot be revealed by using the CHSH inequality, the EBI, or the chained Bell inequality alone. The deep-seated significance of the proposed family of Bell inequalities is that the maximal quantum violation provides a dimension witness \cite{Pan2020}. Thus, many copies of maximally entangled two-qubit states can be certified from the optimal quantum violation of such an inequality by increasing the number of measurement settings. This, in turn, provides an advantage in provably secure randomness generation over the CHSH, elegant, or chained Bell inequalities. We are now in a position to evaluate the certified randomness corresponding to the optimal quantum violations of the Bell inequalities $\mathcal{B}_n$ for different values of $n$.
\subsection{Evaluation of certified randomness for $n=2$} \label{cr2}
Note that the $n=2$ case corresponds to the standard CHSH inequality. It has been shown (Sec.~\ref{sosnop}) that to ensure the DI maximal violation of the CHSH inequality, both Alice's and Bob's observables need to be anti-commuting. This maximum value is attained for the two-qubit maximally entangled state, and such anti-commuting observables can always be constructed in the local two-dimensional Hilbert space. Straightforward algebra then completely characterizes the unique behaviour $\vec{\mathbb{P}}_{obs}^2$, with the maximum joint probability $p_{2}^{\ast}$ given by
\begin{equation}
p_{2}^{\ast}= \frac{1}{4}(1+\frac{1}{\sqrt{2}})
\end{equation}
and subsequently, the amount of randomness is given by
\begin{equation}
\qty(R_{min})_{n=2} = -\log_2\left[\frac{1}{4}\left(1+\frac{1}{\sqrt{2}}\right)\right] \approx 1.2284 \ bits
\end{equation}
Note that the amount of randomness for $n=2$, $\qty(R_{min})_{n=2}=1.2284$ bits, corresponding to the optimal violation of the Bell inequality (here CHSH), coincides with that obtained earlier in \cite{Pironio2010}.
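As a check, the unique optimal CHSH behaviour can be generated with $\ket{\phi^{+}}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ and one conventional choice of optimal settings (an assumption for illustration; any anti-commuting pair works):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def projector(obs, outcome):
    """Projector onto the (-1)^outcome eigenspace of a +-1-valued observable."""
    return (np.eye(2) + (-1) ** outcome * obs) / 2

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
A = [Z, X]                                        # Alice (conventional choice)
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob (anti-commuting pair)

# all 16 joint probabilities of the optimal behaviour
probs = [abs(phi_plus.conj() @ np.kron(projector(Ai, a), projector(Bj, b)) @ phi_plus)
         for Ai in A for Bj in B for a in (0, 1) for b in (0, 1)]
p_star = max(probs)
R_min = -np.log2(p_star)

assert np.isclose(p_star, (1 + 1 / np.sqrt(2)) / 4)
assert np.isclose(R_min, 1.2284, atol=1e-4)       # ~1.2284 bits
```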
\subsection{Evaluation of Certified Randomness for $n=3$} \label{cr3}
The Bell functional $\mathcal{B}_n$ for $n=3$ corresponds to the EBI, whose local bound reads
\begin{eqnarray}\label{bell3}
\mathcal{B}_{3} &=& A_{3,1} \otimes \left(B_{3,1}+B_{3,2}+B_{3,3}\right) \nonumber \\
&+& A_{3,2} \otimes \left(B_{3,1}+B_{3,2}-B_{3,3}\right) \nonumber\\
&+&A_{3,3} \otimes \left(B_{3,1}-B_{3,2}+B_{3,3}\right) \nonumber \\
&+& A_{3,4} \otimes \left(-B_{3,1}+B_{3,2}+B_{3,3}\right) \ \overset{L}{\leq} \ 6
\end{eqnarray}
It is interesting to note here that while the optimal quantum violation of the EBI cannot be increased by increasing the system's dimension, this Bell inequality nevertheless provides a greater amount of certified randomness than that obtained in the CHSH case.
The optimal quantum violation $\langle\mathcal{B}_3\rangle_{Q}$ of the EBI corresponds to unique statistics, which fixes the minimal requirement on the shared state to be a maximally entangled two-qubit state and also constrains the observables of both Alice and Bob. From the conditions for quantum optimality, we construct the desired observables in the two-dimensional local Hilbert space as follows
\begin{eqnarray}\label{obs3}
&&A_{3,1}= (\sigma_{x} + \sigma_{y}+\sigma_{z})/{\sqrt{3}} \ ; \ A_{3,2}=(\sigma_{x} + \sigma_{y} - \sigma_{z})/{\sqrt{3}}\nonumber\\
&& A_{3,3}= (\sigma_{x} - \sigma_{y} + \sigma_{z})/{\sqrt{3}} \ ; \ A_{3,4}=(-\sigma_{x} + \sigma_{y} + \sigma_{z})/{\sqrt{3}}\nonumber\\
&&B_{3,1}=\sigma_{x} \ \ ; \ \ B_{3,2}=\sigma_{y} \ \ ; \ \ B_{3,3}=\sigma_{z}
\end{eqnarray}
Now, by employing the observables given in Eq.~(\ref{obs3}) and the maximally entangled state $\rho_{AB}=\ket{\psi^{-}}\bra{\psi^{-}}$, where $\ket{\psi^{-}}=\frac{1}{\sqrt{2}}(\ket{01}-\ket{10})$, the observed behaviour $\vec{\mathbb{P}}_{obs}^{3}\equiv \{p(a,b|A_{3,i},B_{3,y},\rho_{AB})\}\in\mathbb{R}^{48}$ can be evaluated. We find that for the EBI, the maximum violation leads to the unique behaviour with joint probabilities of the form
\begin{eqnarray}\label{maxp3}
p(a,b|A_{3,i},B_{3,y},\rho_{AB})&=& Tr[\rho_{AB} (\Pi^{a}_{A_{3,i}}\otimes\Pi^{b}_{B_{3,y}})]\nonumber\\
&=&\frac{1}{4}\qty(1\pm\frac{1}{\sqrt{3}})
\end{eqnarray}
with maximum joint probability $p_{3}^{\ast}=\frac{1}{4}(1+\frac{1}{\sqrt{3}})$ and consequently, the guaranteed amount of randomness is given by
\begin{equation}
\qty(R_{min})_{n=3} = -\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425 \ bits
\end{equation}
It is to be noted here that the amount of guaranteed randomness is greater than that obtained in the CHSH case. Hence, for the generation of a guaranteed amount of randomness, the EBI is more advantageous than the CHSH inequality. It thus appears that increasing the number of measurement settings may provide an advantage in the generation of certified randomness. In the following, we evaluate the same for $n=4$.
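These numbers can be reproduced directly from Eq.~(\ref{obs3}) and the singlet; since the overall sign of $\langle\mathcal{B}_3\rangle$ depends on the sign convention of the observables, we compare magnitudes:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def projector(obs, outcome):
    return (np.eye(2) + (-1) ** outcome * obs) / 2

psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
A = [(X + Y + Z) / np.sqrt(3), (X + Y - Z) / np.sqrt(3),
     (X - Y + Z) / np.sqrt(3), (-X + Y + Z) / np.sqrt(3)]   # Eq. (obs3)
B = [X, Y, Z]
signs = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]     # rows of the EBI

def expval(op):
    return (psi_minus.conj() @ op @ psi_minus).real

# the 48 joint probabilities and the Bell value
probs = [abs(psi_minus.conj() @ np.kron(projector(Ai, a), projector(Bj, b)) @ psi_minus)
         for Ai in A for Bj in B for a in (0, 1) for b in (0, 1)]
bell = sum(s * expval(np.kron(Ai, Bj)) for Ai, row in zip(A, signs) for s, Bj in zip(row, B))

p_star = max(probs)
assert len(probs) == 48
assert np.isclose(p_star, (1 + 1 / np.sqrt(3)) / 4)
assert np.isclose(abs(bell), 4 * np.sqrt(3))        # |<B_3>| at the optimum
assert np.isclose(-np.log2(p_star), 1.3425, atol=1e-4)
```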
\subsection{Evaluation of Certified Randomness for $n=4$} \label{cr4}
The Bell functional $\mathcal{B}_n$ for $n=4$, with local bound $12$, is given as follows
\begin{eqnarray}\label{bell4}
\mathcal{B}_{4} &=& A_{4,1} \otimes\left( B_{4,1}+B_{4,2}+B_{4,3}+B_{4,4}\right) \nonumber\\
&+& A_{4,2} \otimes\left( B_{4,1}+B_{4,2}+B_{4,3}-B_{4,4}\right)\nonumber \\
&+& A_{4,3} \otimes\left( B_{4,1}+B_{4,2}-B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,4} \otimes\left( B_{4,1}-B_{4,2}+B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,5} \otimes\left( -B_{4,1}+B_{4,2}+B_{4,3}+B_{4,4}\right)\nonumber \\
&+& A_{4,6} \otimes\left( B_{4,1}+B_{4,2}-B_{4,3}-B_{4,4}\right)\nonumber\\
&+& A_{4,7} \otimes\left( B_{4,1}-B_{4,2}+B_{4,3}-B_{4,4}\right)\nonumber\\
&+& A_{4,8} \otimes\left( B_{4,1}-B_{4,2}-B_{4,3}+B_{4,4}\right) \ \overset{L}{\leq} \ 12
\end{eqnarray}
It has been shown that if Alice and Bob share a single copy of a maximally entangled state, the Bell functional in Eq.~(\ref{bell4}) is optimised if Bob's observables satisfy the conditions $\{B_{4,1},B_{4,2}\}=\{B_{4,1},B_{4,3}\}=\{B_{4,1},B_{4,4}\}=\{B_{4,2},B_{4,3}\}=\{B_{4,2},B_{4,4}\}=0$ and $\{B_{4,3},B_{4,4}\}=\pm 2\mathbb{I}$. Using these constraint relations, we construct the following observables in the two-dimensional Hilbert space $(\mathcal{H}^2)$
\begin{eqnarray}\label{nobs4}
&& B_{4,1} = \sigma_{x} \ ; \ \ B_{4,2} = \sigma_{y} \ ; \ \ B_{4,3} = B_{4,4} = \sigma_{z} \nonumber \\
&& A_{4,i}=\frac{1}{\sqrt{N_i}}\sum\limits_{y=1}^{4} (-1)^{x^i_y} B_{4,y}
\end{eqnarray}
where $N_i = 2$ for $i\in\{2,3,7,8\}$ and $N_i = 6$ for $i\in\{1,4,5,6\}$.
One can then evaluate the behaviour $\vec{\mathbb{P}}_{obs}^4\equiv \{p(a,b|A_{4,i},B_{4,y},\rho_{AB})\}\in\mathbb{R}^{128}$ corresponding to the observables given by Eq.~(\ref{nobs4}) and a single copy of a maximally entangled state. The maximum Bell value in this case is $4(\sqrt{2}+\sqrt{6})$, and the maximum joint probability $(p^{\prime}_4)^{\ast}$ is given by
\begin{equation}\label{nmaxp4}
(p^{\prime}_4)^{\ast}=\frac{1}{12} \qty(3+\sqrt{6})
\end{equation}
Subsequently, the amount of randomness $\qty(R^{\prime})_{n=4}$ is given by
\begin{equation}\label{nr4}
\qty(R^{\prime})_{n=4}=-\log_2\qty[\frac{1}{12} \qty(3+\sqrt{6})] \approx 1.1388 \ bits
\end{equation}
It is important to note that the randomness $\qty(R^{\prime})_{n=4}$ evaluated from a single copy of a maximally entangled state does not carry the required security of a guaranteed bound. This is because a single copy of a maximally entangled state provides only a sub-optimal violation of the Bell inequality, and the behaviour leading to such a Bell violation is not unique. Hence, in order to evaluate the DI guaranteed bound of randomness, one has to consider all possible convex combinations of local and nonlocal behaviours producing the same Bell violation of $4(\sqrt{2} + \sqrt{6})$; for such a sub-optimal Bell violation, this requires the numerical method proposed in \cite{Silleras2014}. However, although the amount of randomness evaluated here does not correspond to the DI secure guaranteed bound, it serves the purpose of our work: for higher settings, many copies of the maximally entangled state provide an advantage in the generation of a guaranteed amount of randomness over a single copy.
The randomness obtained for a single copy of the maximally entangled two-qubit state is less than that obtained in both the CHSH and Elegant cases. This is because the optimal quantum value of $\mathcal{B}_4$ is $\langle\mathcal{B}_{4}\rangle_{Q}^{opt}=16$, which is greater than the value attainable with a single copy of a maximally entangled state. It is crucial to remark that although the constraint relations for Alice's observables are the same as for achieving $\langle\mathcal{B}_{4}\rangle_{Q}^{opt}=16$, the constraint relations for Bob's observables now require them to be mutually anti-commuting, in contrast to the single-copy case. Since four mutually anti-commuting observables do not exist in a two-dimensional Hilbert space, at least two copies of a maximally entangled state, with local Hilbert space dimension $4$, must be shared between Alice and Bob. Thus, the Bell functional $\mathcal{B}_4$ is not optimized by a single copy of a maximally entangled two-qubit state.
We now construct the necessary observables for local dimension $4$:
\begin{eqnarray}\label{obs4}
&& B_{4,1}=\sigma_{x} \otimes \sigma_{x} \ ; \ \ B_{4,2} = \sigma_{x} \otimes \sigma_{y} \nonumber \\
&& B_{4,3} = \sigma_{x} \otimes \sigma_{z} \ ; \ \ B_{4,4} = \sigma_{y} \otimes \mathbb{I}_2 \nonumber \\
&&A_{4,i} = \frac{1}{2} \sum\limits_{y=1}^{4} (-1)^{x^i_y} B_{4,y}
\end{eqnarray}
The state for which $\mathcal{B}_4$ is optimized is given by
\begin{equation}\label{state4}
\rho_{AB}^{\otimes 2}= \rho_{AB} \otimes \rho_{AB}
\end{equation}
In this case, the maximum joint probability ($p^{\ast}_4$) is given by
\begin{equation}\label{pmax4}
p^{\ast}_4 = 3/8
\end{equation}
which in turn gives the amount of DI certified randomness as follows
\begin{equation}
\qty(R_{min})_{n=4} = -\log_2 (3/8) \approx 1.4150 \ bits
\end{equation}
Note that the amount of randomness for two copies of the maximally entangled two-qubit state is significantly greater than that obtained for a single copy. Thus, more than one copy of the bipartite maximally entangled state provides an advantage in the generation of certified randomness using this Bell inequality.
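That two copies achieve the optimum can be checked by diagonalizing the $16\times16$ Bell operator built from Eq.~(\ref{obs4}): its top eigenvalue is $16$, and the corresponding eigenstate (unique, since the optimal behaviour is unique) reproduces $p^{\ast}_4=3/8$. A numerical sketch:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

def projector(obs, outcome):
    return (np.eye(obs.shape[0]) + (-1) ** outcome * obs) / 2

B = [kron(X, X), kron(X, Y), kron(X, Z), kron(Y, I2)]        # Eq. (obs4)
strings = [s for s in product((0, 1), repeat=4) if s[0] == 0]
A = [sum((-1) ** x * Bj for x, Bj in zip(s, B)) / 2 for s in strings]

# Bell operator on the 16-dimensional joint space
T = sum(np.kron(Ai, (-1) ** x * Bj)
        for Ai, s in zip(A, strings) for x, Bj in zip(s, B))
evals, evecs = np.linalg.eigh(T)
assert np.isclose(evals[-1], 16)          # optimal value 2^3 * sqrt(4) = 16

psi = evecs[:, -1]                        # optimal state (two singlet copies up to LU)
p_star = max(abs(psi.conj() @ np.kron(projector(Ai, a), projector(Bj, b)) @ psi)
             for Ai in A for Bj in B for a in (0, 1) for b in (0, 1))
assert np.isclose(p_star, 3 / 8)
assert np.isclose(-np.log2(p_star), 1.4150, atol=1e-4)
```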
We further extend our study to the $n=5$ and $n=6$ cases to demonstrate how an increase in the number of shared maximally entangled two-qubit states provides more randomness than a smaller number of bipartite maximally entangled states. For $n=5$, the Bell functional $\mathcal{B}_5$ is optimized for a pair of maximally entangled two-qubit states, as in the $n=4$ case. Here, the amount of randomness for a single copy of a two-qubit entangled state is $(2-\log_2[1+\frac{\sqrt{2}+1}{\sqrt{2 \sqrt{2}+5}}])\approx 1.1025$ bits; note that for a single copy, the amount of randomness is lower than for $n<5$. However, for a pair of maximally entangled two-qubit states, the amount of DI certified randomness is $(2-\log_2[1+\frac{1}{\sqrt{5}}])\approx 1.4667$ bits, which exceeds that obtained for $n=4$.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[legend pos=south east, legend cell align=left, enlargelimits=false, xlabel={Number of Bob's measurement settings ($n$)}, ylabel style ={align=center}, ylabel= {Amount of certified randomness ($R_n)$ \\ (in bits)}, xticklabel style={
/pgf/number format/fixed, /pgf/number format/fixed zerofill,
/pgf/number format/precision=0
}, scaled ticks=false, xtick={2,4,6,8,10,12,14,16,18,20}, yticklabel style={
/pgf/number format/fixed, /pgf/number format/fixed zerofill,
/pgf/number format/precision=4
}, scaled ticks=false, ytick={ 0.9,1.0374, 1.2284,1.4, 1.6,1.8}, xmin=2, xmax=20, ymin=0.9, ymax=1.8
]
\addplot[mark=*,
mark size=2pt,color=red]
table[meta=R]
{rNqubit.txt};
\addlegendentry{ \textit{a} : single copy }
\addplot[mark=triangle*, mark size = 2.5 pt, color = blue, dashed]
table[meta=R]
{rN.txt};
\addlegendentry{ \textit{b} : $m$-copies}
\node[above] at (20,300) {$a$};
\node[above] at (100,740) {$b$};
\end{axis}
\end{tikzpicture}
\caption{The figure illustrates the variation of the amount of certified randomness $(R_{n})$, corresponding to the optimal quantum Bell violation, with the number of Bob's measurement settings. The red curve `$a$' shows the amount of randomness for different $n$ when a single copy of a maximally entangled two-qubit state is shared between Alice and Bob; the amount of randomness decreases as the number of measurement settings $n$ increases. On the other hand, if $m=\lfloor n/2 \rfloor$ copies of the bipartite maximally entangled state are shared between Alice and Bob, the amount of randomness increases with $n$, as shown by the dashed blue curve `$b$'. Thus, $\lfloor n/2 \rfloor$ copies of the bipartite maximally entangled state provide an advantage over a single copy in the generation of certified randomness.}
\label{figrn}
\end{figure}
For $n=6$, the Bell functional $\mathcal{B}_6$ is optimized for three bipartite maximally entangled states. Interestingly, in this case we find that the amount of DI guaranteed randomness that can be certified is $(2-\log_2[1+\frac{1}{\sqrt{6}}])\approx1.5061$ bits when three copies of the maximally entangled two-qubit state are shared between Alice and Bob. The amounts of randomness for a single copy and for a pair of copies are $(2-\log_2[1+\frac{3}{\sqrt{10}}])\approx1.0375$ bits and $(2-\log_2[1+\frac{1}{\sqrt{2}}])\approx 1.2284$ bits, respectively. It is crucial to note that for a single copy or a pair of maximally entangled two-qubit states, the amount of randomness decreases significantly compared with the corresponding lower-$n$ cases. The comparative results for all the cases are illustrated in Table~\ref{table1}.
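At the optimal violation, the maximum joint probability takes the form $p^{\ast}_n=\frac{1}{4}\qty(1+\frac{1}{\sqrt{n}})$, so the optimal-copy ($m=\lfloor n/2 \rfloor$) values quoted above and in Table~\ref{table1} all follow from $R_n=2-\log_2(1+1/\sqrt{n})$; a quick numerical check:

```python
import math

def R_opt(n):
    """Certified randomness (bits) at the optimal violation of B_n,
    reached with floor(n/2) copies of a maximally entangled two-qubit state."""
    return 2 - math.log2(1 + 1 / math.sqrt(n))

# quoted values for the optimal number of copies, n = 2..6
table = {2: 1.2284, 3: 1.3425, 4: 1.4150, 5: 1.4667, 6: 1.5061}
for n, r in table.items():
    assert abs(R_opt(n) - r) < 1e-4
```

The monotonic growth of $R_{opt}(n)$ with $n$ is exactly the advantage of increasing the number of shared copies.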
\begin{widetext}
\begin{table*}[ht]
\centering
\begin{tabular}{|C{0.9in}|C{1.8in}|C{1.6in}|C{1.6in}|}
\hline \cline{1-4}
\multicolumn{1}{|c|}{\multirow{2}{*}{Number of Bob's}}&
\multicolumn{3}{c|}{} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{measurement}}&
\multicolumn{3}{c|}{Amount of certified randomness for $m\leq \lfloor n/2 \rfloor$ copies of maximally entangled two-qubit state} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{settings}}&
\multicolumn{3}{c|}{corresponding to the optimal quantum violation of the Bell inequality $\mathcal{B}_n$} \\
\multicolumn{1}{|c|}{\multirow{2}{*}{($n$)}}&
\multicolumn{3}{c|}{} \\
\cline{2-4}
& $m=1$ & $m=2$ & $m=3$\\
\hline \cline{1-4}
&&&\\
$2$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$ &$-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$& $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})] \approx 1.2284$ \\
&&&\\
\hline
&&&\\
3 & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ & $-\log_2 \qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{3}})] \approx 1.3425$ \\
&&&\\
\hline
&&&\\
4 & $-\log_2\qty[\frac{1}{12} \qty(3+\sqrt{6})] \approx 1.1388$ & $-\log_2 \qty[\frac{3}{8}] \approx 1.4150$ & $-\log_2 \qty[\frac{3}{8}] \approx 1.4150$ \\
&&&\\
\hline
&&&\\
5 & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{\sqrt{2}+1}{\sqrt{2 \sqrt{2}+5}})]\approx 1.1025$ & $-\log_2\qty[\frac{1}{4}(1+\frac{1}{\sqrt{5}})]\approx 1.4667$& $-\log_2\qty[\frac{1}{4}(1+\frac{1}{\sqrt{5}})]\approx 1.4667$ \\
&&&\\
\hline
&&&\\
6 & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{3}{\sqrt{10}})]\approx1.0375$& $-\log_2\qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{2}})]\approx 1.2284$ & $-\log_2\qty[\frac{1}{4}\qty(1+\frac{1}{\sqrt{6}})]\approx1.5061$ \\
&&&\\
\hline \cline{1-4}
\end{tabular}
\caption{The amount of certified randomness for different numbers of copies ($m\leq \lfloor n/2 \rfloor$) of the bipartite maximally entangled state, for measurement settings $n \in\{2,3,4,5,6\}$. Clearly, with increasing $n$, the number of shared maximally entangled two-qubit states needs to be increased to obtain a higher amount of certified randomness. The maximum amount of certified randomness for a particular $n$ is achieved for $\lfloor n/2\rfloor$ copies of the maximally entangled two-qubit state, which provide the optimal quantum violation of the Bell inequality $\mathcal{B}_{n}$.}
\label{table1}
\end{table*}
\end{widetext}
Finally, we evaluate the amount of certified randomness for the arbitrary $n$-settings Bell inequality $(\mathcal{B}_n)$. The optimal quantum violation was earlier derived to be $\langle\mathcal{B}_n\rangle_Q^{opt}=2^{n-1}\sqrt{n}$ \cite{Ghorai2018}, and this optimal bound is achieved for $m=\lfloor \frac{n}{2} \rfloor$ copies of the maximally entangled two-qubit state. The maximum joint probability in this case can be evaluated by the following simple argument. For this purpose, we recall the maximum success probability $p_Q^{\ast}(b=x_y^{\delta})$ of the communication game in \cite{Ghorai2018}, given by $p_Q^{\ast}(b=x_y^{\delta})=\frac{1}{2}(1+\frac{1}{\sqrt{n}})$. Note that $p_Q^{\ast}(b=x_y^{\delta})$ is the conditional probability of Bob's outcome $b$ for the measurement $B_{n,y}$, given that Alice obtains the outcome $a$ after measuring $A_{n,i}$. Thus, using the Bayes rule of conditional probability, the maximum joint probability is given by
\begin{eqnarray}\label{pmaxn}
p^{*}(a,b|A_{n,1},B_{n,y},\rho_{AB}^{\otimes m})&=& p^{*}(a|A_{n,1},\rho_{AB}^{\otimes m}) \ p^{*}(b|a,A_{n,1},B_{n,y},\rho_{AB}^{\otimes m}) \nonumber\\
&=&p^{*}(a|A_{n,1},\rho_{AB}^{\otimes m}) \ p_Q^{*}(b=x_y^{\delta}) \nonumber\\
&=& \frac{1}{2} \ \times \ \frac{1}{2} \qty(1+\frac{1}{\sqrt{n}})
\end{eqnarray}
The amount of certified randomness is then given by
\begin{equation}
\qty(R_{min})_n=2-\log_2 \qty[1+\frac{1}{\sqrt{n}}]
\end{equation}
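As a quick numerical sanity check, not part of the derivation, the closed-form bound above can be evaluated against the values quoted in Table~\ref{table1}. The sketch below assumes nothing beyond the expression $(R_{min})_n = -\log_2 p^{*}$ with $p^{*}=\frac{1}{4}(1+\frac{1}{\sqrt{n}})$:

```python
import math

def r_min(n: int) -> float:
    """R_min for the optimal violation of B_n, attained with
    m = floor(n/2) copies: -log2(p*) with p* = (1/4)(1 + 1/sqrt(n))."""
    p_star = 0.25 * (1.0 + 1.0 / math.sqrt(n))
    return -math.log2(p_star)  # equals 2 - log2(1 + 1/sqrt(n))

print(round(r_min(6), 4))  # 1.5061, the value quoted for n = 6
```

For large $n$, the bound approaches two bits, matching the asymptotic remark in the text.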
It is interesting to note that for large $n$, it is possible to certify close to 2 bits of $R_{min}$ using the Bell functional $\mathcal{B}_n$. While close to two bits of DI guaranteed randomness in terms of $R_{max}$ has already been demonstrated using different measurement contexts \cite{Acin2012, Law2014, Anderson2018, Woodhead2019}, the central motivation of this work is to show that using many copies of the maximally entangled state to generate randomness quantified in terms of $R_{min}$ is more advantageous than using a single copy.
\section{Summary and outlook}\label{SecIV}
In the present work, we investigate the possibility of certifying more randomness, quantified in terms of $R_{min}$, from many copies of a maximally entangled two-qubit state than from a single copy. For this purpose, we first revisit the derivation of the optimal quantum value of the new family of Bell inequalities that was earlier introduced in the context of a RAC communication game and, importantly, later also demonstrated as a dimension witness. Next, by suitably quantifying the amount of randomness in terms of the min-entropy, we evaluate the amount of randomness for different values of $n$. Specifically, we evaluate the amount of randomness corresponding to the optimal quantum violation of the invoked Bell inequality. Evaluating the randomness only at the optimal quantum violation suffices for the purpose of the present work.
In particular, we explicitly show that for the $n=2$ and $n=3$ cases of the Bell inequality $\mathcal{B}_{n}$, the guaranteed amount of randomness corresponding to the optimal quantum violation is obtained when a single copy of the maximally entangled state is shared. We then show that the amount of randomness for a single copy of the maximally entangled state continues to decrease as $n$ increases for $n\geq4$. Next, for both the $n=4$ and $n=5$ cases, we find that the amount of certified randomness increases if Alice and Bob share a pair of maximally entangled two-qubit states instead of a single copy. This amount of randomness again decreases for $n\geq6$. Moreover, for $n=6$, the maximum amount of certified randomness is achieved when Alice and Bob share three copies of the maximally entangled two-qubit state. Hence, it follows from our demonstration that with an increasing number of measurement settings, the minimum amount of certified randomness depends on the number of maximally entangled two-qubit states shared between Alice and Bob. Finally, we evaluate the maximum amount of $R_{min}$ corresponding to the optimal quantum value for an arbitrary number of measurement settings $n$. This amount of randomness is realized if Alice and Bob share $m=\lfloor n/2 \rfloor$ copies of the maximally entangled two-qubit state. Our findings are illustrated in Table~\ref{table1} and Fig.~\ref{figrn}.
While our current study offers a limited advantage in randomness generation compared to existing protocols, it may serve as a stepping stone toward near-future practical applications involving many copies of the maximally entangled state. In particular, the present study may lead to the DI certification of an unbounded amount of randomness using many copies of the maximally entangled state. Further, the present study opens up an important avenue for randomness expansion protocols, in which a short and weak random string can be expanded into a long, provably secure certified random string. To date, randomness expansion protocols have been explored using either the CHSH inequality or Hardy relations \cite{Ramanathan2018, Colbeck2012, Li2021, Liu2021, Bhavsar2021}, which cannot certify more than a single copy of the bipartite maximally entangled state. Hence, it will be interesting to follow up on expansion as well as amplification protocols by invoking the new family of Bell inequalities $\mathcal{B}_n$, which has the capability of certifying more randomness from many copies of maximally entangled two-qubit states.
\section*{Acknowledgments}
We thank Souradeep Sasmal for his immense help in writing this paper. SSM acknowledges the UGC fellowship [Fellowship No. 16-9(June 2018)/2019(NET/CSIR)]. AKP acknowledges the support from the project DST/ICPS/QuST/Theme 1/2019/4. \linebreak
\section{Introduction}
Matching problems are central to the study of both sequential and distributed graph algorithms.
A {\it matching} is a set of edges that do not share endpoints. Given a weighted graph $G = (V, E, w)$, where $w: E \to \{1, \ldots, W\}$, the maximum weight matching ({\sc mwm}{}) problem is to compute a matching $M$ with the maximum weight, where the weight of $M$ is defined as $\sum_{e \in M} w(e)$. Given an unweighted graph $G = (V, E)$, the maximum cardinality matching ({\sc mcm}{}) problem is to compute a matching $M$ such that $|M|$ is maximized. Clearly, the {\sc mcm}{} problem is a special case of the {\sc mwm}{} problem. For $0 < \epsilon < 1$, a $(1-\epsilon)$-{\sc mwm}{} (or $(1-\epsilon)$-{\sc mcm}{}) is a $(1-\epsilon)$-approximate solution to the {\sc mwm}{} (or {\sc mcm}{}) problem. Throughout the paper, we let $n = |V|$ and $m = |E|$.
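The definitions above can be made concrete with a minimal sketch (illustrative only; edges are represented as vertex pairs and weights as a dictionary, a representation assumed here for exposition):

```python
def is_matching(edges):
    """A set of edges is a matching iff no two edges share an endpoint."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def matching_weight(matching, w):
    """The weight of a matching M is sum_{e in M} w(e)."""
    return sum(w[e] for e in matching)

print(is_matching([(1, 2), (2, 3)]))  # False: the two edges share vertex 2
```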
In distributed computing, the {\sc mcm}{} and {\sc mwm}{} problems have been studied extensively in the \ensuremath{\mathsf{CONGEST}}\xspace model and the $\mathsf{LOCAL}$\xspace model. In these models, nodes host processors and operate in synchronized rounds. In each round, each node sends a message to its neighbors, receives messages from its neighbors, and performs local computations. The time complexity of an algorithm is defined to be the number of rounds used. In the $\mathsf{LOCAL}$\xspace model, there are no limits on the message size; while the \ensuremath{\mathsf{CONGEST}}\xspace model is a more realistic model where the message size is limited by $O(\log n)$ per link per round.
Computing an exact {\sc mwm}{} requires $\Omega(n)$ rounds in both the \ensuremath{\mathsf{CONGEST}}\xspace model and the $\mathsf{LOCAL}$\xspace model (e.g., consider the case where $G$ is an even cycle). Thus, the focus has been on developing efficient approximation algorithms. In fact, the approximate {\sc mwm}{} problem is also one of the few classic combinatorial optimization problems where it is possible to bypass the notorious \ensuremath{\mathsf{CONGEST}}\xspace model lower bound of $\tilde{\Omega}(D+\sqrt{n})$ of \cite{SarmaHKKNPPW12}, where $D$ denotes the diameter of the graph. For $(1-\epsilon)$-{\sc mwm}{} in the \ensuremath{\mathsf{CONGEST}}\xspace model, the lower bounds of \cite{KuhnMW16, AKO18} imply that polynomial dependencies on $\log n$ and $1/\epsilon$ are needed. Whether matching upper bounds can be achieved is an intriguing and important problem, as also mentioned in \cite{FFK21}: \begin{quotation}``Obtaining a $(1-\epsilon)$-approximation (for {\sc mwm}{}) in $\operatorname{\text{{\rm poly}}}(\log n/\epsilon)$ \ensuremath{\mathsf{CONGEST}}\xspace rounds is one of the key open questions in understanding the distributed complexity of maximum matching.''\end{quotation}
A long line of studies has been pushing progress toward the goal. Below, we summarize the current fronts made by the existing results (also see \cref{table:matching} in \cref{apx:tables} for a full list):
\begin{itemize}[leftmargin=*]
\item $c$-{\sc mwm}{} algorithms for $c < 2/3$. Wattenhofer and Wattenhofer \cite{WW04} were among the first to study the {\sc mwm}{} problem in the \ensuremath{\mathsf{CONGEST}}\xspace model. They gave an algorithm for computing a $(1/5)$-{\sc mwm}{} that runs in $O(\log^2 n)$ rounds. Then \cite{LPR09} developed an algorithm that computes a $(1/4-\epsilon)$-{\sc mwm}{} in $O((1/\epsilon) \log (1/\epsilon) \log n)$ rounds. Later, \cite{LPP15} improved the approximation ratio and the number of rounds to $1/2-\epsilon$ and $O(\log(1/\epsilon)\cdot \log n)$ respectively. \cite{BCGS17} gave a (1/2)-{\sc mwm}{} algorithm that runs in $O(T_{\operatorname{MIS}}(n) \cdot \log W)$ rounds, where $T_{\operatorname{MIS}}(n)$ is the time needed to compute a maximal independent set (MIS) in an $n$-node graph. Fischer \cite{Fischer17} gave a deterministic algorithm that computes a $(1/2 -\epsilon)$-{\sc mwm}{} in $O(\log^{2}\Delta \cdot \log \epsilon^{-1} + \log^{*} n)$ rounds by using a rounding approach, where $\Delta$ is the maximum degree. Then \cite{AKO18} gave another rounding approach for $(2/3 - \epsilon)$-{\sc mwm}{} that runs in $O(\frac{\log(\Delta W)}{\epsilon^2} + \frac{\log^2 \Delta + \log^{*} n}{\epsilon})$ rounds deterministically.
\item Exponential-in-$(1/\epsilon)$ algorithms. \cite{LPP15} showed that the random bipartition technique can be applied to get a randomized $2^{O(1/\epsilon)} \cdot O(\log n)$-round $(1-\epsilon)$-{\sc mcm}{} algorithm. Such a technique was later also applied by \cite{FFK21}, who gave a deterministic $2^{O(1/\epsilon)} \cdot \operatorname{\text{{\rm poly}}}(\log n)$-round algorithm for $(1-\epsilon)$-{\sc mwm}{}.
\item Bipartite graphs and other special graphs. For bipartite graphs, \cite{LPP15} gave an algorithm for $(1-\epsilon)$-{\sc mcm}{} that runs in $O(\log n /\epsilon^3)$ rounds. \cite{AKO18} showed that $(1-\epsilon)$-{\sc mwm}{} in bipartite graphs can be computed in $O(\log(\Delta W) / \epsilon^2 + (\log^2 \Delta + \log^{*} n) /\epsilon)$ rounds deterministically. Recently, \cite{CS22} showed that a $(1-\epsilon)$-{\sc mwm}{} can be obtained in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ rounds in minor-free graphs with randomization.
\item Algorithms using larger messages.
In the $\mathsf{LOCAL}$\xspace model, a number of $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$-round algorithms are known for obtaining $(1-\epsilon)$-{\sc mwm}{} \cite{Nieberg08, GhaffariKMU18, Harris19}. The current fastest algorithms are by \cite{Harris19}, who gave a $O(\epsilon^{-3} \log(\Delta + \log \log n)+\epsilon^{-2}(\log \log n)^2)$-round randomized algorithm and a $O(\epsilon^{-4} \log^2 \Delta + \epsilon^{-1} \log^{*} n)$-round deterministic algorithm.
\end{itemize}
Recently, \cite{FMU22} made significant progress by giving a $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$-round algorithm for computing a $(1-\epsilon)$-{\sc mcm}{} --- the unweighted version of the problem. Despite this progress, the complexity of $(1-\epsilon)$-{\sc mwm}{} remains unsettled.
We close the gap by giving the first $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ round algorithm for computing $(1-\epsilon)$-{\sc mwm}{} in the \ensuremath{\mathsf{CONGEST}}\xspace model.
The result is summarized as \cref{thm:main}.
\begin{theorem}\label{thm:main}
There exists a deterministic \ensuremath{\mathsf{CONGEST}}\xspace{} algorithm that solves the $(1-\epsilon)$-{\sc mwm}{} problem in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ rounds.
\end{theorem}
In the parallel setting, Hougardy and Vinkemeier~\cite{HougardyV06} gave a \textsf{CREW PRAM}\footnote{A parallel random access machine that allows concurrent reads but exclusive writes.} algorithm that solves the $(1-\epsilon)$-{\sc mwm}{} problem in $O(\frac1{\epsilon}\log^5 n)$ span with $n^{O(1/\epsilon)}$ processors. However, it is still not clear whether a {\it work-efficient} algorithm with $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ span and $O(m)$ processors exists. Our \ensuremath{\mathsf{CONGEST}}\xspace algorithm can be directly simulated in the \textsf{CREW PRAM} model, yielding a $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$-span algorithm that uses only $O(m)$ processors. The total work matches the best known sequential algorithm of \cite{DuanP14}, up to $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ factors.
\begin{corollary}\label{thm:main-parallel}
There exists a deterministic \textsf{CREW PRAM} algorithm that solves the $(1-\epsilon)$-{\sc mwm}{} problem with $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ span and uses only $O(m)$ processors.
\end{corollary}
\subsection{Related Works and Other Approaches}
\label{sec:tech_sum}
\paragraph{Sequential Model} By the classical results of \cite{MV80,Blum90,GT91}, it has long been known that the exact {\sc mcm}{} and {\sc mwm}{} problems can be solved in $\tilde{O}(m\sqrt{n})$ time. For approximate matching, it is well known that a $\frac12$-{\sc mwm}{} can be computed in linear time, e.g., by greedily growing a maximal matching along locally heaviest edges.
Although near-linear time algorithms for $(1-\epsilon)$-{\sc mcm}{} were known in the 1980s~\cite{MV80,GT91},
it was a challenging task to obtain a near-linear time $\alpha$-{\sc mwm}{} algorithm for any approximation ratio $\alpha > \frac12$.
Several near-linear time algorithms were developed, such as $(\frac23-\epsilon)$-{\sc mwm}{}~\cite{DH03a,PS04} and $(\frac34-\epsilon)$-{\sc mwm}{}~\cite{DuanP10,HankeH10}.
Duan and Pettie~\cite{DuanP14} gave the first near-linear time algorithm for $(1-\epsilon)$-{\sc mwm}{}, which runs in $O(\epsilon^{-1} \log (1/\epsilon)\cdot m)$ time.
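As an illustration of the $\frac12$-approximation guarantee mentioned above, the classical greedy heuristic processes edges in non-increasing weight order; the sketch below runs in $O(m \log m)$ time rather than linear time and is meant only to illustrate the guarantee, not the linear-time algorithms cited here:

```python
def greedy_matching(edges):
    """Greedy 1/2-approximate MWM: scan edges by non-increasing weight,
    adding an edge whenever both endpoints are still unmatched."""
    matched, M = set(), []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            M.append((u, v, w))
            matched.update((u, v))
    return M

# On the path 1-2-3-4 with weights 3, 4, 3, greedy picks the middle edge
# (weight 4), which is at least half of the optimum 6 = w(1,2) + w(3,4).
print(greedy_matching([(1, 2, 3), (2, 3, 4), (3, 4, 3)]))  # [(2, 3, 4)]
```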
\paragraph{Semi-Streaming Model}
In the semi-streaming model, the celebrated results of Ahn and Guha~\cite{AhnG13, AhnG11} already gave $(1-\epsilon)$-{\sc mwm}{} algorithms with $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ passes.
Thus, in the semi-streaming model, the focus has been on obtaining algorithms with $o(\log n)$ dependencies on $n$.
The state-of-the-art algorithms for $(1-\epsilon)$-{\sc mwm}{} still have exponential dependencies on $1/\epsilon$ (see \cite{GKMS19}).
Recently, Fischer, Mitrovi\'{c}, and Uitto~\cite{FMU22} made a breakthrough in the semi-streaming model, obtaining a $\operatorname{\text{{\rm poly}}}(1/\epsilon)$-pass algorithm for the $(1-\epsilon)$-{\sc mcm}{} problem.
It is not yet known whether there is a $\operatorname{\text{{\rm poly}}}(1/\epsilon)$-pass algorithm for $(1-\epsilon)$-{\sc mwm}{}.
We remark that the results of Ahn and Guha~\cite{AhnG13, AhnG11} do not translate easily to a \ensuremath{\mathsf{CONGEST}}\xspace{} algorithm running within $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ rounds.
In particular, in \cite{AhnG11} the algorithm reduces to solving several instances of minimum odd edge cut\footnote{The goal is to return a mincut $(X, V\setminus X)$ among all subsets $X\subseteq V$ with an odd cardinality and $|X|=O(1/\epsilon)$.}.
It seems hard to solve the minimum odd edge cut problem in \ensuremath{\mathsf{CONGEST}}\xspace, given that the approximate minimum edge cut problem has a lower bound of $\tilde\Omega(D+\sqrt{n})$~\cite{GhaffariK13}, where $D$ is the diameter of the graph.
On the other hand, in \cite{AhnG13} the runtime per pass could be as high as $n^{O(1/\epsilon)}$, so it would be inefficient in \ensuremath{\mathsf{CONGEST}}\xspace.
\paragraph{Other Approaches} A number of different approaches have been proposed for the $(1-\epsilon)$-{\sc mwm}{} problem in distributed settings, which we summarize and discuss as follows:
\begin{itemize}[leftmargin = *]
\item Augmenting paths.
We say an augmenting path is an $l$-{\it augmenting path} if it contains at most $l$ vertices. Being able to find an (inclusion-wise) maximal set of $l$-augmenting paths in $\operatorname{\text{{\rm poly}}}(l, \log n)$ rounds, where $l = O(1/\epsilon)$, is a key subproblem in many known algorithms for $(1-\epsilon)$-{\sc mcm}{}. In bipartite graphs, \cite{LPP15} showed that the subproblem can be solved by simulating Luby's MIS algorithm on the fly. In general graphs, finding an augmenting path is significantly more complicated than in bipartite graphs, and finding a maximal set of augmenting paths is more difficult still. In the recent breakthrough of \cite{FMU22}, they showed how to find an ``almost'' maximal set of $l$-augmenting paths in $\operatorname{\text{{\rm poly}}}(l, \log n)$ rounds in the \ensuremath{\mathsf{CONGEST}}\xspace model in general graphs via bounded-length parallel DFS. We note that the problem of finding a maximal set of $l$-augmenting paths can be thought of as finding a hypergraph maximal matching, where an $l$-augmenting path is represented by a rank-$l$ hyperedge.
\item Hypergraph maximal matching. For the {\sc mwm}{} problem, the current approaches \cite{HougardyV06, Nieberg08, GhaffariKMU18, Harris19} in the \textsf{PRAM} model and the $\mathsf{LOCAL}$\xspace model consider an extension of $l$-augmenting paths, the {\it $l$-augmentations}. Roughly speaking, an $l$-augmentation is an alternating path\footnote{more precisely, with an additional condition that each endpoint is free if its incident edge is unmatched.} or cycle with at most $l$ vertices. Similar to $l$-augmenting paths, the $l$-augmentations can also be represented by a rank-$l$ hypergraph (albeit a significantly larger one). Then they divide the augmentations into poly-logarithmic classes based on their {\it gains}. From the class with the highest gain to the lowest, compute the hypergraph maximal matching of the hyperedges representing those augmentations. While in the $\mathsf{LOCAL}$\xspace model and the \textsf{PRAM} model, the rank-$l$ hypergraph can be built explicitly and maximal independent set algorithms can be simulated on the hypergraph efficiently to find a maximal matching; it is not the case for the \ensuremath{\mathsf{CONGEST}}\xspace model due to the bandwidth restriction.
\item The rounding approach. The rounding approaches work by first solving a linear program relaxation of the {\sc mwm}{} problem. In \cite{Fischer17, AKO18}, they both developed procedures for obtaining fractional solutions and deterministic procedures to round a fractional matching to an integer matching (with some loss). While \cite{AKO18} obtained an algorithm for $(1-\epsilon)$-{\sc mwm}{} in bipartite graphs, the direct linear program that they have considered has an integrality gap of 2/3 in general graphs. Therefore, the approximation factor will be inherently stuck at 2/3 without considering other formulations such as Edmonds' blossom linear program \cite{Edmonds65}.
\item The random bipartition approach. Bipartite graphs are where the matching problems are more well-understood. The random bipartition approach randomly partitions vertices into two sets and then ignores the edges within the same partition. A path containing $l$ vertices will be preserved with probability at least $2^{-l}$. By using this property, \cite{LPP15} gave a $(1-\epsilon)$-{\sc mcm}{} algorithm that runs in $2^{O(1/\epsilon)} \cdot O(\log n)$ rounds and \cite{FFK21} gave a $(1-\epsilon)$-{\sc mwm}{} algorithm that runs in $2^{O(1/\epsilon)} \cdot \operatorname{\text{{\rm polylog}}}(n)$ rounds. Note that this approach naturally introduces an exponential dependency on $(1/\epsilon)$.
\end{itemize}
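The $2^{-l}$ preservation probability underlying the random bipartition approach is easy to verify empirically. In the sketch below (a hypothetical Monte Carlo check, not part of any cited algorithm), a path on $l$ vertices survives the bipartition exactly when its vertices alternate sides, which happens with probability $2^{1-l} \geq 2^{-l}$:

```python
import random

def survives(l):
    """Randomly 2-color a path on l vertices; it survives the bipartition
    (every edge crosses the cut) iff consecutive colors alternate."""
    colors = [random.randrange(2) for _ in range(l)]
    return all(colors[i] != colors[i + 1] for i in range(l - 1))

def estimate(l, trials=200_000):
    return sum(survives(l) for _ in range(trials)) / trials

random.seed(0)
print(estimate(4))  # close to 2**(1-4) = 0.125
```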
\subsection{Our Approach}
Our approach is to parallelize Duan and Pettie's \cite{DuanP14} near-linear time algorithm, which involves combining the recent approaches of \cite{CS22} and \cite{FMU22} as well as several new techniques. The algorithm of \cite{DuanP14} is a primal-dual based algorithm that utilizes Edmonds' formulation \cite{Edmonds65}. Roughly speaking, the algorithm maintains a matching $M$, a set of active blossoms $\Omega \subseteq 2^{V}$, dual variables $y: V \to \mathbb{R}$ and $z: 2^{V} \to \mathbb{R}$. It consists of $O(\log W)$ scales with exponentially decreasing step sizes. Each scale consists of multiple primal-dual iterations that operate on a contracted {\bf unweighted} subgraph, $G_{elig}/ \Omega$, which they referred to as the {\it eligible graph}. For each iteration in scale $i$, it tries to make progress on both the primal variables ($M$, $\Omega$) and the dual variables ($y,z$) by the step size of the scale.
Initially, $\Omega = \emptyset$, so no blossoms are contracted. The first step in adjusting the primal variables is to search for an (inclusion-wise) maximal set of augmenting paths in the eligible graph and augment along them. After the augmentation, their edges disappear from the eligible graph. Although \cite{DuanP14} showed that such a step can be performed in linear time in the sequential setting, it is unclear how it can be done in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ time in the \ensuremath{\mathsf{CONGEST}}\xspace model or the \textsf{PRAM} model. For example, it is impossible to find the augmenting paths of length $\Theta(n)$ in \Cref{fig:1} within such time in the \ensuremath{\mathsf{CONGEST}}\xspace model.
Our first ingredient is an idea from \cite{CS22}, where they introduced the weight modifier $\Delta w$ and dummy free vertices to effectively remove edges and free vertices from the eligible graph. They used this technique to integrate the expander decomposition procedure into the algorithm of \cite{DuanP14} for minor-free graphs. As long as the total edges and free vertices removed is small, one can show that the final error can be bounded.
With this tool introduced, it becomes more plausible that a maximal set of augmenting paths can be found in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ time, as we may remove edges to cut the long ones. Indeed, in {\it bipartite graphs}, this can be done by partitioning the matched edges into layers. An edge is in the $i$-th layer if the shortest alternating path that starts at a free vertex and ends at it contains exactly $i$ matched edges. Let $M_i$ be the set of matched edges in the $i$-th layer. The removal of $M_{i}$ must disconnect all augmenting paths that contain more than $i$ matched edges. Let $i^{*} = \arg \min_{1 \leq i \leq 1/\epsilon} |M_i|$, so that $|M_{i^{*}}| \leq \epsilon |M|$. The removal of $M_{i^{*}}$ then causes all remaining augmenting paths to have length $O(1/\epsilon)$.
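The layering step just described can be sketched as an alternating BFS from the free vertices. The sketch below is illustrative and assumes a particular representation (an adjacency-list dictionary `adj` and the matching as a `mate` dictionary); it returns, for each matched edge, the layer index used to select the cheapest layer $M_{i^*}$:

```python
from collections import deque

def matched_edge_layers(adj, mate):
    """layer[e] = number of matched edges on the shortest alternating
    path that starts at a free vertex and ends at matched edge e."""
    free = [v for v in adj if v not in mate]
    dist = {v: 0 for v in free}  # alternating distance, in matched edges
    layer, q = {}, deque(free)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if mate.get(u) == v:
                continue  # must leave u along an unmatched edge
            w = mate.get(v)
            if w is None or v in dist or w in dist:
                continue  # v is free or already reached
            layer[tuple(sorted((v, w)))] = dist[u] + 1
            dist[v] = dist[w] = dist[u] + 1
            q.append(w)  # continue the search from v's mate
    return layer

# M_i is the set of edges e with layer[e] == i; removing the smallest
# M_i over i = 1, ..., 1/eps cuts all long augmenting paths.
```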
In general graphs, the above {\it path-cutting technique} no longer works: the removal of $M_i$ does not necessarily disconnect augmenting paths that contain more than $i$ matched edges.
Consider the example in \Cref{fig:3}: for any matched edge $e$, the shortest alternating path that starts at a free vertex and ends at $e$ contains at most $2$ matched edges, yet there is a (unique) augmenting path from $\alpha$ to $\beta$ with $5$ matched edges.
The removal of $M_5$ would not disconnect this augmenting path, since $M_5 = \emptyset$.
One of the technical challenges is to have an efficient procedure to {\it find a small fraction of edges whose removal cut all the remaining long augmenting paths in general graphs}.
The second step of the primal-dual iterations of \cite{DuanP14} is to find a maximal set of full blossoms reachable from free vertices and add them to $\Omega$ so that they become contracted in the eligible graph. The problem here is that such a blossom can have size as large as $\Theta(n)$ (see \Cref{fig:2}), so contracting it would take $\Theta(n)$ time in the \ensuremath{\mathsf{CONGEST}}\xspace model. The other technical challenge is thus {\it to ensure that such blossoms are never formed, possibly by removing a small fraction of edges and free vertices.} In general, these technical challenges amount to removing a small fraction of edges and free vertices to achieve the so-called {\it primal blocking condition}, which we formally define in \Cref{prop:PBC}.
Note that the challenge may become more involved after the first iteration, where $\Omega$ is not necessarily empty. It may be the case that a blossom found in $G_{elig}/\Omega$ contains a very small number of vertices in the contracted graph $G_{elig}/\Omega$ but is very large in the original graph $G$. In this case, we cannot add it to $\Omega$ either, as it would take too much time to simulate algorithms on $G_{elig}/\Omega$ in the \ensuremath{\mathsf{CONGEST}}\xspace model if $\Omega$ has a blossom containing too many vertices in $G$. Therefore, we also need to ensure such a blossom is never formed.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\hspace*{1cm}
\includegraphics[width=4.3cm]{fig1_vertical}
\hspace*{-1cm}
\caption{}\label{fig:1}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\vspace*{-0.5cm}
\includegraphics[width=3.8cm]{fig3_vertical}
\vspace*{0.5cm}
\caption{}\label{fig:3}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\vspace*{-1.2cm}
\includegraphics[width=4.3cm]{fig2_vertical}
\vspace*{1.2cm}
\caption{}\label{fig:2}
\end{subfigure}
\caption{Note that in these examples, we have $\Omega = \emptyset$ and so that $G/\Omega = G$.}
\end{figure}
To overcome these challenges, our second ingredient is the parallel DFS algorithm of Fischer, Mitrovi\'{c}, and Uitto \cite{FMU22}. In \cite{FMU22}, they developed a procedure for finding an almost maximal set of $k$-augmenting paths in $\operatorname{\text{{\rm poly}}}(k)$ rounds, where a $k$-augmenting path is an augmenting path of length at most $k$. We show that the path-cutting technique for bipartite graphs can be combined seamlessly with a tweaked, {\it vertex-weighted} version of \cite{FMU22} to overcome these challenges for general graphs.
The central idea of \cite{FMU22} is parallel DFS \cite{GPV93}. A rough description of the approach of \cite{FMU22} is the following: Start a bounded-depth DFS from each free vertex where the depth is bounded by $O(k)$ and each search maintains a cluster of vertices. The clusters are always vertex-disjoint. In each step, each search tries to enlarge the cluster by adding the next edge from its active path. If there is no such edge, the search will back up one edge on its active path. If the search finds an augmenting path that goes from one cluster to the other, then the two clusters are removed from the graph. Note that this is a very high-level description for the purpose of understanding our usage, the actual algorithm of \cite{FMU22} is much more involved. For example, it could be possible that the search from one cluster overtakes some portion of another cluster.
The key property shown in \cite{FMU22} is that at any point of the search all the remaining $k$-augmenting paths must pass one of the edges on the active paths, so removing the edges on active paths of the searches (in addition to the removal of clusters where augmenting paths are found) would cut all $k$-augmenting paths. Moreover, after searching for $\operatorname{\text{{\rm poly}}}(k)$ steps, it is shown at most $1/\operatorname{\text{{\rm poly}}}(k)$ fraction of searches remain active.
Since each DFS will only search up to a depth of $O(k)$, the number of edges on the active paths is at most $O(k)\cdot 1/\operatorname{\text{{\rm poly}}}(k) = 1/\operatorname{\text{{\rm poly}}}(k) $ fraction of the searches. In addition, we note that the process has an extra benefit that, roughly speaking, if a blossom is ought to be contracted in the second step of \cite{DuanP14}, it will lie entirely within a cluster or it will be far away from any free vertices.
To better illustrate how we use \cite{FMU22} to overcome these challenges, we first describe our procedure for the first iteration of \cite{DuanP14}, where $\Omega = \emptyset$. In this case, we run several iterations of \cite{FMU22} to find a collection of $k$-augmenting paths, until the number of $k$-augmenting paths found is relatively small. We then remove (1) the clusters where augmenting paths have been found and (2) the active paths of the still-active searches. By removing a structure, we mean using the weight modifier technique from \cite{CS22} to remove the matched edges and free vertices inside the structure.
At this point, all the $k$-augmenting paths either overlap within the collection of $k$-augmenting paths or have been cut. The remaining augmenting paths must have lengths more than $k$.
To cut them, we contract all the blossoms found within each cluster. Since each search runs for only $\operatorname{\text{{\rm poly}}}(k)$ steps, each cluster has at most $\operatorname{\text{{\rm poly}}}(k)$ vertices, so the blossoms can be contracted locally within each cluster by aggregating the cluster's topology at a single vertex in $\operatorname{\text{{\rm poly}}}(k)$ rounds.
The key property we show is that after the contraction, if we assign each blossom a weight proportional to its size, the weighted $O(k)$-neighborhood of the free vertices becomes bipartite.
This holds because the weighted distance is now an overestimate of the actual distance, and no full blossoms remain reachable within distance $k$ from the free vertices.
Since the weighted $O(k)$-neighborhood of the free vertices is bipartite, we can run the aforementioned path-cutting technique (in its weighted version) on it to remove a small set of edges and thereby cut all augmenting paths of weighted length more than $k$. The weight assignment to the blossoms ensures that we only remove a small fraction of the edges.
Starting from the second iteration of \cite{DuanP14}, the set of active blossoms $\Omega$ may not be empty anymore.
We will need to be careful to not form any large nested blossoms after the Fischer-Mitrovic-Uitto parallel DFS algorithm (FMU-search), where the size of a blossom is measured by the number of vertices it contains in the original graph. To this end, when running the FMU-search, we run a weighted version of it, where each contracted vertex in $G_{elig}/\Omega$ is weighted proportional to the number of vertices it represents in the original graph. This way we can ensure the weight of each cluster is $\operatorname{\text{{\rm poly}}}(k)$ and so the largest blossom it can form will be $\operatorname{\text{{\rm poly}}}(k)$.
In order to generalize the properties guaranteed by FMU-search, one may have to open up the black-box and redo the whole analysis of Fischer, Mitrovic, and Uitto~\cite{FMU22}.
However, we show that the properties can be guaranteed by a blossom-to-path simulation analysis, where each weighted blossom is replaced by an unweighted path. The properties guaranteed by FMU-search from the transformed unweighted graph can then be carried back to the blossom-weighted graph.
\paragraph{Organization} In \cref{sec:preliminaries}, we define the basic notations and give a brief overview of the scaling approach of \cite{DuanP14} as well as the modification of \cite{CS22}. In \cref{sec:scaling}, we describe our modified scaling framework. In \cref{sec:FMU}, we describe how \cite{FMU22} can be augmented to run in contracted graphs where vertices are weighted. In \cref{sec:MWMcongest}, we describe our \textsc{Approx\_Primal} procedure for achieving the primal blocking conditions.
\section{Preliminaries and Assumptions}\label{sec:preliminaries}
Throughout the paper, we denote $G=(V, E, \hat{w})$ to be the input weighted undirected graph.
\paragraph{Matchings and Augmenting Paths} Given a matching $M$, a vertex is {\it free} if it is not incident to any edge in $M$. An {\it alternating path} is a path whose edges alternate between $M$ and $E \setminus M$.
An {\it augmenting path} $P$ is an alternating path that begins and ends with free vertices. Given an augmenting path $P$, let $M\oplus P = (M \setminus P) \cup (P \setminus M)$ denote the resulting matching after we augment along $P$. Note that we must have $|M \oplus P| = |M| + 1$.
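For concreteness, the symmetric-difference operation can be sketched in a few lines of Python (the path below is a toy example of ours, not from the paper):

```python
# Toy illustration of augmenting along a path P: M <- M (+) P.
# Edges are frozensets {u, v}; the path below is a hypothetical example.

def augment(matching, path_edges):
    """Return the symmetric difference (M \\ P) u (P \\ M)."""
    return matching.symmetric_difference(path_edges)

# Path 1-2-3-4 between free vertices 1 and 4; edge {2,3} is matched.
M = {frozenset({2, 3})}
P = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]

M2 = augment(M, set(P))
assert M2 == {frozenset({1, 2}), frozenset({3, 4})}
assert len(M2) == len(M) + 1  # |M (+) P| = |M| + 1
```

Both free endpoints of $P$ become matched after the augmentation, which is why the matching grows by exactly one edge.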
\paragraph{Linear Program for {\sc mwm}{}}
Edmonds~\cite{Edmonds65} first formulated the matching polytope for general graphs.
On top of the bipartite-graph linear program, there are exponentially many additional constraints over $\mathcal{V}_{odd}$ --- the collection of all odd-sized subsets of vertices.
In this paper, we follow Edmonds'~\cite{Edmonds65} linear program formulation for {\sc mwm}{} for the graph $(V, E, w)$:
\[
\begin{array}{c|c}
\begin{array}{c}
\textbf{Primal}\\[8pt]
\begin{array}{rll}
\text{max } &\multicolumn{2}{l}{ \sum_{e\in E} w(e)x(e) } \\[5pt]
\text{st. } & \forall u\in V, & \sum_{v} x(uv)\le 1 \\[5pt]
& \forall B\in \mathcal{V}_{odd}, & \sum_{u, v\in B} x(uv) \le \frac{|B|-1}{2}\\[5pt]
&\multicolumn{2}{l}{ x(e)\ge 0\ \forall e\in E }
\end{array}
\end{array}
&
\begin{array}{c}
\textbf{Dual}\\[8pt]
\begin{array}{rll}
\text{min } &\multicolumn{2}{l}{
\sum_{u\in V} y(u) + \sum_{B\in\mathcal{V}_{odd}} \frac{|B|-1}{2}z(B)
} \\[5pt]
\text{st. } & \forall uv\in E, & \\
& \multicolumn{2}{l}{
\ \ \ \ y(u)+y(v) + \sum_{B\ni u, v} z(B) \ge w(uv) } \\[10pt]
& \multicolumn{2}{l}{ y(u) \ge 0, z(B) \ge 0 }
\end{array}
\end{array}
\end{array}
\]
\paragraph{Dual Variables} The variables $y(u)$ and $z(B)$ are called the {\it dual variables}.
For convenience, given an edge $e = uv$, we define $$yz(e)=y(u)+y(v)+\sum_{B \in \mathcal{V}_{odd} : e \in B} z(B)$$
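To see why the odd-set constraints matter, consider the unit-weight triangle: the fractional point $x(e)=1/2$ satisfies every degree constraint with objective value $3/2$, yet any matching has weight at most $1$. A small Python sketch (illustrative only) verifies that the odd-set constraint on $B=\{1,2,3\}$ cuts this point off:

```python
from itertools import combinations

# Unit-weight triangle on vertices {1, 2, 3}.
V = [1, 2, 3]
E = list(combinations(V, 2))          # [(1, 2), (1, 3), (2, 3)]
x = {e: 0.5 for e in E}               # fractional point x(e) = 1/2

# Degree constraints: sum_v x(uv) <= 1 for every u -- all satisfied.
for u in V:
    assert sum(val for e, val in x.items() if u in e) <= 1

# Odd-set constraint on B = V (|B| = 3): sum_{u,v in B} x(uv) <= (|B|-1)/2 = 1.
odd_set_lhs = sum(x.values())          # 1.5 > 1: the constraint is violated,
assert odd_set_lhs > (len(V) - 1) / 2  # so this point is cut off the polytope.
```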
\paragraph{Blossoms} A blossom is specified by a vertex set $B$ and an edge set $E_{B}$. A trivial blossom is one with $B = \{v \}$ for some $v \in V$ and $E_{B}=\emptyset$. A non-trivial blossom is defined recursively: if an odd number of blossoms $B_0, \ldots, B_{\ell}$ are connected into an odd cycle by edges $e_i \in B_{i} \times B_{[(i+1)\mod (\ell+1)]}$ for $0 \leq i \leq \ell$, then $B = \bigcup_{i=0}^{\ell} B_i$ is a blossom with $E_B = \{e_0, \ldots, e_{\ell} \} \cup \bigcup_{i=0}^{\ell} E_{B_i} $. It can be shown inductively that $|B|$ is odd, so $B \in \mathcal{V}_{odd}$. A blossom is {\it full} if $|M \cap E_{B}| = (|B| -1 )/2$. The only vertex of a full blossom that is not incident to a matched edge in $E_{B}$ is called the {\it base} of $B$. Note that $E(B) = \{(u,v) \mid u,v \in B\}$ may contain edges not in $E_{B}$.
\paragraph{Active Blossoms}
A blossom is \emph{active} whenever $z(B) > 0$. We use $\Omega$ to denote the set of active blossoms throughout the execution of the algorithm.
Throughout the execution,
only full blossoms will be contained in $\Omega$. Moreover, the set of active blossoms $\Omega$ forms a laminar (nested) family, which can be represented by a set of rooted trees. The leaves of the trees are the trivial blossoms. If a blossom $B$ is defined by the odd cycle formed by $B_0,\ldots, B_{\ell}$, then $B$ is the parent of $B_0,\ldots, B_{\ell}$. The blossoms represented by the roots are called the {\it root blossoms}.
\paragraph{Blossom-Contracted Graphs} Given $\Omega$, let $G / \Omega$ denote the unweighted simple graph obtained by contracting all the root blossoms in $\Omega$.
A vertex in $G/\Omega$ is {\it free} if the vertices it represents in $G$ contain a (unique) free vertex.
The following lemma guarantees that contracting the blossoms does not hide all the augmenting paths.
\begin{lemma}(\cite[Lemma 2.1]{DuanP14}) Let $\Omega$ be a set of full blossoms with respect to a matching $M$.
\begin{enumerate}[leftmargin=*,itemsep=-1ex, topsep = 0pt,partopsep=1ex,parsep=1ex]
\item If $M$ is a matching in $G$, then $M / \Omega$ is a matching in $G/ \Omega$.
\item Every augmenting path $P'$ relative to $M/\Omega$ in $G /\Omega$ extends to an augmenting path $P$ relative to $M$ in $G$.
\item Let $P'$ and $P$ be as in (2). Then, after augmenting along $P$, $\Omega$ remains a valid set of full blossoms.
\end{enumerate}
\end{lemma}
\begin{definition}
Let $v$ be a vertex in $G/\Omega$, we use $\hat{v}$ to denote the set of vertices in $G$ that contract to $v$. Also, given a set of vertices $S$, define $\hat{S} = \bigcup_{v \in S} \hat{v}$.
For a free vertex $f$ on $G/\Omega$, we define $\dot f$ to be the unique free vertex in $\hat f$.
Given a matched edge $e\in M/\Omega$, we use $\hat{e}$ to denote its corresponding matched edge in $M$.
Conversely, given a blossom $S \in \Omega$, let $v^{\Omega}_{S}$ be the vertex in $G/\Omega$ obtained by contracting $S$ in $G$. Given a free vertex $f$ in $G$, let $f^{\Omega}$ denote the unique free vertex in $G/\Omega$ that contains $f$. Given a set of free vertices $F$ of $G$, define $F^{\Omega} = \{f^{\Omega} \mid f \in F \}$. Similarly, given a matched edge $e\in M$, if its endpoints belong to different blossoms in $\Omega$, then we define $e^\Omega$ to be the corresponding matched edge in $M/\Omega$.
\end{definition}
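The contraction notation can be illustrated with a minimal Python sketch (toy graph and function names of ours): each vertex is mapped to its root-blossom representative, and internal edges disappear.

```python
# Contract the root blossoms of Omega (toy example).
# Only root (maximal) blossoms are contracted; nested ones are implicit.

def contract(edges, root_blossoms):
    """Return (edges of G/Omega, map v -> contracted vertex v^Omega)."""
    rep = {}
    for i, B in enumerate(root_blossoms):
        for v in B:
            rep[v] = ('B', i)              # all of B maps to one super-vertex
    to_super = lambda v: rep.get(v, v)     # trivial blossoms map to themselves
    contracted = {frozenset({to_super(u), to_super(v)})
                  for u, v in edges
                  if to_super(u) != to_super(v)}  # drop internal edges
    return contracted, to_super

edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)]
root_blossoms = [{1, 2, 3}]                # one odd cycle, contracted
G_c, f = contract(edges, root_blossoms)
assert G_c == {frozenset({('B', 0), 4}), frozenset({4, 5})}
```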
\begin{definition}
Let $H$ be a subgraph of $G$ with a matching $M$.
We denote the set of free vertices in $H$ by $F(H)$ and the set of matched edges in $H$ by $M(H)$.
\end{definition}
\begin{definition}[Inner, outer, and reachable vertices]
Let $F$ be a set of free vertices in a graph $H$ with matching $M$. Let $V^{H,M}_{in}(F)$ and $V^{H,M}_{out}(F)$ denote the sets of vertices that are reachable from $F$ via odd-length alternating paths and even-length alternating paths, respectively. Define $R^{H,M}(F) = V^{H,M}_{in}(F) \cup V^{H,M}_{out}(F)$. When the references to $H$ and $M$ are clear, we will omit the superscripts and write $R(F)$, $V_{in}(F)$, and $V_{out}(F)$ respectively.
\end{definition}
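On instances where blossoms play no role, these sets can be computed by an alternating BFS from the free vertices; the following Python sketch (our simplification, ignoring blossoms) illustrates the definition:

```python
from collections import deque

# Alternating BFS from the free vertices F: vertices reached after an even
# number of edges are outer (V_out), after an odd number inner (V_in).
# Blossoms are ignored in this sketch, so it is exact only when no odd
# cycle interferes (e.g. on bipartite graphs).

def inner_outer(adj, matched, free):
    V_out, V_in = set(free), set()
    q = deque((f, 0) for f in free)            # (vertex, parity)
    while q:
        u, p = q.popleft()
        for v in adj[u]:
            e = frozenset({u, v})
            if p == 0 and e not in matched and v not in V_in:
                V_in.add(v); q.append((v, 1))  # extend by an unmatched edge
            elif p == 1 and e in matched and v not in V_out:
                V_out.add(v); q.append((v, 0)) # extend by a matched edge
    return V_in, V_out

# Path 1-2-3 with {2, 3} matched; the only free vertex is 1.
adj = {1: [2], 2: [1, 3], 3: [2]}
matched = {frozenset({2, 3})}
V_in, V_out = inner_outer(adj, matched, free=[1])
assert V_in == {2} and V_out == {1, 3}
```

Note that a path leaving a free vertex must begin with an unmatched edge, which is why the parity-0 step only follows unmatched edges.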
\subsection{Assumptions on Edge Weights and Approximation Ratio}
Since we are looking for a $(1-\epsilon)$-approximation, we can always re-scale the edge weights to be $O(n/\epsilon)$ while losing at most a $(1-\Theta(\epsilon))$ factor in the approximation (see \cite[Section 2]{DuanP14}). Therefore, we can assume that $\epsilon > 1/n^2$, so that $W \leq n^3$ and $O(\log W) = O(\log n)$; otherwise we may aggregate the whole network at a node in $O(1 /\epsilon) = O(n^2)$ rounds and have it compute a {\sc mwm}{} locally. Let $\epsilon' = \Theta(\epsilon)$ be a parameter that we will choose later. Assume without loss of generality that $W$ and $\epsilon'$ are powers of two: if not, we round $W$ up to the nearest power of two and round $\epsilon'$ down to the nearest power of two.
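The rounding step can be made explicit with a short Python sketch (function names are ours): rounding $W$ up keeps it an upper bound on the weights, and rounding $\epsilon'$ down only tightens the approximation.

```python
# Round W up and eps' down to powers of two (safe directions: a larger W
# is still an upper bound on weights; a smaller eps' only improves accuracy).

def round_up_pow2(x):
    p = 1
    while p < x:
        p *= 2
    return p

def round_down_pow2(x):
    p = 1.0
    while p > x:
        p /= 2
    return p

assert round_up_pow2(1000) == 1024
assert round_down_pow2(0.3) == 0.25
assert round_up_pow2(512) == 512        # already a power of two: unchanged
```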
\subsection{Assumption of $O((1/\epsilon)\log^3n)$ Weak Diameter}
To begin, we preprocess the input graph by applying a diameter-reduction theorem of \cite{FFK21}, which lets us assume that the graph under consideration has a broadcast tree of depth $O((1/\epsilon)\log^3n)$ that can be used to aggregate and propagate information.
\begin{theorem}[\cite{FFK21}, Theorem 7]\label{thm:diameter} Let $T^{\alpha}_{\textsf{SC}}(n,D)$ be the time required for computing an $\alpha$-approximation for the {\sc mwm}{} problem in the \ensuremath{\mathsf{SUPPORTED ~CONGEST}~} model with a communication graph of diameter $D$. Then, for every $\epsilon \in (0,1]$, there is a $\operatorname{\text{{\rm poly}}}(\log n, 1/\epsilon) + O(\log n \cdot T^{\alpha}_{\textsf{SC}}(n,O((1/\epsilon)\log^3 n)))$-round \ensuremath{\mathsf{CONGEST}}\xspace algorithm to compute a $(1-\epsilon)\alpha$-approximation of {\sc mwm}{} in the \ensuremath{\mathsf{CONGEST}}\xspace model. If the given \ensuremath{\mathsf{SUPPORTED ~CONGEST}~} model algorithm is deterministic, then the resulting \ensuremath{\mathsf{CONGEST}}\xspace model algorithm is also deterministic.
\end{theorem}
The \ensuremath{\mathsf{SUPPORTED ~CONGEST}~} model is the same as the \ensuremath{\mathsf{CONGEST}}\xspace model except that the input graph can be a subgraph of the communication graph. Suppose the input graph is a weighted graph $G = (V,E,\hat{w})$, where $\hat{w}:E \to \{1 ,\ldots, W\}$. The above theorem implies that we can focus on solving the problem on $G$ as if we were in the \ensuremath{\mathsf{CONGEST}}\xspace model, except that we have access to a broadcast tree (potentially outside $G$) where an aggregation or a broadcast takes $O((1/\epsilon)\log^3n)$ rounds. We slightly abuse the notation and say that $G$ has a {\it weak diameter} of $O((1/\epsilon)\log^3n)$.
With~\cref{thm:diameter},
we may broadcast $W$ (the upper bound on edge weights) to every node in $O((1/\epsilon)\log^3n)$ rounds.
We remark that the weak-diameter assumption is required not only by our algorithm but also by the $(1-\epsilon)$-{\sc mcm}{} \ensuremath{\mathsf{CONGEST}}\xspace algorithm described in \cite{FMU22}\footnote{The application of \cref{thm:diameter} can also tie up loose ends left in \cite{FMU22}, where a semi-streaming algorithm was presented first and then adapted to other models. One of the primitives, \textsc{Storage} (item (v) in Section 6), assumed that a memory of $\Omega(n \operatorname{\text{{\rm poly}}} 1/\epsilon)$ is available to all nodes. This may be needed in some of their procedures, e.g.~counting $|M_H|$ in Algorithm 7. The running time was not analyzed, but it may take $O(\mathrm{diameter})$ rounds to implement it in the \ensuremath{\mathsf{CONGEST}}\xspace model.}.
\subsection{Duan and Pettie's Scaling Framework}
The scaling framework for solving {\sc mwm}{} using the primal-dual approach was originally proposed by Gabow and Tarjan~\cite{GT91}.
Let $L=\lfloor \log_2W\rfloor$.
A typical algorithm under this scaling framework consists of $L+1$ scales.
In each scale $i$, such an algorithm focuses on the graph with \emph{truncated weights} (whose definition varies across algorithms).
As $i$ increases, these truncated weights typically move toward the actual input weight.
Duan and Pettie~\cite{DuanP14} introduced a scaling algorithm to solve the $(1-\epsilon)$-{\sc mwm}{} problem.
They
proposed a new \emph{relaxed complementary slackness criteria} (see \Cref{prop:dp-relaxed-complementary-slackness}).
The criteria change between iterations. At the end of the algorithm, the criteria can be used to certify the desired approximation guarantee of the maintained solution.
Unlike Gabow and Tarjan's framework~\cite{GT91}, Duan and Pettie's framework~\cite{DuanP14} allows the matching found in the previous scale to be carried over to the next scale without violating the feasibility, thereby improving the efficiency.
In order to obtain this carry-over feature, Duan and Pettie also introduced \emph{type $j$ edges} in their complementary slackness criteria.
\begin{definition}[Type $j$ Edges]
A matched edge or a blossom edge is of \emph{type $j$} if it was last made a matched edge or a blossom edge in scale $j$.
\end{definition}
Let $\delta_0 = \epsilon'W$ and $\delta_i = \delta_0/2^i$ for all $i\in [0, L]$.
At each scale $i$,
the \emph{truncated weight} of an edge $e$ is defined as $w_i(e)=\delta_i\lfloor \hat{w}(e)/\delta_i\rfloor$.
The relaxed complementary slackness criteria are based on the truncated weight at each scale.
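Concretely, truncation keeps only the high-order bits of a weight, and the retained precision doubles from one scale to the next. A small Python sketch with illustrative numbers:

```python
# Truncated weights w_i(e) = delta_i * floor(w(e) / delta_i) across scales.
W, eps = 64, 0.5                      # illustrative; W a power of two
delta0 = eps * W                      # delta_0 = eps' * W

w = 45                                # some integer (effective) edge weight
prev = None
for i in range(7):                    # scales 0..L with L = log2(W) = 6
    delta_i = delta0 / 2 ** i
    w_i = delta_i * (w // delta_i)    # keep only multiples of delta_i
    assert w - delta_i < w_i <= w     # truncation error is below delta_i
    if prev is not None:
        assert w_i >= prev            # refinements move toward w(e)
    prev = w_i
assert prev == w                      # exact once delta_i divides the weight
```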
\begin{lemma}[Relaxed Feasibility and Complementary Slackness {\cite[Property 3.1]{DuanP14}}]\label{prop:dp-relaxed-complementary-slackness}
After each scale $i\in[0, L]$, the algorithm explicitly maintains the set of currently matched edges $M$, the dual variables $y(u)$ and $z(B)$, and the set of active blossoms $\Omega\subseteq \mathcal{V}_{odd}$.
The following properties are guaranteed:
\begin{enumerate}[itemsep=0pt]
\item {\bf{Granularity.}} For all $B\in\mathcal{V}_{odd}$, $z(B)$ is a nonnegative multiple of $\delta_i$. For all $u\in V(G)$, $y(u)$ is a multiple of $\delta_i/2$.
\item {\bf{Active Blossoms.}} $\Omega$ contains all $B$ with $z(B)>0$ and all root blossoms $B$ have $z(B)>0$.
\item {\bf{Near Domination.}} For all $e\in E$, $yz(e)\ge w_i(e)-\delta_i$.
\item {\bf{Near Tightness.}} If $e$ is a type $j$ edge, then $yz(e)\le w_i(e)+2(\delta_j-\delta_i)$.
\item {\bf{Free Vertex Duals.}} If $u\in F(G)$ and $v\notin F(G)$ then $y(u)\le y(v)$.
\end{enumerate}
\end{lemma}
\paragraph{Eligible Graph}
To achieve~\Cref{prop:dp-relaxed-complementary-slackness} efficiently, an \emph{eligible graph} $G_{elig}$ is defined at each scale. An edge $e$ is said to be \emph{eligible} if (1) $e\in E_B$ for some $B\in\Omega$, (2) $e\notin M$ and $yz(e)=w_i(e)-\delta_i$, or (3) $e\in M$ and $yz(e)-w_i(e)$ is a nonnegative integer multiple of $\delta_i$.
$G_{elig}$ is the graph that consists of all edges that are currently eligible.
The algorithm initializes with an empty matching $M\gets\emptyset$, an empty set of active blossoms $\Omega\gets \emptyset$, and high vertex duals $y(u)\gets W/2-\delta_0/2$ for all $u\in V$.
Then, in each scale $i=0, 1, \ldots, L$, the algorithm repeatedly searches for a maximal set $\Psi$ of vertex-disjoint augmenting paths in $G_{elig}$, augments along these paths, searches for new blossoms, adjusts dual variables, and dissolves zero-valued inactive blossoms.
These steps are iterated $O(1/\epsilon')$ times until the free vertex duals $y(v)$ reach $W/2^{i+2}-\delta_i/2$ when $i\in [0, L)$, or $0$ when $i=L$.
At the end of scale $L$, \Cref{prop:dp-relaxed-complementary-slackness} guarantees a matching with desired approximate ratio.
We emphasize that the correctness of~\Cref{prop:dp-relaxed-complementary-slackness} relies on the fact that $\Psi$ is maximal in $G_{elig}$, and the subroutine that searches for $\Psi$ is a modified depth first search from Gabow and Tarjan~\cite{GT91} (see also \cite{MV80,Vazirani94}.)
Unfortunately, some returned augmenting paths in $\Psi$ could be very long.
It is not immediately clear how to implement this subroutine efficiently in parallel or distributed models.
\subsection{Chang and Su's Scaling Framework}
Chang and Su~\cite{CS22} noticed that it is possible to relax Duan and Pettie's framework further,
by introducing the \emph{weight modifiers} $\Delta w(e)$ that satisfy the following new invariant (appended to~\Cref{prop:dp-relaxed-complementary-slackness} with some changes to the other properties) after each iteration:
{\it{
\begin{enumerate}[itemsep=0pt]
\item[6.] {\bf{Bounded Weight Change.}} The sum of $|\Delta w(e)|$ is at most $\epsilon'\cdot \hat{w}(M^*)$, where $M^*$ is a maximum weight matching with respect to $\hat{w}$.
\end{enumerate}
}}
Chang and Su showed that a maximal set $\Psi$ of augmenting paths in $G_{elig}$ can be obtained efficiently
in an expander-decomposed $H$-minor-free graph.
By carefully tweaking the definition of edge eligibility,
their modified Duan-Pettie framework fits well with expander decomposition in the \ensuremath{\mathsf{CONGEST}}\xspace model.
Notice that Chang and Su's scaling algorithm relies on a ``center process'' in each decomposed subgraph.
The center process of each subgraph can obtain the entire subgraph topology within $\operatorname{\text{{\rm polylog}}}(n)$ rounds, under the assumption that the underlying graph is $H$-minor-free.
A maximal set of augmenting paths in each subgraph can then be computed sequentially by its center process.
This exposes two non-trivial difficulties:
first, it is not clear whether the same framework generalizes to general graphs.
Furthermore, the sequential subroutine may return long augmenting paths, and it is not clear how to parallelize it efficiently in the PRAM model.
Our scaling framework for general graphs is modified from
both Duan-Pettie~\cite{DuanP14} and Chang-Su~\cite{CS22}.
In~\cref{sec:scaling} we present our modified scaling framework.
With the adaptation of~\cite{FMU22},
we believe our framework is simpler than that of Chang and Su~\cite{CS22}.
Because \cite{FMU22} only searches for short augmenting paths, the new framework can be implemented efficiently in the PRAM model.
\section{Our Modified Scaling Framework}\label{sec:scaling}
Our modified scaling framework maintains the following variables during the execution of the algorithm:
\begin{center}
\begin{tabular}{lll}
$M$:& The set of matched edges. &\\
$y(u)$: & The dual variable defined on each vertex $u \in V$. & \\
$z(B)$: & The dual variable defined on each $B \in \mathcal{V}_{odd}$. & \\
$\Omega$:& $\Omega \subseteq \mathcal{V}_{odd}$ is the set of \emph{active blossoms}. & \\
$\Delta w (e):$ & The weight modifier defined on each edge $e \in E$. &
\end{tabular}
\end{center}
Our algorithm runs for
$L+1=\lfloor\log_2 W\rfloor+1$ scales.
In each scale $i$, the same granularity value $\delta_i = \delta_0/2^i$ is used, where $\delta_0 := \epsilon'W$.
Moreover, the truncated weight $w_i(e)$ is now derived from the \emph{effective weight} $w(e) := \hat{w}(e)+\Delta w(e)$, namely $w_i(e)=\delta_i \lfloor w(e)/\delta_i\rfloor$.
There will be $O(1/\epsilon')$ iterations within each scale.
Within each iteration, the algorithm subsequently performs augmentations, updates to weight modifiers, dual adjustments, and updates to active blossoms (see \Cref{sec:iterations-within-each-scale} and \Cref{fig:edmondssearch}).
Similar to Chang and Su's framework~\cite{CS22},
the $y$-values of the free vertices no longer stay equal throughout the execution.
To make sure that there are still at most $O(1/\epsilon')$ iterations within each scale,
a special quantity $\tau$ is introduced.
Within each scale $i$,
the quantity $\tau$ will be decreased from $W/2^{i+1}-\delta_{i+1}/2$ to a specified target value $W/2^{i+2}-\delta_{i}/2$ (or $0$ if $i=L$).
Conceptually,
$\tau$ is the desired free vertex dual which gets decreased by $\delta_i/2$ after every dual adjustment as in \cite{DuanP14} (hence $O(1/\epsilon')$ iterations per scale).
However, in both \cite{CS22} and our framework,
not all free vertices participate in the dual adjustment step of every iteration.
Whenever a free vertex $u$ is omitted from a dual adjustment, the gap between $y(u)$ and $\tau$ gets increased by $\delta_i/2$.
Therefore, $\tau$ can be seen as a lower bound on all free vertex duals.
Our modified relaxed complementary slackness (see \Cref{prop:RCS}) guarantees that the sum of such gaps will be small.
\paragraph{Eligible Graph}
The eligible graph $G_{elig}$ is an unweighted subgraph of $G$ defined dynamically throughout the algorithm execution.
The edges in $G_{elig}$ are \emph{eligible edges}.
Conceptually, all eligible edges are the ``tight'' edges, which are either blossom edges or the ones that nearly violate the complementary slackness condition.
The precise definition of such eligible edges is defined in \Cref{def:eligible}.
\begin{definition}\label{def:eligible} At scale $i$, an edge is {\it eligible} if at least one of the following holds.
\begin{enumerate}[topsep=0.5ex,itemsep=-.5ex]
\item $e \in E_B$ for some $B \in \Omega$.
\item $e \notin M$ and $yz(e) = w_i(e) - \delta_i$.
\item \label{elig:3} $e \in M$ is a type $j$ edge and $yz(e) = w_i(e) + 2(\delta_j - \delta_i)$.
\end{enumerate}
\end{definition}
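A direct transcription of \Cref{def:eligible} as a Python predicate (a sketch; the maps $yz$, $w_i$, blossom membership, matching status, and edge types are assumed to be maintained elsewhere):

```python
# Eligibility test at scale i. All inputs are assumed to be maintained by
# the surrounding algorithm: yz(e), truncated weight w_i(e), blossom-edge
# membership, matched-edge membership, and each matched edge's type j.

def eligible(e, yz, w_i, delta, in_blossom, matched, edge_type, delta_of):
    if in_blossom(e):                                  # (1) e in E_B, B in Omega
        return True
    if not matched(e) and yz(e) == w_i(e) - delta:     # (2) tight unmatched edge
        return True
    if matched(e):                                     # (3) type-j matched edge
        j = edge_type(e)
        return yz(e) == w_i(e) + 2 * (delta_of(j) - delta)
    return False

# Toy check: an unmatched non-blossom edge with yz = w_i - delta is eligible.
assert eligible('e', yz=lambda e: 7, w_i=lambda e: 8, delta=1,
                in_blossom=lambda e: False, matched=lambda e: False,
                edge_type=None, delta_of=None)
```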
We remark that \cref{elig:3} is more constrained than its counterpart in the Duan-Pettie framework~\cite{DuanP14}.
With the new definition of \cref{elig:3}, the algorithm can retroactively make a previously eligible matched edge ineligible by adjusting its weight modifier $\Delta w(e)$.
Now we describe the relaxed complementary slackness properties that are guaranteed by our algorithm at the end of each scale.
\begin{property} (Relaxed Complementary Slackness)\label{prop:RCS}
\begin{enumerate}[topsep=0.5ex,itemsep=-.2ex]
\item \label{RCS:1} {\bf Granularity.} $z(B)$ and $w_i(e)$ are non-negative multiples of $\delta_i$ for all $B\in \mathcal{V}_{odd}$ and $e \in E$, and $y(u)$ is a non-negative multiple of $\delta_i/2$ for all $u \in V$.
\item {\bf Active Blossoms.} \label{RCS:2} All blossoms in $\Omega$ are full.
If $B \in \Omega$ is a root blossom then $z(B)>0$; if $B \notin \Omega$ then $z(B) = 0$. Non-root active blossoms may have zero $z$-values.
\item {\bf Near Domination.} \label{RCS:3} For all edge $e\in E$, $yz(e)\geq w_i(e)-\delta_i$.
\item \label{RCS:4}{\bf Near Tightness.} If $e$ is a type $j$ edge, then $yz(e) \leq w_i(e) + 2(\delta_j - \delta_i)$.
\item \label{RCS:5}{\bf Accumulated Free Vertex Duals.} The $y$-values of all free vertices have the same parity as multiples of $\delta_i / 2$. Moreover, the sum of the $y$-values of the free vertices is at most $\tau \cdot |F(G)| + \epsilon' \cdot \hat{w}(M^{*})$, where $M^{*}$ is a maximum weight matching w.r.t.~$\hat{w}$ and $\tau$ is a variable satisfying $y(v) \geq \tau$ for every free vertex $v$. The variable $\tau$ decreases to $0$ by the end of the algorithm.
\item \label{item:boundedw} {\bf Bounded Weight Change.} The weight modifiers will always be nonnegative. Moreover, the sum of $\Delta w(e)$ is at most $\epsilon' \cdot \hat{w}(M^{*})$.
\end{enumerate}
\end{property}
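The numerical conditions of \Cref{prop:RCS} (Granularity, Near Domination, Near Tightness) can be verified mechanically; the following Python sketch of such a checker runs on toy data of ours:

```python
# Check Granularity, Near Domination, and Near Tightness on given duals.

def is_multiple(x, d):
    return abs(x / d - round(x / d)) < 1e-9

def check_rcs(edges, y, z, w_i, M, edge_type, delta, delta_of):
    for u in y:                                   # Granularity of y
        assert y[u] >= 0 and is_multiple(y[u], delta / 2)
    for B in z:                                   # Granularity of z
        assert z[B] >= 0 and is_multiple(z[B], delta)
    for (u, v) in edges:
        yz = y[u] + y[v] + sum(z[B] for B in z if u in B and v in B)
        assert yz >= w_i[(u, v)] - delta          # Near Domination
        if (u, v) in M:                           # Near Tightness
            j = edge_type[(u, v)]
            assert yz <= w_i[(u, v)] + 2 * (delta_of(j) - delta)

# Toy instance: one matched type-0 edge at scale 1, delta_j = delta_0 / 2^j.
delta0 = 4.0
check_rcs(edges=[(1, 2)], y={1: 3.0, 2: 3.0}, z={},
          w_i={(1, 2): 5.0}, M={(1, 2)}, edge_type={(1, 2): 0},
          delta=delta0 / 2, delta_of=lambda j: delta0 / 2 ** j)
```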
The main modifications from \cite{DuanP14} and \cite{CS22} are the following:
\begin{itemize}[leftmargin=*]
\item We modified \cref{RCS:5} so that the $y$-values of the free vertices are no longer required to be equal, only to have the same parity as multiples of $\delta_i / 2$. This is because we may freeze a small fraction of free vertices to prevent their $y$-values from being decreased during an iteration. As a result, these $y$-values are no longer required to be zero in the end.
However, the sum of the $y$-values is upper bounded in the end.
\item All weight modifiers will be nonnegative in our scaling algorithm, so in \cref{item:boundedw} the absolute function is no longer needed.
\end{itemize}
With the modified relaxed complementary slackness properties, the following~\Cref{lem:approximateMWM} guarantees the desired approximate ratio of the matching at the end of the algorithm.
As the proof technique is similar to~\cite{DuanP14} and \cite{CS22}, we defer the proof of \Cref{lem:approximateMWM} to \Cref{apx:DPanalysis}.
\begin{lemma}\label{lem:approximateMWM}
Suppose that $y, z, M, \Omega$, and $\Delta w$ satisfy the relaxed complementary slackness condition at the end of scale $L$. Then $\hat{w}(M) \geq (1-\epsilon) \cdot \hat{w}(M^{*})$.
\end{lemma}
\subsection{Iterations in each Scale}
\label{sec:iterations-within-each-scale}
There will be $O(1/\epsilon')$ iterations within each scale.
In each scale $i$, the ultimate goal of the algorithm is to make progress on the primal $(M)$ and dual $(y, z)$ solutions so that they meet the complementary slackness properties (\Cref{prop:RCS}).
This can be achieved by iteratively searching for a set of augmenting paths $\Psi$, updating the matching $M\gets M\oplus\bigcup_{P\in\Psi}P$, and then performing dual adjustments on the $y$ and $z$ variables.
However, in order to ensure that dual variables are adjusted properly,
we enforce the following \emph{primal blocking conditions} to $\Psi$:
\begin{property}[{\it Primal Blocking Conditions}]\label{prop:PBC}~
\begin{enumerate}[(1),itemsep=0pt]
\item \label{item:first_block} No augmenting paths exist in $G_{elig}/\Omega$.
\item \label{item:second_block} No full blossoms can be reached from any free vertices in $G_{elig}/\Omega$ via alternating paths.
\end{enumerate}
\end{property}
Here we briefly explain why \Cref{prop:PBC} leads to satisfactory dual adjustments.
In the dual adjustment step,
the algorithm decreases the $y$-values of outer vertices in $\hat{V}_{out}(F(G_{elig}/\Omega))$ by $\delta_i/2$ and increases the $y$-values of inner vertices in $\hat{V}_{in}(F(G_{elig}/\Omega))$ by $\delta_i/2$.
\Cref{prop:PBC} ensures that $\hat{V}_{in}(F(G_{elig}/\Omega)) \cap \hat{V}_{out}(F(G_{elig}/\Omega)) = \emptyset$ and so the duals can be adjusted without ambiguity.
As mentioned in \cref{sec:tech_sum}, it is difficult to
achieve the primal blocking conditions efficiently in the \ensuremath{\mathsf{CONGEST}}\xspace and parallel models,
mainly because most of our black-box tools for obtaining $\Psi$ do not find long augmenting paths.
Fortunately, with the weight modifiers $\Delta w$ introduced in~\cite{CS22},
we are now allowed to remove some matched edges and free vertices from $G_{elig} / \Omega$, which enables a neat trick of \emph{retrospective eligibility modification}:
after \emph{some} set of augmenting paths $\Psi$ is returned, we modify $G_{elig}$ so that $\Psi$ satisfies \Cref{prop:PBC}.
To remove a matched edge $e$ from $G_{elig}$, we simply add $\delta_i$ to $\Delta w(e)$ and so $e$ becomes ineligible.
To remove a free vertex $f$, we add $\delta_i$ to the $y$-values of vertices in $\hat{f^\Omega}$ and decrease $z(B)$ by $2\delta_i$ where $B$ is the root blossom containing $f$. By doing this, the vertex $f^{\Omega}$ is isolated from all the other vertices in $G_{elig}/\Omega$ because all the edges going out of $f^{\Omega}$ must become ineligible (note that all these edges must be unmatched). Additionally,
all the internal edges inside $\hat{f}^{\Omega}$ will have their $yz$-values unchanged.
Note that the reason that we increase the $y$-values by $\delta_i$ instead of $\delta_i/2$ is that we need to synchronize the parity of the $y$-values (as a multiple of $\delta_i/2$) as a technicality required for the analysis.
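The bookkeeping of this removal step can be checked on a toy instance: raising $y$ by $\delta_i$ on $\hat{f^\Omega}$ while lowering $z(B)$ by $2\delta_i$ leaves the $yz$-value of every internal edge unchanged and raises the $yz$-value of every outgoing edge by $\delta_i$. A Python sketch with illustrative data:

```python
# Removing a free vertex f: y(u) += delta for u in hat(f^Omega),
# z(B) -= 2 * delta for the root blossom B containing f.

delta = 1.0
B = frozenset({1, 2, 3})                     # root blossom containing f = 1
y = {1: 2.0, 2: 2.0, 3: 2.0, 4: 5.0}        # vertex 4 is outside the blossom
z = {B: 2.0}

def yz(u, v):
    return y[u] + y[v] + sum(zB for S, zB in z.items() if u in S and v in S)

internal_before = yz(2, 3)                   # edge inside the blossom
outgoing_before = yz(3, 4)                   # edge leaving the blossom

for u in B:                                  # the removal update
    y[u] += delta
z[B] -= 2 * delta

assert yz(2, 3) == internal_before           # internal yz-values unchanged
assert yz(3, 4) == outgoing_before + delta   # outgoing edges become ineligible
```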
We present the details of the entire scaling algorithm in \Cref{fig:edmondssearch}.
The {\textbf{augmentation and blossom shrinking step}} is the step that adjusts the primal variables $M$ (and also $\Omega$) and removes some matched edges and free vertices to achieve the primal blocking conditions.
It uses procedure \textsc{Approx\_Primal}, which we describe in \cref{sec:MWMcongest}, that runs in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ rounds and returns a set of matched edges $M'$ and free vertices $F'$ of sizes $O((\epsilon / \log n)\cdot |M|)$ as well as a set of augmenting paths $\Psi$ and a set of blossoms $\Omega'$ such that the primal blocking conditions hold in $(G_{elig} - F' - M' - \Psi) / (\Omega \cup \Omega') $.
Assuming such a procedure exists, we give a full analysis in \cref{apx:DPanalysis} to show that the algorithm runs in $\operatorname{\text{{\rm poly}}}(1/\epsilon, \log n)$ rounds and outputs a $(1-\epsilon)$-{\sc mwm}{}.
\begin{figure}[h!t]
\centering
\framebox{
\begin{minipage}{0.97\textwidth}
\small
$M \leftarrow \emptyset$, $\Omega \leftarrow \emptyset$, $\delta_0 \leftarrow \epsilon' W$, $\tau = W/2 - \delta_0/2$\\ $y(u) \leftarrow \tau$ for all $u \in V$, $z(B) \leftarrow 0$ for all $B \in \mathcal{V}_{odd}$, $\Delta w(e) \leftarrow 0$ for $e \in E$. \\
Execute scales $i = 0, 1, \ldots, L=\log_{2} W$ and return the matching $M$. \\
\underline{Scale $i$:}
\begin{itemize}[leftmargin=*]
\item[--] Repeat the following until $\tau = W/2^{i+2} - \delta_i / 2$ if $i \in [0,L)$, or until it reaches $0$ if $i = L$.
\begin{enumerate}[itemsep=0ex, leftmargin=*]
\item {\bf Augmentation and Blossom Shrinking:} \label{step:2} \label{step:start}
\begin{itemize}
\item Let $\lambda = \epsilon'/(12(L+1))$. Invoke \textsc{Approx\_Primal}$(G_{elig},M,\Omega, \lambda)$ to obtain:
\begin{enumerate}[leftmargin=*]
\item The set of edge-disjoint augmenting path $\Psi$.
\item The set of free vertices $F'$ and the set of matched edges $M'$ that are to be removed.
\item The set of new blossoms $\Omega'$.
\end{enumerate}
\item Set $M \leftarrow M \oplus \bigcup_{P \in \Psi} P$.
\item Set $\Omega \leftarrow \Omega \cup \Omega'$
\item For each $e \in M'$, set $\Delta w(e) \leftarrow \Delta w(e) + \delta_i$.
\item For each $f \in F'$, set $y(u) \leftarrow y(u) + \delta_i$ for $u \in \hat{f}^{\Omega}$ and $z(B) \leftarrow z(B) - 2\delta_i $ for the root blossom $B \ni f$ if it exists. (Note that $z(B)$ can be as small as $-\delta_i$ after this step, but it will become non-negative after the dual adjustment).
\end{itemize}
\item {\bf Dual Adjustment:}
\begin{itemize}
\item $\tau \leftarrow \tau - \delta_i / 2$
\item $y(u) \leftarrow y(u) - \delta_i / 2$, if $u \in \hat{V}_{out}(F(G_{elig}/\Omega))$
\item $y(u) \leftarrow y(u) + \delta_i / 2$, if $u \in \hat{V}_{in}(F(G_{elig}/\Omega))$
\item $z(B) \leftarrow z(B) + \delta_i$, if $B \in \Omega$ is a root blossom with $B \subseteq \hat{V}_{out}(F(G_{elig}/\Omega))$
\item $z(B) \leftarrow z(B) - \delta_i$, if $B \in \Omega$ is a root blossom with $B \subseteq \hat{V}_{in}(F(G_{elig}/\Omega))$
\end{itemize}
\item {\bf Blossom Dissolution:} \label{step:end}
\begin{itemize}
\item Some root blossoms might have zero $z$-values after the dual adjustment step. Dissolve them by removing them from $\Omega$ until there are no such root blossoms. Update $G_{elig}$ with the new $\Omega$.
\end{itemize}
\end{enumerate}
\item[--] If $i \in [0,L)$, set $\delta_{i+1} \leftarrow \delta_{i} / 2$, $\tau \leftarrow \tau + \delta_{i+1}$ and $y(u) \leftarrow y(u)+ \delta_{i+1}$ for every $u \in V$.
\end{itemize}
\end{minipage}
}
\caption{The modified scaling framework.}
\label{fig:edmondssearch}
\end{figure}
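The $\tau$ schedule in \Cref{fig:edmondssearch} can be simulated directly to confirm that every scale performs $O(1/\epsilon')$ dual adjustments; in the Python sketch below the parameters are illustrative, and all quantities are powers of two, so floating-point arithmetic is exact:

```python
from math import log2

# Simulate the tau schedule of the scaling framework (no matching logic):
# within scale i, tau decreases by delta_i / 2 until the scale's target.
W, eps = 1024, 1 / 8                      # powers of two (illustrative)
L = int(log2(W))
delta = eps * W                           # delta_0
tau = W / 2 - delta / 2

iters = []
for i in range(L + 1):
    target = 0 if i == L else W / 2 ** (i + 2) - delta / 2
    steps = 0
    while tau > target:
        tau -= delta / 2                  # one dual adjustment
        steps += 1
    iters.append(steps)
    if i < L:                             # between-scale adjustment
        delta /= 2                        # delta_{i+1} = delta_i / 2
        tau += delta
assert tau == 0
assert max(iters) <= 2 / eps              # O(1/eps') iterations per scale
```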
\paragraph{Implementation in {\textsf{CONGEST} model}}
In the \ensuremath{\mathsf{CONGEST}}\xspace model, all quantities $M, y(u), z(B), \Omega$, and $\Delta w(e)$ shall be stored and accessed locally.
We describe how a \ensuremath{\mathsf{CONGEST}}\xspace algorithm stores these variables in \cref{appendix:data-structures} in a straightforward way (with a multiplicative slowdown of the maximum active blossom size).
We remark that there is no need to store $\tau$ as a variable since the number of iterations per scale can be pre-computed at the beginning of the algorithm.
\paragraph{Implementation in Parallel Model}
We can simulate the \ensuremath{\mathsf{CONGEST}}\xspace implementation mentioned above in the CREW PRAM model.
The CREW PRAM implementation of
\textsc{Approx\_Primal} will be described in \cref{sec:pram-approx-primal}.
\section{The Parallel Depth-First Search}\label{sec:FMU}
Fischer, Mitrovic, and Uitto~\cite{FMU22} give a deterministic algorithm for $(1-\epsilon)$-approximate {\sc mcm}{} in the semi-streaming model as well as in other models such as \ensuremath{\mathsf{CONGEST}}\xspace and the massive parallel computation (MPC) model.
The core of their algorithm is the procedure \textsc{Alg-Phase}, which searches for an almost maximal set of (short) augmenting paths.
In particular,
\textsc{Alg-Phase} runs a parallel DFS from every free vertex and returns a set of augmenting paths $\Psi$ and two small sets of vertices $V'$ and $V_A$ such that there exist no augmenting paths of length $O(1/\epsilon)$ in $G-\Psi-V'-V_A$.
The DFS originated from a free vertex $\alpha$ defines a search \emph{structure}, denoted as $S_\alpha$.
For efficiency, the algorithm imposes several restrictions on the DFS over these structures, parametrized by a \emph{pass limit} $\tau_{\max}$, a \emph{size limit} $\operatorname{\text{\sc limit}}$, and a \emph{depth limit} $\ell_{\max}$.
We provide an overview of the \textsc{Alg-Phase} algorithm of~\cite{FMU22} in~\cref{sec:fmu-overview}.
Unfortunately, directly running {\textsc{Alg-Phase}}\xspace on $G$ to find an almost maximal set of augmenting paths, without considering the blossoms, would break the scaling framework.
For example, the framework does not allow the search to return two disjoint augmenting paths that pass through the same blossom.
We now describe the modified {\textsc{Alg-Phase}}\xspace that works for the contracted graph $G/\Omega$.
\subsection{{\normalfont{\textsc{Vertex-Weighted-Alg-Phase}}\xspace} over the Contracted Graph $G/\Omega$}\label{sec:our-fmu}
The goal of the modified {\textsc{Alg-Phase}}\xspace is clear: all we need to do is come up with a set $\Psi^\Omega$ of almost maximal short augmenting paths on $G/\Omega$.
After $\Psi^\Omega$ is found, the algorithm recovers $\Psi$, the actual corresponding augmenting paths on $G$.
Moreover, the algorithm returns two small sets of vertices $V'$ and $V_A$ so that no short augmenting path can be found in $(G-\Psi-V'-V_A)/\Omega$.
Observe that the lengths of the paths in $\Psi$ on $G$ can be much longer than those of the corresponding paths in $\Psi^\Omega$ on $G/\Omega$, depending on the sizes of the blossoms in $\Omega$.
This observation motivates us to consider a \emph{vertex-weighted} version of \textsc{Alg-Phase}.
When computing the length of an augmenting path on $G/\Omega$, each contracted root blossom now has a weight corresponding to the number of matched edges inside the blossom.
Let us formally define the \emph{matching distance} and \emph{matching length} in the contracted graph.
\begin{definition}
Given $u \in G/\Omega$, define $\|u\|=|\hat{u}|$ to be the number of vertices represented by $u$ in the original $G$. Given a set of vertices $S$, define $\|S\| = \sum_{u\in S} \|u\|$.\end{definition}
\begin{definition}\label{def:new_matching_length}
Let $M$ be a matching on $G$ and $\Omega$ be a set of full blossoms with respect to $M$ on $G$.
Define $\ensuremath{\tilde{M}} := M/\Omega$ to be the set of corresponding matched edges of $M$ on $G/\Omega$.
Given an alternating path $P = (u_1, \ldots, u_k)$ in $G / \Omega$, define the matching length of $P$, $\|P\|_{M} = |P \cap \ensuremath{\tilde{M}}| + \sum_{i=1}^{k} (\|u_i\|-1)/2$.
For any matched edge $e=uv\in \ensuremath{\tilde{M}}$ we define $\|e\|_M=(\|u\|+\|v\|)/2$ which corresponds to the total number of matched edges in the blossoms $\hat{u}$ and $\hat{v}$ as well as the edge $e$ itself.
\end{definition}
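As a concrete illustration of \cref{def:new_matching_length}, the following Python sketch computes $\|P\|_M$ and $\|e\|_M$ directly from the blossom sizes (the graph encoding and all names here are our own, purely for illustration; they are not part of the algorithm's specification):

```python
def matching_length(path, matched, size):
    """Matching length ||P||_M of an alternating path in G/Omega.

    path    -- sequence of (contracted) vertices u_1, ..., u_k
    matched -- set of frozensets {u, v}: matched edges of M/Omega
    size    -- size[u] = ||u||, number of original vertices inside u
               (1 for an ordinary vertex, odd for a contracted blossom)
    """
    # |P ∩ M~|: matched edges lying on the path itself
    on_path = sum(1 for a, b in zip(path, path[1:])
                  if frozenset((a, b)) in matched)
    # matched edges hidden inside the contracted blossoms on the path
    inside = sum((size[u] - 1) // 2 for u in path)
    return on_path + inside

def edge_weight(u, v, size):
    """||e||_M = (||u|| + ||v||) / 2 for a matched edge e = uv: the edge
    itself plus the matched edges inside the blossoms u and v."""
    return 1 + (size[u] - 1) // 2 + (size[v] - 1) // 2
```

For instance, a path crossing a contracted blossom of size $3$ via one matched edge has matching length $2$: the matched edge on the path plus one matched edge hidden inside the blossom.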
In the DFS algorithm searching for augmenting paths, a search process may visit a matched edge $e$ in both directions. We distinguish these two situations by giving an orientation to $e$, denoting them as \emph{matched arcs} $\vec{e}$ and $\cev{e}$.
\cref{def:new_matching_length} gives a natural generalization of matching distances on $G/\Omega$:
\begin{definition}\label{def:new_distance} Given a subgraph $H \subseteq G/\Omega$, a set of free vertices $F$, a matching $M$, and a matched arc $\vec{e}$, the matching distance to $\vec{e}$, $d_{H,M}(F,\vec{e})$, is defined to be the shortest matching length over all alternating paths in $H$ that start from a free vertex in $F$ and end at $\vec{e}$. When the first parameter is omitted, $d_{H,M}(\vec{e})$ is the shortest matching length over all alternating paths in $H$ that start from any free vertex in $H$ and end at $\vec{e}$.
\end{definition}
Throughout this paper,
if an alternating path starts with a free vertex $u_0$ but ends at a non-free vertex, we conveniently denote this alternating path by $(u_0, \vec{e_1}, \vec{e_2}, \ldots, \vec{e_t})$, where $u_0$ is the starting free vertex and $e_1, e_2, \ldots, e_t$ is the sequence of the matched edges along the path.
Also for convenience we define $\|\vec{e}\|_M=\|e\|_M$ for each matched arc $\vec{e}$.
Let $\lambda$ be a parameter\footnote{To fit this subroutine into the scaling framework shown in~\Cref{fig:edmondssearch}, and to avoid confusion with the already-defined parameter $\epsilon$, we introduce the parameter $\lambda$ for the error ratio.}.
Similar to the {\textsc{Alg-Phase}}\xspace algorithm, our {\textsc{Vertex-Weighted-Alg-Phase}}\xspace returns a collection of disjoint augmenting paths $\mathcal{P}$ where each augmenting path has a matching length at most $O(\operatorname{\text{{\rm poly}}}(1/\lambda))$; two sets of vertices to be removed $V'$ and $V_A$ with their total weight bounded by $\|V'\|=O(\lambda|M|)$ and $\|V_A\|=O(\lambda |M|)$; and the collection of search structures $\mathcal{S}$ where each search structure $S_\alpha\in\mathcal{S}$ has weight $\|S_\alpha\|=O(\operatorname{\text{{\rm poly}}}(1/\lambda))$.
We summarize the vertex-weighted FMU algorithm below:
\begin{lemma}\label{lemma:vertex-weighted-FMU}
Let $\lambda$ be a parameter.
Let $G$ be the network with weak diameter $\operatorname{\text{{\rm poly}}}(1/\lambda)$ and $M$ be the current matching.
Let $\Omega$ be a laminar family of vertex subsets (e.g., the current collection of blossoms) such that
each set $B\in\Omega$ contains at most $C_{\max{}}=O(1/\lambda^7)$ vertices.
Define the DFS parameters $\tau_{\max}:=1/\lambda^4$, $\operatorname{\text{\sc limit}}:=1/\lambda^2$, and $\ell_{\max}:=1/\lambda$.
Then, there exists a \ensuremath{\mathsf{CONGEST}}\xspace and a \ensuremath{\mathsf{CREW\ PRAM}}\xspace algorithm {\textsc{Vertex-Weighted-Alg-Phase}}\xspace that, in $\operatorname{\text{{\rm poly}}}(1/\lambda)$ time, returns $(\mathcal{P}, V', V_A, \mathcal{S})$ satisfying the following:
\begin{enumerate}[itemsep=0pt]
\item $\|S_{\alpha}\| \leq C_{\max}$ for each structure $S_\alpha\in \mathcal{S}$, \hfill where $C_{\max{}} := \tau_{\max}\cdot (\ell_{\max}+1)\cdot \operatorname{\text{\sc limit}}$.
\item $\|V'\| \leq C_{\max{}}\cdot (\ell_{\max}+1) \cdot (2|\mathcal{P}| + \lambda^{32}\tau_{\max}|M|) $.
\item $\|V_{A}\| \leq h(\lambda)\cdot (2\ell_{\max}) \cdot |M|$, \hfill where $h(\lambda):=\frac{4+2/\lambda}{\lambda\cdot \tau_{\max}} + \frac{2}{\operatorname{\text{\sc limit}}}$.
\item No augmenting path $P$ with $\|P\|_M \leq \ell_{\max}$ exists in $(G/\Omega) \setminus (V' \cup V_{A})$.
\item For each matched arc $\vec{e}$, if $d_{(G/\Omega)\setminus (V'\cup V_A), M}(\vec{e}) \le \ell_{\max}$, then there exists a $S_\alpha\in \mathcal{S}$ such that $\vec{e}$ belongs to $S_\alpha$.
\end{enumerate}
\end{lemma}
In \cref{appendix:proof-vertex-weighted-FMU}
we prove \cref{lemma:vertex-weighted-FMU}.
Specifically, we show that
it is possible to run the {\textsc{Vertex-Weighted-Alg-Phase}}\xspace algorithm on $G/\Omega$ while simulating it over the underlying network $G$, at the cost of an additional $O(C_{\max{}}^2)$ factor in the round complexity.
Interestingly, the {\textsc{Vertex-Weighted-Alg-Phase}}\xspace itself is implemented via a black-box reduction back to the unweighted {\textsc{Alg-Phase}}\xspace procedure of \cite{FMU22}.
\section{Augmentation and Blossom Shrinking}
\label{sec:MWMcongest}
The main goal of this section is to prove the following theorem:
\begin{theorem}\label{thm:augment_and_shrink}
Let $\lambda$ be a parameter.
Given a graph $G$, a matching $M$, and a collection of active blossoms $\Omega$ where each active blossom $B\in\Omega$ contains at most $C_{\max{}}=O(1/\lambda^7)$ vertices, there exists a $\operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ time algorithm in the \ensuremath{\mathsf{CONGEST}}\xspace model that identifies the following:
\begin{enumerate}[(1),itemsep=0pt]
\item\label{enum:main-aug-path-length} A set $\Psi$ of vertex-disjoint augmenting paths with matching lengths at most $\operatorname{\text{{\rm poly}}}(1/\lambda)$.
\item\label{enum:main-small-blossoms} A set of new blossoms $\Omega'$ in $G/\Omega$, where the size of each blossom is at most $C_{\max{}}$.
\item\label{enum:main-not-many-removed-edges} A set of at most $O(\lambda \cdot |M|)$ matched edges $M'$ and free vertices $F'$.
\end{enumerate}
\noindent Let $G_{\textrm{alive}}$ be the contracted graph $G_{\textrm{alive}} := (G-\Psi-M'-F')/(\Omega\cup \Omega')$ after new blossoms are found and $F_{\textrm{alive}} := F(G_{\textrm{alive}})$ be the remaining free vertices.
Then, the algorithm also obtains:
\begin{enumerate}[(1),itemsep=0pt]
\setcounter{enumi}{3}
\item\label{enum:main-no-aug-path} \label{enum:main-no-outer-outer}
The sets $\hat{V}_{in}(F_{\textrm{alive}})$ and $\hat{V}_{out}(F_{\textrm{alive}})$.
\end{enumerate}
\noindent These objects are marked locally in the network (see \cref{appendix:data-structures}).
Moreover,
$V_{out}(F_{\textrm{alive}}) \cap V_{in}(F_{\textrm{alive}}) = \emptyset$.
\end{theorem}
Notice that the last statement of \Cref{thm:augment_and_shrink} implies that
no blossoms can be detected in $G_{\textrm{alive}}$ from a free vertex, and thus there is no augmenting path from $F_{\textrm{alive}}$ on $G_{\textrm{alive}}$.
That is, $(G_{elig}-M'-F')/(\Omega\cup \Omega')$ meets the primal blocking conditions (\Cref{prop:PBC}) after augmenting all the paths in $\Psi$.
\medskip
\begin{algorithm}[htb]
\caption{\textsc{Approx\_Primal}$(G,M,\Omega, \lambda)$}\label{alg:main}
\begin{algorithmic}[1]\small
\Require Unweighted graph $G$ with weak diameter $O(\frac{\log^3 n}{\epsilon})$, a matching $M$, a collection of blossoms $\Omega$, and $\lambda$ (recall from \Cref{fig:edmondssearch} we set $\lambda=\Theta(\frac{\epsilon'}{\log W})$).
\Ensure Collection of augmenting paths $\Psi$, a set of blossoms $\Omega'$, a set of matched edges $M'$ and a set of free vertices $F'$. These objects as well as $\hat{V}_{in}$ and $\hat{V}_{out}$ are represented locally (see \cref{appendix:data-structures}.)
\State Compute $|M|$.
\vspace*{0.5em}\LineComment{\textbf{Step 1:} Repeatedly search for augmenting paths.}
\Repeat
\label{ln:repeat-begin}
\State $(\mathcal{P}, V', V_{A}, \mathcal{S}) \leftarrow{\textsc{Vertex-Weighted-Alg-Phase}}\xspace(G/\Omega, M, \lambda)$ \Comment{See \cref{alg:vertex-weighted-fmu}.}
\State $\Psi \leftarrow \mathcal{P} \cup \Psi$.
\State $G\gets G\setminus \mathcal{P}$.
\Until{$|\mathcal{P}|
\leq
\lambda \cdot |M| / (C_{\max{}}(\ell_{\max}+1)) $}\label{ln:repeat-end}
\vspace*{0.5em}
\LineComment{\textbf{Step 2:} Remove matched edges and free vertices returned by the last call to {\textsc{Alg-Phase}}\xspace.}
\State Set $M' \leftarrow M(V') \cup M(V_{A})$ and $F' \leftarrow F(V') \cup F(V_{A})$.\label{ln:collect-active-path}
\vspace*{0.5em}
\LineComment{\textbf{Step 3:} Detect new blossoms.}
\For{each $S_{\alpha} \in \mathcal{S}$}
\label{ln:detect-blossom-for-loop}
\State Detect all the (possibly nested) blossoms in $S_{\alpha}$ and add them to $\Omega'$.\label{ln:detect-blossom-step}
\EndFor
\vspace*{0.5em}
\LineComment{\textbf{Step 4:} Remove some matched edges and free vertices such that all long augmenting paths disappear.}
\State
Define $G_{\textrm{alive}} := (G-\Psi-M'-F')/(\Omega\cup \Omega')$, $F_{\textrm{alive}}=F(G_{\textrm{alive}})$, and $\ensuremath{\tilde{M}}:=M(G_{\textrm{alive}})$.
\State
Use \cref{lemma:simulate-bfs} to compute distance labels $\ell(\vec{e})\in \{d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e}), \infty \}$
for each matched arc $\vec{e}\in\ensuremath{\tilde{M}}$.
\label{ln:bfs}
\State Let $E_i, F_i\gets \emptyset$ for all $i\in \{1, 2, \dots, \lfloor\ell_{\max}/2\rfloor\}$.
\For{each matched arc $\vec{e}\in\ensuremath{\tilde{M}}$ such that $\ell(\vec{e}) \neq \infty$}
\State Add the corresponding matched edge $\hat{e}\in M$ to $E_i$ for all $i\in [\ell(\vec{e}) - \|e\|_M, \ell(\vec{e})]\cap [1, \ell_{\max}/2]$.
\EndFor
\For{each free vertex $f \in F_{\textrm{alive}}$}
\State Add the corresponding free vertex $\dot f\in F$ to $F_i$ for all $i\in [1, (\|f\|-1)/2]$.
\EndFor
\State Let $i^{*} = \arg \min_{1 \leq i \leq \ell_{\max}/2} \{|E_i| + |F_i|\}$.
\label{ln:partition-edges}
\State Add all the matched edges in $E_{i^{*}}$ to $M'$ and all free vertices in $F_{i^*}$ to $F'$.
\label{ln:lastline}
\State\Return $\Psi, \Omega', M'$, and $F'$.
\end{algorithmic}
\end{algorithm}
To prove \cref{thm:augment_and_shrink}, we propose
the main algorithm \cref{alg:main}.
The algorithm consists of four steps.
In the first step (\cref{ln:repeat-begin} to \cref{ln:repeat-end}), the algorithm repeatedly invokes the {\textsc{Alg-Phase}}\xspace{} procedure and obtains a collection $\mathcal{P}$ of (short) augmenting paths.
These augmenting paths, once found, are temporarily removed from the graph.
The loop ends once the number of newly found augmenting paths is at most $\lambda\cdot |M|/(C_{\max{}}(\ell_{\max}+1))$.
Notice that we are able to count $|\mathcal{P}|$ in $\operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ time
because $G$ has weak diameter $O(\log^3 n/\epsilon)$.
The algorithm then utilizes the output $(\mathcal{P}, V', V_A, \mathcal{S})$ from the last execution of ${\textsc{Vertex-Weighted-Alg-Phase}}\xspace{}$ in the subsequent steps.
In step two (\cref{ln:collect-active-path}) the algorithm removes edges in $M(V')\cup M(V_A)$ and free vertices in $F(V')\cup F(V_A)$. This ensures that no short augmenting paths can be found from the remaining free vertices.
In the third step (\cref{ln:detect-blossom-for-loop} to \cref{ln:detect-blossom-step}), the algorithm detects and contracts $\Omega'$ ---
the set of all blossoms within any part of $\mathcal{S}$.
We remark that there could still be an edge connecting two outer vertices in $V_{out}(G_{\textrm{alive}})$ after the contraction of the blossoms from $\Omega'$, leading to an undetected augmenting path.
However, in this case, we are able to show that at least one endpoint of the edge must be far enough from any remaining free vertex, so
after the fourth step, such an \emph{outer-outer edge} no longer belongs to any augmenting path.
The fourth step (\cref{ln:bfs} to \cref{ln:lastline}) of the algorithm assembles the collection of matched edges $\{E_{i}\}_{i=1,2,\ldots, \floor{\ell_{\max}/2}}$ and free vertices $\{F_i\}_{i=1,2,\ldots, \floor{\ell_{\max}/2}}$.
Each pair of sets $(E_{i}, F_i)$ has the property that after removing all matched edges in $E_i$ and all free vertices in $F_i$, there are no more far-away outer-outer edges (and thus no augmenting paths).
Therefore, to eliminate all far away outer-outer edges, the algorithm chooses the index $i^*$ with the smallest $|E_{i^*}|+|F_{i^*}|$ and then removes all matched edges in $E_{i^*}$ and free vertices in $F_{i^*}$.
Intuitively, any sufficiently long alternating path starting from a free vertex is intercepted at matching distance $i^*$ by $E_{i^*}$ and $F_{i^*}$.
Let $G_{\textrm{alive}}$ be the current contracted graph $(G-\Psi-M'-F')/(\Omega\cup \Omega')$ after removing a set of augmenting paths $\Psi$, a set of matched edges $M'$ and a set of free vertices $F'$.
To form the collection of matched edges $\{E_{i}\}$ and free vertices $\{F_i\}$, the algorithm runs a Bellman-Ford style procedure that computes distance labels to each matched arc $\ell(\vec{e})$.
The goal is to obtain $\ell(\vec{e})=d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$ whenever this matching distance is no more than $\ell_{\max} + \|e\|_M$, and $\ell(\vec{e})=\infty$ otherwise.
For each matched arc $\vec{e}$ with a computed matching length $\ell(\vec{e}) = d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})\le \ell_{\max} + \|e\|_M$, we add the corresponding matched edge $\hat{e}\in M$ to $E_{i}$ for all integers $i\in [\ell(\vec{e})-\|e\|_M, \ell(\vec{e})] \cap [1, \ell_{\max}/2]$.
In addition, for each free vertex $f\in F_{\textrm{alive}}$, its corresponding free vertex $\dot f\in F$ is added to $F_i$ for all $i\in [1, (\|f\|-1)/2]$.
Finally, $i^* = \arg\min_{i}\{|E_i|+|F_i|\}$ can be computed and then $E_{i^*}$ and $F_{i^*}$ are removed from $G$.
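A minimal sequential sketch of this layer construction and selection (Step 4) is given below, assuming the distance labels and arc weights have already been computed; for simplicity the sketch identifies each matched arc with its underlying matched edge, and all names are illustrative:

```python
from math import inf

def cheapest_layer(labels, weights, free_weights, ell_max):
    """Step 4 sketch: build the layered sets E_i, F_i and pick i*.

    labels       -- matched edge e -> finite distance label ell(e), or inf
    weights      -- matched edge e -> ||e||_M
    free_weights -- free vertex f -> ||f||
    ell_max      -- depth limit; layers are i = 1, ..., ell_max // 2
    """
    layers = ell_max // 2
    E = {i: set() for i in range(1, layers + 1)}
    F = {i: set() for i in range(1, layers + 1)}
    for e, ell in labels.items():
        if ell == inf:
            continue
        # e belongs to E_i for all i in [ell - ||e||_M, ell] ∩ [1, layers]
        for i in range(max(1, ell - weights[e]), min(layers, ell) + 1):
            E[i].add(e)
    for f, w in free_weights.items():
        # f belongs to F_i for all i in [1, (||f|| - 1) / 2]
        for i in range(1, min(layers, (w - 1) // 2) + 1):
            F[i].add(f)
    # i* minimizes the number of removed matched edges and free vertices
    i_star = min(range(1, layers + 1), key=lambda i: len(E[i]) + len(F[i]))
    return i_star, E[i_star], F[i_star]
```

Since every edge appears in at most $\|e\|_M+1$ layers, the averaging argument of the next subsection bounds the cost of the chosen layer $i^*$.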
The rest of the section proves \cref{thm:augment_and_shrink}.
\subsection{Correctness of \cref{thm:augment_and_shrink}}
First of all, we notice that $\Psi$ is comprised of augmenting paths returned by {\textsc{Alg-Phase}}\xspace, which is parametrized to return augmenting paths of matching length at most $2C_{\max{}} = \operatorname{\text{{\rm poly}}}(1/\lambda)$. Thus \cref{enum:main-aug-path-length} of \cref{thm:augment_and_shrink} holds.
Moreover, since in Step 3 (\cref{ln:detect-blossom-step}) the algorithm only searches for blossoms within each part in $\mathcal{S}$, the size of each blossom must be at most $C_{\max{}}$. Thus \cref{enum:main-small-blossoms} holds.
Now, we turn our attention to \cref{enum:main-not-many-removed-edges}.
We notice that the set of removed matched edges $M'$ and free vertices $F'$
are only affected in \cref{ln:collect-active-path} and \cref{ln:lastline}.
\cref{lemma:main-bound-removed-edges-1}
focuses on \cref{ln:collect-active-path}, and \cref{lemma:size-of-layered-matched-edges} focuses on \cref{ln:lastline}:
\begin{lemma}\label{lemma:main-bound-removed-edges-1}
After the execution of \cref{ln:collect-active-path}, both the size of the sets $M'$ and $F'$ are $O(\lambda\cdot |M|)$.
\end{lemma}
\begin{proof}
It suffices to upper bound the four quantities $|M(V')|$, $|M(V_A)|$, $|F(V')|$, and $|F(V_A)|$ individually.
Since the repeat loop (\cref{ln:repeat-begin} to \cref{ln:repeat-end}) stops whenever the number of augmenting paths $|\mathcal{P}|$ is upper bounded by $\lambda\cdot |M|/(C_{\max{}}(\ell_{\max}+1))$, we have
\begin{equation}
\begin{aligned}
|M(V')| &\le \|V'\|/2\\
&\le C_{\max{}} (\ell_{\max}+1) \cdot (|\mathcal{P}|
+ \lambda^{32} \tau_{\max} |M|/2) \ \ \ \ \ \ \ \ \ \ \ \text{(By property 2 of \cref{lemma:vertex-weighted-FMU})}\\
&\le C_{\max{}} (\ell_{\max}+1) \cdot \lambda\cdot |M| / (C_{\max{}}(\ell_{\max}+1)) + O(\lambda^{32} C_{\max{}}\ell_{\max}\tau_{\max} |M|) \\
&= O(\lambda\cdot |M|) \hspace{8em} (C_{\max{}}=O(1/\lambda^7), \ell_{\max}=1/\lambda, \text{ and }\tau_{\max}=1/\lambda^4)
\end{aligned}
\end{equation}
In addition, we have $|F(V')| \le \|V'\| = O(\lambda\cdot |M|)$.
Now, because each active path can be decomposed into exactly one free vertex and several matched edges, we have
\begin{equation}
\begin{aligned}
|F(V_A)| + 2|M(V_A)| &= |V_A| \le \|V_A\| \\
&\le h(\lambda)\cdot (2\ell_{\max})\cdot |M| & \text{(By property 3 of \cref{lemma:vertex-weighted-FMU})} \\
&= h(\lambda)\cdot (2/\lambda)\cdot |M| \\
&= O(\lambda\cdot |M|) & \text{(Notice that $h(x)=O(x^2)$ for all small $x$)}\\
\end{aligned}
\end{equation}
Therefore, we conclude that
$|M'|= O(\lambda\cdot|M|)$ and
$|F'|= O(\lambda\cdot|M|)$ after \cref{ln:collect-active-path}.
\end{proof}
Now, we claim that in \cref{ln:lastline}, the algorithm removes at most $O(\lambda\cdot |M|)$ matched edges and free vertices.
The claim is implied by the following \cref{lemma:size-of-layered-matched-edges}, which states that the total size of the collection does not exceed $2|M|$.
Therefore, by the fact that $i^*$ minimizes the total size $|E_{i^*}|+|F_{i^*}|$, \cref{ln:lastline} adds at most $2|M|/\floor{\ell_{\max}/2} = O(\lambda\cdot |M|)$ matched edges and free vertices to $M'$ and $F'$.
\begin{lemma}\label{lemma:size-of-layered-matched-edges}
$\sum_{i=1}^{\floor{\ell_{\max}/2}} |E_{i}| + |F_{i}| \le 2|M|$.
\end{lemma}
\begin{proof}
\begin{align*}
\sum_{i=1}^{\floor{\ell_{\max}/2}} |E_i|+|F_i| &\le \sum_{e\in \ensuremath{\tilde{M}}} (\|\vec{e}\|_M+1+\|\cev{e}\|_M+1) + \sum_{f\in F_{\textrm{alive}}} (\|f\|-1)/2 \tag{the total size equals the number of occurrences of each arc and free vertex in the collection}\\
&\le 2|\ensuremath{\tilde{M}}| + \sum_{e=uv\in \ensuremath{\tilde{M}}} 2\left(\frac{\|u\|-1}{2}+\frac{\|v\|-1}{2}\right) + \sum_{f\in F_{\textrm{alive}}} \frac{\|f\|-1}{2} \\
&\le 2|\ensuremath{\tilde{M}}| + 2\sum_{v\in G_{\textrm{alive}}} \frac{\|v\|-1}{2} \tag{every vertex is incident to at most one edge in $\ensuremath{\tilde{M}}$}\\
&\le 2|\ensuremath{\tilde{M}}| + 2(|M|-|\ensuremath{\tilde{M}}|)\\
&=2|M| \qedhere
\end{align*}
\end{proof}
To prove \cref{enum:main-no-outer-outer}, we first show that any shortest alternating path with a matching length at least $\ell_{\max}=1/\lambda$ must intersect either $E_{i}$ or $F_{i}$ for any integer $i\in [1, \ell_{\max}/2]$.
\begin{lemma}\label{lem:cut}
Consider an alternating path $P = (v_0, \vec{e}_1, \ldots, \vec{e}_t)$ on $G_{\textrm{alive}}$, where $v_0\in F_{\textrm{alive}}$ is a free vertex and $e_j$ is a matched edge for $1 \leq j \leq t$.
Assume $\ell(\vec{e}_t)=d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e}_t) \ge \ell_{\max}$.
Then, for any $i\in [1, \ell_{\max}/2]$,
$ \{\dot{v_0}, \hat{e_1}, \cdots, \hat{e}_t\}\cap (E_{i}\cup F_{i})\neq \emptyset$.
\end{lemma}
\begin{proof}
By definition, $\dot{v_0}$ occurs in $F_i$ for all $i\in [1, (\|v_0\|-1)/2]$ and each matched edge $e_j$ occurs in $E_i$ for all $i\in [\ell(\vec{e}_j) - \|e_j\|_M, \ell(\vec{e}_j)] \cap [1, \ell_{\max}/2]$.
Since $\ell(\vec{e}_t) \neq \infty$, we know that $\ell(\vec{e}_j) = d_{G_{\textrm{alive}},\ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e}_j)$ for all $j=1,2,\ldots, t$.
Moreover, $P$ is an alternating path so by definition of matching length we know that whenever $j>1$ we have $\ell(\vec{e}_j) \le \ell(\vec{e}_{j-1}) + \|e_j\|_M$ and
when $j=1$ we have $\ell(\vec{e}_1) \le (\|v_0\|-1)/2 + \|e_1\|_M$.
Therefore, using the assumption that $\ell(\vec{e}_t)\ge \ell_{\max}$ we obtain
$$
[1, \ell_{\max}/2]\subseteq [1, (\|v_0\|-1)/2] \cup \bigcup_{j\ge 1} [\ell(\vec{e}_j) - \|e_j\|_M, \ell(\vec{e}_j)].
$$
Thus, for any $i\in [1, \ell_{\max}/2]$, either $\dot{v_0}\in F_i$ or there is some $j$ such that $\hat{e_j}\in E_i$.
\end{proof}
\cref{lem:cut} implies that there is no augmenting path leaving any free vertex of $F_{\textrm{alive}}$ on $G_{\textrm{alive}}$.
But it does not imply \cref{enum:main-no-outer-outer} since there could be an edge connecting two outer vertices in $V_{out}(F_{\textrm{alive}})$ without an augmenting path.
The next lemma (\cref{lemma:no-outer-outer}) shows that such a situation does not happen after contracting blossoms (Step 3) and removing the thinnest layer of matched edges and free vertices (Step 4).
\begin{lemma}\label{lemma:no-outer-outer}
Fix any integer $i\in [1, \ell_{\max}/2]$.
Let $H = (G-\Psi-M'-F'- E_i-F_i)/(\Omega\cup\Omega')$.
Then there is no unmatched edge connecting two vertices in $V_{out}^{H, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}\setminus F_{i}^{\Omega\cup \Omega'})$.
\end{lemma}
\begin{proof}
Let
$uv$ be an unmatched edge on $G$ with $u^{\Omega\cup\Omega'}, v^{\Omega\cup\Omega'}\in V_{out}^{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}})$ but $u^{\Omega\cup\Omega'}\neq v^{\Omega\cup\Omega'}$.
With \cref{lem:cut} in mind, it suffices to prove the following claim: there exists one of the vertices $x\in\{u, v\}$ such that either
\begin{enumerate}[itemsep=0pt]
\item $x^{\Omega\cup \Omega'}\in F_{\textrm{alive}}$ and $(\|x^{\Omega\cup \Omega'}\|-1)/2 \ge \ell_{\max}/2$, or
\item $x^{\Omega\cup \Omega'}\notin F_{\textrm{alive}}$ and the matching distance $d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e}_x) \ge \ell_{\max}/2$ where $e_x\in M$ is the matched edge incident to $x^{\Omega\cup \Omega'}$ on $G_{\textrm{alive}}$.
\end{enumerate}
Once we have the above claim for some vertex $x\in\{u, v\}$, we reach a contradiction: either $x^{\Omega\cup\Omega'}\in F_{i^*}^{\Omega\cup \Omega'}$, or there exists a shortest alternating path from a free vertex in $F_{\textrm{alive}}$ to $x^{\Omega\cup\Omega'}$ of matching distance at least $\ell_{\max}$, which by \cref{lem:cut} is cut in the middle by the set $E_{i^*}^{\Omega\cup \Omega'}$, and thus $x^{\Omega\cup\Omega'}\notin V_{out}^{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}})$.
Now we prove the claim, again by contradiction.
Suppose statements 1 and 2 of the claim are both false for both $x=u$ and $x=v$.
Using the fact that the contraction never decreases the matching distances, we know that for \emph{both} $x\in\{u, v\}$,
either
\begin{enumerate}
\item $x^\Omega\in F^\Omega$ but $(\|x^\Omega\|-1)/2 < \ell_{\max}/2$, or
\item $x^\Omega \notin F^\Omega$ but $d_{(G-\Psi-M'-F')/\Omega, M}(F^\Omega, \vec{e}_x) < \ell_{\max}/2$ where $e_x\in M$ is the matched edge incident to $x^\Omega$.
\end{enumerate}
Now, both $u^\Omega$ and $v^\Omega$ do not belong to any active path from the last execution of {\textsc{Vertex-Weighted-Alg-Phase}}\xspace.
Furthermore, both $u^\Omega$ and $v^\Omega$ must belong to some structure by Item 5 in \cref{lemma:vertex-weighted-FMU}.
If $u^\Omega$ and $v^\Omega$ belong to different structures, then there must be an augmenting path of matching length $<2(\ell_{\max}/2)=\ell_{\max}$ which contradicts with Item 4 of \cref{lemma:vertex-weighted-FMU}.
Hence,
we conclude that both $u^\Omega$ and $v^\Omega$ belong to the same structure $S_\alpha\in \mathcal{S}$.
However, the assumption states that both $u^\Omega$ and $v^\Omega$ are outer vertices in $V_{out}^{(G-\Psi-M'-F')/\Omega, M}(F^\Omega)$.
Hence, in Step 3 the algorithm creates a blossom that contains both $u^\Omega$ and $v^\Omega$, which implies $u^{\Omega\cup \Omega'} = v^{\Omega\cup\Omega'}$, a contradiction.
\end{proof}
The proof of \cref{enum:main-no-aug-path} in \cref{thm:augment_and_shrink} now follows immediately from \cref{lemma:no-outer-outer}.
\subsection{Implementation Details in \cref{alg:main} in \textsf{CONGEST}}
Three tasks in \cref{alg:main} require further implementation details in the \ensuremath{\mathsf{CONGEST}}\xspace model.
These tasks are (1) obtaining the correct counts of $|\mathcal{P}|$ (\cref{ln:repeat-end}), $|E_i|$ and $|F_i|$ (\cref{ln:partition-edges}),
(2) correctly identifying and forming blossoms within each $\mathcal{S}_\alpha$ (\cref{ln:detect-blossom-step}), and
(3) computing distance labels for matched arcs on $G_{\textrm{alive}}$ (\cref{ln:bfs}).
Task (1) can be solved in $O(\log n)$ rounds per set using the underlying communication network.
For Task (2), we simulate the naive sequential algorithm for forming blossoms in each structure $S_\alpha\in \mathcal{S}$:
\begin{lemma}\label{lemma:forming-blossoms}
Let $S_\alpha$ be a structure returned from an execution of {\textsc{Vertex-Weighted-Alg-Phase}}\xspace.
Then, there exists an algorithm in \ensuremath{\mathsf{CONGEST}}\xspace that detects (hierarchy of) blossoms within $S_\alpha$ in $\operatorname{\text{{\rm poly}}}(1/\lambda)$ rounds.
\end{lemma}
\begin{proof}
Since each $S_\alpha$ has size at most $C_{\max{}}$, it suffices to spend $O(C_{\max{}}^2)$ rounds to aggregate the entire induced subgraph of vertices in $S_\alpha$ to the free node $\alpha$.
After the node $\alpha$ identifies all blossoms locally, it broadcasts this information to all the vertices within $S_\alpha$ using another $O(C_{\max{}}^2)$ rounds.
\end{proof}
For Task (3), we give a Bellman-Ford style algorithm on $G_{\textrm{alive}}$ in a straightforward way:
\begin{lemma}\label{lemma:simulate-bfs}
Assume every node knows the blossom size and the root of its associated root blossom (if there is one), and the incident matched edge (if there is one).
Assume that each blossom has diameter at most $O(\ell_{\max})$.
Then, after Step 3 is performed, there exists an algorithm in \ensuremath{\mathsf{CONGEST}}\xspace that computes the distance labels $\ell(\vec{e})$ to matched arcs with $\ell(\vec{e}) = d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$ whenever $d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})\le \ell_{\max} + \|e\|_M$.
This algorithm finishes
in $O(\ell_{\max}^2) = \operatorname{\text{{\rm poly}}}(\ell_{\max})$ rounds.
\end{lemma}
\begin{proof}
We first describe the algorithm.
Initially, using $O(\ell_{\max})$ rounds, every matched arc $\vec{e}$ that can be reached from a free vertex $f$ via a single unmatched edge obtains the distance label $\ell(\vec{e})=(\|f\|-1)/2+\|e\|_M$.
For all other matched arcs the label is set to be $\ell(\vec{e})=\infty$.
Then,
the remaining steps of the algorithm are split into $\ell_{\max}$ iterations $t=1, 2, \ldots, \ell_{\max}$.
In each iteration $t$, each matched arc $\vec{uv}$ with a label $\ell(\vec{uv})=t$ informs the neighbors of $v$ about this label.
Each neighbor $x$ of $v$ that receives this information attempts to update the associated matched arc $\vec{xy}$ by setting $\ell(\vec{xy}) \gets \min\{\ell(\vec{xy}), t+\|xy\|_M\}$, as in a relaxation step of the Bellman-Ford algorithm.
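A centralized Python sketch of this label computation follows, assuming an explicit encoding of matched arcs and of the unmatched-edge adjacency between them (all names are illustrative; the distributed version performs one relaxation round per iteration, relaxing only the arcs whose label equals $t$):

```python
from math import inf

def arc_labels(free_size, arcs_from_free, succ, weight, ell_max):
    """Bellman-Ford style distance labels for matched arcs.

    free_size      -- free vertex f -> ||f||
    arcs_from_free -- free vertex f -> matched arcs reachable from f
                      via a single unmatched edge
    succ           -- matched arc a -> matched arcs reachable from the
                      head of a via one unmatched edge
    weight         -- matched arc a -> ||a||_M
    ell_max        -- number of relaxation iterations
    """
    ell = {a: inf for a in weight}
    # initialization: arcs adjacent to a free vertex f
    for f, arcs in arcs_from_free.items():
        for a in arcs:
            ell[a] = min(ell[a], (free_size[f] - 1) // 2 + weight[a])
    # ell_max rounds of relaxation
    for _ in range(ell_max):
        for a in list(weight):
            if ell[a] == inf:
                continue
            for b in succ.get(a, ()):
                ell[b] = min(ell[b], ell[a] + weight[b])
    return ell
```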
\paragraph{Correctness}
It is straightforward to see that $\ell(\vec{e})\le d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$ whenever $\ell(\vec{e})<\infty$.
Moreover, whenever $\ell(\vec{e}) < \infty$ we must have $\ell(\vec{e})\le \ell_{\max} + \|e\|_M$.
Now we prove that for each matched arc $\vec{e}$ with $\ell(\vec{e})<\infty$,
there exists an alternating path of matching length exactly $\ell(\vec{e})$, so that $\ell(\vec{e})\ge d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$, which implies equality.
Suppose this is not true; then there exists a matched arc $\vec{e}$ such that $\ell(\vec{e}) < d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$.
According to the algorithm, there exists a walk of interleaving unmatched and matched arcs such that the walk ends with $\vec{e}$. Moreover, the sum of all weighted matched arcs in the walk is exactly $\ell(\vec{e})$.
Since $\ell(\vec{e}) < d_{G_{\textrm{alive}}, \ensuremath{\tilde{M}}}(F_{\textrm{alive}}, \vec{e})$, the walk must \emph{not} be an alternating path.
This implies that there is a matched edge both of whose directional arcs appear in the walk.
Hence, an unmatched edge connecting two outer vertices must be observed.
However, this contradicts again the claim in the proof of \cref{lemma:no-outer-outer} since a blossom should have been formed during Step 3.
\paragraph{Remark}
It can be shown that if an arc $\vec{e}$ has $\ell(\vec{e}) < \ell_{\max}$ then $\ell(\cev{e})=\infty$. That is, the graph induced by the finitely labeled matched edges (together with the unmatched edges used for relaxation) is very close to bipartite, in the sense that almost all matched arcs are visited from an inner vertex toward an outer vertex.
On a bipartite graph the proof becomes trivial, as every distance label corresponds to an alternating path.
However, there could be some edges ``at the boundary'' where both $\ell(\vec{e})$ and $\ell(\cev{e})$ lie in $[\ell_{\max}, \infty)$, so the graph we are concerned with is not quite bipartite.
\paragraph{Runtime}
Finally, we analyze the runtime.
Since there are $\ell_{\max}$ iterations and each iteration takes $O(\ell_{\max})$ rounds to propagate the labels, the total runtime
is $O(\ell_{\max}^2)$.
\end{proof}
\subsection{Runtime Analysis in \cref{thm:augment_and_shrink}}
The following lemma summarizes the runtime analysis to \cref{thm:augment_and_shrink}.
\begin{lemma}\label{lemma:main-alg-runtime}
\cref{alg:main} finishes in $\operatorname{\text{{\rm poly}}}(1/\lambda,\log n)$ rounds.
\end{lemma}
\begin{proof}
In Step 1, each iteration of the repeat loop involves one execution of {\textsc{Vertex-Weighted-Alg-Phase}}\xspace and $O((1/\epsilon)\log^3 n)$ additional rounds for counting augmenting paths.
Moreover, there can be at most $$2+|M| \frac{1}{\lambda\cdot|M|/(C_{\max{}}(\ell_{\max}+1))} =\operatorname{\text{{\rm poly}}}(1/\lambda)$$ iterations.
The reason is that, starting from the second iteration, every augmenting path found includes at least one matched edge, so more than $\lambda |M|/(C_{\max{}}(\ell_{\max}+1))$ matched edges are removed from the graph in each iteration.
By \cref{lemma:vertex-weighted-FMU}, each {\textsc{Vertex-Weighted-Alg-Phase}}\xspace takes $\operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ rounds (see also~\cref{lemma:simulate-fmu}).
Thus, Step 1 takes $\operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ rounds.
Step 2 takes $O(1)$ rounds.
Step 3 takes up to $\operatorname{\text{{\rm poly}}}(1/\lambda)$ rounds by \cref{lemma:forming-blossoms}.
Step 4 has two parts. In the first part, performing a Bellman-Ford style algorithm takes $\operatorname{\text{{\rm poly}}}(1/\lambda)$ rounds by \cref{lemma:simulate-bfs}. In the second part, the algorithm computes the size of each set $|E_i|$ and $|F_i|$, which takes $O(\log n + 1/\lambda)$ rounds.
Therefore, the total runtime of \cref{alg:main} is $\operatorname{\text{{\rm poly}}}(1/\lambda) + O(\log n) = \operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ rounds, as desired.
\end{proof}
\subsection{\Cref{alg:main} in CREW PRAM model}\label{sec:pram-approx-primal}
We simulate the previous \ensuremath{\mathsf{CONGEST}}\xspace implementation of \Cref{alg:main}, so it suffices to show that all local decisions can be done efficiently (with an extra factor of $O(C_{\max{}}^2+\log n)$ parallel time).
We remark that the \ensuremath{\mathsf{CREW\ PRAM}}\xspace implementation does not require the assumption of $O((1/\epsilon)\log^3 n)$ weak diameter.
\begin{itemize}
\item Obtaining the matching size $|M|$ and the set size of augmenting paths $|\mathcal{P}|$ can be done in $O(\log n)$ time via the standard parallel prefix sum operation.
\item In Step 1, each {\textsc{Vertex-Weighted-Alg-Phase}}\xspace takes $\operatorname{\text{{\rm poly}}}(1/\lambda, \log n)$ time by \cref{lemma:vertex-weighted-FMU} (see also \cref{lemma:simulate-fmu-pram}).
\item In Step 2, marking matched edges and free vertices takes $O(1)$ time.
\item In Step 3, identifying new blossoms within each structure $S_\alpha$
can be done sequentially in $O(C_{\max{}}^2)$ time~\cite{GT91} as each structure contains at most $O(C_{\max{}}^2)$ edges.
\item In Step 4, the parallel implementation to \Cref{lemma:simulate-bfs} requires relaxing the distances of a matched arc in parallel.
Each Bellman-Ford step of simultaneously relaxing a set of matched arcs takes $O(\log n)$ time.
\end{itemize}
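The Bellman-Ford step in the last item can be sketched as follows (an illustrative standalone snippet; the graph, weights, and function names are hypothetical, not the paper's data structures). After $k$ synchronous rounds, every vertex whose shortest path from the source uses at most $k$ arcs holds a correct label:

```python
# Synchronous Bellman-Ford: in each round, every arc is relaxed against the
# labels of the previous round, mimicking simultaneous parallel relaxation.
# The graph and weights below are illustrative, not the paper's structures.
INF = float("inf")

def bellman_ford_rounds(n, arcs, source, rounds):
    dist = [INF] * n
    dist[source] = 0
    for _ in range(rounds):
        new = dist[:]  # relax against the previous round's labels
        for u, v, w in arcs:
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
        dist = new
    return dist

arcs = [(0, 1, 1), (1, 2, 1), (0, 2, 5), (2, 3, 1)]
print(bellman_ford_rounds(4, arcs, 0, rounds=4))  # [0, 1, 2, 3]
```

After one round only the direct neighbors of the source are labeled; the shortest path to vertex 3 (three arcs) emerges only after three rounds, matching the $O(\ell_{\max})$ round count used above.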
Therefore, \Cref{thm:main-parallel} follows immediately.
{\small
\bibliographystyle{alpha}
\section{Introduction}\label{sec:intro}
Electrically-driven models of Mott-insulating systems are known to exhibit an insulator-to-metal transition (IMT)~\cite{aron.12,am.we.12,ec.we.13.db} at large field strengths. This makes them candidates to describe the {\em resistive switch} occurring in Mott insulators and correlated systems~\cite{ja.tr.15} under the action of a constant bias voltage. It is believed that the resistive switch is due to the formation of metallic filaments percolating through the material/device~\cite{ja.tr.15,st.ca.13}. The so-called {\em effective resistor} models~\cite{st.ca.13} and the non-homogeneous mean-field theory~\cite{li.ar.17} are only a couple of possible explanations of this phenomenon, as it is not entirely clear which microscopic mechanism leads to the formation of such filaments.
Rather than explaining how these filaments are created, the first attempts to model the dielectric breakdown of an insulator have focused on the importance of a fermion bath in the context of dissipative systems~\cite{ts.ok.09} and on its role in reaching a non-trivial nonequilibrium steady-state (NESS)~\cite{ar.ko.12,am.we.12}. It is also still debated whether the resistive switch occurs mainly due to thermally-triggered~\cite{li.ar.15,ha.li.18,di.ha.22u} or quantum-triggered~\cite{ha.ar.22u} effects. There is agreement, however, that a satisfactory understanding of a field-induced dielectric breakdown must take into account the realistic microscopic mechanism leading to Joule heat dissipation. To do so, lattice vibrations, i.e. {\em phonons}, should be included in the model of the insulator together with the electronic degrees of freedom. To this end, a first attempt has been made in~\cite{ha.ar.22u}, focusing on a two-dimensional lattice coupled to either acoustic or Einstein phonons.
A comprehensive description cannot avoid the inclusion of feedback effects on the phonons, which may help to characterize the NESS in terms of the possibly temperature-triggered effects on the dielectric breakdown of the insulator. In fact, due to the large amount of energy required to overcome the insulating phase, the onset of the metallic phase is expected to be accompanied by a large amount of heat generated by the accelerated electrons of the system. In the absence of feedback effects from these hot electrons, the description of the dielectric breakdown would then miss the phonon contributions to the heat transport within the material. For this reason, in this paper we also include self-consistent (SC) phonons on a single-band Hubbard model in a static electric field, as opposed to the nonself-consistent (NSC) case with only acoustic phonons addressed in Ref.~\cite{ma.ga.22}. In addition, we also extend our investigation to the case of an optical phonon branch, which we model as an Einstein phonon coupled to an ohmic bath, in order to assess its effectiveness in dissipating Joule heat. However, as pointed out in~\cite{ma.ga.22}, a non-trivial NESS is quite difficult to reach with phonons as the only dissipation mechanism; for this reason we also couple the system to an electron bath. Our main goal is to characterize the different types of SC phonons -- either optical or acoustic -- in terms of the dissipation of the current-induced Joule heat in a model of a correlated insulator at the onset of the conducting phase. For this reason, the description of the electric field-driven IMT occurring in realistic materials is beyond the scope of the present paper.
The rest of the paper is organized as follows: In Sec.~\ref{sec:MO_HA} we introduce the model, while in Sec.~\ref{sec:method} we present the Dyson equations for both the electronic and phononic Green's function (GF): we refer to Appendix~\ref{sec:GFs_Dyson_Floquet} for further details concerning the Floquet structure of the latter. In Sec.~\ref{sec:results} we discuss the results and leave Sec.~\ref{sec:conclusions} for final remarks and comments.
\section{Model Hamiltonian}\label{sec:MO_HA}
We start from the setup described in Ref.~\cite{ma.ga.22}, namely the single-band Hubbard model in the presence of a constant electric field, the Hamiltonian of which is given by
\begin{equation}\label{eq:MicroHamiltonian}
\hat{H}(t) = \hat{H}_{\text{U}}(t) + \hat{H}_{\text{bath}} + \hat{H}_{\text{e-ph}} + \hat{H}_{\text{ph}}.
\end{equation}
The Hubbard Hamiltonian $\hat{H}_{\text{U}}(t)$ is given by
\begin{equation}\label{eq:Hubbard_ham}
\hat{H}_{\text{U}}(t) = \varepsilon_{\text{c}} \sum_{i\sigma}\hat{n}^{f}_{i\sigma} -\sum_{\sigma}\sum_{(i,j)} t_{ij}(t) \hat{f}^{\dagger}_{i\sigma} \hat{f}_{j\sigma} + U \sum_{i} \hat{n}^{f}_{i\uparrow} \hat{n}^{f}_{i\downarrow},
\end{equation}
where $\hat{f}^{\dagger}_{i\sigma}$ ($\hat{f}_{i\sigma}$) is the creation (annihilation) operator of an electron of spin $\sigma= \{ \uparrow,\downarrow \}$ at the $i$-th lattice site and $\hat{n}^{f}_{i\sigma}\equiv \hat{f}^{\dagger}_{i\sigma} \hat{f}_{i\sigma}$ the corresponding density operator. Sums over nearest neighbor sites are denoted by $(i,j)$ and the electrons' \emph{onsite energy} is chosen as $\varepsilon_{\text{c}} \equiv -U/2$.
In the temporal gauge the static homogeneous electric field defines the time dependent hopping $t_{ij}(t)$ in Eq.~\eqref{eq:Hubbard_ham} via the Peierls substitution~\cite{peie.33}
\begin{equation}\label{eq:peierls}
t_{ij}(t) = t_{\text{c}} \ \ee^{-\ii \frac{Q}{\hbar} \left( \vec{r}_j - \vec{r}_i \right) \cdot \vec{A}(t)},
\end{equation}
where $t_{\text{c}}$ is the hopping amplitude, $\vec{A}(t)$ the homogeneous vector potential, $Q$ the electron charge and $\hbar$ the reduced Planck constant. The static electric field is then given by $\vec{F}= -\partial_{t}\vec{A}(t)$ where we choose $\vec{A}(t)=\vec{e}_{0} A(t)$, with $\vec{e}_{0}=(1,1,\ldots,1)$ denoting the lattice body diagonal and
\begin{equation}\label{eq:TD_VecPot}
\vec{A}(t)= -\vec{F}\ t.
\end{equation}
By means of Eqs~\eqref{eq:peierls} and \eqref{eq:TD_VecPot} we define the Bloch frequency $\Omega \equiv -FQa/\hbar$ with $a$ being the lattice spacing and $F\equiv |\vec{F}|$.
Here we consider a $d$-dimensional lattice in the $d \rightarrow \infty$ limit~\cite{mu.we.18} with the usual rescaling of the hopping $t_{\text{c}}=t^{\ast}/(2\sqrt{d})$. Sums over the crystal momentum are then performed using the joint density of states~\cite{ts.ok.08,ma.ga.22} $\rho(\epsilon,\overline{\epsilon}) = 1/(\pi t^{\ast 2}) \ \exp[-( \epsilon^{2} + \overline{\epsilon}^{2})/t^{\ast 2}]$ with $\epsilon = -2t_{\text{c}} \sum_{i=1}^{d} \cos(k_i a)$ and $\overline{\epsilon} = -2t_{\text{c}}\sum_{i=1}^{d} \sin(k_i a)$.
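As a quick sanity check (a standalone illustrative script, not part of the paper's numerics), the joint density of states above is normalized to unity for any $t^{\ast}$; here with $t^{\ast}=1$:

```python
import math

# Joint density of states for the d -> infinity hypercubic lattice,
# rho(eps, epsbar) = exp[-(eps^2 + epsbar^2)/t*^2] / (pi * t*^2); here t* = 1.
t_star = 1.0

def rho(eps, epsbar):
    return math.exp(-(eps**2 + epsbar**2) / t_star**2) / (math.pi * t_star**2)

# Riemann-sum double integral over a window where the Gaussian tails
# (~exp(-36)) are negligible; the result should be 1.
h, L = 0.05, 6.0
grid = [-L + i * h for i in range(int(2 * L / h) + 1)]
norm = h * h * sum(rho(e, eb) for e in grid for eb in grid)
print(norm)  # ~= 1.0
```

This is simply the two-dimensional Gaussian integral $\int\dd\epsilon\int\dd\overline{\epsilon}\,\rho(\epsilon,\overline{\epsilon})=1$, so crystal-momentum sums weighted by $\rho$ conserve spectral weight.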
In this work we {\em attach} either an optical phonon or an acoustic phonon branch to each lattice site. The electron-phonon interaction is given by the Hamiltonian
\begin{equation}\label{eq:e-ph_Ein_ham}
\hat{H}_{\text{e-ph}} = g \sum_{i\sigma} \hat{n}^{f}_{i\sigma} \hat{x}_{i}
\end{equation}
with $\hat{x}_{i}\equiv (\hat{b}^{\dagger}_{i} + \hat{b}_{i})/\sqrt{2}$, where $\hat{b}^{\dagger}_{i}$ ($\hat{b}_{i}$) can either create (annihilate) an optical phonon at the lattice site $i$ or an acoustic phonon belonging to the branch $i$. In the former case, the optical phonon Hamiltonian consists of an Einstein phonon $\hat{H}_{\text{ph},\text{E}} = \omega_{\text{E}}\sum_{i}\hat{n}^{b}_{i}$ with $\hat{n}^{b}_{i}=\hat{b}^{\dagger}_{i}\hat{b}_{i}$ the phonon density, coupled to an ohmic bath $\hat{H}_{\text{ph},\text{ohm}}$ with spectral density given in Eq.~\eqref{eq:ohm_bath_spec}. The details of the acoustic-phonon implementation can be found in our previous work~\cite{ma.ga.22} and in Sec.~\ref{sec:Ph_Dyson} of this paper.
As pointed out in the introduction, a stable steady-state~\footnote{For further details about the stability of the steady-state we refer to Ref.~\cite{ma.ga.22}.} is hard to reach with phonon-mediated dissipation only. This is due to the fact that the narrow phonon bandwidth~\footnote{Especially when considering optical phonons, the phonon bandwidth is much smaller than the other energy scales.} cannot relax electrons across the band gap~\cite{ma.ga.22}. For this reason, it is convenient to include fermion baths in the guise of B\"uttiker tube chains attached to each lattice site via the Hamiltonian $\hat{H}_{\text{bath}}$, the details of which will be specified in Sec.~\ref{sec:Dyson-eq}, see Eq.~\eqref{eq:WBL_bathGF}. We set $\hbar = k_{\text{B}} = a = 1 = -Q$, such that the Bloch frequency $\Omega$ equals the electric field strength $F$ and the current is measured in units of $t^{\ast}$. In the following, we denote the electron and phonon GFs by $G$ and $D$, and the corresponding self-energy (SE) by $\Sigma$ or $\Pi$, respectively.
\section{Methods}\label{sec:method}
\subsection{Electron Dyson equation}\label{sec:Dyson-eq}
Here we follow the derivation given in~\cite{ma.ga.22}: for details about the Floquet structure we refer to~\cite{ts.ok.08} and to Appendix~\ref{sec:GFs_Dyson_Floquet}. The Dyson equation for the electronic lattice GF reads
\begin{equation}\label{eq:FullDysonEq}
\kel{\mat{G}}^{-1}(\omega,\epsilon,\overline{\epsilon}) = \kel{\mat{G}}^{-1}_{0}(\omega,\epsilon,\overline{\epsilon}) - \kel{\mat{\Sigma}}(\omega,\epsilon,\overline{\epsilon}) - \kel{\mat{\Sigma}}_{\text{e-ph}}(\omega,\epsilon,\overline{\epsilon}),
\end{equation}
where both electron and e-ph SE depend on the crystal momentum via $\epsilon$, $\overline{\epsilon}$. In this paper, any Floquet-represented matrix is denoted by either $X_{mn}$ or $\mat{X}$ (see e.g.~\cite{so.do.18,ma.ga.22}), while an underline defines the so-called Keldysh structure
\begin{equation}\label{eq:Keld-structure}
\kel{\mat{X}} \equiv
\begin{pmatrix}
\mat{X}^{\text{R}} & \mat{X}^{\text{K}}\\
\mat{0} & \mat{X}^{\text{A}} \\
\end{pmatrix}
\end{equation}
with $\mat{X}^{\text{R},\text{A},\text{K}}$ being the {\em retarded}, {\em advanced} and {\em Keldysh} components. We recall that $\mat{X}^{\text{A}}=(\mat{X}^{\text{R}})^{\dagger}$ and $\mat{X}^{\text{K}} \equiv \mat{X}^{>} + \mat{X}^{<}$, where $\mat{X}^{\lessgtr}$ are the \emph{lesser} and \emph{greater} components~\cite{schw.61,keld.65,ra.sm.86,ha.ja}.
The electron GF of the non-interacting part of the Hamiltonian~\eqref{eq:MicroHamiltonian} reads
\begin{align}\label{eq:non-int_InvGF}
\begin{split}
[G_{0}^{-1}(\omega,\epsilon,\overline{\epsilon})]^{\text{R}}_{mn} & = \left[ \omega_n-\varepsilon_{\text{c}} -v^{2}g^{\text{R}}_{\text{bath}}(\omega_n) \right]\delta_{mn} - \varepsilon_{mn}(\epsilon,\overline{\epsilon}), \\
[G_{0}^{-1}(\omega,\epsilon,\overline{\epsilon})]^{\text{K}}_{mn} & = - \delta_{mn} v^{2}g^{\text{K}}_{\text{bath}}(\omega_{n})
\end{split}
\end{align}
with the shorthand notation $\omega_{n}\equiv \omega+n\Omega$. The off-diagonal terms in Eq.~\eqref{eq:non-int_InvGF} are given by the Floquet dispersion relation $\varepsilon_{mn}$ which, for a hypercubic lattice in a dc field~\cite{ts.ok.08}, reads
\begin{equation}\label{eq:Floquet_disp}
\varepsilon_{mn}(\epsilon,\overline{\epsilon}) = \frac{1}{2} \left[ \left( \epsilon + \ii \overline{\epsilon} \right)\delta_{m-n,1} + \left( \epsilon - \ii \overline{\epsilon} \right)\delta_{m-n,-1} \right].
\end{equation}
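For concreteness (a standalone illustrative snippet, not the paper's code), Eq.~\eqref{eq:Floquet_disp} defines a Hermitian matrix with nonzero entries only on the first off-diagonals; here the Floquet indices are truncated to $\{-N,\dots,N\}$, the truncation being purely illustrative:

```python
# Floquet dispersion eps_{mn} of Eq. (eq:Floquet_disp), truncated to
# Floquet indices m, n in {-N, ..., N}.
def floquet_dispersion(eps, epsbar, N):
    size = 2 * N + 1
    mat = [[0j] * size for _ in range(size)]
    for a in range(size):
        for b in range(size):
            if a - b == 1:       # m - n = 1
                mat[a][b] = 0.5 * (eps + 1j * epsbar)
            elif a - b == -1:    # m - n = -1
                mat[a][b] = 0.5 * (eps - 1j * epsbar)
    return mat

mat = floquet_dispersion(0.7, -0.3, N=3)
size = 7
# Hermiticity, eps_{mn} = conj(eps_{nm}), and vanishing diagonal
assert all(mat[a][b] == mat[b][a].conjugate()
           for a in range(size) for b in range(size))
assert all(mat[a][a] == 0 for a in range(size))
```

The two off-diagonals carry $(\epsilon\pm\ii\overline{\epsilon})/2$, so the matrix is Hermitian for any real $\epsilon$, $\overline{\epsilon}$.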
We make use of the so-called wide band limit for the electronic bath GF in Eq.~\eqref{eq:non-int_InvGF}, according to which the \emph{retarded} and \emph{Keldysh} components~\cite{ma.ga.22,ne.ar.15} read
\begin{align}\label{eq:WBL_bathGF}
\begin{split}
v^{2}g^{\text{R}}_{\text{bath}}(\omega) & = - \ii \Gamma_{\text{e}}/2, \\
v^{2}g^{\text{K}}_{\text{bath}}(\omega) & = 2\ii\, v^{2} \text{Im}[g^{\text{R}}_{\text{bath}}(\omega)] \tanh \left[ \beta\left(\omega-\mu\right)/2\right]
\end{split}
\end{align}
with $v$ being the hybridization strength between the system and the electron bath, and $\beta$ and $\mu$ the inverse temperature and chemical potential of the bath.
The electron and e-ph SEs $\kel{\mat{\Sigma}}$ and $\kel{\mat{\Sigma}}_{\text{e-ph}}$ are obtained from the dynamical mean-field theory~\cite{me.vo.89,ge.ko.92,ge.ko.96} (DMFT), and its non-equilibrium Floquet (F-DMFT) extension~\cite{ts.ok.08,sc.mo.02u,jo.fr.08}, by means of the approximations $\kel{\mat{\Sigma}}(\omega,\epsilon,\overline{\epsilon}) \approx \kel{\mat{\Sigma}}(\omega)$, $\kel{\mat{\Sigma}}_{\text{e-ph}}(\omega,\epsilon,\overline{\epsilon}) \approx \kel{\mat{\Sigma}}_{\text{e-ph}}(\omega)$. Further details can be found in Appendix~\ref{sec:imp_solver}.
\subsubsection{Electron-phonon SE}\label{sec:e-ph_SE_impl}
Within DMFT, the e-ph SE is taken to be a local quantity too, namely $\kel{\mat{\Sigma}}_{\text{e-ph}}(\omega,\epsilon,\overline{\epsilon}) \approx \kel{\mat{\Sigma}}_{\text{e-ph}}(\omega)$. In terms of the {\em contour-times} $z,z^\prime$, and in the Migdal approximation~\footnote{It should be noted that in the Migdal approximation the so-called {\em Hartree term} amounts to a constant energy shift that can be reabsorbed in a constant factor in the electron-phonon Hamiltonian $\hat{H}_{\text{e-ph}}$ at half-filling.}, the latter reads~\cite{ma.ga.22}
\begin{equation}\label{eq:backbone_e-ph_SE}
\Sigma_{\text{e-ph}}(z,z^{\prime}) = \ii g^{2} G_{\text{loc}}(z,z^{\prime}) D_{\text{ph}}(z,z^{\prime})
\end{equation}
and corresponds to the lowest-order diagram in the phonon propagator $D_{\text{ph}}$, the form of which will be discussed in Secs~\ref{sec:ac_ph_formalism} and~\ref{sec:Ein_ph_formalism}. The {\em retarded} and {\em Keldysh} components of Eq.~\eqref{eq:backbone_e-ph_SE} can be found in Appendix~\ref{sec:real-time_eph_se}.
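For reference, applying the Langreth rules to the time-local product in Eq.~\eqref{eq:backbone_e-ph_SE} gives the {\em lesser}/{\em greater} components as frequency convolutions (a standard result, sketched here; the retarded and Keldysh components follow as in Appendix~\ref{sec:real-time_eph_se}):

```latex
\Sigma^{\lessgtr}_{\text{e-ph}}(\omega)
  = \ii g^{2} \int \frac{\dd\omega^{\prime}}{2\pi}\,
    G^{\lessgtr}_{\text{loc}}(\omega-\omega^{\prime})\,
    D^{\lessgtr}_{\text{ph}}(\omega^{\prime})
```

This makes explicit that an electron relaxes by emitting or absorbing a phonon of energy $\omega^{\prime}$, weighted by the phonon propagator.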
Here we only mention that $G_{\text{loc}}(z,z^{\prime})$ is the {\em contour-times} local electron GF allowing the following representation in frequency-domain
\begin{equation}\label{eq:Lat_LocGF}
\begin{split}
\kel{\mat G}_{\text{loc}}(\omega) &= \int \dd\epsilon \int \dd\overline{\epsilon} \ \rho(\epsilon,\overline{\epsilon}) \\
&\times \left\{ \left[ \kel{\mat G}^{-1}_{0}(\omega,\epsilon,\overline{\epsilon}) - \kel{\mat\Sigma}(\omega) - \kel{\mat \Sigma}_{\text{e-ph}}(\omega) \right]^{-1} \right\}.
\end{split}
\end{equation}
Due to gauge invariance $\kel{\mat G}_{\text{loc}}(\omega)$ is diagonal in Floquet indices in the case of a dc field~\cite{ts.ok.08,ma.ga.22} considered here. We now separately discuss the setups pertaining to acoustic and optical phonons.
\subsection{Phonon Dyson equation}\label{sec:Ph_Dyson}
\subsubsection{Acoustic phonons}\label{sec:ac_ph_formalism}
In this paper we include acoustic phonons by using an {\em ohmic} density of states (DOS)~\cite{pi.li.21,ma.ga.22}
\begin{equation}\label{eq:ac_ohmic_DOS}
\begin{split}
\rho_{\text{D}}(\omega) & \equiv -\frac{1}{\pi}\text{Im}[D^{\text{R}}_{\text{ph},0}(\omega)] \\
& = \frac{ \omega}{4\omega^{2}_{\text{D}}} e^{-|\omega|/\omega_{\text{D}}}
\end{split}
\end{equation}
for the unperturbed Hamiltonian $\hat{H}_{\text{ph},0}$. The resulting Dyson equation then reads
\begin{equation}\label{eq:local_Dyson_ac_ph}
\kel{D}_{\text{ph}}(\omega) = [\kel{D}^{-1}_{\text{ph},0}(\omega) - \kel{\Pi}_{\text{e-ph}}(\omega)]^{-1},
\end{equation}
where $\kel{D}_{\text{ph},0}(\omega)$ is the non-interacting phonon propagator~\cite{ao.ts.14,pi.li.21}, the real part of which is determined by the Kramers-Kronig relations and the {\em Keldysh} component by the fluctuation-dissipation theorem~\footnote{For bosons the fluctuation-dissipation theorem reads $\Pi^{\text{K}}_{\text{bath}}(\omega) = \left(\Pi^{\text{R}}_{\text{bath}}(\omega) - \Pi^{\text{A}}_{\text{bath}}(\omega)\right) \coth(\beta\omega/2)$.}
\begin{equation}
\label{eq:non-int_acoustic_ph}
D^{\text{K}}_{\text{ph},0}(\omega) = -2\pi \ii \rho_{\text{D}}(\omega) \coth(\beta\omega/2).
\end{equation}
Notice that the ohmic phonon DOS~\eqref{eq:ac_ohmic_DOS} ensures a linear dispersion relation in the low-energy range $\omega \approx 0$.
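This low-energy behavior can be verified directly (a standalone illustrative script; $\omega_{\text{D}}=0.05t^{\ast}$ as in Tab.~\ref{tab:default_pars}, and the maximum of the positive branch at $\omega=\omega_{\text{D}}$ is an additional elementary property of Eq.~\eqref{eq:ac_ohmic_DOS}):

```python
import math

omega_D = 0.05  # Debye-like scale, as in setup A of the default-parameter table

def rho_D(w):
    # ohmic phonon DOS: w * exp(-|w|/omega_D) / (4 * omega_D^2)
    return w * math.exp(-abs(w) / omega_D) / (4 * omega_D**2)

# (i) linear regime: rho_D(w)/w -> 1/(4 omega_D^2) for |w| << omega_D
slope = 1 / (4 * omega_D**2)
w_small = 1e-4 * omega_D
assert abs(rho_D(w_small) / w_small - slope) < 1e-3 * slope
# (ii) odd function; maximum of the positive branch at w = omega_D
assert rho_D(-0.03) == -rho_D(0.03)
assert rho_D(omega_D) > rho_D(0.9 * omega_D)
assert rho_D(omega_D) > rho_D(1.1 * omega_D)
```

The oddness in $\omega$ is required of any bosonic spectral function, and the exponential cutoff suppresses phonon modes well above $\omega_{\text{D}}$.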
\subsubsection{Optical phonons}\label{sec:Ein_ph_formalism}
We model the optical phonon branch by Einstein phonons coupled to an ohmic bath, the Dyson equation of which reads
\begin{equation}\label{eq:local_Dyson_Ein_ph}
\kel{D}_{\text{ph}}(\omega) = [\kel{D}^{-1}_{\text{ph},\text{E}}(\omega) - \kel{\Pi}_{\text{bath}}(\omega) - \kel{\Pi}_{\text{e-ph}}(\omega)]^{-1}
\end{equation}
with the non-interacting Einstein phonon propagator
\begin{align}\label{eq:non-int_einstein_ph}
\begin{split}
D^{\text{R}}_{\text{ph},\text{E}}(\omega) & = 2\omega_{\text{E}}/\left(\omega^{2} - \omega_{\text{E}}^{2}\right), \\
D^{\text{K}}_{\text{ph},\text{E}}(\omega) & \to 0,
\end{split}
\end{align}
in which the Keldysh component can be neglected due to the presence of $\kel{\Pi}_{\text{bath}}$, which will be described below.
The Einstein phonon is coupled to an ohmic bath $\hat{H}_{\text{ph},\text{ohm}}$; the real part of the {\em retarded} bath GF is obtained from the Kramers-Kronig relations (see e.g. Ref.~\cite{mu.ts.17}), while the {\em Keldysh} component is given by
\begin{equation}\label{eq:ohmic_bath_GF}
\Pi^{\text{K}}_{\text{bath}}(\omega) = -2\pi\ii A_{\text{bath}}(\omega) \coth(\beta\omega/2).
\end{equation}
The ohmic bath DOS in~\eqref{eq:ohmic_bath_GF} is taken as
\begin{equation}\label{eq:ohm_bath_spec}
A_{\text{bath}}(\omega) = \frac{v^{2}_{\text{c}}}{\omega_{\text{c}}} \left[ \frac{1}{1+\left( \frac{\omega-\omega_{\text{c}}}{\omega_{\text{c}}}\right)^{2}} - \frac{1}{1+\left( \frac{\omega+\omega_{\text{c}}}{\omega_{\text{c}}}\right)^{2}} \right]
\end{equation}
with the usual definition $-\pi A_{\text{bath}}(\omega) \equiv \text{Im}[\Pi^{\text{R}}_{\text{bath}}(\omega)]$. In Eq.~\eqref{eq:ohm_bath_spec} $\omega_{\text{c}}$ denotes the ohmic bath cutoff frequency and $v_{\text{c}}$ the hybridization strength to the ohmic bath~\footnote{The parameters $v_{\text{c}}$ and $\omega_{\text{c}}$ are chosen such that $\alpha \ \text{Im}[\Pi^{\text{R}}_{\text{bath}}(\omega_{\text{max}})]<\text{Im}[\Pi^{\text{R}}_{\text{e-ph}}(\omega^{\prime}_{\text{max}})]$ with $\alpha\in [2,3]$ and $\omega_{\text{max}}$, $\omega^{\prime}_{\text{max}}$ being the points at which $\text{Im}\Pi^{\text{R}}_{\text{bath}}$ and $\text{Im}\Pi^{\text{R}}_{\text{e-ph}}$ have their maxima.}. Notice that Eq.~\eqref{eq:ohm_bath_spec} ensures a linear dependence within almost the entire interval $\omega \in [-\omega_{\text{c}},\omega_{\text{c}}]$.
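The near-linearity just noted can be checked directly: $A_{\text{bath}}$ is odd in $\omega$ and has slope $v_{\text{c}}^{2}/\omega_{\text{c}}^{2}$ at the origin. A standalone numerical sketch ($\omega_{\text{c}}=0.6t^{\ast}$ as in setup O; the value of $v_{\text{c}}$ is illustrative):

```python
import math

v_c, omega_c = 1.0, 0.6  # v_c illustrative; omega_c as in setup O

def A_bath(w):
    # ohmic bath DOS of Eq. (eq:ohm_bath_spec)
    x_m = (w - omega_c) / omega_c
    x_p = (w + omega_c) / omega_c
    return (v_c**2 / omega_c) * (1 / (1 + x_m**2) - 1 / (1 + x_p**2))

# odd function, vanishing at w = 0
assert A_bath(0.0) == 0.0
assert math.isclose(A_bath(-0.2), -A_bath(0.2))
# slope at the origin: v_c^2 / omega_c^2
w = 1e-6
assert abs(A_bath(w) / w - v_c**2 / omega_c**2) < 1e-6
```

The sign structure of the two Lorentzian terms guarantees $A_{\text{bath}}(-\omega)=-A_{\text{bath}}(\omega)$, as required of a bosonic bath spectral density.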
\subsubsection{Self-consistent phonons and polarization diagram}\label{sec:self-cons_phonons}
According to the DMFT approximation of local SE, the polarization diagram $\Pi_{\text{e-ph}}$ only depends on the local electron GFs. Within the Migdal approximation, the contour times {\em polarization} diagram~\cite{mu.we.15,mu.ts.17} in Eqs~\eqref{eq:local_Dyson_ac_ph} and \eqref{eq:local_Dyson_Ein_ph} reads
\begin{align}\label{eq:bubble_GG}
\Pi_{\text{e-ph}}(z,z^{\prime})=-2\ii g^{2} G_{\text{loc}}(z,z^{\prime})G_{\text{loc}}(z^{\prime},z)
\end{align}
with $G_{\text{loc}}(z,z^{\prime})$ being the electron GF on the Keldysh contour allowing the representation~\eqref{eq:Lat_LocGF} and the factor $2$ accounting for spin degeneracy. The real time components of Eq.~\eqref{eq:bubble_GG} are also derived in Appendix~\ref{sec:real-time_eph_se}.
\subsection{Observables}\label{sec:observables}
The local electron and phonon spectral functions read
\begin{align}\label{eq:local_spec_func}
\begin{split}
A(\omega)& =-\text{Im}[G^{\text{R}}_{\text{loc}}(\omega)]/\pi, \\
A_{\text{ph}}(\omega)& =- \text{Im}[D^{\text{R}}_{\text{ph}}(\omega)]/\pi.
\end{split}
\end{align}
We define the electron spectral occupation function as
\begin{equation}\label{eq:Filling_func}
N_{\text{e}}(\omega) \equiv A(\omega)\left\{ \frac{1}{2} - \frac{1}{4} \frac{\text{Im}[G^{\text{K}}_{\text{loc}}(\omega)]}{\text{Im}[G^{\text{R}}_{\text{loc}}(\omega)]} \right\},
\end{equation}
where the combination in curly brackets is the nonequilibrium electron distribution function
\begin{equation}\label{eq:NEFD-dist}
F_{\text{el}}(\omega) \equiv \frac{1}{2} \left\{1 - \frac{1}{2}\frac{\text{Im}[G^{\text{K}}_{\text{loc}}(\omega)]}{\text{Im}[G^{\text{R}}_{\text{loc}}(\omega)]} \right\}.
\end{equation}
Analogously, we define the nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$ as
\begin{equation}\label{eq:NEBD-dist}
F_{\text{ph}}(\omega) = -\frac{1}{2} \left\{1 - \frac{1}{2}\frac{\text{Im}[D^{\text{K}}_{\text{ph}}(\omega)]}{\text{Im}[D^{\text{R}}_{\text{ph}}(\omega)]} \right\}.
\end{equation}
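In equilibrium, inserting the fluctuation-dissipation forms of $G^{\text{K}}$ and $D^{\text{K}}$ into Eqs~\eqref{eq:NEFD-dist} and~\eqref{eq:NEBD-dist} reduces $F_{\text{el}}$ and $F_{\text{ph}}$ to the Fermi and Bose functions. A quick standalone check of the underlying $\tanh$/$\coth$ identities (illustrative script; $x\equiv\beta(\omega-\mu)$, with $\mu=0$ for phonons):

```python
import math

def fermi(x):  # Fermi function 1 / (e^x + 1)
    return 1.0 / (math.exp(x) + 1.0)

def bose(x):   # Bose function 1 / (e^x - 1)
    return 1.0 / (math.exp(x) - 1.0)

for x in (0.3, 1.7, 4.2):
    # electron: F_el = (1 - tanh(x/2)) / 2  ->  Fermi function
    assert math.isclose(0.5 * (1 - math.tanh(x / 2)), fermi(x))
    # phonon:   F_ph = (coth(x/2) - 1) / 2  ->  Bose function
    coth = 1.0 / math.tanh(x / 2)
    assert math.isclose(0.5 * (coth - 1), bose(x))
```

Deviations of the computed $F_{\text{el}}(\omega)$ and $F_{\text{ph}}(\omega)$ from these equilibrium forms are therefore a direct measure of the field-induced nonequilibrium occupations.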
In our units, the steady-state current~\cite{ma.ga.22} reads~\footnote{We recall that due to the time-independent nature of the dc field setup~\cite{ts.ok.08,ma.ga.22} the elements with $l\neq 0$ of any Wigner-represented matrix are vanishing.}
\begin{equation}\label{eq:general_Wig_current}
\begin{split}
J = \int_{-\infty}^{+\infty} & \frac{\dd\omega}{2\pi} \int \dd\epsilon \int \dd\overline{\epsilon} \ \rho(\epsilon,\overline{\epsilon}) \\
& \times \left[ \left( \epsilon - \ii \overline{\epsilon} \right) G^{<}_{1}(\omega,\epsilon,\overline{\epsilon}) + \text{H.c.} \right],
\end{split}
\end{equation}
while the steady-state kinetic energy is given by
\begin{equation}\label{eq:general_Wig_energy}
\begin{split}
E_{\text{kin}} = \int_{-\infty}^{+\infty} & \frac{\dd\omega}{2\pi} \int \dd\epsilon \int \dd\overline{\epsilon} \ \rho(\epsilon,\overline{\epsilon}) \\
& \times \left[ - \left( \overline{\epsilon} + \ii\epsilon \right) G^{<}_{1}(\omega,\epsilon,\overline{\epsilon}) + \text{H.c.} \right].
\end{split}
\end{equation}
\begin{table}[b]
\begin{center}
\begin{tabular}{ cccccccccc }
\hline
\hline
& $U/t^{\ast}$ & $\varepsilon_{\text{c}}/t^{\ast}$ & $\mu/t^{\ast}$ & $\beta/t^{\ast-1}$ & $\Gamma_{\text{ph}}/t^{\ast}$ & $\omega_{\text{c}}/t^{\ast}$ & $g/t^{\ast}$ & $\omega_{\text{E}}/t^{\ast}$ & $\omega_{\text{D}}/t^{\ast}$ \\
\hline
O & 8 & -4 & 0 & 20 & 23.8& 0.6 & 0.4 & 0.6 & 0 \\
A & 8 & -4 & 0 & 20 & 1.850 & 0 & 0.4 & 0 & 0.05 \\
\hline
\hline
\end{tabular}
\caption{Default parameters for electron bath plus optical (setup O) and acoustic (setup A) phonons. In setup O the phonon coupling strength is defined as $\Gamma_{\text{ph}}\equiv 2\pi g^{2} \rho_{\text{E}}(\omega_{\text{E}})|_{\text{NSC}}$, where $\rho_{\text{E}}(\omega_{\text{E}})|_{\text{NSC}} = -\text{Im}[D^{\text{R}}_{\text{ph}}(\omega_{\text{E}})]_{\text{NSC}}/\pi$ is the equilibrium ($F=0$) optical phonon DOS in the NSC case, see Eq.~\eqref{eq:local_Dyson_Ein_ph}. On the other hand, in setup A the phonon coupling strength reads $\Gamma_{\text{ph}} \equiv 2\pi g^{2}\rho_{\text{D}}(\omega_{\text{D}})$, with the acoustic phonon DOS $\rho_{\text{D}}(\omega_{\text{D}})$ given in Eq.~\eqref{eq:ac_ohmic_DOS}.}
\label{tab:default_pars}
\end{center}
\end{table}
\section{Results}\label{sec:results}
We study a Mott insulating system attached to an electron bath {\em via} the coupling strength $\Gamma_{\text{e}}$, see Eq.~\eqref{eq:WBL_bathGF}, {\em plus} optical (setup O) or acoustic (setup A) phonons. The phonon coupling strength $\Gamma_{\text{ph}}$ for both optical and acoustic phonons is defined in Tab.~\ref{tab:default_pars}, while the coupling strength to the fermionic baths reads $\Gamma_{\text{e}}\equiv 2\pi v^{2}S_{\text{bath}}(0)$ with $S_{\text{bath}}(\omega) = -\text{Im}[g^{\text{R}}_{\text{bath}}(\omega)]/\pi$, see Eq.~\eqref{eq:WBL_bathGF}. In this paper we choose $\Gamma_{\text{e}}/t^{\ast}=\left\{ 0.12, 0.16, 0.20 \right\}$ and set the temperature of the fermionic bath equal to that of the phonon one. If not stated otherwise, the parameters in Tab.~\ref{tab:default_pars} will be used. Setting $\kel{\Pi}_{\text{e-ph}}$ to zero in Eqs~\eqref{eq:local_Dyson_ac_ph} and~\eqref{eq:local_Dyson_Ein_ph} corresponds to the NSC scheme, as opposed to the SC one.
\subsection{Optical phonons}\label{sec:AMEA_Ein_ph}
We first discuss the case corresponding to the setup O in Tab.~\ref{tab:default_pars}, in which the system is coupled to fermionic baths and optical phonons, see Eqs~\eqref{eq:local_Dyson_Ein_ph} and \eqref{eq:non-int_einstein_ph}.
\subsubsection{Current, energy, double occupation}\label{sec:Ein_ph_obs}
The current $J$, double occupation per site $d$ and kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for selected electron coupling strengths $\Gamma_{\text{e}}$~\footnote{We choose the values of the coupling strength $\Gamma_{\text{e}}$ so as to ensure that a {\em stable} steady-state in the sense discussed in Ref.~\cite{ma.ga.22} is reached.} are shown in Fig.~\ref{fig:observables_Hols}.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{Current $J$ for (a) $\Gamma_{\text{e}}=0.2t^{\ast}$, (b) $\Gamma_{\text{e}}=0.16t^{\ast}$ and (c) $\Gamma_{\text{e}}=0.12t^{\ast}$ as a function of the applied field $F$ in the SC and NSC schemes. Panels (d), (e) and (f) show the double occupation $d$, while (g), (h) and (i) display the kinetic energy $E_{\text{kin}}$ for the same coupling strengths. Default parameters are specified in setup O in Tab.~\ref{tab:default_pars}.}
\label{fig:observables_Hols}
\end{figure}
Regardless of the value of $\Gamma_{\text{e}}$, the two resonances at $F\approx U/2=4t^{\ast}$ and $F\approx U=8t^{\ast}$ in $J$ are accompanied by enhancements in $d$~\cite{ma.ga.22,mu.we.18}, as evidenced by Figs~\ref{fig:observables_Hols}(a), (b) and (c) for the former and (d), (e) and (f) for the latter. On the other hand, the kinetic energy $E_{\text{kin}}$ shows a {\em plateau}-like behavior followed by an inflection point at around $F\approx 4t^{\ast}$ and then rises sharply, starting from $F\approx 7t^{\ast}$ as shown in Figs~\ref{fig:observables_Hols}(g), (h) and (i). Note that $E_{\text{kin}}$ keeps growing even at field strengths at which both $d$ and $J$ are already suppressed. This signals that the injected energy no longer increases the mobility of the electrons but rather promotes their incoherent motion~\footnote{For $F>U$ any further increase in the applied field no longer results in a net motion of charge carriers, thus the injected energy only increases the system's temperature.}. These findings are qualitatively robust against the value of $\Gamma_{\text{e}}$: as shown in Ref.~\cite{ma.ga.22}, a larger coupling to the electron bath is more effective in relaxing excited charge carriers to the lower Hubbard band (LHB), which implies a reduction of the double occupancy at $F\approx U$, see Figs~\ref{fig:observables_Hols}(d), (e) and (f). We see that in the region $F<U/2$ the observables are basically identical in the SC and NSC schemes, while for $F\in [U/2,U]$ and $F>U$ both $J$ and $d$ are slightly enhanced by the SC treatment. This is in contrast with the regions around the resonances $F\approx U/2$ and $F\approx U$, in which the values of the current, double occupancy and kinetic energy are reduced in the SC scheme. Finally, we note that within the SC treatment the sharp increase of $E_{\text{kin}}$ observed in Fig.~\ref{fig:observables_Hols} gets mitigated, even though the overall trend is preserved.
This reduction may suggest an energy transfer from electrons to phonons in which the latter {\em absorb} part of the kinetic energy from the former in the form of heat. We will further develop this aspect at the end of Sec.~\ref{sec:Ein_ph_spec} by analyzing the phonon spectra.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig2.pdf}
\caption{(a) Electron spectral function $A(\omega)$ and (b) e-ph SE $-\text{Im}\Sigma^{\text{R}}_{\text{e-ph}}(\omega)$ at $F=4t^{\ast}$. Black vertical arrows in (b) point at the in-gap peaks at $\omega\approx \pm\omega_{\text{E}}$, while the horizontal ones highlight the separation $\delta\approx 2\omega_{\text{E}}$ between the subpeaks in which the main bands are split. Panels (c) and (d) show the same quantities at $F=5t^{\ast}$, while (e) and (f) refer to $F=8t^{\ast}$. Default parameters are specified in setup O in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:EL_SFs_Hols}
\end{figure}
\subsubsection{Spectral properties}\label{sec:Ein_ph_spec}
By analyzing the spectral properties of both electrons and phonons we can explain the differences between the SC and NSC schemes observed in Fig.~\ref{fig:observables_Hols}. In particular, we focus on the exemplary cases $F=4t^{\ast}$, $F=5t^{\ast}$ and $F=8t^{\ast}$.
At $F=4t^{\ast}$, Fig.~\ref{fig:EL_SFs_Hols}(a), the electron spectral function $A(\omega)$ shows in-gap states~\cite{aron.12,mu.we.18,ma.ga.22} around $\omega=0$~\footnote{We recall that the in-gap states are due to filled bands of neighboring sites entering the gap of their adjacent ones under the action of the electric field, thus allowing electron tunnelling from the LHB to the UHB.}. These states are accompanied by the filling of the gap in the e-ph SE, Fig.~\ref{fig:EL_SFs_Hols}(b): we observe subpeaks at $\omega\approx \pm\omega_{\text{E}}$ and the splitting of the Hubbard bands~\footnote{Strictly speaking, one should talk of {\em Hubbard bands} only when referring to the electron spectral function $A(\omega)$. In this paper we improperly use the name Hubbard bands also for the main bands into which the e-ph SE is split. It should be clear from the context which objects we are referring to.} by an amount $2\omega_{\text{E}}$. This rich structure in the e-ph SE resembles the results in Ref.~\cite{ha.ar.22u}, in which it is argued that electron relaxation across the band gap is mediated by multiple emissions of phonons of energy $\omega_{\text{E}}$. In the SC scheme, the satellite peaks in the e-ph SE are broadened, resulting in an even more pronounced closing of the gap that increases the number of states available for dissipation in the e-ph channel. At $F=5t^{\ast}$ the in-gap states in the electron spectral function are located at $\omega \approx \pm t^{\ast}$, Fig.~\ref{fig:EL_SFs_Hols}(c): the corresponding peak structure in the e-ph SE in Fig.~\ref{fig:EL_SFs_Hols}(d) now gets smeared out in both the SC and NSC schemes. Notice that the SC treatment still leads to a filling of the gap.
\begin{figure}[b]
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{The electron spectral occupation function $N_{\text{e}}(\omega)$~\eqref{eq:Filling_func} is shown for field strengths (a) $F=4t^{\ast}$, (b) $F=5t^{\ast}$ and (d) $F=8t^{\ast}$ in the SC and NSC schemes. The inset (c) magnifies the peaks at $\omega \approx t^{\ast}$ and $\omega \approx 4t^{\ast} = U/2$ (highlighted by vertical dashed black lines) which are shown in panel (b). Default parameters are specified in setup O in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:Hols_EL_occupation_SFs}
\end{figure}
To understand the differences between these two cases, we recall that at $F=U/2$ electrons are promoted to the upper Hubbard band (UHB) through the in-gap states~\cite{ma.ga.22} shown in Fig.~\ref{fig:EL_SFs_Hols}(a). From the electron spectral occupation function $N_{\text{e}}(\omega)$ in Fig.~\ref{fig:Hols_EL_occupation_SFs}(a), we see that in the SC case the occupation of the UHB is reduced. This is due to the increase in the rate of electrons relaxing within the gap via phonon emission --- governed by the in-gap states in Fig.~\ref{fig:EL_SFs_Hols}(b).
The net result is the drop in the current observed in Fig.~\ref{fig:observables_Hols}.
On the other hand, at $F=5t^{\ast}$ electron migration to the UHB requires higher order processes compared to the {\em resonant} transition at $F=U/2$, as evidenced by the in-gap double-peak structure in the spectral function, Fig.~\ref{fig:EL_SFs_Hols}(c). In the SC scheme the occupation of the UHB and of the states around $\omega\approx t^{\ast}$ is reduced as well, see Fig.~\ref{fig:Hols_EL_occupation_SFs}(b) and the corresponding inset (c). With the field being off-resonance, the slight enhancement of both $J$ and $d$ noted in Sec.~\ref{sec:Ein_ph_obs} can be attributed to particle flow through the broader in-gap states (Figs~\ref{fig:EL_SFs_Hols}(c), \ref{fig:Hols_EL_occupation_SFs}(b) and~\ref{fig:Hols_EL_occupation_SFs}(c)) induced by the closing of the gap in the e-ph SE shown in Fig.~\ref{fig:EL_SFs_Hols}(d).
At the resonance $F=U$ the (empty) UHB and the (full) LHB of any pair of neighboring sites match perfectly~\cite{mu.we.18,ma.ga.22}, which explains the absence of in-gap subpeaks in the electron spectral function, see Fig.~\ref{fig:EL_SFs_Hols}(e), and the maximum in the current $J$ observed in Fig.~\ref{fig:observables_Hols}. In the SC scheme, the strong {\em renormalization} of the e-ph SE shown in Fig.~\ref{fig:EL_SFs_Hols}(f) provides the necessary states to relax the electrons within the gap, reducing the occupation of the UHB in favor of the in-gap states, see Fig.~\ref{fig:Hols_EL_occupation_SFs}(d). As in the case of $F=U/2$ this leads to the suppression of the current, double occupation and kinetic energy observed in Fig.~\ref{fig:observables_Hols}.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{(a) Phonon spectral function $A_{\text{ph}}(\omega)$ corresponding to the field strengths shown in Fig.~\ref{fig:EL_SFs_Hols}: the inset (b) compares $F=4t^{\ast}$ and $F=5t^{\ast}$. (c) Nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$, see Eq.~\eqref{eq:NEFD-dist}, corresponding to (a) and (b). The black arrow denotes the direction of increasing temperature. Default parameters are specified in setup O in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:PH_SFs_Hols}
\end{figure}
In Sec.~\ref{sec:Ein_ph_obs}, we speculated that the increase of $E_{\text{kin}}$ for field strengths at which both $J$ and $d$ are suppressed could be the signature of the injected energy turning into disordered motion of particles, which may eventually lead to an increase in the temperature of the system. While the phonon spectral function in Figs~\ref{fig:PH_SFs_Hols}(a) and (b) provides a measure of the {\em renormalization} of the phonon spectrum due to the SC treatment, the phonon nonequilibrium distribution function in Fig.~\ref{fig:PH_SFs_Hols}(c) shows that, as the field $F$ is increased, the phonons experience an increase in temperature. It should be noted that away from equilibrium the phonon temperature cannot be inferred by simply fitting $F_{\text{ph}}(\omega)$ with a Bose-Einstein distribution function. However, given that $F_{\text{ph}}(\omega)$ in Fig.~\ref{fig:PH_SFs_Hols}(c) departs from the equilibrium one as the applied field $F$ grows larger, we can conclude that in the SC scheme the phonon temperature does increase.
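For orientation, a frequency-resolved effective phonon temperature can still be sketched from the low-frequency behavior of the distribution function. Assuming the standard Keldysh-equilibrium convention (which need not coincide with the definition in Eq.~\eqref{eq:NEFD-dist}), the fluctuation-dissipation theorem gives
\[
  F^{\text{eq}}_{\text{ph}}(\omega) = \coth\!\left(\frac{\omega}{2T}\right)
  \approx \frac{2T}{\omega} \quad (\omega \ll T),
  \qquad
  T_{\text{eff}}(\omega) \equiv \frac{\omega}{2}\,F_{\text{ph}}(\omega),
\]
so a growth of $F_{\text{ph}}(\omega)$ at low frequencies with increasing $F$ is consistent with phonon heating, even if no single temperature fits the whole curve.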
\subsubsection{Role of the Hubbard $U$}\label{sec:Hubbard_U_dep}
\begin{figure}[b]
\includegraphics[width=\linewidth]{Fig5.pdf}
\caption{(a) Current $J$, (b) double occupation $d$ and (c) kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for selected values of the Hubbard $U$ in the NSC scheme: black arrows in (a) point at the position of the resonance $F\approx U/3$, which becomes fainter starting at $U=6t^{\ast}$ and turns into an even weaker shoulder at $U=5t^{\ast}$ in the SC scheme, see panel (d). Panels (d), (e) and (f) show the same quantities for the SC case. Default parameters are specified in setup O in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.12t^{\ast}$.)}
\label{fig:Obs_SC_NSC_Us}
\end{figure}
In the previous section we argued that SC phonons reduce the band gap by relaxing excited carriers from the UHB into it, see Fig.~\ref{fig:EL_SFs_Hols}. Also, it is known that the gap in a single-band Hubbard model at equilibrium ($F=0$) increases as the interaction $U$ grows larger. In this section we want to investigate the effect of SC phonons on a system which exhibits a less pronounced band gap, corresponding to a weaker insulating phase. To this end, we discuss the effects of SC phonons for selected values of the Hubbard interaction $U$.
As shown in Fig.~\ref{fig:Obs_SC_NSC_Us}, within the NSC scheme we still observe two main resonances at $F\approx U/2$ and $F\approx U$ for both $J$ and $d$, see panels (a) and (b)~\footnote{It should be noted that this double-peak structure in the current and double occupation vanishes as soon as $U$ is too small for the system to develop a band gap. As a matter of fact, below $U=5t^{\ast}$ and with $F=0$ the system does not exhibit a clear gap, thus losing its insulating properties.}. The small resonance at $F\approx U/3$~\cite{mu.we.18,ma.ga.22} can be noticed as well; in the SC scheme this resonance gets fainter with decreasing $U$ until it becomes a shoulder of the resonance at $F\approx U/2$ for $U=5t^{\ast}$, see Fig.~\ref{fig:Obs_SC_NSC_Us}(d).
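The positions of these resonances fit the standard picture of field-induced pair creation across the Wannier-Stark ladder; as a sketch of the argument (with the lattice constant and electron charge set to one), creating a doublon-holon pair of energy cost $U$ over $n$ neighboring sites becomes resonant when the potential drop matches the interaction,
\[
  n F \simeq U
  \quad\Longrightarrow\quad
  F \simeq U,\; \frac{U}{2},\; \frac{U}{3},\; \dots
  \qquad (n = 1, 2, 3, \dots),
\]
consistent with the main peaks at $F\approx U$ and $F\approx U/2$ and the fainter resonance at $F\approx U/3$.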
The qualitative difference between the peak (in both $J$ and $d$) at $F\approx U/2$ and the one at $F\approx U$ lies in the fact that the former is reduced by increasing $U$ while the latter approximately preserves its height independently of the value of $U$, see Figs~\ref{fig:Obs_SC_NSC_Us}(a), (b), (d) and (e). This can be explained by the fact that electron transitions from the LHB to the UHB via in-gap states are suppressed by a larger $U$ if the field strength is off-resonance (as in the case of $F\approx U/2$), while at $F\approx U$ the field allows direct transitions from filled to empty bands, regardless of the value of $U$. Notably, within the NSC scheme the height of the peak in $J$ at $F\approx U$ stays the same, while in the SC treatment it changes slightly with $U$.
Also, within the SC scheme the peak currents at $F\approx U$ are reduced with respect to the NSC treatment, while $J$ is slightly enhanced away from the resonances, see for instance Figs~\ref{fig:Obs_SC_NSC_Us}(a) and (d). The double occupation in panels (b) and (e), as well as the kinetic energy, shown in panels (c) and (f), are in agreement with the overall broadening of the $J$-$F$ characteristics, confirming the effects of phonon renormalization discussed in Sec.~\ref{sec:Ein_ph_spec}.
We also notice that the {\em shoulder}-like features occurring in the kinetic energy near the metallic phase in the NSC case turn into {\em cusps} in the SC treatment, see Figs~\ref{fig:Obs_SC_NSC_Us}(c) and (f). Another interesting feature is that for small field strengths (i.e., $F\leq 3t^{\ast}$) a larger $U$ translates into a higher kinetic energy, while at large fields ($F$ in between the two main resonances) a larger $U$ suppresses the kinetic energy.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig6.pdf}
\caption{(a) Current $J$, (b) double occupation $d$ and (c) kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for selected values of the inverse temperature $\beta$ within the SC scheme. Default parameters can be found in setup O in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.12t^{\ast}$.)}
\label{fig:Obs_SC_betas}
\end{figure}
Finally, a remark on the role of temperature in our simulations. The default inverse temperature has been chosen as $\beta t^{\ast}=20$. As shown in Figs~\ref{fig:Obs_SC_betas}(a), (b) and (c), the current, double occupation and kinetic energy are essentially not affected by lowering the temperature of the electron and phonon baths. This is due to the fact that the characteristic frequency of optical phonons $\omega_{\text{E}}$ is typically larger than the temperature of the system.
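The argument above can be made quantitative with a quick estimate of the equilibrium Bose-Einstein occupation of an Einstein mode; the numerical values below (in units of the hopping $t^{\ast}$, with $\omega_{\text{E}}=t^{\ast}$ as a representative choice) are illustrative assumptions rather than the simulation parameters:

```python
import math

def bose_einstein(omega: float, beta: float) -> float:
    """Equilibrium Bose-Einstein occupation n_B(omega) at inverse temperature beta."""
    return 1.0 / math.expm1(beta * omega)

# Illustrative values in units of t* (assumed, not the paper's exact parameters):
omega_E = 1.0                # Einstein phonon frequency, here omega_E = t*
for beta in (20.0, 40.0):    # bath inverse temperatures beta t* = 20 and 40
    print(f"beta t* = {beta:.0f}: n_B(omega_E) = {bose_einstein(omega_E, beta):.2e}")
```

For $\beta t^{\ast}=20$ the thermal occupation is already of order $10^{-9}$, so lowering the bath temperature further cannot visibly change the phonon-assisted dynamics.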
\subsection{Acoustic phonons}\label{sec:AMEA_ac_ph}
In this section we discuss the influence of acoustic phonons on the electronic properties of the lattice. With a dispersion relation of the form~\eqref{eq:ac_ohmic_DOS}, we expect the cutoff frequency $\omega_{\text{D}}$ to determine the relaxation pathways contributing to heat dissipation. In particular, a smaller $\omega_{\text{D}}$, corresponding to long-wavelength vibrations~\cite{ga.ma.22}, should be more effective in carrying the heat away over longer distances.
In this setup the phonon coupling strength is defined as $\Gamma_{\text{ph}}=2\pi g^{2} \rho_{\text{D}}(\omega_{\text{D}})$, see Sec.~\ref{sec:ac_ph_formalism} and especially Eq.~\eqref{eq:ac_ohmic_DOS}. The default parameters can be found in setup A in Tab.~\ref{tab:default_pars}.
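To illustrate how the bare coupling $g$ and $\Gamma_{\text{ph}}$ are related, the sketch below assumes an ohmic density of states with an exponential soft cutoff, $\rho_{\text{D}}(\omega) = (\omega/\omega_{\text{D}}^{2})\,e^{-\omega/\omega_{\text{D}}}$; this functional form and the numerical values are our assumptions and need not match Eq.~\eqref{eq:ac_ohmic_DOS}:

```python
import math

def rho_D(omega: float, omega_D: float) -> float:
    """Assumed ohmic phonon DOS with exponential soft cutoff (illustrative form)."""
    return (omega / omega_D**2) * math.exp(-omega / omega_D)

def g_from_Gamma(Gamma_ph: float, omega_D: float) -> float:
    """Invert Gamma_ph = 2*pi*g^2*rho_D(omega_D) for the e-ph coupling constant g."""
    return math.sqrt(Gamma_ph / (2.0 * math.pi * rho_D(omega_D, omega_D)))

# Keeping Gamma_ph fixed (value assumed) while scanning the cutoff frequency:
for omega_D in (0.05, 0.1, 0.2, 0.3):
    print(f"omega_D = {omega_D}: g = {g_from_Gamma(0.1, omega_D):.4f}")
```

Parametrizing the coupling through $\Gamma_{\text{ph}}$ rather than $g$ keeps the effective dissipation strength comparable across different values of $\omega_{\text{D}}$.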
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig7.pdf}
\caption{(a) Current $J$, (b) double occupation $d$ and (c) kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for $\Gamma_{\text{e}}=0.12t^{\ast}$. Panels (d) to (f) show the same quantities for $\Gamma_{\text{e}}=0.16t^{\ast}$, while the results for $\Gamma_{\text{e}}=0.2t^{\ast}$ are displayed in panels (g) to (i). Black arrows highlight the suppression of the resonance at $F\approx U/2$ in $d$ as $\Gamma_{\text{e}}$ increases. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}.}
\label{fig:ac_ph_obs_Gammaes}
\end{figure}
\subsubsection{Current, energy and double occupation}\label{sec:coup_ferm_bath_ac_ph}
\paragraph{Role of the electron coupling strength $\Gamma_{\text{e}}$.}\label{sec:role_el_bath}
We start with the analysis of $J$, $d$ and $E_{\text{kin}}$: in Fig.~\ref{fig:ac_ph_obs_Gammaes} these quantities are shown as a function of the applied field $F$ for selected values of the electron coupling strength $\Gamma_{\text{e}}$. Apart from the small suppression of $J$ around $F\approx U/2$ in the SC case, which is visible in Figs~\ref{fig:ac_ph_obs_Gammaes}(a), (d) and especially (g), there are no remarkable differences between the SC and NSC schemes, as the observables almost lie on top of each other, see panels (b), (e) and (h) for $d$ and (c), (f) and (i) for $E_{\text{kin}}$. This is in contrast with the case of optical phonons discussed in Sec.~\ref{sec:AMEA_Ein_ph}, where the effects of the SC treatment were plainly visible.
For later reference, we stress the suppression of the resonant peak in the double occupation at $F\approx U/2$, see Figs~\ref{fig:ac_ph_obs_Gammaes}(b), (e) and (h), within both the SC and NSC schemes as $\Gamma_{\text{e}}$ is increased.
\paragraph{Role of the phonon cutoff frequency $\omega_{\text{D}}$.}\label{sec:obs_ac_ph}
\begin{figure}[b]
\includegraphics[width=\linewidth]{Fig8.pdf}
\caption{(a) Current $J$, (b) double occupation $d$ and (c) kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for $\omega_{\text{D}}=0.05t^{\ast}$ within the NSC and SC schemes. Panels (d) to (f) show the same quantities for $\omega_{\text{D}}=0.1t^{\ast}$, (g) to (i) for $\omega_{\text{D}}=0.2t^{\ast}$, while the results for $\omega_{\text{D}}=0.3t^{\ast}$ are displayed in panels (l) to (n). The black arrows highlight the suppression of the resonance at $F\approx U/2$ in $d$ as $\omega_{\text{D}}$ is increased. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:Obs_ac_ph}
\end{figure}
In Fig.~\ref{fig:Obs_ac_ph} we show $J$, $d$ and $E_{\text{kin}}$ as a function of the applied field for $\Gamma_{\text{e}}=0.2t^{\ast}$ and selected values of $\omega_{\text{D}}$. As noted in Sec.~\ref{sec:role_el_bath}, at $\omega_{\text{D}}=0.05t^{\ast}$ the current $J$ is slightly suppressed at $F\approx U/2$ within the SC scheme, see Fig.~\ref{fig:Obs_ac_ph}(a). However, the $J$-$F$ curve does not show appreciable changes for all the other $\omega_{\text{D}}$'s used in this paper, see Figs~\ref{fig:Obs_ac_ph}(d), (g) and (l). On the other hand, the resonance at $F\approx U/2$ in the double occupation $d$ is suppressed by increasing the phonon cutoff frequency $\omega_{\text{D}}$ within both the SC and NSC schemes, as is clear from Figs~\ref{fig:Obs_ac_ph}(b), (e), (h) and (m). Notably, a similar effect already occurs for increasing coupling $\Gamma_{\text{e}}$, see again the discussion in Sec.~\ref{sec:role_el_bath}. Finally, the kinetic energy is not affected by changing the soft cutoff frequency $\omega_{\text{D}}$ or by the SC scheme, as one can see by direct inspection of Figs~\ref{fig:Obs_ac_ph}(c), (f), (i) and (n).
\subsubsection{Spectral properties}\label{sec:spectral_prop_ac_ph}
To explain the findings discussed in Sec.~\ref{sec:coup_ferm_bath_ac_ph}, we study the spectral properties of both the electrons and phonons.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig9.pdf}
\caption{(a) Electron spectral function $A(\omega)$ at $F=4t^{\ast}$ for selected values of the phonon cutoff frequency $\omega_{\text{D}}$ within the SC and NSC schemes. Panel (b) magnifies the {\em quasi-particle} peak at $\omega\approx 0$. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:EL_SFs_ac_ph}
\end{figure}
In Fig.~\ref{fig:EL_SFs_ac_ph}(a) the electron spectral function $A(\omega)$ is shown at $F=U/2$ for several values of the phonon cutoff frequency $\omega_{\text{D}}$, from which we see that the SC scheme does not alter the overall electron spectral properties. On the other hand, looking at the low-energy region $\omega\approx 0$ in Fig.~\ref{fig:EL_SFs_ac_ph}(b), we notice appreciable differences in the {\em quasi-particle} peak, especially at $\omega_{\text{D}}=0.05t^{\ast}$. As a matter of fact, for $\omega_{\text{D}}/t^{\ast}=\left\{ 0.2,0.3 \right\}$ the quasi-particle peak at $\omega \approx 0$ is essentially unaltered by the SC treatment, while a slight suppression (within the SC scheme) can be detected starting from $\omega_{\text{D}}=0.1t^{\ast}$.
As already pointed out in this paper, when the applied field is far from the main resonance ($F\approx U$) the in-gap spectral weight contributes to exciting particles from the LHB to the UHB. In this framework, the suppression of the {\em quasi-particle} peak at $\omega \approx 0$ that occurs in the SC scheme at $\omega_{\text{D}}=0.05t^{\ast}$ signals that fewer states are available within the gap, with the consequent reduction of the current $J$ with respect to the NSC case shown in Fig.~\ref{fig:Obs_ac_ph}(a). This is further supported by the fact that the current $J$ is not affected by the SC treatment in all the other cases $\omega_{\text{D}}/t^{\ast}=\left\{ 0.1,0.2,0.3 \right\}$, see Figs~\ref{fig:Obs_ac_ph}(d), (g) and (l), for which there is no reduction in the quasi-particle peak at $\omega \approx 0$, as shown in Fig.~\ref{fig:EL_SFs_ac_ph}(b).
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig10.pdf}
\caption{(a) Imaginary part of the electron-phonon self-energy $\text{Im}[\Sigma_{\text{e-ph}}(\omega)]$ at $F=4t^{\ast}$ for selected values of the phonon cutoff frequency $\omega_{\text{D}}$ within the SC and NSC schemes. Panel (b) magnifies the {\em quasi-particle} peak at $\omega\approx 0$. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:eph_SEs_ac_ph}
\end{figure}
\begin{figure}[b]
\includegraphics[width=\linewidth]{Fig11.pdf}
\caption{(a) Phonon spectral function $A_{\text{ph}}(\omega)$ at $F=4t^{\ast}$ for selected values of $\omega_{\text{D}}$. Panel (b) shows the nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$ at $F=4t^{\ast}$ and $\omega_{\text{D}}=0.05t^{\ast}$, where the difference between SC and NSC schemes can be appreciated. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.2t^{\ast}$.)}
\label{fig:ph_SFs_ac_ph}
\end{figure}
The e-ph SE at $F=4t^{\ast}=U/2$ is shown in Fig.~\ref{fig:eph_SEs_ac_ph}(a): we observe that the smaller $\omega_{\text{D}}$, the higher the in-gap peak in $\text{Im}[\Sigma^{\text{R}}_{\text{e-ph}}(\omega)]$, see also the magnification of the low-energy region $\omega \approx 0$ in Fig.~\ref{fig:eph_SEs_ac_ph}(b), especially for $\omega_{\text{D}}=0.05t^{\ast}$. Also, it should be noted that the height of the peak at $\omega \approx 0$ is always larger in the SC than in the NSC treatment, as opposed to what happens in the electron spectral function, Fig.~\ref{fig:EL_SFs_ac_ph}(b).
Also, while with optical phonons the increase in the in-gap spectral weight in $\text{Im}[\Sigma^{\text{R}}_{\text{e-ph}}(\omega)]$ is accompanied by an increase in $A(\omega)$, see Fig.~\ref{fig:EL_SFs_Hols} in Sec.~\ref{sec:Ein_ph_spec}, with acoustic phonons an increase in the height of the in-gap states in the e-ph SE corresponds to fewer states available in the electron spectral function, see Figs~\ref{fig:EL_SFs_ac_ph}(b) and~\ref{fig:eph_SEs_ac_ph}(b).
It is worth stressing that the in-gap peak in $\text{Im}[\Sigma^{\text{R}}_{\text{e-ph}}(\omega)]$ is suppressed and split as the phonon cutoff frequency $\omega_{\text{D}}$ is increased, see again Fig.~\ref{fig:eph_SEs_ac_ph}(b).
In Fig.~\ref{fig:ph_SFs_ac_ph}(a) we compare the phonon spectral function $A_{\text{ph}}(\omega)$ at $F=4t^{\ast}$ for different values of the phonon cutoff frequency $\omega_{\text{D}}$. The SC treatment shifts the phonon cutoff frequency towards smaller values and increases the height of the phonon spectral function, the more so the smaller $\omega_{\text{D}}$. In fact, the phonon spectral functions in the SC and NSC schemes tend to coincide as $\omega_{\text{D}}$ is increased. As already pointed out, one can argue that by decreasing $\omega_{\text{D}}$ acoustic phonons should be more effective in carrying away the current-induced heat from the lattice due to their long-wavelength character. In this framework, the suppression of $J$ at $F\approx U/2$ for $\omega_{\text{D}}=0.05t^{\ast}$, see Fig.~\ref{fig:Obs_ac_ph}(a), can be explained as the result of the increased spectral weight in both $A_{\text{ph}}(\omega)$ and $\text{Im}[\Sigma^{\text{R}}_{\text{e-ph}}(\omega)]$. Also, the phonon temperature in the SC and NSC schemes is almost the same for this case, as can be inferred from the nonequilibrium distribution function $F_{\text{ph}}(\omega)$ in Fig.~\ref{fig:ph_SFs_ac_ph}(b). We remark that for the other values of $\omega_{\text{D}}$ used in this paper there are no appreciable changes in the nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$ between the SC and NSC schemes.
\subsubsection{Temperature dependence}
In this section we discuss the dependence of the above results on the temperature of the electron and phonon baths.
\begin{figure}[h]
\includegraphics[width=\linewidth]{Fig12.pdf}
\caption{(a) Current $J$, (b) double occupation $d$ and (c) kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for selected values of the inverse temperature $\beta$ within the SC scheme. Lowering the temperature leads to a detachment of the current curves, especially at the resonance $F\approx U/2$. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.12t^{\ast}$.)}
\label{fig:ac_ph_obs_temp_dep}
\end{figure}
In Fig.~\ref{fig:ac_ph_obs_temp_dep} we show the current $J$, double occupation $d$ and kinetic energy $E_{\text{kin}}$ as a function of the applied field $F$ for selected values of the inverse temperature $\beta$ within the SC scheme. We see that, in contrast to the case of optical phonons, see Fig.~\ref{fig:Obs_SC_betas}, here the curves differ appreciably. In particular, at $F\approx U/2$ the current $J$ is suppressed by increasing the temperature, see Fig.~\ref{fig:ac_ph_obs_temp_dep}(a), while at the same field strength both $d$ and $E_{\text{kin}}$ are essentially unaltered, see Figs~\ref{fig:ac_ph_obs_temp_dep}(b) and (c).
\begin{figure}[t]
\includegraphics[width=\linewidth]{Fig13.pdf}
\caption{(a) Quasi-particle peak at $\omega\approx 0$ in the electron spectral function $A(\omega)$ at field strength $F=4.2t^{\ast}$ for selected values of the inverse temperature $\beta$. (b) Nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$ corresponding to the same situation as in panel (a). The black arrow denotes the direction of increasing temperature. Default parameters can be found in setup A in Tab.~\ref{tab:default_pars}. (Here $\Gamma_{\text{e}}=0.12t^{\ast}$.)}
\label{fig:ac_ph_SF+DF_temp}
\end{figure}
Finally, in Fig.~\ref{fig:ac_ph_SF+DF_temp}, we show the electron spectral function $A(\omega)$ and the nonequilibrium phonon distribution function $F_{\text{ph}}(\omega)$ at $F\approx U/2$ for several values of the inverse temperature $\beta$. One can observe the suppression of the quasi-particle peak at $\omega \approx 0$ as the temperature is increased in both the SC and NSC cases, see panel (a). Also, the SC scheme always reduces the in-gap spectral weight of the electrons with respect to the NSC scheme, see again Fig.~\ref{fig:ac_ph_SF+DF_temp}(a). It should be noted that the largest difference in the height of the quasi-particle peak between the SC and NSC cases occurs at $\beta t^{\ast}= 40$.
Figure~\ref{fig:ac_ph_SF+DF_temp}(b) shows that the phonons experience a temperature increase within the SC scheme; in relative terms, the temperature change between the SC and NSC treatments is largest at $\beta t^{\ast}=40$ --- notice the difference in the distribution function $F_{\text{ph}}(\omega)$ between the two cases in Fig.~\ref{fig:ac_ph_SF+DF_temp}(b).
\section{Conclusions}\label{sec:conclusions}
In this work we study the response of a Mott-insulating system to a constant electric field. We focus on a single-band Hubbard model attached to fermionic baths, with the inclusion of either optical or acoustic phonons as dissipation mechanism. The introduction of phonons is crucial for a correct description of heat transfer within the system upon approaching the metallic phase, and thus of the dielectric breakdown. We show that by employing optical phonons within a self-consistent scheme, the steady-state current in the metallic phase is noticeably suppressed with respect to the non-self-consistent case. This reduction of the current is accompanied by an increase in the phonon temperature, signalling the exchange of heat between phonons and the hot electrons of the lattice. Also, we find that the temperature of the baths does not affect these results, as it is smaller than the characteristic phonon frequency.
On the other hand, in the case of acoustic phonons, self-consistency does not influence the current characteristics significantly. Its effect can be detected in a slight reduction of the steady-state current at field strengths close to half of the gap, and thus away from the metallic phase, especially for very small values of the phonon cutoff frequency. This seems to confirm that long-wavelength phonons are well suited to dissipate the excess heat. Also, in contrast to the case of optical phonons, the steady-state current shows a slight temperature dependence here.
This aspect is most likely related to the fact that acoustic phonons can have very small energies in their spectrum so they are prone to be affected by any arbitrarily low temperature.
\begin{acknowledgments}
We thank J. Lotze for contributing theoretical discussions and useful insights. This research was funded by the Austrian Science Fund (Grant No. P 33165-N) and by NaWi Graz. The results have been obtained using the Vienna Scientific Cluster and the D-Cluster Graz.
\end{acknowledgments}
\begin{center}
{\bf Author information}
\end{center}
{\bf Contributions:} E.A. conceived the project and supervised the work. Code development: T.M.M. contributed the phonon implementation and produced theoretical data, D.W. contributed the configuration interaction impurity solver, P.G. helped in developing the phonon implementation and E.A. contributed improvements to the fitting routine. The manuscript was drafted by T.M.M. with contributions from all authors.
\section{Introduction}
\label{sec:Intro}
In order to develop a suitable testbed for evaluating wireless applications in the field of functional safety, the employed wireless technologies with their specific advantages are vital to realize an all-wireless, software-defined, and safety-focused sensor-to-cloud automation system, which improves the flexibility of manufacturing processes and enhances the degree of interconnection of \acp{CPS}.
Key factors such as cycle time, bandwidth, availability, reliability, deterministic communication and security of wireless channels are important regarding their applicability. Therefore, testbeds are needed to find indicators of the key performance factors. Furthermore, it is essential to develop prediction methods for reliability and latency as well as protection goals for confidentiality, integrity, authenticity, and availability of the wireless system,
which shall be operated in industrial environments in the domain of functional safety.
Besides security enhancements on their own, the combination of safety and security in the communication of industrial automation is also a broad field for research (e.g., \cite{SchillerJuddSupavatanakulHardtWieczorek+2022+38+52, ReichenbachSafetySecurity}). In the field of \ac{IOLW}, the combination of safety and security was investigated in a recent publication \cite{IOLWcrypto2022}. Other communication technologies may be investigated in future work.
In the following Section \ref{sec:KeyTech}, key technologies of functional safety-relevant wireless communication are described. In Section \ref{sec:SRS}, the safety requirements with their relevant parameters and the necessary calculations are evaluated, and in Section \ref{sec:TestbedArchSolApproach} a possibility to validate and verify the testbed architecture solution approach is demonstrated. A conclusion and an outlook are given in Section \ref{sec:Conclusion}.
\section{Key Technologies}
\label{sec:KeyTech}
For the intended testbed with safety-relevant wireless communication, the specific technologies and their features are presented in the following.
\subsection{IO-Link Wireless}
\ac{IOLW} is the extension of the proven IO-Link standard \cite{ILC2019} being known as \ac{SDCI} or IEC 61131-9 \cite{IEC61131}. Sensor/actuator field-bus communication within the factory automation structure is the main usage of \ac{IOLW} \cite{ILC2019,IEC61131,IOLWtestReverbChambers2022,Heynicke2018}. There are general surveys of \ac{IOLW} as an open-vendor communication solution for factory automation on the shop floor in \cite{Heynicke2018,Wolberg2018,UniHelmutSchmidt2021a} with a focus on roaming in \cite{Rentschler2017}, antenna planning in \cite{Cammin2019a}, coexistence in \cite{coexistence,IOLWcoexTool}, security enhancement in \cite{IOLWcrypto2022,IOLWcryptoPrecisionSMSI2021,iolwCryptoPerf2021}, and functional safety \cite{Doebbert2021_SafetyArchitect}, and on \ac{IOLW} testing in \cite{IOLWtestReverbChambers2022,cammin-jsss2018,jsss-8-185-2019}. Additionally, a short introduction to \ac{IOLW} is given here.
\ac{IOLW} supports bidirectional wireless communication for (cyclic) process data and (acyclic) on-request data between a \ac{W-Master} and \acp{W-Device} \cite{iolw}, \cite{IEC61131} and, therefore, \ac{IOLW} is directly intended by design for fast and reliable communication on the shop floor with dedicated technical key system properties \cite{Heynicke2018}, \cite{iolw}. \ac{IOLW} operates in the 2.4\,GHz ISM-band and its base station (i.e., the \ac{W-Master}) supports a star topology. Within a single manufacturing cell, in total up to 120 sensor or actuator nodes can operate reliably without performance reduction at distances of up to 20\,m from the \ac{W-Master}. \ac{IOLW} uses frequency hopping to mitigate fading effects due to multi-path propagation and to improve the coexistence behavior with other wireless systems \cite{coexistence}, achieving a latency below 5\,ms in typical industrial environments with a remaining failure probability of 10\textsuperscript{-9} \cite{iolw}. Therefore, the average receive power must be sufficiently high and the system must not be subject to interference.
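The quoted residual failure probability can be illustrated with a simple retransmission model; the per-attempt error rate and the number of attempts below are hypothetical values for illustration, not \ac{IOLW} specification figures, and the calculation assumes statistically independent errors across frequency hops:

```python
def residual_failure(p_single: float, attempts: int) -> float:
    """Probability that all (re)transmission attempts of a packet fail.

    Assumes independent errors across attempts, which is the usual
    motivation for frequency hopping in fading channels.
    """
    return p_single ** attempts

# Hypothetical numbers: a raw packet error rate of 1e-3 per attempt and
# 3 total attempts already yield a residual failure probability near 1e-9.
print(residual_failure(1e-3, 3))
```

This is one way to see how the latency budget (which limits the number of retransmissions per cycle) and the residual failure probability are traded off against each other.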
\subsection{5G Campus Network / Industrial 5G Campus Network}
In the past, safety function response times on the order of 10\,ms for safety applications up to \ac{SIL}\,3 (e.g., \cite{PROFIsafeUniversity}) were critical, because response times in this range were typically not guaranteed by legacy cellular technologies such as 4G. The 5G technology shall provide better performance factors (e.g., \cite{JamesBlackmannApril2021a}), standardized in the 3GPP process \cite{Baker20200420}.
The main application domains of 5G networks as a universal communication solution are: \ac{eMBB} with high data rates and transmission capacities for data and streaming applications, \ac{mMTC} with a low power requirement, low complexity, low data rates and a high number and density of communication devices, and \ac{uRLLC} for communication with high reliability and very low latency \cite{EsswieDecember2020c}. In safety-relevant communication, \ac{uRLLC} is the preferred configuration, specified in Rel-16 for services requiring high levels of reliability. Rel-16 also includes the integration of \ac{TSC} and enhancements for fully isolated private network deployments. Private campus networks are available with Rel-15 as the first full set of 5G standards and are of particular interest for industrial applications \cite{9299391,Alabbasi2019,8715451}. Further support for private networks through neutral host models is introduced in Rel-17, allowing access to standalone private networks with third-party credentials, including those of public network operators. Key factors here are controllability and data sovereignty \cite{5GA19}, with, e.g., a dedicated radio access network in the 3.7\,GHz band and core network functions running on premises or in the cloud at the service provider.
\subsection{Open Platform Communications Unified Architecture}
\ac{OPC UA}, which is published as IEC 62541 \cite{IEC62541}, combines open-source and platform-independent industrial data exchange with state-of-the-art IT security features for authentication, authorization and encryption. \ac{OPC UA} defines basic transport protocols for different demands and may be mapped to other Ethernet-based protocols like AMQP and MQTT \cite{OPCUAOnlineRef}. Besides the transport of live, real-time, historical or aggregated data, alerts, events and method-calls, \ac{OPC UA} provides unified information models for the multi-provider-compatible semantic description of machines, plants and entire companies. It is designed as a service-oriented architecture and supports client-server as well as publish-subscribe (PubSub) communication \cite{OPCUAOnlineRef}. In part 15 \cite{OPCFoundation20191031c}, the core specification introduces an IEC 61784-3-compatible \ac{SCL} to ensure deterministic data exchange between safety-related devices. For this, the underlying \ac{OPC UA} channel is used according to the black-channel principle. For consistent integration into existing plants and applications, several companion specifications have been released to unify the deployment of other communication protocols like IO-Link \cite{OPCUAIOLink} and Profinet or to standardize the information model for use-cases and markets, for example Robotics and MachineVision \cite{OPCUAOnlineRef}. With regard to Industry 4.0-related topics such as cloud or edge computing and the Industrial Internet of Things, the harmonized information model provides easy scalability, broad availability and high interoperability. To integrate the communication model of \ac{OPC UA} down to the shop floor, the OPC Foundation is extending the standards towards controller-to-controller and controller-to-device communication under the name of OPC UA Field eXchange (OPC UA FX), which also includes \ac{TSN}.
\subsection{Time Sensitive Networking}
A key factor for safety-related communication is reliable and deterministic data transmission in converged Ethernet-based networks. \ac{TSN} enables the transmission of real-time traffic alongside best-effort traffic on the same Ethernet connection \cite{Jasperneite2020}.
Next to time synchronization, bounded low latency in the form of multiple schedulers, reliability and resource management are the basic parts of the \ac{TSN} standards \cite{IECJune2020c}. The specified features and key performance indicators enable converged networks down to the field level with bump-less reconfiguration and scalable bandwidth. Due to the Ethernet-based specification, \ac{TSN} may be integrated into other communication technologies such as \ac{OPC UA} and 5G \cite{9299391} for wireless applications.
This allows all necessary communication streams for each device to be converged onto a single connected wire.
This yields multiple benefits for the domain of industrial processing, but the development of domain-specific standards, e.g., IEC/IEEE 60802 for industrial automation, is not completed yet.
However, the trend towards converged networks also brings new challenges; for instance, the attack surface and the number of potential threats to the real-time communication domain increase. Therefore, when using \ac{TSN} and its beneficial features, security aspects and an effective protection strategy must also be considered. Within the \ac{TSN} specification, 802.1Qci is the only standard that addresses cyber security, in the form of per-stream filtering and policing. This filtering is based on layer 2 (according to the ISO/OSI model), since common firewalls work on layer 3/4 or above and would impact the real-time capability in a non-acceptable manner. The selection and effective integration of mitigation strategies and tools is a challenging task for the \ac{TSN} domain and makes more in-depth research necessary.
\section{Measures for Safety Requirements}
\label{sec:SRS}
According to the IEC 61508 series of international standards on functional safety \cite{IEC61508}, any electrical, electronic or programmable electronic system performing safety-related functions must be developed under functional safety management to reduce the residual risks to people, equipment or the environment to an acceptable level. The required risk reduction is measured by the \ac{SIL}, scaled in four steps, each with assigned measures and target values for key factors. The \ac{SIL} is evaluated in the risk analysis as the first step of the \ac{VuV} process, which describes the entire safety life cycle. Further aspects of the so-called V-model \cite{IEC61131} are, for example, the definition of the \ac{SRS}, the system, hardware and software design, test concepts for each step, verification through testing and finally the validation of the safety system against the \ac{SRS}. The safety-related system can be divided into subsystems, where each must meet the aspired \ac{SIL}. This enables the deployment and combination of (pre-)certified commercial hardware and software components, e.g., in distributed safety functions or even networks.
Certified fail-safe fieldbuses can be used to communicate between the subsystems. The corresponding IEC 61784-3 standard \cite{IEC61784-3} contains guidelines that should be followed for the exchange of safety-related data in a distributed network. Based on the black channel principle as defined in \cite{IEC61508}, no safety validation of the communication channel used is necessary when the explained techniques are applied to the \ac{SCL} and the developed \ac{FSCP}. This is advantageous for wireless connections over time-varying and frequency-selective radio channels, since no detailed knowledge about the channel is needed. The standard recommends that the sum of failures contributed by communication should remain below one percent of the accepted \ac{PFH} of the corresponding target \ac{SIL}. For example, a \ac{SIL}\,3 application with less than 10\textsuperscript{-7}\,/h dangerous failures for the entire safety function results in a \ac{RER} for the entire \ac{SCL} of 10\textsuperscript{-9}\,/h. To comply with \cite{IEC61784-3}, communication errors related to repetition, deletion, insertion, incorrect sequence, corruption, delay, masquerade and wrong addressing must be considered. There are deterministic measures to reduce the likelihood of these communication errors, such as counter/inverted counter, timeout with receipt, time expectation, communication authenticity, receipt, data integrity and, e.g., redundancy with cross-check. Furthermore, models are introduced to calculate values for error rates in the domains of authenticity, timeliness, masquerade and data integrity, described below. The sum of these error rates results in the total \ac{RER} ($\uplambda$\textsubscript{SC}) for the safety channel.
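As a quick plausibility check, the one-percent budget rule described above can be expressed in a few lines. The \ac{PFH} limits per \ac{SIL} follow IEC 61508 (high-demand mode); the function name and the `share` parameter are illustrative choices, not part of the standard.

```python
# Hypothetical helper illustrating the 1%-of-PFH budget rule from IEC 61784-3.
# SIL table values follow IEC 61508 high-demand mode (PFH in 1/h).
SIL_PFH_LIMIT = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

def communication_rer_budget(sil: int, share: float = 0.01) -> float:
    """Residual error rate budget [1/h] for the safety communication layer,
    assumed here to be `share` of the SIL's PFH limit."""
    return share * SIL_PFH_LIMIT[sil]

# SIL 3 example from the text: 1% of 1e-7/h gives 1e-9/h for the SCL.
print(communication_rer_budget(3))
```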
\subsection{Authenticity}
To guarantee authenticity, only correctly addressed messages from authenticated sources should be accepted. In the functional safety domain, this can be achieved by using connection IDs as \ac{A-code} transmitted with every package. Since the \ac{A-code} is transmitted explicitly and secured by the integrity measures, and since the rate of misrouted messages shall not exceed the message rate of the system (v), the value of the \ac{RER} for authenticity errors (RR\textsubscript{A}) may be assumed as \cite{IEC61784-3}:
\begin{equation}
RR_A=0 . \label{eq:RRa}
\end{equation}
\subsection{Timeliness}
Communication errors, like delay or deletion, should be discovered to achieve the generic safety property of timeliness. Suitable methods are watchdogs, timestamps or counter values identifying the message with the \ac{T-code}. The contribution to $\uplambda$\textsubscript{SC} caused by timeliness errors, the \ac{RER} for timeliness (RR\textsubscript{T}), is
\begin{equation}
RR_T=2^{-LT}\cdot w \cdot R_T \cdot RP_{FSCP\_T} , \label{eq:RRt}
\end{equation}
with the bitlength of the \ac{T-code} (LT), the number of accepted \acp{T-code} (w), the rate of non-actual messages (R\textsubscript{T}), which in the worst case should be assumed to be v, and the additional \ac{REP} of measures regarding timeliness (RP\textsubscript{FSCP\_T}).
\subsection{Masquerade}
If a non-safety message imitates a safety message undetected, all other safety requirements for authenticity, timeliness and integrity have to be fulfilled by coincidence or accident. Thus, the \ac{RER} for masquerade (RR\textsubscript{M}) will always be low and is defined as
\begin{equation}
RR_M=2^{-LA} \cdot 2^{-LT} \cdot w \cdot 2^{-r} \cdot RP_U \cdot 2^{-LR} \cdot R_M , \label{eq:RRm}
\end{equation}
with the bitlength of the \ac{A-code} (LA), the bitlength (r) of the \ac{CRC} signature, the \ac{REP} for other specific data fields marking a correct safety message (RP\textsubscript{U}), the bitlength of the redundant message part in case of redundancy with cross-check (LR), and the rate of occurrence of masked messages (R\textsubscript{M}), which is set to 10\textsuperscript{-3}/h for every device by default.
\subsection{Data Integrity}
Data integrity is the basic requirement for any safe decision; therefore, it is necessary to ensure that corruption is detected with a high and determined probability by the \ac{SCL}. In general, auxiliary redundancy must be added to double check the integrity of the data. Most popular are error-detecting codes such as \ac{CRC}, which are also proposed by the standard. For the estimation of the \ac{RER} for data integrity (RR\textsubscript{I}), the probability for an error in one bit must be assumed together with the likelihood, that this error could be detected by the selected safety measure.\\
\subsubsection{Bit error probability}
The black channel principle, as used according to \cite{IEC61784-3}, is based on the \ac{BSC} model.
This model assumes that the probability of a bit error is equal for the transmission of a digital one or a digital zero at every position and can be assumed to be
\begin{equation}
0\leq BEP \leq0.5 . \label{eq:BEP}
\end{equation}
Since no communication would be possible with a higher error probability, the standard specifies a \ac{BEP} of 10\textsuperscript{-2} to be considered unless proof of a lower \ac{BEP} is given. Within this field of study, there is an ongoing discussion in the community about the combination of safety and security measures, caused by the fact that some cryptographic algorithms change the probability distribution so that the assumed \ac{BSC} is not preserved \cite{SchillerJuddSupavatanakulHardtWieczorek+2022+38+52}.\\
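The \ac{BSC} assumption above is easy to visualize with a short Monte Carlo sketch: every bit flips independently with probability p, regardless of its value or position. The helper name and parameters are illustrative.

```python
import random

# Monte Carlo sketch of the binary symmetric channel assumed by the
# black channel model: each bit flips independently with probability p.
def bsc(bits, p, rng):
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(42)
p = 1e-2                       # BEP assumed by the standard without further proof
n = 100_000
received = bsc([0] * n, p, rng)
print(sum(received) / n)       # empirical bit error rate, close to p
```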
\subsubsection{Properness of \ac{CRC} generator polynomials}
The likelihood that a \ac{CRC} calculation on an erroneous message yields the same result as the calculation on the original message is a measure of the properness of the \ac{CRC} generator polynomial. This probability should be calculated explicitly for every possible data length of the \ac{FSCP}. If the value never exceeds the so-called conservative limit of 2\textsuperscript{-r}, where r is the \ac{CRC} bit length, the generator polynomial is called proper and the \ac{REP} follows the calculation results.\\
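The properness check can be illustrated at toy scale. The sketch below exhaustively sums the probabilities of all undetected error patterns (those the \ac{CRC} cannot distinguish from the original message) over a \ac{BSC} and compares the result with the conservative limit 2\textsuperscript{-r}. The 4-bit polynomial and the short data length are illustrative only and far smaller than those of a real \ac{FSCP}.

```python
from itertools import product

# Bitwise long division: remainder of a bit pattern modulo the generator.
def crc_remainder(bits, poly, r):
    reg = 0
    for b in bits:
        reg = ((reg << 1) | b) & ((1 << (r + 1)) - 1)
        if reg >> r:
            reg ^= poly
    return reg

def residual_error_probability(n_data, poly, r, p):
    """Sum P(pattern) over all nonzero error patterns the CRC cannot detect,
    i.e., patterns divisible by the generator polynomial (BSC with BEP p)."""
    n = n_data + r
    total = 0.0
    for pattern in product((0, 1), repeat=n):
        k = sum(pattern)
        if k and crc_remainder(list(pattern), poly, r) == 0:
            total += p**k * (1 - p)**(n - k)
    return total

r, poly = 4, 0b10011           # x^4 + x + 1, an illustrative generator
rp = residual_error_probability(n_data=8, poly=poly, r=r, p=1e-2)
print(rp, rp <= 2**-r)         # stays below the conservative limit here
```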
\subsubsection{Residual error rates}
The \ac{REP} for data integrity (RP\textsubscript{I}) is given by the \ac{REP} of the proper \ac{CRC} with the specified \ac{BEP}. The equation
\begin{equation}
RR_I=RP_I \cdot v \cdot RP_{FSCP\_I} , \label{eq:RRi}
\end{equation}
with the \ac{REP} of additional safety measures for data integrity (RP\textsubscript{FSCP\_I}), is the last part needed to quantify the \ac{RER} for the entire safety communication layer per hour ($\uplambda$\textsubscript{SCL}) as
\begin{equation}
\lambda_{SCL}=(RR_T+RR_A+RR_M+RR_I) \cdot m , \label{eq:Lscl}
\end{equation}
with the maximum number of logical connections allowed (m) for the safety function.
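Putting the pieces together, the error-rate model of Eqs.~\eqref{eq:RRa}--\eqref{eq:Lscl} can be sketched as a short script. All parameter values below (code lengths, message rate, number of connections) are illustrative placeholders, not normative values from the standard.

```python
# Sketch of the IEC 61784-3 residual error rate model described above.
def rr_timeliness(LT, w, R_T, RP_fscp_T=1.0):
    return 2**-LT * w * R_T * RP_fscp_T                        # Eq. (RRt)

def rr_masquerade(LA, LT, w, r, RP_U, LR, R_M=1e-3):
    return 2**-LA * 2**-LT * w * 2**-r * RP_U * 2**-LR * R_M   # Eq. (RRm)

def rr_integrity(RP_I, v, RP_fscp_I=1.0):
    return RP_I * v * RP_fscp_I                                # Eq. (RRi)

def lambda_scl(RR_T, RR_A, RR_M, RR_I, m):
    return (RR_T + RR_A + RR_M + RR_I) * m                     # Eq. (Lscl)

v = 3600.0      # assumed system message rate per hour (illustrative)
total = lambda_scl(
    RR_T=rr_timeliness(LT=16, w=2, R_T=v),
    RR_A=0.0,                                                  # Eq. (RRa)
    RR_M=rr_masquerade(LA=16, LT=16, w=2, r=32, RP_U=1.0, LR=0),
    RR_I=rr_integrity(RP_I=2**-32, v=v),
    m=1,
)
print(f"{total:.3e} /h")   # to be compared against the SCL budget
```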
By applying the mentioned principles, a \ac{SRS} for an \ac{IOLS}-related fail-safe \ac{IOLW} derivative is proposed, and the \ac{FSCP} can be assessed using the demonstrator \cite{Doebbert2021_SafetyArchitect}.
\section{Assessment of the Testbed Architecture Solution Approach}
\label{sec:TestbedArchSolApproach}
The objective of the testbed is the \ac{VuV} of the system and protocol design against the corresponding \ac{SRS}, which is often integrated within a safety life cycle V-model evaluation of safety-relevant measures to assess functionality in a test environment. Further evaluation may also include safety performance and system availability \cite{9738459}.
\subsection{Validation of the Safety Requirement Specification}
With the help of a certified organization, the \ac{SRS} is reviewed to survey that all risks identified in the evaluation are covered by the specification. The certified organization also reviews the \ac{SRS} regarding completeness, contradictions, and correctness. In our case, the main focus is the concept and architecture of \ac{IOLWS} rather than the hardware or software implementation of the \ac{SRS}. Nevertheless, the parameters of the measures used in software to reduce risks are also reviewed. Therefore, measures such as plausibility tests for the calculated $\uplambda$\textsubscript{SCL} are implemented and evaluated.
\begin{figure*}[htp]
\hspace{-1cm}
\includegraphics[width=\textwidth]{Schaubild_DTEC_2022_IOLS-Wireless-V4.pdf}
\caption{\textsc{Modular Sensor-2-Cloud Automation Topology (mS2CAT) for safe and secure wireless communication employing a 5G-IOLW Gateway.}}
\label{fig:overview}
\end{figure*}
\subsection{Verification of System and Protocol Design}
The verification of the testbed architecture solution approach may be realized by means of analyses, reviews, or by using a demonstrator setup, which can be based on separated lab setups. For safety functions, it is necessary to use a \ac{FDS}, which may be completed by separate function tests or complete module tests.
Module tests may include hardware setup tests in a control cabinet, analysis of address ranges between modules, limit value analysis, or, e.g., compliance tests with programming guidelines. In our research, the emphasis is on the overall system design and the initial software structure as well as the \ac{IOLWS} protocol design. The hardware design will not be part of the module tests.
Function tests focus on the program functionality, which involve process simulations, parameter checks, and e.g. limit value tests. Therefore, various tests and analysis such as IO tests, acceptance tests, function tests, response time tests, or signal path tests are possible \cite{IEC61508}.
Furthermore, \acp{FAT} for functional safety functions are possible using EN ISO 13849-2 and EN 62061.
In our case, function, response time, and signal path tests are performed by using a demonstrator performing a specific application, which is described in the following.
\subsection{Demonstrator for Functional Safety-relevant Wireless Communication}
In \cite{9613484}, a \ac{mS2CAT} for safe and secure wireless communication employing a 5G-IOLW Gateway is described. Multiple cells are depicted for wireless communication enhancing flexible operations supporting \acp{CPS}.
The \ac{mS2CAT} of \figref{fig:overview} illustrates a roaming \ac{FS-Device} being connected to an \ac{FS-W-Bridge} (in the following described as \ac{FS-W-Device}). In \cite{iolw}, the flexibility and mobility of a predefined \ac{W-Device} being able to connect to multiple predefined \ac{W-Master} cells (called roaming) is described.
Depending on the location of the \ac{AGV} or the person wearing an emergency stop, the \ac{FS-W-Device} is locked into Cell 1, Cell 2 or Cell n. A \ac{FS-W-Device} is only allowed to be connected to one \ac{FS-W-Master} at a time. When switching from one cell to another, the \ac{FS-W-Device} must first disconnect from one \ac{FS-W-Master} before connecting to another \ac{FS-W-Master}. Between the cells, a commissioning must take place to enter another safety cell, depending on the application. It is not possible to enter another cell before being connected to the specific \ac{FS-W-Master}. If a safety cell is entered by accident, the cell must be set to a safe state. Time slots may vary for entering a safety cell without interrupting the manufacturing process.
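The cell-switching rule above can be sketched as a minimal state machine: at most one \ac{FS-W-Master} connection at a time, mandatory disconnect before re-pairing, and a forced safe state when a cell is entered without being paired to its master. Class and method names are illustrative and not taken from the \ac{IOLW} specification.

```python
# Minimal sketch of the roaming rule for an FS-W-Device (illustrative names).
class FSWDevice:
    def __init__(self):
        self.connected_master = None

    def connect(self, master: str) -> None:
        # A device may hold only one FS-W-Master connection at a time.
        if self.connected_master is not None:
            raise RuntimeError("disconnect from current FS-W-Master first")
        self.connected_master = master

    def disconnect(self) -> None:
        self.connected_master = None

    def enter_cell(self, cell_master: str) -> str:
        # Entering a safety cell without being paired to its master
        # forces the cell into a safe state.
        return "operational" if self.connected_master == cell_master else "safe_state"

dev = FSWDevice()
dev.connect("Master-Cell1")
print(dev.enter_cell("Master-Cell2"))   # safe_state: not paired to that cell
dev.disconnect()
dev.connect("Master-Cell2")
print(dev.enter_cell("Master-Cell2"))   # operational
```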
The \ac{FS-W-Master} in each safety cell is connected to an \ac{OPC UA} server aggregating the FS-Data using the standard master interface as \ac{FS-Gateway} according to \cite{IOLSafetySysExtensions}. The server in turn is accessed by a virtual safety \ac{PLC} integrated into the 5G core system via 5G and a 5G modem module. To set up continuous fail-safe communication from source to drain, the mapping between \ac{IOLWS} and \ac{OPC UA} Safety needs to be designed compliant with \cite{IEC61784-3}; also, \ac{OPC UA} PubSub and \ac{TSC} measures shall be taken into account to reduce latency and ensure deterministic response times. The described functions are combined as a 5G-IOLW gateway, and the \ac{OPC UA} server is integrated as a software service into either the \ac{FS-W-Master} or the modem module. Both options shall be considered and compared as part of the project.
The testbed with the demonstrator shall be used to test modules and functions such as the \ac{IOLWS} protocol, the response time between the 5G core system and the \ac{FS-Device} using the black channel principle through two different wireless technologies as well as in-between the protocols, the signal path within safety-relevant states and the overall functionality. Tests will be part of the future evaluation of the testbed for functional safety-relevant wireless communication. \\
\section{Conclusion and Outlook}
\label{sec:Conclusion}
In this contribution, a testbed for functional safety-relevant wireless communication based on \ac{IOLW} and 5G is presented. For this, the well-known functional safety protocol \ac{IOLS} is used in conjunction with the newly introduced wireless communication protocol \ac{IOLWS}. Furthermore, a gateway solution is enhanced to connect small-scale \ac{IOLWS} in short-range machine-area networks with medium-scale Industrial 5G in the medium-range factory area, employing technologies such as \ac{TSN} for deterministic communication and \ac{OPC UA} for safe and secure distributed communication.
For \ac{IOLWS} the necessary \ac{SRS} is described using IEC 61508 and IEC 61784-3. After assessment of the \ac{SRS}, the testbed will be validated and verified using the described demonstrator for functional safety-relevant wireless communication.
In the next step, calculations for \ac{SRS} will be evaluated and the testbed for the functional safety demonstrator will be realized.
\section*{Acknowledgment}
This work is an extended continuation of the publications under \cite{9613484, SysINT_Cammin2023}.
The authors would like to acknowledge Christoph Cammin and Ralf Heynicke from the Helmut-Schmidt-University as well as Kunbus GmbH for their continuous support.\\
This contribution is funded by dtec.bw – Digitalization and Technology Research Center of the Bundeswehr which we gratefully acknowledge (project "Digital Sensor-2-Cloud Campus Platform" (DS2CCP) with the project website \cite{UniHelmutSchmidt2021a}).
\bibliographystyle{IEEEtran}
\section{Introduction}
\begin{figure}[!t]\centering
\includegraphics[width=.55\linewidth]{equivariance_double_col.pdf}
\caption{An illustration of the translational and rotational invariance on a mesh grid with INO. Here, $\mathcal{G}$ represents the learnt mapping, $R$ represents the rotation and translation of coordinate frames. To design an invariant neural operator, our key idea is to characterize the interaction between nodes via an invariant kernel. The solid red lines represent the selected local reference edges and the dashed red lines indicate the two components of relative Euclidean distance, $\overline{x_j-x_i}:=[\verti{x_j-x_i}\cos\theta,\verti{x_j-x_i}\sin\theta]$, with which we parameterize the proposed kernel form.}
\label{fig:equivariance}
\end{figure}
Neural operators \citep{anandkumar2020neural, li2020fourier,lu2019deeponet} have gained popularity in recent years as a form of implicit solution operators to unveil hidden physics of complex real-world physical systems from data. Benefiting from the integral form of their architecture, neural operators {learn a surrogate mapping between function spaces, which} are resolution independent and can be generalized to different input instances \citep{kovachki2021neural}. Resolution independence empowers the learned operator to retain consistent accuracy in prediction regardless of the variation of input resolutions, while generalizability to different input instances offers the possibility to solve unseen input instances with only a forward pass of the trained network, without the hassle of repeating the time-consuming training process. These facts make neural operators excellent candidates for providing surrogate models for complex physical systems \citep{li2021physics,goswami2022pino}.
Despite the notable advances of neural operators over traditional neural networks (NNs), their performance \YY{relies heavily on the amount of available data}, especially when the governing PDE is unknown \cite{goswami2022pino}. An effective way to alleviate this pathology is to incorporate into the designed architecture the intrinsic preservation of fundamental physical laws in data, as is the case of the conservation of linear and angular momentums in most physical systems. In \cite{you2022physics}, it was found that incorporating partial physics, such as the no-permanent-set assumption, can enhance the prediction of neural operators in out-of-distribution prediction tasks. However, to the authors' best knowledge, neural operators that preserve the conservation laws of linear and angular momentums have not yet been explored.
To enhance the accuracy of neural operators in physical system predictions, in this work we propose the Invariant Neural Operator (INO), a novel integral neural operator architecture that remains invariant under frame translations and rotations.
Specifically, we substitute the frame-dependent coordinate information with its invariant counterpart
(cf. Figure~\ref{fig:equivariance}) in the kernel space, so that the learned kernel is independent of geometric transformations.
Compared to existing neural operator methods, the proposed architecture mainly carries four significant advantages. First, different from existing physics-informed neural operators, our approach only requires observed data pairs and does not rely on known governing equations \citep{goswami2022physics,goswami2022pino,li2021physics,wang2021learning}. Therefore, it is readily applicable to learn physical systems directly from experimental measurements \citep{ranade2021generalized} or simulation data \citep{kim2019peri}, for which the underlying governing equations may not be available.
Second, the invariant properties in our approach are realized through built-in kernel functions, which is anticipated to be more robust and efficient than data augmentation techniques \citep{quiroga2018revisiting}.
More importantly, through embedding the invariant kernel and updating frame-dependent coordinate information with a separate network, our architecture naturally stems from the interpretation of its layer update as a particle-based method \citep{karplus2002molecular,liu2010smoothed}, which significantly simplifies model interpretation. Last but not least, analogous to the E(n) equivariance concept in \cite{satorras2021n}, our architecture is not limited to two- or three-dimensional invariance. As a matter of fact, it can be easily scaled up to higher dimensions.
In summary, the contributions of our work are:
\vspace{-0.1in}
\begin{itemize}
\item{We propose INO, a novel integral neural operator architecture that is translation- and rotation-invariant, to learn complex physical systems with guaranteed conservation of linear and angular momentums.}
\vspace{-0.05in}
\item{Equipped with the shallow-to-deep initialization technique and a coordinate embedding network, our INO finds a physical interpretation from a particle-based method, and obtains stable predictions as the network proliferates in depth.}
\vspace{-0.05in}
\item{Our approach only requires data pairs and does not rely on \textit{a priori} domain knowledge, while the guaranteed momentum conservation laws improve the learning efficacy, especially in small data regime.}
\vspace{-0.25in}
\item{We demonstrate the expressivity and generalizability of INO across a variety of synthetic and real-world experimental datasets, and show that our learned neural operator is not only generalizable in handling translated and rotated datasets, but also provides improved prediction from the baseline neural operators.}
\end{itemize}
\section{Background and Related Work}\label{sec:back}
In this section, we briefly introduce the concept of translational and rotational invariance in classical mechanics, and present its connection to the laws of momentum conservation. Moreover, we review relevant concepts of invariance/equivariance and hidden physical system learning with NNs, which will later become complementary to the proposed INO definition.
Throughout this paper, we use lower case letters to denote vectors, upper case letters for matrices, bold letters for functions, calligraphic letters for operators, and blackboard-bold letters for Banach spaces. For any vector $v$, we use $\verti{v}$ to denote its $l^2$ norm. For any function $\bm{f}$ taking values at nodes $\chi :=\{x_1,x_2,\dots,x_M\}$, $\vertii{\bm{f}}$ denotes its $l^2$ norm, i.e., $\vertii{\bm{f}}:=\sqrt{\sum_{i=1}^M\verti{\bm{f}(x_i)}^2/M}$. $\mathbb{R}^d$ represents the dimension-$d$ Euclidean space.
\subsection{Invariance, Equivariance, and Momentum Conservation Laws}
\vspace{-0.05in}
We consider the learning of complex physical responses of a mechanical system, based on a number of observations of the loading field $\bm{f}_i(x)\in\mathbb{F}({\Omega};\mathbb{R}^{d_f})$ and the corresponding physical system response $\bm{u}_i(x)\in\mathbb{U}({\Omega};\mathbb{R}^{d_u})$. Here, $i$ denotes the sample index, ${\Omega}\in\mathbb{R}^d$ is the bounded domain of interests, and $\mathbb{F}$ and $\mathbb{U}$ describe the Banach spaces of functions taking values in $\mathbb{R}^{d_f}$ and $\mathbb{R}^{d_u}$, respectively. To model the physical responses of such a system, we aim to learn a surrogate operator $\mathcal{G}:\mathbb{F}\rightarrow \mathbb{U}$, that maps the input function $\bm{f}(x)$ to the output function $\bm{u}(x)$.
Let $\mathcal{T}_g:\mathbb{F}\rightarrow\mathbb{F}$ be a set of transformation operators for an abstract group $g$. We say that the operator $\mathcal{G}$ is invariant to $g$ if \vspace{-0.03in}
\begin{equation}
\mathcal{G}\circ\mathcal{T}_g[\bm{f}]=\mathcal{G}[\bm{f}] \text{ ,}\vspace{-0.02in}
\end{equation}
and $\mathcal{G}$ is equivariant to $g$ if there exists an equivariant transformation $\mathcal{S}_g:\mathbb{U}\rightarrow\mathbb{U}$, such that
\begin{equation}
\mathcal{G}\circ\mathcal{T}_g[\bm{f}]=\mathcal{S}_g\circ\mathcal{G}[\bm{f}] \text{ .}\vspace{-0.06in}
\end{equation}
Considering a mechanical response problem as a practical example in physical systems, we have the input function $\bm{f}(x)$ as the initial location and $\bm{u}(x)$ as the resulting mechanical response in the form of a displacement field. First, let $\mathcal{T}_g$ be a translation on the reference frame, i.e., $\mathcal{T}_g[\bm{f}]=\tilde{\bm{f}}$, where $\tilde{\bm{f}}(x+g):=\bm{f}(x)$ and $g\in\mathbb{R}^{d}$ is a constant vector. Translational invariance means that translating the input function $\bm{f}$ first and then applying the response operator $\mathcal{G}$ will deliver the same result. As such, the resultant physical model does not vary with locations in space, and the Noether's theorem \citep{noether1971invariant} guarantees the conservation of linear momentum. On the other hand, let $\mathcal{T}_g$ be a rotation on the reference frame, which rotates the coordinate $x$ as well as the input function, i.e., $\mathcal{T}_g[\bm{f}]=\tilde{\bm{f}}$, with $\tilde{\bm{f}}(Rx):=R\bm{f}(x)$ and $R$ being an orthogonal matrix. Rotational equivariance means that rotating the input function $\bm{f}$ first and then applying the response operator $\mathcal{G}$ will lead to the same result as first applying $\mathcal{G}$ and then rotating the output function.
As such, the described physical model does not vary under rotations against the origin, and Noether's theorem \citep{noether1971invariant} guarantees the conservation of angular momentum.
In this work, the proposed INO is designed to handle the following four types of invariance/equivariance:
\begin{enumerate}
\vspace{-0.05in}
\item {\it Translational Invariance.} Translating the reference frame by $g\in\mathbb{R}^{d}$ results in an invariant output, i.e., $\mathcal{G}[\tilde{\bm{f}}](x+g)=\mathcal{G}[\bm{f}](x)$, where $\tilde{\bm{f}}(x+g):=\bm{f}(x)$.
\vspace{-0.05in}
\item {\it Translational Equivariance.} Translating the reference frame by $g\in\mathbb{R}^{d}$ results in an equivariant translation of the output, i.e., $\mathcal{G}[\tilde{\bm{f}}](x+g)=\mathcal{G}[\bm{f}](x)+g$, where $\tilde{\bm{f}}(x+g):=\bm{f}(x)$.
\vspace{-0.05in}
\item {\it Rotational Invariance.} Rotating the reference frame results in an invariant output, i.e., for any orthogonal matrix $R\in\mathbb{R}^{d\times d}$, one has $\mathcal{G}[\tilde{\bm{f}}](Rx)=\mathcal{G}[\bm{f}](x)$, where $\tilde{\bm{f}}(Rx):=R\bm{f}(x)$.
\vspace{-0.05in}
\item {\it Rotational Equivariance.} Rotating the reference frame results in an equivariant rotation of the output, i.e., for any orthogonal matrix $R\in\mathbb{R}^{d\times d}$, one has $\mathcal{G}[\tilde{\bm{f}}](Rx)=R\mathcal{G}[\bm{f}](x)$, where $\tilde{\bm{f}}(Rx):=R\bm{f}(x)$.
\end{enumerate}
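As a numerical illustration of these properties, the following sketch checks that an aggregation whose kernel depends only on the relative distance $\verti{x_j-x_i}$ yields outputs invariant under translations and rotations of the coordinate frame. The Gaussian kernel here is a stand-in for the learned INO kernel, not the actual architecture.

```python
import numpy as np

# Check: a distance-only kernel makes the aggregated output frame-invariant.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 2))                  # node coordinates
f = rng.normal(size=(10, 1))                  # scalar feature per node

def aggregate(x, f):
    # pairwise distances |x_i - x_j|, then a Gaussian stand-in kernel
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    k = np.exp(-d**2)
    return k @ f / len(x)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # rotation matrix
g = np.array([3.0, -1.5])                          # translation vector

out = aggregate(x, f)
# rotating and translating the frame leaves the output unchanged
assert np.allclose(out, aggregate(x @ R.T + g, f))
print("invariant under rotation + translation")
```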
\vspace{-0.05in}
\subsection{Learning Hidden Physics}
\vspace{-0.05in}
Learning how complex physical systems respond is essential in science and engineering. For decades, physics-based PDEs have been commonly employed to model such systems, and traditional numerical methods \citep{leveque1992numerical} are developed to solve for unobserved system responses. However, the choice of certain governing PDEs is often determined \textit{a priori}, and these PDEs need to be solved numerically for each specified boundary/initial conditions and loading/source terms, which makes classical PDE-based methods insufficient in expressivity and computationally expensive.
Several recent developments in deep NNs have been devoted to providing an efficient surrogate directly from data \citep{ghaboussi1998autoprogressive,ghaboussi1991knowledge,carleo2019machine,karniadakis2021physics,zhang2018deep,cai2022physics,pfau2020ab,he2021manifold,besnard2006finite}. Among others, neural operators manifest superiority in predicting physical responses as \YY{function mappings}. Contrary to classical NNs that operate between finite-dimensional Euclidean spaces, neural operators are designed to learn mappings between infinite-dimensional function spaces \citep{li2020neural,li2020multipole,li2020fourier,you2022nonlocal,Ong2022,gupta2021multiwaveletbased,lu2019deeponet,lu2021learning,goswami2022physics, gupta2021multiwavelet}. A remarkable advantage of neural operators lies in their resolution independence, which implies that the prediction accuracy is invariant to the resolution of input functions. Moreover, neural operators are generalizable to different input instances,
and hence they can serve as efficient surrogates in downstream applications. Furthermore, in contrast to classical PDE-based approaches, neural operators can be trained directly from data, and hence requires no domain knowledge nor pre-assumed PDEs. All these advantages make neural operators a promising tool for learning complex physical systems \citep{yin2022simulating,goswami2022physics,yin2022interfacing,you2022physics,li2020neural,li2020multipole,li2020fourier,lu2021comprehensive}.
\YY{Despite the aforementioned advances of neural operators, purely data-driven neural operators still suffer from data challenge. In particular, in order to generalize the solution, they require a large corpus of paired datasets, which is prohibitively expensive in many engineering applications. To resolve this challenge, the physics-informed neural operator (PINO) \cite{li2021physics} and physics-informed DeepONets \cite{goswami2022physics,wang2021learning} are introduced, where a PDE-based loss is added to the training loss as a penalization term. However, these approaches still require \textit{a priori} knowledge of the underlying PDEs, which restricts their applications to (known) PDE-solving tasks.
}
\vspace{-5pt}
\subsection{Integral Neural Operators}\label{sec:integralNO}
\vspace{-5pt}
Integral neural operators, first proposed in \cite{li2020neural} and further developed in \cite{li2020multipole,li2020fourier,you2022nonlocal,you2022learning}, have the foundation in the representation of a PDE solution by the Green's function. An integral neural operator is comprised of three building blocks. First, the input function, $\bm{f}(x)\in\mathbb{F}$, is lifted to a higher-dimension representation via $\bm{h}(x,0)=\mathcal{P}[\bm{f}](x):=P[x,\bm{f}(x)]^T+p$. Here, $P\in\mathbb{R}^{(d+d_f)\times d_h}$ and $p\in\mathbb{R}^{d_h}$ define an affine pointwise mapping. Next, the feature vector function $\bm{h}(x,0)$ goes through an iterative layer block where the layer update is defined via the sum of a local linear operator, a nonlocal integral kernel operator, and a bias function: $\bm{h}(\cdot,j+1)=\mathcal{J}_{j+1}[\bm{h}(\cdot,j)]$, for $j=0,\cdots,L-1$. Here, $\bm{h}(\cdot,j)$ is a sequence of functions representing the values of the network at each hidden layer, taking values in $\mathbb{R}^{d_h}$. $\mathcal{J}_1,\cdots,\mathcal{J}_{L}$ are the nonlinear operator layers, which will be further discussed in the later contents. Finally, the output $\bm{u}(\cdot)\in\mathbb{U}$ is obtained through a projection layer. A common practice is to project the last hidden layer representation $\bm{h}(\cdot,L)$ onto $\mathbb{U}$ as:
$\bm{u}(x)=\mathcal{Q}[\bm{h}(\cdot,L)](x):=Q_2\sigma(Q_1\bm{h}(x,L)+q_1)+q_2$. Here, $Q_1\in\mathbb{R}^{d_{Q}\times d_h}$, $Q_2\in\mathbb{R}^{d_{u}\times d_Q}$, $q_1\in\mathbb{R}^{d_Q}$ and $q_2\in\mathbb{R}^{d_u}$ are the appropriately sized matrices and vectors that are part of the trainable parameter set. $\sigma$ is an activation function, which is often taken to be the popular rectified linear unit (ReLU) function.
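The lift--iterate--project structure above can be sketched in a few lines of NumPy. This is a toy illustration with random weights and illustrative dimensions (all numerical values are assumptions, not the authors' implementation); it shows only the pointwise lifting $\mathcal{P}$ and projection $\mathcal{Q}$ blocks, with the iterative layers discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_f, d_h, d_Q, d_u = 2, 2, 8, 16, 1  # illustrative dimensions (assumptions)

# Lifting block: h(x, 0) = P^T [x, f(x)] + p, an affine pointwise map.
P = rng.standard_normal((d + d_f, d_h)) * 0.1
p = rng.standard_normal(d_h) * 0.1

def lift(x, f_x):
    """Pointwise lift of [x, f(x)] into the d_h-dimensional hidden space."""
    return np.concatenate([x, f_x]) @ P + p

# Projection block: u(x) = Q2 sigma(Q1 h(x, L) + q1) + q2, with ReLU sigma.
Q1 = rng.standard_normal((d_Q, d_h)) * 0.1
q1 = np.zeros(d_Q)
Q2 = rng.standard_normal((d_u, d_Q)) * 0.1
q2 = np.zeros(d_u)

def project(h):
    """Project the last hidden representation onto the output space U."""
    return Q2 @ np.maximum(Q1 @ h + q1, 0.0) + q2

x = np.array([0.3, 0.7])       # a point in Omega
f_x = np.array([1.0, -0.5])    # input function value f(x)
h0 = lift(x, f_x)              # hidden representation, shape (d_h,)
u = project(h0)                # output value, shape (d_u,)
```

In a full operator the iterative layer block $\mathcal{J}_L\circ\cdots\circ\mathcal{J}_1$ would be applied between `lift` and `project`.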
Let $\mathcal{D}:=\{(\bm{f}_i,\bm{u}_i)\}_{i=1}^N$ be a support set of observations where the input $\{\bm{f}_i\}\subset\mathbb{F}$ is a sequence of independent and identically distributed (i.i.d.) random fields from a known probability distribution $\mu$ on $\mathbb{F}$, and $\bm{u}_i(x)\in\mathbb{U}$, possibly noisy, is the observed corresponding solution. We aim to learn the system response by building a surrogate operator: \vspace{-0.05in}
$$\tilde{\mathcal{G}}[\bm{f};\theta](x):=\mathcal{Q}\circ\mathcal{J}_{L}\circ\cdots\circ\mathcal{J}_1\circ\mathcal{P}[\bm{f}](x)\approx \bm{u}(x) \text{ ,}\vspace{-0.05in}$$
where the parameter set $\theta$ is obtained by solving the following optimization problem:
\begin{align}
\min_{\theta\in\Theta}\mathcal{L}_{\mathcal{D}}(\theta)&=\min_{\theta\in\Theta}\mathbb{E}_{\bm{f}\sim\mu}\vertii{\tilde{\mathcal{G}}[\bm{f};\theta]-\mathcal{G}[\bm{f}]}\approx \frac{1}{N}\min_{\theta\in\Theta}\sum_{i=1}^N\vertii{\tilde{\mathcal{G}}[\bm{f}_i;\theta]-\bm{u}_i} \text{ .}\label{eqn:opt}
\end{align}
The particular choice of an integral neural operator varies by the architecture of the iterative layer block, $\mathcal{J}_{j+1}$. In \cite{li2020neural}, graph neural operators (GNOs) are proposed, where the iterative kernel integration is invariant across layers, i.e., $\mathcal{J}_1=\mathcal{J}_2=\cdots=\mathcal{J}_L:=\mathcal{J}^{GNO}$, with the update of each layer network given by
\begin{align}
\bm{h}(x&,j+1) = \mathcal{J}^{GNO}(\bm{h}(x,j)):=\sigma\left(W\bm{h}(x,j)+\int_{\Omega} \bm{m}(x,y) \bm{h}(y,j)dy + c\right) \text{ ,}\label{eq:gkn_1}\\
\bm{m}(x&,y):= \kappa \left(x,y,\bm{f}(x),\bm{f}(y);v \right)\text{ .}\label{eq:gkn_2}
\end{align}
Here, $W\in\mathbb{R}^{d_h\times d_h}$ and $c\in\mathbb{R}^{d_h}$ are learnable tensors, and $\kappa\in\mathbb{R}^{d_h\times d_h}$ is a tensor kernel function that takes the form of a (usually shallow) NN with learnable parameters $v$.
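A discretized GNO layer update can be sketched as below, approximating the integral by a Riemann sum over the nodes. The stand-in kernel `kappa` (a fixed random bilinear map here, an assumption made purely so the sketch is self-contained) plays the role of the shallow kernel MLP with parameters $v$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_h = 32, 8
xs = rng.random((n, 2))             # nodes discretizing Omega
f = rng.standard_normal((n, 1))     # samples of the input function f(x_i)
h = rng.standard_normal((n, d_h))   # current hidden features h(x_i, j)

W = rng.standard_normal((d_h, d_h)) * 0.1
c = np.zeros(d_h)
A1 = rng.standard_normal((d_h, 6))  # toy stand-in for the kernel MLP kappa
A2 = rng.standard_normal((d_h, 6))

def kappa(xi, yj, fi, fj):
    """Toy d_h x d_h tensor kernel; in practice this is a shallow MLP."""
    z = np.concatenate([xi, yj, fi, fj])  # 6 kernel arguments in 2D
    return 0.1 * np.outer(np.tanh(A1 @ z), np.tanh(A2 @ z))

def gno_layer(h):
    h_new = np.empty_like(h)
    for i in range(n):
        # Riemann-sum approximation of the integral over Omega
        integral = sum(kappa(xs[i], xs[j], f[i], f[j]) @ h[j]
                       for j in range(n)) / n
        h_new[i] = np.tanh(W @ h[i] + integral + c)  # sigma = tanh here
    return h_new

h1 = gno_layer(h)  # one layer update; the same J^GNO is shared across layers
```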
\YY{Since the layer update in integral neural operators is formulated as a continuous integral operator, the learned network parameters are resolution-independent: the learned $W$, $c$, and $v$ are close to optimal even when used with different resolutions.
Besides GNOs, when both the domain and the discretized points are structured, Fourier Neural Operators (FNOs) \citep{li2020fourier} and Multiwavelet-based Operators (MWT) \citep{gupta2021multiwaveletbased} can be employed. In FNOs, the fast Fourier transform is employed to evaluate the integrals, which presents superior efficiency.
Nevertheless, despite the rapid advancement in neural operators, existing methods fail to preserve the invariance/equivariance properties under translation and rotation operations.}
\subsection{Invariant and Equivariant Neural Networks}
\vspace{-0.05in}
Recently, invariant and equivariant NNs have been developed in the context of convolutional neural networks \citep{lang2020wigner,chirikjian2000engineering,knapp2001representation} and graph neural networks (GNNs) \citep{bruna2013spectral,defferrard2016convolutional,kipf2016semi}, and their effectiveness has been demonstrated on a variety of machine learning tasks, such as image classification \citep{cohen2016group,cohen2016steerable,weiler2019general,romero2020group} and dynamical system modelling \citep{rezende2019equivariant,satorras2021n}. To achieve equivariance, the authors in \cite{thomas2018tensor,fuchs2020se} proposed to utilize spherical harmonics to compute a set of bases for transformations between higher-order representations. In another line of effort \citep{schutt2017quantum,klicpera2020directional,anderson2019cormorant,miller2020relevance,satorras2021n}, GNNs were considered based on a message passing framework \cite{brandstetter2022message}, in which the translational and rotational invariance/equivariance were imposed by specially designed edge and node update operations. However, the above-mentioned invariant/equivariant networks are restricted to a discrete ``vector-to-vector'' mapping, and the learned parameters cannot be reused in networks of different input/output resolutions, \YY{which hinders their application to learning hidden physics laws in the form of function mappings}. \YY{Therefore, the goal of this work is to design neural operators that impose minimal physical law assumptions, namely translational and rotational invariance/equivariance, so as to provide a data-driven model form that learns complex physical systems with guaranteed momentum conservation.}
\vspace{-5pt}
\section{Neural Operators with Momentum Conservation}
\vspace{-5pt}
In this section, we develop the invariant/equivariant architecture based on integral neural operators.
Our development is two-fold. First, a node embedding update scheme is proposed that is physically invariant and preserves invariance/equivariance to translations and rotations on a continuous domain. The essential idea is to devise a message passing neural network whose arguments and relevant representation embeddings are invariant to transformations. As such, we can convert transformation-sensitive representations to their transformation-invariant counterparts. Second, to handle general domains and accelerate training, we also modify the GNO architecture in Eqs.~\eqref{eq:gkn_1}-\eqref{eq:gkn_2} in such a way that each layer resembles a discretized time-dependent nonlocal equation \cite{you2022nonlocal}. As a result, our proposed architecture can be seen as a translation- and rotation-invariant/equivariant nonlocal differential equation, allowing for generalization of its optimal parameters from shallow to deep networks.
To establish a transformation-invariant kernel in Eq.~\eqref{eq:gkn_2}, we introduce two types of transformation-invariant quantities as arguments to the kernel function: the vector Euclidean norm of the edge between $x$ and $y$, i.e., $\verti{y-x}$, and the orientation of the vector $y-x$ with respect to a local reference vector in the undeformed coordinates. For example, the local reference edge in a rectangular domain can be either the horizontal or vertical edge of the rectangle. From a numerical implementation perspective, one can take the vector formed by any two fixed nodes as the reference edge, as illustrated in Figure \ref{fig:equivariance}(b). In physical problems with 2D domains, ${\Omega}\subset\mathbb{R}^2$, we pass in as kernel arguments the decomposed Euclidean norm in the following form:
\begin{equation}\label{eq:norm_decompose}
\overline{y-x} := [\verti{y-x} \cos\theta,
\verti{y-x} \sin\theta] \text{ ,}
\end{equation}
where $x$ and $y$ are the source and target nodes connected by the edge, and $\theta$ denotes the computed local orientation. Similarly, for ${\Omega}\subset\mathbb{R}^3$, three kernel arguments are passed in, based on two angles from the computed local orientation. Here, we point out that the idea of parameterizing the edge feature with its Euclidean norm, $\verti{y-x}$, was also employed in the equivariant graph neural network proposed in \cite{satorras2021n}. However, our approach is the first to consider the local edge orientation together with its Euclidean norm, which makes the resulting network more expressive. As will be demonstrated in the ablation study in Section \ref{sec:exp}, a Euclidean norm-based kernel architecture may not be sufficiently expressive.
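The decomposed-norm feature of Eq.~\eqref{eq:norm_decompose} is straightforward to compute and verify numerically. The sketch below (a minimal NumPy illustration, not the authors' code) computes $\overline{y-x}$ against a reference edge and checks that the feature is unchanged when the whole frame, including the reference edge, is rotated and translated:

```python
import numpy as np

def invariant_edge_feature(x, y, ref):
    """Decomposed Euclidean norm [|y-x| cos(theta), |y-x| sin(theta)], where
    theta is the orientation of y - x relative to the reference edge `ref`."""
    e = y - x
    r = np.linalg.norm(e)
    theta = np.arctan2(e[1], e[0]) - np.arctan2(ref[1], ref[0])
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def rotate(v, a):
    """Counter-clockwise rotation of a 2D vector by angle a."""
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R @ v

x, y = np.array([0.1, 0.2]), np.array([0.7, 0.5])
ref = np.array([1.0, 0.0])              # e.g. the horizontal edge as reference
g, a = np.array([3.0, -2.0]), 0.9       # frame translation and rotation

feat = invariant_edge_feature(x, y, ref)
feat_t = invariant_edge_feature(rotate(x, a) + g, rotate(y, a) + g,
                                rotate(ref, a))  # the reference edge transforms too
assert np.allclose(feat, feat_t)  # feature is invariant under the transform
```

Note that the reference edge, being the vector between two fixed nodes, rotates with the frame while the translation cancels; this is exactly what makes the relative angle $\theta$ invariant.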
\textbf{INO for scalar-valued functions.} We first consider the scenario where the output function takes scalar values, i.e., $d_u=1$, and the physical system is both translation- and rotation-invariant. Examples in this category include the prediction of energies in environmentally-powered systems \citep{cammarano2012pro}, pressure monitoring in subsurface flows \citep{fumagalli2011numerical}, and the prediction of damage field in brittle fracture \citep{fan2022meshfree}. In this context, we propose the following INO-scalar architecture: for the lifting block, we only pass in the Euclidean norm of the input function:
\begin{equation}\label{eq:inos_p}
\bm{h}(x,0)=\mathcal{P}[\bm{f}](x):=P\verti{\bm{f}(x)}+p \text{ ,}
\end{equation}
where $P,p\in\mathbb{R}^{d_h}$. Then, for the iterative layer, we introduce a fictitious time step, $\tau$, and regard different layer features as the solution of a time-dependent nonlocal equation at different time instances:
\begin{align}
&\bm{h}(x,(j+1)\tau):=\bm{h}(x,j\tau)+\tau\sigma\left(W\bm{h}(x,j\tau)+\int_{\Omega} \bm{m}(x,y)\bm{h}(y,j\tau) dy + c\right) \text{ ,}\label{eq:inos_1}\\
&\bm{m}(x,y):= \kappa \left(\overline{y-x},\verti{\bm{f}(x)},\verti{\bm{f}(y)};v \right)\text{ .}\label{eq:inos_2}
\end{align}
Finally, the projection block is taken as a 2-layer multilayer perceptron (MLP), as in the other integral neural operators. Herein, we notice that the architecture in Eq.~\eqref{eq:inos_1} resembles a time-dependent nonlocal equation: if we divide both sides of Eq.~\eqref{eq:inos_1} by the fictitious time step, $\tau$, the term $(\bm{h}(x,(j+1)\tau)-\bm{h}(x,j\tau))/\tau$ corresponds to the discretization of a first-order derivative, so that this architecture can indeed be interpreted as a nonlinear differential equation in the limit of deep layers, as $\tau\to 0$. Hence, in practice we can employ the shallow-to-deep learning technique \citep{haber2018learning,you2022nonlocal}, which corresponds to training the network for an increasing number of layers and using the optimal parameters obtained with $L$ layers as initial guesses for the $\tilde{L}$-layer INO, where $\tilde{L}>L$. Moreover, we point out that all network arguments are translation- and rotation-invariant, and hence we have the following theorem, with its detailed proof provided in \ref{app:a}:
\begin{theorem}[Invariance for INO-scalar] \label{thm:INO-scalar}
The INO-scalar architecture proposed in Eqs.~\eqref{eq:inos_p}-\eqref{eq:inos_2} is translational and rotational invariant. That means, when translating the reference frame by $g\in\mathbb{R}^d$ and rotating it by an orthogonal matrix $R\in\mathbb{R}^{d\times d}$, the following property holds true:
$$\tilde{\mathcal{G}}[\tilde{\bm{f}};\theta](Rx+g)=\tilde{\mathcal{G}}[\bm{f};\theta](x) \text{ ,}$$
where $\tilde{\bm{f}}(Rx+g):=R\bm{f}(x)$.
\end{theorem}
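The residual layer update of Eq.~\eqref{eq:inos_1} and the shallow-to-deep reuse of parameters can be sketched as follows. Here the invariant kernel values $\bm{m}(x_i,y_j)$ are precomputed as a random tensor purely for illustration (an assumption; in practice they come from the kernel MLP $\kappa$ evaluated on the invariant edge features):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_h, tau = 16, 4, 0.25
h = rng.standard_normal((n, d_h))        # h(x_i, 0) after lifting
W = rng.standard_normal((d_h, d_h)) * 0.1
c = np.zeros(d_h)
# Precomputed kernel values m(x_i, y_j), each a d_h x d_h matrix.
m = rng.standard_normal((n, n, d_h, d_h)) * 0.05

def ino_scalar_layer(h, tau):
    """h(x,(j+1)tau) = h(x,j tau) + tau * sigma(W h + int m(x,y) h(y) dy + c)."""
    integral = np.einsum('ijkl,jl->ik', m, h) / n  # Riemann sum over y
    return h + tau * np.tanh(h @ W.T + integral + c)

# Shallow-to-deep: the same W, c, m trained with L layers can initialize a
# deeper network, since the update discretizes a continuous-in-time equation.
h_L = h
for _ in range(4):            # L = 4 layers with step tau
    h_L = ino_scalar_layer(h_L, tau)
h_deep = h
for _ in range(8):            # L_tilde = 8 layers with step tau / 2
    h_deep = ino_scalar_layer(h_deep, tau / 2)
```

Halving $\tau$ while doubling the layer count keeps the total fictitious time $L\tau$ fixed, which is why the shallow optimum is a good initial guess for the deep network.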
\begin{figure}[!t]\centering
\includegraphics[width=.6\linewidth]{ino-vector.pdf}
\caption{An illustration of the proposed INO architecture. We start from input $\bm{f}(x)$ and the initial coordinate $\bm{x}(x,0):=x$. Then, the iterative layers are built as integral operators based on invariant kernel functions, to obtain a frame-invariant layer feature vector update $\bm{h}(x,j)$ and a frame-dependent coordinate update $\bm{x}(x,j)$ which embeds the coordinate rotation information. Lastly, we project the last hidden layer representation (for scalar-valued functions) or the coordinate update (for vector-valued functions) to the target function space. \vspace{-0.1in}}
\label{fig:ino_architecture}
\end{figure}
\textbf{INO for vector-valued functions.} We now further consider the case where the output function takes vector values, and hence the output should rotate equivariantly with the rotation of the reference frame. Under this scenario, rotation-equivariant property is required to achieve the conservation of angular momentum. As such, besides the layer feature function $\bm{h}(x,j\tau)$, $j=0,\cdots,L$, we further introduce an additional coordinate function, $\bm{x}(x,j\tau)$, which is defined on domain ${\Omega}$ and takes values in $\mathbb{R}^d$. Then, the key is to preserve invariance to rotations on $\bm{h}$, as well as equivariance on $\bm{x}$. In light of this, when translational invariance and rotational equivariance are desired, we propose the following INO-vector architecture (cf. Figure~\ref{fig:ino_architecture}): for the lifting block, we provide the Euclidean norm of the original feature to $\bm{h}$ and carry the coordinate information into $\bm{x}$:
\begin{align}
&\bm{h}(x,0)=\mathcal{P}[\bm{f}](x):=P\verti{\bm{f}(x)}+p \text{ ,}\label{eq:inov_p1}\\
&\bm{x}(x,0):=x\label{eq:inov_p2} \text{ ,}
\end{align}
with $P,p\in\mathbb{R}^{d_h}$. Then, the $(j+1)$-th iterative layer update of an $L$-layer INO is defined as,
\begin{align}
&\bm{h}(x,(j+1)\tau):=\bm{h}(x,j\tau)+\tau\sigma\left(W\bm{h}(x,j\tau)+\int_{\Omega} \bm{m}(x,y)\bm{h}(y,j\tau) dy + c\right),\label{eq:inov_1}\\
&\bm{x}(x,(j+1)\tau):=\bm{x}(x,j\tau)+\tau\int_{\Omega} (x-y)\phi(\bm{m}(x,y)\bm{h}(y,j\tau);w) dy\text{ ,}\label{eq:inov_2}\\
&\bm{m}(x,y):= \kappa \left(\overline{y-x},\verti{\bm{f}(x)},\verti{\bm{f}(y)};v \right)\text{ .}\label{eq:inov_3}
\end{align}
Finally, we define the projection block and the output function as,
\begin{equation}\label{eq:inov_q}
\bm{u}(x)=\mathcal{Q}[\bm{x}(x,L\tau)](x):=\bm{x}(x,L\tau)-x \text{ .}
\end{equation}
Here, $\kappa$ and $\phi$ are two separate (usually shallow) MLPs for computing edge and coordinate messages, with $v$ and $w$ being the corresponding trainable parameters, respectively. In Eq.~\eqref{eq:inov_2}, $\phi$ takes as input the edge embeddings and outputs a scalar representing the weight associated with its neighboring node. The nodal positions are then updated as the weighted sum of coordinate differences. When considering the output function $\bm{u}$ as the displacement field and $\bm{x}$ as the updated position of material points, the proposed INO-vector architecture can be seen as an analogue to the particle-based methods, since both approaches describe the motion of a particle by the summation of forces due to its neighboring particles. Additionally, the INO architecture preserves the continuous integral treatment of the interactions between nodes that characterizes neural operators. As a result, INO also permits resolution independence with respect to the inputs, and hence serves as a surrogate operator between function spaces. Formally, we have the following theorem, with the detailed proof provided in \ref{app:a}:
\begin{theorem}[Invariance/equivariance for INO-vector] \label{thm:INO-vector-1}
The INO-vector architecture proposed in Eqs.~\eqref{eq:inov_p1}-\eqref{eq:inov_q} is translation-invariant and rotation-equivariant. That means, when translating the reference frame by $g\in\mathbb{R}^d$ and rotating it by an orthogonal matrix $R\in\mathbb{R}^{d\times d}$, the following property holds true:
$$\tilde{\mathcal{G}}[\tilde{\bm{f}};\theta](Rx+g)=R\tilde{\mathcal{G}}[\bm{f};\theta](x) \text{ ,}$$
where $\tilde{\bm{f}}(Rx+g):=R\bm{f}(x)$.
\end{theorem}
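The equivariance property of Theorem \ref{thm:INO-vector-1} can be checked numerically on a toy instance of the INO-vector update. The sketch below (random toy weights standing in for $\kappa$ and $\phi$; an illustration under stated assumptions, not the authors' implementation) runs the coordinate update on original and transformed frames and verifies that the output displacement rotates with the frame:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_h, tau, L = 12, 4, 0.2, 3
pts = rng.random((n, 2))               # undeformed node coordinates x_i
fval = np.abs(rng.standard_normal(n))  # invariant inputs |f(x_i)|
B = rng.standard_normal((d_h, 4)) * 0.3  # toy stand-in for kernel MLP kappa
w = rng.standard_normal(d_h) * 0.3       # toy stand-in for weight MLP phi

def edge_feat(xi, yj, ref):
    """Invariant decomposed norm of y - x relative to the reference edge."""
    e = yj - xi
    r = np.linalg.norm(e)
    th = np.arctan2(e[1], e[0]) - np.arctan2(ref[1], ref[0])
    return np.array([r * np.cos(th), r * np.sin(th)])

def ino_vector(pts, ref):
    X = pts.copy()                       # coordinate function x(x, 0) = x
    for _ in range(L):
        X_new = X.copy()
        for i in range(n):
            acc = np.zeros(2)
            for j in range(n):
                z = np.concatenate([edge_feat(pts[i], pts[j], ref),
                                    [fval[i], fval[j]]])
                msg = np.tanh(B @ z)                     # invariant edge message
                acc += (X[i] - X[j]) * np.tanh(w @ msg)  # scalar weight from phi
            X_new[i] = X[i] + tau * acc / n              # coordinate update
        X = X_new
    return X - pts                       # u(x) = x(L tau) - x

def rot(V, a):
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return V @ R.T

ref = pts[1] - pts[0]                    # reference edge from two fixed nodes
u = ino_vector(pts, ref)
a, g = 0.7, np.array([5.0, -1.0])        # frame rotation and translation
u_t = ino_vector(rot(pts, a) + g, rot(ref, a))
assert np.allclose(rot(u, a), u_t)       # output rotates equivariantly
```

The check works because the edge messages depend only on invariant quantities, while the coordinate differences $x-y$ rotate with the frame (the translation cancels), so the weighted sum, and hence the output, is rotation-equivariant and translation-invariant.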
\vspace{-0.1in}
\section{Experiments}\label{sec:exp}
\vspace{-0.1in}
\begin{figure}[!t]\centering
\includegraphics[width=0.75\linewidth]{Schematic_new2.pdf}
\caption{Illustration of input/output settings in our three exemplar problems.\vspace{-0.1in}}
\label{fig:experiments}
\end{figure}
In this section, we demonstrate the empirical effectiveness of INOs. Specifically, we conduct experiments on both 2D synthetic and real-world datasets, and compare the proposed INO against three data-driven neural operator baselines -- GNOs \cite{li2020neural}, FNOs \cite{li2020fourier}, and MWTs \cite{gupta2021multiwavelet} -- a physics-informed neural operator baseline (PINO \cite{li2021physics}), and an equivariant GNN (EGNN \cite{satorras2021n}). Herein, we employ $L=4$ iterative layers for all neural operators. All experiments are run in PyTorch with the Adam optimizer. For a fair comparison, we tune the hyperparameters for each method, including the learning rates, decay rates, and regularization coefficients, to minimize the error on the validation dataset. In all tests, we report the averaged relative error, $||\bm{u}_{i,pred}-\bm{u}_{i}||/||\bm{u}_{i}||$, as the comparison metric (lower is better). An illustration of our three test examples is provided in Figure \ref{fig:experiments}, with further details of each dataset and experimental settings provided in \ref{app:b}.
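The comparison metric above is simple to state in code; a minimal sketch of the averaged relative error over a batch of samples:

```python
import numpy as np

def relative_l2_error(u_pred, u_true):
    """Averaged relative error ||u_pred - u_true|| / ||u_true|| per sample,
    where each row is one (flattened) predicted/reference solution field."""
    num = np.linalg.norm(u_pred - u_true, axis=-1)
    den = np.linalg.norm(u_true, axis=-1)
    return float(np.mean(num / den))

# Two toy samples: per-sample relative errors 0.1/1 and 0.2/2, mean 0.1.
u_true = np.array([[1.0, 0.0], [0.0, 2.0]])
u_pred = np.array([[1.1, 0.0], [0.0, 1.8]])
err = relative_l2_error(u_pred, u_true)  # -> 0.1
```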
\vspace{-0.05in}
\subsection{Synthetic dataset: sub-surface flow}
\vspace{-0.05in}
As a benchmark for scalar-valued output functions, we consider the modeling of 2D sub-surface flow through a porous medium with heterogeneous permeability field.
The high-fidelity synthetic simulation data in this example is described by the Darcy flow, which has been considered in a series of neural operator studies \citep{li2020neural,li2020multipole,li2020fourier,lu2021comprehensive,you2022nonlocal,you2022learning}. Specifically, the governing differential equation is defined as:
\begin{equation}\label{eq:darcy_PDE}
\begin{split}
-\nabla \cdot (\bm{f}(x)\nabla \bm{u}(x)) &= 1,\, \forall \;\; x\in{\Omega}:=[0,1]^2,\\
\text{subjected to}\;\;\;\bm{u}_{BC}(x) &= 0, \, \forall \;\; x \in \partial \Omega,
\end{split}
\end{equation}
where $\bm{f}$ is the conductivity field, and $\bm{u}$ is the hydraulic head (both are scalar-valued functions). In this context, we aim to learn a solution operator of Darcy's equation that maps each realization of the conductivity field $\bm{f}(x)$ to the corresponding solution field $\bm{u}(x)$. For training, we employ the dataset from \cite{li2020neural}, where the conductivity field $\bm{f}(x)$ is modeled as a two-valued piecewise constant function with random geometries such that the two values have a ratio of $4$. Then, the ground-truth solutions of $\bm{u}(x)$ were generated using a second-order finite difference scheme on a fine $241\times 241$ grid and downsampled to $16\times 16$ and $31\times 31$. Herein, we split the provided test samples into $40$ samples as validation for hyperparameter tuning and $40$ samples as test. To investigate the performance of each model under the small data regime, we train with $N^{\text{train}}=\{5,10,20,40,100\}$ labelled data pairs. The dimension of representation is set to $d_h=64$, and the kernel $\kappa$ is modeled by a 3-layer MLP with width $(n, 512, 1024, d_h^2)$, where $n$ is the number of kernel arguments for each architecture. For 2D problems, $n=6$ for GNOs and $n=4$ for INOs.
\begin{figure*}[h!]
\centering%
\includegraphics[width=0.31\columnwidth]{ablation_bar_gaussian_finetune.pdf}
\includegraphics[width=0.31\columnwidth]{loss_fno_gno_ino_new.pdf}
\includegraphics[width=0.36\columnwidth]{loss_darcy_data_aug_merge2.pdf}
\caption{Results for cases with scalar-valued output function. Left: comparison between INO, GNO, norm-INO, aug-GNO, and other baseline models in small and medium data regimes. Middle: comparison between FNO, GNO, and INO with varying numbers of training samples and test on grids with different resolutions. Right: Generalization results to test samples with translated and rotated reference frames.}
\label{fig:darcy}
\end{figure*}
\textbf{Ablation study. }We first conduct an ablation study on the proposed algorithm, with four settings: 1) The original INO-scalar architecture; 2) The original GNO architecture; With this test, we aim to investigate the expressivity of our invariant kernel compared to the original kernel of \eqref{eq:gkn_2}. 3) The INO-scalar architecture, with its kernel $\kappa$ depending on the Euclidean norm of the edge only (denoted as ``norm-INO''); With this test, we study if the local edge orientation, $\theta$, plays an important role in the model. 4) The GNO architecture with training data augmentation (denoted as ``aug-GNO''), where we train the GNO model with additional translated/rotated samples. Specifically, we translate the reference frame by setting $\tilde{{\Omega}}\in R([C_x,1+C_x]\times [C_y,1+C_y])$. Herein, the randomly generated constants $C_x,C_y\sim \mathcal{U}[-1,1]$, where $\mathcal{U}[-1,1]$ denotes the uniform distribution on $[-1,1]$. Similarly, for rotation we randomly generate $C_\theta\sim\mathcal{U}[0,2\pi]$ and rotate the reference coordinates counter-clockwise. For each training sample, we repeat the above process three times to augment the training set. With this test, we investigate if our invariant architecture outperforms an invariance-agnostic approach with data augmentation. On each of these four settings, we report the training and test errors with $N^{\text{train}}=\{10,100\}$\footnote{Since the GNO model could not finish its training in the ``aug-GNO, $N^{\text{train}}=100$'' case within 72 hours, we only report the results from $N^{\text{train}}=10$ for this setting.} to study the performance of each model in small and medium data regimes. Unless otherwise stated, all training and testing are performed on $16\times 16$ structured grids. We plot the results in the top left of Figure \ref{fig:darcy}, with further error comparisons provided in Table \ref{tab:darcy_results} of \ref{app:b}.
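The frame augmentation used for aug-GNO can be sketched as follows (an illustrative NumPy version under the stated sampling assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def augment_frames(coords, n_aug=3):
    """Return n_aug randomly transformed copies of the node coordinates:
    translation C_x, C_y ~ U[-1, 1] and a counter-clockwise rotation
    C_theta ~ U[0, 2*pi) of the reference frame."""
    out = []
    for _ in range(n_aug):
        cx, cy = rng.uniform(-1.0, 1.0, size=2)
        th = rng.uniform(0.0, 2.0 * np.pi)
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        out.append(coords @ R.T + np.array([cx, cy]))
    return out

g = np.linspace(0.0, 1.0, 4)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # 16 nodes on [0,1]^2
augmented = augment_frames(grid)  # 3 transformed coordinate sets for aug-GNO
```

Each augmented copy is a rigid transform, so pairwise distances between nodes are preserved while absolute positions and orientations change.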
As shown in the top left of Figure \ref{fig:darcy}, INOs outperform GNOs in both small and medium data regimes. When $N^{\text{train}}=100$, INOs present $1.2\%$ test error while GNOs have $2.0\%$ test error. When $N^{\text{train}}=10$, INOs have $6.1\%$ test error, which also outperforms GNOs (with $7.7\%$ test error) by $20\%$. This is possibly due to the fact that INOs have only 4 arguments in their kernel function $\kappa$, while GNOs have 6. Hence, INOs are less likely to overfit with small and medium data. Surprisingly, the data-augmentation strategy did not help much, and the aug-GNOs show a similar accuracy to the original GNOs. On the other hand, when comparing the performance between INOs and norm-INOs, we notice that norm-INOs achieve the best performance in the small data regime, possibly due to the fact that they further reduce the number of kernel arguments to 3. However, when we increase $N^{\text{train}}$ to $100$, the performance of norm-INOs deteriorates due to the lack of flexibility. Hence, in this paper we focus on INOs with a 4-argument kernel, since they outperform GNOs in both small and medium data regimes, and have better expressivity than the norm-INOs in the medium data regime.
\textbf{Comparison with more baselines. }
We now present the comparison with other baseline methods by comparing the test errors of INO-scalar with GNO, FNO, MWT, PINO, and EGNN. To obtain a fair comparison, the numbers of trainable parameters for all models are within the range of $[4.2M,4.8M]$ (see Table \ref{tab:darcy_para} in \ref{app:b} for further details). As shown in the top left of Figure \ref{fig:darcy} and Table \ref{tab:darcy_results}, our proposed INOs have achieved the best accuracy in both small ($N^{\text{train}}=10$) and medium ($N^{\text{train}}=100$) data regimes. Here, we note that in the PINO architecture, we follow \cite{li2021physics} and add the error associated with the governing PDE as a penalization term in the loss. The penalty parameter was tuned together with other hyperparameters, so as to optimize the validation error. With this test, we aim to compare the efficacy of INOs with another popular physics-informed approach, where full physics knowledge is infused via a soft constraint (when the governing equation is known). As can be seen from the results, INO still outperforms PINO even in the small data regime, where the best PINO presents $9.7\%$ error. Generally, when the PDE loss has a large weight, PINO gets closer to PINN and suffers from slow convergence. On the other hand, our INOs embed the physics knowledge in the architecture instead of a soft constraint, and do not suffer from the challenges in optimization.
\begin{table}[]
\centering
{\small\begin{tabular}{|c|ccc|cc|}
\hline
{Dataset} & \multicolumn{3}{c|}{Glass-Ceramics} & \multicolumn{2}{c|}{Biological Tissue}\\
\cline{1-6}
$N^{\text{train}}$& 10 & 40 & 100 & 10 & 100 \\
\hline
GNO, train & 9.48\% & 4.09\% & 1.58\% & 9.63\% & 6.76\%\\
GNO, test & 31.25\% & 10.16\% & 8.50\% & 38.36\% & 14.15\%\\
\hline
INO, train & 4.43\% & 7.20\% & 7.28\% & 4.50\% & 3.65\%\\
INO, test & \textbf{12.65}\% & \textbf{8.19}\% & \textbf{7.94}\% & \textbf{17.37}\% & \textbf{6.38}\%\\
\hline
FNO, train & - & - & - & 8.49\% & 3.95\%\\
FNO, test & - & - & - & 36.73\% & 8.49\%\\
\hline
MWT, train & - & - & - & 21.16\% & 2.24\%\\
MWT, test & - & - & - & 41.79\% & 7.57\%\\
\hline
\end{tabular}}
\caption{Results for cases with vector-valued output function. Training and test errors in small and medium data regimes, where bold numbers highlight the best method for each case.}
\label{fig:vector_table}
\end{table}
\begin{figure*}[h!]
\centering%
\includegraphics[width=0.6\columnwidth]{loss_lps_tissue3.pdf}
\caption{Generalization results to test samples with translated and rotated reference frames, for cases with vector-valued output function.}
\label{fig:vector}
\end{figure*}
\textbf{Performance with the change of $N^{\text{train}}$ and resolutions. } Next, we further compare our INOs with two popular baseline neural operators, FNOs and GNOs, for $N^{\text{train}}=\{5,10,20,40,100\}$. For the purpose of testing generalization properties with respect to resolution, we train all models on $16\times 16$ structured grids, and consider two testing datasets: the ``original-resolution'' dataset with $16\times 16$ grids, and a ``fine-resolution'' dataset with $31\times 31$ grids. In the top right plot of Figure \ref{fig:darcy}, we report the training and test errors on both original- and fine-resolution datasets for each model, with further visual comparison on an exemplar test instance provided in Figure \ref{fig:plot_cross_resolution} of \ref{app:b}. One can observe that INOs consistently outperform GNOs and FNOs. Among all three neural operators, in the small data regime FNOs have the smallest training error and the largest test error, indicating that they suffer more from overfitting. This observation is also consistent with the findings in \cite{you2022nonlocal,you2022learning}. When comparing the test results across two resolutions, we observe that test errors at different resolutions remain on a similar scale for all three methods. This fact again verifies the capability of neural operators in handling different resolutions. On the contrary, the EGNN model learns a vector-vector mapping rather than a function-function mapping, which makes it not resolution-invariant: testing EGNN on $31\times 31$ grid samples yields $492\%$ error.
\textbf{Translation and rotation tests. }Lastly, we verify the capability of INOs in handling translated and rotated samples. In the translated test dataset, we translate the reference frame of each test sample by shifting it onto a new domain, $\tilde{{\Omega}}\in[C_x,1+C_x]\times [C_y,1+C_y]$. Here the movement of the domain is randomly generated as $C_x,C_y\sim \mathcal{U}[-C,C]$ and the constant $C$ defines the translation range. Similarly, in the rotated dataset we randomly generate $C_\theta\sim\mathcal{U}[0,C]$ for each test sample, and rotate the reference frame by $C_\theta$ counter-clockwise. The test errors of GNOs, FNOs, and INOs are reported in the bottom plot of Figure \ref{fig:darcy}, along with results from GNOs with data-augmentation, where $N^{\text{aug}}$ represents the total number of augmented training samples in addition to $N^{\text{train}}$. Perhaps unsurprisingly, while INOs exhibit invariant performance on translated and rotated datasets, the performances of GNOs and FNOs deteriorate as the range of translation/rotation, $C$, increases. When comparing the results from original GNOs and aug-GNOs, we notice that the data-augmentation trick is generally helpful in handling transformations, although it increases the training dataset size and requires longer training time. In Figure \ref{fig:plot_trans_rot_pred} of \ref{app:b} we plot the test results of INO, GNO, and FNO on an exemplar instance with translated and rotated frame coordinates, respectively, where one can see that the INO solutions are invariant while the solutions from GNO and FNO are not.
\vspace{-10pt}
\subsection{Synthetic dataset: glass-ceramics deformation}
\vspace{-0.05in}
In this example, we study material deformation in a glass-ceramic specimen as a prototypical exemplar of heterogeneous material response prediction. A glass-ceramic material is the product of controlled crystallization of a specialized glass composition, which results in a microstructure of one or more crystalline phases within the residual amorphous glass \citep{prakash2022investigation,serbena2012internal}. We consider an idealized microstructural realization on a circular domain with radius$=0.4$, which is subject to displacement-type loading on its boundary. This microstructure realization is composed of randomly distributed crystals embedded in a glassy matrix, such that the crystals occupy $40\%$ of the volume. To generate the training and test samples, we adopted the mechanical parameters in \cite{fan2022meshfree} and employed the quasi-static linear peridynamic solid (LPS) solver to generate the high-fidelity simulation data. We train with $N^{\text{train}}=\{10,40,100\}$ labelled data pairs, and validate/test the model with 40/40 samples, respectively. In this example, the specimen deformation is driven by the loading on its boundary, and hence our goal is to learn the mapping from $\bm{f}(x):=[\bm{u}_{BC}(x),\bm{a}(x)]$ to $\bm{u}(x)$, where $\bm{u}_{BC}(x)$ stands for (padded) Dirichlet-type boundary condition, and $\bm{a}(x)$ provides the microstructure information such that $\bm{a}(x)=0$ if the material point $x$ is glass and $\bm{a}(x)=1$ if $x$ is crystal. The INO-vector architecture is employed. Here, we emphasize that the domain is no longer structured and hence FNOs and MWTs are not applicable. Therefore, in this example we compare the performances of GNOs and INOs.
In Table \ref{fig:vector_table}, we report our experimental results on $N^{\text{train}}=\{10,40,100\}$ samples. The proposed INOs obtain the lowest relative errors on the test dataset compared to GNOs. Furthermore, in Figure \ref{fig:vector}, we study the performance of both neural operators on translated and rotated test samples. One can see that the error from INOs is invariant, whereas GNO errors increase as the translation and rotation ranges grow.
\vspace{-10pt}
\subsection{Real-world dataset: biological tissue deformation}
\vspace{-0.05in}
We now take one step further to demonstrate the performance of our method on a real-world physical response dataset not generated by solving PDEs. We consider learning the mechanical response of multiple biological tissue specimens from DIC displacement tracking measurements \citep{you2022physics}. In this example the constitutive equations and material microstructure are both unknown, and the dataset has unavoidable measurement noise. In this task, we aim to model the tissue response by learning a neural operator mapping the boundary displacement loading to the interior displacement field. We train with $N^{\text{train}}=\{10,100\}$ samples, validate with $40$ samples, and test the learned model with $4500$ samples. Since there is no known governing PDE, the PINO architecture is not applicable. Hence, we compare INOs with the other three neural operator baselines (GNOs, FNOs, and MWTs). We note that experimental measurements are generally not provided on Cartesian grids. To test FNOs and MWTs, we apply a cubic spline interpolation to the
displacement field, to obtain measurements on a structured grid. The results are provided in Table \ref{fig:vector_table} and Figure \ref{fig:vector}. As one can see, INOs again perform the best. Interestingly, compared with GNOs, in this example INOs halve the test errors not only in the small data regime ($N^{\text{train}}=10$), but also in the medium data regime ($N^{\text{train}}=100$). This is possibly due to the fact that the experimental measurement noise makes the learning with a small number of samples more challenging. Therefore, this example validates the robustness of our INOs not only in a small data regime, but also on noisy real-world datasets.
\vspace{-0.05in}
\section{Conclusion}
\vspace{-0.05in}
We proposed INO to learn complex physical systems with guaranteed momentum conservation. The key is a network architecture that preserves the translational and rotational invariance/equivariance properties. Our approach has a physical interpretation in terms of a particle-based method, and only requires observed data pairs with minimal physical assumptions.
We demonstrate with both synthetic and real-world datasets the expressivity and generalizability of the proposed INOs, and show that the guaranteed momentum conservation improves learning efficacy, especially in small and noisy data regimes. INO not only generalizes to translated and rotated datasets, but also provides improved predictions over the baseline neural operator models. We point out that our INOs represent the first neural operator architecture with guaranteed conservation laws, and we believe it is a novel and promising framework applicable to many problems in physical system learning.
\section*{Acknowledgements}
The authors would like to thank the reviewers for their careful reading and valuable suggestions that help improve the quality of the paper.
Y. Yu, H. You, and N. Tatikola would like to acknowledge support by the National Science Foundation under award DMS-1753031 and the AFOSR grant FA9550-22-1-0197. Portions of this research were conducted on Lehigh University's Research Computing infrastructure partially supported by NSF Award 2019035.
\section{Introduction}
The outcomes of high energy collider experiments depend to a large extent on event simulations obtained with MC generators. So do the planning and development of future machines and measurements \cite{Azzi:2019yne,Feng:2022inv,Mangano:2016jyj,LHeC:2020van,Proceedings:2020eah}. The baseline MCs are based on the description of hadron structure provided by collinear PDFs \cite{Kovarik:2019xvh}, while a more complete, 3D description of hadron structure is given by TMD PDFs \cite{Angeles-Martinez:2015sea}. There are thus efforts to include elements of TMD physics in modern MC generators and in the parton-branching algorithms on which they are based. The idea of the work \cite{Hautmann:2022xuc} described in this article is to include the TMD splitting functions obtained from the high-energy (or small-$x$) limit of partonic amplitudes \cite{Catani:1994sq} in a parton branching algorithm, with the goal of incorporating both small-$x$ and Sudakov contributions in the parton evolution. Thanks to its applicability over a wide kinematic region, the algorithm provided by the TMD Parton Branching (PB) method \cite{Hautmann:2017xtx,Hautmann:2017fcj} was chosen for this research.
\section{The TMD Parton Branching method}
The PB method is a flexible, widely applicable MC approach to obtain QCD high energy predictions based on TMD PDFs, simply called TMDs.
One of its main ingredients is a forward evolution equation \cite{Hautmann:2017xtx,Hautmann:2017fcj}.
The evolution of the parton density is expressed in terms of real, resolvable branchings and virtual and non-resolvable contributions, which are treated with Sudakov form factors.
Thanks to the momentum sum rule \footnote{The momentum sum rule for the DGLAP splitting functions $P_{ab}(z,\mu^2)$ reads $\sum_a\int_0^1 \textrm{d} z \; z P_{ab}(z,\mu^2) = 0$. }
and unitarity, the Sudakov form factor can be written in terms of real, resolvable splittings and interpreted as a non-emission probability.
Owing to the simple, intuitive picture of the evolution in terms of cascade of branchings and the probabilistic interpretation of the splitting functions and the Sudakov form factors, the PB evolution equation can be solved with MC techniques using a parton branching algorithm.
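The non-emission interpretation of the Sudakov form factor can be illustrated with a toy Monte Carlo. The fixed branching density $a$ below is a hypothetical stand-in for the actual PB kernels, chosen only to make the inverse-transform sampling transparent:

```python
import numpy as np

a = 0.3                  # toy, constant branching density: dP = a * dln(mu'^2)
rng = np.random.default_rng(0)

def next_branching_t(r):
    """Solve Delta(t) = exp(-a * t) = r for t = ln(mu^2 / mu0^2).
    The Sudakov form factor Delta is the probability of no resolvable
    emission between mu0^2 and mu^2, so sampling r ~ Uniform(0, 1) and
    inverting gives the scale of the next branching."""
    return -np.log(r) / a

t = next_branching_t(rng.uniform(size=200_000))
print(t.mean())          # ≈ 1/a: the evolution "time" to the next emission
```

Iterating this step, and sampling $z$ from the (normalized) splitting functions at each branching, is the core of a parton-branching solution of the evolution equation.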
In addition to the evolution equation, the PB method also provides a procedure to fit the parameters of the initial distribution to experimental data using the \texttt{xFitter} platform \cite{Alekhin:2014irh}. The obtained PB TMDs and PDFs \cite{BermudezMartinez:2018fsv,Jung:2021vym,Jung:2021mox} are accessible via TMDlib \cite{Abdulov:2021ivr} and in LHAPDF \cite{Buckley:2014ana} format for use in (TMD) MC generators. A generator of special importance is the TMD MC generator Cascade \cite{Baranov:2021uol}, where
the TMD initial state parton shower is implemented with the backward evolution guided by the PB TMDs.
The PB method provides a procedure to match PB TMDs with next-to-leading order (NLO) matrix elements \cite{BermudezMartinez:2019anj} to obtain predictions. Recently, a merging procedure was also developed \cite{BermudezMartinez:2021lxz}.
The PB method was used to study different evolution scenarios
like ordering conditions or resolution scales, see e.g. \cite{Hautmann:2017xtx,Hautmann:2019biw}. The PB predictions have been calculated for multiple measurements, in very different energy and mass regimes, including hadron colliders, fixed target experiments and $ep$ collider \cite{BermudezMartinez:2018fsv,BermudezMartinez:2019anj,BermudezMartinez:2020tys,Yang:2022qgk,Abdulhamid:2021xtt,H1:2021wkz}.
All those successful PB studies were performed with the DGLAP \cite{Gribov:1972ri,Lipatov:1974qm,Altarelli:1977zs,Dokshitzer:1977sg} splitting functions calculated in the collinear approximation. However, in some infrared-sensitive phase-space regions the collinear approximation is not sufficient
\cite{Dooling:2012uw,Dooling:2014kia}. In this work the PB approach was extended by using the TMD splitting functions \cite{Catani:1994sq,Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
\section{TMD splitting functions}
The concept of the TMD splitting functions originates from the high energy factorization \cite{Catani:1994sq}, where the TMD splitting function for the splitting of an off-shell gluon into quark, $\widetilde{P}_{qg}$, was calculated. The other channels were obtained in \cite{Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
The splitting functions have well defined collinear and high energy limits.
It was demonstrated that in the limit of small incoming transverse momenta, after angular averaging, the TMD splitting functions converge to the DGLAP leading order (LO) splitting functions. For finite transverse momenta, the TMD splitting function \cite{Catani:1994sq} can be written as an expansion in powers of the transverse momenta with $z$-dependent coefficients, which, after convolution with the TMD gluon Green's functions \cite{Kuraev:1977fs,Balitsky:1978ic}, yield corrections to the splitting function that are logarithmically enhanced for $z\rightarrow 0$. Therefore, the work presented next on the implementation of
TMD splitting functions in the PB method can be viewed as a step toward
constructing full MC generators for small-$x$ physics (see e.g. \cite{Chachamis:2015zzp,Andersen:2011zd,Jung:2010si,Hoeche:2007hlb,Golec-Biernat:2007tjf}).
\section{TMD splitting functions in the PB method}
The DGLAP splitting functions $P_{ab}^R (z, \mu^{\prime})$ were replaced by the TMD ones $\tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)$ in the PB evolution equation for the momentum weighted parton density $x{\mathcal{A}}_a = \tilde{\mathcal{A}}_a$ \cite{Hautmann:2017fcj}
\begin{multline}
\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu^2\right) =
\Delta_a\left(\mu^2,k_{\bot}^2\right)\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu_0^2\right) +
\sum_b\int\frac{d^2\mu_{\bot}^{\prime}}{\pi\mu_{\bot}^{\prime 2}}\Theta(\mu_{\bot}^{\prime 2}-\mu_0^2)\Theta(\mu^2-\mu_{\bot}^{\prime 2})
\\
\times \int\limits_x^{z_M }\textrm{d}z\, \frac{ \Delta_a\left(\mu^2, k_{\bot}^2 \right) }
{ \Delta_a\left(\mu_{\bot}^{\prime 2}, k_{\bot}^2 \right)} \tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)
\tilde{\mathcal{A}}_b\left( \frac{x}{z}, (k_{\bot}+(1-z)\mu_{\bot}^{\prime})^2, \mu_{\bot}^{\prime 2}\right),
\label{EvolEq}
\end{multline}
where $a,b$ are flavour indices, $x$ is the fraction of the proton's longitudinal momentum carried by parton $a$, $k_{\bot}$ the transverse momentum, $\mu$ the evolution scale, $\mu_0$ the initial evolution scale, $z$ the momentum fraction transferred in the splitting, and $z_M$ the soft-gluon resolution scale, which can be scale dependent.
To treat the virtual/non-resolvable emissions, a new TMD Sudakov form factor was introduced \cite{Hautmann:2022xuc}
\begin{equation}
\Delta_a(\mu^2,\mu_0^2,k_{\bot}^2)\equiv\Delta_a(\mu^2,k_{\bot}^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)\right),
\label{TMDSud}
\end{equation}
using the angular averaged TMD splitting functions $\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)$. This construction was possible thanks to the momentum sum rule and unitarity.
As an intermediate step, a scenario with the TMD splittings included in the real resolvable emissions but with
the default PB Sudakov form factor
\begin{equation}
\Delta_a(\mu^2,\mu_0^2)\equiv\Delta_a(\mu^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z P^R_{ba}(z,\mu^{\prime 2})\right)
\label{CollSud}
\end{equation}
was studied.
It was shown analytically \cite{Hautmann:2022xuc} that the evolution equation of Eq.~\ref{EvolEq} satisfies the momentum sum rule only when the same type of splitting functions is used in both the real emissions and the Sudakov form factors.
In other words, for the evolution equation Eq.~\ref{EvolEq} with the TMD Sudakov form factor in the form given by Eq.~\ref{TMDSud} the momentum sum rule holds, whereas with the collinear Sudakov form factor from Eq.~\ref{CollSud} it is broken.
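The momentum sum rule for the collinear LO splitting functions, which underlies this construction, can also be verified numerically. The following sketch implements the plus prescription and the $\delta(1-z)$ terms of the standard LO DGLAP kernels directly (it uses the textbook kernels, not the TMD kernels of the PB implementation):

```python
from scipy.integrate import quad

CF, CA, TR, nf = 4.0 / 3.0, 3.0, 0.5, 5

def plus_integral(h):
    # plus prescription: int_0^1 h(z)/(1-z)_+ dz = int_0^1 [h(z) - h(1)]/(1-z) dz
    return quad(lambda z: (h(z) - h(1.0)) / (1.0 - z), 0.0, 1.0)[0]

# first moments  int_0^1 dz z P_ab(z)  of the LO splitting functions
zPqq = CF * (plus_integral(lambda z: z * (1 + z**2)) + 1.5)            # +3/2 CF delta term
zPgq = CF * quad(lambda z: 1 + (1 - z)**2, 0, 1)[0]
zPqg = TR * quad(lambda z: z * (z**2 + (1 - z)**2), 0, 1)[0]
zPgg = (2 * CA * (plus_integral(lambda z: z * z)
                  + quad(lambda z: (1 - z) + z**2 * (1 - z), 0, 1)[0])
        + (11 * CA - 4 * nf * TR) / 6.0)                               # delta term

print(zPqq + zPgq)           # quark column of the sum rule: ≈ 0
print(2 * nf * zPqg + zPgg)  # gluon column of the sum rule: ≈ 0
```

Both column sums vanish to numerical precision, i.e. the first moments cancel between real and virtual (delta-term) contributions, exactly the mechanism exploited in the Sudakov construction above.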
\begin{figure}[htb]
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-down-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-gluon-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-down-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-gluon-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\caption[]{
Down quark and gluon distributions for scenarios with the collinear splitting functions (red), with the TMD splitting functions in the real emissions and the collinear Sudakov form factor (blue) and with the TMD splitting functions both in the real emissions and in the Sudakov form factor (purple).
Top: integrated TMDs as a function of $x$ at $\mu=100\;\textrm{GeV}$. Bottom: TMDs as a function of $|k_{\bot}|$ at $x=0.001$ and $\mu=100\;\textrm{GeV}$ \cite{Hautmann:2022xuc}. }
\label{Fig:Distributions}
\end{figure}
\section{Numerical results}
In the upper part of Fig.~\ref{Fig:Distributions}, the integrated distributions (iTMDs) as a function of $x$ at the scale $\mu=100\;\textrm{GeV}$ are shown for the down quark and the gluon for three evolution scenarios: the dashed red curve is obtained from the PB evolution equation with collinear splitting functions, the dotted blue curve from the model with TMD splitting functions in the real resolvable emissions but with the collinear Sudakov form factors, and the solid magenta line from the model with TMD splitting functions both in the real resolvable emissions and in the Sudakov form factors. In the bottom part of Fig.~\ref{Fig:Distributions}, the down quark and gluon TMDs as a function of $|k_{\bot}|$ are shown at $x=0.001$ and $\mu=100\;\textrm{GeV}$ for the same three models.
The bottom panel of each plot shows the ratios obtained with respect to the fully collinear scenario.
For the purpose of this study, the same starting distribution was used for all three models, which means that the differences between the curves come only from the evolution, i.e. purely from the treatment of the splitting functions. For the iTMDs, the effect of the TMD splitting functions is visible especially at low $x$; for the TMDs, the effects are visible in the whole $k_{\bot}$ region. It is worth recalling that for both the red and magenta curves the momentum sum rule holds, whereas the blue curve violates it. The numerical check of the momentum sum rule was performed in \cite{Hautmann:2022xuc}.
\section{Conclusions}
In this work a parton branching algorithm to obtain TMDs and integrated distributions was presented which, for the first time, includes TMD splitting functions and fulfils the momentum sum rule.
A new TMD Sudakov form factor was constructed using the momentum sum rule and unitarity.
The studies presented here are at the level of the forward evolution but it is a
first step towards a full TMD MC generator covering the small-$x$ phase space.
\section*{Acknowledgements}
The presented results were obtained in collaboration with F. Hautmann, M. Hentschinski, L. Keersmaekers, A. Kusina and K. Kutak.
A. Lelek acknowledges funding by Research Foundation-Flanders (FWO) (application number: 1272421N).
\bibliographystyle{mybibstyle}
{
\section{Introduction}
\input{sections/introduction}
\section{Simulation Parameters}
\input{sections/parameters}
\section{Analysis of the Deconfinement Transition}
\input{sections/analysis}
\section{Numerical Results}
\input{sections/results}
\acknowledgments{
The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 \enquote{Strong-interaction matter under extreme conditions} – project number 315477589 – TRR 211 and by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006).
We thank the Helmholtz Graduate School for Hadron and Ion Research (HGS-HIRe) for its support as well as the staff of L-CSC at GSI Helmholtzzentrum für Schwerionenforschung and the staff of Goethe-HLR at the Center for Scientific Computing Frankfurt for computer time and support.
}
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:intro}
Over the years, roboticists have sought to develop robots that can play various sports such as soccer~\cite{robogames,robocup}, sumo~\cite{robot_sumo}, and table tennis~\cite{buchler2022learning} to demonstrate and test the capabilities of their systems. Sporting applications serve as natural milestones for robotic systems to achieve human-level performance, as to be effective at a sport, the system needs to be able to reason about the state of the game, be agile in its response to changing observations, and be mindful of objects in its vicinity. Prior work has sought to learn such complex behaviors by either using reinforcement learning~\cite{jonas_rl_tabletennis_2021,gao_modelfree2020,abeyruwan2022isimreal} or utilizing data from an expert to train the robot.
Learning from Demonstration (LfD)~\cite{ARGALL2009469} is a framework for learning a policy from a set of demonstrations provided by an expert. Prior work has shown how movement primitives obtained from kinesthetic teaching can be used to teach robots to hit table tennis strokes~\cite{movementprimitives_mulling2010,mulling_momp2013,gonzalez2016prompstrike,gomez2020adaptation}.
It has also been shown that kinesthetic demonstrations can be used successfully to teach robots to play various styles of strokes for table tennis~\cite{chen_strategyinference2020,pmlr-v155-chen21b}.
More recently, the problem of improving a policy learned from expert demonstrations using reinforcement learning, inverse reinforcement learning, and self-supervised learning has also been explored~\cite{pmlr-v155-chen21b,chen2020bail,Cheng2018FastPL,tianli_goalseye2022}. However, most of these works are evaluated on either simulated environments or on robot arms mounted on a stationary platform.
The need for algorithms that support learning robot behavior for versatile robot platforms is exacerbated in larger-scale racket sports, such as tennis. Tennis is a more challenging problem for robots as it requires a responsive, agile mobile base and higher racket head speeds than table tennis. In this work, we demonstrate the first attempt to extend the ProMP framework to an agile mobile manipulator to achieve successful tennis groundstrokes with a wheelchair tennis robot. We additionally describe an approach to refine the primitives learned from demonstrations based on human feedback.
\section{Preliminaries}
\label{prereq}
In this section, we provide an overview of the Probabilistic Movement Primitive~(ProMP) and the notations we will use in the paper. Interested readers are encouraged to refer to \cite{gomez2020adaptation} for more details.
The ProMP is a modeling technique that compactly represents a probability distribution over robot trajectories~\cite{paraschos2013probabilistic}. Let $q_t$ represent the joint states of the robot at time $t$. The ProMP defines a set of time-dependent basis functions (represented by $\Phi_t$) and a weight vector, $w$, that compactly encodes the robot trajectory, $\tau = \{ q_t \}$ by representing $q_t \sim \mathcal{N}(\Phi_t w, \Sigma_y)$, where $\Sigma_y$ models any white noise.
Given a dataset of demonstrations, ProMP learns a distribution over weights, $P(w\mid\mu_w, \Sigma_w) = \mathcal{N}(\mu_w, \Sigma_w)$, that captures the common features across trajectories while factoring in the variance that captures the variation across demonstrations. The parameters, $\{ \mu_w, \Sigma_w, \Sigma_y \}$, of this Hierarchical Bayesian Model~(HBM) formulation can be computed from demonstrations by obtaining Maximum Likelihood Estimates (MLE) via exact methods or Expectation-Maximization~(EM) procedures. A key property of ProMP that we leverage in this work is to \emph{condition} it to pass through desired end-effector waypoints. Since the ProMP representation is in the joint space, some form of inverse kinematics~(IK) is required to perform this conditioning operation.
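As a minimal numerical sketch of these ingredients (one degree of freedom, a hypothetical RBF basis and synthetic demonstrations, and direct Gaussian conditioning on a joint-space via-point in place of IK), the ProMP fit-and-condition pipeline can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 50, 10                                   # timesteps, basis functions
t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, K)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)  # T x K RBF basis

# synthetic demonstrations of a single joint, varying slightly in amplitude
demos = [np.sin(np.pi * t) * (1 + 0.1 * rng.standard_normal()) for _ in range(20)]
W = np.stack([np.linalg.lstsq(Phi, q, rcond=None)[0] for q in demos])  # per-demo weights
mu_w, Sigma_w = W.mean(0), np.cov(W.T) + 1e-6 * np.eye(K)              # MLE of the HBM

# condition the ProMP to pass through a desired joint value q* at time index ti
ti, q_star, sig_y = 25, 0.5, 1e-6
phi = Phi[ti:ti + 1]                             # 1 x K
gain = Sigma_w @ phi.T / (phi @ Sigma_w @ phi.T + sig_y)
mu_c = mu_w + (gain * (q_star - phi @ mu_w)).ravel()
Sigma_c = Sigma_w - gain @ phi @ Sigma_w

print(float(Phi[ti] @ mu_c))                     # conditioned mean hits the via-point
```

The conditioned mean trajectory `Phi @ mu_c` passes (almost exactly) through the via-point while staying close to the demonstrated shape elsewhere.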
\section{Method}
\label{sec:method}
In \Cref{subsec:system}, we provide a brief overview of our experiment setup: a wheelchair tennis robot. In \Cref{subsec:stroke-controller}, we describe the details of the deployed stroke controller that safely executes the learned primitives. In \Cref{subsec:refine-prim}, we present our proposed method for improving movement primitives based on human feedback.
\subsection{System Overview}
\label{subsec:system}
We mounted a 7-DOF high-speed Barrett WAM arm on a motorized Top End Pro Tennis Wheelchair to build an agile mobile manipulator system~\cite{zaidi2022athletic}. The system is designed to emulate the athletic gameplay of regulation wheelchair tennis, where players need to react quickly in the order of a few hundred milliseconds.
To sense and track the movement of the tennis ball, we make use of a decentralized array of stereo cameras that provides measurements from different perspectives. These estimates are fused by an Extended Kalman Filter~(EKF)~\cite{moore2016generalized} to output the ball's estimated state, which is propagated forward in time to predict the ball's future trajectory.
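A simplified version of the forward-propagation step can be sketched as follows; it assumes purely ballistic flight (no drag or spin), a deliberate simplification of the prediction used on top of the EKF state in the actual system:

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])   # gravity (m/s^2)

def propagate(p, v, dt, steps):
    """Forward-propagate a ball state (position p, velocity v) under
    constant gravity; this discrete update is exact for constant
    acceleration, so the predicted trajectory matches the analytic one."""
    traj = []
    for _ in range(steps):
        p = p + v * dt + 0.5 * g * dt**2
        v = v + g * dt
        traj.append(p.copy())
    return np.array(traj)

# hypothetical launch: 1 m high, 5 m/s forward, 3 m/s upward; 1 s horizon
traj = propagate(np.array([0.0, 0.0, 1.0]), np.array([5.0, 0.0, 3.0]), 0.01, 100)
print(traj[-1])                   # predicted position after 1 s
```

Intersecting such a predicted trajectory with a pre-specified hit plane yields the candidate hit point used for conditioning the stroke primitive.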
The problem of whole-body control of mobile manipulators is challenging -- particularly in an agile robotics setting -- so we limit the scope of our study by constraining the wheelchair to move along one dimension. We choose to allow the robot to move laterally; notably, human players exhibit only lateral movements for the majority of the strokes~\cite{kovacs2009movement}.
We model the lateral movement as a prismatic joint and obtain a kinematics model for the system (illustrated in \Cref{app:kinematic-model}).
\subsection{Stroke Controller}
\label{subsec:stroke-controller}
We build our stroke controller upon ProMP, originally proposed for table tennis in~\cite{gonzalez2016prompstrike}. We make two key advances for our wheelchair tennis robot.
First, since the executed strokes often reach racket-head speeds of $\sim10$\mps, it is critical to ensure that the conditioned joint-space configuration at the time of impact is safe and achievable without any self-collisions. Thus, we propose to clip IK solutions that pass through the desired end-effector position to be within pre-specified limits; doing so alleviates the risk of obtaining feasible but poor IK solutions.
When a ball is launched, the desired hit point for conditioning is identified by computing where the predicted ball trajectory crosses a pre-specified hit plane, and this point is then transformed into the frame of the mobile base. Iterative IK updates to the available Degrees of Freedom~(DoFs) are performed to reach this desired hit point, and the total updates are clipped to be within the specified limits. The pseudocode of this procedure is presented in \Cref{app:stroke-controller}.
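The clipped iterative IK update can be illustrated on a planar two-link arm. The link lengths, joint limits, and clipping bounds below are hypothetical placeholders, not the WAM's parameters:

```python
import numpy as np

L1, L2 = 0.55, 0.45                                        # hypothetical link lengths (m)
q_min, q_max = np.array([-2.6, -2.6]), np.array([2.6, 2.6])  # joint limits (rad)
dq_max = 0.2                                               # per-iteration update clip (rad)

def fk(q):
    """End-effector position of the planar 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def clipped_ik(target, q0, iters=200):
    q = q0.copy()
    for _ in range(iters):
        dq = np.linalg.pinv(jac(q)) @ (target - fk(q))     # Jacobian pseudo-inverse step
        dq = np.clip(dq, -dq_max, dq_max)                  # bound each update
        q = np.clip(q + dq, q_min, q_max)                  # stay inside joint limits
    return q

q = clipped_ik(np.array([0.6, 0.5]), np.array([0.3, 0.3]))
print(np.linalg.norm(fk(q) - [0.6, 0.5]))                  # small residual at the hit point
```

Clipping the per-iteration update and the resulting configuration keeps the solution within safe bounds, at the cost of possibly not reaching a target exactly when it lies outside those bounds.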
Second, based on the observation from \cite{zaidi2022athletic} that early positioning movement for tennis-playing robots can improve the chances of a successful return, we allow the wheelchair to continuously adjust its position during the conditioning process and execute the stroke independently based on the anticipated ball arrival time. A flow chart explaining the state-machine of the stroke controller is illustrated in \Cref{app:stroke-controller}.
\subsection{Refining primitives through human feedback}
\label{subsec:refine-prim}
In~\cite{gomez2020adaptation}, the parameters of ProMP are trained on a dataset of successful demonstrations obtained through kinesthetic teaching or engineered controllers. While the ProMP model is initially trained to recreate a demonstrated behavior (i.e., a robot arm trajectory that hits a ball), the result is suboptimal due to the suboptimality of the demonstration itself, out-of-distribution incoming flights of the tennis ball, and the hardware-software system's inability to perfectly execute a commanded trajectory.
Nonetheless, the trained primitive does serve as an excellent starting point for exploring better trajectories that can be executed on the hardware. Therefore, we propose to iteratively improve the primitives by having a human evaluator assign scalar feedback indicative of the quality of the executed trajectory. We collect human feedback $\{r_n\}_{n=1}^N$ from the evaluator for $N$ executed trajectories $\{\{ q_{nt} \}_{t=1}^{T_n}\}_{n=1}^N$ and then construct a dataset $\mathcal{D}$ consisting of the trajectories and their associated importance weights ($\alpha_n$, obtained as a softmax over the feedback values) to optimize the following weighted log-likelihood objective:
\vspace{-25pt}
\begin{align}
\vspace{-45pt}
\texttt{WeightedLogLikelihood}(\mathcal{D} \mid \theta) = \sum_{n=1}^N \alpha_n \log(\texttt{Likelihood}(\;\{q_{nt}\}_{t=1}^{T_n} \mid \theta\;))
\end{align}
The parameters are optimized with the EM procedure outlined in \cite{gomez2020adaptation}. In the M-step we compute the weighted average of the estimates from the E-step as a consequence of the weighted log-likelihood objective. This algorithm can be considered a part of the TAMER\cite{knox2009interactively} class of algorithms, as we are interactively shaping the distribution of executed trajectories using human feedback.
\begin{algorithm}
\caption{Iterative refinement of ProMP parameters}
\label{alg:iter-emwll}
\setstretch{1.15}
\begin{algorithmic}
\Require ProMP parameters $\theta$ = ($\mu_w$, $\Sigma_w$, $\Sigma_y$) trained from human demonstrations
\Repeat
\State Execute N trajectories $\{\tau_n\}$ from conditioned execution of $\theta$ and obtain human feedback $\{r_n\}$
\State Compute importance weights over trajectories, $\{\alpha_n\} \gets \texttt{SoftMax}_{n=1}^N(\{r_n\})$
\State With dataset $\mathcal{D} = \{ (\tau_n, \alpha_n) \}$, perform $\theta \gets \texttt{EM-WeightedLogLikelihood}(\mathcal{D}, \theta_{\text{init}} = \theta)$
\Until {convergence}
\end{algorithmic}
\label{alg:refine}
\end{algorithm}
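The softmax weighting and the weighted mean/covariance update of the M-step can be sketched directly on per-trajectory weight vectors. This is a simplified closed-form stand-in for the full EM procedure of \cite{gomez2020adaptation}, shown only to illustrate how the feedback reshapes the weight distribution:

```python
import numpy as np

def refine_gaussian(W, rewards):
    """One weighted-MLE update of (mu_w, Sigma_w) from executed weight
    vectors W (N x K) and scalar human feedback rewards (N,)."""
    a = np.exp(rewards - rewards.max())
    a /= a.sum()                        # softmax importance weights alpha_n
    mu = a @ W                          # reward-weighted mean
    D = W - mu
    Sigma = (a[:, None] * D).T @ D      # reward-weighted covariance
    return mu, Sigma

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 3))             # weight vectors of 6 executed trajectories
r = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 0.0])   # feedback per trajectory
mu, Sigma = refine_gaussian(W, r)
```

Trajectories with higher feedback receive larger importance weights, pulling the updated distribution toward them; iterating execution, feedback, and this update is the loop of Algorithm \ref{alg:refine}.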
\subsection{Experiment Setup}
\label{sec:exp-setup}
For our experiments, we evaluate the performance of the ProMP stroke controller in the lab setup illustrated in \Cref{subfig:lab-setup}. We initialize the ProMP parameters by training on a dataset of successful demonstrations obtained from a manually engineered stroke (\Cref{subfig:example-hit}); this serves as our base primitive. To evaluate the proposed fine-tuning algorithm (\Cref{subsec:refine-prim}) we collect a dataset by running the base primitive. For each ball launched, we record the joint states to a ROS~\cite{quigley2009ros} bag file and store the associated feedback based on the criteria listed in \Cref{tab:reward-criteria}. We segment the trajectories from the bag files by analyzing the maximum of all joint velocities to determine points of inflection; these constitute the start and end of the trajectory. The hit phase parameter is chosen based on where the end-effector approximately crosses the pre-specified hit plane. \Cref{fig:traj-seg} illustrates the process of trajectory extraction. We also evaluate the impact of the number of trajectories used for refining the primitives by comparing the performance of primitives trained on datasets of 20 and 50 trajectories. Performance is measured by the hit rate, the success rate (where success corresponds to good hits), and the average reward over a consecutive sequence of ball launches.
\begin{figure}
\centering
\label{fig:collection-overview}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{img/lab_setup.png}
\caption{}
\label{subfig:lab-setup}
\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[height=3.2cm]{img/execute_promp.png}
\caption{}
\label{subfig:example-hit}
\end{subfigure} \hfill
\caption{(a) Overview of the lab setup. (b) Wheelchair Tennis Robot executing a stroke.}
\end{figure}
\begin{table}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[height=3cm]{img/auto_traj_seg.png}
\captionof{figure}{Example of trajectory segmentation from recorded joint states.}
\label{fig:traj-seg}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\label{tab:reward-criteria}
\centering
\begin{tabular}{p{3.5cm}|l}
\hline
{\bf criteria} & {\bf reward} \\[0.2em] \hline
miss by a large margin & 0 \\[0.2em]
miss but close ($\leq$ 5\cm) & 0.25 \\[0.2em]
hit but not good enough & 0.5 \\[0.2em]
good hit (hit side pillar) & 1 \\[0.2em]
good hit (above the net) & 2 \\ \hline
\end{tabular}
\vspace{1em}\caption{Criteria of human feedback}
\end{minipage}\hfill
\end{table}
\section{Results}
\label{sec:results}
We report the base primitive performance and the performance post-refinement in \Cref{tab:result} based on the setup described in \Cref{sec:exp-setup}. Performance is reported over a consecutive run of 10 balls.
\begin{table}[!ht]
\centering
\begin{tabular}{l|c|cc}
\hline
{\bf \# trajectories} & 0 (base primitive) & 20 & 50 \\[0.2em] \hline
{\bf Hit Rate} & 60\% & 40\% & 50\% \\[0.2cm] \hline
{\bf Success Rate} & 40\% & 40\% & 40\% \\[0.2cm] \hline
{\bf Avg. Reward} & 0.75 & 0.85 & 0.85 \\[0.2cm] \hline
\end{tabular}
\vspace{0.3cm}
\caption{Performance over the number of trajectories used for refining the base primitive.}
\label{tab:result}
\end{table}
\vspace{-10mm}
\section{Conclusion}
\label{sec:conclusion}
We have successfully demonstrated a safely executed groundstroke primitive on a wheelchair tennis robot.
We proposed a formulation to fine-tune learned primitives online and conducted an evaluation. While we do not observe significant improvements in success rates with fine-tuning, we do see a small increase in the average feedback awarded to the primitives, as most missed balls were missed by small margins. Future work in this direction can consider different reward designs and study how the choice of reward impacts the learned primitive.
\textbf{Future Work}
In the future, we would like to explore methods to improve upon learned primitives to 1) ensure safe behavior and 2) increase task performance. Improving learned robot behavior relates to active learning with human feedback \cite{ARGALL2009469}, reinforcement learning of robot skills \cite{Ibarz2021HowTT, Carvalho2022ResidualRL}, and human-robot interaction. We would additionally like to consider several feedback paradigms for improving robot motion, including natural language, kinesthetic teaching, third-person demonstration, etc.
\clearpage
\section{Introduction}
\subsection{Background} A central aim of deep learning theory is to understand the properties of overparameterized networks trained to fit large datasets. Key questions include: how do learned networks use training data to make predictions on test points? Which neural network architectures lead to more parsimonious models? What are the joint scaling laws connecting the quality of learned models to the number of training datapoints, network depth and network width?
To the best of our knowledge, the present article is the first to give exact answers to such questions for any learning procedure with a class of neural networks in which one can simultaneously vary input dimension, number of training datapoints, network width, and network depth. As we explain in detail in \S \ref{sec:lit-rev}, the limits where these four structural parameters tend to infinity do not commute, causing all prior work to miss important aspects of how they jointly influence learning. Our results pertain to \textit{deep linear networks}
\[
f(x) = W^{(L+1)}\cdots W^{(1)} x,\qquad W^{(\ell)}\in \mathbb R^{N_{\ell}\times N_{\ell-1}},\quad x\in \mathbb R^{N_0},\quad N_{L+1}=1
\]
with input dimension $N_0$, $L$ hidden layers of widths $N_\ell$, and output dimension $N_{L+1}=1$. As a form of learning we take zero-noise Bayesian interpolation, starting from a Gaussian prior on the network weights $W^{(\ell)}$ and a negative log-likelihood given by the empirical mean squared error over a training dataset of $P$ examples. See \S \ref{sec:setup} for the precise definitions.
\begin{figure}
\centering
\includegraphics[scale=.9]{Perp.png}
\caption{Each input $x\in \mathbb R^{N_0}$ can be decomposed into its projection $x_{||}$ onto the directions spanned by the training data (denoted by the column space $\mathrm{Col}(X)$ of the matrix $X$ whose columns are the inputs from the training data) and the projection $x_{\perp}$ onto the orthogonal complement.}
\label{fig:perp}
\end{figure}
Since we've insisted that the output dimension equals $1$, we may write $f(x)=\theta^Tx$ for a vector $\theta\in \mathbb R^{N_0}$. What differentiates our work from a classical Bayesian analysis of Gaussian linear regression, which in our notation corresponds to setting $L=0$, is that as soon as $L\geq 1$ the components of $\theta$ are all correlated under the prior. Unlike in models that are linear in their parameters, this allows the posterior over predictions $f(x_{\perp})$ in directions orthogonal to the training data to differ from the prior (see Figure \ref{fig:perp}), and it is precisely this data-dependent posterior that we seek to understand. We discuss the nature of posterior over $f(x_\perp)$ informally around \eqref{eq:opt-post-def} below and more precisely in \S \ref{sec:feature-learning}.
\subsection{Informal Overview of Results}\label{sec:informal}
The starting point for our results is Theorem \ref{thm:Z-form}, which gives exact non-asymptotic formulas for both the predictive posterior (i.e. the distribution of $f(x)$ jointly for all inputs $x$ when $W^{(\ell)}$ are drawn from the posterior) and the Bayesian model evidence in terms of a class of meromorphic special functions of a single complex variable called Meijer-G functions \cite{meijer1936whittakersche}. These results hold for arbitrary training datasets, input dimensions, hidden layer widths, and network depth. As far as we are aware, they represent a novel enlargement of the class of models for which Bayesian posteriors are exactly computable. In particular, they give exact expressions for the predictive posterior over deep Gaussian processes \cite{damianou2013deep} in which the covariance in every layer is a constant times the identity.
To glean insights from the non-asymptotic results in Theorem \ref{thm:Z-form}, we provide in Theorem \ref{thm:logG} asymptotic expansions of Meijer-G functions that allow us to compute expressions for the Bayesian model evidence and the predictive posterior under essentially any joint scalings of $P,N_\ell,L\rightarrow \infty$ with $P/N_0\rightarrow \alpha_0\in (0,1)$\footnote{As explained in \S \ref{sec:limit-thms}, since our networks are linear with respect to the inputs and we insist on interpolating the training data, the Bayesian posterior is a delta function on the minimal $\ell_2$-norm data interpolant as soon as $P\geq N_0.$}.
In particular, we allow $L$ to either stay finite or to grow together with $P,N_\ell$. These results are recorded in Theorems \ref{thm:finite-L}, \ref{thm:LN}, and \ref{thm:PLN}. What emerges from our analysis is a rich new picture of the role of depth in both model selection and the nature of extrapolation with deep linear networks. To give a sense of our results, we start with an informal re-statement of Theorem \ref{thm:bayesfeature}:
\begin{align*}
\text{\textbf{Takeaway: }} &\text{Evidence maximizing priors give the same Gaussian predictive posterior for}\\
&\text{any architecture in the large data limit $P\rightarrow \infty,\, P/N_0\rightarrow \alpha_0\in (0,1)$.}
\end{align*}
This distinguished posterior represents, from a Bayesian point of view, a notion of optimal extrapolation. Indeed, because $f(x)$ is linear in $x$, it is natural to decompose
\[
x = x_{||}+x_{\perp},\qquad x\in \mathbb R^{N_0}
\]
where $x_{||}$ is the projection of $x$ onto directions spanned by inputs from the training data, and $x_\perp$ is the projection of $x$ onto the orthogonal complement (see Figure \ref{fig:perp}). Predictions under the evidence-maximizing posterior at test input $x$ take the form
\begin{equation}\label{eq:opt-post-def}
f(x) = f(x_{||})+f(x_{\perp})=\mathcal N\lr{\theta_*^T x_{||},~ \frac{\norm{\theta_*}^2}{\alpha_0} \norm{x_\perp}^2},
\end{equation}
where $\theta_*$ is the minimal norm interpolant of the training data (cf. \eqref{eq:Vstar-def}). The deterministic mean $\theta_*^Tx_{||}$ appears because our posteriors interpolate the training data and thus we must have $\theta_{||}=\theta_*$ (see \eqref{eq:post-def}). The scalar $\norm{\theta_*}^2/\alpha_0$, in contrast, sets a data-dependent variance for making predictions in directions orthogonal to inputs in the training data. As we detail in \S \ref{sec:feature-learning}, this particular value for the variance is the most likely given that our posteriors are necessarily isotropic in directions orthogonal to the training data. We refer the reader to \S \ref{sec:setup} and \S \ref{sec:prior} as well as Theorems \ref{thm:finite-L}, \ref{thm:LN}, and \ref{thm:PLN} for more on this point.
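As an informal illustration (our own sketch with hypothetical toy dimensions, not part of the paper's formal development), the evidence-maximizing posterior \eqref{eq:opt-post-def} can be sampled numerically: \texttt{np.linalg.lstsq} applied to the underdetermined system $\theta^TX = Y$ returns the minimal-norm interpolant $\theta_*$, and predictions decompose along $\mathrm{Col}(X)$ and its orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: P training points in R^{N0} with P < N0.
N0, P = 50, 20
X = rng.standard_normal((N0, P))    # columns are the training inputs
Y = rng.standard_normal(P)          # training targets

# For the underdetermined system theta^T X = Y, lstsq returns the
# minimal-norm interpolant theta_*.
theta_star, *_ = np.linalg.lstsq(X.T, Y, rcond=None)

def sample_posterior_prediction(x, n_samples=1):
    """Sample f(x) from the evidence-maximizing posterior (eq:opt-post-def)."""
    Q, _ = np.linalg.qr(X)          # orthonormal basis of Col(X)
    x_par = Q @ (Q.T @ x)           # projection of x onto Col(X)
    x_perp = x - x_par              # orthogonal component
    alpha0 = P / N0
    mean = theta_star @ x_par
    var = (theta_star @ theta_star) / alpha0 * (x_perp @ x_perp)
    return mean + np.sqrt(var) * rng.standard_normal(n_samples)

samples = sample_posterior_prediction(rng.standard_normal(N0), n_samples=5)
```

For a test input lying in $\mathrm{Col}(X)$ the variance term vanishes and the prediction is deterministic, consistent with the fact that the posterior interpolates the training data.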
In general, a linear network that maximizes Bayesian evidence, and hence produces the posterior \eqref{eq:opt-post-def}, may require the prior distribution over network weights to be data-dependent. In machine learning contexts, we hope to find instead optimal but data-\emph{independent} priors. Theorems \ref{thm:LN} and \ref{thm:PLN} (in combination with Theorem \ref{thm:bayesfeature}) show that this is possible in linear networks with large depth:
\begin{align*}
\text{\textbf{Takeaway:} }& \text{Deep linear networks (with comparable depth, width, number of datapoints)}\\ &\text{with data-\emph{independent} priors extrapolate in the same way --- i.e. using the }\\
&\text{posterior \eqref{eq:opt-post-def} --- as shallow networks with optimal data-\emph{dependent} priors.}
\end{align*}
This result highlights the remarkable role of depth in allowing training data to affect the posterior over predictions in orthogonal directions of feature space. Just how large network depth must be to ensure optimal extrapolation is explained in Theorem \ref{thm:PLN}, which provides universal scaling laws for Bayesian posteriors in terms of a single novel parameter that couples depth, width, and dataset size. Informally, we have the following:
\begin{align*}
\text{\textbf{Takeaway:} }& \text{In linear networks ($1\ll$ depth, datapoints $\ll$ width) with data-agnostic priors}\\
&\text{the posterior depends only on the \text{effective posterior depth} } \\
&\qquad\qquad \qquad \qquad \lambda_{\mathrm{post}}:=\frac{\text{network depth $~\times~$ dataset size}}{\text{network width}}.\\
&\text{As $\lambda_{\mathrm{post}} \rightarrow \infty$, the evidence grows and the posterior converges to \eqref{eq:opt-post-def}. Instead,}\\
&\text{if $\lambda_{\mathrm{post}} \rightarrow 0$, then the posterior is that of a depth zero (linear) model.}
\end{align*}
The preceding takeaway can be viewed as a scaling law relating depth, width, and training set size. In particular, it predicts that for large linear networks it is $\lambda_{\mathrm{post}}$, rather than depth, width, and dataset size separately, that determines the quality of the learned model. We refer the reader to \S \ref{sec:prior} for a discussion of why $\lambda_{\mathrm{post}}$ is a natural measure of depth or complexity for the posterior. We also note that $\lambda_{\mathrm{post}}$ appears at leading order in the perturbative expressions (32)--(35) of \cite{zavatone2022contrasting} (in their notation $\lambda_{\mathrm{post}}=\ell \gamma/\alpha$) for computing the $\ell_2$-generalization error in the regime of fixed $L$ and $N_0,N_\ell,P\rightarrow \infty$ with $P/N_\ell$ remaining order $1$.
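In code, the effective posterior depth is a simple arithmetic combination of architecture and dataset size. The helper below is our own illustrative sketch (with hypothetical example sizes); it evaluates $\lambda_{\mathrm{post}} = P\sum_{\ell=1}^L 1/N_\ell$, which reduces to the informal formula $PL/N$ when all hidden widths equal $N$.

```python
def effective_depths(P, widths):
    """Return (lambda_prior, lambda_post) for hidden widths N_1, ..., N_L."""
    lam_prior = sum(1.0 / n for n in widths)   # sum of reciprocal hidden widths
    return lam_prior, P * lam_prior

# With L = 50 hidden layers of width N = 1000 and P = 100 datapoints,
# lambda_post = P * L / N = 5: depth is "large" in the sense of the takeaway
# even though L / N = 0.05 is small.
lam_prior, lam_post = effective_depths(P=100, widths=[1000] * 50)
```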
As the preceding statement suggests, at least with data-agnostic priors and wide linear networks, maximizing Bayesian evidence requires large depth, as measured by $\lambda_{\mathrm{post}}$. Moreover, evidence maximization is not possible at finite $\lambda_{\mathrm{post}}$. The final result we present (see Theorem \ref{thm:LN}) concerns maximization of Bayesian evidence and can be summarized as follows:
\begin{align*}
\text{\textbf{Takeaway: }} &\text{With data-agnostic priors and depth that is proportional to width, Bayesian}\\
&\text{evidence is maximal in networks with depth equal to a data-dependent}\\
&\text{constant times width. Mis-specification of this constant only results in an}\\
&\text{order one decrease in evidence.}
\end{align*}
In comparison, using a network with smaller depth results in an exponential decrease in evidence. The preceding takeaways give perhaps the first principled Bayesian justification for preferring neural networks with large depth, albeit in the restricted setting of linear networks.
\subsection{Review of Literature}\label{sec:lit-rev}
Our work, like all approaches to analyzing learning with neural networks, must confront the fact that networks are non-linear functions of their parameters. Whether learning is done using gradient-based optimization, Bayesian inference, or some other method, it is therefore generally difficult to determine analytically the properties of trained networks. To put the present article into perspective, we therefore indicate several ways prior work has approached this problem.
First, a range of influential articles analyze so-called NTK/kernel regimes in which neural networks reduce to linear models \cite{du2017gradient, jacot2018neural}. Such analyses were the first to propose a convincing reason for the success of non-convex optimization in the context of training large neural networks using gradient-based methods. However, approaches of this kind preclude a nuanced analysis of how neural networks make data-dependent predictions at test time. Indeed, in the language of \S \ref{sec:informal} predictions $f(x_{\perp})$ in directions orthogonal to the span of the training data are affected neither by Bayesian inference nor by gradient descent. Moreover, the NTK regime only appears after sending the network width to infinity at fixed settings of depth, input dimension, and number of training datapoints. The effect of these fundamental parameters on the properties of trained networks is therefore obscured. Subsequent work has shown, for example, that to understand the impact of depth on feature learning, extrapolation, and optimization requires studying networks that are simultaneously deep and wide \cite{hanin2018neural,hanin2020products,hanin2018start, hanin2019finite, li2022neural,roberts2022principles}. A key takeaway of these articles is that at fixed dataset size $P$ the depth-to-width ratio $L/N$ controls the nature of both Bayesian inference and training by gradient descent in the limit where $N,L$ tend to infinity. In \cite{roberts2022principles}, for instance, the properties of Bayesian posteriors in this regime are shown to depend on $L/N$. Naively, the $P$-dependence scales like $P^2 L/N$ or $P^4 L/N$ (see for instance equations (6.81), (6.82), and (6.97)). However, as we show in the present article, the correct scaling is really $PL/N$, at least for linear networks.
A second important class of approaches to deep learning theory seeks to explain aspects of the phenomenology of learning with large neural networks by pointing out the surprisingly rich range of behaviors displayed by linear models such as kernel or random features methods in high dimensions. A key message from this line of work is that the joint effects of network width (e.g. number of features), dataset size, and input dimension on learning are only revealed by taking all three to infinity together \cite{hastie2022surprises,montanari2022interpolation,adlam2019random,adlam2020neural,advani2020high,mei2019generalizationb, mei2021generalization}. In such limits, one can reproduce phenomena such as double descent \cite{belkin2019reconciling, belkin2020two}, benign overfitting \cite{bartlett2020benign}, and affine dependence of generalization error under covariate shift \cite{adlam2020neural}. As we have already mentioned, a significant drawback of these works is that linear models inspired by large neural networks cannot leverage the training data to make predictions $f(x_\perp)$ in directions not already present in the training data. Also, prior work focuses exclusively on regimes in which network width and training dataset size tend to infinity at fixed depth. Hence, they do not necessarily reflect the properties of learning with networks in which the depth is allowed to scale with the other large parameters.
Another illuminating vein of prior work analyzes gradient-based optimization of large networks in the mean field limit \cite{mei2018mean,rotskoff2018neural,chizat2018global,sirignano2020mean,sirignano2021mean,yu2020normalization,pham2021global,yang2021tensoriv}. While such networks are studied at finite depth and infinite (or very large) width, due to a non-NTK scaling for network weights at initialization, they do not reduce to linear models and are therefore capable of making non-trivial predictions $f(x_\perp)$ on inputs orthogonal to directions in the training data. The limiting trajectory of optimization in these models is described by complicated non-linear evolution equations, however, and relatively little is known about the exact nature of the features they can learn. Moreover, as far as we are aware, such a mean field analysis has never been extended to settings in which the number of training datapoints and network depth are allowed to grow with network width.
While the preceding discussion pertained primarily to work studying the properties of either minimal norm interpolants in regression or networks trained by gradient descent, there is also a vibrant literature on Bayesian inference with neural networks at large or infinite width \cite{mackay1992bayesian}. The case of one-layer networks at infinite width was considered in the pioneering work of \cite{neal1996priors}. Bayesian inference with deep non-linear networks in the NTK regime was taken up in \cite{lee2017deep}. An important contribution to understanding the effects of depth on perturbative corrections to this infinite width limit was obtained in \cite{yaida2019non} and then significantly refined in \cite{roberts2022principles}. More generally, the distribution over functions induced by a randomly initialized neural network is also known as a deep Gaussian process \cite{damianou2013deep}. A variety of approaches have been proposed for using them in practice. Articles such as \cite{damianou2013deep, garnelo2018conditional} use variational approaches, while work like \cite{havasi2018inference, dunlop2018deep} studies sampling-based methods. Still other prior work studies Bayesian uncertainty quantification \cite{gal2016dropout,abdar2021review}, properties of Bayesian networks such as posterior contraction \cite{finocchio2021posterior}, and depth dependence of Bayesian priors \cite{pleiss2021limitations}.
These articles pertain to non-linear networks. For prior work on the special case of the deep linear models studied in the present article, we point the reader to \cite{zavatone2021exact,noci2021precise}, which study the prior, and \cite{zavatone2022contrasting, li2021statistical,hron2022wide}, which concern the posterior. The articles \cite{zavatone2022contrasting,li2021statistical} are especially close to our work. The former uses non-rigorous replica method computations to obtain exact formulas for the expected $\ell_2$-generalization error under the posterior in the limit when the depth is fixed but the input dimension, hidden layer widths, and number of training datapoints all tend to infinity at a fixed ratio. In contrast, the work \cite{li2021statistical}, whose formalism for computing partition functions we take as a starting point in our analysis, considers the same regime as \cite{zavatone2022contrasting} but primarily concerns the posterior of the feature-feature covariance kernel in hidden layers. In both these works the network width, input dimension, and number of training datapoints are sent to infinity at fixed depth. As we shall see below, the full role of depth in both model selection and feature learning can only be understood in the regime when depth, width, and number of training datapoints are simultaneously large.
Beyond a Bayesian analysis of deep linear networks, a number of authors have taken up the study of linear networks trained by gradient descent. The pioneering article \cite{baldi1989neural} showed that under certain conditions, when $L=1$, there are no spurious local minima in the landscape of the empirical risk. More recent follow-ups include \cite{kawaguchi2016deep, lu2017depth, laurent2018deep}, which obtain deeper and wider analogs of the results in \cite{baldi1989neural}. The dynamics of gradient descent with linear networks were also considered in \cite{saxe2013exact, arora2018optimization, arora2018convergence, ji2018gradient, lampinen2018analytic,du2019width}.
Finally, we point the reader to several further strands of literature. The first is a variety of interesting Bayesian perspectives on learning with neural networks. This includes articles such as \cite{wilson2020bayesian, lotfi2022bayesian, izmailov2021bayesian, wenzel2020good, he2020bayesian}, which consider the effects of Bayesian averaging/ensembling on generalization, robustness, and calibration. There is also an interesting set of articles such as \cite{naveh2021predicting,naveh2021self,seroussi2021separation} that study certain aspects of Bayesian feature learning in wide CNNs. For work in a somewhat different vein, we point the reader to the recent article \cite{hron2022wide}, which computes the Bayesian posterior over network parameters in the NTK regime.
We also point the reader to the articles \cite{kaplan2020scaling,hoffmann2022training,maloney2022solvable, bahri2021explaining} for empirical and analytic insights into the joint scaling of dataset and model size.
\subsection{Acknowledgments}
This work started at the 2022 Summer School on the Statistical Physics of Machine Learning held at \'Ecole de Physique des Houches. We are grateful for the wonderful atmosphere at the school and would like to express our appreciation to the session organizers Florent Krzakala and Lenka Zdeborov\'a as well as to Haim Sompolinsky for his series of lectures that got us started thinking about Bayesian analysis of deep linear networks. We further thank Edward George for pointing out the connection between our work and the deep Gaussian process literature. Finally, we thank Matias Cattaneo, Isaac Chuang, David Dunson, Jianqing Fan, Aram Harrow, Jason Klusowski, Cengiz Pehlevan, Veronika Rockova, and Jacob Zavatone-Veth for their feedback and suggestions.
\subsection{Limitations} To conclude the introduction, we emphasize three limitations of our results. First, we consider only \textit{linear networks} (see \S \ref{sec:setup}), which are linear as a function of their inputs. However, they are not linear as a function of their parameters, making learning by Bayesian inference non-trivial. Second, we study learning by Bayesian inference and leave extensions to learning by gradient descent to future work. Finally, our results characterize the \textit{predictive posterior}, i.e. the distribution over model predictions after inference. It would be interesting to derive the form of the posterior over network weights as well.
\subsection{Outline for Remainder of Article} In \S \ref{sec:setup} we introduce notation and set up the problem of computing posteriors and model evidence using deep linear networks with Gaussian weight priors and MSE negative log-likelihood. Next, in \S \ref{sec:prior}, we provide intuition about the \textit{prior} over network outputs. In particular, we explain why the prior has a low-rank bias and introduce the effective prior depth $\lambda_{\mathrm{prior}}$ in \eqref{eq:lpre-def} and effective posterior depth $\lambda_{\mathrm{post}}$ in \eqref{eq:lpost-def}. These two parameters will play a crucial role in our results. In \S \ref{sec:feature-learning} we make several remarks about the nature of the posterior in deep linear networks, providing a simple description of the nature of feature learning in Bayesian inference with deep linear networks.
We then state our first result, Theorem \ref{thm:Z-form}, in \S \ref{sec:Z-form}. This gives an exact non-asymptotic formula for the characteristic function of the predictive posterior and the model evidence in terms of Meijer-G functions. We complement this in \S \ref{sec:G-results} by giving in Theorem \ref{thm:logG} asymptotic expansions for Meijer-G functions. The next collection of results, provided in \S \ref{sec:limit-thms}, details the model evidence and posterior as the number of datapoints, input dimension, width, and depth tend to infinity in various regimes. The results in \S \ref{sec:Lfinite-thms} pertain specifically to the analysis of networks with a finite number of hidden layers in the regime where the number of training datapoints, input dimension, and width tend to infinity. The main result is Theorem \ref{thm:finite-L}. In contrast, \S \ref{sec:Linfinit-thms} and \S \ref{sec:scaling-thms} consider regimes in which depth also tends to infinity. The main result is Theorem \ref{thm:PLN}. Finally, \S \ref{sec:properties} contains simple corollaries connecting our results to scaling laws for the generalization error (Theorem \ref{thm:scaling}) and double descent (Theorem \ref{thm:dd}).
\section{Problem Setup and Exact Form for Posterior, Evidence}
In this section, we begin by recalling the setup for Bayesian interpolation with deep linear networks in \S \ref{sec:setup}. We go on to explain in \S \ref{sec:prior} the low-rank bias of the prior and introduce notions of effective network depth for both the prior and the posterior. We then proceed in \S \ref{sec:feature-learning} to pinpoint the nature and origins of feature learning in deep linear networks.
With this preparatory work out of the way, we state in \S \ref{sec:Z-form} two non-asymptotic results. The first, Theorem~\ref{thm:Z-form}, gives a closed form for the predictive posterior and Bayesian model evidence in terms of the Meijer-G function, a meromorphic special function defined by complex contour integration (see \S \ref{sec:G-def}). The second result, Theorem~\ref{thm:logG}, provides several novel asymptotic expansions of the Meijer-G function that will be useful in computing the limiting behavior of posteriors and model evidence in the large model and data limit, which we present in \S \ref{sec:limit-thms}.
\subsection{Definitions of Data, Likelihood, Prior, Posterior, and Evidence}\label{sec:setup}
Fix $1\leq P \leq N_0$\footnote{As we previously mentioned in the introduction, we take $P\leq N_0$ because in models that interpolate the training data and are linear functions of their inputs, observing $P>N_0$ training datapoints generally determines predictions on all test inputs completely.} and consider a training dataset
\begin{equation}\label{eq:data-def}
X_{N_0}=\lr{x_{1,N_0},\ldots,x_{P,N_0} }\in \mathbb R^{N_0 \times P},\qquad Y_{N_0}=\lr{y_1,\ldots, y_P}\in \mathbb R^P
\end{equation}
consisting of $P$ datapoints with $X_{N_0}$ having full rank. A key role in our results will be played by the solution to ordinary linear least squares regression of $Y_{N_0}$ onto $X_{N_0}$
\[
\theta_{*,N_0}:=\argmin_{\theta\in \mathbb R^{N_0}}\norm{\theta^T X_{N_0} - Y_{N_0}}^2.
\]
Further, we fix $N_1,\ldots, N_L\geq 1$ and consider fitting the training data $(X_{N_0},Y_{N_0})$ by a linear model
\[
f(x) = \theta^Tx\in \mathbb R,\qquad \theta,x\in \mathbb R^{N_0}
\]
equipped with quadratic negative log-likelihood
\[
\mathcal L(\theta~|~X_{N_0},Y_{N_0}):=\frac{1}{2}\norm{\theta^TX_{N_0}-Y_{N_0}}_2^2.
\]
and a \textit{deep linear prior}
\begin{equation}\label{eq:prior-def}
\theta\sim \mathbb P_{\mathrm{prior}}^{\mathrm{param}} \quad \Longleftrightarrow \quad \theta = W^{(L+1)}\cdots W^{(1)}
\end{equation}
in which
\[
W^{(\ell)}\in \mathbb R^{N_{\ell}\times N_{\ell-1}},\qquad W_{ij}^{(\ell)}\sim \mathcal N\lr{0,\frac{\sigma^2}{N_{\ell-1}}}\quad \text{independent}.
\]
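Sampling from the deep linear prior \eqref{eq:prior-def} amounts to multiplying independent Gaussian matrices. The sketch below (our own illustration with hypothetical dimensions, assuming NumPy) also checks by Monte Carlo that with $\sigma^2=1$ the fan-in $\sigma^2/N_{\ell-1}$ scaling gives $\mathbb E\norm{\theta}^2=1$, a standard consequence of the independence of the layers.

```python
import numpy as np

def sample_prior_theta(N0, hidden, sigma2=1.0, rng=None):
    """Draw theta = W^(L+1) ... W^(1) from the deep linear prior (eq:prior-def)."""
    if rng is None:
        rng = np.random.default_rng()
    dims = [N0] + list(hidden) + [1]      # N_0, N_1, ..., N_L, N_{L+1} = 1
    theta = np.eye(N0)
    for fan_in, fan_out in zip(dims[:-1], dims[1:]):
        W = rng.normal(0.0, np.sqrt(sigma2 / fan_in), size=(fan_out, fan_in))
        theta = W @ theta
    return theta.ravel()                  # theta as a vector in R^{N_0}

rng = np.random.default_rng(0)
norms = [np.sum(sample_prior_theta(20, [50, 50], rng=rng) ** 2)
         for _ in range(2000)]
mean_sq_norm = np.mean(norms)             # close to 1 when sigma2 = 1
```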
Thus, the predictions $f(x)$ with $\theta\sim \mathbb P_{\mathrm{prior}}^{\mathrm{param}}$ are precisely the outputs of a randomly initialized deep linear network. Our goal is to study the posterior distribution over the set of $\theta\in\mathbb R^{N_0} $ that exactly fit the training data. Explicitly, writing $d\mathbb P_{\mathrm{prior}}^{\mathrm{param}}(\theta)$ for the prior density, the posterior has density given by
\begin{align}
\label{eq:post-def}d\mathbb P_{\mathrm{post}}^{\mathrm{param}}\lr{\theta~|~L,N_\ell, \sigma^2, X_{N_0}, Y_{N_0}}:= \lim_{\beta\rightarrow \infty}\frac{d\mathbb P_{\mathrm{prior}}^{\mathrm{param}}\lr{\theta~|~N_0,L,N_\ell, \sigma^2}\exp\left[-\beta\mathcal L(\theta~|~X_{N_0},Y_{N_0})\right]}{Z_\beta\lr{X_{N_0},Y_{N_0}~|~L,N_\ell, \sigma^2}}.
\end{align}
We will find it convenient to work not directly with the posterior $\mathbb P_{\mathrm{post}}^{\mathrm{param}}$ but rather with its characteristic function, which is naturally expressed as a ratio of partition functions:
\begin{equation}\label{eq:char-part}
\mathbb E_{\mathrm{post}}\left[\exp\left\{-i{\bf t}\cdot \theta\right\}\right]=\frac{Z_\infty({\bf t}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0})}{Z_\infty({\bf 0}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0})} ,\qquad {\bf t} = \lr{t_1,\ldots, t_{N_0}}\in \mathbb R^{N_0}.
\end{equation}
Here, $\mathbb E_{\mathrm{post}}[\cdot]$ is the expectation with respect to the posterior \eqref{eq:post-def}. We have defined for $\beta>0$ the partition function $Z_\beta({\bf t})=Z_\beta( {\bf t}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0})$ by
\begin{align}\label{eq:part-def}
Z_\beta({\bf t})&:= A_\beta \int \exp\left[-\sum_{\ell=1}^{L+1}\frac{N_{\ell-1}}{2\sigma^2}\big|\big|W^{(\ell)}\big|\big|_F^2 -\frac{\beta}{2}\big|\big|Y-\prod_{\ell=1}^{L+1} W^{(\ell)}X\big|\big|_2^2 -i\theta\cdot {\bf t} \right]\prod_{\ell=1}^{L+1}dW^{(\ell)}
\end{align}
and set
\[
Z_\infty( {\bf t}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0}):= \lim_{\beta\rightarrow \infty}Z_\beta( {\bf t}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0}),
\]
where
\[
A_{\beta} =\exp\left[\frac{\beta}{2}\norm{Y_{N_0,\perp}}^2\right]\det(X_{N_0}^TX_{N_0})^{1/2}\lr{2\pi \beta}^{P/2}
\]
and $Y_{N_0,\perp}$ is the projection of $Y_{N_0}$ onto the orthogonal complement to the row span of $X_{N_0}$. The denominator $Z_\infty({\bf 0})$ is often called the Bayesian model evidence and represents the probability of the data $(X_{N_0},Y_{N_0})$ given the model (i.e. depth $L$, layer widths $N_1,\ldots, N_L$ and prior scale $\sigma^2$). Note that the constant $A_\beta$ cancels in the ratio \eqref{eq:post-def} and in any computations involving maximizing ratios of model evidence.
Before presenting our results, we introduce some important language for understanding the prior distribution \eqref{eq:prior-def} (see \S \ref{sec:prior}) and for understanding how to measure feature learning in deep linear networks (see \S \ref{sec:feature-learning}).
\subsection{Understanding the Deep Linear Prior: Effective Depth and Rank}\label{sec:prior}
The number of layers $L$ does not provide a useful measure of complexity for the prior distribution over network outputs $f(x)=\theta^Tx$ when $\theta = W^{(L+1)}\cdots W^{(1)} \sim \mathbb P_{\mathrm{prior}}^{\mathrm{param}}$. This is true in both linear and non-linear networks at large width (see e.g. \cite{hanin2018neural, hanin2019finite, hanin2022correlation, roberts2022principles} for a treatment of deep non-linear networks). A more useful notion of depth is
\begin{equation}\label{eq:lpre-def}
\lambda_{\mathrm{prior}} =\lambda_{\mathrm{prior}}(N_1,\ldots, N_L) =\text{ effective depth of prior }:= \sum_{\ell=1}^L \frac{1}{N_\ell},
\end{equation}
and it is indeed $\lambda_{\mathrm{prior}}$ that plays an important role in our results. We offer two related justifications for calling $\lambda_{\mathrm{prior}}$ the effective depth.
\begin{itemize}
\item Aside from the simple multiplicative dependence on $\sigma^2$, the norm of $\theta$ under the prior depends at large $N,L$ only on $\lambda_{\mathrm{prior}}$. For instance, a simple computation (Equation (9) in \cite{hanin2020products}) shows that with $\sigma^2 =1 $
\[
\lim_{\substack{N_1+\cdots+N_L\rightarrow \infty \\ \lambda_{\mathrm{prior}}(N_1,\ldots, N_L)\rightarrow \lambda}} \log \norm{\theta}^2 = \mathcal N\lr{-\frac{\lambda}{2},\lambda},
\]
where the convergence is in distribution. Due to the rotational invariance of $\theta$, we thus see that with $\sigma^2 = 1$, it is $\lambda_{\mathrm{prior}}$, as opposed to $L$, that gives a full description of the prior. Moreover, taking $\lambda_{\mathrm{prior}}\rightarrow 0$ (even if $L\rightarrow \infty$) gives the same (Gaussian) prior over predictions $\theta^Tx$ as one would obtain by simply starting with $L=0$.
\item With $N_1=\cdots=N_L=N$, a relatively recent result (see Theorem 1.2 in \cite{hanin2021non}) shows that when $\sigma^2=1$, under the prior, the squared singular values of $\lr{W^{(L)}\cdots W^{(1)}}^{1/L}$ converge to the uniform distribution on $[0,1]$. Hence, at least heuristically,
\[
W^{(L)}\cdots W^{(1)} \text{ is supported on matrices of rank approximately } \lambda_{\mathrm{prior}}^{-1}.
\]
Indeed, only the squared singular values of $\lr{W^{(L)}\cdots W^{(1)}}^{1/L}$ lying in intervals of the form $[1-CL^{-1},1]$ correspond to singular values of $W^{(L)}\cdots W^{(1)}$ that remain uniformly bounded away from $0$ at large $L$. Given that the squared singular values for $\lr{W^{(L)}\cdots W^{(1)}}^{1/L}$ are approximately uniform on $[0,1]$, we heuristically expect there to be on the order of $\lambda_{\mathrm{prior}}^{-1}$ non-trivial singular values for $W^{(L)}\cdots W^{(1)}$.\footnote{ Strictly speaking, we do not know this for sure since uniformity for the distribution of singular values at the right edge of the spectrum does not follow from a result only about the global density of singular values.}
\end{itemize}
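The low-rank heuristic in the second bullet is easy to probe numerically. The sketch below (our own experiment with hypothetical sizes, assuming NumPy) forms a product of $L$ square Gaussian matrices and examines the squared singular values of its $L$-th root, which by the cited result should be roughly uniform on $[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 100, 40                           # lambda_prior = L / N = 0.4
M = np.eye(N)
for _ in range(L):
    M = rng.normal(0.0, np.sqrt(1.0 / N), size=(N, N)) @ M

s = np.linalg.svd(M, compute_uv=False)   # singular values of W^(L)...W^(1)
u = s ** (2.0 / L)                       # squared singular values of M^(1/L)
# A histogram of u is approximately uniform on [0, 1]; only the entries of u
# near 1 correspond to singular values of M that stay order one, and there
# are roughly lambda_prior^{-1} of them.
```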
The second justification --- i.e., the effective rank of $\theta$ being $\lambda_{\mathrm{prior}}^{-1}$ --- motivates the introduction of a \emph{posterior effective depth} $\lambda_{\mathrm{post}}$. The predictor $\theta^Tx$ applies a linear map of rank $\lambda_{\mathrm{prior}}^{-1}$ and regresses $Y_{N_0}$ onto the result. Since the number of degrees of freedom specified by the training data is measured by the number of datapoints, the posterior will naturally be sensitive to the ratio
\begin{equation}\label{eq:lpost-def}
\lambda_{\mathrm{post}}=\lambda_{\mathrm{post}}(N_1,\ldots, N_L)=\text{ effective depth of posterior }:=\frac{P}{\lambda_{\mathrm{prior}}^{-1}}= P\sum_{\ell=1}^L \frac{1}{N_\ell}.
\end{equation}
Indeed, as we'll see in Theorem \ref{thm:PLN}, it is precisely $\lambda_{\mathrm{post}}$ that controls the structure of the posterior.
\subsection{Extrapolation Under the Posterior} \label{sec:feature-learning} The purpose of this section is to explain how deep linear networks use information in the training data to make posterior predictions in directions orthogonal to inputs in the training data. For this, let us write (as in Figure \ref{fig:perp})
\[
\theta = \theta_{||}+\theta_\perp,\qquad \theta_{||}\in \mathrm{col}(X_{N_0}),\quad \theta_\perp \in \mathrm{col}(X_{N_{0}})^\perp.
\]
Since the prior over $\theta$ is invariant under all orthogonal transformations, the likelihood is invariant under arbitrary transformations of $\theta_\perp$ (since they fix $\theta^TX_{N_0}$), and the posterior satisfies $\theta_{||}=\theta_{*,N_0}$, we have the following equality in distribution:
\begin{equation}\label{eq:a-form}
\theta\sim \mathbb P_{\mathrm{post}}^{\mathrm{param}}\qquad \Longleftrightarrow \qquad \theta ~\stackrel{d}{=}~\theta_{*,N_0} ~+~ u\cdot \norm{\theta_\perp},
\end{equation}
where $u$ is independent of $\norm{\theta_\perp}$ and is uniformly distributed on the unit sphere in $\mathrm{col}(X_{N_0})^\perp\subseteq \mathbb R^{N_0}$. Thus, the only degree of freedom in the posterior is the distribution of the radial part $\norm{\theta_\perp}$ of the vector $\theta_\perp$. To better understand what this means for extrapolation, consider a test input
\[
x=x_{||}+x_\perp\in \mathbb R^{N_0},\qquad x_{||}\in \mathrm{col}(X_{N_0}),\quad x_\perp\in \mathrm{col}(X_{N_0})^\perp.
\]
According to \eqref{eq:a-form}, the resulting posterior prediction is
\[
f(x) = \theta^Tx= \lr{\theta_{*,N_0}}^T x_{||} + \norm{\theta_\perp}\cdot u^T x_\perp.
\]
This formula shows that $\norm{\theta_\perp}$ controls the scale of predictions $f(x)$ in directions not spanned by the training data. For example, if $x=x_{||}\in \mathrm{col}(X_{N_0})$, then $f(x)$ has zero variance since it is determined completely by the training data. More generally, by the Poincar\'e-Borel Theorem (see \cite{diaconis1987dozen,borel1914introduction}) we have
\[
f(x_\perp) ~\approx~ \mathcal N\lr{\lr{\theta_{*,N_0}}^Tx_{||},~\norm{\theta_\perp}^2\frac{\norm{x_\perp}^2}{N_0-P}}\qquad \text{at large }N_0,P.
\]
At depth $L=0$ the projection $\theta_\perp$ of the parameter vector $\theta$ is independent of $\theta_{||}$ under the prior and doesn't influence the likelihood. Hence, in depth $L=0$ networks the prior and posterior distributions for $\norm{\theta_\perp}^2$ coincide, precluding any feature learning. When $L\geq 1$, in contrast, while the likelihood is still independent of $\theta_\perp$, all components of $\theta$ are correlated under the prior and hence information from the training data can be encoded into $\theta_\perp$ by virtue of its correlation with $\theta_{||}$. We will see in Theorem \ref{thm:PLN} and \S \ref{sec:limit-thms} that it is the effective posterior depth $\lambda_{\mathrm{post}}$ from \eqref{eq:lpost-def} that quantifies how much information the model learns about $\theta_\perp$ from $\theta_{||}$, with models having larger and larger $\lambda_{\mathrm{post}}$ inferring more and more about $\norm{\theta_\perp}$ from information about $\theta_{*,N_0}$.
To complete our discussion, let us suppose that $N_0,P$ are large and give an intuitive explanation for why it is reasonable to expect that the specific choice
\[
\norm{\theta_\perp}^2 =\frac{\norm{\theta_{*,N_0}}^2}{\alpha_0},
\]
will maximize Bayesian evidence, in accordance with the discussion in the first takeaway in \S \ref{sec:informal}. Observe that the prior distribution over $\theta$ is rotationally invariant. Hence, the posteriors we consider cannot distinguish between two datasets $(X_{N_0},Y_{N_0})$ and $(X_{N_0}', Y_{N_0}')$ with $Y_{N_0}=Y_{N_0}'$ and $X_{N_0} = \mathcal O X_{N_0}'$ for an orthogonal matrix $\mathcal O$. Moreover, we consider only zero-noise posteriors in which we exactly interpolate the training data. Thus, our Bayesian framework implicitly considers only data-generating processes of the form
\[
f(x) = \kappa\cdot u^T x
\]
for some possibly random scale $\kappa\geq 0$ and an independent uniformly random unit vector $u$. Each training datapoint gives a noisy estimate for $\kappa$:
\[
y_{i,N_0} = \kappa\cdot u^Tx_{i,N_0}.
\]
A reasonable estimate for $\kappa^2$ is therefore the ratio $\norm{\theta_{*,N_0}}^2/\alpha_0$: the squared magnitude of $\kappa u$ in the $P$ directions spanned by inputs from the training data, divided by the overall proportion $\alpha_0$ of directions represented by the training data.
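This heuristic is easy to probe numerically. The following sketch (with arbitrary illustrative sizes) draws noiseless data $y_i = \kappa\, u^Tx_i$, computes the minimum-norm interpolant $\theta_{*,N_0}$, and confirms that $\norm{\theta_{*,N_0}}^2/\alpha_0$ recovers $\kappa^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
N0, P = 4000, 1000              # input dimension and dataset size
alpha0 = P / N0
kappa = 2.5                     # hypothetical ground-truth scale

u = rng.standard_normal(N0)
u /= np.linalg.norm(u)
X = rng.standard_normal((N0, P))   # columns are the training inputs
Y = kappa * (u @ X)                # noiseless labels y_i = kappa u^T x_i

# Minimum-norm interpolant theta_* = X (X^T X)^{-1} Y, i.e. the projection
# of kappa * u onto col(X) for generic X.
theta_star = X @ np.linalg.solve(X.T @ X, Y)

kappa2_hat = theta_star @ theta_star / alpha0
print(kappa2_hat, kappa**2)     # close to each other at large N0, P
```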
\subsection{Results: Posterior and Evidence via Meijer-G Functions}\label{sec:Z-form}
We are now ready to formulate our first main result (Theorem~\ref{thm:Z-form}), which expresses the partition function $Z_\infty({\bf t})$ defined in \eqref{eq:part-def} in terms of the Meijer-G function; this allows the Bayesian evidence and predictive posterior to be written in exact closed form for any choice of network depth, hidden layer widths, dataset size, and input dimension. Compared to prior work using either iterative saddle point approximations of integrals encoding the Bayesian posterior~\cite{li2021statistical} or more involved methods such as the replica trick~\cite{zavatone2022contrasting}, our method provides a direct representation of the network posterior via the partition function in terms of a single contour integral without specializing to limiting cases. Additional quantities, such as the variance of the posterior, are simply expressed as a ratio of Meijer-G functions. We shall later recover known limiting cases and uncover new asymptotic results from expansions of the Meijer-G function (Theorem~\ref{thm:logG}).
\begin{theorem}\label{thm:Z-form}
Fix $P,L,N_0,\ldots, N_L\geq 1,\, \sigma^2>0$ as well as training data $X_{N_0},Y_{N_0}$ as in \S \ref{sec:setup}. Fix ${\bf t}\in \mathbb R^{N_0}$ and write
\[
{\bf t} = {\bf t}^{||}+ {\bf t}^\perp,\qquad {\bf t}^{||}\in \mathrm{col}(X_{N_0}),\quad {\bf t}^{\perp}\in \mathrm{col}(X_{N_0})^\perp.
\]
The partition function $Z_\infty({\bf t})$ of the predictive posterior defined in \eqref{eq:part-def} is
\begin{align}
Z_\infty({\bf t}) &= \lr{\frac{4\pi }{\norm{\theta_*}^2}}^{\frac{P}{2}} \exp\left[-i\inprod{\theta_{*,N_0}}{{\bf t}}\right]\prod_{\ell=1}^L \Gamma\lr{\frac{N_\ell}{2}}^{-1} \nonumber\\
&\quad \times \sum_{k=0}^\infty \frac{(-1)^k}{k!}\big|\big|{\bf t}^{\perp} \big|\big| ^{2k} M^{k} \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{P}{2}, \frac{ N_1}{2}+k, \ldots, \frac{ N_L}{2}+k},
\end{align}
where
$\G{m,n}{p,q}{z}{a_1,\ldots, a_p}{b_1,\ldots, b_q}$ is a Meijer-G function (see \eqref{eq:G-def}) and
\begin{align}\label{eq:M-def}
4M := \prod_{\ell=0}^L \frac{2\sigma^2}{N_\ell}.
\end{align}
In particular, the Bayesian model evidence equals
\begin{align}
\label{eq:evidence-form}Z_\infty(0) &= \lr{\frac{4\pi }{\norm{\theta_*}^2}}^{\frac{P}{2}} \prod_{\ell=1}^L \Gamma\lr{\frac{N_\ell}{2}}^{-1} \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{P}{2}, \frac{ N_1}{2}, \ldots, \frac{ N_L}{2}}.
\end{align}
Further, given $x\in \mathbb R^{N_0}$, the mean of the predictive posterior is
\begin{equation}\label{eq:mean-G}
\Ep{f(x)}= (\theta_{*,N_0})^Tx,
\end{equation}
while the variance is
\begin{equation}\label{eq:var-G}
\Var_{\mathrm{post}}\left[f(x)\right] = 2M\norm{x_{\perp}}^2\frac{\G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{P}{2}, \frac{ N_1}{2}+1, \ldots, \frac{ N_L}{2}+1}}{\G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{P}{2}, \frac{ N_1}{2}, \ldots, \frac{ N_L}{2}}},
\end{equation}
where $x_\perp$ is the projection of $x$ onto the orthogonal complement of the span $\mathrm{col}(X_{N_0})$ of the training data.
\end{theorem}
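Since $G^{L+1,0}_{0,L+1}$ is available in standard special-function libraries, the closed forms in Theorem~\ref{thm:Z-form} can be evaluated directly. The sketch below (toy sizes, assuming \texttt{mpmath}'s \texttt{meijerg}) computes the posterior variance ratio \eqref{eq:var-G} for a test point with $\norm{x_\perp}^2=1$:

```python
import mpmath as mp

# Toy instance (arbitrary illustrative sizes): depth L = 2, hidden widths
# N_1 = N_2 = 4, input dimension N_0 = 4, dataset size P = 2, sigma^2 = 1.
L, P, N0 = 2, 2, 4
widths = [4, 4]
sigma2 = 1.0
theta_star_sq = 1.0             # ||theta_{*,N_0}||^2, assumed given

# 4M = prod_{l=0}^L 2 sigma^2 / N_l  (the l = 0 factor uses N_0).
fourM = 2 * sigma2 / N0
for Nl in widths:
    fourM *= 2 * sigma2 / Nl
z = theta_star_sq / fourM

def G(k):
    # G^{L+1,0}_{0,L+1}(z | -; P/2, N_1/2 + k, ..., N_L/2 + k)
    b = [P / 2] + [Nl / 2 + k for Nl in widths]
    return mp.meijerg([[], []], [b, []], z)

# Posterior variance at a test point with ||x_perp||^2 = 1 (eq. var-G):
var = (fourM / 2) * G(1) / G(0)   # 2M = fourM / 2
print(var)
```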
\subsection{Asymptotics of Meijer-G Function}\label{sec:G-results}
To evaluate the predictive posterior and evidence in Theorem~\ref{thm:Z-form} in the limits where $N_0, P, N_\ell$ (and potentially $L$) tend to infinity, we require novel expansions of the Meijer-G function obtained by the Laplace method. We are interested in regimes where $N_0,P$ grow and will assume a mild compatibility condition on the training data:
\begin{equation}\label{eq:Vstar-def}
\forall \alpha_0\in (0,1)\quad \exists \text{ constant }\norm{\theta_*}\quad \text{s.t.}\quad \lim_{\substack{P,N_0\rightarrow \infty \\ P/N_0\rightarrow \alpha_0\in (0,1)}}\norm{\theta_{*,N_0}} ~=~ \norm{\theta_*},
\end{equation}
where the convergence is in distribution. This assumption is very generic and is (for example) satisfied for a Gaussian data model where inputs are Gaussian
\[
X_{N_0}=\lr{x_{i,N_0},\,i=1,\ldots, P},\qquad x_{i,N_0}\sim \mathcal N(0,\Sigma_{N_0})\quad\text{independent},
\]
outputs are linear plus noise
\[
Y_{N_0} = V_{N_0} X_{N_0} + \epsilon_{N_0},\qquad V_{N_0}\sim \mathrm{Unif}\lr{S^{N_0-1}},\quad \epsilon_{N_0}\sim \mathcal N\lr{0,\sigma_\epsilon^2 I_{P}},
\]
and the empirical spectral distribution of the covariance matrices $\Sigma_{N_0}$ converges weakly as $N_0\rightarrow \infty$ to a fixed probability measure on $\mathbb R_+$ with finite moments.
To minimize notation, we report here the expansions in terms of a single layer width $N = N_1 = \dots = N_L$, but expansions with distinct $N_\ell$ (and to higher order) are provided in the proof (\S \ref{pf:logG}). For notational convenience, we will write
\[
\G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+k,\frac{P}{2}}:= \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{N_1}{2}+k,\ldots, \frac{N_L}{2}+k,\frac{P}{2}}.
\]
\begin{theorem}[Asymptotic Expansions of the $G$-function]\label{thm:logG}
Set $N_1,\ldots,N_L = N$ and define $\mathbf{N} = (N_1, \dots, N_L)$. Suppose that the training data satisfies \eqref{eq:Vstar-def}. In each of the following limiting regimes, in which $P, N \to \infty$ with $P/N_0 = \alpha_0$ fixed, we evaluate the quantities
\begin{align*}
\log G &:= \log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+k,\frac{P}{2}}\\
\Delta(\log G)[k] &:= \log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+k,\frac{P}{2}} - \log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_{*,N_0}}^2}{4M}}{-}{\frac{\mathbf{N}}{2},\frac{P}{2}}.
\end{align*}
We will see in each case that to leading order $\log G$ does not depend on $k$ while $ \Delta(\log G)[k] $ does.
\begin{enumerate}[label=(\alph*)]
\item Fix $L < \infty,\, \alpha,\sigma^2>0$. Suppose $P,N\rightarrow \infty$ with $P/N\rightarrow \alpha$. Then,
\begin{align}
\label{eq:logG-Lfinite}\log G &= \frac{N\alpha}{2}\left[\log\lr{\frac{N\alpha}{2}} + \log\lr{1+\frac{z_*}{\alpha}} - \lr{1+\frac{z_*}{\alpha}}\right] \nonumber \\
&\qquad + \frac{NL}{2}\left[\log\lr{\frac{N}{2}} + \log\lr{1+z_*} - (1+z_*)\right] + \tilde O(1)\\
\label{eq:DlogG-Lfinite} \Delta(\log G)[k] &= kL\left[\log\lr{\frac{N}{2}} + \log(1+z_*)\right] + \tilde O\left(\frac{1}{N}\right),
\end{align}
where $z_*$ is the unique solution to
\begin{equation}\label{eq:Psi-crit}
\lr{1+\frac{z_*}{\alpha}}\lr{1+z_*}^L = \frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0},\quad z_* > \max\set{-1,-\alpha}.
\end{equation}
\item Fix $\lambda_{\mathrm{prior}},\alpha>0$ and suppose $L=\lambda_{\mathrm{prior}} N, \, P,N\rightarrow \infty,\, P/N\rightarrow \alpha, \, \sigma^2=1$. Then,
\begin{align}
\label{eq:logG-LN} \log G &= \frac{\lambda_{\mathrm{prior}} N^2}{2}\left[\log\lr{\frac{N}{2}} - 1\right] + \frac{\lambda_{\mathrm{prior}} N}{2}\left[-\log\lr{\frac{N}{2}} + \log(2\pi)\right]\nonumber\\
&\qquad + \frac{N\alpha}{2}\left[\log\lr{\frac{N\alpha}{2}} - 1\right] + \tilde O(1)\\
\label{eq:DlogG-LN} \Delta(\log G)[k] &= k\left[\lambda_{\mathrm{prior}} N \log\lr{\frac{N}{2}} + \log \lr{\frac{\norm{\theta_*}^2}{\alpha_0}}\right] + \tilde O\left(\frac{1}{N}\right).
\end{align}
\item Fix $\lambda_{\mathrm{post}} >0$. Suppose $N,P,L\rightarrow \infty$ with $LP/N\rightarrow \lambda_{\mathrm{post}}, \, \sigma^2=1$ and $L/N\to 0$. Then,
\begin{align}
\label{eq:logG-PLN} \log G &= \frac{P}{2}\left[\log\lr{\frac{P}{2}}-1\right] + \frac{\lambda_{\mathrm{post}} N}{2}\frac{N}{P}\left[\log\lr{\frac{N}{2}}-1\right] \nonumber \\
&\qquad + \frac{P}{2}\left[\log(1+t_*)-t_*\lr{1+\frac{\lambda_{\mathrm{post}} t_*}{2}}\right] \nonumber \\
&\qquad + \frac{\lambda_{\mathrm{post}} N}{2}\left[\frac{N}{P}\lr{1+\frac{P}{N}t_*}\log\lr{1+\frac{P}{N}t_*} - t_*\right] + \tilde O(1)\\
\label{eq:DlogG-PLN} \Delta(\log G)[k] &= k\left[L \log \lr{\frac{N}{2}} + \lambda_{\mathrm{post}} t_*\right] + \tilde{O}\left(\frac{1}{N}\right),
\end{align}
where $t_*$ is the unique solution to
\begin{align}
e^{\lambda_{\mathrm{post}} t_*}(1+t_*) &= \frac{\norm{\theta_*}^2}{\alpha_0}.
\end{align}
\end{enumerate}
In all the estimates above, $\tilde O(1) = O(\max\{\log P, \log N\})$ suppresses lower-order terms of order 1, up to logarithmic factors; similarly, $\tilde O(1/N) = O(\max\{\log P, \log N\}/N)$. Suppressed terms are included in \S \ref{pf:logG}.
\end{theorem}
\section{Results on Model Selection and Extrapolation with Deep Linear Networks}\label{sec:limit-thms}
In this section, we combine our non-asymptotic formulas for the posterior and evidence from Theorem \ref{thm:Z-form} with the Meijer-G function expansions from Theorem \ref{thm:logG} to investigate two fundamental questions about Bayesian interpolation with linear networks:
\begin{itemize}
\item \textbf{Model Selection.} How should we choose the prior weight variance $\sigma^2$, model depth $L$ and layer widths $N_\ell$? Recall that the partition function $Z_\beta\lr{{\bf 0}~|~L,N_\ell, \sigma^2,X_{N_0},Y_{N_0}}$ from \eqref{eq:char-part} (known as the Bayesian model evidence) represents the likelihood of observing the data $(X_{N_0},Y_{N_0})$ given the architecture $L,N_\ell$ and the prior weight variance $\sigma^2$. Maximizing the Bayesian evidence is therefore maximum likelihood estimation over the space of models and gives a principled Bayesian method for model selection. We will see in Theorems \ref{thm:finite-L}, \ref{thm:PLN}, and \ref{thm:LN} that maximizing the Bayesian model evidence in wide networks either requires choosing priors that are data-dependent or taking models of infinite posterior depth $\lambda_{\mathrm{post}}$. This gives, at least in the restricted setting of deep linear networks with output dimension $1$, a principled reason to prefer deeper networks. \\
\item \textbf{Explaining Extrapolation.} A hallmark of neural networks used in practice is their ability to learn data-dependent features. In the context of deep linear networks, as explained in \S \ref{sec:feature-learning}, extrapolation in our models $f(x)=\theta^Tx$ consists of using correlations under the prior between the projection $\theta_{||}$ of $\theta$ onto the directions spanned by the training inputs and the complementary projection $\theta_{\perp}$ to inject information about the training data into the posterior distribution of $\norm{\theta_\perp}$. Theorems \ref{thm:finite-L}, \ref{thm:LN}, and \ref{thm:PLN} explain exactly how and to what extent this happens. They show in particular that effective extrapolation requires that the effective posterior depth $\lambda_{\mathrm{post}}$ is strictly positive (see \eqref{eq:lpost-def}). They also reveal that, at infinite posterior depth, universal data-independent priors give rise to the same posterior one would obtain at finite depth by optimizing the priors in a data-dependent way (e.g. by empirical Bayes).\\
\end{itemize}
For notational simplicity, we report our results here in terms of a single hidden layer width $N = N_1 = \dots = N_L$. The general form with generic $N_\ell$, as well as related results not stated here, is available by direct application of the general expansions in \S \ref{pf:logG}. As elsewhere, we emphasize that we focus on the regime
\[
\frac{P}{N_0}\rightarrow \alpha_0\in (0,1).
\]
When $\alpha_0\geq 1$, Theorem \ref{thm:Z-form} still holds but immediately yields that $\theta=\theta_{*,N_0}$ almost surely since $\theta_\perp =0$. In \S \ref{sec:doubledescent}, we will briefly return to this point when discussing double descent for the $\ell_2$ generalization error. As explained in \S \ref{sec:feature-learning}, in all our results, $\norm{\theta_*}$ and $\alpha_0$ typically appear in the following combination
\begin{align}
\nu &:= \frac{\norm{\theta_*}^2}{\alpha_0}.
\end{align}
To help contextualize our results below, we state the following simple Lemma relating asymptotic statements about the distribution of $\norm{\theta_\perp}$ to asymptotic statements about the joint distribution under the posterior of network outputs over a collection of test points.
\begin{lemma}[Asymptotic Normality of Posterior]
\label{lem:normal}
For each $N_0$, consider a collection of $k\geq 1$ test points
\[
{\bf x}_{N_0} = \lr{x_{j;N_0},\,j=1,\ldots,k}.
\]
Suppose that
\begin{itemize}
\item For each $\alpha_0\in (0,1)$ there exists a vector $\mu_{*}=\lr{\mu_{*,j},\, j=1,\ldots, k}\in \mathbb R^k$ such that
\begin{equation}\label{eq:mu-def}
\lim_{\substack{P,N_0\rightarrow \infty \\ P/N_0\rightarrow \alpha_0\in (0,1)}}\inprod{\theta_{*,N_0}}{x_{j,N_0}} = \mu_{*,j}\qquad \text{ almost surely}.
\end{equation}
\item For each $\alpha_0\in (0,1)$ there exists a positive semi-definite $k\times k$ matrix $\Sigma_\perp$ such that
\begin{equation}\label{eq:Sig-def}
\lim_{\substack{P,N_0\rightarrow \infty \\ P/N_0\rightarrow \alpha_0\in (0,1)}} \lr{\frac{1}{N_0-P}\inprod{x_{i,N_0}^\perp}{x_{j,N_0}^\perp}}_{1\leq i,j\leq k} = \Sigma_\perp\qquad \text{ almost surely},
\end{equation}
where $x_{j,N_0}^\perp$ denotes the projection of $x_j$ onto the orthogonal complement of the column space of $X_{N_0}$.
\end{itemize}
Assume that, under the posterior and in the large-$N$ limit, $\norm{\theta_\perp}^2$ converges in distribution:
\begin{align*}
\norm{\theta_\perp}^2 \to \nu(1-\alpha_0)c
\end{align*}
for some constant $c$. Then the distribution over posterior predictions evaluated on ${\bf x}_{N_0}$
\[
f({\bf x}_{N_0}) = \lr{\theta^Tx_{j,N_0},\, j=1,\ldots, k},\qquad \theta\sim \mathbb P_{\mathrm{post}}^{\mathrm{param}}
\]
converges weakly to a $k$-dimensional Gaussian:
\begin{align}
\label{eq:normal-post}f({\bf x}_{N_0}) ~\rightarrow~ \mathcal N\lr{\mu_*, ~ \nu c\Sigma_{\perp}}.
\end{align}
\end{lemma}
In general, the value of the constant $c$ in Lemma~\ref{lem:normal} may depend on the architecture of the neural network. The following theorem shows that maximizing Bayesian evidence in the limit of large datasets corresponds to choosing $c=1$, regardless of the network architecture:
\begin{theorem}[Maximizing Evidence gives Universal, Architecture-Independent Posteriors]
\label{thm:bayesfeature}
Fix $L, N_1,\dots, N_L \geq 1$, $\sigma^2 > 0$, $\alpha_0\in (0,1)$, and consider sequences of training data sets $X_{N_0}\in \mathbb R^{N_0\times P}, Y_{N_0}\in \mathbb R^{1\times P}$ as in \S \ref{sec:setup} such that
\begin{align*}
P, N_0 \to \infty, \qquad \frac{P}{N_0} \to \alpha_0
\end{align*}
and \eqref{eq:Vstar-def} holds. Let
\[
Z_\infty(0~|~L,N_\ell, \sigma^2,\alpha_0) = \lim_{\substack{N_0,P\rightarrow \infty\\
P/N_0\rightarrow \alpha_0\in (0,1)}} Z_\infty(0~|~X,Y,L,N_\ell,\sigma^2)
\]
denote the limiting Bayesian model evidence. Then, adopting the notation of Lemma \ref{lem:normal}, the following statements are equivalent:
\begin{enumerate}[label=(\alph*)]
\item $\sigma^2$ maximizes $ Z_\infty(0~|~L,N_\ell,\sigma^2,\alpha_0)$
\item the posterior distribution over $\norm{\theta_\perp}^2$ converges weakly to the delta function at $\nu(1-\alpha_0)$
\item the posterior over predictions $f(\mathbf{x}_{N_0})$ converges weakly to a Gaussian $\mathcal{N}\lr{\mu_*, \nu \Sigma_\perp}$.
\end{enumerate}
\end{theorem}
We prove Theorem \ref{thm:bayesfeature} in \S \ref{sec:bayesfeature-pf} and refer the reader to \S \ref{sec:feature-learning} for an informal discussion. We emphasize that Theorem \ref{thm:bayesfeature} holds for any choice of depth and hidden layer widths. At finite depth, we shall see that identifying the evidence-maximizing prior $\sigma_*^2$ requires knowledge of the data-dependent parameter $\nu$. In contrast, infinite-depth networks have evidence maximized by $\sigma_*^2 = 1$, allowing them to successfully infer $\nu$ from training data despite a data-independent prior.
\subsection{Results on Models with a Finite Number of Hidden Layers}\label{sec:Lfinite-thms}
We present here a characterization of the infinite-width posterior and model evidence for networks with a fixed number of hidden layers. Recall that, as we explained at the start of \S \ref{sec:limit-thms}, the posterior distribution over regression vectors $\theta\in \mathbb R^{N_0}$ satisfies, for every $N_0$,
\[
\theta \stackrel{d}{=} \theta_{*,N_0} + u \norm{\theta_\perp},
\]
where $u$ is a uniform random vector on the unit sphere in $\mathrm{col}(X_{N_0})^\perp$ that is independent of the magnitude $\norm{\theta_\perp}$ of the projection of $\theta$ onto $\mathrm{col}(X_{N_0})^\perp$. To characterize the posterior distribution of $\theta$ it therefore suffices to describe the distribution of $\norm{\theta_\perp}$ (cf. Lemma \ref{lem:normal}).
\begin{theorem}[Posterior and Evidence at Finite $L$]\label{thm:finite-L}
For each $N_0,P\geq 1$, consider training data $X_{N_0},Y_{N_0}$ as in \S \ref{sec:setup} satisfying \eqref{eq:Vstar-def}. Fix constants $L\geq 0,\,\sigma^2 > 0$ and suppose that
\begin{equation}\label{eq:finite-L}
P,N_0,N \rightarrow \infty,\qquad \frac{P}{N_0} \rightarrow \alpha_0\in (0,1),\qquad \frac{P}{N}\rightarrow \alpha\in (0,\infty).
\end{equation}
In this limit, the posterior for $\norm{\theta_\perp}^2$ converges in distribution to a random variable depending only on $\nu,L,\sigma,\alpha,\alpha_0$:
\begin{align}
\label{eq:fin-a}&\norm{\theta_\perp}^2~\rightarrow~ \nu(1-\alpha_0)\lr{1+\frac{z_*}{\alpha}}^{-1},
\end{align}
where $z_*$ is the unique solution to
\begin{equation}\label{eq:zstar}
\frac{\nu}{\sigma^{2(L+1)}} = \lr{1+\frac{z_*}{\alpha}}\lr{1+z_*}^L,\qquad z_* > \max\set{-1,-\alpha}.
\end{equation}
Additionally, we have the following large-$P$ expansion of the Bayesian model evidence
\begin{align}
\log Z_\infty(0) &~\rightarrow~ \frac{P}{2}\left[\log\lr{\frac{P}{2}} + \log\lr{1+\frac{z_*}{\alpha}} - \lr{1+\frac{z_*}{\alpha}} - \log\lr{\frac{\norm{\theta_*}^2}{4\pi}}\right] \nonumber \\
&\qquad + \frac{NL}{2}\left[\log\lr{1+z_*} - z_*\right] + O(\max\set{\log P, \log N}).
\end{align}
\end{theorem}
\noindent Theorem \ref{thm:finite-L} implies several interesting consequences for the posterior, feature learning, and model selection at finite $L$. As a first consequence, we deduce the emergence of feature learning in the posterior. Lemma \ref{lem:normal} and Theorem \ref{thm:finite-L} immediately imply the following.
\begin{corollary}[Posterior Predictor Distribution]\label{cor:Lfix-post}
With the assumptions and notation of Lemma \ref{lem:normal} and Theorem \ref{thm:finite-L}, the posterior over predictions $f({\bf x}_{N_0})$ converges weakly to a Gaussian
\begin{align*}
\mathcal{N}\lr{\mu_*,~\nu\Sigma_\perp\lr{1+\frac{z_*}{\alpha}}^{-1}}.
\end{align*}
\end{corollary}
In regimes where $z_*$ tends to zero, the posterior will converge to the evidence-maximizing posterior independent of the architecture and prior. By \eqref{eq:zstar}, this occurs for instance in the limit of infinite depth with $\sigma^2=1$ or when $\nu=\sigma^{2(L+1)}$. This last condition corresponds to maximizing the Bayesian evidence $Z_\infty(0)$ at finite $L$:
\begin{corollary}[Bayesian Model Selection at Finite $L$]\label{cor:Lfix-evid}
In the setting of Theorem \ref{thm:finite-L} the Bayesian evidence $Z_\infty(0)$ satisfies:
\begin{align}
\label{eq:sig-star}\sigma_*^2 &= \argmax_{\sigma}Z_\infty(0) = \nu^{\frac{1}{L+1}}\\
\label{eq:L-star} L_* &= \argmax_{L}Z_\infty(0) =\frac{\log(\nu)}{\log(\sigma^2)}-1.
\end{align}
In particular, given a prior variance with $\sgn{\nu-1}=\sgn{\sigma^2-1}$ satisfying $|\sigma^2-1| \leq \epsilon$, the optimal depth network satisfies $L_* \geq |\log(\nu)|/\epsilon$.
\end{corollary}
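The saddle point $z_*$ of \eqref{eq:zstar} has no closed form in general, but the left-hand side is increasing on $z>\max\{-1,-\alpha\}$, so bisection suffices. The following sketch (illustrative parameter values) computes $z_*$, the limiting posterior value \eqref{eq:fin-a}, and verifies that the evidence-maximizing prior $\sigma_*^2=\nu^{1/(L+1)}$ of \eqref{eq:sig-star} yields $z_*=0$:

```python
def z_star(nu, sigma2, L, alpha, tol=1e-12):
    """Solve (1 + z/alpha)(1 + z)^L = nu / sigma2^(L+1) for
    z > max(-1, -alpha), where the left-hand side increases from 0."""
    target = nu / sigma2 ** (L + 1)
    h = lambda z: (1 + z / alpha) * (1 + z) ** L
    lo, hi = max(-1.0, -alpha) + 1e-15, 1.0
    while h(hi) < target:       # grow the bracket until it contains z_*
        hi *= 2
    while hi - lo > tol:        # bisection on the monotone function h
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative values: nu = 2, alpha_0 = 0.25, alpha = 0.5, depth L = 3.
nu, alpha0, alpha, L = 2.0, 0.25, 0.5, 3
z = z_star(nu, 1.0, L, alpha)                  # data-agnostic prior sigma^2 = 1
print(z, nu * (1 - alpha0) / (1 + z / alpha))  # limiting ||theta_perp||^2

# With the evidence-maximizing prior sigma_*^2 = nu^(1/(L+1)), z_* = 0:
print(z_star(nu, nu ** (1 / (L + 1)), L, alpha))
```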
To put this Corollary into context, note that in the large width limit \eqref{eq:finite-L} there are only two remaining model parameters, $L$ and $\sigma^2$. Model selection can therefore be done in two ways. The first is an empirical Bayes approach in which one uses the training data to determine a data-dependent prior $\sigma_*^2$ given by \eqref{eq:sig-star}. With this prior, inspection of \eqref{eq:zstar} shows that $z_*=0$. By Corollary \ref{cor:Lfix-post}, at large $N_0,P$, predictions $f(x_\perp)$ on test points $x_\perp$ orthogonal to inputs take the simple form
\[
f(x_\perp)~\approx~ \mathcal N\lr{0,\,\nu\,\frac{\norm{x_\perp}^2}{N_0-P}}.
\]
In this way, the model leverages the scale of labels observed in the training data to make corresponding predictions at test time.
The other approach to model selection, which more closely follows the use of neural networks in practice, is to seek a universal, data-agnostic value of $\sigma^2$ and optimize instead the network architecture. The expression \eqref{eq:sig-star} shows that the only way to choose $\sigma^2,L$ to (approximately) maximize model evidence for any fixed $\nu$ is to take $\sigma^2\approx 1$ with $\sgn{\sigma^2-1}=\sgn{\nu-1}$ and $L\rightarrow \infty$. Hence, restricting to the data-independent prior $\sigma^2=1$ naturally leads to a Bayesian preference for infinite-depth networks, regardless of the training data. This motivates us to consider large-$N$ limits in which $L$ tends to infinity, which we take up in the next two sections.
\subsection{Results: Infinite-Depth Models}\label{sec:Linfinit-thms}
The results in the previous section show that the best data-agnostic setting of prior variance corresponds to $\sigma^2=1$ and that, with this value of $\sigma^2$, one cannot maximize the Bayesian evidence in the regime \eqref{eq:finite-L} at finite values of $L$. In this section, we therefore fix $\sigma^2=1$ and investigate feature learning and model selection via Bayesian evidence maximization in regimes where $L\rightarrow \infty$.
\begin{theorem}[Posterior and Evidence at Fixed $\lambda_{\mathrm{prior}}$ and $\sigma^2=1$]
\label{thm:LN}
For each $N_0,P\geq 1$, consider training data $X_{N_0},Y_{N_0}$ as in \S \ref{sec:setup} satisfying \eqref{eq:Vstar-def}. Moreover, fix $\lambda_{\mathrm{prior}},\alpha \in (0,\infty),\, \alpha_0\in (0,1).$ Suppose that $N_1=\cdots=N_L=N$ and that
\begin{equation}\label{eq:infinite-depth}
P,N_\ell,L\rightarrow \infty,\qquad \frac{P}{N_0}\rightarrow \alpha_0,\qquad\frac{P}{N}\rightarrow \alpha,\qquad \frac{L}{N}\rightarrow \lambda_{\mathrm{prior}}.
\end{equation}
Then, under the posterior, we have the following convergence in distribution
\begin{equation}\label{eq:inf-a}
\norm{\theta_\perp}^2~ \rightarrow~ \nu (1-\alpha_0),
\end{equation}
which is independent of $\alpha,\lambda_{\mathrm{prior}}$. Additionally, the evidence admits the following asymptotic expansion:
\begin{align}
\log Z_\infty(0) &= \frac{P}{2}\left[\log\lr{\frac{P}{2}} - 1 - \log\lr{\frac{\norm{\theta_*}^2}{4\pi}}\right] + O(\max\set{\log P, \log N}).
\end{align}
\end{theorem}
Comparing \eqref{eq:inf-a} with \eqref{eq:fin-a} from Theorem \ref{thm:finite-L} reveals the remarkable nature of data-driven feature learning in infinitely deep networks.
\begin{corollary}[Feature Learning at Infinite Depth]\label{cor:feature-learning}
In the setting of Theorem \ref{thm:LN}, the posterior distribution over regression weights $\theta$ is the same in the following two settings:
\begin{itemize}
\item We fix $L\geq 0$, take the data-dependent prior variance $\sigma_*$ that maximizes the Bayesian model evidence as a function of the training data and network depth as in \eqref{eq:sig-star}, and send the network width $N$ to infinity.
\item We fix $\lambda_{\mathrm{prior}} >0$, take a data-agnostic prior $\sigma^2=1$, and send both the network depth $L:=\lambda_{\mathrm{prior}} \cdot N$ and network width $N$ to infinity together.
\end{itemize}
\end{corollary}
This corollary makes precise the statement that infinitely deep networks perform the same feature learning with data-agnostic priors as finite depth networks with optimal data-dependent priors. Our next result shows that not only does setting $\sigma^2=1,\lambda_{\mathrm{prior}}>0$ in the large-$N$ limit give desirable posteriors, but it also enjoys a significant preference with respect to model evidence.
\begin{corollary}[Bayesian Preference for Infinite Depth]
\label{cor:infevd}
As in the setting of Theorem \ref{thm:LN}, fix $\lambda_{\mathrm{prior}}>0$. For each fixed $L\geq 0$
\[
\lim_{N\rightarrow \infty}e^{cN}\frac{Z_\infty\lr{0~|~L,N_\ell=N,\sigma^2=1,X_{N_0},Y_{N_0}}}{Z_\infty\lr{0~|~L=N\lambda_{\mathrm{prior}},N_\ell=N,\sigma^2=1,X_{N_0},Y_{N_0}}} =1,
\]
where
\[
c = -\frac{\alpha}{2}\left[\log\lr{1+\frac{z_*}{\alpha}} - \frac{z_*}{\alpha}\right] - \frac{L}{2}\left[\log(1+z_*)-z_*\right]
\]
satisfies $c \leq 0$, and $z_*$ is defined as in \eqref{eq:zstar}.
\end{corollary}
The preceding Corollary shows that, when $\sigma^2=1$, any choice of $\lambda_{\mathrm{prior}}$ results in exponentially greater evidence than a network with finitely many hidden layers in the large-$N$ limit. Finally, as shown in \S \ref{sec:evidence} by a higher-order expansion of the evidence, we find that the Bayes-optimal depth is given by an $O(1)$ choice of $\lambda_{\mathrm{prior}}$. That is, unlike in the finite-depth limit, where $L_*\to\infty$ for any data-agnostic prior, choosing $L=O(N)$ admits an attainable optimal depth that maximizes Bayesian evidence.
\begin{corollary}[Bayes-optimal infinite depth]
\label{cor:optevd}
In the setting of Theorem \ref{thm:LN}, we have
\[
\lambda_{\mathrm{prior},*} = \sqrt{1 + \log(\nu)^2}-1 = \argmax_{\lambda}\lim_{\substack{P,N_\ell \rightarrow \infty\\ P/N_0\rightarrow \alpha_0 \in (0,1)\\ \lambda_{\mathrm{prior}}(N_1,\ldots, N_L)\rightarrow \lambda}} Z_\infty\lr{{\bf 0}~|~L,N_\ell,\sigma^2=1,X_{N_0},Y_{N_0}}.
\]
Moreover, for any $\lambda>0$, there exists $c>0$ such that we have
\begin{align*}
c < \lim_{\substack{P,N_\ell \rightarrow \infty\\ P/N_0\rightarrow \alpha_0 \in (0,1)}} \frac{Z_\infty\lr{{\bf 0}~|~L=N\lambda,N_\ell=N,\sigma^2=1,X_{N_0},Y_{N_0}}}{ Z_\infty\lr{{\bf 0}~|~L=N\lambda_{\text{prior},*},N_\ell=N,\sigma^2=1,X_{N_0},Y_{N_0}}}\leq 1,
\end{align*}
i.e., the ratio of evidences is lower-bounded by a constant.
\end{corollary}
The preceding result shows that although the optimal choice of $\lambda_{\mathrm{prior}}$ is data-dependent, choosing the incorrect constant for $\lambda_{\mathrm{prior}}$ reduces the evidence by only a negligible amount compared to the exponential suppression in $N$ that occurs at finite depth (Corollary~\ref{cor:infevd}). We conclude that, at $\sigma^2=1$, wide networks with depth comparable to width are robustly preferred to shallow networks.
\subsection{Results: Scaling Laws for Feature Learning and Model Selection}\label{sec:scaling-thms}
We consider the limit where depth, width, and dataset size all go to infinity simultaneously. To emphasize the similarity between dataset size and depth, we take the limit to infinity while holding $LP/N$ constant. This results in a scaling law for feature learning that only depends on $LP/N$, as seen in the following theorem.
\begin{theorem}[Posterior and Evidence at Fixed $\lambda_{\mathrm{post}}$]
\label{thm:PLN}
For each $N_0,P\geq 1$, consider training data $X_{N_0},Y_{N_0}$ as in \S \ref{sec:setup} satisfying \eqref{eq:Vstar-def}. Moreover, fix constants $\lambda_{\mathrm{post}} > 0,\, \alpha_0\in (0,1), \, \sigma^2=1$. Suppose that $N_1=\cdots = N_L=N$ and suppose that
\begin{align}
P, N_\ell, L \to \infty, \qquad \frac{P}{N_0} \to \alpha_0 , \qquad \frac{P}{N} \to 0, \qquad \frac{L}{N} \to 0, \qquad \frac{LP}{N} \to \lambda_{\mathrm{post}}.
\end{align}
Then, under the posterior, we have the following convergence in distribution
\begin{align}
\norm{\theta_\perp}^2 \to \nu(1-\alpha_0)(1+t_*)^{-1},
\end{align}
where $t_*$ is the unique solution to
\begin{align}
\nu = (1+t_*)e^{\lambda_{\mathrm{post}} t_*}, \qquad t_* > -1.
\end{align}
Additionally, we have the following asymptotic expansion for the Bayesian model evidence
\begin{align*}
\log Z_\infty(0) &= \frac{P}{2}\lr{\log\lr{\frac{P}{2}}-1} - \frac{P}{2} \log\lr{\frac{\norm{\theta_*}^2}{4\pi}}+ \frac{P}{2}\left[\log(1+t_*)-t_* - \frac{1}{2}\lambda_{\mathrm{post}} t_*^2\right]\\
&+ \tilde O(\max\set{\log P, \log N, \log L}).
\end{align*}
\end{theorem}
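The fixed point $t_*$ can likewise be found by bisection, since $(1+t)e^{\lambda_{\mathrm{post}}t}$ is increasing on $t>-1$. The sketch below (with an arbitrary choice $\nu=4$) shows $t_*\to 0$ as $\lambda_{\mathrm{post}}$ grows, so the limiting posterior value $\nu(1-\alpha_0)(1+t_*)^{-1}$ approaches the optimum $\nu(1-\alpha_0)$:

```python
import math

def t_star(nu, lam, tol=1e-12):
    """Solve (1 + t) * exp(lam * t) = nu for t > -1 by bisection;
    the left-hand side increases from 0 at t = -1."""
    f = lambda t: (1 + t) * math.exp(lam * t)
    lo, hi = -1.0 + 1e-12, 1.0
    while f(hi) < nu:           # grow the bracket until it contains t_*
        hi *= 2
    while hi - lo > tol:        # bisection on the monotone function f
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < nu else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical nu = 4: t_* shrinks as the posterior depth lambda_post grows.
for lam in [0.1, 1.0, 10.0, 100.0]:
    t = t_star(4.0, lam)
    print(lam, t, 1.0 / (1.0 + t))
```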
In the limit $\lambda_{\mathrm{post}}\to\infty$, Theorem~\ref{thm:PLN} implies that optimal feature learning is rapidly approached --- specifically, like the harmonic mean of 1 and $\lambda_{\mathrm{post}}/\log \nu$. Bayesian model selection also drives $\lambda_{\mathrm{post}}$ to infinity.
\begin{corollary}[Bayesian Model Selection at Fixed $\lambda_{\mathrm{post}}$]
In the setting of Theorem~\ref{thm:PLN}, the Bayesian evidence $Z_\infty(0)$ is monotonically increasing in $\lambda_{\mathrm{post}}$. Specifically,
\begin{align}
\frac{\partial \log Z_\infty(0)}{\partial \lambda_{\mathrm{post}}} = \frac{Pt_*^2}{4} \geq 0.
\end{align}
\end{corollary}
These results provide simple scaling laws that apply independently of the choice of dataset, demonstrating the behavior of the predictor's posterior distribution and model evidence in terms of $\lambda_{\mathrm{post}}$. The coupling of depth and dataset size in $\lambda_{\mathrm{post}}$ provides a novel interpretation of depth as a mechanism to improve learning in a manner similar to additional data: larger datasets and larger depths contribute equally towards aligning the prior $\sigma^2=1$ towards the correct posterior.
\section{Properties of Deep Linear Networks}\label{sec:properties}
In this section, we relate our work to prior results in the literature: benign overfitting~\cite{bartlett2020benign}, variance-limited scaling laws~\cite{bahri2021explaining}, and sample-wise double descent~\cite{belkin2019reconciling, belkin2020two,zavatone2022contrasting}. To make the discussion concrete, we shall focus on the architecture introduced in Theorem~\ref{thm:LN}, where depth, width, and dataset size scale linearly with each other to produce a posterior that exhibits optimal feature learning given prior $\sigma^2=1$.
\subsection{Scaling Laws}
We examine the scaling behavior of model error with respect to the infinite-width or infinite-dataset limits. The work of~\cite{bahri2021explaining} shows that the difference between the finite-size loss and the infinite-size loss scales like $1/x$ for $x = N$ or $P$ while the other parameter is held fixed ($P$ or $N$, respectively). Here, we demonstrate analogous scaling laws in the limits considered above, showing that optimal feature learning is again approached with a rate like $1/N$ and $1/P$. Moreover, we can directly evaluate scaling laws with respect to the depth $L$ of the network, finding that convergence also occurs at a rate of $1/L$. Similarly to the results of~\cite{bahri2021explaining}, these scaling laws are independent of the choice of dataset and consequently provide a universal insight into how performance improves with larger models or more data.
\begin{theorem}[Scaling Laws of Finite-Width Corrections to Deep Networks]\label{thm:scaling}
For each $N_0\geq 1$ consider training data $X_{N_0},Y_{N_0}$ as in \S \ref{sec:setup} satisfying \eqref{eq:Vstar-def}. Fix constants $\lambda_{\mathrm{prior}} > 0,\alpha>0, \, \sigma^2=1$. Suppose that $N_1=\cdots=N_L= N$, that $L=\lambda_{\mathrm{prior}} N$, and that the number of datapoints satisfies $P=N\alpha$. Then, as $N\rightarrow \infty$,
\begin{align*}
\Var_{\mathrm{post}}\left[f(x)\right] = \lim_{N\to\infty} \Var_{\mathrm{post}}\left[f(x)\right] +\frac{C}{N}+O\lr{\frac{\log N}{N^2}}
\end{align*}
where $C\in \mathbb R$ is a universal constant.
\end{theorem}
\noindent We present the proof in \S \ref{sec:scaling-pf}.
\subsection{Double Descent}
\label{sec:doubledescent}
We demonstrate double descent in $\alpha_0=P/N_0$ consistent with previous results in the literature~\cite{zavatone2022contrasting}. As a concrete example, we shall consider a Gaussian data model and evaluate double descent for the posterior of optimal feature learning; this posterior is achieved by, for example, deep networks with $\sigma^2=1$ (Theorem \ref{thm:LN}), or finite-depth networks with data-tuned priors $\sigma_*^2$ (Theorem \ref{thm:finite-L}).
\begin{theorem}[Double Descent in $\alpha_0$]\label{thm:dd}
Consider generative data model
\[
x_i\in \mathbb R^{N_0}\sim \mathcal N(0,\mathrm{I}),\qquad y_i = V_0x_i+\epsilon_i,\qquad \epsilon_i\sim \mathcal N(0,\sigma_\epsilon^2),\qquad \norm{V_0}^2=1
\]
and posterior distribution
\begin{align*}
\mathcal{N}\lr{\mu_*, \nu\Sigma_\perp}.
\end{align*}
We have error
\[
\mathbb E_{x,X,\epsilon}\left[\left\langle f(x)-V_0x \right \rangle^2\right] = \begin{cases}
\frac{1}{\alpha_0} - \alpha_0 + \frac{1}{1-\alpha_0}\sigma_\epsilon^2,&\quad \alpha_0<1\\
\frac{1}{\alpha_0-1}\sigma_\epsilon^2,&\quad \alpha_0 \geq 1\end{cases},
\]
which diverges at $\alpha_0 = 1$.
\end{theorem}
\noindent We refer the reader to \S \ref{sec:dd-pf} for the proof.
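The closed-form risk above is easy to sanity-check numerically. The following minimal sketch (with an arbitrary illustrative noise level $\sigma_\epsilon^2=0.1$, not a value from the paper) evaluates the piecewise formula and exhibits the divergence of the error as $\alpha_0\to 1$ from either side:

```python
def risk(alpha0, sigma_eps2):
    """Generalization error from the double descent theorem above."""
    if alpha0 < 1:
        return 1.0 / alpha0 - alpha0 + sigma_eps2 / (1.0 - alpha0)
    return sigma_eps2 / (alpha0 - 1.0)

# The error blows up as alpha_0 -> 1 from either side (the double descent peak)
# and decays again deep in the over- and under-parameterized regimes.
for a in [0.5, 0.9, 0.99, 1.01, 1.1, 2.0, 10.0]:
    print(f"alpha0={a:5.2f}  risk={risk(a, 0.1):10.4f}")
```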
\section{Background}
In this section, we introduce background and results needed for our proofs. Specifically, \S \ref{sec:gammafn} recalls basic definitions and asymptotic expansions for the Gamma and Digamma functions, \S \ref{sec:gammarv} recalls the moments of Gamma random variables, and \S \ref{sec:G-def} defines Meijer-G functions and introduces their basic properties.
\subsection{Gamma and Digamma Functions}\label{sec:gammafn} We will need the following asymptotic expansions for Euler's Gamma function
\[
\Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt,\qquad \Re(z) >0
\]
and Digamma function $\phi^{(1)}(z)=\frac{d}{dz}\log \Gamma(z)$ (see Equations 6.1.7, 6.3.13 in \cite{abramowitz1964handbook}):
\begin{proposition}\label{prop:gamma-exp}
We have the following analytic asymptotic expansion for the Gamma function
\begin{equation}\label{eq:Gamma-exp}
\log \Gamma\lr{z} \sim \lr{z-\frac{1}{2}}\log(z) - z + \text{const}+O\lr{\abs{z}^{-1}},\quad \text{as }\abs{z}\rightarrow \infty.
\end{equation}
In particular, we also have
\begin{equation}\label{eq:diGamma-exp}
\phi^{(1)}(z)=\frac{d}{dz}\log \Gamma\lr{z} \sim \log(z) + O(\abs{z}^{-1}),\quad \text{as }\abs{z}\rightarrow \infty.
\end{equation}
Both expansions hold uniformly on sets of the form $\abs{\arg(z)}< \pi - \delta$ for a fixed $\delta >0$.
\end{proposition}
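Both expansions are easy to probe numerically. The sketch below (using mpmath; the explicit Stirling constant $\tfrac12\log 2\pi$ is the standard value of the "const" above) confirms that both errors decay like $1/\abs{z}$ along the positive real axis:

```python
import mpmath as mp

mp.mp.dps = 30
for z in [mp.mpf(10), mp.mpf(100), mp.mpf(1000)]:
    # Stirling: log Gamma(z) ~ (z - 1/2) log z - z + (1/2) log(2 pi) + O(1/z)
    stirling = (z - mp.mpf(1) / 2) * mp.log(z) - z + mp.log(2 * mp.pi) / 2
    err_gamma = abs(mp.loggamma(z) - stirling)
    # Digamma: d/dz log Gamma(z) ~ log z + O(1/z)
    err_digamma = abs(mp.digamma(z) - mp.log(z))
    print(z, err_gamma, err_digamma)  # both errors shrink like 1/z
```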
\subsection{Gamma Random Variables}\label{sec:gammarv}
We will need the following well-known exact formulas for the moments of Gamma random variables, which follow directly from the formula for their density:
\[
\mathrm{Den}_{X}(x)=\begin{cases}\frac{x^{k-1}}{\Gamma(k)\theta^{k}}e^{-x/\theta},&\quad x>0\\
0,&\quad x \leq 0\end{cases},\qquad X\sim \Gamma(k,\theta), \quad k,\theta>0
\]
and the definition of the Gamma function.
\begin{proposition}
Let $k,\theta >0$ and suppose $\phi \sim \Gamma(k,\theta)$. Then for any $t\in \mathbb R$ we have
\begin{equation}\label{E:gamma-moments}
\E{\phi^t} = \theta^t\frac{\Gamma\lr{k+t}}{\Gamma\lr{k}}.
\end{equation}
In particular
\begin{equation}\label{E:gamma-MGF}
\E{e^{-it \log\phi}} = \exp\left[-it\log \theta + \log \Gamma\lr{k-it}-\log \Gamma(k)\right]
\end{equation}
is a meromorphic function of $t$ with poles on the negative imaginary axis:
\[
t = -i(\nu +k),\quad \nu = 0,1,2,\ldots.
\]
\end{proposition}
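Formula \eqref{E:gamma-moments} admits a quick Monte Carlo check (with arbitrary illustrative parameters $k=3.5$, $\theta=0.7$):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
k, theta = 3.5, 0.7
phi = rng.gamma(shape=k, scale=theta, size=2_000_000)

for t in [-1.0, 0.5, 2.0]:
    # E[phi^t] = theta^t * Gamma(k + t) / Gamma(k)
    exact = theta**t * math.gamma(k + t) / math.gamma(k)
    print(t, exact, (phi**t).mean())  # exact vs. sample moment
```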
\subsection{Meijer G-Functions}\label{sec:G-def}
The Meijer-G function is defined as the contour integral
\begin{equation}\label{eq:G-def}
\G{m,n}{p,q}{z}{{\bf a}}{{\bf b}} = \frac{1}{2\pi i}\int_{\mathcal C} z^s \chi(s)ds,
\end{equation}
where
\[
\chi(s):=\frac{\prod_{j=1}^m \Gamma\lr{b_j-s}\prod_{k=1}^n\Gamma\lr{1-a_k+s} }{\prod_{j=m+1}^q \Gamma\lr{1-b_j+s}\prod_{k=n+1}^p \Gamma\lr{a_k+s}}
\]
and $\mathcal C$ is a Mellin-Barnes contour in the complex plane that separates the poles of $\Gamma(b_j-s)$ from those of $\Gamma(1-a_k+s)$ (see Figure \ref{fig:G-contour}).
\begin{figure}
\centering
\includegraphics[scale=.8]{G-contour.png}
\caption{This article concerns only the case when $a_k,b_j$ are real. The poles of $\Gamma(b_j-s)$ (black dots) extend to infinity along the positive real axis when the $b_j$ are real. Similarly, the poles of $\Gamma(1-a_k+s)$ (black squares) extend to infinity along the negative real axis when the $a_k$ are real. The contour of integration (solid black line) in \eqref{eq:G-def} starts at $-i\infty$, keeps the poles of $\Gamma(b_j-s)$ to the right, keeps the poles of $\Gamma(1-a_k+s)$ to the left, and ends at $+i\infty$.}
\label{fig:G-contour}
\end{figure}
We will need several properties of Meijer-G functions, which we now recall.
\begin{proposition}
For $\eta,\omega>0$, we have
\begin{align}
\label{E:G-cancel} & \G{m,n}{p,q}{x}{\alpha,{\bf a}}{{\bf b},\alpha} =\G{m,n-1}{p-1,q-1}{x}{{\bf a}}{{\bf b}} \\
\label{E:G-weighted-Laplace} &\int_0^\infty e^{-\omega x} x^{-\alpha} \G{m,n}{p,q}{\eta x}{{\bf a}}{{\bf b}} dx=\omega^{\alpha-1} \G{m,n+1}{p+1,q}{\frac{\eta}{\omega}}{\alpha, {\bf a}}{{\bf b}} \\
\label{E:G-inversion}&\G{m,n}{p,q}{z}{{\bf a}}{{\bf b}} =\G{n,m}{q,p}{z^{-1}}{1-{\bf b}}{1-{\bf a}}\\
\label{E:G-polyprod}&z^\rho \G{m,n}{p,q}{z}{{\bf a}}{{\bf b}} =\G{m,n}{p,q}{z}{\rho+{\bf a}}{\rho+{\bf b}}\\
\label{E:G-BesselJ} &\G{1,0}{0,2}{\frac{x^2}{4}}{-}{\frac{\nu}{2}, -\frac{\nu}{2}}= J_{\nu}(x)\\
\label{E:G-deriv}
&z^h \frac{d^h}{dz^h}\G{m,n}{p,q}{z}{{\bf a}}{{\bf b}} = \G{m,n+1}{p+1,q+1}{z}{{0, \bf a}}{{\mathbf{b}, h}}\\
\label{E:G-conv}
&\int_0^\infty \G{m,n}{p,q}{\eta x}{\mathbf a}{\mathbf b} \G{\mu, \nu}{\sigma, \tau}{\omega x}{\mathbf c}{\mathbf d} dx = \frac{1}{\eta}\G{n+\mu,m+\nu}{q+\sigma,p+\tau}{\frac{\omega}{\eta}}{-b_1,\dots,-b_m, \mathbf c, -b_{m+1},\dots,-b_q}{-a_1,\dots,-a_n, \mathbf d, -a_{n+1},\dots,-a_p}\\
\label{E:G-series} &\int_0^\infty \tau^{\alpha-1}\G{s,t}{u,v}{\sigma+\tau}{c_1,\ldots,c_u}{d_1,\ldots, d_v} \G{m,n}{p,q}{\omega \tau}{a_1,\ldots, a_p}{b_1,\ldots, b_q}d\tau \\
\notag &= \sum_{k=0}^\infty \frac{\lr{-\sigma}^k}{k!} \G{m+t,n+s+1}{p+v+1,q+u+1}{\omega }{1-\alpha,a_1,\ldots, a_n, k-\alpha-d_1,\ldots, k-\alpha-d_v+1,a_{n+1},\ldots, a_p}{b_1,\ldots, b_m, k-\alpha-c_1+1,\ldots, k-\alpha-c_u+1, k-\alpha+1, b_{m+1},\ldots, b_q}
\end{align}
For \eqref{E:G-series} we require
\[
\abs{\arg(\sigma)}, \abs{\arg(\omega)}<\pi
\]
and
\[
-\min\set{\Re(b_1),\ldots, \Re(b_m)}
< \Re(\alpha)<2 - \max\set{\Re(a_1),\ldots, \Re(a_n)}- \max\set{\Re(c_1),\ldots, \Re(c_t)}.
\]
Finally, fix $L\geq 1$ and
\[
k_\ell, \theta_\ell >0,\qquad \ell=1,\ldots, L
\]
and let
\[
\phi_\ell \sim \Gamma\lr{k_\ell,\theta_\ell} \quad \text{independent}.
\]
We have for every $y>0$
\begin{align}
\label{eq:G-gamma} \prod_{\ell=1}^L \Gamma\lr{k_\ell}^{-1 } \frac{1}{A}\G{L,0}{0,L}{\frac{y}{A}}{-}{k_1-1,\ldots, k_L-1} = \mathrm{Den}_{\prod_{\ell=1}^L\phi_\ell}(y),\qquad A:=\prod_{\ell=1}^L \theta_\ell.
\end{align}
\end{proposition}
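As a numerical sanity check, the Bessel identity \eqref{E:G-BesselJ} can be verified directly with mpmath's \texttt{meijerg} routine (its parameter convention splits the upper and lower parameters as $(a_1,\ldots,a_n),(a_{n+1},\ldots,a_p)$ and $(b_1,\ldots,b_m),(b_{m+1},\ldots,b_q)$):

```python
import mpmath as mp

mp.mp.dps = 25
# Identity (E:G-BesselJ): G^{1,0}_{0,2}(x^2/4 | -; nu/2, -nu/2) = J_nu(x)
for nu in [0, 1, 2.5]:
    for x in [0.3, 1.0, 4.2]:
        lhs = mp.meijerg([[], []], [[nu / 2], [-nu / 2]], x**2 / 4)
        rhs = mp.besselj(nu, x)
        assert abs(lhs - rhs) < 1e-15
print("E:G-BesselJ verified")
```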
\section{Proof of Theorem \ref{thm:Z-form}}
Recall that, by definition,
\[
Z_\beta({\bf x}_{N_0},{\bf t})= A_{\beta} \Ep{ \exp\left[ -\frac{\beta}{2}\big|\big|Y-\prod_{\ell=1}^{L+1}W^{(\ell)}X\big|\big|_2^2 -i\inprod{ f({\bf x}_{N_0})}{{\bf t}}\right]} ,
\]
where
\[
{\bf x}_{N_0} = \lr{x_{j,N_0},\, j=1,\ldots, k}\subseteq \mathbb R^{N_0},\quad x_{j,N_0}\in \mathbb R^{N_0},
\]
the expectation is over $W_{ij}^{(\ell)}\sim \mathcal N(0,\sigma^2/N_{\ell-1})$ and
\begin{equation*}
A_{\beta} =\exp\left[\frac{\beta}{2}\norm{Y_\perp}^2\right]\det(X^TX)^{1/2}\lr{2\pi \beta}^{P/2}.
\end{equation*}
The first step in proving Theorem \ref{thm:Z-form} is to decompose
\[
Y = \theta_*^T X + Y_{\perp},\qquad Y_\perp \in \mathrm{col}(X)^\perp,
\]
where here and throughout the proof we suppress the subscripts in $X_{N_0},Y_{N_0}, \theta_{*,N_0}$. This yields
\begin{equation}\label{eq:Z-form-1}
Z_\beta({\bf x}_{N_0},{\bf t})= A_\beta \exp\left[-\frac{\beta}{2}\norm{Y_\perp}^2\right] \Ep{ \exp\left[ -\frac{\beta}{2}\big|\big|\theta_*^TX-\prod_{\ell=1}^{L+1}W^{(\ell)}X\big|\big|_2^2 -i\inprod{ f({\bf x}_{N_0})}{{\bf t}}\right]}.
\end{equation}
Next, since $f(x)$ is a linear function of $x$, we have
\[
\inprod{ f({\bf x}_{N_0})}{{\bf t}} = f\lr{{\bf x}_{N_0}\cdot {\bf t}},
\]
and hence, suppressing the dependence on $N_0$, we will write
\[
Z_\beta({\bf x}_{N_0},{\bf t}) = Z_\beta(x,1)=:Z_\beta(x),\qquad x:={\bf x}_{N_0}\cdot {\bf t}.
\]
To prove Theorem \ref{thm:Z-form}, we first derive the following expression for $Z_\beta(x)$ for general $\beta$.
\begin{proposition}\label{prop:Z-gen}
For any $\beta > 0$, the partition function $Z_\beta(x)$ equals
\begin{align*}
&\prod_{\ell=1}^L\Gamma\lr{\frac{N_\ell}{2}}^{-1} \exp\left[-i\theta_*^Tx_{||}\right]\\
&\qquad \times \int_{\mathrm{col}{(X)}} d\zeta\exp\left[-\frac{\norm{X^{\dagger}(\zeta - x_{||})}^2}{2\beta}+i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0},
\end{align*}
where $X^{\dagger}$ is the pseudo-inverse of $X$ and
\[
4M = \prod_{\ell=0}^L \frac{2\sigma^2}{N_\ell}.
\]
\end{proposition}
\begin{proof}
We begin the proof of Proposition \ref{prop:Z-gen} by integrating out the final layer weights $W^{(L+1)}$ and introducing a dual variable $t\in \mathbb R^{P}$, as in the following Lemma.
\begin{lemma}\label{L:partition-form}
Write
\[
X^L:=W^{(L)}\cdots W^{(1)} X,\qquad x^L =W^{(L)}\cdots W^{(1)} x.
\]
We have
\begin{equation}\label{E:Z-noa}
Z_\beta(x) =\det(X^TX)^{1/2}\int_{\mathbb R^P} \Ep{\exp\left[-\frac{\norm{t}_2^2}{2\beta}+i\theta_*^TXt-\frac{\sigma^2}{2N_L}\norm{X^Lt + x^L}^2\right] }dt.
\end{equation}
\end{lemma}
\begin{proof}
From the following identity
\begin{align*}
1= \int_{\mathbb R^P} \frac{dt}{(2\pi \beta)^{P/2}} \exp\left[-\frac{1}{2\beta} \norm{t^T- i\beta\lr{\theta_*^TX-W^{(L+1)}X^L}}^2\right]
\end{align*}
we conclude
\begin{align*}
& \exp\left[-\frac{\beta}{2}\norm{\theta_*^TX-W^{(L+1)}X^L}^2\right]\\
&\quad = \int \frac{d t}{(2\pi \beta)^{P/2}} \exp\left[-\frac{1}{2\beta} \norm{t^T- i\beta\lr{\theta_*^TX-W^{(L+1)} X^L}}^2-\frac{\beta}{2}\norm{\theta_*^TX-W^{(L+1)} X^L}^2\right] \\
&\quad =\int \frac{d t}{(2\pi \beta)^{P/2}} \exp\left[-\frac{1}{2\beta} \norm{t}^2 +it\lr{\theta_*^TX-W^{(L+1)} X^L}\right].
\end{align*}
Substituting this into \eqref{eq:Z-form-1} we find
\begin{align*}
Z_\beta(x)&= \frac{A_{\beta}}{(2\pi \beta)^{P/2}}\exp\left[-\frac{\beta}{2}\norm{Y_\perp}^2\right] \int_{\mathbb R^P}\Ep{ \exp\left[i\lr{\theta_*^TXt-W^{(L+1)}\lr{X^Lt +x^L}} -\frac{\norm{t}^2}{2\beta}\right]} dt
\end{align*}
Now we compute the expectation over the final layer weights $W^{(L+1)}$ by completing the square:
\begin{align*}
-\frac{N_L}{2\sigma^2}\norm{W^{(L+1)}}_2^2 - iW^{(L+1)}\lr{X^Lt + x^L}= &-\frac{N_L}{2\sigma^2}\left[\norm{W^{(L+1)}-i\frac{\sigma^2}{N_L}(X^Lt+x^L)}_2^2\right]\\
&- \frac{\sigma^2}{2N_L}\norm{X^Lt + x^L}_2^2.
\end{align*}
This yields
\[
Z_\beta (x) =\frac{A_{\beta}}{(2\pi \beta)^{P/2}}\exp\left[-\frac{\beta}{2}\norm{Y_\perp}^2\right] \int_{\mathbb R^P}\Ep{ \exp\left[ -\frac{\norm{t}^2}{2\beta}+i\theta_*^TXt-\frac{\sigma^2}{2N_L}\norm{X^Lt + x^L}^2\right]}dt,
\]
completing the proof.
\end{proof}
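The Gaussian identity driving this Lemma amounts to shifting a Gaussian contour into the complex plane, which leaves the integral unchanged. A one-dimensional numerical sketch (with illustrative values $\beta=2$, $v=1.3$) of the resulting Hubbard-Stratonovich-type formula $e^{-\beta v^2/2}=\int \frac{dt}{(2\pi\beta)^{1/2}}e^{-t^2/(2\beta)+itv}$:

```python
import numpy as np

beta, v = 2.0, 1.3
# Discretize the dual integral on a wide, fine grid; the integrand decays
# like exp(-t^2/(2 beta)), so truncation at |t| = 40 is negligible.
t = np.linspace(-40.0, 40.0, 400_001)
dt = t[1] - t[0]
integrand = np.exp(-t**2 / (2 * beta) + 1j * t * v)
lhs = integrand.sum() * dt / np.sqrt(2 * np.pi * beta)
rhs = np.exp(-beta * v**2 / 2)
print(lhs.real, rhs)  # the two sides agree; the imaginary part vanishes
```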
\noindent For each fixed $t$, note that
\[
X^Lt + x^L = W^{(L)}\cdots W^{(1)}\lr{Xt + x}.
\]
Hence, the expectation
\[
\Ep{ \exp\left[-\frac{\sigma^2}{2N_L}\norm{X^Lt + x^L}^2\right]}
\]
equals the Laplace transform
\[
\mathcal L_{Q_{N,L}}\lr{M'\norm{Xt+x}^2}
\]
of the random variable
\begin{equation}\label{E:QNL-def}
Q_{N,L} = \norm{ \left\{\prod_{\ell=1}^L \widehat{W}^{(\ell)}\right\} u}^2,\qquad \widehat{W}^{(\ell)} \sim \mathcal N\lr{0,I_{N_{\ell}\times N_{\ell-1}}}\text{ independent},
\end{equation}
where $u$ is any unit vector (the distribution of $Q_{N,L}$ is the same for every $u$ since $W^{(1)}$ is rotationally invariant), evaluated at $\norm{Xt+x}^2$ times
\[
M' :=\frac{1}{2} \lr{\prod_{\ell=1}^L \frac{\sigma^2}{N_\ell}}.
\]
Thus, we obtain
\begin{equation}\label{E:Z-lap}
Z_\beta(x) =\det(X^TX)^{1/2} \int dt \exp\left[-\frac{\norm{t}^2}{2\beta}+i\theta_*^TXt\right] \mathcal L_{Q_{N,L}}\lr{M'\norm{Xt+x}^2}.
\end{equation}
To proceed we rewrite the Laplace transform in the preceding line in terms of a Meijer-G function.
\begin{lemma}\label{L:QNL-form}
Let $Q_{N,L}$ be defined as in \eqref{E:QNL-def}. Then,
\begin{align*}
\mathcal L_{Q_{N,L}}\lr{\tau}= \lr{\prod_{\ell=1}^L \Gamma \lr{\frac{N_\ell}{2}}}^{-1}~\G{1,L}{L,1}{2^L\tau}{1-\frac{N_L}{2},\dots,1-\frac{N_1}{2}}{0}.
\end{align*}
\end{lemma}
\begin{proof}
The density of $Q_{N,L}^{1/2}$ is known from \cite{zavatone2021exact}:
\[
p_{Q_{N,L}^{1/2}}(\rho) = \frac{2^{1-L/2}}{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}} \G{L,0}{0,L}{\frac{\rho^2}{2^L }}{-}{\frac{N_L-1}{2},\cdots, \frac{N_1-1}{2}}.
\]
Hence, the density of $Q_{N,L}$ is
\begin{align*}
p_{Q_{N,L}}(\rho) &= \frac{1}{2\rho^{1/2}} p_{Q_{N,L}^{1/2}}(\rho^{1/2})\\
&=\frac{2^{-L}}{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}} \lr{\frac{2^L}{\rho}}^{1/2}\G{L,0}{0,L}{\frac{\rho}{2^L }}{-}{\frac{N_L-1}{2},\cdots, \frac{N_1-1}{2}}\\
&=\frac{2^{-L}}{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}} \G{L,0}{0,L}{\frac{\rho}{2^L }}{-}{\frac{N_L}{2}-1,\cdots, \frac{N_1}{2}-1},
\end{align*}
where in the last step we've used \eqref{E:G-polyprod}. Hence,
\begin{align*}
\mathcal L_{Q_{N,L}}(\tau) &= \frac{2^{-L}}{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}}\int_0^\infty e^{-\tau \rho} \G{L,0}{0,L}{\frac{\rho}{2^L }}{-}{\frac{N_L}{2}-1,\cdots, \frac{N_1}{2}-1} d\rho\\
&=\lr{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}}^{-1} \frac{1}{2^L\tau} \G{L,1}{1,L}{\frac{1}{2^L \tau }}{0}{\frac{N_L}{2}-1,\cdots, \frac{N_1}{2}-1}\\
&=\lr{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}}^{-1} \G{L,1}{1,L}{\frac{1}{2^L \tau }}{0}{\frac{N_L}{2},\cdots, \frac{N_1}{2}}\\
&=\lr{\Gamma\lr{\frac{N_1}{2}}\cdots \Gamma\lr{\frac{N_{L}}{2}}}^{-1} \G{1,L}{L,1}{2^L \tau }{1-\frac{N_L}{2},\cdots, 1-\frac{N_1}{2}}{0}
\end{align*}
where we've used \eqref{E:G-weighted-Laplace} and \eqref{E:G-inversion}.
\end{proof}
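For $L=1$ the Lemma can be cross-checked against a classical fact: $Q_{N,1}=\norm{\widehat W^{(1)}u}^2$ is chi-squared with $N_1$ degrees of freedom, so its Laplace transform is $(1+2\tau)^{-N_1/2}$. The sketch below (with an illustrative width $N_1=5$) confirms that the Meijer-G expression reproduces this:

```python
import mpmath as mp

mp.mp.dps = 25
N1 = 5  # single hidden layer of width N1 (L = 1)
for tau in [0.1, 1.0, 3.0]:
    # Lemma L:QNL-form with L = 1:
    # L(tau) = Gamma(N1/2)^{-1} G^{1,1}_{1,1}(2 tau | 1 - N1/2; 0)
    lhs = mp.meijerg([[1 - N1 / 2], []], [[0], []], 2 * tau) / mp.gamma(N1 / 2)
    # Laplace transform of a chi-squared random variable with N1 dof
    rhs = (1 + 2 * tau) ** (-N1 / 2)
    assert abs(lhs - rhs) < 1e-12
print("L:QNL-form verified for L = 1")
```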
\noindent Combining the preceding Lemma with \eqref{E:Z-lap} gives
\begin{align}
\label{eq:ztilde-old}
Z_\beta(x) &= \frac{\det(X^TX)^{1/2}}{\prod_{\ell=1}^L\Gamma\lr{\frac{N_\ell}{2}}} \int_{\mathbb R^{P}} dt \exp\left[-\frac{\norm{t}^2}{2\beta}+i\theta_*^TXt\right]~ \G{1,L}{L,1}{M\|Xt + x\|^2}{1-\frac{N_L}{2},\dots,1-\frac{N_1}{2}}{0},
\end{align}
where
\[
4M = \prod_{\ell=0}^L \frac{2\sigma^2}{N_\ell}.
\]
To complete the proof of Proposition \ref{prop:Z-gen}, we write
\[
x = x_{||}+x_{\perp},\qquad x_{||}\in \mathrm{col}(X),\quad x_{\perp}\in \mathrm{col}(X)^\perp.
\]
Note that $X$ gives an isomorphism from $\mathbb R^P$ to $\mathrm{col}(X)$. Thus, changing variables to $\zeta = Xt$ (whose Jacobian cancels the factor $\det(X^TX)^{1/2}$), we may write $Z_\beta(x)$ as
\begin{align*}
\prod_{\ell=1}^L\Gamma\lr{\frac{N_\ell}{2}}^{-1} \int_{\mathrm{col}(X)} d\zeta\exp\left[-\frac{\norm{X^{\dagger}\zeta}^2}{2\beta}+i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\|\zeta + x\|^2}{1-\frac{{\bf N}}{2}}{0},
\end{align*}
where $X^{\dagger}$ is the pseudo-inverse of $X$. Finally, note that
\[
\norm{\zeta+x}^2 = \norm{\zeta + x_{||}}^2 + \norm{x_\perp}^2 .
\]
Hence, shifting variables $\zeta \mapsto \zeta - x_{||}$ yields the following expression for $Z_\beta(x)$:
\begin{align*}
&\prod_{\ell=1}^L\Gamma\lr{\frac{N_\ell}{2}}^{-1} \exp\left[-i\theta_*^Tx_{||}\right]\\
&\qquad \times \int_{\mathrm{col}{(X)}} d\zeta\exp\left[-\frac{\norm{X^{\dagger}(\zeta - x_{||})}^2}{2\beta}+i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}.
\end{align*}
This is precisely the statement of Proposition \ref{prop:Z-gen}.
\end{proof}
\noindent By taking $\beta \rightarrow \infty$ in Proposition \ref{prop:Z-gen}, we see that
\begin{equation}\label{eq:Z-form-2}
Z_\infty(x)=\frac{\exp\left[-i\theta_*^Tx_{||}\right]}{\prod_{\ell=1}^L\Gamma\lr{\frac{N_\ell}{2}}
}
\int_{\mathbb R^P} \exp\left[i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\zeta.
\end{equation}
In order to simplify this expression further, we pass to polar coordinates
\begin{align*}
&\int_{\mathbb R^P} \exp\left[i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\zeta \\
&\qquad =\int_{0}^\infty \rho^{P-1}\left\{ \int_{S^{P-1}}\exp\left[i\rho \theta_*^T\theta\right]d\theta\right\}~ \G{1,L}{L,1}{M\rho^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\rho.
\end{align*}
By the definition of the Bessel function and the relation \eqref{E:G-BesselJ}, we have
\begin{align*}
\int_{S^{P-1}}\exp\left[i\rho \theta_*^T\theta\right]d\theta&= (2\pi)^{P/2}\lr{\rho \norm{\theta_*}}^{-\frac{P-2}{2}} J_{\frac{P-2}{2}}\lr{\rho \norm{\theta_*}}\\
&= (2\pi)^{P/2}\lr{\rho^2 \norm{\theta_*}^2}^{-\frac{P-2}{4}} \G{1,0}{0,2}{\frac{\rho^2\norm{\theta_*}^2}{4}}{-}{\frac{P-2}{4},-\frac{P-2}{4}}\\
&=2\pi^{P/2} \G{1,0}{0,2}{\frac{\rho^2\norm{\theta_*}^2}{4}}{-}{0,-\frac{P-2}{2}}.
\end{align*}
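The spherical integral formula above is easy to test by Monte Carlo. The following sketch (with illustrative values $P=3$ and $\rho\norm{\theta_*}=2$, where the right hand side reduces to $4\pi \sin(x)/x$) compares the sample average over the sphere with the Bessel expression:

```python
import numpy as np
from scipy.special import jv

rng = np.random.default_rng(1)
P, x = 3, 2.0  # x plays the role of rho * ||theta_*||
# Uniform points on S^{P-1}: normalize standard Gaussian vectors.
u = rng.normal(size=(1_000_000, P))
u /= np.linalg.norm(u, axis=1, keepdims=True)
area = 4 * np.pi  # surface area of S^2
mc = area * np.exp(1j * x * u[:, 0]).mean()
exact = (2 * np.pi) ** (P / 2) * x ** (-(P - 2) / 2) * jv((P - 2) / 2, x)
print(mc.real, exact)  # Monte Carlo vs. (2 pi)^{P/2} x^{-(P-2)/2} J_{(P-2)/2}(x)
```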
We therefore obtain
\begin{align*}
&\int_{\mathbb R^P} \exp\left[i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\zeta \\
&\qquad =\pi^{P/2}\int_{0}^\infty \rho^{\frac{P-2}{2}} \G{1,0}{0,2}{\frac{\rho\norm{\theta_*}^2}{4}}{-}{0,-\frac{P-2}{2}}~ \G{1,L}{L,1}{M\rho +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\rho\\
&\qquad =\lr{\frac{4}{\norm{\theta_*}^2}}^{\frac{P-2}{2}}\pi^{P/2}\int_{0}^\infty \G{1,0}{0,2}{\frac{\rho\norm{\theta_*}^2}{4}}{-}{\frac{P-2}{2},0}~ \G{1,L}{L,1}{M\rho +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\rho\\
&\qquad =\lr{\frac{4}{\norm{\theta_*}^2}}^{\frac{P}{2}}\pi^{P/2}\frac{\norm{\theta_*}^2}{4M}\int_{0}^\infty \G{1,0}{0,2}{\frac{\rho\norm{\theta_*}^2}{4M}}{-}{\frac{P-2}{2},0}~ \G{1,L}{L,1}{\rho +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\rho.
\end{align*}
We now apply \eqref{E:G-series} to find
\begin{align*}
&\int_{0}^\infty \G{1,0}{0,2}{\frac{\rho\norm{\theta_*}^2}{4M}}{-}{\frac{P-2}{2},0}~ \G{1,L}{L,1}{\rho +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\rho\\
&\qquad = \sum_{k=0}^\infty \frac{1}{k!}\lr{-M\norm{x_\perp}^2}^k \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{P-2}{2}, \frac{{\bf N}}{2}+k-1}.
\end{align*}
Therefore,
\begin{align*}
&\int_{\mathbb R^P} \exp\left[i\theta_*^T\zeta\right]~ \G{1,L}{L,1}{M\norm{\zeta}^2 +M\norm{x_\perp}^2}{1-\frac{{\bf N}}{2}}{0}d\zeta \\
&\qquad =\lr{\frac{4}{\norm{\theta_*}^2}}^{\frac{P}{2}}\pi^{P/2} \sum_{k=0}^\infty \frac{1}{k!}\lr{-M\norm{x_\perp}^2}^k \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{P}{2}, \frac{{\bf N}}{2}+k}.
\end{align*}
Putting this all together yields
\begin{align*}
Z_\infty(x) = \lr{\frac{4\pi}{\norm{\theta_*}^2}}^{\frac{P}{2}}\prod_{\ell=1}^L \Gamma\lr{\frac{N_\ell}{2}}^{-1}\sum_{k=0}^\infty \frac{1}{k!}\lr{-M\norm{x_\perp}^2}^k \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{P}{2}, \frac{{\bf N}}{2}+k},
\end{align*}
completing the proof.
\hfill $\square$
\section{Proof of Theorem \ref{thm:logG}}
\label{pf:logG}
In this section, we derive several apparently novel asymptotic expansions of Meijer-G functions of the form
\begin{equation}\label{eq:G-fn}
\G{L+1,0}{0,L+1}{\norm{\theta_*}^2}{-}{\frac{{\bf N}}{2}+k, \frac{P}{2}} := \G{L+1,0}{0,L+1}{\norm{\theta_*}^2}{-}{\frac{N_1}{2}+k,\ldots,\frac{N_L}{2}+k, \frac{P}{2}},
\end{equation}
where as in Theorem \ref{thm:logG} we will be interested in various regimes where $N_0, P, N_\ell\rightarrow \infty$ and $L$ may or may not diverge.
Our first step is to obtain a contour integral representation of the Meijer-G functions we are studying. To state the exact result, consider the following independent $\Gamma$ random variables:
\begin{align}
\label{eq:phi-def} \phi_j \sim \begin{cases}\Gamma\left(\frac{N_j}{2}+k+1, \frac{2\sigma^2}{N_j}\right), &j=1,\dots,L\\ \Gamma\left(\frac{P}{2}+1, \frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}\right), &j=0\end{cases}.
\end{align}
As we recalled in \S \ref{sec:gammarv}, the moments of $\phi_j$ can be written explicitly in terms of Gamma functions. Moreover, the Meijer-G functions \eqref{eq:G-fn} can be interpreted, up to a scaling factor, as the density of the product of the $\phi_j$'s. This allows us to obtain the following
\begin{lemma}\label{lem:contour}
Fix $L, N_0,\ldots, N_L, P\geq 1$ as well as $\norm{\theta_*}>0$ and define $M$ as in \eqref{eq:M-def}. Then we have
\begin{align}
\notag \mathrm{Den}_{\phi_0\cdots \phi_{L}}(1) &= \frac{\norm{\theta_*}^2}{4M}\G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{{\bf N}}{2}+k,\frac{P}{2}}\left[\Gamma\left(\frac{P}{2}+1\right)\right]^{-1}\prod_{\ell=1}^L\left[\Gamma\left(\frac{N_\ell}{2}+k+1\right)\right]^{-1}\\
\label{eq:G-contour}&= \frac{1}{2\pi}\int_{\mathcal C} \exp\left[\Phi(z)\right]dz,
\end{align}
where $\mathcal C\subseteq\mathbb C$ is the contour that runs along the real line from $-\infty$ to $\infty$ and
\begin{align}
\notag \Phi(z)&= -iz\log\lr{\frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}}+\log\lr{\frac{\Gamma\lr{\frac{P}{2}+1-iz}}{\Gamma\lr{\frac{P}{2}+1}}}\\
&+\sum_{\ell=1}^L\left\{-iz\log\lr{\frac{2\sigma^2}{N_\ell}}+\log\lr{\frac{\Gamma\lr{\frac{N_\ell}{2}+k+1-iz}}{\Gamma\lr{\frac{N_\ell}{2}+k+1}}} \right\}.
\end{align}
\end{lemma}
\begin{proof}
We use the relationship \eqref{eq:G-gamma} between $G$ functions and densities of products of Gamma random variables to write
\begin{align*}
\prod_{\ell=1}^L\left[\Gamma\left(\frac{N_\ell}{2}+k+1\right)\right]^{-1}\left[\Gamma\left(\frac{P}{2}+1\right)\right]^{-1}\frac{\norm{\theta_*}^2}{4M}\G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{{\bf N}}{2}+k,\frac{P}{2}} &= \mathrm{Den}_{\phi_0\cdots \phi_{L}}(1).
\end{align*}
For any positive random variable $X$ with density $d\mathbb P_X$, we have
\[
d\mathbb P_X(t) = t^{-1}d\mathbb P_{\log(X)}(\log(t)).
\]
Hence,
\begin{align*}
\mathrm{Den}_{\phi_0\cdots \phi_{L}}(1) &= \mathrm{Den}_{\sum_{\ell=0}^L\log \phi_\ell}(0)
\end{align*}
is proportional to the density of a sum of independent random variables evaluated at $\log(1)=0$. Further, recalling \eqref{E:gamma-MGF}, we find by Fourier inversion that
\[
\frac{1}{2\pi}\int_{-\infty}^\infty \exp\left[\Phi(t)\right] dt
\]
equals
\begin{align*}
\prod_{\ell=1}^L\left[\Gamma\left(\frac{N_\ell}{2}+k+1\right)\right]^{-1}\left[\Gamma\left(\frac{P}{2}+1\right)\right]^{-1}\frac{\norm{\theta_*}^2}{4M}\G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{{\bf N}}{2}+k,\frac{P}{2}},
\end{align*}
where
\begin{align}
\notag \Phi(t)&= -it\log 1-it\log\lr{\frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}}+\log\lr{\frac{\Gamma\lr{\frac{P}{2}+1-it}}{\Gamma\lr{\frac{P}{2}+1}}}\\
&+\sum_{\ell=1}^L\left\{-it\log\lr{\frac{2\sigma^2}{N_\ell}}+\log\lr{\frac{\Gamma\lr{\frac{N_\ell}{2}+k+1-it}}{\Gamma\lr{\frac{N_\ell}{2}+k+1}}} \right\}.
\end{align}
\end{proof}
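The Fourier-inversion step can be checked numerically in the simplest case of a single Gamma factor. The sketch below (with illustrative parameters $k=3$, $\theta=0.8$) integrates the characteristic function \eqref{E:gamma-MGF} along the real line and recovers $\mathrm{Den}_{\phi}(1)$:

```python
import mpmath as mp

mp.mp.dps = 25
k, theta = mp.mpf(3), mp.mpf('0.8')

def char(t):
    # E[exp(-i t log phi)] for phi ~ Gamma(k, theta), cf. (E:gamma-MGF)
    return mp.exp(-1j * t * mp.log(theta) + mp.loggamma(k - 1j * t) - mp.loggamma(k))

# Fourier inversion: (1/2 pi) int char(t) dt = Den_{log phi}(0) = Den_phi(1)
inv = mp.quad(char, [-mp.inf, mp.inf]) / (2 * mp.pi)
density = mp.exp(-1 / theta) / (mp.gamma(k) * theta**k)  # Gamma(k, theta) density at 1
print(inv, density)
```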
Note that \eqref{eq:G-contour} expresses $G$ as a contour integral of the meromorphic function $\exp(\Phi)$ with poles at
\[
z=-i\lr{\frac{m}{2}+k+1+\nu},\qquad \nu \in \mathbb N,\, m\in \set{N_1,\ldots, N_L},
\]
as well as at $z=-i\lr{\frac{P}{2}+1+\nu}$, $\nu\in\mathbb N$.
Our goal is to evaluate the contour integral representation \eqref{eq:G-contour} for the Meijer-G function using the Laplace method. To do so, we will use the following standard procedure:
\begin{enumerate}
\item Contour deformation: deform $\mathcal C$ into a union of several contours $\mathcal C_1\cup\cdots \cup \mathcal C_R$. The contours $\mathcal C_1,\mathcal C_R$ are non-compact, whereas $\mathcal C_2,\ldots, \mathcal C_{R-1}$ do not extend to infinity. We will need to choose $\mathcal C_2,\ldots, \mathcal C_{R-1}$ so that exactly one of them passes through what will turn out to be the dominant critical point $\zeta_*$ of $\Psi$ and does so in the direction of steepest descent. Moreover, on the contour $\mathcal C_2$, the imaginary part of the phase will be constant (in fact equal to $0$).
\item Localization to compact domain of integration: Show that the integrand $\exp\left[N\Psi(z)\right]$ is integrable and exponentially small in $N$ on the contours $\mathcal C_1,\mathcal C_R$. In particular, for any $K>0$, we will find that modulo errors of size $O(e^{-KN})$ we may therefore focus on the integral over $\mathcal C_2,\ldots, \mathcal C_{R-1}$. The ability to choose $K$ will be important since the entire integral is exponentially small in $N$.
\item Computing derivatives of $\Psi$ at $\zeta_*$: Now that we have reduced the integral \eqref{eq:G-contour} to a compact domain of integration it remains only to check that the critical point is non-degenerate, to compute $\Psi(\zeta_*), \frac{d}{dz}\Psi(\zeta_*),\frac{d^2}{dz^2}\Psi(\zeta_*)$, and to apply the Laplace method.
\end{enumerate}
We now proceed to give the details in the case when
\[
L\text{ is fixed},\quad N:=\min\set{N_0, P, N_\ell}\rightarrow \infty,\quad \frac{P}{N_0}\rightarrow \alpha_0\in (0,1),\quad \frac{P}{N}\rightarrow \alpha\in (0,\infty).
\]
This is a generalization of case (a) of Theorem \ref{thm:logG}. To proceed, we make the change of variables $t\mapsto Nt$ in \eqref{eq:G-contour} to get that
\begin{align}
\notag &\frac{\norm{\theta_*}^2}{4M}\G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{{\bf N}}{2}+k,\frac{P}{2}}\left[\Gamma\left(\frac{P}{2}+1\right)\right]^{-1}\prod_{\ell=1}^L\left[\Gamma\left(\frac{N_\ell}{2}+k+1\right)\right]^{-1}\\
&\qquad = \frac{N}{2\pi}\int_{\mathcal C} \exp\left[N\Psi(z)\right]dz,
\end{align}
where
\begin{align}
\notag \Psi(z)&= -iz\log\lr{\frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}}+\frac{1}{N}\log\lr{\frac{\Gamma\lr{\frac{P}{2}+1-iNz}}{\Gamma\lr{\frac{P}{2}+1}}}\\
\label{E:psi-def} &+\sum_{\ell=1}^L\left\{-iz\log\lr{\frac{2\sigma^2}{N_\ell}}+\frac{1}{N}\log\lr{\frac{\Gamma\lr{\frac{N_\ell}{2}+k+1-iNz}}{\Gamma\lr{\frac{N_\ell}{2}+k+1}}} \right\}.
\end{align}
With this rescaling, we now deform the contour of integration. Fix constants $\delta,T>0$ (the value of $T$ will be determined by Lemma \ref{lem:Psi-est} and the value of $\delta$ by \eqref{eq:zeta-star2-def} and the sentence directly after it) and deform the contour $\mathcal C$ as follows
\[
\mathcal C\quad \mapsto \quad \bigcup_{j=0}^4\mathcal C_j,
\]
where
\begin{align*}
\mathcal C_0&=\mathcal C_0(T)=(-\infty,-T]\\
\mathcal C_1&=\mathcal C_1(T,C_\delta)=\text{linear interpolation from }-T\in \mathbb R\text{ to }-iC_\delta\in i\mathbb R\\
\mathcal C_2&= \mathcal C_2(C_\delta)=[-iC_\delta,iC_\delta]\\
\mathcal C_3&=\mathcal C_3(T,C_\delta)=\text{linear interpolation from }iC_\delta\in i\mathbb R\text{ to }T\in \mathbb R\\
\mathcal C_4&=\mathcal C_4(T)=[T,\infty),
\end{align*}
where
\[
C_\delta := \delta -\min\set{\frac{P}{2N}, \frac{N_1}{2N}, \ldots,\frac{N_L}{2N}}.
\]
\begin{figure}
\centering
\includegraphics[scale=.2]{Contour-C.jpeg}
\caption{Deformed contour of integration for the proof of Theorem \ref{thm:logG}.}
\label{fig:C-contour}
\end{figure}
See Figure \ref{fig:C-contour}. In order to evaluate the integral over each $\mathcal C_j$, it will be convenient to introduce
\[
\Psi(z)=\sum_{\ell=0}^L\Psi_\ell(z)
\]
where
\begin{equation}\label{eq:Psi-ell-def}
\Psi_\ell(z)=\begin{cases}
-iz\log\lr{\frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}}+\frac{1}{N}\log\lr{\frac{ \Gamma\lr{\frac{P}{2}+1-iNz}}{ \Gamma\lr{\frac{P}{2}+1}}},&\quad \ell = 0\\
-iz\log\lr{\frac{2\sigma^2}{N_\ell}}+\frac{1}{N}\log\lr{\frac{ \Gamma\lr{\frac{N_\ell}{2}+k+1-iNz}}{ \Gamma\lr{\frac{N_\ell}{2}+k+1}}},&\quad \ell = 1,\ldots, L
\end{cases}.
\end{equation}
The following Lemma allows us to throw away the contribution to \eqref{eq:G-contour-1} coming from $\mathcal C_0,\mathcal C_4$.
\begin{lemma}\label{lem:Psi-est}
There exist $c,T_0>0$ such that
\begin{equation}\label{eq:Psi-est-1}
\sup_{\substack{\abs{z}>T\\ z\in \mathbb R}} \frac{\Re \Psi(z)}{1+\abs{z}} \leq - c,\qquad \forall T\geq T_0.
\end{equation}
Moreover, for any $T,\delta>0$, writing
\begin{equation}\label{eq:S-def}
S_{N,\delta,T}:=\set{z\in \mathbb C~|~ \abs{z}< T, \Im(z) >C_\delta},
\end{equation}
there exists $C>0$ such that
\begin{equation}\label{eq:Psi-est-2}
\sup_{z\in S_{N,\delta, T}}\max\set{\abs{\Psi(z)},\abs{ \frac{d}{dz}\Psi(z)}} \leq C.
\end{equation}
\end{lemma}
\begin{proof}
To show both \eqref{eq:Psi-est-1} and \eqref{eq:Psi-est-2} we will use the asymptotic expansion
\begin{equation}\label{eq:Gamma-exp-pf}
\log \Gamma\lr{z} \sim \lr{z-\frac{1}{2}}\log(z) - z + \text{const}+O\lr{\abs{z}^{-1}},\quad \text{as }\abs{z}\rightarrow \infty,
\end{equation}
which holds uniformly on sets of the form $\abs{\arg(z)}< \pi - \epsilon$ for a fixed $\epsilon >0$. With $\Psi_\ell$ defined in \eqref{eq:Psi-ell-def}, we find for each $\ell=1,\ldots, L$ that uniformly over $z\in \mathbb R$
\begin{align}
\notag \Psi_\ell(z)&=-iz\log\lr{\frac{2\sigma^2}{N_\ell}} +\frac{1}{N}\lr{\log \Gamma\lr{\frac{N_\ell}{2}+k+1-iNz}-\log \Gamma\lr{\frac{N_\ell}{2}+k+1}}\\
\notag &=-iz\left[-1+\log\lr{\frac{2\sigma^2}{N_\ell}} + \log\lr{\frac{N_\ell}{2}+k+1-iNz}\right]\\
\notag &\qquad +\frac{1}{N}\lr{\frac{N_\ell+1}{2}+k+1}\log\lr{1-\frac{iNz}{\frac{N_\ell}{2}+k+1}} + O(N^{-1})\\
\notag &=-iz\left[-1+\log(\sigma^2) + \log\lr{1-\frac{2i N z}{N_\ell}}\right] + \frac{N_\ell}{2N}\log\lr{1-\frac{2iNz}{N_\ell}\cdot\frac{1}{1+\frac{2(k+1)}{N_\ell}}}+O(N^{-1})\\
\notag &=-iz\left[-1+\log(\sigma^2)+\log\lr{1-\frac{2i N z}{N_\ell}}\right] + \frac{N_\ell}{2N}\log\lr{1-\frac{2iNz}{N_\ell}}+O(N^{-1}).
\end{align}
Obtaining a similar expression for $\ell=0$ and summing over $\ell$ proves \eqref{eq:Psi-est-2}. Moreover, for $\ell=1,\ldots, L$ we obtain uniformly on $z\in \mathbb R$
\begin{align*}
\Re\Psi_\ell(z) = z\arg\lr{1-\frac{2iNz}{N_\ell}} + \frac{N_\ell}{2N}\log\abs{1-\frac{2iNz}{N_\ell}}+O(N^{-1}).
\end{align*}
Note that for $T_0$ sufficiently large we have
\begin{equation}\label{eq:RePsi-bound}
\abs{z}> T_0\quad \Rightarrow \quad \arg\lr{1-\frac{2iNz}{N_\ell}} \sgn{z} \leq -\frac{\pi}{4}\quad \Rightarrow \quad \Re\Psi_\ell(z) \leq -\frac{\pi}{8}\abs{z}.
\end{equation}
A similar analysis applies to $\ell=0$ and completes the proof of \eqref{eq:Psi-est-1}.
\end{proof}
\noindent Estimate \eqref{eq:Psi-est-1} in the previous Lemma shows that, for any $K,\delta >0$ there exists $T>0$ such that
\begin{equation}\label{eq:G-contour-1}
\frac{N}{2\pi}\int_{\mathcal C} \exp\left[N\Psi(z)\right] dz = \frac{N}{2\pi}\int_{\mathcal C_1(T,C_\delta)\cup \mathcal C_2(C_\delta) \cup \mathcal C_3(T,C_\delta)} \exp\left[N\Psi(z)\right] dz + O(e^{-KN}).
\end{equation}
To evaluate the right hand size of \eqref{eq:G-contour-1}, we derive the following uniform asymptotic expansion for $\Psi(z)$.
\begin{lemma}
Fix $\delta, T>0$ and define $S_{N,\delta, T}$ as in \eqref{eq:S-def}. Then, uniformly over $S_{N,\delta, T}$, we have
\[
\Psi(z) = \widehat{\Psi}(z)+\frac{1}{N}\twiddle{\Psi}(z)+O(N^{-2}),
\]
where
\begin{align}
\label{eq:Psihat-def} \widehat{\Psi}(z):&=iz\left[\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} + L + 1 - \log\lr{1-\frac{2iNz}{P}} - \sum_{\ell=1}^L \log\lr{1-\frac{2iNz}{N_\ell}}\right]\\
\notag &+\frac{P}{2N}\log\lr{1-\frac{2iNz}{P}} +\sum_{\ell=1}^L \frac{N_\ell}{2N} \log\lr{1-\frac{2iNz}{N_\ell}}\\
\label{eq:Psitilde-def} \twiddle{\Psi}(z):&=\frac{1}{2}\log\lr{1-\frac{2iNz}{P}} + \lr{k+\frac{1}{2}}\sum_{\ell=1}^L \log\lr{1-\frac{2iNz}{N_\ell}}.
\end{align}
\end{lemma}
\begin{proof}
This follows directly from expanding $\Psi_\ell(z)$ using \eqref{eq:Gamma-exp} for each $\ell=0,\ldots, L.$
\end{proof}
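The expansion can also be probed numerically. The sketch below (with assumed illustrative values $L=2$, $k=0$, $\sigma^2=1$, $\alpha_0=1/2$, $\norm{\theta_*}^2=1$, all widths and $P$ equal to $N$, at the real point $z=0.3$) compares the exact $\Psi$, computed via \texttt{loggamma}, with $\widehat{\Psi}+\twiddle{\Psi}/N$, and the residual shrinks faster than $1/N$ as $N$ doubles:

```python
import mpmath as mp

mp.mp.dps = 40
L, k = 2, 0
sigma2, alpha0, theta2 = 1.0, 0.5, 1.0  # illustrative constants
z = mp.mpf('0.3')

def Psi(N):
    """Exact Psi(z) from its definition, with P = N_1 = ... = N_L = N."""
    P = N
    val = -1j * z * mp.log(2 * sigma2 * alpha0 / (P * theta2)) \
        + (mp.loggamma(P / 2 + 1 - 1j * N * z) - mp.loggamma(P / 2 + 1)) / N
    for _ in range(L):
        val += -1j * z * mp.log(2 * sigma2 / N) \
            + (mp.loggamma(N / 2 + k + 1 - 1j * N * z) - mp.loggamma(N / 2 + k + 1)) / N
    return val

def Psi_approx(N):
    """hat(Psi)(z) + tilde(Psi)(z)/N for P = N_ell = N."""
    u = mp.log(1 - 2j * z)  # log(1 - 2 i N z / N_ell) with N_ell = N
    hat = 1j * z * (mp.log(theta2 / (sigma2 ** (L + 1) * alpha0)) + L + 1 - (L + 1) * u) \
        + (L + 1) * u / 2
    tilde = u / 2 + (k + mp.mpf(1) / 2) * L * u
    return hat + tilde / N

r1 = abs(Psi(200) - Psi_approx(200))
r2 = abs(Psi(400) - Psi_approx(400))
print(r1, r2)  # residuals at N = 200 and N = 400; roughly a factor 4 apart
```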
Combining the preceding Lemma with \eqref{eq:G-contour-1} shows that for any $K>0$ there exists $T,\delta>0$ so that
\begin{align}\label{eq:G-contour-2}
\notag &\frac{N}{2\pi}\int_{\mathcal C} \exp\left[N\Psi(z)\right] dz\\
&\qquad = \frac{N}{2\pi}\int_{\mathcal C_1(T,C_\delta)\cup \mathcal C_2(C_\delta) \cup \mathcal C_3(T,C_\delta)} \exp\left[N\widehat{\Psi}(z)+\twiddle{\Psi}(z)\right] dz\lr{1 + O(N^{-1})} + O(e^{-KN})
\end{align}
Moreover, differentiating \eqref{eq:Psihat-def} yields
\begin{align*}
\frac{d}{dz}\widehat{\Psi}(z) &= i\left[\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} - \log\lr{1-\frac{2iNz}{P}}-\sum_{\ell=1}^L \log\lr{1-\frac{2iNz}{N_\ell}}\right].
\end{align*}
Taking the real part of both sides shows that
\[
\Re\lr{\frac{d}{dz}\widehat{\Psi}(z)} = \arg\lr{1-\frac{2iNz}{P}}+\sum_{\ell=1}^L \arg\lr{1-\frac{2iNz}{N_\ell}}.
\]
Since all the arguments have the same sign, we conclude that the real part of $\frac{d}{dz}\widehat{\Psi}$ vanishes only when $z = i\zeta$ for $\zeta\in \mathbb R$. Further,
\[
\Im\lr{\frac{d}{d\zeta}\widehat{\Psi}(i\zeta)} =-\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} +\log\abs{1+\frac{2N\zeta}{P}}+\sum_{\ell=1}^L \log\abs{1+\frac{2N\zeta}{N_\ell}},
\]
which vanishes on $\mathcal C_2$ if and only if $\zeta = \zeta_*$ is the unique solution to \begin{equation}\label{eq:Psi-crit-2}
\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0} = \lr{1+\frac{2N\zeta_*}{P}}\prod_{\ell=1}^L \lr{1+\frac{2N\zeta_*}{N_\ell}}.
\end{equation}
Indeed, observe that the right hand side of the equation on the preceding line increases monotonically in $\zeta_*$ from $0$ to $+\infty$ as $\zeta_*$ varies in $(-\min\set{\frac{P}{2N},\frac{N_1}{2N},\ldots, \frac{N_L}{2N}},\infty)$, so a unique solution exists.
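As a numerical aside (an illustrative sketch for the reader, not part of the proof), the monotonicity just noted means \eqref{eq:Psi-crit-2} can be solved by bisection. Specializing to $N_\ell = N$ and $P=\alpha N$, the equation becomes $\nu = (1+2\zeta_*/\alpha)(1+2\zeta_*)^L$ with $\nu = \norm{\theta_*}^2/(\sigma^{2(L+1)}\alpha_0)$; the function name and parameters below are ours:

```python
def zeta_star(nu, alpha, L, tol=1e-12):
    # Unique root of nu = (1 + 2 z / alpha) * (1 + 2 z)**L.  The right
    # hand side increases monotonically on (-min(alpha, 1)/2, inf), so
    # bisection converges to the unique solution.
    rhs = lambda z: (1 + 2 * z / alpha) * (1 + 2 * z) ** L
    lo, hi = -0.5 * min(alpha, 1.0) + 1e-9, 1.0
    while rhs(hi) < nu:        # grow the bracket until it contains the root
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid) < nu else (lo, mid)
    return 0.5 * (lo + hi)
```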
Computing the correction to the saddle point, we differentiate
\begin{align*}
\Im\lr{\frac{d}{d\zeta}\lr{\widehat{\Psi}(i\zeta) + \frac{1}{N}\twiddle{\Psi}(i\zeta)}} &= -\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} +\log\abs{1+\frac{2N\zeta}{P}}+\sum_{\ell=1}^L \log\abs{1+\frac{2N\zeta}{N_\ell}} \\
&\qquad + \frac{1}{N}\left[\frac{1}{2\zeta + P/N} + \sum_{\ell=1}^L\frac{N}{N_\ell}\frac{1+2k}{1+2\zeta N/N_{\ell}}\right]
\end{align*}
and set the derivative to zero at $\zeta = \zeta_* + \zeta_{**}/N$, giving
\begin{align*}
0 = \frac{1}{N}\left[\frac{2\zeta_{**}}{2\zeta_*+P/N} + \frac{1}{2\zeta_*+P/N} + \sum_{\ell=1}^L \lr{\frac{2\zeta_{**}}{2\zeta_*+N_\ell/N} + \frac{N}{N_\ell}\frac{1+2k}{1+2\zeta_* N/N_{\ell}}}\right].
\end{align*}
This is solved by
\begin{align}\label{eq:zeta-star2-def}
\zeta_{**} &= -\frac{1}{2}\frac{[2\zeta_*+P/N]^{-1} + (1+2k)\sum_\ell[2\zeta_* + N_\ell/N]^{-1}}{[2\zeta_*+P/N]^{-1} + \sum_\ell[2\zeta_* + N_\ell/N]^{-1}}.
\end{align}
Hence, by choosing $\delta$ sufficiently small, we can ensure that $\mathcal C_2$ contains the unique solution to \eqref{eq:Psi-crit-2}. Further, a direct computation shows that
\[
\frac{d^2}{d\zeta^2} \widehat{\Psi}(i\zeta)\bigg|_{\zeta = \zeta_*} = -\left[\frac{1}{\frac{P}{2N}+\zeta_*}+\sum_{\ell=1}^L \frac{1}{\frac{N_\ell}{2N}+\zeta_*}\right] < 0,
\]
proving that $i\zeta_*$ is a non-degenerate critical point of $\widehat{\Psi}$.
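As a sanity check on the $1/N$ correction (an illustrative numerical sketch under the specialization $N_\ell=N$, $P=\alpha N$; not part of the proof), one can solve the corrected stationarity equation directly and compare the shift of its root against the closed form \eqref{eq:zeta-star2-def}:

```python
import math

def bisect(f, lo, hi, tol=1e-14):
    # Simple bisection; assumes f(lo) < 0 < f(hi) and f monotone.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

nu, alpha, L, k, N = 3.0, 0.5, 2, 1, 10**6

# Leading-order stationarity: log(nu) = log(1 + 2z/alpha) + L log(1 + 2z)
F0 = lambda z: math.log(1 + 2 * z / alpha) + L * math.log(1 + 2 * z) - math.log(nu)
zs = bisect(F0, -0.2, 10.0)

# 1/N-corrected stationarity equation, with N_ell = N and P = alpha N
F1 = lambda z: F0(z) + (1 / N) * (1 / (2 * z + alpha) + L * (1 + 2 * k) / (2 * z + 1))
z_root = bisect(F1, -0.2, 10.0)

# Predicted correction zeta_** from the closed form
a, b = 1 / (2 * zs + alpha), 1 / (2 * zs + 1)
zss = -0.5 * (a + (1 + 2 * k) * L * b) / (a + L * b)
# N * (z_root - zs) should agree with zss up to O(1/N)
```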
Defining $\zeta_0=\zeta_* + \zeta_{**}/N$ and
\begin{align*}
\Psi_0(\zeta_0) &= \widehat{\Psi}(i\zeta_0) + \frac{1}{N}\twiddle{\Psi}(i\zeta_0),
\end{align*}
the Laplace method gives
\begin{align*}
\log \mathrm{Den}_{\sum_{\ell=0}^L\log \phi_\ell}(0) &= \log\lr{\frac{N}{2\pi}\int_{\mathcal C_1(T,C_\delta)\cup \mathcal C_2(C_\delta)\cup \mathcal C_3(T,C_\delta)}\exp\left[N\Psi(z)\right]dz}\\
&= N\Psi_0(\zeta_0) + \frac{1}{2}\log\lr{\frac{N}{2\pi\abs{\Psi_0''(\zeta_0)}}}\\
&= -N\zeta_0\left[\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} + L + 1 - \log\lr{1+2\zeta_0\frac{N}{P}} - \sum_{\ell=1}^L \log\lr{1+2\zeta_0\frac{N}{N_\ell}}\right]\\
&\qquad + \frac{N}{2}\left[\frac{P}{N} \log\lr{1+2\zeta_0\frac{N}{P}} + \sum_{\ell=1}^L \frac{N_\ell}{N} \log\lr{1+2\zeta_0\frac{N}{N_\ell}}\right]\\
&\qquad + \frac{1}{2}\log\lr{1+2\zeta_0\frac{N}{P}} + \lr{k+\frac{1}{2}}\sum_{\ell=1}^L \log\lr{1+2\zeta_0\frac{N}{N_\ell}}\\
&\qquad + \frac{1}{2}\log(N) - \frac{1}{2}\log(2\pi) - \frac{1}{2}\log\left[\frac{2}{2\zeta_0+P/N} + \sum_{\ell=1}^L\frac{2}{2\zeta_0+N_\ell/N}\right].
\end{align*}
Taking
\[
N_\ell = N, \quad P = \alpha N
\]
and observing that
\[
\log\lr{\frac{\norm{\theta_*}^2}{\sigma^{2(L+1)}\alpha_0}} = L \log(1+2\zeta_*) + \log\lr{1+\frac{2\zeta_*}{\alpha}}
\]
simplifies the density to
\begin{align*}
\log \mathrm{Den}_{\sum_{\ell=0}^L\log \phi_\ell}(0) &= \frac{N}{2}\left[\alpha\lr{ \log\lr{1+\frac{2\zeta_*}{\alpha}} - \frac{2\zeta_*}{\alpha}}+ L\lr{\log\lr{1+2\zeta_*} - 2\zeta_*}\right] + \frac{1}{2}\log(N) - \frac{1}{2}\log(\pi)\\
&\qquad + \frac{1}{2}\log\lr{1+\frac{2\zeta_*}{\alpha}} + \lr{k+\frac{1}{2}}L \log\lr{1+2\zeta_*} - \frac{1}{2}\log\lr{\frac{1}{\alpha+2\zeta_*} + \frac{L}{1+2\zeta_*}}.
\end{align*}
Solving for the $G$-function given \eqref{eq:G-gamma}, we conclude that
\begin{align*}
\log G &= \log \mathrm{Den}_{\sum_{\ell=0}^L\log \phi_\ell}(0) + L \log\left[\Gamma\left(\frac{N}{2}+k+1\right)\right] + \log\left[\Gamma\left(\frac{N\alpha}{2}+1\right)\right] \\
&\qquad - \log \frac{\norm{\theta_*}^2}{\alpha_0} - L\log\left(\frac{N}{2\sigma^2}\right) -\log\left(\frac{N\alpha}{2\sigma^2}\right)\\
&= \frac{N}{2}\left\{\alpha\left[\log\lr{\frac{N\alpha}{2}} + \log\lr{1+\frac{2\zeta_*}{\alpha}} - \lr{1+\frac{2\zeta_*}{\alpha}}\right]+ L\left[\log\lr{\frac{N}{2}} + \log\lr{1+2\zeta_*} - (1+2\zeta_*)\right]\right\}\\
&\qquad + \frac{L(2k-1)}{2}\left[\log\lr{\frac{N}{2}} + \log(1+2\zeta_*)\right] + \frac{L}{2}\log(2\pi) - \frac{1}{2}\log\lr{1 + \frac{\alpha+2\zeta_*}{1+2\zeta_*}L}.
\end{align*}
The leading order part of this expression reproduces \eqref{eq:logG-Lfinite}, and taking differences between the value at a fixed $k$ and at $k=0$ gives \eqref{eq:DlogG-Lfinite}. Note that at $L=0$, direct computation gives
\begin{align*}
\log G &= -\frac{\norm{\theta_*}^2}{\alpha_0}\frac{N\alpha}{2\sigma^2} + \frac{N\alpha}{2}\left(\log\frac{N\alpha}{2\sigma^2} + \log\frac{\norm{\theta_*}^2}{\alpha_0}\right),
\end{align*}
which is indeed reproduced by the above. The derivations of the formulas in cases (b) and (c) of Theorem \ref{thm:logG} are very similar to that of case (a), so we indicate only the salient differences, starting with case (b). We will in fact consider the following somewhat more general regime:
\[
N:=\min \set{N_1,\ldots, N_L}\rightarrow \infty,\quad P,N_0,N,L \rightarrow \infty,\quad \frac{P}{N_0}\rightarrow \alpha_0, \quad \sum_{\ell=1}^L \frac{1}{N_\ell} \rightarrow \lambda_{\mathrm{prior}},
\]
with $\alpha_0\in (0,1)$ and $\lambda_{\mathrm{prior}}\in (0,\infty)$. Our starting point is again Lemma \ref{lem:contour}. The first modification in the proof is that we must redefine the contours $\mathcal C_j,\, j=0,\ldots, 4$ by replacing
\[
T\mapsto T/L.
\]
Next, Lemma \ref{lem:Psi-est} is replaced by the following.
\begin{lemma}\label{lem:Psi-estb}
There exist $c,T_0>0$ such that
\begin{equation}\label{eq:Psi-est-1b}
\sup_{\substack{\abs{z}>T/L\\ z\in \mathbb R}} \frac{\Re \Psi(z)}{1+\abs{z}} \leq - c,\qquad \forall T\geq T_0.
\end{equation}
Moreover, for any $T,\delta>0$, writing
\begin{equation}\label{eq:S-defb}
S_{N,L,\delta,T}:=\set{z\in \mathbb C~|~ \abs{z}< T/L, \Im(z) >C_\delta},
\end{equation}
there exists $C>0$ such that
\begin{equation}\label{eq:Psi-est-2b}
\sup_{z\in S_{N,L,\delta, T}}\max\set{\abs{\Psi(z)},\abs{ \frac{d}{dz}\Psi(z)}} \leq C.
\end{equation}
\end{lemma}
The proof of Lemma \ref{lem:Psi-estb} is essentially identical to that of Lemma \ref{lem:Psi-est}, the main difference being that the analogous estimates in \eqref{eq:RePsi-bound} must now be summed from $\ell=0$ to $\ell=L$, which involves a growing number of terms. In each term, for $T$ sufficiently large, the real part of $\Psi_\ell(z)$ is bounded above by $-c(1+\abs{z})$ as soon as $\abs{z} > T/L$. As a result, the previous Lemma shows that, for any $K,\delta >0$ there exists $T>0$ such that
\begin{equation}
\frac{N}{2\pi}\int_{\mathcal C} \exp\left[N\Psi(z)\right] dz = \frac{N}{2\pi}\int_{\mathcal C_1(T/L,C_\delta)\cup \mathcal C_2(C_\delta) \cup \mathcal C_3(T/L,C_\delta)} \exp\left[N\Psi(z)\right] dz + O(e^{-KN}).
\end{equation}
Next, exactly as in the derivation of \eqref{eq:Psi-crit-2} and \eqref{eq:zeta-star2-def}, we find that $\Psi$ has a unique critical point $\zeta_*+\frac{1}{N}\zeta_{**}+O(N^{-2})$ along $\mathcal C_2$ and no critical points along $\mathcal C_1(T/L,C_\delta)$ and $\mathcal C_3(T/L,C_\delta)$. Moreover, recalling that $\sigma^2=1$ in this regime, a direct inspection of \eqref{eq:Psi-crit-2} and \eqref{eq:zeta-star2-def} reveals that there is a constant $C_*>0$ so that
\[
\abs{\zeta_*+\frac{1}{N}\zeta_{**}}\leq C_*\lr{\frac{1}{L}+\frac{1}{N}}.
\]
This allows us to choose $\delta$ in the definition of $C_\delta$ to be independent of $L,N,P,N_0$. The remainder of the derivation is a direct computation of the value, first derivative, and second derivative of $\Psi$ at its critical point, followed by a straightforward application of the Laplace method. Namely, the critical point of $\Psi$ on the contour $\mathcal C_2$ takes the form
\begin{align*}
\frac{d}{dz}\Psi(i\zeta) =0\quad \Longleftrightarrow \quad \zeta = \zeta_* +\frac{1}{N} \zeta_{**}+O(N^{-2}),
\end{align*}
where the critical point is given in terms of $\lambda_{\mathrm{prior}} = \sum_{\ell=1}^L N_\ell^{-1}$:
\begin{align*}
\zeta_*&=0\\
\zeta_{**} &= \frac{1}{2}\left[\frac{1}{\lambda_{\mathrm{prior}}}\log\lr{\frac{\norm{\theta_*}^2}{\alpha_0}} - (2k+1)\right].
\end{align*}
The corresponding critical value is given by
\begin{align*}
\Psi\lr{i\lr{\zeta_*+\frac{1}{N}\zeta_{**}}} &= -\frac{\lambda_{\mathrm{prior}}}{4N}\lr{2k+1 - \frac{1}{\lambda_{\mathrm{prior}}}\log \frac{\norm{\theta_*}^2}{\alpha_0}}^2,
\end{align*}
and the Hessian is, to leading order,
\begin{align*}
\frac{d^2}{d\zeta^2}\Psi\lr{i\lr{\zeta_*+\frac{1}{N}\zeta_{**}}} & = - 2N\lambda_{\mathrm{prior}} < 0,
\end{align*}
showing that $i\zeta_{**}/N$ is a non-degenerate critical point.
Including the terms from the Gamma function prefactors, we obtain the $G$ function
\begin{align*}
\log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+k, \frac{P}{2}} &= \sum_{\ell=1}^L \frac{N_\ell}{2}\left[\log\lr{\frac{N_\ell}{2}}-1\right] + \frac{P}{2}\left[\log\lr{\frac{P}{2}}-1\right] + \frac{L}{2}\log(2\pi)\\
&\quad + \lr{k-\frac{1}{2}}\sum_{\ell=1}^L \log\lr{\frac{N_\ell}{2}} - \frac{1}{2}\log\lr{\frac{P}{2}} + \lr{k-\frac{1}{2}}\log\lr{\frac{\norm{\theta_*}^2}{\alpha_0}}\\
&\quad - \frac{1}{12}\lambda_{\mathrm{prior}} - \frac{1}{4\lambda_{\mathrm{prior}}}\left[\log\lr{\frac{\norm{\theta_*}^2}{\alpha_0}}\right]^2 - \frac{1}{2}\log(2\lambda_{\mathrm{prior}}).
\end{align*}
When specialized to the case when $N_1=\cdots=N_L=N$, these are the results stated in \eqref{eq:logG-LN} and \eqref{eq:DlogG-LN}. Finally, for case (c), our results apply to the regime where
\[
N:=\min \set{N_1,\ldots, N_L}\rightarrow \infty,\quad P,N_0,N \rightarrow \infty,\quad \frac{P}{N_0}\rightarrow \alpha_0, \quad P\sum_{\ell=1}^L \frac{1}{N_\ell} \rightarrow \lambda_{\mathrm{post}},
\]
with $\alpha_0\in (0,1)$ and $\lambda_{\mathrm{post}}\in (0,\infty)$. The analysis in this case mirrors that of case (b) almost exactly, except that we use the variable substitution $t\mapsto Pt$ when changing from $\Phi$ to $\Psi$ and use the fact that both $P/N$ and $L/N$ vanish. Here, we record only the result:
\begin{align*}
\log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+k, \frac{P}{2}} &= \frac{P}{2}\left[\log\lr{\frac{P}{2}}-1 + \log(1+t_*) -t_*\lr{1 + \frac{\lambda_{\mathrm{post}} t_*}{2}}\right] - \log P \\
&\quad + \frac{L}{2}\log(2\pi) + \sum_{\ell=1}^L \frac{N_\ell}{2}\left[\log\lr{\frac{N_\ell}{2}}-1\right] + \frac{2k-1}{2}\sum_{\ell=1}^L \log\lr{\frac{N_\ell}{2}} \\
&\quad - \frac{1}{2}\log\lr{\lambda_{\mathrm{post}} + \frac{1}{1+t_*}} + \frac{1}{2}\left[(2k+1)\lambda_{\mathrm{post}} t_* + \log(1+t_*)\right],
\end{align*}
where $t_*$ is the unique solution to
\[
e^{\lambda_{\mathrm{post}} t_*}\lr{1+t_*}=\frac{\norm{\theta_*}^2}{\alpha_0}.
\]
When specialized to the case when $N_1=\cdots=N_L=N$, this gives the results stated in \eqref{eq:logG-PLN} and \eqref{eq:DlogG-PLN}. This completes the proof of Theorem \ref{thm:logG}. \hfill $\square$
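In practice the root $t_*$ is straightforward to compute; the sketch below (an illustrative helper we introduce for the reader, not part of the proof) finds it by bisection, using that $e^{\lambda_{\mathrm{post}}t}(1+t)$ increases monotonically from $0$ to $+\infty$ on $(-1,\infty)$:

```python
import math

def t_star(nu, lam_post, tol=1e-12):
    # Unique root of exp(lam_post * t) * (1 + t) = nu on (-1, inf);
    # the left-hand side is strictly increasing there for lam_post > 0.
    f = lambda t: math.exp(lam_post * t) * (1 + t) - nu
    lo, hi = -1.0 + 1e-9, 1.0
    while f(hi) < 0:           # grow the bracket until it contains the root
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```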
\section{Proof of Theorem \ref{thm:bayesfeature}}\label{sec:bayesfeature-pf}
Consider the setup from Theorem~\ref{thm:Z-form}. We adopt the notation
\begin{align*}
z &= \frac{\norm{\theta_*}^2}{4M} = \frac{\nu}{\sigma^{2(L+1)}}\frac{P}{2}\lr{\frac{N}{2}}^L
\end{align*}
so that we have variance
\begin{align*}
\Var_\mathrm{post}[f(x)] &= \frac{\norm{x_\perp}^2}{2}\frac{\norm{\theta_*}^2}{z} \frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2}+1,\dots,\frac{N_L}{2}+1}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}}\\
&= \frac{\norm{x_\perp}^2}{2}\norm{\theta_*}^2 \frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}-1,\frac{N_1}{2},\dots,\frac{N_L}{2}}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}}.
\end{align*}
A Bayes-optimal $\sigma^2$ implies
\begin{align*}
\frac{\partial Z_\infty(0)}{\partial \sigma^2} &= 0 \implies \frac{d}{d z} \G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}} = 0.
\end{align*}
Using the identity
\begin{align*}
\frac{d}{dz}\left[z^{-b_1}\G{m,n}{p,q}{z}{\mathbf{a_p}}{\mathbf{b_q}}\right] = -z^{-1-b_1}\G{m,n}{p,q}{z}{\mathbf{a_p}}{b_1+1,b_2,\dots,b_q}
\end{align*}
which holds when $m \geq 1$, we find that
\begin{align*}
\frac{d}{dz} \G{m,n}{p,q}{z}{\mathbf{a_p}}{\mathbf{b_q}} &= \frac{1}{z}\left[b_1\G{m,n}{p,q}{z}{\mathbf{a_p}}{\mathbf{b_q}} - \G{m,n}{p,q}{z}{\mathbf{a_p}}{b_1+1, b_2, \dots, b_q} \right]
\end{align*}
and thus
\begin{align}\label{eq:sigmazero}
\frac{P}{2}\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}} &= \G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}+1,\frac{N_1}{2},\dots,\frac{N_L}{2}}.
\end{align}
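As an aside, the contiguous relation used above can be verified numerically. The sketch below assumes availability of the \texttt{mpmath} library, whose \texttt{meijerg} routine evaluates $G^{m,n}_{p,q}$; it checks the derivative identity for a $G^{2,0}_{0,2}$ instance at a test point:

```python
import mpmath as mp

mp.mp.dps = 30  # high working precision for the numerical derivative

def G(b_top, z):
    # G^{m,0}_{0,m}(z | - ; b_1, ..., b_m) via mpmath's meijerg
    return mp.meijerg([[], []], [list(b_top), []], z)

b = [mp.mpf('1.5'), mp.mpf('2.0')]
z0 = mp.mpf('0.7')
lhs = mp.diff(lambda t: G(b, t), z0)                    # d/dz G(z)
rhs = (b[0] * G(b, z0) - G([b[0] + 1, b[1]], z0)) / z0  # (b1 G - G(b1+1)) / z
```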
To show both directions of Theorem~\ref{thm:bayesfeature}, it hence suffices to show that to leading order in the large-$P$ limit,
\begin{align}
\label{eq:gratio}
\frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}-1,\frac{N_1}{2},\dots,\frac{N_L}{2}}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}} = \frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}+1,\frac{N_1}{2},\dots,\frac{N_L}{2}}} = \frac{2}{P},
\end{align}
which ensures variance
\begin{align*}
\Var_\mathrm{post}[f(x)] &= \lr{\frac{\norm{x_\perp}^2}{2}\norm{\theta_*}^2}\lr{\frac{2}{P}}= \frac{\norm{x_\perp}^2}{N_0} \frac{\norm{\theta_*}^2}{\alpha_0}.
\end{align*}
This matches the predictor variance $\nu \Sigma_\perp$ recovered above. Conversely, if the variance is given by $\nu \Sigma_\perp$ to leading order, \eqref{eq:gratio} implies that $\partial Z_\infty(0)/\partial \sigma^2 = 0$.
To show \eqref{eq:gratio}, we evaluate a saddle point approximation of the relevant $G$ function
\begin{align*}
\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}+k,\frac{N_1}{2},\dots,\frac{N_L}{2}}
\end{align*}
with constant $L, N_1, \dots, N_L$. Similarly to the proof of Lemma~\ref{lem:contour}, we use the density of the product of independent $\Gamma$ random variables
\begin{align*}
\phi_j \sim \begin{cases}\Gamma\left(\frac{N_j}{2}+1, \frac{2\sigma^2}{N_j}\right), &j=1,\dots,L\\ \Gamma\left(\frac{P}{2}+k+1, \frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}\right), &j=0\end{cases}.
\end{align*}
Note that unlike in Lemma~\ref{lem:contour}, these variables offset $P$ by $k$, not $N_\ell$. The density is thus
\begin{align*}
\mathrm{Den}_{\phi_0\cdots \phi_{L}}(1) &= \frac{\norm{\theta_*}^2}{4M}\G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{P}{2}+k,\frac{{\bf N}}{2}}\left[\Gamma\left(\frac{P}{2}+k+1\right)\right]^{-1}\prod_{\ell=1}^L\left[\Gamma\left(\frac{N_\ell}{2}+1\right)\right]^{-1}\\
&= \frac{1}{2\pi}\int_{\mathcal C} \exp\left[\Phi(z)\right]dz,
\end{align*}
where $\mathcal C\subseteq\mathbb C$ is the contour that runs along the real line from $-\infty$ to $\infty$ and
\begin{align*}
\notag \Phi(z)&= -iz\log\lr{\frac{2\sigma^2}{P}\frac{\alpha_0}{\norm{\theta_*}^2}}+\log\lr{\frac{\Gamma\lr{\frac{P}{2}+k+1-iz}}{\Gamma\lr{\frac{P}{2}+k+1}}}\\
&+\sum_{\ell=1}^L\left\{-iz\log\lr{\frac{2\sigma^2}{N_\ell}}+\log\lr{\frac{\Gamma\lr{\frac{N_\ell}{2}+1-iz}}{\Gamma\lr{\frac{N_\ell}{2}+1}}} \right\}.
\end{align*}
The fixed point equation $d\Phi(i\zeta)/d\zeta=0$ is solved by $\zeta_*$ satisfying
\begin{align*}
\sum_{\ell=1}^L \left[\psi\lr{\frac{N_\ell}{2}+\zeta_*+1} - \log\lr{\frac{N_\ell}{2}}\right] = \log\lr{\frac{\|\theta_*\|^2}{\alpha_0 \sigma^{2(L+1)}}},
\end{align*}
i.e., by $\zeta_* = O(1)$. Directly applying Laplace's method
\begin{align*}
\log \frac{1}{2\pi}\int_{\mathcal C} \exp\left[\Phi(z)\right]dz &= \Phi(z_*) - \frac{1}{2}\log(2\pi) - \frac{1}{2}\log\Phi''(z_*)
\end{align*}
and evaluating the ratios in \eqref{eq:gratio}, we find
\begin{align*}
\log \frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}-1,\frac{N_1}{2},\dots,\frac{N_L}{2}}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}} &= \log \frac{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2},\frac{N_1}{2},\dots,\frac{N_L}{2}}}{\G{L+1,0}{0,L+1}{z}{-}{\frac{P}{2}+1,\frac{N_1}{2},\dots,\frac{N_L}{2}}}\\
&= \log\lr{\frac{2}{P}} + O\lr{\frac{1}{P}},
\end{align*}
which we note is independent of $\sigma^2$ to leading order. That is, the large dataset overwhelms the prior, so every choice of $\sigma^2$ is optimal to leading order. This completes the proof.
\section{Proof of Corollary~\ref{cor:optevd}}
\label{sec:evidence}
In Corollary~\ref{cor:optevd}, we compute the Bayes-optimal neural network depth by maximizing Bayesian evidence~\cite{mackay1992bayesian}. A computation similar to the following is used to find Bayes-optimal parameters throughout the main text; we provide the proof of Corollary~\ref{cor:optevd} as an example. To evaluate $\partial \log Z / \partial \lambda_{\mathrm{prior}} = 0$ for the partition function given by Theorem~\ref{thm:Z-form}, we take the partition function with $N_1 = \dots = N_L = N$ obtained from case (b) of Theorem~\ref{thm:logG},
\begin{align*}
\log Z_\infty(0) &= \frac{P}{2}\log\lr{\frac{4\pi}{\|\theta_*\|^2}} + \frac{P}{2}\left[\log\lr{\frac{P}{2}}-1\right] -\frac{1}{2}\log\lr{\frac{P}{2}}\\
&\quad - \frac{1}{2}\log(2\lambda_{\mathrm{prior}}) - \frac{1}{4\lambda_{\mathrm{prior}}}\lr{\lambda_{\mathrm{prior}} + \log\lr{\frac{\|\theta_*\|^2}{\alpha_0}}}^2,
\end{align*}
and we evaluate the derivative with variable substitution $\nu = \|\theta_*\|^2/\alpha_0$, yielding
\begin{align*}
\frac{\partial \log Z_\infty(0)}{\partial \lambda_{\mathrm{prior}}} &= \frac{\log(\nu)^2-\lambda_{\mathrm{prior}}(2+\lambda_{\mathrm{prior}})}{4\lambda_{\mathrm{prior}}^2} = 0.
\end{align*}
Solving gives
\begin{align*}
\lambda_{\mathrm{prior},*} &= \sqrt{1 + \log^2 \nu} - 1 \geq 0.
\end{align*}
Moreover, the second derivative is
\begin{align*}
\frac{\partial^2 \log Z_\infty(0)}{\partial \lambda_{\mathrm{prior}}^2} &= \frac{\lambda_{\mathrm{prior}}-\log^2\nu}{2\lambda_{\mathrm{prior}}^3},
\end{align*}
which at $\lambda_{\mathrm{prior},*}$ is
\begin{align*}
\frac{\partial^2 \log Z_\infty(0)}{\partial \lambda_{\mathrm{prior}}^2} &= -\frac{\lambda_{\mathrm{prior},*}+1}{2\lambda_{\mathrm{prior},*}^2} < 0.
\end{align*}
Hence, Bayesian evidence is maximized at $\lambda_{\mathrm{prior},*} = O(1)$.
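The stationarity and concavity of the evidence at $\lambda_{\mathrm{prior},*}$ can also be confirmed directly from the closed forms above (an illustrative check; the function names below are ours):

```python
import math

def lam_star(nu):
    # Closed-form stationary point: sqrt(1 + log(nu)^2) - 1
    return math.sqrt(1 + math.log(nu) ** 2) - 1

def dlogZ(lam, nu):
    # d log Z / d lambda_prior = (log(nu)^2 - lam (2 + lam)) / (4 lam^2)
    return (math.log(nu) ** 2 - lam * (2 + lam)) / (4 * lam ** 2)

def d2logZ(lam, nu):
    # Second derivative: (lam - log(nu)^2) / (2 lam^3)
    return (lam - math.log(nu) ** 2) / (2 * lam ** 3)
```

Since $\lambda_{\mathrm{prior},*}(2+\lambda_{\mathrm{prior},*}) = \log^2\nu$ exactly, the first derivative vanishes and the second is negative for every $\nu \neq 1$.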
Moreover, evaluating the evidence at a different choice of fixed $\lambda_{\mathrm{prior}}$ such that $\lambda_{\mathrm{prior}} \neq \lambda_{\mathrm{prior},*}$, we see that the evidence ratio is
\begin{align*}
\log Z_\infty(0; \lambda_{\mathrm{prior}}) - \log Z_\infty(0; \lambda_{\mathrm{prior},*}) &= -\frac{1}{2}\log\lr{\frac{\lambda_{\mathrm{prior}}}{\lambda_{\mathrm{prior},*}}} - \frac{1}{4\lambda_{\mathrm{prior}}}\lr{\lambda_{\mathrm{prior}} + \log\lr{\frac{\|\theta_*\|^2}{\alpha_0}}}^2 \\
&\quad + \frac{1}{4\lambda_{\mathrm{prior},*}}\lr{\lambda_{\mathrm{prior},*} + \log\nu}^2,
\end{align*}
which is an $O(1)$ constant depending only on $\lambda_{\mathrm{prior}}$ and $\nu$.
\section{Proof of Theorem \ref{thm:scaling}}\label{sec:scaling-pf}
In order to prove Theorem \ref{thm:scaling}, we expand $\Delta(\log G)[k]$ in the case of $L=\lambda_{\mathrm{prior}} N, \, P/N=\alpha, \, \sigma^2=1$ to higher order than reported in Theorem~\ref{thm:logG}. Specifically, we find
\begin{align*}
\Delta(\log G, k=1) &:= \log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{\mathbf{N}}{2}+1,\frac{P}{2}} - \log \G{L+1,0}{0,L+1}{\frac{\norm{\theta_*}^2}{4M}}{-}{\frac{\mathbf{N}}{2},\frac{P}{2}}\\
&= \lambda_{\mathrm{prior}} N \log\lr{\frac{N}{2}} + \log(\nu) + \frac{c}{N}
\end{align*}
for
\begin{align*}
c &= -\frac{8\lambda_{\mathrm{prior}}}{3}+2\lr{1+\log(\nu)} + \lr{\frac{1}{\alpha}+\log(\nu)}\lr{1-\frac{\log(\nu)}{\lambda_{\mathrm{prior}}}}.
\end{align*}
Applying Theorem~\ref{thm:Z-form}, the difference in variance compared to infinite $N$ is
\begin{align*}
\Var_{\mathrm{post}}\left[f(x)\right] - \lim_{N\to\infty} \Var_{\mathrm{post}}\left[f(x)\right] &= \frac{c}{N}\nu \Sigma_\perp \propto \frac{1}{N} \propto \frac{1}{P} \propto \frac{1}{L}.
\end{align*}
\section{Proof of Theorem \ref{thm:dd}}\label{sec:dd-pf}
To prove Theorem \ref{thm:dd} we begin with the bias-variance decomposition:
\[
\left\langle f(x)-V_0x \right \rangle^2 = \lr{\theta_*x_{||}-V_0x}^2 + \frac{\norm{x_\perp}^2\norm{\theta_*}^2}{P},
\]
where $x_\perp = x - x_{||}$ and
\[
x_{||} = \mathrm{im}(X) \mathrm{im}(X)^T x.
\]
Consider first the case when $\alpha_0 < 1$. Then
\[
\theta_* = V_0 + \epsilon (X^TX)^{-1}X^T.
\]
Thus, the error introduced by the bias is
\begin{align*}
\mathbb E\left[\lr{\theta_*x_{||}-V_0x}^2 \right] &= \mathbb E\left[(V_0 x_\perp)^2\right] + \mathbb E\left[(\epsilon (X^T X)^{-1} X^T x_{||})^2\right]\\
&= \mathbb{E}\left[(V_0x)^2 - 2(V_0x)(V_0x_{||}) + (V_0x_{||})^2\right] + \sigma_\epsilon^2 \mathbb{E}\left[\|(X^T X)^{-1} X^T x_{||}\|^2\right]\\
&= 1-\alpha_0 + \sigma_\epsilon^2 \mathbb{E}\left[\|(X^T X)^{-1} X^T x_{||}\|^2\right].
\end{align*}
We observe that, for $X^\dagger = (X^T X)^{-1} X^T$,
\begin{align*}
\mathbb{E}\left[\|(X^T X)^{-1} X^T x_{||}\|^2\right] &= \E{\mathrm{Tr}\lr{x_{||}^T \lr{X^{\dagger}}^T X^{\dagger}x_{||}}}\\
&= \E{\mathrm{Tr}\lr{ \lr{X^{\dagger}}^T X^{\dagger}x_{||}x_{||}^T}}\\
&= \frac{\alpha_0}{1-\alpha_0}.
\end{align*}
Hence, the bias is
\begin{align}
\label{eq:biasl1}
\mathbb E\left[\lr{\theta_*x_{||}-V_0x}^2 \right] &= 1-\alpha_0 + \frac{\alpha_0}{1-\alpha_0}\sigma_\epsilon^2.
\end{align}
The error introduced by the variance is
\begin{align*}
\mathbb{E}\left[\frac{\norm{x_\perp}^2\norm{\theta_*}^2}{P}\right] &= \frac{1}{P}\mathbb{E}\left[\left(\|V_0\|^2 + \|\epsilon(X^TX)^{-1}X^T\|^2\right)\left(\|x\|^2 + \|x_{||}\|^2 - 2 \|\mathrm{im}(X)^Tx\|^2\right)\right]\\
&= \frac{1-\alpha_0}{P}\mathbb{E}\left[\|x\|^2\left(1 + \sigma_\epsilon^2\|(X^TX)^{-1}X^T\|^2\right)\right]\\
&= \left(\frac{1}{\alpha_0} - 1\right)\left(1 + \sigma_\epsilon^2\mathbb{E}\left[\|(X^TX)^{-1}X^T\|^2\right]\right).
\end{align*}
Observing that
\begin{align*}
\mathbb{E}\left[\left\|(X^TX)^{-1}X^T\right\|^2\right] &= \mathbb{E}\left[\mathrm{Tr}\lr{(X^TX)^{-1}X^TX(X^TX)^{-1}}\right] = \mathbb{E}\left[\mathrm{Tr}\lr{(X^TX)^{-1}}\right] = \frac{\alpha_0}{1-\alpha_0}
\end{align*}
from the $-1$st moment of the Marchenko-Pastur distribution for $\alpha_0 < 1$, we obtain
\begin{align*}
\left\langle f(x)-V_0x \right \rangle^2 &= 1-\alpha_0 + \frac{\alpha_0}{1-\alpha_0}\sigma_\epsilon^2 + \left(\frac{1}{\alpha_0} - 1\right)\left(1 + \frac{\alpha_0}{1-\alpha_0}\sigma_\epsilon^2\right)\\
&= \frac{1}{\alpha_0} - \alpha_0 + \frac{\sigma_\epsilon^2}{1-\alpha_0}.
\end{align*}
In the case of $\alpha_0 > 1$, the variance is zero since $\|x_\perp\|^2 = 0$. Given
\begin{align*}
\theta_* &= V_0 + \epsilon X^T(XX^T)^{-1},
\end{align*}
the total error originates from the bias, i.e.,
\begin{align}
\left\langle f(x)-V_0x \right \rangle^2 &= \mathbb{E}\left[(\epsilon X^T(XX^T)^{-1} x)^2\right] \nonumber\\
&= \frac{\sigma_\epsilon^2}{\alpha_0 - 1} \label{eq:biasg1}
\end{align}
similarly to the bias computation for $\alpha_0 < 1$.
\section{Introduction}
It follows from a result of Dyckerhoff \cite[Lemma 5.6]{dyckerhoff} that matrix factorization categories associated to complete local hypersurface rings are idempotent complete.
In this paper, we generalize this result to the equivariant case.
Throughout the paper, we let $G$ be a finite group acting on a
noetherian local ring $(Q,\mathfrak{m})$ in such a way that $Q$ is module finite
over the invariant subring $Q^G$,
and we assume $f$ is an element of $\mathfrak{m}$ that is fixed by $G$.
(The assumption that $Q^G \into Q$ is module finite holds quite generally; see Remark~\ref{finiteextension}.) We write $[\on{mf}_G(Q,f)]$ for the (triangulated) homotopy category of $G$-equivariant matrix factorizations of $f$;
see \S\ref{mf} for the definition. Our main result is:
\begin{thm} \label{main}
In the setting above: if $Q$ is henselian, then $[\on{mf}_G(Q, f)]$ is idempotent complete.
\end{thm}
Suppose now that $Q$ is regular, $(Q', \mathfrak{m}')$ is another regular local ring with $G$-action, and $\phi: Q \to Q'$ is a $G$-equivariant homomorphism of local rings.
Setting $f' = \phi(f)$, we have an induced triangulated functor
$$
\phi_*: [\on{mf}_G(Q,f)] \to [\on{mf}_G(Q',f')]
$$
given by extension of scalars along $\phi$.
Building from the aforementioned result of Dyckerhoff \cite[Lemma 5.6]{dyckerhoff}, we also prove:
\begin{prop} \label{introprop} Assume $Q/f$ and $Q'/f'$ have isolated singularities, $Q^G \subseteq Q$ and
$(Q')^G \subseteq Q'$ are module finite, $|G|$ is a unit in $Q$, $\phi$ is flat, and
the canonical map $Q' \otimes_Q Q/\mathfrak{m} \to Q'/\mathfrak{m}'$ is an isomorphism.
The functor $\phi_*$ induces an equivalence of triangulated categories
$$
\phi_*: [\on{mf}_G(Q,f)]^\vee \xra{\cong} [\on{mf}_G(Q',f')]^\vee,
$$
where ${}^\vee$ denotes idempotent completion.
\end{prop}
From Theorem~\ref{main} and Proposition~\ref{introprop}, we deduce:
\begin{cor}
\label{introcor}
If $|G|$ is a unit in $Q$, $Q^G \subseteq Q$ is module finite, and the local hypersurface $Q/f$ has an isolated singularity, then the canonical functors
$$
[\on{mf}_G(Q, f)]^\vee \xra{} [\on{mf}_G(Q^h, f)] \xra{} [\on{mf}_G(\widehat{Q}, f)]
$$
are equivalences of triangulated categories, where $Q^h$ and $\widehat{Q}$ are the henselization and $\mathfrak{m}$-adic completion of $Q$, respectively.
\end{cor}
As another application, we combine Theorem~\ref{main} and a result of Spellmann-Young \cite{SY} to conclude that $[\on{mf}_G(Q, f)]$ is equivalent to the category of $G$-equivariant objects in the triangulated category $[\on{mf}(Q, f)]$; see Corollary~\ref{fixedpoints} below for the precise (and more general)
statement. This gives an analogue of a result of Elagin involving bounded derived categories of equivariant sheaves \cite[Theorem 9.6]{elagin1}. This consequence of the statement
of Theorem~\ref{main} was observed by Spellmann-Young \cite[Remark 3.7]{SY} and provided a main source of motivation for this work.
\subsection*{Acknowledgments}
We thank Jack Jeffries and Anurag Singh for suggesting the argument in the proof of Proposition~\ref{newprop}.
\section{Background}
\subsection{Twisted group rings}
Let $A$ be a commutative ring with action of a finite group $G$.
\begin{defn} The {\em twisted group ring}, written $A \# G$, has underlying set given by formal sums $\sum_{g \in G} a_g g$, with $a_g \in A$ for all $g$, and multiplication determined by the rule
$$
ag \cdot bh = a b^g\, gh
$$
for $a,b \in A$ and $g,h \in G$, where $b^g$ is the result of acting by $g$ on $b$.
\end{defn}
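For example (a computation of ours, not taken from the paper): let $\Z/2 = \{e, \sigma\}$ act on $A = \mathbb{C}$ by complex conjugation. Then in $\mathbb{C} \# (\Z/2)$ we have
$$
(i\,e) \cdot (1\,\sigma) = i\,\sigma, \qquad (1\,\sigma) \cdot (i\,e) = i^{\sigma}\,\sigma = -i\,\sigma,
$$
so the twisted group ring is noncommutative even though $\mathbb{C}$ is commutative.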
The map $A \to A \# G$ sending $a$ to $ae_G$ is a ring homomorphism, but beware that $A \# G$ is only an $A$-algebra when the action of $G$ on $A$ is trivial, in which case $A \#
G$ coincides with the group ring $A[G]$.
In general, letting $A^G$ denote the ring of invariants $\{a \in A \mid a^g = a \text{ for all $g \in G$}\}$, the composition $A^G \into A \to A \# G$ exhibits $A \# G$ as an
$A^G$-algebra.
A left module over $A \# G$ is the same thing as a set $M$ that is equipped with a left $A$-module structure and a left $G$-action such that
$g (a m) = a^g (g m)$ for all $g \in G$, $a \in A$ and $m \in M$.
Suppose $A$ is local with maximal ideal $\mathfrak{m}$. Since $G$ is finite, the
inclusion $A^G \into A$ is integral,
and it is therefore a consequence of the Going Up Theorem that the invariant
ring $A^G$ is also local.
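For example (a standard computation, included here for illustration): let $G = \Z/2$ act on $A = k[[x]]$ by $x \mapsto -x$, where $k$ is a field of characteristic not $2$. Then
$$
A^G = k[[x^2]],
$$
which is again local with maximal ideal $(x^2)$, and $A$ is free of rank $2$ over $A^G$ with basis $\{1, x\}$.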
We observe also that the henselization $A^h$ and $\mathfrak{m}$-adic completion
$\widehat{A}$ inherit canonical $G$-actions. In more detail, every
morphism of local rings $\phi: A \to B$ induces unique morphisms on
henselizations $\phi^h: A^h \to B^h$ and completions $\widehat{\phi}:
\widehat{A} \to \widehat{B}$
that cause the evident squares to commute. Applying this when $B = A$ and $\phi$ ranges over the isomorphisms determined by the actions of the group elements of $G$
gives the actions of $G$ on $A^h$ and $\widehat{A}$.
\begin{prop} \label{newprop}
Assume a finite group $G$ acts on a local ring $A$ in such a way that the extension $A^G \subseteq A$ is module finite. Both of the extensions
$(A^h)^G \subseteq A^h$ and $(\widehat{A})^G \subseteq \widehat{A}$ are module finite, $(A^h)^G$ is henselian, and $(\widehat{A})^G$ is complete.
\end{prop}
\begin{rem}
\label{finiteextension}
The assumption that the extension $A^G\subseteq A$ is module finite holds in many cases of interest.
For instance, if $A$ is a finite type $F$-algebra for a field $F$ contained in $A^G$, then, since $A^G \subseteq A$ is integral and finite type, it is module finite.
More generally, if $A$ is any equivariant localization of an example of this kind, then $A^G \subseteq A$ is module finite.
Additionally, if $A$ is a noetherian domain, and $|G|$ is invertible in $A$, then $A^G \subseteq A$ is
module finite \cite[Proposition 5.4]{LW}. More generally, if $A$ is any equivariant quotient of an example of this kind,
then $A^G \subseteq A$ is module finite.
Indeed, we know of no examples where $A^G \subseteq A$ fails to be module finite when $|G|$ is invertible in $A$, but there are examples of such failure
when $A = F[[x,y]]$, $F$ is a field of infinite transcendence degree over the field with $p$ elements for a prime $p$, and $G$ is cyclic of order $p$; see \cite{GS}.
\end{rem}
\begin{proof}[Proof of Proposition~\ref{newprop}]
For any module finite extension of local rings $B \subseteq A$,
the induced map $B^h \subseteq A^h$ is also a module finite extension, and the canonical map $B^h \otimes_B A \xra{\cong} A^h$ is an isomorphism \cite[Lemma 10.156.1]{stacksproject}.
Applying this when $B = A^G$, and using that $(A^G)^h \subseteq (A^h)^G \subseteq A^h$, we obtain the first result. Similarly, $\widehat{A} \cong \widehat{A^G} \otimes_{A^G} A$ is module finite over $\widehat{A^G}$,
and $\widehat{A^G} \subseteq (\widehat{A})^G \subseteq \widehat{A}$, hence $(\widehat{A})^G \subseteq \widehat{A}$ is module finite.
Let us return to the general setting of a module finite extension of local rings
$B \subseteq A$. If $A$ is complete, then $B$ is complete. To see this, note that $\widehat{B} \otimes_B A \cong \widehat{A} = A$, and hence $B \subseteq \widehat{B} \subseteq A$,
so that $\widehat{B}$ is a module finite extension of $B$. Thus, $\widehat{B} \otimes_B (\widehat{B}/B) = 0$. Since $\widehat{B}$ is a faithfully flat $B$-module, it follows that $B = \widehat{B}$. Likewise, if $A$ is henselian, so is $B$. Indeed, we have $B \subseteq B^h \subseteq A$, since $B^h \subseteq A^h$ and $A = A^h$. Thus, $B^h$ is module finite over $B$.
Taking completions, and using that $\widehat{(B^h)} \cong \widehat{B}$, we get $\widehat{B} \otimes_B B^h/B = 0$, and hence $B = B^h$.
\end{proof}
\section{Proof of the Main Theorem}
\label{mf}
Recall that a not-necessarily-commutative ring $E$ is called {\em nc local} if $E/J(E)$ is a division ring, where $J(E)$ denotes the Jacobson radical of $E$ (i.e., the intersection of all the
maximal left ideals of $E$). An additive category $A$ is called {\em Krull-Schmidt}
if every object is a finite direct sum of objects with nc local endomorphism rings.
By a result of Krause \cite[Corollary 4.4]{krause}, every Krull-Schmidt additive category is idempotent complete. We observe that if $A$ is a Krull-Schmidt additive category and $B$ is a quotient of $A$,
by which we mean $B$ has the same objects as $A$ and the hom groups of $B$ are quotients of the hom groups of $A$, then $B$ is also Krull-Schmidt.
In particular, any quotient of a Krull-Schmidt additive category is idempotent complete.
\begin{lem} \label{lem126} Suppose $A$ is an $R$-linear additive category, where $R$ is a henselian (local noetherian) ring.
If $A$ is idempotent complete, and the endomorphism ring of every object of $A$ is finitely generated as an $R$-module, then
$A$ is Krull-Schmidt.
\end{lem}
\begin{proof} Our argument follows the proof of \cite[1.8]{LW}.
The assumptions imply that the endomorphism ring of every object is noetherian,
and it follows that every object of $A$ is a finite direct sum of indecomposable objects.
Given any indecomposable object $X$ of $A$, set $E \coloneqq \End_A(X)$.
By assumption, $E$ is a module finite $R$-algebra, and so, since $R$ is henselian, every idempotent of $E/J(E)$ lifts to an idempotent of $E$ \cite[A.30]{LW}.
Since $X$ is indecomposable and $A$ is idempotent complete, $E$ has no nontrivial idempotents. We conclude that $E/J(E)$ has no nontrivial idempotents.
Again using that $E$ is module finite over $R$, by \cite[1.7]{LW} we have $\mathfrak{m}_R E\subset J(E)$ and thus $E/J(E)$ is a module finite algebra over the field
$R/\mathfrak{m}_R$. This shows $E/J(E)$ is artinian and hence semi-simple. Since it has no nontrivial idempotents, it must be a division ring.
\end{proof}
For a group $G$ acting on a commutative ring $Q$ and an element $f \in Q$ fixed by the action,
we write $\on{mf}_G(Q,f)$ for the additive category of equivariant matrix factorizations.
Objects are pairs $P = (P, d)$ with $P$ a $\Z/2$-graded module over the twisted group ring $Q \# G$ that is finitely generated and projective as a $Q$-module
and $d$ a $Q \# G$-linear endomorphism of $P$ of odd degree that squares to $f \cdot \id_P$. (We do not assume $|G|$ is a unit in $Q$ here; if it is, then such a $P$ is finitely generated and projective
as a module over $Q \# G$.)
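As a standard first example (with $G$ the trivial group): for $Q = k[[x,y]]$ and $f = xy$, the pair
$$
P = Q \oplus Q, \qquad d = \begin{pmatrix} 0 & x \\ y & 0 \end{pmatrix}
$$
is a matrix factorization of $f$, since
$$
d^2 = \begin{pmatrix} xy & 0 \\ 0 & xy \end{pmatrix} = f \cdot \id_P.
$$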
We write $[\on{mf}_G(Q,f)]$ for the quotient of $\on{mf}_G(Q,f)$ obtained by modding out by homotopy in the usual sense.
Our main result, Theorem~\ref{main}, is an immediate consequence of the following slightly stronger statement:
\begin{thm}
\label{prop:technical}
Let $G$ be a finite group acting on a commutative ring $Q$, and assume $f \in Q$ is fixed by $G$.
If $Q$ is (local noetherian) henselian, and the ring extension $Q^G \into Q$ is module finite, then $[\on{mf}_G(Q,f)]$ is a Krull-Schmidt category and hence idempotent complete.
\end{thm}
\begin{proof}
Since $Q$ is henselian, $Q^G$ is also henselian by Proposition \ref{newprop}.
Since $Q^G$ belongs to the center of $Q \# G$, the additive category $\on{mf}_G(Q,f)$ is $Q^G$-linear.
The endomorphism ring of every object $P$ of $\on{mf}_G(Q,f)$ is contained in $\End_Q(P)$ and hence is module finite over $Q^G$.
So, since $[\on{mf}_G(Q,f)]$ is a quotient of $\on{mf}_G(Q,f)$, by Lemma \ref{lem126} it suffices to prove $\on{mf}_G(Q,f)$ is idempotent complete.
Let $e$ be an idempotent endomorphism of an object $(P, d)$ in $\on{mf}_G(Q,f)$.
The category of all modules over $Q \# G$ is certainly idempotent complete, and so
$P$ decomposes as $P = \ker(e) \oplus \im(e)$ over this ring.
Since $P$ is $Q$-projective, so are both $\ker(e)$ and $\im(e)$.
Since $e$ commutes with $d$, we have $d(\ker(e)) \subseteq \ker(e)$ and $d(\im(e)) \subseteq \im(e)$.
Thus, $(\ker(e), d|_{\ker(e)})$ and $(\im(e), d|_{\im(e)})$ are objects of $\on{mf}_G(Q,f)$,
and the canonical maps
$p: (P, d) \onto (\im(e), d|_{\im(e)})$ and $i: (\im(e), d|_{\im(e)}) \into (P,d)$ are morphisms in $\on{mf}_G(Q,f)$.
Since $e = i \circ p$, this proves $\on{mf}_G(Q,f)$ is idempotent complete.
\end{proof}
We now address a remark of Spellmann-Young in \cite{SY}. Let $[\on{mf}(Q, f)]^G$ be the category of equivariant objects in $[\on{mf}(Q, f)]$, as defined, for instance, by Carqueville-Runkel in \cite[\S 7.1]{CR}. There is a canonical functor
\begin{equation}
\label{canonical}
[\on{mf}_G(Q, f)] \to [\on{mf}(Q, f)]^G,
\end{equation}
and it is proven in \cite[Proposition 3.6]{SY} that, under certain circumstances, \eqref{canonical} exhibits the target as the idempotent completion of the source (in fact, this result applies more generally to Spellmann-Young's notion of Real equivariant matrix factorizations as well). Spellmann-Young note in \cite[Remark 3.7]{SY} that, if the map \eqref{canonical} (or its Real generalization) were an equivalence, some of their arguments could be shortened; we now apply Theorem~\ref{main} to prove this.
\begin{cor}
\label{fixedpoints}
Suppose we are in the setting of Theorem~\ref{main}, and assume that $|G|$ is a unit in $Q$. The functor \eqref{canonical} is an equivalence.
\end{cor}
\begin{proof}
The proof of \cite[Proposition 3.6]{SY} extends verbatim to our setting; the assumption that $|G|$ is a unit in $Q$ is necessary for their application of \cite[Theorem
8.7]{elagin2}. This shows that the functor \eqref{canonical} is an idempotent completion, and thus
Theorem~\ref{main} implies it must be an equivalence.
\end{proof}
\section{Proof of Proposition~\ref{introprop} and Corollary~\ref{introcor}}
Recall that, if ${\mathcal T}$ is a triangulated category, a subcategory ${\mathcal S}$ of ${\mathcal T}$ is called {\em thick}
if ${\mathcal S}$ is full, triangulated, and closed under summands.
Given a collection ${\mathcal X}$ of objects of ${\mathcal T}$, the {\em thick closure} of ${\mathcal X}$ in ${\mathcal T}$, written $\on{Thick}_{{\mathcal T}}({\mathcal X})$, is the intersection of all thick subcategories
of ${\mathcal T}$ that contain ${\mathcal X}$. Let us say that an object $X$ of ${\mathcal T}$ {\em builds} ${\mathcal T}$ if $\on{Thick}_{{\mathcal T}}(\{X\}) = {\mathcal T}$. Concretely, this means that every object of ${\mathcal T}$ is obtained
from $X$ by a finite process of taking mapping cones, suspensions, and summands.
Given a dg-category $\mathcal{C}$, we write $[\mathcal{C}]$ for its homotopy category, which has the same objects as $\mathcal{C}$ and morphisms $\Hom_{[\mathcal{C}]}(X, Y) \coloneqq H^0 \Hom_{\mathcal{C}}(X,Y)$.
We say $\mathcal{C}$ is {\em pre-triangulated} if the image of the dg-Yoneda embedding $[\mathcal{C}] \into [\operatorname{Mod}_\dg(\mathcal{C})]$ is a triangulated subcategory of $[\operatorname{Mod}_\dg(\mathcal{C})]$. See, e.g., \cite[Section 2.3]{Orlov}
for more details; roughly, this means that $\mathcal{C}$ has notions of suspension and mapping cone making $[\mathcal{C}]$ into a triangulated category. For example, the dg-category $\on{mf}(Q,f)$ is pre-triangulated.
We use the following well-known fact:
\begin{lem} \label{lem1}
Suppose $\phi: \mathcal{C} \to \mathcal{D}$ is a dg-functor between two pre-triangulated dg-categories. Assume there exists an object $X \in \mathcal{C}$ such that
\begin{itemize}
\item $X$ builds $[\mathcal{C}]$,
\item $\phi(X)$ builds $[\mathcal{D}]$, and
\item the map $\phi: \End_{\mathcal{C}}(X) \to \End_{\mathcal{D}}(\phi(X))$ of dga's is a quasi-isomorphism.
\end{itemize}
The dg-functor $\phi$ induces an equivalence $[\mathcal{C}]^\vee \xra{\cong} [\mathcal{D}]^\vee$ on idempotent completions of the associated homotopy categories.
\end{lem}
\begin{proof}
This follows from \cite[Proposition 2.7]{Orlov}. In more detail: we have a commutative square of triangulated categories
$$
\xymatrix{
\on{Perf}(\End_\mathcal{C}(X)) \ar[r]^-{\cong} \ar[d]^\cong & \on{Perf}(\mathcal{C}) \ar[d] \\
\on{Perf}(\End_\mathcal{D}(\phi(X))) \ar[r]^-{\cong} & \on{Perf}(\mathcal{D}).
}
$$
Since we assume $X$ builds $[\mathcal{C}]$ and $\phi(X)$ builds $[\mathcal{D}]$, the cited result gives that the horizontal functors are equivalences as shown.
The left arrow in this square is an equivalence since we assume $\End_\mathcal{C}(X) \to \End_\mathcal{D}(\phi(X))$ is a quasi-isomorphism.
Thus, the right arrow must also be an equivalence.
Since $[\mathcal{C}]^\vee \cong \on{Perf}(\mathcal{C})$ and $[\mathcal{D}]^\vee \cong \on{Perf}(\mathcal{D})$, the result follows.
\end{proof}
Let $F: \on{mf}_G(Q, f) \to \on{mf}(Q,f)$ be the evident dg-functor that forgets the group action.
Since $Q \# G$ is free of finite rank as a $Q$-module, given $(P, d) \in \on{mf}(Q,f)$, the pair $((Q \# G) \otimes_Q P, \id \otimes d)$ is an object of $\on{mf}_G(Q,f)$.
We extend this to a rule on morphisms in the evident way to obtain a dg-functor $E: \on{mf}(Q, f) \to \on{mf}_G(Q,f)$.
\begin{lem} \label{lem2} Assume $|G|$ is a unit in $Q$. Given $P \in \on{mf}(Q,f)$, if $P$ builds $[\on{mf}(Q,f)]$, then $E(P)$ builds $[\on{mf}_G(Q,f)]$.
\end{lem}
\begin{proof} Given objects $X$ and $Y$ in a triangulated category ${\mathcal T}$, we use the notation $X \models_{\mathcal T} Y$ as a shorthand for ``$X$ builds $Y$ (in ${\mathcal T}$)".
The goal is to prove $E(P) \models_{[\on{mf}_G(Q,f)]} Y$ for all $Y \in [\on{mf}_G(Q,f)]$.
For any such $Y$, by assumption we have $P \models_{[\on{mf}(Q,f)]} F(Y)$. Since $E$ induces a triangulated functor on homotopy categories, it follows that
$E(P) \models_{[\on{mf}_G(Q,f)]} E(F(Y))$. It therefore suffices to prove
$E(F(Y)) \models_{[\on{mf}_G(Q,f)]} Y$; in fact, we show $Y$ is a summand of $E(F(Y))$ in $\on{mf}_G(Q,f)$.
The object $E(F(Y))$ has underlying module $(Q \# G) \otimes_Q Y$, with $G$-action through the left tensor factor (and the $G$ action on $Y$ ignored) and differential $\id \otimes d_Y$.
There is an evident surjection $p: E(F(Y)) \onto Y$ in $\on{mf}_G(Q,f)$ given by multiplication. Define
$j: Y \into E(F(Y))$ by $j(y) = \frac{1}{|G|} \sum_{g \in G} g^{-1} \otimes y^g$.
One readily verifies that (a) $j$ is $Q \# G$ linear,
(b) $j$ commutes with the differentials, and (c) $p \circ j = \id_Y$, so that $j$ is a splitting of $p$ in $\on{mf}_G(Q,f)$.
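For instance, (c) follows from the computation (spelled out here for convenience):
$$
p(j(y)) = \frac{1}{|G|} \sum_{g \in G} g^{-1}\,(y^g) = \frac{1}{|G|} \sum_{g \in G} y = y,
$$
since acting by $g^{-1}$ undoes the action of $g$.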
\end{proof}
\begin{proof}[Proof of Proposition \ref{introprop}]
Let $k = Q/\mathfrak{m}$ be the residue field of $Q$. For a sufficiently high $Q/f$-syzygy $M$ of $k$, we have that $M$ is a maximal Cohen-Macaulay (MCM) $Q/f$-module. By a Theorem of Eisenbud \cite{eisenbud}, the MCM module $M$ determines an object in $\on{mf}(Q,f)$; let $k^{\on{stab}}$ be such a matrix factorization.
(We note that the object $k^{\on{stab}}$ depends on $M$ only up to a shift in $[\on{mf}(Q, f)]$.)
Since $Q/f$ has an isolated singularity, it follows from \cite[Corollary 4.12]{dyckerhoff} that $k^{\on{stab}}$ builds $[\on{mf}(Q,f)]$.
Let us write $(k')^{\on{stab}}$ for the image of $k^{\on{stab}}$ in $[\on{mf}(Q',f')]$ under $\phi_*$; that is, $(k')^{\on{stab}} = Q' \otimes_Q k^{\on{stab}}$.
Since $\phi$ is flat, $(k')^{\on{stab}}$ is the matrix factorization associated to the MCM $Q'/f'$-module $Q' \otimes_Q M$, which is
a high syzygy of $Q' \otimes_Q Q/\mathfrak{m} \cong Q'/\mathfrak{m}'$. Since $Q'/f'$ is an isolated singularity,
we see that $(k')^{\on{stab}}$ builds $[\on{mf}(Q',f')]$.
Set $X = E(k^{\on{stab}})$ and $X' = E((k')^{\on{stab}})$, where $E$ is the extension of scalars functor introduced above.
By Lemma \ref{lem2}, we have that $X$ and $X'$ build $[\on{mf}_G(Q, f)]$ and $ [\on{mf}_G(Q', f')]$, respectively.
Moreover, $X$ maps to $X'$ under the functor $[\on{mf}_G(Q, f)] \to [\on{mf}_G(Q', f')]$. Since $\phi$ is flat, we have an isomorphism
$$
Q' \otimes_Q H_*(\End_{\on{mf}_G(Q,f)}(X)) \cong H_*(\End_{\on{mf}_G(Q',f')}(X'))
$$
of $Q'$-modules.
Since the singularities are isolated, $H_*(\End_{\on{mf}_G(Q,f)}(X))$ and $H_*(\End_{\on{mf}_G(Q',f')}(X'))$ are finite length $Q$-modules,
and so, since the natural map $Q' \otimes_Q Q/\mathfrak{m} \to Q'/\mathfrak{m}'$
is an isomorphism, the natural map
$$
H_*(\End_{\on{mf}_G(Q,f)}(X)) \xra{\cong} Q' \otimes_Q H_*(\End_{\on{mf}_G(Q,f)}(X))
$$
is an isomorphism as well. It follows that the map of dga's
$$
\End_{\on{mf}_G(Q,f)}(X) \to \End_{\on{mf}_G(Q',f')}(X')
$$
is a quasi-isomorphism, so that Lemma \ref{lem1} yields an equivalence
$
[\on{mf}_G(Q,f)]^\vee \xra{\cong} [\on{mf}_G(Q',f')]^\vee.
$
\end{proof}
\begin{proof}[Proof of Corollary~\ref{introcor}]
By Proposition \ref{newprop}, each of $(Q^h)^G \subseteq Q^h$ and $\widehat{Q}^G \subseteq \widehat{Q}$ are module finite extensions. Proposition \ref{introprop} therefore gives equivalences
$$
[\on{mf}_G(Q,f)]^\vee \xra{\cong} [\on{mf}_G(Q^h,f)]^\vee \xra{\cong} [\on{mf}_G(\widehat{Q},f)]^\vee;
$$
applying Theorem \ref{main} to both $[\on{mf}_G(Q^h,f)]$ and $[\on{mf}_G(\widehat{Q},f)]$ finishes the proof.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
\input{sections/introduction}
\section{Simulation Details}
\input{sections/simulation-details}
\section{Analysis of the Deconfinement Transition}
\input{sections/analysis}
\section{Simulation Results}
\input{sections/results}
\section{Development of an Effective Ginzburg-Landau Theory}
\input{sections/landau}
\section{Conclusions}
\input{sections/conclusion}
\acknowledgments{
The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 \enquote{Strong-interaction matter under extreme conditions} – project number 315477589 – TRR 211 and by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006).
We thank the Helmholtz Graduate School for Hadron and Ion Research (HGS-HIRe) for its support as well as the staff of L-CSC at GSI Helmholtzzentrum für Schwerionenforschung and the staff of Goethe-HLR at the Center for Scientific Computing Frankfurt for computer time and support.
}
\bibliographystyle{JHEP}
\section{More details on the design}
\label{sec:appendixdesign}
\subsection{Detailed architectures of the augmentation network }
We consider two VAE architectures in LeMDA, depending on the architecture of the task network. The latent dimension of the VAE is set to 8. We adopt the KL divergence regularizer on the encoder distribution. Note that we do not use the reconstruction loss between the input and the output. In MLP-VAE, the encoder and decoder are standard fully connected layers with ReLU as the activation function. Dropout is used with $p = 0.5$. In Attention-VAE, the encoder is implemented as torch.nn.TransformerEncoder, with num\_layers set to 4 and nhead set to 8. One fully connected layer maps the encoder output to the latent space. The decoder is symmetric to the encoder. Features from all modalities are treated as token embeddings with no cross-attention.
We use VAE for its simplicity. The main focus of this paper is to demonstrate the effectiveness of a learnable augmentation network for multimodal learning. Other generative models, such as diffusion models and GANs, are also valid architectures. The main concern may lie in efficiency, and we leave this direction as future work.
\subsection{Implementation details over the training procedure}
In practice, we iteratively train the task and augmentation networks using the same batch of training data. Specifically, we perform two separate forward passes using $\mathcal{F}_{\mathsf{after}}$ for easy implementation with PyTorch Autograd. We use two optimizers, one for the task network and one for the augmentation network.
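As a toy illustration of this alternating scheme (scalar stand-ins of our own devising, with hand-written gradients for a squared loss; this is not LeMDA's actual architecture or loss), the two updates on one batch look like:

```python
# Toy scalar stand-ins (hypothetical): task "network" y_hat = w * x,
# augmentation "network" x -> x + a; gradients written out by hand
# for a squared loss.
w, a = 0.5, 0.1                # task / augmentation parameters
x, y = 1.0, 2.0                # one training example
lr, w1, w2 = 0.05, 0.0001, 0.1

# Update 1: the task network fits both the original and the augmented
# feature, with the augmentation parameter held fixed.
grad_w = 2 * (w * x - y) * x + 2 * (w * (x + a) - y) * (x + a)
w -= lr * grad_w

# Update 2: the augmentation network on the same batch -- an adversarial
# term (negated task loss on the augmented feature) plus a consistency
# term (w * a) ** 2 keeping augmented predictions near the originals.
grad_a = -w1 * 2 * (w * (x + a) - y) * w + w2 * 2 * (w * a) * w
a -= lr * grad_a
print(round(w, 4), round(a, 4))
```

In the actual implementation, each update is its own forward pass through $\mathcal{F}_{\mathsf{after}}$, driven by a separate PyTorch optimizer.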
\section{Experiment details}
\subsection{Additional studies on the training cost}
One limitation of a learning-based approach is the extra training cost. LeMDA optimizes the augmentation network along with the task network and does incur extra training cost. Here, we investigate the training throughput to provide a more complete understanding of the method. We summarize the training throughput (iterations/second) in Table~\ref{table:throughput}. As expected, we observe lower throughput for LeMDA compared to other baselines.
\begin{table*}[h!]
\centering
\begin{tabular}{c||c||c|c|c|c||c}
\hline
& \thead{Multimodal \\ Network} & \thead{Input \\ Augmentation} & Mixup & \thead{Manifold \\ MixUp} & MixGen & LeMDA \\
\hline\hline
Hateful Memes & 2.39 & 2.17 & 2.35 & 1.63 & 2.35 & 1.41 \\
\hline
Food101 & 4.27 & 4.46 & 4.31 & 4.48 & 4.47 & 2.21 \\
\hline
Petfinder & 2.36 & 2.29 & 2.95 & 2.36 & - & 1.87 \\
\hline
Melbourne Airbnb & 5.66 & 5.94 & 5.59 & 5.69 & - & 4.13\\
\hline
News Channel & 8.14 & 7.18 & 7.31 & 7.12 & - & 5.12\\
\hline
Wine Reviews & 12.54 & 11.60 & 11.89 & 11.46 & - & 6.28 \\
\hline
Kick Starter Funding & 12.37 & 12.57 & 12.62 & 12.21 & -& 6.69 \\
\hline
\end{tabular}
\vspace{-2mm}
\caption{This table summarizes the training throughput, measured in iterations/second. Experiments were conducted on a server with 8 V100 GPUs. As expected, learning-based approaches incur higher training costs.}
\label{table:throughput}
\end{table*}
However, efficiency can be improved. The most straightforward direction is to reduce the frequency of updating the augmentation network. Currently, the augmentation network is updated every iteration, yet the parameters of the task network change slowly, especially in the later stages of training, so less frequent updates may suffice. We leave this as a future direction.
\subsection{Additional studies on the hyper-parameters}
The optimization for the augmentation network is a min-max game, which requires hyperparameters to balance the competing loss terms. Specifically, the augmentation network's gradient is $- w_1\nabla L(\hat{y}_{\mathcal{G}}) + w_2 \nabla L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}}) + w_3 \nabla L_{\mathsf{VAE}}$, where $L_{\mathsf{VAE}}$ refers to the KL divergence regularizer on the latent encoding distribution.
In our main experiment, we use $w_1 = 0.0001, w_2 = 0.1, w_3 = 0.1$ on all datasets except Melbourne Airbnb and SNLI-VE. On Melbourne Airbnb and SNLI-VE, we use $w_1 = 0.001, w_2 = 0.1, w_3 = 0.1$. Note that the hyperparameters are relatively consistent across datasets.
Further, we investigate the influence of different combinations of $w_1$, $w_2$, and $w_3$. We summarize the results on Petfinder in Table~\ref{table:hyperparameter}. We observe consistent improvements over the original multimodal network across various combinations.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c}
\hline
$w_1$ & $w_2$ & $w_3$ & Accuracy \\
\hline
0.0001 & 0.1 & 0.1 & 0.3539 \\
\hline
0.0001 & 0.01 & 0.01 & 0.3400\\
\hline
0.005 & 0.1 & 0.1 & 0.3482 \\
\hline
0.005 & 0.01 & 0.01 & 0.3464\\
\hline
0.001 & 0.1 & 0.1 & 0.3371\\
\hline
0.001 & 0.01 & 0.01 & 0.3467 \\
\hline
\hline
\multicolumn{3}{c}{Multimodal Network} & 0.2911 \\
\hline
\end{tabular}
\vspace{-2mm}
\caption{LeMDA improves over the multimodal network in accuracy on Petfinder across different hyperparameter settings for the augmentation network gradient $- w_1\nabla L(\hat{y}_{\mathcal{G}}) + w_2 \nabla L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}}) + w_3 \nabla L_{\mathsf{VAE}}$.}
\label{table:hyperparameter}
\end{table}
\section{Motivational examples when only augmenting one modality}
We have performed an additional set of experiments to investigate the effect of augmenting a single modality on Hateful Memes using state-of-the-art augmentation techniques. In Hateful Memes, both text and image are required to decide if the content is hateful; the two modalities provide complementary information. We apply each baseline augmentation to only one modality, or independently to both modalities, and observe no consistent improvements. Essentially, augmenting one modality naively, or both modalities independently without considering cross-modality relationships, does not lead to effective augmentation.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c}
\hline
Multimodal Network & Method & Image & Text & Image + Text \\
\hline
\multirow{4}{*}{0.6939} & TrivialAugment & 0.7040 & 0.6860 & 0.7057 \\
\cline{2-5}
&MixUp & 0.6855 & 0.6777 & 0.6939 \\
\cline{2-5}
&Manifold Mixup & 0.6323 & 0.7444 & 0.6878 \\
\cline{2-5}
&MixGen & 0.7427 &0.6872 & 0.7510 \\
\hline
\end{tabular}
\vspace{-4mm}
\label{table:singlemodality}
\end{table}
\section{Conclusion}
Jointly learning from multiple different modalities will be crucial in our quest to build autonomous intelligent agents. We introduce the first method, LeMDA, for jointly learning data augmentation across arbitrary modalities. LeMDA is simple, automatic, and achieves promising results over a wide range of experiments. Moreover, our results provide several significant conceptual findings about multimodal data augmentation in general: (1) separately augmenting each modality performs much worse than joint augmentation; (2) although feature augmentation is less popular than input augmentation for single-modality tasks because it is less interpretable, feature augmentation is particularly promising for modality-agnostic settings; (3) a learning-based multimodal augmentation policy can outperform even tailored augmentations, and significantly improve accuracy when augmentation transformations are not obvious such as for categorical data.
Our investigation has primarily focused on late-fusion architectures, showing strong results over a wide range of settings. In general, applying feature augmentation strategies to early-fusion architectures is an open question. Early fusion combines a large number of latent features (e.g.,\ a long sequence of token embeddings), resulting in typically intractable computational costs for augmenting every latent feature. Our experiment with an early-fusion architecture shows however that developing more efficient augmentation networks, or selectively generating only a few important latent vectors, is a promising direction for future work.
\section{Introduction}
Imagine watching a film with no sound, or subtitles. Our ability to learn is greatly enhanced through jointly processing multiple data modalities, such as visual stimuli, language, and audio. These information sources are often so entangled that it would be near impossible to learn from only one modality in isolation --- a significant constraint on traditional machine learning approaches. Accordingly, there have been substantial research efforts in recent years on developing multimodal deep learning to jointly process and interpret information from different modalities at once \citep{https://doi.org/10.48550/arxiv.1705.09406}. Researchers studied multimodal deep learning from various perspectives such as model architectures~\citep{https://doi.org/10.48550/arxiv.2102.03334,https://doi.org/10.48550/arxiv.1903.06496, DBLP:journals/corr/abs-2107-00135, choi2019embracenet}, training techniques~\citep{DBLP:journals/corr/abs-2107-07651, DBLP:journals/corr/abs-1909-11740}, and theoretical analysis~\citep{https://doi.org/10.48550/arxiv.2106.04538, DBLP:journals/corr/abs-2007-06793}. However, data augmentation for multimodal learning remains relatively unexplored~\citep{Kim2021ViLTVT}, despite its enormous practical impact in single modality settings.
Indeed, data augmentation has particularly proven its value for data efficiency, regularization, and improved performance in computer vision~\citep{Ho2019PopulationBA, NEURIPS2020_d85b63ef, trivial, zhang2017mixup, Yun_2019_ICCV} and natural language processing~\citep{eda, aeda, fadaee-etal-2017-data, https://doi.org/10.48550/arxiv.1511.06709,wang-yang-2015-thats, andreas-2020-good, kobayashi-2018-contextual}. These augmentation methods are largely tailored to a particular modality in isolation. For example, for object classification in vision, we know certain transformations such as translations or rotations should leave the class label unchanged. Similarly, in language, certain sentence manipulations like synonym replacement will leave the meaning unchanged.
The most immediate way of leveraging data augmentation in multimodal deep learning is to separately apply well-developed unimodal augmentation strategies to each corresponding modality. However, this approach can be problematic because transforming one modality in isolation may lead to disharmony with the others. Consider Figure~\ref{fig:augmented-snli}, which provides four training examples from SNLI-VE~\citep{xie2019visual}, a vision-language benchmark dataset. Each description is paired with the image on the top left, and the label refers to the relationship between the image and description. The bottom row provides four augmented images generated by state-of-the-art image augmentation methods~\citep{cubuk2019autoaugment, trivial}. In the image generated by AutoAugment-Cifar10 and AutoAugment-SVHN, the plane is entirely cropped out, which leads to mislabeling for data (a), (b), (c), and (d). In the image generated by AutoAugment-ImageNet, due to the change in smoke color, this plane could be on fire and falling down, which leads to mislabeling for data (a) and (d). In the image generated by TrivialAugment~\citep{trivial}, a recent image augmentation method that randomly chooses one transformation with a random magnitude, the loop is cropped out, which leads to mislabeling for data (a) and (c). Mislabeling can be especially problematic for over-parameterized neural networks, which tend to confidently fit mislabeled data, leading to poor performance~\citep{pleiss2020identifying}.
\begin{table}[!tb]
\begin{minipage}[b]{0.25\linewidth}
\raggedright
\includegraphics[width=30mm]{image/original.pdf}
\label{fig:snli-data}
\end{minipage}
\begin{minipage}[b]{0.69\linewidth}
\raggedleft
\begin{tabular}{c|c|c}
& Label & Description \\ \hline
(a) &Entailment & The plane is doing tricks while flying down. \\
(b) &Entailment & There is a plane in the air. \\
(c) &Entailment & The pilot is aware that the plane is doing a loop.\\
(d) & Neutral & The plane is falling down.
\end{tabular}
\label{table:student}
\end{minipage}
\hfill
\end{table}
\begin{figure*}[!tb]
\centering
\includegraphics[width=0.9\textwidth]{image/example.pdf}
\caption{The top row shows four training samples drawn from SNLI-VE~\citep{xie2019visual}, a visual entailment dataset. Each text description is paired with the image on the top left. The task is to predict the relationship between the image and the text description, which can be ``Entailment'', ``Neutral'', or ``Contradiction''. The bottom row shows four augmented images generated by different image-only augmentation methods. If we pair the text description with the augmented images, we observe mislabeled data. For example, the smoke loop is cropped out in the image augmented via TrivialAugment. The new image does not match the description ``The pilot is aware that the plane is doing a loop'', as in data (c). However, the label of the augmented pair will still be ``Entailment''.}
\label{fig:augmented-snli}
\end{figure*}
There are two key challenges in designing a general approach to multimodal data augmentation. First, multimodal deep learning takes input from a diverse set of modalities. Augmentation transformations are obvious for some modalities, such as vision and language, but not for others, such as sensory data, which is often numeric or categorical. Second, multimodal deep learning includes a diverse set of tasks with different cross-modal relationships. Some datasets have redundant or totally correlated modalities, while others have complementary modalities. There is no single assumption that would generally preserve labels when augmenting modalities in isolation.
In this work, we propose LeMDA (\emph{\underline{Le}arning \underline{M}ultimodal \underline{D}ata \underline{A}ugmentation}) as a general multimodal data augmentation method. LeMDA augments the latent representation and thus can be applied to any modalities. We design the augmentation transformation as a learnable module such that it is adaptive to various multimodal tasks and cross-modal relationships. Our augmentation module is learned together with multimodal networks to produce informative data through adversarial training, while preserving semantic structure through consistency regularization. With no constraints over the modalities and tasks, one can simply plug-and-play LeMDA with different multimodal architectures.
We summarize our contributions as follows.
\begin{itemize}
\item In Section~\ref{sec:method}, we introduce LeMDA, a novel approach to multimodal data augmentation. Section~\ref{sec:method1} shows how to use LeMDA with multimodal networks, and Section~\ref{sec:method2} describes how to train the augmentation module to produce informative and label-preserving data. The method is notable for several reasons: (1) it can be applied to any combination of modalities; (2) it is attractively simple and easy to use; (3) it is the first augmentation method to be applied jointly to text, image, and tabular data, which is essentially uncharted territory.
\item In Section~\ref{sec:experiment}, we show that LeMDA consistently boosts accuracy for multimodal deep learning architectures compared to a variety of baselines, including state-of-the-art input augmentation and feature augmentation methods.
\item In Section~\ref{sec:ablation}, we provide an ablation study validating the design choices behind LeMDA. In particular, we study the architecture of the augmentation module and the effects of the consistency regularizer. We demonstrate that the consistency regularizer clearly outperforms $L_2$ regularization~\citep{https://doi.org/10.48550/arxiv.2007.09271}.
\end{itemize}
\vspace{-2mm}
\section{Background and Related Work}
\label{sec:related}
\paragraph{Multimodal network architectures.}
Multimodal deep learning architectures are categorized as performing early or late fusion, depending on the stage of combining information from each modality. In \emph{early fusion}, the network
combines the raw input or token embeddings from all the modalities. Early fusion architectures can be designed to exploit the interaction between low-level features, making them a good choice for multimodal tasks with strong cross-modal correlations~\citep{https://doi.org/10.48550/arxiv.2011.07191,10.1162/neco_a_01273}. For example, there exists low-level correspondence in image captioning tasks because different words in the caption may relate to different objects in the image. We note that feature-space augmentation procedures are typically computationally intractable on early-fusion architectures, because early fusion would require combining a large number of latent features, such as a long sequence of token embeddings.
On the other hand, in \emph{late fusion}, the focus of our work, input from each modality is independently processed by a separate backbone. The representations provided by the different backbones are fused together in later layers, often just before the classifier layer~\citep{agmultimodaltext, https://doi.org/10.48550/arxiv.1704.03470, schnfeld2019crosslinked, https://doi.org/10.48550/arxiv.2002.06661}. This design is straightforward to apply to any new modality and any multimodal task. Late fusion often uses pre-trained networks as backbones in each modality, making it more computationally tractable. In both early and late fusion, there are a variety of methods to fuse information. Standard approaches include (1) feeding all modalities as token embeddings into the network, (2) performing cross-attention between modalities, (3) concatenating the representations from all modalities, and (4) combining the predictions from each modality in an ensemble~\citep{https://doi.org/10.48550/arxiv.1705.09406}. Researchers usually design the multimodal network by considering the task objective, the amount of data available, and the computation budget \citep{shi2021benchmarking, https://doi.org/10.48550/arxiv.1909.11740, https://doi.org/10.48550/arxiv.2201.12086, tsai2018learning,NEURIPS2020_24bea84d}. \citet{https://doi.org/10.48550/arxiv.1705.09406} provides further reading.
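To make the late-fusion pattern concrete, the following is a minimal PyTorch sketch of our own (not a specific architecture from the literature): each modality gets its own backbone, the per-modality embeddings are concatenated, and a fusion MLP produces class logits. All names, depths, and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Minimal late-fusion sketch: one small backbone per modality,
    concatenation at the fusion layer, then a fusion MLP classifier."""

    def __init__(self, modality_dims, embed_dim=32, num_classes=3):
        super().__init__()
        # One backbone per modality (stand-ins for e.g. BERT or a ConvNet).
        self.backbones = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU())
             for d in modality_dims]
        )
        # Fusion MLP over the concatenated per-modality embeddings.
        self.fusion = nn.Sequential(
            nn.Linear(embed_dim * len(modality_dims), embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, inputs):
        z = [backbone(x) for backbone, x in zip(self.backbones, inputs)]
        return self.fusion(torch.cat(z, dim=-1))

net = LateFusionNet(modality_dims=[16, 8])            # e.g. text + tabular
logits = net([torch.randn(4, 16), torch.randn(4, 8)])  # batch of 4
```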
\paragraph{Data augmentation for single modality tasks.} Data augmentation is widely adopted in vision and natural language tasks. In vision, we can manually intervene on a per-task basis to apply transformations that should leave our label invariant --- e.g., translations, rotations, flipping, cropping, and color adjustments. A transformation on one task may not be suitable for another: for example, flipping may be reasonable on CIFAR-10, but would lose semantic information on MNIST, because a flipped `6' becomes a `9'. Accordingly, there are a variety of works on automatic augmentation in vision, including neural architecture search~\citep{Ho2019PopulationBA, NEURIPS2020_d85b63ef}, reinforcement learning~\citep{cubuk2019autoaugment}, generative modelling~\citep{ratner2017learning}, mixing aspects of the existing data \citep{zhang2017mixup, Yun_2019_ICCV}, and adversarial training for informative examples \citep{7533048,Goodfellow2015ExplainingAH,zhang2019adversarial,https://doi.org/10.48550/arxiv.2202.12513, https://doi.org/10.48550/arxiv.2007.09271, https://doi.org/10.48550/arxiv.1703.05908}. Similarly, in natural language processing there are a variety of standard interventions (replacement, deletion, swapping) \citep{eda, aeda, fadaee-etal-2017-data}, and more automatic approaches such as back-translation \citep{https://doi.org/10.48550/arxiv.1511.06709}, context augmentation \citep{wang-yang-2015-thats, andreas-2020-good, kobayashi-2018-contextual}, and linear interpolation of training data~\citep{https://doi.org/10.48550/arxiv.2010.02394}. Data augmentation is less explored for tabular data, but techniques from vision, such as mixup \citep{zhang2017mixup} and adversarial training \citep{Goodfellow2015ExplainingAH}, have recently been adapted to the tabular setting with promising results~\citep{kadra2021welltuned}. Latent space augmentation is much less explored than input augmentation, as it is less obvious what transformations to apply.
To augment latent vectors produced by passing data inputs through a neural network (\emph{feature space} augmentation), researchers have considered interpolation, extrapolation, noise addition, and generative models \citep{pmlr-v97-verma19a, 8545506, DBLP:journals/corr/abs-1910-04176}.
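As an illustration of these feature-space strategies, the following NumPy sketch (our own, not taken from any cited work) shows two of the simplest transformations applied to a latent vector: additive Gaussian noise and linear interpolation between two latent vectors.

```python
import numpy as np

def feature_noise(z, sigma=0.1, rng=None):
    """Gaussian-noise feature augmentation: perturb a latent vector z."""
    rng = rng or np.random.default_rng(0)
    return z + sigma * rng.standard_normal(z.shape)

def feature_interpolate(z1, z2, lam=0.5):
    """Interpolate between two latent vectors (in practice, typically
    latent vectors from same-class samples)."""
    return lam * z1 + (1.0 - lam) * z2
```

In practice these operate on the hidden activations of a trained network, not on raw inputs, which is what makes them modality-agnostic.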
\paragraph{Multimodal data augmentation.} There are a small number of works considering multimodal data augmentation, primarily focusing on vision-text tasks. In visual question answering, \citet{Tang2020SemanticEA} propose to generate semantically similar data by applying back-translation to the text and adversarial noise to the image. \citet{https://doi.org/10.48550/arxiv.2105.04780} generate text based on images using a variational autoencoder. For cross-modal retrieval, \citet{https://doi.org/10.48550/arxiv.2104.08108} query similar data from external knowledge sources. The state-of-the-art augmentation procedure for visual-language representation learning generates new image-text pairs by interpolating between images and concatenating texts, in a method called \emph{MixGen} \citep{https://doi.org/10.48550/arxiv.2206.08358}.
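The core MixGen operation can be sketched in a few lines; this is our simplified reading of the method (pixel-wise image interpolation plus text concatenation), with plain arrays standing in for real image tensors.

```python
import numpy as np

def mixgen(image_a, text_a, image_b, text_b, lam=0.5):
    """Sketch of MixGen: interpolate the two images pixel-wise and
    concatenate the two captions; lam = 0.5 is the commonly used default."""
    mixed_image = lam * image_a + (1.0 - lam) * image_b
    mixed_text = text_a + " " + text_b
    return mixed_image, mixed_text

img, txt = mixgen(np.zeros((2, 2)), "a plane",
                  np.ones((2, 2)), "in the air")
```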
All prior work on multimodal data augmentation relies on tailored modality-specific transformations. By contrast, our proposed approach is fully automatic and can be applied to any arbitrary modality. Indeed, for the first time, we consider augmentation jointly over the tabular, image, and language modalities. Moreover, even for image-text specific problems, we show that our approach outperforms MixGen, the state-of-the-art tailored approach.
\begin{figure*}[t]
\vspace{-2mm}
\centering
\includegraphics[width=0.75\textwidth]{image/diagram_fnew.pdf}
\includegraphics[width=0.75\textwidth]{image/diagram_fa.pdf}
\caption{LeMDA training as described in Algorithm~\ref{alg:training}. \textbf{Top:} the training process for the task network. Latent representations for each modality $z_i$ are passed into the augmentation network, which generates a new latent vector for each modality. Both original features and augmented features are passed into the rest of the task network. \textbf{Bottom:} the training process for the augmentation network. The augmentation network is trained to maximize task loss while minimizing consistency loss. We describe our standard choices for fusion in Section~\ref{sec:related}, and the design of our augmentation network in Section~\ref{sec:choice}.}
\label{fig:training}
\vspace{-2mm}
\end{figure*}
\vspace{-4mm}
\section{LeMDA: Learning Multimodal Data Augmentation}
\label{sec:method}
We now introduce LeMDA, a simple and automatic approach to multimodal data augmentation. LeMDA learns an \emph{augmentation network} $\mathcal{G}$, along with the multimodal \emph{task network} $\mathcal{F}$, to generate informative data that preserves semantic structure. In Sections~\ref{sec:method1} and \ref{sec:method2} we describe how we learn the parameters of the task and augmentation networks, respectively. We summarize the training algorithm for LeMDA in Figure~\ref{fig:training} and Algorithm~\ref{alg:training}. In Section~\ref{sec:consistency} we provide intuition for the consistency loss. Finally, in Section~\ref{sec:choice} we describe how we design the augmentation network.
\subsection{Training the Task Network}
\label{sec:method1}
The task network can be divided into two parts at the fusion layer: $\mathcal{F}(x) = \mathcal{F}_{\mathsf{after}}(\mathcal{F}_{\mathsf{before}}(x))$, where $\mathcal{F}_{\mathsf{before}}$ denotes the layers before fusion and $\mathcal{F}_{\mathsf{after}}$ the layers after fusion. Given a training sample $x$, we pass $x$ up to the fusion layer and obtain the latent features for each modality, $\{z_i\}_{i=1}^N = \mathcal{F}_{\mathsf{before}}(x)$, where $N$ is the number of modalities. Taking $\{z_i\}_{i=1}^N$ as input, the augmentation network $\mathcal{G}$ generates additional latent vectors $\mathcal{G}(\{z_i\}_{i=1}^N)$. Both $\{z_i\}_{i=1}^N$ and ${\mathcal{G}}(\{z_i\}_{i=1}^N)$ are fed through the rest of the task network $\mathcal{F}_{\mathsf{after}}$ as distinct training data. The task network is then trained in the standard way, applying the
task loss to both original and augmented data, to find $\min\mathbb{E}_{x \sim \mathcal{X}}(L(\hat{y})+L(\hat{y}_{\mathcal{G}}))$ where $\hat{y} = \mathcal{F}_{\mathsf{after}}(\mathcal{F}_{\mathsf{before}}(x))$ and $\hat{y}_{\mathcal{G}} = \mathcal{F}_{\mathsf{after}}(\mathcal{G}(\mathcal{F}_{\mathsf{before}}(x)))$.
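This forward pass and task-network loss can be sketched in PyTorch as follows (a toy rendering under our own assumptions: single linear layers stand in for $\mathcal{F}_{\mathsf{before}}$, $\mathcal{G}$, and $\mathcal{F}_{\mathsf{after}}$, and dimensions are illustrative).

```python
import torch
import torch.nn as nn

# Toy stand-ins for the three components; in practice F_before is a set of
# pre-trained backbones and G is the learned augmentation network.
f_before = nn.Linear(10, 8)   # layers before fusion
g_aug    = nn.Linear(8, 8)    # placeholder augmentation network G
f_after  = nn.Linear(8, 3)    # layers after fusion (classifier)
loss_fn  = nn.CrossEntropyLoss()

x, y = torch.randn(4, 10), torch.randint(0, 3, (4,))
z = f_before(x)                      # latent features at the fusion layer
y_hat     = f_after(z)               # prediction on original features
y_hat_aug = f_after(g_aug(z))        # prediction on augmented features

# The task network trains on both original and augmented features;
# in the full algorithm, separate optimizers update F and G.
task_loss = loss_fn(y_hat, y) + loss_fn(y_hat_aug, y)
task_loss.backward()
```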
\vspace{-2mm}
\begin{algorithm*}[t]
\caption{LeMDA Training}
\begin{algorithmic}
\State \textbf{Input:} Task network before fusion $\mathcal{F}_{\mathsf{before}}$; Task network after fusion $\mathcal{F}_{\mathsf{after}}$; Augmentation network $\mathcal{G}$; Training set $\mathcal{X}$; Task loss function $L$; Consistency loss $L_{\mathsf{consist}}$;
\While{$\mathcal{F}$ not converged}
\State Sample a mini-batch from $\mathcal{X}$
\State Compute $z \leftarrow \mathcal{F}_{\mathsf{before}}(x)$
\State Generate augment feature $\mathcal{G}(z)$
\State $\hat{y} \leftarrow \mathcal{F}_{\mathsf{after}}(z)$, $\hat{y}_{\mathcal{G}} \leftarrow \mathcal{F}_{\mathsf{after}}(\mathcal{G}(z))$
\State Update the augmentation network $\mathcal{G}$ by stochastic gradient $- \nabla L(\hat{y}_{\mathcal{G}}) + \nabla L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}})$
\State Update the task network $\mathcal{F}$ by stochastic gradient $ \nabla L(\hat{y}) + \nabla L(\hat{y}_{\mathcal{G}})$
\EndWhile
\end{algorithmic}
\label{alg:training}
\end{algorithm*}
\subsection{Training the Augmentation Network}
\label{sec:method2}
Inspired by adversarial data augmentation, we optimize the parameters of the augmentation network to maximize the task loss, so that the augmented data is informative enough to update the task network's representation. At the same time, we introduce a consistency regularizer that encourages similar output distributions on the original and augmented data, preserving the semantic structure. Formally, we find
$ \max\mathbb{E}_{x \sim \mathcal{X}}(L(\hat{y}_{\mathcal{G}})) + \min\mathbb{E}_{x \sim \mathcal{X}}(L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}}) )$
where $ L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}})$ denotes a divergence metric between the logit outputs on original data $\hat{y}$ and on augmented data $ \hat{y}_{\mathcal{G}}$ such as the Kullback-Leibler divergence.
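A minimal PyTorch sketch of this augmentation-network objective, using KL divergence as $L_{\mathsf{consist}}$ (our own rendering, not the authors' code; the negated task-loss term turns the maximization into a quantity to minimize):

```python
import torch
import torch.nn.functional as F

def augmentation_objective(logits_orig, logits_aug, targets):
    """Loss minimized by the augmentation network G: ascend the task loss
    on augmented data (negated cross-entropy) while keeping the output
    distribution consistent with the one on the original data (KL term)."""
    task = F.cross_entropy(logits_aug, targets)
    consist = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                       F.softmax(logits_orig, dim=-1).detach(),
                       reduction="batchmean")
    return -task + consist

# Identical original/augmented logits: KL term vanishes, so the objective
# reduces to the negated (positive) task loss.
logits_orig = torch.tensor([[2.0, 0.0], [0.0, 2.0]])
obj = augmentation_objective(logits_orig, logits_orig.clone(),
                             torch.tensor([0, 1]))
```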
\textbf{Confidence masking.} For classification problems, we apply the consistency term only to samples whose highest predicted probability is greater than a threshold $\alpha$. If the task network cannot make a confident prediction, the prediction is unlikely to provide a good reference for the ground-truth label.
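Confidence masking can be sketched as follows (an illustrative implementation of our own, not the authors' code): the per-sample KL term is averaged only over samples where the top predicted probability on the original data exceeds $\alpha$.

```python
import torch
import torch.nn.functional as F

def masked_consistency(logits_orig, logits_aug, alpha=0.5):
    """KL consistency loss restricted to confidently-predicted samples."""
    probs = F.softmax(logits_orig, dim=-1)
    mask = probs.max(dim=-1).values > alpha       # confident samples only
    if not mask.any():                            # nothing confident: no loss
        return logits_orig.new_zeros(())
    kl = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                  probs.detach(), reduction="none").sum(dim=-1)
    return kl[mask].mean()
```

With uniform (all-zero) logits the maximum probability is $1/3 < \alpha$, so every sample is masked out and the loss is zero; a peaked prediction that disagrees with the augmented output incurs a positive penalty.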
\textbf{Design decisions.} The simplicity and generality of this approach, combined with its strong empirical performance in Section~\ref{sec:experiment}, are LeMDA's most appealing features. The few design decisions for training involve how the consistency regularizer should be defined and to what extent it should be applied. For example, as an alternative to a KL-based consistency regularizer, we could minimize the $L_2$ distance of the augmented feature vector to the original feature vector as a proxy for preserving the label of the augmentation. We provide ablations of these factors in Section~\ref{sec:ablation}.
\subsection{The Design of Augmentation Network}
\label{sec:choice}
The augmentation network can take various forms depending on the multimodal learning tasks and the fusion strategies. In our experiments, we use a variational autoencoder (VAE) as the augmentation network, since VAEs have generally been found effective for augmentation purposes~\citep{https://doi.org/10.48550/arxiv.2007.09271}. We consider two architectural choices:
\textbf{MLP-VAE:} The encoder and decoder of VAE are MLPs. $\{z_i\}_{i=1}^N$ are concatenated as the input.
\textbf{Attention-VAE:} The encoder and decoder are made of self-attention and feedforward networks. $\{z_i\}_{i=1}^N$ are treated as $N$ tokens where each token has an embedding $z_i$.
There are two loss terms in the standard VAE: the reconstruction loss and the KL divergence regularizer. We only adopt the KL regularizer on the encoder distribution. The update step for the augmentation network is $- \nabla L(\hat{y}_{\mathcal{G}}) + \nabla L_{\mathsf{consist}}(\hat{y}, \hat{y}_{\mathcal{G}}) + \nabla L_{\mathsf{VAE}}$, where $L_{\mathsf{VAE}}$ refers to the KL divergence regularizer on the latent encoding distribution.
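A minimal MLP-VAE augmentation module might look as follows (a sketch under our own assumptions: single linear encoder/decoder layers and illustrative sizes stand in for the actual MLPs).

```python
import torch
import torch.nn as nn

class MLPVAE(nn.Module):
    """Sketch of the MLP-VAE augmentation network: concatenated per-modality
    features in, reparameterized latent code, augmented features out."""

    def __init__(self, feat_dim, latent_dim=16):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, 2 * latent_dim)  # -> (mu, logvar)
        self.decoder = nn.Linear(latent_dim, feat_dim)

    def forward(self, z_cat):
        mu, logvar = self.encoder(z_cat).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        code = mu + eps * torch.exp(0.5 * logvar)           # reparameterize
        z_aug = self.decoder(code)
        # KL(q(code | z) || N(0, I)) regularizer on the encoder distribution
        # (the only VAE loss term we keep; no reconstruction loss).
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z_aug, kl

vae = MLPVAE(feat_dim=24)               # e.g. three 8-dim modality features
z_aug, kl = vae(torch.randn(4, 24))
```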
The major deciding factor between MLP-VAE and Attention-VAE is the architecture of the multimodal task network. With late fusion architectures, the primary focus of this paper, $z_i$ refers to the representation from a single modality backbone (e.g., the CLS embedding from a BERT model), and $N$ is the number of modalities or backbone models. We can concatenate $\{z_i\}_{i=1}^N$ into one vector as input to MLP-VAE, or treat $\{z_i\}_{i=1}^N$ as a sequence of $N$ tokens as input to Attention-VAE. Attention-VAE may be less suitable here because $N$ is usually small in late fusion architectures (2 or 3 in our experiments). We provide a performance comparison between these two architectures in Section~\ref{sec:ablation}. On the other hand, for early fusion architectures, $z_i$ could be a sequence of token embeddings for a text or a sequence of patch embeddings for an image. Concatenation would result in a very high-dimensional input, which makes MLP-VAE less favorable.
\subsection{Intuition on Why Consistency Regularizer Discourages Mislabeled Data }
\label{sec:consistency}
\begin{wrapfigure}{r}{0.45\textwidth}
\includegraphics[width=0.9\textwidth]{image/consist.pdf}
\caption{Motivation for the consistency regularizer. The solid and dashed green lines are the ground truth and model decision boundaries, respectively. Darker background corresponds to a higher loss for the task network. We intuitively prefer D1, because the augmented point should be informative but preserve the same label. The consistency loss will prefer D1 over D2, because D2 crosses the model's decision boundary, even though both points incur the same training loss.}
\label{fig:vis}
\end{wrapfigure}
In Figure~\ref{fig:vis} we provide intuition for the consistency regularizer using a simple illustrative binary classification problem. Darker background corresponds to higher task training loss, the solid green line is the actual decision boundary, and the dashed green line is the model's decision boundary. Starting from a point in feature space, moving to D1 or D2 would provide a similar increase in task loss, and thus both are equally favored by the adversarial loss term. However, D2 crosses the model's decision boundary, and thus would be heavily penalized by the consistency regularizer --- as we would hope, since such a point is likely to have a different class label. On the other hand, an L2 regularizer between the original and augmented points in feature space would have no preference between D1 and D2, as they are an equal distance from the starting point. Empirically, in Section~\ref{sec:experiment}, we see that the consistency loss confers accuracy improvements over both pure adversarial training and an L2 regularizer.
Similar intuition is in \cite{https://doi.org/10.48550/arxiv.2202.12513}, which uses the logits distribution from the teacher model (an exponential moving average over the model's weights) as the soft target such that the augmented data is still recognizable by the teacher, and \cite{https://doi.org/10.48550/arxiv.1904.12848}, which designs the unsupervised training objective to encourage similar logits for augmented data.
\vspace{-2mm}
\section{Experiments}
\label{sec:experiment}
\begin{table}[b!]
\vspace{-1em}
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Dataset & \# Train & \#Test & Metric& Image & Text & Tabular \\
\hline
\href{https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set/}{Hateful Memes}
& 7134 & 1784 & Accuracy & \checkmark & \checkmark& \\
\hline
\href{https://www.kaggle.com/datasets/gianmarco96/upmcfood101}{Food101}& 67972 & 22716 & Accuracy & \checkmark & \checkmark & \\
\hline
\href{https://github.com/necla-ml/SNLI-VE}{SNLI-VE}
& 529527 & 17901 & Accuracy & \checkmark & \checkmark & \\
\hline
\href{https://www.kaggle.com/c/petfinder-adoption-prediction}{Petfinder}
& 11994 & 2999 & Quadratic Kappa & \checkmark & \checkmark & \checkmark \\
\hline
\href{https://www.kaggle.com/tylerx/melbourne-airbnb-open-data}{Melbourne Airbnb}
& 18316 & 4579 & Accuracy & & \checkmark & \checkmark\\
\hline
\href{https://archive.ics.uci.edu/ml/datasets/online+news+popularity}{News Channel} & 20284 & 5071 & Accuracy & & \checkmark & \checkmark\\
\hline
\href{https://www.kaggle.com/PromptCloudHQ/wine\_reviews}{ Wine Reviews }
& 84123 & 21031 & Accuracy & & \checkmark & \checkmark\\
\hline
\href{https://www.kaggle.com/datasets/codename007/funding-successful-projects?select=train.csv}{Kick Starter Funding}
& 86502 & 21626 & ROC-AUC & & \checkmark & \checkmark \\
\hline
\end{tabular}
\caption{Summary of the source, statistics, and modalities of each dataset.}
\label{table:latedataset}
\end{table}
We evaluate LeMDA over a diverse set of real-world multimodal datasets. We curate a list of public datasets covering image, text, numerical, and categorical inputs. Table~\ref{table:latedataset} provides a summary of the source, statistics, and modalities of each dataset. We introduce baselines in Section~\ref{sec:baseline} and describe experimental settings in Section~\ref{sec:expdetail}. We provide the main evaluation results in Section~\ref{sec:expmain}. Finally, we investigate the effects of the consistency regularizer and the choice of augmentation model architecture in Section~\ref{sec:ablation}.
\subsection{Baselines}
\label{sec:baseline}
To the best of our knowledge, there exist no general-purpose multimodal augmentation methods. We compare against a diverse set of state-of-the-art data augmentation methods from vision, language, and vision-text tasks. We additionally consider baselines for feature augmentation, since LeMDA augments in the feature space. Finally, we compare with state-of-the-art multimodal augmentation methods from vision-text tasks, although we note that, unlike LeMDA, these methods are not general purpose and cannot be directly applied to our datasets with tabular inputs.
\begin{itemize}
\item{ \bf{Input Augmentation.}} We apply state-of-the-art input augmentation independently on the data from each modality. For images, we use TrivialAugment~\citep{trivial}, a simple and effective method for image classification tasks. For text, we apply EDA~\citep{eda} and AEDA~\citep{aeda}. We randomly sample one transformation from all transformations proposed in EDA and AEDA with a randomly generated magnitude.
\item{\bf{Mixup.}} Mixup was originally proposed to perform interpolation between two images in the training data for image classification. We adopt the original Mixup for images and numerical features, and extend it to text and categorical features. Specifically, given a pair of training samples, we construct the mixed data as follows. We generate a random number $j$ uniformly between 0.0 and 1.0. If $j < \alpha$, we use the value from the first sample; otherwise, we use the value from the second.
\item{ \bf{Manifold Mixup.}} Manifold Mixup~\citep{pmlr-v97-verma19a} performs interpolation between hidden representations and thus can be applied to all modalities. We apply Manifold Mixup to the same features in the multimodal network as LeMDA.
\item{ \bf{MixGen.}} MixGen~\citep{https://doi.org/10.48550/arxiv.2206.08358} is a state-of-the-art data augmentation designed specifically for vision-text tasks. MixGen generates new data by interpolating images and concatenating text. We apply MixGen to datasets only consisting of images and text.
\end{itemize}
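Our discrete extension of Mixup for text and categorical features reduces to a random hard selection between the two samples; a minimal sketch (the helper name is our own):

```python
import random

def discrete_mix(value_a, value_b, alpha=0.8, rng=None):
    """Mixup for features that cannot be interpolated: draw j ~ U(0, 1)
    and keep the first sample's value if j < alpha, else the second's."""
    rng = rng or random
    j = rng.random()
    return value_a if j < alpha else value_b
```

For images and numerical features we keep the original convex interpolation; only non-interpolable features fall back to this hard selection.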
\subsection{Experiment Setup}
\label{sec:expdetail}
We use Multimodal-Net~\citep{agmultimodaltext} for all the datasets except SNLI-VE. Multimodal-Net passes input from each modality through a separate backbone, concatenates the representations (e.g., the CLS embedding) from all backbones, and passes them through a fusion MLP. We use the default hyperparameters provided by Multimodal-Net and plug LeMDA in before the fusion layer. We use ConvNet as the image backbone and ELECTRA as the text backbone.
To further demonstrate LeMDA's generalizability, we evaluate LeMDA with the early fusion architecture ALBEF~\citep{DBLP:journals/corr/abs-2107-07651} on SNLI-VE. ALBEF performs cross-attention between image patch embeddings and text token embeddings. We keep all original configurations except the batch size, which we halve due to memory constraints. We load the 4M pre-trained checkpoint. In this setting, we apply LeMDA before the cross-attention layer. The augmentation network augments every image patch embedding and every text token embedding.
For LeMDA, we set the confidence threshold $\alpha$ for the consistency regularizer to 0.5, and we study this choice in Section~\ref{sec:ablation}.
For our baselines, we follow the recommended hyperparameters. For Mixup and Manifold Mixup, we set $\alpha$ as 0.8, and for MixGen, we set $\lambda$ as 0.5.
\vspace{-2mm}
\begin{table*}[tb!]
\vspace{-2mm}
\centering
\begin{tabular}{c||c||c|c|c|c||c}
\hline
& \thead{Multimodal \\ Network} & \thead{Input \\ Augmentation} & Mixup & \thead{Manifold \\ MixUp} & MixGen & LeMDA \\
\hline\hline
Hateful Memes & 0.6939 & 0.7057 & 0.6939 & 0.6878 & 0.7510 & \bf{0.7562} \\
\hline
Food101 & 0.9387 & 0.9432 & 0.9400 & 0.9390 & 0.9432 &\bf{0.9452} \\
\hline
Petfinder & 0.2911 & 0.3236 & 0.3244 & 0.3492 & - &\bf{0.3537} \\
\hline
Melbourne Airbnb & 0.3946 & 0.3978 & 0.3966 & 0.3840 & - &\bf{0.4047}\\
\hline
News Channel & 0.4754 &0.4745 & 0.4723 & 0.4757 & - & \bf{0.4798}\\
\hline
Wine Reviews &0.8192 &0.8212 & 0.8143 & 0.8126 & - &\bf{0.8262} \\
\hline
Kick Starter Funding & 0.7571 &0.7572 & 0.7597 & 0.7578 & -&\bf{0.7614} \\
\hline\hline
SNLI-VE & 0.7916 & 0.7931 & 0.7957 & 0.7929 & 0.7950 &\bf{0.7981} \\
\hline
\end{tabular}
\vspace{-3mm}
\caption{LeMDA not only significantly increases accuracy over the original architectures but also outperforms all baselines.}
\label{table:result}
\end{table*}
\subsection{Main Results}
\label{sec:expmain}
We summarize the performance comparison in Table~\ref{table:result}.
Plugging LeMDA in both Multimodal-Net and ALBEF leads to consistent accuracy improvements.
There are also some particularly notable improvements, such as a 6\% increase in accuracy on both Hateful Memes and Petfinder. Table~\ref{table:result} also illustrates how LeMDA performs compared to the baselines. We see that single-modality input augmentation methods can hurt accuracy, for example on News Channel, in accordance with the intuition from our introductory example in Figure~\ref{fig:augmented-snli}. Mixup can also hurt accuracy, for example on Wine Reviews. Similarly, in the latent space, Manifold Mixup fails to improve accuracy across datasets. On Melbourne Airbnb and Wine Reviews, Manifold Mixup results in accuracy drops. By contrast, LeMDA consistently improves upon the original architectures and provides clearly better performance than a wide range of baselines.
\subsection{Ablation Study}
We now perform three ablation studies to support the design choices of LeMDA.
\label{sec:ablation}
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c}
\hline
Dataset & No Regularizer & Consistency & L2 & Consistency + L2 \\
\hline
Hateful Memes & 0.7433 & \textbf{0.7562} &0.7472 & 0.7545 \\
\hline
Food101& 0.9433 & \textbf{0.9452} & 0.9415 & 0.9438 \\
\hline
Petfinder & 0.3369 & \textbf{0.3537} & 0.3420 & 0.3461 \\
\hline
Melbourne Airbnb& 0.3935 &0.4047 & 0.3987 &\textbf{0.4051} \\
\hline
News Channel& 0.4851 &0.4869 & 0.4869 & \textbf{0.4894} \\
\hline
Wine Reviews& 0.8228 & \textbf{0.8263} & 0.8255 & 0.8262\\
\hline
Kick Starter Funding & 0.7609 &\textbf{0.7614} &0.7604 & \textbf{0.7614}\\
\hline
\end{tabular}
\vspace{-2mm}
\caption{The effects of regularizer choice. Regularization over the augmentation network generally leads to better performance. The consistency regularizer consistently outperforms a standard L2 regularizer in feature space. Moreover, combining the consistency regularizer with an L2 regularizer improves over using an L2 regularizer alone.}
\label{table:ablationregularzier}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c}
\hline
Dataset & No Augmentation& MLP-VAE & Attention-VAE \\
\hline
Hateful Memes & 0.6939 &\bf{0.7562} & 0.7483 \\
\hline
Food101& 0.9387&\bf{0.9452} & 0.9443\\
\hline
Petfinder & 0.2911 & \bf{0.3537} & 0.3456 \\
\hline
Melbourne Airbnb& 0.3946 &\bf{0.4047} & 0.4031\\
\hline
News Channel& 0.4754& \bf{0.4798} & 0.4733\\
\hline
Wine Reviews& 0.8192 &\bf{0.8262} & 0.8250 \\
\hline
Kick Starter Funding & 0.7571 &\bf{0.7614} & 0.7586 \\
\hline
\end{tabular}
\vspace{-2mm}
\caption{Both MLP-VAE and Attention-VAE augmentation networks provide significant gains over no augmentation. MLP-VAE outperforms Attention-VAE in the late fusion setting because the input to the augmentation network is only 2 or 3 latent representations.}
\label{table:ablation1}
\end{table}
\begin{table}[h]
\vspace{-4mm}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
Dataset & $\alpha=0$ & $\alpha=0.3$ & $\alpha=0.5$ & $\alpha$ = 0.8 \\
\hline
Hateful Memes & 0.7410 & 0.7443 & \textbf{0.7556} & 0.7371 \\
\hline
Food101& 0.9431 & 0.9438 &\textbf{0.9447} & 0.9438 \\
\hline
Petfinder & 0.3243 & 0.3462 & \textbf{0.3676} & 0.3497 \\
\hline
Melbourne Airbnb & 0.3988 & 0.3964 & \textbf{0.3988} & 0.3964\\
\hline
News Channel & \textbf{0.4869} & 0.4851 &\textbf{0.4869} & 0.4695\\
\hline
Wine Reviews& 0.8228 & 0.8274 & \textbf{0.8275}& 0.8274 \\
\hline
Kick Starter Funding & 0.7614 & 0.7617 & \textbf{0.7620}& 0.7618 \\
\hline
\end{tabular}
\vspace{-2mm}
\caption{The influence of confidence-based masking. $\alpha=0$ indicates no masking, so that the consistency loss is calculated on all data. We see that filtering out low-confidence data leads to better end-to-end accuracy.}
\label{table:abalationconfidence}
\end{table}
\vspace{-2mm}
\textbf{Augmentation Network Regularizer.} We argue in Section~\ref{sec:consistency} that the consistency regularizer helps preserve the semantic structure of augmentations. In Table~\ref{table:ablationregularzier}, we see that this consistency regularizer significantly improves performance, and also outperforms L2 regularization in feature space. While L2 regularization attempts to keep augmented features close in distance to the original as a proxy for semantic similarity, the consistency regularizer has access to the task network's softmax outputs on the original and augmented data, providing direct information about labels.
\textbf{Architecture Difference.} We consider the two augmentation architectures introduced in Section~\ref{sec:choice}, MLP-VAE and Attention-VAE. In Table~\ref{table:ablation1} we see that both architectures increase performance over no augmentation. We also see that MLP-VAE generally outperforms Attention-VAE. We suspect the reason is that Multimodal-Net passes the concatenation of $N$ latent vectors into the fusion layers, where $N$ is the number of modalities (2 or 3 in our experiments). For Attention-VAE, this means that the input is only 2 or 3 tokens. However, we note that MLP-VAE is not reasonable for ALBEF, since it would require concatenating thousands of tokens.
\textbf{Confidence Masking.} Here, we investigate the effect of confidence masking, as well as the choice of $\alpha$ in Table~\ref{table:abalationconfidence}. $\alpha=0$ means no masking, and all training data are used to calculate consistency loss. We see that confidence masking generally leads to higher accuracy, and that the performance is not particularly sensitive to the precise value of $\alpha$.
\subsection{The Relationship between Modalities}
We can categorize the relationship between available modalities by looking at $P(y|x)$ where $y \sim \mathcal{Y}$ and $\mathcal{Y}$ is the target domain. Let $x = \{ x_{1}, x_{2}, \dots, x_{N} \}$ consist of $N$ modalities.
\textbf{Perfect Correlation $P( y | x ) = P(y| x_n)$}. Essentially, one modality alone provides enough information to make the right prediction. Nevertheless, data still comprises multiple modalities for reasons such as easier training~\citep{https://doi.org/10.48550/arxiv.2106.04538}. One example could be Food101, where the task is to predict the food from the text of a recipe and the photo of the food.
\textbf{Complementary $P( y | x ) = P( y | \{ x_{1}, x_{2}, \dots, x_{N} \})$}.
This category suggests that information aggregated from all modalities is necessary to make the right prediction. Each modality is complementary to each other, and missing one modality would lead to information loss. One example could be Hateful Memes, where only the combined meaning of text and image indicates harmful content.
The design for LeMDA does not exploit any assumption over the cross-modal relationship. We observe from Table \ref{table:result} that LeMDA consistently improves performance regardless of the relationship.
\newpage
\section{Conclusion}
Jointly learning from multiple different modalities will be crucial in our quest to build autonomous intelligent agents. We introduce the first method, LeMDA, for jointly learning data augmentation across arbitrary modalities. LeMDA is simple, automatic, and achieves promising results over a wide range of experiments. Moreover, our results provide several significant conceptual findings about multimodal data augmentation in general: (1) separately augmenting each modality performs much worse than joint augmentation; (2) although feature augmentation is less popular than input augmentation for single-modality tasks because it is less interpretable, feature augmentation is particularly promising for modality-agnostic settings; (3) a learning-based multimodal augmentation policy can outperform even tailored augmentations, and significantly improve accuracy when augmentation transformations are not obvious such as for categorical data.
Our investigation has primarily focused on late-fusion architectures, showing strong results over a wide range of settings. In general, applying feature augmentation strategies to early-fusion architectures is an open question. Early fusion combines a large number of latent features (e.g.,\ a long sequence of token embeddings), resulting in typically intractable computational costs for augmenting every latent feature. Our experiment with an early-fusion architecture shows however that developing more efficient augmentation networks, or selectively generating only a few important latent vectors, is a promising direction for future work.
\section{Approach}
Due to these nonidealities, tailored solutions are required for both hardware and software. To address these tasks, Peng et al.~\cite{b10} introduced DNN+NeuroSim as a circuit-level macro model for benchmarking neuro-inspired architectures on performance metrics such as chip area, latency, and energy. It supports a hierarchical organization from the device level (memory technology) through the circuit level (peripherals) and chip level (tiles, processing elements \& subarrays) to the algorithm level (neural network topologies)~\cite{b10}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{FIG/Simulator-2.pdf}
\caption{Schematic of chip floorplan for NeuroSim~\cite{b10}. The hierarchical design can be subdivided by chip into tiles, which are comprised of processing elements (PEs). These PEs are further divided into synaptic arrays of many possible memory types.}
\label{fig:neurosim}
\end{figure}
To model different technologies and levels of abstraction, the structure of the underlying hardware scheme is shown in Fig.~\ref{fig:neurosim}. In general, it is assumed that the on-chip memory is sufficient to store the synaptic weights of the entire neural network. Thus, the only access to off-chip memory is to fetch the input data and buffer intermediate values. From a top-down perspective, the chip consists of multiple tiles on which the weights are stored, a global buffer, and neural function units, including accumulation and activation units. Each tile can be further broken down into several processing elements, a tile buffer for loading activations, accumulation modules for partial sums, and an output buffer. At the lowest level, a processing element is composed of subarrays, which perform the analog calculations. For computing weight gradients, special Weight Gradient Units (WGUs) on static random-access memory (SRAM) are utilized~\cite{b10}.
The simulator must be adapted to fit the computational requirements of DFA. A direct consequence of the additional backward matrices introduced by DFA is an increase in memory requirements. Launay et al.~\cite{b7}, however, propose using sliced matrices from the matrix with the largest dimensions. As all backward matrices agree on the error dimension, the second dimension can be chosen as the highest number of output features. Nevertheless, to compute the gradients of $N$ layers in parallel, $N$ WGUs are required. In contrast, only one is needed for sequential Backpropagation.
Another implication of using random matrices for error feedback as opposed to transposed weight matrices is the elimination of the need for transposable subarrays. To support on-chip training with BP, synaptic arrays must be modified to support forward and backward computations: a transposable readout scheme realizes transposed computations with additional peripheral circuitry. Decoders, shift adders, and ADCs are duplicated and rotated 90° from the original~\cite{b10}. Without the need for transposable arrays, the duplicated peripheral circuitry is no longer needed.
One more architectural difference between BP and DFA results from the different dimensions of the matrices used during the backward pass. As the error dimension used for DFA is typically smaller than the number of hidden features in BP, and as the FLOPs for matrix-multiplications scale proportionally to their dimension, DFA consumes potentially less computational energy. Having outlined potential sources for differences in our design metrics of accuracy, area, latency, and energy, a multiobjective optimization problem arises. Different functions for these metrics need to be maximized or minimized, and ideal approaches should be found to optimize each target value while considering their impact on other metrics.
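The FLOP argument can be made concrete with a toy count of multiply-accumulate operations for the backward projections; the batch-size-1 counting and the function itself are illustrative assumptions, not the simulator's cost model:

```python
def backward_projection_macs(depth, hidden, num_classes):
    """Multiply-accumulate count for the error projection of each hidden
    layer (batch size 1; activations, biases, and weight updates ignored).

    BP:  delta a_i = W_{i+1}^T delta a_{i+1}  -> hidden x hidden MACs per layer
    DFA: delta a_i = B_i e                    -> hidden x num_classes MACs per layer
    """
    layers = depth - 1  # hidden layers that receive a projected error
    return layers * hidden * hidden, layers * hidden * num_classes

bp_macs, dfa_macs = backward_projection_macs(depth=5, hidden=1024, num_classes=10)
# With 1024 hidden features and 10 classes, DFA's projection needs
# roughly two orders of magnitude fewer MACs than BP's.
```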
\section{Technical Background}
\subsection{Direct Feedback Alignment}
The central question of credit assignment in neural networks is answered in backpropagation by passing back the error as a teaching signal. Mathematically, this corresponds to calculating the gradient with respect to the output of each layer, leading to (\ref{backprop}), where $\odot$ denotes the elementwise Hadamard product, $W_i$ is the $i$th layer's weight matrix, and $a_i$ is its activation. Further, the output-layer gradient $\delta a_N$ is defined as the error $e$.
\begin{equation} \label{backprop}
\forall i \in [1, ..., N-1]: \quad \delta a_i = \frac{\partial L}{\partial a_i} = (W_{i+1}^T \delta a_{i+1}) \odot f_i'(a_i)
\end{equation}
The final weight updates are given as
\begin{equation}
\forall i \in [1, ..., N]: \quad\delta W_i = -\delta a_i h_{i-1}^T
\end{equation}
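As a sketch of this sequential flow, the recursion can be written as follows (illustrative NumPy, batch size 1; `weights[i]` plays the role of $W_{i+1}$ in the equations above):

```python
import numpy as np

def bp_backward(weights, activations, error, dact):
    """Sequential BP backward pass (illustrative sketch, batch size 1).

    weights[i]     : matrix W_{i+1} mapping layer i to layer i+1
    activations[i] : pre-activation a_i of layer i (column vector)
    error          : e, i.e. delta a_N at the output layer
    dact           : derivative f' of the activation function
    """
    deltas = [None] * len(activations)
    deltas[-1] = error
    # delta a_i = (W_{i+1}^T delta a_{i+1}) * f'(a_i), strictly back to
    # front: each step depends on the previous one and cannot be parallelized.
    for i in range(len(activations) - 2, -1, -1):
        deltas[i] = (weights[i].T @ deltas[i + 1]) * dact(activations[i])
    return deltas
```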
Parameter updates are propagated backward sequentially through the network, allowing ``blame'' for the error to be assigned precisely to each neuron. This structure results in a computational flow where forward and backward passes are performed on the same set of weights. The update of synaptic weights requires all information of the forward process to be known within the backward process, increasing buffer requirements in hardware. Furthermore, each neuron needs precise knowledge about neurons of subsequent layers, making the gradient calculation global. In contrast, the brain is highly parallel, asymmetric, and local in its computations. However, BP is unable to parallelize the backward pass due to its sequential layer dependencies, leading to a training bottleneck within deep networks.
One potential solution to the biological implausibilities and resulting performance limitations of BP is Direct Feedback Alignment (DFA), introduced by N{\o}kland~\cite{b9}. While the algorithm retains the same general structure of BP, it employs independent forward and backward paths. Instead of symmetric weights, random matrices are incorporated for the error projection. Still, learning occurs as the network learns how to make the teaching signal useful. In mathematical terms, Equation~(\ref{dfaeq}) defines the error projection for DFA, whereas the formulas for updating the weights and the last layer's error remain the same.
\begin{equation} \label{dfaeq}
\forall i \in [1, ..., N-1]: \quad \delta a_i = (B_ie) \odot f_i'(a_i)
\end{equation}
Compared to (\ref{backprop}), the fixed random matrix $B_i$, with corresponding dimensions \emph{output features} $\times$ \emph{number of classes}, replaces the transposed forward weight matrix, $W_{i+1}^T$. Furthermore, the error $e$ is passed directly to each layer in parallel in place of $\delta a_{i+1}$, resolving layer dependencies and making it an almost entirely local and biologically plausible process~\cite{b9}. Experiments by N{\o}kland~\cite{b9}, Launay et al.~\cite{b12}, and Sanfiz et al.~\cite{b8} show DFA's performance on par with BP on tasks ranging from simple MNIST~\cite{b15} image classification to state-of-the-art methods such as Graph Convolutional Networks. Yet, DFA fails to train Convolutional Neural Networks, as experiments by Launay et al.~\cite{b12} show. Hence, this work focuses on training fully connected networks only.
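A minimal sketch of the DFA projection highlights its locality: every layer's delta is computed from the same error $e$ with no dependence on later layers (illustrative NumPy, not the hardware implementation):

```python
import numpy as np

def dfa_backward(feedback, activations, error, dact):
    """DFA error projection (illustrative sketch, batch size 1).

    feedback[i] : fixed random matrix B_i, shape (hidden_i, num_classes)
    error       : output error e, shape (num_classes, 1)
    Every layer's delta depends only on e, so unlike BP the projections
    are independent and could all run in parallel.
    """
    return [(B @ error) * dact(a) for B, a in zip(feedback, activations)]
```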
\subsection{Computing-in-Memory}
Despite fundamental differences in the computational process, both learning rules are based on matrix-vector multiplications, which require large numbers of parallel MAC (Multiply \& Accumulate) operations, as seen in Equations (\ref{backprop}) and (\ref{dfaeq}). The number of computations required increases with network depth, and therefore, the demand for computational power on underlying hardware systems grows extensively. CIM aims to close the gap between memory and computation in the von Neumann model: By storing the network's weights in memory, the analog nature of the underlying devices can be exploited to perform MAC operations and directly incorporate the physical memory in computation. Since the weight transfer from memory to chip is no longer needed, high performance and energy efficiency are achieved~\cite{b13}. In addition, similar to the brain, the calculations are carried out in a highly parallelized scheme, allowing for further computational speedup.
The MAC operations are executed in analog memory by encoding the input vector as a voltage signal $V$ and applying it along each row to memory cells programmed to conductances representing the matrix weights. According to Ohm's Law, the current through each device is the product of $V$ and the device conductance, while currents accumulate down each column according to Kirchhoff's current law. Hence, the resulting currents
\begin{equation}
I_j = \sum_{i=1}^{n} V_i g_{ij}
\end{equation}
are summed along the column, where $g_{ij}$ is the conductance of the memristor in the $i$th row and $j$th column. This corresponds to a dot product of the column conductances and the input voltage signal. Each column of weights can represent a separate dot product executed in parallel. Results can be obtained with a single read operation, enabling enormous throughput and high energy efficiency~\cite{b13}.
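In software, this analog read reduces to a single matrix-vector product; the voltages and conductances below are arbitrary illustrative values:

```python
import numpy as np

# Row voltages encode the input; cell conductances encode the weights.
# Ohm's law gives per-cell currents V_i * g_ij, and Kirchhoff's current
# law sums them down each column: I_j = sum_i V_i * g_ij.
V = np.array([0.2, 0.5, 0.1])        # input voltages, one per row
G = np.array([[1.0, 2.0],            # conductances, one column per output
              [0.5, 1.5],
              [2.0, 0.5]])
I = V @ G                            # column currents, approx. [0.65, 1.2]
```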
However, this scheme encounters key challenges involving various device- and circuit-level nonidealities attributed to the memory cells and data-conversion circuitry. Potential issues arise from both the fabrication and computing processes; e.g., process variations cause devices across a single chip to behave differently regarding conductivity ranges and programmability, and analog-to-digital converters (ADCs) have limited precision in interpreting analog outputs from memory. The resulting errors accumulate in matrix-vector multiplications and degrade the model's performance. It is therefore critical to study the impact of hardware-induced imperfections on large-scale DNNs.
\section{Conclusion}
While Backpropagation yields better model performance in most cases, especially in noisy environments, Direct Feedback Alignment resolves the bottleneck of sequential dependencies, thus allowing for parallelization. In resource-constrained environments, such as edge devices, DFA provides a valuable alternative: Tradeoffs between area and latency due to incorporating additional hardware can be adjusted for each use case. When a smaller chip design is preferred, the degree of parallelization and the number of area-costly WGUs can be decreased. This potentially saved area could be utilized to increase the precision of quantized values, e.g., activation values. Hence, DFA could allow for higher precision values at the same chip area as BP. Accuracy improvements would be achieved, allowing DFA to overcome its imprecise nature.
From a biological perspective, DFA on neuromorphic hardware enables local computations and therefore solves fundamental implausibilities of learning in artificial neural networks. Spiking neural networks could build on this and further enhance the idea of bio-plausibility. This would allow for even higher energy efficiency and noise tolerance, which have been discussed for conventional neural networks in detail in this paper.
\section{Introduction}
As model complexities in deep neural networks (DNN) increase, machine learning programs continue to achieve remarkable successes, even surpassing human performance~\cite{b1}. These models place heavy demands on computing power, a problem exacerbated by the discontinuation of Moore's Law, as the limits of transistor technology are being reached~\cite{b2}. Fetching, computing, and updating millions of weights puts enormous demand on data movement~\cite{b3}. In particular, a growing imbalance results from the strict separation of computation and memory in von Neumann architectures. Expensive memory accesses are communicated via buses with limited bandwidth, creating a drastic bottleneck: two orders of magnitude more energy is consumed by data transfers between memory and chip than by floating-point operations (FLOPs)~\cite{b2}.
Computation-in-Memory (CIM) is an emerging solution in which memory modules and data processing units are unified to reduce data transportation from memory to chip. Resistive random-access memory (RRAM) is a new generation of non-volatile memory systems with low access time and energy consumption~\cite{b5}. The ability of RRAM macros to execute high-throughput dot-products with high computational density and energy efficiency is desirable for constructing real-time, power-constrained computing systems~\cite{b3}. Therefore, this work focuses specifically on the interrelationship between deep learning and neuromorphic hardware, with an emphasis on energy, area, and latency.
While the brain has always inspired computing paradigms, there is little empirical relation between deep learning and neuroscience. In fact, the common Backpropagation (BP) algorithm is often criticized for its biological implausibility~\cite{b6}. Launay et al.~\cite{b7} primarily cite the global dependence of each of the network's layers on postsynaptic layers. Known as ``backward locking'', this prevents the network from parallelizing the learning process. Furthermore, biological synapses are assumed to be unidirectional and do not allow symmetrical weight structures in feedforward and feedback directions, commonly cited as the ``weight transportation problem''~\cite{b8}. Consequently, new algorithms have been developed with more faithful neural behavior. A family of related approaches was introduced by N{\o}kland~\cite{b9}, including Direct Feedback Alignment (DFA), which successfully solves these implausibilities and achieves competitive performance despite a fundamentally altered computational flow~\cite{b9}.
Furthermore, there is a need for algorithms that can robustly tolerate the nonideal properties of neuromorphic hardware and fully seize their potential as a novel computing scheme. In this work, we set out to analyze different learning rules' performance on neuromorphic hardware architectures and vice-versa. This involves measuring the effects of in-memory implementations, with particular attention to chip area, latency, and energy, while improving the software-side accuracy. The primary focus is on deep learning and its hardware/software co-design. Instead of optimizing absolute learning performance, relative differences will be compared over different technologies and various metrics. In Section II, we will provide a more detailed description of the DFA algorithm and CIM schemes. In Section III, we will outline our approach for determining algorithm effects on hardware and vice-versa. Finally, we will discuss our results in Section IV and conclude the paper in Section V.
\section{Discussion}
With previous works by N{\o}kland~\cite{b9}, Launay et al.~\cite{b12}, and Sanfiz et al.~\cite{b8} showing DFA's accuracy on par with BP's under given constraints, the focus of the following experiments is predominantly on the relative hardware comparisons between the algorithms and their scaling in the context of deep learning. As the estimates were relatively constant over the training process, only the breakdown of the first epoch is depicted. Throughout the experiments, a default network of five layers with 1024 hidden features per layer is favored as BP and DFA achieve comparable classification performance on the Fashion MNIST~\cite{b14} task with this topology. While different analyses might involve adjustments to these parameters, any changes will be indicated.
\subsection{Model Performance of Direct Feedback Alignment}
When sweeping different hardware and network hyperparameters, it is vital to consider the general performance of DFA compared to BP. This ensures comparable performance on the classification task, which remains the fundamental output measure. Experiments by Sanfiz et al.~\cite{b8} and N{\o}kland~\cite{b9} indicate that the best performance for DFA is obtained by using Xavier initialization and ReLU activation functions. Furthermore, we observed substantially better results when training with larger batches, seemingly averaging out the noise induced by DFA's inherent gradient approximation. Initially, we compared the number of iterations required for the models to converge to a stable training accuracy. Fig.~\ref{fig:howmany} displays the resulting model performance for each learning rule on a 2-layered network compared to 8 layers.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{FIG/many.pdf}
\caption{Accuracy of DFA and BP over 25 epochs for 2 layers and 8 layers. These results demonstrate that DFA requires more epochs for convergence as network depth increases.}
\label{fig:howmany}
\end{figure}
This leads to an important insight for latency evaluations within deep networks: as the network depth increases, DFA requires more epochs to converge. Especially within the first few epochs, worse performance is observed, whereas BP appears slightly more stable in its convergence. Refinetti et al.~\cite{b11} characterize this phenomenon as ``memorise, then learn'', where first the matrices $W$ and $B$ are aligned before the learning begins. Since this process of alignment occurs sequentially, the effect becomes more noticeable for deeper networks. A second drawback of the required alignment phase becomes more apparent when different matrix widths, i.e., the number of hidden features, are swept. Table~\ref{tab:width} compares the final test accuracy for both learning rules after 100 epochs of training over different matrix widths.
\begin{table}[h]
\centering
\caption{Final test accuracy for different layer widths}
\begin{tabular}{c | c | c}
\textbf{Width} &\textbf{DFA} &\textbf{BP} \\
\hline
8 &0.08 &0.44\\
16 &0.53 &0.73\\
32 &0.69 &0.81\\
64 &0.77 &0.82\\
128 &0.75 &0.82\\
256 &0.79 &0.80\\
\end{tabular}
\label{tab:width}
\end{table}
The gap is most prominent for 8 hidden features per layer, where DFA does not exceed the 10\% threshold of random guessing. Evidently, a certain degree of freedom (given by the number of weights) is required to align the weights and still have the flexibility to adjust them for learning.
\subsection{Impact of Circuit Imperfections on DFA-Based Training}
With the above findings in mind, the impacts of circuit parameters, namely ADC precision and subarray size, on network performance will be investigated. However, corresponding nonidealities cannot be integrated into the BP analysis. As BP involves complex data dependencies, analyzing circuit imperfections on the software side requires significant graphics memory. In contrast, DFA does not rely on maintaining its computational graph, and training analyses and assumptions can be tested. As both DFA and BP rely on the same hardware computations, general trends can be inferred for on-chip training. To cope with memory requirements, a three-layer network is preferred on the MNIST task for this analysis.
\begin{figure}[t]
\centering
\includegraphics[width=0.24\textwidth]{FIG/adc-2.pdf}
\includegraphics[width=0.24\textwidth]{FIG/subarray-2.pdf}
\caption{The effects of ADC precision (left) and subarray size (right) on DFA-based training. At least a 3-bit ADC is required for adequate accuracy results, while increasing subarray size increases errors due to IR drop.}
\label{fig:circuitTrain}
\end{figure}
Fig.~\ref{fig:circuitTrain} displays how training stops for 1-bit ADC precision due to vanishing gradients and quantization errors, while the 2-bit precision does not achieve results above the 10\% threshold. In contrast, precision levels of 3 and 4 yield a significant improvement in accuracy. Still, a large number of epochs is required for convergence, indicating a slow learning process. At the same time, it should be considered that ADC precision has significant impact on area and energy consumption, constituting a major bottleneck in the CIM scheme. The tradeoff introduced by subarray size is linked to the overhead for peripheral circuits. Larger dimension subarrays invoke less overhead by processing more array data at each peripheral circuit. However, this also results in lower accuracy, as seen in Fig.~\ref{fig:circuitTrain}, due to the substantial IR drop associated with a larger subarray, generating noise in the partial sum accumulation.
\subsection{Noise Induced by Quantization \& Device Imperfections}
Unlike circuit nonidealities, device imperfections and predefined precision can be evaluated for both DFA and BP.
\subsubsection{Quantization Precision}
When considering quantization effects, the gradient vanishing problem becomes critical for BP, especially for deep networks. Sanfiz et al.~\cite{b8}, however, indicate that DFA is less sensitive to this because the errors are not propagated but passed directly to each layer. Still, it is assumed that with lower gradient precision, it becomes increasingly difficult to align the matrices $W$ and $B$. As a result, when sweeping weight and gradient precisions, learning for both algorithms stopped at precisions of 4 bits or fewer. Given a gradient precision of 5 bits, the error quantization leads to more pronounced differences. DFA learns at all swept precisions, although the 1-bit setting began to improve only after the 50th epoch. For BP, learning is observed without delay.
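The asymmetry between quantizing the error once (DFA) and re-quantizing it at every layer (BP) can be illustrated with a toy uniform quantizer; both the quantizer and the random projection matrices are assumptions for illustration, not NeuroSim's exact quantization scheme:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantizer (an illustrative stand-in, not the
    simulator's exact scheme)."""
    levels = 2 ** (bits - 1) - 1            # e.g. 3 positive levels at 3 bits
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(1)
e = rng.standard_normal(64)
layers = [rng.standard_normal((64, 64)) / 8 for _ in range(6)]

# DFA-style: the error is quantized once and then projected to each layer.
e_once = quantize(e, bits=3)

# BP-style: the propagated error is re-quantized after every layer's
# projection, so quantization noise can accumulate toward early layers.
d = quantize(e, bits=3)
for W in layers:
    d = quantize(W.T @ d, bits=3)
```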
\begin{figure}[b]
\centering
\includegraphics[width=0.45\textwidth]{FIG/error.pdf}
\caption{Effects of error precision on test accuracy for BP.}
\label{fig:error}
\end{figure}
Focusing on the relative differences within a given learning rule in Fig.~\ref{fig:error}, however, visible performance steps are observed for BP for corresponding precision levels. In contrast, DFA yields almost the same accuracy for all precisions. One possible explanation is that DFA quantizes error only once before passing to each layer. For BP, on the other hand, the error is repeatedly quantized from layer to layer, resulting in a higher degree of accumulated error for initial layers at low precisions. Regarding hardware metrics, no substantial differences were observed for either learning rule in changing error precision due to the error's small dimensional size.
In addition, DFA is not observed to have much robustness to general noise and quantization. When sweeping the precision of the activation values, DFA only achieves competitive performance for at least 3 bits, whereas BP achieves high performance for all precisions tested. Since storing many activation values during training is essential, smaller precisions are preferred on the hardware side due to lower area, energy, and latency. Therefore, the quantization robustness of BP is a measurable advantage over DFA in this regard.
\subsubsection{Device Imperfections}
In addition to the precision of the models, hardware nonidealities can also lead to computational errors causing imprecise results. As demonstrated experimentally in this work, device-to-device imperfections are absorbed during iterative training: no visible difference was observed across different levels of variability. However, error in programmed conductance states as a result of pulse-to-pulse (cycle-to-cycle) variation can have an impact on model performance. Sweeping typical ranges from 1\% to 5\% standard deviation over the mean of conductances, the learning rules yield fundamentally different results: BP converges to comparable accuracies over different degrees of variability, whereas DFA performs well only for 1\% deviation. More severe imperfections of 2\% lead to strongly fluctuating test accuracies with a drop in overall performance.
As the cycle-to-cycle variation changes over time in contrast to the constant device-to-device variations, these errors are not absorbed during the iterative training of DFA. Furthermore, unlike BP, which is able to overcome this variation and converge, the cycle-to-cycle variation combined with the pressure for the forward weights to align with the random weights seems to prevent the algorithm from converging.
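The distinction can be illustrated numerically; the 2\% standard deviation below is taken from the swept range, while the Gaussian sampling scheme itself is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
g_target = 1.0                       # intended conductance (arbitrary units)

# Device-to-device: one fixed offset per device, drawn once and constant
# over training, so iterative weight updates can compensate for it.
g_d2d = g_target * (1.0 + rng.standard_normal() * 0.05)

# Cycle-to-cycle: a fresh error on every programming pulse, so repeated
# writes of the same target land on different conductances each time.
writes = 10_000
g_c2c = g_target * (1.0 + rng.standard_normal(writes) * 0.02)
```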
\subsection{Chip Area}
As Fig.~\ref{fig:areabreakdown} shows, the additional weight matrix expressed by the CIM area consumption (representing the area in terms of memory cells) is marginal, consuming less than 1\% of total chip area.
\begin{figure}[b]
\centering
\includegraphics[width=0.24\textwidth]{FIG/areaBreakdownBp.pdf}
\includegraphics[width=0.21\textwidth]{FIG/areaBreakdownDfa.pdf}
\caption{Breakdown of chip area for BP (left) and DFA (right). While the ADC circuitry consumes the most area for BP, the WGUs consume the most area for DFA.}
\label{fig:areabreakdown}
\end{figure}
Instead, peripheral circuits determine the majority of chip area, including ADCs, ICs (integrated circuits), accumulation logic, and weight gradient units (WGUs). For BP (left), chip area is mainly attributed to ADCs, since the need for transposable weights doubles the ADCs per CIM macro; for DFA (right), it is mainly attributed to WGUs, which perform parallel weight updates for each layer. Thus, the comparison between chip areas comes down to the driving forces behind the use of ADCs and WGUs, as well as how the number of both changes with deeper networks. Fig.~\ref{fig:growthrates} reveals the trends of both ADCs in BP and WGUs in DFA to be almost identical, resulting in similar trends for total chip area as summarized in Fig.~\ref{fig:area}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{FIG/First.pdf}
\caption{Hardware-specific area consumption over network depth, including ADC (left) and WGU (right) area for both BP and DFA.}
\label{fig:growthrates}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=0.4\textwidth]{FIG/areaOverDepth.pdf}
\caption{Total chip area for both BP and DFA over network depth.}
\label{fig:area}
\end{figure}
Despite the additional matrix and memory requirements, DFA consumes marginally less area than BP. As the number of layers increases, this saving becomes relatively smaller, resulting in DFA requiring slightly more area for 10 layers.
Scaling with the network depth, WGUs implemented on SRAM cells become more costly in terms of area than the peripherals. Expanding the network in terms of matrix width, a more significant difference is observed for memory utilization: The discrete step of tiles becomes more evident when comparing the charts in Fig.~\ref{fig:width}, where for a fixed network size of 5 layers the number of hidden features is altered. At 1025 hidden features, the tile limit of 1024 memory cells is exceeded. Therefore, a 1x1 tile structure has to be extended to a 2x2 structure to account for all weights. This results in low utilization and a significant area overhead for peripherals.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{FIG/Second.pdf}
\caption{Impact of the number of hidden features on memory utilization (left) and chip area (right). A single tile's limit of 1024 memory cells means that 1025 hidden features requires more tiles, significantly increasing area and reducing memory utilization.}
\label{fig:width}
\end{figure}
\subsection{Energy Consumption}
Previous sections discussed a reduction in the number of peripherals and FLOPs required to compute the gradients in DFA. Breaking down energy consumption into different hardware categories, however, shows a 95\% share is allocated to accessing off-chip buffers. Activations, errors, and gradients have to be loaded to and from the off-chip memory, making buffering the bottleneck of the training process. Hence, off-chip accesses become the driving force of energy consumption, underlining the importance of the CIM paradigm. Even though FLOPs are saved, the resulting matrices are of the same size, and so are the intermediate values. Thus, no differences in buffer accesses are observed for DFA and BP, resulting in virtually the same energy consumption.
\subsection{Training Latency}
A similar pattern is seen for training latency, where 91\% of the latency is caused not by the actual computations but by buffering intermediate results. Given the assumption that different layers access different DRAM blocks in parallel, a considerable speedup can be achieved by parallelizing the weight gradient calculation and writing, which represent 96\% of the overall latency. Thus, fundamental differences can be observed when compared over network depth, as shown in Fig.~\ref{fig:latency}.
In contrast to BP with strictly sequential computational flow, DFA is executed in a heavily parallelized scheme. As network depth increases, the latency for BP grows linearly, while DFA takes almost the same training time for an increased number of layers. As more layers are added, the ratio of latencies of BP to DFA approaches a scaling factor of $N$ for $N$ parallelized layers. Considering the central finding of subsection A that DFA requires more epochs to converge, the speedup with respect to performance is reduced, however. Still, parallelization enables substantial acceleration and thus higher throughput.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{FIG/latencyOverDepth.pdf}
\caption{Training latency for BP and DFA over network depth. The ability of DFA to parallelize weight updates leads to considerable reduction in latency.}
\label{fig:latency}
\end{figure}
\section{Introduction}
With the ever-growing volume of online information, users can easily access an increasingly vast number of online products and services.
To alleviate information overload and expedite the acquisition of information, retrieval systems have emerged and begun to play important roles in modern society.
Search and recommendation are two of the most important domains of retrieval systems; both can be viewed as ranking systems that output an ordered list.
Search systems aim to retrieve relevant entities with respect to the information need(s) behind a query launched by users from a collection of resources.
Recommendation systems are primarily interested in using the user-item interaction history to predict personalized user interests, hence recommending potential satisfactory items to users.
For a long time, \textit{relevance} has dominated the research in both search and recommendation, where the key is to measure whether the system is able to retrieve those items that are regarded as ``relevant'' given part of the ground truth labels~\cite{DBLP:conf/uai/RendleFGS09, DBLP:conf/www/SarwarKKR01, he2020_lgcn}.
Although these systems are able to retrieve or recommend most relevant items, they may jeopardize the utilities of stakeholders in the system.
Taking the recommendation scenario as an example, systems that pursue only high relevance can harm both of the two most critical stakeholders: customers (the user side) and providers (the item side).
Customers generally suffer from the \textit{redundancy issue}, where the system recommends redundant or very similar items; this may create a ``filter bubble''~\cite{filterbubble} and harm customer satisfaction in the long term.
For instance, a movie recommendation system that keeps recommending Marvel action movies to a customer who once clicked ``\textit{The Batman}'' may deprive her of the opportunity to discover other genres of movies.
This does not necessarily indicate the customer only likes Marvel action movies.
It is highly possible that the customer has various interests but the recommender never explores other choices, thus jeopardizing the customer's long-term user experience.
Providers, on the other hand, generally suffer from \textit{exposure unfairness} due to the ``super-star'' economy~\cite{superstar}, in which a very small number of the most popular items and providers take up an extremely large proportion of the exposure to customers.
Such unfairness may leave newly arrived or less popular providers discouraged about attracting customers and eventually drive them to quit the platform.
Once only a few of the most popular providers remain, a monopoly becomes likely, which is detrimental to a healthy marketplace and society.
From the perspective of search, which is less concerned with personalization than recommendation, similar limitations arise when focusing merely on relevance.
Suppose there is an image search system that always retrieves images of ``jaguar vehicles'' when the query ``jaguar'' is entered.
Although it cannot be disputed that the output of this search system is significantly relevant to the input query, it cannot be considered ideal because the query ``jaguar'' has additional meanings, such as ``jaguar as an animal'', which are never retrieved.
Customers are dissatisfied with the system if they cannot obtain the variety of information they desire.
On the other side, this system suffers from the same \textit{exposure unfairness} described in the recommendation scenario, here for providers who offer pictures of jaguar animals.
This is another example, from the field of search, of how a singular focus on relevance can adversely affect multiple stakeholders in a system.
Thus, in recent years, many criteria other than \textit{relevance} have gained tremendous attention in the field of recommendation, and \textit{diversity} is one of the most widely studied.
It has been recognized that diversified search and recommendation can not only increase the likelihood of satisfying various needs of customers in both short term and long term, but also assist to increase item exposure especially for those less popular providers~\cite{abs-1905-06589_Survey_DivRS, DBLP:conf/kdd/SunGZZRHGTYHC20}.
Considering the critical role of diversity in maintaining a satisfactory and healthy information retrieval marketplace, we hereby offer a comprehensive review on definitions, metrics, and techniques of diversity studied in search and recommendation.
\textbf{Necessity of this Survey}.
Although many papers have been published on this topic recently, to the best of our knowledge, none of them has provided a global picture of diversity in both search and recommendation, as well as the corresponding diversity metrics and techniques.
We find that the usage of the terminology ``diversity'' in recent works is usually inconsistent across papers, without a clear claim as to which diversity perspective is emphasized.
In addition, some studies lacked an explanation of why they chose particular diversity criteria for measurement.
Given the growing awareness of the importance of diversity and the rapid development of diversity techniques in both search and recommendation, we believe our survey can provide a comprehensive summary and organization of the diversity concerns in these fields and offer future researchers a better comprehension of the current state-of-the-art and open problems on this topic.
\textbf{Difference with Existing Surveys}.
A number of surveys in search and recommendation have been published recently, focusing on different perspectives.
For instance, in the field of search, ~\citet{AzadD19_Survey_QE} review the Query Expansion (QE) techniques and ~\citet{Azzopardi21_Survey_Cog} summarizes the usage of cognitive bias.
In the field of recommendation, ~\citet{Huang_2019_Survey_Privacy} provide a summary on privacy protection in recommendation systems and ~\citet{abs-2010-03240_Survey_Bias} focus on bias and debias techniques.
Some other well-cited surveys focus on more general problems in search and recommendation, such as~\cite{10.5555/222929_Survey_IR} and ~\cite{ZhangYST19_Survey_DLRS}.
However, the perspective of diversity has
not been well reviewed in existing search and recommendation surveys.
To the best of our knowledge, there exist several surveys on the diversity in recommendation~\cite{2017survey:DBLP:journals/kbs/KunaverP17, abs-1905-06589_Survey_DivRS}, but they did not systematically organize the diversity concerns in both search and recommendation, and the papers they collected are not up-to-date.
To offer a comprehensive review on this topic, we make the following contributions in this survey:
\begin{itemize}
\item Collecting the latest works and summarizing the types, metrics, and techniques of diversity in both search and recommendation systematically under a unified organization.
\item Conducting a detailed analysis and presenting a taxonomy of existing diversity techniques, as well as discussing their strengths and drawbacks.
\item Recognizing open problems and discussing future directions to inspire more research on diversity in search and recommendation.
\end{itemize}
\textbf{Papers Collection}.
We collect over 80 papers that analyze the diversity issues or propose novel techniques in search and recommendation.
We first search the related top-tier conferences and journals to find related work, including KDD, NeurIPS, CIKM, RecSys, ICDM, AAAI, WSDM, The WebConf, SIGIR, SIGMOD, TKDD, TKDE, TOIS, etc., with the keywords ``search'', ``recommendation'', ``ranking'' or ``retrieval'' combined with ``diversity'', ``serendipity'' or ``coverage'' from the year 2016 to 2022.
We then traverse the citation graph of the identified papers, retaining the papers that focus on diversity.
Fig.~\ref{fig:statistics} illustrates the statistics of collected papers with the publication time and venue.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/statistics.png}
\vspace{0.05mm}
\caption{The statistics of publications related to diversity in search and recommendation with the publication year and venue.}
\label{fig:statistics}
\end{figure}
\textbf{Survey Audience and Organization}.
This survey is useful for researchers who are new to diversity problems and are looking for a guide to quickly enter this field, as well as those who wish to stay abreast of the most recent diversity strategies in search and recommendation.
The rest of the survey is organized as follows:
\begin{itemize}
\item In Section~\ref{sec:div in search} and~\ref{sec:div in rec}, we summarize the categories and concerns of diversity in search and recommendation.
\item In Section~\ref{sec:preliminaries}, we provide background and preliminaries on search and recommendation systems, followed by listing the notations we used in this survey.
\item In Section~\ref{sec:metrics}, we review the metrics of diversity generally used in search and recommendation, and systematically categorize them using a unified taxonomy.
\item In Section~\ref{sec:offline-approach} and~\ref{sec:online-approach}, we review the approaches for enhancing diversity in search and recommendation, from both the offline and online perspectives.
\item In Section~\ref{sec:future}, we summarize the open problems and future directions.
\end{itemize}
\section{Diversity in Search}
\label{sec:div in search}
Diversifying search results has received attention since the end of last century, where one of the earliest works is Maximal Marginal Relevance (MMR) proposed by~\citet{CarbonellG98MMR} in 1998.
Later, ~\citet{ClarkeKCVABM08alphaNDCG} present a framework that systematically rewards novelty and diversity for evaluating information retrieval systems, which prompted a series of works on diversity measurement and improvement in search.
As summarized by~\citet{RadlinskiBCJ09SIGIR_Forum} in the 2009 SIGIR Forum, diversity in search can be generally categorized into two classes based on whether the diversity is treated as uncertainty about the information need, or part of the information need.
These two concerns are named (\romannumeral1) $\textit{extrinsic diversity}$ and (\romannumeral2) $\textit{intrinsic diversity}$, respectively, and are described below.
\subsection{Extrinsic Diversity}
Extrinsic diversity is related to the situation where uncertainty occurs in the search, which can be further divided into the \textit{ambiguity} of the query meaning and the \textit{variability} of the user intent.
Generally, these two uncertainties co-occur in a search, as was the case with the query ``jaguar''.
In other cases, even if there is no $\textit{ambiguity}$ in the query, the user intents may still contain $\textit{variability}$.
For instance, considering a query ``BioNTech, Pfizer vaccine'', a patient may seek for more information on the vaccination's effect, whereas a doctor may be more concerned with the pharmaceutical ingredients and an entrepreneur may be more interested in the company operations of BioNTech.
The greater the ability of search results to encompass various query meanings and satisfy multiple user intents, the greater the extrinsic diversity.
\subsection{Intrinsic Diversity}
Different from extrinsic diversity which treats diversity as uncertainty about the information need, intrinsic diversity treats diversity as part of the information need itself, even given a single well-defined intent.
Under this definition, intrinsic diversity can be comprehended as the need for avoiding redundancy in the result lists, which is comparable to the novelty definition by~\citet{ClarkeKCVABM08alphaNDCG}.
The motivation for intrinsic diversity is intuitive: presuming the input query is ``jaguar as an animal'' with little ambiguity, users may anticipate the search results to contain images of different jaguars from diverse views and angles, rather than the same jaguar with the same view.
As such, the less redundancy in the search results, the greater the intrinsic diversity.
To clarify the distinction between extrinsic diversity and intrinsic diversity: the former responds to a scenario with various search intents, whereas the latter responds to a scenario with a single search purpose.
In real-world cases, both diversity concerns are significant in search and can be measured in a hierarchical and joint way.
For instance, a search system may be expected to satisfy various information needs for diverse search intents, while avoiding redundancy for each specific one.
\section{Diversity Concerns in Recommendation}
\label{sec:div in rec}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{fig/diversity_concerns.png}
\caption{Diversity in search and recommendation. Pink boxes indicate that they were originally proposed and generally used in search, while blue boxes indicate that they are generally used in recommendation.}
\label{fig:div_concern}
\end{figure}
As one of the most significant applications of information retrieval, the diversity in recommendation systems has also been explored.
Although the definition of diversity in search is also applicable in the field of recommendation, researchers study the diversity in this field from other perspectives.
There are generally two categories of diversity in recommendation across different works: (\romannumeral1) \textit{individual-level diversity} and (\romannumeral2) \textit{system-level (aggregated) diversity}.
Each diversity concern is relevant to one of the two most significant stakeholders: customers and providers, which represent the user side and the item side, respectively.
The \textit{individual-level diversity} is relevant to the satisfaction of customers, while the \textit{system-level diversity} is relevant to the fairness of providers.
In this section, we offer a review and comparison of these two diversity concerns in recommendation systems.
\subsection{Individual-level Diversity}
The customer is one of the two most significant stakeholders in recommendation systems, whose satisfaction can be influenced by not only the recommendation relevance but also the diversity, serendipity, novelty, etc.
Therefore, one category of diversity in recommendation, individual-level diversity, puts the customer at the core and quantifies the degree to which the items recommended to each individual customer differ from one another.
As a result, the recommendation list for each individual is considered separately.
From another perspective, individual-level diversity focuses on the problem of how to avoid recommending redundant (but still relevant) items to a customer given the previous recommendation list.
Thus, a higher degree of individual-level diversity can provide customers with the opportunity to view more novel items, thereby satisfying diverse demands and facilitating the exploration of various interests.
\subsection{System-level Diversity}
Rather than focusing on the redundancy of recommended items to each customer, system-level diversity reflects the ability of the entire system to recommend those less popular or hard-to-find items.
Under this category, all customers are aggregated at the system level, and diversity measures the dissimilarity among all the recommendations the entire system has made.
A high degree of system-level diversity now indicates that the system can recommend a wide range of items rather than only those bestsellers or popular items, and is especially beneficial to those minority provider groups.
In other words, system-level diversity is relevant to the exposure fairness among providers, which is important for maintaining a healthy marketplace.
It is worth noting that individual-level diversity and system-level diversity address two distinct concerns with little overlap.
System-level diversity is not a simple average of individual-level diversity across all customers.
It is conceivable for a system to have a high degree of individual-level diversity but a low degree of system-level diversity, and vice versa.
For example, if the system recommends to all users the same five best-selling items that are not similar to each other, the recommendation list for each user is diverse (i.e., high individual-level diversity), but only five distinct items are recommended to all users and purchased by them (i.e., resulting in low system-level diversity or high sales concentration).
In the other case, if the system recommends the same and unique category of items to each user, then the individual-level diversity is low, while the system-level diversity can be high.
A toy example is provided in Fig.~\ref{fig:toy_example}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig/toy_example.png}
\caption{A toy example to show that the individual-level diversity and system-level diversity in recommendation are different concerns with little overlap. In this illustration, different shapes refer to different categories, while different colors with the same shape refer to different items under the same category. Consider a top-5 recommendation and assume that there are an extremely large number of users and items in the system (the same as the real-world scenarios). In case 1, the system recommends the same 5 categories of items to each user; in case 2, the system always recommends a unique category of items to each unique user. Therefore, in case 1, the individual-level diversity is high and the system-level diversity is low, while in case 2, the individual-level diversity is low and the system-level diversity is high.}
\label{fig:toy_example}
\end{figure}
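The toy example can also be sketched in code (a minimal sketch with hypothetical recommendation lists; here individual-level diversity is measured as the average number of distinct categories per user's list, and system-level diversity as the number of distinct items recommended across all users):

```python
# Minimal sketch of the toy example: top-5 recommendation for 4 users.
# Each recommended item is a (category, item_id) pair; the data is hypothetical.

def individual_diversity(rec_lists):
    """Average number of distinct categories within each user's list."""
    return sum(len({cat for cat, _ in lst}) for lst in rec_lists) / len(rec_lists)

def system_diversity(rec_lists):
    """Number of distinct items recommended across the entire system."""
    return len({item for lst in rec_lists for item in lst})

# Case 1: every user receives the same 5 items from 5 different categories.
case1 = [[(f"c{k}", f"item{k}") for k in range(5)] for _ in range(4)]
# Case 2: each user receives 5 items from a single, user-specific category.
case2 = [[(f"c{u}", f"item{u}_{k}") for k in range(5)] for u in range(4)]

print(individual_diversity(case1), system_diversity(case1))  # 5.0 5
print(individual_diversity(case2), system_diversity(case2))  # 1.0 20
```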
\section{Preliminaries and Notations}
\label{sec:preliminaries}
The preliminaries and notations we used in this paper are shown in Table~\ref{table:notation1}.
\begin{table}[h]
\centering
\caption{\label{tab:formulation}Description of Notations.}
\vspace{1mm}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{cl}
\toprule
Notations & Description \\
\midrule
$\mathcal{U}, \mathcal{D}$ & The set of all users and items\\
$\mathcal{D}_u$ & The set of interacted items of user $u$\\
$u, d$ & An individual user and item\\
$\Theta$ & Learnable model embeddings\\
$\mathbf{M}$ & The historical interaction matrix between users and items\\
$\sigma$ & A list of items \\
$\sigma^{-1}(d)$ & The ranking position of item $d$ in $\sigma$ \\
$n_S$ & The total number of subtopics (categories)\\
$\mathcal{S}(d)$ & The set of subtopics covered by item $d$\\
$c(l, s)$ & The number of items covering subtopic (category) $s$ in list $l$\\
$e_i$ & The exposure of item $d_i$ in the entire system\\
$o(d|u)$ & The score of $d$ with respect to $u$\\
\bottomrule
\end{tabular}}
\label{table:notation1}
\end{table}
\section{Metrics of Diversity in Search and Recommendation}
\label{sec:metrics}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{fig/diversity_metrics.png}
\caption{Diversity metrics in search and recommendation. We unify the metrics in one unified classification since all metrics can be theoretically used in both fields. Metrics in pink boxes indicate that they were originally proposed and generally used in search, while those in blue boxes indicate that they are generally used in recommendation.}
\label{fig:div_metric}
\end{figure}
Although many diversity metrics were proposed separately in either the field of search or recommendation, they can actually be applied interchangeably in both fields, since they all commonly aim to measure the dissimilarity and non-redundancy among a list of items.
Therefore, we summarize the metrics of diversity in both fields under one unified taxonomy.
We first categorize these metrics into two classes based on whether the relevance of items to the user (query) is taken into consideration as: (\romannumeral1) \textbf{Relevance-oblivious Diversity Metrics} and (\romannumeral2) \textbf{Relevance-aware Diversity Metrics}.
We further classify these metrics into sub-classes with a more fine-grained manner.
A summary table is maintained to highlight the applicability of all these metrics in either search, or recommendation, or both.
To clarify, the term \underline{``\textit{item}''} in the rest of this paper can refer to both \underline{entities retrieved from search systems} and \underline{goods displayed by recommendation systems}.
These metrics and corresponding works are summarized in Table~\ref{tab:metrics}.
\begin{table}[h]
\caption{A lookup table for the diversity work from several metrics.}
\label{tab:metrics}
\vspace{1mm}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l|l|l|l|l}
\toprule
\toprule
\multicolumn{4}{l|}{\textbf{Diversity Metrics}} & \textbf{Related Work} \\ \hline
\multirow{10}{*}{\makecell{Relevance-oblivious\\Diversity Metrics}} & \multirow{3}{*}{Distance-based} & \multicolumn{2}{c|}{Cosine Diversity Distance} & \tabincell{l}{\cite{DBLP:conf/recsys/ZhangH08,DBLP:conf/www/ZieglerMKL05,NIPS2018:DBLP:conf/nips/ChenZZ18,DBLP:conf/kdd/HuangWZX21,CIKM2020:DBLP:conf/cikm/ChenRC0R20,ChengWMSX17Acc_Diverse,li2020directional} \\ \cite{GanLC20DPP4,Chen2017DPP1,liang2021enhancing,DBLP:conf/wsdm/ParaparR21,WasilewskiH16Diverse_LTR,vargas2011intent,DBLP:conf/wsdm/StamenkovicKAXK22}} \\
\cline{3-5}
& & \multicolumn{2}{c|}{Jaccard Diversity Distance} & \cite{DBLP:conf/sigir/GongZC0B0YQ22,DBLP:conf/recsys/TsukudaG19} \\
\cline{3-5}
& & \multicolumn{2}{c|}{Gower Diversity Distance} & \cite{Haritsa09Gower} \\
\cline{2-5}
& \multirow{6}{*}{Coverage-based} & \multirow{3}{*}{Rank-unaware} & P-Coverage & \cite{DBLP:journals/tois/HerlockerKTR04,DBLP:conf/recsys/GeDJ10,DBLP:conf/recsys/PaudelHB17,DBLP:conf/wsdm/StamenkovicKAXK22,DBLP:conf/sigir/BalloccuBFM22} \\
\cline{4-5}
& & & C-Coverage & \cite{DBLP:journals/tois/HerlockerKTR04,DBLP:conf/recsys/GeDJ10,Wilhelm2018DPP2,13KwonHH20,ZhengGCJL21DGCN-pre} \\
\cline{4-5}
& & & S-Coverage & \cite{DBLP:conf/recsys/GeDJ10,CIKM2020:DBLP:conf/cikm/ZhouAK20,he2019diversity,ChengWMSX17Acc_Diverse,liang2021enhancing} \\
\cline{3-5}
& & \multirow{3}{*}{Rank-aware} & S-RR@100\% & \cite{Zhai_Subtopic} \\
\cline{4-5}
& & & S-Recall@K & \cite{CIKM2020:DBLP:conf/cikm/QinDW20,Zhai_Subtopic} \\
\cline{4-5}
& & & S-Precision@K & \cite{Zhai_Subtopic} \\
\cline{2-5}
& \multirow{2}{*}{\hfil Social Welfare} & \multicolumn{2}{c|}{SD Index} & \cite{CIKM2020:DBLP:conf/cikm/ZhouAK20,simpson1949measurement} \\
\cline{3-5}
& & \multicolumn{2}{c|}{Gini Index} & \cite{antikacioglu2017post,DBLP:conf/recsys/Sanz-CruzadoC18,ZhengGCJL21DGCN-pre,GiniIndex} \\
\hline
\multirow{4}{*}{\makecell{Relevance-aware\\Diversity Metrics}} & \multirow{3}{*}{Novelty-based} & \multicolumn{2}{c|}{$\alpha$-nDCG@K} & \cite{ClarkeKCVABM08alphaNDCG,CIKM2020:DBLP:conf/cikm/QinDW20,CIKM2020:DBLP:conf/cikm/ZhouAK20,DBLP:conf/recsys/ParaparR21,ChengWMSX17Acc_Diverse,LiZZZL17DiverseMF,SantosMO10xquard,vargas2011intent,DBLP:conf/aaai/Yu22} \\
\cline{3-5}
& & \multicolumn{2}{c|}{NRBP} & \cite{CIKM2020:DBLP:conf/cikm/QinDW20} \\
\cline{3-5}
& & \multicolumn{2}{c|}{nDCU@K} & \cite{YangLLHKR07nDCU, YangL09EGU} \\
\cline{2-5}
& Intent-aware & \multicolumn{2}{c|}{M-IA} & \cite{AgrawalGHI09Intent,CIKM2020:DBLP:conf/cikm/QinDW20,LiZZZL17DiverseMF,SantosMO10xquard,vargas2011intent,DBLP:conf/aaai/Yu22} \\
\bottomrule
\bottomrule
\end{tabular}%
}
\end{table}
\subsection{Relevance-oblivious Diversity Metrics}
The relevance-oblivious metrics do not take the relevance between items and the user (query) into consideration, but merely focus on measuring the diversity of a ranking list itself.
We further categorize these metrics into the following three sub-classes: \textit{Distance-based Metrics}, \textit{Coverage-based Metrics}, and \textit{Social Welfare Metrics}.
\subsubsection{\textbf{Distance-based Metrics}}
One of the most widely used metrics for measuring diversity is the distance-based metric.
As the name suggests, it evaluates diversity by calculating the pair-wise distance among all the items in a list, where a smaller distance value indicates lower diversity.
Given a specific criterion for computing the pair-wise distance, most works follow the \textbf{ILAD} and \textbf{ILMD} (short for \textbf{I}ntra-\textbf{L}ist \textbf{A}verage \textbf{D}istance and \textbf{I}ntra-\textbf{L}ist \textbf{M}inimal \textbf{D}istance) paradigm for obtaining the diversity value of the item list.
These two paradigms originate from~\cite{DBLP:conf/recsys/ZhangH08} for measuring the diversity of a list, which extended the intra-list similarity metric proposed in~\cite{DBLP:conf/www/ZieglerMKL05}.
We denote $\sigma$ as a retrieval or recommendation list, $\text{dis}_{ij}$ as the distance between item $d_i$ and $d_j$.
Then the ILAD can be defined as the average dissimilarity of all pairs of items in the list, while the ILMD can be defined analogously.
\begin{equation} \text{ILAD}=\mathop{\text{mean}}\limits_{d_i,d_j\in \sigma,i\neq j}\text{dis}_{ij}, \quad \text{ILMD}=\mathop{\text{min}}\limits_{d_i,d_j\in \sigma,i\neq j}\text{dis}_{ij}.
\label{eq:ilad}
\end{equation}
In some specific applications such as sequential recommendation, the items are displayed as a sequence to the user, and the diversity is only required for $w$ successive items~\cite{NIPS2018:DBLP:conf/nips/ChenZZ18}.
Denoting $\text{rank}(d|\sigma)$ as the rank position of item $d$ in $\sigma$, we can obtain two variants of ILAD and ILMD, namely ILALD and ILMLD (short for \textbf{I}ntra-\textbf{L}ist \textbf{A}verage \textbf{L}ocal \textbf{D}istance and \textbf{I}ntra-\textbf{L}ist \textbf{M}inimal \textbf{L}ocal \textbf{D}istance), defined as follows:
\begin{equation}
\text{ILALD}=\mathop{\text{mean}}\limits_{d_i,d_j\in \sigma,i\neq j, \atop |\text{rank}(d_i|\sigma)-\text{rank}(d_j|\sigma)|\leq w}\text{dis}_{ij}, \quad \text{ILMLD}=\mathop{\text{min}}\limits_{d_i,d_j\in \sigma,i\neq j, \atop |\text{rank}(d_i|\sigma)-\text{rank}(d_j|\sigma)|\leq w}\text{dis}_{ij}.
\end{equation}
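As a concrete illustration, both paradigms reduce to aggregating pair-wise distances. The sketch below (with hypothetical toy embeddings, using the cosine distance discussed next as the pair-wise criterion) computes ILAD/ILMD and, when a window $w$ is supplied, their local variants ILALD/ILMLD:

```python
from itertools import combinations
from math import sqrt

def cosine_distance(a, b):
    """dis_ij = 1 - cos<a, b>."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def intra_list_distances(items, dis=cosine_distance, window=None):
    """Return (average, minimum) pair-wise distance over a list; with `window`
    set, only pairs within `window` successive positions count (ILALD/ILMLD)."""
    pairs = [dis(items[i], items[j])
             for i, j in combinations(range(len(items)), 2)
             if window is None or abs(i - j) <= window]
    return sum(pairs) / len(pairs), min(pairs)

embs = [(1.0, 0.0), (0.8, 0.6), (0.0, 1.0)]  # hypothetical item embeddings
ilad, ilmd = intra_list_distances(embs)              # ILAD ~ 0.533, ILMD ~ 0.2
ilald, ilmld = intra_list_distances(embs, window=1)  # ILALD ~ 0.3,  ILMLD ~ 0.2
```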
Based on different ways to calculate the pair-wise distance $\text{dis}_{ij}$, several specific metrics are summarized below.
\begin{itemize}
\item \textbf{Cosine Diversity Distance}.
The most traditional and widely adopted way of defining the pair-wise distance is the cosine distance between item embeddings, $\text{dis}_{ij}=1-\cos\langle\vec{d}_i, \vec{d}_j\rangle$, where $\cos\langle\cdot, \cdot\rangle$ denotes the cosine similarity.
One of the primary advantages of the cosine distance is its simplicity, especially for sparse vectors: only non-zero entries need to be considered.
This is also how the original ILAD and ILMD proposed by~\citet{DBLP:conf/www/ZieglerMKL05} define the pair-wise distance.
After computing the cosine distance between any pair of items within the list, the Cosine diversity distance of the whole list can be obtained using the paradigm in Eq.~\ref{eq:ilad}.
\item \textbf{Jaccard Diversity Distance}.
Proposed by~\citet{YuLA09JaccardDD}, the Jaccard diversity distance between items is calculated following the standard Jaccard index paradigm.
However, the distance is not computed from item embeddings but relies on the $\textit{explanation}$ of each user-item pair.
The $\textit{explanation}$ is defined differently given different recommendation models.
If an item $d$ is recommended to user $u$ by a content-based strategy, then an $\textit{explanation}$ for recommendation $d$ is defined as:
\begin{equation}
\text{Expl}(u, d)=\{d'\in\mathcal{D}|\text{sim}(d, d')>0 \text{ and } d'\in\mathcal{D}_u\}.
\end{equation}
Whereas if $d$ is recommended to $u$ based on a collaborative filtering method, then the $\textit{explanation}$ is defined as:
\begin{equation}
\text{Expl}(u, d)=\{u'\in\mathcal{U}|\text{sim}(u, u')>0 \text{ and } d\in\mathcal{D}_{u'}\},
\end{equation}
where the ``\text{sim}'' is a similarity function and $\mathcal{D}_u$ is the set of items $u$ interacted before.
Thereafter, the Jaccard diversity distance (JDD) between two items recommended to a specific user can be defined using the pre-computed $\textit{explanation}$:
\begin{equation}
\text{JDD}(d_i, d_j|u)=1-\frac{|\text{Expl}(u, d_i)\cap\text{Expl}(u, d_j)|}{|\text{Expl}(u, d_i)\cup\text{Expl}(u, d_j)|}.
\end{equation}
Then the Jaccard diversity distance of the whole list can be defined as Eq.~\ref{eq:ilad}.
\item \textbf{Gower Diversity Distance}. Another metric belonging to the distance-based category is the Gower diversity distance, proposed by~\citet{Haritsa09Gower}, focusing on retrieving $K$ nearest and diversified items with respect to a given query.
Motivated by the Gower coefficient~\cite{Gower_origin}, they define the distance between two items as a weighted average of the differences between their respective attribute values.
Assuming $\delta_k$ is the difference of the $k^{th}$ attribute between two items and $w_k$ is the corresponding weight on that attribute, then the Gower diversity distance (GDD) between two items $d_i$ and $d_j$ can be defined as:
\begin{equation}
\text{GDD}(d_i, d_j)=\sum_{k}w_k\cdot \delta_k.
\end{equation}
The rest of the computation over a whole list follows the same paradigm in Eq.~\ref{eq:ilad}.
\end{itemize}
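To make the explanation-based construction above concrete, a minimal sketch (the explanation sets and item names are hypothetical) computes the Jaccard diversity distance for two items recommended to the same user:

```python
def jdd(expl_i: set, expl_j: set) -> float:
    """Jaccard diversity distance between two items recommended to one user,
    given their pre-computed explanation sets Expl(u, d)."""
    union = expl_i | expl_j
    if not union:
        return 0.0
    return 1.0 - len(expl_i & expl_j) / len(union)

# Content-based case: explanations are previously interacted items
# similar to the recommended one (hypothetical sets).
expl_a = {"d1", "d2", "d3"}
expl_b = {"d2", "d3", "d4"}
print(jdd(expl_a, expl_b))  # 1 - 2/4 = 0.5
```

The list-level value then follows the ILAD/ILMD paradigm of Eq.~\ref{eq:ilad}.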
\subsubsection{\textbf{Coverage-based Metrics}} Coverage-based metrics are also popular for the diversity measurement in search and recommendation.
Most metrics in this category used in prior works aim to quantify the breadth of \textit{subtopics}~\footnote{The term \textit{subtopic} originates from information retrieval to indicate that there may exist multiple themes or keywords relevant to the input query.
To be consistent with this terminology, we use \textit{subtopic} to represent the (\romannumeral1) \textit{category of items}, (\romannumeral2) \textit{aspect of queries}, and (\romannumeral3) \textit{intent of users} in this survey.} given a list of unique items.
Based on whether the ranks of items matter, we classify these metrics into \textit{rank-unaware} and \textit{rank-aware} categories, where the former does not take the ranking positions of items within the list into account.
\paragraph{\textbf{Rank-unaware}}
Rank-unaware metrics are similar to the conventional metrics on accuracy in search and recommendation (e.g., Precision@K and Recall@K) since both of them will not be influenced by the rank positions of items in a given list.
Depending on the coverage of ``what'' they measure, we can classify these metrics into three sub-classes: P-Coverage, C-Coverage, and S-Coverage.
\begin{itemize}
\item \textbf{P-Coverage} (short for \textbf{P}rediction Coverage). Prediction coverage measures the number of unique items for which predictions can be made, as a proportion of the total number of items~\cite{DBLP:journals/tois/HerlockerKTR04,DBLP:conf/recsys/GeDJ10}.
We denote $\mathcal{D}$ as the set of all available items, $\mathcal{D}_p$ as the set of items for which a prediction can be provided.
Then the P-Coverage can be defined as follows:
\begin{equation}
\text{P-Coverage}=\frac{\left|\mathcal{D}_p\right|}{\left|\mathcal{D}\right|}.
\end{equation}
The construction of $\mathcal{D}_p$ is highly dependent on the task formulation and chosen models.
For instance, paper~\cite{DBLP:conf/recsys/GeDJ10} notes that some collaborative filtering systems are only able to make predictions for items that have been assigned more than a fixed number of ratings.
In such a case, $\mathcal{D}_p$ can be considered as the set of items for which the number of available ratings meets the requirement.
P-Coverage generally focuses on the system level, and is hardly used for measurement on a single or several ranking lists.
From another view, we can also understand the P-Coverage as the system's ability to address the ``cold-start'' problem.
However, as more and more research on the ``cold-start'' problem emerges, most models have become capable of making predictions even for items with very few interactions.
Thus, P-Coverage is not widely used in search and recommendation.
\item \textbf{C-Coverage} (short for \textbf{C}atalog Coverage). In order to quantify the proportion of unique items that can be retrieved or recommended in the system, catalog coverage metric directly focuses on the output result list and generally considers the union of all lists produced during the measurement time~\cite{DBLP:journals/tois/HerlockerKTR04,DBLP:conf/recsys/GeDJ10}.
C-Coverage can be used to measure the diversity of either a single ranking list or a group of lists, but is more widely used at a system level.
Assuming there are $N$ lists, the metric can be formulated as follows:
\begin{equation}
\text{C-Coverage}=\frac{\left|\bigcup_{i=1}^N\sigma_i\right|}{\left|\mathcal{D}\right|}.
\end{equation}
\item \textbf{S-Coverage} (short for \textbf{S}ubtopic-Coverage). This metric is one of the most widely used measurements for diversity in search and recommendation~\cite{DBLP:conf/recsys/GeDJ10, he2019diversity}.
Different from C-Coverage which concentrates on items themselves, S-Coverage cares more about the scope and abundance of different item categories or genres within the list.
This is also more natural in terms of human conceptions compared to distance-based metrics, as humans rarely compute the pair-wise distance based on embeddings to determine whether a recommendation list is diverse, but can directly tell whether diverse topics appear as frequently as feasible in a list.
S-Coverage can be measured on either a single list or a group of lists, corresponding to the measurement of individual-level diversity and system-level diversity, respectively.
Denoting $N$ as the number of lists under consideration, $\mathcal{S}(d)$ as the set of subtopics covered by item $d$, $n_S=\left|\bigcup_{d\in\mathcal{D}}\mathcal{S}(d)\right|$ as the total number of subtopics, then the S-Coverage can be formulated as follows:
\begin{equation}
\text{S-Coverage}=\frac{\left|\bigcup_{i=1}^N\left(\bigcup_{d\in \sigma_i}\mathcal{S}(d)\right)\right|}{n_S}.
\end{equation}
\end{itemize}
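As a concrete illustration, the three coverage metrics can be computed in a few lines of Python; the catalogue, predictable set, ranking lists, and subtopic assignments below are all hypothetical:

```python
# Minimal sketch of P-/C-/S-Coverage on toy data (all identifiers illustrative).
catalog = {"d1", "d2", "d3", "d4", "d5"}           # D: all available items
predictable = {"d1", "d2", "d3", "d4"}             # D_p: items the model can score
lists = [["d1", "d2"], ["d2", "d3"]]               # the N output ranking lists
subtopics = {"d1": {"s1"}, "d2": {"s1", "s2"},     # S(d): subtopics of each item
             "d3": {"s3"}, "d4": {"s2"}, "d5": {"s3"}}

p_coverage = len(predictable) / len(catalog)       # |D_p| / |D|

recommended = set().union(*lists)                  # union of all output lists
c_coverage = len(recommended) / len(catalog)

n_s = len(set().union(*subtopics.values()))        # total number of subtopics
s_coverage = len(set().union(*(subtopics[d] for d in recommended))) / n_s
```

On this toy data, P-Coverage is $4/5$ and C-Coverage is $3/5$, while S-Coverage is $1$ because the three recommended items already span all subtopics.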
\paragraph{\textbf{Rank-aware}}
It has been realized in many works that users do not pay the same amount of attention to all items in a ranking list, because users' patience may decay exponentially as they browse deeper through the list.
As a result, those items ranked higher (i.e., at the top of the list) may receive more exposure.
Thus, when considering relevance, many metrics have been proposed to offer a higher score for ranking relevant items at the top of a list, such as the normalized discounted cumulative gain (nDCG).
A similar idea also applies when considering the diversity of a list: a user may feel the list is redundant if the items ranked in the top positions are similar to each other, even if many diverse items appear at later positions.
This conception cannot be captured by the previously described metrics, since they are invariant to changes in the ranks of items.
Here, we summarize the rank-aware metrics for measuring coverage-based diversity, most of which are defined upon the conventional accuracy metrics in search and recommendation.
These metrics care not only about how diverse the items in a ranking list are, but also about the positions at which they appear.
\begin{itemize}
\item \textbf{S-RR@100\%} (short for \textbf{S}ubtopic-\textbf{R}eciprocal \textbf{R}ank@100\%).
This metric is proposed in~\cite{Zhai_Subtopic} for evaluating the diversity of solutions for subtopic retrieval problems.
The subtopic retrieval problem is originally concerned with finding documents that cover many different subtopics given a query of keywords.
S-RR@100\% is a variation of Reciprocal Rank (RR), defined as the inverse of the earliest rank position at which complete coverage of all subtopics is obtained.
Thus, a larger S-RR@100\% value indicates that complete subtopic coverage is achieved earlier in the list.
Using the same notation as before, the S-RR@100\% can be defined as follows:
\begin{equation}
\text{S-RR@100\%}=\left[\min\limits_k\left\{k:\left|\bigcup_{i=1}^k \mathcal{S}(d_i)\right|=n_S\right\}\right]^{-1}.
\end{equation}
\item \textbf{S-Recall@K} (short for \textbf{S}ubtopic-Recall@K).
As the name suggests, this metric is a variation of the Recall@K metric that is widely used for measuring relevance in search and recommendation.
S-Recall@K is also proposed in~\cite{Zhai_Subtopic} and can be defined as the percentage of subtopics covered by the first $k$ items of a retrieval or recommendation list:
\begin{equation}
\text{S-Recall@K}=\frac{\left|\bigcup_{i=1}^k \mathcal{S}(d_i)\right|}{n_S}.
\end{equation}
\item \textbf{S-Precision@K} (short for \textbf{S}ubtopic-Precision@K).
Analogous to S-Recall@K, this metric is a variation of the Precision@K metric that is widely used for measuring relevance in search and recommendation.
It can be defined as follows:
\begin{equation}
\text{S-Precision@K}=\frac{\left|\bigcup_{i=1}^k \mathcal{S}(d_i)\right|}{k}.
\end{equation}
\end{itemize}
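These three subtopic metrics can be sketched as follows; the items and subtopic assignments are hypothetical, and S-RR@100\% is taken as the reciprocal of the earliest rank achieving full coverage:

```python
# Toy sketch of S-RR@100%, S-Recall@K, and S-Precision@K.
subtopics = {"d1": {"s1"}, "d2": {"s1"}, "d3": {"s2", "s3"}, "d4": {"s3"}}
ranking = ["d1", "d2", "d3", "d4"]
n_s = 3  # total number of distinct subtopics

def covered(prefix):
    # set of subtopics covered by a prefix of the ranking
    return set().union(*(subtopics[d] for d in prefix)) if prefix else set()

def s_recall(ranking, k):
    return len(covered(ranking[:k])) / n_s

def s_precision(ranking, k):
    return len(covered(ranking[:k])) / k

def s_rr_at_100(ranking):
    # reciprocal of the earliest rank at which all n_s subtopics are covered
    for k in range(1, len(ranking) + 1):
        if len(covered(ranking[:k])) == n_s:
            return 1.0 / k
    return 0.0  # full coverage is never reached
```

Here full coverage is first reached at rank 3, so S-RR@100\% is $1/3$, while S-Recall@2 is only $1/3$ because the top two items redundantly cover the same subtopic.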
\subsubsection{\textbf{Social Welfare Metrics}}
Diversity is not only a research problem of information retrieval in computer science.
It has also received much attention in other disciplines such as ecology and economics.
Recently, several works have borrowed diversity notions from these fields to evaluate search and recommendation results.
We summarize these metrics as follows.
\begin{itemize}
\item \textbf{SD Index} (short for \textbf{S}impson's \textbf{D}iversity Index).
The SD Index was originally proposed in~\cite{simpson1949measurement} for measuring the biodiversity of a habitat.
Regarding each subtopic (category) in recommendation as a kind of species in ecology, SD Index can be defined as the probability that two items selected randomly and independently without replacement belong to the same category.
Thus, a smaller SD Index indicates a higher diversity.
We denote $n_S$ as the number of different subtopics, $l$ as the list of items under consideration (which can be a single recommendation list or the aggregation of multiple lists), and $c(l, s_i)$ as the number of items in $l$ covering the subtopic $s_i$.
Then the SD Index over the list $l$ can be defined as follows:
\begin{equation}
\text{SD Index}=\frac{\sum_{i=1}^{n_S}\left[c(l,s_i)\cdot\left(c(l,s_i)-1\right)\right]}{|l|\left(|l|-1\right)}.
\end{equation}
To show this idea more clearly, we assume there are $3$ subtopics in total.
System $A$ recommends $10$ items and the number of items covering each subtopic is $8$, $1$, $1$, respectively.
System $B$ also recommends $10$ items, while the number of items covering each subtopic is $4$, $3$, $3$, respectively.
Then we can compute the SD Index of both systems and find out that the index value of system $A$ is larger than that of system $B$: $\frac{8\times 7+1\times 0+1\times 0}{10\times 9}>\frac{4\times 3+3\times 2+3\times 2}{10\times 9}$.
This is to say that system $A$'s recommendation is less diverse than system $B$'s recommendation, which is aligned with our intuition.
\item {\textbf{Gini Index}}.
The Gini Index, proposed by~\citet{GiniIndex}, is originally a measure of the distribution of income across a population.
A higher Gini Index indicates greater inequality, with high-income individuals receiving much larger percentages of the total income of the population.
Recently, some researchers have also adopted the Gini Index in the field of recommendation to measure the inequality among values of a frequency distribution, e.g., the numbers of occurrences (exposures) of items in recommendation lists.
This measurement is generally performed at the system level by aggregating the recommendation lists of all users, which also indicates how diverse the system is with regard to all the items it can retrieve or recommend.
Assuming the occurrence of the $i^{th}$ item is $e_i$, where $i=1, ..., |\mathcal{D}|$, the Gini Index over all the items of the whole system is calculated as:
\begin{equation}
\setlength\abovedisplayskip{2pt}
\setlength\belowdisplayskip{2pt}
\text{Gini Index}=\frac{1}{2|\mathcal{D}|^2\overline{e}}\sum_{i=1}^{|\mathcal{D}|}\sum_{j=1}^{|\mathcal{D}|}|e_i-e_j|,
\end{equation}
where $\overline{e}=\frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}e_i$ is the mean occurrence of items.
Thus, a smaller Gini Index indicates a fairer distribution of item occurrences in the output results, which may in turn indicate higher diversity, since different items have more equal opportunities of being exposed.
\end{itemize}
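Both indices are straightforward to implement; the sketch below reproduces the SD Index comparison of systems $A$ and $B$ from the text, and applies the Gini Index to a hypothetical exposure vector:

```python
# SD Index and Gini Index on toy data.
def sd_index(counts, n_items):
    # counts[i]: number of items in the list covering subtopic s_i
    return sum(c * (c - 1) for c in counts) / (n_items * (n_items - 1))

def gini_index(exposures):
    # mean-difference form of the Gini Index over item exposure counts
    n = len(exposures)
    mean = sum(exposures) / n
    return sum(abs(a - b) for a in exposures for b in exposures) / (2 * n * n * mean)

# systems A and B from the running example: 10 items, 3 subtopics
assert sd_index([8, 1, 1], 10) > sd_index([4, 3, 3], 10)  # A is less diverse

# a perfectly even exposure distribution yields a Gini Index of 0
assert gini_index([5, 5, 5]) == 0.0
```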
\subsection{Relevance-aware Diversity Metrics}
Although diversity is an important property algorithm designers need to consider, relevance is still at the heart of the ranking problems in both search and recommendation.
Therefore, a metric that is solely concerned with diversity cannot adequately assess a system's effectiveness.
For instance, one can randomly select items from various topics to ensure that the ranking list performs exceptionally well on metrics such as S-Coverage and S-RR@100\%.
However, the overall relevance of the list obtained in such a way may be extremely low.
Therefore, if the algorithm designer aims to use the relevance-oblivious metrics to measure the diversity, she has to use other metrics (e.g., nDCG, rank-biased precision (RBP)~\cite{MoffatZ08RBP}) to measure the relevance.
In contrast to those relevance-oblivious metrics on diversity, there exist relevance-aware metrics that attempt to incorporate both relevance and diversity into a single measurement, almost all of which originated from research in search.
Several works conduct axiomatic analysis on the relevance constraints of diversity metrics~\cite{SIGIR2013:AmigoGV13, SIGIR2018:AmigoSA18}.
Here, we highlight two of the most critical properties on the relevance in ranking, \textit{\textbf{priority}} and \textit{\textbf{heaviness}}, that the relevance-aware metrics in any ranking task must satisfy.
\begin{itemize}
\item \textit{\textbf{Property 1: Priority}. Swapping items in concordance with their relevance scores should increase the overall score of the whole ranking.}
We denote $r(d)$ as the relevance score of an item with respect to the query or user, $Q(\sigma)$ as the overall score of the ranking list $\sigma$, ``$\leftrightarrow$'' as swapping two items.
Formally, the \textit{\textbf{priority}} property requires that: if $r(d_i)<r(d_j)$ and $\text{rank}(d_i|\bm{\sigma})<\text{rank}(d_j|\bm{\sigma})$, then $Q(\bm{\sigma}_{d_i\leftrightarrow{d_j}})>Q(\bm{\sigma})$.
\item \textit{\textbf{Property 2: Heaviness}. The same relevance-improving swap contributes more when it occurs at earlier positions in the ranking.}
Using the same notations, the \textit{\textbf{heaviness}} property requires that: if $r(d_i)=r(d_j)<r(d'_i)=r(d'_j)$ and $\text{rank}(d_i|\bm{\sigma})<\text{rank}(d_j|\bm{\sigma})<\text{rank}(d'_i|\bm{\sigma})<\text{rank}(d'_j|\bm{\sigma})$, then $Q(\bm{\sigma}_{d_i\leftrightarrow{d'_i}})>Q(\bm{\sigma}_{d_j\leftrightarrow{d'_j}})$.
\end{itemize}
It is easy to see that those widely used relevance metrics in information retrieval and recommendation, such as the nDCG, satisfy both of the two properties.
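As a quick numerical check, both properties can be verified for the (un-normalized) DCG with a $1/\log_2(k+1)$ discount; the relevance grades below are hypothetical:

```python
import math

def dcg(relevances):
    # relevances[k]: graded relevance of the item at rank k+1
    return sum(r / math.log2(k + 2) for k, r in enumerate(relevances))

# Priority: swapping a pair into concordance with relevance raises the score.
assert dcg([3, 1, 2]) > dcg([1, 3, 2])

# Heaviness: the same relevance-improving swap gains more at earlier positions.
base = [1, 1, 3, 3]            # r(d1)=r(d2)=1 < r(d1')=r(d2')=3
swap_shallow = [3, 1, 1, 3]    # swap d1 (rank 1) with d1' (rank 3)
swap_deep = [1, 3, 3, 1]       # swap d2 (rank 2) with d2' (rank 4)
assert dcg(swap_shallow) > dcg(swap_deep)
```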
Now, we categorize the relevance-aware diversity metrics that satisfy the above properties into the following two categories.
\subsubsection{\textbf{Novelty-based Metrics}}
As a metric with very close connection to diversity, novelty was also studied in prior works.
~\citet{ClarkeKCVABM08alphaNDCG} point out the precise distinction between novelty and diversity in the field of information retrieval as: \textit{novelty refers to the need to avoid redundancy, while diversity refers to the need to resolve ambiguity in the query}, which corresponds to \textit{intrinsic diversity} and \textit{extrinsic diversity}, respectively.
However, despite this distinction, several works in the literature still categorize novelty-based metrics as part of the diversity family, since novelty in topics and categories can also be regarded as an improvement in diversity.
We follow this paradigm and summarize the novelty-based metrics as follows.
\begin{itemize}
\item \textbf{$\alpha$-nDCG@K} (short for Novelty-biased \textbf{N}ormalized \textbf{D}iscounted \textbf{C}umulative \textbf{G}ain@K). This metric is proposed by~\citet{ClarkeKCVABM08alphaNDCG} to balance the trade-off between retrieving relevant and non-redundant items.
This is also one of the earliest metrics aiming to combine the measurement of both relevance and diversity.
Prior to this, most metrics in information retrieval and recommendation such as mean average precision (MAP) and nDCG assume that the relevance of each item can be judged in isolation, independently from other items, thus ignoring important factors such as redundancy between items.
To address this, the authors present a framework for assessing diversity and novelty based on the cumulative gain.
The key is to define the gain of adding the $k^{th}$ item ($d_k$) to the list in light of all items ranked above position $k$.
The authors assume that subtopics are independent and equally likely to be relevant, and the assessment of positive relevance judgments of an item for a subtopic $r(d|s)$ involves an uncertainty that can be modeled with a fixed probability $\alpha$ of success in the judgment.
We denote $c(k, s_i)$ as the number of items covering subtopic $s_i$ till position $k$ in a ranking list, then we can first formally define the $\alpha$-DCG@K over a ranking list $\sigma$ as:
\begin{align}
\text{Gain}(d_k)&=\sum_{i=1}^{n_S}r(d_k|s_i)(1-\alpha)^{c(k, s_i)},\label{eq:Gain}\\
\textbf{$\alpha$-DCG@K}&= \sum_{k=1}^{K}\frac{1}{\log(k+1)}\cdot \text{Gain}(d_k).
\label{eq:alpha-DCG}
\end{align}
Analogous to the definition of nDCG@K, we can find an ``ideal'' ranking that maximizes $\alpha$-DCG@K, denoted as $\alpha$-iDCG@K.
Computing this ideal ranking is known to be NP-complete, as pointed out by~\citet{Carterette09_NP_Diversity}.
The ratio of $\alpha$-DCG@K to $\alpha$-iDCG@K defines the $\alpha$-nDCG@K.
\item \textbf{NRBP} (short for \textbf{N}ovelty- and \textbf{R}ank-\textbf{B}iased \textbf{P}recision).
Following the paper~\cite{ClarkeKCVABM08alphaNDCG},~\citet{ClarkeKV09NRBP} propose another metric built upon the rank-biased precision (RBP), rather than the nDCG, with a very similar motivation and paradigm.
As described by~\citet{MoffatZ08RBP}, the RBP model assumes that the user browses a list of items in sequential order, continuing at each position with probability $\beta$ (where $0<\beta<1$).
In other words, a user has a probability of $\beta^{k-1}$ of observing all the items up to the $k^{th}$ position.
Based on this idea, and using the same notation as for $\alpha$-nDCG@K in Eq.~\ref{eq:alpha-DCG}, the NRBP can be formally defined as follows:
\begin{equation}
\textbf{NRBP}=\frac{1-(1-\alpha)\beta}{n_S}\sum_{k=1}^{|\sigma|}\beta^{k-1}\sum_{i=1}^{n_S}r(d_k|s_i)(1-\alpha)^{c(k, s_i)}.
\end{equation}
Here, the normalization factor includes division by the number of subtopics, allowing us to average the measure across multiple queries with varying subtopic counts.
It is also worth mentioning that, in contrast to $\alpha$-nDCG@K, which is typically reported at a particular browsing depth, NRBP is more of a summary metric that characterizes the diversity/novelty of the entire list.
\item \textbf{nDCU@K} (short for \textbf{N}ormalized \textbf{D}iscounted \textbf{C}umulative \textbf{U}tility@K).
The metric nDCU@K was proposed by~\citet{YangLLHKR07nDCU} around the same time as $\alpha$-nDCG@K.
It also extends the original nDCG@K metric, with the ``G'' for \textit{gain} replaced by ``U'' for \textit{utility}.
Given a list of retrieved items based on a query, the authors define the utility of an item $d_k$ at the $k^{th}$ position as:
\begin{equation}
\text{Utility}(d_k)=\text{Gain}(d_k)-\text{Loss}(d_k),
\label{eq:utility}
\end{equation}
where the $\text{Gain}(d_k)$ refers to the information users receive for observing $d_k$ depending on the relevance and novelty, while $\text{Loss}(d_k)$ denotes the time and energy spent in going through the item.
There are various ways to define these two terms.
\citet{YangLLHKR07nDCU} adopt a formulation very similar to that in Eq.~\ref{eq:Gain} to define $\text{Gain}(d_k)$, and define $\text{Loss}(d_k)$ as a constant.
Then the DCU@K can be defined as:
\begin{equation}
\textbf{DCU@K}= \sum_{k=1}^{K}\frac{1}{\log(k+1)}\cdot \text{Utility}(d_k).
\end{equation}
Analogous to the nDCG@K, the ratio of DCU@K to the ideal DCU@K defines the nDCU@K.
Considering that nDCU@K is well-defined only for a single ranking list~\cite{YangLLHKR07nDCU}, \citet{YangL09EGU} extend the metric to make it capable of measuring multiple ranking lists conditioned on different user browsing patterns.
Specifically, they compute the mathematical expectation of Eq.~\ref{eq:utility} over $n$ ranking lists and $n$ user browsing models (one list and one browsing model for each user), where each browsing model corresponds to a list of $n$ different stop positions in the $n$ ranking lists.
Since this is just a slight modification of nDCU@K, we do not treat it as a separate metric in this survey.
\end{itemize}
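The novelty-based metrics above can be sketched compactly; the judgments below are hypothetical, and a base-2 logarithm is assumed for the $\alpha$-DCG discount:

```python
import math

n_s = 2                                   # number of subtopics
alpha, beta = 0.5, 0.8                    # judgment noise and RBP patience
# relevance[k][i] = r(d_{k+1} | s_{i+1}): binary subtopic judgments per rank
relevance = [[1, 0], [1, 0], [0, 1]]

def gains(relevance, alpha):
    # Gain(d_k) with the novelty discount (1 - alpha)^c(k, s_i)
    seen = [0] * n_s
    out = []
    for row in relevance:
        out.append(sum(row[i] * (1 - alpha) ** seen[i] for i in range(n_s)))
        for i in range(n_s):
            seen[i] += row[i]
    return out

def alpha_dcg(relevance, alpha, k):
    gs = gains(relevance, alpha)[:k]
    return sum(g / math.log2(j + 2) for j, g in enumerate(gs))

def nrbp(relevance, alpha, beta):
    gs = gains(relevance, alpha)
    return (1 - (1 - alpha) * beta) / n_s * sum(
        beta ** k * g for k, g in enumerate(gs))
```

Note how the second item, redundant on $s_1$, earns a discounted gain of $0.5$, while the novel third item earns a full gain of $1$.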
\subsubsection{\textbf{Intent-aware Metrics}}
Sometimes, a simple combination of diversity and per-intent graded relevance is still not enough.
When considering diversity, it is not always ideal to retrieve or recommend different items covering various topics without distinguishing which topic is more important.
Taking the search problem as an example, ``intent'' is defined as the ``information need'', which can also be understood as the user's expected distribution over different ``subtopics'' in the search result with respect to a query.
The motivation for proposing intent-aware metrics can be depicted by the following example described in~\cite{AgrawalGHI09Intent}.
Consider a query $q$ that is related to two subtopics $s_1$ and $s_2$, but much more strongly to $s_2$ than to $s_1$.
Now we have two items $d_1$ and $d_2$, where $d_1$ is rated 5 (out of 5) for $s_1$ but unrelated to $s_2$, and $d_2$ is rated 4 (out of 5) for $s_2$ but unrelated to $s_1$.
Traditional relevance metrics will tend to rank $d_1$ over $d_2$, but on average users may find $d_2$ more relevant than $d_1$.
As such,~\citet{AgrawalGHI09Intent} propose the family of intent-aware metrics for search results diversification.
Formally, given a distribution on the subtopics for a query and a relevance label on each item, they compute the outcome over a list $\sigma$ by applying the intent-aware scheme on a conventional relevance metric $M$ as follows:
\begin{align}
\text{$M$-IA}(\bm{\sigma})=\sum_{i=1}^{n_S}p(s_i|q)\cdot \text{$M$}(\bm{\sigma}|s_i),
\end{align}
where $s$ is the subtopic, denoting the user intent.
$M$ is the traditional metric for measuring the ranking quality on relevance, such as nDCG, MRR, and MAP.
When computing the intent-dependent $\text{$M$}(\bm{\sigma}|s)$, they simply treat any item that does not match the subtopic $s$ as non-relevant, and then compute the metric in the same way as the original $\text{$M$}(\bm{\sigma})$.
Thus, the family of $M$-IA metrics take into account the distributions of intents, and force a trade-off between adding items with higher relevance scores and those that cover intents with higher weights.
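The intent-aware scheme can be illustrated by instantiating $M$ as Precision@K; the intent distribution and subtopic judgments below are hypothetical:

```python
# Toy sketch of M-IA with M = Precision@K.
p_intent = {"s1": 0.2, "s2": 0.8}                  # p(s_i | q)
rel = {"d1": {"s1"}, "d2": {"s2"}, "d3": set()}    # subtopics each item matches
ranking = ["d1", "d2", "d3"]

def precision_at_k(ranking, s, k):
    # intent-dependent Precision@K: items not matching s count as non-relevant
    return sum(1 for d in ranking[:k] if s in rel[d]) / k

def precision_ia(ranking, k):
    # weight the per-intent scores by the intent distribution p(s|q)
    return sum(p * precision_at_k(ranking, s, k) for s, p in p_intent.items())
```

With these weights, Precision-IA@2 is $0.2\cdot\frac{1}{2}+0.8\cdot\frac{1}{2}=0.5$.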
\section{Offline Approaches for Enhancing Diversity}
\label{sec:offline-approach}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{fig/diversity_approach.png}
\caption{Diversity approaches in search and recommendation, from both offline and online perspectives. Approaches in blue boxes indicate that they are generally used in recommendation, the pink box indicates it is generally used in search, while those in yellow boxes are equally widely used in search and recommendation.}
\label{fig:div_approach}
\end{figure}
\begin{table}[h]
\caption{A lookup table for the diversity work from several approaches.}
\label{tab:method}
\vspace{1mm}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l|l|l|l|l}
\toprule \toprule
\multicolumn{4}{l|}{\textbf{Diversity Approaches}} & \textbf{Related Work} \\ \hline
\multirow{8}{*}{Offline Approaches} & \multirow{3}{*}{Pre-processing Methods} & \multicolumn{2}{l|}{Pre-define User Types} & \cite{13KwonHH20} \\
\cline{3-5}
& & \multicolumn{2}{l|}{Pre-define Sampling Strategies} & \cite{ZhengGCJL21DGCN-pre} \\
\cline{3-5}
& & \multicolumn{2}{l|}{Pre-define Ground-truth Label} & \cite{DBLP:conf/recsys/PaudelHB17,ChengWMSX17Acc_Diverse} \\
\cline{2-5}
& \multirow{2}{*}{In-processing Methods} & \multicolumn{2}{l|}{Diversity as Regularization} & \cite{CIKM2020:DBLP:conf/cikm/ChenRC0R20,CIKM2020:DBLP:conf/cikm/MaropakiCDN20,CIKM2020:DBLP:conf/cikm/ZhouAK20,DBLP:conf/recsys/Sanz-CruzadoC18,WasilewskiH16Diverse_LTR,DBLP:conf/sigir/BalloccuBFM22} \\
\cline{3-5}
& & \multicolumn{2}{l|}{Diversity as Score} & \cite{CIKM2020:DBLP:conf/cikm/QinDW20,LiZZZL17DiverseMF,DBLP:conf/aaai/Yu22}\\
\cline{2-5}
& \multirow{2}{*}{Post-processing Methods} & \multirow{2}{*}{Greedy-based} & MMR & \cite{CarbonellG98MMR,SantosMO10xquard,vargas2011intent} \\
\cline{4-5}
& & & DPP & \cite{macchi1975DPP_origin,DBLP:conf/kdd/HuangWZX21,NIPS2018:DBLP:conf/nips/ChenZZ18,Wilhelm2018DPP2,Chen2017DPP1,GanLC20DPP4,DBLP:conf/sigir/GongZC0B0YQ22} \\
\cline{3-5}
& & \multicolumn{2}{l|}{Refinement-based} & \cite{li2020directional,DBLP:conf/recsys/TsukudaG19, ZieglerMKL05_topic_diverse, YuLA09} \\
\hline
\multirow{3}{*}{Online Approaches} & \multirow{2}{*}{Bandit Strategies} & \multicolumn{2}{l|}{Diversity as Score} & \cite{li2020cascading, DBLP:conf/aaai/DingLMCT21, QinCZ14CCB, WangWWH17BiUCB}\\
\cline{3-5}
& & \multicolumn{2}{l|}{Diversity as Architecture} &
\cite{ParaparR21Bandit_Diverse_Arm, RadlinskiKJ08_RBA}\\
\cline{2-5}
& \multicolumn{3}{l|}{Reinforcement Learning} & \cite{ZhengZZXY0L18DRN, DBLP:conf/wsdm/StamenkovicKAXK22} \\
\bottomrule \bottomrule
\end{tabular}%
}
\end{table}
We use a unified framework to categorize the approaches for enhancing diversity in both search and recommendation, since there are many similarities between the two scenarios.
In this section, we focus on offline processes, where the methods do not need to care about users' real-time feedback based on the displayed results from the last round.
Based on when the approaches of diversity intervene relative to the training procedure, the methods for enhancing search and recommendation diversity can be divided into three categories: (\romannumeral1) \textbf{pre-processing}, (\romannumeral2) \textbf{in-processing}, and (\romannumeral3) \textbf{post-processing}.
Pre-processing methods are adopted prior to the model training process.
They typically pre-define diversity-aware techniques with the expectation that these designs will result in a diverse output.
In-processing methods directly participate during the model training.
They encourage a diversified outcome by learning matching scores for users and items into which diversity constraints or scores are incorporated.
Post-processing methods are used after the model is well trained.
They generally re-rank the final item lists to improve diversity.
These approaches and corresponding works are summarized in Table~\ref{tab:method}.
\subsection{Pre-processing Methods}
Pre-processing methods intervene in the system before the model training.
We review the following three sub-classes: (\romannumeral1) \textit{pre-define user types}, (\romannumeral2) \textit{pre-define sampling strategies}, and (\romannumeral3) \textit{pre-define ground-truth label}.
\subsubsection{Pre-define User Types.}
\citet{13KwonHH20} improve the diversity in recommendation through pre-defining user types.
They first interview fashion professionals and categorize the user purchase behavior into four types: (\romannumeral1) \textit{gift type}: purchasing items to give to others; (\romannumeral2) \textit{coordinator type}: purchasing items associated with previous purchases; (\romannumeral3) \textit{carry-over type}: purchasing items similar to existing purchases; and (\romannumeral4) \textit{trend-setter type}: purchasing items affected by the trends of other people’s purchases.
For each type, they use a specific algorithm to recommend top-5 items to each user.
Since each user may have multiple purchase behaviors, they adopt a hybrid strategy to simply combine the recommendation lists of four algorithms to generate up to 20 item candidates for each user.
The C-Coverage improves from 24\% (using the CF algorithm) to 78\%, coupled with a 3.2\% increase in the purchasing rate and a \$13 increase in the average purchase amount per customer in the experimental group.
\subsubsection{Pre-define Sampling Strategies.}
Since the interactions between users and items can be naturally represented as a graph, a number of graph-based recommendation algorithms are widely used nowadays, such as the Graph Neural Network (GNN)-based models.
These models represent the users and items as nodes in a graph, followed by learning their embeddings through message passing: at each layer, they aggregate the neighbor's information for each target node.
Due to their ability to capture higher-order connectivity between user nodes and item nodes, GNN-based methods can generally achieve state-of-the-art accuracy and relevance.
Since higher-order neighbors of a user tend to cover more diverse items, GNN-based approaches have the potential to improve recommendation diversity as a byproduct.
However, without specific design, those items from the popular categories tend to be learned more often, because they take up the majority of the edges on the graph.
To address this, \citet{ZhengGCJL21DGCN-pre} propose two pre-defined sampling strategies for two processes in the model.
The first strategy aims to re-balance the \textit{neighborhood sampling} process in the message passing, where the key idea is to increase the selecting probabilities for those items from the less popular categories and decrease those from the popular categories.
In such a way, those less popular items can still have a chance to be sampled and well learned.
The second strategy affects the \textit{negative sampling} process.
In contrast to random negative sampling in~\cite{DBLP:conf/uai/RendleFGS09}, they propose to select similar (from the same category) but negative items with an increased probability, so that less similar (not from the same category) items are not pushed too far away from the user in the embedding space.
As a result, items from different categories are likely to appear in the recommendation list for each user, thus enhancing the individual-level diversity.
An illustration of this category-boosted negative sampling is shown in Fig.~\ref{fig:pre_neg_sample}.
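A hedged sketch of the category-boosted negative sampling idea is given below; the boost probability and the data layout are our own assumptions, not the authors' exact implementation:

```python
import random

def sample_negative(user_positives, item_category, all_items, boost=0.7):
    # categories the user has interacted with ("positive" categories)
    pos_cats = {item_category[d] for d in user_positives}
    candidates = [d for d in all_items if d not in user_positives]
    same_cat = [d for d in candidates if item_category[d] in pos_cats]
    other = [d for d in candidates if item_category[d] not in pos_cats]
    # with probability `boost`, draw the negative from a positive category,
    # so items from other categories are not pushed too far from the user
    if same_cat and (not other or random.random() < boost):
        return random.choice(same_cat)
    return random.choice(other)
```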
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig/C_Neg_Sample.png}
\caption{An illustration of the category-boosted negative sampling. Negative items are sampled from outside positive items. The strategy boosts the probability of sampling from items of positive categories (the light green area). The figure is borrowed from paper~\cite{ZhengGCJL21DGCN-pre}.}
\label{fig:pre_neg_sample}
\end{figure}
\subsubsection{Pre-define Ground-truth Label.} \citet{ChengWMSX17Acc_Diverse} propose to construct ground-truth labels that incorporate diversity constraints, so that the optimization target itself directly encodes diversity.
By using supervised learning, they regard each user as a training instance and heuristically choose a subset of relevant and diverse items as the ground-truth label for each user.
Specifically, their labeling method has two steps: for each user $u$, (\romannumeral1) filter a set
of items $\mathcal{C}_u$ with high ratings as a set of candidates, followed by (\romannumeral2) selecting a set of items from $\mathcal{C}_u$ by maximizing the relevance and diversity trade-off.
In the first step, an item $d$ has a high rating from user $u$ if $r(d|u)\geq\gamma\cdot\overline{r}(\cdot|u)$, where $r(d|u)$ denotes the observed rating on item $d$ from user $u$, $\overline{r}(\cdot|u)$ is the average rating score from $u$, and $\gamma$ is a trade-off parameter.
All selected items in step (\romannumeral1) form the set of candidates $\mathcal{C}_u$.
In step (\romannumeral2), they select a subset $\mathcal{Y}_u$ from $\mathcal{C}_u$ (i.e., $\mathcal{Y}_u\subseteq\mathcal{C}_u$, $|\mathcal{Y}_u|=K$) as the ground-truth label for user $u$ by balancing the trade-off between relevance and diversity, using a metric similar to the F-measure~\cite{Baeza-YatesR99_Fmeasure}.
Specifically, the selected $\mathcal{Y}_u$ aims to maximize the following equation:
\begin{align}
\max\limits_{\mathcal{Y}_u}\quad&\frac{2\cdot f(\mathcal{Y}_u)\cdot g(\mathcal{Y}_u)}{f(\mathcal{Y}_u)+g(\mathcal{Y}_u)},\\
&\text{s.t.,} \quad \mathcal{Y}_u\subseteq\mathcal{C}_u, |\mathcal{Y}_u|=K.
\end{align}
Here, $f(\mathcal{Y}_u)$ and $g(\mathcal{Y}_u)$ represent the measurement for relevance and diversity over the whole set $\mathcal{Y}_u$, respectively.
For the relevance, denoting the set of items rated by $u$ as $\mathcal{D}_u$, \citet{ChengWMSX17Acc_Diverse} define $f(\cdot)$ as a pair-wise comparison on the ratings between the items in $\mathcal{Y}_u$ and $\mathcal{D}_u\backslash\mathcal{Y}_u$ as follows:
\begin{align}
f(\mathcal{Y}_u)=\frac{\sum\limits_{d_i\in\mathcal{Y}_u}\sum\limits_{d_j\in\mathcal{D}_u\backslash\mathcal{Y}_u}\text{compare}\left(r(d_i|u)-r(d_j|u)\right)}{|\mathcal{Y}_u|\cdot |\mathcal{D}_u\backslash\mathcal{Y}_u|},
\end{align}
where $\text{compare}(x)$ equals 1 if $x>0$ and $-1$ otherwise.
For the diversity measurement $g(\cdot)$, the authors define it as the ILAD as described in Eq.~\ref{eq:ilad}.
Afterwards, the obtained ground-truth label $\mathcal{Y}_u$ for user $u$ can guide the model training.
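Since the exact subset maximization is intractable, the selection can be approximated greedily; the sketch below is one reasonable instantiation under our own assumptions, with `rel_fn` and `div_fn` standing in for $f(\cdot)$ and $g(\cdot)$:

```python
def harmonic(f, g):
    # F-measure-style trade-off between relevance f and diversity g
    return 2 * f * g / (f + g) if f + g > 0 else 0.0

def build_label(candidates, k, rel_fn, div_fn):
    # greedily grow Y_u from C_u, adding the item that maximizes the trade-off
    selected = []
    while len(selected) < k:
        best = max(
            (d for d in candidates if d not in selected),
            key=lambda d: harmonic(rel_fn(selected + [d]), div_fn(selected + [d])),
        )
        selected.append(best)
    return selected
```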
\subsection{In-processing Methods}
In-processing methods act during the model training process.
We categorize them into the following two sub-classes: (\romannumeral1) \textit{diversity as regularization} and (\romannumeral2) \textit{diversity as score}.
\subsubsection{Diversity as Regularization}
Since relevance is the primary goal of search and recommendation systems, the most intuitive way to enhance diversity through an in-processing way is to treat diversity as a regularization on the loss function to guide the training.
\citet{WasilewskiH16Diverse_LTR} first propose a prototype to constrain the relevance loss with a trade-off parameter $\lambda$:
\begin{equation}
\min\limits_{\Theta} \mathcal{L}_{\text{rel}}(\Theta) + \lambda\cdot \mathcal{L}_{\text{div}}(\Theta),
\end{equation}
where $\Theta$ denotes the learnable embeddings, and $\mathcal{L}_{\text{rel}}(\cdot)$ and $\mathcal{L}_{\text{div}}(\cdot)$ refer to the relevance loss and the diversity regularization, respectively, both of which can be freely defined.
For instance, \citet{WasilewskiH16Diverse_LTR} define the relevance loss as the pair-wise ranking loss~\cite{DBLP:conf/uai/RendleFGS09}, and the diversity loss as the negative of the intra-list average distance (ILAD)~\cite{DBLP:conf/recsys/ZhangH08}.
Several later works follow this line of research, such as paper~\cite{CIKM2020:DBLP:conf/cikm/ChenRC0R20}.
Rather than only modeling the dissimilarities among items for defining the diversity, the authors take the user intents into consideration.
Here, the user intents can be comprehended as the user's interest in different subtopics (categories).
Specifically, the authors define the diversity of a recommendation list $\sigma_u$ to user $u$ as the probability of each subtopic $s_i$ having at least one relevant item in $\sigma_u$, then the regularization can be formulated as:
\begin{equation}
\mathcal{L}^{\sigma_u}_{\text{div}}=-\sum\limits_{i=1}^{n_S}p(s_i|u)\cdot\left(1-\prod\limits_{d\in\sigma_u}\left(1-p\left(d|s_i\right)\right)\right).
~\label{eq:intent-div-loss}
\end{equation}
Here, $p(s_i|u)$ represents the user $u$'s interest in subtopic $s_i$, and $p\left(d|s_i\right)$ refers to the relatedness of item $d$ to the subtopic $s_i$.
Both terms can be computed through a softmax function using the corresponding embeddings of users, items, and subtopics.
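A minimal NumPy sketch of the regularizer in Eq.~\ref{eq:intent-div-loss}, using illustrative probability tables in place of the softmax-derived ones:

```python
import numpy as np

def intent_diversity_loss(p_subtopic_given_user, p_item_given_subtopic):
    """Negative of the expected probability that each subtopic has at
    least one relevant item in the list, weighted by user interest.

    p_subtopic_given_user: shape (n_S,), user interest p(s_i | u)
    p_item_given_subtopic: shape (list_len, n_S), relatedness p(d | s_i)
    """
    # probability that NO item in the list covers subtopic s_i
    p_none = np.prod(1.0 - p_item_given_subtopic, axis=0)   # (n_S,)
    # probability that at least one item covers s_i
    p_at_least_one = 1.0 - p_none
    return -float(np.sum(p_subtopic_given_user * p_at_least_one))

# toy example: 2 subtopics, a 2-item list (values are illustrative)
p_s = np.array([0.7, 0.3])
p_d_s = np.array([[0.9, 0.0],
                  [0.8, 0.1]])
loss = intent_diversity_loss(p_s, p_d_s)   # -> -0.716
```

A list whose items jointly cover the subtopics the user cares about drives the loss more negative, which is what the regularizer rewards.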
In addition to diversifying the retrieval results themselves, some researchers also study how to generate diverse explanations for the output results.
For instance, \citet{DBLP:conf/sigir/BalloccuBFM22} conceptualize, assess, and operationalize three novel properties (linking interaction recency, shared entity popularity, and explanation type diversity) to monitor explanation quality at the user level in recommendation, and propose re-ranking approaches that optimize for these properties.
They optimize these three indicators of explanation quality as a regularization term in the re-ranking stage.
Here we classify this as an in-processing method.
\subsubsection{Diversity as Score}
\label{sec:div_as_score}
Another widely adopted in-processing method for diversity is to treat diversity as a score during ranking.
As such, the score of an item is composed of two parts: one from the perspective of relevance, and the other from the perspective of diversity.
The most significant difference between these two types of scores is that the relevance score typically assumes the independence of items in a list, but the diversity score is highly dependent on the other items.
Following this line, \citet{LiZZZL17DiverseMF} propose one of the earliest methods for diversified recommendation.
They focus on a sequential recommendation process, where the model recommends one item at a time to form the entire recommendation list.
They define the score of an item at the current position as the sum of a relevance part and a discounted subtopic diversification part.
In detail, for user $u$, given an un-selected $k^{th}$ item $d_k$ and a list of selected $k-1$ items $\sigma^{1:k-1}_u$, they define the score of $d_k$ as:
\begin{equation}
o(d_k|u)=o^{\text{rel}}(d_k|u)+\lambda\cdot o^{\text{div}}(d_k|\sigma^{1:k-1}_u, u),
\label{eq:Diverse_score}
\end{equation}
where $o^{\text{rel}}(d_k|\cdot)$ and $o^{\text{div}}(d_k|\cdot)$ denote the score of $d_k$ from the view of relevance and diversity, respectively.
Specifically, the authors define the relevance score as the inner-product between the user embedding and the item embedding: $o^{\text{rel}}(d_k|u)=\Theta_u\cdot \Theta_{d_k}^\intercal$.
They define the diversity score as the discounted contribution of each subtopic, which decays exponentially as the number of items covering that subtopic in the list increases:
\begin{equation}
o^{\text{div}}(d_k|\sigma^{1:k-1}_u,u)=\sum\limits_{i=1}^{n_S}\beta^{c(\sigma^{1:k}_u, s_i)}\cdot \Theta_u\cdot \Theta_{s_i}^\intercal, \quad \sigma_u^{1:k}=\sigma_u^{1:k-1}\cup\{d_k\}.
\end{equation}
Here $\beta$ is the decay factor (i.e., $0<\beta<1$), $\Theta_u$ is the embedding of user $u$, $\Theta_{s_i}$ is the embedding of subtopic $s_i$, and $c(\sigma^{1:k}_u,s_i)$ denotes the number of items covering subtopic $s_i$ in $\sigma^{1:k}_u$.
As such, for each user, the model greedily selects an un-selected item to maximize the score in Eq.~\ref{eq:Diverse_score} at each position to form the final recommendation list.
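For concreteness, the score of Eq.~\ref{eq:Diverse_score} with the discounted subtopic contribution can be sketched as follows; the embeddings and counts below are toy values, not learned parameters:

```python
import numpy as np

def diversified_score(theta_u, theta_d, theta_S, counts, beta=0.5, lam=1.0):
    """Relevance (inner product) plus the beta-discounted subtopic term.

    counts[i] = c(sigma^{1:k}, s_i): number of items, including the
    candidate itself, covering subtopic s_i once the candidate is added.
    """
    rel = float(np.dot(theta_u, theta_d))
    div = sum((beta ** c) * float(np.dot(theta_u, theta_s))
              for c, theta_s in zip(counts, theta_S))
    return rel + lam * div

# toy 2-d embeddings for one user, one candidate item, two subtopics
theta_u = np.array([1.0, 0.0])
theta_d = np.array([0.5, 0.0])
theta_S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
score = diversified_score(theta_u, theta_d, theta_S, counts=[1, 0])
# rel = 0.5, div = 0.5**1 * 1.0 + 0.5**0 * 0.0 = 0.5  ->  score = 1.0
```

As more already-selected items cover a subtopic, its contribution shrinks geometrically, steering the greedy selection toward uncovered subtopics.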
In order to train all the model parameters $\Theta$, ~\citet{LiZZZL17DiverseMF} assume that the ideal recommendation lists for some sample users are available.
Then, the learning process aims to penalize those generated recommendations which do not respect the sequence in ideal lists.
Taking an individual user $u$ as an example, the loss function on a pair of sampled items $(d_i, d_j)$ can be formulated through a binary cross-entropy loss as follows:
\begin{equation}
\mathcal{L}_u(d_i, d_j)=-y_{ij}\cdot\log(p_{ij})-(1-y_{ij})\cdot\log(1-p_{ij}),
\end{equation}
where $y_{ij}=1$ if $d_i$ is ranked above $d_j$ in the ideal ranking list of $u$, and $p_{ij}$ refers to the probability of ranking $d_i$ above $d_j$ under the current model, computed as $p_{ij}=\text{sigmoid}(o(d_i|u)-o(d_j|u))$.
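A minimal sketch of this pairwise loss; the scores here are illustrative numbers rather than learned inner products:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_loss(score_i, score_j, y_ij):
    """Binary cross-entropy on p_ij = sigmoid(o(d_i|u) - o(d_j|u))."""
    p_ij = sigmoid(score_i - score_j)
    return -(y_ij * math.log(p_ij) + (1 - y_ij) * math.log(1.0 - p_ij))

# d_i precedes d_j in the ideal list (y_ij = 1): a model that agrees
# with the ideal order incurs a smaller loss than one that contradicts it
loss_agree = pairwise_loss(2.0, 0.5, y_ij=1)
loss_disagree = pairwise_loss(0.5, 2.0, y_ij=1)
```

Minimizing this loss pushes the model's score gap $o(d_i|u)-o(d_j|u)$ to match the ideal ordering.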
Several other works follow a similar idea of treating diversity as a score.
For instance, \citet{DBLP:conf/aaai/Yu22} present a novel framework for search result diversification based on the score-and-sort method with direct metric optimization.
They explicitly express each item's diversity score, which determines its rank position through a probability distribution.
\subsection{Post-processing Methods}
Most of the earliest diversity approaches follow the re-ranking paradigm: they aim to achieve diversity after the training procedure by re-ranking the list based on both the item relevance scores and some diversity metrics.
Due to the separation of model training and diversified ranking, these approaches are regarded as post-processing, which can be applied to any recommendation model as a consecutive layer with excellent scalability.
Based on how the diversified list is generated, we categorize them as (\romannumeral1) \textit{greedy-based} and (\romannumeral2) \textit{refinement-based}.
\subsubsection{Greedy-based Methods}
As the name suggests, greedy selection methods iteratively select the item that maximizes a joint measure of relevance and diversity at each position, and finally output the resulting ranking list.
Two of the most representative post-processing methods in this category are: (\romannumeral1) \textit{Maximal Marginal Relevance (MMR)} and (\romannumeral2) \textit{Determinantal Point Process (DPP)}.
\paragraph{\textbf{MMR}}
Maximal Marginal Relevance (MMR)~\cite{CarbonellG98MMR} is the pioneering diversity approach in this category.
\citet{CarbonellG98MMR} develop the concept of ``marginal relevance'' as a linear combination of the relevance and diversity of each item, in response to the fact that user needs include not only \textit{relevance} but also \textit{novelty} and \textit{diversity}.
In particular, an item has high marginal relevance if it is both relevant to the user and has low similarity to previously selected items.
Based on this protocol, MMR greedily selects the item that can maximize the marginal relevance to form the final ranking list.
We can formulate the process of MMR selecting the $k^{th}$ item for user $u$ as follows:
\begin{equation}
d_k=\argmax\limits_{d\in(\mathcal{D}\backslash\mathcal{D}_u)\backslash\sigma^{1:k-1}_u}\left[o^{\text{rel}}(d|u)+\lambda\cdot o^{\text{div}}(d|\sigma^{1:k-1}_u, u)\right].
\end{equation}
Researchers generally model $o^{\text{rel}}(\cdot)$ in similar ways, as the inner product between already well-trained embeddings, while adopting different ways to model $o^{\text{div}}(\cdot)$.
\citet{CarbonellG98MMR} define the diversity term as follows:
\begin{equation}
o^\text{div}(d|\sigma^{1:k-1}_u, u)=-\max\limits_{d_j\in\sigma^{1:k-1}_u}\text{sim}(d, d_j).
\end{equation}
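MMR's greedy loop can be sketched directly; the relevance and similarity values below are made-up placeholders for scores coming from an already-trained model:

```python
def mmr_rank(rel, sim, k, lam=1.0):
    """Greedy MMR: at each position pick the item maximizing
    rel(d) - lam * max_{j in selected} sim(d, j)."""
    selected, remaining = [], set(range(len(rel)))
    while remaining and len(selected) < k:
        def marginal(d):
            redundancy = max((sim[d][j] for j in selected), default=0.0)
            return rel[d] - lam * redundancy
        best = max(remaining, key=marginal)
        selected.append(best)
        remaining.remove(best)
    return selected

rel = [1.0, 0.9, 0.3]            # hypothetical relevance scores
sim = [[1.0, 0.95, 0.1],         # items 0 and 1 are near-duplicates
       [0.95, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
```

With $\lambda=1$, `mmr_rank(rel, sim, 2)` skips the near-duplicate item 1 and returns `[0, 2]`; with $\lambda=0$ it degenerates to pure relevance ranking, `[0, 1]`.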
One may observe that the recommendation generation process of MMR is quite similar to that of paper~\cite{LiZZZL17DiverseMF}, which falls under the category ``In-processing - Diversity as Score''.
The difference between these two works is as follows.
MMR is a post-processing method that performs a greedy selection based on already learned model embeddings.
In contrast, paper~\cite{LiZZZL17DiverseMF} adopts an in-processing method, whose greedy selection is not based on well-trained embeddings.
In other words, the diversification of MMR is added after model training, while the diversification in paper~\cite{LiZZZL17DiverseMF} is added during the model training procedure.
This is also the primary distinction between any post-processing and in-processing approaches.
\paragraph{\textbf{DPP}}
Determinantal Point Process (DPP) is one of the cutting-edge post-processing methods for diversity enhancement in search and recommendation.
First introduced by~\citet{macchi1975DPP_origin} with the name ``fermion process'', DPP was originally used to precisely describe the repulsion and diversity phenomenon for fermion systems in thermal equilibrium.
Recently, it has been applied in search and recommendation for enhancing diversity~\cite{Chen2017DPP1}.
Prior to the application of DPP in recommendation, the majority of approaches to diversity, such as the basic version of MMR~\cite{CarbonellG98MMR}, compute the similarity between items in a pair-wise way and avoid recommending redundant items to improve diversity.
However, these methods are sub-optimal, since pair-wise dissimilarities may not capture some complex similarity relationships within the whole list, and relevance and diversity are captured separately~\cite{Chen2017DPP1}.
Thanks to DPP's outstanding ability to capture the global correlations among data with an elegant probabilistic model~\cite{KuleszaT12DPP4ML}, DPP-based methods directly model the dissimilarities among items in a set-wise way using a unified model.
The idea of DPP can be demonstrated as follows.
A point process $\mathcal{P}$ on a set $\mathcal{D}$ (e.g.,
a set of $|\mathcal{D}|$ items) is a probability distribution on the powerset of $\mathcal{D}$ (the set of all subsets
of $\mathcal{D}$).
That is, $\forall \mathcal{C}\subseteq\mathcal{D}$, $\mathcal{P}$ assigns some probability $p(\mathcal{C})$, and $\sum_{\mathcal{C}\subseteq\mathcal{D}}p(\mathcal{C})=1$.
Although a DPP defines a probability distribution over an exponential number of sets, it can be compactly parameterized by a single positive semi-definite (PSD) matrix $\mathbf{L}\in\mathbb{R}^{|\mathcal{D}|\times |\mathcal{D}|}$~\cite{BorodinDPP}.
In other words, the probability of a subset $\mathcal{C}$ represented by a DPP can be written as:
\begin{align}
p(\mathcal{C})\propto\text{det}(\mathbf{L}_{\mathcal{C}}),
\label{eq:DPP_def}
\end{align}
where $\mathbf{L}_{\mathcal{C}}=[\mathbf{L}_{ij}]_{d_i, d_j\in\mathcal{C}}$.
However, it is still unclear how the determinant unifies the relevance and diversity.
To show this, we offer two views.
The first comprehension is based on the geometric meaning of determinant.
Since $\mathbf{L}$ is PSD, we can find a matrix $\mathbf{B}$ such that $\mathbf{L}=\mathbf{B}^\intercal\mathbf{B}$.
Then we have $\text{det}(\mathbf{L}_{\mathcal{C}})=\text{Vol}^2(\{\mathbf{B}_i\}_{i\in\mathcal{C}})$, which can be represented as the squared volume of the parallelepiped spanned by the columns of $\mathbf{B}$ corresponding to elements in $\mathcal{C}$.
The volume here is determined by two factors: the magnitude of column vectors and the orthogonality among them.
If we treat the columns of the matrix $\mathbf{B}$ as item embeddings, then it is clear to see that a larger magnitude of the vectors (\underline{\textit{higher relevance}}) and a stronger orthogonality among them (\underline{\textit{higher dissimilarity}}) lead to a higher volume (\underline{\textit{higher determinant}}).
Thus, the determinant of a matrix unifies both relevance and diversity.
The second comprehension is based on a simple case where the matrix is $2\times 2$:
\begin{align}
\mathbf{L}_{\mathcal{C}}=\left[\begin{array}{cc}
l_{11} & l_{12} \\
l_{21} & l_{22}
\end{array}\right],
\end{align}
whose determinant is $\text{det}(\mathbf{L}_{\mathcal{C}})=l_{11}\cdot l_{22}- l_{12}\cdot l_{21}$.
Assuming the diagonal entries indicate the relevance of items and the off-diagonal entries indicate the similarity among items, then the determinant can be represented as relevance minus similarity.
Although this comprehension is for a 2-dimensional case, a similar intuition holds for higher dimensions.
Based on the above comprehension and intuition, \citet{Chen2017DPP1} construct user-specific $\mathbf{L}$ as $\mathbf{L}=\text{diag}(\bm{r})\cdot \mathbf{S}\cdot\text{diag}(\bm{r})$,
where $\text{diag}(\bm{r})$ is a diagonal matrix whose diagonal entries are the relevance scores of items to the user, and the $(i, j)$-th element of $\mathbf{S}$ is the similarity score between the $i^{th}$ item and the $j^{th}$ item.
Thus, Eq.~\ref{eq:DPP_def} can also be written as:
\begin{align}
p(\mathcal{C})&\propto\text{det}(\mathbf{L}_{\mathcal{C}}),\\
&=\left(\prod\limits_{i\in\mathcal{C}}r_i^2\right)\cdot\text{det}({\mathbf{S}_{\mathcal{C}}}).
\end{align}
Finally, to obtain the diversified top-$K$ recommendation for user $u$ given the whole item set $\mathcal{D}$, we first construct a user-specific matrix $\mathbf{L}$ as aforementioned.
Then the task can be formulated as follows:
\begin{equation}
\sigma_u=\argmax_{\mathcal{C}\subseteq (\mathcal{D}\backslash\mathcal{D}_u), |\mathcal{C}|=K}\text{det}(\mathbf{L}_{\mathcal{C}}).
\label{eq:DPP_task}
\end{equation}
However, directly solving this task is expensive.
Approximate solutions to Eq.~\ref{eq:DPP_task} can be obtained by several algorithms, among which the greedy algorithm was long considered the fastest.
Initializing $\sigma_u$ as the empty set, the greedy algorithm iteratively adds the item that maximizes the determinant gain.
Specifically, the selection of the $k^{th}$ item for user $u$ can be described as follows:
\begin{equation}
d_k=\argmax_{d\in (\mathcal{D}\backslash\mathcal{D}_u)\backslash{\sigma_u^{1:k-1}}}\left(\text{det}(\mathbf{L}_{{\sigma_u^{1:k-1}}\cup\{d\}}) - \text{det}(\mathbf{L}_{{\sigma_u^{1:k-1}}})\right).
\end{equation}
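A small NumPy sketch of the kernel construction and this greedy step; the relevance and similarity values are toy placeholders, and production systems use faster incremental updates (e.g., Cholesky-based) rather than recomputing determinants:

```python
import numpy as np

def greedy_dpp(rel, sim, k):
    """Greedy MAP inference for a DPP with kernel
    L = diag(r) S diag(r): at each step add the item with the
    largest determinant gain."""
    r = np.asarray(rel, dtype=float)
    L = np.diag(r) @ np.asarray(sim, dtype=float) @ np.diag(r)

    def subdet(idx):
        # determinant of the principal submatrix; det of the 0x0 matrix is 1
        return float(np.linalg.det(L[np.ix_(idx, idx)]))

    selected = []
    for _ in range(k):
        gains = {d: subdet(selected + [d]) - subdet(selected)
                 for d in range(len(r)) if d not in selected}
        selected.append(max(gains, key=gains.get))
    return selected

rel = [1.0, 0.9, 0.8]            # hypothetical relevance scores r
sim = [[1.0, 0.95, 0.1],         # items 0 and 1 are near-duplicates
       [0.95, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
selected = greedy_dpp(rel, sim, k=2)
```

Even though item 1 is more relevant than item 2, the near-duplicate pair shrinks the determinant, so the greedy step picks the relevant-but-dissimilar set `[0, 2]`.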
Based on the strength of DPP, \citet{DBLP:conf/sigir/GongZC0B0YQ22} propose a diversity-aware Web APIs recommendation methodology for choosing diverse and suitable APIs for mashup creation.
The APIs recommendation issue for mashup creation is specifically treated as a graph search problem that seeks the smallest group of Steiner trees in an API correlation graph.
Their method innovatively employs determinantal point processes to diversify the recommended results.
\subsubsection{Refinement-based Methods}
Different from greedy-based methods that select each item iteratively to form the entire ranking list, refinement-based methods adjust the positions or replace several items from existing ranking lists.
Generally, in refinement-based methods, items are first ranked based on some relevance metrics and then refined by introducing some diversity metrics.
Several earlier works follow this line of approach.
For instance, \citet{ZieglerMKL05_topic_diverse} construct two ranking lists for retrieving the diversified top-$K$ items: $\sigma_{rel}$ and $\sigma_{div}$, where the first list is constructed merely based on the relevance score, while the second is constructed based on the diversity score of each item over the whole candidate set.
Both lists rank the items in descending order based on their scores.
To achieve a single diversified ranking list, the authors merge the two lists using a scaling factor to trade off how much to rely on the rankings in $\sigma_{rel}$ versus $\sigma_{div}$.
A similar strategy is used by~\citet{YuLA09}.
Starting from a single ranking list of the $K$ highest-scoring items, the authors swap the item that contributes the least to the diversity of the entire set with the next highest-scoring item among the remaining candidates.
They also set a relevance threshold when replacing items in order to avoid a dramatic drop in the overall relevance.
\section{Online Approaches for Diversity}
\label{sec:online-approach}
So far, we have reviewed the offline approaches for enhancing diversity in search and recommendation.
These methods generally train the model in an offline manner using the existing data with ground-truth labels.
However, in some situations, these labeled data are insufficient or unavailable, especially in the recommendation scenario.
For instance, one of the most well-known challenges is the ``cold-start'' problem where new users join the system.
To resolve these problems, one effective way is to use online approaches where the systems first display item lists to users, gather user feedback, and then update the model for the next turn.
Based on whether the user preference changes, we further divide them into (\romannumeral1) \textit{bandit strategies} (i.e., invariant user preference) and (\romannumeral2) general \textit{reinforcement learning} (i.e., dynamic user preference).
In this section, we review how to achieve diversity in these approaches.
\subsection{Bandit Strategies}
As one of the simplest examples of reinforcement learning (RL), the bandit problem was first introduced by~\citet{Thompson33_bandit} in 1933.
The most classical bandit problem is known as the multi-armed bandit (MAB), whose name comes from imagining a gambler at a row of slot machines, who has to decide how to play these machines to gain as much money as possible in a time horizon~\cite{Weber92_Gittins}.
A bandit problem can be generally defined as a sequential game between an agent and an environment~\cite{lattimore20_bandit}.
The game is played over $T$ rounds (i.e., the time horizon); in each round $t\in[T]$, the agent first chooses an action $A_t$ from a given set $\mathcal{A}$, and the environment then reveals a reward $r_t\in \mathbb{R}$.
The goal of the agent is to maximize the $T$-step cumulative reward or, equivalently, minimize the $T$-step cumulative regret.
Here, the cumulative regret is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards:
\begin{equation}
\rho=T\cdot\mu^*-\sum_{t=1}^T r_t,
\end{equation}
where $\mu^*$ is the maximal reward mean associated with the optimal strategy.
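As a toy illustration of this regret, the following simulates an $\varepsilon$-greedy agent on Bernoulli arms; the arm means, horizon, and $\varepsilon$ are arbitrary choices for the sketch, not values from the surveyed papers:

```python
import random

def epsilon_greedy_regret(means, T, eps=0.1, seed=0):
    """Simulate eps-greedy on a Bernoulli MAB and return the
    cumulative regret rho = T * mu_star - sum_t r_t."""
    rng = random.Random(seed)
    n = len(means)
    counts, est = [0] * n, [0.0] * n
    total = 0.0
    for _ in range(T):
        if rng.random() < eps:
            a = rng.randrange(n)                       # explore
        else:
            a = max(range(n), key=est.__getitem__)     # exploit
        r = 1.0 if rng.random() < means[a] else 0.0
        counts[a] += 1
        est[a] += (r - est[a]) / counts[a]             # running mean estimate
        total += r
    return T * max(means) - total

rho = epsilon_greedy_regret([0.9, 0.2, 0.1], T=2000)
```

Because the agent mostly exploits the best arm, the realized regret stays far below the worst-case $T\cdot(\mu^*-\mu_{\min})$.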
It is intuitive to formulate online search and recommendation as an MAB by treating the algorithm as the agent, items as arms, displaying an item as the action of selecting the corresponding arm, and the user feedback as the reward.
However, MAB never uses information from the state, also called the context (i.e., the features of users and items); therefore, it generally cannot achieve optimal performance, especially in recommendation, where personalization is a key requirement.
To address this, most works use an extension of MAB called Contextual MAB (CMAB) to model online search and recommendation problems.
It has been shown in an abundance of work that CMAB can generally outperform MAB on the relevance of output lists.
Due to the simplicity of implementation and capability of making real-time decisions, recent research also aims to incorporate diversity in bandit algorithms for search and recommendation.
There are generally two ways to enhance diversity in these methods: either to treat diversity as part of the scores of each arm or to design a different bandit architecture that can lead to a diversified result.
We review both of these two ideas in the following paragraphs.
\subsubsection{Diversity as Score}
Most works interpret diversity as part of the score on each arm (item) in the bandit algorithms for search and recommendation.
For instance, \citet{li2020cascading} formulate the diversified retrieval of top-$K$ items as a bandit problem with cascading user behavior, where a user browses the displayed list from top to bottom, clicks the first attractive item, and stops browsing the rest.
If the user clicks on an item, the reward is 1; otherwise, it is 0.
Then the objective is to minimize the following $T$-step cumulative regret:
\begin{equation}
R(T)=\sum_{t=1}^{T}\mathbb{E}[r(\sigma^*, \alpha_t)-r(\sigma_t, \alpha_t)].
\end{equation}
Here, $r(\cdot)$ is the binary reward from the user feedback.
$\sigma_t$ is the displayed ranking list at time step $t$, while $\sigma^*$ is the optimal ranking list, with constraint that $|\sigma_t|=|\sigma^*|=K$.
$\alpha_t$ is a vector of length $K$, indicating the \textit{attraction} of each arm (item) in the ranking list at time step $t$, which is how diversity comes in.
Specifically, the authors define the \textit{attraction} as a combination of relevance and diversity, following a very similar way to Eq.~\ref{eq:Diverse_score} in section~\ref{sec:div_as_score}.
Again, any definition of diversity is applicable; both papers~\cite{li2020cascading} and~\cite{DBLP:conf/aaai/DingLMCT21} use the gain in subtopic coverage (S-Coverage) from adding item $d_i$ as the diversity component of its attraction score.
Several other works choose different ways to define the diversity score.
For instance, \citet{QinCZ14CCB} use the entropy regularizer, while \citet{WangWWH17BiUCB} propose three separate solutions, borrowing from MMR~\cite{CarbonellG98MMR}, the entropy regularizer~\cite{QinCZ14CCB}, and temporal user-based switching~\cite{LathiaHCA10Time}.
\subsubsection{Diversity as Architecture}
Rather than merely treating diversity as part of the score, \citet{ParaparR21Bandit_Diverse_Arm} design a different bandit architecture for enhancing diversity.
Different from prior works that interpret each arm as an individual item, the authors first make each arm represent a unique item category, and further consider retrieving different items under each category.
Such a two-stage design not only guarantees that the selected items are diverse (i.e., satisfying distance-based metrics), but also guarantees that different categories are covered as much as possible (i.e., satisfying coverage-based metrics).
In such a way, the algorithm can be efficiently used to construct user profiles with diverse preference elicitation.
All the works above lie in the recommendation scenario, where the personalization is at the core.
However, the output of a conventional web search is typically static, so it is more concerned with satisfying a population of users as opposed to each individual.
Following this line, \citet{RadlinskiKJ08_RBA} propose to learn diverse rankings in web search systems through MAB.
Their proposed approach, Ranked Bandits Algorithm (RBA), runs an MAB instance $\text{MAB}_i$ for each rank $i$ (i.e., $1\leq i\leq K$), where the arm of each MAB indicates a unique item.
When user $u_t$ arrives at time $t$, each $\text{MAB}_i$ sequentially and independently decides which item to select at the rank $i$ for displaying to $u_t$.
Assuming $u_t$ follows a cascading browsing behavior (i.e., clicks at most one relevant item in the list), if $u_t$ clicks on an item selected by an MAB instance, the reward for the corresponding arm of the MAB at that rank is 1.
The reward for the arms corresponding to all other items is 0.
In such a way, each MAB can update the value of each item iteratively through multiple rounds.
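A simplified sketch of RBA with an $\varepsilon$-greedy bandit per rank; the original algorithm allows any MAB per rank and replaces duplicate items across ranks, which this sketch omits:

```python
import random

class EpsGreedyMAB:
    """Minimal eps-greedy bandit over n items (a stand-in: RBA
    permits any MAB algorithm at each rank)."""
    def __init__(self, n_items, eps=0.1, seed=0):
        self.eps, self.rng = eps, random.Random(seed)
        self.counts = [0] * n_items
        self.values = [0.0] * n_items

    def select(self):
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def rba_round(mabs, relevant_items):
    """One RBA round under a cascading user: each per-rank MAB picks an
    item; the user clicks the first displayed relevant item, and only
    arms matching the clicked item receive reward 1."""
    ranking = [m.select() for m in mabs]
    clicked = next((d for d in ranking if d in relevant_items), None)
    for m, item in zip(mabs, ranking):
        m.update(item, 1.0 if clicked is not None and item == clicked else 0.0)
    return ranking, clicked

mabs = [EpsGreedyMAB(3, seed=i) for i in range(2)]   # K = 2 ranks, 3 items
for _ in range(1000):
    rba_round(mabs, relevant_items={2})
```

After enough rounds, the top-rank bandit learns that item 2 is the one the (simulated) users click.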
Although RBA shows effectiveness both empirically and theoretically, it is worth noting that it is hard to extend to non-binary payoffs.
\subsection{Reinforcement Learning}
Although bandit strategies show efficiency and effectiveness in online search and recommendation, they have several obvious limitations.
Firstly, bandit algorithms have only one state, with several actions that all lead back to the same state.
In other words, they assume that user preference will always remain the same, which does not hold in most real-world scenarios.
Secondly, bandit algorithms only care about the immediate reward, while the long-term reward is still significant to real-world users.
To address these, most research naturally brings in the reinforcement learning (RL) framework to model the problem, where the state can be affected by the action of agents and the long-term reward is also captured during recommendation.
In the RL setting, diversity has been promoted by employing efficient exploration-exploitation strategies.
\citet{ZhengZZXY0L18DRN} first use a Deep Q-Learning Network (DQN)~\cite{MnihKSRVBGRFOPB15DQN} to capture the long-term reward of users' actions.
As for the diversity, they adopt a Dueling Bandit Gradient Descent (DBGD)~\cite{GrotovR16tutorial, HofmannSWR13fastIR, YueJ09dueling} algorithm to do exploration in the DQN framework.
Specifically, during their exploration strategy, the agent generates a recommendation list $\sigma$ using the current network $Q$ and another list $\sigma'$ using an exploration network $Q'$, which shares the same architecture as $Q$ with a small perturbation added to the parameters of $Q$.
Then the authors conduct a probabilistic interleave~\cite{GrotovR16tutorial} to merge $\sigma$ and $\sigma'$ into a diversified ranking list.
Other researchers, such as~\citet{DBLP:conf/wsdm/StamenkovicKAXK22}, first formulate the next-item recommendation objective as a multi-objective MDP problem.
Thereafter, they propose a Scalarized Multi-objective Reinforcement Learning (SMORL) model, which works as a regularizer, incorporating desired properties into the recommendation model to balance the relevance, diversity, and novelty of recommendation.
\section{Openness and Future Directions}
\label{sec:future}
Researchers have realized the importance of improving diversity in retrieval systems and have started the exploration.
However, we argue that there still exist open questions in this area.
In this section, we discuss a number of open challenges and point out some future opportunities in an effort to encourage additional research in this field.
\subsection{Time Dependency}
Existing research on diversity-aware retrieval systems focuses primarily on a single time point without taking a continuous time span into account.
In real-world systems, however, time plays an important role in the study of user behaviors and intentions, as humans may require varying degrees of diversity at different stages of their interaction with the system.
We argue that an intriguing future research direction is to investigate how to ensure personalized and time-dependent diversity in a continuous learning setting in which data arrive in a time-series fashion.
For instance, when a new user first joins a system, it is reasonable for the algorithm to display more diverse results in order to help the user better explore her interests.
As more data is collected about the user's interaction with the system, the algorithm should be able to adjust itself to adaptively balance relevance and diversity in order to not only provide items that the user likes based on the user's past preferences, but also show serendipity to the user at some point in order to attract and retain the user.
\subsection{Direct Optimization of Metrics}
One of the challenges in enhancing the diversity of search results in retrieval systems is that some metrics are difficult to optimize directly.
Although methods have been proposed to make some metrics differentiable (e.g., $\alpha$-nDCG, Gini Index)~\cite{WuCZZ09SmoothDCG, DoU22Gini}, most metrics, such as coverage-based metrics and SD Index, are difficult to optimize directly.
This challenge hinders the capacity of in-processing methods to make end-to-end tradeoffs between diversity and other metrics.
Exploring a more general method for differentiating these metrics for end-to-end training could be an intriguing line of research.
\subsection{Diversity in Explainability}
The majority of diversity-aware research on search and recommendation merely focuses on displaying a diverse list of items to users.
Nonetheless, diversity can also refer to other dimensions, such as explainability, which is also crucial for retaining and satisfying users.
For instance, it is not ideal that the explanation of a recommender system to a user is always \textit{``based on your previous history''} or \textit{``other users similar to you also like...''}.
To the best of our knowledge, this research direction has received very limited attention; only a few works touch on this area~\cite{li2020directional, DBLP:conf/sigir/BalloccuBFM22}.
We argue that it is interesting to investigate what features of users and items lead to the varying degrees of diversity in the output lists, as this will help us better understand the causal factors of diversity and direct us to generate diverse explanations for users.
\section{Application}
\section{Conclusion}
In this survey, we introduce the foundations, definitions, metrics, and approaches of diversity in retrieval systems from the perspective of search and recommendation.
We begin the survey with a brief introduction of why diversity is important in retrieval systems for the benefit of multiple stakeholders (i.e., customers and providers).
To help better understand the diversity concepts, we summarize the different diversity concerns in search and recommendation, highlighting their connection and distinctions.
For the main body of the survey, we provide a unified taxonomy to classify the metrics and approaches of diversification in both search and recommendation.
To close the survey, we discuss the open research questions of diversity-aware research in retrieval systems in the hopes of inspiring future innovations and encouraging the deployment of diversity in real-world systems.
\graphicspath{ {./fig/} }
\begin{document}
\title{A Survey of Diversification Techniques in Search and Recommendation}
\author{Haolun Wu}
\authornote{Both authors contributed equally to this survey.}
\email{[email protected]}
\orcid{0000-0002-8206-4969}
\affiliation{%
\institution{School of Computer Science, McGill University \& Mila}
\streetaddress{845 Sherbrooke St W}
\city{Montreal}
\state{Quebec}
\country{Canada}
}
\author{Yansen Zhang}
\authornotemark[1]
\email{[email protected]}
\orcid{0000-0002-8206-4969}
\affiliation{%
\institution{Department of Computer Science, City University of Hong Kong}
\streetaddress{83 Tat Chee Avenue, Kowloon Tong}
\country{Hong Kong SAR}
}
\author{Chen Ma}
\authornote{Corresponding Author.}
\email{[email protected]}
\affiliation{%
\institution{Department of Computer Science, City University of Hong Kong}
\streetaddress{83 Tat Chee Avenue, Kowloon Tong}
\country{Hong Kong SAR}
}
\author{Fuyuan Lyu}
\email{[email protected]}
\affiliation{%
\institution{School of Computer Science, McGill University}
\streetaddress{845 Sherbrooke St W}
\city{Montreal}
\state{Quebec}
\country{Canada}
}
\author{Fernando Diaz}
\email{[email protected]}
\affiliation{%
\institution{Google \& Mila, Canadian CIFAR AI Chair}
\streetaddress{1253 McGill College Ave}
\city{Montreal}
\state{Quebec}
\country{Canada}
\postcode{H3B 2Y5}}
\author{Xue Liu}
\email{[email protected]}
\affiliation{%
\institution{School of Computer Science, McGill University}
\streetaddress{845 Sherbrooke St W}
\city{Montreal}
\state{Quebec}
\country{Canada}
\postcode{H3A 0G4}
}
\renewcommand{\shortauthors}{Wu and Zhang et al.}
\begin{abstract}
Diversifying search results is an important research topic in retrieval systems in order to satisfy both the various interests of customers and the equal market exposure of providers. There has been growing attention on diversity-aware research during recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, which are two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences. For the survey's main body, we present a unified taxonomy of diversification metrics and approaches in retrieval systems, from both the search and recommendation perspectives. In the later part of the survey, we discuss the open research questions of diversity-aware research in search and recommendation in an effort to inspire future innovations and encourage the implementation of diversity in real-world systems.
\end{abstract}
\maketitle
\input{01_introduction}
\input{02_diversity_search}
\input{03_diversity_recsys}
\input{04_preliminary}
\input{05_metric}
\input{06_offline_approach}
\input{07_online_approach}
\input{08_future}
\input{09_application}
\newpage
\bibliographystyle{ACM-Reference-Format}
\section*{Broader Impact}
Our work relates to batch offline reinforcement learning, which is crucial for scaling up RL algorithms for practical applications, especially in healthcare or marketing domains. In healthcare systems, data is often expensive to collect, requiring full utilization of the available data. Batch offline RL provides a way towards this, especially when solving the task requires sequential decision making under fixed data. Furthermore, our work also relates to risk-sensitive optimization and safe RL, which have a number of applications beyond academia. For example, in robotic systems and autonomous driving, it is imperative to prevent collisions of the car with objects or people on the road, even during training. Similarly, in manipulation tasks, we want to ensure that the work space is not damaged by a robotic arm, and in turn that the arm is not damaged by rigid objects it interacts with. Learning RL policies under constraints, ensuring safety when deploying agents for real world applications, is therefore of vital importance for learning systems to be useful to society.
\section{Introduction}\vspace*{-0.2cm}
Offline batch reinforcement learning (RL) algorithms are key towards scaling up RL for real world applications, such as robotics~\citep{sergey_robotics} and medical problems. This is because offline RL provides the appealing ability for agents to learn from fixed datasets, similar to supervised learning, avoiding continual interaction with the environment, which could be problematic for safety and feasibility reasons. However, significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates, a problem encountered by most off-policy RL algorithms \citep{ImportanceSamplingDoina}. A complementary problem is that the value function can become overly optimistic in areas of the state space outside the visited batch, leading the agent into data regions where its behavior is poor~\cite{BCQ}. Recently there has been some progress in offline RL~\citep{BEAR, behaviour_regularized, BCQ} trying to tackle both of these problems.
In this work, we study the problem of offline policy optimization with variance minimization. To avoid overly optimistic value function estimates, we propose to learn value functions under variance constraints, leading to a pessimistic estimation, which can significantly help offline RL algorithms, especially under large distribution mismatch. We propose a framework for variance minimization in offline RL, such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions.
We develop a novel approach for variance regularized offline actor-critic algorithms, which we call \texttt{Offline Variance Regularizer}\xspace (\texttt{OVR}\xspace). The key idea of \texttt{OVR}\xspace is to constrain the policy improvement step via variance regularized value function estimates.
Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates, by instead considering the variance of stationary distribution corrections with per-step rewards, and using the Fenchel transformation \citep{ConvexBoyd} to formulate a minimax optimization objective. This allows minimizing variance constraints by instead optimizing dual variables, resulting in simply an augmented reward objective for variance regularized value functions.
We show that even with variance constraints, we can ensure policy improvement guarantees, where the regularized value function leads to a lower bound on the true value function, which mitigates the usual overestimation problems in batch RL.
The use of Fenchel duality in computing the variance allows us to avoid double sampling, which has been a major bottleneck in scaling up variance-constrained actor-critic algorithms in prior work~\cite{PrashanthVarianceAC, RiskSensitive}. Practically, our algorithm is easy to implement, since it simply involves augmenting the rewards with the dual variables only, such that the regularized value function can be implemented on top of \textit{any} existing offline policy optimization algorithms.
We
evaluate our algorithm on existing offline benchmark tasks based on continuous control domains. Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random, or when it is very different from the data distributions encountered during training.
\section{Preliminaries and Background}
We consider an infinite horizon MDP $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$ where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $\mathcal{P}$ is the transition dynamics, $r$ is the reward function and $\gamma$ is the discount factor. The goal of reinforcement learning is to maximize the expected return $\mathcal{J}(\pi) = \mathbb{E}_{s \sim d_{\beta}} [ V^{\pi}(s) ]$, where $V^{\pi}(s)$ is the value function $V^{\pi}(s)=\mathbb{E} [ \sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \mid s_0 = s ]$, and $\beta$ is the initial state distribution. Considering parameterized policies $\pi_{\theta}(a|s)$, the goal is to maximize the returns by following the policy gradient \citep{pg_theorem}, based on the performance metric defined as:
\begin{align}
\label{eq:primal_dual_equivalency}
J(\pi_{\theta}) = \mathbb{E}_{s_0 \sim \rho, a_0 \sim \pi(s_0)} \Big[ Q^{\pi_{\theta}}(s_0, a_0) \Big] = \mathbb{E}_{(s,a) \sim d_{\pi_{\theta}}(s,a)} \Big[ r(s,a) \Big]
\end{align}
where $Q^{\pi}(s,a)$ is the state-action value function, with $V^{\pi}(s) = \sum_{a} \pi(a|s) Q^{\pi}(s,a)$. The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy $\pi_{\theta}$, where the normalized state-action visitation distribution under policy $\pi$ is defined as
$d_{\pi}(s,a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s, a_t = a | s_0 \sim \beta, a \sim \pi(s_0) )$. The equality in equation \ref{eq:primal_dual_equivalency} follows from the linear programming (LP) formulation of RL. In this work, we consider the off-policy learning problem under a fixed dataset $\mathcal{D}$, which contains $(s, a, r, s')$ tuples collected under a known behaviour policy $\mu(a|s)$. In the off-policy setting, importance sampling \citep{ImportanceSamplingDoina} is often used to reweight trajectories from the behaviour data-collecting policy, so as to obtain unbiased estimates of the expected return. At each time step, the importance sampling correction $\frac{\pi(a_t|s_t)}{\mu(a_t|s_t)}$ is applied, giving the expected return over the entire trajectory as
$$J(\pi) = (1 - \gamma) \mathbb{E}_{(s,a) \sim d_{\mu}(s,a)} \left[ \sum_{t=0}^{T} \gamma^{t} r(s_t, a_t) \left( \prod_{t'=1}^{T} \frac{\pi(a_{t'} \mid s_{t'})}{\mu(a_{t'} \mid s_{t'})} \right) \right]$$
Recent works \citep{BCQ} have demonstrated that, instead of importance sampling corrections, directly maximizing value functions with deterministic or reparameterized policy gradients \citep{DDPG, TD3} allows learning under fixed datasets, addressing the over-estimation problem by maximizing objectives of the form $\max_{\theta} \mathbb{E}_{s \sim \mathcal{D}} \Big[ Q^{\pi_{\theta}}(s, \pi_{\theta}(s)) \Big]$.
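As a minimal illustration of the trajectory-level importance sampling estimator above (a sketch with invented probabilities, not any cited implementation), the product of per-step ratios is applied to the discounted return:

```python
import numpy as np

def is_return(rewards, pi_probs, mu_probs, gamma):
    """Trajectory-level importance-sampled discounted return:
    (prod_t pi(a_t|s_t)/mu(a_t|s_t)) * sum_t gamma^t r_t."""
    ratio = np.prod(np.asarray(pi_probs) / np.asarray(mu_probs))
    discounted = sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))
    return ratio * discounted

# When pi == mu the correction is 1 and we recover the plain discounted return.
print(is_return([1.0, 1.0, 1.0], [0.5] * 3, [0.5] * 3, gamma=0.9))
```

With identical target and behaviour probabilities, the estimator reduces to $1 + \gamma + \gamma^{2}$.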
\section{Variance Regularization via Fenchel Duality in Offline RL} \vspace*{-0.2cm}
\label{sec:Variance_constraints}
In this section, we first present our approach, based on the variance of stationary distribution corrections rather than importance re-weighting of episodic returns, in section \ref{sec:var_dual}. We then derive our approach via Fenchel duality on the variance, which avoids the double sampling issue \citep{SBEED} and leads to a variance regularized offline optimization objective, in section \ref{sec:var_reg_max_return}. Finally, we present Algorithm \ref{algorithm:offline_variance_regularizer}, where the proposed regularizer can be used with any existing offline RL algorithm.
\subsection{Variance of Rewards with Stationary Distribution Corrections}
\label{sec:var_dual}
In this work, we consider the variance of rewards under occupancy measures in offline policy optimization. Let us denote the returns as $D^{\pi} = \sum_{t=0}^{T} \gamma^{t} r(s_t, a_t)$, such that the value function is $V^{\pi} = \mathbb{E}_{\pi} [ D^{\pi} ]$. The 1-step importance sampling ratio is $\rho_t = \frac{\pi(a_t|s_t)}{\mu(a_t | s_t)}$, and the $T$-step ratio is $\rho_{1:T} = \prod_{t=1}^{T} \rho_t$. Considering per-decision importance sampling (PDIS) \citep{ImportanceSamplingDoina}, the returns can similarly be written as $D^{\pi} = \sum_{t=0}^{T} \gamma^{t} r_t \rho_{0:t}$. The variance of episodic returns with off-policy importance sampling corrections, which we denote by $\mathcal{V}_{\mathcal{P}}(\pi)$, can be written as: $\mathcal{V}_{\mathcal{P}}(\pi) =\mathbb{E}_{s \sim \beta, a \sim \mu(\cdot | s), s' \sim \mathcal{P}(\cdot| s,a)} \Big[ \Big( D^{\pi}(s,a) - J(\pi) \Big)^{2} \Big]$.
Instead of importance sampling, several recent works have proposed marginalized importance sampling with stationary state-action distribution corrections \citep{BreakingHorizon, DualDICE, GenDICE, MWL_Uehara}, which can lead to lower variance estimators at the cost of introducing bias. Denoting the stationary distribution ratio as $\omega(s,a) = \frac{d_{\pi}(s,a)}{d_{\mu}(s,a)}$, the per-step corrected rewards can be written as $W^{\pi}(s,a) = \omega(s,a) r(s,a)$. The variance of marginalized IS is:
\begin{align}
\label{eq:variance_marginalized_IS}
& \mathcal{V}_{\mathcal{D}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\mu}(s,a)} \Big[ \Big(W^{\pi}(s,a) - J(\pi) \Big)^{2} \Big] \notag \\
& = \mathbb{E}_{(s,a) \sim d_{\mathcal{\mu}}(s,a)} \Big[ W^{\pi}(s,a)^{2} \Big] - \mathbb{E}_{(s,a) \sim d_{\mathcal{\mu}}(s,a)} \Big[ W^{\pi}(s,a) \Big]^{2}
\end{align}
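The two-moment decomposition of the variance above is a simple algebraic identity that can be sanity-checked numerically; the sketch below uses synthetic density ratios and rewards (both invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
# hypothetical per-sample density ratios omega(s,a) and rewards r(s,a) from a batch
omega = rng.uniform(0.5, 1.5, size=n)
r = rng.normal(1.0, 0.2, size=n)
w = omega * r                                      # W^pi(s,a) = omega(s,a) r(s,a)

var_direct = np.mean((w - w.mean()) ** 2)          # E[(W - J)^2]
var_moments = np.mean(w ** 2) - np.mean(w) ** 2    # E[W^2] - E[W]^2
assert abs(var_direct - var_moments) < 1e-12
```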
Our key contribution is to consider the variance of marginalized IS, $\mathcal{V}_{\mathcal{D}}(\pi)$, as a risk constraint, and to show that constraining the offline policy optimization objective with this variance, combined with the Fenchel-Legendre transformation of the variance, avoids the well-known double sampling issue in variance risk constrained RL (for more details on how to compute the gradient of the variance term, see appendix \ref{app-sec:variance_episodic_per_step}). We emphasize that the variance here is solely based on returns with occupancy measures, and we do not consider the variance due to the inherent stochasticity of the MDP dynamics.
\vspace{-0.4cm}
\subsection{Variance Regularized Offline Max-Return Objective}
\label{sec:var_reg_max_return}
We consider the variance regularized off-policy max return objective with stationary distribution corrections $\omega_{\pi/\mathcal{D}}$ (which we denote $\omega$ for short for clarity) in the offline fixed dataset $\mathcal{D}$ setting:
\begin{equation}
\label{eq:overall_objective_regularized}
\max_{\pi_{\theta}} J(\pi_{\theta}) := \mathbb{E}_{s \sim \mathcal{D}} \Big[ Q^{\pi_{\theta}}(s, \pi_{\theta}(s)) \Big] - \lambda \mathcal{V}_{\mathcal{D}}(\omega, \pi_{\theta})
\end{equation}
where $\lambda \geq 0$ allows for the trade-off between offline policy optimization and variance regularization (or equivalently variance risk minimization). The max-return objective under $Q^{\pi_{\theta}}(s,a)$ has been considered in prior works in offline policy optimization \citep{BCQ, BEAR}. We show that this form of regularizer encourages variance minimization in offline policy optimization, especially when there is a large data distribution mismatch between the fixed dataset $\mathcal{D}$ and induced data distribution under policy $\pi_{\theta}$.
\subsection{Variance Regularization via Fenchel Duality}
At first glance, equation \ref{eq:overall_objective_regularized} seems difficult to optimize, especially when minimizing the variance regularizer w.r.t $\theta$. This is because the gradient of $\mathcal{V}(\omega, \pi_{\theta})$ leads to the double sampling issue, since the variance contains the square of an expectation term. The key contribution of \texttt{OVR}\xspace is to use the Fenchel duality trick on the second term of the variance expression in equation \ref{eq:variance_marginalized_IS}, in order to regularize the policy optimization objective with the variance of marginalized importance sampling. Applying the Fenchel duality identity $x^{2} = \max_{y} (2xy - y^{2})$ to the second term of the variance expression, we can transform the variance minimization problem into an equivalent maximization problem by introducing dual variables $\nu(s,a)$. The Fenchel conjugate form of the variance term is:
\begin{equation}
\begin{split}
\mathcal{V}( \omega, \pi_{\theta}) &= \max_{\nu} \quad \Big \{ - \frac{1}{2}\nu(s,a)^{2} + \nu(s,a) \omega(s,a) r(s,a) + \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega(s,a) r(s,a)^{2} \Big] \Big \} \\
&= \max_{\nu} \quad \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ - \frac{1}{2} \nu(s,a)^{2} + \nu(s,a) \omega(s,a) r(s,a) + \omega(s,a) r(s,a)^{2} \Big]
\end{split}
\end{equation}
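The identity $x^{2} = \max_{y}(2xy - y^{2})$ underlying this transformation can be verified numerically on a grid (a toy check with an arbitrary choice of $x$; the maximum is attained at $y = x$):

```python
import numpy as np

x = 1.7
ys = np.linspace(-10, 10, 200001)   # grid with step 1e-4
vals = 2 * x * ys - ys ** 2
# the maximizer is y = x, and the maximum value equals x^2
assert abs(vals.max() - x ** 2) < 1e-6
assert abs(ys[vals.argmax()] - x) < 1e-3
```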
Regularizing the policy optimization objective with variance under the Fenchel transformation, we therefore have the overall max-min optimization objective, explicitly written as :
\begin{equation}
\label{eq:overall_objective_variance_duality}
\max_{\theta} \min_{\nu} J(\pi_{\theta}, \nu) := \mathbb{E}_{s \sim \mathcal{D}} \Big[ Q^{\pi_{\theta}}(s, \pi_{\theta}(s)) \Big] - \lambda \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \Big( - \frac{1}{2} \nu^{2} + \nu \cdot \omega \cdot r + \omega \cdot r^{2} \Big) (s,a) \Big]
\end{equation}
\subsection{Augmented Reward Objective with Variance Regularization}
\vspace{-0.3em}
In this section, we explain the key steps that lead to the policy improvement step being an augmented variance regularized reward objective. The variance minimization step involves estimating the stationary distribution ratio \citep{DualDICE}, and then simply computing the closed form solution for the dual variables. Fixing the dual variables $\nu$ to update $\pi_{\theta}$, note that this leads to a standard maximum return objective in the dual form, which can be equivalently solved in the primal form using augmented rewards. This is because we can write the objective above in the dual form as:
\begin{align}
\label{eq:overall_objective_variance_duality2}
& J(\pi_{\theta}, \nu, \omega) := \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega(s,a) \cdot r(s,a) - \lambda \Big( - \frac{1}{2} \nu^{2} + \nu \cdot \omega \cdot r + \omega \cdot r^{2} \Big) (s,a) \Big] \notag \\
& = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega(s,a) \cdot \Big( r - \lambda \cdot \nu \cdot r - \lambda \cdot r^{2} \Big)(s,a) + \frac{\lambda}{2} \nu(s,a)^{2} \Big] \notag \\
& = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega(s,a) \cdot \Tilde{r}(s,a) + \frac{\lambda}{2} \nu(s,a)^{2} \Big]
\end{align}
where we denote the augmented rewards as :
\begin{equation}
\label{eq:augmented_rewards}
\Tilde{r}(s,a) \equiv [ r - \lambda \cdot \nu \cdot r - \lambda \cdot r^{2} ](s,a)
\end{equation}
The policy improvement step can either be achieved by directly solving equation \ref{eq:overall_objective_variance_duality2}, or by considering the primal form of the objective with respect to $Q^{\pi_{\theta}}(s,\pi_{\theta})$, as in \citep{BCQ,BEAR}. However, solving equation \ref{eq:overall_objective_variance_duality2} directly can be troublesome, since the policy gradient step also involves finding the gradient w.r.t $\omega(s,a) = \frac{d_{\pi_{\theta}}(s,a)}{d_{\mathcal{D}}(s,a)}$, where the distribution ratio depends on $d_{\pi_{\theta}}(s,a)$. This means that the gradient w.r.t $\theta$ would require the gradient w.r.t the normalized discounted occupancy measure, i.e., $\nabla_{\theta} d_{\pi_{\theta}}(s)$. It is therefore easier to consider the augmented reward objective, using $\Tilde{r}(s,a)$ as in equation \ref{eq:augmented_rewards}, in \textit{any} existing offline policy optimization algorithm, where we have the variance regularized value function $\Tilde{Q}^{\pi_{\theta}}(s,a)$.
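The augmented reward in equation \ref{eq:augmented_rewards} is a one-line transformation; a direct transcription (with arbitrary illustrative input values) is:

```python
def augmented_reward(r, nu, lam):
    """r_tilde = r - lam * nu * r - lam * r^2, the variance-augmented reward."""
    return r - lam * nu * r - lam * r ** 2

# with lam = 0 the regularizer vanishes and r_tilde == r
assert augmented_reward(2.0, 0.7, 0.0) == 2.0
# with lam > 0 large rewards are penalized, e.g. r=1, nu=0.5, lam=0.1
print(augmented_reward(1.0, 0.5, 0.1))
```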
Note that, as highlighted in \citet{sobel}, the variance of returns follows a Bellman-like equation. Following this, \citet{BisiRiskAverse} also pointed to a Bellman-like solution for the variance w.r.t occupancy measures. Considering the variance of the form in equation \ref{eq:variance_marginalized_IS}, we can write the variance recursively as a Bellman equation:
\begin{equation}
\mathcal{V}_{\mathcal{D}}^{\pi}(s,a) = \Big( r(s,a) - J(\pi) \Big)^{2} + \gamma \mathbb{E}_{s' \sim \mathcal{P}, a' \sim \pi'(\cdot | s')} \Big[ \mathcal{V}_{\mathcal{D}}^{\pi}(s', a') \Big]
\end{equation}
Since in our objective, we augment the policy improvement step with the variance regularization term, we can write the augmented value function as $ Q^{\pi}_{\lambda}(s,a) := Q^{\pi}(s,a) - \lambda \mathcal{V}^{\pi}_{\mathcal{D}}(s,a)$. This suggests we can modify existing policy optimization algorithms with augmented rewards on value function.
\textit{Remark : } Applying the Fenchel transformation to the variance regularized objective, at first glance, seems to make the augmented rewards dependent on the policy itself, since $\Tilde{r}(s,a)$ depends on the dual variables $\nu(s,a)$. This could make the rewards non-stationary, in which case the policy maximization step could not be solved directly via the maximum return objective. However, as we discuss next, the dual variables for minimizing the variance term have a closed form solution $\nu(s,a)$, and thereby do not lead to any non-stationarity in the rewards, due to the alternating minimization and maximization steps.
\textbf{Variance Minimization Step : } Fixing the policy $\pi_{\theta}$, the dual variables $\nu$ can be obtained in closed form as $\nu(s,a) = \omega(s,a) \cdot r(s,a)$. Note that directly optimizing for the target policies using batch data requires a fixed point estimate of the stationary distribution corrections, which can be achieved using existing algorithms \citep{DualDICE, BreakingHorizon}. Solving the optimization objective additionally requires estimating the state-action distribution ratio, $\omega(s,a) = \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)}$. Recently, several works have proposed estimating the stationary distribution ratio, mostly for the off-policy evaluation case in the infinite horizon setting \citep{GenDICE, MWL_Uehara}. We include a detailed discussion of this in appendix \ref{app:sec-estimating_dist_ratio}.
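The closed-form minimizer can be checked against the dual objective in equation \ref{eq:overall_objective_variance_duality}: the $\nu$-dependent part is $\lambda(\frac{1}{2}\nu^{2} - \nu\,\omega\, r)$, which is minimized at $\nu = \omega r$. A grid-search sketch (with arbitrary constants chosen for illustration):

```python
import numpy as np

lam, omega, r = 0.5, 1.3, 2.0
nus = np.linspace(-10, 10, 200001)          # grid with step 1e-4
# nu-dependent part of the dual objective: lam * (0.5 nu^2 - nu * omega * r)
dual = lam * (0.5 * nus ** 2 - nus * omega * r)
nu_star = nus[dual.argmin()]
assert abs(nu_star - omega * r) < 1e-3      # grid minimizer matches omega * r
```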
\textbf{Algorithm : } Our proposed variance regularization approach with returns under stationary distribution corrections for offline optimization can be built on top of any existing batch off-policy optimization algorithm. We summarize our contributions in Algorithm \ref{algorithm:offline_variance_regularizer}. Implementing our algorithm requires estimating the state-action distribution ratio, followed by the closed form estimate of the dual variable $\nu$. The augmented stationary reward with the dual variables can then be used to compute the regularized value function $Q_{\lambda}^{\pi}(s,a)$. The policy improvement step involves maximizing the variance regularized value function, e.g., with BCQ \citep{BCQ}.
\begin{algorithm}[h!]
\caption{\texttt{Offline Variance Regularizer}\xspace}
\label{algorithm:offline_variance_regularizer}
\begin{algorithmic}
\STATE Initialize critic $Q_{\phi}$, policy $\pi_\theta$, network $\omega_{\psi}$ and regularization weighting $\lambda$; learning rate $\eta$
\FOR{$t=1$ {\bfseries to} $T$}
\STATE Estimate distribution ratio $\omega_{\psi}(s,a)$ using any existing DICE algorithm
\STATE Estimate the dual variable $\nu(s,a) = \omega_{\psi}(s,a)\cdot r(s,a)$
\STATE Calculate augmented rewards $\tilde{r}(s,a)$ using equation~\ref{eq:augmented_rewards}
\STATE Policy improvement step using \textit{any} offline policy optimization algorithm with augmented rewards $\tilde{r}(s,a)$ :
\mbox{$\theta_t = \theta_{t-1} + \eta \nabla_{\theta} J(\theta, \phi, \psi, \nu) $}
\ENDFOR
\end{algorithmic}
\end{algorithm}
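To make the data flow of Algorithm \ref{algorithm:offline_variance_regularizer} concrete, the following is a minimal tabular sketch on synthetic batch data. The DICE-style ratio estimate is replaced by a constant placeholder and the offline policy optimization step by a plain fitted-Q update, so this only illustrates the loop structure, not BCQ itself:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, lam, gamma, eta = 5, 2, 0.1, 0.9, 0.5

# toy batch of (s, a, r, s') tuples; omega is a hypothetical stand-in
# for a DICE-style estimate of d_pi(s,a) / d_D(s,a)
s = rng.integers(nS, size=256)
a = rng.integers(nA, size=256)
r = rng.normal(1.0, 0.5, size=256)
s2 = rng.integers(nS, size=256)
omega = np.ones(256)

Q = np.zeros((nS, nA))
for _ in range(200):
    nu = omega * r                              # closed-form dual variable
    r_tilde = r - lam * nu * r - lam * r ** 2   # augmented reward
    target = r_tilde + gamma * Q[s2].max(axis=1)
    td = target - Q[s, a]
    np.add.at(Q, (s, a), eta * td / len(s))     # averaged TD update on the batch
```

In practice the last step would be replaced by the policy improvement step of the chosen offline algorithm, with $\tilde{r}$ substituted for $r$.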
\section{Theoretical Analysis}\vspace{-0.3cm}
\label{sec:theory_analysis}
In this section, we provide a theoretical analysis of offline policy optimization algorithms in terms of policy improvement guarantees under a fixed dataset $\mathcal{D}$. We then demonstrate that using the variance regularizer leads to a lower bound on our policy optimization objective, which yields a pessimistic exploitation approach for offline algorithms.
\subsection{Variance of Marginalized Importance Sampling and Importance Sampling}
We first show in lemma \ref{lemma:variance_upper_bound} that the variance of episodic returns with importance sampling corrections can be upper bounded in terms of the variance of per-step rewards under stationary distribution corrections. We emphasize that in the off-policy setting under distribution corrections, the variance is due to the estimation of the density ratio, rather than the importance sampling corrections.
\begin{lemma} \label{lemma:variance_upper_bound} The following inequality holds between the variance of per-step rewards under stationary distribution corrections, denoted by $\mathcal{V}_{\mathcal{D}}(\pi)$ and the variance of episodic returns with importance sampling corrections $\mathcal{V}_{\mathcal{P}}(\pi)$
\begin{equation}
\mathcal{V}_{\mathcal{P}}(\pi) \leq \frac{\mathcal{V}_{\mathcal{D}}(\pi)}{(1-\gamma)^{2}}
\end{equation}
\end{lemma}
The proof for this and discussions on the variance of episodic returns compared to per-step rewards under occupancy measures is provided in the appendix \ref{app-lem:upper_bound_variance}.
\subsection{Policy Improvement Bound under Variance Regularization}
In this section, we establish performance improvement guarantees \citep{CPI} for the variance regularized value function in policy optimization.
Let us first recall that the performance improvement can be written in terms of the total variation $\mathcal{D}_{\text{TV}}$ divergence between state distributions \citep{ppo_dice} (for more discussions on the performance bounds, see appendix \ref{app:sec_monotonic_improvement})
\begin{lemma} \label{lem:per_improvement_state_dist} For all policies $\pi'$ and $\pi$, we have the performance improvement bound based on the total variation of the state-action distributions $d_{\pi'}$ and $d_{\pi}$
\begin{equation}
J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})
\end{equation}
\end{lemma}
where $\epsilon^{\pi} = \max_{s} | \mathbb{E}_{a \sim \pi'(\cdot \mid s)}[ A^{\pi}(s,a) ] |$, and $\mathcal{L}_{\pi}(\pi') = J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [A^{\pi}(s,a)]$. For detailed proof and discussions, see appendix \ref{app:sec_monotonic_improvement}. Instead of considering the divergence between state visitation distributions alone, suppose we also have access to state-action samples generated from the environment. To avoid importance sampling corrections, we can further consider a bound on the objective based on state-action visitation distributions, using the upper bound from \citep{RatioMatching}: $ D_{\text{TV}}(d_{\pi'}(s) || d_{\pi} (s)) \leq D_{\text{TV}}( d_{\pi'}(s,a) || d_{\pi}(s,a) )$. Following Pinsker's inequality, we have:
\begin{equation}
J(\pi') \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}(s), a \sim \pi'(\cdot | s)} \Big[ A^{\pi}(s,a) \Big] - \epsilon^{\pi} \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ \sqrt{ \mathcal{D}_{\text{KL}}(d_{\pi'}(s,a) || d_{\pi}(s,a) ) } \Big]
\end{equation}
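Pinsker's inequality used in this step, $\mathcal{D}_{\text{TV}}(p, q) \leq \sqrt{\frac{1}{2}\mathcal{D}_{\text{KL}}(p || q)}$, can be checked on a small discrete example (the two distributions below are arbitrary):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
tv = 0.5 * np.abs(p - q).sum()       # total variation distance
kl = np.sum(p * np.log(p / q))       # KL divergence D_KL(p || q)
# Pinsker's inequality: D_TV(p, q) <= sqrt(0.5 * D_KL(p || q))
assert tv <= np.sqrt(0.5 * kl)
```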
Furthermore, we can exploit the relation between KL, total variation (TV) and variance through the variational representation of divergence measures. Recall that the total variation divergence between distributions $p$ and $q$ is given by $\mathcal{D}_{\text{TV}}(p, q) = \frac{1}{2} \sum_{x} | p(x) - q(x) |$. Denoting $d_{\pi}(s,a) = \beta_{\pi}(s,a)$, the variational representation gives
\begin{equation}
D_{\text{TV}}( \beta_{\pi'} || \beta_{\pi} ) = \text{sup}_{f : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ \mathbb{E}_{(s,a) \sim \beta_{\pi'}} [ f(s,a) ] - \mathbb{E}_{(s,a) \sim \beta_{\pi}(s,a)} [ \phi^{*} \circ f(s,a) ] \Big]
\end{equation}
where $\phi^{*}$ is the convex conjugate of $\phi$ and $f$ is the dual function class based on the variational representation of the divergence.
Similar relations with the variational representations of f-divergences have also been considered in \citep{AlgaeDICE, ppo_dice}. We can finally obtain a bound for the policy improvement following this relation, in terms of the per-step variance:
\begin{theorem} \label{thm:policy_improvement_variance_bound} For all policies $\pi$ and $\pi'$, and the corresponding state-action visitation distributions $d_{\pi'}$ and $d_{\pi}$, we can obtain the performance improvement bound in terms of the variance of rewards under state-action occupancy measures.
\begin{equation}
J(\pi') - J(\pi) \geq \mathbb{E}_{s \sim d_{\pi}(s), a\sim \pi'(a|s)} [ A^{\pi}(s,a) ] - \text{Var}_{(s,a) \sim d_{\pi}(s,a)} \Big[ f(s,a) \Big]
\end{equation}
where $f(s,a)$ is the dual function class from the variational representation of variance.
\end{theorem}
\begin{proof} For detailed proof, see appendix \ref{app:thm:policy_improvement_variance_bound}.
\end{proof}
\subsection{Lower Bound Objective with Variance Regularization}
In this section, we show that augmenting the policy optimization objective with a variance regularizer leads to a lower bound on the original optimization objective $J(\pi_{\theta})$. Following \citep{POIS}, we first note that the variance of marginalized importance weighting with distribution corrections can be written in terms of the $\alpha$-Renyi divergence. Let $p$ and $q$ be two probability measures, with Renyi divergence $\mathcal{F}_{\alpha}(p||q) = \frac{1}{\alpha - 1} \log \sum_{x} q(x) \Big( \frac{p(x)}{q(x)} \Big)^{\alpha}$. When $\alpha \rightarrow 1$, this recovers the well-known KL divergence, $\mathcal{F}_{1}(p||q) = \mathcal{F}_{\text{KL}}(p||q)$.
Let us denote the state-action occupancy measures under $\pi$ and dataset $\mathcal{D}$ as $d_{\pi}$ and $d_{\mathcal{D}}$. The variance of state-action distribution ratios is $\text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) ]$. When $\alpha=2$ for the Renyi divergence, we have :
\begin{equation}
\text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a)] = \exp \big( \mathcal{F}_{2}( d_{\pi} || d_{\mathcal{D}} ) \big) - 1
\end{equation}
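This identity holds for the exponentiated form of the 2-Renyi divergence, which for discrete measures is $\sum_x p(x)^{2}/q(x)$; a quick numerical check on an arbitrary three-point example:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
w = p / q                                          # density ratio omega
var_w = np.sum(q * w ** 2) - np.sum(q * w) ** 2    # Var_q[omega]; note E_q[omega] = 1
renyi2 = np.sum(q * w ** 2)                        # exponentiated 2-Renyi divergence
assert abs(var_w - (renyi2 - 1)) < 1e-12
```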
Following from \citep{POIS}, and extending results from importance sampling $\rho$ to marginalized importance sampling $\omega_{\pi/\mathcal{D}}$, we provide the following result that bounds the variance of the approximated density ratio $\hat{\omega}_{\pi/\mathcal{D}}$ in terms of the Renyi divergence :
\begin{lemma} \label{lemma:lower_bound_lemma} Assuming that the rewards of the MDP are bounded by a finite constant, $||r||_{\infty} \leq R_{\text{max}}$. Given random variable samples $(s,a) \sim d_{\mathcal{D}}(s,a)$ from dataset $\mathcal{D}$, for any $N > 0$, the variance of marginalized importance weighting can be upper bounded as :
\begin{equation}
\text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \hat{\omega}_{\pi/\mathcal{D}}(s,a) \Big] \leq \frac{1}{N} || r ||_{\infty}^{2} \mathcal{F}_{2} ( d_{\pi} || d_{\mathcal{D}} )
\end{equation}
\end{lemma}
See appendix \ref{app:sec-lower_bound_lemma} for more details.
Following this, our goal is to derive a lower bound objective for our off-policy optimization problem. Concentration inequalities have previously been studied for both off-policy evaluation \citep{HCOPE} and optimization \citep{Thomas_Improvement}. In our case, we can adapt the concentration bound derived from Cantelli's inequality and obtain the following result based on the variance of marginalized importance sampling. Under state-action distribution corrections, we have the following lower bound to the off-policy policy optimization objective with stationary state-action distribution corrections
\begin{theorem} \label{thm:pdl_variance_bounds} Given state-action occupancy measures $d_{\pi}$ and $d_{\mathcal{D}}$, and assuming bounded reward functions, for any $0 < \delta \leq 1$ and $N > 0$, we have with probability at least $1 - \delta$ that :
\begin{equation}
\label{eq:lower_bound_obj_theorem}
J(\pi) \geq \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) \Big] - \sqrt{ \frac{1 - \delta}{\delta} \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) ] }
\end{equation}
\end{theorem}
Equation \ref{eq:lower_bound_obj_theorem} gives a lower bound on the policy optimization objective under risk-sensitive variance constraints. Theorem \ref{thm:pdl_variance_bounds} shows that, given off-policy batch data collected with behaviour policy $\mu(a|s)$, we are indeed optimizing a lower bound to the policy optimization objective, regularized with a variance term that minimizes the variance in batch off-policy learning.
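The Cantelli-style argument behind theorem \ref{thm:pdl_variance_bounds} can be illustrated empirically: for a random variable $X$ with mean $\mu$ and standard deviation $\sigma$, the bound $\mu \geq X - \sigma\sqrt{(1-\delta)/\delta}$ holds with probability at least $1-\delta$. A Monte Carlo sketch, with Gaussian draws standing in for samples of $\omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a)$:

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 0.1
mu_true, sigma = 1.0, 0.5
x = rng.normal(mu_true, sigma, size=100000)   # surrogate samples of omega * r
slack = np.sqrt((1 - delta) / delta) * sigma
# Cantelli: P(X >= mu + slack) <= delta, i.e. mu >= X - slack w.p. >= 1 - delta
coverage = np.mean(mu_true >= x - slack)
assert coverage >= 1 - delta
```

For well-concentrated distributions the empirical coverage is typically far above the $1-\delta$ guarantee, reflecting the conservativeness of the bound.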
\section{Experimental Results on Benchmark Offline Control Tasks}
\vspace{-0.2em}
\label{sec:exp_settings}
\textit{Experimental Setup :} We demonstrate the significance of variance regularizer on a range of continuous control domains ~\citep{todorov2012mujoco} based on fixed offline datasets from \citep{D4RL}, which is a standard benchmark for offline algorithms.
To demonstrate the significance of our variance regularizer \texttt{OVR}\xspace, we mainly use it on top of the BCQ algorithm and compare it with other existing baselines, using the benchmark D4RL \citep{D4RL} offline datasets for different tasks and off-policy distributions. Experimental results are given in table \ref{tab:my_label}.
\begin{table}[t!]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l||c | c| c| c| c| c|}
\hline
\textbf{Domain} & \textbf{Task Name} & \textbf{BCQ+\texttt{OVR}\xspace} & \textbf{BCQ} & \textbf{BEAR} & \textbf{BRAC-p} & \textbf{aDICE} & \textbf{SAC-off} \\
\hline \hline
\multirow{15}{*}{Gym}
& halfcheetah-random & 0.00 & 0.00 & 25.1 & 24.1 & -0.3 & \textbf{30.5}\\
& hopper-random & 9.51 & 9.65 & \textbf{11.4} & 11 & 0.9 & 11.3\\
& walker-random & 5.16 & 0.48 & \textbf{7.3} & -0.2 & 0.5 & 4.1\\
& halfcheetah-medium & 35.6 & 34.9 & 41.7 & \textbf{43.8} & -2.2 & -4.3\\
& hopper-medium & \textbf{71.24} & 57.76 & 52.1 & 32.7 & 1.2 & 0.9\\
& walker-medium & 33.90 & 27.13 & 59.1 & \textbf{77.5} & 0.3 & 0.8\\
& halfcheetah-expert & \textbf{100.02} & 97.99 & - & - & - & -\\
& hopper-expert & \textbf{108.41} & 98.36 & - & - & - & -\\
& walker-expert & 71.77 & \textbf{72.93} & - & -& - & -\\
& halfcheetah-medium-expert & \textbf{59.52} & 54.12 & 53.4 & 44.2 & -0.8 & 1.8\\
& hopper-medium-expert & 44.68 & 37.20 & \textbf{96.3} & 1.9 & 1.1 & -0.1\\
& walker-medium-expert & 34.53 & 29.00 & 40.1 & \textbf{76.9} & 0.4 & 1.6\\
& halfcheetah-mixed & \textbf{29.95} & 29.91 & - & - & - & -\\
& hopper-mixed & \textbf{16.36} & 10.88 & - & - & - & - \\
& walker-mixed & \textbf{14.74} & 10.23 & - & - & - & - \\
\hline
\multirow{3}{*}{FrankaKitchen}
& kitchen-complete & 4.48 & 3.38 & 0 & 0 & 0 & \textbf{15}\\
& kitchen-partial & \textbf{25.65} & 19.11 & 13.1 & 0 & 0 & 0\\
& kitchen-mixed & 30.59 & 23.55 & \textbf{47.2} & 0 & 2.5 & 2.5\\
\hline
\end{tabular}}
\caption{ The results on D4RL tasks compare BCQ \citep{BCQ} with and without \texttt{OVR}\xspace, bootstrapping error accumulation reduction (BEAR) \citep{BEAR}, behavior-regularized actor critic with policy regularization (BRAC-p) \citep{BRAC}, AlgaeDICE (aDICE) \citep{AlgaeDICE} and offline SAC (SAC-off) \citep{SAC}.
The results presented are the normalized returns on each task, as per \citet{D4RL} (see Table 3 in \citet{D4RL} for the unnormalized scores on each task).
We see that on most tasks we achieve significant gains using \texttt{OVR}\xspace.
Our algorithm can be applied to any policy optimization baseline that trains the policy by maximizing the expected rewards.
Unlike BCQ, BEAR \citep{BEAR} does not share this objective, as it trains the policy using an MMD objective.}
\label{tab:my_label}
\end{table}
\textbf{Performance on Optimal and Medium Quality Datasets:} We first evaluate the performance of \texttt{OVR}\xspace when the dataset is collected by an optimal or a mediocre logging policy. We collected the dataset using a fully (\textit{expert}) or partially (\textit{medium}) trained SAC policy. We build our algorithm \texttt{OVR}\xspace on top of BCQ, denoted by $\text{BCQ + VAR}$. Note that the \texttt{OVR}\xspace algorithm can also be agnostic to the behaviour policy when computing the distribution ratio \citep{DualDICE} and the variance. We observe that while performance improves only marginally with \texttt{OVR}\xspace in the expert setting, since the demonstrations are themselves optimal, we achieve significant improvements in the medium-dataset regime. This is because \texttt{OVR}\xspace plays a more important role when there is larger variance due to the mismatch between the data logging and target policy distributions. Experimental results are shown in the first two columns of Figure~\ref{fig:mainfig_no_noise}.
\textbf{Performance on Random and Mixed Datasets:} We then evaluate performance on \textit{random} datasets, i.e., the worst-case setup where the data logging policy is a random policy, as shown in the last two columns of Figure~\ref{fig:mainfig_no_noise}. As expected, we observe no improvements at all, and even existing baselines such as BCQ \citep{BCQ} can perform poorly in the random-dataset setting. When we collect data using a mixture of random and mediocre policies, denoted \textit{mixed}, \texttt{OVR}\xspace on top of BCQ again improves performance, especially in the Hopper and Walker control domains. We provide additional experimental results and ablation studies in Appendix~\ref{app:sec_experiment_ablation_studies}.
\begin{figure*}[h]
\centering
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/cheetah_expert.png}
\caption{Cheetah Expert}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/cheetah_medium.png}
\caption{Cheetah Medium}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/cheetah_random.png}
\caption{Cheetah Random}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/cheetah_mixed.png}
\caption{Cheetah Mixed}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_expert.png}
\caption{Hopper Expert}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_medium.png}
\caption{Hopper Medium}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_random.png}
\caption{Hopper Random}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_mixed.png}
\caption{Hopper Mixed}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_expert.png}
\caption{Walker Expert}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_medium.png}
\caption{Walker Medium}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_random.png}
\caption{Walker Random}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{plots/walker_mixed.png}
\caption{Walker Mixed}
\label{fig:reachidea}
\end{subfigure}%
\caption{Evaluation of the proposed approach and the baseline BCQ \citep{BCQ} on a suite of three OpenAI Gym environments. Details about the types of offline dataset used for training, namely \textit{random}, \textit{medium}, \textit{mixed}, and \textit{expert}, are included in the Appendix. Results are averaged over 5 random seeds \citep{RLRepro}. We evaluate the agent using standard procedures, as in \citep{BEAR, BCQ}.}
\label{fig:mainfig_no_noise}
\vspace*{-0.2cm}
\end{figure*}
\section{Related Work}\vspace*{-0.3cm}
We now discuss related work on offline RL, for both evaluation and optimization, and its relation to variance-based and risk-sensitive algorithms. We include further discussion of related work in Appendix~\ref{app:sec-related_works}. In off-policy evaluation, per-step importance sampling \citep{ImportanceSamplingDoina, off_policy_TD} has previously been used to construct off-policy value function estimators. However, it leads to high-variance estimators, and recent works instead use marginalized importance sampling, estimating stationary state-action distribution ratios \citep{BreakingHorizon, DualDICE, GenACE}, which reduces variance at the cost of additional bias. In this work, we build on the variance of marginalized IS to develop a variance risk-sensitive offline policy optimization algorithm. This is in contrast to prior work on variance-constrained online actor-critic methods \citep{PrashanthVarianceAC, ChowRiskSensitive, Variance_Actor_Critic} and relates to constrained policy optimization methods \citep{AchiamCPO, RewardConstrainedPO}.
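The variance compounding that motivates marginalized IS can be seen in a small simulation. The log-normal per-step ratios below are purely illustrative stand-ins for $\pi(a \mid s)/\mu(a \mid s)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-step importance sampling multiplies one ratio per timestep, so the
# variance of the trajectory weight compounds with the horizon H; a
# marginalized (stationary state-action) ratio is a single weight and
# does not compound. Log-normal per-step ratios are an illustrative choice.
per_step_ratios = rng.lognormal(mean=0.0, sigma=0.3, size=(20_000, 50))

variances = {h: np.prod(per_step_ratios[:, :h], axis=1).var() for h in (1, 10, 50)}
assert variances[1] < variances[10] < variances[50]
```

The trajectory-weight variance grows rapidly with horizon, which is exactly the pathology marginalized ratios avoid.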
For offline policy optimization, several works have recently addressed the overestimation problem in batch RL \citep{BCQ, BEAR, behaviour_regularized}, including the recently proposed Conservative Q-Learning (CQL) algorithm \citep{CQL}. Our work was developed in parallel to CQL, which is why we do not include it as a baseline in our experiments. CQL learns a value function that is guaranteed to lower-bound the true value function, which helps prevent value overestimation for out-of-distribution (OOD) actions, an important issue in offline RL. We note that our approach is orthogonal to CQL: CQL introduces a regularizer on the state-action value function $Q^\pi(s,a)$ based on the Bellman error (the first two terms in equation 2 of CQL), while we introduce a variance regularizer on the stationary state distribution $d_\pi(s)$. Since the value of a policy can be expressed either through $Q^\pi(s,a)$ or through occupancy measures $d_\pi(s)$, both CQL and our paper are essentially motivated by the same objective of optimizing a lower bound on $J(\theta)$, but through different regularizers. Our work is also related to AlgaeDICE \citep{AlgaeDICE}: we introduce a variance regularizer based on the distribution corrections, whereas AlgaeDICE minimizes an f-divergence between stationary distributions. Both works consider the dual form of the policy optimization objective in the batch setting; where we apply the Fenchel duality trick to our variance term, AlgaeDICE instead uses the variational form followed by the change-of-variables trick, inspired by \citet{DualDICE}, to handle its divergence measure.
\section{Discussion and Conclusion}\vspace*{-0.2cm}
We proposed a new framework for offline policy optimization with variance regularization called \texttt{OVR}\xspace, to tackle high variance issues due to distribution mismatch in offline policy optimization. Our work provides a practically feasible variance constrained actor-critic algorithm that avoids double sampling issues in prior variance risk sensitive algorithms \citep{Variance_Actor_Critic, PrashanthVarianceAC}. The presented variance regularizer leads to a lower bound to the true offline optimization objective, thus leading to pessimistic value function estimates, avoiding both high variance and overestimation problems in offline RL. Experimentally, we evaluate the significance of \texttt{OVR}\xspace on standard benchmark offline datasets, with different data logging off-policy distributions, and show that \texttt{OVR}\xspace plays a more significant role when there is large variance due to distribution mismatch.
\section{Appendix: Additional Experimental Results}
\label{safety_appendix}
\subsection{Experimental Ablation Studies}
\label{app:sec_experiment_ablation_studies}
In this section, we present additional results using state-action experience replay weightings on existing offline algorithms, and analyse the significance of our variance regularizer on likelihood-corrected offline algorithms. Denoting by $\omega(s,a)$ the importance weighting of state-action occupancy measures based on samples in the experience replay buffer, we can modify existing offline algorithms to account for state-action distribution ratios.
The ablation results on the Hopper control benchmark are summarized in Figure~\ref{fig:ablation}. The same base BCQ algorithm \citep{BCQ} is used with a modified objective, where the results for applying off-policy importance weights are denoted ``BCQ+I.W.''. We employ the same technique to obtain $\omega(s,a)$ both for this baseline and for adding variance regularization as described. The results suggest that adding the proposed per-step variance regularization scheme significantly outperforms merely importance-weighting the expected rewards for off-policy policy learning.
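A minimal sketch of the two objectives being compared, with synthetic Q-values and weights standing in for the learned quantities (all names and magnitudes here are assumptions, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch of Q-value estimates and distribution-correction
# weights omega(s, a); both are synthetic stand-ins for learned quantities.
q_values = rng.normal(loc=1.0, scale=0.5, size=256)
omega = rng.uniform(0.5, 2.0, size=256)   # estimates of d_pi(s,a) / d_D(s,a)

def iw_policy_objective(q, w):
    """Plain importance-weighted objective ("BCQ+I.W." in the ablation)."""
    return np.mean(w * q)

def ovr_policy_objective(q, w, lam=0.1):
    """Importance-weighted objective minus a variance penalty (OVR-style)."""
    weighted = w * q
    return np.mean(weighted) - lam * np.var(weighted)

base = iw_policy_objective(q_values, omega)
regularized = ovr_policy_objective(q_values, omega)
assert regularized <= base   # the variance penalty only lowers the objective
```

Maximizing the regularized quantity therefore maximizes a lower bound on the importance-weighted return, which is the pessimism the ablation probes.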
\begin{figure*}[h!]
\centering
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_expert_ablation.png}
\caption{Hopper Expert ablation}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_medium_ablation.png}
\caption{Hopper Medium ablation}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_random_ablation.png}
\caption{Hopper Random ablation}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_mixed_ablation.png}
\caption{Hopper Mixed ablation}
\label{fig:reachidea}
\end{subfigure}%
\caption{Ablation performed on Hopper. The mean and standard deviation are reported over 5 random seeds. The offline datasets for these experiments are the same as the corresponding ones in Figure~\ref{fig:mainfig_no_noise}.}
\label{fig:ablation}
\vspace*{-0.2cm}
\end{figure*}
\begin{table}[t!]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l||c | c| c| c| c| c|}
\hline
\textbf{Domain} & \textbf{Task Name} & \textbf{BCQ+\texttt{OVR}\xspace} & \textbf{BCQ} & \textbf{BEAR} & \textbf{BRAC-p} & \textbf{aDICE} & \textbf{SAC-off} \\
\hline
\hline
\multirow{12}{*}{Adroit}
& pen-human & \textbf{64.12} & 56.58 & -1 & 8.1 & -3.3 & 6.3\\
& hammer-human & \textbf{1.05} & 0.75 & 0.3 & 0.3 & 0.3 & 0.5\\
& door-human & 0.00 & 0.00 & -0.3 & -0.3 & 0 & \textbf{3.9}\\
& relocate-human & -0.13 & \textbf{-0.08} & -0.3 & -0.3 & -0.1 & 0 \\
& pen-cloned & 40.84 & \textbf{41.09} & 26.5 & 1.6 & -2.9 & 23.5 \\
& hammer-cloned & \textbf{0.78} & 0.35 & 0.3 & 0.3 & 0.3 & 0.2\\
& door-cloned & \textbf{0.03} & \textbf{0.03} & -0.1 & -0.1 & 0 & 0\\
& relocate-cloned & -0.22 & -0.26 & -0.3 & -0.3 & -0.3 & \textbf{-0.2}\\
& pen-expert & \textbf{99.32} & 89.42 & \textbf{105.9} & -3.5 & -3.5 & 6.1\\
& hammer-expert & \textbf{119.32} & 108.38 & \textbf{127.3} & 0.3 & 0.3 & 25.2 \\
& door-expert & \textbf{100.39} & 101.33 & \textbf{103.4} & -0.3 & 0 & 7.5\\
& relocate-expert & 31.31 & 23.55 & \textbf{98.6} & -0.3 & -0.1 & -0.3 \\
\hline
\end{tabular}}
\caption{The results on D4RL tasks compare BCQ \citep{BCQ} with and without \texttt{OVR}\xspace, bootstrapping error reduction (BEAR) \citep{BEAR}, behavior-regularized actor critic with policy regularization (BRAC-p) \citep{BRAC}, AlgaeDICE (aDICE) \citep{AlgaeDICE} and offline SAC (SAC-off) \citep{SAC}.
The results presented are the normalized returns on the task as per \citet{D4RL} (see Table 3 in \citet{D4RL} for the unnormalized scores on each task).}
\label{tab:my_label__}
\end{table}
\subsection{Experimental Results in Corrupted Noise Settings}
We additionally consider a setting where the batch data is collected from a noisy environment, i.e., a setting with \textit{corrupted rewards},
$r\rightarrow r +\epsilon$, where $\epsilon\sim \mathcal{N}(0,1)$. Experimental results are presented in Figures~\ref{fig:mainfig_no_noise} and \ref{fig:mainfig_corrupted_noise}.
From our results, we note that using \texttt{OVR}\xspace on top of BCQ \citep{BCQ} achieves significantly better performance with variance minimization, especially when the agent is given sub-optimal demonstrations, namely in the \textit{medium} setting (dataset collected by a half-trained SAC policy) and the \textit{mixed} setting (data logging policy is a mixture of random and SAC policies). This also matters for practical scalability, since collecting data from an expert policy is often expensive. We add noise to the dataset to examine the significance of \texttt{OVR}\xspace under a noisy, corrupted dataset setting.
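The corruption itself is straightforward; a sketch on synthetic rewards (the uniform reward range is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical logged rewards; the corrupted-reward setting perturbs
# each reward with unit Gaussian noise: r -> r + eps, eps ~ N(0, 1).
rewards = rng.uniform(0.0, 1.0, size=1000)
corrupted = rewards + rng.normal(loc=0.0, scale=1.0, size=rewards.shape)

# The mean reward is roughly preserved, but per-step variance grows by ~1.
assert abs(corrupted.mean() - rewards.mean()) < 0.2
assert abs(corrupted.var() - rewards.var() - 1.0) < 0.3
```

Since the added noise inflates the per-step reward variance without shifting the mean, this setting isolates how well a variance regularizer copes with noisier value targets.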
\setlength\belowcaptionskip{-0.05ex}
\begin{figure*}[h]
\centering
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_expert_noise.png}
\caption{Hopper Expert w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_medium_noise.png}
\caption{Hopper Medium w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_random_noise.png}
\caption{Hopper Random w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/hopper_mixed_noise.png}
\caption{Hopper Mixed w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.9cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_expert_noise.png}
\caption{Walker Expert w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_medium_noise.png}
\caption{Walker Medium w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_random_noise.png}
\caption{Walker Random w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\hspace*{-0.4cm}
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{plots/walker_mixed_noise.png}
\caption{Walker Mixed w/ Noise}
\label{fig:reachidea}
\end{subfigure}%
\caption{Evaluation of the proposed approach and the baseline BCQ on a suite of three OpenAI Gym environments, in the setting where rewards are corrupted by Gaussian noise. Results for the uncorrupted version are in Figure~\ref{fig:mainfig_no_noise}. Experimental results are averaged over 5 random seeds.}
\label{fig:mainfig_corrupted_noise}
\end{figure*}
\subsection{Experimental Results on Safety Benchmark Tasks}
\textbf{Safety Benchmarks for Variance as Risk:} We additionally consider safety benchmarks for control tasks, to analyse the significance of the variance regularizer as a risk constraint in offline policy optimization algorithms. Our results are summarized in Table~\ref{table:safety_results}.
\begin{table}[t]
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption{Results on the Safety-Gym environments \citep{safetygym}. We report the mean and S.D. of episodic returns and costs over five random seeds and 1 million timesteps. The goal of the agent is to maximize the episodic return, while minimizing the cost incurred.}
\label{table:safety_results}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\begin{tabular}{l|c c | c c }
\toprule
& \multicolumn{2}{c|}{\textbf{PointGoal1}} & \multicolumn{2}{c}{\textbf{PointGoal2}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& Reward & Cost & Reward & Cost \\
\midrule
\textbf{BCQ} & 43.1 $\pm$ 0.3 & 137.0 $\pm$ 3.6 & 32.7$ \pm$ 0.7 & 468.2 $\pm$ 9.1 \\
\midrule
\textbf{BCQ+\texttt{OVR}\xspace} & \textbf{44.2} $\pm$ 0.3 & \textbf{127.1} $\pm$ 4.0 & \textbf{33.2} $\pm$ 0.7 & \textbf{453.9} $\pm$ 7.3 \\
\midrule
\midrule
& \multicolumn{2}{c|}{\textbf{PointButton1}} & \multicolumn{2}{c}{\textbf{PointButton2}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& Reward & Cost & Reward & Cost \\
\midrule
\textbf{BCQ} & \textbf{30.9} $\pm$ 2.2 & 330.8 $\pm$ 8.3 & 18.1 $\pm$ 1.1 & 321.6 $\pm$ 4.1 \\
\midrule
\textbf{BCQ+\texttt{OVR}\xspace} & 30.7 $\pm$ 2.3 & \textbf{321.5} $\pm$ 6.8 & \textbf{19.6} $\pm$ 1.0 & \textbf{305.7} $\pm$ 6.1 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-1.8em}
\end{table}
\section{Estimating the Distribution Ratio in Offline Policy Optimization: A Bi-Level Optimization Perspective}
To optimize the off-policy optimization problem with state-action distribution corrections effectively, we can view it as an alternating dual optimization problem, such that the resulting algorithm converges to a unique Nash equilibrium, i.e., $(\pi, \mu) = (\pi^{*}, \pi^{*})$. For the bi-level optimization perspective and analysis in this section, we assume access to the data logging policy $\mu$ that collected the batch data $\mathcal{D}$.
The off-policy optimization objective often takes the form:
\begin{align}
\label{eq:trpo_objective_offpolicy}
J(\pi_\theta) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)}\biggl[ \frac{d_{\pi_{\theta}}(s, a)}{d_{\mathcal{D}} (s, a)}\cdot Q(s,a) \biggr]
- \beta \cdot \mathcal{D}_{f}(d_{\pi} || d_{\mathcal{D}})
\end{align}
where for clarity of exposition we have additionally introduced the divergence regularizer term $\mathcal{D}_{f}(d_{\pi} || d_{\mathcal{D}})$. Note that regularizers of this form have also been considered in offline algorithms such as AlgaeDICE \citep{AlgaeDICE}.
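A discrete sanity check of this objective, with hypothetical three-point occupancies, confirms that the importance-weighted first term is just the expected Q-value under $d_{\pi}$:

```python
import numpy as np

# Hypothetical three-point state-action space with batch occupancy d_D,
# target occupancy d_pi, and Q-values; beta is the regularizer weight.
d_data = np.array([0.5, 0.3, 0.2])
d_pi = np.array([0.2, 0.3, 0.5])
q = np.array([1.0, 0.5, 2.0])
beta = 0.1

ratio = d_pi / d_data
kl = np.sum(d_pi * np.log(d_pi / d_data))   # KL as the chosen f-divergence
objective = np.sum(d_data * ratio * q) - beta * kl

# The importance-weighted first term equals E_{d_pi}[Q] exactly.
assert np.isclose(np.sum(d_data * ratio * q), np.sum(d_pi * q))
assert kl > 0.0   # the two occupancies differ
```

The penalty term then trades off this expected return against how far $d_{\pi}$ drifts from the batch distribution.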
In the outer-loop optimization, we update $\pi$ using samples from $d_{\mathcal{D}}(s,a)$, performing policy optimization regularized towards the behaviour policy. In the following sections, we show that we can formulate an overall dual off-policy optimization problem: in the primal form, we solve the policy gradient objective with off-policy data under $\mu$, and in the dual form, we solve for the dual variables that estimate the distribution ratio required for the local distribution matching step.
\textit{Inner Optimization: Distribution Matching:} Under our alternating minimax formulation, we first perform a \textit{local} imitation learning or distribution matching step that involves estimating, or correcting for, the ratio between the state-action distributions $d_{\pi}$ and $d_{\mu}$. We introduce a dual transformation using Fenchel duality, inspired by \citep{DualDICE, SBEED, 2019GENDICEGO}, such that optimizing the dual function is equivalent to estimating the distribution ratio. This leads to the optimization objective
\begin{equation}
\max_{h} \quad \mathcal{J}_{\text{GAIL}}(h) = \mathbb{E}_{(s,a) \sim d_{\pi}} \Big[ \log h(s,a) \Big] +
\mathbb{E}_{(s,a) \sim d_{\mu}} \Big[ \log (1 - h(s,a)) \Big],
\end{equation}
similar to GAIL \citep{GAIL}, where $h$ is a discriminator trained to distinguish samples of $d_{\pi}$ from samples of $d_{\mu}$. The local distribution matching step solves this dual problem, which is equivalent to learning a behaviour policy $\mu$ that remains close to the target policy, minimizing the variance between the state-action samples under the two policies \citep{behaviour_policy_search}. The inner loop minimizes a KL divergence term, for which we need to estimate the ratio of the state-action distributions. Consider the dual form of the KL divergence, introducing a dual function $x$ such that $- \text{KL} = \min_{x} \mathcal{K}_{\text{DICE}}$, given by:
\begin{align*}
- \text{KL}( d_{\mathcal{D}}(s,a) \,||\, d_{\pi}(s,a) ) =
\min_{x} \; \mathbb{E}_{s \sim d_{\pi}(s),\, a \sim \pi(a \mid s)} \Big[ \exp{ x(s,a) } \Big]
- \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ x(s,a) \Big]
\end{align*}
\textit{Change of Variables:} The first term involves an expectation w.r.t.\ $d_{\pi}(s)$, which we do not have access to in practice. Using the change-of-variables trick, as in \citep{DualDICE}, we can transform this expectation into one over the initial state distribution. Let $\nu(s,a) : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ be an arbitrary function satisfying $\nu(s,a) = x(s,a) + \gamma \mathbb{E}_{s' \mid s, a} \Big[ \nu(s',a') \Big]$. We then have:
\[
\mathbb{E}_{(s, a) \sim d_{\pi}(s,a)} \Big[ x(s,a) \Big] = (1 - \gamma) \mathbb{E}_{s \sim \beta, a \sim \pi(s)} \Big[ \nu(s,a) \Big]
\]
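The identity follows by telescoping; assuming $d_{\pi}$ is the discounted occupancy measure $d_{\pi}(s,a) = (1-\gamma)\sum_{t \geq 0}\gamma^{t}\Pr(s_t = s, a_t = a)$, the steps are:

```latex
\begin{align*}
\mathbb{E}_{(s,a) \sim d_{\pi}}\big[ x(s,a) \big]
&= \mathbb{E}_{(s,a) \sim d_{\pi}}\Big[ \nu(s,a) - \gamma \mathbb{E}_{s' \mid s,a}\big[ \nu(s',a') \big] \Big] \\
&= (1-\gamma)\sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{E}\big[ \nu(s_t,a_t) \big]
 - (1-\gamma)\sum_{t=0}^{\infty} \gamma^{t+1}\, \mathbb{E}\big[ \nu(s_{t+1},a_{t+1}) \big] \\
&= (1-\gamma)\, \mathbb{E}_{s_0 \sim \beta,\, a_0 \sim \pi}\big[ \nu(s_0,a_0) \big],
\end{align*}
```

since every term with $t \geq 1$ cancels between the two sums, leaving only the $t=0$ term under the initial state distribution $\beta$.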
We have therefore introduced the dual function $\nu(s,a)$, which has a form similar to a value function, with the Bellman-like operator $\mathcal{B}^{\pi}\nu(s,a) = \gamma \mathbb{E}_{s' \sim P,\, a' \sim \pi(\cdot \mid s')} [ \nu(s', a') ]$. Applying the change-of-variables trick in the dual form of the KL divergence, and using the Donsker-Varadhan representation of the KL divergence to avoid instability due to the exponential term, we have:
\begin{multline}
- \text{KL}(d_{\mathcal{D}}(s,a) \,||\, d_{\pi}(s,a) ) = \min_{\nu} \log \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} \Big[ \exp{( \nu(s,a) - \mathcal{B}^{\pi}\nu(s,a) )} \Big] \\
- \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \nu(s,a) - \mathcal{B}^{\pi}\nu(s,a) \Big]
\end{multline}
For the second term, we can either telescope it into an expectation over initial states, or compute it as written, since we have access to all past off-policy samples in the experience replay buffer. Denote the dual form of the KL divergence by $\mathcal{K}_{\text{DICE}}$. By solving the dual form for the optimal $\nu(s,a)$, we can compute the ratio of $d_{\pi}(s,a)$ and $d_{\mu}(s,a)$ exactly, as given by
\begin{equation}
\label{eq:optimal_dual}
x^{*}(s,a) = \nu^{*}(s,a) - \mathcal{B}^{\pi}\nu^{*}(s', a') = \log \frac{d_{\mu}(s,a)}{d_{\pi}(s,a)}
\end{equation}
The solution of equation \ref{eq:optimal_dual} is equivalent to minimizing the KL divergence. However, note that equation \ref{eq:optimal_dual} and the dual solution still depend on both $d_{\pi}$ and $d_{\mu}$, and in the off-policy case we usually do not have access to $d_{\pi}$. In contrast, \citep{AlgaeDICE, DualDICE} work in the off-policy batch setting, where the state-action distribution can be replaced with samples from a fixed batch of data. In the online off-policy case this is not possible, and an approximation is required. Here, we make an approximation based on the following observation: instead of using $d_{\pi}(s,a)$ exactly, we can consider only the last trajectory, i.e., $d_{\pi_{\text{old}}}(s,a)$. In other words, we compute the optimal ratio between $d_{\mu}(s,a)$ (i.e., state-action samples in the entire replay buffer) and $d_{\pi_{\text{old}}}$, where we take only the last old trajectory, to get $\log \frac{d_{\mu}(s,a)}{d_{\pi_{\text{old}}}(s,a)}$.
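A GAIL-style discriminator gives a concrete view of this ratio estimation: at the Bayes optimum, $h^{*} = d_{\pi}/(d_{\pi} + d_{\mu})$, so $h^{*}/(1-h^{*})$ recovers the density ratio. A sketch with 1-D Gaussians standing in for state-action samples (an illustrative assumption):

```python
import numpy as np

# Bayes-optimal discriminator between d_pi and d_mu recovers the density
# ratio via d_pi / d_mu = h*(x) / (1 - h*(x)). 1-D Gaussians N(1,1) and
# N(-1,1) are illustrative stand-ins for the two occupancy measures.
x = np.linspace(-3.0, 3.0, 7)
log_p_pi = -0.5 * (x - 1.0) ** 2   # log N(1, 1) up to a constant
log_p_mu = -0.5 * (x + 1.0) ** 2   # log N(-1, 1) up to a constant

h_star = 1.0 / (1.0 + np.exp(log_p_mu - log_p_pi))   # Bayes-optimal h
recovered_ratio = h_star / (1.0 - h_star)
true_ratio = np.exp(log_p_pi - log_p_mu)
assert np.allclose(recovered_ratio, true_ratio)
```

In practice $h$ is a learned network and only approximates this optimum, which is one source of the approximation error discussed above.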
\textit{Outer Optimization: Off-Policy Policy Gradient:} Following our off-policy optimization objective in equation \ref{eq:trpo_objective_offpolicy} and applying the dual form of the KL divergence term, we get the outer-loop objective $\mathcal{J}(\pi_{\theta}, \nu)$ under off-policy samples:
\begin{align}
\mathcal{J}(\pi_{\theta}, \nu) = {} & \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Bigg[ \frac{d_{\pi_{\theta}}(s, a)}{d_{\mathcal{D}}(s,a)} \cdot r(s,a) \Bigg]
- \lambda \, \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \nu(s,a) - \gamma \mathbb{E}_{s' \mid s, a} \big[ \nu(s', a') \big] \Big] \notag \\
& + \lambda \, \mathbb{E}_{d_{\beta}(s,a)} \Big[ \exp{ ( \nu(s_0,a_0) ) } \Big]
\label{eq:off_objective}
\end{align}
where $d_{\beta}(s,a)$ is the initial state-action distribution. From the inner loop, we can estimate the last two terms by optimizing the dual parameters $\nu(s,a)$, minimizing the error $(\nu(s,a) - \mathcal{B}^{\pi} \nu(s,a))$. We have further shown that the optimal solution $\nu^{*}(s,a)$ gives the optimal state-action distribution ratio in the primal form, as in equation \ref{eq:optimal_dual}, from which we can recover the ratio $\frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)}$ as it appears in the off-policy objective \ref{eq:off_objective}:
\[
\frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} = \frac{1}{\exp{( x^{*}(s,a) )}}
\]
which can be thought of as approximating the importance sampling correction term, as in off-policy actor-critic \citep{OffPAC}. The solution to the dual problem is equivalent to minimizing the discrepancy in the importance sampling correction term. Furthermore, using the optimal primal solution $x^{*}(s,a)$, or equivalently the optimal dual solution $\nu^{*}(s,a)$, we can establish a lower bound on the KL divergence term in terms of the variance of the importance sampling estimate, similar to POIS \citep{POIS}.
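Numerically, turning the optimal dual residual into importance weights is a one-liner; the two-point occupancies below are hypothetical:

```python
import numpy as np

# Hypothetical two-point occupancies: if the optimal dual residual
# x*(s,a) = (nu* - B^pi nu*)(s,a) equals log(d_mu / d_pi), then the
# off-policy correction d_pi / d_mu is simply 1 / exp(x*).
d_mu = np.array([0.7, 0.3])   # buffer (behaviour) occupancy
d_pi = np.array([0.4, 0.6])   # target-policy occupancy

x_star = np.log(d_mu / d_pi)
weights = 1.0 / np.exp(x_star)
assert np.allclose(weights, d_pi / d_mu)
```

Any error in the learned $\nu$ propagates directly into these weights, which is why the inner-loop Bellman residual is driven as close to optimal as possible.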
\textit{Overall Min-Max Objective:} We can write our overall objective as a min-max alternating update scheme:
\begin{align*}
\min_{\nu} \max_{\theta} \; \mathcal{J}(\theta, \nu) = {} &
\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Bigg[\frac{1}{ \exp{ (\nu(s,a) - \mathcal{B}^{\pi}\nu(s, a) ) }} \cdot r(s,a) \Bigg] \\
& - \lambda \, \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \nu(s,a) - \gamma \mathbb{E}_{s' \mid s, a} \big[ \nu(s', a') \big] \Big]
+ \lambda \, \mathbb{E}_{d_{\beta}(s,a)} \Big[ \exp{ ( \nu(s_0,a_0) ) } \Big],
\end{align*}
where the minimization step in the above min-max formulation implicitly depends on the inner-loop optimization, in which we perform distribution matching between the behaviour state-action distribution $d_{\mu}(s,a)$ and the target state-action distribution $d_{\pi}(s,a)$:
\begin{align*}
- \text{KL}(d_{\mathcal{D}} \,||\, d_{\pi}) =
\underset{\nu : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}}{\text{min}} \; \Big\{ \log \mathbb{E}_{d_{\beta}(s,a)} \Big[ \exp{ ( \nu(s_0, a_0) ) } \Big]
- \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \Big(\nu(s,a) - \mathcal{B}^{\pi} \nu(s,a) \Big)^{2} \Big] \Big\}
\end{align*}
where we can find the optimal state-action distribution correction from the optimal dual solution, consistent with equation \ref{eq:optimal_dual}:
\[
\exp \Big( (\nu^{*} - \mathcal{B}^{\pi}\nu^{*} )(s,a) \Big) = \frac{d_{\mu}(s,a)}{d_{\pi}(s,a)}
\]
The minimax optimization problem with alternating updates can be cast as a bilevel optimization problem. Under this scheme, the solution from the inner loop, matching the distributions (equivalently, finding the optimal dual function), determines the solution in the outer loop, which performs a state-action distribution correction in the off-policy policy gradient. Recall that an actor-critic algorithm \citep{Konda} can also be seen as a bilevel optimization problem (see Appendix for more details). We can write our overall bilevel formulation as:
\begin{align}
& \underset{\pi_{\theta}}{\text{maximize}}
& & \mathcal{J}(\theta, \nu) = \mathbb{E}_{d_{\mu}(s,a)} \Bigg[ \frac{1}{\exp ( x^{*}(s,a) )} \cdot Q^{\pi_{\theta}}(s,a) - \mathcal{K}_{\text{DICE}}(\nu^{*}) \Bigg] \label{eq:bilevel_objective}\\
& \text{subject to}
& & \nu^{*} = \underset{\nu}{\text{argmin}} \; \Big( \nu(s,a) - \mathcal{B}^{\pi}\nu(s,a) \Big)^{2} \notag
\end{align}
The overall off-policy policy gradient objective depends on the solution $\nu^{*}$ of the inner loop of the optimization problem, where we minimize the expected Bellman error for the dual function $\nu(s,a)$. The solution of the dual form in the inner optimization is then used in the upper-level optimization to obtain the optimal corrections for the stationary distributions.
Consider non-linear function approximation, where the behaviour-regularized off-policy actor-critic algorithm consists of three sets of parameters: $\theta$ for the target policy, $\omega$ for the mixture weights of $\mu_{\omega}$, and a critic approximation $Q_{\phi} \approx Q^{\pi}$. In the inner loop, in addition to optimizing $\omega$ as in the bi-level actor-critic formulation discussed above, we also need to minimize the Bellman error to optimize the critic weights $\phi$. We therefore have the overall minimax optimization problem $\min_{\nu, \phi} \max_{\theta} \mathcal{L}(\theta, \phi, \nu)$, which can be viewed as a three time-scale algorithm. However, for ease of analysis, we resort to a two time-scale update. We have the following two minimization problems, for $\phi$ and $\nu$, where we first minimize the mean squared Bellman error (MSBE):
\[
\phi^{*} = \min_{\phi} \mathcal{G}(\phi, \theta) = \min_{\phi} || Q_{\phi} - \mathcal{T}^{\pi_{\theta}} Q_{\phi} ||_{d_{\pi}}^{2}
\]
and then alternately minimize the squared Bellman-like residual, so that we can find the optimal ratio between the state-action distributions via the dual function $\nu$, given by
\[
\nu^{*} = \min_{\nu} \Big( \nu(s,a) - \mathcal{B}^{\pi} \nu(s,a) \Big)^{2}
\]
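A minimal numerical sketch of the first inner step, fitting $Q_{\phi}$ by semi-gradient descent on the Bellman error over a fixed batch; the linear features, discount, and learning rate are illustrative assumptions rather than the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Semi-gradient MSBE minimization on a fixed batch: the Bellman target
# r + gamma * Q(s', a') is held fixed inside each update.
feat = rng.normal(size=(256, 8))        # features of (s, a)
feat_next = rng.normal(size=(256, 8))   # features of (s', a'), a' ~ pi
rewards = rng.normal(size=256)
gamma, lr = 0.5, 5e-2

w = np.zeros(8)                                            # critic parameters phi
grad_norm_init = np.linalg.norm(feat.T @ rewards) / 256    # gradient norm at w = 0
for _ in range(500):
    td_error = feat @ w - (rewards + gamma * (feat_next @ w))
    w -= lr * (feat.T @ td_error) / 256                    # target treated as constant

td_error = feat @ w - (rewards + gamma * (feat_next @ w))
grad_norm_final = np.linalg.norm(feat.T @ td_error) / 256
assert grad_norm_final < 0.1 * grad_norm_init              # inner step has converged
```

The $\nu$ update in the second inner problem has the same structure, with the Bellman-like operator $\mathcal{B}^{\pi}$ in place of the reward-bearing target.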
Our overall optimization problem can thus be written as a bi-level optimization problem, involving a three time-scale algorithm with two inner loops solving minimization problems and an outer loop for the policy maximization step. The proposition below shows this:
\begin{prop} Consider an alternating optimization framework based on the objective $\mathcal{L}(\theta, \phi, \nu)$, with $\pi_{\theta}$ as the fast policy and $\mu_{\omega}$ as the slower, reactive policy. This has the following equivalent bi-level optimization problem:
\begin{align}
& \underset{\theta}{\text{maximize}}
& & \mathcal{F}(\theta, \omega) = \mathbb{E}_{s_0, a_0 \sim \beta} \Big[ Q_{\omega}^{\pi_{\theta}}(s_0,a_0) \Big] \label{eq:ac_bilevel_minmin_max}\\
& \text{subject to}
& & \phi^{*} = \min_{\phi} \mathcal{G}(\phi, \theta) = \min_{\phi} || Q_{\phi} - \mathcal{T}^{\pi_{\theta}} Q_{\phi} ||_{d_{\pi}}^{2}, \notag \\
& \text{subject to}
& & \nu^{*} = \min_{\nu} \Big( \nu(s,a) - \mathcal{B}^{\pi} \nu(s,a) \Big)^{2} \notag
\end{align}
\end{prop}
\textbf{Updating $\pi_{\theta}$:} In the maximization step for updating $\pi_{\theta}$, we can use samples from $\mu_{\omega}$ and take a trust-region optimization approach for the off-policy gradient objective, which follows the regular TRPO objective, with the only difference being that the samples are collected under the behaviour policy $\mu(a \mid s)$.
\textbf{Distribution Matching Step:} The minimization step matches $d_{\mu}$ and $d_{\pi}$ by minimizing $\text{KL}(d_\mu \,||\, d_{\pi_{\theta}})$, or equivalently optimizing the dual function $\nu(s,a)$, to estimate the optimal state-action distribution correction.
\textbf{Combining Updates for $\pi_{\theta}$ and $\nu(s,a)$:} We can alternate the updates for $\nu$ (i.e., minimizing the Bellman-like error) and $\pi_{\theta}$ (i.e., the off-policy gradient update) to form a min-max optimization objective where the two policies are tied together, guiding each other's improvement, similar to Dual Policy Iteration \citep{DPI}.
\subsection{What makes Offline Off-Policy Optimization Difficult? }
Offline RL optimization algorithms often suffer from distribution mismatch issues, since the underlying data distribution in the batch data may be quite different from the induced distribution under the target policy. Recent works \citep{BCQ, BEAR, RishabhDQN, CQL} have tried to address this by avoiding overestimation of Q-values, which leads to extrapolation error when bootstrapping value function estimates; this error causes offline RL agents to generalize poorly to regions unseen in the dataset. Additionally, due to the distribution mismatch, value function estimates can have large variance, which is why existing online off-policy algorithms \citep{SAC, DDPG, TD3} may fail without online interactions with the environment. In this work, we address the latter problem by minimizing the variance of value function estimates through variance-related risk constraints.
\subsection{Discussions on Offline Off-Policy Optimization with State-Action Distribution Ratios}
\label{app:sec-estimating_dist_ratio}
In this section, we include several alternatives by which we can compute the stationary state-action distribution ratio, borrowing from recent works \citep{MWL_Uehara, DualDICE}.
\textbf{Off-Policy Optimization with Minimax Weight Learning (MWL) : } We discuss other possible ways of optimizing the batch off-policy optimization objective while also estimating the state-action density ratio.
Following \citep{MWL_Uehara}, we further modify the off-policy optimization part of the objective $J(\theta)$ in $\mathcal{L}(\theta, \lambda)$ into a min-max objective, consisting of weight learning $\omega_{\pi/\mathcal{D}}$ and optimization of the resulting objective $J(\theta, \omega)$. We further propose an overall policy optimization objective, where a single objective can be used for estimating the distribution ratio, evaluating the critic, and optimizing the resulting objective. We can write the off-policy optimization objective in its equivalent starting-state formulation:
\begin{equation}
\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi_{\theta}/\mathcal{D}}(s,a) \cdot r(s,a) \Big] = (1 - \gamma) \mathbb{E}_{s_0 \sim \beta_{0}(s), a_0 \sim \pi(\cdot \mid s_0)} \Big[ Q^{\pi}(s_0, a_0) \Big]
\end{equation}
Furthermore, following the Bellman equation, we expect $\mathbb{E} [r(s,a)] = \mathbb{E} [ Q^{\pi}(s,a) - \gamma Q^{\pi}(s', a') ]$, so that:
\begin{equation}
\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi_{\theta}/\mathcal{D}}(s,a) \cdot \{ Q^{\pi}(s,a) - \gamma Q^{\pi}(s', a') \} \Big] = (1 - \gamma) \mathbb{E}_{s_0 \sim \beta_{0}(s), a_0 \sim \pi(\cdot \mid s_0)} \Big[ Q^{\pi}(s_0, a_0) \Big]
\end{equation}
We can therefore write the overall objective as :
\begin{multline}
\label{eq:MWL_Optimization}
J(\omega, \pi_{\theta}, Q) = \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi_{\theta}/\mathcal{D}}(s,a) \cdot \{ Q^{\pi}(s,a) - \gamma Q^{\pi}(s', a') \} \Big] \\ - (1 - \gamma) \mathbb{E}_{s_0 \sim \beta_{0}(s), a_0 \sim \pi(\cdot \mid s_0)} \Big[ Q^{\pi}(s_0, a_0) \Big]
\end{multline}
This is similar to the MWL objective in \citep{MWL_Uehara}, except that we consider the bias-reduced estimator, such that accurate estimates of $Q$ or $\omega$ lead to reduced bias in the value function estimate. Furthermore, note that in the first part of the objective $J(\pi_{\theta}, \omega, Q)^{2}$ we can use entropy regularization to smooth the objective: instead of $Q^{\pi}(s', a')$ in the target, we can use a log-sum-exp together with the conjugate of the entropy regularization term, similar to SBEED \citep{SBEED}. This gives the first part of the objective as an overall min-max optimization problem:
\begin{multline}
\label{eq:overall_PG_objective_1}
J(\omega, \pi_{\theta})
= \mathbb{E}_{d_{\mu}(s,a)} \Big[ \omega_{\pi_{\theta}/\mathcal{D}}(s,a) \cdot \{ r(s,a) + \gamma Q^{\pi}(s', a') + \tau \log \pi(a \mid s) - Q^{\pi}(s,a) \} \Big] \\
+ (1 - \gamma) \mathbb{E}_{s_0 \sim \beta_{0}(s), a_0 \sim \pi(\cdot \mid s_0)} \Big[ Q^{\pi}(s_0, a_0) \Big]
\end{multline}
such that our overall constrained optimization objective for maximizing $\theta$ is turned into a min-max objective that jointly estimates the density ratios, estimates the value function, and maximizes the policy:
\begin{equation}
\omega_{\pi/\mathcal{D}}^{*}, Q^{*}, \pi^{*} = \underset{\omega, Q}{\text{argmin}} \quad \underset{\pi}{\text{argmax}} \quad J(\pi_{\theta}, \omega, Q)^{2}
\end{equation}
where the fixed point solution for the density ratio can be obtained by minimizing the objective:
\begin{multline}
\omega_{\pi/\mathcal{D}}^{*} = \underset{\omega}{\text{argmin}} \quad \mathcal{L}(\omega_{\pi/\mathcal{D}}, Q)^{2}, \quad \text{where} \quad \mathcal{L}(\omega_{\pi/\mathcal{D}}, Q) = \mathbb{E}_{d_{\mu}(s,a)} \Big[ \gamma \omega(s,a) \cdot Q^{\pi}(s',a') - \omega(s,a) Q^{\pi}(s,a) \Big] \\
+ (1 - \gamma) \mathbb{E}_{s_0, a_0 \sim \beta} \Big[ Q^{\pi}(s_0, a_0) \Big]
\end{multline}
\textbf{DualDICE : } In contrast to MWL \citep{MWL_Uehara}, DualDICE \citep{DualDICE} introduces dual variables through the change of variables trick, and minimizes the Bellman residual of the dual variables $\nu(s,a)$ to estimate the ratio, such that :
\begin{equation}
\nu^{*}(s,a) - \mathcal{B}^{\pi} \nu^{*}(s,a) = \omega_{\pi/\mathcal{D}}(s,a)
\end{equation}
the solution to which can be achieved by optimizing the following objective
\begin{equation}
\label{eq:DualDICE}
\min_{\nu} \mathcal{L}(\nu) = \frac{1}{2} \mathbb{E}_{d_{\mathcal{D}}} \Big[ (\nu - \mathcal{B}^{\pi} \nu)(s,a)^{2} \Big] - (1 - \gamma) \mathbb{E}_{s_0, a_0 \sim \beta(s,a)} \Big[ \nu(s_0, a_0) \Big]
\end{equation}
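A minimal sketch of estimating the DualDICE objective in equation \ref{eq:DualDICE} from batch samples, with a tabular $\nu$ (hypothetical shapes and distributions; the single-sample Bellman residual used here is a biased approximation due to the double-sampling issue):

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.9
n_states, n_actions, batch = 4, 2, 512

nu = rng.normal(size=(n_states, n_actions))   # hypothetical tabular dual variable

# Synthetic batch transitions from d_D, and initial pairs from beta.
s  = rng.integers(n_states,  size=batch)
a  = rng.integers(n_actions, size=batch)
s2 = rng.integers(n_states,  size=batch)
a2 = rng.integers(n_actions, size=batch)      # a' ~ pi(.|s') in the real setting
s0 = rng.integers(n_states,  size=batch)
a0 = rng.integers(n_actions, size=batch)

# Single-sample approximation of the Bellman residual
# (nu - B^pi nu)(s, a) ~= nu(s, a) - gamma * nu(s', a'),
# then the DualDICE loss: half the mean squared residual minus the
# (1 - gamma)-scaled initial-state term.
residual = nu[s, a] - gamma * nu[s2, a2]
loss = 0.5 * np.mean(residual ** 2) - (1 - gamma) * np.mean(nu[s0, a0])
print(loss)
```

Minimizing this loss over $\nu$ (here left as a single evaluation) would recover the residual as the density ratio $\omega_{\pi/\mathcal{D}}$.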
\textbf{Minimizing Divergence for Density Ratio Estimation : } The distribution ratio can be estimated using an objective similar to GANs \citep{GANs, GAIL}, as also similarly proposed in \citep{DistributionMatchingOfir}.
\begin{equation}
\label{eq:GANs}
\max_{h} \mathcal{G}(h) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \log h(s,a) \Big] + \mathbb{E}_{(s,a) \sim d_{\pi}} \Big[ \log (1 - h(s,a)) \Big]
\end{equation}
where $h$ is the discriminator class, discriminating between samples from $d_{\mathcal{D}}$ and $d_{\pi}$. The optimal discriminator satisfies :
\begin{equation}
\label{eq:opt_discriminator_GANs}
\log h^{*}(s,a) - \log (1 - h^{*}(s,a)) = \log \frac{d_{\mathcal{D}}(s,a)}{d_{\pi}(s,a)}
\end{equation}
The optimal solution of the discriminator is therefore equivalent to minimizing the divergence between $d_{\pi}$ and $d_{\mathcal{D}}$, since the KL divergence is given by :
\begin{equation}
- D_{\text{KL}}(d_{\pi} || d_{\mathcal{D}}) = \mathbb{E}_{(s,a) \sim d_{\pi}} \Big[ \log \frac{d_{\mathcal{D}}(s,a)}{d_{\pi}(s,a)} \Big]
\end{equation}
Additionally, using the Donsker-Varadhan representation, we can further write the KL divergence term as :
\begin{equation}
\label{eq:KL_DV_Representation}
- D_{\text{KL}}(d_{\pi} || d_{\mathcal{D}}) = \min_{x} \log \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \exp\{x(s,a)\} \Big] - \mathbb{E}_{(s,a) \sim d_{\pi}} \Big[ x(s,a) \Big]
\end{equation}
such that now, instead of the discriminator class $h$, we learn the function class $x$, whose optimal solution equals the log distribution ratio up to an additive constant:
\begin{equation}
x^{*}(s,a) = \log \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)}
\end{equation}
However, note that both the GAN-like objective in equation \ref{eq:GANs} and the DV representation of the KL divergence in equation \ref{eq:KL_DV_Representation} require access to samples from both $d_{\pi}$ and $d_{\mathcal{D}}$. In our problem setting, however, we only have access to batch samples from $d_{\mathcal{D}}$.
To remove the dependence on samples from $d_{\pi}$, we can use the change of variables trick, $x(s,a) = \nu(s,a) - \mathcal{B}^{\pi} \nu(s,a)$, to write the DV representation of the KL divergence as:
\begin{equation}
- D_{\text{KL}}(d_{\pi} || d_{\mathcal{D}}) = \min_{\nu} \log \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \exp\{\nu(s,a) - \mathcal{B}^{\pi} \nu(s,a)\} \Big] - \mathbb{E}_{(s,a) \sim d_{\pi}} \Big[ \nu(s,a) - \mathcal{B}^{\pi} \nu(s,a) \Big]
\end{equation}
where the second expectation can be written as an expectation over initial states, following from DualDICE, such that we have
\begin{equation}
- D_{\text{KL}}(d_{\pi} || d_{\mathcal{D}}) = \min_{\nu} \quad \log \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \exp\{\nu(s,a) - \mathcal{B}^{\pi} \nu(s,a)\} \Big] - (1 - \gamma) \mathbb{E}_{s_0, a_0 \sim \beta_{0}} \Big[ \nu(s_0, a_0) \Big]
\end{equation}
Minimizing the above objective w.r.t. $\nu$ requires only samples from the fixed batch data $d_{\mathcal{D}}$ and the starting state distribution. The optimal density ratio is then given by:
\begin{equation}
x^{*}(s,a) = \nu^{*}(s,a) - \mathcal{B}^{\pi} \nu^{*}(s,a) = \log \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} = \log \omega^{*}(s,a)
\end{equation}
\textbf{Empirical Likelihood Ratio : }
We can follow \citet{sinha2020experience} to compute the state-action likelihood ratio, where they use a binary classifier to distinguish samples from an on-policy and an off-policy distribution. The classifier, $\phi$, is trained on the following objective and takes as input the state-action tuples $(s, a)$, returning the probability that the state-action pair comes from the target policy.
The objective for $\phi$ can be formulated as
\begin{equation}
\mathcal{L}_{cls} = \max_{\phi} -\mathbb{E}_{s, a \sim \mathcal{D}}\big[\log \phi(s,a)\big] + \mathbb{E}_{s \sim \mathcal{D}}\big[\log \phi(s, \pi(s))\big]
\end{equation}
where $s, a \sim \mathcal{D}$ are samples from the behaviour policy, and $s, \pi(s)$ are samples from the target policy. The density ratio estimates for a given $s, a \sim \mathcal{D}$ are simply $\omega(s,a) = \sigma(\phi(s, a))$ like in \citet{sinha2020experience}. We then use these $\omega(s,a)$ for density ratio corrections for the target policy.
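A minimal sketch of such a classifier-based ratio estimator, using 1-D Gaussian stand-ins for the state-action features and a linear logistic classifier trained by plain gradient descent (the distributions, learning rate, and the use of the exponentiated logit as the ratio are illustrative assumptions, not the exact recipe of \citet{sinha2020experience}):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1-D Gaussian stand-ins for state-action features: behaviour samples
# from the dataset D and samples under the target policy (both
# hypothetical, for illustration only).
x_D  = rng.normal(0.0, 1.0, size=5000)
x_pi = rng.normal(1.0, 1.0, size=5000)

# Linear logistic classifier phi(x) = w*x + b, trained so sigmoid(phi)
# is high on target-policy samples and low on behaviour samples.
w, b = 0.0, 0.0
for _ in range(2000):
    p_D, p_pi = sigmoid(w * x_D + b), sigmoid(w * x_pi + b)
    # gradient of -E_pi[log sigmoid(phi)] - E_D[log(1 - sigmoid(phi))]
    grad_w = np.mean(p_D * x_D) + np.mean((p_pi - 1.0) * x_pi)
    grad_b = np.mean(p_D) + np.mean(p_pi - 1.0)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# At the optimum the logit approximates log(d_pi(x)/d_D(x)); for these
# two Gaussians the exact log-ratio is x - 0.5, i.e. w ~ 1, b ~ -0.5.
omega_hat = np.exp(w * x_D + b)   # ratio estimates on behaviour samples
print(round(w, 2), round(b, 2))
```

The exponentiated logit is the standard density-ratio readout of a probabilistic classifier; any calibrated binary classifier could be substituted for the linear one used here.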
\section{Appendix : Lower Bound Objective with Variance Regularization}
\subsection{Proof of Lemma \ref{lemma:lower_bound_lemma}}
\label{app:sec-lower_bound_lemma}
Recall Lemma \ref{lemma:lower_bound_lemma}. The proof follows from \citep{POIS}; we extend it to marginalized importance weighting and include it here for completeness. Note that, compared to importance weighting, which leads to an unbiased estimator as in \citep{POIS}, correcting for the state-action occupancy measures leads to a biased estimator due to the approximation $\hat{\omega}_{\pi/\mathcal{D}}$. However, for our analysis we only need a lower bound objective, and therefore we do not provide a bias-variance analysis as in off-policy evaluation.
\begin{equation}
\text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \hat{\omega}_{\pi/\mathcal{D}} \Big] \leq \frac{1}{N} || r ||_{\infty}^{2} \mathcal{F}_{2} ( d_{\pi} || d_{\mathcal{D}} )
\end{equation}
\begin{proof} Assuming that state action samples are drawn i.i.d from the dataset $\mathcal{D}$, we can write :
\begin{align}
& \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \hat{\omega}_{\pi/\mathcal{D}}(s,a) \Big] \leq \frac{1}{N} \text{Var}_{(s_1, a_1) \sim d_{\mathcal{D}}(s,a)} \Big[ \frac{d_{\pi}(s_1, a_1)}{d_{\mathcal{D}}(s_1, a_1)} \cdot r(s_1, a_1) \Big] \notag \\
& \leq \frac{1}{N} \mathbb{E}_{(s_1, a_1) \sim d_{\mathcal{D}}(s,a)} \Big[ \Big( \frac{d_{\pi}(s_1, a_1)}{d_{\mathcal{D}}(s_1, a_1)} \cdot r(s_1, a_1) \Big)^{2} \Big] \notag \\
& \leq \frac{1}{N} || r ||_{\infty}^{2} \mathbb{E}_{(s_1, a_1) \sim d_{\mathcal{D}}(s,a)} \Big[ \Big( \frac{d_{\pi}(s_1, a_1)}{d_{\mathcal{D}}(s_1, a_1)} \Big)^{2} \Big] = \frac{1}{N} || r ||_{\infty}^{2} \mathcal{F}_{2} ( d_{\pi} || d_{\mathcal{D}} )
\end{align}
\end{proof}
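The inequality can be verified numerically for the $N = 1$ case on a toy discrete state-action space, with exact density ratios (all quantities synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6                              # toy discrete state-action space

# Two normalized distributions standing in for d_pi and d_D (d_D is
# bounded away from zero), plus a bounded reward vector.
d_pi = rng.random(n); d_pi /= d_pi.sum()
d_D  = rng.random(n) + 0.1; d_D /= d_D.sum()
r    = rng.uniform(-1, 1, size=n)

w   = d_pi / d_D                   # exact density ratio omega
est = w * r                        # per-step corrected reward

var  = np.sum(d_D * est ** 2) - np.sum(d_D * est) ** 2   # Var under d_D
F2   = np.sum(d_D * w ** 2)        # exponentiated 2-Renyi term F_2
rinf = np.max(np.abs(r))           # ||r||_inf

print(var <= rinf ** 2 * F2)       # the bound from the lemma with N = 1
```

Dropping the squared mean and bounding $|r|$ by its supremum makes the bound hold term by term, which is exactly the chain of inequalities in the proof.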
\subsection{Proof of Theorem \ref{thm:pdl_variance_bounds}: }
First, let us recall the statement of theorem \ref{thm:pdl_variance_bounds}. By constraining the off-policy optimization problem with variance constraints, we obtain the following lower bound on the optimization objective with stationary state-action distribution corrections:
\begin{equation}
\label{eq:lower_bound_obj}
J(\pi) \geq \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} r(s,a) ] - \sqrt{ \frac{1 - \delta}{\delta} \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} r(s,a) ] }
\end{equation}
\begin{proof}
The proof for the lower bound objective can be obtained as follows. We first define a relationship between the variance and the $\alpha$-divergence with $\alpha=2$, as also similarly noted in \citep{POIS}. Given we have batch samples $\mathcal{D}$, and denoting the state-action distribution correction with $\omega_{\pi/\mathcal{D}}(s,a)$, we can write from lemma \ref{lemma:lower_bound_lemma} :
\begin{equation}
\text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \hat{\omega}_{\pi/\mathcal{D}} \Big] \leq \frac{1}{N} || r ||_{\infty}^{2} \mathcal{F}_{2} ( d_{\pi} || d_{\mathcal{D}} )
\end{equation}
where the per-step estimator with state-action distribution corrections is given by $\omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a)$. Here, the reward function $r(s,a)$ is bounded, and for any $N > 0$ the variance of the per-step reward estimator with distribution corrections can be upper bounded via the R\'enyi divergence ($\alpha=2$). Finally, following \citep{POIS} and using Cantelli's inequality, for any $\lambda > 0$ we have:
\begin{align}
\text{Pr} \Big( \omega_{\pi/\mathcal{D}} - J(\pi) \geq \lambda \Big) \leq \frac{1}{ 1 + \frac{\lambda^{2}}{ \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) ]}}
\end{align}
and by using $\delta = \frac{1}{ 1 + \frac{\lambda^{2}}{ \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) ]}}$ we get that with probability at least $1 - \delta$, we have:
\begin{align}
J(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} [ r(s,a) ] \geq \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) ] - \sqrt{ \frac{1 - \delta}{\delta} \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) ] }
\end{align}
where we can further replace the variance term via the R\'enyi divergence with $\alpha=2$ to conclude the proof of the above theorem. Following the relation between the variance and the R\'enyi divergence for $\alpha=2$, we can further write the lower bound as:
\begin{equation*}
J(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} [ r(s,a) ] \geq \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} [ \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} \cdot r(s,a) ] - ||r||_{\infty} \sqrt{\frac{(1 - \delta) d_{2} (d_{\pi} || d_{\mathcal{D}})}{\delta N}}
\end{equation*}
This hints at the similarity between our proposed variance-regularized objective and other related works, including AlgaeDICE \citep{AlgaeDICE}, which uses an f-divergence $D_{f}(d_{\pi} || d_{\mathcal{D}})$ between stationary distributions.
\end{proof}
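The probabilistic tool behind this derivation is Cantelli's one-sided inequality; a quick Monte Carlo sanity check on a toy distribution (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of Cantelli's inequality, the tool behind the
# lower bound: P(X - E[X] >= lam) <= Var[X] / (Var[X] + lam^2).
X = rng.exponential(scale=1.0, size=200_000)
mu, var = X.mean(), X.var()
checks = []
for lam in [0.5, 1.0, 2.0]:
    tail = np.mean(X - mu >= lam)          # empirical upper-tail mass
    bound = var / (var + lam ** 2)         # Cantelli bound
    checks.append(tail <= bound)
    print(lam, round(tail, 3), round(bound, 3))
print(all(checks))
```

Setting $\delta$ equal to the right-hand side and solving for $\lambda$ gives exactly the $\sqrt{(1-\delta)/\delta \cdot \text{Var}}$ deviation term appearing in the theorem.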
\section{Appendix : Monotonic Performance Improvement Guarantees under Variance Regularization}
\label{app:sec_monotonic_improvement}
We provide theoretical analysis and performance improvement bounds for our proposed variance-constrained policy optimization approach. Following from \citep{CPI, TRPO, AchiamCPO}, we extend existing performance improvement guarantees to the stationary state-action distributions, instead of only considering the divergence between the current policy and the old policy. We show that the existing conservative updates in algorithms such as \citep{TRPO} can account for both the state visitation distributions and the action distributions, as similarly pointed out by \citep{AchiamCPO}, and we can then adapt this to variance constraints instead of divergence constraints. According to the performance difference lemma \citep{CPI}, for all policies $\pi$ and $\pi'$ we have:
\begin{equation}
\label{eq:pdl_kakade}
J(\pi') - J(\pi) = \mathbb{E}_{s \sim d_{\pi'}, a \sim \pi'} [ A^{\pi}(s,a) ]
\end{equation}
which implies that maximizing equation \ref{eq:pdl_kakade} leads to an improved policy $\pi'$ with policy improvement guarantees over the previous policy $\pi$. We can write the advantage function with variance-augmented value functions as:
\[
A^{\pi}_{\lambda} = Q^{\pi}_{\lambda}(s,a) - V^{\pi}_{\lambda}(s) = \mathbb{E}_{s' \sim \mathcal{P}} \Big[ r(s,a) - \lambda (r(s,a) - J(\pi) )^{2} + \gamma V^{\pi}_{\lambda}(s') - V^{\pi}_{\lambda}(s) \Big]
\]
However, equation \ref{eq:pdl_kakade} is often difficult to maximize directly, since it additionally requires samples from $\pi'$ and $d_{\pi'}$; a surrogate objective is therefore typically optimized instead \citep{CPI}. Following \citep{TRPO}, we can obtain a bound on the performance difference based on the variance-regularized advantage function:
\begin{equation}
J(\pi') \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}(s), a \sim \pi'(a|s)} \Big[ A^{\pi}_{\lambda}(s,a) \Big]
\end{equation}
where the advantage function uses the augmented rewards, and by applying Fenchel duality to the variance we can avoid policy-dependent reward functions. Otherwise, the augmented reward for the value functions is $\Tilde{r}(s,a) = r(s,a) - \lambda ( r(s,a) - J(\pi) )^{2}$. This suggests, however, that the performance difference bound does not hold without additional assumptions \citep{BisiRiskAverse}. We can instead obtain a monotonic improvement guarantee by considering the KL divergence between policies:
\begin{equation}
\label{eq:surrogate_pdl_kakade}
\mathcal{L}_{\pi}(\pi') = J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'}[ A^{\pi}(s,a) ]
\end{equation}
which ignores the changes in the state distribution $d_{\pi'}$ due to the improved policy $\pi'$. \citep{TRPO} optimizes the surrogate objective $\mathcal{L}_{\pi}(\pi')$ while ensuring that the new policy $\pi'$ stays close to the current policy $\pi$, by imposing the KL constraint $\mathbb{E}_{s \sim d_{\pi}}[ \mathcal{D}_{\text{KL}}(\pi'(\cdot \mid s) || \pi(\cdot \mid s)) ] \leq \delta$. The performance difference bound, based on the constraint between $\pi$ and $\pi'$ as in TRPO \citep{TRPO}, is given by:
\begin{lemma} The performance difference bound in \citep{TRPO}, where $\mathcal{D}_{\text{TV}}^{\text{max}} = \max_{s} \mathcal{D}_{\text{TV}}(\pi, \pi')$:
\begin{equation}
J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \frac{4 \epsilon \gamma}{(1 - \gamma)^{2}} (\mathcal{D}_{\text{TV}}^{\text{max}}(\pi' || \pi))^{2}
\end{equation}
where $\epsilon = \max_{s,a} | A^{\pi}(s,a) |$.
\end{lemma}
The performance improvement bound in \citep{TRPO} can further be written in terms of the KL divergence, following the relationship between the total variation (TV) distance and KL given by Pinsker's inequality, $\mathcal{D}_{\text{TV}}(p || q)^{2} \leq \mathcal{D}_{\text{KL}}(p || q)$, which yields the following improvement bound:
\begin{equation}
\label{eq:per_imp_TV}
J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \frac{4 \epsilon \gamma}{(1 - \gamma)^{2}} \mathcal{D}_{\text{KL}}(\pi' || \pi)
\end{equation}
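The TV-KL relationship invoked here can be checked numerically on random discrete distributions (Pinsker's inequality in fact gives the tighter bound $\mathcal{D}_{\text{TV}}^{2} \leq \mathcal{D}_{\text{KL}}/2$; the check below uses the weaker form stated above):

```python
import numpy as np

rng = np.random.default_rng(6)

# Numerical check of D_TV(p||q)^2 <= D_KL(p||q) on random discrete
# distributions over 5 outcomes.
ok = True
for _ in range(1000):
    p = rng.random(5); p /= p.sum()
    q = rng.random(5); q /= q.sum()
    tv = 0.5 * np.sum(np.abs(p - q))          # total variation distance
    kl = np.sum(p * np.log(p / q))            # KL divergence
    ok &= bool(tv ** 2 <= kl + 1e-12)
print(ok)
```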
We thus have a performance difference bound in terms of the shift between the state distributions $d_{\pi'}$ and $d_{\pi}$. This justifies that $\mathcal{L}_{\pi}(\pi')$ is a sensible lower bound to $J(\pi')$ as long as the total variation distance between $d_{\pi'}$ and $d_{\pi}$ is small, ensuring that the policies $\pi'$ and $\pi$ stay close to each other.
Finally, following from \citep{AchiamCPO}, we obtain the following lower bound, which satisfies policy improvement guarantees :
\begin{equation}
\label{eq:pdl_tv}
J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \frac{2 \gamma \epsilon^{\pi}}{1 - \gamma} \mathbb{E}_{s \sim d_{\pi}} [ \mathcal{D}_{\text{TV}}(\pi'(\cdot \mid s) || \pi(\cdot \mid s)) ]
\end{equation}
Equations \ref{eq:per_imp_TV} and \ref{eq:pdl_tv} assume that there is no state distribution shift between $\pi'$ and $\pi$. However, if we explicitly account for the change in the state distributions, $d_{\pi'}$ and $d_{\pi}$ under $\pi'$ and $\pi$ respectively, then we have the following performance improvement bound:
\begin{lemma} For all policies $\pi'$ and $\pi$, we have the performance improvement bound based on the total variation of the state-action distributions $d_{\pi'}$ and $d_{\pi}$
\begin{equation}
J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})
\end{equation}
where $\epsilon^{\pi} = \max_{s} | \mathbb{E}_{a \sim \pi'(\cdot \mid s)}[ A^{\pi}(s,a) ] |$.
\end{lemma}
which can be further written in terms of the surrogate objective $\mathcal{L}_{\pi}(\pi')$ as :
\begin{align}
\label{eq:pdl_tv_statedistn}
& J(\pi') \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [A^{\pi}(s,a)] - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})\notag \\
& = \mathcal{L}_{\pi}(\pi') - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})
\end{align}
\subsection{Proof of Theorem \ref{thm:policy_improvement_variance_bound} : Policy Improvement Bound with Variance Regularization}
\label{app:thm:policy_improvement_variance_bound}
\begin{proof} We provide the derivation for theorem \ref{thm:policy_improvement_variance_bound}. Recall that for all policies $\pi'$ and $\pi$, with corresponding state visitation distributions $d_{\pi'}$ and $d_{\pi}$, we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections:
\begin{equation}
J(\pi') - J(\pi) \geq \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \text{Var}_{s \sim d_{\pi}, a \sim \pi} \Big[ f(s,a) \Big]
\end{equation}
where $f(s,a)$ is the dual function class for the divergence between $d_{\pi'}(s,a)$ and $d_{\pi}(s,a)$.
Following from Pinsker's inequality, the performance difference lemma written in terms of the state visitation distributions can be given by :
\begin{align}
& J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi}) \notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [ A^{\pi}(s,a) ] - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi}) \notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [ A^{\pi}(s,a) ] - \epsilon^{\pi} \sqrt{ \mathcal{D}_{\text{KL}}(d_{\pi'} || d_{\pi}) }
\end{align}
Following from \citep{TRPO}, we can alternatively write this as follows, applying the variational representation of the TV distance:
\begin{align}
& J(\pi') \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \mathbb{E}_{s \sim d_{\pi}} \Big [\mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})^{2} \Big]\notag \\
&= J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \mathbb{E}_{s \sim d_{\pi}} \Big [ \Big( \max_{f} \{ \mathbb{E}_{s \sim d_{\pi'}, a \sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] \} \Big)^{2} \Big]\notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \mathbb{E}_{s \sim d_{\pi}} \Big[ \Big( \mathbb{E}_{s \sim d_{\pi'}, a\sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] \Big)^{2} \Big] \notag \\
&= J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \Big\{ \Big( \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a)] ] \Big)^{2} \Big \} \notag \\
& = J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \text{Var}_{s \sim d_{\pi}, a \sim \pi} \Big[ f(s,a) \Big]
\end{align}
Therefore, the policy improvement bound depends on maximizing the variational representation $f(s,a)$ of the f-divergence to guarantee improvement from $J(\pi)$ to $J(\pi')$, leading to the stated result in theorem \ref{thm:policy_improvement_variance_bound}.
\end{proof}
\section{Appendix : Additional Discussions}
\subsection{Extended Related Work}
\label{app:sec-related_works}
\textbf{Other related works : } Several other prior works have considered the batch RL setting \citep{Lange2012BatchRL} for off-policy evaluation, counterfactual risk minimization \citep{SwamCounterfactual, SwamCounterfactual2}, learning value-based methods such as DQN \citep{RishabhDQN}, and others \citep{BEAR, behaviour_regularized}. Recently, batch off-policy optimization has also been introduced to reduce the exploitation error \citep{BCQ} and for regularizing with arbitrary behaviour policies \citep{behaviour_regularized}. However, due to the per-step importance sampling corrections on episodic returns \citep{ImportanceSamplingDoina}, off-policy batch RL methods are challenging. In this work, we instead consider marginalized importance sampling corrections and correct for the stationary state-action distributions \citep{DualDICE, MWL_Uehara, GenDICE}. Additionally, under the framework of Constrained MDPs \citep{Altman99constrainedmarkov}, risk-sensitive and constrained actor-critic algorithms have been proposed previously \citep{ChowRiskSensitive, CVaRChow, AchiamCPO}. However, these works come with their own demerits, as they mostly require minimizing the risk (i.e., variance) term, where finding the gradient of the variance term often leads to a double sampling issue \citep{Baird}. We avoid this by instead using Fenchel duality \citep{ConvexBoyd}, inspired by recent works \citep{RLviaFenchel, SBEED}, and cast risk-constrained actor-critic as a max-min optimization problem. Our work is closely related to \citep{BisiRiskAverse}, which also considers the per-step variance of returns w.r.t. state occupancy measures in the on-policy setting, while we instead consider the batch off-policy optimization setting with per-step rewards w.r.t. stationary distribution corrections.
Constrained optimization has previously been studied in reinforcement learning for batch policy learning \citep{BatchPolicyLearning} and optimization \citep{AchiamCPO}, mostly under the framework of constrained MDPs \citep{Altman99constrainedmarkov}. In such frameworks, the cumulative return objective is augmented with a set of constraints, for safe exploration \citep{SafeRLSurvey, LyaponovSafe, SafeExploration} or to reduce risk measures \citep{ChowRiskSensitive, RiskSensitive, Variance_Actor_Critic}. Batch learning algorithms \citep{Lange2012BatchRL} have previously been considered for counterfactual risk minimization and generalization \citep{SwamCounterfactual, SwamCounterfactual2} and for policy evaluation \citep{HCOPE, LihongMinimaxOffPolicy}, although little has been done for constrained offline policy-based optimization. This raises the question of how we can learn policies in RL from fixed offline data, similar to supervised or unsupervised learning.
\section{Proofs of Theorems}
\subsection{Proof of Theorem 1 }
\textcolor{red}{\textbf{TODO : Check the theorem derivation}}
\begin{proof} We provide derivation for theorem \ref{thm:pdl_variance}. Recall that for all policies $\pi'$ and $\pi$, and corresponding state visitation distributions $d_{\pi'}$ and $d_{\pi}$, we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections
\begin{equation}
J(\pi') - J(\pi) \geq \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \text{Var}_{s \sim d_{\pi}, a \sim \pi} \Big[ f(s,a) \Big]
\end{equation}
where $f(s,a)$ is the dual function class, for the divergence between $d_{\pi'}(s,a)$ and $d_{\pi}(s,a)$
Following from Pinsker's inequality, the performance difference lemma written in terms of the state visitation distributions can be given by :
\begin{align}
& J(\pi') \geq \mathcal{L}_{\pi}(\pi') - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi}) \notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [ A^{\pi}(s,a) ] - \epsilon^{\pi} \mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi}) \notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} [ A^{\pi}(s,a) ] - \epsilon^{\pi} \sqrt{ \mathcal{D}_{\text{KL}}(d_{\pi'} || d_{\pi}) }
\end{align}
Following from \citep{TRPO}, we can alternatively write this as follows, where we further apply the variational form of the TV distance:
\begin{align}
& J(\pi') \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \mathbb{E}_{s \sim d_{\pi}} \Big [\mathcal{D}_{\text{TV}}(d_{\pi'} || d_{\pi})^{2} \Big]\notag \\
&= J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \mathbb{E}_{s \sim d_{\pi}} \Big [ \Big( \max_{f} \{ \mathbb{E}_{s \sim d_{\pi'}, a \sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] \} \Big)^{2} \Big]\notag \\
& \geq J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \mathbb{E}_{s \sim d_{\pi}} \Big[ \Big( \mathbb{E}_{s \sim d_{\pi'}, a\sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] \Big)^{2} \Big] \notag \\
&= J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \Big\{ \Big( \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a) ] - \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ \mathbb{E}_{s \sim d_{\pi}, a \sim \pi} [ f(s,a)] ] \Big)^{2} \Big \} \notag \\
& = J(\pi) + \mathbb{E}_{s \sim d_{\pi}, a \sim \pi'} \Big[ A^{\pi}(s,a) \Big] - C \cdot \max_{f} \text{Var}_{s \sim d_{\pi}, a \sim \pi} \Big[ f(s,a) \Big] \notag \\
& = J(\pi) + \mathbb{E}_{s \sim d_{\mathcal{D}}, a \sim \mu_{\mathcal{D}}} \Big[ \frac{d_{\pi}(s)}{d_{\mathcal{D}}(s)} \cdot \frac{\pi(a|s)}{\mu_{\mathcal{D}}(a|s)} \cdot A^{\pi}(s,a) \Big] - C \cdot \text{Var}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big( \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} \cdot f(s,a) \Big)
\end{align}
\end{proof}
\section{Appendix : Per-Step versus Episodic Variance of Returns}
\label{app-sec:variance_episodic_per_step}
Following from \citep{Variance_Actor_Critic, PrashanthVarianceAC}, let us denote the returns with importance sampling corrections in the off-policy learning setting as :
\begin{equation}
\label{eq:variance_traj_IS_basic}
D^{\pi}(s,a) = \sum_{t=0}^{T} \gamma^{t} r(s_t, a_t) \Big( \prod_{t'=1}^{T} \frac{\pi(a_{t'} \mid s_{t'})}{\mu(a_{t'} \mid s_{t'})} \Big) \mid s_0 = s, a_0 = a, \tau \sim \mu
\end{equation}
From this definition in equation \ref{eq:variance_traj_IS_basic}, the action-value function, with off-policy trajectory-wise importance correction is $Q^{\pi}(s,a) = \mathbb{E}_{(s,a) \sim d_{\mu}(s,a)} [ D^{\pi}(s,a) ]$, and similarly the value function can be defined as : $V^{\pi}(s) = \mathbb{E}_{s \sim d_{\mu}(s)} [ D^{\pi}(s) ]$. For the trajectory-wise importance corrections, we can define the variance of the returns, similar to \citep{RiskSensitive} as :
\begin{align}
\label{eq:variance_returns_traj}
& \mathcal{V}_{\mathcal{P}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\mu}(s,a)} [ D^{\pi}(s,a)^{2} ] - \mathbb{E}_{(s,a) \sim d_{\mu}(s,a)} [ D^{\pi}(s,a) ]^{2}
\end{align}
where we note that, as in \citep{sobel}, equation \ref{eq:variance_returns_traj} also follows a Bellman-like equation, although due to the lack of monotonicity required for dynamic programming (DP), such measures cannot be directly optimized by standard DP algorithms \citep{RiskSensitive}.
In contrast, if we consider the variance of returns with stationary distribution corrections \citep{DualDICE, BreakingHorizon}, rather than the product of importance sampling ratios, the variance term involves weighting the rewards with the distribution ratio $\omega_{\pi/\mu}$. Typically, the distribution ratio is approximated using a separate function class \citep{MWL_Uehara}, such that the variance can be written as :
\begin{equation}
\label{eq:returns_state_dist}
W^{\pi}(s,a) = \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a), \quad a \sim \pi(\cdot \mid s), \ (s,a) \sim d_{\mathcal{D}}(s,a)
\end{equation}
where we denote $\mathcal{D}$ as the data distribution in the fixed dataset, collected by either a known or unknown behaviour policy. The variance of returns under occupancy measures is therefore given by :
\begin{equation}
\label{eq:variance_returns_stationary}
\mathcal{V}_{\mathcal{D}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ W^{\pi}(s,a)^{2} \Big] - \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ W^{\pi}(s,a) \Big]^{2}
\end{equation}
Note that the variance expression in equation \ref{eq:variance_returns_stationary} depends on the squared per-step rewards with distribution correction ratios. We refer to this as the dual form of the variance of returns, in contrast to the primal form of the variance of expected returns \citep{sobel}.
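For intuition, the dual-form variance in equation \ref{eq:variance_returns_stationary} is simply the variance of the ratio-weighted per-step rewards under the dataset distribution. A minimal sketch, in which the rewards and the estimated ratios $\omega_{\pi/\mathcal{D}}$ are synthetic placeholders rather than outputs of an actual density-ratio estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic batch drawn from d_D: per-step rewards and estimated
# distribution-correction ratios omega = d_pi / d_D (placeholders)
r = rng.normal(1.0, 0.5, size=10_000)
omega = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)

W = omega * r                              # W^pi(s,a) = omega(s,a) * r(s,a)
V_D = (W ** 2).mean() - W.mean() ** 2      # dual-form variance estimate
```

No product over time steps appears here: each sample contributes a single ratio-weighted reward, which is the source of the variance reduction relative to trajectory-wise corrections.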
Although the variance under episodic per-step importance sampling corrections in equation \ref{eq:variance_returns_traj} and the variance with stationary distribution corrections in equation \ref{eq:variance_returns_stationary} are closely related, following \citep{BisiRiskAverse} we will show that the variance with distribution corrections in fact upper bounds the variance under importance sampling corrections. This relationship is important, since constraining the policy improvement step under variance constraints with occupancy measures then yields a lower bound on the offline optimization objective, similar to \citep{CQL}.
\subsection{Proof of Lemma \ref{lemma:variance_upper_bound} : Variance Inequality}
\label{app-lem:upper_bound_variance}
Following \citep{BisiRiskAverse}, we show that the variance of per-step rewards under occupancy measures, denoted by $\mathcal{V}_{\mathcal{D}}(\pi)$, upper bounds the variance of episodic returns $\mathcal{V}_{\mathcal{P}}(\pi)$:
\begin{equation}
\mathcal{V}_{\mathcal{P}}(\pi) \leq \frac{\mathcal{V}_{\mathcal{D}}(\pi)}{(1-\gamma)^{2}}
\end{equation}
\begin{proof} The proof of Lemma \ref{lemma:variance_upper_bound} follows \citep{BisiRiskAverse}. Denoting the returns as above, but for the on-policy case with trajectories under $\pi$, as $D^{\pi}(s,a) = \sum_{t=0}^{\infty} \gamma^{t} r(s_t,a_t)$, and the return objective as $J(\pi) = \mathbb{E}_{s_0 \sim \rho, a_t \sim \pi(\cdot | s_t), s' \sim \mathcal{P}} \Big[ D^{\pi}(s,a) \Big]$, the variance of episodic returns can be written as:
\begin{align}
\label{eq:var_episodic_expansion}
& \mathcal{V}_{\mathcal{P}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ \Big( D^{\pi}(s,a) - \frac{J(\pi)}{(1 - \gamma)} \Big)^{2} \Big] \\
& = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ ( D^{\pi}(s,a) )^{2} \Big] + \frac{J(\pi)^{2}}{(1-\gamma)^{2}} - \frac{2 J(\pi)}{(1-\gamma)} \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ D^{\pi}(s,a) \Big]\\
& = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ D^{\pi}(s,a)^{2} \Big] - \frac{ J(\pi)^{2}}{ (1 - \gamma)^{2} }
\end{align}
Similarly, denoting the per-step rewards under occupancy measures as $W^{\pi}(s,a) = r(s,a)$ with $(s,a) \sim d_{\pi}(s,a)$, and writing the return objective equivalently as $J(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} [ r(s,a) ]$ based on the primal and dual forms of the objective \citep{MWL_Uehara, RLviaFenchel}, we can write the variance as:
\begin{align}
\label{eq:var_per_step_expansion}
& \mathcal{V}_{\mathcal{D}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ \Big( r(s,a) - J(\pi) \Big)^{2} \Big]\\
& = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ r(s,a)^{2} \Big] + J(\pi)^{2} - 2 J(\pi) \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} [ r(s,a) ]\\
& = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ r(s,a)^{2} \Big] - J(\pi)^{2}
\end{align}
Given equations \ref{eq:var_episodic_expansion} and \ref{eq:var_per_step_expansion}, we have the following inequality:
\begin{align}
& (1 - \gamma)^{2} \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ D^{\pi}(s,a)^{2} \Big] \leq (1 - \gamma)^{2} \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ \Big( \sum_{t=0}^{\infty} \gamma^{t} \Big) \Big( \sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)^{2} \Big) \Big] \\
& = (1 - \gamma) \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ \sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)^{2} \Big]\\
& = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ r(s,a)^{2} \Big]
\end{align}
where the first inequality follows from the Cauchy--Schwarz inequality, the second line uses $\sum_{t=0}^{\infty} \gamma^{t} = 1/(1-\gamma)$, and the last line follows from the definition of the normalized discounted occupancy measure $d_{\pi}$. Combining this with equations \ref{eq:var_episodic_expansion} and \ref{eq:var_per_step_expansion} yields the stated bound. This concludes the proof.
\end{proof}
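The bound can also be checked numerically on a toy example: episodes of discounted returns give an estimate of $\mathcal{V}_{\mathcal{P}}$, while drawing a time index $t \sim \mathrm{Geom}(1-\gamma)$ samples a reward from the discounted occupancy measure, giving $\mathcal{V}_{\mathcal{D}}$. This is a sketch under simplifying assumptions — the i.i.d.\ reward process stands in for a full MDP:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, T, n = 0.9, 200, 20_000

# episodes of i.i.d. N(0,1) rewards (a degenerate MDP suffices to test the bound)
R = rng.normal(0.0, 1.0, size=(n, T))
disc = gamma ** np.arange(T)
D = (R * disc).sum(axis=1)                   # episodic returns D^pi
V_P = D.var()

# (s,a) ~ d_pi: draw the time index t ~ Geometric(1 - gamma), then take r_t
t_idx = np.minimum(rng.geometric(1.0 - gamma, size=n) - 1, T - 1)
r_step = R[np.arange(n), t_idx]
V_D = (r_step ** 2).mean() - r_step.mean() ** 2

assert V_P <= V_D / (1.0 - gamma) ** 2       # the lemma's inequality holds
```

Here $\mathcal{V}_{\mathcal{P}} \approx 1/(1-\gamma^{2})$ while the bound is $\mathcal{V}_{\mathcal{D}}/(1-\gamma)^{2} \approx 100$, so the inequality holds with room to spare, consistent with the Cauchy--Schwarz step being loose.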
We can further extend Lemma \ref{lemma:variance_upper_bound} to off-policy returns, comparing stationary distribution corrections (i.e., marginalized importance sampling) with trajectory-wise importance sampling. Recall that we denote the variance under stationary distribution corrections as:
\begin{align}
& \mathcal{V}_{\mathcal{D}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \Big( \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) - J(\pi) \Big)^{2} \Big]\\
& = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}}(s,a)^{2} \cdot r(s,a)^{2} \Big] - J(\pi)^{2}
\end{align}
where $J(\pi) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) \Big]$. We denote the episodic returns with importance sampling corrections as $D^{\pi} = \sum_{t=0}^{T} \gamma^{t} r_t \rho_{0:t}$. The variance, as derived earlier, is given by:
\begin{equation}
\mathcal{V}_{\mathcal{P}}(\pi) = \mathbb{E}_{(s,a) \sim d_{\pi}(s,a)} \Big[ D^{\pi}(s,a)^{2} \Big] - \frac{ J(\pi)^{2}}{ (1 - \gamma)^{2} }
\end{equation}
We therefore have the following inequality:
\begin{align}
& (1 - \gamma)^{2} \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ D^{\pi}(s,a)^{2} \Big] \leq (1 - \gamma)^{2} \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ \Big( \sum_{t=0}^{T} \gamma^{t} \Big) \Big( \sum_{t=0}^{T} \gamma^{t} r(s_t, a_t)^{2} \Big) \Big( \prod_{t=0}^{T} \frac{\pi(a_t | s_t)}{ \mu_{\mathcal{D}}(a_t | s_t) } \Big)^{2} \Big] \notag \\
& = (1 - \gamma) \mathbb{E}_{s_0 \sim \rho, a \sim \pi} \Big[ \sum_{t=0}^{T} \gamma^{t} r(s_t, a_t)^{2} \Big( \prod_{t=0}^{T} \frac{\pi(a_t | s_t)}{ \mu_{\mathcal{D}}(a_t | s_t) } \Big)^{2} \Big] \\
& = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}}(s,a)^{2} \cdot r(s,a)^{2} \Big]
\end{align}
which shows that Lemma \ref{lemma:variance_upper_bound} also holds for off-policy returns with stationary distribution corrections.
\subsection{Double Sampling for Computing Gradients of Variance}
\label{app:sec-gradient_variance}
The gradient of the variance term often suffers from the double sampling issue, making it impractical to use directly. This issue has been pointed out in several prior works \citep{PrashanthVarianceAC, Variance_Actor_Critic, ChowRiskSensitive}, since the variance involves the square of the objective function itself. Recall that we have:
\begin{equation}
\mathcal{V}_{\mathcal{D}}(\theta) = \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a)^{2} \Big] - \Big \{ \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) \Big] \Big \}^{2}
\end{equation}
The gradient of the variance term is therefore:
\begin{align}
\label{eq:var_grad_double_sampling}
& \nabla_{\theta} \mathcal{V}_{\mathcal{D}}(\theta) = \nabla_{\theta} \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a)^{2} \Big] \notag \\
& - 2 \cdot \Big \{ \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) \Big] \Big \} \cdot \nabla_{\theta} \Big \{ \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}} \Big[ \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a) \Big] \Big \}
\end{align}
where equation \ref{eq:var_grad_double_sampling} requires two independent sets of samples to compute the product of expectations in the second term. The variance of the returns with stationary state-action distribution corrections can be written as:
\begin{equation}
\label{eq:variance_expression_a_b}
\mathcal{V}_{\mathcal{D}}(\theta) = \mathbb{E}_{d_{\mathcal{D}}(s,a)} \underbrace{\Big[ \text{IS}(\omega, \pi_{\theta})^{2} \Big]}_{\text{(a)}} - \underbrace{\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big]^{2}}_{(b)}
\end{equation}
We derive the gradient of each of the terms (a) and (b) in equation \ref{eq:variance_expression_a_b} below. First, we find the gradient of term (a) w.r.t.\ $\theta$:
\begin{equation}
\label{eq:variance_grad_pi}
\begin{split}
\nabla_{\theta} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta})^{2} \Big] &= \nabla_{\theta} \sum_{s,a} d_{\mathcal{D}}(s,a) \text{IS}(\omega, \pi_{\theta})^{2} = \sum_{s,a} d_{\mathcal{D}}(s,a) \nabla_{\theta} \text{IS}(\omega, \pi_{\theta})^{2}\\
&= \sum_{s,a} d_{\mathcal{D}}(s,a) \cdot 2 \cdot \text{IS}(\omega, \pi_{\theta}) \cdot \text{IS}(\omega, \pi_{\theta}) \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \\
&= 2 \cdot \sum_{s,a} d_{\mathcal{D}}(s,a) \text{IS}(\omega, \pi_{\theta})^{2} \nabla_{\theta} \log \pi_{\theta}(a \mid s)\\
&= 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta})^{2} \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \Big]
\end{split}
\end{equation}
Equation \ref{eq:variance_grad_pi} shows that the gradient of this term w.r.t.\ $\pi_{\theta}$ has a form similar to the policy gradient, except that the critic estimate is replaced by the importance-corrected return, since $\text{IS}(\omega, \pi_{\theta}) = \omega_{\pi/\mathcal{D}}(s,a) \cdot r(s,a)$. We next find the gradient of term (b) from equation \ref{eq:variance_expression_a_b} w.r.t.\ $\theta$:
\begin{equation}
\nabla_{\theta} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big]^{2} = \nabla_{\theta} J(\theta)^{2} = 2 \cdot J(\theta) \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}} \cdot \{ \nabla_{\theta} \log \pi_{\theta}(a \mid s) \cdot Q^{\pi}(s,a) \} \Big]
\end{equation}
Overall, the expression for the gradient of the variance term is therefore:
\begin{multline}
\label{eq:variance_gradient}
\nabla_{\theta} \mathcal{V}_{\mathcal{D}}(\theta) = 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta})^{2} \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \Big]\\ - 2 \cdot J(\theta) \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \omega_{\pi/\mathcal{D}} \cdot \{ \nabla_{\theta} \log \pi_{\theta}(a \mid s) \cdot Q^{\pi}(s,a) \} \Big]
\end{multline}
The variance gradient in equation \ref{eq:variance_gradient} is difficult to estimate in practice, since it involves both the gradient of the objective and the objective $J(\theta)$ itself. This is the double sampling issue \citep{Baird}, which requires separate independent rollouts. Previously, \citet{Variance_Actor_Critic} tackled the gradient of the variance term using simultaneous perturbation stochastic approximation (SPSA) \citep{Spall92multivariatestochastic}, in which running estimates of both the return and the variance are maintained, and a two-timescale algorithm computes the gradient of the variance regularizer with per-step importance sampling corrections.
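The double sampling issue can be seen in a scalar caricature: estimating a product of expectations such as $J(\theta) \cdot \nabla_{\theta} J(\theta)$ from a single minibatch couples the noise of the two factors and biases the estimate by the variance of the batch mean, whereas two independent batches remove the bias. The numbers below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def product_estimates(n_batch, n_trials=20_000):
    """Estimate E[X]*E[X] (= 1 here) from batches of X ~ N(1, 2^2)."""
    same, indep = [], []
    for _ in range(n_trials):
        x1 = rng.normal(1.0, 2.0, n_batch)
        x2 = rng.normal(1.0, 2.0, n_batch)
        same.append(x1.mean() * x1.mean())    # one batch reused: biased
        indep.append(x1.mean() * x2.mean())   # two independent batches: unbiased
    return float(np.mean(same)), float(np.mean(indep))

biased, unbiased = product_estimates(n_batch=4)
# the same-batch estimate is inflated by Var(batch mean) = 4/4 = 1, so it
# concentrates near 2.0, while the independent-batch estimate stays near 1.0
```

This is exactly why the second term of equation \ref{eq:variance_gradient}, of the form $J(\theta)\cdot\nabla_\theta J(\theta)$, needs two independent rollouts for an unbiased estimate.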
\vspace{-1em}
\subsection{Alternative Derivation : Variance Regularization via Fenchel Duality}
\vspace{-0.4em}
In the derivation of our algorithm, we applied the Fenchel duality trick to the second term of the variance expression in equation \ref{eq:var_per_step_expansion}. An alternative derivation applies the Fenchel duality trick to both terms of the variance expression. This can be useful since equation \ref{eq:variance_gradient} requires evaluating both the gradient terms and the objective $J(\theta)$ itself, due to the analytical expression of the form $\nabla_{\theta} J(\theta) \cdot J(\theta)$, and hence suffers from the double sampling issue.
In general, Fenchel duality for the square function is given by:
\begin{equation}
\label{eq:general_fenchel_duality}
x^{2} = \max_{y} ( 2xy - y^{2} )
\end{equation}
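This identity is easy to verify: for fixed $x$, $2xy - y^{2} = x^{2} - (y - x)^{2}$, so the maximum over $y$ equals $x^{2}$ and is attained at $y = x$. A quick numeric check:

```python
import numpy as np

ys = np.linspace(-5.0, 5.0, 100_001)            # dense grid over the dual variable y
for x in np.linspace(-3.0, 3.0, 13):
    inner = 2.0 * x * ys - ys ** 2              # 2xy - y^2
    assert abs(inner.max() - x ** 2) < 1e-3     # maximum equals x^2 ...
    assert abs(ys[inner.argmax()] - x) < 1e-3   # ... attained at y = x
```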
Applying Fenchel duality to both terms, since both involve squares, we get:
\begin{equation}
\begin{split}
\label{eq:fenchel_term_a_variance}
\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta})^{2} \Big] &= \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \max_{y} \Big\{ 2 \cdot \text{IS}(\omega, \pi_{\theta}) \cdot y(s,a) - y(s,a)^{2} \Big\} \Big]\\
&= \max_{y} \Bigg\{ 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot y(s,a) \Big] - \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ y(s,a)^{2} \Big] \Bigg\}
\end{split}
\end{equation}
Similarly, applying Fenchel duality to the second term (b), with a scalar dual variable $\nu$, we have:
\begin{equation}
\begin{split}
\label{eq:fenchel_term_b_variance}
\mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big]^{2} = \max_{\nu} \Bigg\{ 2 \cdot \nu \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big] - \nu^{2} \Bigg\}
\end{split}
\end{equation}
Overall, after applying Fenchel duality, the variance term is as follows, leading to an objective of the form $\max_{y} \max_{\nu} \mathcal{V}_{\mathcal{D}}(\theta)$, which we can use as our variance regularizer:
\begin{align}
\label{eq:variance_fenchel_duality}
& \mathcal{V}_{\mathcal{D}}(\theta) = \max_{y} \Bigg\{ 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot y(s,a) \Big] - \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ y(s,a)^{2} \Big] \Bigg\} \notag \\
&- \max_{\nu} \Bigg\{ 2 \cdot \nu \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big] - \nu^{2} \Bigg\}
\end{align}
Using the variance of the stationary-distribution-corrected returns as a regularizer, we can find the gradient of the variance term w.r.t.\ $\theta$ as follows, where the terms involving gradients of the dual variables $y$ and $\nu$ vanish, since the duals are held fixed at their optima:
\begin{align*}
\label{eq:variance_gradient_regularizer_fenchel}
& \nabla_{\theta} \mathcal{V}_{\mathcal{D}}(\theta) = 2 \cdot \nabla_{\theta} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot y(s,a) \Big] - 0 - 2 \cdot \nabla_{\theta} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot \nu \Big] + 0\\
&= 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot y(s,a) \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \Big] - 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot \nu \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \Big]\\
&= 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Bigg[ \text{IS}(\omega, \pi_{\theta}) \cdot \nabla_{\theta} \log \pi_{\theta}(a \mid s) \cdot \Big\{ y(s,a) - \nu \Big\} \Bigg]
\end{align*}
Note that the two terms in the gradient above are almost identical, differing only through the dual variables $y(s,a)$ and $\nu$. Our variance term also requires separately maximizing over the dual variables, both of which have the following closed-form updates:
\begin{equation}
\label{eq:variance_gradient_regularizer_fenchel_nu_var}
\nabla_{\nu} \mathcal{V}_{\mathcal{D}}(\theta) = - 2 \cdot \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big] + 2 \nu = 0
\end{equation}
Solving this exactly leads to the closed-form solution $\nu = \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \Big]$. Similarly, we can solve exactly for the dual variable $y$, such that: \begin{equation}
\label{eq:variance_gradient_regularizer_fenchel_y_var}
\nabla_{y} \mathcal{V}_{\mathcal{D}}(\theta) = 2 \cdot \nabla_{y} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ \text{IS}(\omega, \pi_{\theta}) \cdot y (s,a) \Big] - \nabla_{y} \mathbb{E}_{d_{\mathcal{D}}(s,a)} \Big[ y(s,a)^{2} \Big] = 0
\end{equation}
Solving this exactly also leads to a closed-form solution, $y(s,a) = \text{IS}(\omega, \pi_{\theta}) = \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)} \cdot r(s,a)$. Note that the two dual solutions are closely related: $\nu$ is the expectation of the returns with stationary distribution corrections, whereas $y(s,a)$ is the pointwise distribution-corrected reward.
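Using the Fenchel identity $x^{2} = \max_{y}(2xy - y^{2})$, whose maximizer is $y = x$, we can numerically sanity-check that substituting the duals $y(s,a) = \text{IS}(\omega, \pi_{\theta})$ and $\nu = \mathbb{E}[\text{IS}]$ back into the two dual terms recovers the variance $\mathbb{E}[\text{IS}^{2}] - \mathbb{E}[\text{IS}]^{2}$. The IS samples below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic samples of IS(omega, pi) = omega(s,a) * r(s,a)
IS = rng.lognormal(size=50_000) * rng.normal(1.0, 0.5, size=50_000)

y = IS            # dual for term (a): pointwise maximizer y(s,a) = IS
nu = IS.mean()    # dual for term (b): scalar maximizer nu = E[IS]

term_a = 2.0 * (IS * y).mean() - (y ** 2).mean()   # recovers E[IS^2]
term_b = 2.0 * nu * IS.mean() - nu ** 2            # recovers E[IS]^2
V_D = term_a - term_b

assert abs(V_D - IS.var()) < 1e-8                  # matches the sample variance
```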
\section{An Epistemic Stance Framework for Analyzing Political Rhetoric}
\label{sec:epistemic_stance_framework}
This section formally introduces the task of epistemic stance detection and describes the details of the {F}act{B}ank\xspace dataset. We then explain how the epistemic stance framework relates to several rhetorical strategies often used in political discourse.
\subsection{Epistemic Stances} \label{sec:taskdef}
We define an epistemic stance tuple as a triple of \emph{(source, event, label)} within a sentence, where the label is the value of the source's epistemic stance (or a non-epistemic relation) toward the event.
The triples can be viewed as a fully connected graph among all sources and events in the sentence
(Figure~\ref{fig:motivating_example}).
We use the structure and theory of {F}act{B}ank\xspace \cite{sauri2012you} to identify sources, events and the stance labels.
\paragraph{Sources and Events}
A \emph{source} is an entity---either the text's author, or an entity mentioned in the sentence---which can hold beliefs.
{F}act{B}ank\xspace contains annotations for sources that are subjects of source-introducing predicates (SIPs),
a manually curated lexicon of verbs of report and belief, such as \emph{claim, doubt, feel, know, say, think}.
Annotations of these embedded sources allow us to analyze the author's depiction of the embedded source's beliefs towards an event. The special \emph{Author} source is additionally included to analyze the author's own beliefs.
{F}act{B}ank\xspace's definition of \emph{events} includes a broad array of textually described eventualities, processes, states, situations, propositions, facts, and possibilities. {F}act{B}ank\xspace identifies its event tokens as those marked in the highly precise, manually annotated {T}ime{B}ank\xspace and AQUAINT TimeML\footnote{\scriptsize{\url{https://web.archive.org/web/20070721130754/http://www.timeml.org/site/publications/specs.html}}} corpora.
\begin{figure}
\includegraphics[width=3in, height=1.3in, trim=0in 7.7in 7in 0,clip]{images/stance_labels.pdf}
\caption{Stance labels used in this work, ordered along two linguistic dimensions, as well as a separate non-epistemic category. \label{fig:stance_labels}}
\vspace{-0.27cm}
\end{figure}
\paragraph{Epistemic Stance Label}
{F}act{B}ank\xspace characterizes epistemic stances along two axes, polarity and modality.
The polarity is binary, with the values positive and negative---the event did (not) happen or the proposition is (not) true.
The modality constitutes a continuum ranging from uncertain to absolutely certain, discretely categorized as \textit{possible (PS)}, \textit{probable (PR)} and \textit{certain (CT)}. An additional \textit{underspecified or uncommitted stance (Uu)} is added along both axes to account for cases such as attribution to another source (non-commitment of the source) or when the stance of the source is unknown. The epistemic stance is then characterized as a pair \textit{(modality, polarity)} containing a modality and a polarity value (e.g., \textit{CT+}) (Figure~\ref{fig:stance_labels}).
FactBank gives epistemic stance labels only between certain pairs of sources and events, based on structural syntactic relationships. However, for raw text we may not have reliable access to syntactic structures, and sources and events must be identified automatically, which may not be completely accurate. We adopt a simple solution: we always assume edges among the cross-product of all sources and events within a sentence, and predict a separate \emph{{N}on-{E}pistemic}\xspace \textit{(NE)} category for the large majority of pairs. This accounts for any spurious event-source pairs, structurally invalid configurations such as an embedded source's stance towards an event outside their factive context (Figure~\ref{fig:motivating_example}: (s4,e2)), or a source that cannot be described as a belief holder (and thus, all its stances are \textit{NE}).
Given that a variety of datasets have been collected for tasks related to epistemic stance (\S\ref{sec:related_work}), \citet{stanovsky2017integrating} argue for combining them for modeling.
However, some datasets address different epistemic questions (e.g.,\ the reader's perspective), and they follow very different annotation guidelines and strategies, risking ambiguity in the labels' meaning.
In preliminary work we attempted to crowdsource new annotations but found the resulting labels to be very different from those of FactBank,
which was created by a small group of expert, highly trained annotators.
Thus we decided to exclusively use FactBank for modeling.
\ignore{
While the modality axis exhibits an ordinal relationship among the various values, it is essential to note that no such relationship exists along the polarity axis as it is strictly binary. why does this matter? it matters because mapping it to a continuous scale is not okay. The values are mapped onto -3 to 3, where -3 is smaller than -2 which is smaller than -1, smaller than 0 and so on. So mapping onto a continuous [-3, 3] scale enforces a ordinal relationship between all values of factuality, which does not match with the original conceptualization of this framework. It thus enforces a weird relationship between stances (e.g., PR- \leq Uu stance, implying that the degrees of commitment is lower for PR- compared to the uncommitted stance). How do we interpret a stance of -0.5 value and further compare it with -3 value, which is least committed according to the continuous scale framework? This makes drawing the distinctions on a continuous valued scale difficult.}
\ignore{
More recently, there has been a lack of clarity on unification of different annotation scales and paradigms adopted independently in the literature. This separation between annotated corpora has prevented a comparison across datasets, inhibiting advancements in one dataset to carry over to other annotation efforts. To address this issue, \citet{stanovsky2017integrating} proposed a continuous-valued scale and mapped annotations from different datasets to a unified scale. Although in our work, we use a simple four-class distinction during the annotation phase (making it convenient for crowd-workers), we also offer a way to produce continuous-valued output over epistemic cases~(See Section~\ref{sec:social_contention}). These continuous-valued scores align with the unified scale proposed by~\citet{stanovsky2017integrating}.
}
\subsection{Connections between Epistemic Stances and Rhetorical Strategies}
\label{sec:epispoli}
Some epistemic stances in {F}act{B}ank\xspace's framework can be mapped to a common political rhetorical strategy. For instance, a source utilizing \textit{certainly positive/negative} (\textit{CT+/CT-}) stances more frequently can be associated with displaying higher commitment levels. The \textit{CT+/CT-} stances can also help analyze \emph{political disagreements} by identifying two sources with opposite stances towards an event, i.e., a source asserting an event (\textit{CT+}) and a source refuting the same event (\textit{CT-}). A source may exhibit a \textit{probable/possible} (\textit{PR/PS}) stance to indicate that the event could have happened, abstaining from expressing strong commitments towards this event, which can be useful to analyze \emph{hedging}.
Finally, \textit{underspecified/uncommitted} (\textit{Uu}) stances can help identify the embedded sources whose beliefs are mentioned by the author while remaining uncommitted, a strategy related to \emph{footing-shift} in political discourse. Use of \textit{Uu} stances is also helpful to identify \emph{belief holders}---entities described as having epistemic stances (\S\ref{sec:case_study})---since sometimes the author remains uncommitted while reporting the embedded source's stance.
\ignore{
Having described {F}act{B}ank\xspace's framework, we now describe each epistemic stance label and its relationship with different rhetorical strategies used in political discourse.
\paragraph{\emph{Certainly Positive/Negative} (CT+/CT-) stances indicate high levels of author commitments and are useful to identify political contentions:} A source exhibits \emph{Certainly Positive/Negative} stance indicating high level of commitment towards an event, i.e., according to the source, the event definitely happened (did not happen), or the described proposition is definitely true (false). A source utilizing such stances more frequently can be associated with displaying higher commitment levels. These stances can also help analyze political disagreements by identifying two sources with opposite stances towards an event, i.e., a source asserting an event (CT+) and a source refuting the same event (CT-).
\paragraph{\emph{Probable or Possible} (PR+/PS+) stances indicate use of hedging language or lower levels of author commitments:}
A source exhibits a probable/possible stance to indicate that the event could have happened, or the described proposition is probably/possibly true. However, the source abstains from expressing strong commitments towards this event. These stances relates to use of hedging strategies~\cite{lakoff1975hedges, fraser2010hedging, hyland1996writing} in political discourse that exhibit some degrees of uncertainty.\footnote{~\citet{sauri2012you} additionally define probably/possibly negative (PR-/PS-) stances, though these stances are much less prevalent in the corpus. Following previous work~\cite{qian-2015-ml, qian-2018-gan}, we restrict ourselves to PR+/PS+ stances in this study.}
\paragraph{\emph{Underspecified} (Uu) stance is to used attribute beliefs to another source:}
A source exhibits an \emph{Underspecified} stance to either report someone else's belief, stay uncommitted whether the event happened, or be unsure about the status of the event. Using the \emph{Underspecified} stance, we can identify embedded sources whose beliefs are mentioned by the original source while remaining seemingly uncommitted. Such \emph{footing-shift} strategies~\cite{goffman1981forms, clayman1992footing} are quite common in political rhetoric where actors advance their own beliefs while suggesting they belong to someone else.
}
\ignore{\subsection{Annotation Process}
For collecting epistemic stance annotations, we sampled one sentence from each of 308 books to ensure broad corpus diversity. (The 308 consisted of all books in an earlier version of CAIB.) We did not attempt to filter to political discussion. To ensure a dataset with a range of complex linguistic and rhetorical phenomena, we considered sentences with more than 15 tokens and at least one embedded event.
We conducted an initial pilot study with 19 sentences, attempting to delineate a \emph{Reporting} stance (where a source reports what someone else has said, without taking a stance whether it happened) versus a general \emph{Uncommitted} stance, following \citealp{prabhakaran2015new}'s annotation of reported beliefs. However, we found annotators often confused \emph{Reporting} with a general \emph{Uncommitted} stance, so we merged them.
We proceeded to the larger scale annotation; a sample prompt is included on the dataset website.
After the additional quality control filtering described below, we obtain a raw inter-annotator agreement rate $0.793$ and chance-adjusted agreement rate (Krippendorff $\alpha$) $0.554$ for 51,805 annotations.
This is broadly in line with reported chance-adjusted agreement rates for author-only annotations: $0.60$ in \citet{prabhakaran2015new} or $0.66$ in \citet{rudinger2018neural}. For multi-source annotations, \citet{sauri2012you} reported an overall chance-adjusted agreement rate of 0.81, but only for 30$\%$ of all the (manually curated) events in the corpus; \citet{de-marneffe-etal-2012-happen} reports an agreement rate of $0.66$ for the three-category (Pos, Neg, Uu) version of their reader's perspective factuality annotations.
While the agreement rate for this and related datasets is slightly lower than some of the conventional standards seen in NLP, this task has genuine, substantive ambiguity, which ought to be modeled as a distribution of annotator responses \cite{pavlick2019inherent}, rather than forcing into a single answer.
Annotations were collected on the Figure Eight platform.
Crowdworkers were selected from the highest contributor level (level 3) and were limited to those from the U.S.; we did not limit to native English speakers.
Workers were paid \$0.10 per annotation, with a median 54 (mean 115) seconds per annotation. Worker speeds varied by multiple orders of magnitude (from 1.8 to 355 sec/anno, averaged per worker).
We used several strategies for quality filtering.
(1) During annotation, we use Figure Eight's ``hidden gold'' testing facility to restrict to workers who had a high agreement rate with a number of tuples from {F}act{B}ank\xspace which we designated as a gold standard. While this may have artificially suppressed genuine disagreements, when we did not use it, we observed highly erroneous ``spam''-like responses.
(2) We remove judgments originating from IP addresses used by more than one unique worker, or from workers associated with more than five IP addresses, which seemed to be low quality annotations.
(3) After discarding items with fewer than three judgments, we infer final aggregated labels via the MACE model and software \cite{hovy2013learning},\footnote{\url{https://github.com/dirkhovy/MACE}}
which uses Bayesian unsupervised learning to weigh the judgments of different annotators by their relative reliability;
\citet{Paun2018Comparing} found MACE was one of the best models for crowdsourced annotation aggregation. We use the most probable label inferred by MACE as the final label.\footnote{We experimented with MACE's item-level posteriors for soft label training \cite{fornaciari-etal-2021-beyond}, but observed similar results as training on hard labels.} After quality filtering, our dataset consists of 8465 annotated event-source pairs spanning 308 sentences from the CAIB corpus (Table~\ref{table:stats}).
\ag{should release annotations from individual annotators, along with MACE labels.}
\begin{table}
\centering \small
\begin{tabular}{cccc|c}
Pos & Neg & Uu & NE & Total \\ \addlinespace[0.05cm] \hline \addlinespace[0.05cm]
1176 & 254 & 641 & 6394 & 8465
\end{tabular}
\caption{Counts for \emph{Positive}, \emph{Negative}, \emph{Uncommitted}, and \emph{{N}on-{E}pistemic}\xspace tuples in {P}oli{B}elief\xspace.
\label{table:stats}}
\end{table}
\ignore{
\textbf{Annotator modeling: MACE} \label{sec:mace}
\noindent
We noticed a wide variation in how different workers approached the annotation task. Since the task has substantial subjectivity, we hoped to better model which annotators were more reliable, in order to derive higher quality, aggregated labels for each item. Following \citet{Paun2018Comparing}, who found that MACE~\cite{hovy2013learning} was among several top-performing models to efficiently aggregate crowd-sourced NLP annotations, we apply the open source MACE software\footnote{\url{https://github.com/dirkhovy/MACE}} with default settings to the dataset of judgments on both test and non-test items.\footnote{We used MACE in purely unsupervised mode in order to better explore errors or disagreements between annotators and our putative gold standard, as FigureEight's quality control system already ensures all annotators had at least 80\% agreement with the test items' labels.
\bto{relax this stmt if we don't end up analyzing it}} MACE is an unsupervised, joint probabilistic model of annotators and judgments for crowd-sourced settings which infers probabilistic labels for each annotated item weighted by the estimated proficiency of each annotator.
MACE also infers posterior entropies for each item which can be used as an approximation for the model’s confidence in the prediction, lower entropy implies higher confidence~\cite{hovy2013learning}. We use these entropies to identify items with higher annotator disagreements. Table~\ref{table:disagreement} (Appendix) provides some of the difficult examples from our dataset where annotators had high disagreements. Most often, annotators find it difficult to disambiguate a positive/negative versus uncommitted factuality stance. \scomment{this is strange---shouldn't it be straightforward to disambiguate decidedly positive/negative items from uncommitted ones?} For instance, consider the following sentence with ``Author'' as source (marked in red) and ``good'' as event (marked in blue).
\emph{\textcolor{red}{Author:} But when the crisis part actually involves putting us out of work , it 's hard to see how pride in our identity will do us any \textcolor{blue}{good}.}
For this example, 2 annotators voted positive, 1 voted negative and 2 voted for uncommitted class. The counter-factual nature of this statement probably confused the annotators.
}}
\section{Case Study: Belief Holder Identification}
\label{sec:case_study}
Political discourse involves agreement and contention between the author and other belief-holding sources they cite.
As a first step, we extract major belief holders mentioned in a text to allow
analysis of ideological trends in U.S.\ political discourse.
\subsection{Corpus Description} \label{sec:caib}
We conduct our case study on the new Mass-Market Manifestos (MMM) corpus, a curated collection
of political nonfiction
authored by U.S.\ politicians, media activists, and opinion elites in English, published from 1993 to 2020.
It subsumes and more than triples the size of Contemporary American Ideological Books \cite{sim2013measuring}.
The corpus contains 370 books (31.9 million tokens) spanning various U.S. political ideologies.
Human coders identified 133 books as liberal or left-wing, 226 as conservative or right-wing, and 11 as explicitly centrist or independent.
Since ideological opponents often draw from a shared set of concepts---sometimes stating perceived facts and sometimes dismissing others' claims---this corpus presents an ideal testbed for epistemic stance detection.
\subsection{Belief Holder Identification}
\label{sec:bh}
\noindent
A \emph{belief holder} is defined as a non-author source that holds at least one epistemic stance toward some event.
We identify belief holders by using our best-performing model (fine-tuned RoBERTa, predictions averaged over 5 random restarts) to infer epistemic stances for all source-event pairs identified in the $370$ books in the MMM corpus. For the problem of identifying sources that are belief holders as per this definition, we obtain $77.3$ precision and $79.4$ recall on FactBank's evaluation corpus.
For aggregate analysis (\S\ref{sec:polibh}), especially for named entity sources, a longer span is more interpretable and less ambiguous. Thus, when a source token is recognized as part of a traditional named entity
(via spaCy v3.0.6; \citet{honnibal-johnson-2015-improved}),
the belief holder is defined as the full NER span; otherwise, simply the source token is used.
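This fallback rule can be sketched independently of the spaCy API, assuming NER spans are given as token-index ranges (all names hypothetical):

```python
def belief_holder_span(source_idx, tokens, ner_spans):
    """Expand a source token to its full named entity span when available.

    ner_spans: list of (start, end) token ranges (end exclusive), e.g. as
    produced by an NER system. Falls back to the source token itself when
    the source is not part of any named entity.
    """
    for start, end in ner_spans:
        if start <= source_idx < end:
            return " ".join(tokens[start:end])
    return tokens[source_idx]
```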
\subsection{Comparison to Named Entity Recognition} \label{sec:ner}
Instead of using epistemic stance-based belief holder identification,
an alternative approach is to exclusively rely on
named entity recognition (NER) from a set of predefined types.
NER has been used in opinion holder identification \cite{kim-hovy-2004-determining} and within belief evaluation in the TAC KBP Belief/Sentiment track~\cite{tackbp-2016}.
By contrast, our model can instead find \emph{any} entity as a belief holder, as long as it holds epistemic stances, without a type restriction.
To illustrate this, we compare our belief holder identifier to a standard NER implementation from spaCy v3.0.6~\cite{honnibal-johnson-2015-improved},\footnote{CPU optimized version of \texttt{en$\_$core$\_$web$\_$lg}. Stanza's~\cite{qi2020stanza} performance-optimized NER system gave broadly similar results.}
trained on the English web portion of OntoNotes 5.0~\cite{ontonotes}.
We use entities identified as one of OntoNotes' 11 non-numeric named entity types.\footnote{\emph{Event, Facility, GPE, Language, Law, Location, NORP, Organization, Person, Product, Work\_of\_Art}}
Aggregating among all books in the corpus, the set of belief holders identified by our model
has only a 0.198 Jaccard similarity with the set of NER-detected entities (Appendix \S\ref{sec:appendix_belief_holders} Table~\ref{tab:belief holders} provides qualitative examples from one conservative book).\footnote{An entity is defined as a belief holder if it is the source for at least one epistemic tuple; similarly, it is a named entity if at least one occurrence is identified as part of an NER span.}
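The overlap statistic is plain Jaccard similarity over the two entity sets; a minimal sketch (names hypothetical):

```python
def jaccard(belief_holders, ner_entities):
    """Jaccard similarity between two sets of entity strings."""
    a, b = set(belief_holders), set(ner_entities)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```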
Is it reasonable to define a set of named entity types to identify belief holders?
We calculate each named entity type's \emph{belief score}, which is the average proportion of named entities of that type that are described as holding an epistemic stance.\footnote{For each source instance with same NER type, we find the proportion of epistemic (non-NE) stances among events in its sentence, then average these values across the corpus.}
As shown in Figure~\ref{fig:ner_dist}, while the Organization, NORP, Person, and GPE types have significantly higher belief scores than others, there is a wide range of variation,
including non-obvious types such as Work of Art (e.g., The Bible), suggesting that an NER-type whitelist would under- or over-cover possible belief holders. We provide a further linguistic breakdown of identified belief holders in Appendix \S\ref{sec:appendix_linguistic_analysis}.
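The belief score defined in the footnote can be sketched as follows (a toy implementation over hypothetical inputs, not our actual pipeline):

```python
from collections import defaultdict

def belief_scores(instances):
    """Average per-instance proportion of epistemic stances by NER type.

    instances: iterable of (ner_type, stances), where stances are the
    predicted labels ("Pos", "Neg", "Uu", "NE") of one source occurrence
    toward each event in its sentence. Per-instance proportions of
    epistemic (non-NE) stances are averaged within each NER type.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for ner_type, stances in instances:
        if not stances:
            continue
        proportion = sum(s != "NE" for s in stances) / len(stances)
        sums[ner_type] += proportion
        counts[ner_type] += 1
    return {t: sums[t] / counts[t] for t in sums}
```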
\begin{figure}[]
\centering
\includegraphics[width=0.37\textwidth]{images/ner_type_horizontal_without_eq.pdf}
\caption{Imperfect correlation between belief scores and OntoNotes NER types. {\footnotesize (WOA: Work of Art, PROD: Product, PER: Person, ORG: Organization, LOC: Location, NORP: Nationalities or Religious or Political Groups,
FAC: Facility, LANG: Language, GPE: Geo-Political Entity)}
\vspace{-0.2cm}
\label{fig:ner_dist}}
\end{figure}
\begin{table}[t]
\centering
\tiny
\begin{tabular}{lclc}
\multicolumn{2}{c}{{Highly Cited by Left-wing Authors}} &
\multicolumn{2}{c}{{Highly Cited by Right-wing Authors}} \\
\cmidrule(lr){1-2} \cmidrule(lr){3-4}
\multicolumn{1}{c}{{Belief Holder}} &
\multicolumn{1}{c}{{View}} &
\multicolumn{1}{c}{{Belief Holder}} &
\multicolumn{1}{c}{{View}} \\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}
Tom Delay & Opposed & Paul Johnson & Respected \\
Martin Gilens & Respected & Marvin Olasky & Respected \\
Michelle Alexander & Respected & Saul Alinsky & Opposed \\
Grover Norquist & Opposed & Robert Rector & Respected \\
Jane Mayer & Respected & Thomas Sowell & Respected \\
Albert Camus & Respected & The Tax Foundation & Respected \\
Consumers & Respected & Soviets & Opposed \\
Thomas Edsall & Respected & George Soros & Opposed \\
Jacob Hacker & Respected & Pew Research & Respected \\
James Baldwin & Respected & John Edwards & Opposed \\
Jeffrey Sachs & Respected & George Stephanopoulos & Opposed \\
Michele Bachmann & Opposed & John Stossel & Respected \\
Ben Bernanke & Unclear & Thomas Sowell & Respected \\
Chris Hedges & Respected & Nicholas Eberstadt & Respected \\
Lobbyists & Opposed & James Wilson & Respected \\
Bill Moyers & Respected & Iran & Opposed \\
Daniel Bell & Respected & Hollywood & Opposed \\
David Cay Johnston & Respected & George Gilder & Respected \\
Instructor & Generic & Dennis Prager & Respected \\
Moderator & Generic & Arthur Brooks & Respected \\
\end{tabular}
\caption{Top 20 most frequently mentioned belief holders per author ideology (left vs.\ right),
among belief holders mentioned in $\geq$ 8 books in the MMM corpus.}
\vspace{-0.6cm}
\label{table:belief_holders_8}
\end{table}
\subsection{Political Analysis of Belief Holders}
\label{sec:polibh}
The MMM corpus, including both left and right-wing authors,
gives an opportunity to study the belief holder citation practices for each U.S.\ political ideology.
Using our epistemic stance and entity aggregation postprocessing (\S\ref{sec:bh}),
we count the number of books each belief holder is mentioned in.
There are 1269 sources mentioned as a belief holder in $\geq$ 8 books.
For each belief holder, we calculate its left-right citation ratio:
the proportion of left-wing books it is mentioned in, versus the proportion of right-wing books (proportions are calculated using a book pseudocount of 1 to avoid dividing by zero).
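As a concrete sketch of the smoothed ratio (assuming, as one reading of the description, that the pseudocount of 1 is added to both the mention count and the book total; the defaults reuse the MMM ideology counts reported above):

```python
def citation_ratio(n_left_books_with, n_right_books_with,
                   n_left_total=133, n_right_total=226):
    """Left-right citation ratio for one belief holder, with add-1 smoothing.

    Compares the proportion of left-wing books mentioning the belief holder
    with the corresponding right-wing proportion.
    """
    left_prop = (n_left_books_with + 1) / (n_left_total + 1)
    right_prop = (n_right_books_with + 1) / (n_right_total + 1)
    return left_prop / right_prop
```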
Belief holders with a ratio $\sim$ 1.0 include some generic (\emph{team, organization, official}) and
anaphoric (\emph{anyone, many}) examples.
Table~\ref{table:belief_holders_8} shows the top 20 belief holders for both left and right, as ranked by this ratio, yielding a rich set of politicians (Delay, Edwards), journalists (Mayer, Stephanopoulos), activists (Norquist, Alinsky),
and many social scientists and scholars (Gilens, Johnson).
Most of these belief holders were recognized by an expert (political scientist coauthor) as being respected or opposed from the citing ideological perspective.
Based on prior knowledge of U.S.\ politics it was straightforward to immediately give such judgments for most entries; for a few unclear ones, we checked individual sentences mentioning the belief holder.
A common strategy is to describe an opponent's views or statements---the use of a rhetorical bogeyman.
\begin{table}[]
\centering
\tiny
\begin{tabular}{lllll}
\multicolumn{2}{c}{Left-cited} & & \multicolumn{2}{c}{Right-cited} \\ \cline{1-2} \cline{4-5}
Economists & Studies & & Founders & Democrats \\
Woman & Research & & Media & Officials \\
Polls & Republicans & & Poll & President \\
Scientists & Group & & Obama & Conservatives \\
Groups & Friend & & Government & Liberals
\end{tabular}
\caption{Top 10 most frequently mentioned belief holders per author ideology, among belief holders mentioned in at least 100 books.}
\vspace{-0.2cm}
\label{table:belief_holders_100}
\end{table}
\begin{figure}
\centering
\fbox{
\centering
\begin{minipage}[t]{0.4\textwidth}
\centering
\scriptsize
\begin{itemizesquish}
\item
We know that most of the \textbf{[Founders]}$_{s}$ regarded slavery as a wrong that would have to be addressed. \emph{Chuck Norris, Black Belt Patriotism (R)}
\item
Sometimes, whether against gator or human predator, you're on your own, as the frontier-expanding \textbf{[Founders]}$_{s}$ well knew. \emph{Charlie Kirk, The MAGA Doctrine (R)}
\item
This is not to say the \textbf{[founders]}$_{s}$ believed that only religious individuals could possess good character. \emph{William Bennett, America the Strong (R)}
\item
The \textbf{[founders]}$_{s}$, however, had quite another idea, based on their experience in the colonies over the decades before, where actual religious freedom had existed. \emph{Eric Metaxas, If You Can Keep It (R)}
\item
The \textbf{[Founders]}$_{s}$ recognized that there were seeds of anarchy in the idea of individual freedom [..], for if everybody is truly free, without the constraints of birth or rank or an inherited social order [..] then how can we ever hope to form a society that coheres?
\emph{Barack Obama, The Audacity of Hope (L)}
\end{itemizesquish}
\end{minipage}
}
\caption{Examples of \emph{founders} as a belief holder.}
\vspace{-0.5cm}
\label{fig:founders_examples}
\end{figure}
Repeating the analysis for widely cited belief holders
appearing in $\geq$ 100 books yields more general, and again politically meaningful, entities (Table~\ref{table:belief_holders_100}).
Some well-known patterns are clearly visible, such as
liberals' respect for technocratic authority (\emph{economists, scientists, research}),
and conservative respect for the semi-mythical \emph{founders} alongside derision for the \emph{media}.
Both sides frequently cite the opposition (L: \emph{Republicans}, R: \emph{Democrats}),
though interestingly the right cites both conservatives and liberals (relatively more frequently than the left).
Figure~\ref{fig:founders_examples} shows examples of \emph{founders}, with the most skewed ratio ($0.308 \approx 3.2^{-1}$) among this set of entities.
Overall, our automated belief holder identification yields a politically significant entity list, laying the groundwork for more systematic manual and computational analysis (e.g., network or targeted sentiment analysis).
\ignore{
\subsubsection{Imperfect correlation with syntactic roles}
Previous literature~\cite{bjorkelund2009multilingual, lluis2013joint, zhao2009multilingual, gormley2014low, teichert2017semantic} recognizes that neural models often rely on the syntax for language understanding tasks, which make us question whether our model also uses the syntactic role of an entity to qualify it as a belief holder? If yes, to what extent? Are the model predictions based purely on specific syntactic roles (such as the subject)?
To examine this correlation, we plot the conditional distribution of belief-score given the syntactic role: \mbox{$\mathcal{P}(Belief$-$Score~|~Syntactic~Role)$}. We consider four possibilities for syntactic roles\footnote{We use same dependency parser as mention in Section~\ref{sec:polibelief_dataset}.}: subject, direct object, any other direct dependency edge between a source and an event, and no dependency. As shown in Figure~\ref{fig:syntax}, there exists a moderate correlation between the belief-score of a source and its syntactic role. Although the correlation is higher when the source appears in the subject role, a non-trivial correlation also exists for other roles. While this imperfect correlation serves as an external validation to our approach, the absence of absolute correlation suggests that the model does not simply rely on an entity's syntactic role to identify belief holders.
For instance, consider the example shown in~\ref{racism_copula}. It demonstrates a case where the model detects a belief holder even when the source (racism) is not the subject of the sentence and is related to the event (evil) via copula dependency relation.
\begin{enumerate}[resume, label={(\arabic*)}]
\item \label{racism_copula}“\textbf{[Racism]$_{s}$} is one such \textbf{\underline{evil}$_{e}$} that seems invisible to those who don’t experience it daily and who don’t feel racist in their hearts.”~\cite{abdul-2016-writings}
\end{enumerate}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{images/syntax.pdf}
\caption{Correlation between belief-score and syntactic roles. Other: a dependency edge other than subject/object between the source and event pair. None: no direct edge between the pair.}
\label{fig:syntax}
\end{figure}
}
\ignore{
\subsection{Epistemological Differences}
\label{sec:social_contention}
We now focus on the second question concerning epistemological differences between sources: which sources diverge from the author of the text in their epistemic stances? As a first step in measuring these epistemological differences, we translate predictions from our computational model into continuous-valued stance scores. We then use the absolute difference between these continuous-valued stance scores to measure the differences between a source and the author.
The following describes our method to map the predictions from our computational model (for a tuple) to a continuous-valued stance score. Consider a random variable \emph{F} (denoting the epistemic stance) with possible outcomes $f_{c} \in \{1, 0, -1\}$ each occurring with a probability $p_{c}$. Here, we denote a positive stance by $+1$, uncommitted by $0$, and a negative stance by $-1$, aligning with the implicit ordering among these stance values. We omit the \emph{{N}on-{E}pistemic}\xspace class because it is unrelated to this implicit ordering. To compute $p_{c}$, we use the probabilistic output of our computational model, i.e., the output of the softmax activation ($\hat{f}$, Equation~\ref{eq:mlp}). However, the model provides probabilities for all four classes, including the \emph{{N}on-{E}pistemic}\xspace class. We convert the four-class probabilities to three-class conditional probabilities by conditioning on the epistemic classes only, i.e., we calculate $p_{c}=P[\hat{f}=c|c\neq ne]$. Finally, we define the expectation of \emph{F} as:
\begin{equation}
\mathop{{}\mathbb{E}}[F] = \sum_{c}f_{c}\; P(\hat{f}=c|c \neq ne)
\label{eq:exp}
\end{equation}
Following the above definition, the range of the continuous-valued epistemic stance score ($E[F]$) can vary from $[-1, 1]$ where a score of $+1$ refers to a \emph{positive} stance, $-1$ refers to a \emph{negative} stance and 0 refers to an \emph{Uncommitted} stance.
We aggregate these continuous stance scores for a source over all the events in which it participates. Using these aggregated continuous-valued stance scores, we measure the epistemological difference (ED) between two sources ($s_{1}$ and $s_{2}$) by computing the absolute difference between their respective scores~(Equation~\ref{eq:cs}).
\begin{equation}
ED({s_{1}}, {s_{2}}) = \displaystyle\left\lvert \mathop{{}\mathbb{E}}[F_{s_{1}}]-\mathop{{}\mathbb{E}}[F_{s_{2}}] \right\rvert
\label{eq:cs}
\end{equation}
The epistemological difference between two sources can vary from $[0, 2]$, with a higher score implying that the two sources significantly differ in their stances towards an event.
We now present some instances with high epistemological difference scores between a mentioned source and the author (aggregated over all tuples where this source holds belief). We observe that these are instances of reported beliefs. For instance, In \ref{times}, the author of the text is reporting the beliefs of the source ``Times". The continuous-valued stance score for ``Author" is $0.32$, which is more towards an uncommitted stance, whereas the continuous-valued stance score for ``Times" is $0.95$, which denotes a highly positive stance. The ED score for this example is $0.63$.
\begin{enumerate}[resume, label={(\arabic*)}]
\item \label{times} \textbf{$[\text{Author}]^{[0.32]}_{s1}$}: But instead of finding sanctuary in the United States, the \textbf{$[\text{Times}]^{[0.95]}_{s2}$}~said, Edwin had \textbf{\underline{ended}$_{e}$} up in a ``Kafkaesque'' web of INS bureaucracy.~\cite{coulter-2015-adios}
\end{enumerate}
Similarly, In \ref{kristol}, the author of the text reports the beliefs of the source ``Kristol". The continuous-valued stance score for ``Author" is $0.004$, implying an \emph{Uncommitted} stance. In contrast, the continuous-valued stance score for ``Kristol" is $0.22$, given the presence of modality particle "should" which expresses some degree of uncertainty. This implies ``Kristol" has a stance somewhere between neutral to positive towards the event. Overall, the ED score for this case is $0.216$.
\begin{enumerate}[resume, label={(\arabic*)}]
\item \label{kristol} \textbf{$[\text{Author}]^{[0.004]}_{s1}$}: To atone for its weakness, \textbf{$[\text{Kristol}]^{[0.22]}_{s2}$}~argued, America should \textbf{\underline{commence}$_{e}$} air strikes against the Iranian regime immediately.~\cite{carlson-2018-ship}
\end{enumerate}
The above observations are highly encouraging, given the challenges observed by existing research to identify instances of reported beliefs. For instance, ``Reported Belief" (ROB) is explored as a separate semantic category in ~\citet{prabhakaran2015new}, \citet{diab2009committed} and \citet{jiang2021thinks}. However, \citet{prabhakaran2015new} observe that model obtains a low F1 score for the ROB class as compared to other classes. \citet{de-marneffe-etal-2012-happen} and \citet{lee2015event} highlight the issue of discrepancy in the crowd-sourced annotations for events embedded under the report verbs (e.g., say). Furthermore, \citet{jiang2021thinks} report that it is difficult to segregate ROB category from factual category via a model trained on such crowd-sourced dataset. In light of these observations, results from our method seem optimistic to identify cases of reported beliefs. Thus, our approach can be considered as a reasonable method to identify such cases. Although the current model captures the cases of reported beliefs, as future work, we would also like to explore methods to capture sharper notions of disagreement, such as sources with different signed epistemic stances (e.g., positive versus negative). We would like to explore if longer discourse context is important to capture such sharp epistemological differences.
}
\ignore{
Footing
The main idea of footing (as canonically received) is that there is a principled way to distinguish between the notion/role of animator (the person voicing the speech/utterance), the role of author (the per- son or team who authored the text) and the notion of principal (the person (or persons) responsible for the utterance).
}
\section*{Acknowledgements}
We are thankful for the feedback and comments from the reviewers. We are grateful to Philip Resnik, Michael Colaresi, Arthur Spirling, Katherine Keith, Geraud Nangue Tasse, Kalpesh Krishna, Marzena Karpinska, and the UMass NLP group for valuable discussions during the course of the project. This work was supported by National Science Foundation grants 1814955 and 1845576. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\section{Conclusion}
\vspace{-0.15cm}
Semantic modeling has exciting potential to deepen the NLP analysis of political discourse. In this work, we analyze the epistemic stance of various sources toward events, by developing a RoBERTa-based model, and using it for identifying major belief holders mentioned by political authors. We conduct a large-scale analysis of the Mass Market Manifestos corpus of U.S.\ political opinion books, where we characterize trends in cited belief holders across U.S.\ political ideologies. In future, we hope to use this framework to help construct a database of beliefs, belief holders, and their patterns of agreement and disagreement in contentious domains.
\section{Case Study: Hedging and Power}
\label{sec:hedging}
\noindent
\citet{jalilifar2011power} examine the relationship between an author's perceived political power and their expressed commitment to their beliefs.
While hedging and hesitations have been utilized to measure lack of commitment~\cite{philips1985william}, political discourse can feature many more strategies beyond a simple lexicon of hedge words, such as indirect speech acts, hypothetical if-then clauses, or framing claims as questions~\cite{fraser2010hedging}. Thus, analyzing hedging requires understanding of the syntactic contexts within which claims are expressed, which our model can tackle. To establish the external validity of our proposed epistemic stance framework, we computationally replicate the findings of \citet{jalilifar2011power}'s manual content analysis.
The study examines transcripts of topically similar television interviews of three political figures,
George W.\ Bush (at the time, incumbent U.S.\ president),
Jimmy Carter (former U.S.\ president),
and David Coltart (founding member of Zimbabwe’s main opposition party).\footnote{The authors also analyzed interviews with U.S.\ politician Sarah Palin, but these transcripts were not available at the provided URL.}
For each interview transcript, we employ our epistemic stance classifier to predict the stance of the political figure (author source) towards all extracted events,
and
calculate each author's uncertainty level as the fraction of events with a \emph{PR+} or \emph{PS+} epistemic stance.
We find the same ordering of commitment as the previous work:
Bush using the fewest uncertain \textit{PR+/PS+} stances ($5.41\%$),
with progressively more for Carter ($8.32\%$) and Coltart ($12.2\%$).
This follows \citeauthor{jalilifar2011power}'s interpretation that commitment is correlated with power (Bush, for example, being the highest-status figure).
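The per-speaker uncertainty measure above reduces to a one-line computation; a minimal sketch (names hypothetical):

```python
def uncertainty_level(author_stances):
    """Fraction of events toward which the author holds an uncertain
    (PR+ or PS+) epistemic stance."""
    uncertain = {"PR+", "PS+"}
    return sum(s in uncertain for s in author_stances) / len(author_stances)
```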
\section{Model}
\vspace{-0.2cm}
\label{sec:model}
\begin{table*}[h]
\centering \tiny
\begin{tabular}{lrrrrrrrr}
\hline
\textbf{Model} &
\multicolumn{1}{c}{\textbf{CT+}} &
\multicolumn{1}{c}{\textbf{CT-}} &
\multicolumn{1}{c}{\textbf{PR+}} &
\multicolumn{1}{c}{\textbf{PS+}} &
\multicolumn{1}{c}{\textbf{Uu}} &
\multicolumn{1}{c}{\textbf{NE}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Macro Avg\\ (Non-NE)\end{tabular}}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Macro Avg\\ (All)\end{tabular}}} \\ \hline
DeFacto~\cite{sauri2012you} & 85.0 & 75.0 & 46.0 & 59.0 & 75.0 & - & 70.0 & - \\
SVM~\cite{sauri2012you, prabhakaran2010automatic} & 90.0 & 61.0 & 29.0 & 39.0 & 66.0 & - & 59.0 & - \\
BiLSTM~\cite{qian-2018-gan} & 85.2 & 74.0 & 58.2 & 61.3 & 73.3 & - & 70.4 & - \\
AC-GAN~\cite{qian-2018-gan} & 85.5 & 74.1 & 63.1 & 65.4 & 75.1 & - & 72.6 & - \\
BERT~\cite{jiang2021thinks} & 89.7 & 69.8 & 45.0 & 46.7 & 82.8 & 97.9 & 66.8 & 72.0 \\
RoBERTa (this work) & 90.7 & 78.4 & 51.4 & 62.7 & 84.8 & 97.8 & 73.6 & 77.6 \\ \hline
\end{tabular}
\caption{F1 scores for our RoBERTa-based epistemic stance classifier and all baseline models.}
\vspace{-0.5cm}
\label{tab:classifier_results}
\end{table*}
We present a simple and reproducible RoBERTa-based neural model for epistemic stance classification using a standard fine-tuning approach.\footnote{We intentionally keep the modeling simple to make it more accessible to political scientists and users with less computational experience.
We further simplify by augmenting {BERT}\xspace with a single task-specific layer, as opposed to the new task-specific model architectures proposed by \citet{pouran-ben-veyseh-etal-2019-graph, qian-2018-gan, rudinger2018neural}.} BERT fine-tuning is effective for many NLP tasks \cite{devlin-etal-2019-bert}, and recent analyses of pre-trained language models such as BERT~\cite{shi-etal-2016-string, belinkov2018internal, tenney-etal-2019-bert, tenney2018what, rogers-etal-2020-primer} show that such models encode syntactic and semantic dependencies within a sentence, which are highly relevant to the epistemic stance task.
Recently, \citet{jiang2021thinks} use a fine-tuned {BERT}\xspace model for author-only epistemic stance prediction, obtaining strong performance on several datasets. We extend their approach, developing a {BERT}\xspace model (using the RoBERTa~\cite{liu2019roberta} pre-training variant) for the structurally more complex multi-source task, and give the first full comparison to the foundational multi-source system, DeFacto~\cite{sauri2012you}.
We leave the exploration of other advanced transformer-based models~\cite{gpt3, 2020t5} for further performance gains as future work.
To develop a model suitable for multi-source predictions, we follow \citet{tenney2018what} and \citet{rudinger-etal-2018-neural-davidsonian}'s architecture for semantic (proto-role) labeling, which they formulate as predicting labels for pairs of input embeddings.
To predict the epistemic stance for an event-source pair $(e, s)$ in a sentence,
we first compute contextual embeddings for the sentence's tokens, $[h^{L}_{1}, h^{L}_{2}, \ldots, h^{L}_{n}]$,
from a {BERT}\xspace encoder's last ($L^{th}$) layer.
We concatenate the source ($h^{L}_{s}$) and event ($h^{L}_{e}$) token embeddings (each averaged over {BERT}\xspace's sub-token embeddings), and use a single linear layer to parameterize a final softmax prediction $\hat{f} \in [0,1]^{C}$ over the $C=6$ epistemic stance classes,\footnote{CT+, CT-, PR+, PS+, Uu, NE; ~\citet{sauri2012you} additionally define probably/possibly negative (PR-/PS-) stances. However, these stances are rare in the corpus, making modeling and evaluation problematic.
Following \newcite{qian-2015-ml, qian-2018-gan}, we omit them in this study.}
which is trained with cross entropy loss over all tuples in the training set.
We apply inverse frequency class weighting to encourage accurate modeling of comparatively rare classes such as \emph{CT-}, \emph{PR+}, and \emph{PS+}.
Finally, to cleanly analyze the author source in the same manner as other mentioned sources,
we augment the sentence with the prefix ``Author: '' (following a dialogue-like formatting),\footnote{Results were the same with and without the trailing colon.} and use its index and embedding for inferences about the author source.
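A minimal sketch of this prediction head in plain Python (a real implementation would use a deep learning framework; dimensions and names here are illustrative):

```python
import math

CLASSES = ["CT+", "CT-", "PR+", "PS+", "Uu", "NE"]

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def stance_head(h_source, h_event, W, b):
    """Linear-softmax head over concatenated source and event embeddings.

    h_source, h_event: last-layer token embeddings (lists of floats), each
    already averaged over sub-token embeddings. W is a C x 2d weight matrix
    and b a length-C bias, with C = 6 stance classes.
    """
    x = h_source + h_event  # concatenation, length 2d
    logits = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
              for row, b_i in zip(W, b)]
    return softmax(logits)
```

In training, the returned distribution would feed a class-weighted cross entropy loss over all tuples in the training set.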
Table~\ref{tab:classifier_results} shows the performance of our RoBERTa-based epistemic stance classifier. We compare our model against several baselines, including rule-based methods (DeFacto;~\citet{sauri2012you}), machine learning classifiers (SVM; \citet{sauri2012you, prabhakaran2010automatic}), and neural network based methods (BiLSTM and AC-GAN; \citet{qian-2018-gan}) as described in \S\ref{sec:existing_models}.\footnote{Since the DeFacto implementation is not available, we compare our model's predictions on the FactBank test set against evaluation statistics derived from the test set confusion matrix reported by \citeauthor{sauri2012you}. We use the implementation provided at \url{https://github.com/qz011/ef_ac_gan} for the SVM, BiLSTM, and AC-GAN baselines.} We also extend the author-only BERT model of \citet{jiang2021thinks} to support multi-source predictions in line with our modeling approach. The RoBERTa model performs best, obtaining a macro-averaged F1 score of $77.6\pm0.011$ on all six epistemic labels and $73.6\pm0.031$ on the original five epistemic labels (excluding the \emph{{N}on-{E}pistemic}\xspace label). Although the RoBERTa model has a much simpler architecture, it performs as well as or better than AC-GAN. All pairwise significance tests resulted in $p$-values $<0.01$. Details of implementation and statistical testing are provided in Appendix \S\ref{sec:appendix_implementation_details} and \S\ref{sec:appendix_significance_testing}.
The above epistemic stance classifier, like most previous modeling approaches~\cite{qian-2015-ml, sauri2012you},
requires pre-identified sources and events, which are not available in real-world text.
We use \citet{qian-2018-gan}'s two-step approach
to first identify sources and events in the input text and then determine stances for every recognized (source, event) pair.
Source and event identification is performed by two RoBERTa-based token classifiers, each fine-tuned on the same training corpus with a linear logistic layer that makes a binary prediction of whether a token is a source (or event).
Our source and event identification models achieve macro-averaged F1 scores of $81.8\pm0.019$ and $85.78\pm0.007$, respectively, slightly improving upon the only existing prior work of~\citet{qian-2018-gan} by $1.85\%$ and $1.29\%$, respectively, with pairwise significance tests resulting in $p$-values $<0.01$. We also experimented with a joint model to identify sources and events; however, individual classifiers gave better performance (Appendix \S\ref{sec:appendix_source_event_model}).
\section{Limitations}
Footing shifts could be analyzed further with multi-modal data, such as facial expressions.
Coreference handling in our analysis is imperfect, so distinct mentions of the same belief holder may not always be merged.
\section{Introduction}
\label{sec:intro}
\begin{figure*}[t]
\centering
\includegraphics[width=120mm, height=40mm, scale=0.8,clip,trim=0 0 0 0mm]{images/motivational_diagram.pdf}
\caption{Illustrative example, simplified and adapted from a sentence in the Mass Market Manifestos corpus.
There are four sources (s1--s4) and three events (e1--e3) with $4\times3=12$ labels between them;
all epistemic stances are shown, but most non-epistemic (NE) labels are hidden for clarity.
\S\ref{sec:intro} and \S\ref{sec:epistemic_stance_framework} describe the labels.
\label{fig:motivating_example}}
\end{figure*}
Political argumentation is rich with assertions, hypotheticals, and disputes over opponents' claims. While making these arguments, political actors often employ rhetorical strategies to display varying degrees of commitment to their claims. For instance, political scientists have studied the \textit{footing-shift} strategy, where actors convey their own beliefs while claiming that they belong to someone else~\cite{goffman1981forms, clayman1992footing}.
Sometimes they may attribute their beliefs to a majority of the population via \textit{argument from popular opinion}\ \cite{walton2008argumentation}.
Actors can also resort to \textit{hedging}, stating their own beliefs, but qualified with a partial degree of certainty~\cite{fraser2010hedging, lakoff1975hedges, hyland1996writing},
or express simple \textit{political disagreements},
contradicting claims made by their opponents~\cite{jang2009diverse, klofstad2013disagreeing, frances2014disagreement, christensen2009disagreement}.
Traditionally, political scientists and other scholars have manually analyzed
the impact of such strategies and argumentation on audience perception~\cite{clayman1992footing, fraser2010hedging}.
Recent advances in natural language processing (NLP) and digital repositories of political texts have enabled researchers to conduct large-scale analyses of political arguments using methods such as subjectivity analysis~\cite{liu2012sentiment, pang2009opinion}, argument mining~\cite{DBLP:conf/aaai/TrautmannDSSG20, toulmin-1958-the, walton1996argumentation}, and opinion mining~\cite{wiebe2005annotating, bethard2004automatic, kim-hovy-2004-determining, choi-2005-identifying}. While these approaches primarily concern argument structure and normative attitudes, we propose a complementary approach to analyze sources' \emph{epistemic} attitudes towards assertions~\cite{langacker-2009-investigations, anderson-1986-evidentials, arrese-2009-effective}---what they believe to be true and the extent to which they commit to these beliefs.
Consider an example shown in Figure~\ref{fig:motivating_example}, where the author of the text (s1) quotes a speculation from the Congressional Quarterly (s2) about what Mitch McConnell (s3) said concerning Obama (s4).
In this example, while the author of the text believes that the Congressional Quarterly hinted something about McConnell (thus exhibiting a \emph{certainly positive} (\textit{CT+}) stance towards the event e1),
she remains \emph{uncommitted} (\textit{Uu})
about the quoted event (e3) that McConnell describes (edge omitted for visual clarity).
Of course, this event is asserted as \emph{certainly negative} (\textit{CT-})
by McConnell, the speaker of the quote.
The Congressional Quarterly suggests that Mitch McConnell made a statement (a \emph{probably positive} (\textit{PR+}) stance towards e2) while remaining \emph{uncommitted} towards what he said.
Finally, \emph{Obama}'s own beliefs about whether he paid attention to Republican ideas are not expressed in this sentence; thus, s4 (Obama) has a \emph{non-epistemic} label toward the listening event (e3).
To address this challenging problem of epistemological analysis, researchers within the NLP community have created several datasets and models in various domains~\cite{minard2016meantime, Rambow2016BeSt, rudinger2018neural, lee2015event, stanovsky2017integrating, white2016universal, de-marneffe-etal-2012-happen}, often motivated directly by the interesting challenges of these linguistic semantic phenomena.
However, there is a great potential to use an epistemic stance framework to analyze social relations~\cite{soni2014modeling, prabhakaran2015new, swamy2017have}, motivating us to further advance this framework to support analysis of common rhetorical strategies and argumentation styles used in political discourse.
In this paper, we seek to demonstrate how \textit{epistemic stance} analysis can help computationally investigate the \textit{rhetorical strategies} employed in political discourse.
In particular, we use the theory, structure and annotations of {F}act{B}ank\xspace \cite{sauri2009factbank}, an expert-annotated corpus drawn
from English news articles,
which distinguishes different types of epistemic stances expressed in text.
{F}act{B}ank\xspace features annotations not just for the author, but also other sources (entities) mentioned in the text. Such multi-source annotations allow us to disambiguate the author's own beliefs from the beliefs they attribute to others.
Our main contributions in this work are:
\begin{itemize}
\item We conduct a literature review connecting ideas related to epistemic stance as studied across several disconnected scholarly areas of linguistics, NLP, and political science (\S\ref{sec:related_work}).
\item We develop a fine-tuned RoBERTa model~\cite{liu2019roberta} for multi-source epistemic stance prediction (\S\ref{sec:model}), whose simplicity makes it accessible to social scientist users,\footnote{All resources accompanying this project are added to our project page: \url{https://github.com/slanglab/ExPRES}} while performing at par with a more complex state-of-the-art model~\cite{qian-2018-gan}.
\item We use our model to identify the most frequent \emph{belief holders}, i.e., epistemic sources whose views or statements are expressed by the author. Identifying belief holders is
an essential first step in analyzing rhetorical strategies and arguments.
We conduct this study on the Mass-Market Manifestos (MMM) Corpus,
a collection of 370 contemporary English-language political books authored by an ideologically diverse group of U.S. political opinion leaders. We compare results to traditional named entity recognition. Finally, we analyze differences in what belief holders tend to be cited by left-wing versus right-wing authors, revealing interesting avenues for future work in the study of U.S.\ political opinion
(\S\ref{sec:case_study}).
\item In the appendix, we additionally validate our model by replicating an existing manual case study comparing the commitment levels of different political leaders\ (\S\ref{sec:appendix_hedging}, \citealp{jalilifar2011power}), and give further analysis of the model's behavior with negative polarity items and different types of belief holders (\S\ref{sec:appendix_bh}).
\end{itemize}
\section{Data Collection}
\label{sec:data_collection}
We sample sentences for annotation from 308 political books drawn from \emph{Contemporary American Ideological Books Corpus} (CAIB; \citealp{sim2013measuring}), a collection of mass-market political nonfiction books written by contemporary U.S.\ political figures such as national politicians, media personalities, and advocates.
For collecting epistemic stance annotations, we sampled one sentence per book. To ensure a dataset with a range of complex linguistic and rhetorical phenomena, we considered sentences with more than 15 tokens and at least one embedded event clause, defined as follows.
\textbf{Events and Sources}\ \
In order to annotate sentences from the corpus, we must first identify, for each sentence, relevant events and sources to analyze.
Our approach is to over-generate, identifying many possible events and sources, and rely on non-epistemic annotations and their modeling to identify instances of meaningful epistemic stances.
While we use {F}act{B}ank\xspace's framing of the semantic analysis problem, we find its approach to event-source identification problematic. {F}act{B}ank\xspace identifies its event tokens as the ones marked in the highly precise, manually annotated {T}ime{B}ank\xspace and AQUAINT TimeML\footnote{\url{http://www.timeml.org/site/timebank/timebank.html}} corpora, but to analyze epistemic stances in raw text, events must of course be automatically identified.
By contrast, we seek to identify all heads of sentential clauses as events. Using an automatic dependency parser with the Universal Dependencies (v1.3) formalism~\cite{NIVRE16.348},\footnote{CoreNLP 3.8.0 with its `english\_UD' pretrained model.} we designate the tree's
root as an event, as well as its descendants connected via any of the relations \emph{acl:relcl, advcl, ccomp, csubj, parataxis, xcomp}.\footnote{This heuristic has the limitation of excluding nominal events, such as \emph{The \textbf{firing} of the manager}, which is beyond the purview of this project.}
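The clausal-head heuristic can be encoded compactly. The sketch below assumes the parse is given as (token id, head id, relation) triples with head 0 marking the root; producing the parse itself (e.g., with CoreNLP) is outside this sketch.

```python
# Sketch of the event-identification heuristic: the root token plus every
# token attached to its governor via one of the clausal relations is an event.
EVENT_RELS = {"acl:relcl", "advcl", "ccomp", "csubj", "parataxis", "xcomp"}

def identify_events(parse):
    """Ids of the root token plus every token attached via a clausal relation."""
    roots = {tid for tid, head, rel in parse if head == 0}
    clausal = {tid for tid, head, rel in parse if rel in EVENT_RELS}
    return sorted(roots | clausal)

# "McConnell said that Obama never listened"
parse = [(1, 2, "nsubj"), (2, 0, "root"), (3, 6, "mark"),
         (4, 6, "nsubj"), (5, 6, "advmod"), (6, 2, "ccomp")]
# identify_events(parse) marks "said" (root) and "listened" (ccomp) as events
```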
Finding sources also presents a challenge.
{F}act{B}ank\xspace only analyzes sources that are subjects of
\emph{source-introducing predicates} (SIPs), from a lexicon of verbs about report and belief, such as \emph{claim, doubt, feel, know, say, think} \cite{sauri2012you}.
However, this can miss potentially relevant sources.
Therefore, we use a loose automatic filter that considers as a candidate source every noun or pronoun that does not have a noun or adjective parent. The special \emph{Author} source is always included. We also simplified {F}act{B}ank\xspace's source selection process by excluding nested sources: for the vast majority of sentences in our corpus, nested sources did not seem necessary, and it was challenging to train crowd-workers to interpret stances for them.
Finally, we consider the cross-product of all pairs of sources and events within a sentence, asking annotators to give a stance value for each pair. This can lead to a large number of tuples, especially in longer sentences, and it over-generates spurious candidate pairs, which makes necessary the common \emph{{N}on-{E}pistemic}\xspace~label. We believe this over-generation, however, helps give a realistic evaluation of multi-source epistemic stance analysis on a new corpus.
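Taken together, the over-generation step amounts to a noun/pronoun filter plus a cross-product. Below is a minimal sketch under a simplified token structure; the POS tag sets (including treating proper nouns as blocking parents) and the dict layout are assumptions, not the exact implementation.

```python
from itertools import product

NOMINAL = {"NOUN", "PROPN", "PRON"}

def candidate_sources(tokens):
    """tokens: dicts with 'text', 'pos', and 'head' (parent index or None).
    Keep nominals whose parent is not a noun or adjective; always add Author."""
    sources = ["Author"]
    for tok in tokens:
        parent = tokens[tok["head"]] if tok["head"] is not None else None
        blocked = parent is not None and parent["pos"] in {"NOUN", "PROPN", "ADJ"}
        if tok["pos"] in NOMINAL and not blocked:
            sources.append(tok["text"])
    return sources

def stance_tuples(tokens, events):
    """Full cross-product of candidate sources and pre-identified events."""
    return list(product(candidate_sources(tokens), events))

# "McConnell said that Obama listened" (heads point at governing tokens)
tokens = [
    {"text": "McConnell", "pos": "PROPN", "head": 1},
    {"text": "said", "pos": "VERB", "head": None},
    {"text": "that", "pos": "SCONJ", "head": 4},
    {"text": "Obama", "pos": "PROPN", "head": 4},
    {"text": "listened", "pos": "VERB", "head": 1},
]
```

With two events here, the cross-product yields six tuples, including spurious pairs that annotators may mark \emph{{N}on-{E}pistemic}\xspace.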
\textbf{Annotation Process}\ \
We conducted an initial pilot study with 19 sentences, attempting to delineate a reporting stance (where a source reports what someone else has said, without taking a stance on whether it happened) from a general uncommitted stance, following \citet{prabhakaran2015new}, who annotated reported beliefs; however, we found annotators often confused the two, so we merged them.
We proceeded to the larger scale annotation; a sample prompt is shown in Appendix's Figure~\ref{fig:prompt}.
After the additional quality-control filtering described below, we obtain a raw inter-annotator agreement rate of $0.793$ and a chance-adjusted agreement rate (Krippendorff's $\alpha$) of $0.554$ over 51,805 annotations.
This is broadly in line with reported chance-adjusted agreement rates for author-only factuality annotations: $0.60$ in \citet{prabhakaran2015new} and $0.66$ in \citet{rudinger2018neural}. For multi-source factuality annotations, \citet{sauri2012you} reported an overall chance-adjusted agreement rate of $0.81$; however, it covers only 30\% of the events in the corpus. \citet{de-marneffe-etal-2012-happen} report an agreement rate of $0.66$ for the three-category (positive, negative, Uu) version of their factuality annotations from the reader's perspective, with the agreement rate for the fine-grained seven-class categorization being even lower.
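For reference, the nominal Krippendorff's $\alpha$ reported above can be computed with the standard coincidence-matrix formulation; the following is a compact, dependency-free sketch, not the script used to produce the paper's numbers.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: one list of labels per item (the judgments that item received).
    Returns nominal Krippendorff's alpha via the coincidence matrix.
    (Degenerate data where every judgment is identical is undefined.)"""
    o = Counter()  # coincidence counts o[(c, k)]
    for judgments in units:
        m = len(judgments)
        if m < 2:
            continue  # items with a single judgment are not pairable
        counts = Counter(judgments)
        for c in counts:
            for k in counts:
                pairs = counts[c] * counts[k] - (counts[c] if c == k else 0)
                o[(c, k)] += pairs / (m - 1)
    n_c = Counter()  # marginal label totals
    for (c, _k), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    observed = sum(o[(c, c)] for c in n_c) / n                         # A_o
    expected = sum(v * (v - 1) for v in n_c.values()) / (n * (n - 1))  # A_e
    return (observed - expected) / (1 - expected)
```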
While the agreement rate for our dataset is slightly lower than some of the conventional standards seen in NLP, this task has genuine, substantive ambiguity, which could be captured and analyzed in future work along the lines of \citet{pavlick2019inherent}'s study of natural language inference.
Annotations were collected on the Figure Eight platform.\footnote{Annotation costs were supported by [removed for anonymity].
} Crowdworkers were selected from the highest contributor level (level 3) and were limited to those from the US; we did not limit to native English speakers.
Workers were paid \$0.10 per annotation, with a median 54 (mean 115) seconds per annotation. Worker speeds varied by multiple orders of magnitude (from 1.8 to 355 sec/anno, averaged per worker).
We used three strategies, each of which we observed improved data quality.
(1) During annotation, we used Figure Eight's ``hidden gold'' testing facility to restrict to workers who had a high agreement rate with a number of tuples from {F}act{B}ank\xspace which we designated as a gold standard.
(2) We removed judgments originating from IP addresses used by more than one unique worker, or from workers associated with more than five IP addresses, which seemed to be low quality annotations.
(3) After discarding items with fewer than three judgments, we infer final aggregated labels via the MACE model and software \cite{hovy2013learning},\footnote{\url{https://github.com/dirkhovy/MACE}}
which uses Bayesian unsupervised learning to weigh the judgments of different annotators by their relative reliability;
\citet{Paun2018Comparing} found MACE was one of the best models for crowdsourced annotation aggregation. We consider the most probable label inferred by MACE to be the final label in our corpus.
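MACE's Bayesian model is more sophisticated, but its core idea, down-weighting judgments from less reliable annotators, can be illustrated with a simple iterative reweighting scheme (a toy stand-in for illustration, not MACE itself):

```python
from collections import Counter, defaultdict

def weighted_aggregate(judgments, iterations=5):
    """judgments: list of (item, worker, label) triples. Alternate between
    (a) weighted voting per item and (b) re-scoring each worker by agreement
    with the current consensus. A toy stand-in for MACE's Bayesian model."""
    weight = defaultdict(lambda: 1.0)
    labels = {}
    for _ in range(iterations):
        votes = defaultdict(Counter)
        for item, worker, label in judgments:
            votes[item][label] += weight[worker]
        labels = {item: c.most_common(1)[0][0] for item, c in votes.items()}
        agree, total = defaultdict(float), defaultdict(float)
        for item, worker, label in judgments:
            total[worker] += 1
            agree[worker] += float(labels[item] == label)
        weight = defaultdict(lambda: 1.0,
                             {w: agree[w] / total[w] for w in total})
    return labels
```

A consistently dissenting worker's votes quickly lose influence, which is the qualitative behavior motivating the use of MACE here.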
After quality filtering, our dataset consists of 8465 event-source pairs spanning 308 sentences from the CAIB corpus (Table~\ref{table:stats}).
\begin{table}[t]
\begin{tabular}{c|rrrrr}
\hline \addlinespace[0.05cm]
\multirow{2}{*}{\#Sentences} & \multicolumn{5}{c}{\#Tuples} \\ \addlinespace[0.05cm] \cline{2-6} \addlinespace[0.05cm]
& Pos & Neg & Uu & NE & Total \\ \addlinespace[0.05cm] \hline \addlinespace[0.05cm]
\multicolumn{1}{r|}{308} & 1176 & 254 & 641 & 6394 & 8465 \\ \addlinespace[0.05cm] \hline \addlinespace[0.05cm]
\end{tabular}
\caption{{P}oli{B}elief\xspace corpus statistics; Pos: positive, Neg: negative, Uu: uncommitted, NE: non-epistemic}
\label{table:stats}
\end{table}
\section{Experiments}
\label{sec:experiments}
\begin{table*}[t]
\centering \scriptsize
\begin{tabular}{lrrrrrrrr}
\hline
\textbf{Model} &
\multicolumn{1}{c}{\textbf{CT+}} &
\multicolumn{1}{c}{\textbf{CT-}} &
\multicolumn{1}{c}{\textbf{PR+}} &
\multicolumn{1}{c}{\textbf{PS+}} &
\multicolumn{1}{c}{\textbf{Uu}} &
\multicolumn{1}{c}{\textbf{NE}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Macro Avg\\ (Non-NE)\end{tabular}}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Macro Avg\\ (All)\end{tabular}}} \\ \hline
DeFacto~\cite{sauri2012you} & 85.0 & 75.0 & 46.0 & 59.0 & 75.0 & - & 70.0 & - \\
SVM~\cite{sauri2012you, prabhakaran2010automatic} & 90.0 & 61.0 & 29.0 & 39.0 & 66.0 & - & 59.0 & - \\
BiLSTM~\cite{qian-2018-gan} & 85.2 & 74.0 & 58.2 & 61.3 & 73.3 & - & 70.4 & - \\
AC-GAN~\cite{qian-2018-gan} & 85.5 & 74.1 & 63.1 & 65.4 & 75.1 & - & 72.6 & - \\
BERT~\cite{jiang2021thinks} & 89.7 & 69.8 & 45.0 & 46.7 & 82.8 & 97.9 & 66.8 & 72.0 \\
RoBERTa (this work) & 90.7 & 78.4 & 51.4 & 62.7 & 84.8 & 97.8 & 73.6 & 77.6 \\ \hline
\end{tabular}
\caption{F1 scores for our RoBERTa based epistemic stance classifier and all baseline models.}
\label{tab:classifier_results}
\end{table*}
\subsection{Performance}
\paragraph{Source and Event Identification Models}
Our source and event identification models achieve macro-averaged F1 scores of $81.8 \pm 0.019$ and $85.78 \pm 0.007$, respectively, slightly improving upon the only existing prior work, \citet{qian-2018-gan}, by $1.85\%$ and $1.29\%$, with pairwise significance tests resulting in $p$-values less than $0.01$. We also experimented with a joint model to identify sources and events; however, individual classifiers gave us better performance.
\paragraph{Epistemic Stance Classifier}
We compare our model against several baselines, including a rule-based method (DeFacto;~\citet{sauri2012you}), machine learning classifiers (an SVM developed by \citet{sauri2012you} using lexico-syntactic features proposed by \citet{prabhakaran2010automatic}), and neural network based methods (BiLSTM and AC-GAN developed by \citet{qian-2018-gan}), as described in \S\ref{sec:existing_models}.\footnote{Since the DeFacto implementation is not available, we compare our model's predictions on the FactBank test set against evaluation statistics derived from the test set confusion matrix reported by \citeauthor{sauri2012you}. We use the implementation provided at \url{https://github.com/qz011/ef_ac_gan} for the SVM, BiLSTM, and AC-GAN baselines.}
We also extend the author-only BERT model developed by \citet{jiang2021thinks} to support multi-source predictions in line with our modeling approach.
Table~\ref{tab:classifier_results} shows the performance of our RoBERTa-based epistemic stance classifier and comparisons with the above-mentioned baselines. The RoBERTa model performs best, obtaining a macro-averaged F1 score of $77.6 \pm 0.011$ on all six epistemic labels and $73.6 \pm 0.031$ on the original five epistemic labels (excluding the \emph{{N}on-{E}pistemic}\xspace label). Although the RoBERTa model has a much simpler architecture, it performs on par with or better than AC-GAN. All pairwise significance tests resulted in $p$-values less than $0.01$.
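For clarity on the metric reported throughout: macro-averaged F1 is the unweighted mean of per-class F1 scores, which treats rare classes (e.g., CT-) the same as frequent ones. A minimal reference implementation (the label names in the example are illustrative):

```python
def macro_f1(gold, pred, labels):
    """Unweighted mean of per-class F1 over the given label set."""
    f1s = []
    for c in labels:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```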
\section{Epistemic Stance from Different Perspectives}
\label{sec:related_work}
The notion of epistemic stances has been studied under several scholarly areas, including linguistics, political science and NLP. In this section, we discuss various notions of epistemic stances and how they have been utilized in these different areas.
\subsection{Epistemic Stance in Linguistics}
A speaker's \emph{epistemic stance} is their positioning about their knowledge of, or veracity of, communicated events and assertions~\cite{biber-1989-styles, palmer-2001-mood, langacker-2009-investigations}. Epistemic stance relates to the concept of \textit{modality}, which deals with the degree of certainty of situations in the world, and has been extensively studied under linguistics~\cite{kiefer1987defining, palmer-2001-mood, lyons1977semantics, chafe-1986-evidentiality} and logic~\cite{horn1972semantic, hintikka1962knowledge, hoek1990systems, holliday2018epistemic}.
From a cognitivist perspective, epistemic stance concerns the pragmatic relation between
speakers and their knowledge regarding assertions~\cite{biber-1989-styles, mushin2001evidentiality, martin2005appraisal}.
\subsection{Epistemic Stance in Political Science}
The use of epistemic stances is widespread in political communication and persuasive language, to describe assertions when attempting to influence the reader's view~\cite{chilton-2004-analysing, arrese-2009-effective}. For instance, \citet{chilton-2004-analysing} studies the use of epistemic stances by speakers/writers for legitimisation and coercion; \citet{arrese-2009-effective} examines epistemic stances taken by speakers to reveal their ideologies. In these studies, a speaker's communicated stance may follow what they believe due to their experiences, inferences, and mental state \cite{anderson-1986-evidentials}. From a psychological perspective, \citet{shaffer1981balance} employs balance theory~\cite{heider1946attitudes}---the cognitive effect of knowing an entity's stance towards an issue---in explaining public perceptions of presidential candidates' issue positions.
\subsection{Epistemic Stance in NLP}
\label{sec:epistemic_stance_in_nlp}
In the NLP literature, epistemic stances---typically of authors, and sometimes of mentioned textual entities---have been studied
under the related concepts of
\emph{factuality} \cite{sauri2012you, rudinger-etal-2018-neural-davidsonian, lee2015event, stanovsky2017integrating, minard2016meantime,soni2014modeling}
and
\emph{belief commitments} \citep{prabhakaran2015new,diab2009committed}. \citet{de-marneffe-etal-2012-happen} prefers the term \emph{veridicality} to study the reader's, not author's, perspective.
We use the term \emph{epistemic stance} to avoid confusion with
at least two more recent subliteratures that use \emph{factuality} differently from the above.
In misinformation detection, factuality refers to a proposition's objective truth \cite{rashkin-etal-2017-truth, mihaylova2018fact, thorne2018fever, vlachos2014fact}.
By contrast, we follow the epistemic stance approach in not assuming any objective reality---we simply model whatever subjective reality that agents assert.
Furthermore, text generation work has studied whether text summaries
conform to a source text's asserted propositions---termed the factuality or ``factual correctness'' of a summary \cite{maynez-etal-2020-faithfulness, wiseman-etal-2017-challenges, kryscinski-etal-2019-neural, dhingra2019handling}.
Several researchers in NLP have explored interesting social science applications in multiple settings such as organizational interactions~\cite{prabhakaran2010automatic}, Supreme Court hearings~\cite{danescu2012echoes}, discussions~\cite{bracewell2012motif, swayamdipta2012pursuit} and online forums~\cite{biran2012detecting, rosenthal2014detecting}. In particular, \citet{prabhakaran2010automatic} use epistemic stances to analyze power relations in organizational interactions. These studies demonstrate the potential of epistemic stance analysis for social science applications. Motivated by these advances, we use the epistemic stance framework to analyze political rhetoric, a genre that has not been explored before.
\begin{table}[]
\centering \tiny
\begin{tabular}{p{0.65cm}llll}
\hline \addlinespace[0.05cm]
\textbf{Type} &
\textbf{Dataset} &
\textbf{Perspective} &
\textbf{Genre} &
\textbf{Label} \\ \addlinespace[0.05cm] \hline \addlinespace[0.05cm]
\multirow{5}{*}{Factuality} &
\begin{tabular}[c]{@{}l@{}}{F}act{B}ank\xspace \\ \cite{sauri2012you}\end{tabular} &
Multi &
News &
Disc (8) \\ \addlinespace[0.1cm]
&
\citealp{stanovsky2017integrating} &
Author &
News &
Cont [-3, 3] \\ \addlinespace[0.1cm]
&
\begin{tabular}[c]{@{}l@{}}MEANTIME \\ \cite{minard2016meantime}\end{tabular} &
Multi &
\begin{tabular}[c]{@{}l@{}} News \\ (Italian) \end{tabular} &
Disc (3) \\ \addlinespace[0.1cm]
&
\citealp{lee2015event} &
Author &
News &
Cont [-3, 3] \\ \addlinespace[0.1cm]
&
\begin{tabular}[c]{@{}l@{}}UDS-IH2 \\ \cite{rudinger2018neural}\end{tabular} &
Author &
Open &
\begin{tabular}[c]{@{}l@{}}Disc (2) \&\\ Conf [0,4]\end{tabular} \\ \addlinespace[0.1cm]
&
\citealp{yao-2021-mds} &
Multi &
News &
Disc (6) \\ \addlinespace[0.1cm]
&
\citealp{vigus-2019-dependency} &
Multi &
Open &
Disc (6) \\ \addlinespace[0.1cm]
\hline \addlinespace[0.1cm]
\begin{tabular}[c]{@{}l@{}}Indirect \\ Reporting\end{tabular} &
\citealp{soni2014modeling} &
Reader &
Twitter &
Likert (5) \\ \addlinespace[0.1cm] \hline \addlinespace[0.1cm]
\begin{tabular}[c]{@{}l@{}}Pragmatic \\ Veridicality\end{tabular} &
\begin{tabular}[c]{@{}l@{}}PragBank \\ \cite{de-marneffe-etal-2012-happen}\end{tabular} &
Reader &
News &
Disc (7) \\ \addlinespace[0.1cm] \hline \addlinespace[0.1cm]
\multirow{2}{*}{Beliefs} &
\citealp{diab2009committed} &
Author &
Open &
Disc (3) \\
&
\citealp{prabhakaran2015new} &
Author &
Forums &
Disc (4) \\ \addlinespace[0.1cm] \hline
\end{tabular}
\caption{Summary of epistemic stance annotated datasets. \emph{Perspective}: which sources are considered for annotation? Stance \emph{Label} may be discrete with the given number of categories (where many or all are ordered), or continuous with a bounded range.\textsuperscript{\ref{fn:datasets}}
All datasets except MEANTIME consist of English text.
\label{tab:datasets}
}
\vspace{-0.5cm}
\end{table}
\paragraph{Existing Datasets} Several existing datasets \cite{rudinger2018neural, lee2015event, prabhakaran2015new, diab2009committed, stanovsky2017integrating} have successfully driven the progress of epistemic stance analysis in NLP, but have largely focused on author-only analysis.~\citet{soni2014modeling} and \citet{de-marneffe-etal-2012-happen}~examine epistemic stances from the reader's (not author's) perspective.~Table~\ref{tab:datasets}~summarizes these datasets.\footnote{\label{fn:datasets}UDS-IH2 collects a binary category and a confidence score. \citet{yao-2021-mds} and \citet{vigus-2019-dependency} extend multisource annotations as dependency graphs with additional edge types.}
Political discourse is particularly interesting because the multiple sources discussed can have diverse stances towards the same event. Among all existing datasets, {F}act{B}ank\xspace~\cite{sauri2012you} and MEANTIME~\cite{minard2016meantime} explore multi-source analysis in the news domain. While MEANTIME has helped advance epistemic stance analysis in Italian, {F}act{B}ank\xspace---built on English news text---is closest to our goal.
\paragraph{Existing Models}
\label{sec:existing_models}
Several computational models have been developed for epistemic stance prediction as explicated in Table~\ref{tab:factuality_models}. Early models proposed deterministic algorithms based on hand-engineered implicative signatures for predicate lexicons \cite{lotan2013truthteller, nairn2006computing, sauri2012you}. A number of systems used lexico-syntactic features with supervised machine learning models, such as SVMs or CRFs \cite{diab2009committed, prabhakaran2010automatic, lee2015event, sauri2012you, stanovsky2017integrating}.
Lately, there has been a growing interest in using neural models for epistemic stance prediction \cite{rudinger2018neural, pouran-ben-veyseh-etal-2019-graph},
though sometimes with complex, task-specific network architectures (e.g.\ GANs; \citealp{qian-2018-gan}),
which raise questions about generalization and replicability for practical use by experts from other fields.
Recently, \citet{jiang2021thinks} explore fine-tuning pre-trained language models (LM), such as {BERT}\xspace, for author-only epistemic stance prediction by adding a simple task-specific layer. We take this more robust approach, extending it to multiple sources.
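As an illustration of how a single-sequence classifier can be conditioned on a particular (source, event) pair, one common strategy is to wrap the two spans in special marker tokens before encoding. The sketch below is purely illustrative: the marker names (\texttt{[SRC]}, \texttt{[EVT]}) and the span-marking strategy itself are assumptions for exposition, not necessarily the exact input format used here.

```python
def mark_spans(tokens, source_span, event_span):
    """Wrap the source and event token spans (half-open [start, end))
    in hypothetical marker tokens so a BERT-style classifier over the
    full sequence can condition on the chosen (source, event) pair."""
    out = []
    for i, tok in enumerate(tokens):
        if i == source_span[0]:
            out.append("[SRC]")
        if i == event_span[0]:
            out.append("[EVT]")
        out.append(tok)
        if i == source_span[1] - 1:
            out.append("[/SRC]")
        if i == event_span[1] - 1:
            out.append("[/EVT]")
    return out
```

For example, marking the pair in ``John said it rained'' with source span $(0,1)$ and event span $(3,4)$ yields \texttt{[SRC] John [/SRC] said it [EVT] rained [/EVT]}. In practice such markers would also be added to the tokenizer's vocabulary.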
\begin{table}[]
\centering \tiny
\begin{tabular}{llll}
\hline \addlinespace[0.05cm]
\textbf{Algorithm} &
\textbf{Features/Model} &
\textbf{Perspective} &
\textbf{Systems} \\ \addlinespace[0.05cm]\hline \addlinespace[0.05cm]
\multirow{3}{*}{Rule-Based} &
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Predicate \\ Lexicons\end{tabular}} &
Author &
\begin{tabular}[c]{@{}l@{}}\citealp{nairn2006computing} \\ \citealp{lotan2013truthteller}~(TruthTeller)\end{tabular}\\ \addlinespace[0.1cm]
&
&
Multiple &
\begin{tabular}[c]{@{}l@{}}\citealp{sauri2012you} \\ ({D}e{F}acto\xspace)\end{tabular} \\ \addlinespace[0.15cm] \hline \addlinespace[0.15cm]
\multirow{8}{*}{\begin{tabular}[c]{@{}l@{}}Feature- \\ Based \\ Supervised \\ Machine \\ Learning\end{tabular}} &
\multirow{5}{*}{\begin{tabular}[c]{@{}l@{}}Lexico-\\ Syntactic\end{tabular}} &
Author &
\begin{tabular}[c]{@{}l@{}}\citealp{diab2009committed}, \citealp{lee2015event} \\ \citealp{prabhakaran2015new}\end{tabular} \\ \addlinespace[0.1cm]
&
&
Reader &
\begin{tabular}[c]{@{}l@{}}\citealp{de-marneffe-etal-2012-happen}\\ \citealp{soni2014modeling}\end{tabular} \\ \addlinespace[0.1cm]
&
&
Multiple &
\citealp{qian-2015-ml} \\ \addlinespace[0.1cm] \cline{2-4} \addlinespace[0.1cm]
&
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Output of \\ Rule System\end{tabular}} &
Author &
\citealp{stanovsky2017integrating} \\
& & Multiple & \citealp{sauri2012you} \\ \addlinespace[0.3cm] \hline \addlinespace[0.15cm]
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Neural\\ Networks \\ (NN)\end{tabular}} &
LSTM &
Author &
\citealp{rudinger2018neural} \\
&
GAN &
Multiple &
\citealp{qian-2018-gan} \\
&
Graph NN &
Author &
\citealp{pouran-ben-veyseh-etal-2019-graph} \\ \addlinespace[0.15cm] \hline \addlinespace[0.15cm]
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Neural\\ Pretrained\end{tabular}} &
\multirow{2}{*}{BERT} &
Author &
\citealp{jiang2021thinks} \\
&
&
Multiple &
This work \\ \addlinespace[0.05cm] \hline \addlinespace[0.05cm]
\end{tabular}
\caption{Epistemic stance prediction models.}
\vspace{-0.27cm}
\label{tab:factuality_models}
\end{table}
\paragraph{General Stance Detection in NLP}
Recently, there has been growing interest in analyzing stance, including a broad spectrum of stance-takers (speaker/writer), the objects of stances, and their relationship. While our work also examines the stance relationship between a source (stance-taker) and an event (object), we differ from the existing literature in several ways. For instance, unlike our work, where a stance-taker is the author or a mentioned source in the text, \citet{mohtarami-etal-2018-automatic}, \citet{pomerleau2017fake} and \citet{zubiaga-etal-2016-stance} consider the entire document/message to be a stance-taker. Similarly, the object of the stance could be a target entity (such as a person, organization, movement, or controversial topic) that may or may not be explicitly mentioned in the input document~\cite{mohammad-etal-2016-semeval}. By contrast, in this work, event propositions (objects) are always embedded within the text.
Finally, we can also analyze the kind of stance relationship exhibited by the stance-taker towards an object from two linguistic perspectives: affect and epistemic. Affect involves the expression of a broad range of personal attitudes, including emotions, feelings, moods, and general dispositions~\cite{ochs-1989-language}, and has been explored in~\citet{mohammad-etal-2016-semeval}. On the other hand, epistemic---this work's focus---refers to the speaker's expressed attitudes towards knowledge of events and her degree of commitment to the validity of the communicated information~\cite{chafe-1986-evidentiality, biber-1989-styles, palmer-2001-mood}. The analysis explored in \citet{mohtarami-etal-2018-automatic}, \citet{pomerleau2017fake} and \citet{zubiaga-etal-2016-stance} appears to be epistemic, as they implicitly incorporate the knowledge or claims expressed in the evidence document and hence their stances towards them, although such distinctions are not made explicit in their work. While the stance literature discussed in this section has not been connected to the epistemic stance literature in NLP, we think interesting future work can be done to establish this relationship.
\section{Experimental Details}
\subsection{Implementation Details}
\label{sec:appendix_implementation_details}
All our models are implemented with PyTorch $1.9$, using \texttt{roberta-large} (with 1024-dimensional embeddings) accessed from AllenNLP $2.5.1$\ \cite{paszke2017automatic,gardner-etal-2018-allennlp}. We train the models with the Adam
optimizer~\cite{DBLP:journals/corr/KingmaB14}, using at most 20 epochs, batch size 16, and learning rate $5 \times 10^{-6}$, following the training guidelines of \citet{zhang2021revisiting} and \citet{mosbach2021on}. We stop early if the validation loss does not decrease for more than two epochs; this typically ends training within 5--6 epochs. We report macro-averaged precision, recall, and F1 over the original train-test splits of {F}act{B}ank\xspace.
Since fine-tuning BERT (and its variants) can be unstable on small datasets~\cite{dodge-jesse-etal-2020-fine}, we report average performance over
five random restarts for each model.
To fine-tune BERT and RoBERTa models, we start with the pre-trained language model, updating both the task-specific layer and all parameters of the language model.
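The early-stopping rule described above can be sketched as follows (a minimal illustration of the rule, not our actual training code):

```python
class EarlyStopper:
    """Signal a stop once validation loss has failed to improve for
    more than `patience` consecutive epochs (here, patience = 2)."""

    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        # Reset the counter on any improvement; otherwise count a bad epoch.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs > self.patience
```

With validation losses $1.0, 0.9, 0.95, 0.96, 0.97$, the stopper fires at the fifth epoch, the third consecutive epoch without improvement.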
\subsection{Significance Testing}
\label{sec:appendix_significance_testing}
We use a nonparametric bootstrap~\cite[ch.~8]{wasserman2004all}
to infer confidence intervals for an individual model's performance metric (precision, recall, F1) and hypothesis testing between pairs of models.
We utilize $10^{4}$ bootstrap samples of sentences for source and event identification models and $10^{4}$ bootstrap samples of epistemic stance tuples for stance classifier in {F}act{B}ank\xspace's test set to report $95\%$ bootstrap confidence intervals (CI), via the normal interval method~\cite[ch.~8.3]{wasserman2004all},
and compare models with a bootstrap two-sided hypothesis test
to calculate a $p$-value for the null hypothesis of two models having an equal macro-averaged F1 score~\cite{mackinnon2009bootstrap}.\footnote{MacKinnon's bootstrap hypothesis test has subtle differences from \citet{berg-kirkpatrick-etal-2012-empirical}'s in the NLP literature; we find MacKinnon's theoretical justification clearer.}
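The normal-interval bootstrap can be sketched as follows. This is a simplified, pure-Python illustration over a toy label set (the helper names are ours), not our exact evaluation code:

```python
import random
import statistics

def macro_f1(gold, pred, labels):
    # Macro-average of per-class F1 scores.
    f1s = []
    for c in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def bootstrap_ci(gold, pred, labels, n_boot=10000, z=1.96, seed=0):
    # Normal-interval bootstrap CI (Wasserman 2004, ch. 8.3):
    # theta_hat +/- z * se_boot, with se_boot the bootstrap std. dev.
    rng = random.Random(seed)
    n = len(gold)
    theta_hat = macro_f1(gold, pred, labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(macro_f1([gold[i] for i in idx],
                              [pred[i] for i in idx], labels))
    se = statistics.stdev(stats)
    return theta_hat - z * se, theta_hat + z * se
```

The paired two-sided test proceeds analogously, resampling both models' predictions over the same indices and bootstrapping the distribution of their F1 difference.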
\section{Performance of Source and Event Identification Models}
\label{sec:appendix_model_performance}
\subsection{Source and Event Identification}
\label{sec:appendix_source_event_model}
Table~\ref{table:source_event_model} reports the performance of the source and event identification models.
\begin{table}[h]
\tiny
\begin{tabular}{lllllll}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{Model}} & \multicolumn{3}{c}{Event} & \multicolumn{3}{c}{Source} \\ \cline{2-4} \cline{5-7}
\multicolumn{1}{c}{} & Prec & Recall & F1 & Prec & Recall & F1 \\ \hline
CNN (Qian et al., 2018) & 86.6 & 82.8 & 84.6 & 80.7 & 77.4 & 78.9 \\
RoBERTa (Joint) & 84.4 & 87.6 & 86.0 & 81.4 & 62.7 & 70.8 \\
RoBERTa (Individual) & 84.1 & 87.2 & 85.6 & 79.7 & 81.2 & 80.5 \\ \hline
\end{tabular}
\caption{Performance of the source and event identification models. Individual classifiers perform better than a combined classifier.}
\label{table:source_event_model}
\end{table}
\subsection{Error Analysis: Correlation with events denoted by the verb \textit{``say''}}
We conducted an error analysis of our source identification model, testing whether the model understands the notion of a source or merely associates sources with the presence of events denoted by the verb ``say'' in a given sentence. Table~\ref{table:source_error_analysis} demonstrates that the model does not merely rely on the presence or absence of such events.
\begin{table}[h]
\centering
\scriptsize
\begin{tabular}{lllll}
\hline
\textbf{``Say''} & \textbf{F1} & \textbf{Precision} & \textbf{Recall} & \textbf{\#sentences} \\ \hline
Present & 84.6 & 86.4 & 82.9 & 147 \\
Absent & 65.2 & 58.4 & 73.8 & 269 \\ \hline
\end{tabular}
\caption{Source identification performance on sentences with and without events denoted by the verb ``say''.}
\label{table:source_error_analysis}
\end{table}
\section{Performance of Epistemic Stance Classifier}
\label{sec:appendix_stance_classifier}
\subsection{Error Analysis: Negative Polarity Items}
The \emph{CT-} class is the rarest in {F}act{B}ank\xspace, and it is useful to identify for a possible future use case of finding disagreements in text. For corpus exploration, an alternative to our model could be to simply view sentences with
explicit negative polarity items (NPIs); such sentences\footnote{Using an NPI list of:
\emph{no}, \emph{not}, \emph{n't},
\emph{never}, \emph{nobody}, \emph{none} } indeed contain a large majority (88.2$\%$) of {F}act{B}ank\xspace's gold standard \emph{CT-} tuples. Still, \emph{CT-} tuples are uncommon within NPI-containing sentences (13.5$\%$ of such tuples are \emph{CT-}),
and quite rare within sentences without NPIs (0.33$\%$ of such tuples are \emph{CT-}).
For this challenging \emph{CT-} class, the model attains an F1 score of 78.4$\%$. To examine model performance on the \emph{CT-} class in the political domain, we qualitatively analyzed correct classifications. We observe that the model can handle complex negation-bearing constructions such as \emph{Unable to}, \emph{refuse}, etc. (Table \ref{tab:npi_examples}).
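The NPI-based filtering baseline described above can be sketched as follows; the tokenization is deliberately crude and the helper names are illustrative:

```python
NPI_LIST = {"no", "not", "n't", "never", "nobody", "none"}

def contains_npi(sentence):
    # Whitespace tokenization; contracted negation ("wouldn't") is
    # matched as an "n't" suffix.
    tokens = sentence.lower().split()
    return any(t in NPI_LIST or t.endswith("n't") for t in tokens)

def npi_recall_of_ct_minus(tuples):
    """Fraction of gold CT- tuples whose sentence contains an explicit
    NPI (the quantity reported as 88.2% for FactBank above)."""
    ct_minus = [sent for sent, label in tuples if label == "CT-"]
    hits = sum(1 for sent in ct_minus if contains_npi(sent))
    return hits / len(ct_minus) if ct_minus else 0.0
```

Note that this baseline misses exactly the implicit-negation cases in Table \ref{tab:npi_examples} (``refused to'', ``Unable to''), which contain no word on the NPI list.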
\begin{table*}[!htp]\centering
\scriptsize
\begin{tabular}{p{0.97\linewidth}}\toprule
$\bullet$ \textbf{[Author]$_{s}$}:~Unable to \textbf{\underline{reach}$_{e}$} Russo in the era before cell phones, the House Speaker, Jim Wright, kept the vote open for some twenty minutes while an aide coaxed a member to change his vote to yes. \\
$\bullet$ Author:~\textbf{[John Boehner]$_{s}$},~the Speaker of the House, refused to \textbf{\underline{address}$_{e}$} immigration reform in 2013. \\
$\bullet$ Author:~\textbf{[People]$_{s}$} are beginning to move worlds apart and find it increasingly difficult to~\textbf{\underline{establish}$_{e}$}~common ground. \\
$\bullet$ \textbf{[Author]$_{s}$}:~Although still incapable of actually \textbf{\underline{cutting}$_{e}$}~spending, except for needed defense, conservative leaders imply our national crisis is merely some budgeting blunder remediable through a balanced budget amendment to the Constitution. \\
\bottomrule
\end{tabular}
\caption{Examples of \emph{CT-} epistemic stances, in sentences without explicit NPIs in {P}oli{B}elief\xspace, that BERT correctly predicts; sources are highlighted in bold, and events are underlined.}\label{tab:npi_examples}
\end{table*}
\section{External Validity: A Case Study on Hedging and Power}
\label{sec:appendix_hedging}
\noindent
\citet{jalilifar2011power} examine the relationship between an author's perceived political power and their expressed commitment to their beliefs.
While hedging and hesitations have been utilized to measure lack of commitment~\cite{philips1985william}, political discourse can feature many more strategies beyond a simple lexicon of hedge words, such as indirect speech acts, hypothetical if-then clauses, or framing claims as questions~\cite{fraser2010hedging}. Thus, analyzing hedging requires understanding the syntactic contexts within which claims are expressed, which our model can tackle. To establish the external validity of our proposed epistemic stance framework, we computationally replicate the findings of \citet{jalilifar2011power}'s manual content analysis.
The study examines transcripts of topically similar television interviews of three political figures,
George W.\ Bush (at the time, incumbent U.S.\ president),
Jimmy Carter (former U.S.\ president),
and David Coltart (founding member of Zimbabwe's main opposition party).\footnote{The authors also analyzed interviews of U.S.\ politician Sarah Palin, but these transcripts were not available at the provided URL.}
For each interview transcript, we employ our epistemic stance classifier to predict the stance of the political figure (author source) towards all extracted events,
and
calculate each author's uncertainty level as the fraction of events with a \emph{PR+} or \emph{PS+} epistemic stance.
We find the same ordering of commitment as the previous work:
Bush using the fewest uncertain \textit{PR+/PS+} stances ($5.41\%$),
with progressively more for Carter ($8.32\%$) and Coltart ($12.2\%$).
This follows \citeauthor{jalilifar2011power}'s interpretation of commitment being correlated to power (Bush being the highest status, for example).
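Concretely, each author's uncertainty level reduces to a simple fraction over predicted stance labels (a minimal sketch using this paper's label names):

```python
def uncertainty_level(stances):
    """Percentage of an author's (author, event) stance predictions
    that are uncertain, i.e. labeled PR+ or PS+."""
    uncertain = sum(1 for s in stances if s in {"PR+", "PS+"})
    return 100.0 * uncertain / len(stances)
```

Applying this to each transcript's predicted author-event stances yields the percentages reported above.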
\section{Case Study: Belief Holder Identification}
\label{sec:appendix_bh}
\subsection{Details of MMM Corpus}
\label{sec:appendix_mmm}
The MMM, maintained by one of the authors (\emph{anon.\ for review}),
is an example of a researcher-curated ``artisanal data'' \cite{Wallach2014CSSFAT} collection,
common in political science and communication research. Books were chosen according to a number of selection criteria and not as a representative sample of any presumed population of publications. Nominees for consideration include books appearing on best-seller lists from a number of politically-oriented Amazon book categories, mostly under the heading ``Politics \& Government---Ideologies \& Doctrines.''
Additionally, all presidential primary candidates authoring a book during this period were considered, as were other officials (e.g. governors, sheriffs, senators) and ideologues attaining public prominence. Over the course of several years, scholars of American ideology have been invited to nominate additional authors for consideration, as the long-term goal is to maintain as comprehensive as possible a corpus of mass-marketed ideologically-oriented manuscripts. Among nominees, books that were more memoir than manifesto were eliminated, as were books too narrowly focused on a particular policy area.
Books in the MMM were published from 1993 through 2020, with a majority during the Obama presidential administration (233 in 2009-2016),
as well as 57 from the George W. Bush presidency (2001-2008)
and 80 during the Trump presidency (2017-2020).
\subsection{Comparison with NER: Qualitative Examples}
\label{sec:appendix_belief_holders}
Table~\ref{tab:belief holders} describes whether the book's belief holders are recognized as named entities---three of ten are not.
\begin{table}[h]
\centering
\scriptsize
\begin{tabular}{lc|lc}
\hline
\textbf{\begin{tabular}[l]{@{}l@{}}Belief \\ Holder\end{tabular}} &
\textbf{\begin{tabular}[l]{@{}l@{}}Detected \\ by NER?\end{tabular}} &
\textbf{\begin{tabular}[l]{@{}l@{}}Belief \\ Holder\end{tabular}} &
\textbf{\begin{tabular}[l]{@{}l@{}}Detected \\ by NER?\end{tabular}} \\ \hline
Media & Yes & Bernie Sanders & No \\
Democrats & Yes & Right & Yes \\
Donald Trump & Yes & Republicans & No \\
Left & No & Courts & Yes \\
Conservatives & Yes & Joe Biden & Yes \\ \hline
\end{tabular}
\caption{Top 10 sources detected as belief holders in Ben Shapiro's \emph{Facts Don't Care About Your Feelings}.}
\label{tab:belief holders}
\end{table}
\subsection{Linguistic Analysis of Belief Holders}
\label{sec:appendix_linguistic_analysis}
We identify two interesting linguistic phenomena among belief holder mentions.
\paragraph{Common and Collective Nouns}
Many belief holders can also be described by common nouns, such as a plural form referring to classes of people (or other agents), or collective nouns denoting aggregate entities, including informally defined ones. We show several examples, along with an event toward which they have an epistemic stance.
\begin{enumerate}[label={(\arabic*)}]
\item \label{senior} A recent survey of studies published in peer-reviewed scientific journals found that 97 percent of actively publishing climate \textbf{[scientists]$_{s}$} agree that global warming has been \textbf{\underline{caused}$_{e}$} by human activity.~\cite{abdul-2016-writings}
\item \label{left} The \textbf{[Left]$_{s}$} properly pointed out the widespread problems of racism and sexism in American society in the 1950s — and their diagnosis was to \textbf{\underline{destroy}$_{e}$} the system utterly.~\cite{shapiro2019facts}
\item \label{governemnt} The agents seized rosewood and ebony that the \textbf{[government]$_{s}$} believed was illegally \textbf{\underline{imported}$_{e}$}.~\cite{forbes-2012-freedom}
\item \label{media} The \textbf{[media]$_{s}$} simply asserted that Clinton was \textbf{\underline{beloved}$_{e}$} across the land — despite never being able to get 50 percent of the country to vote for him, even before the country knew about Monica Lewinsky.~\cite{coulter2009guilty}
\item Maybe American \textbf{[society]$_{s}$} concluded, at some deep level of collective unconsciousness, that it had to \textbf{\underline{reject}$_{e}$} the previous generation ’s model of strict fathering in favor of nurturing mothering.~\cite{reich2005reason}
\end{enumerate}
\vspace{-0.2cm}
\paragraph{Word Sense Disambiguation}
If an entity is described as a belief holder, that can help disambiguate its word sense or entity type.
Our model distinguishes agentive versus non-agentive uses of geographical locations. In the following two examples, the locations or ideas ``Europe'' and ``Silicon Valley''
are belief holders with opinions toward various future scenarios
(all with uncommitted \emph{Uu} stances, which {F}act{B}ank\xspace uses for all conditionals and hypotheticals).
These location entities are treated as agents with political desires and intentions, perhaps more like an organizational or geopolitical NER type, despite the fact that these instances do not represent formally defined or even universally agreed-upon entities.
\begin{enumerate}[resume, label={(\arabic*)}]
\item \label{europe_1}
\textbf{[Europe]$_{s}$} sees it [NATO expansion] as a scheme for permanent U.S. hegemony and has decided that if the Americans want to play Romans, let Americans \textbf{\underline{pay}$_{e}$} the costs and \textbf{\underline{take}$_{e}$} the risks.~\cite{pat1999republic}
\item \label{silicon} "Currently \textbf{[Silicon Valley]$_{s}$} is in the midst of a love affair with BMI, arguing that when robots \textbf{\underline{come}$_{e}$} to \textbf{\underline{take}$_{e}$} all of our jobs, we’re going to \textbf{\underline{need}$_{e}$} stronger redistributive policies to \textbf{\underline{help}$_{e}$} \textbf{\underline{keep}$_{e}$} families afloat," Annie Lowrey, who has a book on the subject coming July 10, wrote in New York magazine.~\cite{beck2018addicted}
\end{enumerate}
\noindent
By contrast, ``Europe'' and ``Iowa'' below have no epistemic stances (all edges toward sentence events are \emph{NE}), and the entities are used simply to describe geographic locations.
\begin{enumerate}[resume, label={(\arabic*)}]
\item \label{europe_2} Napoleon was the dictator of a French state so anticlerical that many in \textbf{[Europe]$_{s}$} speculated that he was the Antichrist.~\cite{dreher2018benedict}
\item \label{iowa}
While reporters waited outside in the \textbf{[Iowa]$_{s}$} cold amid a mix-up at one of Trump’s rallies [...]
\cite{abdul-2016-writings}
\end{enumerate}
\section{Introduction}
Microlensing is one of the most powerful methods that can be used to
search for extrasolar planets \citep{mao91, gould92}. Recently, two
robust microlensing detections of exoplanets were reported by
\citet{bond04} and \citet{udalski05}.
The signal of a planetary companion to microlens stars is a short-duration
perturbation to the smooth standard light curve of the primary-induced
lensing event occurring on a background source star. The planetary
perturbation occurs when the source star passes close to the caustic.
The caustic represents the set of source positions at which the
magnification of a point source becomes infinite. Studies of the
properties of the caustic are important because the characteristics of
the planetary perturbations in the microlensing light curve depend
critically on the properties of the caustic. For example, the location
of the perturbation on the lensing light curve depends on the location
of the caustic. The duration of the perturbation and the probability of
detecting the perturbation are proportional to the caustic size. In
addition, the pattern of the perturbation is closely related to the
caustic shape. Therefore, it is essential to understand the properties
of caustics for the interpretation of the planetary lensing signals.
Although some of the properties of the caustics in planetary microlensing
are known, our knowledge of them comes mostly from scattered
information based on numerical approaches. The problem with the numerical
approach is that it does not clearly reveal how the planetary lensing
behavior depends on the planet parameters, namely the star-planet separation
$s$ (normalized by the Einstein ring radius $\theta_{\rm E}$) and the
planet/star mass ratio $q$. There have been several attempts to overcome this
ambiguity using analytic methods. By treating the planet-induced deviation
as a perturbation, \citet{dominik99} and \citet{an05} derived analytic
expressions for the locations of the {\it central} caustic, which is one
of the two sets of caustics of the star-planet lens system located close to
the primary star. Based on a similar perturbative approach, \citet{asada02}
provided analytic expressions for the locations of the lensing images.
\citet{bozza00} derived analytic expressions for the locations of not only
the central caustic but also the {\it planetary} caustic, the other set of
caustics, which are located away from the central star. However, there has
been no analytic work on the detailed properties of the caustics such as
the location, size, and shape, except the very recent work of \citet{chung05}
(hereafter paper I) on the central caustics.
Following paper I, we conduct a comprehensive and analytic analysis on the
properties of the planetary caustics. Under the perturbative approximation,
we derive analytic expressions for the location, size, and shape of the
planetary caustics as a function of $s$ and $q$. Based on these expressions
combined with those for the central caustics derived in paper I, we compare
the similarities and differences between the planetary and central caustics.
We provide an expression for the size ratio between the two types of
caustics. We also derive an expression for the condition of the merging
of the two types of caustics. We finally discuss the validity of the
perturbative approximation.
\section{Empirically Known Properties}
Planetary lensing is described by the formalism of a binary lens with a
very low-mass companion. Because of the very small mass ratio, planetary
lensing behavior is well described by that of a single lens of the primary
star for most of the event duration. However, a short-duration perturbation
can occur when the source star passes the region around the caustics, which
are important features of binary lensing.
The caustics of binary lensing form a single or multiple closed figures,
each of which is composed of concave curves (fold caustics) that
meet at cusps. For a planetary case, there exist two sets of disconnected
caustics. One, the `central caustic', is located close to the host star. The
other, the `planetary caustic', is located away from the host star, and its
number is one or two depending on whether the planet lies outside ($s>1$)
or inside ($s<1$) the Einstein ring. The size of the caustic, which is directly
proportional to the planet detection efficiency, is maximized when the planet
is located in the `lensing zone', which represents the range of the star-planet
separation of $0.6\lesssim s\lesssim 1.6$. The planetary caustic is always
bigger than the central caustic.
\section{Analytic Approach}
We start from the formula of \citet{bozza00} for the position of the
planetary caustics (eqs. [49] and [50] of his paper). Keeping terms up to
first order, the formulae are expressed as
\begin{equation}
\xi_c \simeq
q^{1/2} \left( \kappa-{1\over \kappa}+{\kappa\over s^2}\right) \cos\theta,
\label{eq1}
\end{equation}
\begin{equation}
\eta_c \simeq
q^{1/2}\left( \kappa-{1\over \kappa}-{\kappa\over s^2}\right) \sin\theta,
\label{eq2}
\end{equation}
where $\theta$ is a variable and
\begin{equation}
\kappa(\theta)=\left[ {\cos 2\theta \pm \sqrt{s^4-\sin^2 2 \theta}
\over s^2-1/s^2 }\right]^{1/2}.
\label{eq3}
\end{equation}
In these expressions, the coordinates are centered at the position on the
star-planet axis with a separation vector from the position of the star of
\begin{equation}
\bold r=\bold s\left(1-{1\over s^2}\right),
\label{eq4}
\end{equation}
where $\bold s$ is the position vector of the planet from the star normalized
by $\theta_{\rm E}$ (see Figs.~\ref{fig:one} and \ref{fig:two}). The
origin of the coordinates corresponds to the center of the planetary caustic.
For a pair of planets with separations $s$ and $1/s$, the centers of
the caustics are separated from the star by the same distance (because
$|\bold r(s)| = |\bold r(1/s)|$) but directed in opposite directions (because
${\rm sign} [\bold r(s)]\neq {\rm sign} [\bold r(1/s)]$). Therefore, the center
of the caustic is located on the same and opposite sides of the planet with
respect to the position of the star for planets with $s>1$ and $s<1$,
respectively. If one defines the lensing zone as the range of the planetary
separation for which the planetary caustic is located within the Einstein
ring, the exact range of the lensing zone is
\begin{equation}
{\sqrt{5}-1\over 2} \leq s \leq {\sqrt{5}+1\over 2}.
\label{eq5}
\end{equation}
To the first-order approximation, the size of the planetary caustic is
proportional to $q^{1/2}$ as shown in equations~(\ref{eq1}) and (\ref{eq2}).
We will discuss the deviation of the approximation from the exact value in
\S\ 4.3.
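The lensing-zone limits in equation~(\ref{eq5}) can be checked numerically from equation~(\ref{eq4}): the caustic center lies within the Einstein ring exactly when $|s-1/s|\leq 1$, whose boundary values are the golden-ratio separations. The following Python sketch is illustrative only and is not part of the paper's computations:

```python
import math

def caustic_center_distance(s):
    """Distance of the planetary-caustic center from the star, |r| = |s - 1/s| (eq. [4])."""
    return abs(s - 1.0 / s)

# The lensing-zone limits of eq. (5) are the separations where |s - 1/s| = 1,
# i.e. the roots of s^2 -+ s - 1 = 0: the golden-ratio values (sqrt(5) -+ 1)/2.
s_lo = (math.sqrt(5) - 1) / 2
s_hi = (math.sqrt(5) + 1) / 2
assert abs(caustic_center_distance(s_lo) - 1.0) < 1e-12
assert abs(caustic_center_distance(s_hi) - 1.0) < 1e-12
# Separations inside the zone place the caustic within the Einstein ring.
assert caustic_center_distance(1.3) < 1.0
assert caustic_center_distance(2.0) > 1.0
```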
\begin{figure}[t]
\epsscale{1.2}
\plotone{f1.eps}
\caption{\label{fig:one}
The location and shape of the planetary caustic of a planetary system with
a star-planet separation greater than the Einstein ring radius ($s>1$).
The upper panel shows the location of the star, the planet, and the
resulting location of the planetary caustic. The coordinates are centered
at the center of the planetary caustic, which is located on the star-planet
axis with a separation vector $\bold r=\bold s(1-1/s^2)$ from the position of
the star, where $\bold s$ is the position vector of the planet from the host
star. The circle centered at the position of the star is the Einstein ring.
The lower panel shows a blow-up of the region around the caustic enclosed
by a box. Also marked are the definitions of the horizontal ($\Delta\xi_c$)
and vertical ($\Delta\eta_c$) widths of the caustic and the designations of
the individual cusps of the caustic.
}\end{figure}
\subsection{For Planets with $s>1$}
In this case, of the two values of $\kappa$ in equation~(\ref{eq3}),
only the one with the `+' sign is valid because the one with the `$-$'
sign results in $\kappa^2<0$. As a result, there exists only a single
set of caustics for planets with $s>1$, as shown in Figure~\ref{fig:one}.
The planetary caustic of the planet with $s>1$ is composed of four cusps,
two of which are located on the $\xi$ axis and the other two on
the $\eta$ axis (see Fig.~\ref{fig:one}). The positions of
the individual cusps, $(\xi_c,\eta_c)_i$, correspond to the cases of
$\sin\theta=0$ (for the two cusps on the $\xi$ axis) and $\cos\theta=0$
(for the other two cusps on the $\eta$ axis). Then, the positions of the
cusps on the $\xi$ and $\eta$ axes are expressed respectively as
\begin{equation}
(\xi_c,\eta_c)_{1,2} \simeq
\left(
\pm{2q^{1/2} \over s\sqrt{s^2-1}},0
\right),
\label{eq6}
\end{equation}
\begin{equation}
(\xi_c,\eta_c)_{3,4} \simeq
\left( 0,\pm{2q^{1/2} \over s\sqrt{s^2+1}}
\right).
\label{eq7}
\end{equation}
If we define the horizontal and vertical widths of the planetary caustic
as the separations between the cusps on the individual axes (see
Fig.~\ref{fig:one}), the widths are expressed respectively as
\begin{equation}
\Delta \xi_c \simeq
{4q^{1/2} \over s\sqrt{s^2-1}} \rightarrow
{4q^{1/2}\over s^2}\left( 1+ {1\over 2s^2}\right),
\label{eq8}
\end{equation}
\begin{equation}
\Delta \eta_c \simeq
{4q^{1/2} \over s\sqrt{s^2+1}} \rightarrow
{4q^{1/2}\over s^2}\left( 1- {1\over 2s^2}\right),
\label{eq9}
\end{equation}
where the expressions after the arrow are those evaluated to the first
non-vanishing order in $s$ in the limiting case of $s\gg 1$. Then, the
vertical/horizontal width ratio is expressed as
\begin{equation}
{\cal R}_c =
{\Delta\eta_c \over \Delta \xi_c} \simeq
\left( {1-1/s^2 \over 1+1/s^2} \right)^{1/2} \rightarrow
1-{1\over s^2}.
\label{eq10}
\end{equation}
In the limiting case of $s\gg 1$, $\Delta\xi\sim \Delta\eta\propto s^{-2}$
and ${\cal R}_c\sim 1$, i.e. the caustic size decreases as $s^{-2}$ and
the shape becomes less elongated as the star-planet separation increases.
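The widths of equations~(\ref{eq8}) and (\ref{eq9}) and their large-$s$ limits can be verified numerically. The following Python sketch (illustrative only) compares the exact and limiting forms:

```python
import math

def widths_s_gt_1(s, q):
    """Horizontal and vertical widths of the planetary caustic for s>1 (eqs. [8], [9])."""
    dxi = 4 * math.sqrt(q) / (s * math.sqrt(s**2 - 1))
    deta = 4 * math.sqrt(q) / (s * math.sqrt(s**2 + 1))
    return dxi, deta

s, q = 5.0, 1e-3
dxi, deta = widths_s_gt_1(s, q)
# Limits quoted after the arrows in eqs. (8) and (9):
assert abs(dxi / (4 * math.sqrt(q) / s**2 * (1 + 1 / (2 * s**2))) - 1) < 2e-3
assert abs(deta / (4 * math.sqrt(q) / s**2 * (1 - 1 / (2 * s**2))) - 1) < 2e-3
# The vertical/horizontal width ratio tends to 1 - 1/s^2, i.e. to unity
# for wide separations, so the caustic becomes less elongated.
assert abs(deta / dxi - (1 - 1 / s**2)) < 1e-2
```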
\begin{figure}[t]
\epsscale{1.1}
\plotone{f2.eps}
\caption{\label{fig:two}
The location and shape of the planetary caustic of a planetary system with
a star-planet separation less than the Einstein ring radius ($s<1$).
Notations are same as in Fig.~\ref{fig:one}.
}\end{figure}
\subsection{For Planets with $s<1$}
In this case, $\kappa$ in equation~(\ref{eq3}) is valid only in the
following range of $\theta$
\begin{equation}
\left\vert \theta \mp {\pi\over 2} \right\vert < {1\over 2} \sin^{-1} s^2.
\label{eq11}
\end{equation}
For $\theta$ within these ranges, there are two possible values of $\kappa$
corresponding to the signs. As a result, there exist two sets of caustics
for planets with $s<1$; one above and the other below the star-planet
axis (see Fig.~\ref{fig:two}).
Each of the caustics for the planet with $s<1$ is composed of three cusps.
One of them is located on the $\eta$ axis, while the other two are not located
on either of the axes. The caustic crosses the $\eta$ axis at $\eta_{c,+}=
2q^{1/2}/[s (1+s^2)^{1/2}]$ and $\eta_{c,-}=2q^{1/2} (1-s^2)^{1/2}/s$ when
$\cos\theta=0$ (see Fig.~\ref{fig:two}). Of these two positions, the
former corresponds to the cusp, and thus the location of the on-axis cusp is
\begin{equation}
(\xi_{c},\eta_{c})_1 \simeq
\left(0, \pm{2q^{1/2}\over s\sqrt{1+s^2}} \right),
\label{eq12}
\end{equation}
where the sign `$\pm$' is for the cusps located above and below the
star-planet axis, respectively.
If we define the vertical width of the caustic as the separation between
the two crossing points at $\eta_{c,+}$ and $\eta_{c,-}$, the width is
expressed as
\begin{equation}
{\Delta\eta_c\over 2}
\simeq {2q^{1/2}\over s}
\left( {1\over \sqrt{1+s^2}}-\sqrt{1-s^2}\right) \rightarrow
q^{1/2} s^3,
\label{eq13}
\end{equation}
where the factor `1/2' is included to account for the fact that there exist
two planetary caustics for planets with $s<1$, and the expression after
the arrow is evaluated to the first non-vanishing order in $s$ in
the case of $s\ll 1$. By defining the center of {\it each} caustic as
the midpoint between the two crossing points (see Fig.~\ref{fig:two}),
its position is expressed as
\begin{equation}
\eta_{c,0}
\simeq \pm {q^{1/2}\over s}
\left( {1\over \sqrt{1+s^2}}+\sqrt{1-s^2}\right) \rightarrow
{2q^{1/2}\over s} \left( 1-{1\over 2}s^2\right).
\label{eq14}
\end{equation}
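The small-$s$ limit quoted after the arrow in equation~(\ref{eq13}) can be verified directly (an illustrative Python sketch, not part of the paper's computations):

```python
import math

def vertical_half_width(s, q):
    """Half of the vertical width of the s<1 planetary caustics (eq. [13])."""
    return 2 * math.sqrt(q) / s * (1 / math.sqrt(1 + s**2) - math.sqrt(1 - s**2))

# For s << 1 the exact expression approaches q^{1/2} s^3 (the arrow in eq. [13]);
# the leading relative correction is of order s^2.
s, q = 0.1, 1e-4
assert abs(vertical_half_width(s, q) / (math.sqrt(q) * s**3) - 1) < 0.01
# The width shrinks steeply (~ s^3) as the separation decreases.
assert vertical_half_width(0.05, q) < vertical_half_width(0.1, q) / 7
```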
The other two cusps occur when $d\xi/d\theta=0$ (or $d\eta/d\theta=0$).
This condition is satisfied when $\cos^2 2\theta=1-3s^4/4$ (or $\sin^2
2\theta=3s^4/4$). Then, combined with the possible range in
equation~(\ref{eq11}), the values of $\theta$ corresponding to the off-axis
cusps are found to be
\begin{equation}
\theta_0={\pi\over 2}\pm{1\over 2}\sin^{-1}\left({\sqrt{3}\over 2}s^2\right).
\label{eq15}
\end{equation}
With this value combined with equations~(\ref{eq1}) and (\ref{eq2}), the
positions of the off-axis cusps are expressed as
\begin{equation}
\eqalign{
(\xi_c,\eta_c)_{2,3} \simeq
[ \pm q^{1/2}\left(\kappa_0-{1/\kappa_0}+
{\kappa_0/s^2}\right)\cos\theta_0, \cr
\pm q^{1/2}\left(\kappa_0-{1/\kappa_0}-
{\kappa_0/s^2}\right)\sin\theta_0 ],\cr}
\label{eq16}
\end{equation}
where $\kappa_0=\kappa(\theta_0)$. In the limiting case of $s\ll 1$,
equation~(\ref{eq16}) is approximated as
\begin{equation}
(\xi_c,\eta_c)_{2,3} \rightarrow
\left( \pm{3\sqrt{3}q^{1/2}s^3\over 8}, \pm {{2q^{1/2}\over s}}\right),
\label{eq17}
\end{equation}
because $\kappa_0\rightarrow s(1+s^2/4)$, $\sin\theta_0\rightarrow 1$, and
$\cos\theta_0\rightarrow \sqrt{3}s^2/4$. By defining the horizontal width
as the separation between the two off-axis cusps, the width is expressed as
\begin{equation}
{\Delta\xi_c\over 2} \simeq
2q^{1/2} \left(\kappa_0-{1\over\kappa_0}+
{\kappa_0\over s^2}\right)\cos\theta_0
\rightarrow {3\sqrt{3}\over 4} q^{1/2}s^3.
\label{eq18}
\end{equation}
Once again, the factor `1/2' is included to account for the existence of
two planetary caustics. From equations~(\ref{eq13}) and (\ref{eq18}), the
vertical/horizontal width ratio is expressed as
\begin{equation}
{\cal R}_c \simeq
{ (1+s^2)^{-1/2} - (1-s^2)^{1/2}
\over
s(\kappa_0-{1/\kappa_0}+{\kappa_0/s^2}) \cos\theta_0}
\rightarrow {4\over 3\sqrt{3}} \left( 1-{5\over 12} s^2\right).
\label{eq19}
\end{equation}
In the limiting case of $s\ll 1$, each caustic shrinks as $\propto s^{3}$,
cf.\ $\propto s^{-2}$ for planets with $s>1$, and
${\cal R}_c\sim 4/(3\sqrt{3})\sim 0.770$, cf.\ ${\cal R}_c\sim 1$ for planets
with $s>1$.
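Equations~(\ref{eq15})--(\ref{eq17}) can be cross-checked numerically. In the sketch below (illustrative only), the `$-$' branch of equation~(\ref{eq3}) is used for the off-axis cusps; this is an assumption on our part, chosen because it is the branch consistent with the quoted limit $\kappa_0\rightarrow s(1+s^2/4)$:

```python
import math

def offaxis_cusp(s, q):
    """Off-axis cusp position for s<1 (eqs. [15]-[16]), using the '-' branch of eq. (3)."""
    theta0 = math.pi / 2 + 0.5 * math.asin(math.sqrt(3) / 2 * s**2)   # eq. (15)
    num = math.cos(2 * theta0) - math.sqrt(s**4 - math.sin(2 * theta0)**2)
    kappa0 = math.sqrt(num / (s**2 - 1 / s**2))                       # eq. (3), '-' branch
    xi = math.sqrt(q) * (kappa0 - 1/kappa0 + kappa0/s**2) * math.cos(theta0)
    eta = math.sqrt(q) * (kappa0 - 1/kappa0 - kappa0/s**2) * math.sin(theta0)
    return xi, eta

# For s << 1 the cusp approaches (3*sqrt(3)/8) q^{1/2} s^3 in xi and
# 2 q^{1/2}/s in eta, up to signs (eq. [17]).
s, q = 0.1, 1e-4
xi, eta = offaxis_cusp(s, q)
assert abs(abs(xi) / (3 * math.sqrt(3) / 8 * math.sqrt(q) * s**3) - 1) < 0.02
assert abs(abs(eta) / (2 * math.sqrt(q) / s) - 1) < 0.02
```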
\begin{figure}[t]
\epsscale{1.2}
\plotone{f3.eps}
\caption{\label{fig:three}
Upper panel: example planetary caustics of several planetary systems with
different values of the star-planet separation $s$ and the planet/star
mass ratio $q$. The orange and purple curves represent the loci of the
center of the caustic as a function of $s$ for planets with
$q=5\times 10^{-3}$ and $q=5\times 10^{-4}$, respectively. The
coordinates are centered at the position of the planet-hosting star.
The circle is the Einstein ring.
Lower panel: separation between the caustic center and the position of the
planet-hosting star as a function of $s$.
}
\end{figure}
\section{Caustic Properties in Planetary Lensing}
Based on the analytic expressions derived in the previous section, we
now investigate how the properties of the planetary caustics such as
the location, size, and shape vary depending on $s$ and $q$. We also
compare the properties of the planetary caustics with those of the central
caustics.
\subsection{Properties of Planetary Caustics}
In the upper panel of Figure~\ref{fig:three}, we present example planetary
caustics of several planetary systems with different $s$ and $q$. In the
lower panel, we present the separation of the caustic from the planet-hosting
star as a function of $s$. In Figure~\ref{fig:four}, we also present the
variation of the caustic size (as measured by the horizontal and vertical
widths) and the shape (as measured by the vertical/horizontal width ratio)
as a function of $s$.
The properties of the planetary caustics found from the figures and the
dependence of these properties on the planet parameters are as follows.
\begin{enumerate}
\item
For $s>1$, the location of the caustic center depends on $s$ but not on
$q$. On the other hand, for planets with $s<1$, the caustic location depends
on both $s$ and $q$. In this case, the caustic is located farther away from
the star-planet axis as $q$ increases (see eq.~[\ref{eq14}]).
\item
Although the caustic size depends on the mass ratio as $\propto q^{1/2}$,
the shape of the caustic does not depend on $q$ and is solely dependent on $s$
(see eqs.~[\ref{eq10}] and [\ref{eq19}]).
\item
The rate of decrease of the caustic size with the increase of $|\log s|$
is different for planets with $s>1$ and $s<1$. Compared to the caustic
of the planet with $s>1$, the rate of decrease is steeper for the planet
with $s<1$. In the limiting cases of $s\gg 1$ and $s\ll 1$, the caustic
sizes decrease as $\propto s^{-2}$ and $\propto s^3$ for planets with $s>1$
and $s<1$, respectively (see eqs.~[\ref{eq8}], [\ref{eq9}], [\ref{eq13}],
and [\ref{eq18}]).
\end{enumerate}
\begin{figure}[t]
\epsscale{1.2}
\plotone{f4.eps}
\caption{\label{fig:four}
Variation of the size (normalized by $q^{1/2}$) and shape (as measured
by the vertical/horizontal width ratio) of the planetary caustic as a
function of the star-planet separation $s$.
}\end{figure}
\subsection{Comparison with Central Caustics}
\citet{chung05} presented the analytic expressions for the location, cusp
positions, width, and shape of the central caustics analogous to those
presented in the previous section for the planetary caustics. The
expressions for the location of the central caustic, analogous to
equations~(\ref{eq1}) and (\ref{eq2}) for the planetary caustic, are
\begin{equation}
\xi_c \simeq q {s+1/s+2(\cos^3\phi-2\cos\phi) \over
(s+1/s-2\cos\phi)^2 },
\label{eq20}
\end{equation}
\begin{equation}
\eta_c \simeq -q{2\sin^3\phi \over (s+1/s-2\cos\phi)^2 },
\label{eq21}
\end{equation}
where $\phi$ is a variable and the coordinates are centered at the position
of the host star. There exists a single central caustic regardless of $s$
and it has an elongated asteroid shape with four cusps, of which two are
located on the $\xi$ axis and the other two are off the axis. The analytic
expressions for the positions of the individual cusps, which are analogous
to equations~(\ref{eq6}) and (\ref{eq7}) for the planetary caustic with
$s>1$ and to equations~(\ref{eq12}) and (\ref{eq16}) for the planetary
caustic with $s<1$, are
\begin{equation}
(\xi_c,\eta_c)_{1,2} \sim \left[
\pm {q\over (1\pm s)(1\pm 1/s)}, 0 \right],
\label{eq22}
\end{equation}
\begin{equation}
(\xi_c,\eta_c)_{3,4} \sim
\left[0, \pm{2q |\sin^3\phi_c|\over (s+1/s-2\cos\phi_c)^2} \right],
\label{eq23}
\end{equation}
where $\cos\phi_c = (3/4)(s+1/s) \{ 1- [1-(32/9) (s+1/s)^{-2}]^{1/2} \}$.
The horizontal and vertical widths of the central caustic defined as the
separations between the cusps on and off the star-planet axis are expressed
respectively as
\begin{equation}
\Delta\xi_c = {4q\over (s-1/s)^{2}},
\label{eq24}
\end{equation}
\begin{equation}
\Delta\eta_c =
{4q\over (s-1/s)^{2}}
{(s-1/s)^2|\sin^3\phi_c| \over (s+1/s-2\cos\phi_c)^2},
\label{eq25}
\end{equation}
which are analogous to those in equations~(\ref{eq8}) and (\ref{eq9})
for the planetary caustic with $s>1$ and to equations~(\ref{eq13}) and
(\ref{eq18}) for the planetary caustic with $s<1$.
Then, the width ratio of the central caustic is
\begin{equation}
{\cal R}_c = {(s-1/s)^2|\sin^3\phi_c| \over (s+1/s-2\cos\phi_c)^2},
\label{eq26}
\end{equation}
which is analogous to those in equations~(\ref{eq10}) and (\ref{eq19}) for
the planetary caustics with $s>1$ and $s<1$, respectively. In the limiting
cases of $s\gg 1$ and $s\ll 1$, the size of the central caustic decreases
respectively as
\begin{equation}
\Delta\xi_c \sim \Delta\eta_c \rightarrow
\cases{
4q/s^2 & for $s\gg 1$,\cr
4qs^2 & for $s\ll 1$.\cr
}
\label{eq27}
\end{equation}
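The $s \leftrightarrow 1/s$ symmetry of equation~(\ref{eq24}) and the limiting behaviors of equation~(\ref{eq27}) are easy to confirm numerically (an illustrative Python sketch):

```python
import math

def central_hwidth(s, q):
    """Horizontal width of the central caustic (eq. [24])."""
    return 4 * q / (s - 1 / s)**2

q = 1e-3
# s <-> 1/s duality: planets at s and 1/s give identical central-caustic widths.
assert math.isclose(central_hwidth(2.0, q), central_hwidth(0.5, q), rel_tol=1e-9)
# Limiting behaviors of eq. (27): ~ 4q/s^2 for s >> 1, ~ 4q s^2 for s << 1.
assert abs(central_hwidth(10.0, q) / (4 * q / 100) - 1) < 0.03
assert abs(central_hwidth(0.1, q) / (4 * q * 0.01) - 1) < 0.03
```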
\begin{figure}[thb]
\epsscale{1.2}
\plotone{f5.eps}
\caption{\label{fig:five}
The size ratio between the planetary and central caustics as a function
of the star-planet separation $s$ and planet/star mass ratio $q$. For a
representative quantity of the caustic size, we use the horizontal width.
The region enclosed by the thick dashed lines represents the area in which
the planetary and central caustics merge together, resulting in gradual
obliteration of the distinction between the two types of caustics.
}\end{figure}
The planetary and central caustics have the following similarities and
differences.
\begin{enumerate}
\item
Unlike the planetary caustics, the central caustics of a pair of planets with
separations $s$ and $1/s$ are identical, as demonstrated by the fact that
the inversion $s \leftrightarrow 1/s$ in equations~(\ref{eq20}) and
(\ref{eq21}) results in the same expressions.
\item
While the dependence of the size of the planetary caustic on the planet/star
mass ratio is $\propto q^{1/2}$, the dependence of the central caustic is
$\propto q$. Therefore, the planetary caustic shrinks much more slowly with
the decrease of the planet mass than the central caustic.
\item
In the limiting case of $s\gg 1$, the rate of decrease of the size of the central
caustic with the increase of $|\log s|$ is similar to that of the planetary
caustic, i.e.\ $\Delta\xi \propto s^{-2}$ (see eqs.~[\ref{eq8}] and
[\ref{eq27}]), but in the limiting case of $s\ll 1$ it is smaller than that
of the planetary caustic, which shrinks as $\propto s^{3}$
(see eq.~[\ref{eq18}]).
\end{enumerate}
\begin{figure*}[t]
\epsscale{0.9}
\plotone{f6.eps}
\caption{\label{fig:six}
Comparison of the planetary caustics based on the analytic (blue caustic)
and numerical (red caustic) computations for various values of the star-planet
separation $s$ and the planet/star mass ratio $q$. The coordinates are
centered at the center of the individual caustics. The scales of the
individual panels are set so that the caustics with the same $s$ appear
to have the same size.
}\end{figure*}
Then, what is the size ratio between the planetary and central caustics?
If we use the horizontal width as a representative quantity for the caustic
size, the size ratio between the two types of the caustics is found from
equations~(\ref{eq8}), (\ref{eq18}), and (\ref{eq24}) and expressed as
\begin{equation}
{\Delta\xi_{c,{\rm c}}\over \Delta\xi_{c,{\rm p}}} =
\cases{
q^{1/2} /(1-s^{-2})^{3/2} & for $s>1$, \cr
q^{1/2} /[(s-s^{-1})^2(\kappa_0-\kappa_0^{-1}+\kappa_0s^{-2}) \cos\theta_0] & for $s<1$, \cr
}
\label{eq28}
\end{equation}
where the additional subscripts `p' and `c' denote the planetary and central
caustics, respectively. In Figure~\ref{fig:five}, we present the size ratio
as a function of $s$ and $q$. Since $\Delta\xi_{c,{\rm c}}\propto q$ while
$\Delta\xi_{c,{\rm p}}\propto q^{1/2}$, the dependence of the size ratio on
the mass ratio is $\Delta\xi_{c,{\rm c}}/\Delta\xi_{c,{\rm p}}\propto q^{1/2}$.
For a given mass ratio, the size ratio is maximized at around $s\sim 1$ and
decreases rapidly with the increase of $|\log s|$.\footnote{For the case of
$s<1$, the change rate of the size ratio is reversed as $|\log s|$ further
increases beyond a critical value ($|\log s|\sim 0.3$, i.e.\ $s\sim 0.5$).
However, this reversal occurs at the separation beyond the lensing zone.
}
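The $s>1$ branch of equation~(\ref{eq28}) can be checked against the direct ratio of equations~(\ref{eq24}) and (\ref{eq8}); the two agree identically, and the $q^{1/2}$ scaling follows immediately (an illustrative Python sketch):

```python
import math

def size_ratio_sgt1(s, q):
    """Central-to-planetary horizontal width ratio for s>1 (eq. [28])."""
    return math.sqrt(q) / (1 - s**-2)**1.5

# Consistency with the direct ratio of eq. (24) to eq. (8):
s, q = 1.5, 1e-3
direct = (4 * q / (s - 1 / s)**2) / (4 * math.sqrt(q) / (s * math.sqrt(s**2 - 1)))
assert math.isclose(direct, size_ratio_sgt1(s, q), rel_tol=1e-9)
# The size ratio scales as q^{1/2}: quadrupling q doubles the ratio.
assert math.isclose(size_ratio_sgt1(s, 4e-3), 2 * size_ratio_sgt1(s, 1e-3), rel_tol=1e-9)
```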
As $s\rightarrow 1$, the location of the planetary caustic, i.e.\ $\bold r
= \bold s(1-1/s^2)$, approaches the position of the central star, around which
the central caustic is located. Then the two types of the caustics eventually
merge together, resulting in gradual loss of distinction between the two
types of caustics. The condition for the merging of the two caustics is
that the separation between the two caustics is smaller than half of
the sum of the individual caustic widths, i.e.\
\begin{equation}
{\Delta\xi_{c,{\rm c}}+\Delta\xi_{c,{\rm p}}\over 2} \geq
\left\vert s-{1\over s} \right\vert.
\label{eq29}
\end{equation}
By using the analytic expressions for $\Delta\xi_{c,{\rm p}}$
(eqs.~[\ref{eq8}] and [\ref{eq18}]) and $\Delta\xi_{c,{\rm c}}$
(eq.~[\ref{eq24}]), we compute the region of caustic merging
in the parameter space of $(s,q)$ and present it in Figure~\ref{fig:five}
(the region enclosed by thick dashed lines). The merging region is confined to a
narrow range around $s\sim 1$, but its width increases
as $q$ increases because the caustic size grows with $q$.
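The merging criterion of equation~(\ref{eq29}) can be evaluated directly; the sketch below (illustrative, restricted to the $s>1$ branch and using eqs.~[\ref{eq8}] and [\ref{eq24}]) shows that merging occurs only near $s\sim 1$:

```python
import math

def merge_condition_sgt1(s, q):
    """True if the planetary and central caustics merge for s>1 (eq. [29])."""
    dxi_c = 4 * q / (s - 1 / s)**2                       # central width, eq. (24)
    dxi_p = 4 * math.sqrt(q) / (s * math.sqrt(s**2 - 1)) # planetary width, eq. (8)
    return 0.5 * (dxi_c + dxi_p) >= abs(s - 1 / s)

q = 1e-3
# Merging is confined to separations close to the Einstein ring...
assert merge_condition_sgt1(1.05, q)
assert not merge_condition_sgt1(1.5, q)
# ...and the merging range widens as q increases.
assert merge_condition_sgt1(1.1, 1e-2)
assert not merge_condition_sgt1(1.1, 1e-5)
```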
\subsection{Validity of the Approximation}
Are the presented analytic expressions, based on the perturbative approximation,
good enough to describe the caustics in planetary microlensing?
We answer this question by comparing the two sets of caustics constructed
based on analytic and numerical computations.
In Figure~\ref{fig:six}, we present some pairs of the planetary caustics
with different values of the planet parameters $s$ and $q$. In each panel
of the figure, the blue caustic is drawn by using the analytic expressions
while the red caustic is the exact one based on numerical computations.
For reference, we note that the mass ratios of planets with masses
equivalent to those of Jupiter, Saturn, Neptune, and Earth around a host star
with a mass of $\sim 0.3\ M_\odot$, the most probable mass for Galactic lensing
events, are $q\sim 3\times 10^{-3}$, $10^{-3}$, $2\times 10^{-4}$, and
$10^{-5}$, respectively. From the figure, we find that although the deviation increases
with the increase of the planet/star mass ratio, the analytic approximation
well describes the planetary caustic in most of the planetary mass regime
($q\lesssim {\cal O}(10^{-3})$). For the Earth-mass planet, we find that
the two caustics are virtually indistinguishable.
\section{Conclusion}
We derived analytic expressions for the location, size, and shape of the
planetary caustic as a function of the star-planet separation and the
planet/star mass ratio under the perturbative approximation. Based on these
expressions, we conducted a comprehensive analysis of the properties of the
planetary caustics. Combined with the analogous expressions for the central
caustics derived in paper I, we compared the similarities and differences
between the planetary and central caustics. We also presented the expressions
for the size ratio between the two types of caustics and for the condition
of the merging of the two types of caustics. These analytic expressions
will be useful in understanding the dependence of the planetary lensing
behavior on the planet parameters and thus in interpreting the planetary
lensing signals.
\acknowledgments
We would like to thank J.\ H. An and A. Gould for making helpful comments.
Work by C.H. was supported by the Astrophysical Research Center for the
Structure and Evolution of the Cosmos (ARCSEC) of Korea Science and
Engineering Foundation (KOSEF) through Science Research Center (SRC)
program.
\section{Motivation}
In its original formulation (Pettini 1999), the `missing metals' problem was stated as follows. Studies of the comoving luminosity density of distant galaxies allow us to trace the cosmic star formation density (or history, SFH), $\dot\rho_\star(z)$, up to redshift $z_{max} \approx 7$. Assuming an initial mass function of such stars (IMF), one can compute the specific fraction of heavy elements (`metals') they produce, $y$, and derive the metal production rate $\dot\rho_Z(z) = y \dot\rho_\star(z)$, whose integral from $z_{max}$ gives the density of cosmic metals in units of the critical density, $\Omega_Z^{sfh}$, at any given $z$. Early searches in cosmic structures for which the metal/baryon mass ratio\footnote{When necessary, we use the following cosmological parameters ($\Omega_\Lambda, \Omega_m, \Omega_b, n, \sigma_8, h$) = ($0.7, 0.3,
0.044, 1, 0.9,0.71$), consistent with {\it WMAP} results (Spergel et al. 2003), a solar metallicity $Z_\odot=0.0189$ by mass, and adopt the notation $Y_x=Y/10^x$} (metallicity,
$Z=\Omega_Z/\Omega_b$) can be derived either via intergalactic gas quasar absorption line experiments (Damped Ly$\alpha$ Absorbers [DLAs]
or the Ly$\alpha$ `forest') or through direct spectroscopic studies (Lyman Break Galaxies [LBGs]) have found that only $\Omega_Z^{obs} \lsim 0.20\Omega_Z^{sfh}$ is stored in these components, {\frenchspacing\it i.e., } the large majority of the metals are `missing'. An analogous missing metals problem is also found by considering in a self-consistent manner the star formation rates and metallicities of DLAs alone (Wolfe et al. 2003, Prochaska et al. 2003).
Newly available high-quality data allow a more precise analysis of the problem. \\
\section{Stating the problem}
To re-evaluate $\Omega_Z^{sfh}$ we use the most recent SFH compilation (Bouwens et al. 2004), corrected upwards for the effects of dust obscuration by the prescribed (Reddy \& Steidel 2004) factor of 4.5 at $z \gsim 3$, and adopt $y=1/42$. Integration of $\dot\rho_Z(z)$ down to $z=2.3$ yields $\Omega_Z^{sfh} = (1.84 \pm 0.34)\times 10^{-5}$.
Where should we look for these metals?\\
The most obvious locations are the galaxies used to derive the SFH, {\frenchspacing\it i.e., } LBGs. These are characterized (Reddy \& Steidel 2004) by a mean comoving number density of $6\times 10^{-3} h^3$~Mpc$^{-3}$ and $Z=0.6 Z_\odot = 0.0113$.
Stellar masses can be constrained only by assuming a range of star formation histories of the form $SFR(t) \propto \exp(-t/\tau)$ and therefore they are somewhat uncertain.
According to Shapley et al. 2005, they should be in the range $0.6 - 6 \times 10^{10} M_\odot$. Assuming the best fit
value $M_{\star}=2\times 10^{10} M_\odot$, we get $\Omega_Z^{lbg} = 3.4 \times 10^{-6} M_{\star,10} \approx 0.18 \Omega_Z^{sfh}$.
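The quoted LBG metal budget can be reproduced with a back-of-the-envelope computation; the Python sketch below is illustrative (the critical-density constant is the standard $2.775\times 10^{11}\,h^2\,M_\odot\,{\rm Mpc}^{-3}$, and the solar metallicity is the value adopted in the footnote):

```python
# Rough re-computation of Omega_Z^lbg = n_lbg * M_* * Z / rho_crit
# (an illustrative check, not the paper's own calculation).
h = 0.71
rho_crit = 2.775e11 * h**2      # critical density in M_sun / Mpc^3
n_lbg = 6e-3 * h**3             # LBG comoving number density in Mpc^-3
M_star = 2e10                   # assumed mean stellar mass in M_sun
Z_lbg = 0.6 * 0.0189            # 0.6 Z_sun by mass
omega_Z_lbg = n_lbg * M_star * Z_lbg / rho_crit
# Recovers the quoted Omega_Z^lbg ~ 3.4e-6, i.e. ~18% of Omega_Z^sfh.
assert abs(omega_Z_lbg / 3.4e-6 - 1) < 0.05
assert abs(omega_Z_lbg / 1.84e-5 - 0.18) < 0.02
```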
If metals are not in LBG stars or gas, they could be in DLAs or the IGM.
The metal content of DLAs is derived by noting that (Rao \& Turnshek 2000, Prochaska \& Wolfe 2000) at $z\approx 3$ their neutral ($\approx$ total) gas density $\Omega_g^{dla}=10^{-3}$ and metallicity $Z=3.8\times 10^{-4}$ combine to give $\Omega_Z^{dla} = 3.8 \times 10^{-7} < \Omega_Z^{obs} \ll \Omega_Z^{sfh}$; we can therefore neglect their contribution. In the following, we
correct for LBG contribution by (re)defining the cosmic density of missing metals $\Omega_Z^{sfh} \equiv \Omega_Z^{sfh} - \Omega_Z^{lbg}$.\\
Hence, the missing metals should be found outside the main body of galaxies or DLAs. There are essentially two possibilities: (i) they could reside in the Ly$\alpha$ forest, or (ii) in the halos of the galaxies that have produced and ejected them. Note that the distinction between these two components is somewhat ambiguous. Our working definition is that galactic halos are gravitationally bound structures around galaxies; they are special sites affected by galactic winds.
The most widely studied tracers of Ly$\alpha$ forest metallicity are \hbox{C~$\scriptstyle\rm IV\ $} and \hbox{O~$\scriptstyle\rm VI\ $} absorption lines.
The fraction of the critical density $\rho_{c}$ contributed by a given
element ion, $E_i$, of mass $m_E$ residing in the Ly$\alpha$ forest is
given by
\begin{equation}
\Omega_{E_i}^{ly\alpha}= {H_0\over \rho_{c} c} {\sum N_{E_i}\over \sum
\Delta X}m_E
\label{omegaei}
\end{equation}
where $\Delta X(z_+, z_-)= \int_{z_-}^{z_+} dz (1+z)^2 E(z)^{-1}$ is the absorption distance (Bahcall \& Peebles 1969),
with $E(z)=[\Omega_\Lambda + \Omega_m (1+z)^3]^{1/2}$; sums are
performed over all the redshift intervals $z_- < z < z_+$ over which
an ion column density $N_{E_i}$ is detected.
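For concreteness, the absorption distance $\Delta X$ of equation~(\ref{omegaei}) can be evaluated with a simple quadrature for the adopted cosmology (an illustrative Python sketch, not the paper's pipeline):

```python
import math

# Absorption distance Delta X (Bahcall & Peebles 1969) for the adopted
# (Omega_Lambda, Omega_m) = (0.7, 0.3) cosmology.
OL, OM = 0.7, 0.3

def E(z):
    return math.sqrt(OL + OM * (1 + z)**3)

def delta_X(z_minus, z_plus, n=10000):
    """Midpoint-rule quadrature of integral of (1+z)^2 / E(z) dz."""
    dz = (z_plus - z_minus) / n
    return sum((1 + (z_minus + (i + 0.5) * dz))**2 / E(z_minus + (i + 0.5) * dz)
               for i in range(n)) * dz

# At z ~ 2-3, a unit redshift interval corresponds to roughly three units
# of absorption distance, so Delta X grows faster than Delta z.
dX = delta_X(2.0, 3.0)
assert 3.0 < dX < 3.6
```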
To determine $\Omega_{\rm OVI}^{ly\alpha}$ and $\Omega_{\rm CIV}^{ly\alpha}$, we use data from the ESO VLT Large Program\footnote{http://www2.iap.fr/users/aracil/LP/} (Bergeron et al. 2002) which provides high S/N,
high resolution spectra of a homogeneous sample of 19 QSOs in $1.7 < z < 3.8$; $\Omega_{\rm OVI}^{ly\alpha}$ is currently available for four LP lines of sight (Bergeron et al. 2002, Bergeron \& Herbert-Fort 2005), for which we find $\Omega_{\rm OVI}^{ly\alpha}= 1.3\times 10^{-7}$; two other recent estimates (Simcoe, Sargent \& Rauch 2004; Carswell, Schaye \& Kim 2002) give $\Omega_{\rm OVI}^{ly\alpha}=(1.1, 0.9) \times 10^{-7}$. We adopt the sightline--weighted mean of the three values allowing for the largest error, $\Omega_{\rm OVI}^{ly\alpha}=(1.1\pm 0.3) \times 10^{-7}$. From the \hbox{C~$\scriptstyle\rm IV\ $} absorber distribution (Aracil et al. 2004, Scannapieco et al. 2005) in the column density range $12 < \log N_{\rm CIV} < 16$ we obtain $\Omega_{\rm CIV}^{ly\alpha}=(7.5\pm 2.2) \times 10^{-8}$ (statistical error). This value is about two times higher than previous determinations (Songaila 2001; Simcoe, Sargent \& Rauch 2004; Schaye et al. 2003), which could not account for the contribution of strong ($\log N_{\rm CIV} > 14$) absorption systems.
Combining the average measured $N_{\rm CIV}$--$N_{\rm HI}$ and $N_{\rm OVI}$--$N_{\rm HI}$
correlations (Aracil et al. 2004) with
the measured distribution of weak \hbox{H~$\scriptstyle\rm I\ $} absorbers (Petitjean et al. 1993),
we have checked that systems with $\log N_{\rm CIV} < 12$ contribute
less than 1\%, well within the quoted error.
For a (meteoritic) solar carbon logarithmic abundance (in number) $A_C=8.52$ with respect to hydrogen ($A_H=12$), we conclude that only a fraction
$\Omega_{\rm CIV}^{ly\alpha}/\Omega_C^{sfh}= 2.4\times 10^{-2}$ of the produced carbon is observed in the \hbox{C~$\scriptstyle\rm IV\ $} state. Repeating the procedure
for O ($A_O=8.83$) gives a ratio $\Omega_{\rm OVI}^{ly\alpha}/\Omega_O^{sfh}=1.3\times 10^{-2}$. To account for all the uncertainties above, we will consider
values in the range $ 1.4 \times 10^{-2} < \Omega_{\rm CIV}^{ly\alpha}/\Omega_C^{sfh} < 4.0 \times 10^{-2}$ and
$ 8.1\times 10^{-3} < \Omega_{\rm OVI}^{ly\alpha}/\Omega_O^{sfh} < 2.1 \times 10^{-2}$.\\
We now determine the physical conditions of the gas hiding the missing C and O. Numerical simulations (Dav\'e et al. 2001) suggest that the intergalactic medium [IGM] might be a two-phase system made of a cool ($T_c\approx 10^{4-4.5}$~K), photoionized phase and a hot, collisionally ionized one. We impose
the following conditions separately for each ion (\hbox{C~$\scriptstyle\rm IV\ $}, \hbox{O~$\scriptstyle\rm VI\ $}) and element (C, O): (1) the observed ionic abundance is the sum of the
abundances in the two phases; (2) the SFH-produced element abundance is the sum of the element abundances in the two phases; (3) the elements are
in the same abundance ratios in the two phases. More explicitly, these conditions can be expressed as
\begin{eqnarray}
f_C^c \Omega_C^c + f_C^h \Omega_C^h &=& \Omega_{\rm CIV}^{ly\alpha} \\
f_O^c \Omega_O^c + f_O^h \Omega_O^h &=& \Omega_{\rm OVI}^{ly\alpha} \\
\Omega_C^c + \Omega_C^h &=& \Omega_{C}^{sfh} \\
\Omega_O^c + \Omega_O^h &=& \Omega_{O}^{sfh} \\
\Omega_C^c - A\Omega_O^c &=& 0 \\
\Omega_C^h - A\Omega_O^h &=& 0
\end{eqnarray}
After some simple algebra, the above equations reduce to:
\begin{equation}
{{\Omega_{\rm CIV}^{ly\alpha}/\Omega_C^{sfh} - f_C^h}\over {f_C^c -
f_C^h}} = {{\Omega_{\rm OVI}^{ly\alpha}/\Omega_O^{sfh} - f_O^h}\over
{f_O^c - f_O^h}},
\label{rel}
\end{equation}
where $f_i^j \equiv f_i^j(\Delta_j, T_j, {\cal U}_j)$ is the ionization correction for the considered ion (\hbox{C~$\scriptstyle\rm IV\ $} or \hbox{O~$\scriptstyle\rm VI\ $}) of a
given element ($i=C,O$) in the cold or hot phase ($j=c,h$), respectively; the overdensity $\Delta_j$ and temperature $T_j$ of
the two phases are the unknowns of the problem; finally, $A$ is the abundance ratio of the two elements.
We complement these conditions by further imposing that the pressure of the cool
phase does not exceed that of the hot one and assuming a temperature-density relation for the cold phase $T=T_0\Delta_c^\gamma$,
(with $T_0=2\times 10^4$~K and $\gamma=0.3$), as inferred from the Ly$\alpha$ forest data. The value of the photoionization parameter, ${\cal U}_j=n_\gamma/n_j$, is fixed by the
ionizing photon density $n_\gamma$ of the assumed UV background spectrum (Haardt \& Madau 1996) shifted so that the intensity at 1 Ryd is
$J_\nu= 0.3\times 10^{-21}$~erg ~s$^{-1}$~Hz$^{-1} = 0.3~J_{21}$, corresponding to a hydrogen photoionization rate $\Gamma=0.84\times
10^{-12} {\rm s}^{-1}=0.84~\Gamma_{12}$, in agreement with Bolton et al.\ (2005). Finally, we caution that deviations from solar abundances are possible; indeed, there are hints that oxygen
might be overabundant (Telfer et al. 2002; Bergeron et al. 2002); here we neglect this complication.
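As a consistency check, the reduction of the six conditions above to eq. (\ref{rel}) can be verified numerically; the sketch below uses illustrative ionization corrections, C/O ratio, and cold-phase fraction that are assumptions of ours, not values from the text.

```python
import math

# Numerical check that conditions (1)-(3) reduce to eq. (rel).
# All input numbers below are illustrative assumptions.
fC_c, fC_h = 0.30, 0.02    # CIV ionization corrections (cold, hot)
fO_c, fO_h = 0.05, 0.20    # OVI ionization corrections (cold, hot)
A = 0.5                    # C/O abundance ratio
Omega_O_sfh = 1.0          # total produced O (arbitrary units)
x = 0.07                   # assumed fraction of metals in the cold phase

# Conditions (2) and (3): split each element between the two phases
Omega_O_c, Omega_O_h = x * Omega_O_sfh, (1 - x) * Omega_O_sfh
Omega_C_c, Omega_C_h = A * Omega_O_c, A * Omega_O_h
Omega_C_sfh = Omega_C_c + Omega_C_h

# Condition (1): observed ionic abundances contributed by both phases
Omega_CIV = fC_c * Omega_C_c + fC_h * Omega_C_h
Omega_OVI = fO_c * Omega_O_c + fO_h * Omega_O_h

# Both sides of eq. (rel) recover the same cold-phase fraction x
lhs = (Omega_CIV / Omega_C_sfh - fC_h) / (fC_c - fC_h)
rhs = (Omega_OVI / Omega_O_sfh - fO_h) / (fO_c - fO_h)
assert math.isclose(lhs, x) and math.isclose(rhs, x)
```

The check makes explicit that condition (3) forces the cold-phase mass fraction inferred from C to equal the one inferred from O, which is the content of eq. (\ref{rel}).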
\section{A possible solution}
By solving eq. \ref{rel}, we obtain the results plotted in Fig. 1. The hot phase is characterized by a wide density range, $\log \Delta_h > 0.4$, and a restricted temperature range, $5.8 < \log T_h < 6.4$. We find that: (i) only 5\%-9\% of the produced metals reside in the cold phase, the rest being found in the hot phase; (ii) 1\%-6\% (3\%-30\%) of the
observed \hbox{C~$\scriptstyle\rm IV\ $} (\hbox{O~$\scriptstyle\rm VI\ $}) is in the hot phase. We conclude that more than 90\% of the metals produced during the star formation history can be placed in a hot phase of the IGM,
without violating any observational constraint. To further constrain the hot phase parameter range, we have searched the LP \hbox{C~$\scriptstyle\rm IV\ $} line list for components with large Doppler parameters. We find no lines with $b_{\rm CIV} \ge 26.5$~km~s$^{-1}$, corresponding to $\log T_h > 5.7$; this result seems to exclude the high density and temperature region of
the allowed parameter space in the middle panel of Fig. 1. We checked that the above findings are insensitive to variations of $\Gamma_{12}$ of $\pm 50\%$; however, the \hbox{O~$\scriptstyle\rm VI\ $}/\hbox{C~$\scriptstyle\rm IV\ $} ratios in the cold phase might depend on the UVB shape around 4 Ryd.\\
The derived values of $T_h$ and $\Delta_h$ are suggestive of regions likely to be found around galaxies; moreover, $10^6$~K gas temperature would have a
scale height of $> 10$~kpc, hence it cannot be confined in the disk. To test this hypothesis we resort to cosmological simulations. As an illustration, Fig. 2 shows
the temperature and velocity structure in a 2D cut through the center of a simulated galaxy (we used the multiphase version
[Marri \& White 2003] of the GADGET2 code to
simulate a comoving $10 h^{-1}$~Mpc$^3$ cosmic volume) at redshift $z=3.3$; its total (dark + baryonic) mass is $2\times 10^{11} M_\odot$, the star formation
rate $\approx 20 M_\odot$~yr$^{-1}$. This galaxy has been selected to match LBG properties, but it is not unusual in the simulation volume. As often observed in
LBGs, a strong galactic wind is visible, whose expansion is counteracted by energy losses due to cooling and gravity, and ram pressure exerted by the infalling gas.
Infall is particularly effective at confining the wind into a broadly spherical region of physical radius $\approx 300$~kpc, into which cold infalling
streams of gas penetrate. Inside such a wind-driven bubble the temperature (Fig. 2) is roughly constant at $T\approx 10^6$~K, whereas the density spans values of
$0 < \log \Delta < 5$ [$\Delta(z=3.3)=1$ corresponds to $\approx 2 \times 10^{-5}$~cm$^{-3}$]. The cool phase is evident in the outer boundary
of the bubble, where cooling interfaces arise from the interaction with infalling streams. Hence halos of LBGs seem to meet the requirements as repositories of
missing metals.\\
Additional support for this conclusion comes from studies of the correlation properties of \hbox{C~$\scriptstyle\rm IV\ $} and \hbox{O~$\scriptstyle\rm VI\ $} absorbers (Pichon et al. 2003; Aracil et al. 2004; Bergeron \& Herbert-Fort 2005), which conclude that: (i) \hbox{O~$\scriptstyle\rm VI\ $}
absorption in the lowest density gas is usually (about 2/3 of the time) located within $\approx 300-400$~km~s$^{-1}$ of strong \hbox{H~$\scriptstyle\rm I\ $} absorption lines; (ii) the \hbox{C~$\scriptstyle\rm IV\ $} correlation function is consistent
with metals confined within bubbles of typical (comoving) radius $\approx 1.4h^{-1}$~Mpc in halos of mass $M \ge 5\times 10^{11} M_\odot$ at $z=3$.
If each such object hosts one bubble, the cosmic volume filling factor of metals is $f_Z =11\%$; it follows that the halo metallicity is $\Omega_Z^{sfh}/f_Z \Omega_b=0.165 Z_\odot$.
A temperature of $\log T_h =5.8$ corresponds to \hbox{H~$\scriptstyle\rm I\ $} (\hbox{O~$\scriptstyle\rm VI\ $}) Doppler parameters $b_{\rm HI} =102$ ($b_{\rm OVI}=25.5$)~km~s$^{-1}$
and to $N_{\rm OVI}/N_{\rm HI}=3$; absorbers with $\log N_{\rm OVI} = 13$ are detectable for $b_{\rm OVI} = 25.5$~km~s$^{-1}$,
but the corresponding $\log N_{\rm HI} = 12.4$ ones for $b_{\rm HI} = 102$~km~s$^{-1}$ are not. This raises the possibility of finding \hbox{O~$\scriptstyle\rm VI\ $}
absorbers without associated \hbox{H~$\scriptstyle\rm I\ $}.\\
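The quoted Doppler parameters follow directly from the thermal relation $b=\sqrt{2kT/A\,m_{\rm H}}$; a quick check with standard cgs constants:

```python
import math

# Check of the quoted thermal Doppler parameters at log T_h = 5.8,
# using b = sqrt(2 k T / (A m_H)) with standard cgs constants.
k_B, m_H = 1.381e-16, 1.673e-24   # erg/K, g
T_h = 10.0 ** 5.8                 # K

def b_kms(A_weight):
    """Thermal Doppler parameter in km/s for atomic weight A_weight."""
    return math.sqrt(2.0 * k_B * T_h / (A_weight * m_H)) / 1.0e5

b_HI, b_OVI = b_kms(1.0), b_kms(16.0)
print(f"b_HI = {b_HI:.0f} km/s, b_OVI = {b_OVI:.1f} km/s")  # 102 and 25.5 km/s
```

The factor of 4 between the two values is just $\sqrt{16}$ from the H/O mass ratio.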
\section{Implications}
The scenario proposed leads to several interesting consequences. First, metals produced by LBGs do not seem to be able to escape from their
halos, due to the confining mechanisms mentioned above. This is consistent with the prediction (Ferrara, Pettini \& Shchekinov 2000)
that galaxies of total mass ${\cal M} > 10^{12}(1+z)^{-3/2}M_\odot$ do not eject their metals into the IGM. Interestingly, the
metallicity-mass relation recently derived from the SDSS (Tremonti et al. 2004) shows that galaxies with {\it stellar} masses above $3\times 10^{10}
M_\odot$ (their total mass corresponds to ${\cal M}$ for a star formation efficiency $f_\star = 0.2$) chemically evolve as ``closed boxes,'' {\frenchspacing\it i.e., } they retain their heavy
elements.
Second, the nearly constant ($2 \le z \le 5$, $Z \approx 3.5 \times 10^{-4} Z_\odot$) metallicity of the low column density IGM (Songaila 2001) is naturally explained by the decreasing efficiency of metal loss from larger galaxies. Early pollution from low-mass galaxies allows a sufficient time for metals to cool after ejection; however, the majority of metals in LBG halos at lower redshifts are still too hot to be detected. Hence their contribution to the metallicity evolution of the IGM cannot be identified by absorption line experiments, which mostly sample the cool phase of the forest.
Third, the rapid deceleration of the wind results either in a quasi-hydrostatic halo or in a `galactic fountain' if radiative losses can cool the halo gas. In both cases this material is very poorly bound and likely to be stripped by ram pressure if, as it seems reasonable, the galaxy will be incorporated in the potential well of a larger object (galaxy group or cluster) corresponding to the next level of the hierarchical structure growth. Turbulence and hydrodynamic instabilities associated with this process are then likely to efficiently mix the metals into the surrounding gas within approximately a sound crossing time of $\sim 1$~Gyr, or $\Delta z \approx 0.5$. If metals produced and stored in LBG halos by $z=2.3$ end up in clusters, then the average
metallicity of the intracluster medium is $Z_{ICM}= \Omega_Z^{sfh} /\Omega_{ICM} = 0.31 Z_\odot$, having assumed (Fukugita, Hogan \& Peebles 1998) $\Omega_{ICM}=0.0026 h{^{-1.5}_{70}}$. Not only is this number tantalizingly close to the observed value at $z=1.2$ (Tozzi et al. 2003), but we also predict that little evolution will be found in the ICM metallicity up to
$z \approx 2$ as essentially all the metals that could have escaped galaxies during cosmic evolution had already done so by this epoch.
\acknowledgments
We thank P. Rosati for discussions and
A. Fangano for help with data analysis. We acknowledge partial
support from the Research and Training Network `The Physics of the
Intergalactic Medium' set up by the European Community under the
contract HPRN-CT2000-00126 RG29185. This work has been completed
during the KITP Program ``Galaxy-IGM Interactions'', funded by NSF
Grant PHY99-07949, whose support is kindly acknowledged.
\fontsize{10}{10pt}\selectfont
\section{Introduction}
\label{sec:intro}
Wolf-Rayet (W-R) star mass-loss rates are inferred to be as high as
several times $10^{-5} M_\odot \ yr^{-1}$, or $10^9$ times the mass-loss
rate of the Sun
\citep{morris-etal2000,
nugis-lamers2000}. More importantly, their mass-loss rates are
significantly enhanced over their O-star progenitors, even though
their luminosity is similar. \citet*[][hereafter CAK]{cak} showed
that O and B star winds can be driven by radiation pressure via the
opacity of a large number of UV spectral lines, so the question arises
if W-R winds may also be driven by line opacity
\citep[e.g.,][]{barlowe-etal1981, cassinelli-vanderhucht1987,
lucy-abbott1993, springmann-puls1995}. Note that this would require an
enhancement in the effective line opacity in a W-R star; in
considering how this added opacity may come about, the goal of this
paper is to develop a conceptual language for discussing the
additional difficulties that emerge.
Excellent agreement with observations is being achieved by detailed
models \citep{hillier-miller1999, demarco-etal2000,
crowther-etal2002}, but conceptual uncertainties remain, including
the maximum attainable driving efficiency, and the key differences
between O and W-R winds that generate higher mass loss. Also, it is
not clear how sensitively the models depend on details of the line lists,
and it would be advantageous to generate simplified models
that retain the key physics yet can be extended to
computationally demanding investigations such as hydrodynamic
simulations. As a complement to such conceptually
complex analyses, we will isolate particular wind properties important
for driving W-R winds using simplified analytic expressions and approximations.
Because we are concerned with the mass-loss rate and not the global momentum
deposition (the latter stems in part from the
former), our analysis is restricted to a local model at the critical point,
which is the point deep in the wind where the mass flux is most difficult to drive.
In addition, for simplicity and to get results that are as general as possible,
we avoid identifying model-specific parameters
such as the velocity law
and the location where the critical point appears, as these are not fundamental to
the overall mass driving. The goal is to analyze the impact of
frequency redistribution on the mass flux in any optically
thick line-driven flow, rather than to
model a specific W-R star or create a grid of such models.
Since the role of line opacity is the central issue, and because
the flows are observed to be highly supersonic, considerable
conceptual progress may be made by invoking the Sobolev approximation,
as is typical in OB-star applications.
Some authors
have questioned the use of the Sobolev
approximation in optically thick flows such as Wolf-Rayet winds. For
example \citet{grafener-hamann2005} employ models in which the CAK $\alpha$
parameter, which describes the ratio of optically thick to optically
thin lines, is essentially 0, presumably due to the inclusion of very
large turbulent velocities. \citet{nugis-lamers2002} argue that the
wind mass-loss rate is determined near the sonic point, and therefore use
static Rosseland mean opacities in their analysis (though they use
CAK-type line driving in the wind beyond the sonic point). In
fact the Sobolev approximation only breaks down if optically {\it thick}
lines overlap {\it locally} over their thermal linewidths. The
approximation is not challenged by thin lines, or by thick lines that
overlap only over the full wind broadening of thousands of km
s$^{-1}$ rather than the tens of km s$^{-1}$ thermal speeds. The
amount of local line overlap is
related to the amount of turbulent broadening that is assumed. In
the presence of very strong redistribution, as assumed in \S \ref{sec:ssr},
local line overlap increases the Sobolev optical depths at the expense
of the number of optically thick lines in a complicated way. Thus
we choose to focus on the possibility of relatively weak turbulence and
therefore relatively little local line overlap.
\subsection{The Sobolev Approximation}
\label{subsec:sobolev}
The Sobolev approximation asserts that photon interactions with lines
occur primarily due
to Doppler shifts that appear
by virtue of bulk flow velocities, rather than thermal motions. This
implies that each line interaction is spatially localized. Due to the
expansion of a stellar wind,
a photon in the comoving frame will
experience continuous Doppler redshifting, loosely analogous to the
Hubble redshift, and will
thus be able to eventually resonate with lines over a much wider
frequency interval than the thermal width.
The fractional Doppler shift
is thus limited only by the scale of the velocity changes $\Delta v$,
set by the terminal speed $v_\infty$.
The influence of the thermal speed is to determine the width of
the regions in which a photon may resonate with a given line.
This width is labeled the
Sobolev length $L$ and is given by
\begin{equation}
\label{eq:soblendef}
L = v_{th} \left(\frac{dv}{dr}\right)^{-1} \ ,
\end{equation}
where $v_{th}$ is the thermal speed.
To apply the Sobolev approximation,
we require that the mean free path between resonances with
different lines be much larger than $L$, and this is the sense
in which lines must not overlap.
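For orientation, the Sobolev length is tiny compared with the wind scale; the estimate below uses illustrative wind parameters (a $\sim$20 km s$^{-1}$ thermal speed, a $\sim$2000 km s$^{-1}$ terminal speed, and a $10^{12}$ cm acceleration scale, all assumed here rather than taken from the text):

```python
# Order-of-magnitude Sobolev length L = v_th / (dv/dr).
# All numbers are illustrative assumptions for a generic hot-star wind.
v_th = 2.0e6        # thermal speed, cm/s (~20 km/s)
v_inf = 2.0e8       # terminal speed, cm/s (~2000 km/s)
R = 1.0e12          # acceleration length scale, cm
dv_dr = v_inf / R   # crude velocity gradient, s^-1
L = v_th / dv_dr    # Sobolev length, cm
print(f"L ~ {L:.0e} cm, L/R ~ {L/R:.0e}")
```

With these numbers $L/R \sim v_{th}/v_\infty \sim 10^{-2}$, which is the sense in which each line resonance is spatially localized.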
The primary convenience of this approximation is that it turns the
potentially large number of scatterings within a given resonance
region into a single effective scattering, allowing the radiative
transfer to be modeled as a random walk linking these effective
scatterings. This type of effective opacity is determined not just by
the density, but also by the velocity gradient, due to the continuous
Doppler shifting referred to above. The treatment of this
type of opacity was pioneered by CAK, and the resulting description of
the wind dynamics is termed CAK theory. CAK theory describes a system
in which a typical photon scatters at most once before escaping the
wind. While adequate for OB stars, the large amount of momentum
deposited in W-R winds requires a typical photon to scatter many times
within the wind. This ``multiple scattering'' was introduced by
\citet{friend-castor1983}, verified by \citet{springmann1994} using a
Monte Carlo approach, and further refined by \citet{gayley-etal1995}.
\subsection{CAK Theory}
\label{subsec:caksec}
To apply
CAK theory, it is convenient to scale
the radiative acceleration on lines $g_{rad}$
to the radiative acceleration on free electrons $g_e$ by the factor
$M(t)$, called the force multiplier,
\begin{equation}
\label{eq:mglgr}
M(t) = \frac{g_{rad}} {g_e} \ .
\end{equation}
$M(t)$ depends on the optical depths $\tau_{sob}$ across the Sobolev
length $L$ of all the lines in the distribution, which may be
parametrized by $t$, the column depth across $L$ in electron-scattering
optical-depth units:
\begin{equation}
\label{eq:tdef}
t = \rho \kappa_e L \ ,
\end{equation}
where $\kappa_e$ is the free-electron cross-section per gram.
The Sobolev optical depth of line $i$ is given by
\begin{equation}
\label{eq:tausobdef}
\tau_{sob,i} = \kappa_i \rho L \ ,
\end{equation}
where $\kappa_i$ is the cross-section per gram of line $i$ and $\rho$ is
the local density, so
$\tau_{sob,i} \propto t$.
The heart of the CAK approach is to parametrize $M(t)$ by the relation
\begin{equation}
\label{eq:fmdef}
M(t) = k t^{-\alpha},
\end{equation}
where $k$ and $\alpha$ are obtained from the
line-strength distribution.
Specifically, $\alpha$ is a number
between 0 and 1 that relates to the fraction of lines that are
optically thick.
$M(t)$ depends on $t^{-\alpha}$ because the radiative force is saturated
with respect to lines that are already thick, and so increases in $t$
merely dilute the force on such lines over more material.
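The $t^{-\alpha}$ scaling can be seen numerically: for a power-law line-strength distribution $dN/dq \propto q^{\alpha-2}$ (the standard CAK assumption; the cutoffs and integration grid below are our own illustrative choices), the summed line force recovers the expected logarithmic slope:

```python
import math

# Demonstrate M(t) ~ t^(-alpha) for a power-law line-strength
# distribution dN/dq proportional to q^(alpha-2) (standard CAK form).
# The cutoffs qmin, qmax and the grid are illustrative choices.
alpha = 0.6

def M(t, qmin=1e-4, qmax=1e6, n=20000):
    """Force multiplier: integrate q^(alpha-2) (1 - e^(-q t)) dq / t."""
    lq0, lq1 = math.log(qmin), math.log(qmax)
    total = 0.0
    for i in range(n):
        q = math.exp(lq0 + (lq1 - lq0) * (i + 0.5) / n)
        dq = q * (lq1 - lq0) / n                 # log-grid weight
        total += q ** (alpha - 2.0) * (1.0 - math.exp(-q * t)) * dq / t
    return total

# The log-log slope between two optical depths recovers -alpha
slope = math.log(M(0.1) / M(0.01)) / math.log(10.0)
print(f"slope = {slope:.2f}")   # close to -alpha = -0.6
```

Each line contributes its interaction probability $(1-e^{-qt})$ divided by $t$, so thick lines are diluted while thin lines still add force, and the competition yields the power law.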
As an O star loses its hydrogen envelope and evolves into its W-R
phase, its radius shrinks, and this has two immediate consequences for
its wind. First, the smaller radius concentrates the column depth
and hence increases the optical depth
of a wind of given mass flux.
Second, since W-R stars maintain high luminosity, the reduced radius
implies higher temperatures at the wind base.
The main goal of this work is to understand
how increases in temperature and optical depth affect the star's
capacity for driving mass loss.
\section{Effectively Gray Model}
\label{sec:gray}
An informative first step in such an analysis is to
consider effectively gray opacity, combined with a
non-isotropic diffusion treatment of the radiative transfer.
Indeed, although a
truly gray opacity model requires there be no frequency
dependence at all, in the absence of frequency redistribution
there can be no correlation between the flux $F(\nu)$ and the opacity
$\chi(\nu)$, thus any redistribution-free model is {\em effectively}
gray regardless of the opacity spectrum, neglecting weak
frequency dependences in the diffusivity correction factor $F_{NID}$.
The effectively
gray model then allows CAK-type
theory to apply for appropriately flux-weighted opacities, even in the
limit of multiple scattering in a wind with large total optical depth.
To calculate the momentum
deposition rate for effectively gray opacity, one merely
needs to know the average mean free path {\it between} lines,
rather than within lines.
The effective opacity of the lines
$\chi_{eff}$ is the mean probability of scattering per unit length, and
is given by the expression \citep[e.g.,][]{pinto-eastman2000}
\begin{equation}
\chi_{eff}(\nu) = \frac{dv}{dr}\frac{\nu}{c}\frac{1}{\Delta\nu}
\sum_i \left(1-e^{-\tau_{sob,\;i}}\right) \ , \label{eq:chieff}
\end{equation}
where $i$ represents a given line and the sum is done over all lines
within an arbitrarily coarse-grained frequency step $\Delta\nu$. The
term $\left(1-e^{-\tau_{sob,\;i}}\right)$ is
the probability of interacting with line $i$, and
$\Delta\nu$ is selected such that the sum over $i$ equals unity.
When this criterion would require
$\Delta\nu > \nu v_\infty / c$, i.e., a bin wider than the Doppler shift
over the whole wind, we truncate $\Delta \nu$ at this value.
To emphasize the role of multiple scattering,
we choose to express the
effective opacity in optical-depth units,
\begin{equation}
\label{eq:effopdepthcalc}
\tau_{eff}(\nu) = v_\infty \frac{dr}{dv} \chi_{eff}(\nu) =
\frac{v_\infty}{c} \frac{\nu}{\Delta\nu} \sum_i \left(1 -
e^{-\tau_{sob,\;i}}\right) \ ,
\end{equation}
such that the opacity parameter $\tau_{eff}(\nu)$
estimates the number of mean free paths
due to line opacity over the full radial extent of the wind.
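A minimal sketch of evaluating eq. (\ref{eq:effopdepthcalc}) for a single coarse frequency bin, with assumed (not fitted) Sobolev depths and wind parameters:

```python
import math

# Toy evaluation of the effective optical depth for one coarse
# frequency bin; all numbers below are illustrative assumptions.
v_inf_over_c = 2.0e8 / 3.0e10           # v_inf / c for a ~2000 km/s wind
nu_over_dnu = 50.0                      # nu / Delta-nu for the chosen bin
tau_sob = [5.0, 0.8, 0.1, 12.0, 0.01]   # Sobolev depths of lines in the bin

# Each line contributes its interaction probability (1 - e^-tau):
# thick lines saturate near 1, thin lines contribute ~tau.
p_interact = sum(1.0 - math.exp(-t) for t in tau_sob)
tau_eff = v_inf_over_c * nu_over_dnu * p_interact
print(f"interaction sum = {p_interact:.2f}, tau_eff = {tau_eff:.2f}")
```

The saturation of the $(1-e^{-\tau})$ factors is what makes the effective opacity depend on the number of thick lines rather than their total strength.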
\subsection{The equations}
\label{subsec:linelist}
In the CAK model the mass-loss rate is set at the critical point,
which essentially occurs at the point where the force efficiency is at a
minimum. Any material that crosses the critical point will quickly
accelerate, eventually reaching a terminal velocity $v_\infty$. The
mass-loss rate is assumed to be the maximum value that allows a force
balance at the critical point,
\begin{equation}
\label{eq:forcebaleq1}
v \frac{dv}{dr} = g_{grav} + g_{rad},
\end{equation}
where the force due to gas pressure has been ignored, and $g_{grav}$ is
the acceleration due to gravity:
\begin{equation}
\label{eq:ggravdef}
g_{grav} = -\frac{G \mathcal{M}}{r^2},
\end{equation}
where $G$ is the gravitational constant and $\mathcal{M}$ is the mass of
the star. The radiative acceleration $g_{rad}$ has two terms, one for
the radiative acceleration from free electrons, and one for the
radiative acceleration from lines:
\begin{equation}
\label{eq:gradeqs}
g_{rad} = g_{e} + g_{L} = \frac{\kappa_e L_*}{4 \pi
r^2 c} \left[1 + M(t) F_{NID} \right].
\end{equation}
Here $F_{NID}$ is a specific correction for nonradial radiation in the
non-isotropic diffusion approximation \citep{gayley-etal1995}, as may be
used in the multiscattering Wolf-Rayet domain, although its value near
0.7 at the critical point also applies for the free-streaming radiation
of optically thin applications.
Substituting eqs. (\ref{eq:ggravdef}) and (\ref{eq:gradeqs}) into
eq. (\ref{eq:forcebaleq1}) gives
\begin{equation}
\label{eq:forcebaleq2}
v\frac{dv}{dr} = -\frac{G\mathcal{M}}{r^2} + \frac{\kappa_e L_*}{4 \pi
r^2 c} \left[1 + M(t) F_{NID} \right].
\end{equation}
It is customary to scale to the effective gravity, which includes both
the true gravity and the
radiative force on free electrons, yielding the dimensionless form
\begin{equation}
\label{eq:critpteq}
1 + y = \frac{\Gamma}{1-\Gamma} M(t) F_{NID}.
\end{equation}
The first term on the left-hand side represents effective gravity, and
the second represents the inertia, scaled as
\begin{equation}
\label{eq:ycdef}
y = \frac{r^2 v dv/dr}{G\mathcal{M}(1-\Gamma)}.
\end{equation}
$M(t) F_{NID}$ is the line force, including both the CAK contribution
$M(t)$ and the multiscattering correction $F_{NID}$
\citep{gayley-etal1995}. As usual, $\Gamma$ is the Eddington
parameter, given by
the ratio of the radiative force on free electrons to gravity,
\begin{equation}
\label{eq:edparamdef}
\Gamma = \frac{\kappa_e L_*}{4 \pi G \mathcal{M} c}.
\end{equation}
If $\Gamma > 1$, the radiation pressure on the electrons exceeds
gravity and no hydrostatic solution exists. In a W-R atmosphere
$\Gamma \sim 0.5$, so the atmosphere is static except in regions where
the line opacity augments $\Gamma$, i.e., in the wind.
The relation connecting $t$ to the mass-loss rate $\dot{M}$
may be determined by noting that in
spherical symmetry in a steady state,
\begin{equation}
\label{eq:rhodef}
\dot{M} = 4 \pi r^2 \rho(r) v(r) \ .
\end{equation}
Substituting eqs. (\ref{eq:soblendef}), (\ref{eq:ycdef}), and
(\ref{eq:rhodef}) into eq. (\ref{eq:tdef}) then yields
\begin{equation}
\label{eq:mdot-t}
\dot{M} = \frac{4 \pi G \mathcal{M} (1 - \Gamma) y t}{\kappa_e v_{th}} \ ,
\end{equation}
so maximizing $\dot{M}$ amounts to finding the maximum product
$y_c t_c$ at the critical point that allows a force balance to
be achieved.
\subsection{The force multiplier for a real line list}
\label{subsec:realist}
The line list data was taken from the Kurucz list (accessed from the
web, based on \citealt{kurucz1979}), which includes both the
oscillator strengths and their wavelengths. For the analysis in \S
\ref{sec:ssrop}, data from the Opacity Project
\citep{badnell-seaton2003} is used. Recent studies using more
up-to-date line lists have discovered enhanced opacities due to the
so-called ``iron bump'' at high temperatures \citep{nugis-lamers2002,
hillier2003}. However, this enhancement occurs over a very narrow
temperature range, and there is no mechanism to keep the wind within
this temperature range in the vicinity of the critical point; indeed,
it is the nature of critical points as ``choke points''
to avoid regions of extra
opacity. Instead, the
critical point occurs where the driving efficiency is at a minimum,
so is unlikely to occur within the iron bump.
However, generally elevated opacities at higher temperatures, due
to higher states of Fe ionization (but not a sharp ``bump'' feature),
may give rise to a feedback mechanism, whereby a higher
Sobolev-Rosseland mean opacity increases line blanketing
and leads to a higher temperature, further
enhancing the mass-loss rate, bringing in even higher stages
of Fe and increasing the opacity (Hillier,
private communication).
Line driving depends on the Sobolev optical depths
of these lines, so the density and velocity structure must be
supplied. Also, the atomic level populations must be determined. An
LTE code by Ivaylo Mihaylov (private communication) is used to
approximate the level populations, which only requires specification of
the
radiation temperature and the atomic partition
functions. Table \ref{table:ionizstages} shows the highest ionization states used.
Modifications were made to allow the code to calculate the
resulting effective line optical depth at $10^4$ frequency points
using eq. (\ref{eq:effopdepthcalc}).
\clearpage
\begin{deluxetable}{rrrrrr}
\tablecaption{Highest Ionization Stages of Included Elements
\protect\label{table:ionizstages}}
\tablewidth{0pt}
\tablehead{\colhead{Element} & \colhead{Stage} & \colhead{Element} &
\colhead{Stage} & \colhead{Element} & \colhead{Stage}}
\startdata
H & II & Na & V & Sc & V\\
He & III & Mg & V & Ti & V\\
Li & IV & Al & V & V & V\\
Be & V & Si & V & Cr & V\\
B & V & P & V & Mn & V\\
C & VII & S & V & Fe & IX\\
N & VIII & Cl & V & Co & V\\
O & IX & Ar & V & Ni & V\\
F & V & K & V & Cu & V\\
Ne & IX & Ca & V & Zn & V\\
\enddata
\end{deluxetable}
\clearpage
The code is also used to
calculate $M(t)$ given the hydrogen abundance $X$, helium abundance $Y$,
temperature $T$, CAK electron optical depth $t$, and electron number
density $n_e$. The code was run with $X = 0$ and $Y = 0.98$, to
simulate W-R stars in their helium-dominated ``WNE'' phase. This also
implies that metals comprise the remaining 2\% of the stellar
composition,
a canonical value that is important for line opacity.
The electron number density used in the excitation balance is scaled
proportionally to $t$, such that $n_e = 1\times10^{13}~{\rm cm}^{-3}$ when $t
= 0.01$, which essentially asserts that our wind model has fixed radius
and velocity scales that are roughly characteristic of W-R winds.
This results in $n_e$ values that are rather high for O-star winds,
but this is not viewed as a fundamental difficulty as the driving
efficiency is only weakly related to $n_e$.
At a given temperature, the code can be run for several different
values of $t$, which gives $\alpha$ via the fit of eq.
(\ref{eq:fmdef}). In CAK theory for an optically thick wind irradiated
by a point star, the value of the scaled inertia $y$ at the critical
point is
\begin{equation}
\label{ycritalpha}
y_c = \frac{\alpha}{1 - \alpha} \ ,
\end{equation}
and here this is only slightly modified for nonradial radiation by the
$F_{NID}$ correction factor.
Once $\alpha$ is determined and $y_c$ is found, $\dot{M}$ can be
calculated from eq. (\ref{eq:mdot-t}), where $t$ must satisfy
eq. (\ref{eq:critpteq}).
Table \ref{table:mdot-gray} shows the self-consistent mass-loss rates
for gray models at $T = 4\times10^4K$ (a typical O-star temperature) and
$T = 1.3\times10^5K$, a temperature reflecting the smaller radius yet
comparable luminosity of a W-R star. Table \ref{table:assumpt} lists the
model assumptions. The $4\times10^4K$ model has a mass-loss rate of
about $1.6\times10^{-5}M_\sun $yr$^{-1}$, which is large for an O star,
but not as large as many W-R wind projections.
The $1.3\times10^5K$ gray model, however,
yields a mass-loss rate of about $3.0\times10^{-5}M_\sun $yr$^{-1}$,
which corresponds to standard clumping-modified
W-R mass-loss rates \citep[such as in][]{hillier-miller1999}.
Thus the gray-opacity analysis reveals
an important piece of the puzzle: the higher-temperature
surfaces of helium stars shift the stellar luminosity deeper
into the EUV regime where effective line opacity is enhanced. But the
gray assumption certainly overestimates the line-driving efficiency,
because in reality the flux will tend to be frequency redistributed into
underdense line domains.
To constrain the potential severity of this problem, we now turn to the
opposite limit of extremely efficient frequency redistribution.
\clearpage
\begin{deluxetable}{rrr}
\tablecaption{Model Assumptions
\protect\label{table:assumpt}}
\tablewidth{0pt}
\tablehead{\colhead{$M_*$} & \colhead{$\Gamma$} & \colhead{$\tau_c$}}
\startdata
$12.6M_\sun$ & 0.5 & 10\\
\enddata
\end{deluxetable}
\clearpage
\begin{deluxetable}{rrrrrr}
\tablecaption{Parameters and Gray Mass-Loss Rates
\protect\label{table:mdot-gray}}
\tablewidth{0pt}
\tablehead{\colhead{$T$} & \colhead{$\alpha$} & \colhead{$t$} &
\colhead{$M(t)$} & \colhead{$y_c$} & \colhead{$\dot{M} (M_\sun yr^{-1})$}}
\startdata
$4.0\times10^4$ & 0.59 & 0.035 & 3.5 & 1.4 & $1.6\times10^{-5}$\\
$1.3\times10^5$ & 0.81 & 0.039 & 7.4 & 4.2 & $3.0\times10^{-5}$\\
\enddata
\end{deluxetable}
\clearpage
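The tabulated rates can be reproduced from eqs. (\ref{eq:critpteq}) and (\ref{eq:mdot-t}); the sketch below assumes $\kappa_e=0.2$ cm$^2$ g$^{-1}$ for hydrogen-free gas and takes $v_{th}$ as the hydrogen thermal speed at $T$, choices consistent with, but not stated in, the text.

```python
import math

# Reproduce the gray mass-loss rates (Table "mdot-gray") from the
# critical-point force balance and the mass-flux relation.
# Assumed here: kappa_e = 0.2 cm^2/g for H-free gas, and
# v_th = hydrogen thermal speed at temperature T.
G, M_sun, yr = 6.674e-8, 1.989e33, 3.156e7      # cgs units
k_B, m_H = 1.381e-16, 1.673e-24
M_star, Gamma, F_NID, kappa_e = 12.6 * M_sun, 0.5, 0.7, 0.2

def mdot(T, M_t, t):
    """Mass-loss rate in M_sun/yr given T, the force multiplier M(t), and t."""
    y = Gamma / (1.0 - Gamma) * M_t * F_NID - 1.0   # eq. (critpteq)
    v_th = math.sqrt(2.0 * k_B * T / m_H)           # cm/s
    mdot_cgs = (4.0 * math.pi * G * M_star * (1.0 - Gamma) * y * t
                / (kappa_e * v_th))                 # eq. (mdot-t)
    return mdot_cgs * yr / M_sun

print(f"{mdot(4.0e4, 3.5, 0.035):.1e}")   # ~1.6e-05, matching the table
print(f"{mdot(1.3e5, 7.4, 0.039):.1e}")   # ~2.9e-05, close to the tabulated 3.0e-05
```

The small difference in the second row arises only from using $y$ from the force balance rather than the rounded tabulated $y_c$.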
\section{Results for the SSR Model}
\label{sec:ssr}
The limit of efficient frequency redistribution over a highly diffusive
radiation field in a supersonically expanding wind allows the
application of the statistical Sobolev-Rosseland (SSR) opacity
approximation \citep{onifer-gayley2003}. This approximation asserts that
completely redistributed source functions give rise to a diffusive flux
that is inverse to the local frequency-dependent effective opacity, as
holds for the Rosseland mean in static photospheres, but the effective
opacity is controlled by the Sobolev approximation and is governed by
eq. (\ref{eq:effopdepthcalc}).
Since the radiative acceleration is governed by the frequency-integrated
product of flux times opacity, the inverse correlation between them has
important implications, and CAK theory must be augmented by a
consideration of the line frequency distribution, not just the
line-strength distribution.
Since we wish to focus on the frequency dependence of the flux that is
caused by the frequency dependence of the opacity, it is convenient to
transform to a new frequency-like variable, such that the flux
arising from truly gray opacity would be {\it flat} when
expressed in terms of this variable.
This may be accomplished by introducing
the flux ``filling factor'' $f$ given by
\begin{equation}
f \ = \ \frac{\int_{\nu_{min}}^{\nu} B(\nu')
d\nu'}{\int_{\nu_{min}}^{\nu_{max}} B(\nu) d\nu} \ ,
\label{eq:fdef}
\end{equation}
where $B(\nu)$ is the envelope function expressing the gross
frequency dependence of the stellar flux for gray opacity (approximated
here by the Planck function for simplicity).
Thus the flux
density, per interval of $f$ instead of $\nu$, is constant
everywhere over $f$-space in the absence of non-gray opacity
modifications, and hence provides a useful space to characterize such
modifications.
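A short sketch of the mapping of eq. (\ref{eq:fdef}), using a Planck envelope at an assumed $T=1.3\times10^5$~K over an assumed frequency window:

```python
import math

# Sketch of the flux filling factor f for a Planck envelope B(nu),
# with assumed T and frequency window; integration on a log-nu grid.
h, k_B, T = 6.626e-27, 1.381e-16, 1.3e5     # cgs

def B(nu):
    # Planck shape; constant prefactors cancel in the ratio defining f
    return nu ** 3 / math.expm1(h * nu / (k_B * T))

nu_min, nu_max, n = 1e14, 1e17, 4000
grid = [nu_min * (nu_max / nu_min) ** ((i + 0.5) / n) for i in range(n)]
weights = [nu * math.log(nu_max / nu_min) / n for nu in grid]   # d(nu)
total = sum(B(nu) * dnu for nu, dnu in zip(grid, weights))

def f(nu):
    """Cumulative flux fraction below nu (the filling factor)."""
    return sum(B(x) * dx for x, dx in zip(grid, weights) if x <= nu) / total

# Equal nu-intervals map to very unequal f-intervals
print(f"f(1e15) = {f(1e15):.3f}, f(1e16) = {f(1e16):.3f}")
```

The decade below $10^{15}$ Hz collapses into a sliver of $f$-space, while the decade containing the Planck peak occupies most of it, which is exactly the compression and stretching seen in Figure \ref{fig:4panel1}.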
Regions in frequency space that have a low
incoming flux density map into narrow regions in $f$ space, and regions
of frequency space that have a large incoming flux density map into wide
regions in $f$ space (see Figure
\ref{fig:4panel1}a-d).
The large gap seen on the
left-hand side of Figure \ref{fig:4panel1}a occurs at a region
in frequency space with a low incoming flux density, as seen in Figure
\ref{fig:4panel1}b. Thus it is mapped into a small sliver of
$f$ space, as seen in Figure
\ref{fig:4panel1}c. The opacity spike seen near $\log(\nu) =
15.9$ occurs at the peak of the Planck curve, and therefore is mapped
into a wide region of $f$ space. Figure \ref{fig:4panel1}d
exhibits the expected flat
profile when the frequency dependence of the flux follows $B(\nu)$ from
Figure \ref{fig:4panel1}b.
The primary advantage of working in $f$-space is the convenience of
calculating the radiative acceleration from lines, $g_L$, which
involves flux weighting
the opacity function $\tau(f)$ (here expressed in optical depth units as
per eq. [\ref{eq:effopdepthcalc}]).
The flux-weighted result is
\begin{equation}
\label{force-fspace}
g_L \ \propto \ \int_0^1 F(f) \tau_L(f) df \ ,
\end{equation}
where $F(f)$ describes the frequency dependence of the flux function,
and use of $f$-space allows the structure of $F(f)$ to be induced
by $\tau_L(f)$ independently from the frequency dependence of the
sources. In particular, when redistribution is neglected, $F(f)$ is flat
and the radiative acceleration is determined by the simple integral of
$\tau_L$, whereas in the opposite SSR limit of extreme
redistribution, $F(f)$ is inversely proportional to $\tau_L(f)$
and the entire integrand of eq. (\ref{force-fspace}) becomes flat.
The form of eq. (\ref{force-fspace}) makes it evident that $g_L$ depends
on appropriate mean opacities.
If the goal is to contrast the SSR approximation with the gray result,
the relevant effective line-opacity means, in optical-depth units, are
the gray average $\tau_g$, the SSR flux-weighted mean $\tau_{SSR}$
(including the impact of lines plus continuum),
and the pure continuum mean $\tau_c$ (assumed here to be manifestly
gray). Note that $\tau_g$
and $\tau_{SSR}$ can be determined from the line list and have
the functional form:
\begin{eqnarray}
\tau_g & = & \int_0^1 \tau_L(f) df, \label{eq:taugdef} \\
\tau_{SSR} & = & \left[\int_0^1 \left(\tau_L(f) + \tau_c\right)^{-1}
df\right]^{-1} \ , \label{eq:taussrdef}
\end{eqnarray}
where the latter expresses the above-mentioned inverse scaling of flux
and opacity, in a manner entirely analogous to the static Rosseland mean
but typically generating far larger line opacities as a result of the
supersonic flow.
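To illustrate how the arithmetic mean of eq. (\ref{eq:taugdef}) and the harmonic mean of eq. (\ref{eq:taussrdef}) diverge when gaps are present, consider a hypothetical two-level opacity (the specific values below are ours, not drawn from any line list):

```python
def tau_means(tau_L, tau_c):
    """Arithmetic (gray) and harmonic (SSR, lines plus continuum) means
    of eqs. (taugdef) and (taussrdef), by midpoint quadrature."""
    df = 1.0 / len(tau_L)
    tau_g = sum(t * df for t in tau_L)
    tau_ssr = 1.0 / sum(df / (t + tau_c) for t in tau_L)
    return tau_g, tau_ssr

# Hypothetical two-level line list: half thick lines, half near-gap
n = 1000
tau_L = [20.0] * (n // 2) + [0.1] * (n // 2)
tau_c = 1.0
tau_g, tau_ssr = tau_means(tau_L, tau_c)
efficiency = (tau_ssr - tau_c) / tau_g   # eq. (forceffdef)
```

The harmonic mean is dominated by the near-gap half of the spectrum, so $\tau_{SSR} \ll \tau_g$ and the efficiency of eq. (\ref{eq:forceffdef}) comes out small (about 0.11 here), even though half the spectrum is quite optically thick.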
A key quantity that depends only on these mean opacities is the force
efficiency $\mathcal{E}$, defined as
the ratio of the actual line force to the line force
that would result for a flat $F(f)$ that did not respond to the opacity,
i.e., for an effectively gray force.
When the line acceleration is treated in the SSR approximation, this
yields \begin{equation}
\label{eq:forceffgl}
\mathcal{E} \ = \ \frac{g_{SSR}}{g_{gray}} \ ,
\end{equation}
and removing the continuum opacity to yield line accelerations in units
of mean optical depths gives
\begin{eqnarray}
g_{gray} & \propto & \tau_{gray} - \tau_c \ = \ \tau_g \label{eq:taug}\\
g_{SSR} & \propto & \tau_{SSR} - \tau_c \ , \label{eq:glssr}
\end{eqnarray}
such that
\begin{equation}
\label{eq:forceffdef}
\mathcal{E} \ = \ \frac{\tau_{SSR} - \tau_c}{\tau_g} \ ,
\end{equation}
where $\tau_g$, $\tau_{SSR}$, and $\tau_c$ are as defined above.
An additional simplification may now be added to the analysis.
Once the degree of (anti)correlation between $\tau_L(f)$ and $F(f)$ is
specified, it is no longer necessary for the purposes of eq.
(\ref{force-fspace}) that the proper {\it sequence} in $f$-space be
maintained, and
the frequency-dependent opacities and fluxes may be reordered
arbitrarily into some other $f'$ space so long as the mapping from
$df$ to $df'$ has unit Jacobian. We choose to sort the opacity
distribution in decreasing order of $\tau_L$ (see Figure
\ref{fig:4panel2}), which permits a {\it monotonic} $\tau_L(f')$
distribution over the resorted $f'$ space.
The new distribution over $f'$ generates a new $F(f')$, but
$g_L$ is of course preserved by this simple change of integration
variable.
The advantage of monotonic $\tau_L(f')$ and $F(f')$ is that
they may be approximated by analytic curves, and the properties of those
analytic fits offer insights that are not available from a direct
numerical evaluation of $g_L$.
Figure \ref{fig:4panel2} shows the opacity when sorted in decreasing
order over $f'$-space. We approximate the resulting smooth
opacity curve by an exponential of the form
\begin{equation}
\label{eq:expfiteqn}
\tau(f') \ = \ a e^{-bf'} \ ,
\end{equation}
where $b$ parameterizes the level of nongrayness.
If $b = 0$ the opacity is the same at all frequencies, and the lack of
any gaps implies that this gray result is the most efficient for driving
the wind. As $b$ increases, the importance of gaps increases,
and the wind driving efficiency and mass-loss rate drop.
For large $b$, the exponential falls so rapidly that it generates a
frequency domain that is nearly line free,
the ramifications of which are considered in \citet{onifer-gayley2003}.
The operational values of the opacity scale parameter $a$ and
nongrayness parameter $b$ are chosen to exactly recover both the gray
opacity and the SSR force efficiency $\mathcal{E}$ from numerical
integrations.
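The resorting argument can be checked directly: permuting the $f$-samples leaves the SSR force integral unchanged, and the sorted distribution can then be fit by eq. (\ref{eq:expfiteqn}). A least-squares fit in $\log\tau$ is one simple way to obtain $a$ and $b$ (the text instead fixes them to reproduce $\tau_g$ and $\mathcal{E}$ exactly); the sample opacity below is an arbitrary illustration:

```python
import math

tau_c = 1.0
n = 1000
# An arbitrary, non-monotonic sample opacity over f in [0, 1)
tau_L = [math.exp(1.0 + 2.0 * math.sin(37.0 * i / n)) for i in range(n)]

def ssr_force(tau_samples, tau_c):
    """Eq. (force-fspace) in the SSR limit, where F(f) is proportional to
    1/(tau_L + tau_c); the normalization is irrelevant for the comparison."""
    df = 1.0 / len(tau_samples)
    return sum(t / (t + tau_c) * df for t in tau_samples)

g_before = ssr_force(tau_L, tau_c)
tau_sorted = sorted(tau_L, reverse=True)  # the f -> f' resorting
g_after = ssr_force(tau_sorted, tau_c)    # unchanged: unit-Jacobian remap

# Least-squares fit of log tau(f') = log(a) - b * f'
fs = [(i + 0.5) / n for i in range(n)]
ys = [math.log(t) for t in tau_sorted]
fbar = sum(fs) / n
ybar = sum(ys) / n
slope = (sum((x - fbar) * (y - ybar) for x, y in zip(fs, ys))
         / sum((x - fbar) ** 2 for x in fs))
b_fit = -slope                         # nongrayness parameter
a_fit = math.exp(ybar + b_fit * fbar)  # opacity scale parameter
```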
In the SSR approximation, the diffusive correction $F_{NID}$ is
further altered by the non-gray correction $\mathcal{E}$, such that eq.
(\ref{eq:critpteq}) is replaced by
\begin{equation}
\label{eq:ssrcritpnt}
1 \ + \ y_c \ = \ \frac{\Gamma}{1-\Gamma} M(t) F_{NID} \;\mathcal{E}.
\end{equation}
$\mathcal{E}$ is then calculated from eq. (\ref{eq:forceffdef}), where
$\tau_g$ and $\tau_{SSR}$ are calculated from the line list.
The value of $y$ at the critical
point, $y_c$, is calculated by setting to zero the derivative with
respect to $y$ of eq. (\ref{eq:ssrcritpnt}) at $y = y_c$.
Here $M(t)$ is
the ratio of the gray line force to the free-electron force,
\begin{equation}
\label{eq:fmtgte}
M(t) \ = \ \frac{\tau_g}{\tau_e} \ ,
\end{equation}
where we assume $\tau_e = \tau_c$. We apply the CAK ansatz to obtain
\begin{equation}
\label{eq:mpropy}
M(t) \ \propto \ y^{\alpha} \ \propto \ t^{-\alpha} \ .
\end{equation}
Substituting eq. (\ref{eq:expfiteqn}) into eqs.
(\ref{eq:taugdef}) and (\ref{eq:taussrdef}) then gives
\begin{equation}
\tau_g \ = \ \frac{a}{b} \left( 1 - e^{-b} \right),
\label{eq:taugint}
\end{equation}
and
\begin{equation}
\tau_{SSR} \ = \ \frac{b \tau_c}{\ln \left(\frac{b M + e^b - 1}{b M -
e^{-b} + 1} \right)} \ , \label{eq:taussrint}
\end{equation}
where $M = M(t)$ and eqs. (\ref{eq:fmtgte}) and
(\ref{eq:taugint}) have been used to eliminate $a$.
Equations (\ref{eq:taugint}) and
(\ref{eq:taussrint}) may then be substituted into eq.
(\ref{eq:forceffdef}), yielding
\begin{equation}
\label{eq:febm}
\mathcal{E} \ = \ M^{-1} \left[ \frac{b}{\ln \left(\frac{bM + e^b - 1}{bM
- e^{-b} + 1}\right)} - 1 \right] \ .
\end{equation}
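Eq. (\ref{eq:febm}) is straightforward to evaluate numerically (a sketch with our own function name): it recovers $\mathcal{E} \rightarrow 1$ as $b \rightarrow 0$ and, for $b = 5.2$ and $M = 15$, reproduces the $\mathcal{E} \approx 0.23$ of Table \ref{table:mdot-ssr}.

```python
import math

def force_efficiency(b, M):
    """Eq. (febm): SSR force efficiency for tau(f') = a * exp(-b * f')."""
    R = (b * M + math.exp(b) - 1.0) / (b * M - math.exp(-b) + 1.0)
    return (b / math.log(R) - 1.0) / M
```

As expected, the efficiency decreases monotonically as the nongrayness parameter $b$ grows at fixed $M$.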
To find the values of $y_c$ and $M_c$ at the critical point,
we substitute eqs. (\ref{eq:febm}) and (\ref{eq:mpropy})
into eq. (\ref{eq:ssrcritpnt}) and set the derivative at $y_c$ to
zero, yielding
\begin{equation}
\label{eq:critptcond}
\frac{\Gamma' F_{NID} b^2 M_c \alpha y_c^{-1}
\left[ \left(b M_c - e^{-b} +
1 \right)^{-1} - \left(b M_c + e^b - 1\right)^{-1}\right]}
{\left[\ln \left(\frac{b M_c + e^b - 1}{b M_c - e^{-b}
+ 1} \right) \right]^{2}} \ = \ 1 \ ,
\end{equation}
where $\Gamma' \ = \ \Gamma / (1 - \Gamma)$.
A second constraint on $M_c$ is obtained by solving
eq. (\ref{eq:ssrcritpnt}), after using eqs.
(\ref{eq:febm}) and (\ref{eq:mpropy}), so
\begin{equation}
\label{eq:hdef}
M_c \ = \ \frac{e^{\left(\frac{b}{1 + (1 + y_c) / \Gamma'
F_{NID}}\right)} \left(1 - e^{-b}\right) + 1 - e^b}{\left[1 -
e^{\left(\frac{b}{1 + (1 + y_c) / \Gamma'
F_{NID}}\right)}\right] b}.
\end{equation}
Equations (\ref{eq:critptcond}) and (\ref{eq:hdef}) can be combined and
$y_c$ and $M_c$ found numerically.
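The simultaneous solution may be sketched by bisection on $y_c$, using eq. (\ref{eq:hdef}) to eliminate $M_c$ (the bracket and function names below are our own; with $\Gamma = 0.5$, $F_{NID} = 0.7$, $b = 5.2$, and $\alpha = 0.79$ this recovers $y_c \approx 1.5$ and $M_c \approx 15$, consistent with the SSR row of Table \ref{table:mdot-ssr}):

```python
import math

def m_crit(y, b, gpf):
    """M_c as a function of y_c from eq. (hdef); gpf = Gamma' * F_NID."""
    q = math.exp(b / (1.0 + (1.0 + y) / gpf))
    return (q * (1.0 - math.exp(-b)) + 1.0 - math.exp(b)) / ((1.0 - q) * b)

def crit_lhs(y, b, gpf, alpha):
    """Left-hand side of eq. (critptcond); equals 1 at the critical point."""
    M = m_crit(y, b, gpf)
    lnR = math.log((b * M + math.exp(b) - 1.0)
                   / (b * M - math.exp(-b) + 1.0))
    bracket = (1.0 / (b * M - math.exp(-b) + 1.0)
               - 1.0 / (b * M + math.exp(b) - 1.0))
    return gpf * b**2 * M * alpha * bracket / (y * lnR**2)

def solve_critical(b, gpf, alpha, ylo=0.5, yhi=5.0):
    """Bisection on y_c; the left-hand side decreases across this bracket."""
    for _ in range(80):
        ymid = 0.5 * (ylo + yhi)
        if crit_lhs(ymid, b, gpf, alpha) > 1.0:
            ylo = ymid
        else:
            yhi = ymid
    return 0.5 * (ylo + yhi)

# Gamma = 0.5 gives Gamma' = 1, so gpf = Gamma' * F_NID = 0.7;
# b and alpha are taken from the Kurucz SSR model
y_c = solve_critical(b=5.2, gpf=0.7, alpha=0.79)
M_c = m_crit(y_c, 5.2, 0.7)
```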
The line list is analyzed with specified $t$, $\tau_c$, $T$, and
$n_e$, which yield $\tau_g$, $\tau_{SSR}$, $\mathcal{E}$, and $b$. The
CAK $\alpha$ is determined in the same way as in the gray case, by varying
$t$ and measuring its effect on $M_c$, where $\alpha = -d\ln M / d\ln t$.
The critical point is
found numerically assuming
$F_{NID} = 0.7$, and eq. (\ref{eq:ssrcritpnt}) is then checked for
consistency.
If $1 + y_c \ne \Gamma' F_{NID} M(t) \mathcal{E}$, then
the procedure is repeated with an updated $t$ until eq.
(\ref{eq:ssrcritpnt}) is self-consistent.
Equation (\ref{eq:mdot-t})
then gives the mass-loss rate.
\subsection{Results and Discussion}
\label{sec:ssrmdotsubsec}
The SSR model is most appropriate for a wind that is highly
redistributing and optically thick, such that
photons are quickly shunted into spectral gaps where long
mean-free paths enable them to carry much of the stellar flux, so the
force efficiency drops.
Redistribution therefore significantly reduces the
mass-loss rate relative to gray scattering.
Figures
\ref{fig:4panel1} and \ref{fig:4panel2} show the effective line
optical depth spectrum of an LTE wind with $T = 1.3\times10^5K$ at the
critical point.
The last row in Table \ref{table:mdot-ssr} shows
the parameters that result from such a model, where we have chosen a
characteristic value of the Eddington parameter $\Gamma = 0.5$ (and
the other model parameters listed in Table \ref{table:assumpt}).
The force efficiency
drops to about 23\% of its gray value, which translates into
a mass-loss rate of $4.3\times10^{-6} \ M_\sun \ yr^{-1}$.
This is
significantly below observed W-R mass-loss rates, even with
clumping corrections \citep[e.g.,][]{nugis-etal1998,
hillier-miller1999}. Thus we conclude that if the SSR limit of extreme
frequency
redistribution applies locally in W-R winds, then
line-driving theory cannot explain their
mass-loss rates using the Kurucz line list.
To increase the mass-loss rate via line driving theory, it would be
necessary to either fill the gaps by including additional lines, or to
include the finite time required to redistribute
photons into the opacity gaps by relaxing the CRD approximation, thus
allowing photons to scatter multiple times before a redistribution
occurs.
\clearpage
\begin{figure}
\plotone{f1.eps}
\caption{The conversion of frequency space to flux filling factor
space. Panels (a) and (b) show the opacity and the incoming flux,
respectively, as a function of the frequency. Panels (c) and (d) show
the same opacity and incoming flux in flux-filling-factor space. The
temperature is 130,000K. \protect\label{fig:4panel1}}
\end{figure}
\clearpage
\begin{figure}
\plotone{f2.eps}
\caption{The effective optical depth (a)
is sorted in decreasing order (b). This produces the same force,
since the incoming flux (c and d) is flat in $f$-space.
\protect\label{fig:4panel2}} \end{figure}
\clearpage
\begin{deluxetable}{rrrrrrrrrr}
\tablecaption{Gray and SSR Mass-Loss Rates From the Kurucz List
\protect\label{table:mdot-ssr}} \tablewidth{0pt}
\tablehead{\colhead{$T(K)$} & \colhead{Type} & \colhead{$\alpha$}
& \colhead{$t$} & \colhead{$M(t)$} & \colhead{$\mathcal{E}$}
& \colhead{$a$} & \colhead{$b$} & \colhead{$y_c$} & \colhead{$\dot{M} (M_\sun yr^{-1})$}}
\startdata
$1.3\times10^5$ & Gray & 0.81 & 0.039 & 7.4 & 1.0 &
\multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 4.3 & $3.0\times10^{-5}$\\
$1.3\times10^5$ & SSR & 0.79 & 0.016 & 15 & 0.23 & 14 & 5.2 & 1.5 &
$4.3\times10^{-6}$\\
\enddata
\end{deluxetable}
\clearpage
\section{The Opacity Project Data}
\label{sec:ssrop}
The most obvious way to fill opacity gaps is to find new opacity. To
that end, we have replaced the Kurucz oscillator strengths with
oscillator strengths from the more complete Opacity Project (OP)
\citep{badnell-seaton2003}. Kurucz oscillator strengths were used for
P, Cl, K, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, and Zn, as these elements are
not available through the Opacity Project. In addition, we have
raised the highest stage of iron to XIII. As before, the temperature
is $1.3\times10^5K$. Figure \ref{fig:op1} shows the effective optical
depth as a function of $f$ (compare to Figure \ref{fig:4panel1}c). The
OP list contains about twice as many lines within the wavelength range
and ionization states of Table \ref{table:ionizstages} as the Kurucz
list. As shown in Table \ref{table:mdot-ssr-op}, the SSR mass-loss rate
is $\dot{M} = 7.0\times10^{-6} \ M_\sun \ yr^{-1}$, about
twice as large as the Kurucz value, but still insufficient if line
driving is to describe all but the weakest W-R winds. To achieve
higher mass-loss rates within the CAK regime, another method is needed
to introduce additional lines.
\clearpage
\begin{figure}
\plotone{f3.eps}
\caption{The effective optical depth as a function of the flux filling
factor $f$ for the Opacity Project line list. Compare to Figure
\ref{fig:4panel1}c. \protect\label{fig:op1}}
\end{figure}
\clearpage
\begin{deluxetable}{rrrrrrrrrr}
\tablecaption{Gray and SSR Mass-Loss Rates From the OP List
\protect\label{table:mdot-ssr-op}} \tablewidth{0pt}
\tablehead{\colhead{$T(K)$} & \colhead{Type} & \colhead{$\alpha$}
& \colhead{$t$} & \colhead{$M(t)$} & \colhead{$\mathcal{E}$}
& \colhead{$a$} & \colhead{$b$} & \colhead{$y_c$} & \colhead{$\dot{M} (M_\sun yr^{-1})$}}
\startdata
$4.0\times10^4$ & Gray & 0.55 & 0.029 & 3.16 & 1.0 &
\multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 1.2 & $1.2\times10^{-5}$\\
$1.3\times10^5$ & Gray & 0.73 & 0.13 & 5.24 & 1.0 &
\multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 2.7 & $6.2\times10^{-5}$\\
$1.3\times10^5$ & SSR & 0.72 & 0.035 & 13.5 & 0.22 & 332 & 5.3 & 1.1 &
$7.0\times10^{-6}$\\
\enddata
\end{deluxetable}
\section{Ionization Stratification}
\label{sec:ionstrat}
One way to fill opacity gaps {\it nonlocally}, thereby
increasing the force efficiency, was
suggested by \citet[][hereafter LA93]{lucy-abbott1993} and
involves the appearance of a
large number of additional lines via ionization stratification.
Such an ionization gradient has been observed to be fairly ubiquitous in
W-R stars \citep{herald-etal2000}, and arises in the inner wind from the
significant temperature drop across the optically thick wind
envelope. When such a gradient exists at the spatial scale over which
photons diffuse prior to being redistributed in frequency, local gaps
left by
one ionization state of a given element may be filled
by lines of another state located nearby.
This in effect creates a globally gray line list and mitigates any local
nongrayness, but is only effective over the length scale of frequency
thermalization. Thus the extreme redistribution limit would still reduce
to the local SSR approximation, but finite thermalization lengths would
yield results that are intermediate to the widely differing gray and SSR
mass-loss rates derived above.
\subsection{Two-Domain Model With Ionization Stratification}
\label{2domionstratsubsec}
To obtain conceptual insight into the effect of ionization
stratification on the overall mass-loss rate, we revisit the model of
\citet{onifer-gayley2003}, where the line list is replaced by a simple
model containing two frequency domains.
One domain contains effective
line opacity $\tau_{L1}$, while the other contains effective line
opacity $\tau_{L2}$, both in optical depth units as usual.
There is also a continuum opacity
$\tau_c$ that spans both frequency domains.
This model allows us to identify important characteristics
of the system separately from the details of the line list, and is
simple enough to permit analytic approximation even in the presence of
spatial variations.
To account in a simple conceptual way
for ionization stratification and its potential for filling
the gaps globally, an additional complication is introduced to the
plane-parallel model atmosphere that is used to signify the wind, as
illustrated in Figure \ref{fig:toymodel}. In addition to dividing the
frequency domain into two equal parts (i.e., with $f = 1/2$) and
supplying them with line optical depths $\tau_{L1}$ and $\tau_{L2}$, the
atmosphere is also divided equally into two physical-space regions,
between which the opacities in each frequency domain are {\it
interchanged}. The total continuum optical depth $\tau_c$ pervades all
domains and all regions, such that $\tau_c/2$ is the midpoint of the
atmosphere where the opacity interchange is imposed, and the total optical depth
within a given ionization zone and frequency domain $i$ is $\tau_i = \tau_{Li} + \tau_c / 2$.
This yields a kind of toy model of a wind that is globally gray (as the
total optical depth in both frequency domains is the same), but can be
very nongray locally in each region.
For conceptual purposes, the radiative transfer is treated in
the two-stream approximation,
so that the mean intensity and flux are given in terms of intensity
streams in the inward and outward directions:
\begin{eqnarray}
J_\nu & = & (I_{\nu+} + I_{\nu-}) / 2, \label{eq:jdef}\\
F_\nu & = & I_{\nu+} - I_{\nu-} \ . \label{eq:fluxdef}
\end{eqnarray}
The depth variable is chosen to be the continuum optical depth,
\begin{equation}
\label{eq:taudef}
d\tau \ \equiv \ d\tau_c \ = \ \frac{\tau_c}{\tau_i} d\tau_i \ ,
\end{equation}
where $i$ is 1 or 2 to represent the two frequency domains.
\clearpage
\begin{figure}
\plotone{f4.eps}
\caption{The ionization stratification
model. The y-axis represents $f$-space and the x-axis is measured in
units of the continuum optical depth, from $\tau = \tau_c$ at the left
end to $\tau = 0$ at the right end. The radiation from the star
enters the wind from the left. \protect\label{fig:toymodel}}
\end{figure}
\clearpage
To complete the radiative transfer solution for such an atmosphere, it
is necessary to specify the radiation sources.
Here we assume pure scattering in radiative equilibrium, but as
mentioned above, a key issue is the degree of frequency redistribution
per scattering.
We assume for simplicity that the continuum opacity scatters
coherently.
\citet{pinto-eastman2000} showed that large
amounts of line redistribution have a similar
effect on the overall frequency dependent envelope of the radiative
flux as does continuum redistribution. We allow the line opacity in one domain
to redistribute photons into line opacity in the other domain, using a
simple probabilistic approach.
To seek an extreme redistribution limit in such a model is to assert that all
line scattering is completely redistributing, so that
the probability that any scattering event within a given ionization stratum
will be redistributing is
\begin{equation}
\Lambda_{1} \ = \ \frac{\tau_{L1}}{\tau_{L1} + \tau_c / 2}
\label{eq:lambda1def}
\end{equation}
for frequency domain 1 and
\begin{equation}
\Lambda_{2} \ = \ \frac{\tau_{L2}}{\tau_{L2} + \tau_c / 2}
\label{eq:lambda2def}
\end{equation}
for frequency domain 2, where $\tau_c / 2$ is the continuum optical depth
within a single ionization zone. Thus if the line opacity $\tau_{Li}$ goes to 0,
no redistribution
may occur in that domain, whereas
if $\tau_{Li}$ gets very large, redistribution becomes certain.
However, not all redistribution will result in a change of frequency
domain; this must be accounted for with a separate probability that
obeys the requirement of reciprocity and is independent of the original
state of the photon. Thus the probability of redistributing into a
particular frequency domain, given that a redistribution has occurred,
is
\begin{equation}
w_{1} \ = \ \frac{\tau_{L1}}{\tau_{L1} + \tau_{L2}} \label{eq:w1def}
\end{equation}
for redistribution into domain 1, and
\begin{equation}
w_{2} = \frac{\tau_{L2}}{\tau_{L1} + \tau_{L2}} \label{eq:w2def}
\end{equation}
for domain 2. Therefore, the joint probability that a photon will scatter in frequency
domain $i$, redistribute, and result in frequency domain $j$ is $\tau_i \Lambda_i w_j$.
Note that if $\tau_{Li}$ goes to 0, $w_i$ also goes to
0, and no redistribution can occur into that domain.
What is less obvious is that if $\tau_{Lj}$ goes to 0, then $w_i$ goes
to 1, but $\Lambda_j = 0$, so the joint probability $\Lambda_j w_i$ of
redistributing from frequency domain $j$ into frequency domain $i$,
given that a scattering has occurred in domain $j$, is still 0.
The fact that eqs. (\ref{eq:lambda1def})-(\ref{eq:w2def}) obey
reciprocity may be seen from
\begin{equation}
\label{eq:jointprobratio}
\frac{p_{12}} {p_{21}} \ = \ \frac{\tau_1 \Lambda_1
w_2} {\tau_2 \Lambda_2 w_1} \ = \ 1 \ ,
\end{equation}
where $p_{ij}$ is the complete probability of scattering in frequency
domain $i$ and redistributing
into domain $j$.
This condition is all that is required for eqs.
(\ref{eq:lambda1def})-(\ref{eq:w2def}) to yield the
SSR limit if all optical depths are sufficiently large and all gradients
are sufficiently gradual.
However, it is exactly the impact of more rapid gradients that is being
explored by this simple model.
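The probabilities of eqs. (\ref{eq:lambda1def})--(\ref{eq:w2def}), the reciprocity ratio of eq. (\ref{eq:jointprobratio}), and the redistribution coupling $a$ defined below in eq. (\ref{eq:adef}) can be collected in a short routine (the sample optical depths are hypothetical, chosen only to illustrate the limits discussed above):

```python
def redistribution_params(tau_L1, tau_L2, tau_c):
    """Probabilities of eqs. (lambda1def)-(w2def), the reciprocity ratio
    p12/p21 of eq. (jointprobratio), and the coupling constant a."""
    tau1 = tau_L1 + tau_c / 2.0
    tau2 = tau_L2 + tau_c / 2.0
    lam1 = tau_L1 / tau1              # P(redistribute | scatter, domain 1)
    lam2 = tau_L2 / tau2
    w1 = tau_L1 / (tau_L1 + tau_L2)   # P(land in domain 1 | redistributed)
    w2 = tau_L2 / (tau_L1 + tau_L2)
    recip = (tau1 * lam1 * w2) / (tau2 * lam2 * w1)
    a = ((tau2 / tau_c) ** 2 * lam2 * w1
         + (tau1 / tau_c) ** 2 * lam1 * w2)
    return lam1, lam2, w1, w2, recip, a

# Hypothetical values: a thick-line domain and a thin-line domain
lam1, lam2, w1, w2, recip, a = redistribution_params(10.0, 1.0, 10.0)
therm_depth = a ** -0.5   # depth scale at which cross-redistribution
                          # thermalizes, tau ~ 1/sqrt(a)
```

Note that the reciprocity ratio is identically 1, and that the coupling $a$ vanishes as either line opacity goes to zero, so a wind with one empty frequency domain cannot cross-redistribute at all.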
The radiative transport equations that must be solved are
\begin{eqnarray}
\frac{dF_{\ell1}(\tau)}{d\tau} & = & 2 \frac{\tau_1}{\tau_c} \Lambda_1
w_2
\left( J_{\ell1}(\tau) - J_{\ell2}(\tau) \right), \label{eq:rteq1}\\
\frac{dF_{\ell2}(\tau)}{d\tau} & = & 2 \frac{\tau_2}{\tau_c} \Lambda_2
w_1
\left( J_{\ell2}(\tau) - J_{\ell1}(\tau) \right), \label{eq:rte12}\\
\frac{dF_{r1}(\tau)}{d\tau} & = & 2 \frac{\tau_1}{\tau_c} \Lambda_1 w_2
\left(
J_{r1}(\tau) - J_{r2}(\tau) \right), \label{eq:rteq3}\\
\frac{dF_{r2}(\tau)}{d\tau} & = & 2 \frac{\tau_2}{\tau_c} \Lambda_2 w_1
\left(
J_{r2}(\tau) - J_{r1}(\tau) \right), \label{eq:rteq4}\\
\frac{dJ_{\ell1}(\tau)}{d\tau} & = & \frac{1}{2} \frac{\tau_1}{\tau_c}
F_{\ell1}(\tau), \label{eq:rteq5}\\
\frac{dJ_{\ell2}(\tau)}{d\tau} & = & \frac{1}{2} \frac{\tau_2}{\tau_c}
F_{\ell2}(\tau), \label{eq:rteq6}\\
\frac{dJ_{r1}(\tau)}{d\tau} & = & \frac{1}{2} \frac{\tau_1}{\tau_c}
F_{r1}(\tau), \label{eq:rteq7}\\
\frac{dJ_{r2}(\tau)}{d\tau} & = & \frac{1}{2} \frac{\tau_2}{\tau_c}
F_{r2}(\tau), \label{eq:rteq8}
\end{eqnarray}
where $\ell$ refers to the left side of the configuration space in
Figure \ref{fig:toymodel} and $r$ refers to the right, and 1 denotes the
frequency domain that initially contains $\tau_{L1}$ while 2 denotes the
frequency domain that initially contains $\tau_{L2}$.
These equations may be recast by the
following substitutions:
\begin{eqnarray}
x_\ell(\tau) & \equiv & J_{\ell2}(\tau) - J_{\ell1}(\tau) \ ,
\label{eq:xldef}\\
x_r(\tau) & \equiv & J_{r2}(\tau) - J_{r1}(\tau) \ , \label{eq:xrdef}\\
y_\ell(\tau) & \equiv & \frac{\tau_c}{\tau_1} J_{\ell1}(\tau) +
\frac{\tau_c}{\tau_2} J_{\ell2}(\tau) \ , \label{eq:yldef}\\
y_r(\tau) & \equiv & \frac{\tau_c}{\tau_1} J_{r1}(\tau) +
\frac{\tau_c}{\tau_2} J_{r2}(\tau) \ , \label{eq:yrdef}
\end{eqnarray}
and then eqs. (\ref{eq:rteq1})-(\ref{eq:rteq8}) become
\begin{eqnarray}
\frac{d^2x_{\ell}(\tau)}{d\tau^2} & = & a x_{\ell}(\tau) \ ,
\label{eq:rtxleq}\\
\frac{d^2x_r(\tau)}{d\tau^2} & = & a x_r(\tau) \ , \label{eq:rtxreq}\\
\frac{dy_{\ell}(\tau)}{d\tau} & = & F_{\ell,tot} \ , \label{eq:rtyleq}\\
\frac{dy_r(\tau)}{d\tau} & = & F_{r, tot} \ , \label{eq:rtyreq}
\end{eqnarray}
where
\begin{equation}
\label{eq:adef}
a \ = \ \left( \frac{\tau_2}{\tau_c} \right)^2 \Lambda_2 w_1 + \left(
\frac{\tau_1}{\tau_c} \right)^2 \Lambda_1 w_2 \ .
\end{equation}
The solutions are
\begin{eqnarray}
x_{\ell}(\tau) & = & C_{\ell+} e^{\sqrt{a}\tau} + C_{\ell-}
e^{-\sqrt{a}\tau} \ , \label{eq:xlsoln}\\
x_r(\tau) & = & C_{r+} e^{\sqrt{a}\tau} + C_{r-} e^{-\sqrt{a}\tau} \ ,
\label{eq:xrsoln}\\
y_{\ell}(\tau) & = & B_{\ell} + F_{\ell,tot} \tau \ ,
\label{eq:ylsoln}\\ y_r(\tau) & = & B_r + F_{r, tot} \tau \ .
\label{eq:yrsoln}
\end{eqnarray}
The eight constants $C_{\ell/r,\pm}$, $B_{\ell/r}$, and $F_{\ell/r,tot}$ are
found by using the following eight boundary conditions: ({\it i}) No
photons enter the right side of the wind,
\begin{equation}
\label{eq:bc12}
I_{r-i}(0) = J_{ri}(0) - \frac{1}{2}F_{ri}(0),
\end{equation}
where $i$ is 1 or 2, so this represents two boundary conditions. ({\it
ii}) The fluxes and the mean intensities are
continuous at the midpoint of the wind,
\begin{eqnarray}
F_{\ell i}(\tau_c/2) & = & F_{ri}(\tau_c/2), \label{eq:bc34}\\
J_{\ell i}(\tau_c/2) & = & J_{ri}(\tau_c/2), \label{eq:bc56}
\end{eqnarray}
which gives four more boundary conditions.
({\it iii}) The stellar surface at the left of the atmosphere model is
assumed to be highly redistributing, so that the incoming
intensities on the left are unity,
\begin{equation}
\label{eq:bc78}
I_{\ell+i}(\tau_c) = J_{\ell i}(\tau_c) + \frac{1}{2}F_{\ell i}(\tau_c)
\ = \ 1 \ ,
\end{equation}
which gives the final two.
\clearpage
\begin{figure}
\plotone{f5.eps}
\caption{
The force efficiency as a function of the opacity ratio at
$\tau = 9$, i.e., one continuum mean-free path from the left edge.
The solid lines are the toy model calculations and the dashed lines
are the SSR results. The topmost solid and dashed curves are for
$\tau_{L1} = 10$, the middle are for $\tau_{L1} = 20$, and
the bottom are for $\tau_{L1} = 50$.
The asterisks appear where the atmosphere becomes effectively thick
to redistribution, i.e. $\tau_c = 1/\sqrt{a}$, and the crosses
appear where all frequency domains become optically thick,
i.e., $\tau_{L2} = 1$.
\protect\label{fig:fe-vs-tau-ionstrat}}
\end{figure}
\clearpage
Figure \ref{fig:fe-vs-tau-ionstrat} shows $\mathcal{E}$ a tenth
of the way through the wind, or about a continuum mean-free
path from the left edge.
The solid lines show the analytic solution, and
indicate that there are two circumstances in which a globally gray wind
can have a large force efficiency. Not surprisingly, when
$\tau_{L1} = \tau_{L2}$, the opacity is truly gray,
and $\mathcal{E} = 1$. What is more surprising is that as
$\tau_{L2}$ is made optically thin, $\mathcal{E}$ also increases,
as the flux becomes more gray even though the opacity does not.
This is because line emissivity dominates,
so the line photons can only be redistributed into other
lines.
Thus as $\tau_{L2}$ decreases, there are fewer and fewer lines
available to receive photons from the thick line region, so
in effect the
reciprocity condition prevents photons from leaving the thick line
region even if that region contains highly redistributing opacity.
Indeed, if $\tau_{L2} = 0$, then $\mathcal{E} = 1$ because no
cross-redistribution can occur. This is in stark contrast to the SSR model,
which assumes the limit of strong redistribution.
The asterisks in Figure \ref{fig:fe-vs-tau-ionstrat} denote the
locations in parameter space where the photons thermalize, which can be
seen from the exponents of eqs. (\ref{eq:xlsoln}) and
(\ref{eq:xrsoln}) to be when $\tau \approx 1 / \sqrt{a}$. To the left
of the frequency thermalization point, the photons are not efficiently
redistributed, so little anti-correlation between flux and opacity is
set up, preventing a large drop in $\mathcal{E}$. This drop does
appear to the right of the asterisk,
where strong frequency thermalization occurs. The crosses
denote the locations in parameter space where $\tau_{L2} = 1$, which is
roughly where the force reaches minimum efficiency. As
$\tau_{L2}$ gets larger than 1, the gaps fill in and the wind becomes
more gray, allowing a larger $\mathcal{E}$. Thus $\tau_{L2} = 1$ is
seen as the parameter regime where there is
enough line opacity in the thin-line domain to significantly reduce the
flux in the thick-line domain, but not enough to achieve multiscattering
in the thin-line domain. Thus we find that multiple momentum deposition
becomes most difficult whenever the line opacity is distributed such
that each ionization zone covers
about a single photon mean-free path for a significant flux-weighted
fraction of the frequency spectrum.
{\it Either} less or more nongrayness in the opacity
will achieve higher overall force
efficiencies.
\subsection{Ionization Stratification With Real Line Lists}
\label{subsec:ionstratreal}
With the schematic two-domain results in mind, we now return to the real
line lists.
We have seen that the radiative flux will respond to the opacity as
though it were effectively gray over
scales much shorter than the frequency thermalization length, which
corresponds to some finite range in
temperature and may include multiple ionization strata.
Over this range, frequency thermalization will take hold and
drive the flux toward a coarse-grained version of the SSR limit, so the
opacity must be appropriately averaged over the ionization states
present.
In this picture, the coarse-grained average SSR opacity controls the local flux,
which then interacts with truly local opacity to determine the radiative
acceleration at each point.
Again, we use the model assumptions in Table \ref{table:assumpt}. The
temperature model of LA93, chosen because of its simplicity and
emphasis on fundamental processes rather than its completeness
relative to more sophisticated models,
is used to determine the
characteristic maximum range in temperature, some subset of which
would correspond to the frequency thermalization length. In their
model, the temperature ranges from about $1.35\times10^5K$ at the
stellar surface to about $3.5\times10^4K$ at the free-electron
photosphere. Without more detailed redistribution calculations (such
as carried out by \citealt{sim2004}), the appropriate temperature range
corresponding to a thermalization depth is unclear, so we consider
temperature ranges of $8.0\times10^4K \le T \le 1.3\times10^5K$,
$6.0\times10^4K \le T \le 1.3\times10^5K$, and $4.0\times10^4K \le T
\le 1.3\times10^5K$ in an effort to span the possibilities. Figures
\ref{ld-f-multitemp1}, \ref{ld-f-multitemp2}, and
\ref{ld-f-multitemp3} show the average effective line optical depths
from the Kurucz list
for these three temperature ranges, respectively. Notice that the
gaps begin to fill in as the lower temperature limit reaches
$6.0\times10^4K$, and this continues as still lower temperatures are
included. However, we do not encounter contributions from a large
number of ionization states, in seeming contradiction with LA93 but in
agreement with the results of \citet{sim2004}. For example, the
latter author found that only two stages of iron have a significant
impact on the line-driven mass-loss rate, and this limits the
effectiveness of ionization stratification.
\subsection{Discussion}
\label{ch3iondiscsubsec}
Tables \ref{table:ionstrat} and \ref{table:ionstratop} show the effect
on the force efficiency and mass-loss rate of including a range in
temperatures before applying the SSR flux approximation. The Kurucz
list results are again insufficient to describe W-R winds. The OP list
results go as high as $8.9\times10^{-6}M_\sun yr^{-1}$. While such
mass-loss rates have been observed \citep{kurosawa-etal2002}, for this
to be an upper limit on W-R mass-loss would require large amounts of
clumping correction. This mass-loss rate would also require a flux
thermalization length of about 3 stellar radii to
correspond to the required temperature range in the LA93 model, and
this seems unrealistically large given the large potential for
redistribution found by \citet{pinto-eastman2000}.
\clearpage
\begin{figure}
\plotone{f6.eps}
\caption{The effective optical depth
averaged over the temperature range $8.0\times10^4K \le T \le
1.3\times10^5K$. \protect\label{ld-f-multitemp1}}
\end{figure}
\clearpage
\begin{figure}
\plotone{f7.eps}
\caption{The effective optical depth
averaged over the temperature range $6.0\times10^4K \le T \le
1.3\times10^5K$.\protect\label{ld-f-multitemp2}}
\end{figure}
\clearpage
\begin{figure}
\plotone{f8.eps}
\caption{The effective optical
depth averaged over the temperature range $4.0\times10^4K \le T \le
1.3\times10^5K$. \protect\label{ld-f-multitemp3}}
\end{figure}
\clearpage
\begin{deluxetable}{rrrrrrrrr}
\tablecaption{Mass-Loss Rates With Ionization Stratification (Kurucz List)
\protect\label{table:ionstrat}}
\tablewidth{0pt}
\tablehead{\colhead{$\Delta T(K)$} & \colhead{$\alpha$} &
\colhead{$t$} & \colhead{$M(t)$} & \colhead{$\mathcal{E}$} &
\colhead{$a$} &
\colhead{$b$} & \colhead{$y_c$} & \colhead{$\dot{M} (M_\sun yr^{-1})$}}
\startdata
$1.3\times10^5 - 8.0\times10^4$ & 0.79 & 0.017 & 14.5 & 0.25 & 32 &
5.0 & 1.6 & $4.9\times10^{-6}$\\
$1.3\times10^5 - 6.0\times10^4$ & 0.79 & 0.018 & 13.8 & 0.29 & 50 &
4.6 & 1.8 & $5.7\times10^{-6}$\\
$1.3\times10^5 - 4.0\times10^4$ & 0.80 & 0.020 & 12.7 & 0.34 & 70 &
4.3 & 2.0 & $7.1\times10^{-6}$\\
\enddata
\end{deluxetable}
\clearpage
\begin{deluxetable}{rrrrrrrrr}
\tablecaption{Mass-Loss Rates With Ionization Stratification (OP List)
\protect\label{table:ionstratop}}
\tablewidth{0pt}
\tablehead{\colhead{$\Delta T(K)$} & \colhead{$\alpha$} &
\colhead{$t$} & \colhead{$M(t)$} & \colhead{$\mathcal{E}$} &
\colhead{$a$} &
\colhead{$b$} & \colhead{$y_c$} & \colhead{$\dot{M} (M_\sun yr^{-1})$}}
\startdata
$1.3\times10^5 - 8.0\times10^4$ & 0.72 & 0.037 & 13.0 & 0.24 & 260 &
5.2 & 1.1 & $7.6\times10^{-6}$\\
$1.3\times10^5 - 6.0\times10^4$ & 0.72 & 0.039 & 12.5 & 0.25 & 260 &
5.1 & 1.2 & $8.3\times10^{-6}$\\
$1.3\times10^5 - 4.0\times10^4$ & 0.72 & 0.041 & 12.0 & 0.26 & 298 &
4.9 & 1.2 & $8.9\times10^{-6}$\\
\enddata
\end{deluxetable}
\clearpage
\section{Conclusions}
\label{sec:concl}
The starting point of this analysis was to recognize that as long as ion
thermal speeds are not artificially enhanced by turbulent processes
\citep[as considered by][]{hamann-grafener2004},
the Sobolev approximation is entirely applicable
to optically thick W-R winds.
Thus CAK theory is applicable after modifying for the effects of
diffuse radiation on the ionization and the angle dependence of
the radiative flux, and it is found that lines are able to drive
abundant mass loss under
hot W-R plasma conditions. However, the
ubiquitous presence of frequency redistribution in an
optically thick flow introduces significant
challenges to achieving large line-driven W-R mass-loss rates, and so
the rate of frequency
thermalization is critical for quantifying how redistribution affects
the efficiency of line driving.
With rapid enough thermalization, the radiative flux avoids large
opacity domains, resulting in force efficiencies well below what is
needed to
drive the observed W-R mass-loss rates.
This drop in force
efficiency in a highly redistributing optically thick
wind may only be avoided by filling the gaps
locally, which requires the discovery of new lines in regions of the
spectrum that are less densely packed.
Our test calculation demonstrated that the greatest challenge to
driving efficiency is
presented by spectral domains in which optically thick lines barely
overlap over the wind terminal speed,
since regions where the lines are sparse are less likely to
have flux redistributed into them, and regions where the lines are dense are
already effective at line driving.
If, on the other hand, the thermalization rate is slow enough to sample a wide
range in ionization strata, then gray-type force efficiencies may in principle be
recovered. However, in practice the line lists do not appear to exhibit
sufficiently rich contributions from the many different ionization
states to achieve widespread filling-in of the spectrum.
As a result, frequency redistribution into domains with relatively
little line overlap continues to present a severe challenge to
obtaining line-driven winds with large frequency-averaged optical depth,
as would be required to attain the largest of the Wolf-Rayet
mass-loss rates
inferred from observations.
In short, there still do not exist {\it a priori} opacity-driven models
of supersonically accelerating optically thick winds with low turbulent broadening
and non-static opacity treatments that can
explain how a smaller and hotter W-R star can have a dramatically enhanced mass-loss
rate.
Either the opacity is still incomplete and new lines are capable of filling in the
line-poor domains, or else clumping corrections reduce the need for W-R mass fluxes
to substantially exceed those of their cooler, larger, and H-rich cousins, the extreme
Of stars.
In addition, it should be noted that up-to-date opacity treatments from the
Opacity Project \citep{badnell-seaton2003}
and the OPAL opacity tables \citep{rogers-inglesias1992} are of static type,
and so are not, in their purest form, applicable to W-R winds.
They must first be
re-evaluated as expansion-type opacities, such
as the method of \citet{jeffery1995} or the SSR mean used
here, possibly also incorporating
more exact radiative transfer
\citep{pinto-eastman2000} or non-Sobolev opacity corrections
\citep{wehrse-etal2003},
before they may be appropriately applied to further investigations into
the role of high-temperature opacity contributions (such as the ``iron bump'')
in explaining high W-R mass fluxes.
Furthermore, the critical point must not be artificially placed in regions of
exceptionally high opacity, as this would belie the meaning of the critical
point as the ``choke point'' of wind acceleration.
It must also be mentioned that although clumping corrections reduce both
the inferred W-R mass-loss rates and the difficulty in explaining them
with existing line opacities, clumping itself may introduce
dynamical challenges. This is not a problem for clumps smaller than the
Sobolev length $L$, but larger clumps will reduce the force
efficiency in ways that are classifiable according to whether the
clumps are optically thin or thick to most photons. When the clumps are
thin, their impact is felt only through the role of density and velocity
in standard CAK theory, but when the clumps are thick
\citep[e.g.,][]{brown-etal2004}, additional reductions in the force
efficiency must appear owing to the feedback onto the self-consistent
radiative flux, in a manner again similar to the spirit of a Rosseland
mean \citep{owocki-etal2004}. The development and dynamical
implications of such clumps require detailed radiation hydrodynamic
simulations, but the simplified approaches developed here may be used
to guide approximations that make such a time-dependent calculation
\citep[e.g.,][]{baron-hauschildt2004, grafener-hamann2005} computationally tractable.
Finally, the most important goal of this paper has been to develop a conceptual
vocabulary to discuss the circumstances under which a W-R wind may or
may not be efficiently driven by Sobolev-type line opacity.
Key elements of this vocabulary include the effectively gray optical
depth $\tau_g$, the nongray force efficiency
$\mathcal{E}$, the nongrayness parameter $b$ for the monotonically
reordered line distribution, and the range of ionization strata that
contribute within a photon frequency thermalization length.
Estimations of these parameters offer conceptual insights into
classifying various physical behaviors, both before and after carrying
out
optically thick radiation hydrodynamical simulations.
\acknowledgments{The authors would like to thank John Bjorkman and
Ivaylo Mihaylov for code contributions and discussion, and John
Hillier for insightful comments. This project was supported by the
National Science Foundation (AST 00-98155). Portions of this work
were performed under the auspices of the U.S. Department of Energy
by Los Alamos National Laboratory under contract No. W-7405-ENG-36}
\section{Introduction}
In statistical physics,
calculating the free energy of solvable lattice models
at finite temperature is one of the
important problems.
For this purpose, thermodynamic Bethe ansatz (TBA)
equations have been often used \cite{Ta99}. In general,
the TBA equations are an infinite number of coupled nonlinear
integral equations (NLIE) with an infinite number of unknown functions.
It is therefore desirable to reduce TBA equations
to a finite number of coupled NLIE with a finite number of
unknown functions.
Destri, de Vega \cite{DD92} and Kl\"umper \cite{K93,K92}
proposed NLIE with two (or one \footnote{ if an integral contour
with a closed loop is adopted})
unknown functions for the $XXZ$ (or $XYZ$) model.
Generalizing their NLIE to models
whose underlying algebras have arbitrary rank seems to be
a difficult problem, as considerable trial and error is needed
to find the auxiliary functions required to derive the NLIE.
Thus, NLIE of the above-mentioned type exist only
for models whose underlying algebras have
at most rank 3 (for example, \cite{KWZ97,FK99,D05}).
Several years ago,
Takahashi discovered \cite{Ta01} another NLIE for the
$XXZ$ model while simplifying the TBA equations.
Later, the same NLIE was rederived \cite{TSK01} from fusion relations
($T$-system) \cite{KR87}
among quantum transfer matrices (QTM) \cite{S85}.
In addition, it was also rederived \cite{KW02} for the $XXX$ model
from a fugacity expansion formula.
In view of these situations, we have derived NLIE of Takahashi type for
the $osp(1|2s)$ model \cite{T02}, the $sl(r+1)$ model \cite{T03},
the higher spin Heisenberg model \cite{T04},
the $U_{q}(\widehat{sl}(r+1))$ Perk-Schultz model \cite{TT05}.
In these cases,
the number of unknown functions and NLIE coincide with the rank of the
underlying algebras. In this paper, we will further derive NLIE with a finite
number of unknown functions
for the $U_{q}(\widehat{sl}(r+1|s+1))$ Perk-Schultz model \cite{PS81,Sc83},
which is a multicomponent generalization of the 6-vertex model and
one of the fundamental solvable lattice models in statistical mechanics.
For example, a special case of this model is related to the
supersymmetric $t-J$ model, which is important in
strongly correlated electron systems.
In section 2, we introduce the $U_{q}(\widehat{sl}(r+1|s+1))$ Perk-Schultz model,
and define the QTM for it.
As a summation over tableaux labeled by the $a \times m$ Young (super) diagram, we
introduce an auxiliary function (\ref{DVF}) \cite{T97,T98}
which includes an eigenvalue formula (\ref{QTM-eigen}) of the QTM
as a special case.
We also introduce a system of functional relations ($T$-system) which is satisfied by
this auxiliary function.
In section 3, we derive two kinds of NLIE which contain only $r+s+1$ unknown functions.
The first ones (\ref{nlie-general}), (\ref{nlie-generalb})
reduce to the NLIE for the $U_{q}(\widehat{sl}(r+1))$ Perk-Schultz model
in \cite{TT05} if $s=-1$.
However our new NLIE are not straightforward generalization of the ones in
our previous paper \cite{TT05}.
In fact for $r,s \ge 0$ case,
a straightforward generalization of our previous NLIE
becomes a system of an infinite number of coupled NLIE which contains an
infinite number of unknown functions
(see (\ref{nlie4})).
To overcome this difficulty, we will use the
quantum (supersymmetric) Jacobi-Trudi and Giambelli formula
(\ref{jacobi-trudi}) and
a duality (\ref{dual}) for the auxiliary function,
from which a closed set of NLIE can be derived.
We will also propose another NLIE (\ref{nlie-xi=-1}) and (\ref{nlie-xi=-1b})
in the latter part of
section 3, which have never been considered before
even for the $U_{q}(\widehat{sl}(2))$ case.
In deriving the NLIE, we assume that $q$ is generic.
However we expect that our results can be also analytically continued to
the case where $q$ is a root of unity.
In section 4, we calculate the high temperature expansion of the
free energy based on our NLIE.
In particular, we can derive coefficients (\ref{coe1})-(\ref{coe5})
up to the order of 5 for the arbitrary rank $r+s+1$.
The point is that if we fix the degree of the high temperature expansion,
we can write down a general formula of the coefficients.
On the other hand, if we specialize the parameters, we can
derive the coefficients to much higher order. For example,
for $(r,s)=(2,-1),(-1,2)$, $q=1$, $\mu_{a}=0$,
coefficients of the high temperature expansion of the specific heat
up to order 40 are presented in the appendix.
It will be difficult to derive the coefficients of such a
high order by other methods.
Section 5 is devoted to concluding remarks.
\section{The Perk-Schultz model and the quantum transfer matrix method}
In this section, we will introduce the $U_{q}(\widehat{sl}(r+1|s+1))$
Perk-Schultz model
\footnote{$U_{q}(\widehat{sl}(r+1|s+1))$ is a quantum affine superalgebra,
which characterizes the $R$-matrix of this model.
See for example, \cite{Y99}.
We assume $\eta \in {\mathbb R}$ ($q=e^{\eta}$).
A rational limit ($q \to 1$) of the Perk-Schultz model is
the Uimin-Sutherland model \cite{U70,S75}.}
\cite{PS81,Sc83} and
the quantum transfer matrix (QTM) method
\cite{S85,SI87,K87,SAW90,K92,K93}
for it.
The QTM method was applied to the Perk-Schultz model
in ref. \cite{KWZ97}
(see also, ref. \cite{JKS97,JKS98,FK99}).
Let us introduce three sets $B=\{1,2,\dots,r+s+2\}=B_{+}\cup B_{-}$,
where $B_{+} \cap B_{-}=\emptyset$, $|B_{+}|=r+1$ and $|B_{-}|=s+1$
($r,s \in {\mathbb Z}_{\ge -1}$).
We define a grading parameter $p(a)$ ($a \in B$) such that
$p(a)=0$ for $a \in B_{+}$ and
$p(a)=1$ for $a \in B_{-}$.
The $R$-matrix of the $U_{q}(\widehat{sl}(r+1|s+1))$ Perk-Schultz model \cite{PS81}
is given as
\begin{eqnarray}
R(v)=
\sum_{a_{1},a_{2},b_{1},b_{2}\in B}
R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)
E^{a_{1},a_{2}}\otimes E^{b_{1},b_{2}},
\end{eqnarray}
where $E^{a,b}$ is a $r+s+2$ by $r+s+2$ matrix
whose $(i,j)$ element is given as
$(E^{a,b})_{i,j}=\delta_{ai}\delta_{bj}$;
$R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ is defined as
\begin{eqnarray}
&& R^{a,a}_{a,a}(v)=[(-1)^{p(a)}v+1]_{q}, \\
&& R^{a,b}_{a,b}(v)=(-1)^{p(a)p(b)}[v]_{q} \quad (a \ne b), \\
&& R^{b,a}_{a,b}(v)=q^{\mathrm{sign}(a-b)v}
\quad (a \ne b), \label{R-mat}
\end{eqnarray}
where $v \in \mathbb{C}$ is the spectral parameter;
$a,b \in B$;
$[v]_{q}=(q^{v}-q^{-v})/(q-q^{-1})$;
$q=e^{\eta}$.
Note that this $R$-matrix reduces to the one for the well known 6-vertex model
if $(r,s)=(1,-1)$.
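As an independent numerical sanity check (our own illustration in Python, not part of the paper), the following sketch builds this $R$-matrix for the 6-vertex case $(r,s)=(1,-1)$, where all gradings $p(a)$ vanish, and verifies that $R(0)$ is the permutation operator and that the Yang-Baxter equation $R_{12}(u-v)R_{13}(u)R_{23}(v)=R_{23}(v)R_{13}(u)R_{12}(u-v)$ holds; the values of $\eta$, $u$, $v$ are arbitrary sample choices.

```python
import numpy as np

eta = 0.3                      # q = e^eta; sample value, q generic
q = np.exp(eta)

def box(v):
    """[v]_q = (q^v - q^{-v})/(q - q^{-1})."""
    return (q**v - q**(-v)) / (q - 1.0/q)

def R(v, n=2):
    """Perk-Schultz R-matrix for (r,s) = (1,-1): all p(a) = 0."""
    M = np.zeros((n*n, n*n))
    for a in range(n):
        for b in range(n):
            if a == b:
                M[a*n+a, a*n+a] = box(v + 1)             # [v+1]_q
            else:
                M[a*n+b, a*n+b] = box(v)                 # [v]_q
                M[b*n+a, a*n+b] = q**(np.sign(a-b)*v)    # q^{sign(a-b)v}
    return M

I2 = np.eye(2)
P = R(0.0)                     # R(0) is the permutation operator
def R12(v): return np.kron(R(v), I2)
def R23(v): return np.kron(I2, R(v))
def R13(v):
    P23 = np.kron(I2, P)       # conjugate R12 by the 2-3 swap
    return P23 @ R12(v) @ P23

u, v = 0.7, 0.2
assert np.allclose(R12(u-v) @ R13(u) @ R23(v),
                   R23(v) @ R13(u) @ R12(u-v))           # Yang-Baxter equation
```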
Let $L$ be a positive integer (the number of lattice sites).
The row-to-row transfer matrix on $({\mathbb C}^{r+s+2})^{\otimes L}$
is defined as
\footnote{The lower index $i,j$ of $R_{ij}(v)$ is used as follows:
for example, $E^{a,b}_{k}$
is defined on $({\mathbb C}^{r+s+2})^{\otimes (L+1)}$:
$E^{a,b}_{k}=I^{\otimes k}\otimes E^{a,b}\otimes I^{\otimes (L-k)}$,
where $I$ is the $r+s+2$ by $r+s+2$ identity matrix;
$k=0,1,\dots, L$.
Then
$R_{ij}(v)$ is defined as
$
R_{ij}(v)=\sum_{a_{1},a_{2},b_{1},b_{2}}
R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)
E^{a_{1},a_{2}}_{i} E^{b_{1},b_{2}}_{j}
$. The trace ${\mathrm tr}_{0}$ is
taken over the auxiliary space indexed by $0$.}
\begin{eqnarray}
t(v)={\mathrm tr}_{0}(R_{0L}(v)
\cdots R_{02}(v)R_{01}(v)).
\label{rtr}
\end{eqnarray}
The main part of the Hamiltonian is proportional to
the logarithmic derivative of the row-to-row transfer matrix (\ref{rtr}):
\begin{eqnarray}
&& \hspace{-20pt}
H_{body}=\frac{J\sinh \eta}{\eta}\frac{d}{dv}\log t(v) |_{v=0}
= J\sum_{j=1}^{L}\biggl\{
\cosh \eta \sum_{a \in B} (-1)^{p(a)}E^{a,a}_{j}E^{a,a}_{j+1} +
\nonumber \\ &&
\sum_{
{\scriptsize \begin{array}{c}
a, b \in B \\
a\ne b
\end{array}}
}
\left( {\rm sign}(a-b) \sinh \eta
E^{a,a}_{j}E^{b,b}_{j+1} +
(-1)^{p(a)p(b)}E^{b,a}_{j}E^{a,b}_{j+1}
\right)
\biggl\}, \label{ham0}
\end{eqnarray}
where we adopt the periodic boundary condition
$E^{a,b}_{L+1}=E^{a,b}_{1}$.
Without breaking the integrability, we can also add the chemical
potential term
\begin{eqnarray}
H_{ch}=-\sum_{j=1}^{L}\sum_{a \in B}\mu_{a}E^{a,a}_{j} \label{hamch}
\end{eqnarray}
to $H_{body}$. Then the total Hamiltonian is $H=H_{body}+H_{ch}$.
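As an independent check of (\ref{ham0}) (our illustration, with sample values of $J$ and $\eta$), the sketch below builds the local Hamiltonian density $h_{j,j+1}$ for the 6-vertex case $(r,s)=(1,-1)$ directly from the formula and compares it with $\frac{J\sinh\eta}{\eta}\,P\,R'(0)$, the two-site contribution of the logarithmic derivative of (\ref{rtr}), with $P=R(0)$ the permutation operator and $R'(0)$ evaluated by a central difference.

```python
import numpy as np

eta, J = 0.3, 1.0              # sample values
q = np.exp(eta)
n = 2                          # (r,s) = (1,-1): all gradings p(a) = 0

def box(v):
    return (q**v - q**(-v)) / (q - 1.0/q)

def R(v):
    M = np.zeros((n*n, n*n))
    for a in range(n):
        for b in range(n):
            if a == b:
                M[a*n+a, a*n+a] = box(v + 1)
            else:
                M[a*n+b, a*n+b] = box(v)
                M[b*n+a, a*n+b] = q**(np.sign(a-b)*v)
    return M

# local density h_{j,j+1} read off from (ham0), acting on sites j, j+1
h = np.zeros((n*n, n*n))
for a in range(n):
    for b in range(n):
        if a == b:
            h[a*n+a, a*n+a] = np.cosh(eta)              # cosh(eta) E^{aa}E^{aa}
        else:
            h[a*n+b, a*n+b] = np.sign(a-b)*np.sinh(eta) # sign(a-b) sinh(eta)
            h[b*n+a, a*n+b] = 1.0                       # E^{ba}E^{ab} hopping
h *= J

# compare with J*(sinh eta / eta) * P R'(0), derivative by central difference
P = R(0.0)
eps = 1e-6
Rp0 = (R(eps) - R(-eps)) / (2*eps)
assert np.allclose(h, J*np.sinh(eta)/eta * P @ Rp0, atol=1e-6)
```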
To treat the model at finite temperature $T$,
we introduce the so-called quantum transfer matrix (QTM)\cite{S85}:
\begin{eqnarray}
&& \hspace{-30pt} t_{\mathrm{QTM}}(v)=\sum_{\{\alpha_{k}\},\{\beta_{k}\}}
t_{\mathrm{QTM}}(v)
^{\{\beta_{1},\dots, \beta_{N} \}}
_{\{\alpha_{1},\dots,\alpha_{N} \}}
E^{\beta_{1}\alpha_{1}}_{1}
E^{\beta_{2}\alpha_{2}}_{2}
\cdots
E^{\beta_{N}\alpha_{N}}_{N}, \label{QTM} \\
&& \hspace{-46pt}
t_{\mathrm{QTM}}(v)^{\{\beta_{1},\dots, \beta_{N} \}}
_{\{\alpha_{1},\dots,\alpha_{N} \}}=
\sum_{\{\nu_{k}\}}e^{\frac{\mu_{\nu_{1}}}{T}}
\prod_{k=1}^{\frac{N}{2}}
R^{\beta_{2k},\nu_{2k+1}}_{\alpha_{2k},\nu_{2k}}(u+iv)
\widetilde{R}^{\beta_{2k-1},\nu_{2k}}_{\alpha_{2k-1},\nu_{2k-1}}(u-iv),
\nonumber
\end{eqnarray}
where $N \in 2{\mathbb Z}_{\ge 1} $ is the Trotter number;
$\nu_{N+1}=\nu_{1}$; $\nu_{k},\alpha_{k},\beta_{k}
\in B$; $u=-\frac{J \sinh \eta }{\eta N T}$;
$\widetilde{R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=
R^{b_{1},a_{2}}_{b_{2},a_{1}}(v)$ is the \symbol{"60}$90^{\circ}$ rotation' of $R(v)$.
We can express \cite{S85} the free energy per site
in terms of only the largest eigenvalue $\Lambda_{1}$ of
the QTM (\ref{QTM}) at $v=0$:
\begin{eqnarray}
f=
-T\lim_{N\to \infty}\log \Lambda_{1},
\label{free-en-qtm}
\end{eqnarray}
where the Boltzmann constant is set to $1$.
Due to the Yang-Baxter equation, the QTM (\ref{QTM}) forms a
commuting family for any $v$.
Thus it can be diagonalized by the
Bethe ansatz.
The eigenvalue formula
\footnote{To be precise,
this formula is a conjecture
for general parameters $r,s,q,\mu_{a},N$.
In \cite{KWZ97}, the
algebraic Bethe ansatz for a one particle state was
executed for the QTM of the $U_{q}(\widehat{sl}(r+1|s+1))$ Perk-Schultz model.
As for the $U_{q}(\widehat{sl}(2))$ case, a proof of this formula by
the algebraic Bethe ansatz is similar to the
row-to-row transfer matrix case (cf. \cite{GKS04}).
This formula has a quite natural form (dressed vacuum form)
from a point of view of the analytic Bethe ansatz \cite{R83,KS95}.
An eigenvalue formula of the row to row transfer matrix (\ref{rtr})
was derived in \cite{BVV82,Sc83}. It has essentially same form as
(\ref{QTM-eigen}) except for a part which is related to
the vacuum eigenvalue.
There is also support by numerical calculations for small
$r,s$.}
of the QTM (\ref{QTM}) will be (cf. \cite{KWZ97,FK99})
\begin{eqnarray}
T^{(1)}_{1}(v)=\sum_{a\in B}z(a;v),
\label{QTM-eigen}
\end{eqnarray}
where
\begin{eqnarray}
&& z(a;v)=\psi_{a}(v) \xi_{a}
\nonumber \\
&& \times
\frac{Q_{a-1}(v-\frac{i\sum_{j=1}^{a-1}(-1)^{p(j)}}{2}-i(-1)^{p(a)})
Q_{a}(v-\frac{i\sum_{j=1}^{a}(-1)^{p(j)}}{2}+i (-1)^{p(a)})}
{Q_{a-1}(v-\frac{i\sum_{j=1}^{a-1}(-1)^{p(j)}}{2})
Q_{a}(v-\frac{i\sum_{j=1}^{a}(-1)^{p(j)}}{2})}, \nonumber \\
&& Q_{a}(v)=\prod_{k=1}^{M_{a}}\sin \eta(v-v_{k}^{(a)}),
\\
&& \psi_{a}(v)=e^{\frac{\mu_{a}}{T}}
\phi_{-}(v-i(-1)^{p(1)}\delta_{a,1})
\phi_{+}(v+i(-1)^{p(r+s+2)}\delta_{a,r+s+2}),
\nonumber \\
&& \hspace{20pt}
\phi_{\pm}(v)=\left(
\frac{\sin \eta (v\pm iu)}{\sinh \eta }\right)^{\frac{N}{2}},
\nonumber
\end{eqnarray}
where $M_{a}\in {\mathbb Z}_{\ge 0}$; $Q_{0}(v)=Q_{r+s+2}(v)=1$.
$\xi_{a} \in \{-1,1\}$ is a parameter which depends on the grading
parameter $\{p(b)\}_{b \in B}$.
$\{v^{(a)}_{k}\}$ is a root of the Bethe ansatz equation
(BAE)
\begin{eqnarray}
&& \hspace{-20pt}
\frac{\psi_{a}(v^{(a)}_{k}+\frac{i}{2}\sum_{j=1}^{a}(-1)^{p(j)})}
{\psi_{a+1}(v^{(a)}_{k}+\frac{i}{2}\sum_{j=1}^{a}(-1)^{p(j)})} \label{BAE} \\
&& =
-\varepsilon_{a}
\frac{Q_{a-1}(v^{(a)}_{k}+\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}-i(-1)^{p(a+1)})
Q_{a+1}(v^{(a)}_{k}+\frac{i(-1)^{p(a+1)}}{2})}
{Q_{a-1}(v^{(a)}_{k}-\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}+i(-1)^{p(a)})
Q_{a+1}(v^{(a)}_{k}-\frac{i(-1)^{p(a+1)}}{2})}
\nonumber \\
&& \hspace{40pt} \mbox{for} \quad k\in \{1,2, \dots, M_{a}\} \quad
\mbox{and} \quad a\in \{1,2,\dots, r+s+1 \}, \nonumber
\end{eqnarray}
where $\varepsilon_{a}=\frac{\xi_{a+1}}{\xi_{a}} \in \{-1,1 \} $.
From now on, we assume the relation $p(1)=p(r+s+2)$ on
the grading parameter.
In this case, the eigenvalue formula (\ref{QTM-eigen})
of the QTM has analyticity properties suitable for deriving the NLIE.
We expect that this assumption does not spoil generality
as the free energy will be independent of the order of the
grading parameters.
Let us define
an auxiliary function \cite{T97,T98} (see also \cite{T98-2}):
\begin{eqnarray}
T_{m}^{(a)}(v)=\sum_{\{d_{j,k}\}} \prod_{j=1}^{a}\prod_{k=1}^{m}
z(d_{j,k};v-\frac{i}{2}(a-m-2j+2k)),
\label{DVF}
\end{eqnarray}
where $m,a \in \mathbb{Z}_{\ge 1}$, and the summation is taken over
$d_{j,k}\in B$ ($ 1 < 2 < \cdots < r+s+2$)
such that
\begin{eqnarray}
&& d_{j,k} \le d_{j+1,k} \quad {\rm and} \quad d_{j,k} \le d_{j,k+1} \label{rule1} \\
&& d_{j,k} < d_{j,k+1} \quad {\rm if} \quad
d_{j,k} \in B_{-} \quad {\rm or} \quad d_{j,k+1} \in B_{-}
\label{rule2} \\
&& d_{j,k} < d_{j+1,k} \quad {\rm if} \quad d_{j,k} \in B_{+}
\quad {\rm or} \quad d_{j+1,k} \in B_{+}. \label{rule3}
\end{eqnarray}
This function contains
$T_{1}^{(1)}(v)$ (\ref{QTM-eigen}) as a special case
$(a,m)=(1,1)$.
(\ref{DVF}) can be interpreted as a
summation over Young (super) tableaux labeled by the
$a \times m$ Young (super) diagram.
It is related to a system of eigenvalue formulae of the
QTM for fusion models \cite{KRS81}.
Note that the condition (\ref{rule2}) is void if $s=-1$, then
(\ref{DVF}) reduces to the Bazhanov-Reshetikhin formula \cite{BR90}.
For $a,m \in {\mathbb Z}_{\ge 1}$, we
will normalize (\ref{DVF}) as
$ \widetilde{T}^{(a)}_{m}(v)=
T^{(a)}_{m}(v)/{\mathcal N}^{(a)}_{m}(v)$,
where
\begin{eqnarray}
\hspace{-30pt} && {\mathcal N}^{(a)}_{m}(v)=
\frac{\phi_{-}(v- \frac{a+m}{2} \xi i)
\phi_{+}(v+ \frac{a+m}{2}\xi i)}{
\phi_{-}(v-\frac{a-m}{2}i)\phi_{+}(v+\frac{a-m}{2}i)}
\nonumber \\
\hspace{-30pt} && \hspace{20pt} \times
\prod_{j=1}^{a}\prod_{k=1}^{m}
\phi_{-}(v-\frac{a-m-2j+2k}{2}i)\phi_{+}(v-\frac{a-m-2j+2k}{2}i).
\label{normal}
\end{eqnarray}
Here we introduce a parameter $\xi \in \{-1,1 \}$.
$T^{(a)}_{m}(v)$ has no poles in $v$ due to the BAE (\ref{BAE}).
In contrast, $\widetilde{T}^{(a)}_{m}(v)$ has
poles at $v=\pm (\frac{m+a}{2}\xi i +iu)+\frac{n \pi}{\eta}$
($n \in {\mathbb Z}$) for
$(a,m) \in {\mathbb Z}_{\ge 1} \times \{1,2,\dots,s+1 \} \cup
\{1,2,\dots,r+1 \}\times {\mathbb Z}_{\ge 1}$.
One can show that
$\widetilde{T}^{(a)}_{m}(v)$ satisfies the
so called $T$-system for $U_{q}(\widehat{sl}(r+1|s+1))$ \cite{T97,T98}
(see also \cite{JKS98} for a derivation of TBA equations from the
$T$-system).
For $m,a \in {\mathbb Z}_{\ge 1}$,
\begin{eqnarray}
&& \hspace{-10pt}
\widetilde{T}^{(a)}_{m}(v-\frac{i}{2})\widetilde{T}^{(a)}_{m}(v+\frac{i}{2})=
\widetilde{T}^{(a)}_{m-1}(v)\widetilde{T}^{(a)}_{m+1}(v)+
\widetilde{T}^{(a-1)}_{m}(v)\widetilde{T}^{(a+1)}_{m}(v)\label{T-sys} \\
&& \hspace{-10pt} \mbox{for} \quad
a \in \{1,2,\dots, r\} \quad \mbox{or} \quad m \in \{1,2,\dots, s\}
\quad \mbox{or}\quad (a,m)=(r+1,s+1), \nonumber \\
&& \hspace{-10pt}
\widetilde{T}^{(r+1)}_{m}(v-\frac{i}{2})\widetilde{T}^{(r+1)}_{m}(v+\frac{i}{2})=
\widetilde{T}^{(r+1)}_{m-1}(v)\widetilde{T}^{(r+1)}_{m+1}(v)
\quad \mbox{for} \quad m \in {\mathbb Z}_{\ge s+2}, \label{T-sys-m} \\
&& \hspace{-10pt}
\widetilde{T}^{(a)}_{s+1}(v-\frac{i}{2})\widetilde{T}^{(a)}_{s+1}(v+\frac{i}{2})=
\widetilde{T}^{(a-1)}_{s+1}(v)\widetilde{T}^{(a+1)}_{s+1}(v)
\quad \mbox{for} \quad a \in {\mathbb Z}_{\ge r+2}, \label{T-sys-a}
\end{eqnarray}
where
\begin{eqnarray}
&& \hspace{-35pt}
\widetilde{T}^{(a)}_{0}(v)=\frac{\phi_{-}(v-\frac{a}{2}i)\phi_{+}(v+\frac{a}{2}i)}
{\phi_{-}(v-\frac{a}{2}\xi i)\phi_{+}(v+\frac{a}{2} \xi i)}
\quad {\rm for} \quad a \in {\mathbb Z}_{\ge 1},\label{a0} \\
&& \hspace{-35pt}
\widetilde{T}^{(0)}_{m}(v)=
\frac{\phi_{-}(v+\frac{m}{2}i)\phi_{+}(v-\frac{m}{2}i)}
{\phi_{-}(v-\frac{m}{2} \xi i)\phi_{+}(v+\frac{m}{2} \xi i)}
\quad {\rm for} \quad m \in {\mathbb Z}_{\ge 1}. \label{0m}
\end{eqnarray}
There is a duality relation for the auxiliary function.
\begin{eqnarray}
&& \hspace{-35pt}
\widetilde{T}^{(r+1)}_{a+s}(v)=
\zeta^{a-1}
\widetilde{T}^{(r+a)}_{s+1}(v) \quad {\rm for} \quad a \in {\mathbb Z}_{\ge 1} ,
\label{dual}
\end{eqnarray}
where
$\zeta = \frac{\prod_{a \in B_{+}} \xi_{a}
e^{\frac{\mu_{a}}{T}}}{\prod_{b \in B_{-}}\xi_{b}e^{\frac{\mu_{b}}{T}}}$.
(\ref{a0}) (resp. (\ref{0m})) becomes $1$ if $\xi=1$ (resp. $\xi=-1$).
Note that there is no upper bound for the index $a$ of $\widetilde{T}^{(a)}_{m}(v)$
for $m \in \{1,2,\dots, s+1 \}$ if $s \in {\mathbb Z}_{\ge 0}$.
For $s=-1$, this $T$-system reduces to the one for $U_{q}(\widehat{sl}(r+1))$
\cite{KNS94} (see also \cite{KR87}).
In this case, (\ref{dual}) reduces to
$\widetilde{T}^{(r+1)}_{a-1}(v)=\zeta^{a-1}=
e^{\frac{(a-1)(\mu_{1}+\mu_{2}+\cdots +\mu_{r+1})}{T}}$
if $ \xi =1 $ (see eq. (2.21) in \cite{TT05}).
From the relations (\ref{T-sys-m}), (\ref{T-sys-a}), (\ref{dual}) and
(\ref{T-sys}) for $(a,m)=(r+1,s+1)$, one can derive the following relation
for $a \in {\mathbb Z}_{\ge 2}$:
\begin{eqnarray}
&& \hspace{-20pt} \widetilde{T}^{(r+1)}_{s+a}(v) =
\zeta^{a-1}
\widetilde{T}^{(r+a)}_{s+1}(v) \nonumber \\
&& =
\frac{
\zeta^{a-1}
\prod_{j=1}^{a} \widetilde{T}^{(r+1)}_{s+1}(v+\frac{a-2j+1}{2}i) }
{\prod_{j=2}^{a} \bigl(
\zeta
\widetilde{T}^{(r+1)}_{s}(v+\frac{a-2j+2}{2}i)+
\widetilde{T}^{(r)}_{s+1}(v+\frac{a-2j+2}{2}i) \bigr)} .
\nonumber \\
\label{sol}
\end{eqnarray}
$\widetilde{T}^{(a)}_{m}(v)$ can also be written in terms of a determinant
(the quantum (supersymmetric) Jacobi-Trudi and Giambelli formula \cite{T97,T98}
(for $s=-1$ case, \cite{BR90};
for $U_{q}(B_{r}^{(1)})$ case, \cite{KOS95}))
\begin{eqnarray}
\widetilde{T}^{(a)}_{m}(v)&=&
W^{(a)}_{m}(v)\det _{1\le j,k \le m}
\left(\widetilde{T}^{(a+j-k)}_{1}
\left(
v-\frac{j+k-m-1}{2}i
\right)
\right) \label{jacobi-trudi} \\
&=& Z^{(a)}_{m}(v) \det _{1\le j,k \le a}
\left(\widetilde{T}^{(1)}_{m+j-k}
\left(
v-\frac{a-j-k+1}{2}i
\right)
\right), \label{jacobi-trudi2}
\end{eqnarray}
where $\widetilde{T}^{(a)}_{1}(v)=0$ for $a <0$ and
$\widetilde{T}^{(1)}_{m}(v)=0$ for $m <0$.
$ W^{(a)}_{m}(v)$ and $ Z^{(a)}_{m}(v)$ are normalization functions:
\begin{eqnarray}
&& W^{(a)}_{m}(v)=\frac{1}{\prod_{j=1}^{m-1}\widetilde{T}^{(a)}_{0}(v+\frac{m-2j}{2}i)}, \\
&& Z^{(a)}_{m}(v)= \frac{1}{\prod_{j=1}^{a-1}\widetilde{T}^{(0)}_{m}(v-\frac{a-2j}{2}i)},
\end{eqnarray}
where $\prod_{j=1}^{0}(\cdots )=1$.
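As a consistency check, specializing (\ref{jacobi-trudi}) to $m=2$ reproduces the
$T$-system: the determinant reads
\begin{eqnarray}
\widetilde{T}^{(a)}_{2}(v)=\frac{1}{\widetilde{T}^{(a)}_{0}(v)}
\left|
\begin{array}{cc}
\widetilde{T}^{(a)}_{1}(v+\frac{i}{2}) & \widetilde{T}^{(a-1)}_{1}(v) \\
\widetilde{T}^{(a+1)}_{1}(v) & \widetilde{T}^{(a)}_{1}(v-\frac{i}{2})
\end{array}
\right|
=\frac{\widetilde{T}^{(a)}_{1}(v-\frac{i}{2})\widetilde{T}^{(a)}_{1}(v+\frac{i}{2})
-\widetilde{T}^{(a-1)}_{1}(v)\widetilde{T}^{(a+1)}_{1}(v)}
{\widetilde{T}^{(a)}_{0}(v)},
\nonumber
\end{eqnarray}
which is exactly (\ref{T-sys}) at $m=1$ solved for $\widetilde{T}^{(a)}_{2}(v)$.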
Substituting (\ref{jacobi-trudi}) into (\ref{dual}), we obtain an equation
\begin{eqnarray}
&& W^{(r+1)}_{a+s}(v) \det _{1\le j,k \le a+s}
\left(\widetilde{T}^{(r+1+j-k)}_{1}
\left(
v-\frac{j+k-a-s-1}{2}i
\right)
\right) \nonumber \\
&&=
\zeta^{a-1}
W^{(r+a)}_{s+1}(v)
\det _{1\le j,k \le s+1}
\left(\widetilde{T}^{(r+a+j-k)}_{1}
\left(
v-\frac{j+k-s-2}{2}i
\right)
\right)
\nonumber \\
&& \hspace{180pt} \mbox{for} \quad
a \in {\mathbb Z}_{\ge 1}. \label{det-eq}
\end{eqnarray}
Partially expanding (\ref{det-eq}) on both sides,
we obtain
\begin{eqnarray}
&& \widetilde{T}^{(a+r+s)}_{1}(v)=
\frac{
\widetilde{A}_{1}(v)-
\zeta^{a-1}
\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}
\widetilde{A}_{2}(v)
}
{(-1)^{a+s}\widetilde{A}_{3}(v)+(-1)^{s}
\zeta^{a-1}
\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}
\widetilde{A}_{4}(v)}
\nonumber \\
&& \hspace{160pt} \mbox{for} \quad
a \in {\mathbb Z}_{\ge 2},
\label{a+r+s}
\end{eqnarray}
where
\begin{eqnarray}
&& \widetilde{A}_{1}(v)=\det _{1\le j,k \le a+s}
\left(\widetilde{f}_{j,k}
\left(
v-\frac{j+k-a-s-1}{2}i
\right)
\right) \\
&& \quad \widetilde{f}_{j,k}(v)=\widetilde{T}^{(r+1+j-k)}_{1}(v)
\quad \mbox{for} \quad (j,k) \ne (a+s,1),
\quad \widetilde{f}_{a+s,1}(v)=0, \nonumber \\
&&\widetilde{A}_{2}(v)=
\det _{1\le j,k \le s+1}
\left(\widetilde{g}_{j,k}
\left(
v-\frac{j+k-s-2}{2}i
\right)
\right)
\\
&& \quad \widetilde{g}_{j,k}(v)=\widetilde{T}^{(r+a+j-k)}_{1}(v)
\quad \mbox{for} \quad (j,k) \ne (s+1,1),
\quad \widetilde{g}_{s+1,1}(v)=0, \nonumber \\
&& \widetilde{A}_{3}(v)=\det _{1\le j,k \le a+s-1}
\left(\widetilde{T}^{(r+j-k)}_{1}
\left(
v-\frac{j+k-a-s}{2}i
\right)
\right), \\
&&\widetilde{A}_{4}(v)=
\det _{1\le j,k \le s}
\left(\widetilde{T}^{(r+a+j-k-1)}_{1}
\left(
v-\frac{j+k-s-1}{2}i
\right)
\right)
.
\end{eqnarray}
It turns out that $\widetilde{T}^{(a+r+s)}_{1}(v)$ is written in
terms of $\{\widetilde{T}^{(d)}_{1}(v)\}$ where $ \max (0,r-s+2-a) \le d \le a+r+s-1$.
Then $ \widetilde{T}^{(a)}_{1}(v) $ for $a \in {\mathbb Z}_{\ge r+s+2}$
can be expressed in
terms of $\{\widetilde{T}^{(d)}_{1}(v)\}$ where $ 0 \le d \le r+s+1$.
Similarly, we can derive the
following relation from (\ref{dual}) and (\ref{jacobi-trudi2}).
\begin{eqnarray}
&& \widetilde{T}^{(1)}_{a+r+s}(v)=
\frac{
\zeta^{a-1}
\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}
\widetilde{A}_{5}(v)-
\widetilde{A}_{6}(v)
}
{(-1)^{a+r}
\zeta^{a-1}
\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}
\widetilde{A}_{7}(v)+(-1)^{r}
\widetilde{A}_{8}(v)}
\nonumber \\
&& \hspace{140pt} \mbox{for} \quad
a \in {\mathbb Z}_{\ge 2},
\label{a+r+s-b}
\end{eqnarray}
where
\begin{eqnarray}
&& \widetilde{A}_{5}(v)=\det _{1\le j,k \le a+r}
\left(\widetilde{h}_{j,k}
\left(
v-\frac{a+r+1-j-k}{2}i
\right)
\right) \\
&& \quad \widetilde{h}_{j,k}(v)=\widetilde{T}^{(1)}_{s+1+j-k}(v)
\quad \mbox{for} \quad (j,k) \ne (a+r,1),
\quad \widetilde{h}_{a+r,1}(v)=0, \nonumber \\
&&\widetilde{A}_{6}(v)=
\det _{1\le j,k \le r+1}
\left(\widetilde{b}_{j,k}
\left(
v-\frac{r+2-j-k}{2}i
\right)
\right)
\\
&& \quad \widetilde{b}_{j,k}(v)=\widetilde{T}^{(1)}_{a+s+j-k}(v)
\quad \mbox{for} \quad (j,k) \ne (r+1,1),
\quad \widetilde{b}_{r+1,1}(v)=0, \nonumber \\
&& \widetilde{A}_{7}(v)=\det _{1\le j,k \le a+r-1}
\left(\widetilde{T}^{(1)}_{s+j-k}
\left(
v-\frac{a+r-j-k}{2}i
\right)
\right), \\
&&\widetilde{A}_{8}(v)=
\det _{1\le j,k \le r}
\left(\widetilde{T}^{(1)}_{a+s-1+j-k}
\left(
v-\frac{r+1-j-k}{2}i
\right)
\right)
.
\end{eqnarray}
Let us consider the limit
\begin{eqnarray}
&& Q^{(a)}_{m}:=\lim_{v \to i \eta^{-1} \infty} \widetilde{T}^{(a)}_{m}(v)
=\sum_{\{ d_{j,k}\}}
\prod_{j=1}^{a}\prod_{k=1}^{m} \xi_{d_{j,k}}
\exp \left(\frac{\mu_{d_{j,k}}}{T} \right),
\label{limit}
\end{eqnarray}
where the summation is taken over $\{ d_{j,k}\}$ ($d_{j,k} \in B$)
which obey the rules (\ref{rule1})-(\ref{rule3}).
For example, in the $U_{q}(\widehat{sl}(2|1))$ case ($B_{+}=\{1,3\}$, $B_{-}=\{2\}$), we have
\begin{eqnarray}
Q^{(1)}_{1}&=& \xi_{1}e^{\frac{\mu_{1}}{T}}+\xi_{2}e^{\frac{\mu_{2}}{T}}
+\xi_{3} e^{\frac{\mu_{3}}{T}},
\label{Q11-sl21} \\
Q^{(a)}_{1}&=&
\xi_{1} \xi_{2}^{a-1} e^{\frac{\mu_{1}+(a-1)\mu_{2}}{T}}
+\xi_{1} \xi_{2}^{a-2} \xi_{3} e^{\frac{\mu_{1}+(a-2)\mu_{2}+\mu_{3}}{T}}
+\xi_{2}^{a}e^{\frac{a \mu_{2}}{T}}
+\xi_{2}^{a-1} \xi_{3} e^{\frac{(a-1)\mu_{2}+\mu_{3}}{T}} \nonumber \\
&=& \xi_{2}^{a-2}e^{ \frac{(a-2) \mu_{2}}{T}}Q^{(2)}_{1}
\qquad \mbox{for} \quad a \in {\mathbb Z}_{\ge 2}.
\label{Q-sl21}
\end{eqnarray}
We can also rewrite (\ref{Q-sl21}) as
\begin{eqnarray}
Q^{(a)}_{1}=\frac{{Q^{(3)}_{1}}^{a-2}}{{Q^{(2)}_{1}}^{a-3}}
=\frac{{Q^{(2)}_{1}}^{a-1}}{(\zeta +Q^{(1)}_{1})^{a-2}}.
\label{Qa1-sl21}
\end{eqnarray}
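The identities (\ref{Q-sl21}) and (\ref{Qa1-sl21}) are easy to confirm numerically. The following sketch (ours, with arbitrary sample values standing in for $\xi_a e^{\mu_a/T}$, all $\xi_a=1$) enumerates the height-$a$ columns allowed by (\ref{rule1}) and (\ref{rule3}) and checks both closed forms.

```python
import numpy as np
from itertools import product

Bplus, Bminus = {1, 3}, {2}          # U_q(sl(2|1)): B = {1,2,3}
x = {1: 1.3, 2: 0.7, 3: 0.4}         # sample values of xi_a e^{mu_a/T}

def Q_a1(a):
    """Brute-force Q^{(a)}_1: sum over height-a columns obeying (rule1), (rule3)."""
    total = 0.0
    for col in product(sorted(Bplus | Bminus), repeat=a):
        # weakly increasing; equality allowed only for entries in B_-
        ok = all(col[j] < col[j+1] or
                 (col[j] == col[j+1] and col[j] in Bminus)
                 for j in range(a-1))
        if ok:
            total += np.prod([x[d] for d in col])
    return total

Q1, Q2 = Q_a1(1), Q_a1(2)
zeta = x[1]*x[3]/x[2]                # zeta for xi_a = 1
for a in range(2, 7):
    assert np.isclose(Q_a1(a), x[2]**(a-2) * Q2)                 # eq. (Q-sl21)
    assert np.isclose(Q_a1(a), Q2**(a-1) / (zeta + Q1)**(a-2))   # eq. (Qa1-sl21)
```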
This quantity (\ref{limit}) corresponds to
the character of the $a$-th anti-(super)symmetric and
the $m$-th (super)symmetric tensor representations.
We will use $Q^{(a)}_{1}$ and $Q^{(1)}_{m}$ later.
$Q^{(a)}_{m}$ also satisfies the so called $Q$-system,
which is the $T$-system (\ref{T-sys})-(\ref{dual})
without the spectral parameter $v$:
for $ m,a \in {\mathbb Z}_{\ge 1}$, we have
\begin{eqnarray}
\hspace{-20pt} && {Q^{(a)}_{m}}^{2}=Q^{(a)}_{m-1}Q^{(a)}_{m+1}+Q^{(a-1)}_{m}Q^{(a+1)}_{m}
\label{Q-sys} \\
&&\hspace{10pt} \mbox{for} \quad
a \in \{1,2,\dots, r\} \quad \mbox{or} \quad m \in \{1,2,\dots, s\}
\nonumber \\
&& \hspace{130pt} \mbox{or}\quad (a,m)=(r+1,s+1), \nonumber \\
&&{Q^{(r+1)}_{m}}^{2}=Q^{(r+1)}_{m-1}Q^{(r+1)}_{m+1}
\quad \mbox{for} \quad m \in {\mathbb Z}_{\ge s+2},\\
&&{Q^{(a)}_{s+1}}^{2} =Q^{(a-1)}_{s+1}Q^{(a+1)}_{s+1}
\quad \mbox{for} \quad a \in {\mathbb Z}_{\ge r+2},
\end{eqnarray}
where
\begin{eqnarray}
&& Q^{(a)}_{0}=Q^{(0)}_{m}=1
\quad {\rm for} \quad a,m \in {\mathbb Z}_{\ge 1},\nonumber \\
&& Q^{(r+1)}_{a+s}=
\zeta^{a-1}
Q^{(r+a)}_{s+1} \quad {\rm for} \quad a \in {\mathbb Z}_{\ge 1} .
\end{eqnarray}
The $Q$-system was introduced \cite{K89,KR90} as functional relations among
characters of finite dimensional representations of
Yangians (or quantum affine algebras) associated with simple Lie algebras.
The above system of equations is a superalgebra version of them.
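The $Q$-system can likewise be verified by brute force. The sketch below (ours, with sample values for $\xi_a e^{\mu_a/T}$) enumerates $a\times m$ tableaux obeying (\ref{rule1})-(\ref{rule3}) for $U_{q}(\widehat{sl}(2|1))$ ($r=1$, $s=0$) and checks (\ref{Q-sys}) at $a=1$ and at $(a,m)=(r+1,s+1)=(2,1)$, together with the duality boundary condition.

```python
import numpy as np
from itertools import product

Bplus, Bminus = {1, 3}, {2}      # sl(2|1): r = 1, s = 0
x = {1: 1.1, 2: 0.6, 3: 0.35}    # sample values of xi_a e^{mu_a/T} (xi_a = 1)

def ok_row(u, w):    # rule1 + rule2: u <= w, equality only for entries in B_+
    return u < w or (u == w and u in Bplus)

def ok_col(u, w):    # rule1 + rule3: u <= w, equality only for entries in B_-
    return u < w or (u == w and u in Bminus)

def Q(a, m):
    """Brute-force Q^{(a)}_m: sum over a x m tableaux obeying (rule1)-(rule3)."""
    if a == 0 or m == 0:
        return 1.0
    total = 0.0
    for flat in product(sorted(Bplus | Bminus), repeat=a*m):
        d = [flat[j*m:(j+1)*m] for j in range(a)]     # d[j][k]: row j, column k
        rows = all(ok_row(d[j][k], d[j][k+1])
                   for j in range(a) for k in range(m-1))
        cols = all(ok_col(d[j][k], d[j+1][k])
                   for j in range(a-1) for k in range(m))
        if rows and cols:
            total += np.prod([x[e] for row in d for e in row])
    return total

# Q-system (Q-sys) at a = 1 (any m) and at (a, m) = (r+1, s+1) = (2, 1)
for m in (1, 2, 3):
    assert np.isclose(Q(1, m)**2, Q(1, m-1)*Q(1, m+1) + Q(2, m))
assert np.isclose(Q(2, 1)**2, Q(2, 2) + Q(1, 1)*Q(3, 1))

# duality boundary condition at a = 2: Q^{(2)}_2 = zeta * Q^{(3)}_1
zeta = x[1]*x[3]/x[2]
assert np.isclose(Q(2, 2), zeta * Q(3, 1))
```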
In closing this section,
let us comment on the analyticity of the auxiliary function (\ref{DVF}).
As mentioned before, the free energy (\ref{free-en-qtm}) is given only by the
largest eigenvalue of the QTM (\ref{QTM}).
Then we are only interested in a root of the BAE (\ref{BAE})
which gives the largest eigenvalue of the QTM.
Judging from numerical calculations \cite{JKS97,JKS98,T03,TT05},
such a root will exist in the sector
$\frac{N}{2}=M_{1}=\cdots =M_{r+s+1}$ of the BAE,
and it will form a \symbol{"60}one-string' on the complex plane.
For this root, the zeros of the auxiliary function (\ref{DVF}) will
exist near the lines ${\rm Im} v= \pm \frac{a+m}{2}$
at least for $\{\mu_{a}\}=\{0\}$ and small $u$
(see, figures in \cite{JKS98,T03,TT05}).
In this sector, we have
\begin{eqnarray}
&& \xi_{b}=1 \qquad {\rm for} \qquad b \in B,
\nonumber \\
&& \varepsilon_{b}=1 \qquad {\rm for} \qquad b \in B-\{r+s+2 \},
\label{para}
\\
&& \zeta=\exp(\frac{\sum_{a\in B_{+}}\mu_{a}-\sum_{a\in B_{-}}\mu_{a}}{T}).
\nonumber \end{eqnarray}
From now on, we only consider the largest eigenvalue of the
QTM, and assume these values (\ref{para}) of the parameters.
\section{The nonlinear integral equations}
In this section, we will derive the NLIE by using the formulae in the previous section.
We will treat two kinds of NLIE, according to the value of the
parameter $\xi \in \{-1,1\}$.
Although the first step of the calculation (\ref{mustput})-(\ref{nlie2}) is similar to
the $s=-1$ case \cite{TSK01,T03,TT05}, we present it for the reader's convenience.
Taking note of
the limit (\ref{limit}) and
the fact that $\widetilde{T}^{(a)}_{m}(v)$ has
poles at $v=\pm (\frac{m+a}{2}\xi i +iu)+ \frac{n \pi}{\eta}$
($n \in {\mathbb Z}$)
for $(a,m) \in \{1,2,\dots, r+1\}\times {\mathbb Z}_{\ge 1} \cup
{\mathbb Z}_{\ge 1} \times \{1,2,\dots, s+1\}$,
we can expand ${\widetilde T}^{(a)}_{m}(v)$ as follows.
\begin{eqnarray}
&& {\widetilde T}^{(a)}_{m}(v)=Q^{(a)}_{m}
\label{mustput} \\
&& \hspace{20pt} +
\sum_{n \in {\mathbb Z}}
\sum_{j=1}^{\frac{N}{2}}
\left\{
\frac{A^{(a)}_{m,j}}{(v-\frac{a+m}{2}\xi i-iu-\frac{\pi n}{\eta})^{j}}
+
\frac{{\bar A}^{(a)}_{m,j}}{(v+\frac{a+m}{2}\xi i+iu+\frac{\pi n}{\eta})^{j}}
\right\},
\nonumber
\end{eqnarray}
where the coefficients $A^{(a)}_{m,j}, {\bar A}^{(a)}_{m,j} \in {\mathbb C}$
can be expressed as contour integrals:
\begin{eqnarray}
&& A^{(a)}_{m,j}= \oint_{{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} v}{2\pi i}
\widetilde{T}^{(a)}_{m}(v)(v-\frac{a+m}{2}\xi i-iu)^{j-1},\nonumber \\
&& \overline{A}^{(a)}_{m,j}=
\oint_{\overline{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} v}{2\pi i}
\widetilde{T}^{(a)}_{m}(v)(v+\frac{a+m}{2}\xi i+iu)^{j-1}.
\label{coeff}
\end{eqnarray}
Here the contour ${\tilde C}^{(a)}_{m}$ (resp. $\overline{\tilde C}^{(a)}_{m}$)
is a counterclockwise closed loop
which surrounds $v=\frac{a+m}{2}\xi i+iu$ (resp. $v=-\frac{a+m}{2}\xi i-iu$)
and does not surround $v=-\frac{a+m}{2}\xi i-iu-\frac{\pi n}{\eta},
\frac{a+m}{2}\xi i+iu+\frac{\pi k}{\eta},$
(resp. $v=\frac{a+m}{2}\xi i+iu+\frac{\pi n}{\eta},
-\frac{a+m}{2}\xi i-iu-\frac{\pi k}{\eta}$), where $n \in {\mathbb Z}, k \in {\mathbb Z}-\{0\} $.
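The integrals (\ref{coeff}) simply extract the Laurent coefficients at the pole. The mechanism can be illustrated on a toy function whose pole coefficients are known (Python sketch; the function, pole location, and contour radius below are illustrative, not the actual $\widetilde{T}^{(a)}_{m}$):

```python
import cmath
import math

# Extract Laurent coefficients A_j of f(v) = A_1/(v-v0) + A_2/(v-v0)^2 + (regular)
# via A_j = \oint dv/(2 pi i) f(v) (v - v0)^{j-1}, discretizing a small
# counterclockwise circle around v0 (trapezoid rule on a periodic integrand).

v0 = 1.0 + 0.5j          # illustrative pole location
A1_true, A2_true = 3.0, 2.0

def f(v):
    # double pole at v0 plus an entire "regular part"
    return A1_true / (v - v0) + A2_true / (v - v0) ** 2 + cmath.exp(v)

def laurent_coeff(j, radius=0.3, n=256):
    """Discretized contour integral (1/2 pi i) \oint f(v) (v-v0)^{j-1} dv."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        v = v0 + radius * cmath.exp(1j * theta)
        # dv/(2 pi i) = radius * e^{i theta} d theta / (2 pi)
        total += f(v) * (v - v0) ** (j - 1) * radius * cmath.exp(1j * theta)
    return total / n

assert abs(laurent_coeff(1) - A1_true) < 1e-8
assert abs(laurent_coeff(2) - A2_true) < 1e-8
```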
Using the $T$-system (\ref{T-sys})-(\ref{T-sys-a}),
we can rewrite (\ref{coeff}) as
\begin{eqnarray}
&& A^{(a)}_{m,j}= \oint_{{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} v}{2\pi i}
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(v-\frac{\xi i}{2})
\widetilde{T}^{(a)}_{m+1}(v-\frac{\xi i}{2})}
{\widetilde{T}^{(a)}_{m}(v-\xi i)} \nonumber \\
&& \hspace{80pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(v-\frac{\xi i}{2})
\widetilde{T}^{(a+1)}_{m}(v-\frac{\xi i}{2})}
{\widetilde{T}^{(a)}_{m}(v-\xi i)}
\bigg\}
(v-\frac{a+m}{2}\xi i-iu)^{j-1},\nonumber \\
&& \overline{A}^{(a)}_{m,j}=
\oint_{\overline{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} v}{2\pi i}
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(v+\frac{\xi i}{2})
\widetilde{T}^{(a)}_{m+1}(v+\frac{\xi i}{2})}
{\widetilde{T}^{(a)}_{m}(v+\xi i)}
\label{coeff2} \\
&& \hspace{80pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(v+\frac{\xi i}{2})
\widetilde{T}^{(a+1)}_{m}(v+\frac{\xi i}{2})}
{\widetilde{T}^{(a)}_{m}(v+\xi i)}
\bigg\}
(v+\frac{a+m}{2}\xi i+iu)^{j-1},
\nonumber
\end{eqnarray}
where we admit $\widetilde{T}^{(b)}_{n}(v)=0$ if
$(b,n) \in {\mathbb Z }_{\ge r+2}\times {\mathbb Z}_{\ge s+2}$
(cf. \cite{DM92,MR92}).
Substituting (\ref{coeff2}) into (\ref{mustput}) and taking the summation
over $j$, we obtain
\begin{eqnarray}
&& \hspace{-30pt}
\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \nonumber \\
&& +
\sum_{n \in {\mathbb Z}}
\oint_{{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{1-\left(\frac{y}{v-\frac{a+m}{2} \xi i-iu -\frac{\pi n}{\eta}}\right)^{\frac{N}{2}}}
{v-y-\frac{a+m}{2} \xi i-iu -\frac{\pi n}{\eta}}
\nonumber \\
&&\hspace{20pt} \times
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(y+\frac{a+m-1}{2} \xi i+iu)
\widetilde{T}^{(a)}_{m+1}(y+\frac{a+m-1}{2} \xi i+iu)}
{\widetilde{T}^{(a)}_{m}(y+\frac{a+m-2}{2} \xi i+iu)} \nonumber \\
&& \hspace{50pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(y+\frac{a+m-1}{2} \xi i+iu)
\widetilde{T}^{(a+1)}_{m}(y+\frac{a+m-1}{2} \xi i+iu)}
{\widetilde{T}^{(a)}_{m}(y+\frac{a+m-2}{2} \xi i+iu)}
\bigg\} \nonumber \\
&& +
\sum_{n \in {\mathbb Z}}
\oint_{\overline{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{1-\left(\frac{y}{v+\frac{a+m}{2} \xi i+iu +\frac{\pi n}{\eta}}\right)^{\frac{N}{2}}}
{v-y+\frac{a+m}{2} \xi i+iu +\frac{\pi n}{\eta}}
\nonumber \\
&&\hspace{20pt} \times
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(y-\frac{a+m-1}{2} \xi i-iu)
\widetilde{T}^{(a)}_{m+1}(y-\frac{a+m-1}{2} \xi i-iu)}
{\widetilde{T}^{(a)}_{m}(y-\frac{a+m-2}{2} \xi i-iu)}
\label{nlie1} \\
&& \hspace{50pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(y-\frac{a+m-1}{2} \xi i-iu)
\widetilde{T}^{(a+1)}_{m}(y-\frac{a+m-1}{2} \xi i-iu)}
{\widetilde{T}^{(a)}_{m}(y-\frac{a+m-2}{2} \xi i-iu)}
\bigg\}.
\nonumber
\end{eqnarray}
Here the contours are shifted as follows:
the contour ${\tilde C}^{(a)}_{m}$ (resp. $\overline{\tilde C}^{(a)}_{m}$)
is a counterclockwise closed loop
which surrounds $y=0 $ (resp. $y=0$)
and does not surround $y=-(a+m)\xi i-2iu-\frac{\pi n}{\eta},\frac{\pi k}{\eta}$
(resp. $y=(a+m)\xi i+2iu+\frac{\pi n}{\eta},\frac{\pi k}{\eta}$),
where $n \in {\mathbb Z}, k \in {\mathbb Z}-\{0 \}$.
We can neglect the terms $\left(\frac{y}{v \pm \frac{a+m}{2} \xi i \pm iu \pm
\frac{\pi n}{\eta}}\right)^{\frac{N}{2}}$ in (\ref{nlie1}) since the poles at $y=0$ in
the two brackets $\{\cdots \}$
are canceled by the zeros from these terms.
By using the following relation
\begin{eqnarray}
\lim_{m \to \infty}
\sum_{n=-m}^{m}\frac{1}{v-\frac{\pi n}{\eta}}
=\frac{\eta}{\tan \eta v},
\end{eqnarray}
we can take the summation over $n \in {\mathbb Z}$.
\begin{eqnarray}
&& \hspace{-30pt}
\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \nonumber \\
&& +
\oint_{{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta }
{\tan \eta (v-y-\frac{a+m}{2} \xi i-iu)}
\nonumber \\
&&\hspace{20pt} \times
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(y+\frac{a+m-1}{2} \xi i+iu)
\widetilde{T}^{(a)}_{m+1}(y+\frac{a+m-1}{2} \xi i+iu)}
{\widetilde{T}^{(a)}_{m}(y+\frac{a+m-2}{2} \xi i+iu)} \nonumber \\
&& \hspace{50pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(y+\frac{a+m-1}{2} \xi i+iu)
\widetilde{T}^{(a+1)}_{m}(y+\frac{a+m-1}{2} \xi i+iu)}
{\widetilde{T}^{(a)}_{m}(y+\frac{a+m-2}{2} \xi i+iu)}
\bigg\} \nonumber \\
&& +
\oint_{\overline{\tilde C}^{(a)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta }
{\tan \eta (v-y+\frac{a+m}{2} \xi i+iu)}
\nonumber \\
&&\hspace{20pt} \times
\bigg\{
\frac{\widetilde{T}^{(a)}_{m-1}(y-\frac{a+m-1}{2} \xi i-iu)
\widetilde{T}^{(a)}_{m+1}(y-\frac{a+m-1}{2} \xi i-iu)}
{\widetilde{T}^{(a)}_{m}(y-\frac{a+m-2}{2} \xi i-iu)}
\label{nlie2} \\
&& \hspace{50pt} +
\frac{\widetilde{T}^{(a-1)}_{m}(y-\frac{a+m-1}{2} \xi i-iu)
\widetilde{T}^{(a+1)}_{m}(y-\frac{a+m-1}{2} \xi i-iu)}
{\widetilde{T}^{(a)}_{m}(y-\frac{a+m-2}{2} \xi i-iu)}
\bigg\},
\nonumber \\
&& {\rm for} \quad (a,m) \in
\{1,2,\dots,r+1\} \times {\mathbb Z}_{\ge 1} \cup
{\mathbb Z}_{\ge 1} \times \{1,2,\dots,s+1\}.
\nonumber
\end{eqnarray}
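The relation used here is the classical partial-fraction expansion of the cotangent; a numerical sanity check of the symmetric limit (Python sketch; the values of $v$ and $\eta$ are arbitrary):

```python
import math

def symmetric_sum(v, eta, m):
    """Symmetric partial sum of 1/(v - pi*n/eta) over n = -m..m."""
    return sum(1.0 / (v - math.pi * n / eta) for n in range(-m, m + 1))

v, eta = 0.3, 0.7
target = eta / math.tan(eta * v)
# the paired tail decays like 1/m, so the symmetric limit converges
approx = symmetric_sum(v, eta, 50_000)
assert abs(approx - target) < 1e-3
```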
In the next subsection, we will consider specializations
of this system of NLIE (\ref{nlie2}).
\subsection{The nonlinear integral equations for $\xi=1$}
Let us consider the NLIE (\ref{nlie2}) for $\xi=1$ and $m=1$.
Taking note of the fact that ${\widetilde T}^{(a)}_{0}(v)=1$ (cf. (\ref{a0})),
we can drop the first terms in the two brackets $\{\cdots \}$ in (\ref{nlie2})
since they have no poles at $y=0$.
Then the NLIE (\ref{nlie2}) reduce to the following NLIE on
${\mathcal T}^{(a)}_{1}(v)=\lim_{N \to \infty}\widetilde{T}^{(a)}_{1}(v)$
after the Trotter limit $N \to \infty $ with $u=-\frac{J \sinh \eta }{\eta N T}$.
\begin{eqnarray}
{\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1}
&+&
\oint_{C^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y+\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y+\frac{i a}{2})}
{\tan \eta (v-y-\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y+\frac{i(a-1)}{2})}
\nonumber \\
&+&
\oint_{\overline{C}^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y-\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y-\frac{i a}{2})}
{\tan \eta (v-y+\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y-\frac{i(a-1)}{2})}
\nonumber \\
&& \hspace{120pt}
{\rm for} \quad a \in {\mathbb Z}_{\ge 1},
\label{nlie4}
\end{eqnarray}
where
the contour $C^{(a)}_{1}$ (resp. $\overline{C}^{(a)}_{1}$)
is a counterclockwise closed loop around $y=0$ (resp. $y=0$)
which satisfies the condition
$y \ne v-\frac{a+1}{2}i+\frac{\pi n}{\eta}$
(resp. $y \ne v+\frac{a+1}{2}i+\frac{\pi n}{\eta}$) and
does not surround
$z^{(a)}_{1}-\frac{a-1}{2}i+\frac{\pi n}{\eta}$,
$-(a+1)i
+\frac{\pi n}{\eta}$, $\frac{\pi k}{\eta}$
(resp.
$z^{(a)}_{1}+\frac{a-1}{2}i+\frac{\pi n}{\eta}$,
$(a+1)i +\frac{\pi n}{\eta}$, $\frac{\pi k}{\eta}$);
($n \in \mathbb{Z}$, $k \in \mathbb{Z}-\{0\}$).
Here we put the zeros of $\mathcal{T}^{(a)}_{1}(v)$ as $\{ z^{(a)}_{1} \} $:
$\mathcal{T}^{(a)}_{1}(z^{(a)}_{1})=0$.
$\mathcal{T}^{(0)}_{1}(v)$ is a known function:
\begin{eqnarray}
\mathcal{T}^{(0)}_{1}(v)=
\lim_{N \to \infty} \widetilde{T}^{(0)}_{1}(v)
=\exp \left(\frac{2J (\sinh \eta)^{2} }
{T(\cosh \eta -\cos (2\eta v))}\right).
\end{eqnarray}
Note that (\ref{nlie4}) is an infinite set of coupled NLIE
if $ s \in {\mathbb Z}_{\ge 0} $.
This situation is quite different from the $U_{q}(\widehat{sl}(r+1))$
case \cite{TT05,T03,TSK01}.
However, these NLIE are not all independent, so
we take the first $r+s+1$ of them ((\ref{nlie4}) for $a \in \{1,2,\dots, r+s+1 \}$).
The NLIE for $a=r+s+1$ contains $\mathcal{T}^{(r+s+2)}_{1}(v)$, which we
eliminate by using the relation (\ref{a+r+s}),
where $W^{(a)}_{m}(v)=1$ for $\xi=1$.
\begin{eqnarray}
&& {\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1}
+
\oint_{C^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y+\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y+\frac{i a}{2})}
{\tan \eta (v-y-\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y+\frac{i(a-1)}{2})}
\nonumber \\
&& \hspace{40pt} +
\oint_{\overline{C}^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y-\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y-\frac{i a}{2})}
{\tan \eta (v-y+\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y-\frac{i(a-1)}{2})}
\nonumber \\
&& \hspace{70pt}
{\rm for} \quad a \in \{1,2,\dots r+s \},
\label{nlie-general} \\
&& {\mathcal T}^{(r+s+1)}_{1}(v)=Q^{(r+s+1)}_{1}
\nonumber \\
&&\hspace{20pt} +
\oint_{C^{(r+s+1)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(r+s)}_{1}(y+\frac{i (r+s+1)}{2})
\mathcal{F}(y+\frac{i (r+s+1)}{2})}
{\tan \eta (v-y-\frac{i(r+s+2)}{2})
\mathcal{T}^{(r+s+1)}_{1}(y+\frac{i(r+s)}{2})}
\nonumber \\
&& \hspace{20pt}+
\oint_{\overline{C}^{(r+s+1)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(r+s)}_{1}(y-\frac{i (r+s+1)}{2})
\mathcal{F}(y-\frac{i (r+s+1)}{2})}
{\tan \eta (v-y+\frac{i(r+s+2)}{2})
\mathcal{T}^{(r+s+1)}_{1}(y-\frac{i(r+s)}{2})} ,
\label{nlie-generalb}
\\
&& \hspace{20pt}
\mathcal{F}(v)=\lim_{N \to \infty }\widetilde{T}^{(r+s+2)}_{1}(v)=
\frac{
A_{1}(v)-
\zeta
A_{2}(v)
}
{(-1)^{s}A_{3}(v)+(-1)^{s}
\zeta
A_{4}(v)},
\label{det-hashi}
\end{eqnarray}
where
\begin{eqnarray}
&& A_{1}(v)=\det _{1\le j,k \le s+2}
\left(f_{j,k}
\left(
v-\frac{j+k-s-3}{2}i
\right)
\right) \\
&& \quad f_{j,k}(v)=\mathcal{T}^{(r+1+j-k)}_{1}(v)
\quad \mbox{for} \quad (j,k) \ne (s+2,1),
\quad f_{s+2,1}(v)=0, \nonumber \\
&& A_{2}(v)=
\det _{1\le j,k \le s+1}
\left(g_{j,k}
\left(
v-\frac{j+k-s-2}{2}i
\right)
\right)
\\
&& \quad g_{j,k}(v)=\mathcal{T}^{(r+2+j-k)}_{1}(v)
\quad \mbox{for} \quad (j,k) \ne (s+1,1),
\quad g_{s+1,1}(v)=0, \nonumber \\
&& A_{3}(v)=\det _{1\le j,k \le s+1}
\left(\mathcal{T}^{(r+j-k)}_{1}
\left(
v-\frac{j+k-2-s}{2}i
\right)
\right), \\
&& A_{4}(v)=
\det _{1\le j,k \le s}
\left(\mathcal{T}^{(r+j-k+1)}_{1}
\left(
v-\frac{j+k-s-1}{2}i
\right)
\right)
.
\end{eqnarray}
If $s=-1$, then $A_{1}(v)=A_{4}(v)=0$ and
$A_{2}(v)=A_{3}(v)=1$, and consequently
(\ref{det-hashi}) reduces to
${\mathcal F}(v)=\mathcal{T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1}=
\zeta =
e^{\frac{\mu_{1}+\cdots +\mu_{r+1}}{T}}$, where
the determinants should be interpreted as
$\det_{1\le j,k \le 0} (\cdots )=1$, $\det_{1\le j,k \le -1} (\cdots )=0$. Thus
(\ref{nlie-general}) and (\ref{nlie-generalb})
reduce to the NLIE for $U_{q}(\widehat{sl}(r+1))$ in \cite{TT05}.
In particular, for $s=0$ (the $U_{q}(\widehat{sl}(r+1|1))$ case), we can use
(\ref{sol}):
\begin{eqnarray}
&& {\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1}
+
\oint_{C^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y+\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y+\frac{i a}{2})}
{\tan \eta (v-y-\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y+\frac{i(a-1)}{2})}
\nonumber \\
&& \hspace{76pt} +
\oint_{\overline{C}^{(a)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(a-1)}_{1}(y-\frac{i a}{2})
\mathcal{T}^{(a+1)}_{1}(y-\frac{i a}{2})}
{\tan \eta (v-y+\frac{i(a+1)}{2})
\mathcal{T}^{(a)}_{1}(y-\frac{i(a-1)}{2})}
\nonumber \\
&& \hspace{100pt}
{\rm for} \quad a \in \{1,2,\dots r \},
\label{nlie-s=0} \\
&& {\mathcal T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1}
\nonumber \\
&& \hspace{10pt}+
\oint_{C^{(r+1)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(r)}_{1}(y+\frac{i (r+1)}{2})
\mathcal{T}^{(r+1)}_{1}(y+\frac{i(r+2)}{2})}
{\tan \eta (v-y-\frac{i(r+2)}{2})
(
\zeta
+\mathcal{T}^{(r)}_{1}(y+\frac{i(r+1)}{2}))}
\nonumber \\
&& \hspace{10pt}+
\oint_{\overline{C}^{(r+1)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(r)}_{1}(y-\frac{i (r+1)}{2})
\mathcal{T}^{(r+1)}_{1}(y-\frac{i(r+2)}{2})}
{\tan \eta (v-y+\frac{i(r+2)}{2})
(
\zeta
+\mathcal{T}^{(r)}_{1}(y-\frac{i(r+1)}{2}))}.
\nonumber \\
&& \label{nlie-s=0b}
\end{eqnarray}
The free energy per site is given by a solution of these
NLIE (\ref{nlie-general})-(\ref{nlie-s=0b}):
\begin{eqnarray}
f=J \cosh \eta -T \log \mathcal{T}^{(1)}_{1}(0).
\label{free-en}
\end{eqnarray}
In these NLIE (\ref{nlie-general})-(\ref{nlie-s=0b}),
the number of unknown functions and equations is
$r+s+1$, in contrast with the TBA equations \cite{Sch87,Sch92,EK94,JKS98,Sa99}.
\subsection{The nonlinear integral equations for $\xi=-1$}
Next,
let us consider the NLIE (\ref{nlie2}) for $\xi=-1$ and $a=1$.
Taking note of the fact that ${\widetilde T}^{(0)}_{m}(v)=1$ (cf. (\ref{0m})),
we can drop the second terms in the two brackets $\{\cdots \}$ in (\ref{nlie2})
since they have no poles at $y=0$.
Then the NLIE (\ref{nlie2}) reduce to the following NLIE on
${\mathcal T}^{(1)}_{m}(v)=\lim_{N \to \infty}\widetilde{T}^{(1)}_{m}(v)$
after the Trotter limit $N \to \infty $ with $u=-\frac{J \sinh \eta }{\eta N T}$.
\begin{eqnarray}
{\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m}
&+&
\oint_{C^{(1)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y-\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y-\frac{i m}{2})}
{\tan \eta (v-y+\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y-\frac{i(m-1)}{2})}
\nonumber \\
&+&
\oint_{\overline{C}^{(1)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y+\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y+\frac{i m}{2})}
{\tan \eta (v-y-\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y+\frac{i(m-1)}{2})}
\nonumber \\
&& \hspace{70pt}
{\rm for} \quad m \in {\mathbb Z}_{\ge 1},
\label{infinitenlie-xi=-1}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{T}^{(1)}_{0}(v)=
\lim_{N \to \infty} \widetilde{T}^{(1)}_{0}(v)
=\exp \left(-\frac{2J (\sinh \eta)^{2} }
{T(\cosh \eta -\cos (2\eta v))}\right),
\end{eqnarray}
and the contour $C^{(1)}_{m}$ (resp. $\overline{C}^{(1)}_{m}$)
is a counterclockwise closed loop around $y=0$ (resp. $y=0$)
which satisfies the condition
$y \ne v+\frac{m+1}{2}i+\frac{\pi n}{\eta}$
(resp. $y \ne v-\frac{m+1}{2}i+\frac{\pi n}{\eta}$) and
does not surround
$z^{(1)}_{m}+\frac{m-1}{2}i+\frac{\pi n}{\eta}$,
$(1+m)i
+\frac{\pi n}{\eta}$, $\frac{\pi k}{\eta}$
(resp.
$z^{(1)}_{m}-\frac{m-1}{2}i+\frac{\pi n}{\eta}$,
$-(1+m)i +\frac{\pi n}{\eta}$, $\frac{\pi k}{\eta}$)
($n \in \mathbb{Z}$, $k \in \mathbb{Z}-\{0\}$).
Here $\{z^{(1)}_{m}\}$ are zeros of ${\mathcal T}^{(1)}_{m}(v)$:
${\mathcal T}^{(1)}_{m}(z^{(1)}_{m})=0$.
These are an infinite number of coupled NLIE.
We can reduce them as in the $\xi=1$ case.
By using (\ref{a+r+s-b}) in the limit $N \to \infty$,
we can reduce (\ref{infinitenlie-xi=-1})
as follows,
where $Z^{(a)}_{m}(v)=1$ for $\xi=-1$.
\begin{eqnarray}
{\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m}
&+&
\oint_{C^{(1)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y-\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y-\frac{i m}{2})}
{\tan \eta (v-y+\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y-\frac{i(m-1)}{2})}
\nonumber \\
&+&
\oint_{\overline{C}^{(1)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y+\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y+\frac{i m}{2})}
{\tan \eta (v-y-\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y+\frac{i(m-1)}{2})}
\nonumber \\
&& \hspace{70pt}
{\rm for} \quad m \in \{1,2,\dots r+s \},
\label{nlie-xi=-1} \\
{\mathcal T}^{(1)}_{r+s+1}(v)=Q^{(1)}_{r+s+1}
&+&
\oint_{C^{(1)}_{r+s+1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{r+s}(y-\frac{i (r+s+1)}{2})
\mathcal{G}(y-\frac{i (r+s+1)}{2})}
{\tan \eta (v-y+\frac{i(r+s+2)}{2})
\mathcal{T}^{(1)}_{r+s+1}(y-\frac{i(r+s)}{2})}
\nonumber \\
&& \hspace{-70pt}+
\oint_{\overline{C}^{(1)}_{r+s+1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{r+s}(y+\frac{i (r+s+1)}{2})
\mathcal{G}(y+\frac{i (r+s+1)}{2})}
{\tan \eta (v-y-\frac{i(r+s+2)}{2})
\mathcal{T}^{(1)}_{r+s+1}(y+\frac{i(r+s)}{2})} ,
\label{nlie-xi=-1b}
\end{eqnarray}
\begin{eqnarray}
\mathcal{G}(v)=
\lim_{N \to \infty}
\widetilde{T}^{(1)}_{r+s+2}(v)=
\frac{
\zeta
A_{5}(v)-A_{6}(v)
}
{(-1)^{r}
\zeta
A_{7}(v)+(-1)^{r} A_{8}(v)},
\end{eqnarray}
where
\begin{eqnarray}
&& A_{5}(v)=\det _{1\le j,k \le r+2}
\left( h_{j,k}
\left(
v-\frac{r+3-j-k}{2}i
\right)
\right) \\
&& \quad h_{j,k}(v)={\mathcal T}^{(1)}_{s+1+j-k}(v)
\quad \mbox{for} \quad (j,k) \ne (2+r,1),
\quad h_{r+2,1}(v)=0, \nonumber \\
&& A_{6}(v)=
\det _{1\le j,k \le r+1}
\left( b_{j,k}
\left(
v-\frac{r+2-j-k}{2}i
\right)
\right)
\\
&& \quad b_{j,k}(v)={\mathcal T}^{(1)}_{s+2+j-k}(v)
\quad \mbox{for} \quad (j,k) \ne (r+1,1),
\quad b_{r+1,1}(v)=0, \nonumber \\
&& A_{7}(v)=\det _{1\le j,k \le r+1}
\left({\mathcal T}^{(1)}_{s+j-k}
\left(
v-\frac{r+2-j-k}{2}i
\right)
\right), \\
&&A_{8}(v)=
\det _{1\le j,k \le r}
\left({\mathcal T}^{(1)}_{s+1+j-k}
\left(
v-\frac{r+1-j-k}{2}i
\right)
\right)
,
\end{eqnarray}
where ${\mathcal T}^{(1)}_{m}(v)=0$ for $m<0 $.
In particular, for $r=0$ (the $U_{q}(\widehat{sl}(1|s+1))$ case), we can use
(\ref{sol}):
\begin{eqnarray}
&& {\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} +
\oint_{C^{(1)}_{m}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y-\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y-\frac{i m}{2})}
{\tan \eta (v-y+\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y-\frac{i(m-1)}{2})}
\nonumber \\
&& \hspace{76pt} +
\oint_{\overline{C}^{(1)}_{1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{m-1}(y+\frac{i m}{2})
\mathcal{T}^{(1)}_{m+1}(y+\frac{i m}{2})}
{\tan \eta (v-y-\frac{i(m+1)}{2})
\mathcal{T}^{(1)}_{m}(y+\frac{i(m-1)}{2})}
\nonumber \\
&& \hspace{100pt}
{\rm for} \quad m \in \{1,2,\dots s \},
\label{nlie-r=0} \\
&& {\mathcal T}^{(1)}_{s+1}(v)=Q^{(1)}_{s+1}
\nonumber \\
&& \hspace{8pt} +
\oint_{C^{(1)}_{s+1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{s}(y-\frac{i (s+1)}{2})
\mathcal{T}^{(1)}_{s+1}(y-\frac{i(s+2)}{2})}
{\tan \eta (v-y+\frac{i(s+2)}{2})
(
\zeta^{-1}
+\mathcal{T}^{(1)}_{s}(y-\frac{i(s+1)}{2}))}
\nonumber \\
&& \hspace{8pt}+
\oint_{\overline{C}^{(1)}_{s+1}} \frac{{\mathrm d} y}{2\pi i}
\frac{\eta
\mathcal{T}^{(1)}_{s}(y+\frac{i (s+1)}{2})
\mathcal{T}^{(1)}_{s+1}(y+\frac{i(s+2)}{2})}
{\tan \eta (v-y-\frac{i(s+2)}{2})
(
\zeta^{-1}
+\mathcal{T}^{(1)}_{s}(y+\frac{i(s+1)}{2}))}.
\nonumber \\
\label{nlie-r=0b}
\end{eqnarray}
The free energy per site is given by a solution of these
NLIE (\ref{nlie-xi=-1})-(\ref{nlie-r=0b}):
\begin{eqnarray}
f=-J \cosh \eta -T \log \mathcal{T}^{(1)}_{1}(0).
\label{free-en2}
\end{eqnarray}
In some sense, these NLIE are \symbol{"60}dual' to the ones in the previous subsection.
The NLIE (\ref{nlie-xi=-1})-(\ref{nlie-r=0b}) have only $r+s+1$ unknown functions.
These NLIE have never been considered before, even for the $U_{q}(\widehat{sl}(2))$ case.
\section{High temperature expansions}
In this section, we will calculate the high temperature
expansion of the free energy from our new NLIE.
For large $T/|J|$, we assume the following expansion:
\begin{eqnarray}
&&\mathcal{T}^{(a)}_{1}(v)=
\exp \left(\sum_{n=0}^{{\mathrm deg}}b_{n}^{(a)}(v)(\frac{J}{T})^{n}
+O((\frac{J}{T})^{{\mathrm deg}+1}) \right)
\nonumber
\\
&& =Q^{(a)}_{1}\Biggl\{ 1+b^{(a)}_{1}(v)\frac{J}{T}+
\left(b^{(a)}_{2}(v)+\frac{(b^{(a)}_{1}(v))^2}{2}\right)(\frac{J}{T})^2
+ \label{hte-ta} \\
&& \left(b^{(a)}_{3}(v)+b^{(a)}_{2}(v)b^{(a)}_{1}(v)+
\frac{(b^{(a)}_{1}(v))^3}{6}\right)
(\frac{J}{T})^3 +\cdots \Biggr\}+O((\frac{J}{T})^{{\mathrm deg}+1}),
\nonumber
\end{eqnarray}
where $b_{0}^{(a)}(v)=\log Q^{(a)}_{1}$.
Here we do not expand $\{Q^{(b)}_{1}\}_{b \ge 1}$ with respect to $\frac{J}{T}$.
Thus the coefficients $\{b^{(a)}_{n}(v) \}$
themselves depend on $\frac{1}{T}$.
In this sense, our high temperature expansion formula
differs from the ordinary one.
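The bracketed coefficients in (\ref{hte-ta}) are just the re-expansion of the exponential of the series $\sum_{n\ge 1} b_{n}(J/T)^{n}$; this bookkeeping can be checked numerically (Python sketch; the values of $b_{1},b_{2},b_{3}$ are arbitrary):

```python
import math

# Check that exp(b1*x + b2*x^2 + b3*x^3) agrees with the re-expanded series
# 1 + b1 x + (b2 + b1^2/2) x^2 + (b3 + b1 b2 + b1^3/6) x^3 + O(x^4),
# for arbitrary coefficients and a small expansion parameter x ~ J/T.

b1, b2, b3 = 0.3, -0.1, 0.2
x = 1e-2

exact = math.exp(b1 * x + b2 * x**2 + b3 * x**3)
series = (1 + b1 * x
            + (b2 + b1**2 / 2) * x**2
            + (b3 + b1 * b2 + b1**3 / 6) * x**3)
# the discrepancy should be O(x^4)
assert abs(exact - series) < 1e-7
```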
Substituting this (\ref{hte-ta}) into some of the NLIE
(\ref{nlie4})-(\ref{nlie-s=0b}),
we can calculate the coefficients $\{b^{(a)}_{n}(v) \}$ up to the order of $n={\mathrm deg}$.
Note that we only need $\{b^{(1)}_{n}(0) \}$ to calculate the free energy (\ref{free-en}).
Taking note of this fact,
we first use
\footnote{As for numerical calculations of the free energy,
we expect that the reduced NLIE (\ref{nlie-general})-(\ref{nlie-s=0b})
are easier to use than the non-reduced NLIE (\ref{nlie4}).}
a subset (NLIE for $a \in \{1,2,\dots, {\mathrm deg} \}$)
of the non-reduced NLIE (\ref{nlie4})
rather than the reduced NLIE (\ref{nlie-general})-(\ref{nlie-s=0b}).
We have observed that $b^{(1)}_{n}(0)$ can be expressed in terms of
\footnote{For $s=-1$ case,
they are
$Q^{(1)}_{1},Q^{(2)}_{1}, \dots ,Q^{(d)}_{1}$:
$d=\min (n+1,r+1)$ since
$Q^{(a)}_{1}=0$ if $a \ge r+2$.}
$Q^{(1)}_{1},Q^{(2)}_{1}, \dots ,Q^{(n+1)}_{1}$.
We have calculated the coefficients by using Mathematica.
As examples, we list the coefficients $\{b^{(1)}_{n}(0) \}$ up to
order $5$, where we put $\Delta=\cosh \eta $.
\begin{eqnarray}
&& \hspace{-20pt}
b^{(1)}_{1}(0)= \frac{2 \Delta Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2},
\label{coe1} \\
&& \hspace{-20pt}
b^{(1)}_{2}(0)=-\frac{6 \Delta^2 {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^4}+\frac{\left(2 \Delta^2+1\right)
Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}+\frac{\left(4 \Delta^2-1\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3},
\label{coe2} \\
&& \hspace{-20pt}
b^{(1)}_{3}(0)=\frac{80 {Q^{(2)}_{1}}^3 \Delta^3}{3
{Q^{(1)}_{1}}^6}
+\frac{8 Q^{(3)}_{1} \Delta^3}{{Q^{(1)}_{1}}^3}
+\frac{\left(\frac{4 \Delta^3}{3}+2 \Delta\right)
Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}
\nonumber \\
&&
\hspace{-15pt}
+\frac{\left(8 \Delta-32 \Delta^3\right) Q^{(2)}_{1} Q^{(3)}_{1}}{{Q^{(1)}_{1}}^5}
+\frac{\left(-12 \Delta^3-6
\Delta\right) {Q^{(2)}_{1}}^2
+\left(8 \Delta^3-4 \Delta\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4},
\label{coe3} \\
&&\hspace{-20pt}
b^{(1)}_{4}(0)=-\frac{140 \Delta^4
{Q^{(2)}_{1}}^4}{{Q^{(1)}_{1}}^8}
+\frac{\left(240 \Delta^4-60 \Delta^2\right) Q^{(3)}_{1}
{Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^7}
\nonumber \\
&&
+\frac{\left(\frac{2 \Delta^4}{3}+2 \Delta^2+\frac{1}{4}\right)
Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}
+\frac{\left(\frac{28 \Delta^4}{3}+\frac{14 \Delta^2}{3}-\frac{1}{4}\right)
Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}
\nonumber \\
&&
+\frac{\left(-14 \Delta^4-\frac{56 \Delta^2}{3}-\frac{3}{2}\right)
{Q^{(2)}_{1}}^2+\left(24 \Delta^4-8
\Delta^2-1\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}
\nonumber \\
&&
+\frac{\left(80 \Delta^4+40 \Delta^2\right) {Q^{(2)}_{1}}^3+\left(40 \Delta^2-80 \Delta^4\right)
Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}
\nonumber \\
&&
+\frac{\left(-40 \Delta^4+20 \Delta^2-\frac{5}{2}\right) {Q^{(3)}_{1}}^2}{{Q^{(1)}_{1}}^6}
\nonumber \\
&&
+\frac{\left(-96 \Delta^4-8
\Delta^2+4\right) Q^{(2)}_{1} Q^{(3)}_{1}
+\left(16 \Delta^4-12 \Delta^2+1\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5},
\label{coe4}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-15pt} b^{(1)}_{5}(0)=\frac{4032 \Delta^5
{Q^{(2)}_{1}}^5}{5 {Q^{(1)}_{1}}^{10}}
+\frac{\left(448 \Delta^3-1792 \Delta^5\right) Q^{(3)}_{1}
{Q^{(2)}_{1}}^3}{{Q^{(1)}_{1}}^9}
\nonumber \\
&&
+\frac{\left(\frac{4 \Delta^5}{15}+\frac{4 \Delta^3}{3}+\frac{\Delta}{2}\right)
Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}
+\frac{\left(8 \Delta^5+10 \Delta^3+\frac{\Delta}{2}\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}
\nonumber \\
&&
+\frac{\left(-12
\Delta^5-30 \Delta^3-8 \Delta\right) {Q^{(2)}_{1}}^2+\left(40 \Delta^5-6 \Delta\right)
Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}
\nonumber \\
&&
+\frac{\left(-560 \Delta^5-280
\Delta^3\right) {Q^{(2)}_{1}}^4+\left(672 \Delta^5-336 \Delta^3\right)
Q^{(4)}_{1} {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^8}
\nonumber \\
&&
+\frac{\left(672 \Delta^5-336 \Delta^3+42 \Delta\right)
{Q^{(3)}_{1}}^2 Q^{(2)}_{1}}{{Q^{(1)}_{1}}^8}
\nonumber \\
&&
+\frac{\left(-160 \Delta^5-100 \Delta^3+11 \Delta\right) Q^{(2)}_{1} Q^{(3)}_{1}
+\left(64 \Delta^5-40
\Delta^3\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5}
\nonumber \\
&&
\hspace{-10pt}
+\frac{\left(960 \Delta^5+120 \Delta^3-60 \Delta\right) Q^{(3)}_{1} {Q^{(2)}_{1}}^2+\left(-192
\Delta^5+144 \Delta^3-12 \Delta\right) Q^{(5)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^7}
\nonumber \\
&&
+\frac{\left(-192 \Delta^5+144 \Delta^3-24 \Delta\right) Q^{(3)}_{1}
Q^{(4)}_{1}}{{Q^{(1)}_{1}}^7}
\nonumber \\
&&
+\frac{\left(\frac{400 \Delta^5}{3}+\frac{500 \Delta^3}{3}+20 \Delta\right) {Q^{(2)}_{1}}^3+\left(-320
\Delta^5+80 \Delta^3+30 \Delta\right) Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}
\nonumber \\
&&
+\frac{\left(40 \Delta^3-160 \Delta^5\right) {Q^{(3)}_{1}}^2+\left(32 \Delta^5-32 \Delta^3+6
\Delta\right) Q^{(6)}_{1}}{{Q^{(1)}_{1}}^6}.
\label{coe5}
\end{eqnarray}
In deriving these coefficients (\ref{coe1})-(\ref{coe5}), we
did not assume (\ref{limit}). Of course, when one calculates the free energy of the model,
one must assume (\ref{limit}) and (\ref{para}).
We can also rewrite the coefficient $b^{(1)}_{n}(0)$ in terms of
$Q^{(1)}_{1},Q^{(2)}_{1},\dots,Q^{(d)}_{1}$ and $\zeta$
\footnote{
$Q^{(r+1)}_{1}=\zeta$ if $s=-1$.}
( $d=\min (n+1,r+s+1)$ ) since $Q^{(a)}_{1}$ for $a \in {\mathbb Z}_{\ge r+s+2}$ can
be written in terms of $Q^{(1)}_{1},Q^{(2)}_{1},\dots,Q^{(r+s+1)}_{1}$ and $\zeta$
due to the relation (\ref{a+r+s}) in the limit $v \to i\eta^{-1} \infty $
(see also an example: (\ref{Q11-sl21})-(\ref{Qa1-sl21})).
If $b^{(1)}_{n}(0)$ is written in terms of
$Q^{(1)}_{1},Q^{(2)}_{1},\dots,Q^{(d)}_{1}$ and $\zeta$
($d=\min (n+1,r+s+1)$), it should be the coefficient
of the high temperature expansion directly derived from
the reduced NLIE (\ref{nlie-general})-(\ref{nlie-s=0b}).
Of course these two expressions of the coefficient $b^{(1)}_{n}(0)$ are
equivalent under the relations (\ref{limit}) and (\ref{para}).
For fixed values of the parameters, we have calculated
the high temperature expansion to much higher order (see appendix).
We have plotted the high temperature expansion
of the specific heat (Figures \ref{specific2}-\ref{specific4}).
Here we have adopted the Pad\'e approximation method.
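An $[n,d]$ Pad\'e approximant is obtained from the truncated series by solving a small linear system for the denominator coefficients; a minimal exact-arithmetic sketch (Python; illustrated on the exponential series rather than on the specific-heat series of the appendix, and the helper names `pade` and `gauss_solve` are our own, not a library API):

```python
import math
from fractions import Fraction

def gauss_solve(A, b):
    """Tiny Gaussian elimination over Fractions (A is n x n, b has length n)."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if aug[r][i] != 0)
        aug[i], aug[piv] = aug[piv], aug[i]
        for r in range(n):
            if r != i and aug[r][i] != 0:
                f = aug[r][i] / aug[i][i]
                aug[r] = [a - f * ai for a, ai in zip(aug[r], aug[i])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant of sum_k c[k] x^k: returns coefficient lists
    (p, q) with q[0] = 1, matching the series through order L + M."""
    # denominator: solve sum_{j=0}^{M} q[j] c[k-j] = 0 for k = L+1..L+M
    A = [[c[k - j] if 0 <= k - j < len(c) else Fraction(0)
          for j in range(1, M + 1)]
         for k in range(L + 1, L + M + 1)]
    rhs = [-c[k] for k in range(L + 1, L + M + 1)]
    q = [Fraction(1)] + gauss_solve(A, rhs)
    # numerator: p_k = sum_{j=0}^{min(k,M)} q[j] c[k-j] for k = 0..L
    p = [sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
         for k in range(L + 1)]
    return p, q

# [2/2] approximant of exp(x) from its first five Taylor coefficients;
# the known result is (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
c = [Fraction(1, math.factorial(k)) for k in range(5)]
p, q = pade(c, 2, 2)
assert p == [Fraction(1), Fraction(1, 2), Fraction(1, 12)]
assert q == [Fraction(1), Fraction(-1, 2), Fraction(1, 12)]
```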
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]
{specific2.eps}
\end{center}
\caption{Temperature dependence of the high temperature
expansion of the specific heat $C$
for the rank 2 case ($r+s=1$, $J=1$, $q=1$,
$\mu_{a}=0$ ($a \in B$)). We have plotted
plain series (dotted lines) of $C$ in the appendix and their Pad\'e approximations
of order [$n$,$d$] (numerator: a degree $n$ polynomial in $1/T$,
denominator: a degree $d$ polynomial in $1/T$)
by using Mathematica:
each line denotes $C$ for
$sl(3|0)$ with [20,20] (thin), $sl(2|1)$ with [17,17] (medium),
$sl(1|2)$ with [17,17] (thick), and $sl(0|3)$ with [20,20] (dashed thick), respectively.
We have also plotted (thick dots)
a result of numerical calculation from another NLIE by J\"uttner
and Kl\"umper \cite{JK97} for the $sl(2|1)$ case.
$C$ for the $sl(3|0)$ case was also
considered in \cite{FK02,FK99}.}
\label{specific2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]
{specific3.eps}
\end{center}
\caption{Temperature dependence of the high temperature
expansion of the specific heat $C$
for the rank 3 case ($r+s=2$, $J=1$, $q=1$,
$\mu_{a}=0$ ($a \in B$)). We have plotted
plain series (dotted lines) of $C$ in the appendix and their Pad\'e approximations
of order [$n$,$d$] (numerator: a degree $n$ polynomial in $1/T$,
denominator: a degree $d$ polynomial in $1/T$):
each line denotes $C$ for
$sl(4|0)$ with [19,20] (thin), $sl(3|1)$ with [17,17] (medium),
$sl(2|2)$ with [16,16] (thick), $sl(1|3)$ with [17,17]
(dashed medium), $sl(0|4)$ with [18,21] (dashed thick) respectively.
$C$ for the $sl(4|0)$ case was also
considered in \cite{FK02}.}
\label{specific3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]
{specific4.eps}
\end{center}
\caption{Temperature dependence of the high temperature
expansion of the specific heat $C$
for the rank 4 case ($r+s=3$, $J=1$, $q=1$,
$\mu_{a}=0$ ($a \in B$)). We have plotted
plain series (dotted lines) of $C$ in the appendix and their Pad\'e approximations
of order [$n$,$d$] (numerator: a degree $n$ polynomial in $1/T$,
denominator: a degree $d$ polynomial in $1/T$):
each line denotes $C$ for
$sl(5|0)$ with [17,21] (thin), $sl(4|1)$ with [16,18] (medium),
$sl(3|2)$ with [17,17] (thick),
$sl(2|3)$ with [16,17] (dashed thin), $sl(1|4)$ with [16,18]
(dashed medium), $sl(0|5)$ with [17,21] (dashed thick) respectively. }
\label{specific4}
\end{figure}
There is a duality among the specific heats with respect to the interchange of
$r$ and $s$.
In particular, the $r=s$ case is self-dual, and then
the specific heat becomes an even function of $T$ (see (\ref{hte-sl22})).
In Figure \ref{specific2}, we have also plotted the result of
a numerical calculation from another NLIE \cite{JK97}.
We find good agreement
between our result and theirs, except in the very low temperature region.
We can also calculate the high temperature expansion from the NLIE
for $\xi=-1$ in subsection 3.2.
As in the $\xi=1$ case, we assume
\begin{eqnarray}
&&\mathcal{T}^{(1)}_{m}(v)=
\exp \left(\sum_{n=0}^{{\mathrm deg}}\widehat{b}_{m,n}(v)(\frac{J}{T})^{n}
+O((\frac{J}{T})^{{\mathrm deg}+1}) \right) ,
\label{hte-tm}
\end{eqnarray}
where $\widehat{b}_{m,0}(v)=\log Q^{(1)}_{m}$.
Here we do not expand $\{ Q^{(1)}_{k} \}_{k \ge 1}$ with respect to $\frac{J}{T}$.
(\ref{hte-ta}) for $a=1$ should coincide with
(\ref{hte-tm}) for $m=1$ up to a factor from
the normalization function (\ref{normal}).
Thus we have
\begin{eqnarray}
b^{(1)}_{n}(0)=\widehat{b}_{1,n}(0)+2\Delta \delta_{n,1}.
\label{ty1}
\end{eqnarray}
Due to symmetry between the NLIE for $\xi=1$ and the one for $\xi=-1$,
the following relation follows:
\begin{eqnarray}
\widehat{b}_{1,n}(0)=(-1)^{n}b^{(1)}_{n}(0)|_{Q^{(a)}_{1} \to Q^{(1)}_{a}
\ {\rm for} \ a \ge 1}.
\label{ty2}
\end{eqnarray}
For example, (\ref{ty1}) and (\ref{ty2}) for $n=1$,
together with (\ref{coe1}), reproduce
the $Q$-system (\ref{Q-sys}) for $(a,m)=(1,1)$.
From the relations
(\ref{ty1}) and (\ref{ty2}) for $n=2$ and (\ref{coe2}), we obtain
the following identities among characters:
\begin{eqnarray}
&& \hspace{-40pt}
-3 {Q^{(2)}_{1}}^{2}+Q^{(2)}_{1}{Q^{(1)}_{1}}^{2}+2 Q^{(3)}_{1}Q^{(1)}_{1}
=-3 {Q^{(1)}_{2}}^{2}+Q^{(1)}_{2}{Q^{(1)}_{1}}^{2}+2 Q^{(1)}_{3}Q^{(1)}_{1}, \\
&& \hspace{-40pt}
Q^{(2)}_{1}Q^{(1)}_{1}-Q^{(3)}_{1}=Q^{(1)}_{2}Q^{(1)}_{1}-Q^{(1)}_{3},
\end{eqnarray}
where we have used the fact that $Q^{(a)}_{m}$ does not depend on $\Delta $.
These relations can be proved from the
relations (\ref{jacobi-trudi}), (\ref{jacobi-trudi2}) and (\ref{limit}).
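Under the simplest non-graded specialization suggested by the Jacobi-Trudi formulae, $Q^{(a)}_{1}$ and $Q^{(1)}_{m}$ become the elementary and complete homogeneous symmetric polynomials $e_{a}$ and $h_{m}$, and both identities can be verified directly (Python sketch under this assumed specialization, in three variables at an arbitrary evaluation point):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = (1.0, 2.0, 3.0)  # arbitrary evaluation point in three variables

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# -3 Q2^2 + Q2 Q1^2 + 2 Q3 Q1 agrees in both families
assert abs((-3 * e(2)**2 + e(2) * e(1)**2 + 2 * e(3) * e(1))
           - (-3 * h(2)**2 + h(2) * h(1)**2 + 2 * h(3) * h(1))) < 1e-9
# Q2 Q1 - Q3 agrees in both families
assert abs((e(2) * e(1) - e(3)) - (h(2) * h(1) - h(3))) < 1e-9
```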
Some comments on references on the high temperature expansion
are in order.
The high temperature expansion of the free energy was
calculated from Takahashi's NLIE for
the $XXX$-model up to order 100 \cite{ShT02} and for
the $XXZ$-model up to order 99 \cite{TT05}.
As for the higher rank or higher spin case, we have some results
\cite{T02,T03,T04,TT05} from NLIE.
In particular, our result on the $sl(r+1)$ Uimin-Sutherland model
in \cite{T03} was applied to spin ladder models
\cite{BGOSTF03,YRFC04,YRZ04,BGO04,BGOF04,BGOT05},
and good agreement was found between theoretical results and
experimental data.
We note that
the coefficients (\ref{coe1})-(\ref{coe3}) coincide with eqs.
(4.14)-(4.16) in \cite{TT05}.
Note, however, that the coefficients in our paper are more general than the ones in
\cite{TT05}, since the value of $Q^{(a)}_{1}$ (\ref{limit})
was restricted to the $s=-1$ case in \cite{TT05}.
There are also several works on high temperature expansions by different methods
(see for example, \cite{DV95,RST02,BEU00,FK02,F03}).
\section{Concluding remarks}
In this paper, we have derived NLIE which contain only $r+s+1$ unknown functions
for the $U(\widehat{sl}(r+1|s+1))$ Perk-Schultz model.
The key ingredients are a duality for the auxiliary function (\ref{dual})
and the quantum (supersymmetric) Jacobi-Trudi and Giambelli
formulae (\ref{jacobi-trudi}) and (\ref{jacobi-trudi2}).
Although we assumed that $q$ is generic,
we expect that our NLIE (at least the reduced ones
(\ref{nlie-general})-(\ref{nlie-s=0b}),
(\ref{nlie-xi=-1})-(\ref{nlie-r=0b})) will also be
valid when $q$ is a root of unity,
as we do not need to take into account the truncation of the
$T$-system.
The high temperature expansion of the free energy
in terms of characters was calculated from our NLIE.
There are NLIE with a finite number of unknown functions
for algebras of arbitrary rank in a different context \cite{Z98,DDT00}.
These NLIE differ from the Takahashi type.
Whether one can generalize (or modify) their NLIE for the finite
temperature case
is still not clear.
A deeper understanding of this subject is desirable.
There is another formulation of transfer matrices,
based on the graded formulation of the
quantum inverse scattering method.
In this formulation, the row-to-row transfer matrix
is defined as a supertrace:
$\widehat{t}(v)={\mathrm str}_{0}(\widehat{R}_{0L}(v)
\cdots \widehat{R}_{02}(v)\widehat{R}_{01}(v))$, where
the $R$-matrix is defined as $\widehat{ R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=
(-1)^{p(a_{1})p(b_{1})}
R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ and the graded tensor product is adopted.
As far as the free energy (in the thermodynamic limit)
is concerned, we think that there is no difference
between this graded formulation and the one we have adopted.
\section*{Acknowledgments}
The author would like to thank A. Kl\"umper and K. Sakai for
comments on a figure of specific heats.
He also thanks Y. Nagatani for a remark
on Mathematica programming.
\noindent
\renewcommand{\theequation}{A.1.\arabic{equation}}
\begin{landscape}
\section*{Appendix: The high temperature expansion of the specific heat}
We list the high temperature expansion of the
specific heat $c_{sl(r+1|s+1)}$ for the $U_{q}(\widehat{sl}(r+1|s+1))$
Perk-Schultz model at $q=1$ and
$\mu_{a}=0$ ($a \in B$).
Here we put $t=\frac{J}{T}$.
In this case, $Q^{(a)}_{1}$ (cf. (\ref{limit})) becomes
\begin{eqnarray}
Q^{(a)}_{1}=\sum_{j=0}^{a}\binom{r+1}{j}\binom{a+s-j}{a-j}, \label{Q-q=1}
\end{eqnarray}
which is the dimension of the $a$-th anti-(super)symmetric tensor representation
of $sl(r+1|s+1)$.
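As a quick sanity check on (\ref{Q-q=1}) — a sketch with hypothetical helper names — note that $s=-1$ collapses the sum to the ordinary binomial $\binom{r+1}{a}$ of $sl(r+1)$, and $a=1$ gives the dimension $r+s+2$ of the defining representation:

```python
from math import comb

def binom(n, k):
    # C(n, 0) = 1 for any n; otherwise the standard binomial (0 if k < 0 or k > n)
    if k == 0:
        return 1
    if k < 0 or n < 0 or k > n:
        return 0
    return comb(n, k)

def Q_a1(r, s, a):
    # Q^{(a)}_1 at q = 1, eq. (Q-q=1): dimension of the a-th
    # anti-(super)symmetric tensor representation of sl(r+1|s+1)
    return sum(binom(r + 1, j) * binom(a + s - j, a - j) for j in range(a + 1))

# s = -1 recovers sl(r+1): only the j = a term survives, giving C(r+1, a)
assert all(Q_a1(r, -1, a) == comb(r + 1, a)
           for r in range(1, 5) for a in range(r + 2))

# a = 1 gives the dimension r+s+2 of the defining representation
assert all(Q_a1(r, s, 1) == r + s + 2 for r in range(4) for s in range(4))
```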
If one substitutes (\ref{Q-q=1}), $\Delta=1$, and the values of $(r,s)$
into (\ref{coe1})-(\ref{coe5}),
one recovers (\ref{hte-sl30})-(\ref{hte-sl32}) up to order 5
through $C=-T\frac{\partial^{2} f}{\partial T^{2}}$.
A formula for $r<s$ can
be obtained from the relation $c_{sl(s+1|r+1)}=c_{sl(r+1|s+1)}|_{t \to -t}$.
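The map $C=-T\frac{\partial^{2} f}{\partial T^{2}}$ acts term by term on a truncated series of the form $f=-T\sum_{n} f_{n}t^{n}$ (placeholder coefficients, not the model's): the $n$-th term $-f_{n}J^{n}T^{1-n}$ picks up a factor $n(n-1)$, which is why every series listed below starts at order $t^{2}$. A small sketch:

```python
def specific_heat_coeffs(f_coeffs):
    # f = -T * sum_n f_n t^n with t = J/T, i.e. the n-th term is -f_n J^n T^(1-n).
    # Differentiating T^p twice gives p(p-1) T^(p-2); with p = 1 - n,
    # C = -T d^2f/dT^2 then has t^n coefficient (1-n)(-n) f_n = n(n-1) f_n.
    out = []
    for n, c in enumerate(f_coeffs):
        p = 1 - n                    # power of T in the n-th term of f
        out.append(c * p * (p - 1))  # equals n(n-1) * f_n
    return out

# the n = 0 and n = 1 terms drop out, so C always starts at order t^2
assert specific_heat_coeffs([1, 1, 1, 1]) == [0, 0, 2, 6]
```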
\begin{tiny}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(3|0)}=
\frac{8 t^2}{9} + \frac{16 t^3}{27} - \frac{40 t^4}{27} - \frac{400 t^5}{243} + \frac{1246 t^6}{729} +
\frac{11228 t^7}{3645} - \frac{43343 t^8}{32805} - \frac{649298 t^9}{137781} +
\frac{120769 t^{10}}{918540} + \frac{5559367 t^{11}}{885735} + \frac{36953579 t^{12}}{18600435} -
\frac{4333458857 t^{13}}{584585100} - \frac{298222277909 t^{14}}{58926178080}
\nonumber \\
&& +
\frac{44130279500393 t^{15}}{5745302362800} + \frac{2885993845291237 t^{16}}{321736932316800} -
\frac{47755357995530701 t^{17}}{7239080977128000} -
\frac{4655618035381741733 t^{18}}{347475886902144000} +
\frac{45436230289647581 t^{19}}{12306437661117600} +
\frac{1590783575674338541 t^{20}}{89350942346265600}
\nonumber \\
&& +
\frac{11365203602766081451 t^{21}}{8102505908218176000} -
\frac{1297481476315241042696509 t^{22}}{60606744193471956480000} -
\frac{8454419484929269090011049 t^{23}}{954556221047183314560000} +
\frac{46816780786984484371594673 t^{24}}{2016022738851651160350720} +
\frac{8261193033376436054715299 t^{25}}{445851182630653622000640}
\nonumber \\
&& -
\frac{4614044757865223484648570791543 t^{26}}{208658353471145895096299520000} -
\frac{16687567624201209926686045552339 t^{27}}{558906303940569361865088000000} +
\frac{764945949570840761119882192959107 t^{28}}{45209309918748277270864896000000} +
\frac{4370526181353809390696401823321023 t^{29}}{104627260097674584541144473600000}
\nonumber \\
&& -
\frac{4077089856720735402715997482797183733 t^{30}}{615208289374326557101929504768000000} -
\frac{469645986012529902658517386886308977221 t^{31}}{8920520195927735077977977819136000000} -
\frac{5501623615004327359193974711230889260281 t^{32}}{583888594642542659649467639070720000000}
\nonumber \\
&& +
\frac{81517987350487140844545467182506851908591 t^{33}}{1351398263165884933985080078663680000000} +
\frac{8539638490748569692670776190970340550336847 t^{34}}
{273059671917403950089785323323129856000000} -
\frac{565870839129464697660769748292242672448332479 t^{35}}
{9097613107632737375587559089563893760000000}
\nonumber \\
&&-
\frac{2803571976313303389947366028586799714153992385183
t^{36}}{48253739922884039040116413411046892503040000000} +
\frac{821880309698434533395036032147535806012394330483
t^{37}}{14814744713166152336877846222689835417600000000}
\nonumber \\
&& +
\frac{58773945021047530582522114436884890912882464905123
t^{38}}{668128706624548232863150339537572357734400000000} -
\frac{903065874685632945085489557823621181308117028891323
t^{39}}{24102743091480577500538148498817922805268480000000}
\nonumber \\
&& -
\frac{69053384918361487529760006169534549420582996627140401
t^{40}}{586849397009961886969624485188610294389145600000000}
+O(t^{41})
\label{hte-sl30}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(2|1)}=
\frac{32 t^2}{27} + \frac{304 t^3}{729} - \frac{5480 t^4}{2187} -
\frac{8320 t^5}{6561} + \frac{736708 t^6}{177147} +
\frac{1470644 t^7}{531441} - \frac{146834891 t^8}{23914845} -
\frac{14149151840 t^9}{2711943423} +
\frac{228231260059 t^{10}}{27119434230} +
\frac{3300969899909 t^{11}}{366112362105} -
\frac{86046211353427 t^{12}}{7908027021468}
\nonumber \\
&& - \frac{6841741391685967 t^{13}}{466008735193650} +
\frac{41669257450473325 t^{14}}{3131578700501328} +
\frac{209509171518293955313 t^{15}}{9159867698966384400} -
\frac{23504662660033768183787 t^{16}}{1538857773426352579200} -
\frac{59568189209735825524303 t^{17}}{1731214995104646651600}
\nonumber \\
&& +
\frac{361017420632075530992718067 t^{18}}{22436546336556220604736000} +
\frac{3424500450806080358078749 t^{19}}{68110944235974241121520} -
\frac{8630572663979741453598354479 t^{20}}{588478558198817443289932800} -
\frac{17202731244586324123474774048501 t^{21}}{240139375283176527142516896000}
\nonumber \\
&& +
\frac{74148256847472328975368306828013 t^{22}}{7924599384344825395703057568000} +
\frac{10062652817270791187839874880710933 t^{23}}{100858537618934141399857096320000} +
\frac{708322746602409944512187717871912169 t^{24}}{316324648705015526355199808330342400}
\nonumber \\
&&-
\frac{15024962981052794085153555040621422973 t^{25}}{110457493727965947968161876390656000} -
\frac{231254246428418304396673721279263409933713 t^{26}}{9821880342290732093328954048657131520000}
+
\frac{38713134042756953404786472697685030409352013 t^{27}}
{213099725283629276667762128019971692800000}
\nonumber \\
&& +
\frac{174322192700322839649823295381972425935690253 t^{28}}{2938193181940949117691871765123852128000000} -
\frac{15926771354416351317222940636770636517312076257 t^{29}}
{67019011202800272994904518213769017498828800}
\nonumber \\
&& -
\frac{668493937475506754534088912524547499759600367865761
t^{30}}{5757542326058750725471342701091974685126656000000}
+
\frac{335250111063276140660586885786195145564952534202441623
t^{31}}{1101993601207644888855214992989003954733241958400000}
\nonumber \\
&& +
\frac{24356652704738251987509066739814465042906238637219292447
t^{32}}{120217483768106715147841635598800431425444577280000000}
-
\frac{238144042414431030572413060565358221701372451889917292229
t^{33}}{626042066206424505863222816152802925415196958720000000}
\nonumber \\
&&-
\frac{19283712635849148986736121736755878449365536043407449116181
t^{34}}{58382894402793416775358836340649964244434367807488000000}
+O(t^{35})
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(4|0)}=
\frac{15 t^2}{16} + \frac{15 t^3}{32} - \frac{435 t^4}{256} - \frac{345 t^5}{256}
+ \frac{9555 t^6}{4096} +
\frac{21917 t^7}{8192} - \frac{172967 t^8}{65536} - \frac{3052445 t^9}{688128}
+ \frac{53684587 t^{10}}{22020096} +
\frac{41381153 t^{11}}{6291456} - \frac{24190901579 t^{12}}{15854469120}
- \frac{74629743461 t^{13}}{8304721920} -
\frac{59210969497 t^{14}}{186025771008}
\nonumber \\
&&+ \frac{831873403828903 t^{15}}{72550050693120} +
\frac{40380622501051099 t^{16}}{12188408516444160} - \frac{69674366936531941 t^{17}}{5078503548518400} -
\frac{677763076075244557 t^{18}}{88642971028684800}
+ \frac{27733137112330033541 t^{19}}{1808316608985169920} +
\frac{916368739307996439457 t^{20}}{68199369253154979840}
\nonumber \\
&& -
\frac{195301776246305171789377 t^{21}}{12368885605458562252800} -
\frac{71292804015129538500833 t^{22}}{3445590165496528896000} +
\frac{2200476451580705384154142457 t^{23}}{152384670659249486954496000} +
\frac{78446060458744023170123124563 t^{24}}{2681970203602790970399129600}
\nonumber \\
&& -
\frac{125238936013261236945687178727 t^{25}}{11862560515935421599842304000} -
\frac{191117971515476319277466700295697 t^{26}}{4934825174629135385534398464000} +
\frac{1065649288731276091956499544404357 t^{27}}{317238761226158703355782758400000}
\nonumber \\
&&+
\frac{11192351020362615444556022460039019153 t^{28}}{230949818172643536043009848115200000} +
\frac{156907755522484105311278338009333117 t^{29}}{19795698700512303089400844124160000} -
\frac{2360085804850806890473467996881990318573 t^{30}}{41081897067886708999650692982374400000}
\nonumber \\
&&-
\frac{29423655375616416313737080966732640595067 t^{31}}{1227477288149584699201684341837004800000} +
\frac{3157300240979374909778816498331173586672649 t^{32}}{49099091525983387968067373673480192000000} +
\frac{16704208032661485316059904933586012489234849 t^{33}}
{369326254640301513906859729911545856000000}
\nonumber \\
&&-
\frac{4632447279343820265500763653185467289421668351 t^{34}}{68884622579768236650970866196073467084800000} -
\frac{61387162556304338047935313287382785565501241 t^{35}}
{855075033232118492709251293240098816000000}
\nonumber \\
&&+
\frac{1387457285241446733857164858489037532362389241038513
t^{36}}{21640793029659989226269007324158440419360768000000} +
\frac{1371535919121592468470837511909090297230678904817833
t^{37}}{13288206246282449524902022041149919555747840000000}
\nonumber \\
&& -
\frac{20934050741656543039384025455100075026671071927370779
t^{38}}{399522332855261339561889365984463515434352640000000}
-
\frac{10632927999476868411724450072247105587260120276680120833
t^{39}}{76868096841352281731707514015410780369569447936000000}
+O(t^{40})
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(3|1)}=
\frac{69 t^2}{64} + \frac{45 t^3}{128} - \frac{8745 t^4}{4096} -
\frac{4065 t^5}{4096} + \frac{447405 t^6}{131072} +
\frac{2734683 t^7}{1310720} - \frac{406592713 t^8}{83886080}
- \frac{222295155 t^9}{58720256} +
\frac{359912058803 t^{10}}{56371445760} + \frac{2122554602333 t^{11}}{338228674560}
-
\frac{143332776011113 t^{12}}{18038862643200}
\nonumber \\
&&- \frac{2496231619276031 t^{13}}{255121057382400} +
\frac{2384568827915515 t^{14}}{253987186016256}
+ \frac{4333850790231468637 t^{15}}{297165007639019520} -
\frac{13960280493348579178073 t^{16}}{1331299234222807449600} -
\frac{6984410297833152298633 t^{17}}{332824808555701862400} +
\frac{633205038656776496328727 t^{18}}{58093057493358870528000}
\nonumber \\
&&+
\frac{14162457941778109685145689 t^{19}}{482817855611471501721600} -
\frac{604540593802926375008290949 t^{20}}{59593518178330196783923200} -
\frac{1296710847223129254977218906963 t^{21}}{32424291481573293431980032000} +
\frac{86141375198512658190978194262097 t^{22}}{11413350601513799288056971264000}
\nonumber \\
&&+
\frac{99308058853685747403471990563549 t^{23}}{1862318279967286597118853120000} -
\frac{49493071668124032051338572924297499 t^{24}}{22497996705704001156617901755596800} -
\frac{6931534160144573731055517563665688207 t^{25}}{99510370044460005115809950072832000}
\nonumber \\
&& -
\frac{587874112017235875229135097766679692569 t^{26}}{82792627876990724256353878460596224000} +
\frac{29312181553180719038186824685428443001969 t^{27}}{328542174115042556572832851034112000000} +
\frac{1698158664136245991126625182277922005467823563 t^{28}}
{77493899692863317903947230239118065664000000}
\nonumber \\
&& -
\frac{3126686061624145346779158233961466863595139419 t^{29}}{27897803889430794445421002886082503639040000} -
\frac{197403265536059174349045556856630021916456067183 t^{30}}
{4463648622308927111267360461773200582246400000}
\nonumber \\
&& +
\frac{3751792081077294678466899166499824476489567047562019
t^{31}}{27183620109861366107618225212198791545880576000000} +
\frac{80680001031677274734526645657250828568042355729432743
t^{32}}{1054394961837046927810646311261044035719004160000000}
\nonumber \\
&& -
\frac{1319799435656923687275493800075182418349748116331178151
t^{33}}{7931220926171316229046258650147412121621626880000000} -
\frac{6324243289350644149071369870231550459419389543237919619
t^{34}}{51904772136387320644826041572092537674131308544000000}
+O(t^{35})
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(2|2)}=
\frac{9 t^2}{8} - \frac{305 t^4}{128} + \frac{4165 t^6}{1024} - \frac{1028409 t^8}{163840} +
\frac{758654369 t^{10}}{82575360} - \frac{775100578187 t^{12}}{59454259200} +
\frac{2108183654669 t^{14}}{116266106880} - \frac{36086372927030207 t^{16}}{1451001013862400} +
\frac{123454173470857039087 t^{18}}{3656522554933248000}
\nonumber \\
&& -
\frac{775360975454089529227 t^{20}}{17049842313288744960} +
\frac{414116620493362593763666669 t^{22}}{6802887083002209239040000} -
\frac{18120085561666997913793728601 t^{24}}{223497516966899247533260800} +
\frac{4422094856669703488323197127729 t^{26}}{41123543121909461546119987200}
\nonumber \\
&& -
\frac{61544192334285763277931254839087079063 t^{28}}{433030909073706630080643465216000000} +
\frac{24524774786846908325218781376565239554743 t^{30}}{130948546903888884936386583881318400000} -
\frac{1066449426872776917417866273408875411974943 t^{32}}{4332272781704416585417709441777664000000}
\nonumber \\
&&+
\frac{3971063172619299668637043400605041165950069531 t^{34}}
{12300825460672899401959083249298833408000000}
+O(t^{36})
\label{hte-sl22}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(5|0)}=
\frac{24 t^2}{25} + \frac{48 t^3}{125} - \frac{1128 t^4}{625} -
\frac{3504 t^5}{3125} + \frac{8274 t^6}{3125} +
\frac{178444 t^7}{78125} - \frac{1306457 t^8}{390625} - \frac{160692418 t^9}{41015625} +
\frac{3091451869 t^{10}}{820312500} + \frac{177674519 t^{11}}{29296875}
- \frac{173473316029 t^{12}}{46142578125} -
\frac{4227122268577 t^{13}}{483398437500}
\nonumber \\
&& + \frac{421192483420837 t^{14}}{135351562500000} +
\frac{157545290437200577 t^{15}}{13196777343750000} - \frac{499112291761171129 t^{16}}{316722656250000000} -
\frac{20544397967491432423 t^{17}}{1319677734375000000} -
\frac{2522947397527190428811 t^{18}}{2217058593750000000000}
\nonumber \\
&& +
\frac{344105756804540679754433 t^{19}}{17667185668945312500000} +
\frac{6481099627451266704510167 t^{20}}{1211464160156250000000000} -
\frac{514322825251322335204973701 t^{21}}{21971554541015625000000000} -
\frac{8742897574113739154665330649 t^{22}}{767260634765625000000000000}
\nonumber \\
&& +
\frac{11410614110336974477848290940503 t^{23}}{422952424914550781250000000000} +
\frac{3647621524555860483885516418494301 t^{24}}{186099066962402343750000000000000} -
\frac{6101990866174335715593365406813719 t^{25}}{205782622121887207031250000000000}
\nonumber \\
&& -
\frac{517943244817437855867592189832638271 t^{26}}{17121114160541015625000000000000000} +
\frac{21134646896102733105412988229490963037 t^{27}}{687901908236022949218750000000000000} +
\frac{9912970564455830832161043262688106009427 t^{28}}
{227632995089011230468750000000000000000}
\nonumber \\
&& -
\frac{1100169921280159675483593435467841886949801 t^{29}}{37559444189686853027343750000000000000000} -
\frac{14414030290129165020160816003617622620734617 t^{30}}{242071098102329589843750000000000000000000} +
\frac{11096391762725899024492170038454077512197326053 t^{31}}
{457474030230385869873046875000000000000000000}
\nonumber \\
&& +
\frac{1298913265483771488098839606916528515217512843843 t^{32}}
{16635419281104940722656250000000000000000000000} -
\frac{35827816153435004340581856161243956139756265512563
t^{33}}{2502652047730934464599609375000000000000000000000}
\nonumber \\
&& -
\frac{5904769218118219660609568211765655164631746178169 t^{34}}
{59827681730247539062500000000000000000000000000} -
\frac{93947676821931398955438041874300826546727311315759613
t^{35}}{46799593292568474488012695312500000000000000000000000}
\nonumber \\
&& +
\frac{6904178434557905871447348617674173162386567620702330884299
t^{36}}{57282702190103812773327539062500000000000000000000000000}
+
\frac{100656404268726322722978537972143488460344388408592120437
t^{37}}{3823216202619438304855957031250000000000000000000000000}
\nonumber \\
&& -
\frac{286059137038561139282916743342892323995039400157684633307181
t^{38}}{2011599909685919846554980468750000000000000000000000000000}
+O(t^{39})
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(4|1)}=
\frac{648 t^2}{625} + \frac{912 t^3}{3125} - \frac{159176 t^4}{78125}
- \frac{8087472 t^5}{9765625} +
\frac{158309102 t^6}{48828125} + \frac{10619486948 t^7}{6103515625}
- \frac{139915063629 t^8}{30517578125} -
\frac{50193776068378 t^9}{16021728515625}
+ \frac{145147029308647729 t^{10}}{24032592773437500}
\nonumber \\
&&+
\frac{154561110991941973 t^{11}}{30040740966796875}
- \frac{511262973476333776477 t^{12}}{67591667175292968750} -
\frac{562798123639186677983 t^{13}}{70810317993164062500} +
\frac{27263166260140540122161 t^{14}}{3004074096679687500000} +
\frac{72774085086186213945892331 t^{15}}{6195902824401855468750000}
\nonumber \\
&& -
\frac{26156239133657469488082458441 t^{16}}{2505898475646972656250000000} -
\frac{2149440171158051134748079150511 t^{17}}{128142535686492919921875000000} +
\frac{11616816820943999915517438797465393 t^{18}}{1014888882637023925781250000000000} +
\frac{44865031236953872796448471717341173 t^{19}}{1925570424646139144897460937500000}
\nonumber \\
&& -
\frac{272821227014028414160064869868451715981 t^{20}}{23106845095753669738769531250000000000}
-
\frac{66246387219780884484696724793606687399357 t^{21}}{2095370725728571414947509765625000000000} +
\frac{256119119131347681110777846055145382444833999 t^{22}}
{23049077983014285564422607421875000000000000}
\nonumber \\
&& +
\frac{8476891798073835488652808808090088670094654917 t^{23}}{201679432351374998688697814941406250000000000} -
\frac{434354342487729446922438313016331279852618182267 t^{24}}
{49299416797002777457237243652343750000000000000}
\nonumber \\
&& -
\frac{74214528639575465586270534190435386898519833666641
t^{25}}{1352823539616565540200099349021911621093750000000} +
\frac{241489502492350466125582435489379440678333756529927
t^{26}}{57851356445462442934513092041015625000000000000000}
\nonumber \\
&& +
\frac{360758243689593696611421633292891289316448278820393793389
t^{27}}{5125268610090188303729519248008728027343750000000000000} +
\frac{209490702367563791064096969855392414710044548635093273111309
t^{28}}{55967933222184856276726350188255310058593750000000000000000}
\nonumber \\
&&-
\frac{372970955002091736988968941025597451968680727107685289984993771
t^{29}}{4197594991663864220754476264119148254394531250000000000000000} -
\frac{4270970709112276953496423117872370698202654386252499239578642355083
t^{30}}{264448484474823445907532004639506340026855468750000000000000000000}
\nonumber \\
&&+
\frac{705454893266763100765781948219461395410990670735178037505736499754037
t^{31}}{6390838374808233276098690112121403217315673828125000000000000000000} +
\frac{40167992980468192551156832646604563149328974033305535140279044561345513
t^{32}}{1161970613601496959290670929476618766784667968750000000000000000000000}
\nonumber \\
&&-
\frac{137948929879317690635779621721238444356037066949039424111273171238709303
t^{33}}{1022270328271698821254176436923444271087646484375000000000000000000000}
\nonumber \\
&& -
\frac{2953939948969279942248671840928447255547963144680752974314176599201612597751
t^{34}}{48518201539360464871606382075697183609008789062500000000000000000000000000}
+O(t^{35})
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-10pt} c_{sl(3|2)}=
\frac{672 t^2}{625} + \frac{336 t^3}{3125} - \frac{172296 t^4}{78125}
- \frac{3045504 t^5}{9765625} +
\frac{179227188 t^6}{48828125} + \frac{4080719804 t^7}{6103515625}
- \frac{168141570529 t^8}{30517578125} -
\frac{19837767385216 t^9}{16021728515625}
+ \frac{1492774189466571 t^{10}}{190734863281250} +
\frac{63203192343466109 t^{11}}{30040740966796875}
\nonumber \\
&& - \frac{484746376821625940723 t^{12}}{45061111450195312500} -
\frac{119657043279360047669 t^{13}}{35405158996582031250} +
\frac{34147743686560410475681 t^{14}}{2360343933105468750000} +
\frac{20015701333284035643585049 t^{15}}{3835558891296386718750000} -
\frac{1295881833233097980209112581867 t^{16}}{67659258842468261718750000000}
\nonumber \\
&& -
\frac{306358982994385261087366331797 t^{17}}{39154663681983947753906250000} +
\frac{1412566068743727245477698757563469 t^{18}}{56382715702056884765625000000000} +
\frac{38642600881761570643294935210984427 t^{19}}{3369748243130743503570556640625000}
\nonumber \\
&& -
\frac{749943826781288708352355220297511952081 t^{20}}{23106845095753669738769531250000000000} -
\frac{823393490024195742531909258671419921007 t^{21}}{49889779184013605117797851562500000000} +
\frac{30040873842462516099488089062008818909101223 t^{22}}
{720283686969196423888206481933593750000000}
\nonumber \\
&& +
\frac{4719753433491069947419730357115496145119402591 t^{23}}
{201679432351374998688697814941406250000000000} -
\frac{1816573282828621256378963795157922371189821325851 t^{24}}
{34130365474848076701164245605468750000000000000}
\nonumber \\
&& -
\frac{8322591197319584873907765768566656873942892240011 t^{25}}{253946171687857713550329208374023437500000000000} +
\frac{78284346134584453501416727068291711888925901387282239
t^{26}}{1159656736020406242460012435913085937500000000000000}
\nonumber \\
&& +
\frac{58189014600621207023093446380939189933968282293477470193
t^{27}}{1281317152522547075932379812002182006835937500000000000} -
\frac{282847733617417987501884583482556260456466391798052174307
t^{28}}{3321933358391788715380243957042694091796875000000000000}
\nonumber \\
&& -
\frac{130877056590281430510536720696452523358544877286866176351960231
t^{29}}{2098797495831932110377238132059574127197265625000000000000000} +
\frac{4709049750740948620371923839229875772436358549551968355418845451943
t^{30}}{44074747412470574317922000773251056671142578125000000000000000000}
\nonumber \\
&& +
\frac{11079182595131990060363516585430527467174991034289759500690770327879
t^{31}}{130425272955270066859156941063702106475830078125000000000000000000} -
\frac{51676596103109252978856677265778347122077560630622361028974962093289069
t^{32}}{387323537867165653096890309825539588928222656250000000000000000000000}
\nonumber \\
&& -
\frac{1673903716405893043635199894308218634748017092628006510068625803350703093
t^{33}}{14567352177871708202872014226159080862998962402343750000000000000000000}
\nonumber \\
&& +
\frac{2560112840573273875628881469331020252804964139812191437239808660979503389001
t^{34}}{15437609580705602459147485205903649330139160156250000000000000000000000000}
+O(t^{35})
\label{hte-sl32}
\end{eqnarray}
\end{tiny}
\end{landscape}